WorldWideScience

Sample records for real-time motion recognition

  1. Improving the Robustness of Real-Time Myoelectric Pattern Recognition against Arm Position Changes in Transradial Amputees

    Directory of Open Access Journals (Sweden)

    Yanjuan Geng

    2017-01-01

    Full Text Available Previous studies have shown that arm position variations significantly degrade the classification performance of myoelectric pattern-recognition-based prosthetic control, and the cascade classifier (CC) and multiposition classifier (MPC) have been proposed to minimize such degradation in offline scenarios. However, it remains unknown whether these approaches also perform well in the clinical use of multifunctional prosthesis control. In this study, the online effect of arm position variation on motion identification was evaluated by using a motion-test environment (MTE) developed to mimic the real-time control of myoelectric prostheses. The performance of different classifier configurations in reducing the impact of arm position variation was investigated using four real-time metrics based on a dataset obtained from transradial amputees. The results of this study showed that, compared to the commonly used motion classification method, the CC and MPC configurations improved the real-time performance across seven classes of movements in five different arm positions (8.7% and 12.7% increments of motion completion rate, resp.). The results also indicated that high offline classification accuracy might not ensure good real-time performance under variable arm positions, which necessitates the investigation of real-time control performance to gain proper insight into the clinical implementation of EMG-pattern-recognition-based controllers for limb amputees.

  2. Human motion sensing and recognition a fuzzy qualitative approach

    CERN Document Server

    Liu, Honghai; Ji, Xiaofei; Chan, Chee Seng; Khoury, Mehdi

    2017-01-01

    This book introduces readers to the latest exciting advances in human motion sensing and recognition, from the theoretical development of fuzzy approaches to their applications. The topics covered include human motion recognition in 2D and 3D, hand motion analysis with contact sensors, and vision-based view-invariant motion recognition, especially from the perspective of Fuzzy Qualitative techniques. With the rapid development of technologies in microelectronics, computers, networks, and robotics over the last decade, increasing attention has been focused on human motion sensing and recognition in many emerging and active disciplines where human motions need to be automatically tracked, analyzed or understood, such as smart surveillance, intelligent human-computer interaction, robot motion learning, and interactive gaming. Current challenges mainly stem from the dynamic environment, data multi-modality, uncertain sensory information, and real-time issues. These techniques are shown to effectively address the ...

  3. Real-Time Gait Cycle Parameter Recognition Using a Wearable Accelerometry System

    Directory of Open Access Journals (Sweden)

    Jun-Ming Lu

    2011-07-01

    Full Text Available This paper presents the development of a wearable accelerometry system for real-time gait cycle parameter recognition. Using a tri-axial accelerometer, the wearable motion detector is a single waist-mounted device that measures trunk accelerations during walking. Several gait cycle parameters, including cadence, step regularity, stride regularity and step symmetry, can be estimated in real-time by using an autocorrelation procedure. For validation purposes, five Parkinson's disease (PD) patients and five young healthy adults were recruited in an experiment. The gait cycle parameters between the two subject groups of different mobility can be quantified and distinguished by the system. Practical considerations and limitations for implementing the autocorrelation procedure in such a real-time system are also discussed. This study can be extended to future attempts at real-time detection of disabling gaits, such as festinating or freezing of gait in PD patients. Ambulatory rehabilitation, gait assessment and personal telecare for people with gait disorders are also possible applications.
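
    The autocorrelation procedure referenced above can be illustrated with a short sketch. The following is a minimal example, not the authors' implementation, that estimates cadence and step/stride regularity from a vertical trunk-acceleration signal; the sampling rate, the 0.3-1.0 s step-lag search range and the synthetic signal are assumptions made for illustration.

    ```python
    import numpy as np

    def gait_parameters(acc_vertical, fs=100.0):
        """Estimate cadence and step/stride regularity from trunk acceleration.

        acc_vertical : 1D array of vertical trunk acceleration
        fs           : sampling rate in Hz (assumed, not from the paper)
        """
        x = acc_vertical - np.mean(acc_vertical)
        # Autocorrelation normalized so that lag 0 equals 1.
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]
        ac /= ac[0]

        # First dominant peak within a plausible step duration of 0.3-1.0 s.
        lo, hi = int(0.3 * fs), int(1.0 * fs)
        step_lag = lo + np.argmax(ac[lo:hi])
        stride_lag = 2 * step_lag                  # one stride = two steps

        cadence = 60.0 * fs / step_lag             # steps per minute
        step_regularity = float(ac[step_lag])      # closer to 1 = more regular
        stride_regularity = float(ac[stride_lag])
        step_symmetry = step_regularity / stride_regularity
        return cadence, step_regularity, stride_regularity, step_symmetry

    # Synthetic walking signal: 0.5 s per step (120 steps/min) plus noise.
    fs = 100.0
    t = np.arange(0, 20, 1 / fs)
    acc = np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(len(t))
    print(gait_parameters(acc, fs))
    ```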

  4. Real-time motion-adaptive-optimization (MAO) in TomoTherapy

    Energy Technology Data Exchange (ETDEWEB)

    Lu Weiguo; Chen Mingli; Ruchala, Kenneth J; Chen Quan; Olivera, Gustavo H [TomoTherapy Inc., 1240 Deming Way, Madison, WI (United States); Langen, Katja M; Kupelian, Patrick A [MD Anderson Cancer Center-Orlando, Orlando, FL (United States)], E-mail: wlu@tomotherapy.com

    2009-07-21

    IMRT delivery follows a planned leaf sequence, which is optimized before treatment delivery. However, it is hard to model real-time variations, such as respiration, in the planning procedure. In this paper, we propose a negative feedback system of IMRT delivery that incorporates real-time optimization to account for intra-fraction motion. Specifically, we developed a feasible workflow of real-time motion-adaptive-optimization (MAO) for TomoTherapy delivery. TomoTherapy delivery is characterized by thousands of projections with a fast projection rate and ultra-fast binary leaf motion. The technique of MAO-guided delivery calculates (i) the motion-encoded dose that has been delivered up to any given projection during the delivery and (ii) the future dose that will be delivered based on the estimated motion probability and future fluence map. These two pieces of information are then used to optimize the leaf open time of the upcoming projection right before its delivery. It consists of several real-time procedures, including 'motion detection and prediction', 'delivered dose accumulation', 'future dose estimation' and 'projection optimization'. Real-time MAO requires that all procedures are executed in time less than the duration of a projection. We implemented and tested this technique using a TomoTherapy® research system. The MAO calculation took about 100 ms per projection. We calculated and compared MAO-guided delivery with two other types of delivery, motion-without-compensation delivery (MD) and static delivery (SD), using simulated 1D cases, real TomoTherapy plans and the motion traces from clinical lung and prostate patients. The results showed that the proposed technique effectively compensated for motion errors of all test cases. Dose distributions and DVHs of MAO-guided delivery approached those of SD, for regular and irregular respiration with a peak-to-peak amplitude of 3 cm, and for medium and large
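
    The per-projection loop described in this abstract can be sketched conceptually. The toy example below is our illustration, not the TomoTherapy implementation: it picks leaf open times for the upcoming projection by least-squares fitting the residual between the planned dose and the sum of the motion-encoded dose already delivered and the estimated future dose, then clips the result to a physical open-time limit. The 1D dose-deposition matrix and all numbers are assumed.

    ```python
    import numpy as np

    def mao_next_projection(D_plan, D_delivered, A_next, D_future, t_max=1.0):
        """Choose leaf open times for the upcoming projection (toy 1D model).

        D_plan      : planned cumulative dose, shape (n_voxels,)
        D_delivered : motion-encoded dose delivered so far, shape (n_voxels,)
        A_next      : dose per unit leaf open time for the next projection,
                      shape (n_voxels, n_leaves)
        D_future    : estimated dose of the remaining projections, shape (n_voxels,)
        t_max       : maximum leaf open time (projection duration)
        """
        residual = D_plan - D_delivered - D_future
        # Least-squares leaf open times, clipped to the physically deliverable range.
        t, *_ = np.linalg.lstsq(A_next, residual, rcond=None)
        return np.clip(t, 0.0, t_max)

    # Toy numbers: 6 voxels, 3 leaves.
    rng = np.random.default_rng(0)
    A_next = rng.random((6, 3))
    t_true = np.array([0.2, 0.7, 0.5])
    D_plan = A_next @ t_true + 1.0        # plan includes dose from other projections
    D_delivered = np.full(6, 0.6)         # dose already delivered (with motion)
    D_future = np.full(6, 0.4)            # estimated dose still to come
    print(mao_next_projection(D_plan, D_delivered, A_next, D_future))
    ```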

  5. Real-time motion-adaptive-optimization (MAO) in TomoTherapy

    International Nuclear Information System (INIS)

    Lu Weiguo; Chen Mingli; Ruchala, Kenneth J; Chen Quan; Olivera, Gustavo H; Langen, Katja M; Kupelian, Patrick A

    2009-01-01

    IMRT delivery follows a planned leaf sequence, which is optimized before treatment delivery. However, it is hard to model real-time variations, such as respiration, in the planning procedure. In this paper, we propose a negative feedback system of IMRT delivery that incorporates real-time optimization to account for intra-fraction motion. Specifically, we developed a feasible workflow of real-time motion-adaptive-optimization (MAO) for TomoTherapy delivery. TomoTherapy delivery is characterized by thousands of projections with a fast projection rate and ultra-fast binary leaf motion. The technique of MAO-guided delivery calculates (i) the motion-encoded dose that has been delivered up to any given projection during the delivery and (ii) the future dose that will be delivered based on the estimated motion probability and future fluence map. These two pieces of information are then used to optimize the leaf open time of the upcoming projection right before its delivery. It consists of several real-time procedures, including 'motion detection and prediction', 'delivered dose accumulation', 'future dose estimation' and 'projection optimization'. Real-time MAO requires that all procedures are executed in time less than the duration of a projection. We implemented and tested this technique using a TomoTherapy® research system. The MAO calculation took about 100 ms per projection. We calculated and compared MAO-guided delivery with two other types of delivery, motion-without-compensation delivery (MD) and static delivery (SD), using simulated 1D cases, real TomoTherapy plans and the motion traces from clinical lung and prostate patients. The results showed that the proposed technique effectively compensated for motion errors of all test cases. Dose distributions and DVHs of MAO-guided delivery approached those of SD, for regular and irregular respiration with a peak-to-peak amplitude of 3 cm, and for medium and large prostate motions. The results conceptually

  6. Real-time stylistic prediction for whole-body human motions.

    Science.gov (United States)

    Matsubara, Takamitsu; Hyon, Sang-Ho; Morimoto, Jun

    2012-01-01

    The ability to predict human motion is crucial in several contexts such as human tracking by computer vision and the synthesis of human-like computer graphics. Previous work has focused on off-line processes with well-segmented data; however, many applications such as robotics require real-time control with efficient computation. In this paper, we propose a novel approach called real-time stylistic prediction for whole-body human motions to satisfy these requirements. This approach uses a novel generative model to represent a whole-body human motion including rhythmic motion (e.g., walking) and discrete motion (e.g., jumping). The generative model is composed of a low-dimensional state (phase) dynamics and a two-factor observation model, allowing it to capture the diversity of motion styles in humans. A real-time adaptation algorithm was derived to estimate both state variables and style parameter of the model from non-stationary unlabeled sequential observations. Moreover, with a simple modification, the algorithm allows real-time adaptation even from incomplete (partial) observations. Based on the estimated state and style, a future motion sequence can be accurately predicted. In our implementation, it takes less than 15 ms for both adaptation and prediction at each observation. Our real-time stylistic prediction was evaluated for human walking, running, and jumping behaviors. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Real-time intelligent pattern recognition algorithm for surface EMG signals

    Directory of Open Access Journals (Sweden)

    Jahed Mehran

    2007-12-01

    Full Text Available Abstract Background Electromyography (EMG) is the study of muscle function through the inquiry of electrical signals that the muscles emanate. EMG signals collected from the surface of the skin (Surface Electromyogram: sEMG) can be used in different applications, such as recognizing musculoskeletal neural based patterns intercepted for hand prosthesis movements. Current systems designed for controlling prosthetic hands either have limited functions, can only be used to perform simple movements, or use an excessive number of electrodes in order to achieve acceptable results. In an attempt to overcome these problems we have proposed an intelligent system to recognize hand movements and have provided a user assessment routine to evaluate the correctness of executed movements. Methods We propose to use an intelligent approach based on an adaptive neuro-fuzzy inference system (ANFIS) integrated with a real-time learning scheme to identify hand motion commands. For this purpose, and to consider the effect of user evaluation on recognizing hand movements, vision feedback is applied to increase the capability of our system. By using this scheme the user may assess the correctness of the performed hand movement. In this work a hybrid method for training the fuzzy system, consisting of back-propagation (BP) and least mean square (LMS), is utilized. Also, in order to optimize the number of fuzzy rules, a subtractive clustering algorithm has been developed. To design an effective system, we consider a conventional scheme of an EMG pattern recognition system. To design this system we propose to use two different sets of EMG features, namely time domain (TD) and time-frequency representation (TFR). Also, in order to decrease the undesirable effects of the dimension of these feature sets, principal component analysis (PCA) is utilized. Results In this study, the myoelectric signals considered for classification consist of six unique hand movements. Features chosen for EMG signal
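
    The time-domain (TD) feature set and PCA step mentioned in this abstract can be sketched as follows. This is a minimal illustrative pipeline, not the authors' code: the four classic TD features (mean absolute value, waveform length, zero crossings, slope sign changes), the window/step lengths and the thresholds are common choices assumed here rather than values from the paper.

    ```python
    import numpy as np

    def td_features(window, thresh=0.01):
        """Classic time-domain features for one sEMG channel window."""
        mav = np.mean(np.abs(window))                      # mean absolute value
        wl = np.sum(np.abs(np.diff(window)))               # waveform length
        zc = np.sum((window[:-1] * window[1:] < 0) &
                    (np.abs(np.diff(window)) > thresh))    # zero crossings
        d = np.diff(window)
        ssc = np.sum((d[:-1] * d[1:] < 0) &
                     ((np.abs(d[:-1]) > thresh) | (np.abs(d[1:]) > thresh)))
        return np.array([mav, wl, zc, ssc], dtype=float)   # ssc = slope sign changes

    def feature_matrix(emg, win=200, step=50):
        """Sliding-window TD features for multi-channel sEMG of shape (n_samples, n_channels)."""
        feats = []
        for start in range(0, emg.shape[0] - win + 1, step):
            w = emg[start:start + win, :]
            feats.append(np.concatenate([td_features(w[:, c]) for c in range(w.shape[1])]))
        return np.asarray(feats)

    def pca_reduce(X, n_components=4):
        """Simple PCA via SVD on mean-centered features."""
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:n_components].T

    # Synthetic 4-channel recording, 1 kHz for 2 s.
    emg = 0.05 * np.random.randn(2000, 4)
    X = feature_matrix(emg)
    print(pca_reduce(X).shape)
    ```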

  8. New technique for real-time distortion-invariant multiobject recognition and classification

    Science.gov (United States)

    Hong, Rutong; Li, Xiaoshun; Hong, En; Wang, Zuyi; Wei, Hongan

    2001-04-01

    A real-time hybrid distortion-invariant OPR system was established to perform 3D multiobject distortion-invariant automatic pattern recognition. A wavelet transform technique was used for digital preprocessing of the input scene, to suppress the noisy background and enhance the recognized object. A three-layer backpropagation artificial neural network was used in correlation signal post-processing to perform multiobject distortion-invariant recognition and classification. The C-80 and NOA real-time processing ability and multithread programming technology were used to perform high-speed parallel multitask processing and speed up the post-processing rate for ROIs. The reference filter library was constructed for the distortion versions of 3D object model images based on distortion parameter tolerance measures such as rotation, azimuth and scale. Real-time optical correlation recognition testing of this OPR system demonstrates that, using the preprocessing, post-processing, the nonlinear algorithm of optimum filtering, the RFL construction technique and multithread programming technology, a high probability of recognition and recognition rate were obtained for the real-time multiobject distortion-invariant OPR system. The recognition reliability and rate were improved greatly. These techniques are very useful for automatic target recognition.

  9. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    Science.gov (United States)

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  10. NUI framework based on real-time head pose estimation and hand gesture recognition

    Directory of Open Access Journals (Sweden)

    Kim Hyunduk

    2016-01-01

    Full Text Available The natural user interface (NUI) is used for natural motion interaction without using devices or tools such as mice, keyboards, pens and markers. In this paper, we develop a natural user interface framework based on two recognition modules. The first module is a real-time head pose estimation module using random forests, and the second module is a hand gesture recognition module, named the Hand gesture Key Emulation Toolkit (HandGKET). Using the head pose estimation module, we can know where the user is looking and what the user's focus of attention is. Moreover, using the hand gesture recognition module, we can also control the computer using the user's hand gestures without a mouse and keyboard. In the proposed framework, the user's head direction and hand gestures are mapped into mouse and keyboard events, respectively.

  11. Real-time embedded face recognition for smart home

    NARCIS (Netherlands)

    Zuo, F.; With, de P.H.N.

    2005-01-01

    We propose a near real-time face recognition system for embedding in consumer applications. The system is embedded in a networked home environment and enables personalized services by automatic identification of users. The aim of our research is to design and build a face recognition system that is

  12. Novel methods for real-time 3D facial recognition

    OpenAIRE

    Rodrigues, Marcos; Robinson, Alan

    2010-01-01

    In this paper we discuss our approach to real-time 3D face recognition. We argue the need for real time operation in a realistic scenario and highlight the required pre- and post-processing operations for effective 3D facial recognition. We focus attention to some operations including face and eye detection, and fast post-processing operations such as hole filling, mesh smoothing and noise removal. We consider strategies for hole filling such as bilinear and polynomial interpolation and Lapla...

  13. Real Time MRI Motion Correction with Markerless Tracking

    DEFF Research Database (Denmark)

    Benjaminsen, Claus; Jensen, Rasmus Ramsbøl; Wighton, Paul

    Prospective motion correction for MRI neuroimaging has been demonstrated using MR navigators and external tracking systems using markers. The drawbacks of these two motion estimation methods include prolonged scan time plus lack of compatibility with all image acquisitions, and difficulties...... validating marker attachment resulting in uncertain estimation of the brain motion respectively. We have developed a markerless tracking system, and in this work we demonstrate the use of our system for prospective motion correction, and show that despite being computationally demanding, markerless tracking...... can be implemented for real time motion correction....

  14. Real-Time Hand Posture Recognition Using a Range Camera

    Science.gov (United States)

    Lahamy, Herve

    The basic goal of human computer interaction is to improve the interaction between users and computers by making computers more usable and receptive to the user's needs. Within this context, the use of hand postures to replace traditional devices such as keyboards, mice and joysticks is being explored by many researchers. The goal is to interpret human postures via mathematical algorithms. Hand posture recognition has gained popularity in recent years, and could become the future tool for humans to interact with computers or virtual environments. An exhaustive description of the frequently used methods available in the literature for hand posture recognition is provided. It focuses on the different types of sensors and data used, the segmentation and tracking methods, the features used to represent the hand postures, as well as the classifiers considered in the recognition process. Those methods are usually presented as highly robust, with a recognition rate close to 100%. However, a couple of critical points necessary for a successful real-time hand posture recognition system require major improvement. Those points include the features used to represent the hand segment, the number of postures simultaneously recognizable, the invariance of the features with respect to rotation, translation and scale, and also the behavior of the classifiers against non-perfect hand segments, for example segments including part of the arm or missing part of the palm. A 3D time-of-flight camera named SR4000 has been chosen to develop a new methodology because of its capability to provide 3D information on the imaged scene in real-time and at a high frame rate. This sensor has been described and evaluated for its capability to capture a moving hand in real-time. A new recognition method that uses the 3D information provided by the range camera to recognize hand postures has been proposed. The different steps of this methodology including the segmentation, the tracking, the hand

  15. Validation of Energy Expenditure Prediction Models Using Real-Time Shoe-Based Motion Detectors.

    Science.gov (United States)

    Lin, Shih-Yun; Lai, Ying-Chih; Hsia, Chi-Chun; Su, Pei-Fang; Chang, Chih-Han

    2017-09-01

    This study aimed to verify and compare the accuracy of energy expenditure (EE) prediction models using shoe-based motion detectors with embedded accelerometers. Three physical activity (PA) datasets (unclassified, recognition, and intensity segmentation) were used to develop three prediction models. A multiple classification flow and these models were used to estimate EE. The "unclassified" dataset was defined as the data without PA recognition, the "recognition" as the data classified with PA recognition, and the "intensity segmentation" as the data with intensity segmentation. The three datasets contained accelerometer signals (quantified as signal magnitude area (SMA)) and net heart rate (HRnet). The accuracy of these models was assessed according to the deviation between physically measured EE and model-estimated EE. The variance between physically measured EE and model-estimated EE expressed by simple linear regressions was increased by 63% and 13% using SMA and HRnet, respectively. The accuracy of the EE predicted from accelerometer signals is influenced by the different activities that exhibit different count-EE relationships within the same prediction model. The recognition model provides a better estimation and lower variability of EE compared with the unclassified and intensity segmentation models. The proposed shoe-based motion detectors can improve the accuracy of EE estimation and have great potential to be used to manage everyday exercise in real time.
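
    The signal magnitude area (SMA) predictor used in this study can be computed in a few lines. The sketch below is an illustration under assumed sampling and window settings, not the authors' models: it derives SMA from a tri-axial accelerometer window and feeds it, with a net heart rate, into a generic linear EE equation whose coefficients are placeholders.

    ```python
    import numpy as np

    def signal_magnitude_area(acc, fs=50.0):
        """SMA of a tri-axial accelerometer window.

        acc : array of shape (n_samples, 3), gravity component already removed
        fs  : sampling rate in Hz (assumed)
        """
        duration = acc.shape[0] / fs
        return np.sum(np.abs(acc)) / duration   # integral of |ax|+|ay|+|az| per unit time

    def estimate_ee(sma, hr_net, coeffs=(0.1, 0.02, 0.05)):
        """Linear EE prediction. The coefficients are placeholders, not from the paper."""
        b0, b_sma, b_hr = coeffs
        return b0 + b_sma * sma + b_hr * hr_net

    # Example: a 5 s window of body acceleration, net heart rate 30 bpm above rest.
    acc = 0.2 * np.random.randn(250, 3)
    print(estimate_ee(signal_magnitude_area(acc), hr_net=30.0))
    ```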

  16. Real-Time Target Motion Animation for Missile Warning System Testing

    Science.gov (United States)

    2006-04-01

    T. Perkins, R. Sundberg, J. Cordell, Z. Tun, and M. Owen, "Real-time target motion animation for missile warning system testing," Proc. SPIE Vol. 6208.

  17. Self-motion perception: assessment by real-time computer-generated animations

    Science.gov (United States)

    Parker, D. E.; Phillips, J. O.

    2001-01-01

    We report a new procedure for assessing complex self-motion perception. In three experiments, subjects manipulated a 6 degree-of-freedom magnetic-field tracker which controlled the motion of a virtual avatar so that its motion corresponded to the subjects' perceived self-motion. The real-time animation created by this procedure was stored using a virtual video recorder for subsequent analysis. Combined real and illusory self-motion and vestibulo-ocular reflex eye movements were evoked by cross-coupled angular accelerations produced by roll and pitch head movements during passive yaw rotation in a chair. Contrary to previous reports, illusory self-motion did not correspond to expectations based on semicircular canal stimulation. Illusory pitch head-motion directions were as predicted for only 37% of trials; whereas, slow-phase eye movements were in the predicted direction for 98% of the trials. The real-time computer-generated animations procedure permits use of naive, untrained subjects who lack a vocabulary for reporting motion perception and is applicable to basic self-motion perception studies, evaluation of motion simulators, assessment of balance disorders and so on.

  18. Real-Time Control of an Exoskeleton Hand Robot with Myoelectric Pattern Recognition.

    Science.gov (United States)

    Lu, Zhiyuan; Chen, Xiang; Zhang, Xu; Tong, Kay-Yu; Zhou, Ping

    2017-08-01

    Robot-assisted training provides an effective approach to neurological injury rehabilitation. To meet the challenge of hand rehabilitation after neurological injuries, this study presents an advanced myoelectric pattern recognition scheme for real-time intention-driven control of a hand exoskeleton. The developed scheme detects and recognizes the user's intention of six different hand motions using four channels of surface electromyography (EMG) signals acquired from the forearm and hand muscles, and then drives the exoskeleton to assist the user in accomplishing the intended motion. The system was tested with eight neurologically intact subjects and two individuals with spinal cord injury (SCI). The overall control accuracy was [Formula: see text] for the neurologically intact subjects and [Formula: see text] for the SCI subjects. The total lag of the system was approximately 250[Formula: see text]ms including data acquisition, transmission and processing. One SCI subject also participated in training sessions in his second and third visits. Both the control accuracy and efficiency tended to improve. These results show great potential for applying the advanced myoelectric pattern recognition control of the wearable robotic hand system toward improving hand function after neurological injuries.

  19. FPGA Implementation of Real-Time Ethernet for Motion Control

    Directory of Open Access Journals (Sweden)

    Chen Youdong

    2013-01-01

    Full Text Available This paper provides an applicable implementation of a real-time Ethernet named CASNET, which modifies the Ethernet medium access control (MAC) to achieve the real-time requirements of motion control. CASNET is the communication protocol used for the motion control system. The Verilog hardware description language (Verilog HDL) has been used in the MAC logic design. The designed MAC serves as one of the intellectual properties (IPs) and is applicable to various industrial controllers. The interface of the physical layer is RJ45. The other layers have been implemented by using C programs. The real-time Ethernet has been implemented by using field programmable gate array (FPGA) technology, and the proposed solution has been tested for cycle time, synchronization accuracy, and with Wireshark.

  20. FPGA-Based Real-Time Motion Detection for Automated Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2016-03-01

    Full Text Available Design of automated video surveillance systems is one of the most demanding tasks in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision-based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including the input camera interface, the designed motion detection VLSI architecture, and the output display interface, with real-time relevant motion detection capabilities, has been implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects relevant motion in real-time in live PAL (720 × 576) resolution video streams coming directly from the camera.

  1. Towards Real-Time Speech Emotion Recognition for Affective E-Learning

    Science.gov (United States)

    Bahreini, Kiavash; Nadolski, Rob; Westera, Wim

    2016-01-01

    This paper presents the voice emotion recognition part of the FILTWAM framework for real-time emotion recognition in affective e-learning settings. FILTWAM (Framework for Improving Learning Through Webcams And Microphones) intends to offer timely and appropriate online feedback based upon learner's vocal intonations and facial expressions in order…

  2. Frame based Motion Detection for real-time Surveillance

    OpenAIRE

    Brajesh Patel; Neelam Patel

    2012-01-01

    In this paper a series of algorithms has been developed to track motion features for a surveillance system. In the proposed work, pixel variance plays a vital role in detecting the moving object in a particular clip. Even a small amount of motion in a frame is detected easily by calculating the pixel variance. The algorithm reports zero variation only when there is no motion in a real-time video sequence. It is simple and easy to use for motion detection in the frames of ...

  3. Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery.

    Science.gov (United States)

    Rottmann, Joerg; Keall, Paul; Berbeco, Ross

    2013-09-01

    To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.
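
    Markerless soft-tissue localization in BEV images is typically done by matching a tumor template against each incoming frame. The paper's in-house algorithm is not reproduced here; the sketch below uses plain normalized cross-correlation as a stand-in to show the structure of such a localization step, with a synthetic frame and template as assumptions.

    ```python
    import numpy as np

    def ncc_locate(frame, template):
        """Locate a template in a frame by brute-force normalized cross-correlation."""
        fh, fw = frame.shape
        th, tw = template.shape
        t = template - template.mean()
        t_norm = np.sqrt(np.sum(t * t))
        best_score, best_pos = -np.inf, (0, 0)
        for y in range(fh - th + 1):
            for x in range(fw - tw + 1):
                patch = frame[y:y + th, x:x + tw]
                p = patch - patch.mean()
                denom = np.sqrt(np.sum(p * p)) * t_norm
                score = np.sum(p * t) / denom if denom > 0 else -1.0
                if score > best_score:
                    best_score, best_pos = score, (y, x)
        return best_pos, best_score

    # Toy example: the template is cropped from a reference frame, then the
    # "tumor" shifts by (4, 6) pixels in the new frame.
    ref = np.zeros((64, 64))
    ref[20:30, 20:30] = 1.0
    template = ref[16:34, 16:34]        # template includes some background
    frame = np.zeros((64, 64))
    frame[24:34, 26:36] = 1.0           # shifted object
    (y, x), score = ncc_locate(frame, template)
    print("matched at", (y, x), "score", round(score, 3))
    ```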

  4. Real-time motional Stark effect in jet

    International Nuclear Information System (INIS)

    Alves, D.; Stephen, A.; Hawkes, N.; Dalley, S.; Goodyear, A.; Felton, R.; Joffrin, E.; Fernandes, H.

    2004-01-01

    The increasing importance of real-time measurements and control systems in JET experiments, regarding e.g. Internal Transport Barrier (ITB) and q-profile control, has motivated the development of a real-time motional Stark effect (MSE) system. The MSE diagnostic allows the measurement of local magnetic fields at different locations along the neutral beam path, therefore providing local measurements of the current and q-profiles. Recently in JET, an upgrade of the MSE diagnostic has been implemented, incorporating a totally new system which allows the use of this diagnostic as a real-time control tool as well as an extended data source for off-line analysis. This paper will briefly describe the technical features of the real-time diagnostic with the main focus on the system architecture, which consists of a VME crate hosting three PowerPC processor boards and a fast ADC, all connected via Front Panel Data Port (FPDP). The DSP algorithm implements a lock-in amplifier required to demodulate the JET MSE signals. Some applications for the system will be covered, such as feeding the real-time equilibrium reconstruction code (EQUINOX) and allowing full-coverage analysis of the Neutral Beam time window. A brief comparison between the real-time MSE analysis and the off-line analysis will also be presented

  5. Real-Time (Vision-Based) Road Sign Recognition Using an Artificial Neural Network

    Science.gov (United States)

    Islam, Kh Tohidul; Raj, Ram Gopal

    2017-01-01

    Road sign recognition is a driver support function that can be used to notify and warn the driver by showing the restrictions that may be effective on the current stretch of road. Examples for such regulations are ‘traffic light ahead’ or ‘pedestrian crossing’ indications. The present investigation targets the recognition of Malaysian road and traffic signs in real-time. Real-time video is taken by a digital camera from a moving vehicle and real world road signs are then extracted using vision-only information. The system is based on two stages, one performs the detection and another one is for recognition. In the first stage, a hybrid color segmentation algorithm has been developed and tested. In the second stage, an introduced robust custom feature extraction method is used for the first time in a road sign recognition approach. Finally, a multilayer artificial neural network (ANN) has been created to recognize and interpret various road signs. It is robust because it has been tested on both standard and non-standard road signs with significant recognition accuracy. This proposed system achieved an average of 99.90% accuracy with 99.90% of sensitivity, 99.90% of specificity, 99.90% of f-measure, and 0.001 of false positive rate (FPR) with 0.3 s computational time. This low FPR can increase the system stability and dependability in real-time applications. PMID:28406471

  6. Real-Time (Vision-Based) Road Sign Recognition Using an Artificial Neural Network.

    Science.gov (United States)

    Islam, Kh Tohidul; Raj, Ram Gopal

    2017-04-13

    Road sign recognition is a driver support function that can be used to notify and warn the driver by showing the restrictions that may be effective on the current stretch of road. Examples for such regulations are 'traffic light ahead' or 'pedestrian crossing' indications. The present investigation targets the recognition of Malaysian road and traffic signs in real-time. Real-time video is taken by a digital camera from a moving vehicle and real world road signs are then extracted using vision-only information. The system is based on two stages, one performs the detection and another one is for recognition. In the first stage, a hybrid color segmentation algorithm has been developed and tested. In the second stage, an introduced robust custom feature extraction method is used for the first time in a road sign recognition approach. Finally, a multilayer artificial neural network (ANN) has been created to recognize and interpret various road signs. It is robust because it has been tested on both standard and non-standard road signs with significant recognition accuracy. This proposed system achieved an average of 99.90% accuracy with 99.90% of sensitivity, 99.90% of specificity, 99.90% of f-measure, and 0.001 of false positive rate (FPR) with 0.3 s computational time. This low FPR can increase the system stability and dependability in real-time applications.

  7. Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery

    International Nuclear Information System (INIS)

    Rottmann, Joerg; Berbeco, Ross; Keall, Paul

    2013-01-01

    Purpose: To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient.Methods: 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps.Results: Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm.Conclusions: The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time

  8. Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery

    Energy Technology Data Exchange (ETDEWEB)

    Rottmann, Joerg; Berbeco, Ross [Brigham and Women' s Hospital, Dana Farber-Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States); Keall, Paul [Radiation Physics Laboratory, Sydney Medical School, University of Sydney, Sydney NSW 2006 (Australia)

    2013-09-15

    Purpose: To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient.Methods: 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps.Results: Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm.Conclusions: The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.

  9. Gliding and Saccadic Gaze Gesture Recognition in Real Time

    DEFF Research Database (Denmark)

    Rozado, David; San Agustin, Javier; Rodriguez, Francisco

    2012-01-01

    , and their corresponding real-time recognition algorithms, Hierarchical Temporal Memory networks and the Needleman-Wunsch algorithm for sequence alignment. Our results show how a specific combination of gaze gesture modality, namely saccadic gaze gestures, and recognition algorithm, Needleman-Wunsch, allows for reliable...... usage of intentional gaze gestures to interact with a computer with accuracy rates of up to 98% and acceptable completion speed. Furthermore, the gesture recognition engine does not interfere with otherwise standard human-machine gaze interaction generating therefore, very low false positive rates...
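
    The Needleman-Wunsch alignment used for saccadic gaze gestures is a standard dynamic-programming algorithm. A minimal sketch follows, with gaze gestures encoded as strings of coarse saccade directions; this encoding and the scoring values are illustrative assumptions, not necessarily the authors' settings.

    ```python
    def needleman_wunsch(seq_a, seq_b, match=2, mismatch=-1, gap=-2):
        """Global alignment score of two symbol sequences (higher = more similar)."""
        n, m = len(seq_a), len(seq_b)
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap
        for j in range(1, m + 1):
            score[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
                score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
        return score[n][m]

    # Gaze gestures encoded as saccade directions: R(ight), L(eft), U(p), D(own).
    template = "RDLU"            # stored gesture template
    observed = "RDDLU"           # observed sequence with one spurious saccade
    print(needleman_wunsch(template, observed))   # high score -> gesture recognized
    ```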

  10. Haar-like Features for Robust Real-Time Face Recognition

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2013-01-01

    Face recognition is still a very challenging task when the input face image is noisy, occluded by some obstacles, of very low-resolution, not facing the camera, and not properly illuminated. These problems make the feature extraction and consequently the face recognition system unstable....... The proposed system in this paper introduces the novel idea of using Haar-like features, which have commonly been used for object detection, along with a probabilistic classifier for face recognition. The proposed system is simple, real-time, effective and robust against most of the mentioned problems....... Experimental results on public databases show that the proposed system indeed outperforms the state-of-the-art face recognition systems....
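
    Haar-like features are differences of pixel sums over adjacent rectangles, computed in constant time from an integral image. The sketch below shows the mechanics with an illustrative two-rectangle feature; it is not the paper's specific feature set or its probabilistic classifier.

    ```python
    import numpy as np

    def integral_image(img):
        """Summed-area table with a zero first row/column for easy indexing."""
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
        ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
        return ii

    def rect_sum(ii, y, x, h, w):
        """Sum of pixel values in the h-by-w rectangle with top-left corner (y, x)."""
        return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

    def haar_two_rect_vertical(ii, y, x, h, w):
        """Two-rectangle Haar-like feature: left half minus right half."""
        half = w // 2
        return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

    # Example: a patch with a dark left half and a bright right half.
    patch = np.hstack([np.full((24, 12), 50.0), np.full((24, 12), 200.0)])
    ii = integral_image(patch)
    print(haar_two_rect_vertical(ii, 0, 0, 24, 24))   # strongly negative response
    ```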

  11. The INGV Real Time Strong Motion Database

    Science.gov (United States)

    Massa, Marco; D'Alema, Ezio; Mascandola, Claudia; Lovati, Sara; Scafidi, Davide; Gomez, Antonio; Carannante, Simona; Franceschina, Gianlorenzo; Mirenna, Santi; Augliera, Paolo

    2017-04-01

    The INGV real time strong motion data sharing is assured by the INGV Strong Motion Database. ISMD (http://ismd.mi.ingv.it) was designed in the last months of 2011 in cooperation among different INGV departments, with the aim of organizing the distribution of the INGV strong-motion data using standard procedures for data acquisition and processing. The first version of the web portal was published soon after the occurrence of the 2012 Emilia (Northern Italy), Mw 6.1, seismic sequence. At that time ISMD was the first European real-time web portal devoted to the engineering seismology community. After four years of successful operation, the thousands of accelerometric waveforms collected in the archive made a technological upgrade of the system necessary, in order to better organize the archiving of new data and to answer user requests more efficiently. ISMD 2.0 is based on PostgreSQL (www.postgresql.org), an open source object-relational database. The main purpose of the web portal is to distribute, a few minutes after the origin time, the accelerometric waveforms and related metadata of Italian earthquakes with ML≥3.0. Data are provided both in raw SAC (counts) and automatically corrected ASCII (gal) formats. The web portal also provides, for each event, a detailed description of the ground motion parameters (i.e. Peak Ground Acceleration, Velocity and Displacement, Arias and Housner Intensities), data converted to velocity and displacement, response spectra up to 10.0 s, and general maps concerning the recent and historical seismicity of the area together with information about its seismic hazard. The focal parameters of the events are provided by the INGV National Earthquake Center (CNT, http://cnt.rm.ingv.it). Moreover, the database provides a detailed site characterization section for each strong motion station, based on geological, geomorphological and geophysical information. At present (i.e. January 2017), ISMD includes 987 (121
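
    Two of the ground-motion parameters distributed by the portal, peak ground acceleration and Arias intensity, follow directly from a corrected accelerogram. The sketch below uses the standard definitions with an assumed sampling interval and a synthetic record; it is not ISMD's processing code.

    ```python
    import numpy as np

    G = 9.81  # m/s^2

    def peak_ground_acceleration(acc):
        """PGA: maximum absolute acceleration of the record (same units as input)."""
        return np.max(np.abs(acc))

    def arias_intensity(acc_mps2, dt):
        """Arias intensity I_A = (pi / 2g) * integral of a(t)^2 dt, in m/s."""
        return np.pi / (2.0 * G) * np.sum(acc_mps2 ** 2) * dt   # rectangle-rule integral

    # Example: a 5 Hz, 0.1 g sinusoidal burst lasting 4 s, sampled at 200 Hz.
    dt = 0.005
    t = np.arange(0, 4, dt)
    acc = 0.1 * G * np.sin(2 * np.pi * 5 * t)
    print("PGA [m/s^2]:", peak_ground_acceleration(acc))
    print("Arias intensity [m/s]:", arias_intensity(acc, dt))
    ```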

  12. FPGA-based architecture for motion recovering in real-time

    Science.gov (United States)

    Arias-Estrada, Miguel; Maya-Rueda, Selene E.; Torres-Huitzil, Cesar

    2002-03-01

    A key problem in the computer vision field is the measurement of object motion in a scene. The main goal is to compute an approximation of the 3D motion from the analysis of an image sequence. Once computed, this information can be used as a basis to reach higher level goals in different applications. Motion estimation algorithms pose a significant computational load for sequential processors, limiting their use in practical applications. In this work we propose a hardware architecture for real-time motion estimation based on FPGA technology. The technique used for motion estimation is optical flow, due to its accuracy and the density of the velocity estimates; however, other techniques are being explored. The architecture is composed of parallel modules working in a pipeline scheme to reach high throughput rates near gigaflops. The modules are organized in a regular structure to provide a high degree of flexibility to cover different applications. Some results will be presented and the real-time performance will be discussed and analyzed. The architecture is prototyped on an FPGA board with a Virtex device interfaced to a digital imager.
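
    A software reference for the optical-flow computation that such an architecture pipelines could look like the Lucas-Kanade sketch below. This is our plain-NumPy illustration with an assumed window size; the paper's FPGA design implements its own optical-flow variant.

    ```python
    import numpy as np

    def lucas_kanade(frame1, frame2, y, x, win=7):
        """Estimate the (vy, vx) optical flow at pixel (y, x) from two grayscale frames."""
        half = win // 2
        # Spatial gradients of the first frame and the temporal difference.
        Iy, Ix = np.gradient(frame1.astype(float))
        It = frame2.astype(float) - frame1.astype(float)
        sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
        A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
        b = -It[sl].ravel()
        # Least-squares solution of the brightness-constancy equations in the window.
        v, *_ = np.linalg.lstsq(A, b, rcond=None)
        return v[1], v[0]   # (vy, vx)

    # Example: a smooth blob translated by one pixel to the right.
    yy, xx = np.mgrid[0:64, 0:64]
    frame1 = np.exp(-((yy - 32) ** 2 + (xx - 30) ** 2) / 50.0)
    frame2 = np.exp(-((yy - 32) ** 2 + (xx - 31) ** 2) / 50.0)
    print(lucas_kanade(frame1, frame2, 32, 30))   # approximately (0, 1)
    ```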

  13. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    Science.gov (United States)

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time.
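
    The adaptive visual-inertial blending described above can be illustrated with a simple complementary filter that weights the high-rate inertial prediction against the lower-rate visual pose, and lowers the visual weight when fast motion (likely blur) is detected. This is a 1D toy sketch under assumed rates and gains, not the authors' filter framework.

    ```python
    class AdaptiveComplementaryFilter:
        """Fuse a gyro rate (high frequency) with visual angle fixes (low frequency)."""

        def __init__(self, base_gain=0.05):
            self.angle = 0.0
            self.base_gain = base_gain   # nominal weight of the visual measurement

        def update(self, gyro_rate, dt, visual_angle=None):
            # Inertial prediction at every IMU sample keeps the output low-latency.
            self.angle += gyro_rate * dt
            if visual_angle is not None:
                # Trust vision less during fast motion (assumed blur heuristic).
                gain = self.base_gain / (1.0 + 5.0 * abs(gyro_rate))
                self.angle += gain * (visual_angle - self.angle)
            return self.angle

    # Simulate 1 s: IMU at 200 Hz with a bias, vision at 20 Hz without bias.
    f = AdaptiveComplementaryFilter()
    dt, true_rate, gyro_bias = 1.0 / 200, 0.5, 0.05
    true_angle = 0.0
    for i in range(200):
        true_angle += true_rate * dt
        visual = true_angle if i % 10 == 0 else None
        est = f.update(true_rate + gyro_bias, dt, visual)
    print("true:", round(true_angle, 3), "estimated:", round(est, 3))
    ```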

  14. Application and API for Real-time Visualization of Ground-motions and Tsunami

    Science.gov (United States)

    Aoi, S.; Kunugi, T.; Suzuki, W.; Kubo, T.; Nakamura, H.; Azuma, H.; Fujiwara, H.

    2015-12-01

    Due to the recent progress of seismograph and communication environment, real-time and continuous ground-motion observation becomes technically and economically feasible. K-NET and KiK-net, which are nationwide strong motion networks operated by NIED, cover all Japan by about 1750 stations in total. More than half of the stations transmit the ground-motion indexes and/or waveform data in every second. Traditionally, strong-motion data were recorded by event-triggering based instruments with non-continues telephone line which is connected only after an earthquake. Though the data from such networks mainly contribute to preparations for future earthquakes, huge amount of real-time data from dense network are expected to directly contribute to the mitigation of ongoing earthquake disasters through, e.g., automatic shutdown plants and helping decision-making for initial response. By generating the distribution map of these indexes and uploading them to the website, we implemented the real-time ground motion monitoring system, Kyoshin (strong-motion in Japanese) monitor. This web service (www.kyoshin.bosai.go.jp) started in 2008 and anyone can grasp the current ground motions of Japan. Though this service provides only ground-motion map in GIF format, to take full advantage of real-time strong-motion data to mitigate the ongoing disasters, digital data are important. We have developed a WebAPI to provide real-time data and related information such as ground motions (5 km-mesh) and arrival times estimated from EEW (earthquake early warning). All response data from this WebAPI are in JSON format and are easy to parse. We also developed Kyoshin monitor application for smartphone, 'Kmoni view' using the API. In this application, ground motions estimated from EEW are overlapped on the map with the observed one-second-interval indexes. The application can playback previous earthquakes for demonstration or disaster drill. In mobile environment, data traffic and battery are

  15. A Dynamic Time Warping Approach to Real-Time Activity Recognition for Food Preparation

    Science.gov (United States)

    Pham, Cuong; Plötz, Thomas; Olivier, Patrick

    We present a dynamic time warping based activity recognition system for the analysis of low-level food preparation activities. Accelerometers embedded into kitchen utensils provide continuous sensor data streams while people are using them for cooking. The recognition framework analyzes frames of contiguous sensor readings in real-time with low latency. It thereby adapts to the idiosyncrasies of utensil use by automatically maintaining a template database. We demonstrate the effectiveness of the classification approach in a number of real-world practical experiments on a publicly available dataset. The adaptive system shows superior performance compared to a static recognizer. Furthermore, we demonstrate the generalization capabilities of the system by gradually reducing the amount of training samples. The system achieves excellent classification results even if only a small number of training samples is available, which is especially relevant for real-world scenarios.
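
    Dynamic time warping, as used in this recognition framework, compares a frame of sensor readings to stored templates under elastic temporal alignment. A minimal DTW sketch follows; the algorithm is standard, while the template signals and window lengths are illustrative assumptions.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """DTW distance between two 1D sequences using absolute-difference cost."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def classify(frame, templates):
        """Nearest-template classification of a sensor frame by DTW distance."""
        return min(templates, key=lambda label: dtw_distance(frame, templates[label]))

    # Illustrative accelerometer-magnitude templates for two utensil activities.
    templates = {
        "stirring": np.sin(np.linspace(0, 4 * np.pi, 40)),
        "chopping": np.abs(np.sin(np.linspace(0, 8 * np.pi, 40))),
    }
    frame = np.sin(np.linspace(0, 4 * np.pi, 55)) + 0.05 * np.random.randn(55)
    print(classify(frame, templates))   # expected: "stirring"
    ```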

  16. Real-Time Accumulative Computation Motion Detectors

    Directory of Open Access Journals (Sweden)

    Saturnino Maldonado-Bascón

    2009-12-01

    Full Text Available The neurally inspired accumulative computation (AC) method and its application to motion detection have been introduced in past years. This paper revisits the fact that many researchers have explored the relationship between neural networks and finite state machines. Indeed, finite state machines constitute the best characterized computational model, whereas artificial neural networks have become a very successful tool for modeling and problem solving. The article shows how to reach real-time performance after describing the model as a finite state machine. This paper introduces two steps towards that direction: (a) a simplification of the general AC method is performed by formally transforming it into a finite state machine; (b) a hardware implementation in FPGA of such a designed AC module, as well as an 8-AC motion detector, provides promising performance results. We also offer two case studies of the use of AC motion detectors in surveillance applications, namely infrared-based people segmentation and color-based people tracking, respectively.
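
    The accumulative computation (AC) idea can be expressed as a per-pixel charge/discharge state update: a pixel's permanence value is recharged when frame differencing detects motion there and decays by a fixed step otherwise. The sketch below is a simplified software rendering of that state machine; the thresholds and charge constants are assumptions, not values from the paper.

    ```python
    import numpy as np

    CHARGE_MAX, CHARGE_MIN, DISCHARGE = 255, 0, 16
    MOTION_THRESHOLD = 20   # grey-level difference that counts as motion (assumed)

    def ac_update(permanence, prev_frame, curr_frame):
        """One accumulative-computation step over a pair of grayscale frames."""
        moving = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > MOTION_THRESHOLD
        decayed = np.maximum(permanence.astype(int) - DISCHARGE, CHARGE_MIN)
        # Charge pixels where motion is detected, discharge the rest.
        return np.where(moving, CHARGE_MAX, decayed).astype(np.uint8)

    # Example: a small bright square moves one pixel per frame across a static scene.
    h, w = 32, 32
    permanence = np.zeros((h, w), dtype=np.uint8)
    prev = np.zeros((h, w), dtype=np.uint8)
    for step in range(5):
        curr = np.zeros((h, w), dtype=np.uint8)
        curr[10:14, 5 + step:9 + step] = 200
        permanence = ac_update(permanence, prev, curr)
        prev = curr
    # Recently visited pixels hold high permanence values, leaving a motion trail.
    print(int(permanence.max()), int((permanence > 0).sum()))
    ```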

  17. Reliable 5-min real-time MR technique for left-ventricular-wall motion analysis

    International Nuclear Information System (INIS)

    Katoh, Marcus; Spuentrup, Elmar; Guenther, Rolf W.; Buecker, Arno; Kuehl, Harald P.; Lipke, Claudia S.A.

    2007-01-01

    The aim of this study was to investigate the value of a real-time magnetic resonance imaging (MRI) approach for the assessment of left-ventricular-wall motion in patients with insufficient transthoracic echocardiography in terms of accuracy and temporal expenditure. Twenty-five consecutive patients were examined on a 1.5-Tesla whole-body MR system (ACS-NT, Philips Medical Systems, Best, NL) using a real-time and ECG-gated (the current gold standard) steady-state free-precession (SSFP) sequence. Wall motion was analyzed by three observers by consensus interpretation. In addition, the preparation, scanning, and overall examination times were measured. The assessment of the wall motion demonstrated a close agreement between the two modalities resulting in a mean κ coefficient of 0.8. At the same time, each stage of the examination was significantly shortened using the real-time MR approach. Real-time imaging allows for accurate assessment of left-ventricular-wall motion with the added benefit of decreased examination time. Therefore, it may serve as a cost-efficient alternative in patients with insufficient echocardiography. (orig.)

  18. Management of three-dimensional intrafraction motion through real-time DMLC tracking

    International Nuclear Information System (INIS)

    Sawant, Amit; Venkat, Raghu; Srivastava, Vikram; Carlson, David; Povzner, Sergey; Cattell, Herb; Keall, Paul

    2008-01-01

    Tumor tracking using a dynamic multileaf collimator (DMLC) represents a promising approach for intrafraction motion management in thoracic and abdominal cancer radiotherapy. In this work, we develop, empirically demonstrate, and characterize a novel 3D tracking algorithm for real-time, conformal, intensity modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT)-based radiation delivery to targets moving in three dimensions. The algorithm obtains real-time information of target location from an independent position monitoring system and dynamically calculates MLC leaf positions to account for changes in target position. Initial studies were performed to evaluate the geometric accuracy of DMLC tracking of 3D target motion. In addition, dosimetric studies were performed on a clinical linac to evaluate the impact of real-time DMLC tracking for conformal, step-and-shoot (S-IMRT), dynamic (D-IMRT), and VMAT deliveries to a moving target. The efficiency of conformal and IMRT delivery in the presence of tracking was determined. Results show that submillimeter geometric accuracy in all three dimensions is achievable with DMLC tracking. Significant dosimetric improvements were observed in the presence of tracking for conformal and IMRT deliveries to moving targets. A gamma index evaluation with a 3%-3 mm criterion showed that deliveries without DMLC tracking exhibit between 1.7 (S-IMRT) and 4.8 (D-IMRT) times more dose points that fail the evaluation compared to corresponding deliveries with tracking. The efficiency of IMRT delivery, as measured in the lab, was observed to be significantly lower in case of tracking target motion perpendicular to MLC leaf travel compared to motion parallel to leaf travel. Nevertheless, these early results indicate that accurate, real-time DMLC tracking of 3D tumor motion is feasible and can potentially result in significant geometric and dosimetric advantages leading to more effective management of intrafraction motion

  19. The contribution of the body and motion to whole person recognition.

    Science.gov (United States)

    Simhi, Noa; Yovel, Galit

    2016-05-01

    While the importance of faces in person recognition has been the subject of many studies, there are relatively few studies examining recognition of the whole person in motion even though this most closely resembles daily experience. Most studies examining the whole body in motion use point light displays, which have many advantages but are impoverished and unnatural compared to real life. To determine which factors are used when recognizing the whole person in motion we conducted two experiments using naturalistic videos. In Experiment 1 we used a matching task in which the first stimulus in each pair could either be a video or multiple still images from a video of the full body. The second stimulus, on which person recognition was performed, could be an image of either the full body or face alone. We found that the body contributed to person recognition beyond the face, but only after exposure to motion. Since person recognition was performed on still images, the contribution of motion to person recognition was mediated by form-from-motion processes. To assess whether dynamic identity signatures may also contribute to person recognition, in Experiment 2 we presented people in motion and examined person recognition from videos compared to still images. Results show that dynamic identity signatures did not contribute to person recognition beyond form-from-motion processes. We conclude that the face, body and form-from-motion processes all appear to play a role in unfamiliar person recognition, suggesting the importance of considering the whole body and motion when examining person perception. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Retrospective Reconstruction of High Temporal Resolution Cine Images from Real-Time MRI using Iterative Motion Correction

    DEFF Research Database (Denmark)

    Hansen, Michael Schacht; Sørensen, Thomas Sangild; Arai, Andrew

    2012-01-01

    acquisitions in 10 (N = 10) subjects. Acceptable image quality was obtained in all motion-corrected reconstructions, and the resulting mean image quality score was (a) Cartesian real-time: 2.48, (b) Golden Angle real-time: 1.90 (1.00–2.50), (c) Cartesian motion correction: 3.92, (d) Radial motion correction: 4...... and motion correction based on nonrigid registration and can be applied to arbitrary k-space trajectories. The method is demonstrated with real-time Cartesian imaging and Golden Angle radial acquisitions, and the motion-corrected acquisitions are compared with raw real-time images and breath-hold cine...

  1. Real-Time Motion Management of Prostate Cancer Radiotherapy

    DEFF Research Database (Denmark)

    Pommer, Tobias

    of this thesis is to manage prostate motion in real-time by aligning the radiation beam to the prostate using the novel dynamic multileaf collimator (DMLC) tracking method. Specifically, the delivered dose with tracking was compared to the planned dose, and the impact of treatment plan complexity and limitations...

  2. Memory Efficient VLSI Implementation of Real-Time Motion Detection System Using FPGA Platform

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-06-01

    Full Text Available Motion detection is the heart of a potentially complex automated video surveillance system, intended to be used as a standalone system. Therefore, in addition to being accurate and robust, a successful motion detection technique must also be economical in the use of computational resources on the selected FPGA development platform. This is because many other complex algorithms of an automated video surveillance system also run on the same platform. Keeping this key requirement as the main focus, a memory efficient VLSI architecture for real-time motion detection and its implementation on an FPGA platform is presented in this paper. This is accomplished by proposing a new memory efficient motion detection scheme and designing its VLSI architecture. The complete real-time motion detection system using the proposed memory efficient architecture, along with proper input/output interfaces, is implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA development platform and is capable of operating at a 154.55 MHz clock frequency. The memory requirement of the proposed architecture is reduced by 41% compared to the standard clustering based motion detection architecture. The new memory efficient system robustly and automatically detects motion in real-world scenarios (both for static backgrounds and pseudo-stationary backgrounds) in real-time for standard PAL (720 × 576) size color video.
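
    The clustering-based motion detection scheme underlying this architecture can be prototyped in software before committing it to hardware. The sketch below maintains, per pixel, a small set of grey-level cluster centroids with weights: an incoming pixel that matches a frequently seen centroid is background, otherwise foreground. The cluster count, match radius, learning rate and weight threshold are assumptions, not the paper's fixed-point parameters.

    ```python
    import numpy as np

    K, RADIUS, LEARN = 3, 15.0, 0.05   # clusters per pixel, match radius, update rate (assumed)

    class ClusterBackgroundModel:
        def __init__(self, height, width):
            self.centroids = np.zeros((height, width, K))   # grey-level cluster centres
            self.weights = np.zeros((height, width, K))     # how often each cluster was seen

        def update(self, frame):
            """Return a boolean foreground mask and update the model in place."""
            f = frame.astype(float)
            dist = np.abs(self.centroids - f[..., None])            # distance to each cluster
            nearest = np.argmin(dist, axis=2)                       # index of the closest cluster
            rows, cols = np.indices(nearest.shape)
            matched = dist[rows, cols, nearest] < RADIUS

            # Matched pixels: move the winning centroid toward the pixel, raise its weight.
            c = self.centroids[rows, cols, nearest]
            self.centroids[rows, cols, nearest] = np.where(matched, c + LEARN * (f - c), c)
            w = self.weights[rows, cols, nearest]
            self.weights[rows, cols, nearest] = np.where(matched, w + 1.0, w)

            # Unmatched pixels: the weakest cluster is replaced by the new pixel value.
            weakest = np.argmin(self.weights, axis=2)
            keep_c = self.centroids[rows, cols, weakest]
            keep_w = self.weights[rows, cols, weakest]
            self.centroids[rows, cols, weakest] = np.where(matched, keep_c, f)
            self.weights[rows, cols, weakest] = np.where(matched, keep_w, 1.0)

            # Foreground = no match against a frequently observed cluster.
            return ~(matched & (self.weights[rows, cols, nearest] > 5))

    # Learn a static scene for 20 frames, then detect a bright moving block.
    model = ClusterBackgroundModel(48, 48)
    background = np.full((48, 48), 100, dtype=np.uint8)
    for _ in range(20):
        model.update(background)
    frame = background.copy()
    frame[10:20, 10:20] = 220           # moving object
    mask = model.update(frame)
    print(int(mask.sum()))              # 100 foreground pixels (the moving block)
    ```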

  3. Real-time traffic sign recognition based on a general purpose GPU and deep-learning.

    Science.gov (United States)

    Lim, Kwangyong; Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran

    2017-01-01

    We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations under low illumination or a wide variance of lighting conditions. To overcome these drawbacks and improve processing speeds, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detection and recognition using a hierarchical model. This method produces stable results in low illumination environments. Both detection and hierarchical recognition are performed in real-time, and the proposed method achieves a 0.97 F1-score on our collective dataset, which uses the Vienna convention traffic rules (Germany and South Korea).

  4. Real time biometric surveillance with gait recognition

    Science.gov (United States)

    Mohapatra, Subasish; Swain, Anisha; Das, Manaswini; Mohanty, Subhadarshini

    2018-04-01

    Biometric surveillance has become indispensable for every system in recent years. Biometric authentication, identification, and screening are widely used in various domains for preventing unauthorized access. A large amount of data needs to be updated, segregated and safeguarded from malicious software and misuse. Biometrics are the intrinsic characteristics of each individual. Currently, fingerprints, iris, passwords, unique keys, and cards are commonly used for authentication purposes. These methods have various issues related to security and confidentiality, and such systems are not yet automated enough to provide safety and security. The gait recognition system is an alternative for overcoming the drawbacks of recent biometric authentication systems. Gait recognition is newer, as it has not yet been implemented in real-world scenarios. It is an unintrusive approach that requires no knowledge or cooperation of the subject. Gait is a unique behavioral characteristic of every human being which is hard to imitate. The walking style of an individual, together with the orientation of the joints in the skeletal structure and the inclinations between them, imparts this unique characteristic. A person can alter their external appearance but not their skeletal structure. These are real-time, automatic systems that can even process low-resolution images and video frames. In this paper, we propose a gait recognition system and compare its performance with conventional biometric identification systems.

  5. 4D Unconstrained Real-time Face Recognition Using a Commodity Depth Camera

    NARCIS (Netherlands)

    Schimbinschi, Florin; Wiering, Marco; Mohan, R.E.; Sheba, J.K.

    2012-01-01

    Robust unconstrained real-time face recognition still remains a challenge today. The recent addition to the market of lightweight commodity depth sensors brings new possibilities for human-machine interaction and therefore face recognition. This article accompanies the reader through a succinct

  6. Action Recognition by Joint Spatial-Temporal Motion Feature

    Directory of Open Access Journals (Sweden)

    Weihua Zhang

    2013-01-01

    Full Text Available This paper introduces a method for human action recognition based on optical flow motion feature extraction. Automatic spatial and temporal alignments are combined in order to encourage temporal consistency of each action by an enhanced dynamic time warping (DTW) algorithm. At the same time, a fast method based on a coarse-to-fine DTW constraint is introduced to improve computational performance without reducing accuracy. The main contributions of this study include (1) a joint spatial-temporal multiresolution optical flow computation method which encodes more informative motion information than recently proposed methods, (2) an enhanced DTW method to improve the temporal consistency of motion in action recognition, and (3) a coarse-to-fine DTW constraint on motion feature pyramids to speed up recognition. Using this method, high recognition accuracy is achieved on different action databases such as the Weizmann database and the KTH database.
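    As a point of reference for readers unfamiliar with DTW, the sketch below implements the classic dynamic time warping recursion on per-frame feature vectors and uses it as a nearest-template classifier. It is a plain, unconstrained DTW in Python, not the paper's enhanced or coarse-to-fine variant; the feature dimensions and template data are synthetic placeholders.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping between two feature sequences.

    seq_a, seq_b: arrays of shape (T, D) holding per-frame motion features
    (e.g., pooled optical-flow descriptors). Returns the accumulated
    alignment cost. The paper's enhanced and coarse-to-fine DTW variants
    add constraints on top of this basic recursion.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Usage: classify a query action by its nearest template under DTW distance
# (random vectors stand in for real optical-flow features).
rng = np.random.default_rng(0)
templates = {"wave": rng.normal(size=(40, 8)), "run": rng.normal(size=(55, 8))}
query = rng.normal(size=(47, 8))
label = min(templates, key=lambda k: dtw_distance(query, templates[k]))
```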

  7. Real-time DSP implementation for MRF-based video motion detection.

    Science.gov (United States)

    Dumontier, C; Luthon, F; Charras, J P

    1999-01-01

    This paper describes the real time implementation of a simple and robust motion detection algorithm based on Markov random field (MRF) modeling. MRF-based algorithms often require a significant amount of computation. The intrinsic parallel property of MRF modeling has led most implementations toward parallel machines and neural networks, but none of these approaches offers an efficient solution for real-world (i.e., industrial) applications. Here, an alternative implementation for the problem at hand is presented, yielding a complete, efficient and autonomous real-time system for motion detection. The system is based on a hybrid architecture that associates pipeline modules with one asynchronous module to perform the whole process, from video acquisition to visualization of moving-object masks. A board prototype is presented and a processing rate of 15 images/s is achieved, showing the validity of the approach.

  8. A Hierarchical Approach to Real-time Activity Recognition in Body Sensor Networks

    DEFF Research Database (Denmark)

    Wang, Liang; Gu, Tao; Tao, Xianping

    2012-01-01

    Real-time activity recognition in body sensor networks is an important and challenging task. In this paper, we propose a real-time, hierarchical model to recognize both simple gestures and complex activities using a wireless body sensor network. In this model, we first use a fast and lightweight al...

  9. Human Activity Recognition in Real-Time Environments using Skeleton Joints

    Directory of Open Access Journals (Sweden)

    Ajay Kumar

    2016-06-01

    Full Text Available In this research work, we propose an effective approach for human activity recognition in real-time environments. We recognize several distinct dynamic human activities using Kinect. 3D skeleton data is processed from real-time video gestures into sequences of frames, and skeleton joint information (energy joints, orientation, and rotation of joint angles) is extracted from a selected set of frames. Because only joint angle, orientation, and rotation information from the Kinect is used, little computation is required. After extracting the set of frames, we apply several classification techniques, namely Principal Component Analysis (PCA) with several distance-based classifiers and an Artificial Neural Network (ANN) with some variants, to classify all of the different gesture models. We find that only a small fraction of frames (10-15% of the entire set of gesture frames) is needed to train the system efficiently. The classification methods achieve overall accuracies of 94%, 96% and 98%, respectively. We observe that the proposed system outperforms existing systems, making it well suited for real-time applications such as player action/gesture recognition in video games.

  10. Real-time high-speed motion blur compensation system based on back-and-forth motion control of galvanometer mirror.

    Science.gov (United States)

    Hayakawa, Tomohiko; Watanabe, Takanoshin; Ishikawa, Masatoshi

    2015-12-14

    We developed a novel real-time motion blur compensation system for the blur caused by high-speed one-dimensional motion between a camera and a target. The system consists of a galvanometer mirror and a high-speed color camera, without the need for any additional sensors. We controlled the galvanometer mirror with continuous back-and-forth oscillating motion synchronized to a high-speed camera. The angular speed of the mirror is given in real time within 10 ms based on the concept of background tracking and rapid raw Bayer block matching. Experiments demonstrated that our system captures motion-invariant images of objects moving at speeds up to 30 km/h.
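    The core computation described, estimating the frame-to-frame image shift by block matching and converting it into a mirror angular speed, can be sketched as follows. This is an illustrative Python outline under assumed optics constants (pixel pitch, focal length, frame interval) and a simple SAD block matcher, not the authors' Bayer-domain implementation.

```python
import numpy as np

# Illustrative sketch: estimate the horizontal image shift between two consecutive
# frames by exhaustive block matching, then convert it to the mirror angular speed
# needed to cancel the apparent target motion. Optics constants are placeholders.
PIXEL_PITCH_MM = 0.0055     # assumed sensor pixel pitch
FOCAL_LENGTH_MM = 35.0      # assumed lens focal length
FRAME_INTERVAL_S = 1 / 500  # assumed high-speed frame interval

def estimate_shift(block, search_strip, max_shift=32):
    """Return the horizontal displacement (pixels) minimising the sum of
    absolute differences. `search_strip` must be the same row band as `block`,
    widened by `max_shift` pixels on both sides."""
    h, w = block.shape
    best_shift, best_sad = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        x0 = max_shift + s
        candidate = search_strip[:, x0:x0 + w]
        sad = np.abs(candidate.astype(np.int32) - block.astype(np.int32)).sum()
        if sad < best_sad:
            best_sad, best_shift = sad, s
    return best_shift

def mirror_angular_speed(shift_px):
    """Convert the pixel shift per frame into a mirror angular speed (rad/s).
    The factor 0.5 accounts for the optical lever arm of a mirror (assumption)."""
    angle = np.arctan(shift_px * PIXEL_PITCH_MM / FOCAL_LENGTH_MM)
    return 0.5 * angle / FRAME_INTERVAL_S
```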

  11. Three-dimensional liver motion tracking using real-time two-dimensional MRI.

    Science.gov (United States)

    Brix, Lau; Ringgaard, Steffen; Sørensen, Thomas Sangild; Poulsen, Per Rugaard

    2014-04-01

    Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. Axial, sagittal, and coronal 2D MRI series
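    Step (3) of the method, choosing the template and in-plane position with the highest normalized cross-correlation, is conceptually simple and can be sketched as below. The sketch assumes scikit-image's match_template for the NCC map and a template library keyed by known through-plane offsets; it is an outline of the matching step only, not the full reslicing pipeline.

```python
import numpy as np
from skimage.feature import match_template  # assumed available

def track_structure(frame2d, template_library):
    """Pick the template (and its in-plane position) with the highest
    normalized cross-correlation against one real-time 2D MRI frame.

    template_library: dict mapping a known through-plane offset (mm) to a
    2D template resliced from the 3D scan (an assumed data layout).
    Returns (offset_mm, row, col, score)."""
    best = (None, 0, 0, -np.inf)
    for offset_mm, tmpl in template_library.items():
        ncc = match_template(frame2d, tmpl)            # NCC map over positions
        row, col = np.unravel_index(np.argmax(ncc), ncc.shape)
        score = ncc[row, col]
        if score > best[3]:
            best = (offset_mm, row, col, score)
    return best
```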

  12. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    Energy Technology Data Exchange (ETDEWEB)

    Brix, Lau, E-mail: lau.brix@stab.rm.dk [Department of Procurement and Clinical Engineering, Region Midt, Olof Palmes Allé 15, 8200 Aarhus N, Denmark and MR Research Centre, Aarhus University Hospital, Skejby, Brendstrupgaardsvej 100, 8200 Aarhus N (Denmark); Ringgaard, Steffen [MR Research Centre, Aarhus University Hospital, Skejby, Brendstrupgaardsvej 100, 8200 Aarhus N (Denmark); Sørensen, Thomas Sangild [Department of Computer Science, Aarhus University, Aabogade 34, 8200 Aarhus N, Denmark and Department of Clinical Medicine, Aarhus University, Brendstrupgaardsvej 100, 8200 Aarhus N (Denmark); Poulsen, Per Rugaard [Department of Clinical Medicine, Aarhus University, Brendstrupgaardsvej 100, 8200 Aarhus N, Denmark and Department of Oncology, Aarhus University Hospital, Nørrebrogade 44, 8000 Aarhus C (Denmark)

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. Results: Axial, sagittal

  13. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    International Nuclear Information System (INIS)

    Brix, Lau; Ringgaard, Steffen; Sørensen, Thomas Sangild; Poulsen, Per Rugaard

    2014-01-01

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. Results: Axial, sagittal

  14. Real-time recursive motion segmentation of video data on a programmable device

    NARCIS (Netherlands)

    Wittebrood, R.B; Haan, de G.

    2001-01-01

    We previously reported on a recursive algorithm enabling real-time object-based motion estimation (OME) of standard definition video on a digital signal processor (DSP). The algorithm approximates the motion of the objects in the image with parametric motion models and creates a segmentation mask by

  15. Infrared wireless data transfer for real-time motion control

    NARCIS (Netherlands)

    Gajdusek, M.; Overboom, T.T.; Damen, A.A.H.; Bosch, van den P.P.J.

    2009-01-01

    In this paper several wireless solutions are compared for their suitability for real-time control of a fast motion system. From the comparison, the Very Fast Infrared (VFIR) communication link has been found to be an attractive solution for the presented wirelessly controlled manipulator. Because standard

  16. Real-time Human Activity Recognition using a Body Sensor Network

    DEFF Research Database (Denmark)

    Wang, Liang; Gu, Tao; Chen, Hanhua

    2010-01-01

    Real-time activity recognition using body sensor networks is an important and challenging task and it has many potential applications. In this paper, we propose a realtime, hierarchical model to recognize both simple gestures and complex activities using a wireless body sensor network. In this mo...

  17. Real-time Multiresolution Crosswalk Detection with Walk Light Recognition for the Blind

    Directory of Open Access Journals (Sweden)

    ROMIC, K.

    2018-02-01

    Full Text Available Real-time image processing and object detection techniques have great potential to be applied in digital assistive tools for blind and visually impaired persons. In this paper, an algorithm for crosswalk detection and walk light recognition is proposed with the main aim of helping a blind person when crossing the road. The proposed algorithm is optimized to work in real-time on portable devices using standard cameras. Images captured by the camera are processed while the person is moving, and a decision about the detected crosswalk is provided as output, along with information about the walk light if one is present. The crosswalk detection method is based on multiresolution morphological image processing, while the walk light recognition is performed by the proposed 6-stage algorithm. The main contributions of this paper are accurate crosswalk detection with a small processing time due to multiresolution processing, and the recognition of walk lights covering only a small number of pixels in the image. The experiment is conducted using images from video sequences captured in realistic situations at crossings. The results show 98.3% correct crosswalk detection and 89.5% correct walk light recognition, with an average processing speed of about 16 frames per second.

  18. Real-time image restoration for iris recognition systems.

    Science.gov (United States)

    Kang, Byung Jun; Park, Kang Ryoung

    2007-12-01

    In the field of biometrics, it has been reported that iris recognition techniques have shown high levels of accuracy because unique patterns of the human iris, which has very many degrees of freedom, are used. However, because conventional iris cameras have small depth-of-field (DOF) areas, input iris images can easily be blurred, which can lead to lower recognition performance, since iris patterns are transformed by the blurring caused by optical defocusing. To overcome these problems, an autofocusing camera can be used. However, this inevitably increases the cost, size, and complexity of the system. Therefore, we propose a new real-time iris image-restoration method, which can increase the camera's DOF without requiring any additional hardware. This paper presents five novelties as compared to previous works: 1) by excluding eyelash and eyelid regions, it is possible to obtain more accurate focus scores from input iris images; 2) the parameter of the point spread function (PSF) can be estimated in terms of camera optics and measured focus scores; therefore, parameter estimation is more accurate than it has been in previous research; 3) because the PSF parameter can be obtained by using a predetermined equation, iris image restoration can be done in real-time; 4) by using a constrained least square (CLS) restoration filter that considers noise, performance can be greatly enhanced; and 5) restoration accuracy can also be enhanced by estimating the weight value of the noise-regularization term of the CLS filter according to the amount of image blurring. Experimental results showed that iris recognition errors when using the proposed restoration method were greatly reduced as compared to those results achieved without restoration or those achieved using previous iris-restoration methods.
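    For readers who want the flavour of novelty (4), a constrained least squares restoration can be written compactly in the frequency domain. The sketch below uses the textbook CLS filter with a Laplacian regularizer; the PSF and the fixed regularization weight are assumptions, whereas the paper estimates both from the measured focus score and the amount of blur.

```python
import numpy as np

def cls_restore(blurred, psf, gamma=0.01):
    """Constrained least squares (CLS) deconvolution in the frequency domain.

    blurred: defocused iris image (2D array); psf: estimated point spread
    function (same shape, centered); gamma: noise-regularization weight
    (fixed here as an assumption; the paper adapts it to the blur amount)."""
    # Laplacian smoothness operator used as the CLS regularizer; only its
    # magnitude spectrum matters, so its placement in the array is arbitrary.
    lap = np.zeros_like(blurred, dtype=np.float64)
    lap[:3, :3] = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]])
    H = np.fft.fft2(np.fft.ifftshift(psf))
    P = np.fft.fft2(lap)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))
```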

  19. Real-time motion analytics during brain MRI improve data quality and reduce costs.

    Science.gov (United States)

    Dosenbach, Nico U F; Koller, Jonathan M; Earl, Eric A; Miranda-Dominguez, Oscar; Klein, Rachel L; Van, Andrew N; Snyder, Abraham Z; Nagel, Bonnie J; Nigg, Joel T; Nguyen, Annie L; Wesevich, Victoria; Greene, Deanna J; Fair, Damien A

    2017-11-01

    Head motion systematically distorts clinical and research MRI data. Motion artifacts have biased findings from many structural and functional brain MRI studies. An effective way to remove motion artifacts is to exclude MRI data frames affected by head motion. However, such post-hoc frame censoring can lead to data loss rates of 50% or more in our pediatric patient cohorts. Hence, many scanner operators collect additional 'buffer data', an expensive practice that, by itself, does not guarantee sufficient high-quality MRI data for a given participant. Therefore, we developed an easy-to-setup, easy-to-use Framewise Integrated Real-time MRI Monitoring (FIRMM) software suite that provides scanner operators with head motion analytics in real-time, allowing them to scan each subject until the desired amount of low-movement data has been collected. Our analyses show that using FIRMM to identify the ideal scan time for each person can reduce total brain MRI scan times and associated costs by 50% or more. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
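    FIRMM's real-time analytics are built around framewise displacement (FD) of the head. The sketch below computes FD from six rigid-body realignment parameters using the common convention of summing absolute frame-to-frame changes, with rotations converted to millimetres on a 50 mm sphere; the exact thresholds and implementation details of FIRMM may differ.

```python
import numpy as np

def framewise_displacement(motion_params, head_radius_mm=50.0):
    """Framewise displacement from rigid-body realignment parameters.

    motion_params: array of shape (n_frames, 6) holding three translations (mm)
    and three rotations (radians) per frame. Rotations are converted to arc
    length on an assumed 50 mm sphere, a common convention."""
    deltas = np.abs(np.diff(motion_params, axis=0))
    deltas[:, 3:] *= head_radius_mm          # radians -> mm of arc
    fd = deltas.sum(axis=1)
    return np.concatenate([[0.0], fd])       # FD of the first frame defined as 0

# Example: count low-movement frames against an assumed 0.2 mm censoring threshold.
params = np.zeros((100, 6))
usable = int((framewise_displacement(params) < 0.2).sum())
```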

  20. Real-time prediction of respiratory motion based on local regression methods

    International Nuclear Information System (INIS)

    Ruan, D; Fessler, J A; Balter, J M

    2007-01-01

    Recent developments in modulation techniques enable conformal delivery of radiation doses to small, localized target volumes. One of the challenges in using these techniques is real-time tracking and predicting target motion, which is necessary to accommodate system latencies. For image-guided-radiotherapy systems, it is also desirable to minimize sampling rates to reduce imaging dose. This study focuses on predicting respiratory motion, which can significantly affect lung tumours. Predicting respiratory motion in real-time is challenging, due to the complexity of breathing patterns and the many sources of variability. We propose a prediction method based on local regression. There are three major ingredients of this approach: (1) forming an augmented state space to capture system dynamics, (2) local regression in the augmented space to train the predictor from previous observation data using semi-periodicity of respiratory motion, (3) local weighting adjustment to incorporate fading temporal correlations. To evaluate prediction accuracy, we computed the root mean square error between predicted tumor motion and its observed location for ten patients. For comparison, we also investigated commonly used predictive methods, namely linear prediction, neural networks and Kalman filtering to the same data. The proposed method reduced the prediction error for all imaging rates and latency lengths, particularly for long prediction lengths
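    A minimal version of the three ingredients, delay embedding, neighbour selection in the augmented space, and locally weighted regression, is sketched below. The embedding dimension, lag, neighbour count and prediction horizon are illustrative assumptions, and the synthetic sinusoidal trace only demonstrates the calling pattern.

```python
import numpy as np

def predict_local_regression(history, embed_dim=3, lag=5, k=20, horizon=10):
    """One-step local-regression prediction of respiratory displacement.

    history: 1D array of past target positions sampled uniformly in time.
    A delay-embedded state (embed_dim samples spaced by `lag`) is formed for
    the current time, its k nearest past states are found, and a weighted
    least-squares fit maps those states to their values `horizon` samples
    later. All hyperparameters here are illustrative assumptions."""
    idx_last = len(history) - 1
    embed = lambda t: history[[t - i * lag for i in range(embed_dim)]]
    current = embed(idx_last)

    # Training pairs: (state at time t, value at time t + horizon).
    starts = range((embed_dim - 1) * lag, idx_last - horizon)
    states = np.array([embed(t) for t in starts])
    targets = np.array([history[t + horizon] for t in starts])

    # k nearest states, weighted by inverse distance (a simple proxy for
    # the fading temporal correlations mentioned in the abstract).
    dists = np.linalg.norm(states - current, axis=1)
    nn = np.argsort(dists)[:k]
    w = 1.0 / (dists[nn] + 1e-6)
    X = np.hstack([states[nn], np.ones((k, 1))])          # affine local model
    W = np.diag(w)
    beta = np.linalg.lstsq(W @ X, W @ targets[nn], rcond=None)[0]
    return np.append(current, 1.0) @ beta

# Usage: predict ~0.4 s ahead for a synthetic 25 Hz breathing trace.
t = np.arange(0, 60, 0.04)
trace = 10 * np.sin(2 * np.pi * t / 4.0)
print(predict_local_regression(trace))
```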

  1. Adaptive pattern recognition in real-time video-based soccer analysis

    DEFF Research Database (Denmark)

    Schlipsing, Marc; Salmen, Jan; Tschentscher, Marc

    2017-01-01

    Computer-aided sports analysis is demanded by coaches and the media. Image processing and machine learning techniques that allow for "live" recognition and tracking of players exist. But these methods are far from collecting and analyzing event data fully autonomously. To generate accurate results, human interaction is required at different stages including system setup, calibration, supervision of classifier training, and resolution of tracking conflicts. Furthermore, the real-time constraints are challenging: in contrast to other object recognition and tracking applications, we cannot treat data... ...are taken into account. Our contribution is twofold: (1) the deliberate use of machine learning and pattern recognition techniques allows us to achieve high classification accuracy in varying environments. We systematically evaluate combinations of image features and learning machines in the given online...

  2. Real-time identification of vehicle motion-modes using neural networks

    Science.gov (United States)

    Wang, Lifu; Zhang, Nong; Du, Haiping

    2015-01-01

    A four-wheel ground vehicle has three body-dominated motion-modes, that is, bounce, roll, and pitch motion-modes. Real-time identification of these motion-modes can make vehicle suspensions, in particular, active suspensions, target on the dominant motion-mode and apply appropriate control strategies to improve its performance with less power consumption. Recently, a motion-mode energy method (MEM) was developed to identify the vehicle body motion-modes. However, this method requires the measurement of full vehicle states and road inputs, which are not always available in practice. This paper proposes an alternative approach to identify vehicle primary motion-modes with acceptable accuracy by employing neural networks (NNs). The effectiveness of the trained NNs is verified on a 10-DOF full-car model under various types of excitation inputs. The results confirm that the proposed method is effective in determining vehicle primary motion-modes with comparable accuracy to the MEM method. Experimental data is further used to validate the proposed method.

  3. Speech Silicon: An FPGA Architecture for Real-Time Hidden Markov-Model-Based Speech Recognition

    Directory of Open Access Journals (Sweden)

    Schuster Jeffrey

    2006-01-01

    Full Text Available This paper examines the design of an FPGA-based system-on-a-chip capable of performing continuous speech recognition on medium sized vocabularies in real time. Through the creation of three dedicated pipelines, one for each of the major operations in the system, we were able to maximize the throughput of the system while simultaneously minimizing the number of pipeline stalls in the system. Further, by implementing a token-passing scheme between the later stages of the system, the complexity of the control was greatly reduced and the amount of active data present in the system at any time was minimized. Additionally, through in-depth analysis of the SPHINX 3 large vocabulary continuous speech recognition engine, we were able to design models that could be efficiently benchmarked against a known software platform. These results, combined with the ability to reprogram the system for different recognition tasks, serve to create a system capable of performing real-time speech recognition in a vast array of environments.

  4. Speech Silicon: An FPGA Architecture for Real-Time Hidden Markov-Model-Based Speech Recognition

    Directory of Open Access Journals (Sweden)

    Alex K. Jones

    2006-11-01

    Full Text Available This paper examines the design of an FPGA-based system-on-a-chip capable of performing continuous speech recognition on medium sized vocabularies in real time. Through the creation of three dedicated pipelines, one for each of the major operations in the system, we were able to maximize the throughput of the system while simultaneously minimizing the number of pipeline stalls in the system. Further, by implementing a token-passing scheme between the later stages of the system, the complexity of the control was greatly reduced and the amount of active data present in the system at any time was minimized. Additionally, through in-depth analysis of the SPHINX 3 large vocabulary continuous speech recognition engine, we were able to design models that could be efficiently benchmarked against a known software platform. These results, combined with the ability to reprogram the system for different recognition tasks, serve to create a system capable of performing real-time speech recognition in a vast array of environments.

  5. Energy-Efficient Real-Time Human Activity Recognition on Smart Mobile Devices

    Directory of Open Access Journals (Sweden)

    Jin Lee

    2016-01-01

    Full Text Available Nowadays, human activity recognition (HAR) plays an important role in wellness-care and context-aware systems. Human activities can be recognized in real-time by using sensory data collected from various sensors built into smart mobile devices. Recent studies have focused on HAR that is solely based on triaxial accelerometers, which is the most energy-efficient approach. However, such HAR approaches are still energy-inefficient because the accelerometer is required to run without stopping so that the physical activity of a user can be recognized in real-time. In this paper, we propose a novel HAR process that controls the activity recognition duration for energy-efficient HAR. We investigated the impact of varying the acceleration-sampling frequency and window size for HAR by using the variable activity recognition duration (VARD) strategy. We implemented our approach on an Android platform and evaluated its performance in terms of energy efficiency and accuracy. The experimental results showed that our approach reduced energy consumption by a minimum of about 44.23% and a maximum of about 78.85% compared to conventional HAR without sacrificing accuracy.

  6. Three axis electronic flight motion simulator real time control system design and implementation.

    Science.gov (United States)

    Gao, Zhiyuan; Miao, Zhonghua; Wang, Xuyong; Wang, Xiaohua

    2014-12-01

    A three axis electronic flight motion simulator is reported in this paper including the modelling, the controller design as well as the hardware implementation. This flight motion simulator could be used for inertial navigation test and high precision inertial navigation system with good dynamic and static performances. A real time control system is designed, several control system implementation problems were solved including time unification with parallel port interrupt, high speed finding-zero method of rotary inductosyn, zero-crossing management with continuous rotary, etc. Tests were carried out to show the effectiveness of the proposed real time control system.

  7. Three axis electronic flight motion simulator real time control system design and implementation

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Zhiyuan; Miao, Zhonghua, E-mail: zhonghua-miao@163.com; Wang, Xiaohua [School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, 200072 (China); Wang, Xuyong [School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2014-12-15

    A three axis electronic flight motion simulator is reported in this paper including the modelling, the controller design as well as the hardware implementation. This flight motion simulator could be used for inertial navigation test and high precision inertial navigation system with good dynamic and static performances. A real time control system is designed, several control system implementation problems were solved including time unification with parallel port interrupt, high speed finding-zero method of rotary inductosyn, zero-crossing management with continuous rotary, etc. Tests were carried out to show the effectiveness of the proposed real time control system.

  8. A SIMD-VLIW Smart Camera Architecture for Real-Time Face Recognition

    NARCIS (Netherlands)

    Kleihorst, R.P.; Broers, H.A.T.; Abbo, A.A.; Ebrahimmalek, H.; Fatemi, H.; Corporaal, H.; Jonker, P.P.

    2003-01-01

    There is a rapidly growing demand for using smart cameras for various applications in surveillance and identification. Although having a small form-factor, most of these applications demand huge processing performance for real-time processing. Face recognition is one of those applications. In this

  9. Precise and real-time measurement of 3D tumor motion in lung due to breathing and heartbeat, measured during radiotherapy

    International Nuclear Information System (INIS)

    Seppenwoolde, Yvette; Shirato, Hiroki; Kitamura, Kei; Shimizu, Shinichi; Herk, Marcel van; Lebesque, Joos V.; Miyasaka, Kazuo

    2002-01-01

    Purpose: In this work, three-dimensional (3D) motion of lung tumors during radiotherapy in real time was investigated. Understanding the behavior of tumor motion in lung tissue to model tumor movement is necessary for accurate (gated or breath-hold) radiotherapy or CT scanning. Methods: Twenty patients were included in this study. Before treatment, a 2-mm gold marker was implanted in or near the tumor. A real-time tumor tracking system using two fluoroscopy image processor units was installed in the treatment room. The 3D position of the implanted gold marker was determined by using real-time pattern recognition and a calibrated projection geometry. The linear accelerator was triggered to irradiate the tumor only when the gold marker was located within a certain volume. The system provided the coordinates of the gold marker during beam-on and beam-off time in all directions simultaneously, at a sample rate of 30 images per second. The recorded tumor motion was analyzed in terms of the amplitude and curvature of the tumor motion in three directions, the differences in breathing level during treatment, hysteresis (the difference between the inhalation and exhalation trajectory of the tumor), and the amplitude of tumor motion induced by cardiac motion. Results: The average amplitude of the tumor motion was greatest (12±2 mm [SD]) in the cranial-caudal direction for tumors situated in the lower lobes and not attached to rigid structures such as the chest wall or vertebrae. For the lateral and anterior-posterior directions, tumor motion was small both for upper- and lower-lobe tumors (2±1 mm). The time-averaged tumor position was closer to the exhale position, because the tumor spent more time in the exhalation than in the inhalation phase. The tumor motion was modeled as a sinusoidal movement with varying asymmetry. The tumor position in the exhale phase was more stable than the tumor position in the inhale phase during individual treatment fields. However, in many

  10. Robust Real-Time Tracking for Visual Surveillance

    Directory of Open Access Journals (Sweden)

    Aguilera Josep

    2007-01-01

    Full Text Available This paper describes a real-time multi-camera surveillance system that can be applied to a range of application domains. This integrated system is designed to observe crowded scenes and has mechanisms to improve tracking of objects that are in close proximity. The four component modules described in this paper are (i) motion detection using a layered background model, (ii) object tracking based on local appearance, (iii) hierarchical object recognition, and (iv) fused multisensor object tracking using multiple features and geometric constraints. This integrated approach to complex scene tracking is validated against a number of representative real-world scenarios to show that robust, real-time analysis can be performed.

  11. Real Time Recognition Of Speakers From Internet Audio Stream

    Directory of Open Access Journals (Sweden)

    Weychan Radoslaw

    2015-09-01

    Full Text Available In this paper we present an automatic speaker recognition technique that uses lossy (encoded) speech signal streams from Internet radio. We show the influence of the audio encoder (e.g., its bitrate) on the speaker model quality. The model of each speaker was calculated with the use of the Gaussian mixture model (GMM) approach. Both the speaker recognition and the further analysis were realized with the use of short utterances to facilitate real-time processing. The neighborhoods of the speaker models were analyzed with the use of the ISOMAP algorithm. The experiments were based on four 1-hour public debates with 7–8 speakers (including the moderator), acquired from Polish Internet radio services. The presented software was developed in the MATLAB environment.
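    The GMM modelling and short-utterance scoring described can be sketched with scikit-learn as below. MFCC extraction is assumed to happen upstream (random vectors stand in for it here), and the number of mixture components and the covariance type are assumptions rather than values reported in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_models(features_by_speaker, n_components=16):
    """Fit one GMM per speaker on MFCC-like feature frames.

    features_by_speaker: dict name -> array (n_frames, n_mfcc). The component
    count and diagonal covariances are assumptions, not values from the paper."""
    models = {}
    for name, feats in features_by_speaker.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=0)
        models[name] = gmm.fit(feats)
    return models

def identify(models, utterance_features):
    """Return the speaker whose GMM gives the highest average log-likelihood
    for a short utterance (matching the short-utterance, real-time setting)."""
    return max(models, key=lambda n: models[n].score(utterance_features))

# Usage with synthetic 13-dimensional features standing in for MFCCs.
rng = np.random.default_rng(0)
data = {"speaker_a": rng.normal(0.0, 1.0, (500, 13)),
        "speaker_b": rng.normal(0.5, 1.0, (500, 13))}
models = train_speaker_models(data)
print(identify(models, rng.normal(0.5, 1.0, (80, 13))))
```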

  12. Arm Motion Recognition and Exercise Coaching System for Remote Interaction

    Directory of Open Access Journals (Sweden)

    Hong Zeng

    2016-01-01

    Full Text Available Arm motion recognition and its related applications have become a promising human computer interaction modality due to the rapid integration of numerical sensors in modern mobile phones. We implement a mobile-phone-based arm motion recognition and exercise coaching system that can help people carrying mobile phones to exercise anywhere at any time, especially persons who have very limited spare time and are constantly traveling across cities. We first design an improved k-means algorithm to cluster the collected 3-axis acceleration and gyroscope data of a person's actions into basic motions. A learning method based on a Hidden Markov Model is then designed to classify and recognize continuous arm motions of both learners and coaches, which also measures the action similarities between the persons. We implement the system on a MIUI 2S mobile phone and evaluate the system performance and its recognition accuracy.
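    The two-stage pipeline, k-means quantization of inertial samples into basic motions followed by per-class HMM scoring, can be outlined as below. This sketch uses plain scikit-learn k-means (the paper uses an improved variant) and assumes the hmmlearn package for discrete HMMs (CategoricalHMM, called MultinomialHMM in older hmmlearn releases); state and cluster counts are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from hmmlearn.hmm import CategoricalHMM  # assumed dependency; API names vary by version

def quantize(imu_samples, n_basic_motions=8, kmeans=None):
    """Cluster raw 3-axis acceleration + 3-axis gyro samples into discrete
    'basic motion' symbols with k-means (plain k-means stands in for the
    paper's improved variant). Returns (symbol column vector, fitted model)."""
    if kmeans is None:
        kmeans = KMeans(n_clusters=n_basic_motions, n_init=10,
                        random_state=0).fit(imu_samples)
    return kmeans.predict(imu_samples).reshape(-1, 1), kmeans

def train_motion_hmms(sequences_by_motion, n_states=4):
    """Fit one discrete HMM per arm-motion class on symbol sequences."""
    models = {}
    for motion, seqs in sequences_by_motion.items():
        X = np.concatenate(seqs)                 # stacked symbol columns
        lengths = [len(s) for s in seqs]
        hmm = CategoricalHMM(n_components=n_states, n_iter=50, random_state=0)
        models[motion] = hmm.fit(X, lengths)
    return models

def recognize(models, symbol_sequence):
    """Label a continuous motion by the HMM with the highest log-likelihood."""
    return max(models, key=lambda m: models[m].score(symbol_sequence))
```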

  13. Evaluation of classifier topologies for the real-time classification of simultaneous limb motions.

    Science.gov (United States)

    Ortiz-Catalan, Max; Branemark, Rickard; Hakansson, Bo

    2013-01-01

    The prediction of motion intent through the decoding of myoelectric signals has the potential to improve the functionality of limb prostheses. Considerable research on individual motion classifiers has been done to exploit this idea. A drawback of the individual prediction approach, however, is its limitation to serial control, which is slow, cumbersome, and unnatural. In this work, different classifier topologies suitable for the decoding of mixed classes, and thus capable of predicting simultaneous motions, were investigated in real-time. These topologies resulted in higher offline accuracies than previously achieved, but more importantly, positive indications of their suitability for real-time systems were found. Furthermore, in order to facilitate further development, benchmarking, and cooperation, the algorithms and data generated in this study are freely available as part of BioPatRec, an open source framework for the development of advanced prosthetic control strategies.

  14. Self-Motion Perception: Assessment by Real-Time Computer Generated Animations

    Science.gov (United States)

    Parker, Donald E.

    1999-01-01

    Our overall goal is to develop materials and procedures for assessing vestibular contributions to spatial cognition. The specific objective of the research described in this paper is to evaluate computer-generated animations as potential tools for studying self-orientation and self-motion perception. Specific questions addressed in this study included the following. First, does a non-verbal perceptual reporting procedure using real-time animations improve assessment of spatial orientation? Are reports reliable? Second, do reports confirm expectations based on stimuli to the vestibular apparatus? Third, can reliable reports be obtained when self-motion description vocabulary training is omitted?

  15. Monitoring tumor motion by real time 2D/3D registration during radiotherapy.

    Science.gov (United States)

    Gendrin, Christelle; Furtado, Hugo; Weber, Christoph; Bloch, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Bergmann, Helmar; Stock, Markus; Fichtinger, Gabor; Georg, Dietmar; Birkfellner, Wolfgang

    2012-02-01

    In this paper, we investigate the possibility to use X-ray based real time 2D/3D registration for non-invasive tumor motion monitoring during radiotherapy. The 2D/3D registration scheme is implemented using general purpose computation on graphics hardware (GPGPU) programming techniques and several algorithmic refinements in the registration process. Validation is conducted off-line using a phantom and five clinical patient data sets. The registration is performed on a region of interest (ROI) centered around the planned target volume (PTV). The phantom motion is measured with an rms error of 2.56 mm. For the patient data sets, a sinusoidal movement that clearly correlates to the breathing cycle is shown. Videos show a good match between X-ray and digitally reconstructed radiographs (DRR) displacement. Mean registration time is 0.5 s. We have demonstrated that real-time organ motion monitoring using image based markerless registration is feasible. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  16. Double-Windows-Based Motion Recognition in Multi-Floor Buildings Assisted by a Built-In Barometer.

    Science.gov (United States)

    Liu, Maolin; Li, Huaiyu; Wang, Yuan; Li, Fei; Chen, Xiuwan

    2018-04-01

    Accelerometers, gyroscopes and magnetometers in smartphones are often used to recognize human motions. Since it is difficult to distinguish between vertical motions and horizontal motions in the data provided by these built-in sensors, the vertical motion recognition accuracy is relatively low. The emergence of a built-in barometer in smartphones improves the accuracy of motion recognition in the vertical direction. However, there is a lack of quantitative analysis and modelling of barometer signals, which is the basis of the barometer's application to motion recognition, and a problem of imbalanced data also exists. This work focuses on using the barometers inside smartphones for vertical motion recognition in multi-floor buildings through modelling and feature extraction of pressure signals. A novel double-windows pressure feature extraction method, which adopts two sliding time windows of different lengths, is proposed to balance recognition accuracy and response time. A random forest classifier correlation rule is then designed to weaken the impact of imbalanced data on recognition accuracy. The results demonstrate that the recognition accuracy can reach 95.05% when the pressure features and the improved random forest classifier are adopted. Specifically, the recognition accuracy of the stair and elevator motions is significantly improved with enhanced response time. The proposed approach proves effective and accurate, providing a robust strategy for increasing the accuracy of vertical motion recognition.
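    The double-window idea can be illustrated as follows: features are extracted from a short and a long sliding window of the pressure signal and fed to a random forest. The sampling rate, window lengths and the specific features below are assumptions, and the synthetic data only demonstrates the training/prediction pattern, not the paper's correlation rule for imbalanced data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 10             # assumed barometer sampling rate (Hz)
SHORT_WIN = 2 * FS  # short window: fast response
LONG_WIN = 10 * FS  # long window: stable evidence of a floor change

def double_window_features(pressure):
    """Features from two sliding windows ending at the current sample.

    pressure: 1D array of barometric pressure (hPa). Returns pressure change,
    slope and variance over the short and long windows (feature set assumed)."""
    feats = []
    for win in (SHORT_WIN, LONG_WIN):
        seg = pressure[-win:]
        slope = np.polyfit(np.arange(len(seg)), seg, 1)[0]
        feats += [seg[-1] - seg[0], slope, seg.var()]
    return np.array(feats)

# Train a random forest on labelled segments (e.g., flat walking / stairs /
# elevator). Synthetic stand-in data only illustrates the calling pattern.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = rng.integers(0, 3, size=300)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(double_window_features(rng.normal(1000, 0.05, 200)).reshape(1, -1)))
```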

  17. Real-Time Motion Planning and Safe Navigation in Dynamic Multi-Robot Environments

    National Research Council Canada - National Science Library

    Bruce, James R

    2006-01-01

    .... While motion planning has been used for high level robot navigation, or limited to semi-static or single-robot domains, it has often been dismissed for the real-time low-level control of agents due...

  18. Real-time billboard trademark detection and recognition in sports video

    Science.gov (United States)

    Bu, Jiang; Lao, Song-Yan; Bai, Liang

    2013-03-01

    Nowadays, different applications like automatic video indexing, keyword-based video search and TV commercial analysis can be developed by detecting and recognizing the billboard trademark. We propose a hierarchical solution for real-time billboard trademark recognition in various sports videos. Billboard frames are detected in the first level, where a fuzzy decision tree with easily-computed features is employed to accelerate the process; in the second level, color and regional SIFT features are combined for the first time to describe the appearance of trademarks, and shared nearest neighbor (SNN) clustering with the χ2 distance is utilized instead of traditional K-means clustering to construct the SIFT vocabulary. Finally, Latent Semantic Analysis (LSA) based SIFT vocabulary matching is performed on the template trademark and the candidate regions in the billboard frame. The preliminary experiments demonstrate the effectiveness of the hierarchical solution, and real-time constraints are also met by our solution.

  19. SU-G-BRA-09: Estimation of Motion Tracking Uncertainty for Real-Time Adaptive Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yan, H [Capital Medical University, Beijing, Beijing (China); Chen, Z [Yale New Haven Hospital, New Haven, CT (United States); Nath, R; Liu, W [Yale University School of Medicine, New Haven, CT (United States)

    2016-06-15

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding 2.5mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the
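    The exceedance-classification step can be sketched as below: a logistic regression and an SVM are trained to flag time points whose 3D tracking error exceeds 2.5 mm, using the three features named in the abstract. The data here are synthetic and the feature-to-error relationship is invented purely to show the fitting pattern; it is not the authors' simulation framework.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Features: previous tracking error, a prediction-quality score and the cosine
# of the angle between the motion trajectory and the treatment beam (as listed
# in the abstract). The synthetic relationships below are assumptions.
rng = np.random.default_rng(0)
n = 2000
prev_err = rng.gamma(2.0, 1.0, n)
pred_quality = rng.uniform(0, 1, n)
cos_angle = rng.uniform(-1, 1, n)
X = np.column_stack([prev_err, pred_quality, cos_angle])

# Hypothetical ground-truth 3D error and the 2.5 mm exceedance label.
err_3d = 0.8 * prev_err + 1.5 * (1 - pred_quality) + rng.normal(0, 0.3, n)
y = (err_3d > 2.5).astype(int)

logreg = LogisticRegression().fit(X[:1500], y[:1500])
svm = SVC(kernel="rbf", gamma="scale").fit(X[:1500], y[:1500])
print("logistic accuracy:", logreg.score(X[1500:], y[1500:]))
print("SVM accuracy:", svm.score(X[1500:], y[1500:]))
```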

  20. SU-G-BRA-09: Estimation of Motion Tracking Uncertainty for Real-Time Adaptive Imaging

    International Nuclear Information System (INIS)

    Yan, H; Chen, Z; Nath, R; Liu, W

    2016-01-01

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding 2.5mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the

  1. Access Control System Based on Real-Time Face Recognition and Gender Information

    Directory of Open Access Journals (Sweden)

    Putri Nurmala

    2015-06-01

    Full Text Available Face recognition with gender information is a computer application for automatically identifying or verifying a person's face from a camera that captures the person's face. It is usually used in access control systems and can be compared to other biometrics such as fingerprint or iris identification systems. Many face recognition algorithms have been developed in recent years. The face recognition and gender information in this system are based on the Principal Component Analysis (PCA) method. The computation is simple and fast compared with methods that require extensive learning, such as artificial neural networks. In this access control system, a relay and an Arduino controller are used. This work focuses on real-time face recognition and gender-based information using the Principal Component Analysis (PCA) method. The result achieved from the application design is the identification of a person's face and gender using PCA. The face recognition system using PCA obtains good results, with an 85% success rate on face images tested by several people and a fairly high degree of accuracy.
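    An eigenface-style PCA recognizer of the kind described can be sketched with scikit-learn as below. The component count, 1-nearest-neighbour matching and the random stand-in gallery are assumptions; the relay/Arduino door-control side of the system is not shown.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def train_pca_face_recognizer(face_vectors, labels, n_components=50):
    """Eigenface-style recognizer: project flattened grayscale face crops
    onto the principal components and match with nearest neighbour.

    face_vectors: array (n_faces, height*width); labels encode identity and,
    as in the described system, could also encode gender. The component
    count is an assumption."""
    pca = PCA(n_components=n_components, whiten=True, random_state=0).fit(face_vectors)
    knn = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(face_vectors), labels)
    return pca, knn

def recognize_face(pca, knn, face_vector):
    """Return the label of the closest enrolled face in eigenface space."""
    return knn.predict(pca.transform(face_vector.reshape(1, -1)))[0]

# Usage sketch with random stand-in data (64x64 crops, 20 enrolled persons).
rng = np.random.default_rng(0)
gallery = rng.random((200, 64 * 64))
identities = np.repeat(np.arange(20), 10)
pca, knn = train_pca_face_recognizer(gallery, identities)
print(recognize_face(pca, knn, gallery[7]))
```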

  2. Flexible Piezoelectric Sensor-Based Gait Recognition

    Directory of Open Access Journals (Sweden)

    Youngsu Cha

    2018-02-01

    Full Text Available Most motion recognition research has required tight-fitting suits for precise sensing. However, tight-suit systems have difficulty adapting to real applications, because people normally wear loose clothes. In this paper, we propose a gait recognition system with flexible piezoelectric sensors in loose clothing. The gait recognition system does not directly sense lower-body angles. It does, however, detect the transition between standing and walking. Specifically, we use the signals from the flexible sensors attached to the knee and hip parts of loose pants. We detect the periodic motion component using the discrete-time Fourier series of the signal during walking. We adapt the gait detection method to a real-time patient motion and posture monitoring system, in which the gait recognition operates well. Finally, we test the gait recognition system with 10 subjects, for which the proposed system successfully detects walking with a success rate of over 93%.
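    The standing/walking detector can be approximated by checking whether a periodic component in a typical gait band dominates the sensor spectrum. The sketch below is an FFT band-power version of that idea; the sampling rate, frequency band and threshold are assumptions, and the paper's actual discrete-time Fourier series formulation may differ in detail.

```python
import numpy as np

FS = 50.0  # assumed sampling rate of the flexible piezoelectric sensor (Hz)

def is_walking(sensor_signal, band=(0.5, 3.0), power_ratio_threshold=0.4):
    """Detect walking from one knee/hip sensor channel.

    A periodic component in an assumed gait band (0.5-3 Hz) dominating the
    spectrum is taken as evidence of walking; this FFT band-power ratio is a
    simplified stand-in for the paper's discrete-time Fourier series test."""
    x = np.asarray(sensor_signal, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    ratio = spectrum[in_band].sum() / (spectrum[1:].sum() + 1e-12)
    return ratio > power_ratio_threshold

# Usage: a 1 Hz periodic knee signal should be flagged as walking.
t = np.arange(0, 10, 1 / FS)
print(is_walking(np.sin(2 * np.pi * 1.0 * t) + 0.1 * np.random.randn(len(t))))
```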

  3. Real-Time 3D Motion capture by monocular vision and virtual rendering

    OpenAIRE

    Gomez Jauregui , David Antonio; Horain , Patrick

    2012-01-01

Avatars in networked 3D virtual environments allow users to interact over the Internet and to get some feeling of virtual telepresence. However, avatar control may be tedious. Motion capture systems based on 3D sensors have recently reached the consumer market, but webcams and camera-phones are more widespread and cheaper. The proposed demonstration aims at animating a user's avatar from real time 3D motion capture by monoscopic computer vision, thus allowing virtual t...

  4. A motion-compensated image filter for low-dose fluoroscopy in a real-time tumor-tracking radiotherapy system

    International Nuclear Information System (INIS)

    Miyamoto, Naoki; Ishikawa, Masayori; Sutherland, Kenneth

    2015-01-01

    In the real-time tumor-tracking radiotherapy system, a surrogate fiducial marker inserted in or near the tumor is detected by fluoroscopy to realize respiratory-gated radiotherapy. The imaging dose caused by fluoroscopy should be minimized. In this work, an image processing technique is proposed for tracing a moving marker in low-dose imaging. The proposed tracking technique is a combination of a motion-compensated recursive filter and template pattern matching. The proposed image filter can reduce motion artifacts resulting from the recursive process based on the determination of the region of interest for the next frame according to the current marker position in the fluoroscopic images. The effectiveness of the proposed technique and the expected clinical benefit were examined by phantom experimental studies with actual tumor trajectories generated from clinical patient data. It was demonstrated that the marker motion could be traced in low-dose imaging by applying the proposed algorithm with acceptable registration error and high pattern recognition score in all trajectories, although some trajectories were not able to be tracked with the conventional spatial filters or without image filters. The positional accuracy is expected to be kept within ±2 mm. The total computation time required to determine the marker position is a few milliseconds. The proposed image processing technique is applicable for imaging dose reduction. (author)
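    The combination described, a recursive temporal filter applied in a marker-centred region of interest plus template matching, can be outlined as below. The filter weight, ROI size and the use of scikit-image's match_template are assumptions; the sketch shows one tracking step, not the full gating logic.

```python
import numpy as np
from skimage.feature import match_template  # assumed available

ALPHA = 0.5     # recursive-filter weight (assumed)
ROI_HALF = 32   # half-size of the search region around the last marker position (assumed)

def track_marker(frame, template, last_pos, filtered_roi=None):
    """One step of marker tracking on a noisy low-dose fluoroscopy frame.

    A region of interest centred on the previous marker position is averaged
    recursively with the current frame (so the moving marker stays aligned
    between frames and is not smeared out), and the marker is then located in
    the filtered ROI by template matching. Parameter values are placeholders."""
    r, c = last_pos
    roi = frame[r - ROI_HALF:r + ROI_HALF, c - ROI_HALF:c + ROI_HALF].astype(float)
    # Recursive (IIR) temporal filter applied inside the marker-centred ROI.
    filtered_roi = roi if filtered_roi is None else ALPHA * roi + (1 - ALPHA) * filtered_roi
    ncc = match_template(filtered_roi, template)
    dr, dc = np.unravel_index(np.argmax(ncc), ncc.shape)
    th, tw = template.shape
    new_pos = (r - ROI_HALF + dr + th // 2, c - ROI_HALF + dc + tw // 2)
    return new_pos, filtered_roi
```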

  5. Development and operation of a real-time simulation at the NASA Ames Vertical Motion Simulator

    Science.gov (United States)

    Sweeney, Christopher; Sheppard, Shirin; Chetelat, Monique

    1993-01-01

    The Vertical Motion Simulator (VMS) facility at the NASA Ames Research Center combines the largest vertical motion capability in the world with a flexible real-time operating system allowing research to be conducted quickly and effectively. Due to the diverse nature of the aircraft simulated and the large number of simulations conducted annually, the challenge for the simulation engineer is to develop an accurate real-time simulation in a timely, efficient manner. The SimLab facility and the software tools necessary for an operating simulation will be discussed. Subsequent sections will describe the development process through operation of the simulation; this includes acceptance of the model, validation, integration and production phases.

  6. Measuring Sea-Ice Motion in the Arctic with Real Time Photogrammetry

    Science.gov (United States)

    Brozena, J. M.; Hagen, R. A.; Peters, M. F.; Liang, R.; Ball, D.

    2014-12-01

    The U.S. Naval Research Laboratory, in coordination with other groups, has been collecting sea-ice data in the Arctic off the north coast of Alaska with an airborne system employing a radar altimeter, LiDAR and a photogrammetric camera in an effort to obtain wide swaths of measurements coincident with Cryosat-2 footprints. Because the satellite tracks traverse areas of moving pack ice, precise real-time estimates of the ice motion are needed to fly a survey grid that will yield complete data coverage. This requirement led us to develop a method to find the ice motion from the aircraft during the survey. With the advent of real-time orthographic photogrammetric systems, we developed a system that measures the sea ice motion in-flight, and also permits post-process modeling of sea ice velocities to correct the positioning of radar and LiDAR data. For the 2013 and 2014 field seasons, we used this Real Time Ice Motion Estimation (RTIME) system to determine ice motion using Applanix's Inflight Ortho software with an Applanix DSS439 system. Operationally, a series of photos were taken in the survey area. The aircraft then turned around and took more photos along the same line several minutes later. Orthophotos were generated within minutes of collection and evaluated by custom software to find photo footprints and potential overlap. Overlapping photos were passed to the correlation software, which selects a series of "chips" in the first photo and looks for the best matches in the second photo. The correlation results are then passed to a density-based clustering algorithm to determine the offset of the photo pair. To investigate any systematic errors in the photogrammetry, we flew several flight lines over a fixed point on various headings, over an area of non-moving ice in 2013. The orthophotos were run through the correlation software to find any residual offsets, and run through additional software to measure chip positions and offsets relative to the aircraft
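    The chip-correlation and clustering stage can be sketched as below: chips sampled from the first orthophoto are located in the second by normalized cross-correlation, and the per-chip offsets are clustered so that outliers are rejected before averaging. Chip size, chip count, the DBSCAN settings and the use of scikit-image/scikit-learn are all assumptions standing in for an otherwise custom pipeline.

```python
import numpy as np
from skimage.feature import match_template   # assumed available
from sklearn.cluster import DBSCAN

CHIP = 64   # chip size in pixels (assumed)

def ice_offset(ortho1, ortho2, n_chips=25, seed=0):
    """Estimate the dominant ice displacement (pixels) between two overlapping
    orthophotos taken minutes apart.

    Chips are sampled from the first image, located in the second by normalized
    cross-correlation, and the per-chip offsets are clustered with a
    density-based algorithm so that outliers (clouds, open water) are rejected."""
    rng = np.random.default_rng(seed)
    h, w = ortho1.shape
    offsets = []
    for _ in range(n_chips):
        r = int(rng.integers(0, h - CHIP))
        c = int(rng.integers(0, w - CHIP))
        chip = ortho1[r:r + CHIP, c:c + CHIP]
        ncc = match_template(ortho2, chip)
        rr, cc = np.unravel_index(np.argmax(ncc), ncc.shape)
        offsets.append((rr - r, cc - c))
    offsets = np.array(offsets, dtype=float)
    labels = DBSCAN(eps=3.0, min_samples=3).fit_predict(offsets)
    core = labels >= 0
    if not core.any():                        # no dense cluster found
        return offsets.mean(axis=0)
    densest = np.bincount(labels[core]).argmax()
    return offsets[labels == densest].mean(axis=0)
```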

  7. Towards frameless maskless SRS through real-time 6DoF robotic motion compensation

    Science.gov (United States)

    Belcher, Andrew H.; Liu, Xinmin; Chmura, Steven; Yenice, Kamil; Wiersma, Rodney D.

    2017-12-01

    Stereotactic radiosurgery (SRS) uses precise dose placement to treat conditions of the CNS. Frame-based SRS uses a metal head ring fixed to the patient’s skull to provide high treatment accuracy, but patient comfort and clinical workflow may suffer. Frameless SRS, while potentially more convenient, may increase uncertainty of treatment accuracy and be physiologically confining to some patients. By incorporating highly precise robotics and advanced software algorithms into frameless treatments, we present a novel frameless and maskless SRS system where a robot provides real-time 6DoF head motion stabilization allowing positional accuracies to match or exceed those of traditional frame-based SRS. A 6DoF parallel kinematics robot was developed and integrated with a real-time infrared camera in a closed loop configuration. A novel compensation algorithm was developed based on an iterative closest-path correction approach. The robotic SRS system was tested on six volunteers, whose motion was monitored and compensated for in real-time over 15 min simulated treatments. The system’s effectiveness in maintaining the target’s 6DoF position within preset thresholds was determined by comparing volunteer head motion with and without compensation. Comparing corrected and uncorrected motion, the 6DoF robotic system showed an overall improvement factor of 21 in terms of maintaining target position within 0.5 mm and 0.5 degree thresholds. Although the system’s effectiveness varied among the volunteers examined, for all volunteers tested the target position remained within the preset tolerances 99.0% of the time when robotic stabilization was used, compared to 4.7% without robotic stabilization. The pre-clinical robotic SRS compensation system was found to be effective at responding to sub-millimeter and sub-degree cranial motions for all volunteers examined. The system’s success with volunteers has demonstrated its capability for implementation with frameless and maskless SRS.

  8. Towards frameless maskless SRS through real-time 6DoF robotic motion compensation.

    Science.gov (United States)

    Belcher, Andrew H; Liu, Xinmin; Chmura, Steven; Yenice, Kamil; Wiersma, Rodney D

    2017-11-13

    Stereotactic radiosurgery (SRS) uses precise dose placement to treat conditions of the CNS. Frame-based SRS uses a metal head ring fixed to the patient's skull to provide high treatment accuracy, but patient comfort and clinical workflow may suffer. Frameless SRS, while potentially more convenient, may increase uncertainty of treatment accuracy and be physiologically confining to some patients. By incorporating highly precise robotics and advanced software algorithms into frameless treatments, we present a novel frameless and maskless SRS system where a robot provides real-time 6DoF head motion stabilization allowing positional accuracies to match or exceed those of traditional frame-based SRS. A 6DoF parallel kinematics robot was developed and integrated with a real-time infrared camera in a closed loop configuration. A novel compensation algorithm was developed based on an iterative closest-path correction approach. The robotic SRS system was tested on six volunteers, whose motion was monitored and compensated for in real-time over 15 min simulated treatments. The system's effectiveness in maintaining the target's 6DoF position within preset thresholds was determined by comparing volunteer head motion with and without compensation. Comparing corrected and uncorrected motion, the 6DoF robotic system showed an overall improvement factor of 21 in terms of maintaining target position within 0.5 mm and 0.5 degree thresholds. Although the system's effectiveness varied among the volunteers examined, for all volunteers tested the target position remained within the preset tolerances 99.0% of the time when robotic stabilization was used, compared to 4.7% without robotic stabilization. The pre-clinical robotic SRS compensation system was found to be effective at responding to sub-millimeter and sub-degree cranial motions for all volunteers examined. The system's success with volunteers has demonstrated its capability for implementation with frameless and maskless SRS
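
    The record above quotes 0.5 mm and 0.5 degree thresholds; the sketch below illustrates only the generic closed-loop idea, a proportional correction plus a beam-gating flag, not the authors' iterative closest-path algorithm. The pose layout, gain and function names are illustrative.

    ```python
    import numpy as np

    TRANS_TOL_MM = 0.5   # gating tolerances quoted in the abstract
    ROT_TOL_DEG = 0.5

    def control_step(measured_pose, target_pose, gain=0.5):
        """One cycle of a simplified head-stabilization loop.  Poses are
        6-vectors (x, y, z in mm, roll, pitch, yaw in degrees) from the
        infrared tracker.  Returns a proportional stage command that moves the
        head back toward the target, plus a flag telling whether the residual
        error is small enough to keep the beam on."""
        error = np.asarray(target_pose, float) - np.asarray(measured_pose, float)
        command = gain * error
        beam_on = (np.all(np.abs(error[:3]) <= TRANS_TOL_MM) and
                   np.all(np.abs(error[3:]) <= ROT_TOL_DEG))
        return command, beam_on
    ```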

  9. Parallel Motion Simulation of Large-Scale Real-Time Crowd in a Hierarchical Environmental Model

    Directory of Open Access Journals (Sweden)

    Xin Wang

    2012-01-01

    Full Text Available This paper presents a parallel real-time crowd simulation method based on a hierarchical environmental model. A dynamical model of the complex environment should be constructed to simulate the state transition and propagation of individual motions. By modeling a virtual environment where virtual crowds reside, we employ different parallel methods on a topological layer, a path layer and a perceptual layer. We propose a parallel motion path matching method based on the path layer and a parallel crowd simulation method based on the perceptual layer. Large-scale real-time crowd simulation becomes possible with these methods. Numerical experiments are carried out to demonstrate the methods and results.

  10. Iris unwrapping using the Bresenham circle algorithm for real-time iris recognition

    Science.gov (United States)

    Carothers, Matthew T.; Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.

    2015-02-01

    An efficient parallel architecture design for the iris unwrapping process in a real-time iris recognition system using the Bresenham Circle Algorithm is presented in this paper. Based on the characteristics of the model parameters, this algorithm was chosen over the widely used polar conversion technique as the iris unwrapping model. The architecture design is parallelized to increase the throughput of the system and is suitable for processing an input image size of 320 × 240 pixels in real time using Field Programmable Gate Array (FPGA) technology. Quartus software is used to implement, verify, and analyze the design's performance using the VHSIC Hardware Description Language. The system's predicted processing time is faster than that of the modern iris unwrapping techniques in use today.
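
    A compact sketch of the underlying idea, generating circle pixels with integer-only Bresenham (midpoint) arithmetic and stacking successive radii into a rectangular strip, is shown below. It is a software illustration, not the parallel FPGA design, and the octant-ordered points would need angular sorting in a real iris-code pipeline; all names and sampling choices are illustrative.

    ```python
    import numpy as np

    def bresenham_circle_points(cx, cy, r):
        """Integer-only midpoint/Bresenham circle rasterization: returns the
        pixel coordinates of the circle of radius r centred at (cx, cy).
        Points are produced octant by octant, not in angular order."""
        pts = []
        x, y, d = 0, r, 3 - 2 * r
        while x <= y:
            # each (x, y) pair yields eight symmetric octant points
            for dx, dy in [(x, y), (y, x), (-x, y), (-y, x),
                           (x, -y), (y, -x), (-x, -y), (-y, -x)]:
                pts.append((cx + dx, cy + dy))
            if d < 0:
                d += 4 * x + 6
            else:
                d += 4 * (x - y) + 10
                y -= 1
            x += 1
        return pts

    def unwrap_iris(image, cx, cy, r_pupil, r_iris):
        """Unwrap the iris annulus into a rectangular strip: one row per radius,
        one column per sampled circle pixel, avoiding the trigonometry of the
        usual polar (rubber-sheet) conversion."""
        width = len(bresenham_circle_points(cx, cy, r_iris))
        strip = np.zeros((r_iris - r_pupil, width), dtype=image.dtype)
        for i, r in enumerate(range(r_pupil, r_iris)):
            pts = bresenham_circle_points(cx, cy, r)
            for j in range(width):
                x, y = pts[int(j * len(pts) / width)]
                if 0 <= y < image.shape[0] and 0 <= x < image.shape[1]:
                    strip[i, j] = image[y, x]
        return strip
    ```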

  11. A Motion-Adaptive Deinterlacer via Hybrid Motion Detection and Edge-Pattern Recognition

    Directory of Open Access Journals (Sweden)

    He-Yuan Lin

    2008-03-01

    Full Text Available A novel motion-adaptive deinterlacing algorithm with edge-pattern recognition and hybrid motion detection is introduced. The great variety of video contents makes the processing of assorted motion, edges, textures, and combinations of them very difficult with a single algorithm. The edge-pattern recognition algorithm introduced in this paper exhibits flexibility in processing both textures and edges, which previously had to be handled separately by line average and edge-based line average. Moreover, predicting the neighboring pixels for pattern analysis and interpolation further enhances the adaptability of the edge-pattern recognition unit when motion detection is incorporated. Our hybrid motion detection features accurate detection of fast and slow motion in interlaced video, as well as motion combined with edges. Using only three fields for detection also renders higher temporal correlation for interpolation. The better performance of our deinterlacing algorithm, with higher content-adaptability and lower memory cost than the state-of-the-art 4-field motion detection algorithms, can be seen from the subjective and objective experimental results on the CIF and PAL video sequences.

  12. A Motion-Adaptive Deinterlacer via Hybrid Motion Detection and Edge-Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Li Hsin-Te

    2008-01-01

    Full Text Available A novel motion-adaptive deinterlacing algorithm with edge-pattern recognition and hybrid motion detection is introduced. The great variety of video contents makes the processing of assorted motion, edges, textures, and combinations of them very difficult with a single algorithm. The edge-pattern recognition algorithm introduced in this paper exhibits flexibility in processing both textures and edges, which previously had to be handled separately by line average and edge-based line average. Moreover, predicting the neighboring pixels for pattern analysis and interpolation further enhances the adaptability of the edge-pattern recognition unit when motion detection is incorporated. Our hybrid motion detection features accurate detection of fast and slow motion in interlaced video, as well as motion combined with edges. Using only three fields for detection also renders higher temporal correlation for interpolation. The better performance of our deinterlacing algorithm, with higher content-adaptability and lower memory cost than the state-of-the-art 4-field motion detection algorithms, can be seen from the subjective and objective experimental results on the CIF and PAL video sequences.
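
    A minimal per-pixel sketch of motion-adaptive deinterlacing, weave where the field is static and fall back to a plain line average where motion is detected, is given below. The edge-pattern recognition and 3-field hybrid detector of the paper are not reproduced; the threshold, indexing conventions and names are illustrative.

    ```python
    import numpy as np

    def deinterlace(prev_frame, curr_field, parity, motion_thresh=12.0):
        """Motion-adaptive deinterlacing sketch.  `curr_field` contains only the
        rows of parity `parity` (0 = even, 1 = odd) of the new frame; the other
        rows must be reconstructed.  Where the carried rows changed little since
        the previous frame the missing row is woven from the previous frame;
        where they changed a lot it is spatially interpolated (line average)."""
        h, w = prev_frame.shape
        prev = prev_frame.astype(float)
        out = prev.copy()
        out[parity::2] = curr_field
        # per-row motion measure from the rows this field does carry
        motion = np.abs(out[parity::2] - prev[parity::2])
        for y in range(1 - parity, h, 2):                 # rows to reconstruct
            above = out[max(y - 1, 0)]
            below = out[min(y + 1, h - 1)]
            spatial = 0.5 * (above + below)               # plain line average
            m = motion[min(y // 2, motion.shape[0] - 1)]  # nearest carried row
            out[y] = np.where(m > motion_thresh, spatial, prev[y])
        return out
    ```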

  13. Memristive Computational Architecture of an Echo State Network for Real-Time Speech Emotion Recognition

    Science.gov (United States)

    2015-05-28

    Speech-based emotion recognition is simpler and requires fewer computational resources than other input modalities such as facial expressions. The Berlin database of emotional speech serves as the emotion corpus.

  14. MO-FG-BRD-00: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  15. MO-FG-BRD-00: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management

    International Nuclear Information System (INIS)

    2015-01-01

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow

  16. Self-Organizing Neural Integration of Pose-Motion Features for Human Action Recognition

    Directory of Open Access Journals (Sweden)

    German Ignacio Parisi

    2015-06-01

    Full Text Available The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented towards human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatiotemporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best 21 results for a public benchmark of domestic daily actions.

  17. Real Time Facial Expression Recognition Using Webcam and SDK Affectiva

    Directory of Open Access Journals (Sweden)

    Martin Magdin

    2018-06-01

    Full Text Available Facial expression is an essential part of communication. For this reason, the issue of evaluating human emotions using a computer is a very interesting topic, which has gained more and more attention in recent years. It is mainly related to the possibility of applying facial expression recognition in many fields such as HCI, video games, virtual reality, and analysing customer satisfaction, etc. Emotion determination (recognition) is often performed in 3 basic phases: face detection, facial feature extraction, and the last stage, expression classification. Most often one encounters the so-called Ekman’s classification of 6 emotional expressions (or 7, with the neutral expression), as well as other types of classification, such as the Russell circular model, which contains up to 24 emotions, or Plutchik’s Wheel of Emotions. The methods used in the three phases of the recognition process have not only improved over the last 60 years, but new methods and algorithms have also emerged, such as the Viola-Jones detector, that offer greater accuracy and lower computational demands. Therefore, there are currently various solutions in the form of a Software Development Kit (SDK). In this publication, we present the design and creation of our system for real-time emotion classification. Our intention was to create a system that would use all three phases of the recognition process and work quickly and stably in real time. That’s why we’ve decided to take advantage of existing Affectiva SDKs. By using a classic webcam we can detect facial landmarks in the image automatically using the Software Development Kit (SDK) from Affectiva. A geometric-feature-based approach is used for feature extraction. The distance between landmarks is used as a feature, and for selecting an optimal set of features, the brute force method is used. The proposed system uses a neural network algorithm for classification. The proposed system recognizes 6 (respectively 7) facial expressions.
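
    The geometric-feature idea above can be sketched as follows: pairwise distances between detected landmarks form the feature vector, and a brute-force search over small feature subsets keeps the subset that scores best under some classifier. The scoring function, subset size and names are illustrative; the landmark detection itself would come from the Affectiva SDK or a similar detector.

    ```python
    import itertools
    import numpy as np

    def landmark_distances(landmarks):
        """Pairwise Euclidean distances between facial landmarks.
        `landmarks` is an (N, 2) array of (x, y) points; returns a 1-D feature
        vector of length N*(N-1)/2."""
        n = len(landmarks)
        feats = [np.linalg.norm(landmarks[i] - landmarks[j])
                 for i, j in itertools.combinations(range(n), 2)]
        return np.asarray(feats)

    def brute_force_select(X, y, score_fn, k=5):
        """Exhaustively test every k-sized subset of feature columns and keep
        the subset with the best score; score_fn(X_subset, y) is any
        user-supplied evaluator, e.g. validation accuracy of a small
        neural-network classifier."""
        best_subset, best_score = None, -np.inf
        for subset in itertools.combinations(range(X.shape[1]), k):
            s = score_fn(X[:, list(subset)], y)
            if s > best_score:
                best_subset, best_score = subset, s
        return best_subset, best_score
    ```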

  18. FY1995 four-terminal-device intelligent LSI system for real-time event recognition; 1995 nendo shunji ninshiki kino wo motta 4 tanshi device chino LSI no kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    Development of an intelligent LSI system having real-time response capability for real-world events. This is accomplished by enhancing the functionality of an elemental device, employing ultra-fine-grain parallelism and merging software directly in the LSI hardware. Intelligent functions are created directly on the LSI hardware, thus enabling real-time recognition by electronic systems. The origin of human intelligence lies in the huge memory data base acquired through one's life and the very fast search mechanism to recall the 'most similar' event to the current input. Based on this principle, components of intelligent LSI systems have been developed. An analog EEPROM technology capable of storing 256 levels of data per cell without time-consuming write/verify operations has been developed. In situ monitoring of memory content during writing has enabled high-accuracy data writing. A high-speed parallel-search engine for the minimum distance vector (an associator) has been developed using neuron MOS technology. The associator has been applied to a motion vector detector as an example, which has shown very fast detection with an extremely simple hardware configuration. The association architecture has been applied to a real-time motion picture compression system, demonstrating three orders of magnitude higher performance than typical CISC processors (Pentium 166MHz). (NEDO)

  19. Action Recognition in Semi-synthetic Images using Motion Primitives

    DEFF Research Database (Denmark)

    Fihl, Preben; Holte, Michael Boelstoft; Moeslund, Thomas B.

    This technical report describes an action recognition approach based on motion primitives. A few characteristic time instances are found in a sequence containing an action and the action is classified from these instances. The characteristic instances are defined solely on the human motion, hence the name motion primitives. The motion primitives are extracted by double difference images and represented by four features. In each frame the primitive, if any, that best explains the observed data is identified. This leads to a discrete recognition problem, since a video sequence will be converted into a string containing a sequence of symbols, each representing a primitive. After pruning the string, a probabilistic Edit Distance classifier is applied to identify which action best describes the pruned string. The method is evaluated on five one-arm gestures. A test is performed with semi-synthetic input data...
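
    The string-matching step can be illustrated with a plain (unweighted) Levenshtein distance; the report's classifier is probabilistic, so the equal-cost operations below are a simplification, and the prototype strings in the usage line are invented for the example.

    ```python
    def edit_distance(a, b):
        """Levenshtein distance between two primitive strings (each symbol is
        one detected motion primitive)."""
        dp = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, cb in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                         dp[j - 1] + 1,      # insertion
                                         prev + (ca != cb))  # substitution
        return dp[-1]

    def classify_action(observed, prototypes):
        """Assign the observed (pruned) primitive string to the action whose
        prototype string is closest in edit distance."""
        return min(prototypes, key=lambda name: edit_distance(observed, prototypes[name]))

    # usage sketch with invented prototype strings:
    # classify_action("AABBC", {"wave": "ABBC", "point": "ADDC"})
    ```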

  20. Real-Time Multiview Recognition of Human Gestures by Distributed Image Processing

    Directory of Open Access Journals (Sweden)

    Sato Kosuke

    2010-01-01

    Full Text Available Since a gesture involves a dynamic and complex motion, multiview observation and recognition are desirable. For the better representation of gestures, one needs to know, in the first place, from which views a gesture should be observed. Furthermore, it becomes increasingly important how the recognition results are integrated when larger numbers of camera views are considered. To investigate these problems, we propose a framework under which multiview recognition is carried out, and an integration scheme by which the recognition results are integrated online and in real time. For performance evaluation, we use the ViHASi (Virtual Human Action Silhouette) public image database as a benchmark and our Japanese sign language (JSL) image database that contains 18 kinds of hand signs. By examining the recognition rates of each gesture for each view, we found gestures that exhibit view dependency and gestures that do not. Also, we found that the view dependency itself could vary depending on the target gesture sets. By integrating the recognition results of different views, our swarm-based integration provides more robust and better recognition performance than individual fixed-view recognition agents.

  1. Real-time axial motion detection and correction for single photon emission computed tomography using a linear prediction filter

    International Nuclear Information System (INIS)

    Saba, V.; Setayeshi, S.; Ghannadi-Maragheh, M.

    2011-01-01

    We have developed an algorithm for real-time detection and complete correction of patient motion effects during single photon emission computed tomography. The algorithm is based on a linear prediction filter (LPC). The new prediction of projection data algorithm (PPDA) detects most motions, such as those of the head, legs, and hands, by comparing the predicted and measured frame data. When the data acquisition for a specific frame is completed, the accuracy of the acquired data is evaluated by the PPDA. If patient motion is detected, the scanning procedure is stopped. After the patient returns to his or her true position, data acquisition is repeated only for the corrupted frame and the scanning procedure is continued. Various experimental data were used to validate the motion detection algorithm; on the whole, the proposed method was tested with approximately 100 test cases. The PPDA shows promising results. Using the PPDA enables us to prevent the scanner from collecting disturbed data during the scan and to replace such data with motion-free data by rescanning the corrupted frames in real time. As a result, the effects of patient motion are corrected in real time. (author)
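
    A minimal sketch of frame prediction by a linear predictor is given below: coefficients are fitted by least squares on the frames acquired so far, and a frame whose residual against its prediction is too large is flagged for re-acquisition. The order, threshold and names are illustrative, not the PPDA's actual parameters.

    ```python
    import numpy as np

    def fit_predictor(frames, order=3):
        """Fit linear-prediction coefficients a[0..order-1] such that
        frame[t] ~= sum_k a[k] * frame[t-1-k], by least squares over all
        pixels of all past projection frames."""
        F = np.stack([f.ravel().astype(float) for f in frames])     # (T, npix)
        T = F.shape[0]
        X = np.concatenate([np.stack([F[t - 1 - k] for k in range(order)], axis=1)
                            for t in range(order, T)])              # (n, order)
        y = np.concatenate([F[t] for t in range(order, T)])         # (n,)
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    def frame_is_corrupted(frames, new_frame, coeffs, rel_thresh=0.15):
        """Compare the newly acquired frame with its linear prediction; if the
        relative residual exceeds the threshold, flag patient motion so the
        frame can be re-acquired."""
        order = len(coeffs)
        F = [f.ravel().astype(float) for f in frames[-order:]]
        pred = sum(c * F[-1 - k] for k, c in enumerate(coeffs))
        resid = np.linalg.norm(new_frame.ravel() - pred) / (np.linalg.norm(pred) + 1e-9)
        return resid > rel_thresh
    ```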

  2. REAL-TIME FACE RECOGNITION BASED ON OPTICAL FLOW AND HISTOGRAM EQUALIZATION

    Directory of Open Access Journals (Sweden)

    D. Sathish Kumar

    2013-05-01

    Full Text Available Face recognition is one of the most intensively studied areas in computer vision and pattern recognition, but much of the work focuses on recognizing faces under varying facial expressions and pose variations. A constrained optical flow algorithm discussed in this paper recognizes facial images involving various expressions based on motion vector computation. In this paper, an optical flow computation algorithm is proposed that computes the flow between frames of varying facial gestures and integrates it with a synthesized image in a probabilistic environment. A Histogram Equalization technique has also been used to overcome the effect of illumination while capturing the input data using camera devices; it also enhances the contrast of the image for better processing. The experimental results confirm that the proposed face recognition system is more robust and recognizes facial images under varying expressions and pose variations more accurately.
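
    A short sketch of the preprocessing-plus-flow pipeline is shown below, using OpenCV's histogram equalization and Farneback dense flow as generic stand-ins for the paper's constrained optical-flow formulation; the pooling grid and all parameter values are illustrative.

    ```python
    import cv2
    import numpy as np

    def expression_motion_descriptor(prev_gray, curr_gray, grid=8):
        """Equalize two 8-bit grayscale face frames to suppress illumination
        effects, compute a dense optical-flow field between them, and pool the
        flow into a coarse grid of mean motion vectors usable as an
        expression-motion descriptor."""
        prev_eq = cv2.equalizeHist(prev_gray)
        curr_eq = cv2.equalizeHist(curr_gray)
        # Farneback parameters (positional): pyr_scale, levels, winsize,
        # iterations, poly_n, poly_sigma, flags
        flow = cv2.calcOpticalFlowFarneback(prev_eq, curr_eq, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = flow.shape[:2]
        flow = flow[:h - h % grid, :w - w % grid]          # crop to a multiple
        cells = flow.reshape(grid, flow.shape[0] // grid,
                             grid, flow.shape[1] // grid, 2)
        return cells.mean(axis=(1, 3)).ravel()             # (grid*grid*2,) vector
    ```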

  3. Real-Time and Accurate Indoor Localization with Fusion Model of Wi-Fi Fingerprint and Motion Particle Filter

    Directory of Open Access Journals (Sweden)

    Xinlong Jiang

    2015-01-01

    Full Text Available With the development of Indoor Location Based Services (Indoor LBS), timely localization and smooth tracking with high accuracy are desperately needed. Unfortunately, no single method can meet the requirements of both high accuracy and real-time operation at the same time. In this paper, we propose a fusion location framework with a Particle Filter using Wi-Fi signals and motion sensors. In this framework, we use the Extreme Learning Machine (ELM) regression algorithm to predict position based on motion sensors, and we occasionally use the Wi-Fi fingerprint location result, fused through the Particle Filter, to correct the error accumulation of the motion-sensor-based location. The experiments show that the resulting trajectory is smoother and closer to the real one than that of the traditional Wi-Fi fingerprint method.
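
    The fusion idea, dead-reckoning prediction from motion sensors with occasional Wi-Fi fingerprint corrections inside a particle filter, can be sketched as below. The ELM regressor is assumed to supply `motion_delta`; the noise levels and resampling rule are illustrative.

    ```python
    import numpy as np

    def particle_filter_step(particles, weights, motion_delta, wifi_fix,
                             motion_noise=0.3, wifi_sigma=2.0):
        """One fusion step: propagate particles with the displacement predicted
        from motion sensors (e.g. the output of an ELM regressor), then weight
        them by their agreement with the latest Wi-Fi fingerprint fix, which
        bounds the accumulated drift.  Positions are 2-D coordinates in metres."""
        # prediction: dead-reckoning displacement plus process noise
        particles = particles + motion_delta + np.random.normal(
            0.0, motion_noise, particles.shape)
        # update: Gaussian likelihood of each particle given the Wi-Fi position
        d2 = np.sum((particles - wifi_fix) ** 2, axis=1)
        weights = weights * np.exp(-d2 / (2 * wifi_sigma ** 2))
        weights = weights / (weights.sum() + 1e-12)
        # resample when the effective sample size drops too low
        if 1.0 / np.sum(weights ** 2) < len(weights) / 2:
            idx = np.random.choice(len(weights), len(weights), p=weights)
            particles = particles[idx]
            weights = np.full(len(weights), 1.0 / len(weights))
        estimate = np.average(particles, axis=0, weights=weights)
        return particles, weights, estimate
    ```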

  4. A Real-time Face/Hand Tracking Method for Chinese Sign Language Recognition

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper introduces a new Chinese Sign Language recognition (CSLR) system and the real-time face and hand tracking method applied in the system. In the method, an improved agent algorithm is used to extract and track the face and hand regions. A Kalman filter is introduced to forecast the position and search rectangle, and self-adaptation of the target color is designed to counteract the effect of illumination.
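
    A constant-velocity Kalman filter that forecasts the next face/hand position and derives a search rectangle from the predicted uncertainty might look like the sketch below; the state layout, noise levels and padding factor are illustrative, not the paper's.

    ```python
    import numpy as np

    class TrackKF:
        """Constant-velocity Kalman filter that forecasts the face/hand position
        for the next frame; the predicted position and its uncertainty define
        the search rectangle for the colour-based region extraction."""
        def __init__(self, x0, y0, dt=1.0, q=1.0, r=4.0):
            self.x = np.array([x0, y0, 0.0, 0.0])            # [x, y, vx, vy]
            self.P = np.eye(4) * 10.0
            self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                               [0, 0, 1, 0], [0, 0, 0, 1]], float)
            self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
            self.Q = np.eye(4) * q
            self.R = np.eye(2) * r

        def predict_search_rect(self, pad=3.0):
            """Predict the next position and return a search rectangle
            (cx, cy, half_w, half_h) sized from the position uncertainty."""
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            half = pad * np.sqrt(np.diag(self.P)[:2])
            return self.x[0], self.x[1], half[0], half[1]

        def update(self, zx, zy):
            """Correct the filter with the measured face/hand centre."""
            z = np.array([zx, zy], float)
            y = z - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
    ```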

  5. WiFi-Based Real-Time Calibration-Free Passive Human Motion Detection

    Directory of Open Access Journals (Sweden)

    Liangyi Gong

    2015-12-01

    Full Text Available With the rapid development of WLAN technology, wireless device-free passive human detection has become a newly developing technique and holds great potential for worldwide and ubiquitous smart applications. Recently, indoor fine-grained device-free passive human motion detection based on PHY layer information has developed rapidly. Previous wireless device-free passive human detection systems either rely on deploying specialized systems with dense transmitter-receiver links or on an elaborate off-line training process, which blocks rapid deployment and weakens system robustness. In this paper, we investigate novel fine-grained real-time calibration-free device-free passive human motion detection via physical layer information, which is independent of indoor scenarios and needs no prior calibration or normal profile. We investigate the sensitivities of amplitude and phase to human motion, and discover that the phase feature is more sensitive to human motion, especially to slow human motion. Aiming at lightweight and robust device-free passive human motion detection, we develop two novel and practical schemes: the short-term averaged variance ratio (SVR) and the long-term averaged variance ratio (LVR). We realize the system design with commercial WiFi devices and evaluate it in typical multipath-rich indoor scenarios. As demonstrated in the experiments, our approach can achieve a high detection rate and a low false positive rate.
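
    A loose sketch of the two variance-ratio detectors is given below: the variance of the most recent window of CSI phase is compared with the average variance of earlier windows of the same length, with a short window for fast motion and a long window for slow motion. The window lengths, thresholds and exact normalisation are illustrative guesses, not the paper's definitions.

    ```python
    import numpy as np

    def variance_ratios(phase_series, short_win=20, long_win=200):
        """Short-term and long-term averaged variance ratios of a CSI phase
        stream: variance of the most recent window divided by the mean variance
        of earlier windows of the same length.  A ratio well above 1 suggests
        human motion without any prior calibration or normal profile."""
        x = np.asarray(phase_series, float)

        def ratio(win):
            recent = np.var(x[-win:])
            history = x[:-win]
            if len(history) < win:
                return 1.0
            past = [np.var(history[i:i + win])
                    for i in range(0, len(history) - win + 1, win)]
            return recent / (np.mean(past) + 1e-12)

        return ratio(short_win), ratio(long_win)

    def human_motion_present(phase_series, svr_thresh=3.0, lvr_thresh=2.0):
        """Flag motion when either the short-term ratio (sensitive to fast
        motion) or the long-term ratio (sensitive to slow motion) exceeds its
        threshold."""
        svr, lvr = variance_ratios(phase_series)
        return svr > svr_thresh or lvr > lvr_thresh
    ```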

  6. WiFi-Based Real-Time Calibration-Free Passive Human Motion Detection.

    Science.gov (United States)

    Gong, Liangyi; Yang, Wu; Man, Dapeng; Dong, Guozhong; Yu, Miao; Lv, Jiguang

    2015-12-21

    With the rapid development of WLAN technology, wireless device-free passive human detection has become a newly developing technique and holds great potential for worldwide and ubiquitous smart applications. Recently, indoor fine-grained device-free passive human motion detection based on PHY layer information has developed rapidly. Previous wireless device-free passive human detection systems either rely on deploying specialized systems with dense transmitter-receiver links or on an elaborate off-line training process, which blocks rapid deployment and weakens system robustness. In this paper, we investigate novel fine-grained real-time calibration-free device-free passive human motion detection via physical layer information, which is independent of indoor scenarios and needs no prior calibration or normal profile. We investigate the sensitivities of amplitude and phase to human motion, and discover that the phase feature is more sensitive to human motion, especially to slow human motion. Aiming at lightweight and robust device-free passive human motion detection, we develop two novel and practical schemes: the short-term averaged variance ratio (SVR) and the long-term averaged variance ratio (LVR). We realize the system design with commercial WiFi devices and evaluate it in typical multipath-rich indoor scenarios. As demonstrated in the experiments, our approach can achieve a high detection rate and a low false positive rate.

  7. FY1995 four-terminal-device intelligent LSI system for real-time event recognition; 1995 nendo shunji ninshiki kino wo motta 4 tanshi device chino LSI no kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    Development of an intelligent LSI system having real-time response capability for real-world events. This is accomplished by enhancing the functionality of an elemental device, employing ultra-fine-grain parallelism and merging software directly in the LSI hardware. Intelligent functions are created directly on the LSI hardware, thus enabling real-time recognition by electronic systems. The origin of human intelligence lies in the huge memory data base acquired through one's life and the very fast search mechanism to recall the 'most similar' event to the current input. Based on this principle, components of intelligent LSI systems have been developed. An analog EEPROM technology capable of storing 256 levels of data per cell without time-consuming write/verify operations has been developed. In situ monitoring of memory content during writing has enabled high-accuracy data writing. A high-speed parallel-search engine for the minimum distance vector (an associator) has been developed using neuron MOS technology. The associator has been applied to a motion vector detector as an example, which has shown very fast detection with an extremely simple hardware configuration. The association architecture has been applied to a real-time motion picture compression system, demonstrating three orders of magnitude higher performance than typical CISC processors (Pentium 166MHz). (NEDO)

  8. Applications of PCA and SVM-PSO Based Real-Time Face Recognition System

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Shieh

    2014-01-01

    Full Text Available This paper incorporates principal component analysis (PCA) with support vector machine-particle swarm optimization (SVM-PSO) for developing real-time face recognition systems. The integrated scheme aims to adopt the SVM-PSO method to improve the validity of PCA-based image recognition systems for dynamic visual perception. Face recognition in most human-robot interaction applications is accomplished by a PCA-based method because of its dimensionality reduction. However, PCA-based systems are only suitable for processing faces with the same facial expressions and/or under the same view directions. Since the facial feature selection process can be considered a problem of global combinatorial optimization in machine learning, the SVM-PSO is usually used as an optimal classifier of the system. In this paper, the PSO is used to implement feature selection, and the SVMs serve as fitness functions of the PSO for classification problems. Experimental results demonstrate that the proposed method simplifies features effectively and obtains higher classification accuracy.
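
    The feature-selection loop can be sketched as a compact binary-PSO over PCA components with cross-validated SVM accuracy as the fitness, as below; the swarm size, coefficients and sigmoid thresholding are illustrative simplifications, not the cited scheme.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def pso_feature_selection(X, y, n_particles=10, n_iter=20, seed=0):
        """Binary-PSO-style selection over the columns of X (e.g. PCA components
        from sklearn.decomposition.PCA), with cross-validated SVM accuracy as
        the fitness.  A column is selected when the sigmoid of the particle
        position exceeds 0.5."""
        rng = np.random.default_rng(seed)
        dim = X.shape[1]
        pos = rng.normal(0.0, 1.0, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest, pbest_fit = pos.copy(), np.full(n_particles, -np.inf)
        gbest, gbest_fit = pos[0].copy(), -np.inf

        def fitness(p):
            mask = 1.0 / (1.0 + np.exp(-p)) > 0.5
            if not mask.any():
                return 0.0
            return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()

        for _ in range(n_iter):
            for i in range(n_particles):
                fit = fitness(pos[i])
                if fit > pbest_fit[i]:
                    pbest_fit[i], pbest[i] = fit, pos[i].copy()
                if fit > gbest_fit:
                    gbest_fit, gbest = fit, pos[i].copy()
            # standard PSO velocity update with inertia and two attraction terms
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel
        return 1.0 / (1.0 + np.exp(-gbest)) > 0.5, gbest_fit
    ```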

  9. Wavelet transform and real-time learning method for myoelectric signal in motion discrimination

    International Nuclear Information System (INIS)

    Liu Haihua; Chen Xinhao; Chen Yaguang

    2005-01-01

    This paper discusses the applicability of the Wavelet transform for analyzing an EMG signal and discriminating motion classes. In many previous works, researchers have dealt with steady EMG and have proposed suitable analysis methods for the EMG, for example FFT and STFT. Therefore, it is difficult for the previous approaches to discriminate motions from the EMG in the different phases of muscle activity, i.e., the pre-activity, in-activity, and post-activity phases, as well as during the transition from one motion to another. In this paper, we introduce the Wavelet transform using the Coiflet mother wavelet into our real-time EMG prosthetic hand controller for discriminating motions from steady and unsteady EMG. A preliminary experiment to discriminate three hand motions from four-channel EMG in the initial pre-activity and in-activity phases is carried out to show the effectiveness of the approach. However, future research efforts are necessary to discriminate more motions more precisely.
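
    A minimal feature extractor along these lines, assuming the PyWavelets package and a Coiflet mother wavelet, is sketched below; the window length, wavelet order, decomposition level and log-variance feature are illustrative choices, not the paper's.

    ```python
    import numpy as np
    import pywt

    def emg_wavelet_features(window, wavelet="coif2", level=4):
        """Decompose one analysis window of a multi-channel EMG signal with a
        Coiflet wavelet and return the log-variance of every sub-band of every
        channel as the feature vector for motion discrimination.
        `window` has shape (n_channels, n_samples)."""
        feats = []
        for channel in window:
            coeffs = pywt.wavedec(channel, wavelet, level=level)
            feats.extend(np.log(np.var(c) + 1e-12) for c in coeffs)
        return np.asarray(feats)

    # usage sketch: features from a 256-sample window of 4-channel EMG
    # f = emg_wavelet_features(np.random.randn(4, 256))
    ```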

  10. The study of key issues about integration of GNSS and strong-motion records for real-time earthquake monitoring

    Science.gov (United States)

    Tu, Rui; Zhang, Pengfei; Zhang, Rui; Liu, Jinhai

    2016-08-01

    This paper studies the key issues in the integration of GNSS and strong-motion records for real-time earthquake monitoring. The validations show that the consistency of the coordinate systems must be considered first to exclude the systematic bias between GNSS and strong-motion data. A GNSS sampling rate of about 1-5 Hz is suggested, and the strong-motion baseline shift should be modeled with larger dynamic noise because its variation is very rapid. The initialization time for solving the baseline shift is less than one minute, and the ambiguity resolution strategy does not greatly improve the solution. Data quality is very important for the solution, so multi-frequency and multi-system observations are advised. These findings provide important guidance for real-time earthquake monitoring and early warning through the tight integration of GNSS and strong-motion records.

  11. Real-Time Observation of Internal Motion within Ultrafast Dissipative Optical Soliton Molecules

    Science.gov (United States)

    Krupa, Katarzyna; Nithyanandan, K.; Andral, Ugo; Tchofo-Dinda, Patrice; Grelu, Philippe

    2017-06-01

    Real-time access to the internal ultrafast dynamics of complex dissipative optical systems opens new explorations of pulse-pulse interactions and dynamic patterns. We present the first direct experimental evidence of the internal motion of a dissipative optical soliton molecule generated in a passively mode-locked erbium-doped fiber laser. We map the internal motion of a soliton pair molecule by using a dispersive Fourier-transform imaging technique, revealing different categories of internal pulsations, including vibration-like and phase-drifting dynamics. Our experiments agree well with numerical predictions and bring insight into the analogy between self-organized states of light and states of matter.

  12. A graphics processing unit accelerated motion correction algorithm and modular system for real-time fMRI.

    Science.gov (United States)

    Scheinost, Dustin; Hampson, Michelle; Qiu, Maolin; Bhawnani, Jitendra; Constable, R Todd; Papademetris, Xenophon

    2013-07-01

    Real-time functional magnetic resonance imaging (rt-fMRI) has recently gained interest as a possible means to facilitate the learning of certain behaviors. However, rt-fMRI is limited by processing speed and available software, and continued development is needed for rt-fMRI to progress further and become feasible for clinical use. In this work, we present an open-source rt-fMRI system for biofeedback powered by a novel Graphics Processing Unit (GPU) accelerated motion correction strategy as part of the BioImage Suite project ( www.bioimagesuite.org ). Our system contributes to the development of rt-fMRI by presenting a motion correction algorithm that provides an estimate of motion with essentially no processing delay as well as a modular rt-fMRI system design. Using empirical data from rt-fMRI scans, we assessed the quality of motion correction in this new system. The present algorithm performed comparably to standard (non real-time) offline methods and outperformed other real-time methods based on zero order interpolation of motion parameters. The modular approach to the rt-fMRI system allows the system to be flexible to the experiment and feedback design, a valuable feature for many applications. We illustrate the flexibility of the system by describing several of our ongoing studies. Our hope is that continuing development of open-source rt-fMRI algorithms and software will make this new technology more accessible and adaptable, and will thereby accelerate its application in the clinical and cognitive neurosciences.

  13. Using Opaque Image Blur for Real-Time Depth-of-Field Rendering and Image-Based Motion Blur

    DEFF Research Database (Denmark)

    Kraus, Martin

    2013-01-01

    While depth of field is an important cinematographic means, its use in real-time computer graphics is still limited by the computational costs that are necessary to achieve a sufficient image quality. Specifically, color bleeding artifacts between objects at different depths are most effectively...... that the opaque image blur can also be used to add motion blur effects to images in real time....

  14. Online 4D ultrasound guidance for real-time motion compensation by MLC tracking.

    Science.gov (United States)

    Ipsen, Svenja; Bruder, Ralf; O'Brien, Rick; Keall, Paul J; Schweikard, Achim; Poulsen, Per R

    2016-10-01

    With the trend in radiotherapy moving toward dose escalation and hypofractionation, the need for highly accurate targeting increases. While MLC tracking is already being successfully used for motion compensation of moving targets in the prostate, current real-time target localization methods rely on repeated x-ray imaging and implanted fiducial markers or electromagnetic transponders rather than direct target visualization. In contrast, ultrasound imaging can yield volumetric data in real-time (3D + time = 4D) without ionizing radiation. The authors report the first results of combining these promising techniques-online 4D ultrasound guidance and MLC tracking-in a phantom. A software framework for real-time target localization was installed directly on a 4D ultrasound station and used to detect a 2 mm spherical lead marker inside a water tank. The lead marker was rigidly attached to a motion stage programmed to reproduce nine characteristic tumor trajectories chosen from large databases (five prostate, four lung). The 3D marker position detected by ultrasound was transferred to a computer program for MLC tracking at a rate of 21.3 Hz and used for real-time MLC aperture adaption on a conventional linear accelerator. The tracking system latency was measured using sinusoidal trajectories and compensated for by applying a kernel density prediction algorithm for the lung traces. To measure geometric accuracy, static anterior and lateral conformal fields as well as a 358° arc with a 10 cm circular aperture were delivered for each trajectory. The two-dimensional (2D) geometric tracking error was measured as the difference between marker position and MLC aperture center in continuously acquired portal images. For dosimetric evaluation, VMAT treatment plans with high and low modulation were delivered to a biplanar diode array dosimeter using the same trajectories. Dose measurements with and without MLC tracking were compared to a static reference dose using 3%/3 mm and 2

  15. User-Independent Motion State Recognition Using Smartphone Sensors.

    Science.gov (United States)

    Gu, Fuqiang; Kealy, Allison; Khoshelham, Kourosh; Shang, Jianga

    2015-12-04

    The recognition of locomotion activities (e.g., walking, running, still) is important for a wide range of applications like indoor positioning, navigation, location-based services, and health monitoring. Recently, there has been a growing interest in activity recognition using accelerometer data. However, when utilizing only acceleration-based features, it is difficult to differentiate varying vertical motion states from horizontal motion states especially when conducting user-independent classification. In this paper, we also make use of the newly emerging barometer built in modern smartphones, and propose a novel feature called pressure derivative from the barometer readings for user motion state recognition, which is proven to be effective for distinguishing vertical motion states and does not depend on specific users' data. Seven types of motion states are defined and six commonly-used classifiers are compared. In addition, we utilize the motion state history and the characteristics of people's motion to improve the classification accuracies of those classifiers. Experimental results show that by using the historical information and human's motion characteristics, we can achieve user-independent motion state classification with an accuracy of up to 90.7%. In addition, we analyze the influence of the window size and smartphone pose on the accuracy.
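
    A toy version of the combined feature vector is sketched below: classic acceleration statistics plus the pressure derivative, which is near zero for horizontal motion and clearly signed during ascent or descent. The sampling rate, window handling and specific statistics are illustrative, not those of the paper.

    ```python
    import numpy as np

    def motion_state_features(accel, pressure, fs_baro=10.0):
        """Combine acceleration-based features with the pressure derivative
        proposed for separating vertical motion states (stairs, elevator) from
        horizontal ones.  `accel` is an (N, 3) accelerometer window and
        `pressure` a barometer window in hPa."""
        mag = np.linalg.norm(accel, axis=1)
        # pressure derivative in hPa/s: negative while ascending, positive
        # while descending, near zero for horizontal motion
        dp_dt = np.gradient(pressure, 1.0 / fs_baro)
        return np.array([
            mag.mean(), mag.std(),                # classic acceleration features
            np.abs(np.diff(mag)).mean(),          # jerkiness
            dp_dt.mean(), np.abs(dp_dt).max(),    # pressure-derivative features
        ])
    ```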

  16. User-Independent Motion State Recognition Using Smartphone Sensors

    Directory of Open Access Journals (Sweden)

    Fuqiang Gu

    2015-12-01

    Full Text Available The recognition of locomotion activities (e.g., walking, running, still) is important for a wide range of applications like indoor positioning, navigation, location-based services, and health monitoring. Recently, there has been a growing interest in activity recognition using accelerometer data. However, when utilizing only acceleration-based features, it is difficult to differentiate varying vertical motion states from horizontal motion states especially when conducting user-independent classification. In this paper, we also make use of the newly emerging barometer built into modern smartphones, and propose a novel feature called pressure derivative from the barometer readings for user motion state recognition, which is proven to be effective for distinguishing vertical motion states and does not depend on specific users’ data. Seven types of motion states are defined and six commonly-used classifiers are compared. In addition, we utilize the motion state history and the characteristics of people’s motion to improve the classification accuracies of those classifiers. Experimental results show that by using the historical information and human’s motion characteristics, we can achieve user-independent motion state classification with an accuracy of up to 90.7%. In addition, we analyze the influence of the window size and smartphone pose on the accuracy.

  17. Robotic real-time translational and rotational head motion correction during frameless stereotactic radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xinmin; Belcher, Andrew H.; Grelewicz, Zachary; Wiersma, Rodney D., E-mail: rwiersma@uchicago.edu [Department of Radiation and Cellular Oncology, The University of Chicago, Chicago, Illinois 60637 (United States)

    2015-06-15

    Purpose: To develop a control system to correct both translational and rotational head motion deviations in real-time during frameless stereotactic radiosurgery (SRS). Methods: A novel feedback control with a feed-forward algorithm was utilized to correct for the coupling of translation and rotation present in serial kinematic robotic systems. Input parameters for the algorithm include the real-time 6DOF target position, the frame pitch pivot point to target distance constant, and the translational and angular Linac beam off (gating) tolerance constants for patient safety. Testing of the algorithm was done using a 4D (XY Z + pitch) robotic stage, an infrared head position sensing unit and a control computer. The measured head position signal was processed and a resulting command was sent to the interface of a four-axis motor controller, through which four stepper motors were driven to perform motion compensation. Results: The control of the translation of a brain target was decoupled with the control of the rotation. For a phantom study, the corrected position was within a translational displacement of 0.35 mm and a pitch displacement of 0.15° 100% of the time. For a volunteer study, the corrected position was within displacements of 0.4 mm and 0.2° over 98.5% of the time, while it was 10.7% without correction. Conclusions: The authors report a control design approach for both translational and rotational head motion correction. The experiments demonstrated that control performance of the 4D robotic stage meets the submillimeter and subdegree accuracy required by SRS.

  18. Robotic real-time translational and rotational head motion correction during frameless stereotactic radiosurgery

    International Nuclear Information System (INIS)

    Liu, Xinmin; Belcher, Andrew H.; Grelewicz, Zachary; Wiersma, Rodney D.

    2015-01-01

    Purpose: To develop a control system to correct both translational and rotational head motion deviations in real-time during frameless stereotactic radiosurgery (SRS). Methods: A novel feedback control with a feed-forward algorithm was utilized to correct for the coupling of translation and rotation present in serial kinematic robotic systems. Input parameters for the algorithm include the real-time 6DOF target position, the frame pitch pivot point to target distance constant, and the translational and angular Linac beam off (gating) tolerance constants for patient safety. Testing of the algorithm was done using a 4D (XY Z + pitch) robotic stage, an infrared head position sensing unit and a control computer. The measured head position signal was processed and a resulting command was sent to the interface of a four-axis motor controller, through which four stepper motors were driven to perform motion compensation. Results: The control of the translation of a brain target was decoupled with the control of the rotation. For a phantom study, the corrected position was within a translational displacement of 0.35 mm and a pitch displacement of 0.15° 100% of the time. For a volunteer study, the corrected position was within displacements of 0.4 mm and 0.2° over 98.5% of the time, while it was 10.7% without correction. Conclusions: The authors report a control design approach for both translational and rotational head motion correction. The experiments demonstrated that control performance of the 4D robotic stage meets the submillimeter and subdegree accuracy required by SRS

  19. Real-time construction and visualisation of drift-free video mosaics from unconstrained camera motion

    Directory of Open Access Journals (Sweden)

    Mateusz Brzeszcz

    2015-08-01

    Full Text Available This work proposes a novel approach for real-time video mosaicking facilitating drift-free mosaic construction and visualisation, with integrated frame blending and redundancy management, that is shown to be flexible to a range of varying mosaic scenarios. The approach supports unconstrained camera motion with in-sequence loop closing, variation in camera focal distance (zoom) and recovery from video sequence breaks. Real-time performance, over extended duration sequences, is realised via novel aspects of frame management within the mosaic representation, thereby avoiding the high data redundancy associated with temporally dense, spatially overlapping video frame inputs. This managed set of image frames is visualised in real time using a dynamic mosaic representation of overlapping textured graphics primitives in place of the traditional globally constructed, and hence frequently reconstructed, mosaic image. Within this formulation, subsequent optimisation occurring during online construction can thus efficiently adjust relative frame positions via simple primitive position transforms. Effective visualisation is similarly facilitated by online inter-frame blending to overcome the illumination and colour variance associated with modern camera hardware. The evaluation illustrates overall robustness in video mosaic construction under a diverse range of conditions including indoor and outdoor environments, varying illumination and presence of in-scene motion on varying computational platforms.

  20. MO-FG-BRD-02: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MV Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Berbeco, R. [Brigham and Women’s Hospital and Dana-Farber Cancer Institute (United States)

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  1. MO-FG-BRD-04: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MR Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Low, D. [University of California Los Angeles (United States)

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  2. MO-FG-BRD-03: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: EM Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Keall, P. [University of Sydney (Australia)

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  3. MO-FG-BRD-04: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MR Tracking

    International Nuclear Information System (INIS)

    Low, D.

    2015-01-01

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow

  4. MO-FG-BRD-03: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: EM Tracking

    International Nuclear Information System (INIS)

    Keall, P.

    2015-01-01

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow

  5. MO-FG-BRD-02: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MV Tracking

    International Nuclear Information System (INIS)

    Berbeco, R.

    2015-01-01

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow

  6. WiFi-Based Real-Time Calibration-Free Passive Human Motion Detection †

    Science.gov (United States)

    Gong, Liangyi; Yang, Wu; Man, Dapeng; Dong, Guozhong; Yu, Miao; Lv, Jiguang

    2015-01-01

    With the rapid development of WLAN technology, wireless device-free passive human detection has become an emerging technique and holds great potential for worldwide and ubiquitous smart applications. Recently, indoor fine-grained device-free passive human motion detection based on PHY-layer information has developed rapidly. Previous wireless device-free passive human detection systems rely either on deploying specialized systems with dense transmitter-receiver links or on an elaborate off-line training process, which blocks rapid deployment and weakens system robustness. In this paper, we investigate a novel fine-grained, real-time, calibration-free, device-free passive human motion detection scheme based on physical-layer information, which is independent of indoor scenarios and needs neither prior calibration nor a normal profile. We investigate the sensitivities of amplitude and phase to human motion, and find that the phase feature is more sensitive to human motion, especially to slow human motion. Aiming at lightweight and robust device-free passive human motion detection, we develop two novel and practical schemes: the short-term averaged variance ratio (SVR) and the long-term averaged variance ratio (LVR). We realize the system design with commercial WiFi devices and evaluate it in typical multipath-rich indoor scenarios. As demonstrated in the experiments, our approach achieves a high detection rate and a low false positive rate. PMID:26703612
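
    The SVR and LVR schemes amount to comparing the recent variance of the CSI phase stream against a longer-term baseline. A minimal Python sketch of that idea is given below; the window lengths and threshold are illustrative assumptions, not the values used by the authors, and the CSI preprocessing is omitted.

        import numpy as np

        def variance_ratio_detector(phase, short_win=20, long_win=200, threshold=3.0):
            # Hedged sketch of a short/long-term averaged variance ratio detector.
            # `phase` is a 1-D array of calibrated CSI phase samples for one subcarrier;
            # window sizes and `threshold` are illustrative, not taken from the paper.
            if len(phase) < long_win:
                return False                              # not enough history yet
            short_var = np.var(phase[-short_win:])        # recent activity
            long_var = np.var(phase[-long_win:]) + 1e-12  # long-term baseline
            return (short_var / long_var) > threshold     # True suggests human motion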

  7. First online real-time evaluation of motion-induced 4D dose errors during radiotherapy delivery

    DEFF Research Database (Denmark)

    Ravkilde, Thomas; Skouboe, Simon; Hansen, Rune

    2018-01-01

    PURPOSE: In radiotherapy, dose deficits caused by tumor motion often far outweigh the discrepancies typically allowed in plan-specific quality assurance (QA). Yet, tumor motion is not usually included in present QA. We here present a novel method for online treatment verification by real-time motion-including 4D dose reconstruction and dose evaluation and demonstrate its use during stereotactic body radiotherapy (SBRT) delivery with and without MLC tracking. METHODS: Five volumetric modulated arc therapy (VMAT) plans were delivered with and without MLC tracking to a motion stage carrying a Delta4 dosimeter. The VMAT plans have previously been used for (non-tracking) liver SBRT with intra-treatment tumor motion recorded by kilovoltage intrafraction monitoring (KIM). The motion stage reproduced the KIM-measured tumor motions in 3D while optical monitoring guided the MLC tracking. Linac...

  8. Hand Gesture Recognition with Leap Motion

    OpenAIRE

    Du, Youchen; Liu, Shenglan; Feng, Lin; Chen, Menghui; Wu, Jie

    2017-01-01

    The recent introduction of depth cameras like Leap Motion Controller allows researchers to exploit the depth information to recognize hand gesture more robustly. This paper proposes a novel hand gesture recognition system with Leap Motion Controller. A series of features are extracted from Leap Motion tracking data, we feed these features along with HOG feature extracted from sensor images into a multi-class SVM classifier to recognize performed gesture, dimension reduction and feature weight...

  9. View Invariant Gesture Recognition using 3D Motion Primitives

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Moeslund, Thomas B.

    2008-01-01

    This paper presents a method for automatic recognition of human gestures. The method works with 3D image data from a range camera to achieve invariance to viewpoint. The recognition is based solely on motion from characteristic instances of the gestures. These instances are denoted 3D motion...

  10. A multiple model approach to respiratory motion prediction for real-time IGRT

    International Nuclear Information System (INIS)

    Putra, Devi; Haas, Olivier C L; Burnham, Keith J; Mills, John A

    2008-01-01

    Respiration induces significant movement of tumours in the vicinity of thoracic and abdominal structures. Real-time image-guided radiotherapy (IGRT) aims to adapt radiation delivery to tumour motion during irradiation. One of the main problems for achieving this objective is the presence of time lag between the acquisition of tumour position and the radiation delivery. Such time lag causes significant beam positioning errors and affects the dose coverage. A method to solve this problem is to employ an algorithm that is able to predict future tumour positions from available tumour position measurements. This paper presents a multiple model approach to respiratory-induced tumour motion prediction using the interacting multiple model (IMM) filter. A combination of two models, constant velocity (CV) and constant acceleration (CA), is used to capture respiratory-induced tumour motion. A Kalman filter is designed for each of the local models and the IMM filter is applied to combine the predictions of these Kalman filters for obtaining the predicted tumour position. The IMM filter, likewise the Kalman filter, is a recursive algorithm that is suitable for real-time applications. In addition, this paper proposes a confidence interval (CI) criterion to evaluate the performance of tumour motion prediction algorithms for IGRT. The proposed CI criterion provides a relevant measure for the prediction performance in terms of clinical applications and can be used to specify the margin to accommodate prediction errors. The prediction performance of the IMM filter has been evaluated using 110 traces of 4-minute free-breathing motion collected from 24 lung-cancer patients. The simulation study was carried out for prediction time 0.1-0.6 s with sampling rates 3, 5 and 10 Hz. It was found that the prediction of the IMM filter was consistently better than the prediction of the Kalman filter with the CV or CA model. There was no significant difference of prediction errors for the
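
    The CV component of the approach can be sketched compactly: a constant-velocity Kalman filter is run over the breathing trace and its state is extrapolated by the required lookahead. The snippet below is a minimal illustration with assumed noise values; the constant-acceleration model and the IMM mixing of the two filters are omitted.

        import numpy as np

        def cv_kalman_predict(trace, dt=0.2, lookahead=2, q=1e-3, r=1e-2):
            # Hedged sketch: constant-velocity Kalman filter over a 1-D breathing
            # trace, extrapolated `lookahead` steps ahead to cover the system latency.
            # The paper combines CV and CA models via IMM mixing; only CV is shown,
            # and the noise parameters q, r are illustrative.
            F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
            H = np.array([[1.0, 0.0]])              # only position is observed
            Q, R = q * np.eye(2), np.array([[r]])
            x, P = np.array([[trace[0]], [0.0]]), np.eye(2)
            for z in trace[1:]:
                x, P = F @ x, F @ P @ F.T + Q                       # predict
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
                x = x + K @ (np.array([[z]]) - H @ x)               # update
                P = (np.eye(2) - K @ H) @ P
            return float((np.linalg.matrix_power(F, lookahead) @ x)[0, 0])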

  11. Gesture Recognition from Data Streams of Human Motion Sensor Using Accelerated PSO Swarm Search Feature Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2015-01-01

    Full Text Available Human motion sensing technology has gained tremendous popularity, with practical applications such as video surveillance for security, hand signing, smart homes and gaming. These applications capture human motions in real time from video sensors; the data patterns are nonstationary and ever-changing. While the hardware of such motion sensing devices and their data collection process have become relatively mature, the computational challenge lies in the real-time analysis of these live feeds. In this paper we argue that traditional data mining methods fall short of accurately analyzing human activity patterns from the sensor data stream, because their algorithmic design is not adaptive to the dynamic changes in gesture motions. Their successor, known as data stream mining, is evaluated against traditional data mining through a case of gesture recognition over motion data using Microsoft Kinect sensors. Three different subjects were asked to read three comic strips and to tell the stories in front of the sensor. The data stream contains coordinates of articulation points and positions of the parts of the human body corresponding to the actions that the user performs. In particular, a novel feature selection technique using swarm search and accelerated PSO is proposed to enable fast preprocessing for inducing an improved classification model in real time. Superior results are shown in the experiment run on this empirical data stream. The contribution of this paper is a comparative study between traditional and data stream mining algorithms and the incorporation of the novel feature selection technique in a scenario where different gesture patterns are to be recognized from streaming sensor data.

  12. Self-recognition of avatar motion: how do I know it's me?

    Science.gov (United States)

    Cook, Richard; Johnston, Alan; Heyes, Cecilia

    2012-02-22

    When motion is isolated from form cues and viewed from third-person perspectives, individuals are able to recognize their own whole body movements better than those of friends. Because we rarely see our own bodies in motion from third-person viewpoints, this self-recognition advantage may indicate a contribution to perception from the motor system. Our first experiment provides evidence that recognition of self-produced and friends' motion dissociate, with only the latter showing sensitivity to orientation. Through the use of selectively disrupted avatar motion, our second experiment shows that self-recognition of facial motion is mediated by knowledge of the local temporal characteristics of one's own actions. Specifically, inverted self-recognition was unaffected by disruption of feature configurations and trajectories, but eliminated by temporal distortion. While actors lack third-person visual experience of their actions, they have a lifetime of proprioceptive, somatosensory, vestibular and first-person-visual experience. These sources of contingent feedback may provide actors with knowledge about the temporal properties of their actions, potentially supporting recognition of characteristic rhythmic variation when viewing self-produced motion. In contrast, the ability to recognize the motion signatures of familiar others may be dependent on configural topographic cues.

  13. SU-G-JeP4-12: Real-Time Organ Motion Monitoring Using Ultrasound and KV Fluoroscopy During Lung SBRT Delivery

    International Nuclear Information System (INIS)

    Omari, E; Tai, A; Li, X; Cooper, D; Lachaine, M

    2016-01-01

    Purpose: Real-time ultrasound monitoring during SBRT is advantageous for understanding and identifying motion irregularities which may cause geometric misses. In this work, we propose to utilize real-time ultrasound to track the diaphragm in conjunction with periodic kV fluoroscopy to monitor the motion of the tumor or landmarks during SBRT delivery. Methods: Transabdominal Ultrasound (TAUS) b-mode images were collected from 10 healthy volunteers using the Clarity Autoscan System (Elekta). The autoscan transducer, which has a center frequency of 5 MHz, was utilized for the scans. The acquired images were contoured using the Clarity Automatic Fusion and Contouring workstation software. Monitoring sessions of 5 minutes in length were observed and recorded. The position correlation between tumor and diaphragm could be established with periodic kV fluoroscopy acquired during treatment with Elekta XVI. We acquired data using a tissue-mimicking ultrasound phantom with embedded spheres placed on a motion stand, imaged with ultrasound and kV fluoroscopy. MIM software was utilized for image fusion. Correlation of diaphragm and target motion was also validated using 4D-MRI and 4D-CBCT. Results: The diaphragm was visualized as a hyperechoic region on the TAUS b-mode images. Volunteer set-up can be adjusted such that the TAUS probe will not interfere with the treatment beams. A segment of the diaphragm was contoured and selected as our tracking structure. Successful monitoring sessions of the diaphragm were recorded. For some volunteers, diaphragm motion over 2 times larger than the initial motion was observed during tracking. For the phantom study, we were able to register the 2D kV fluoroscopy with the US images for position comparison. Conclusion: We demonstrated the feasibility of tracking the diaphragm using real-time ultrasound. Real-time tracking can help identify irregularities in the respiratory motion, which is correlated with tumor motion. We also showed the

  14. A prototype percutaneous transhepatic cholangiography training simulator with real-time breathing motion.

    Science.gov (United States)

    Villard, P F; Vidal, F P; Hunt, C; Bello, F; John, N W; Johnson, S; Gould, D A

    2009-11-01

    We present here a simulator for interventional radiology focusing on percutaneous transhepatic cholangiography (PTC). This procedure consists of inserting a needle into the biliary tree using fluoroscopy for guidance. The requirements of the simulator have been driven by a task analysis. The three main components have been identified: the respiration, the real-time X-ray display (fluoroscopy) and the haptic rendering (sense of touch). The framework for modelling the respiratory motion is based on kinematics laws and on the Chainmail algorithm. The fluoroscopic simulation is performed on the graphic card and makes use of the Beer-Lambert law to compute the X-ray attenuation. Finally, the haptic rendering is integrated to the virtual environment and takes into account the soft-tissue reaction force feedback and maintenance of the initial direction of the needle during the insertion. Five training scenarios have been created using patient-specific data. Each of these provides the user with variable breathing behaviour, fluoroscopic display tuneable to any device parameters and needle force feedback. A detailed task analysis has been used to design and build the PTC simulator described in this paper. The simulator includes real-time respiratory motion with two independent parameters (rib kinematics and diaphragm action), on-line fluoroscopy implemented on the Graphics Processing Unit and haptic feedback to feel the soft-tissue behaviour of the organs during the needle insertion.
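
    The fluoroscopic rendering attenuates each simulated ray according to the Beer-Lambert law, I = I0 * exp(-sum(mu_i * d_i)), where mu_i and d_i are the attenuation coefficient and path length through each traversed material. A scalar CPU sketch of this step is shown below; the simulator evaluates it on the graphics card, and the coefficient values here are made up for illustration.

        import numpy as np

        def beer_lambert(i0, mu, d):
            # Transmitted intensity of one ray crossing materials with attenuation
            # coefficients `mu` (1/cm) over path lengths `d` (cm).
            return i0 * np.exp(-np.sum(np.asarray(mu) * np.asarray(d)))

        # Example with made-up coefficients: 2 cm of soft tissue plus 1 cm of bone
        print(beer_lambert(1.0, mu=[0.2, 0.5], d=[2.0, 1.0]))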

  15. Combining high-speed SVM learning with CNN feature encoding for real-time target recognition in high-definition video for ISR missions

    Science.gov (United States)

    Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus

    2017-05-01

    For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems typical electrooptical payloads provide high-definition video data which has to be exploited with respect to relevant ground targets in real-time by automatic/assisted target recognition software. Airbus Defence and Space is developing required technologies for real-time sensor exploitation since years and has combined the latest advances of Deep Convolutional Neural Networks (CNN) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining an invariant feature extraction using pre-trained CNNs and the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) and also an efficient training of large training samples. The selected CNN which is pre-trained only once on domain-extrinsic data reveals a highly invariant feature extraction. This allows for a significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using the established precision-recall diagrams, average precision and runtime figures on representative test data. A comparison to legacy target recognition approaches shows the impressive performance increase by the proposed CNN+SVM machine-learning approach and the capability of real-time high
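
    The frequency-domain SVM itself is proprietary, but the overall recipe of pairing a fixed, pre-trained CNN feature extractor with a fast linear classifier can be illustrated generically. The sketch below uses scikit-learn's LinearSVC as a stand-in classifier and a placeholder feature function; it is not the Airbus implementation.

        import numpy as np
        from sklearn.svm import LinearSVC

        def cnn_features(image_batch):
            # Placeholder for a pre-trained CNN feature extractor (e.g. the
            # penultimate layer of a network trained on domain-extrinsic data).
            # Here it simply flattens the pixels so the sketch stays self-contained.
            return image_batch.reshape(image_batch.shape[0], -1)

        # Hypothetical training chips and target-class labels
        X_train = np.random.rand(100, 32, 32)
        y_train = np.random.randint(0, 3, size=100)

        clf = LinearSVC()                      # stand-in for the frequency-domain SVM
        clf.fit(cnn_features(X_train), y_train)
        print(clf.predict(cnn_features(np.random.rand(5, 32, 32))))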

  16. Marginalised Stacked Denoising Autoencoders for Robust Representation of Real-Time Multi-View Action Recognition

    Directory of Open Access Journals (Sweden)

    Feng Gu

    2015-07-01

    Full Text Available Multi-view action recognition has gained great interest in video surveillance, human-computer interaction, and multimedia retrieval, where multiple cameras of different types are deployed to provide complementary fields of view. Fusion of multiple camera views evidently leads to more robust decisions on both tracking multiple targets and analysing complex human activities, especially where there are occlusions. In this paper, we incorporate the marginalised stacked denoising autoencoders (mSDA) algorithm to further improve the bag-of-words (BoWs) representation in terms of robustness and usefulness for multi-view action recognition. The resulting representations are fed into three simple fusion strategies as well as a multiple kernel learning algorithm at the classification stage. Based on the internal evaluation, the codebook size of BoWs and the number of layers of mSDA may not significantly affect recognition performance. According to results on three multi-view benchmark datasets, the proposed framework improves recognition performance across all three datasets and achieves record recognition performance, beating the state-of-the-art algorithms in the literature. It is also capable of performing real-time action recognition at frame rates ranging from 33 to 45 frames per second, which could be further improved by using more powerful machines in future applications.

  17. Visual recognition and tracking of objects for robot sensing

    International Nuclear Information System (INIS)

    Lowe, D.G.

    1994-01-01

    An overview is presented of a number of techniques used for recognition and motion tracking of articulated 3-D objects. With recent advances in robust methods for model-based vision and improved performance of computer systems, it will soon be possible to build low-cost, high-reliability systems for model-based motion tracking. Such systems can be expected to open up a wide range of applications in robotics by providing machines with real-time information about their environment. This paper describes a number of techniques for efficiently matching parameterized 3-D models to image features. The matching methods are robust with respect to missing and ambiguous features as well as measurement errors. Unlike most previous work on model-based motion tracking, this system provides for the integrated treatment of matching and measurement errors during motion tracking. The initial application is in a system for real-time motion tracking of articulated 3-D objects. With the future addition of an indexing component, these same techniques can also be used for general model-based recognition. The current real-time implementation is based on matching straight line segments, but some preliminary experiments on matching arbitrary curves are also described. (author)

  18. Three-dimensional, automated, real-time video system for tracking limb motion in brain-machine interface studies.

    Science.gov (United States)

    Peikon, Ian D; Fitzsimmons, Nathan A; Lebedev, Mikhail A; Nicolelis, Miguel A L

    2009-06-15

    Collection and analysis of limb kinematic data are essential components of the study of biological motion, including research into biomechanics, kinesiology, neurophysiology and brain-machine interfaces (BMIs). In particular, BMI research requires advanced, real-time systems capable of sampling limb kinematics with minimal contact to the subject's body. To answer this demand, we have developed an automated video tracking system for real-time tracking of multiple body parts in freely behaving primates. The system employs high-contrast markers painted on the animal's joints to continuously track the three-dimensional positions of their limbs during activity. Two-dimensional coordinates captured by each video camera are combined and converted to three-dimensional coordinates using a quadratic fitting algorithm. Real-time operation of the system is accomplished using direct memory access (DMA). The system tracks the markers at a rate of 52 frames per second (fps) in real-time and up to 100fps if video recordings are captured to be later analyzed off-line. The system has been tested in several BMI primate experiments, in which limb position was sampled simultaneously with chronic recordings of the extracellular activity of hundreds of cortical cells. During these recordings, multiple computational models were employed to extract a series of kinematic parameters from neuronal ensemble activity in real-time. The system operated reliably under these experimental conditions and was able to compensate for marker occlusions that occurred during natural movements. We propose that this system could also be extended to applications that include other classes of biological motion.
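
    The quadratic fitting step maps the two cameras' 2-D marker coordinates to calibrated 3-D positions. A least-squares sketch of such a mapping is shown below; the exact polynomial terms used by the system are not published, so the feature expansion here is an assumption.

        import numpy as np

        def quad_features(uv):
            # Quadratic expansion of the stacked 2-D coordinates from the two
            # cameras, uv = [u1, v1, u2, v2]; illustrative choice of terms.
            u = np.asarray(uv, dtype=float)
            cross = np.outer(u, u)[np.triu_indices(u.size)]   # squares and cross terms
            return np.concatenate(([1.0], u, cross))

        def fit_quadratic_mapping(UV, XYZ):
            # Fit weights W so that quad_features(uv) @ W approximates the 3-D
            # position; UV is (n, 4) pixel data, XYZ is (n, 3) calibration points.
            A = np.vstack([quad_features(r) for r in UV])
            W, *_ = np.linalg.lstsq(A, XYZ, rcond=None)
            return W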

  19. An analog VLSI real time optical character recognition system based on a neural architecture

    International Nuclear Information System (INIS)

    Bo, G.; Caviglia, D.; Valle, M.

    1999-01-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system

  20. An analog VLSI real time optical character recognition system based on a neural architecture

    Energy Technology Data Exchange (ETDEWEB)

    Bo, G.; Caviglia, D.; Valle, M. [Genoa Univ. (Italy). Dip. of Biophysical and Electronic Engineering

    1999-03-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system.

  1. Leap Motion Device Used to Control a Real Anthropomorphic Gripper

    Directory of Open Access Journals (Sweden)

    Ionel Staretu

    2016-06-01

    Full Text Available This paper presents for the first time the use of the Leap Motion device to control an anthropomorphic gripper with five fingers. First, a description of the Leap Motion device is presented, highlighting its main functional characteristics, followed by testing of its use for capturing the movements of a human hand's fingers in different configurations. Next, the HandCommander soft module and the Interface Controller application are described. The HandCommander is a software module created to facilitate interaction between a human hand and the GraspIT virtual environment, and the Interface Controller application is required to send motion data to the virtual environment and to test the communication protocol. For the test, a prototype of an anthropomorphic gripper with five fingers was made, including a proper hardware system of command and control, which is briefly presented in this paper. Following the creation of the prototype, the command system performance test was conducted under real conditions, evaluating the recognition efficiency of the objects to be gripped and the efficiency of the command and control strategies for the gripping process. The gripping test is exemplified by the gripping of an object, such as a screw spanner. It was found that the command system, both in terms of capturing human hand gestures with the Leap Motion device and effective object gripping, is operational. Suggestive figures are presented as examples.

  2. Real-time motion compensated patient positioning and non-rigid deformation estimation using 4-D shape priors.

    Science.gov (United States)

    Wasza, Jakob; Bauer, Sebastian; Hornegger, Joachim

    2012-01-01

    Over the last years, range imaging (RI) techniques have been proposed for patient positioning and respiration analysis in motion compensation. Yet, current RI based approaches for patient positioning employ rigid-body transformations, thus neglecting free-form deformations induced by respiratory motion. Furthermore, RI based respiration analysis relies on non-rigid registration techniques with run-times of several seconds. In this paper we propose a real-time framework based on RI to perform respiratory motion compensated positioning and non-rigid surface deformation estimation in a joint manner. The core of our method are pre-procedurally obtained 4-D shape priors that drive the intra-procedural alignment of the patient to the reference state, simultaneously yielding a rigid-body table transformation and a free-form deformation accounting for respiratory motion. We show that our method outperforms conventional alignment strategies by a factor of 3.0 and 2.3 in the rotation and translation accuracy, respectively. Using a GPU based implementation, we achieve run-times of 40 ms.

  3. Modeling of the motion of automobile elastic wheel in real-time for creation of wheeled vehicles motion control electronic systems

    Science.gov (United States)

    Balakina, E. V.; Zotov, N. M.; Fedin, A. P.

    2018-02-01

    Modeling the motion of a vehicle's elastic wheel in real time is used when constructing models for wheeled-vehicle motion control electronic systems, automobile stand-simulators, etc. The accuracy and reliability of simulating the wheel motion parameters in real time when rolling with slip under given road conditions are determined not only by the choice of model, but also by the inaccuracy and instability of the numerical calculation. It is established that this inaccuracy and instability depend on the integration step size and the numerical method used. These inaccuracies and instabilities during wheel rolling with slip were analysed and recommendations for reducing them were developed. It is established that the total allowable range of integration steps is 0.001-0.005 s; the strongest instability appears in the calculation of the angular and linear accelerations of the wheel; the weakest instability appears in the calculation of the translational velocity of the wheel and the displacement of the wheel centre; and the instability is smaller at large slip angles and on more slippery surfaces. A new average-acceleration method is suggested, which significantly reduces (by up to 100%) the instability of the solution in the calculation of all motion parameters of the elastic wheel for different braking conditions and for the entire range of integration steps. The results can be applied to the selection of control algorithms in vehicle motion control electronic systems and in testing stand-simulators.

  4. Real-time object recognition in multidimensional images based on joined extended structural tensor and higher-order tensor decomposition methods

    Science.gov (United States)

    Cyganek, Boguslaw; Smolka, Bogdan

    2015-02-01

    In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by projecting patterns into tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal, the novelty of this paper is to first build the Extended Structural Tensor representation from the intensity signal, which conveys information on signal intensities as well as on higher-order statistics of the input signals. In this way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are built based on the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measuring the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor-based methods require large memory and computational resources. However, recent advances in multi-core microprocessors and graphics cards allow real-time operation of these multidimensional methods, as is shown and analyzed in this paper on real examples of object detection in digital images.
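
    The classification step, measuring how well a test pattern is represented in each class's subspace, can be illustrated with an ordinary SVD in place of the full HOSVD. The sketch below is that simplified linear stand-in and omits the Extended Structural Tensor construction entirely.

        import numpy as np

        def class_subspace(train_patterns, rank=5):
            # Hedged stand-in for a class subspace: left singular vectors of the
            # matrix whose columns are the vectorized training patterns. The paper
            # derives its bases from an HOSVD of higher-order pattern tensors.
            A = np.stack([p.ravel() for p in train_patterns], axis=1)
            U, _, _ = np.linalg.svd(A, full_matrices=False)
            return U[:, :rank]

        def subspace_residual(test_pattern, U):
            # Residual norm after projection; the class with the smallest
            # residual over its subspace is selected.
            x = test_pattern.ravel()
            return np.linalg.norm(x - U @ (U.T @ x))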

  5. A New Profile Shape Matching Stereovision Algorithm for Real-time Human Pose and Hand Gesture Recognition

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2014-02-01

    Full Text Available This paper presents a new profile shape matching stereovision algorithm that is designed to extract 3D information in real time. This algorithm obtains 3D information by matching profile intensity shapes of each corresponding row of the stereo image pair. It detects the corresponding matching patterns of the intensity profile rather than the intensity values of individual pixels or pixels in a small neighbourhood. This approach reduces the effect of the intensity and colour variations caused by lighting differences. As with all real-time vision algorithms, there is always a trade-off between accuracy and processing speed. This algorithm achieves a balance between the two to produce accurate results for real-time applications. To demonstrate its performance, the proposed algorithm is tested for human pose and hand gesture recognition to control a smart phone and an entertainment system.
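
    Row-wise matching of profile shapes rather than raw intensities can be approximated with zero-mean normalized cross-correlation along each rectified row, which is insensitive to brightness and colour offsets. The sketch below is a rough illustration only; the window size and disparity range are assumptions, and the paper's actual profile-segmentation step is not reproduced.

        import numpy as np

        def row_disparity(left_row, right_row, x, win=9, max_disp=64):
            # Estimate the disparity at column x of a rectified row pair by
            # comparing the shape of local intensity profiles (zero-mean,
            # unit-variance windows) over candidate disparities.
            half = win // 2
            ref = left_row[x - half:x + half + 1].astype(float)
            ref = (ref - ref.mean()) / (ref.std() + 1e-9)
            best_d, best_score = 0, -np.inf
            for d in range(max_disp):
                if x - d - half < 0:
                    break
                cand = right_row[x - d - half:x - d + half + 1].astype(float)
                cand = (cand - cand.mean()) / (cand.std() + 1e-9)
                score = float(ref @ cand)
                if score > best_score:
                    best_score, best_d = score, d
            return best_d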

  6. Threats of Password Pattern Leakage Using Smartwatch Motion Recognition Sensors

    Directory of Open Access Journals (Sweden)

    Jihun Kim

    2017-06-01

    Full Text Available Thanks to the development of Internet of Things (IoT) technologies, wearable markets have been growing rapidly. Smartwatches are arguably the most representative product in wearable markets and involve various hardware technologies in order to overcome the limitations of small hardware; motion recognition sensors are a representative example of these technologies. However, smartwatches and the motion recognition sensors worn by users may pose security threats of password pattern leakage. In the present paper, password patterns entered by users are inferred experimentally from motion recognition sensor data, and the results and their accuracy are verified.

  7. Comparison of sEMG-Based Feature Extraction and Motion Classification Methods for Upper-Limb Movement

    Directory of Open Access Journals (Sweden)

    Shuxiang Guo

    2015-04-01

    Full Text Available The surface electromyography (sEMG) technique is proposed for muscle activation detection and intuitive control of prostheses or robot arms. Motion recognition is widely used to map sEMG signals to the target motions. One of the main factors preventing the implementation of this kind of method for real-time applications is the unsatisfactory motion recognition rate and time consumption. The purpose of this paper is to compare eight combinations of four feature extraction methods (Root Mean Square (RMS), Detrended Fluctuation Analysis (DFA), Weight Peaks (WP), and Muscular Model (MM)) and two classifiers (Neural Networks (NN) and Support Vector Machine (SVM)), for the task of mapping sEMG signals to eight upper-limb motions, to find out the relation between these methods and propose a proper combination to solve this issue. Seven subjects participated in the experiment and six muscles of the upper-limb were selected to record sEMG signals. The experimental results showed that NN classifier obtained the highest recognition accuracy rate (88.7%) during the training process while SVM performed better in real-time experiments (85.9%). For time consumption, SVM took less time than NN during the training process but needed more time for real-time computation. Among the four feature extraction methods, WP had the highest recognition rate for the training process (97.7%) while MM performed the best during real-time tests (94.3%). The combination of MM and NN is recommended for strict real-time applications while a combination of MM and SVM will be more suitable when time consumption is not a key requirement.
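
    Of the compared features, RMS is the simplest to reproduce: the root mean square of each channel over a sliding analysis window. The sketch below pairs it with a scikit-learn SVM on synthetic data; the window length, channel count and labels are illustrative, and the paper's MM feature is not reproduced.

        import numpy as np
        from sklearn.svm import SVC

        def rms_features(emg_window):
            # RMS of each sEMG channel over one analysis window.
            # emg_window: (n_samples, n_channels) array.
            return np.sqrt(np.mean(np.square(emg_window), axis=0))

        # Hypothetical data: 200 windows of 6-channel sEMG, 8 motion classes
        windows = np.random.randn(200, 256, 6)
        labels = np.random.randint(0, 8, size=200)

        X = np.array([rms_features(w) for w in windows])
        clf = SVC(kernel='rbf').fit(X, labels)
        print(clf.predict(X[:3]))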

  8. MO-FG-BRD-01: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: Introduction and KV Tracking

    International Nuclear Information System (INIS)

    Fahimian, B.

    2015-01-01

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow

  9. MO-FG-BRD-01: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: Introduction and KV Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Fahimian, B. [Stanford University (United States)

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  10. Real-time prediction and gating of respiratory motion using an extended Kalman filter and Gaussian process regression

    International Nuclear Information System (INIS)

    Bukhari, W; Hong, S-M

    2015-01-01

    Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR + , implements a gating function without pre-specifying a particular region of the patient’s breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR + algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR + implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR + in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR + . The experimental results show that the EKF-GPR + algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR + reduces the patient-wise RMS error to 37%, 39% and 42
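
    The gating rule itself is simple once the GPR predictive variance is available: the GPR mean corrects the LCM-EKF prediction, and the beam is paused whenever the variance exceeds a threshold. The schematic sketch below assumes scalar inputs and an illustrative threshold; the LCM-EKF and the sparse GPR components are not reproduced.

        def gate_beam(ekf_prediction, gpr_mean, gpr_var, var_threshold=1.5):
            # Hedged sketch of the EKF-GPR+ idea: correct the model-based EKF
            # prediction with the model-free GPR mean, and gate the beam off when
            # the GPR predictive variance signals a likely large prediction error.
            # Units (e.g. mm, mm^2) and the threshold are illustrative.
            corrected = ekf_prediction + gpr_mean
            beam_on = gpr_var <= var_threshold
            return corrected, beam_on

        # Example call with made-up numbers
        position, beam_on = gate_beam(ekf_prediction=12.3, gpr_mean=0.8, gpr_var=2.1)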

  11. Real-time prediction and gating of respiratory motion using an extended Kalman filter and Gaussian process regression

    Science.gov (United States)

    Bukhari, W.; Hong, S.-M.

    2015-01-01

    Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR+, implements a gating function without pre-specifying a particular region of the patient’s breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR+ algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR+ implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR+ in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR+. The experimental results show that the EKF-GPR+ algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR+ reduces the patient-wise RMS error to 37%, 39% and 42% in

  12. Real-time prediction and gating of respiratory motion using an extended Kalman filter and Gaussian process regression.

    Science.gov (United States)

    Bukhari, W; Hong, S-M

    2015-01-07

    Motion-adaptive radiotherapy aims to deliver a conformal dose to the target tumour with minimal normal tissue exposure by compensating for tumour motion in real time. The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting and gating respiratory motion that utilizes a model-based and a model-free Bayesian framework by combining them in a cascade structure. The algorithm, named EKF-GPR(+), implements a gating function without pre-specifying a particular region of the patient's breathing cycle. The algorithm first employs an extended Kalman filter (LCM-EKF) to predict the respiratory motion and then uses a model-free Gaussian process regression (GPR) to correct the error of the LCM-EKF prediction. The GPR is a non-parametric Bayesian algorithm that yields predictive variance under Gaussian assumptions. The EKF-GPR(+) algorithm utilizes the predictive variance from the GPR component to capture the uncertainty in the LCM-EKF prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification allows us to pause the treatment beam over such instances. EKF-GPR(+) implements the gating function by using simple calculations based on the predictive variance with no additional detection mechanism. A sparse approximation of the GPR algorithm is employed to realize EKF-GPR(+) in real time. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPR(+). The experimental results show that the EKF-GPR(+) algorithm effectively reduces the prediction error in a root-mean-square (RMS) sense by employing the gating function, albeit at the cost of a reduced duty cycle. As an example, EKF-GPR(+) reduces the patient-wise RMS error to 37%, 39% and

  13. Using an external surrogate for predictor model training in real-time motion management of lung tumors

    Energy Technology Data Exchange (ETDEWEB)

    Rottmann, Joerg; Berbeco, Ross [Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States)

    2014-12-15

    Purpose: Precise prediction of respiratory motion is a prerequisite for real-time motion compensation techniques such as beam, dynamic couch, or dynamic multileaf collimator tracking. Collection of tumor motion data to train the prediction model is required for most algorithms. To avoid exposure of patients to additional dose from imaging during this procedure, the feasibility of training a linear respiratory motion prediction model with an external surrogate signal is investigated and its performance benchmarked against training the model with tumor positions directly. Methods: The authors implement a lung tumor motion prediction algorithm based on linear ridge regression that is suitable to overcome system latencies up to about 300 ms. Its performance is investigated on a data set of 91 patient breathing trajectories recorded from fiducial marker tracking during radiotherapy delivery to the lung of ten patients. The expected 3D geometric error is quantified as a function of predictor lookahead time, signal sampling frequency and history vector length. Additionally, adaptive model retraining is evaluated, i.e., repeatedly updating the prediction model after initial training. Training length for this is gradually increased with incoming (internal) data availability. To assess practical feasibility model calculation times as well as various minimum data lengths for retraining are evaluated. Relative performance of model training with external surrogate motion data versus tumor motion data is evaluated. However, an internal–external motion correlation model is not utilized, i.e., prediction is solely driven by internal motion in both cases. Results: Similar prediction performance was achieved for training the model with external surrogate data versus internal (tumor motion) data. Adaptive model retraining can substantially boost performance in the case of external surrogate training while it has little impact for training with internal motion data. A minimum
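
    The predictor itself is linear ridge regression over a history vector of recent position samples, trained to output the position one lookahead interval ahead; the trace used for training may come from the external surrogate or from the tumor directly. The sketch below uses illustrative history length, lookahead and regularization values, not those tuned in the study.

        import numpy as np

        def train_ridge_predictor(trace, hist_len=20, lookahead=10, lam=1e-2):
            # Fit weights w so that w @ [1, x_{t-hist_len+1..t}] approximates
            # x_{t+lookahead}; `trace` is a 1-D motion signal (surrogate or tumor).
            X, y = [], []
            for t in range(hist_len - 1, len(trace) - lookahead):
                X.append(np.concatenate(([1.0], trace[t - hist_len + 1:t + 1])))
                y.append(trace[t + lookahead])
            X, y = np.asarray(X), np.asarray(y)
            w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
            return w

        def predict_ahead(w, recent_history):
            # recent_history: the latest hist_len samples of the signal.
            return float(w @ np.concatenate(([1.0], recent_history)))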

  14. Real-Time Correction By Optical Tracking with Integrated Geometric Distortion Correction for Reducing Motion Artifacts in fMRI

    Science.gov (United States)

    Rotenberg, David J.

    Artifacts caused by head motion are a substantial source of error in fMRI that limits its use in neuroscience research and clinical settings. Real-time scan-plane correction by optical tracking has been shown to correct slice misalignment and non-linear spin-history artifacts; however, residual artifacts due to dynamic magnetic field non-uniformity may remain in the data. A recently developed correction technique, PLACE, can correct for absolute geometric distortion using the complex image data from two EPI images with slightly shifted k-space trajectories. We present a correction approach that integrates PLACE into a real-time scan-plane update system driven by optical tracking, applied to a tissue-equivalent phantom undergoing complex motion and to an fMRI finger-tapping experiment with overt head motion to induce dynamic field non-uniformity. Experiments suggest that including volume-by-volume geometric distortion correction by PLACE can suppress dynamic geometric distortion artifacts in a phantom and in vivo and provide more robust activation maps.

  15. A real-time dynamic-MLC control algorithm for delivering IMRT to targets undergoing 2D rigid motion in the beam's eye view

    International Nuclear Information System (INIS)

    McMahon, Ryan; Berbeco, Ross; Nishioka, Seiko; Ishikawa, Masayori; Papiez, Lech

    2008-01-01

    An MLC control algorithm for delivering intensity modulated radiation therapy (IMRT) to targets that are undergoing two-dimensional (2D) rigid motion in the beam's eye view (BEV) is presented. The goal of this method is to deliver 3D-derived fluence maps over a moving patient anatomy. Target motion measured prior to delivery is first used to design a set of planned dynamic-MLC (DMLC) sliding-window leaf trajectories. During actual delivery, the algorithm relies on real-time feedback to compensate for target motion that does not agree with the motion measured during planning. The methodology is based on an existing one-dimensional (1D) algorithm that uses on-the-fly intensity calculations to appropriately adjust the DMLC leaf trajectories in real-time during exposure delivery [McMahon et al., Med. Phys. 34, 3211-3223 (2007)]. To extend the 1D algorithm's application to 2D target motion, a real-time leaf-pair shifting mechanism has been developed. Target motion that is orthogonal to leaf travel is tracked by appropriately shifting the positions of all MLC leaves. The performance of the tracking algorithm was tested for a single beam of a fractionated IMRT treatment, using a clinically derived intensity profile and a 2D target trajectory based on measured patient data. Comparisons were made between 2D tracking, 1D tracking, and no tracking. The impact of the tracking lag time and the frequency of real-time imaging were investigated. A study of the dependence of the algorithm's performance on the level of agreement between the motion measured during planning and delivery was also included. Results demonstrated that tracking both components of the 2D motion (i.e., parallel and orthogonal to leaf travel) results in delivered fluence profiles that are superior to those that track the component of motion that is parallel to leaf travel alone. Tracking lag time effects may lead to relatively large intensity delivery errors compared to the other sources of error investigated

  16. Real-time optical tracking for motion compensated irradiation with scanned particle beams at CNAO

    Energy Technology Data Exchange (ETDEWEB)

    Fattori, G., E-mail: giovanni.fattori@psi.ch [Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy); Seregni, M. [Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy); Pella, A. [Centro Nazionale di Adroterapia Oncologica (CNAO), Strada Campeggi 53, 27100 Pavia (Italy); Riboldi, M. [Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy); Capasso, L. [Istituto Nazionale di Fisica Nucleare, Section of Torino, Torino 10125 (Italy); Donetti, M. [Centro Nazionale di Adroterapia Oncologica (CNAO), Strada Campeggi 53, 27100 Pavia (Italy); Istituto Nazionale di Fisica Nucleare, Section of Torino, Torino 10125 (Italy); Ciocca, M. [Centro Nazionale di Adroterapia Oncologica (CNAO), Strada Campeggi 53, 27100 Pavia (Italy); Giordanengo, S. [Istituto Nazionale di Fisica Nucleare, Section of Torino, Torino 10125 (Italy); Pullia, M. [Centro Nazionale di Adroterapia Oncologica (CNAO), Strada Campeggi 53, 27100 Pavia (Italy); Marchetto, F. [Istituto Nazionale di Fisica Nucleare, Section of Torino, Torino 10125 (Italy); Baroni, G. [Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy); Centro Nazionale di Adroterapia Oncologica (CNAO), Strada Campeggi 53, 27100 Pavia (Italy)

    2016-08-11

    Purpose: We describe the interface developed at the National Center for Oncological Hadrontherapy in Pavia to provide the dose delivery systems with real time respiratory motion information captured with an optical tracking system. An experimental study is presented to assess the technical feasibility of the implemented organ motion compensation framework, by analyzing the film response when irradiated with proton beams. Methods: The motion monitoring solution is based on a commercial hardware for motion capture running in-house developed software for respiratory signal processing. As part of the integration, the latency of data transmission to the dose delivery system was experimentally quantified and accounted for by signal time prediction. A respiratory breathing phantom is presented and used to test tumor tracking based either on the optical measurement of the target position or internal-external correlation models and beam gating, as driven by external surrogates. Beam tracking was tested considering the full target motion excursion (25×18 mm), whereas it is limited to 6×2 mm in the gating window. The different motion mitigation strategies were evaluated by comparing the experimental film responses with respect to static irradiation conditions. Dose inhomogeneity (IC) and conformity (CI) are provided as main indexes for dose quality assessment considering the irradiation in static condition as reference. Results: We measured 20.6 ms overall latency for motion signal processing. Dose measurements showed that beam tracking largely preserved dose homogeneity and conformity, showing maximal IC and CI variations limited to +0.10 and −0.01 with respect to the static reference. Gating resulted in slightly larger discrepancies (ΔIC=+0.20, ΔCI=−0.13) due to uncompensated residual motion in the gating window. Conclusions: The preliminary beam tracking and gating results verified the functionality of the prototypal solution for organ motion compensation based on

  17. Real-time optical tracking for motion compensated irradiation with scanned particle beams at CNAO

    International Nuclear Information System (INIS)

    Fattori, G.; Seregni, M.; Pella, A.; Riboldi, M.; Capasso, L.; Donetti, M.; Ciocca, M.; Giordanengo, S.; Pullia, M.; Marchetto, F.; Baroni, G.

    2016-01-01

    Purpose: We describe the interface developed at the National Center for Oncological Hadrontherapy in Pavia to provide the dose delivery systems with real time respiratory motion information captured with an optical tracking system. An experimental study is presented to assess the technical feasibility of the implemented organ motion compensation framework, by analyzing the film response when irradiated with proton beams. Methods: The motion monitoring solution is based on a commercial hardware for motion capture running in-house developed software for respiratory signal processing. As part of the integration, the latency of data transmission to the dose delivery system was experimentally quantified and accounted for by signal time prediction. A respiratory breathing phantom is presented and used to test tumor tracking based either on the optical measurement of the target position or internal-external correlation models and beam gating, as driven by external surrogates. Beam tracking was tested considering the full target motion excursion (25×18 mm), whereas it is limited to 6×2 mm in the gating window. The different motion mitigation strategies were evaluated by comparing the experimental film responses with respect to static irradiation conditions. Dose inhomogeneity (IC) and conformity (CI) are provided as main indexes for dose quality assessment considering the irradiation in static condition as reference. Results: We measured 20.6 ms overall latency for motion signal processing. Dose measurements showed that beam tracking largely preserved dose homogeneity and conformity, showing maximal IC and CI variations limited to +0.10 and −0.01 with respect to the static reference. Gating resulted in slightly larger discrepancies (ΔIC=+0.20, ΔCI=−0.13) due to uncompensated residual motion in the gating window. Conclusions: The preliminary beam tracking and gating results verified the functionality of the prototypal solution for organ motion compensation based on

  18. Cooperating the BDS, GPS, GLONASS and strong-motion observations for real-time deformation monitoring

    Science.gov (United States)

    Tu, Rui; Liu, Jinhai; Lu, Cuixian; Zhang, Rui; Zhang, Pengfei; Lu, Xiaochun

    2017-06-01

    An approach for combining BDS, GPS, GLONASS and strong-motion (SM) records for real-time deformation monitoring is presented and validated with experimental data. In this approach, the Global Navigation Satellite System (GNSS) data are processed with real-time kinematic positioning technology to retrieve the GNSS displacement, and the SM data are calibrated to obtain the raw acceleration; a Kalman filter is then applied to combine the GNSS displacement and the SM acceleration into integrated displacement, velocity and acceleration. The validation results show that the advantages of each sensor are fully complementary: for the SM, the baseline shifts are estimated and corrected, and high-precision velocity and displacement are recovered, while the noise of the GNSS can be reduced by using the SM-derived high-resolution acceleration, so that high-precision, broad-band deformation information can be obtained in real time. The proposed method shows promising potential for deformation monitoring of high buildings, dams, bridges and landslides.
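
    The fusion step can be sketched as a Kalman filter over a [displacement, velocity] state in which the calibrated SM acceleration drives the prediction and each GNSS displacement epoch provides the measurement update. The sketch below assumes a single component, equal sampling rates for both sensors and illustrative noise values; the baseline-shift estimation of the SM record is not reproduced.

        import numpy as np

        def fuse_gnss_sm(gnss_disp, sm_acc, dt=0.02, q=1e-4, r=1e-4):
            # Hedged sketch: acceleration-driven prediction of [disp, vel],
            # updated by the GNSS displacement at every epoch.
            F = np.array([[1.0, dt], [0.0, 1.0]])
            B = np.array([[0.5 * dt * dt], [dt]])
            H = np.array([[1.0, 0.0]])
            Q, R = q * np.eye(2), np.array([[r]])
            x, P = np.zeros((2, 1)), np.eye(2)
            fused = []
            for z, a in zip(gnss_disp, sm_acc):
                x, P = F @ x + B * a, F @ P @ F.T + Q             # predict with SM acceleration
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
                x = x + K @ (np.array([[z]]) - H @ x)             # update with GNSS displacement
                P = (np.eye(2) - K @ H) @ P
                fused.append(float(x[0, 0]))
            return fused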

  19. Structural Motion Grammar for Universal Use of Leap Motion: Amusement and Functional Contents Focused

    Directory of Open Access Journals (Sweden)

    Byungseok Lee

    2018-01-01

    Full Text Available Motions performed with the Leap Motion controller are not standardized, even though its use in media content is spreading. Each content item defines its own motions, thereby creating confusion for users. Therefore, to alleviate user inconvenience, this study categorized the motions commonly used in Amusement and Functional Contents and, based on this classification, defined a Structural Motion Grammar that can be used universally. To this end, the Motion Lexicon, a fundamental motion vocabulary, was defined, and an algorithm that enables real-time recognition of the Structural Motion Grammar was developed. Moreover, the proposed method was verified by user evaluation and quantitative comparison tests.

  20. Changing predictions, stable recognition: Children’s representations of downward incline motion

    OpenAIRE

    Hast, Michael; Howe, Christine

    2017-01-01

    Various studies to date have demonstrated that children hold ill-conceived expressed beliefs about the physical world, such as that one ball will fall faster than another because it is heavier. At the same time, they also demonstrate accurate recognition of dynamic events. How these representations relate is still unresolved. This study examined 5- to 11-year-olds’ (N = 130) predictions and recognition of motion down inclines. Predictions were typically in error, matching previous work, but children...

  1. Sampling-based real-time motion planning under state uncertainty for autonomous micro-aerial vehicles in GPS-denied environments.

    Science.gov (United States)

    Li, Dachuan; Li, Qing; Cheng, Nong; Song, Jingyan

    2014-11-18

    This paper presents a real-time motion planning approach for autonomous vehicles with complex dynamics and state uncertainty. The approach is motivated by the motion planning problem for autonomous vehicles navigating in GPS-denied dynamic environments, which involves non-linear and/or non-holonomic vehicle dynamics, incomplete state estimates, and constraints imposed by uncertain and cluttered environments. To address the above motion planning problem, we propose an extension of the closed-loop rapid belief trees, the closed-loop random belief trees (CL-RBT), which incorporates predictions of the position estimation uncertainty, using a factored form of the covariance provided by the Kalman filter-based estimator. The proposed motion planner operates by incrementally constructing a tree of dynamically feasible trajectories using the closed-loop prediction, while selecting candidate paths with low uncertainty using efficient covariance update and propagation. The algorithm can operate in real-time, continuously providing the controller with feasible paths for execution, enabling the vehicle to account for dynamic and uncertain environments. Simulation results demonstrate that the proposed approach can generate feasible trajectories that reduce the state estimation uncertainty, while handling complex vehicle dynamics and environment constraints.
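
    The key planning ingredient described above is that each candidate trajectory can be scored by the state-estimate covariance it is predicted to yield. For a linearized Gaussian system this covariance recursion does not depend on the realized measurement values, so it can be evaluated before execution; the sketch below shows that scoring step only (all matrices are toy assumptions, and the belief-tree construction itself is omitted).

```python
import numpy as np

def propagate_covariance(P0, A_seq, Q, C, R):
    """Propagate the state-estimate covariance along a candidate trajectory.
    A_seq is a list of linearized dynamics matrices along the path (assumed);
    each step applies a Kalman-filter prediction and measurement update."""
    P = P0.copy()
    for A in A_seq:
        P = A @ P @ A.T + Q                      # prediction (process noise)
        S = C @ P @ C.T + R                      # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)           # Kalman gain
        P = (np.eye(P.shape[0]) - K @ C) @ P     # measurement update
    return P

def path_uncertainty_cost(P0, A_seq, Q, C, R):
    """Scalar score used to prefer low-uncertainty candidate paths."""
    return np.trace(propagate_covariance(P0, A_seq, Q, C, R))

# Toy 2D example (illustrative matrices only).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
C = np.array([[1.0, 0.0]])        # only position is observed
R = np.array([[0.05]])
P0 = 0.5 * np.eye(2)
print(path_uncertainty_cost(P0, [A] * 20, Q, C, R))
```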

  2. Automatic data-driven real-time segmentation and recognition of surgical workflow.

    Science.gov (United States)

    Dergachyova, Olga; Bouget, David; Huaulmé, Arnaud; Morandi, Xavier; Jannin, Pierre

    2016-06-01

    With the intention of extending the perception and action of surgical staff inside the operating room, the medical community has expressed a growing interest towards context-aware systems. Requiring an accurate identification of the surgical workflow, such systems make use of data from a diverse set of available sensors. In this paper, we propose a fully data-driven and real-time method for segmentation and recognition of surgical phases using a combination of video data and instrument usage signals, exploiting no prior knowledge. We also introduce new validation metrics for assessment of workflow detection. The segmentation and recognition are based on a four-stage process. Firstly, during the learning time, a Surgical Process Model is automatically constructed from data annotations to guide the following process. Secondly, data samples are described using a combination of low-level visual cues and instrument information. Then, in the third stage, these descriptions are employed to train a set of AdaBoost classifiers capable of distinguishing one surgical phase from others. Finally, AdaBoost responses are used as input to a Hidden semi-Markov Model in order to obtain a final decision. On the MICCAI EndoVis challenge laparoscopic dataset we achieved a precision and a recall of 91 % in classification of 7 phases. Compared to the analysis based on one data type only, a combination of visual features and instrument signals allows better segmentation, reduction of the detection delay and discovery of the correct phase order.
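
    A rough sketch of the third and fourth stages described above (one-vs-rest AdaBoost classifiers followed by temporal smoothing) is given below. For brevity it replaces the hidden semi-Markov model with a plain HMM Viterbi pass, and the descriptors, transition matrix and phase count are synthetic assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_phase_classifiers(X, y, n_phases):
    """One binary AdaBoost classifier per surgical phase (one-vs-rest).
    X holds per-sample visual + instrument descriptors, y holds phase labels."""
    clfs = []
    for p in range(n_phases):
        clf = AdaBoostClassifier(n_estimators=50)
        clf.fit(X, (y == p).astype(int))
        clfs.append(clf)
    return clfs

def phase_scores(clfs, X):
    """Per-frame score of each phase (probability of the positive class)."""
    return np.column_stack([c.predict_proba(X)[:, 1] for c in clfs])

def viterbi_smooth(scores, trans, prior):
    """Simplified temporal model: a plain HMM Viterbi pass instead of the
    hidden semi-Markov model used in the paper."""
    T, K = scores.shape
    logp = np.log(scores + 1e-9)
    delta = np.log(prior + 1e-9) + logp[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = delta[:, None] + np.log(trans + 1e-9)
        back[t] = np.argmax(cand, axis=0)
        delta = cand.max(axis=0) + logp[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Tiny synthetic usage (assumption: 3 phases, random descriptors).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = rng.integers(0, 3, size=300)
clfs = train_phase_classifiers(X, y, n_phases=3)
S = phase_scores(clfs, X)
trans = np.full((3, 3), 0.05) + 0.85 * np.eye(3)
print(viterbi_smooth(S, trans, prior=np.full(3, 1 / 3))[:10])
```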

  3. LabVIEW Real-Time

    CERN Multimedia

    CERN. Geneva; Flockhart, Ronald Bruce; Seppey, P

    2003-01-01

    With LabVIEW Real-Time, you can choose from a variety of RT Series hardware. Add a real-time data acquisition component into a larger measurement and automation system or create a single stand-alone real-time solution with data acquisition, signal conditioning, motion control, RS-232, GPIB instrumentation, and Ethernet connectivity. With the various hardware options, you can create a system to meet your precise needs today, while the modularity of the system means you can add to the solution as your system requirements grow. If you are interested in Reliable and Deterministic systems for Measurement and Automation, you will profit from this seminar. Agenda: Real-Time Overview LabVIEW RT Hardware Platforms - Linux on PXI Programming with LabVIEW RT Real-Time Operating Systems concepts Timing Applications Data Transfer

  4. Reaction Time of Facial Affect Recognition in Asperger's Disorder for Cartoon and Real, Static and Moving Faces

    Science.gov (United States)

    Miyahara, Motohide; Bray, Anne; Tsujii, Masatsugu; Fujita, Chikako; Sugiyama, Toshiro

    2007-01-01

    This study used a choice reaction-time paradigm to test the perceived impairment of facial affect recognition in Asperger's disorder. Twenty teenagers with Asperger's disorder and 20 controls were compared with respect to the latency and accuracy of response to happy or disgusted facial expressions, presented in cartoon or real images and in…

  5. Real-time image processing II; Proceedings of the Meeting, Orlando, FL, Apr. 16-18, 1990

    Science.gov (United States)

    Juday, Richard D. (Editor)

    1990-01-01

    The present conference discusses topics in the fields of feature extraction and implementation, filter and correlation algorithms, optical correlators, high-level algorithms, and digital image processing for ranging and remote driving. Attention is given to a nonlinear filter derived from topological image features, IR image segmentation through iterative thresholding, orthogonal subspaces for correlation masking, composite filter trees and image recognition via binary search, and features of matrix-coherent optical image processing. Also discussed are multitarget tracking via hybrid joint transform correlator, binary joint Fourier transform correlator considerations, global image processing operations on parallel architectures, real-time implementation of a differential range finder, and real-time binocular stereo range and motion detection.

  6. Real-time image processing II; Proceedings of the Meeting, Orlando, FL, Apr. 16-18, 1990

    Science.gov (United States)

    Juday, Richard D.

    The present conference discusses topics in the fields of feature extraction and implementation, filter and correlation algorithms, optical correlators, high-level algorithms, and digital image processing for ranging and remote driving. Attention is given to a nonlinear filter derived from topological image features, IR image segmentation through iterative thresholding, orthogonal subspaces for correlation masking, composite filter trees and image recognition via binary search, and features of matrix-coherent optical image processing. Also discussed are multitarget tracking via hybrid joint transform correlator, binary joint Fourier transform correlator considerations, global image processing operations on parallel architectures, real-time implementation of a differential range finder, and real-time binocular stereo range and motion detection.

  7. Unsupervised markerless 3-DOF motion tracking in real time using a single low-budget camera.

    Science.gov (United States)

    Quesada, Luis; León, Alejandro J

    2012-10-01

    Motion tracking is a critical task in many computer vision applications. Existing motion tracking techniques require either a great amount of knowledge of the target object or specific hardware. These requirements discourage the widespread adoption of commercial applications based on motion tracking. In this paper, we present a novel three-degrees-of-freedom motion tracking system that needs no knowledge of the target object and that only requires a single low-budget camera of the kind installed in most computers and smartphones. Our system estimates, in real time, the three-dimensional position of a nonmodeled unmarked object that may be nonrigid, nonconvex, partially occluded, self-occluded, or motion blurred, given that it is opaque, evenly colored, sufficiently contrasting with the background in each frame, and that it does not rotate. Our system is also able to determine the most relevant object to track on the screen. Our proposal does not impose additional constraints; therefore, it allows a market-wide implementation of applications that require the estimation of the three position degrees of freedom of an object.

  8. Two-dimensional statistical linear discriminant analysis for real-time robust vehicle-type recognition

    Science.gov (United States)

    Zafar, I.; Edirisinghe, E. A.; Acar, S.; Bez, H. E.

    2007-02-01

    Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are solely based on Automatic License Plate Recognition (ALPR) systems. Several car MMR systems have been proposed in the literature. However, these approaches are based on feature detection algorithms that can perform sub-optimally under adverse lighting and/or occlusion conditions. In this paper we propose a real-time, appearance-based car MMR approach using Two-Dimensional Linear Discriminant Analysis (2D-LDA) that is capable of addressing this limitation. We provide experimental results to analyse the proposed algorithm's robustness under varying illumination and occlusion conditions. We have shown that the best performance with the proposed 2D-LDA based car MMR approach is obtained when the eigenvectors of lower significance are ignored. For the given database of 200 car images of 25 different make-model classifications, a best accuracy of 91% was obtained with the 2D-LDA approach. We use a direct Principal Component Analysis (PCA) based approach as a benchmark to compare and contrast the performance of the proposed 2D-LDA approach to car MMR. We conclude that in general the 2D-LDA based algorithm surpasses the performance of the PCA-based approach.
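
    For illustration, a generic 2D-LDA projection (scatter matrices built directly on the image matrices, keeping only the leading eigenvectors, as the abstract recommends) followed by nearest-neighbour matching could look like the sketch below; the data shapes, component counts and classifier are assumptions, not the authors' exact pipeline.

```python
import numpy as np

def two_d_lda(images, labels, n_components):
    """2D-LDA in the column direction: between- and within-class scatter are
    built from the image matrices themselves (no vectorization), and images
    are later projected as A @ W."""
    images = np.asarray(images, dtype=float)          # shape (N, h, w)
    classes = np.unique(labels)
    global_mean = images.mean(axis=0)
    w = global_mean.shape[1]
    Sb = np.zeros((w, w))
    Sw = np.zeros((w, w))
    for c in classes:
        Ac = images[labels == c]
        Mc = Ac.mean(axis=0)
        d = Mc - global_mean
        Sb += len(Ac) * d.T @ d
        for A in Ac:
            e = A - Mc
            Sw += e.T @ e
    # Keep only the leading eigenvectors; the abstract reports that dropping
    # the less significant ones improved accuracy.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:n_components]].real       # (w, n_components)

def project(images, W):
    return np.asarray([A @ W for A in images])          # (N, h, n_components)

# Nearest-neighbour matching on the projected features (synthetic demo data).
rng = np.random.default_rng(1)
train = rng.normal(size=(50, 32, 32))
y = rng.integers(0, 5, size=50)
W = two_d_lda(train, y, n_components=4)
feats = project(train, W).reshape(len(train), -1)
query = project(train[:1], W).reshape(1, -1)
print(y[np.argmin(np.linalg.norm(feats - query, axis=1))])
```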

  9. Management of threatened abortion with real-time sonography.

    Science.gov (United States)

    Anderson, S G

    1980-02-01

    Real-time sonography was used to evaluate 158 patients with threatened abortion. Fetal motion was first detected during the seventh gestational week and with increasing frequency thereafter in 73 patients with viable pregnancies continuing to term. Only 2 of 65 patients who aborted demonstrated fetal motion. The presence or absence of fetal motion was most reliable after 7 weeks' gestation for establishing a prognosis for a given pregnancy. Seventy-two of 74 pregnancies with fetal motion continued to term, whereas 63 of 64 pregnancies without fetal motion aborted. A method for using real-time sonography in the management of threatened abortion is presented.

  10. Attention, biological motion, and action recognition.

    Science.gov (United States)

    Thompson, James; Parasuraman, Raja

    2012-01-02

    Interacting with others in the environment requires that we perceive and recognize their movements and actions. Neuroimaging and neuropsychological studies have indicated that a number of brain regions, particularly the superior temporal sulcus, are involved in a number of processes essential for action recognition, including the processing of biological motion and processing the intentions of actions. We review the behavioral and neuroimaging evidence suggesting that while some aspects of action recognition might be rapid and effective, they are not necessarily automatic. Attention is particularly important when visual information about actions is degraded or ambiguous, or if competing information is present. We present evidence indicating that neural responses associated with the processing of biological motion are strongly modulated by attention. In addition, behavioral and neuroimaging evidence shows that drawing inferences from the actions of others is attentionally demanding. The role of attention in action observation has implications for everyday social interactions and workplace applications that depend on observing, understanding and interpreting actions. Published by Elsevier Inc.

  11. Kernel density estimation-based real-time prediction for respiratory motion

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Effective delivery of adaptive radiotherapy requires locating the target with high precision in real time. System latency caused by data acquisition, streaming, processing and delivery control necessitates prediction. Prediction is particularly challenging for highly mobile targets such as thoracic and abdominal tumors undergoing respiration-induced motion. The complexity of the respiratory motion makes it difficult to build and justify explicit models. In this study, we honor the intrinsic uncertainties in respiratory motion and propose a statistical treatment of the prediction problem. Instead of asking for a deterministic covariate-response map and a unique estimate value for future target position, we aim to obtain a distribution of the future target position (response variable) conditioned on the observed historical sample values (covariate variable). The key idea is to estimate the joint probability distribution (pdf) of the covariate and response variables using an efficient kernel density estimation method. Then, the problem of identifying the distribution of the future target position reduces to identifying the section in the joint pdf based on the observed covariate. Subsequently, estimators are derived based on this estimated conditional distribution. This probabilistic perspective has some distinctive advantages over existing deterministic schemes: (1) it is compatible with potentially inconsistent training samples, i.e., when close covariate variables correspond to dramatically different response values; (2) it is not restricted by any prior structural assumption on the map between the covariate and the response; (3) the two-stage setup allows much freedom in choosing statistical estimates and provides a full nonparametric description of the uncertainty for the resulting estimate. We evaluated the prediction performance on ten patient RPM traces, using the root mean squared difference between the prediction and the observed value normalized by the
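
    Under a Gaussian product kernel, the conditional-mean estimate derived from the joint kernel density reduces to a weighted (Nadaraya-Watson) average of past responses; the sketch below shows only that special case, with synthetic data and an arbitrary bandwidth, not the estimator choices studied in the paper.

```python
import numpy as np

def kde_predict(history, query, bandwidth=1.0):
    """Kernel-density-style prediction of the next respiratory position.
    `history` holds (covariate, response) rows, e.g. a few recent samples as
    covariate and the sample one prediction-length ahead as response. With a
    Gaussian kernel the conditional mean is a weighted average of responses."""
    covariates = history[:, :-1]
    responses = history[:, -1]
    d2 = np.sum((covariates - query) ** 2, axis=1)
    weights = np.exp(-0.5 * d2 / bandwidth ** 2)
    weights /= weights.sum()
    return float(np.dot(weights, responses))

# Synthetic trace: covariate = 3 most recent samples, response = 0.4 s ahead
# (sampling rate, lag and bandwidth are illustrative assumptions).
t = np.arange(0, 60, 0.1)
x = 10 * np.sin(2 * np.pi * t / 4.0)
lag, horizon = 3, 4
rows = [np.r_[x[i - lag:i], x[i + horizon]]
        for i in range(lag, len(x) - horizon)]
history = np.array(rows)
print(kde_predict(history[:-1], query=history[-1, :-1], bandwidth=2.0))
```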

  12. [Clinical analysis of real-time iris recognition guided LASIK with femtosecond laser flap creation for myopic astigmatism].

    Science.gov (United States)

    Jie, Li-ming; Wang, Qian; Zheng, Lin

    2013-08-01

    To assess the safety, efficacy, stability and changes in cylindrical degree and axis after real-time iris recognition guided LASIK with femtosecond laser flap creation for the correction of myopic astigmatism. Retrospective case series. This observational case study comprised 136 patients (249 eyes) with myopic astigmatism in a 6-month trial. Patients were divided into 3 groups according to the pre-operative cylindrical degree: Group 1, -0.75 to -1.25 D, 106 eyes; Group 2, -1.50 to -2.25 D, 89 eyes; and Group 3, -2.50 to -5.00 D, 54 eyes. They were also grouped by pre-operative astigmatism axis: Group A, with-the-rule astigmatism (WTRA), 156 eyes; Group B, against-the-rule astigmatism (ATRA), 64 eyes; Group C, oblique axis astigmatism, 29 eyes. After the femtosecond laser flap was created, real-time iris-recognition-guided excimer ablation was performed. The uncorrected visual acuity, the best-corrected visual acuity, and the degree and axis of astigmatism were analyzed and compared at 1, 3 and 6 months postoperatively. Static iris recognition detected an eye cyclotorsional misalignment of 2.37° ± 2.16°; dynamic iris recognition detected an intraoperative cyclotorsional misalignment range of 0-4.3°. Six months after operation, the uncorrected visual acuity was 0.5 or better in 100% of cases. No eye lost ≥ 1 line of best spectacle-corrected visual acuity (BSCVA). Six months after operation, the uncorrected vision of 227 eyes surpassed the BSCVA, and 87 eyes gained 1 line of BSCVA. The degree of astigmatism decreased from (-1.72 ± 0.77) D (pre-operation) to (-0.29 ± 0.25) D (post-operation). Six months after operation, WTRA decreased from 157 eyes (pre-operation) to 43 eyes (post-operation), ATRA decreased from 63 eyes (pre-operation) to 28 eyes (post-operation), oblique astigmatism increased from 29 eyes to 34 eyes, and 144 eyes became non-astigmatic. The real-time iris recognition guided LASIK with femtosecond laser flap creation can compensate deviation from eye cyclotorsion, decrease

  13. Real-time change detection in data streams with FPGAs

    International Nuclear Information System (INIS)

    Vega, J.; Dormido-Canto, S.; Cruz, T.; Ruiz, M.; Barrera, E.; Castro, R.; Murari, A.; Ochando, M.

    2014-01-01

    Highlights: • Automatic recognition of changes in data streams of multidimensional signals. • Detection algorithm based on testing exchangeability on-line. • Real-time and off-line applicability. • Real-time implementation in FPGAs. - Abstract: The automatic recognition of changes in data streams is useful in both real-time and off-line data analyses. This article shows several effective change-detecting algorithms (based on martingales) and describes their real-time applicability in the data acquisition systems through the use of Field Programmable Gate Arrays (FPGA). The automatic event recognition system is absolutely general and it does not depend on either the particular event to detect or the specific data representation (waveforms, images or multidimensional signals). The developed approach provides good results for change detection in both the temporal evolution of profiles and the two-dimensional spatial distribution of volume emission intensity. The average computation time in the FPGA is 210 μs per profile
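
    A generic software sketch of the martingale test of exchangeability that underlies such detectors is shown below: a power martingale over conformal p-values flags a change when it exceeds a threshold. It is meant only to illustrate the idea; the strangeness measure, threshold and parameters are assumptions, and the FPGA implementation is of course quite different.

```python
import numpy as np

def martingale_change_detector(stream, epsilon=0.92, threshold=20.0, seed=0):
    """On-line test of exchangeability with a power martingale. Returns the
    index at which a change is flagged, or None if the stream looks i.i.d."""
    rng = np.random.default_rng(seed)
    strangeness = []
    log_m = 0.0
    for i, x in enumerate(stream):
        mean = np.mean(stream[:i]) if i > 0 else x
        s = abs(x - mean)                       # strangeness of the new sample
        past = np.asarray(strangeness)
        if len(past):
            # randomized conformal p-value of the current strangeness
            p = (np.sum(past > s)
                 + rng.uniform() * (np.sum(past == s) + 1)) / (len(past) + 1)
        else:
            p = rng.uniform()
        strangeness.append(s)
        # power martingale update: M *= epsilon * p**(epsilon - 1)
        log_m += np.log(epsilon) + (epsilon - 1.0) * np.log(max(p, 1e-12))
        if log_m > np.log(threshold):
            return i
    return None

# Synthetic stream: the mean shifts from 0 to 3 at sample 200 (illustrative).
rng = np.random.default_rng(1)
data = np.r_[rng.normal(0, 1, 200), rng.normal(3, 1, 200)]
print(martingale_change_detector(data))
```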

  14. Real-time tracking of tumor motions and deformations along the leaf travel direction with the aid of a synchronized dynamic MLC leaf sequencer

    International Nuclear Information System (INIS)

    Tacke, Martin; Nill, Simeon; Oelfke, Uwe

    2007-01-01

    Advanced radiotherapeutical techniques like intensity-modulated radiation therapy (IMRT) are based on an accurate knowledge of the location of the radiation target. An accurate dose delivery, therefore, requires a method to account for the inter- and intrafractional target motion and the target deformation occurring during the course of treatment. A method to compensate in real time for changes in the position and shape of the target is the use of a dynamic multileaf collimator (MLC) technique which can be devised to automatically arrange the treatment field according to real-time image information. So far, various approaches proposed for leaf sequencers have had to rely on a priori known target motion data and have aimed to optimize the overall treatment time. Since for a real-time dose delivery the target motion is not known a priori, the velocity range of the leading leaves is restricted by a safety margin to c x v max while the following leaves can travel with an additional maximum speed to compensate for the respective target movements. Another aspect to be considered is the tongue and groove effect. A uniform radiation field can only be achieved if the leaf movements are synchronized. The method presented in this note is the first to combine a synchronizing sequencer and real-time tracking with a dynamic MLC. The newly developed algorithm is capable of online optimizing the leaf velocities by minimizing the overall treatment time while at the same time it synchronizes the leaf trajectories in order to avoid the tongue and groove effect. The simultaneous synchronization is performed with the help of an online-calculated mid-time leaf trajectory which is common for all leaf pairs and which takes into account the real-time target motion and deformation information. (note)

  15. Real-time tracking of tumor motions and deformations along the leaf travel direction with the aid of a synchronized dynamic MLC leaf sequencer.

    Science.gov (United States)

    Tacke, Martin; Nill, Simeon; Oelfke, Uwe

    2007-11-21

    Advanced radiotherapeutical techniques like intensity-modulated radiation therapy (IMRT) are based on an accurate knowledge of the location of the radiation target. An accurate dose delivery, therefore, requires a method to account for the inter- and intrafractional target motion and the target deformation occurring during the course of treatment. A method to compensate in real time for changes in the position and shape of the target is the use of a dynamic multileaf collimator (MLC) technique which can be devised to automatically arrange the treatment field according to real-time image information. So far, various approaches proposed for leaf sequencers have had to rely on a priori known target motion data and have aimed to optimize the overall treatment time. Since for a real-time dose delivery the target motion is not known a priori, the velocity range of the leading leaves is restricted by a safety margin to c x v(max) while the following leaves can travel with an additional maximum speed to compensate for the respective target movements. Another aspect to be considered is the tongue and groove effect. A uniform radiation field can only be achieved if the leaf movements are synchronized. The method presented in this note is the first to combine a synchronizing sequencer and real-time tracking with a dynamic MLC. The newly developed algorithm is capable of online optimizing the leaf velocities by minimizing the overall treatment time while at the same time it synchronizes the leaf trajectories in order to avoid the tongue and groove effect. The simultaneous synchronization is performed with the help of an online-calculated mid-time leaf trajectory which is common for all leaf pairs and which takes into account the real-time target motion and deformation information.

  16. Real-time 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy.

    Science.gov (United States)

    Furtado, Hugo; Steiner, Elisabeth; Stock, Markus; Georg, Dietmar; Birkfellner, Wolfgang

    2013-10-01

    Intra-fractional respiratory motion during radiotherapy leads to a larger planning target volume (PTV). Real-time tumor motion tracking by two-dimensional (2D)/3D registration using on-board kilo-voltage (kV) imaging can allow for a reduction of the PTV though motion along the imaging beam axis cannot be resolved using only one projection image. We present a retrospective patient study investigating the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. Material and methods. We used data from 10 patients suffering from non-small cell lung cancer (NSCLC) undergoing stereotactic body radiation therapy (SBRT) lung treatment. For each patient we acquired a planning computed tomography (CT) and sequences of kV and MV images during treatment. We compared the accuracy of motion tracking in six degrees-of-freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. Results. Motion along cranial-caudal direction could accurately be extracted when using only the kV sequence but in AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 2.9 mm to 1.5 mm and the motion along AP was successfully extracted. Mean registration time was 188 ms. Conclusion. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in six DOF and is suitable for real-time tumor motion tracking with a conventional LINAC.

  17. Real-time ultrasound-tagging to track the 2D motion of the common carotid artery wall in vivo

    Energy Technology Data Exchange (ETDEWEB)

    Zahnd, Guillaume, E-mail: g.zahnd@erasmusmc.nl [Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus MC, Rotterdam 3000 CA (Netherlands); Salles, Sébastien; Liebgott, Hervé; Vray, Didier [Université de Lyon, CREATIS, CNRS UMR 5220, INSERM U1044, INSA-Lyon, Université Lyon 1, Lyon 69100 (France); Sérusclat, André [Department of Radiology, Louis Pradel Hospital, Lyon 69500 (France); Moulin, Philippe [Department of Endocrinology, Louis Pradel Hospital, Hospices Civils de Lyon, Université Lyon 1, Lyon 69100, France and INSERM UMR 1060, Lyon 69500 (France)

    2015-02-15

    Purpose: Tracking the motion of biological tissues represents an important issue in the field of medical ultrasound imaging. However, the longitudinal component of the motion (i.e., perpendicular to the beam axis) remains more challenging to extract due to the rather coarse resolution cell of ultrasound scanners along this direction. The aim of this study is to introduce a real-time beamforming strategy dedicated to acquire tagged images featuring a distinct pattern in the objective to ease the tracking. Methods: Under the conditions of the Fraunhofer approximation, a specific apodization function was applied to the received raw channel data, in real-time during image acquisition, in order to introduce a periodic oscillations pattern along the longitudinal direction of the radio frequency signal. Analytic signals were then extracted from the tagged images, and subpixel motion tracking of the intima–media complex was subsequently performed offline, by means of a previously introduced bidimensional analytic phase-based estimator. Results: The authors’ framework was applied in vivo on the common carotid artery from 20 young healthy volunteers and 6 elderly patients with high atherosclerosis risk. Cine-loops of tagged images were acquired during three cardiac cycles. Evaluated against reference trajectories manually generated by three experienced analysts, the mean absolute tracking error was 98 ± 84 μm and 55 ± 44 μm in the longitudinal and axial directions, respectively. These errors corresponded to 28% ± 23% and 13% ± 9% of the longitudinal and axial amplitude of the assessed motion, respectively. Conclusions: The proposed framework enables tagged ultrasound images of in vivo tissues to be acquired in real-time. Such unconventional beamforming strategy contributes to improve tracking accuracy and could potentially benefit to the interpretation and diagnosis of biomedical images.

  18. Real-time speech gisting for ATC applications

    Science.gov (United States)

    Dunkelberger, Kirk A.

    1995-06-01

    Command and control within the ATC environment remains primarily voice-based. Hence, automatic real-time, speaker-independent, continuous speech recognition (CSR) has many obvious applications and implied benefits to the ATC community: automated target tagging, aircraft compliance monitoring, controller training, automatic alarm disabling, display management, and many others. However, while current state-of-the-art CSR systems provide upwards of 98% word accuracy in laboratory environments, recent low-intrusion experiments in ATCT environments demonstrated less than 70% word accuracy in spite of significant investments in recognizer tuning. Acoustic channel irregularities and controller/pilot grammar variations impact current CSR algorithms at their weakest points. It will be shown herein, however, that real-time context- and environment-sensitive gisting can provide key command phrase recognition rates of greater than 95% using the same low-intrusion approach. The combination of real-time inexact syntactic pattern recognition techniques and a tight integration of CSR, gisting, and ATC database accessor system components is the key to these high phrase recognition rates. A system concept for real-time gisting in the ATC context is presented herein. After establishing an application context, the discussion presents a minimal CSR technology context and then focuses on the gisting mechanism, desirable interfaces into the ATCT database environment, and data and control flow within the prototype system. Results of recent tests for a subset of the functionality are presented together with suggestions for further research.

  19. Motion Primitives for Action Recognition

    DEFF Research Database (Denmark)

    Fihl, Preben; Holte, Michael Boelstoft; Moeslund, Thomas B.

    2007-01-01

    The number of potential applications has made automatic recognition of human actions a very active research area. Different approaches have been followed based on trajectories through some state space. In this paper we also model an action as a trajectory through a state space, but we represent the actions as a sequence of temporal isolated instances, denoted primitives. These primitives are each defined by four features extracted from motion images. The primitives are recognized in each frame based on a trained classifier resulting in a sequence of primitives. From this sequence we recognize different temporal actions using a probabilistic Edit Distance method. The method is tested on different actions with and without noise and the results show recognition rates of 88.7% and 85.5%, respectively.

  20. A parallelizable real-time motion tracking algorithm with applications to ultrasonic strain imaging

    International Nuclear Information System (INIS)

    Jiang, J; Hall, T J

    2007-01-01

    Ultrasound-based mechanical strain imaging systems utilize signals from conventional diagnostic ultrasound systems to image tissue elasticity contrast that provides new diagnostically valuable information. Previous works (Hall et al 2003 Ultrasound Med. Biol. 29 427, Zhu and Hall 2002 Ultrason. Imaging 24 161) demonstrated that uniaxial deformation with minimal elevation motion is preferred for breast strain imaging, and that real-time strain image feedback to operators is important to accomplish this goal. The work reported here enhances the real-time speckle tracking algorithm with two significant modifications. One fundamental change is that the proposed algorithm is a column-based algorithm (a column is defined by a line of data parallel to the ultrasound beam direction, i.e. an A-line), as opposed to a row-based algorithm (a row is defined by a line of data perpendicular to the ultrasound beam direction). Then, displacement estimates from its adjacent columns provide good guidance for motion tracking in a significantly reduced search region to reduce computational cost. Consequently, the process of displacement estimation can be naturally split into at least two separate tasks, computed in parallel, propagating outward from the center of the region of interest (ROI). The proposed algorithm has been implemented and optimized in a Windows® system as a stand-alone ANSI C++ program. Results of preliminary tests, using numerical and tissue-mimicking phantoms, and in vivo tissue data, suggest that high contrast strain images can be consistently obtained with frame rates (10 frames s⁻¹) that exceed those of our previous methods

  1. Facial Expression Emotion Detection for Real-Time Embedded Systems

    Directory of Open Access Journals (Sweden)

    Saeed Turabzadeh

    2018-01-01

    Full Text Available Recently, real-time facial expression recognition has attracted increasing research interest. In this study, an automatic real-time facial expression recognition system was built and tested. Firstly, the system and model were designed and tested in a MATLAB environment, followed by a MATLAB Simulink environment capable of recognizing continuous facial expressions in real time at a rate of 1 frame per second, implemented on a desktop PC. They were evaluated on a public dataset, and the experimental results were promising. The dataset and labels used in this study were made from videos, recorded twice from five participants while they watched a video. Secondly, in order to run in real time at a faster frame rate, the facial expression recognition system was built on a field-programmable gate array (FPGA). The camera sensor used in this work was a Digilent VmodCAM stereo camera module. The model was built on the Atlys™ Spartan-6 FPGA development board. It can continuously perform emotional state recognition in real time at a frame rate of 30 frames per second. A graphical user interface was designed to display the participant’s video and the two-dimensional predicted emotion labels at the same time.

  2. An experiment of a 3D real-time robust visual odometry for intelligent vehicles

    OpenAIRE

    Rodriguez Florez , Sergio Alberto; Fremont , Vincent; Bonnifait , Philippe

    2009-01-01

    Vision systems are nowadays very promising for many on-board vehicle perception functionalities, like obstacle detection/recognition and ego-localization. In this paper, we present a 3D visual odometric method that uses a stereo-vision system to estimate the 3D ego-motion of a vehicle in outdoor road conditions. In order to run in real-time, the studied technique is sparse, meaning that it makes use of feature points that are tracked during several frames. A robust sc...

  3. Real-time sonography in obstetrics.

    Science.gov (United States)

    Anderson, S G

    1978-03-01

    Three hundred fifty real-time scans were performed on pregnant women for various indications. Placental localization was satisfactorily obtained in 173 of 174 studies. Estimates of fetal gestation from directly measured biparietal diameter were within ±2 weeks of actual gestation in 153 of 172 (88.9%) measurements. The presence or absence of fetal motion and cardiac activity established a diagnosis of fetal viability or fetal death in 32 patients after the first trimester. Accurate diagnosis was made in 52 of 57 patients with threatened abortions, and two of these errors occurred in scans performed before completion of the eighth postmenstrual week. Because of the ability to demonstrate fetal motion, real-time sonography should have many applications in obstetrics.

  4. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    International Nuclear Information System (INIS)

    Min Yugang; Santhanam, Anand; Ruddy, Bari H; Neelakkantan, Harini; Meeks, Sanford L; Kupelian, Patrick A

    2010-01-01

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan using Pinnacle Treatment Planning System, Phillips, for one of the 3DCTs of the 4DCT and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  5. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    Energy Technology Data Exchange (ETDEWEB)

    Min Yugang; Santhanam, Anand; Ruddy, Bari H [University of Central Florida, FL (United States); Neelakkantan, Harini; Meeks, Sanford L [M D Anderson Cancer Center Orlando, FL (United States); Kupelian, Patrick A, E-mail: anand.santhanam@orlandohealth.co [Department of Radiation Oncology, University of California, Los Angeles, CA (United States)

    2010-09-07

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan using Pinnacle Treatment Planning System, Phillips, for one of the 3DCTs of the 4DCT and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  6. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion.

    Science.gov (United States)

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H; Meeks, Sanford L; Kupelian, Patrick A

    2010-09-07

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan using Pinnacle Treatment Planning System, Phillips, for one of the 3DCTs of the 4DCT and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  7. Changing predictions, stable recognition: Children's representations of downward incline motion.

    Science.gov (United States)

    Hast, Michael; Howe, Christine

    2017-11-01

    Various studies to date have demonstrated that children hold ill-conceived expressed beliefs about the physical world, such as that one ball will fall faster than another because it is heavier. At the same time, they also demonstrate accurate recognition of dynamic events. How these representations relate is still unresolved. This study examined 5- to 11-year-olds' (N = 130) predictions and recognition of motion down inclines. Predictions were typically in error, matching previous work, but children largely recognized correct events as correct and rejected incorrect ones. The results also demonstrate that, while predictions change with increasing age, recognition shows signs of stability. The findings provide further support for a hybrid model of object representations and argue in favour of stable core cognition existing alongside developmental changes. Statement of contribution What is already known on this subject? Children's predictions of physical events show limitations in accuracy. Their recognition of such events suggests children may use different knowledge sources in their reasoning. What the present study adds? Predictions fluctuate more strongly than recognition, suggesting stable core cognition. But recognition also shows some fluctuation, arguing for a hybrid model of knowledge representation. © 2017 The British Psychological Society.

  8. Patient cloth with motion recognition sensors based on flexible piezoelectric materials.

    Science.gov (United States)

    Youngsu Cha; Kihyuk Nam; Doik Kim

    2017-07-01

    In this paper, we introduce a patient cloth for position monitoring using motion recognition sensors based on flexible piezoelectric materials. The motion recognition sensors are embedded in three parts, which are the knee, hip and back, in the patient cloth. We use polyvinylidene fluoride (PVDF) as the flexible piezoelectric material for the sensors. By using the piezoelectric effect of the PVDF, we detect electrical signals when the cloth is bent or extended. We analyze the sensing values for our human motions by processing the sensor outputs in a custom-made program. Specifically, we focus on the transitions between standing and sitting, and sitting knee extension and supine position, which are important motions for patient monitoring.

  9. A multi-mode real-time terrain parameter estimation method for wheeled motion control of mobile robots

    Science.gov (United States)

    Li, Yuankai; Ding, Liang; Zheng, Zhizhong; Yang, Qizhi; Zhao, Xingang; Liu, Guangjun

    2018-05-01

    For motion control of wheeled planetary rovers traversing on deformable terrain, real-time terrain parameter estimation is critical in modeling the wheel-terrain interaction and compensating the effect of wheel slipping. A multi-mode real-time estimation method is proposed in this paper to achieve accurate terrain parameter estimation. The proposed method is composed of an inner layer for real-time filtering and an outer layer for online update. In the inner layer, sinkage exponent and internal frictional angle, which have higher sensitivity than that of the other terrain parameters to wheel-terrain interaction forces, are estimated in real time by using an adaptive robust extended Kalman filter (AREKF), whereas the other parameters are fixed with nominal values. The inner layer result can help synthesize the current wheel-terrain contact forces with adequate precision, but has limited prediction capability for time-variable wheel slipping. To improve estimation accuracy of the result from the inner layer, an outer layer based on recursive Gauss-Newton (RGN) algorithm is introduced to refine the result of real-time filtering according to the innovation contained in the history data. With the two-layer structure, the proposed method can work in three fundamental estimation modes: EKF, REKF and RGN, making the method applicable for flat, rough and non-uniform terrains. Simulations have demonstrated the effectiveness of the proposed method under three terrain types, showing the advantages of introducing the two-layer structure.

  10. Development of real-time motion capture system for 3D on-line games linked with virtual character

    Science.gov (United States)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    With the development of 3-D virtual reality, motion tracking is becoming an essential part of entertainment, medical, sports, education and industrial applications. Virtual human characters in digital animation and game applications have been controlled by interfacing devices such as mice, joysticks and MIDI sliders. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  11. Real-time prediction of respiratory motion based on a local dynamic model in an augmented space.

    Science.gov (United States)

    Hong, S-M; Jung, B-H; Ruan, D

    2011-03-21

    Motion-adaptive radiotherapy aims to deliver ablative radiation dose to the tumor target with minimal normal tissue exposure, by accounting for real-time target movement. In practice, prediction is usually necessary to compensate for system latency induced by measurement, communication and control. This work focuses on predicting respiratory motion, which is most dominant for thoracic and abdominal tumors. We develop and investigate the use of a local dynamic model in an augmented space, motivated by the observation that respiratory movement exhibits a locally circular pattern in a plane augmented with a delayed axis. By including the angular velocity as part of the system state, the proposed dynamic model effectively captures the natural evolution of respiratory motion. The first-order extended Kalman filter is used to propagate and update the state estimate. The target location is predicted by evaluating the local dynamic model equations at the required prediction length. This method is complementary to existing work in that (1) the local circular motion model characterizes 'turning', overcoming the limitation of linear motion models; (2) it uses a natural state representation including the local angular velocity and updates the state estimate systematically, offering explicit physical interpretations; (3) it relies on a parametric model and is much less data-satiate than the typical adaptive semiparametric or nonparametric method. We tested the performance of the proposed method with ten RPM traces, using the normalized root mean squared difference between the predicted value and the retrospective observation as the error metric. Its performance was compared with predictors based on the linear model, the interacting multiple linear models and the kernel density estimator for various combinations of prediction lengths and observation rates. The local dynamic model based approach provides the best performance for short to medium prediction lengths under relatively
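
    A much-simplified, non-Kalman sketch of the underlying idea (delay-embed the signal, estimate a local angular velocity in the augmented plane, and rotate the current augmented point forward by the prediction horizon) is given below; the delay, window and horizon values are arbitrary assumptions, and the paper's extended Kalman filter formulation is not reproduced here.

```python
import numpy as np

def predict_augmented(signal, delay, horizon, window=40):
    """Predict a respiratory signal `horizon` samples ahead by exploiting the
    locally circular pattern of its delay embedding."""
    x = np.asarray(signal, dtype=float)
    # Augmented points p(t) = [x(t), x(t - delay)]
    pts = np.column_stack([x[delay:], x[:-delay]])[-window:]
    center = pts.mean(axis=0)
    rel = pts - center
    angles = np.unwrap(np.arctan2(rel[:, 1], rel[:, 0]))
    omega = np.polyfit(np.arange(len(angles)), angles, 1)[0]  # rad / sample
    # Rotate the latest augmented point forward by omega * horizon
    theta = omega * horizon
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    pred = center + R @ rel[-1]
    return pred[0]   # predicted x(t + horizon)

# Synthetic breathing-like trace sampled at 25 Hz (illustrative values).
t = np.arange(0, 30, 0.04)
x = 8 * np.sin(2 * np.pi * t / 4) \
    + 0.3 * np.random.default_rng(0).normal(size=len(t))
print(predict_augmented(x, delay=10, horizon=5))   # 0.2 s look-ahead
```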

  12. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    Science.gov (United States)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its capability for hand motion tracking. The evaluation was carried out in terms of position accuracy of the tracking trajectory in the x, y and z directions in the camera space and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and finally the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study has demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
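
    The core mean-shift step evaluated in the study is simply an iterative move of a search window to the centroid of a weight (likelihood) image; a bare-bones sketch is below. How the weights would be derived from the range camera (e.g. a depth or amplitude mask of the hand) is left out, and all sizes are illustrative.

```python
import numpy as np

def mean_shift(weight_image, window, max_iter=20):
    """Shift a rectangular window to the centroid of the pixel weights inside
    it, repeating until the window stops moving. window = (row, col, h, w)."""
    r, c, h, w = window
    rows, cols = np.indices(weight_image.shape)
    for _ in range(max_iter):
        patch = np.zeros_like(weight_image)
        patch[r:r + h, c:c + w] = weight_image[r:r + h, c:c + w]
        total = patch.sum()
        if total == 0:
            break
        new_r = int(round((rows * patch).sum() / total - h / 2))
        new_c = int(round((cols * patch).sum() / total - w / 2))
        new_r = int(np.clip(new_r, 0, weight_image.shape[0] - h))
        new_c = int(np.clip(new_c, 0, weight_image.shape[1] - w))
        if new_r == r and new_c == c:
            break
        r, c = new_r, new_c
    return r, c, h, w

# Toy example: the window starts partially over a bright blob (the "hand")
# and converges onto it.
img = np.zeros((120, 160))
img[70:90, 100:130] = 1.0
print(mean_shift(img, window=(60, 90, 20, 30)))
```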

  13. Real-time Human Activity Recognition

    Science.gov (United States)

    Albukhary, N.; Mustafah, Y. M.

    2017-11-01

    The traditional Closed-circuit Television (CCTV) system requires humans to monitor the video feed 24/7, which is inefficient and costly. Therefore, there is a need for a system that can recognize human activity effectively in real time. This paper concentrates on recognizing simple activities such as walking, running, sitting, standing and landing by using image processing techniques. Firstly, object detection is performed using background subtraction to detect moving objects. Then, object tracking and object classification are applied so that different persons can be differentiated by using feature detection. Geometrical attributes of each tracked object, namely its centroid and aspect ratio, are then analyzed so that simple activities can be detected.
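
    The pipeline described above (background subtraction, blob tracking, and simple rules on the blob's centroid and aspect ratio) could be prototyped roughly as follows with OpenCV; the synthetic frames, thresholds and activity rules are assumptions for illustration only.

```python
import numpy as np
import cv2

def classify_posture(aspect_ratio, dy):
    """Toy rules in the spirit of the abstract: a tall blob (height/width)
    suggests standing or walking, a squat blob suggests sitting, and a sudden
    downward jump of the centroid suggests landing. Thresholds are assumed."""
    if dy > 15:
        return "landing"
    return "standing/walking" if aspect_ratio > 1.8 else "sitting"

subtractor = cv2.createBackgroundSubtractorMOG2(history=100, varThreshold=25)
prev_cy = None
for i in range(30):                                   # synthetic "CCTV" frames
    frame = np.zeros((240, 320), dtype=np.uint8)
    cv2.rectangle(frame, (100 + 2 * i, 60), (130 + 2 * i, 200), 255, -1)
    mask = subtractor.apply(frame)                    # moving-object mask
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        continue
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cy = y + h / 2.0
    dy = 0.0 if prev_cy is None else cy - prev_cy
    prev_cy = cy
    print(i, classify_posture(h / max(w, 1), dy))
```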

  14. Pattern recognition techniques and neo-deterministic seismic hazard: Time dependent scenarios for North-Eastern Italy

    International Nuclear Information System (INIS)

    Peresan, A.; Vaccari, F.; Panza, G.F.; Zuccolo, E.; Gorshkov, A.

    2009-05-01

    An integrated neo-deterministic approach to seismic hazard assessment has been developed that combines different pattern recognition techniques, designed for the space-time identification of strong earthquakes, with algorithms for the realistic modeling of seismic ground motion. The integrated approach allows for a time dependent definition of the seismic input, through the routine updating of earthquake predictions. The scenarios of expected ground motion, associated with the alarmed areas, are defined by means of full waveform modeling. A set of neo-deterministic scenarios of ground motion is defined at regional and local scale, thus providing a prioritization tool for timely prevention and mitigation actions. Constraints about the space and time of occurrence of the impending strong earthquakes are provided by three formally defined and globally tested algorithms, which have been developed according to a pattern recognition scheme. Two algorithms, namely CN and M8, are routinely used for intermediate-term middle-range earthquake predictions, while a third algorithm allows for the identification of the areas prone to large events. These independent procedures have been combined to better constrain the alarmed area. The pattern recognition of earthquake-prone areas does not belong to the family of earthquake prediction algorithms since it does not provide any information about the time of occurrence of the expected earthquakes. Nevertheless, it can be considered as the term-less zero-approximation, which restrains the alerted areas (e.g. defined by CN or M8) to the more precise location of large events. Italy is the only region of moderate seismic activity where the two different prediction algorithms CN and M8S (i.e. a spatially stabilized variant of M8) are applied simultaneously and a real-time test of predictions, for earthquakes with magnitude larger than 5.4, is ongoing since 2003. The application of the CN to the Adriatic region (s.l.), which is relevant

  15. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    Science.gov (United States)

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using one projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study to investigate the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees-of-freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could accurately be extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190 ± 35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.

  16. Human detection and motion analysis at security points

    Science.gov (United States)

    Ozer, I. Burak; Lv, Tiehan; Wolf, Wayne H.

    2003-08-01

    This paper presents a real-time video surveillance system for the recognition of specific human activities. Specifically, the proposed automatic motion analysis is used as an on-line alarm system to detect abnormal situations in a campus environment. A smart multi-camera system developed at Princeton University is extended for use in smart environments in which the camera detects the presence of multiple persons as well as their gestures and their interaction in real-time.

  17. Learning motion concepts using real-time microcomputer-based laboratory tools

    Science.gov (United States)

    Thornton, Ronald K.; Sokoloff, David R.

    1990-09-01

    Microcomputer-based laboratory (MBL) tools have been developed which interface to Apple II and Macintosh computers. Students use these tools to collect physical data that are graphed in real time and then can be manipulated and analyzed. The MBL tools have made possible discovery-based laboratory curricula that embody results from educational research. These curricula allow students to take an active role in their learning and encourage them to construct physical knowledge from observation of the physical world. The curricula encourage collaborative learning by taking advantage of the fact that MBL tools present data in an immediately understandable graphical form. This article describes one of the tools—the motion detector (hardware and software)—and the kinematics curriculum. The effectiveness of this curriculum compared to traditional college and university methods for helping students learn basic kinematics concepts has been evaluated by pre- and post-testing and by observation. There is strong evidence for significantly improved learning and retention by students who used the MBL materials, compared to those taught in lecture.

  18. Real-time tumor motion estimation using respiratory surrogate via memory-based learning

    International Nuclear Information System (INIS)

    Li Ruijiang; Xing Lei; Lewis, John H; Berbeco, Ross I

    2012-01-01

    Respiratory tumor motion is a major challenge in radiation therapy for thoracic and abdominal cancers. Effective motion management requires an accurate knowledge of the real-time tumor motion. External respiration monitoring devices (optical, etc) provide a noninvasive, non-ionizing, low-cost and practical approach to obtain the respiratory signal. Due to the highly complex and nonlinear relations between tumor and surrogate motion, its ultimate success hinges on the ability to accurately infer the tumor motion from respiratory surrogates. Given their widespread use in the clinic, such a method is critically needed. We propose to use a powerful memory-based learning method to find the complex relations between tumor motion and respiratory surrogates. The method first stores the training data in memory and then finds relevant data to answer a particular query. Nearby data points are assigned high relevance (or weights) and conversely distant data are assigned low relevance. By fitting relatively simple models to local patches instead of fitting one single global model, it is able to capture highly nonlinear and complex relations between the internal tumor motion and external surrogates accurately. Due to the local nature of weighting functions, the method is inherently robust to outliers in the training data. Moreover, both training and adapting to new data are performed almost instantaneously with memory-based learning, making it suitable for dynamically following variable internal/external relations. We evaluated the method using respiratory motion data from 11 patients. The data set consists of simultaneous measurement of 3D tumor motion and 1D abdominal surface (used as the surrogate signal in this study). There are a total of 171 respiratory traces, with an average peak-to-peak amplitude of ∼15 mm and average duration of ∼115 s per trace. Given only 5 s (roughly one breath) pretreatment training data, the method achieved an average 3D error of 1.5 mm and 95

  19. Real-time tumor motion estimation using respiratory surrogate via memory-based learning

    Science.gov (United States)

    Li, Ruijiang; Lewis, John H.; Berbeco, Ross I.; Xing, Lei

    2012-08-01

    Respiratory tumor motion is a major challenge in radiation therapy for thoracic and abdominal cancers. Effective motion management requires an accurate knowledge of the real-time tumor motion. External respiration monitoring devices (optical, etc) provide a noninvasive, non-ionizing, low-cost and practical approach to obtain the respiratory signal. Due to the highly complex and nonlinear relations between tumor and surrogate motion, its ultimate success hinges on the ability to accurately infer the tumor motion from respiratory surrogates. Given their widespread use in the clinic, such a method is critically needed. We propose to use a powerful memory-based learning method to find the complex relations between tumor motion and respiratory surrogates. The method first stores the training data in memory and then finds relevant data to answer a particular query. Nearby data points are assigned high relevance (or weights) and conversely distant data are assigned low relevance. By fitting relatively simple models to local patches instead of fitting one single global model, it is able to capture highly nonlinear and complex relations between the internal tumor motion and external surrogates accurately. Due to the local nature of weighting functions, the method is inherently robust to outliers in the training data. Moreover, both training and adapting to new data are performed almost instantaneously with memory-based learning, making it suitable for dynamically following variable internal/external relations. We evaluated the method using respiratory motion data from 11 patients. The data set consists of simultaneous measurement of 3D tumor motion and 1D abdominal surface (used as the surrogate signal in this study). There are a total of 171 respiratory traces, with an average peak-to-peak amplitude of ∼15 mm and average duration of ∼115 s per trace. Given only 5 s (roughly one breath) pretreatment training data, the method achieved an average 3D error of 1.5 mm and 95

  20. A triboelectric motion sensor in wearable body sensor network for human activity recognition.

    Science.gov (United States)

    Hui Huang; Xian Li; Ye Sun

    2016-08-01

    The goal of this study is to design a novel triboelectric motion sensor in a wearable body sensor network for human activity recognition. Physical activity recognition is widely used in well-being management, medical diagnosis and rehabilitation. Rather than relying on traditional accelerometers, we design a novel wearable sensor system based on triboelectrification. The triboelectric motion sensor can be easily attached to the human body and collects motion signals caused by physical activities. Experiments are conducted to collect data on five common activities: sitting and standing, walking, climbing upstairs, going downstairs, and running. The k-Nearest Neighbor (kNN) algorithm is adopted to recognize these activities and validate the feasibility of this new approach. The results show that our system can perform physical activity recognition with a success rate of over 80% for walking, sitting and standing. The triboelectric structure can also be used as an energy harvester for motion energy harvesting due to its high output voltage under random low-frequency motion.
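
    As a rough illustration of the kNN recognition step described above, the sketch below windows a 1-D motion signal into simple statistical features and classifies the windows with scikit-learn's k-Nearest Neighbor classifier; the feature choice, window length and synthetic signals are assumptions made for the example, not the study's actual triboelectric data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def window_features(signal, fs=50, win_s=2.0):
    """Split a 1-D motion signal into windows and compute simple
    illustrative features (mean, std, peak-to-peak) per window."""
    win = int(fs * win_s)
    n = len(signal) // win
    feats = []
    for i in range(n):
        seg = signal[i * win:(i + 1) * win]
        feats.append([seg.mean(), seg.std(), seg.max() - seg.min()])
    return np.array(feats)

# Toy data standing in for the triboelectric output of two activities
rng = np.random.default_rng(1)
walking = np.sin(np.linspace(0, 40 * np.pi, 5000)) + 0.2 * rng.standard_normal(5000)
sitting = 0.05 * rng.standard_normal(5000)

X = np.vstack([window_features(walking), window_features(sitting)])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) // 2))

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict(window_features(walking)[:3]))   # expect class 0
```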

  1. Real-time movie image enhancement in NMR

    International Nuclear Information System (INIS)

    Doyle, M.; Mansfield, P.

    1986-01-01

    Clinical NMR motion picture (movie) images can now be produced routinely in real-time by ultra-high-speed echo-planar imaging (EPI). The single-shot image quality depends on both pixel resolution and signal-to-noise ratio (S/N), both factors being intertradeable. If image S/N is sacrificed rather than resolution, it is shown that S/N may be greatly enhanced subsequently without vitiating spatial resolution or foregoing real motional effects when the object motion is periodic. This is achieved by a Fourier filtering process. Experimental results are presented which demonstrate the technique for a normal functioning heart. (author)

  2. Dance-the-Music: an educational platform for the modeling, recognition and audiovisual monitoring of dance steps using spatiotemporal motion templates

    Science.gov (United States)

    Maes, Pieter-Jan; Amelynck, Denis; Leman, Marc

    2012-12-01

    In this article, a computational platform is presented, entitled "Dance-the-Music", that can be used in a dance educational context to explore and learn the basics of dance steps. By introducing a method based on spatiotemporal motion templates, the platform facilitates the training of basic step models from sequentially repeated dance figures performed by a dance teacher. Movements are captured with an optical motion capture system. The teacher's models can be visualized from a first-person perspective to instruct students how to perform the specific dance steps in the correct manner. Moreover, recognition algorithms, based on a template matching method, can determine the quality of a student's performance in real time by means of multimodal monitoring techniques. The results of an evaluation study suggest that Dance-the-Music is effective in helping dance students master the basics of dance figures.

  3. Reverse control for humanoid robot task recognition.

    Science.gov (United States)

    Hak, Sovannara; Mansard, Nicolas; Stasse, Olivier; Laumond, Jean Paul

    2012-12-01

    Efficient methods to perform motion recognition have been developed using statistical tools. Those methods rely on primitive learning in a suitable space, for example, the latent space of the joint angle and/or adequate task spaces. Learned primitives are often sequential: A motion is segmented according to the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition can thus not be limited to one task per consecutive segment of time. The method presented in this paper takes advantage of the knowledge of what tasks the robot is able to do and how the motion is generated from this set of known controllers, to perform a reverse engineering of an observed motion. This analysis is intended to recognize parallel tasks that have been used to generate a motion. The method relies on the task-function formalism and the projection operation into the null space of a task to decouple the controllers. The approach is successfully applied on a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
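
    The paper builds on the task-function formalism and null-space projection; its reverse engineering of observed motions is not reproduced here. The sketch below only illustrates the underlying operation, composing a secondary task controller in the null space of a primary task so the two are decoupled, with invented Jacobians and task errors.

```python
import numpy as np

def nullspace_projector(J):
    """Projector onto the null space of task Jacobian J: P = I - pinv(J) J."""
    Jp = np.linalg.pinv(J)
    return np.eye(J.shape[1]) - Jp @ J

# Toy 4-DoF "robot": task 1 (higher priority) and task 2 Jacobians
rng = np.random.default_rng(2)
J1 = rng.standard_normal((2, 4))   # e.g. keep the tray horizontal
J2 = rng.standard_normal((2, 4))   # e.g. place a plate on the table
e1 = np.array([0.1, -0.05])        # task-space errors (illustrative values)
e2 = np.array([0.2, 0.0])

# Prioritized composition: task 2 acts only in the null space of task 1
qdot = np.linalg.pinv(J1) @ e1
P1 = nullspace_projector(J1)
qdot += np.linalg.pinv(J2 @ P1) @ (e2 - J2 @ qdot)

print("task-1 residual:", J1 @ qdot - e1)   # ~0: task 1 is unaffected by task 2
```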

  4. On Integral Invariants for Effective 3-D Motion Trajectory Matching and Recognition.

    Science.gov (United States)

    Shao, Zhanpeng; Li, Youfu

    2016-02-01

    Motion trajectories tracked from the motions of human, robots, and moving objects can provide an important clue for motion analysis, classification, and recognition. This paper defines some new integral invariants for a 3-D motion trajectory. Based on two typical kernel functions, we design two integral invariants, the distance and area integral invariants. The area integral invariants are estimated based on the blurred segment of noisy discrete curve to avoid the computation of high-order derivatives. Such integral invariants for a motion trajectory enjoy some desirable properties, such as computational locality, uniqueness of representation, and noise insensitivity. Moreover, our formulation allows the analysis of motion trajectories at a range of scales by varying the scale of kernel function. The features of motion trajectories can thus be perceived at multiscale levels in a coarse-to-fine manner. Finally, we define a distance function to measure the trajectory similarity to find similar trajectories. Through the experiments, we examine the robustness and effectiveness of the proposed integral invariants and find that they can capture the motion cues in trajectory matching and sign recognition satisfactorily.
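
    The exact kernels and scales are a design choice in the paper, so the following sketch computes one illustrative distance integral invariant of a 3-D trajectory, with an assumed Gaussian kernel over a fixed sample window, and checks numerically that the signature is unchanged by a rotation of the trajectory.

```python
import numpy as np

def distance_integral_invariant(traj, scale=10):
    """Distance integral invariant of a 3-D trajectory: for every sample, the
    kernel-weighted sum of distances to its neighbours within +/- `scale`
    samples (Gaussian kernel assumed for illustration)."""
    n = len(traj)
    offsets = np.arange(-scale, scale + 1)
    kernel = np.exp(-(offsets / scale) ** 2)
    inv = np.zeros(n)
    for i in range(n):
        idx = np.clip(i + offsets, 0, n - 1)
        inv[i] = np.sum(kernel * np.linalg.norm(traj[idx] - traj[i], axis=1))
    return inv

# A helix and a rotated copy give (numerically) the same invariant signature
t = np.linspace(0, 4 * np.pi, 200)
traj = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
print(np.allclose(distance_integral_invariant(traj),
                  distance_integral_invariant(traj @ R.T)))   # True
```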

  5. A Novel Model-Based Driving Behavior Recognition System Using Motion Sensors

    Directory of Open Access Journals (Sweden)

    Minglin Wu

    2016-10-01

    Full Text Available In this article, a novel driving behavior recognition system based on a specific physical model and motion sensory data is developed to promote traffic safety. Based on the theory of rigid body kinematics, we build a specific physical model to reveal the data change rule during the vehicle moving process. In this work, we adopt a nine-axis motion sensor including a three-axis accelerometer, a three-axis gyroscope and a three-axis magnetometer, and apply a Kalman filter for noise elimination and an adaptive time window for data extraction. Based on the feature extraction guided by the built physical model, various classifiers are trained to recognize different driving behaviors. Leveraging the system, normal driving behaviors (such as accelerating, braking, lane changing and turning with caution) and aggressive driving behaviors (such as sudden accelerating, braking, lane changing and turning) can be classified with a high accuracy of 93.25%. Compared with traditional driving behavior recognition methods using machine learning only, the proposed system possesses a solid theoretical basis, performs better and has good prospects.

  6. SU-F-303-17: Real Time Dose Calculation of MRI Guided Co-60 Radiotherapy Treatments On Free Breathing Patients, Using a Motion Model and Fast Monte Carlo Dose Calculation

    International Nuclear Information System (INIS)

    Thomas, D; O’Connell, D; Lamb, J; Cao, M; Yang, Y; Agazaryan, N; Lee, P; Low, D

    2015-01-01

    Purpose: To demonstrate real-time dose calculation of free-breathing MRI-guided Co-60 treatments, using a motion model and Monte Carlo dose calculation to accurately account for the interplay between irregular breathing motion and an IMRT delivery. Methods: ViewRay Co-60 dose distributions were optimized on ITVs contoured from free-breathing CT images of lung cancer patients. Each treatment plan was separated into 0.25 s segments, accounting for the MLC positions and beam angles at each time point. A voxel-specific motion model derived from multiple fast-helical free-breathing CTs and deformable registration was calculated for each patient. 3D images for every 0.25 s of a simulated treatment were generated in real time, here using a bellows signal as a surrogate to accurately account for breathing irregularities. Monte Carlo dose calculation was performed every 0.25 s of the treatment, with the number of histories in each calculation scaled to give an overall 1% statistical uncertainty. Each dose calculation was deformed back to the reference image using the motion model and accumulated. The static and real-time dose calculations were compared. Results: Image generation was performed in real time at 4 frames per second (GPU). Monte Carlo dose calculation was performed at approximately 1 frame per second (CPU), giving a total calculation time of approximately 30 minutes per treatment. Results show both cold and hot spots in and around the ITV, and increased dose to the contralateral lung as the tumor moves in and out of the beam during treatment. Conclusion: An accurate motion model combined with a fast Monte Carlo dose calculation allows almost real-time dose calculation of a free-breathing treatment. When combined with sagittal 2D cine-mode MRI during treatment to update the motion model in real time, this will allow the true delivered dose of a treatment to be calculated, providing a useful tool for adaptive planning and for assessing the effectiveness of gated treatments.

  7. Micro-motion Recognition of Spatial Cone Target Based on ISAR Image Sequences

    Directory of Open Access Journals (Sweden)

    Changyong Shu

    2016-04-01

    Full Text Available The accurate micro-motion recognition of a spatial cone target is the foundation of characteristic parameter acquisition. For this reason, a micro-motion recognition method based on distinguishing characteristics extracted from Inverse Synthetic Aperture Radar (ISAR) image sequences is proposed in this paper. The projection trajectory formulas of the cone-node strong scattering source and the cone-bottom slip-type strong scattering sources, which are located on the spatial cone target, are deduced under three micro-motion types, namely nutation, precession, and spinning, and their correctness is verified by electromagnetic simulation. By comparison, differences are found among the projections of the scattering sources under different micro-motions. The coordinate information of the scattering sources in the ISAR sequences is extracted by the CLEAN algorithm, and spinning is recognized by setting a Doppler threshold value. A double-observation-point Interacting Multiple Model Kalman Filter is used to separate the scattering source projections of a nutation or precession target, and the number of crossing points of each scattering source's projection track is used to classify nutation or precession. Finally, electromagnetic simulation data are used to verify the effectiveness of the micro-motion recognition method.

  8. PRIMAS: a real-time 3D motion-analysis system

    Science.gov (United States)

    Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans

    1994-03-01

    The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.
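
    The PRIMAS reconstruction code itself is not described in detail above, so the sketch below shows a generic linear (DLT) triangulation of one retro-reflective marker from two calibrated cameras; the camera matrices and marker position are made up for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker from two calibrated views.
    P1, P2 : 3x4 camera projection matrices
    x1, x2 : 2-D marker detections (pixels) in each camera
    Returns the 3-D marker position."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy setup: two cameras 1 m apart looking along +Z, a marker at (0.2, 0.1, 3.0) m
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X_true = np.array([0.2, 0.1, 3.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # ~[0.2, 0.1, 3.0]
```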

  9. Real-Time Motion Management of Prostate Cancer Radiotherapy

    DEFF Research Database (Denmark)

    Pommer, Tobias

    , and for prostate cancer treatments, the proximity of the bladder and rectum makes radiotherapy treatment of this site a challenging task. Furthermore, the prostate may move during the radiation delivery and treatment margins are necessary to ensure that it is still receiving the intended dose. The main aim...... of the MLC on the performance of DMLC tracking were investigated. We found that for prostate motion, the main tracking error arose from the finite leaf width affecting the MLC's ability to construct the desired shape. Furthermore, we also attempted to model prostate motion using a random walk model. We found...... that for the slow and drifting motion, the model could satisfactorily replicate the motion of the prostate, while the rapid and transient prostate motion observed in some cases was challenging for the model. We therefore added simulated transient motion to the random walk model, which slightly improved the results...

  10. Development of a real-time monitoring system for intra-fractional motion in intracranial treatment using pressure sensors.

    Science.gov (United States)

    Inata, Hiroki; Araki, Fujio; Kuribayashi, Yuta; Hamamoto, Yasushi; Nakayama, Shigeki; Sodeoka, Noritaka; Kiriyama, Tetsukazu; Nishizaki, Osamu

    2015-09-21

    This study developed a dedicated real-time monitoring system to detect intra-fractional head motion in intracranial radiotherapy using pressure sensors. The dedicated real-time monitoring system consists of pressure sensors with a thickness of 0.6 mm and a radius of 9.1 mm, a thermoplastic mask, a vacuum pillow, and a baseplate. The four sensors were positioned at the superior-inferior and right-left sides under the occipital area. The sampling rate of the pressure sensors was set to 5 Hz. First, we confirmed that the relationship between the force and the displacement of the vacuum pillow follows Hooke's law. Next, the spring constant for the vacuum pillow was determined from the relationship between the force applied to the vacuum pillow and the displacement of the head, detected by CyberKnife target locating system (TLS) acquisitions in clinical application. Finally, the accuracy of our system was evaluated using a 2 × 2 confusion matrix. The regression lines between the force, y, and the displacement, x, of the vacuum pillow were given by y = 3.8x, y = 4.4x, and y = 5.0x when the inner pressure was -12 kPa, -20 kPa, and -27 kPa, respectively. The spring constant of the vacuum pillow was 1.6 N/mm from the 6D positioning data of a total of 2999 TLS acquisitions in 19 patients. Head motions of 1 mm, 1.5 mm, and 2 mm were detected in real time with accuracies of 67%, 84%, and 89%, respectively. Our system can detect displacement of the head continuously during every interval of TLS with a resolution of 1-2 mm without any radiation exposure.
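
    As a worked example of the Hooke's-law fit described above, the sketch below estimates a spring constant k from force-displacement pairs by least squares through the origin; the numbers are illustrative, not the study's measurements.

```python
import numpy as np

# Hooke's law y = k*x fitted through the origin by least squares, mirroring the
# force-vs-displacement regressions reported above (illustrative values only).
displacement_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # head displacement
force_N = np.array([2.1, 3.9, 6.2, 7.8, 10.1])          # pressure-sensor force

k = np.sum(force_N * displacement_mm) / np.sum(displacement_mm ** 2)
print(f"spring constant k = {k:.2f} N/mm")
```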

  11. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    Science.gov (United States)

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
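
    The precise residual and search range used for bulk-tissue-motion correction are implementation details not given above, so the following CPU-only sketch does a brute-force axial/lateral shift search on two synthetic B-scans and then forms the squared-difference angiogram; the GPU parallelization the paper relies on is not shown.

```python
import numpy as np

def bulk_motion_corrected_angiogram(frame1, frame2, max_shift=3):
    """Intensity-based OCT angiography sketch: squared difference of two
    sequential B-scans after a brute-force axial/lateral bulk-tissue-motion
    search (the shift minimising the residual is taken as the BTM estimate)."""
    best = None
    for dz in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(frame2, (dz, dx), axis=(0, 1))
            residual = np.sum((frame1 - shifted) ** 2)
            if best is None or residual < best[0]:
                best = (residual, dz, dx)
    _, dz, dx = best
    corrected = np.roll(frame2, (dz, dx), axis=(0, 1))
    return (frame1 - corrected) ** 2, (dz, dx)

# Toy frames: static tissue plus a slight bulk shift between acquisitions
rng = np.random.default_rng(3)
tissue = rng.random((64, 64))
frame1 = tissue
frame2 = np.roll(tissue, (1, 2), axis=(0, 1)) + 0.01 * rng.random((64, 64))
angio, shift = bulk_motion_corrected_angiogram(frame1, frame2)
print("estimated BTM shift:", shift)    # ~(-1, -2), undoing the applied roll
```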

  12. An interdimensional correlation framework for real-time estimation of six degree of freedom target motion using a single x-ray imager during radiotherapy

    Science.gov (United States)

    Nguyen, D. T.; Bertholet, J.; Kim, J.-H.; O'Brien, R.; Booth, J. T.; Poulsen, P. R.; Keall, P. J.

    2018-01-01

    Increasing evidence suggests that intrafraction tumour motion monitoring needs to include both 3D translations and 3D rotations. Presently, methods to estimate the rotational motion require the 3D translation of the target to be known first. However, ideally, translation and rotation should be estimated concurrently. We present the first method to directly estimate six-degree-of-freedom (6DoF) motion from the target's projection on a single rotating x-ray imager in real time. This novel method is based on the linear correlations between the superior-inferior translations and the motion in the other five degrees of freedom. The accuracy of the method was evaluated in silico with 81 liver tumour motion traces from 19 patients with three implanted markers. The ground-truth motion was estimated using the current gold-standard method, where each marker's 3D position was first estimated using a Gaussian probability method, and the 6DoF motion was then estimated from the 3D positions using an iterative method. The 3D position of each marker was projected onto a gantry-mounted imager with an imaging rate of 11 Hz. After an initial 110° gantry rotation (200 images), a correlation model between the superior-inferior translations and the five other DoFs was built using a least squares method. The correlation model was then updated after each subsequent frame to estimate 6DoF motion in real time. The proposed algorithm had an accuracy (± precision) of -0.03 ± 0.32 mm, -0.01 ± 0.13 mm and 0.03 ± 0.52 mm for translations in the left-right (LR), superior-inferior (SI) and anterior-posterior (AP) directions respectively, and 0.07 ± 1.18°, 0.07 ± 1.00° and 0.06 ± 1.32° for rotations around the LR, SI and AP axes respectively on the dataset. The first method to directly estimate real-time 6DoF target motion from segmented marker positions on a 2D imager was devised. The algorithm was evaluated using 81
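
    A minimal version of the correlation model described above can be written as a least-squares fit of each of the other five degrees of freedom against the superior-inferior translation, refit as new frames arrive; the DoF ordering, noise levels and synthetic traces below are assumptions for illustration only.

```python
import numpy as np

def fit_correlation_model(si, other_dofs):
    """Least-squares linear model relating the SI translation to each of the
    other five degrees of freedom: dof_k ~ a_k * SI + b_k."""
    A = np.column_stack([si, np.ones_like(si)])          # (N, 2)
    coeffs, *_ = np.linalg.lstsq(A, other_dofs, rcond=None)
    return coeffs                                        # (2, 5): slopes, offsets

def predict_6dof(si_value, coeffs):
    others = np.array([si_value, 1.0]) @ coeffs
    # assumed ordering: [SI, LR, AP, rotLR, rotSI, rotAP]
    return np.concatenate([[si_value], others])

# Toy training: 200 frames where the other DoFs correlate linearly with SI motion
rng = np.random.default_rng(4)
si = 7.5 * np.sin(np.linspace(0, 8 * np.pi, 200))
true_slopes = np.array([0.3, 0.5, 0.08, 0.05, 0.1])
others = np.outer(si, true_slopes) + 0.1 * rng.standard_normal((200, 5))

coeffs = fit_correlation_model(si, others)
print(predict_6dof(5.0, coeffs))   # the model would be refit as new frames arrive
```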

  13. A Real-time License Plate Detection System for Parking Access

    Directory of Open Access Journals (Sweden)

    Roenadi Koesdijarto

    2010-08-01

    Full Text Available The automatic and real-time license plate detection system can be used as access control for the entry of vehicles into a parking area. The problem is how to recognize the vehicles that will go into the parking lot and how to recognize various types of license plates under various light conditions quickly and accurately. In this research, a prototype was developed with a detection system to recognize the vehicles that will enter the parking area, and a license plate recognition system. In the license plate recognition system, the Fourier transform and a Hidden Markov model method are proposed to detect the location of the license plate and to perform character segmentation in order to recognize Indonesian license plates. The results show that the developed prototype system successfully recognized all Indonesian license plates under several light conditions and camera positions. The percentage of plate recognition in the real-time experiment is 84.38%, and the average execution time for the whole recognition process is 5.834 seconds.

  14. Development of a frameless stereotactic radiosurgery system based on real-time 6D position monitoring and adaptive head motion compensation

    Energy Technology Data Exchange (ETDEWEB)

    Wiersma, Rodney D; Wen Zhifei; Sadinski, Meredith; Farrey, Karl; Yenice, Kamil M [Department of Radiation and Cellular Oncology, University of Chicago, Chicago, IL 60637 (United States)], E-mail: rwiersma@uchicago.edu

    2010-01-21

    Stereotactic radiosurgery delivers radiation with great spatial accuracy. To achieve sub-millimeter accuracy for intracranial SRS, a head ring is rigidly fixated to the skull to create a fixed reference. For some patients, the invasiveness of the ring can be highly uncomfortable and not well tolerated. In addition, placing and removing the ring requires special expertise from a neurosurgeon, and patient setup time for SRS can often be long. To reduce the invasiveness, hardware limitations and setup time, we are developing a system for performing accurate head positioning without the use of a head ring. The proposed method uses real-time 6D optical position feedback for turning on and off the treatment beam (gating) and guiding a motor-controlled 3D head motion compensation stage. The setup consists of a central control computer, an optical patient motion tracking system and a 3D motion compensation stage attached to the front of the LINAC couch. A styrofoam head cast was custom-built for patient support and was mounted on the compensation stage. The motion feedback of the markers was processed by the control computer, and the resulting motion of the target was calculated using a rigid body model. If the target deviated beyond a preset position of 0.2 mm, an automatic position correction was performed with stepper motors to adjust the head position via the couch mount motion platform. In the event the target deviated more than 1 mm, a safety relay switch was activated and the treatment beam was turned off. The feasibility of the concept was tested using five healthy volunteers. Head motion data were acquired with and without the use of motion compensation over treatment times of 15 min. On average, test subjects exceeded the 0.5 mm tolerance 86% of the time and the 1.0 mm tolerance 45% of the time without motion correction. With correction, this percentage was reduced to 5% and 2% for the 0.5 mm and 1.0 mm tolerances, respectively.

  15. Development of a frameless stereotactic radiosurgery system based on real-time 6D position monitoring and adaptive head motion compensation

    International Nuclear Information System (INIS)

    Wiersma, Rodney D; Wen Zhifei; Sadinski, Meredith; Farrey, Karl; Yenice, Kamil M

    2010-01-01

    Stereotactic radiosurgery delivers radiation with great spatial accuracy. To achieve sub-millimeter accuracy for intracranial SRS, a head ring is rigidly fixated to the skull to create a fixed reference. For some patients, the invasiveness of the ring can be highly uncomfortable and not well tolerated. In addition, placing and removing the ring requires special expertise from a neurosurgeon, and patient setup time for SRS can often be long. To reduce the invasiveness, hardware limitations and setup time, we are developing a system for performing accurate head positioning without the use of a head ring. The proposed method uses real-time 6D optical position feedback for turning on and off the treatment beam (gating) and guiding a motor-controlled 3D head motion compensation stage. The setup consists of a central control computer, an optical patient motion tracking system and a 3D motion compensation stage attached to the front of the LINAC couch. A styrofoam head cast was custom-built for patient support and was mounted on the compensation stage. The motion feedback of the markers was processed by the control computer, and the resulting motion of the target was calculated using a rigid body model. If the target deviated beyond a preset position of 0.2 mm, an automatic position correction was performed with stepper motors to adjust the head position via the couch mount motion platform. In the event the target deviated more than 1 mm, a safety relay switch was activated and the treatment beam was turned off. The feasibility of the concept was tested using five healthy volunteers. Head motion data were acquired with and without the use of motion compensation over treatment times of 15 min. On average, test subjects exceeded the 0.5 mm tolerance 86% of the time and the 1.0 mm tolerance 45% of the time without motion correction. With correction, this percentage was reduced to 5% and 2% for the 0.5 mm and 1.0 mm tolerances, respectively.

  16. Gait Recognition Using Wearable Motion Recording Sensors

    Directory of Open Access Journals (Sweden)

    Davrondzhon Gafurov

    2009-01-01

    Full Text Available This paper presents an alternative approach, where gait is collected by sensors attached to the person's body. Such wearable sensors record the motion (e.g., acceleration) of body parts during walking. The recorded motion signals are then investigated for person recognition purposes. We analyzed acceleration signals from the foot, hip, pocket and arm. Applying various methods, the best EERs obtained for foot-, pocket-, arm- and hip-based user authentication were 5%, 7%, 10% and 13%, respectively. Furthermore, we present the results of our analysis on the security assessment of gait. Studying gait-based user authentication (in the case of hip motion) under three attack scenarios, we revealed that minimal-effort mimicking does not help to improve the acceptance chances of impostors. However, impostors who know their closest person in the database or the genders of the users can be a threat to gait-based authentication. We also provide some new insights toward the uniqueness of gait in the case of foot motion. In particular, we revealed the following: a sideways motion of the foot provides the most discrimination, compared to the up-down or forward-backward directions; and different segments of the gait cycle provide different levels of discrimination.

  17. Study on the Forecast of Ground Motion Parameters from Real Time Earthquake Information Based on Wave Form Data at the Front Site

    OpenAIRE

    萩原, 由訓; 源栄, 正人; 三辻, 和弥; 野畑, 有秀; Yoshinori, HAGIWARA; Masato, MOTOSAKA; Kazuya, MITSUJI; Arihide, NOBATA; (株)大林組 技術研究所; 東北大学大学院工学研究科; 山形大学地域教育文化学部生活総合学科生活環境科学コース; (株)大林組 技術研究所; Obayashi Corporation Technical Research Institute; Graduate School of Eng., Tohoku University; Faculty of Education, Art and Science, Yamagata University

    2011-01-01

    The Japan Meteorological Agency (JMA) has provided Earthquake Early Warnings (EEW) for advanced users since August 1, 2006. Advanced EEW users can forecast seismic ground motion (for example, seismic intensity or peak ground acceleration) from the earthquake information in an EEW, but there are limits to the accuracy and timeliness of the forecast. This paper describes a regression equation to decrease the error and increase the rapidity of the forecast of ground motion parameters from Real Time Earth...

  18. Boat, wake, and wave real-time simulation

    Science.gov (United States)

    Świerkowski, Leszek; Gouthas, Efthimios; Christie, Chad L.; Williams, Owen M.

    2009-05-01

    We describe the extension of our real-time scene generation software VIRSuite to include the dynamic simulation of small boats and their wakes within an ocean environment. Extensive use has been made of the programmability available in the current generation of GPUs. We have demonstrated that real-time simulation is feasible, even including such complexities as dynamical calculation of the boat motion, wake generation and calculation of an FFT-generated sea state.

  19. Robust Sensor-Orientation-Independent Feature Selection for Animal Activity Recognition on Collar Tags

    NARCIS (Netherlands)

    Kamminga, Jacob Wilhelm; Le Viet Duc, Duc Viet; Meijers, Jan Pieter; Bisby, Helena C.; Meratnia, Nirvana; Havinga, Paul J.M.

    2018-01-01

    Fundamental challenges faced by real-time animal activity recognition include variation in motion data due to changing sensor orientations, numerous features, and energy and processing constraints of animal tags. This paper aims at finding small optimal feature sets that are lightweight and robust

  20. 3D Hand Gesture Analysis through a Real-Time Gesture Search Engine

    Directory of Open Access Journals (Sweden)

    Shahrouz Yousefi

    2015-06-01

    Full Text Available 3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable and 3D gestural interaction might be the most effective alternative for the current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes images of various articulated hand gestures with the annotated 3D position/orientation parameters of the hand joints. Our unique matching algorithm is based on the hierarchical scoring of the low-level edge-orientation features between the query frames and database and retrieving the best match. Once the best match is found from the database in each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.

  1. A Finite State Machine Approach to Algorithmic Lateral Inhibition for Real-Time Motion Detection †

    Directory of Open Access Journals (Sweden)

    María T. López

    2018-05-01

    Full Text Available Many researchers have explored the relationship between recurrent neural networks and finite state machines. Finite state machines constitute the best-characterized computational model, whereas artificial neural networks have become a very successful tool for modeling and problem solving. The neurally-inspired lateral inhibition method, and its application to motion detection tasks, have been successfully implemented in recent years. In this paper, control knowledge of the algorithmic lateral inhibition (ALI) method is described and applied by means of finite state machines, in which the state space is constituted from the set of distinguishable cases of accumulated charge in a local memory. The article describes an ALI implementation for a motion detection task. For the implementation, we have chosen to use one of the members of the 16-nm Kintex UltraScale+ family of Xilinx FPGAs. FPGAs provide the necessary accuracy, resolution, and precision to run neural algorithms alongside current sensor technologies. The results offered in this paper demonstrate that this implementation provides accurate object tracking performance on several datasets, obtaining a high F-score value (0.86) for the most complex sequence used. Moreover, it outperforms implementations of a complete ALI algorithm and a simplified version of the ALI algorithm—named “accumulative computation”—which was run about ten years ago, now reaching real-time processing times that were simply not achievable for ALI at that time.
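
    The full ALI state machine is beyond a short example, but the "accumulative computation" charge memory it builds on can be sketched as below: changed pixels are charged to a maximum value and unchanged pixels discharge by a fixed step. The thresholds and charge parameters are assumed values, and the lateral-inhibition and FSM stages are not reproduced.

```python
import numpy as np

def accumulative_computation(frames, charge_max=255, decrement=32, diff_thresh=20):
    """Charge-memory sketch: pixels that change between frames are charged to
    the maximum value, unchanged pixels discharge by a fixed step; the charge
    map would then drive the lateral-inhibition / state-machine stage."""
    charge = np.zeros_like(frames[0], dtype=np.int32)
    prev = frames[0].astype(np.int32)
    for frame in frames[1:]:
        cur = frame.astype(np.int32)
        moving = np.abs(cur - prev) > diff_thresh
        charge = np.where(moving, charge_max, np.maximum(charge - decrement, 0))
        prev = cur
    return charge

# Toy sequence: a bright 4x4 block moving one pixel right per frame
frames = []
for step in range(5):
    img = np.zeros((32, 32), dtype=np.uint8)
    img[10:14, 5 + step:9 + step] = 200
    frames.append(img)
print("charged (moving) pixels:", int((accumulative_computation(frames) > 0).sum()))
```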

  2. SU-E-J-61: Monitoring Tumor Motion in Real-Time with EPID Imaging During Cervical Cancer Treatment

    International Nuclear Information System (INIS)

    Mao, W; Hrycushko, B; Yan, Y; Foster, R; Albuquerque, K

    2015-01-01

    Purpose: Traditional external beam radiotherapy for cervical cancer requires setup by external skin marks. In order to improve treatment accuracy and reduce planning margin for more conformal therapy, it is essential to monitor tumor positions interfractionally and intrafractionally. We demonstrate feasibility of monitoring cervical tumor motion online using EPID imaging from Beam’s Eye View. Methods: Prior to treatment, 1∼2 cylindrical radio opaque markers were implanted into inferior aspect of cervix tumor. During external beam treatments on a Varian 2100C by 4-field 3D plans, treatment beam images were acquired continuously by an EPID. A Matlab program was developed to locate internal markers on MV images. Based on 2D marker positions obtained from different treatment fields, their 3D positions were estimated for every treatment fraction. Results: There were 398 images acquired during different treatment fractions of three cervical cancer patients. Markers were successfully located on every frame of image at an analysis speed of about 1 second per frame. Intrafraction motions were evaluated by comparing marker positions relative to the position on the first frame of image. The maximum intrafraction motion of the markers was 1.6 mm. Interfraction motions were evaluated by comparing 3D marker positions at different treatment fractions. The maximum interfraction motion was up to 10 mm. Careful comparison found that this is due to patient positioning since the bony structures shifted with the markers. Conclusion: This method provides a cost-free and simple solution for online tumor tracking for cervical cancer treatment since it is feasible to acquire and export EPID images with fast analysis in real time. This method does not need any extra equipment or deliver extra dose to patients. The online tumor motion information will be very useful to reduce planning margins and improve treatment accuracy, which is particularly important for SBRT treatment with long

  3. SU-E-J-61: Monitoring Tumor Motion in Real-Time with EPID Imaging During Cervical Cancer Treatment

    Energy Technology Data Exchange (ETDEWEB)

    Mao, W; Hrycushko, B; Yan, Y; Foster, R; Albuquerque, K [UT Southwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: Traditional external beam radiotherapy for cervical cancer requires setup by external skin marks. In order to improve treatment accuracy and reduce planning margin for more conformal therapy, it is essential to monitor tumor positions interfractionally and intrafractionally. We demonstrate feasibility of monitoring cervical tumor motion online using EPID imaging from Beam’s Eye View. Methods: Prior to treatment, 1∼2 cylindrical radio opaque markers were implanted into inferior aspect of cervix tumor. During external beam treatments on a Varian 2100C by 4-field 3D plans, treatment beam images were acquired continuously by an EPID. A Matlab program was developed to locate internal markers on MV images. Based on 2D marker positions obtained from different treatment fields, their 3D positions were estimated for every treatment fraction. Results: There were 398 images acquired during different treatment fractions of three cervical cancer patients. Markers were successfully located on every frame of image at an analysis speed of about 1 second per frame. Intrafraction motions were evaluated by comparing marker positions relative to the position on the first frame of image. The maximum intrafraction motion of the markers was 1.6 mm. Interfraction motions were evaluated by comparing 3D marker positions at different treatment fractions. The maximum interfraction motion was up to 10 mm. Careful comparison found that this is due to patient positioning since the bony structures shifted with the markers. Conclusion: This method provides a cost-free and simple solution for online tumor tracking for cervical cancer treatment since it is feasible to acquire and export EPID images with fast analysis in real time. This method does not need any extra equipment or deliver extra dose to patients. The online tumor motion information will be very useful to reduce planning margins and improve treatment accuracy, which is particularly important for SBRT treatment with long

  4. Real Time Hand Motion Reconstruction System for Trans-Humeral Amputees Using EEG and EMG

    Directory of Open Access Journals (Sweden)

    Jacobo Fernandez-Vargas

    2016-08-01

    Full Text Available Predicting a hand’s position using only biosignals is a complex problem that has not been completely solved. The only reliable solutions currently available require invasive surgery. Attempts using non-invasive technologies are rare, and have usually led to lower correlation values between the real and the reconstructed position than those required for real-world applications. In this study, we propose a solution for reconstructing the hand’s position in three dimensions using EEG and EMG signals recorded from the shoulder area. This approach would be valid for most trans-humeral amputees. In order to find the best solution, we tested four different system architectures based on artificial neural networks. Our results show that it is possible to reconstruct the hand’s motion trajectory with a correlation value of up to 0.809, compared to a typical value of 0.6 in the literature. We also demonstrated that both EEG and EMG contribute jointly to the motion reconstruction. Furthermore, we found that the choice of system architecture does not change the results radically. In addition, our results suggest that different motions may have different brain activity patterns that could be detected through EEG. Finally, we suggest a method to study non-linear relations in the brain through EEG signals, which may lead to a more accurate system.
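
    The study's exact network architectures, channel counts and features are not listed above, so the sketch below simply regresses concatenated EEG + EMG feature vectors onto a 3-D hand position with a small feed-forward network and reports per-axis correlation, using synthetic data as a stand-in.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: EEG and EMG feature vectors and a toy 3-D trajectory
rng = np.random.default_rng(5)
n_samples, n_eeg, n_emg = 2000, 32, 8
eeg = rng.standard_normal((n_samples, n_eeg))
emg = rng.standard_normal((n_samples, n_emg))
X = np.hstack([eeg, emg])

W = rng.standard_normal((n_eeg + n_emg, 3)) * 0.1          # toy ground-truth mapping
hand_xyz = X @ W + 0.05 * rng.standard_normal((n_samples, 3))

# Small feed-forward regressor (architecture is an assumption for the sketch)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X[:1500], hand_xyz[:1500])

pred = model.predict(X[1500:])
corr = [np.corrcoef(pred[:, i], hand_xyz[1500:, i])[0, 1] for i in range(3)]
print("per-axis correlation:", np.round(corr, 3))
```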

  5. Real-time numerical shake prediction and updating for earthquake early warning

    Science.gov (United States)

    Wang, Tianyun; Jin, Xing; Wei, Yongxiang; Huang, Yandan

    2017-12-01

    Ground motion prediction is important for earthquake early warning systems, because a region's peak ground motion indicates the potential disaster. In order to predict the peak ground motion quickly and precisely with limited station wave records, we propose a real-time numerical shake prediction and updating method. Our method first predicts the ground motion based on a ground motion prediction equation after P-wave detection at several stations, denoted as the initial prediction. In order to correct the error of the initial prediction, an updating scheme based on real-time simulation of wave propagation is designed. A data assimilation technique is incorporated to predict the distribution of seismic wave energy precisely. Radiative transfer theory and Monte Carlo simulation are used for modeling wave propagation in 2-D space, and the peak ground motion is calculated as quickly as possible. Our method has the potential to predict the shakemap, allowing the potential disaster to be predicted before the real disaster happens. The 2008 MS 8.0 Wenchuan earthquake is studied as an example to show the validity of the proposed method.

  6. The recognition of female voice based on voice registers in singing techniques in real-time using hankel transform method and macdonald function

    Science.gov (United States)

    Meiyanti, R.; Subandi, A.; Fuqara, N.; Budiman, M. A.; Siahaan, A. P. U.

    2018-03-01

    A singer does not just recite the lyrics of a song, but also uses particular vocal techniques to make it more beautiful. In singing technique, females have more diverse voice registers than males. There are many registers of the human voice, but the voice registers used while singing include chest voice, head voice, falsetto, and vocal fry. This research on the recognition of female voices based on voice registers in singing technique was built using Borland Delphi 7.0. The recognition process is performed on recorded voice samples as input and also in real time. The voice input yields weighted energy values based on calculations using the Hankel transform method and Macdonald functions. The results showed that the accuracy of the system depends on the accuracy of the vocal technique that is trained and tested; the average recognition rate for recorded voice registers reached 48.75 percent, while the average recognition rate for voice registers in real time reached 57 percent.

  7. Motion-Corrected Real-Time Cine Magnetic Resonance Imaging of the Heart: Initial Clinical Experience.

    Science.gov (United States)

    Rahsepar, Amir Ali; Saybasili, Haris; Ghasemiesfe, Ahmadreza; Dolan, Ryan S; Shehata, Monda L; Botelho, Marcos P; Markl, Michael; Spottiswoode, Bruce; Collins, Jeremy D; Carr, James C

    2018-01-01

    Free-breathing real-time (RT) imaging can be used in patients with difficulty in breath-holding; however, RT cine imaging typically experiences poor image quality compared with segmented cine imaging because of low resolution. Here, we validate a novel unsupervised motion-corrected (MOCO) reconstruction technique for free-breathing RT cardiac images, called MOCO-RT. Motion-corrected RT uses elastic image registration to generate a single heartbeat of high-quality data from a free-breathing RT acquisition. Segmented balanced steady-state free precession (bSSFP) cine images and free-breathing RT images (Cartesian, TGRAPPA factor 4) were acquired with the same spatial/temporal resolution in 40 patients using clinical 1.5 T magnetic resonance scanners. The respiratory cycle was estimated using the reconstructed RT images, and nonrigid unsupervised motion correction was applied to eliminate breathing motion. Conventional segmented RT and MOCO-RT single-heartbeat cine images were analyzed to evaluate left ventricular (LV) function and volume measurements. Two radiologists scored images for overall image quality, artifact, noise, and wall motion abnormalities. Intraclass correlation coefficient was used to assess the reliability of MOCO-RT measurement. Intraclass correlation coefficient showed excellent reliability (intraclass correlation coefficient ≥ 0.95) of MOCO-RT with segmented cine in measuring LV function, mass, and volume. Comparison of the qualitative ratings indicated comparable image quality for MOCO-RT (4.80 ± 0.35) with segmented cine (4.45 ± 0.88, P = 0.215) and significantly higher than conventional RT techniques (3.51 ± 0.41, P cine (1.51 ± 0.90, P = 0.088 and 1.23 ± 0.45, P = 0.182) were not different. Wall motion abnormality ratings were comparable among different techniques (P = 0.96). The MOCO-RT technique can be used to process conventional free-breathing RT cine images and provides comparable quantitative assessment of LV function and volume

  8. A novel rotational invariants target recognition method for rotating motion blurred images

    Science.gov (United States)

    Lan, Jinhui; Gong, Meiling; Dong, Mingwei; Zeng, Yiliang; Zhang, Yuzhen

    2017-11-01

    The image formed by the imaging sensor is blurred by the rotational motion of the carrier, which greatly reduces the target recognition rate. Although the traditional approach, which restores the image first and then identifies the target, can improve the recognition rate, recognition takes a long time. In order to solve this problem, a rotational-invariant extraction model was constructed that recognizes the target directly. The model includes three metric layers. The object description capabilities of the metric algorithms in the three layers, which comprise a gray-value statistical algorithm, an improved round projection transformation algorithm and rotation-convolution moment invariants, range from low to high, and the metric layer with the lowest description ability is used as the input, which gradually eliminates non-target pixel points from the degraded image. Experimental results show that the proposed model can improve the correct target recognition rate for blurred images and achieves a good balance between computational complexity and performance over the target region.

  9. Analysis of intra-fraction prostate motion and derivation of duration-dependent margins for radiotherapy using real-time 4D ultrasound

    Directory of Open Access Journals (Sweden)

    Eric Pei Ping Pang

    2018-01-01

    Full Text Available Background and purpose: During radiotherapy, prostate motion changes over time. Quantifying and accounting for this motion is essential. This study aimed to assess intra-fraction prostate motion and derive duration-dependent planning margins for two treatment techniques. Material and methods: A four-dimensional (4D) transperineal ultrasound Clarity® system was used to track prostate motion. We analysed 1913 fractions from 60 patients undergoing volumetric-modulated arc therapy (VMAT) to the prostate. The mean VMAT treatment duration was 3.4 min. Extended monitoring was conducted weekly to simulate motion during intensity-modulated radiation therapy (IMRT) treatment (an additional seven minutes). A motion-time trend analysis was conducted and the mean intra-fraction motion between VMAT and IMRT treatments compared. Duration-dependent margins were calculated and anisotropic margins for VMAT and IMRT treatments were derived. Results: There were statistically significant differences in the mean intra-fraction motion between VMAT and the simulated IMRT duration in the inferior (0.1 mm versus 0.3 mm) and posterior (−0.2 mm versus −0.4 mm) directions respectively (p ≪ 0.01). An intra-fraction motion trend inferiorly and posteriorly was observed. The recommended minimum anisotropic margins are 1.7 mm/2.7 mm (superior/inferior), 0.8 mm (left/right), 1.7 mm/2.9 mm (anterior/posterior) for VMAT treatments and 2.9 mm/4.3 mm (superior/inferior), 1.5 mm (left/right), 2.8 mm/4.8 mm (anterior/posterior) for IMRT treatments. Smaller anisotropic margins were required for VMAT compared to IMRT (differences ranging from 1.2 to 1.6 mm superiorly/inferiorly, 0.7 mm laterally and 1.1–1.9 mm anteriorly/posteriorly). Conclusions: VMAT treatment is preferred over IMRT as prostate motion increases with time. Larger margins should be employed in the inferior and posterior directions for both treatment durations. Duration-dependent margins should

  10. TH-AB-202-05: BEST IN PHYSICS (JOINT IMAGING-THERAPY): First Online Ultrasound-Guided MLC Tracking for Real-Time Motion Compensation in Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Ipsen, S; Bruder, R; Schweikard, A [University of Luebeck, Luebeck (Germany)]; O’Brien, R; Keall, P [University of Sydney, Sydney (Australia)]; Poulsen, P [Aarhus University Hospital, Aarhus (Denmark)]

    2016-06-15

    Purpose: While MLC tracking has been successfully used for motion compensation of moving targets, current real-time target localization methods rely on correlation models with x-ray imaging or implanted electromagnetic transponders rather than direct target visualization. In contrast, ultrasound imaging yields volumetric data in real-time (4D) without ionizing radiation. We report the first results of online 4D ultrasound-guided MLC tracking in a phantom. Methods: A real-time tracking framework was installed on a 4D ultrasound station (Vivid7 dimension, GE) and used to detect a 2mm spherical lead marker inside a water tank. The volumetric frame rate was 21.3Hz (47ms). The marker was rigidly attached to a motion stage programmed to reproduce nine tumor trajectories (five prostate, four lung). The 3D marker position from ultrasound was used for real-time MLC aperture adaption. The tracking system latency was measured and compensated by prediction for lung trajectories. To measure geometric accuracy, anterior and lateral conformal fields with 10cm circular aperture were delivered for each trajectory. The tracking error was measured as the difference between marker position and MLC aperture in continuous portal imaging. For dosimetric evaluation, 358° VMAT fields were delivered to a biplanar diode array dosimeter using the same trajectories. Dose measurements with and without MLC tracking were compared to a static reference dose using a 3%/3 mm γ-test. Results: The tracking system latency was 170ms. The mean root-mean-square tracking error was 1.01mm (0.75mm prostate, 1.33mm lung). Tracking reduced the mean γ-failure rate from 13.9% to 4.6% for prostate and from 21.8% to 0.6% for lung with high-modulation VMAT plans and from 5% (prostate) and 18% (lung) to 0% with low modulation. Conclusion: Real-time ultrasound tracking was successfully integrated with MLC tracking for the first time and showed similar accuracy and latency as other methods while holding the

  11. Identification of strong earthquake ground motion by using pattern recognition

    International Nuclear Information System (INIS)

    Suzuki, Kohei; Tozawa, Shoji; Temmyo, Yoshiharu.

    1983-01-01

    The method of adequately grasping the technological features of the complex waveforms of earthquake ground motion and utilizing them as input to structural systems has been proposed by many researchers, but the method of making artificial earthquake waves to be used for the aseismatic design of nuclear facilities has not been established in a unified form. In this research, earthquake ground motion was treated as an irregular process with unsteady amplitude and frequency, and the running power spectral density was expressed as a dark-and-light image on a plane of the orthogonal coordinate system with time and frequency axes. A method of classifying this image into a number of technologically important categories by pattern recognition was proposed. This method is based on the concept called the compound similarity method in image technology, entirely different from voice diagnosis, and it has the feature that the result of identification can be quantitatively evaluated by the analysis of correlation of spatial images. Next, standard pattern models of the simulated running power spectral density corresponding to the representative classification categories were proposed. Finally, a method of making unsteady simulated earthquake motion was shown. (Kako, I.)
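
    The compound similarity classification itself is not reproduced here, but the "running power spectral density as an image" representation it operates on can be sketched as a spectrogram of a nonstationary ground-motion record; the synthetic record and spectrogram parameters below are illustrative only.

```python
import numpy as np
from scipy.signal import spectrogram

# Time-frequency image of a synthetic, nonstationary ground-motion record;
# this image is the kind of pattern that would be compared against
# standard-category templates.
fs = 100.0                                   # sampling rate, Hz
t = np.arange(0, 40, 1 / fs)
envelope = np.exp(-((t - 10) / 6) ** 2)      # unsteady amplitude
rng = np.random.default_rng(8)
accel = envelope * np.sin(2 * np.pi * (1 + 0.2 * t) * t) \
        + 0.05 * rng.standard_normal(t.size)

f, tt, Sxx = spectrogram(accel, fs=fs, nperseg=256, noverlap=192)
print("PSD image shape (freq bins x time frames):", Sxx.shape)
```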

  12. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. Over the last few years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be integrated suitably into current video exploitation systems. In this paper, a system design is introduced which strives to combine both the qualities of the human observer's perception and the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer by means of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer mouse selection. The system design also builds on prior work we did on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique when performing selection operations on moving targets in videos in order to initialize an object tracking function.

  13. Action Recognition using Motion Primitives

    DEFF Research Database (Denmark)

    Moeslund, Thomas B.; Fihl, Preben; Holte, Michael Boelstoft

    The number of potential applications has made automatic recognition of human actions a very active research area. Different approaches have been followed based on trajectories through some state space. In this paper we also model an action as a trajectory through a state space, but we represent the actions as a sequence of temporally isolated instances, denoted primitives. These primitives are each defined by four features extracted from motion images. The primitives are recognized in each frame based on a trained classifier, resulting in a sequence of primitives. From this sequence we recognize different temporal actions using a probabilistic Edit Distance method. The method is tested on different actions with and without noise and the results show recognition rates of 88.7% and 85.5%, respectively.
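
    The paper uses a probabilistic Edit Distance; the sketch below uses the plain (unweighted) edit distance to show the idea of matching an observed primitive sequence against action templates, with invented primitive labels.

```python
def edit_distance(seq_a, seq_b):
    """Plain (non-probabilistic) edit distance between two primitive sequences."""
    m, n = len(seq_a), len(seq_b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if seq_a[i - 1] == seq_b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

# Toy "actions" as sequences of recognized primitives (labels are illustrative)
templates = {"wave": "ABABAB", "point": "ACCCA", "clap": "ADADAD"}
observed = "ABABBB"   # per-frame primitive labels from the classifier
print(min(templates, key=lambda k: edit_distance(observed, templates[k])))  # 'wave'
```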

  14. Surface EMG signals based motion intent recognition using multi-layer ELM

    Science.gov (United States)

    Wang, Jianhui; Qi, Lin; Wang, Xiao

    2017-11-01

    The upper-limb rehabilitation robot is regarded as a useful tool to help patients with hemiplegia perform repetitive exercise. Surface electromyography (sEMG) signals contain motion information, as these electrical signals are generated by and related to nerve-muscle activity. These sEMG signals, representing a human's intention of active motion, are introduced into the rehabilitation robot system to recognize upper-limb movements. Traditionally, feature extraction is an indispensable part of drawing significant information from the original signals, which is a tedious task requiring rich, domain-related experience. This paper employs a deep learning scheme to extract the internal features of the sEMG signals using an advanced Extreme Learning Machine based auto-encoder (ELM-AE). The mathematical information contained in the multi-layer structure of the ELM-AE is used as the high-level representation of the internal features of the sEMG signals, and a simple ELM then post-processes the extracted features, forming the entire multi-layer ELM (ML-ELM) algorithm. The method is subsequently employed for sEMG-based recognition of neural intention. The case studies show that the adopted deep learning algorithm (ELM-AE) is capable of yielding higher classification accuracy than a Principal Component Analysis (PCA) scheme across 5 different types of upper-limb motions. This indicates the effectiveness and learning capability of the ML-ELM in such motion intent recognition applications.
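
    A compact way to see the ML-ELM idea is an ELM auto-encoder layer (random hidden weights, least-squares output weights reconstructing the input) followed by a plain ELM classifier; the hidden-layer sizes, regularization and toy sEMG features below are assumptions, not the paper's configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elm_autoencoder(X, n_hidden, rng):
    """ELM auto-encoder sketch: random hidden layer, output weights B solved by
    regularized least squares to reconstruct X; B^T then maps the data into the
    learned feature space (one ML-ELM layer)."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = sigmoid(X @ W + b)
    B = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ X)
    return X @ B.T                                        # encoded features

def elm_classifier(H, T, n_hidden, rng):
    """Plain ELM: random projection + least-squares output weights."""
    W = rng.standard_normal((H.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    G = sigmoid(H @ W + b)
    beta = np.linalg.solve(G.T @ G + 1e-3 * np.eye(n_hidden), G.T @ T)
    return lambda X: sigmoid(X @ W + b) @ beta

# Toy sEMG feature windows for two motion classes (illustrative data only)
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0.0, 1.0, (100, 20)), rng.normal(2.0, 1.0, (100, 20))])
T = np.vstack([np.tile([1, 0], (100, 1)), np.tile([0, 1], (100, 1))]).astype(float)

feats = elm_autoencoder(X, n_hidden=30, rng=rng)          # unsupervised feature layer
predict = elm_classifier(feats, T, n_hidden=50, rng=rng)
print("train accuracy:", np.mean(predict(feats).argmax(1) == T.argmax(1)))
```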

  15. A real time mobile-based face recognition with fisherface methods

    Science.gov (United States)

    Arisandi, D.; Syahputra, M. F.; Putri, I. L.; Purnamawati, S.; Rahmat, R. F.; Sari, P. P.

    2018-03-01

    Face recognition is a research field in computer vision that studies how to learn faces and determine the identity of a face in a picture sent to the system. By utilizing face recognition technology, learning the identities of fellow students at a university becomes simpler: a student no longer needs to browse the student directory on the university's server site and look for a person with certain facial traits. To achieve this goal, the face recognition application uses image processing methods consisting of two phases, a pre-processing phase and a recognition phase. In the pre-processing phase, the system converts the input image into the best possible image for the recognition phase; the purpose of this phase is to reduce noise and increase signal in the image. For the recognition phase, we use the Fisherface method, chosen because it copes well with limited training data. In our experiments, the accuracy of face recognition using Fisherface is 90%.
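
    The Fisherface method is essentially PCA followed by linear discriminant analysis on the projected face images. A minimal sketch of that pipeline using scikit-learn is shown below; the component count and the use of flattened grayscale images are illustrative assumptions, not settings from the paper.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.pipeline import make_pipeline

      def train_fisherfaces(images, labels, n_pca=80):
          """images: (n_samples, n_pixels) flattened, equally sized grayscale faces; labels: identities."""
          # PCA first reduces dimensionality so the within-class scatter matrix is non-singular;
          # LDA then finds the directions that best separate the identities (the "Fisherfaces").
          model = make_pipeline(PCA(n_components=n_pca), LinearDiscriminantAnalysis())
          model.fit(images, labels)
          return model

      def identify(model, image):
          return model.predict(image.reshape(1, -1))[0]

      # Hypothetical usage:
      # model = train_fisherfaces(X_train, y_train)
      # student_id = identify(model, preprocessed_face)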

  16. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    Science.gov (United States)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of CPU resources, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, one from an infrared (IR) camera and one from an electro-optical (EO) camera. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
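
    A minimal sketch of the frame-to-frame registration step described above, using OpenCV's SIFT detector, feature matching, RANSAC homography estimation, and perspective warping, is given below; the Lowe-ratio threshold, the naive overwrite blending, and the canvas handling are illustrative assumptions, and the GPU acceleration discussed in the paper is not shown.

      import cv2
      import numpy as np

      def register_frames(prev_gray, curr_gray, ratio=0.75):
          """Estimate the homography mapping curr_gray onto prev_gray using SIFT + RANSAC."""
          sift = cv2.SIFT_create()
          kp1, des1 = sift.detectAndCompute(prev_gray, None)
          kp2, des2 = sift.detectAndCompute(curr_gray, None)
          matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
          good = [m for m, n in matches if m.distance < ratio * n.distance]   # Lowe ratio test
          src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
          dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
          return H

      def warp_into_mosaic(mosaic, frame, H_accum):
          """Warp the current frame into the mosaic canvas with the accumulated homography."""
          warped = cv2.warpPerspective(frame, H_accum, (mosaic.shape[1], mosaic.shape[0]))
          mask = warped.sum(axis=2) > 0
          mosaic[mask] = warped[mask]        # naive overwrite; a real system would blend the seams
          return mosaic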

  17. MonoSLAM: real-time single camera SLAM.

    Science.gov (United States)

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.

  18. A study on the development of a real-time intelligent system for ultrasonic flaw classification

    International Nuclear Information System (INIS)

    Song, Sung Jin; Kim, Hak Joon; Lee, Hyun; Lee, Seung Seok

    1998-01-01

    In spite of significant progress in research on ultrasonic pattern recognition, it is not widely used in practical field inspection of weldments. For convenient field application of this methodology, the following four key issues have to be suitably addressed: 1) software in which the ultrasonic pattern recognition algorithm is efficiently implemented; 2) a real-time ultrasonic testing system that can capture the digitized ultrasonic flaw signal so the pattern recognition software can be applied in real time; 3) a database of ultrasonic flaw signals in weldments, which serves as the foundation of the ultrasonic pattern recognition algorithm; and 4) ultrasonic features that are invariant to the operational variables of the ultrasonic test system. Presented here is recent progress in the development of real-time ultrasonic flaw classification through the novel combination of the following: intelligent software for ultrasonic flaw classification in weldments, a computer-based real-time ultrasonic nondestructive evaluation system, a database of ultrasonic flaw signals, and invariant ultrasonic features called 'normalized features.'

  19. Real-Time Motion Capture Toolbox (RTMocap): an open-source code for recording 3-D motion kinematics to study action-effect anticipations during motor and social interactions.

    Science.gov (United States)

    Lewkowicz, Daniel; Delevoye-Turrell, Yvonne

    2016-03-01

    We present here a toolbox for the real-time motion capture of biological movements that runs in the cross-platform MATLAB environment (The MathWorks, Inc., Natick, MA). It provides instantaneous processing of the 3-D movement coordinates of up to 20 markers at a single instant. Available functions include (1) the setting of reference positions, areas, and trajectories of interest; (2) recording of the 3-D coordinates for each marker over the trial duration; and (3) the detection of events to use as triggers for external reinforcers (e.g., lights, sounds, or odors). Through fast online communication between the hardware controller and RTMocap, automatic trial selection is possible by means of either a preset or an adaptive criterion. Rapid preprocessing of signals is also provided, which includes artifact rejection, filtering, spline interpolation, and averaging. A key example is detailed, and three typical variations are developed (1) to provide a clear understanding of the importance of real-time control for 3-D motion in cognitive sciences and (2) to present users with simple lines of code that can be used as starting points for customizing experiments using the simple MATLAB syntax. RTMocap is freely available (http://sites.google.com/site/RTMocap/) under the GNU public license for noncommercial use and open-source development, together with sample data and extensive documentation.
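
    As an illustration of the kind of real-time event detection such a toolbox performs, the generic Python sketch below (not the toolbox's MATLAB code; the marker layout, radius, and trigger callback are hypothetical) flags the moment a tracked marker enters a spherical area of interest so an external reinforcer can be triggered.

      import numpy as np

      def make_area_trigger(center, radius, on_enter):
          """Return a callback that fires on_enter() once when a marker enters a spherical area."""
          center = np.asarray(center, dtype=float)
          state = {"inside": False}

          def update(marker_xyz):
              inside = np.linalg.norm(np.asarray(marker_xyz) - center) < radius
              if inside and not state["inside"]:
                  on_enter()                      # e.g. switch on a light or play a sound
              state["inside"] = inside

          return update

      # Hypothetical usage inside a capture loop running at the motion-capture frame rate:
      # trigger = make_area_trigger(center=[0.0, 0.3, 0.1], radius=0.05,
      #                             on_enter=lambda: print("reinforcer ON"))
      # for frame in stream_of_marker_positions():   # frame: dict of marker name -> (x, y, z)
      #     trigger(frame["index_finger"])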

  20. Real Time Animation of Trees Based on BBSC in Computer Games

    Directory of Open Access Journals (Sweden)

    Xuefeng Ao

    2009-01-01

    Researchers in the field of computer games usually find it difficult to simulate the motion of actual 3D tree models because the tree model itself has a very complicated structure and many sophisticated factors need to be considered during the simulation. Although there is some work on simulating 3D trees and their motion, little of it is used in computer games due to the high demand for real-time performance. In this paper, an approach to animating trees in computer games based on a novel tree model representation, Ball B-Spline Curves (BBSCs), is proposed. By taking advantage of the good features of the BBSC-based model, physical simulation of the motion of leafless trees in wind becomes easier and more efficient. The method can generate realistic 3D tree animation in real time, which meets the stringent real-time requirements of computer games.

  1. Real-time WAMI streaming target tracking in fog

    Science.gov (United States)

    Chen, Yu; Blasch, Erik; Chen, Ning; Deng, Anna; Ling, Haibin; Chen, Genshe

    2016-05-01

    Real-time information fusion based on WAMI (Wide-Area Motion Imagery), FMV (Full Motion Video), and text data is highly desired for many mission-critical emergency or security applications. Cloud Computing has been considered promising for achieving big-data integration from multi-modal sources. In many mission-critical tasks, however, powerful Cloud technology cannot satisfy the tight latency tolerance, because the servers are allocated far from the sensing platform; in fact, there is no guaranteed connection in emergency situations. Therefore, data processing, information fusion, and decision making are required to be executed on-site (i.e., near the data collection). Fog Computing, a recently proposed extension and complement of Cloud Computing, enables computing on-site without outsourcing jobs to a remote Cloud. In this work, we investigated the feasibility of processing streaming WAMI in the Fog for real-time, online, uninterrupted target tracking. Using a single-target tracking algorithm, we studied the performance of a Fog Computing prototype. The experimental results are very encouraging and validate the effectiveness of our Fog approach in achieving real-time frame rates.

  2. A Self-Powered Insole for Human Motion Recognition

    Directory of Open Access Journals (Sweden)

    Yingzhou Han

    2016-09-01

    Biomechanical energy harvesting is a feasible solution for powering wearable sensors, either by directly driving electronics or by acting as a wearable self-powered sensor. A wearable insole that not only harvests energy from foot pressure during walking but also serves as a self-powered human motion recognition sensor is reported. The insole is designed as a sandwich structure consisting of two wavy silica-gel films separated by a flexible piezoelectric foil stave, which has higher performance than conventional piezoelectric harvesters with a cantilever structure. The energy-harvesting insole is capable of driving some common electronics by scavenging energy from human walking. Moreover, it can be used to recognize human motion, as the waveforms it generates change when people are in different locomotion modes. It is demonstrated that different types of human motion, such as walking and running, are clearly classified by the insole without any external power source. This work not only expands the applications of piezoelectric energy harvesters as wearable power supplies and self-powered sensors, but also provides possible approaches for wearable self-powered human motion monitoring, which is of great importance in many fields such as rehabilitation and sports science.

  3. Tree-based indexing for real-time ConvNet landmark-based visual place recognition

    Directory of Open Access Journals (Sweden)

    Yi Hou

    2017-01-01

    Recent impressive studies on using ConvNet landmarks for visual place recognition take an approach that involves three steps: (a) detection of landmarks, (b) description of the landmarks by ConvNet features using a convolutional neural network, and (c) matching of the landmarks in the current view with those in the database views. Such an approach has been shown to achieve state-of-the-art accuracy even under significant viewpoint and environmental changes. However, the computational burden in step (c) significantly prevents this approach from being applied in practice, due to the complexity of linear search in the high-dimensional space of the ConvNet features. In this article, we propose two simple and efficient search methods to tackle this issue. Both methods are built upon tree-based indexing. Given a set of ConvNet features of a query image, the first method directly searches the features' approximate nearest neighbors in a tree structure that is constructed from ConvNet features of database images. The database images are voted on by features in the query image, according to a lookup table which maps each ConvNet feature to its corresponding database image. The database image with the highest vote is considered the solution. Our second method uses a coarse-to-fine procedure: the coarse step uses the first method to coarsely find the top-N database images, and the fine step performs a linear search in the Hamming space of the hash codes of the ConvNet features to determine the best match. Experimental results demonstrate that our methods achieve real-time search performance on five data sets with different sizes and various conditions. Most notably, by achieving an average search time of 0.035 seconds/query, our second method improves matching efficiency by three orders of magnitude over a linear-search baseline on a database with 20,688 images, with negligible loss in place recognition accuracy.
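
    A minimal sketch of the first (voting) method described above, using scikit-learn's KD-tree for the nearest-neighbor lookup and a simple array as the feature-to-image lookup table, is shown below; the choice of a KD-tree and the data layout are illustrative assumptions rather than the article's exact indexing structure.

      import numpy as np
      from sklearn.neighbors import KDTree

      class LandmarkIndex:
          def __init__(self, db_features, db_image_ids):
              """db_features: (n_landmarks, d) ConvNet features pooled over all database images.
              db_image_ids: (n_landmarks,) index of the database image each feature came from."""
              self.tree = KDTree(db_features)
              self.db_image_ids = np.asarray(db_image_ids)
              self.n_images = int(self.db_image_ids.max()) + 1

          def query(self, query_features, k=1):
              """Vote for database images using the nearest database feature of each query feature."""
              _, idx = self.tree.query(query_features, k=k)           # (n_query, k) feature indices
              votes = np.bincount(self.db_image_ids[idx.ravel()], minlength=self.n_images)
              return int(np.argmax(votes))                            # best-matching database image

      # Hypothetical usage:
      # index = LandmarkIndex(db_feats, db_ids)
      # best_image = index.query(query_feats)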

  4. Dynamic Time Warping Distance Method for Similarity Test of Multipoint Ground Motion Field

    Directory of Open Access Journals (Sweden)

    Yingmin Li

    2010-01-01

    The reasonableness of artificial multi-point ground motions and the identification of abnormal records in seismic array observations are two important issues in the application and analysis of multi-point ground motion fields. Based on the dynamic time warping (DTW) distance method, this paper discusses the application of similarity measurement to the similarity analysis of simulated multi-point ground motions and actual seismic array records. The analysis results show that the DTW distance method not only can quantitatively reflect the similarity of a simulated ground motion field, but also offers advantages in clustering analysis and singularity recognition for actual multi-point ground motion fields.
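
    For reference, a straightforward dynamic-programming implementation of the DTW distance between two 1-D time series (the core quantity the paper builds on) might look like the following; the absolute-difference local cost and the absence of a warping-window constraint are assumptions.

      import numpy as np

      def dtw_distance(a, b):
          """Dynamic time warping distance between 1-D sequences a and b."""
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(a[i - 1] - b[j - 1])          # local cost between samples
                  D[i, j] = cost + min(D[i - 1, j],        # insertion
                                       D[i, j - 1],        # deletion
                                       D[i - 1, j - 1])    # match
          return D[n, m]

      # Hypothetical usage: compare a simulated ground-motion record with an array observation.
      # d = dtw_distance(simulated_accel, observed_accel)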

  5. Research on Three-dimensional Motion History Image Model and Extreme Learning Machine for Human Body Movement Trajectory Recognition

    Directory of Open Access Journals (Sweden)

    Zheng Chang

    2015-01-01

    Starting from traditional machine vision recognition technology and traditional artificial neural networks for body movement trajectories, this paper identifies the shortcomings of the traditional recognition technology. By combining the invariant moments of the three-dimensional motion history image (computed as the eigenvector of body movements) with the extreme learning machine (constructed as the classification artificial neural network for body movements), the paper applies the method to machine vision recognition of body movement trajectories. In detail, the paper gives an introduction to the algorithm and implementation scheme of body movement trajectory recognition based on the three-dimensional motion history image and the extreme learning machine. Finally, by comparing the results of the recognition experiments, it verifies that the body movement trajectory recognition method based on the three-dimensional motion history image and the extreme learning machine achieves a more accurate recognition rate and better robustness.
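
    To make the pipeline concrete, the sketch below builds a (2-D, for simplicity) motion history image from a sequence of binary silhouettes and computes its Hu invariant moments as a feature vector; the decay duration and the OpenCV moment functions used here are assumptions for illustration, and the 3-D extension and the extreme learning machine classifier from the paper are not shown.

      import cv2
      import numpy as np

      def motion_history_image(silhouettes, duration=30):
          """silhouettes: iterable of binary (H, W) masks, one per frame; recent motion keeps high values."""
          mhi = None
          for sil in silhouettes:
              if mhi is None:
                  mhi = np.zeros(sil.shape, dtype=np.float32)
              mhi = np.where(sil > 0, float(duration), np.maximum(mhi - 1.0, 0.0))   # decay old motion
          return mhi

      def mhi_features(mhi):
          """Seven Hu invariant moments of the motion history image (translation/scale/rotation invariant)."""
          moments = cv2.moments(mhi)
          return cv2.HuMoments(moments).ravel()

      # Hypothetical usage: feed mhi_features(...) vectors into a classifier such as an ELM.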

  6. Near Real-Time Processing and Archiving of GPS Surveys for Crustal Motion Monitoring

    Science.gov (United States)

    Crowell, B. W.; Bock, Y.

    2008-12-01

    We present an inverse instantaneous RTK method for rapidly processing and archiving GPS data for crustal motion surveys that gives positional accuracy similar to traditional post-processing methods. We first stream 1 Hz data from GPS receivers over Bluetooth to Verizon XV6700 smartphones equipped with Geodetics, Inc. RTD Rover software. The smartphone transmits raw receiver data to a real-time server at the Scripps Orbit and Permanent Array Center (SOPAC) running RTD Pro. At the server, instantaneous positions are computed every second relative to the three closest base stations in the California Real Time Network (CRTN), using ultra-rapid orbits produced by SOPAC, the NOAATrop real-time tropospheric delay model, and ITRF2005 coordinates computed by SOPAC for the CRTN stations. The raw data are converted on-the-fly to RINEX format at the server. Data in both formats are stored on the server along with a file of instantaneous positions, computed independently at each observation epoch. The single-epoch instantaneous positions are continuously transmitted back to the field surveyor's smartphone, where RTD Rover computes a median position and interquartile range for each new epoch of observation. The best-fit solution is the last median position and is available as soon as the survey is completed. We describe how we used this method to process 1 Hz data from the February 2008 Imperial Valley GPS survey of 38 geodetic monuments established by Imperial College, London in the 1970s, and previously measured by SOPAC using rapid-static GPS methods in 1993, 1999 and 2000, as well as 14 National Geodetic Survey (NGS) monuments. For redundancy, each monument was surveyed for about 15 minutes at least twice and at staggered intervals using two survey teams operating autonomously. Archiving of data and the overall project at SOPAC is performed using the PGM software, developed by the California Spatial Reference Center (CSRC) for the National Geodetic Survey (NGS). The
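
    As a small illustration of the field-side summary described above (a running median position and interquartile range computed as independent per-epoch solutions arrive), a generic sketch might look like this; the east-north-up data layout is a hypothetical assumption, not the RTD Rover internals.

      import numpy as np

      def summarize_survey(epoch_positions):
          """epoch_positions: (n_epochs, 3) array of independent per-epoch positions (e.g. ENU, meters).
          Returns the median position and the per-component interquartile range."""
          positions = np.asarray(epoch_positions, dtype=float)
          median = np.median(positions, axis=0)
          iqr = np.percentile(positions, 75, axis=0) - np.percentile(positions, 25, axis=0)
          return median, iqr

      # Hypothetical usage during a 15-minute occupation, updated as each 1 Hz epoch arrives:
      # median, iqr = summarize_survey(positions_so_far)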

  7. TH-AB-202-10: Quantifying the Accuracy and Precision of Six Degree-Of-Freedom Motion Estimation for Use in Real-Time Tumor Motion Monitoring During Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J [The University of Sydney, Sydney, New South Wales (Australia); Nguyen, D; O’Brien, R; Keall, P [University of Sydney, Sydney, NSW (Australia); Huang, C [Sydney Medical School, Camperdown (Australia); Caillet, V [The University of Sydney, Sydney, NSW (Australia); Poulsen, P [Aarhus University Hospital, Aarhus (Denmark); Booth, J [Royal North Shore Hospital, Sydney (Australia)

    2016-06-15

    Purpose: The kilovoltage intrafraction monitoring (KIM) scheme has been successfully used to monitor 3D tumor motion during radiotherapy. Recently, an iterative closest point (ICP) algorithm was implemented in KIM to also measure rotations about three axes, enabling real-time tracking of tumor motion in six degrees of freedom (DoF). This study aims to evaluate the accuracy of the six DoF motion estimates of KIM by comparing them with the corresponding motion (i) measured by Calypso and (ii) derived from kV/MV triangulation. Methods: (i) Various motions (static and dynamic) were applied to a CIRS phantom with three embedded electromagnetic transponders (Calypso Medical) using a 5D motion platform (HexaMotion) and a rotating treatment couch, while both KIM and Calypso concurrently tracked the phantom motion in six DoF. (ii) KIM was also used to retrospectively estimate six DoF motion from continuous sets of kV projections of a prostate implanted with three gold fiducial markers (2 patients with 80 fractions in total), acquired during treatment. The corresponding motion was obtained from kV/MV triangulation using a closed-form least-squares method based on the three markers' positions. Only frames where all three markers were present were used in the analysis. The mean differences between the corresponding motion estimates were calculated for each DoF. Results: Experimental results showed that the mean absolute differences in six DoF phantom motion measured by Calypso and KIM were within 1.1° and 0.7 mm. The kV/MV-triangulation-derived six DoF prostate tumor motion agreed well with the KIM-estimated motion, with mean (s.d.) differences of up to 0.2° (1.36°) and 0.2 (0.25) mm for rotation and translation, respectively. Conclusion: These results suggest that KIM can provide accurate six DoF intrafraction tumor motion estimates during radiotherapy.
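
    As an illustration of the kind of closed-form least-squares rigid-body fit that recovers translation and rotation when three or more marker positions are known in both a reference and a current frame, the SVD-based (Kabsch) solution below is a generic sketch, not the specific KIM or kV/MV triangulation implementation; the Euler-angle convention is an assumption.

      import numpy as np

      def rigid_fit(ref_pts, cur_pts):
          """Least-squares rotation R and translation t such that cur ≈ ref @ R.T + t.
          ref_pts, cur_pts: (n, 3) arrays of corresponding marker positions (n >= 3)."""
          ref_c, cur_c = ref_pts.mean(axis=0), cur_pts.mean(axis=0)
          H = (ref_pts - ref_c).T @ (cur_pts - cur_c)           # cross-covariance matrix
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))                # guard against reflections
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          t = cur_c - R @ ref_c
          return R, t

      def euler_zyx(R):
          """Yaw, pitch, roll (degrees) assuming R = Rz(yaw) @ Ry(pitch) @ Rx(roll); an illustrative convention."""
          yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
          pitch = np.degrees(np.arcsin(-R[2, 0]))
          roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
          return yaw, pitch, roll

      # Hypothetical usage with planned and currently reconstructed marker positions (mm):
      # R, t = rigid_fit(planned_markers, current_markers)
      # rotations_deg, translation_mm = euler_zyx(R), t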

  8. Real-Time Dynamic MLC Tracking for Intensity Modulated Arc Therapy

    DEFF Research Database (Denmark)

    Falk, Marianne

    Management of intra-fraction tumour motion during radiotherapy treatment can be a challenging task when the aim is to achieve tumour control while minimizing the dose to the surrounding healthy tissue. Real-time dynamic multileaf collimator (MLC) tracking is a novel method for intra-fraction...

  9. Self-organization comprehensive real-time state evaluation model for oil pump unit on the basis of operating condition classification and recognition

    Science.gov (United States)

    Liang, Wei; Yu, Xuchao; Zhang, Laibin; Lu, Wenqing

    2018-05-01

    In an oil transmission station, the operating condition (OC) of an oil pump unit sometimes switches, which leads to changes in operating parameters. If the switching of OCs is not taken into consideration when performing a state evaluation of the pump unit, the accuracy of the evaluation is strongly affected. Hence, in this paper, a self-organization Comprehensive Real-Time State Evaluation Model (self-organization CRTSEM) is proposed based on OC classification and recognition. The underlying model, CRTSEM, is first built by incorporating the advantages of the Gaussian Mixture Model (GMM) and the Fuzzy Comprehensive Evaluation Model (FCEM): independent state models are established for every state characteristic parameter according to its distribution type (Gaussian or logistic regression), and the Analytic Hierarchy Process (AHP) is utilized to calculate the weights of the state characteristic parameters. The OC classification is determined by the types of oil delivery tasks, and CRTSEMs for the different standard OCs are built to constitute a CRTSEM matrix. OC recognition, on the other hand, is realized by a self-organization model established on the basis of a Back Propagation (BP) model. After the self-organization CRTSEM is derived through integration, real-time monitoring data can be input for OC recognition, and the current state of the pump unit can then be evaluated using the appropriate CRTSEM. The case study shows that the proposed self-organization CRTSEM can provide reasonable and accurate state evaluation results for the pump unit; it also verifies the assumption that the switching of OCs influences the results of state evaluation.

  10. Real-time Pedestrian Crossing Recognition for Assistive Outdoor Navigation.

    Science.gov (United States)

    Fontanesi, Simone; Frigerio, Alessandro; Fanucci, Luca; Li, William

    2015-01-01

    Navigation in urban environments can be difficult for people who are blind or visually impaired. In this project, we present a system and algorithms for recognizing pedestrian crossings in outdoor environments. Our goal is to provide navigation cues for crossing the street and reaching an island or sidewalk safely. Using a state-of-the-art Multisense S7S sensor, we collected 3D point-cloud data for real-time detection of pedestrian crossings and generation of directional guidance. We demonstrate improvements over a baseline, monocular-camera-based system by integrating 3D spatial prior information extracted from the point cloud. Our system's parameters can be set to the actual dimensions of real-world settings, which makes it robust to occlusion and perspective transformation. The system works especially well in non-occlusion situations and is reasonably accurate under different kinds of conditions. In addition, our large dataset of pedestrian crossings, organized by the different types and situations of pedestrian crossings in order to reflect real-world environments, is publicly available in a commonly used format (ROS bag files) for further research.

  11. Time-resolved measurements with intense ultrashort laser pulses: a 'molecular movie' in real time

    International Nuclear Information System (INIS)

    Rudenko, A; Ergler, Th; Feuerstein, B; Zrost, K; Schroeter, C D; Moshammer, R; Ullrich, J

    2007-01-01

    We report on the high-resolution multidimensional real-time mapping of H₂⁺ and D₂⁺ nuclear wave packets performed employing time-resolved three-dimensional Coulomb explosion imaging with intense laser pulses. Exploiting a combination of a 'reaction microscope' spectrometer and a pump-probe setup with two intense 6-7 fs laser pulses, we simultaneously visualize both the vibrational and rotational motion of the molecule, and obtain a sequence of snapshots of the squared ro-vibrational wave function with a time-step resolution of ∼0.3 fs, allowing us to reconstruct a real-time movie of the ultrafast molecular motion. We observe fast dephasing, or 'collapse', of the vibrational wave packet and its subsequent revival, as well as signatures of rotational excitation. For D₂⁺ we also resolve the fractional revivals resulting from the interference between the counter-propagating parts of the wave packet.

  12. Strategies for real-time position control of a single atom in cavity QED

    International Nuclear Information System (INIS)

    Lynn, T W; Birnbaum, K; Kimble, H J

    2005-01-01

    Recent realizations of single-atom trapping and tracking in cavity QED open the door for feedback schemes which actively stabilize the motion of a single atom in real time. We present feedback algorithms for cooling the radial component of motion for a single atom trapped by strong coupling to single-photon fields in an optical cavity. Performance of various algorithms is studied through simulations of single-atom trajectories, with full dynamical and measurement noise included. Closed loop feedback algorithms compare favourably to open loop 'switching' analogues, demonstrating the importance of applying actual position information in real time. The high optical information rate in current experiments enables real-time tracking that approaches the standard quantum limit for broadband position measurements, suggesting that realistic active feedback schemes may reach a regime where measurement backaction appreciably alters the motional dynamics

  13. Impact of Sliding Window Length in Indoor Human Motion Modes and Pose Pattern Recognition Based on Smartphone Sensors

    Directory of Open Access Journals (Sweden)

    Gaojing Wang

    2018-06-01

    Human activity recognition (HAR) is essential for understanding people's habits and behaviors, providing an important data source for precise marketing and for research in psychology and sociology. Different approaches have been proposed and applied to HAR. Data segmentation using a sliding window is a basic step in the HAR procedure, and the window length directly affects recognition performance; however, the window length is generally selected arbitrarily, without systematic study. In this study, we examined the impact of window length on smartphone-sensor-based human motion and pose pattern recognition. With data collected from smartphone sensors, we tested a range of window lengths on five popular machine-learning methods: decision tree, support vector machine, K-nearest neighbor, Gaussian naïve Bayes, and adaptive boosting. From the results, we provide recommendations for choosing an appropriate window length. The results corroborate that the influence of window length is significant for motion mode recognition but limited for pose pattern recognition. For motion mode recognition, a window length between 2.5 and 3.5 s provides an optimal tradeoff between recognition performance and speed, and adaptive boosting outperformed the other methods. For pose pattern recognition, 0.5 s was enough to obtain a satisfactory result, and all of the tested methods performed well.
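
    A minimal sketch of the kind of evaluation loop described above (segment raw smartphone sensor data with a sliding window, extract simple statistical features, and compare window lengths with a classifier) is shown below using scikit-learn; the 50 Hz sampling rate, 50% overlap, and feature set are illustrative assumptions rather than the study's exact settings.

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.model_selection import cross_val_score

      def windows(signal, labels, win_s, fs=50, overlap=0.5):
          """Slide a window over (n_samples, n_channels) sensor data; labels are non-negative activity ids.
          Each window is labeled by the majority activity within it."""
          size, step = int(win_s * fs), max(1, int(win_s * fs * (1 - overlap)))
          X, y = [], []
          for start in range(0, len(signal) - size + 1, step):
              seg = signal[start:start + size]
              feats = np.concatenate([seg.mean(0), seg.std(0), seg.min(0), seg.max(0)])
              X.append(feats)
              y.append(np.bincount(labels[start:start + size]).argmax())
          return np.array(X), np.array(y)

      def evaluate_window_lengths(signal, labels, lengths=(0.5, 1.5, 2.5, 3.5)):
          """Report cross-validated accuracy for each candidate window length."""
          for win_s in lengths:
              X, y = windows(signal, labels, win_s)
              acc = cross_val_score(AdaBoostClassifier(), X, y, cv=5).mean()
              print(f"window {win_s:.1f} s: accuracy {acc:.3f}")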

  14. Motion Primitives and Probabilistic Edit Distance for Action Recognition

    DEFF Research Database (Denmark)

    Fihl, Preben; Holte, Michael Boelstoft; Moeslund, Thomas B.

    2009-01-01

    The number of potential applications has made automatic recognition of human actions a very active research area. Different approaches have been followed based on trajectories through some state space. In this paper we also model an action as a trajectory through a state space, but we represent the actions as a sequence of temporally isolated instances, denoted primitives. These primitives are each defined by four features extracted from motion images. The primitives are recognized in each frame based on a trained classifier, resulting in a sequence of primitives. From this sequence we recognize different temporal actions using a probabilistic Edit Distance method. The method is tested on different actions with and without noise, and the results show recognition rates of 88.7% and 85.5%, respectively.

  15. Wide area surveillance real-time motion detection systems

    CERN Document Server

    2014-01-01

    The book describes a system for visual surveillance using intelligent cameras. The camera uses robust techniques for detecting and tracking moving objects. The real-time captures of the objects are then stored in a database. The tracking data stored in the database are analysed to study the camera view, detect and track objects, and study object behavior. This set of models provides a robust framework for coordinating the tracking of objects between overlapping and non-overlapping cameras, and for recording the activity of objects detected by the system.

  16. Speed and amplitude of lung tumor motion precisely detected in four-dimensional setup and in real-time tumor-tracking radiotherapy

    International Nuclear Information System (INIS)

    Shirato, Hiroki; Suzuki, Keishiro; Sharp, Gregory C.; Fujita, Katsuhisa R.T.; Onimaru, Rikiya; Fujino, Masaharu; Kato, Norio; Osaka, Yasuhiro; Kinoshita, Rumiko; Taguchi, Hiroshi; Onodera, Shunsuke; Miyasaka, Kazuo

    2006-01-01

    Background: To reduce the uncertainty of registration for lung tumors, we have developed a four-dimensional (4D) setup system using a real-time tumor-tracking radiotherapy system. Methods and Materials: During treatment planning and daily setup in the treatment room, the trajectory of the internal fiducial marker was recorded for 1 to 2 min at a rate of 30 times per second by the real-time tumor-tracking radiotherapy system. To maximize gating efficiency, the patient's position on the treatment couch was adjusted using the 4D setup system with fine on-line remote control of the treatment couch. Results: The trajectory of the marker detected in the 4D setup system was well visualized and used for daily setup. Various degrees of interfractional and intrafractional changes in the absolute amplitude and speed of the internal marker were detected. Readjustments were necessary during each treatment session, prompted by baseline shifting of the tumor position. Conclusion: The 4D setup system was shown to be useful for reducing the uncertainty of tumor motion and for increasing the efficiency of gated irradiation. Considering the interfractional and intrafractional changes in speed and amplitude detected in this study, intercepting radiotherapy is a safe and cost-effective method for 4D radiotherapy using real-time tracking technology.

  17. Recognition of "real-world" musical excerpts by cochlear implant recipients and normal-hearing adults.

    Science.gov (United States)

    Gfeller, Kate; Olszewski, Carol; Rychener, Marly; Sena, Kimberly; Knutson, John F; Witt, Shelley; Macpherson, Beth

    2005-06-01

    The purposes of this study were (a) to compare recognition of "real-world" music excerpts by postlingually deafened adults using cochlear implants and normal-hearing adults; (b) to compare the performance of cochlear implant recipients using different devices and processing strategies; and (c) to examine the variability among implant recipients in recognition of musical selections in relation to performance on speech perception tests, performance on cognitive tests, and demographic variables. Seventy-nine cochlear implant users and 30 normal-hearing adults were tested on open-set recognition of systematically selected excerpts from musical recordings heard in real life. The recognition accuracy of the two groups was compared for three musical genres: classical, country, and pop. Recognition accuracy was correlated with speech recognition scores, cognitive measures, and demographic measures, including musical background. Cochlear implant recipients were significantly less accurate than normal-hearing adults in the recognition of previously familiar (known before hearing loss) musical excerpts in each genre. Implant recipients were most accurate in the recognition of country items and least accurate in the recognition of classical items. There were no significant differences among implant recipients due to implant type (Nucleus, Clarion, or Ineraid) or programming strategy (SPEAK, CIS, or ACE). For cochlear implant recipients, correlations between melody recognition and other measures were moderate to weak in strength; those with statistically significant correlations included age at time of testing (negatively correlated), performance on selected speech perception tests, and the amount of focused music listening following implantation. Current-day cochlear implants are not effective in transmitting several key structural features (i.e., pitch, harmony, timbral blends) of music essential to open-set recognition of well-known musical selections. Consequently, implant

  18. An Online Full-Body Motion Recognition Method Using Sparse and Deficient Signal Sequences

    Directory of Open Access Journals (Sweden)

    Chengyu Guo

    2014-01-01

    This paper presents a method to recognize continuous full-body human motion online by using sparse, low-cost sensors. The only input signals needed are linear accelerations without any rotation information, provided by four Wiimote sensors attached to the four human limbs. Based on the fused hidden Markov model (FHMM) and an autoregressive process, a predictive fusion model (PFM) is put forward, which considers the different influences of the upper and lower limbs, establishes an HMM for each part, and fuses them using a probabilistic fusion model. An autoregressive process is then introduced into the HMM to predict the gesture, which enables the model to deal with incomplete signal data. In order to reduce the number of alternatives in the online recognition process, a graph model is built that rejects some motion types based on the graph structure and previous recognition results. Finally, an online signal segmentation method based on semantic information and the PFM is presented to complete the recognition task efficiently. The results indicate that the method is robust, with a high recognition rate on sparse and deficient signals, and can be used in various interactive applications.
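
    A minimal sketch of the per-class HMM idea underlying such approaches (train one Gaussian HMM per motion type on acceleration sequences and classify a new sequence by the highest log-likelihood) is shown below using the hmmlearn package; the number of hidden states and the library choice are assumptions, and the limb-wise fusion, autoregressive prediction, and graph-based pruning described in the paper are not reproduced.

      import numpy as np
      from hmmlearn.hmm import GaussianHMM

      def train_models(sequences_by_class, n_states=4):
          """sequences_by_class: dict mapping motion label -> list of (T_i, n_channels) acceleration arrays."""
          models = {}
          for label, seqs in sequences_by_class.items():
              X = np.vstack(seqs)                       # concatenate sequences for this motion type
              lengths = [len(s) for s in seqs]          # so the HMM knows the sequence boundaries
              models[label] = GaussianHMM(n_components=n_states, covariance_type="diag",
                                          n_iter=50).fit(X, lengths)
          return models

      def classify(models, sequence):
          """Return the motion label whose HMM assigns the highest log-likelihood to the sequence."""
          return max(models, key=lambda label: models[label].score(sequence))

      # Hypothetical usage:
      # models = train_models({"walk": walk_seqs, "wave": wave_seqs, "kick": kick_seqs})
      # label = classify(models, new_acceleration_sequence)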

  19. Two-dimensional laser servoing for precision motion control of an ODV robotic license plate recognition system

    Science.gov (United States)

    Song, Zhen; Moore, Kevin L.; Chen, YangQuan; Bahl, Vikas

    2003-09-01

    As an outgrowth of a series of projects focused on the mobility of unmanned ground vehicles (UGVs), an omni-directional (ODV), multi-robot, autonomous mobile parking security system has been developed. The system has two types of robots: the low-profile Omni-Directional Inspection System (ODIS), which can be used for under-vehicle inspections, and the mid-sized T4 robot, which serves as a "marsupial mothership" for the ODIS vehicles and performs coarse-resolution inspection. A key task for the T4 robot is license plate recognition (LPR). For a successful LPR task without compromising the recognition rate, the robot must be able to identify the bumper locations of vehicles in the parking area and then precisely position the LPR camera relative to the bumper. This paper describes a 2D laser scanner-based approach to bumper identification and laser servoing for the T4 robot. The system uses a gimbal-mounted scanning laser. As the T4 robot travels down a row of parking stalls, data is collected from the laser every 100 ms. For each parking stall in the range of the laser during the scan, the data is matched to a "bumper box" corresponding to where a car bumper is expected, resulting in a point cloud of data corresponding to a vehicle bumper for each stall. Next, recursive line-fitting algorithms are used to determine a line for the data in each stall's "bumper box." The fitting technique uses Hough-based transforms, which are robust against segmentation problems and fast enough for real-time line fitting. Once a bumper line is fitted with acceptable confidence, the bumper location is passed to the T4 motion controller, which moves to position the LPR camera properly relative to the bumper. The paper includes examples and results that show the effectiveness of the technique, including its ability to work in real time.
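
    As an illustration of robust 2D line fitting on laser points, the sketch below uses a RANSAC line fit, a common alternative to the Hough-based fitting used in the paper; the iteration count and the 2 cm inlier tolerance are assumptions for data expressed in meters.

      import numpy as np

      def ransac_line(points, n_iter=200, inlier_tol=0.02, seed=0):
          """Fit a 2D line to laser points (n, 2); returns (point_on_line, unit_direction, inlier_mask)."""
          rng = np.random.default_rng(seed)
          best = (points[0], np.array([1.0, 0.0]))
          best_inliers = np.zeros(len(points), dtype=bool)
          for _ in range(n_iter):
              i, j = rng.choice(len(points), size=2, replace=False)
              d = points[j] - points[i]
              norm = np.linalg.norm(d)
              if norm < 1e-9:
                  continue
              direction = d / norm
              normal = np.array([-direction[1], direction[0]])
              dist = np.abs((points - points[i]) @ normal)      # point-to-line distances
              inliers = dist < inlier_tol
              if inliers.sum() > best_inliers.sum():
                  best_inliers, best = inliers, (points[i], direction)
          return best[0], best[1], best_inliers

      # Hypothetical usage on the points that fell inside one stall's "bumper box":
      # anchor, direction, inliers = ransac_line(bumper_points)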

  20. Radical stereotactic radiosurgery with real-time tumor motion tracking in the treatment of small peripheral lung tumors

    Directory of Open Access Journals (Sweden)

    Chang Thomas

    2007-10-01

    Background: Recent developments in radiotherapeutic technology have resulted in a new approach to treating patients with localized lung cancer. We report preliminary clinical outcomes using stereotactic radiosurgery with real-time tumor motion tracking to treat small peripheral lung tumors. Methods: Eligible patients were treated over a 24-month period and followed for a minimum of 6 months. Fiducials (3-5) were placed in or near tumors under CT guidance. Non-isocentric treatment plans with 5-mm margins were generated. Patients received 45-60 Gy in 3 equal fractions delivered in less than 2 weeks. CT imaging and routine pulmonary function tests were completed at 3, 6, 12, 18, 24 and 30 months. Results: Twenty-four consecutive patients were treated, 15 with stage I lung cancer and 9 with single lung metastases. Pneumothorax was a complication of fiducial placement in 7 patients, requiring tube thoracostomy in 4. All patients completed radiation treatment with minimal discomfort, few acute side effects and no procedure-related mortality. Following treatment, transient chest wall discomfort, typically lasting several weeks, developed in 7 of 11 patients with lesions within 5 mm of the pleura. Grade III pneumonitis was seen in 2 patients, one with prior conventional thoracic irradiation and the other treated with concurrent Gefitinib. A small but statistically significant decline in the mean % predicted DLCO was observed at 6 and 12 months. All tumors responded to treatment at 3 months, and local failure was seen in only 2 single metastases. There have been no regional lymph node recurrences. At a median follow-up of 12 months, the crude survival rate is 83%, with 3 deaths due to co-morbidities and 1 secondary to metastatic disease. Conclusion: Radical stereotactic radiosurgery with real-time tumor motion tracking is a promising, well-tolerated treatment option for small peripheral lung tumors.

  1. Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors

    NARCIS (Netherlands)

    Shoaib, M.; Bosch, S.; Durmaz, O.; Scholten, Johan; Havinga, Paul J.M.

    2016-01-01

    The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such

  2. Toward real-time regional earthquake simulation II: Real-time Online earthquake Simulation (ROS) of Taiwan earthquakes

    Science.gov (United States)

    Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh

    2014-06-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters, including the event origin time, hypocentral location, moment magnitude and focal mechanism, within 2 min after the occurrence of an earthquake. All of the source parameters are then automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). A new island-wide, high-resolution SEM mesh model is developed for the whole of Taiwan in this study. We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulations by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.

  3. Pilot study on real-time motion detection in UAS video data by human observer and image exploitation algorithm

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2017-05-01

    Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety- and security-critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation of interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for a UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support and once performing without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and the automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections up to 62% and reducing workload at the same time.

  4. Real Time Structured Light and Applications

    DEFF Research Database (Denmark)

    Wilm, Jakob

    Structured light scanning is a versatile method for 3D shape acquisition. While much faster than most competing measurement techniques, most high-end structured light scans still take on the order of seconds to complete. Low-cost sensors such as Microsoft Kinect and time-of-flight cameras have made […]. With increased processing power and the methods presented in this thesis, it is possible to perform structured light scans in real time with 20 depth measurements per second. This offers new opportunities for studying dynamic scenes, quality control, human-computer interaction and more. This thesis discusses several aspects of real-time structured light systems and presents contributions within calibration, scene coding and motion correction aspects. The problem of reliable and fast calibration of such systems is addressed with a novel calibration scheme utilising radial basis functions [Contribution B...

  5. Review of Real-Time 3-Dimensional Image Guided Radiation Therapy on Standard-Equipped Cancer Radiation Therapy Systems: Are We at the Tipping Point for the Era of Real-Time Radiation Therapy?

    Science.gov (United States)

    Keall, Paul J; Nguyen, Doan Trang; O'Brien, Ricky; Zhang, Pengpeng; Happersett, Laura; Bertholet, Jenny; Poulsen, Per R

    2018-04-14

    To review real-time 3-dimensional (3D) image guided radiation therapy (IGRT) on standard-equipped cancer radiation therapy systems, focusing on clinically implemented solutions. Three groups on 3 continents have clinically implemented novel real-time 3D IGRT solutions on standard-equipped linear accelerators. These technologies encompass kilovoltage, combined megavoltage-kilovoltage, and combined kilovoltage-optical imaging. The cancer sites treated span pelvic and abdominal tumors for which respiratory motion is present. For each method the 3D-measured motion during treatment is reported. After treatment, dose reconstruction was used to assess the treatment quality in the presence of motion with and without real-time 3D IGRT. The geometric accuracy was quantified through phantom experiments. A literature search was conducted to identify additional real-time 3D IGRT methods that could be clinically implemented in the near future. The real-time 3D IGRT methods were successfully clinically implemented and have been used to treat more than 200 patients. Systematic target position shifts were observed using all 3 methods. Dose reconstruction demonstrated that the delivered dose is closer to the planned dose with real-time 3D IGRT than without it. In addition, compromised target dose coverage and variable normal tissue doses were found without real-time 3D IGRT. The geometric accuracy of real-time 3D IGRT was quantified in the phantom experiments, and the literature search identified further real-time 3D IGRT methods using standard-equipped radiation therapy systems that could also be clinically implemented. Multiple clinical implementations of real-time 3D IGRT on standard-equipped cancer radiation therapy systems have been demonstrated. Many more approaches that could be implemented were identified. These solutions provide a pathway for the broader adoption of methods to make radiation therapy more accurate, impacting tumor and normal tissue dose, margins, and ultimately patient outcomes.

  6. Correction of Motion Artifacts for Real-Time Structured Light

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Paulsen, Rasmus Reinhold

    2015-01-01

    While the problem of motion is often mentioned in conjunction with structured light imaging, few solutions have thus far been proposed. A method is demonstrated to correct for object or camera motion during structured light 3D scene acquisition. The method is based on the combination of a suitable...

  7. A real-time comparison between direct control, sequential pattern recognition control and simultaneous pattern recognition control using a Fitts' law style assessment procedure.

    Science.gov (United States)

    Wurth, Sophie M; Hargrove, Levi J

    2014-05-30

    Pattern recognition (PR)-based strategies for the control of myoelectric upper limb prostheses are generally evaluated through offline classification accuracy, which is an admittedly useful metric but insufficient to describe functional performance in real time. Existing functional tests are elaborate to set up, and most fail to provide a challenging, objective framework for assessing strategy performance in real time. Nine able-bodied and two amputee subjects gave informed consent and participated in the local Institutional Review Board-approved study. We designed a two-dimensional target acquisition task based on the principles of Fitts' law for human motor control. Subjects were prompted to steer a cursor from the screen center into a series of subsequently appearing targets of different difficulties. Three cursor control systems were tested, corresponding to three electromyography-based prosthetic control strategies: 1) amplitude-based direct control (the clinical standard of care), 2) sequential PR control, and 3) simultaneous PR control, allowing concurrent activation of two degrees of freedom (DOF). We computed throughput (bits/second), path efficiency (%), reaction time (seconds), and overshoot (%), and used general linear models to assess significant differences between the strategies for each metric. We validated the proposed methodology by achieving very high coefficients of determination for Fitts' law. Both PR strategies significantly outperformed direct control for two-DOF targets and were more intuitive to operate. For one-DOF targets, the simultaneous approach was the least precise. Direct control was efficient for one-DOF targets but cumbersome to operate for two-DOF targets through its switch-dependent sequential cursor control. We designed a test capable of comprehensively describing prosthetic control strategies in real time. When implemented on control subjects, the test was able to capture statistically significant differences between the strategies.
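
    For reference, the Fitts'-law-style throughput used in such assessments is typically computed from the index of difficulty and the movement time. A minimal sketch is shown below; the Shannon formulation ID = log2(D/W + 1) and the simple per-trial averaging are assumptions about the exact variant used, not details taken from the paper.

      import numpy as np

      def index_of_difficulty(distance, width):
          """Shannon formulation of Fitts' index of difficulty (bits)."""
          return np.log2(distance / width + 1.0)

      def throughput(distances, widths, movement_times):
          """Mean throughput (bits/s) over a set of trials: ID divided by movement time, averaged."""
          ids = index_of_difficulty(np.asarray(distances, float), np.asarray(widths, float))
          return float(np.mean(ids / np.asarray(movement_times, float)))

      def path_efficiency(cursor_path, start, target):
          """Straight-line distance divided by the actual path length, as a percentage."""
          path = np.asarray(cursor_path, float)
          travelled = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
          straight = np.linalg.norm(np.asarray(target, float) - np.asarray(start, float))
          return 100.0 * straight / travelled if travelled > 0 else 100.0

      # Hypothetical usage per condition (direct, sequential PR, simultaneous PR):
      # tp = throughput(target_distances, target_widths, trial_times)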

  8. Three-Dimensional Intrafractional Motion of Breast During Tangential Breast Irradiation Monitored With High-Sampling Frequency Using a Real-Time Tumor-Tracking Radiotherapy System

    International Nuclear Information System (INIS)

    Kinoshita, Rumiko; Shimizu, Shinichi; Taguchi, Hiroshi; Katoh, Norio; Fujino, Masaharu; Onimaru, Rikiya; Aoyama, Hidefumi; Katoh, Fumi; Omatsu, Tokuhiko; Ishikawa, Masayori; Shirato, Hiroki

    2008-01-01

    Purpose: To evaluate the three-dimensional intrafraction motion of the breast during tangential breast irradiation using a real-time tracking radiotherapy (RT) system with a high sampling frequency. Methods and Materials: A total of 17 patients with breast cancer who had received breast conservation RT were included in this study. A 2.0-mm gold marker was placed on the skin near the nipple of the breast for RT. A fluoroscopic real-time tumor-tracking RT system was used to monitor the marker. The range of motion of each patient was calculated in three directions. Results: The mean ± standard deviation of the range of respiratory motion was 1.0 ± 0.6 mm (median, 0.9; 95% confidence interval [CI] of the marker position, 0.4-2.6), 1.3 ± 0.5 mm (median, 1.1; 95% CI, 0.5-2.5), and 2.6 ± 1.4 mm (median, 2.3; 95% CI, 1.0-6.9) for the right-left, craniocaudal, and anteroposterior directions, respectively. No correlation was found between the range of motion and the body mass index or respiratory function. The mean ± standard deviation of the absolute value of the baseline shift in the right-left, craniocaudal, and anteroposterior directions was 0.2 ± 0.2 mm (range, 0.0-0.8 mm), 0.3 ± 0.2 mm (range, 0.0-0.7 mm), and 0.8 ± 0.7 mm (range, 0.1-1.8 mm), respectively. Conclusion: Both the range of motion and the baseline shift were within a few millimeters in each direction. As long as the conventional wedge-pair technique and proper immobilization are used, the intrafraction three-dimensional change in the breast surface did not greatly influence the dose distribution.

  9. Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors.

    Science.gov (United States)

    Shoaib, Muhammad; Bosch, Stephan; Incel, Ozlem Durmaz; Scholten, Hans; Havinga, Paul J M

    2016-03-24

    The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such as smoking, eating, drinking coffee and giving a talk. To recognize such activities, wrist-worn motion sensors are used. However, these two positions are mainly used in isolation. To use richer context information, we evaluate three motion sensors (accelerometer, gyroscope and linear acceleration sensor) at both wrist and pocket positions. Using three classifiers, we show that the combination of these two positions outperforms the wrist position alone, mainly at smaller segmentation windows. Another problem is that less-repetitive activities, such as smoking, eating, giving a talk and drinking coffee, cannot be recognized easily at smaller segmentation windows unlike repetitive activities, like walking, jogging and biking. For this purpose, we evaluate the effect of seven window sizes (2-30 s) on thirteen activities and show how increasing window size affects these various activities in different ways. We also propose various optimizations to further improve the recognition of these activities. For reproducibility, we make our dataset publicly available.

  10. Emotion Recognition in Face and Body Motion in Bulimia Nervosa.

    Science.gov (United States)

    Dapelo, Marcela Marin; Surguladze, Simon; Morris, Robin; Tchanturia, Kate

    2017-11-01

    Social cognition has been studied extensively in anorexia nervosa (AN), but there are few studies in bulimia nervosa (BN). This study investigated the ability of people with BN to recognise emotions in ambiguous facial expressions and in body movement. Participants were 26 women with BN, who were compared with 35 with AN, and 42 healthy controls. Participants completed an emotion recognition task by using faces portraying blended emotions, along with a body emotion recognition task by using videos of point-light walkers. The results indicated that BN participants exhibited difficulties recognising disgust in less-ambiguous facial expressions, and a tendency to interpret non-angry faces as anger, compared with healthy controls. These difficulties were similar to those found in AN. There were no significant differences amongst the groups in body motion emotion recognition. The findings suggest that difficulties with disgust and anger recognition in facial expressions may be shared transdiagnostically in people with eating disorders.

  11. SU-G-JeP4-14: Assessment of Inter- and Intra-Fractional Motion for Extremity Soft Tissue Sarcoma Patients by Using In-House Real-Time Optical Image-Based Monitoring System

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H [Interdisciplinary Program in Radiation Applied Life Science, College of Medicine, Seoul National University, Seoul (Korea, Republic of); Kim, I [Dept. of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of); Ye, S [Dept. of Radiation Oncology, Seoul National University Hospital, Seoul (Korea, Republic of); Program in Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul (Korea, Republic of)

    2016-06-15

    Purpose: This study aimed to assess inter- and intra-fractional motion for extremity soft tissue sarcoma (STS) patients by using an in-house real-time optical image-based monitoring system (ROIMS) with infra-red (IR) external markers. Methods: Inter- and intra-fractional motions for five extremity (1 upper, 4 lower) STS patients who received postoperative 3D conformal radiotherapy (3D-CRT) were measured by registering the images acquired by ROIMS with the planning CT image (REG-ROIMS). To compare with X-ray image-based monitoring, pre- and post-treatment cone beam computed tomography (CBCT) scans were performed once per week and registered with the planning CT image as well (REG-CBCT). If a CBCT scan was not feasible due to a large couch shift, AP and LR on-board imager (OBI) images were acquired. The comparison was done by calculating the mutual information (MI) of the registered images. Results: The standard deviation (SD) of the inter-fractional motion was 2.6 mm LR, 2.8 mm SI, and 2.0 mm AP, and the SD of the intra-fractional motion was 1.4 mm, 2.1 mm, and 1.3 mm in each axis, respectively. The SD of the rotational inter-fractional motion was 0.6° pitch, 0.9° yaw, and 0.8° roll, and the SD of the rotational intra-fractional motion was 0.4° pitch, 0.9° yaw, and 0.7° roll. The derived average MI values were 0.83 and 0.92 for REG-CBCT without rotation and REG-ROIMS with rotation, respectively. Conclusion: The in-house real-time optical image-based monitoring system was implemented clinically and shown to be feasible for assessing inter- and intra-fractional motion for extremity STS patients when daily and real-time CBCT scanning is not feasible in the clinic.

  12. SU-G-JeP4-14: Assessment of Inter- and Intra-Fractional Motion for Extremity Soft Tissue Sarcoma Patients by Using In-House Real-Time Optical Image-Based Monitoring System

    International Nuclear Information System (INIS)

    Kim, H; Kim, I; Ye, S

    2016-01-01

    Purpose: This study aimed to assess inter- and intra-fractional motion for extremity Soft Tissue Sarcoma (STS) patients by using an in-house real-time optical image-based monitoring system (ROIMS) with infra-red (IR) external markers. Methods: Inter- and intra-fractional motions for five extremity (1 upper, 4 lower) STS patients who received postoperative 3D conformal radiotherapy (3D-CRT) were measured by registering the images acquired by ROIMS with the planning CT image (REG-ROIMS). For comparison with X-ray image-based monitoring, pre- and post-treatment cone beam computed tomography (CBCT) scans were performed once per week and registered with the planning CT image as well (REG-CBCT). If a CBCT scan was not feasible due to a large couch shift, AP and LR on-board imager (OBI) images were acquired instead. The comparison was done by calculating the mutual information (MI) of the registered images. Results: The standard deviation (SD) of the inter-fractional motion was 2.6 mm LR, 2.8 mm SI, and 2.0 mm AP, and the SD of the intra-fractional motion was 1.4 mm, 2.1 mm, and 1.3 mm in each axis, respectively. The SD of rotational inter-fractional motion was 0.6° pitch, 0.9° yaw, and 0.8° roll, and the SD of rotational intra-fractional motion was 0.4° pitch, 0.9° yaw, and 0.7° roll. The derived average MI values were 0.83 and 0.92 for REG-CBCT without rotation and REG-ROIMS with rotation, respectively. Conclusion: The in-house real-time optical image-based monitoring system was implemented clinically and shown to be feasible for assessing inter- and intra-fractional motion in extremity STS patients, for whom daily and real-time CBCT scanning is not feasible in the clinic.

  13. Intrafractional Baseline Shift or Drift of Lung Tumor Motion During Gated Radiation Therapy With a Real-Time Tumor-Tracking System

    International Nuclear Information System (INIS)

    Takao, Seishin; Miyamoto, Naoki; Matsuura, Taeko; Onimaru, Rikiya; Katoh, Norio; Inoue, Tetsuya; Sutherland, Kenneth Lee; Suzuki, Ryusuke; Shirato, Hiroki; Shimizu, Shinichi

    2016-01-01

    Purpose: To investigate the frequency and amplitude of baseline shift or drift (shift/drift) of lung tumors in stereotactic body radiation therapy (SBRT), using a real-time tumor-tracking radiation therapy (RTRT) system. Methods and Materials: Sixty-eight patients with peripheral lung tumors were treated with SBRT using the RTRT system. One of the fiducial markers implanted near the tumor was used for the real-time monitoring of the intrafractional tumor motion every 0.033 seconds by the RTRT system. When baseline shift/drift is determined by the system, the position of the treatment couch is adjusted to compensate for the shift/drift. Therefore, the changes in the couch position correspond to the baseline shift/drift in the tumor motion. The frequency and amount of adjustment to the couch positions in the left-right (LR), cranio-caudal (CC), and antero-posterior (AP) directions have been analyzed for 335 fractions administered to 68 patients. Results: The average change in position of the treatment couch during the treatment time was 0.45 ± 2.23 mm (mean ± standard deviation), −1.65 ± 5.95 mm, and 1.50 ± 2.54 mm in the LR, CC, and AP directions, respectively. Overall the baseline shift/drift occurs toward the cranial and posterior directions. The incidence of baseline shift/drift exceeding 3 mm was 6.0%, 15.5%, 14.0%, and 42.1% for the LR, CC, AP, and for the square-root of sum of 3 directions, respectively, within 10 minutes of the start of treatment, and 23.0%, 37.6%, 32.5%, and 71.6% within 30 minutes. Conclusions: Real-time monitoring and frequent adjustments of the couch position and/or adding appropriate margins are suggested to be essential to compensate for possible underdosages due to baseline shift/drift in SBRT for lung cancers.
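
    The per-axis standard deviations and the incidence of shifts exceeding 3 mm reported above can be reproduced from a log of couch adjustments with a few lines of array arithmetic. The sketch below assumes a hypothetical N x 3 array of adjustments in mm (LR, CC, AP) and is not the authors' analysis code.

```python
import numpy as np

def baseline_shift_summary(couch_shifts_mm, threshold_mm=3.0):
    """Summarize logged couch adjustments (hypothetical N x 3 array: LR, CC, AP in mm)."""
    sd_per_axis = couch_shifts_mm.std(axis=0)            # SD of baseline shift/drift per axis
    magnitude = np.linalg.norm(couch_shifts_mm, axis=1)  # square root of the sum of squares
    exceed = {
        "LR": float(np.mean(np.abs(couch_shifts_mm[:, 0]) > threshold_mm)),
        "CC": float(np.mean(np.abs(couch_shifts_mm[:, 1]) > threshold_mm)),
        "AP": float(np.mean(np.abs(couch_shifts_mm[:, 2]) > threshold_mm)),
        "3D": float(np.mean(magnitude > threshold_mm)),  # incidence of 3D shift > threshold
    }
    return sd_per_axis, exceed
```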

  14. Adaptive Radiation Therapy for Postprostatectomy Patients Using Real-Time Electromagnetic Target Motion Tracking During External Beam Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Mingyao [Department of Radiation Oncology, Washington University School of Medicine, Saint Louis, Missouri (United States); Bharat, Shyam [Philips Research North America, Briarcliff Manor, New York (United States); Michalski, Jeff M.; Gay, Hiram A. [Department of Radiation Oncology, Washington University School of Medicine, Saint Louis, Missouri (United States); Hou, Wei-Hsien [St Louis University School of Medicine, St Louis, Missouri (United States); Parikh, Parag J., E-mail: pparikh@radonc.wustl.edu [Department of Radiation Oncology, Washington University School of Medicine, Saint Louis, Missouri (United States)

    2013-03-15

    Purpose: Using real-time electromagnetic (EM) transponder tracking data recorded by the Calypso 4D Localization System, we report inter- and intrafractional target motion of the prostate bed, describe a strategy to evaluate treatment adequacy in postprostatectomy patients receiving intensity modulated radiation therapy (IMRT), and propose an adaptive workflow. Methods and Materials: Tracking data recorded by Calypso EM transponders were analyzed for postprostatectomy patients who underwent step-and-shoot IMRT. Rigid target motion parameters during beam delivery were calculated from recorded transponder positions in 16 patients with rigid transponder geometry. The delivered doses to the clinical target volume (CTV) were estimated from the planned dose matrix and the target motion for the first 3, 5, 10, and all fractions. Treatment adequacy was determined by comparing the delivered minimum dose (D{sub min}) with the planned D{sub min} to the CTV. Treatments were considered adequate if the delivered CTV D{sub min} was at least 95% of the planned CTV D{sub min}. Results: Translational target motion was minimal for all 16 patients (mean: 0.02 cm; range: −0.12 cm to 0.07 cm). Rotational motion was patient-specific, and maximum pitch, yaw, and roll were 12.2, 4.1, and 10.5°, respectively. We observed inadequate treatments in 5 patients. In these treatments, we observed greater target rotations along with large distances between the CTV centroid and transponder centroid. The treatment adequacy from the initial 10 fractions successfully predicted the overall adequacy in 4 of 5 inadequate treatments and 10 of 11 adequate treatments. Conclusion: Target rotational motion could cause underdosage to a partial volume of the postprostatectomy targets. Our adaptive treatment strategy is applicable to postprostatectomy patients receiving IMRT to evaluate and improve radiation therapy delivery.
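
    The adequacy test described above reduces to comparing an estimated delivered CTV Dmin against 95% of the planned value. The sketch below illustrates one simplified way to estimate the delivered Dmin from the planned dose matrix and per-fraction target translations; rotations, which the study found important, are ignored here, and the function names, sign convention, and interpolation order are assumptions rather than the authors' method.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def delivered_ctv_dmin(planned_dose, ctv_mask, target_shifts_mm, voxel_mm):
    """Estimate delivered CTV Dmin from a 3D planned dose array, a boolean CTV mask,
    per-fraction rigid target translations (mm), and the voxel size (mm)."""
    accumulated = np.zeros_like(planned_dose, dtype=float)
    for s in target_shifts_mm:
        # Resample the dose in the target frame: a target displaced by +s sees the dose at -s
        shift_vox = -np.asarray(s, float) / np.asarray(voxel_mm, float)
        accumulated += nd_shift(planned_dose, shift_vox, order=1)
    mean_dose = accumulated / len(target_shifts_mm)
    return float(mean_dose[ctv_mask].min())

def treatment_adequate(delivered_dmin, planned_dmin):
    # Adequacy criterion from the abstract: delivered Dmin >= 95% of planned Dmin
    return delivered_dmin >= 0.95 * planned_dmin
```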

  15. Development of an Earthquake Early Warning System Using Real-Time Strong Motion Signals.

    Science.gov (United States)

    Wu, Yih-Min; Kanamori, Hiroo

    2008-01-09

    As urbanization progresses worldwide, earthquakes pose a serious threat to lives and properties for urban areas near major active faults on land or subduction zones offshore. Earthquake Early Warning (EEW) can be a useful tool for reducing earthquake hazards, if the spatial relation between cities and earthquake sources is favorable for such warning and their citizens are properly trained to respond to earthquake warning messages. An EEW system forewarns an urban area of forthcoming strong shaking, normally with a few sec to a few tens of sec of warning time, i.e., before the arrival of the destructive S-wave part of the strong ground motion. Even a few seconds of advance warning time will be useful for pre-programmed emergency measures for various critical facilities, such as rapid-transit vehicles and high-speed trains to avoid potential derailment; it will also be useful for orderly shutoff of gas pipelines to minimize fire hazards, controlled shutdown of high-technological manufacturing operations to reduce potential losses, and safe-guarding of computer facilities to avoid loss of vital databases. We explored a practical approach to EEW with the use of a ground-motion period parameter τc and a high-pass filtered vertical displacement amplitude parameter Pd from the initial 3 sec of the P waveforms. At a given site, an earthquake magnitude could be determined from τc and the peak ground-motion velocity (PGV) could be estimated from Pd. In this method, incoming strong motion acceleration signals are recursively converted to ground velocity and displacement. A P-wave trigger is constantly monitored. When a trigger occurs, τc and Pd are computed. The earthquake magnitude and the on-site ground-motion intensity could be estimated and the warning could be issued. In an ideal situation, such warnings would be available within 10 sec of the origin time of a large earthquake whose subsequent ground motion may last for tens of seconds.
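
    The abstract outlines the on-site processing chain: integrate the acceleration to velocity and displacement, high-pass filter, and compute τc and Pd from the first ~3 s after the P-wave trigger. A hedged sketch of that computation is given below; the filter order, the 0.075 Hz corner frequency, and the assumption that the trace starts at the trigger are illustrative, and the empirical regressions that map τc to magnitude and Pd to PGV are not reproduced here.

```python
import numpy as np
from scipy.signal import butter, sosfilt
from scipy.integrate import cumulative_trapezoid

def tau_c_and_pd(accel, fs, window_s=3.0, hp_corner_hz=0.075):
    """Period parameter tau_c and peak displacement Pd from vertical P-wave acceleration.

    `accel` is assumed to start at the P-wave trigger and be sampled at `fs` Hz."""
    n = int(window_s * fs)
    a = np.asarray(accel[:n], float)
    # Recursive integration: acceleration -> velocity -> displacement
    vel = cumulative_trapezoid(a, dx=1.0 / fs, initial=0.0)
    disp = cumulative_trapezoid(vel, dx=1.0 / fs, initial=0.0)
    # High-pass filter to suppress the drift introduced by double integration
    sos = butter(2, hp_corner_hz, btype="highpass", fs=fs, output="sos")
    vel_f = sosfilt(sos, vel)
    disp_f = sosfilt(sos, disp)
    # tau_c from the ratio of integrated squared velocity to integrated squared displacement
    r = np.trapz(vel_f ** 2, dx=1.0 / fs) / np.trapz(disp_f ** 2, dx=1.0 / fs)
    tau_c = 2.0 * np.pi / np.sqrt(r)
    pd = float(np.max(np.abs(disp_f)))   # peak high-pass filtered displacement
    return tau_c, pd
```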

  16. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    Science.gov (United States)

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
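
    The velocity-driven tracking rule described above can be illustrated with a one-line torque law: each joint torque is proportional to the error between the desired and current joint angular velocity. The sketch below is a schematic stand-in, not the authors' controller; the gains and joint values are invented for the example.

```python
import numpy as np

def velocity_driven_torques(q_dot_desired, q_dot_current, gains):
    """Joint torques proportional to the joint angular velocity error
    (a velocity-driven rule, in contrast to a PD controller on joint angles)."""
    return gains * (np.asarray(q_dot_desired, float) - np.asarray(q_dot_current, float))

# Example: three joints tracked with hypothetical per-joint gains (N*m*s/rad)
tau = velocity_driven_torques([1.2, -0.4, 0.0], [0.9, -0.1, 0.05],
                              np.array([40.0, 40.0, 25.0]))
```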

  17. Radiotherapy beyond cancer: Target localization in real-time MRI and treatment planning for cardiac radiosurgery

    International Nuclear Information System (INIS)

    Ipsen, S.; Blanck, O.; Rades, D.; Oborn, B.; Bode, F.; Liney, G.; Hunold, P.; Schweikard, A.; Keall, P. J.

    2014-01-01

    Purpose: Atrial fibrillation (AFib) is the most common cardiac arrhythmia that affects millions of patients world-wide. AFib is usually treated with minimally invasive, time consuming catheter ablation techniques. While recently noninvasive radiosurgery to the pulmonary vein antrum (PVA) in the left atrium has been proposed for AFib treatment, precise target location during treatment is challenging due to complex respiratory and cardiac motion. A MRI linear accelerator (MRI-Linac) could solve the problems of motion tracking and compensation using real-time image guidance. In this study, the authors quantified target motion ranges on cardiac magnetic resonance imaging (MRI) and analyzed the dosimetric benefits of margin reduction assuming real-time motion compensation was applied. Methods: For the imaging study, six human subjects underwent real-time cardiac MRI under free breathing. The target motion was analyzed retrospectively using a template matching algorithm. The planning study was conducted on a CT of an AFib patient with a centrally located esophagus undergoing catheter ablation, representing an ideal case for cardiac radiosurgery. The target definition was similar to the ablation lesions at the PVA created during catheter treatment. Safety margins of 0 mm (perfect tracking) to 8 mm (untracked respiratory motion) were added to the target, defining the planning target volume (PTV). For each margin, a 30 Gy single fraction IMRT plan was generated. Additionally, the influence of 1 and 3 T magnetic fields on the treatment beam delivery was simulated using Monte Carlo calculations to determine the dosimetric impact of MRI guidance for two different Linac positions. Results: Real-time cardiac MRI showed mean respiratory target motion of 10.2 mm (superior–inferior), 2.4 mm (anterior–posterior), and 2 mm (left–right). The planning study showed that increasing safety margins to encompass untracked respiratory motion leads to overlapping structures even in the

  18. Radiotherapy beyond cancer: Target localization in real-time MRI and treatment planning for cardiac radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Ipsen, S. [Radiation Physics Laboratory, Sydney Medical School, The University of Sydney, Sydney, New South Wales 2006, Australia and Institute for Robotics and Cognitive Systems, University of Luebeck, Luebeck 23562 (Germany); Blanck, O.; Rades, D. [Department of Radiation Oncology, University of Luebeck and University Medical Center Schleswig-Holstein, Campus Luebeck, Luebeck 23562 (Germany); Oborn, B. [Illawarra Cancer Care Centre (ICCC), Wollongong, New South Wales 2500, Australia and Centre for Medical Radiation Physics (CMRP), University of Wollongong, Wollongong, New South Wales 2500 (Australia); Bode, F. [Medical Department II, University of Luebeck and University Medical Center Schleswig-Holstein, Campus Luebeck, Luebeck 23562 (Germany); Liney, G. [Ingham Institute for Applied Medical Research, Liverpool Hospital, Liverpool, New South Wales 2170 (Australia); Hunold, P. [Department of Radiology and Nuclear Medicine, University of Luebeck and University Medical Center Schleswig-Holstein, Campus Luebeck, Luebeck 23562 (Germany); Schweikard, A. [Institute for Robotics and Cognitive Systems, University of Luebeck, Luebeck 23562 (Germany); Keall, P. J., E-mail: paul.keall@sydney.edu.au [Radiation Physics Laboratory, Sydney Medical School, The University of Sydney, Sydney, New South Wales 2006 (Australia)

    2014-12-15

    Purpose: Atrial fibrillation (AFib) is the most common cardiac arrhythmia that affects millions of patients world-wide. AFib is usually treated with minimally invasive, time consuming catheter ablation techniques. While recently noninvasive radiosurgery to the pulmonary vein antrum (PVA) in the left atrium has been proposed for AFib treatment, precise target location during treatment is challenging due to complex respiratory and cardiac motion. A MRI linear accelerator (MRI-Linac) could solve the problems of motion tracking and compensation using real-time image guidance. In this study, the authors quantified target motion ranges on cardiac magnetic resonance imaging (MRI) and analyzed the dosimetric benefits of margin reduction assuming real-time motion compensation was applied. Methods: For the imaging study, six human subjects underwent real-time cardiac MRI under free breathing. The target motion was analyzed retrospectively using a template matching algorithm. The planning study was conducted on a CT of an AFib patient with a centrally located esophagus undergoing catheter ablation, representing an ideal case for cardiac radiosurgery. The target definition was similar to the ablation lesions at the PVA created during catheter treatment. Safety margins of 0 mm (perfect tracking) to 8 mm (untracked respiratory motion) were added to the target, defining the planning target volume (PTV). For each margin, a 30 Gy single fraction IMRT plan was generated. Additionally, the influence of 1 and 3 T magnetic fields on the treatment beam delivery was simulated using Monte Carlo calculations to determine the dosimetric impact of MRI guidance for two different Linac positions. Results: Real-time cardiac MRI showed mean respiratory target motion of 10.2 mm (superior–inferior), 2.4 mm (anterior–posterior), and 2 mm (left–right). The planning study showed that increasing safety margins to encompass untracked respiratory motion leads to overlapping structures even in the

  19. Radiotherapy beyond cancer: target localization in real-time MRI and treatment planning for cardiac radiosurgery.

    Science.gov (United States)

    Ipsen, S; Blanck, O; Oborn, B; Bode, F; Liney, G; Hunold, P; Rades, D; Schweikard, A; Keall, P J

    2014-12-01

    Atrial fibrillation (AFib) is the most common cardiac arrhythmia that affects millions of patients world-wide. AFib is usually treated with minimally invasive, time consuming catheter ablation techniques. While recently noninvasive radiosurgery to the pulmonary vein antrum (PVA) in the left atrium has been proposed for AFib treatment, precise target location during treatment is challenging due to complex respiratory and cardiac motion. A MRI linear accelerator (MRI-Linac) could solve the problems of motion tracking and compensation using real-time image guidance. In this study, the authors quantified target motion ranges on cardiac magnetic resonance imaging (MRI) and analyzed the dosimetric benefits of margin reduction assuming real-time motion compensation was applied. For the imaging study, six human subjects underwent real-time cardiac MRI under free breathing. The target motion was analyzed retrospectively using a template matching algorithm. The planning study was conducted on a CT of an AFib patient with a centrally located esophagus undergoing catheter ablation, representing an ideal case for cardiac radiosurgery. The target definition was similar to the ablation lesions at the PVA created during catheter treatment. Safety margins of 0 mm (perfect tracking) to 8 mm (untracked respiratory motion) were added to the target, defining the planning target volume (PTV). For each margin, a 30 Gy single fraction IMRT plan was generated. Additionally, the influence of 1 and 3 T magnetic fields on the treatment beam delivery was simulated using Monte Carlo calculations to determine the dosimetric impact of MRI guidance for two different Linac positions. Real-time cardiac MRI showed mean respiratory target motion of 10.2 mm (superior-inferior), 2.4 mm (anterior-posterior), and 2 mm (left-right). The planning study showed that increasing safety margins to encompass untracked respiratory motion leads to overlapping structures even in the ideal scenario, compromising

  20. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Full Text Available Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and controlling of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in the hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real-time in a processor with help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.

  1. Real-time face and gesture analysis for human-robot interaction

    Science.gov (United States)

    Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd

    2010-05-01

    Human communication relies on a large number of different communication mechanisms like spoken language, facial expressions, or gestures. Facial expressions and gestures are one of the main nonverbal communication mechanisms and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, a real-time capable processing and recognition of facial expressions, hand and head gestures are of great importance. We present a system that is tackling these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Applying this model different kinds of information are extracted from the image data and afterwards handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head and facial-related features, also low-level image features regarding the human hand - optical flow, Hu-moments are stored into the RTDB for the evaluation process of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time capable recognition of the dynamic hand and head gestures are performed via different Hidden Markov Models, which have proven to be a quick and real-time capable classification method. On the other hand, for the facial expressions classical decision trees or more sophisticated support vector machines are used for the classification process. These obtained results of the classification processes are again handed over to the RTDB, where other processes (like a Dialog Management Unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.

  2. Real Time Surface Registration for PET Motion Tracking

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Paulsen, Rasmus Reinhold

    2011-01-01

    Head movement during high resolution Positron Emission Tomography brain studies causes blur and artifacts in the images. Therefore, attempts are being made to continuously monitor the pose of the head and correct for this movement. Specifically, our method uses a structured light scanner system to create point clouds representing parts of the patient's face. The movement is estimated by a rigid registration of the point clouds. The registration should be done using a robust algorithm that can handle partial overlap and ideally operate in real time. We present an optimized Iterative Closest Point algorithm that operates at 10 frames per second on partial human face surfaces. © 2011 Springer-Verlag.
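
    A compact version of a trimmed, rigid Iterative Closest Point loop of the kind described above is sketched below; the trimming fraction and iteration count are illustrative choices for handling partial overlap, not values from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, iterations=20, overlap=0.8):
    """Rigid ICP with trimming to tolerate partial overlap between point clouds."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        dist, idx = tree.query(src)                           # closest-point correspondences
        keep = np.argsort(dist)[: int(overlap * len(src))]    # trim the worst matches
        R, t = best_rigid_transform(src[keep], target[idx[keep]])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t       # accumulate the transform
    return R_total, t_total
```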

  3. Real-Time Study of Prostate Intrafraction Motion During External Beam Radiotherapy With Daily Endorectal Balloon

    Energy Technology Data Exchange (ETDEWEB)

    Both, Stefan, E-mail: Stefan.Both@uphs.upenn.edu [Department of Radiation Oncology, Hospital of University of Pennsylvania, Philadelphia, PA (United States); Wang, Ken Kang-Hsin; Plastaras, John P.; Deville, Curtiland; Bar Ad, Voika; Tochner, Zelig; Vapiwala, Neha [Department of Radiation Oncology, Hospital of University of Pennsylvania, Philadelphia, PA (United States)

    2011-12-01

    Purpose: To prospectively investigate intrafraction prostate motion during radiofrequency-guided prostate radiotherapy with implanted electromagnetic transponders when a daily endorectal balloon (ERB) is used. Methods and Materials: Intrafraction prostate motion from 24 patients in 787 treatment sessions was evaluated based on three-dimensional (3D), lateral, cranial-caudal (CC), and anterior-posterior (AP) displacements. The mean percentage of time with 3D, lateral, CC, and AP prostate displacements >2, 3, 4, 5, 6, 7, 8, 9, and 10 mm in 1 minute intervals was calculated for up to 6 minutes of treatment time. The correlation between the mean percentage of time with 3D prostate displacement >3 mm and treatment week was investigated. Results: The percentage of time with 3D prostate movement >2, 3, and 4 mm increased with elapsed treatment time (p < 0.05). Prostate movement >5 mm was independent of elapsed treatment time (p = 0.11). The overall mean time with prostate excursions >3 mm was 5%. Directional analysis showed negligible lateral prostate motion; AP and CC motion were comparable. The fraction of time with 3D prostate movement >3 mm did not depend on treatment week (p > 0.05) over a 4-minute mean treatment time. Conclusions: A daily endorectal balloon consistently stabilizes the prostate, preventing clinically significant displacement (>5 mm). A 3-mm internal margin may sufficiently account for 95% of intrafraction prostate movement for up to 6 minutes of treatment time. Directional analysis suggests that the lateral internal margin could be further reduced to 2 mm.
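
    The headline metric above, the mean percentage of time with displacement beyond a threshold per 1-minute interval, is straightforward to compute from the transponder trace. The sketch below assumes a hypothetical N x 3 displacement array and sampling rate and is not the authors' analysis code.

```python
import numpy as np

def percent_time_exceeding(displacements_mm, fs_hz, thresholds=(2, 3, 4, 5), interval_s=60):
    """Per 1-minute interval, percentage of samples whose 3D displacement exceeds each threshold
    (displacements_mm: N x 3 array of lateral, CC, AP displacements in mm)."""
    r3d = np.linalg.norm(displacements_mm, axis=1)        # 3D displacement magnitude
    samples_per_interval = int(interval_s * fs_hz)
    n_intervals = len(r3d) // samples_per_interval
    out = {}
    for th in thresholds:
        out[th] = [
            100.0 * np.mean(r3d[i * samples_per_interval:(i + 1) * samples_per_interval] > th)
            for i in range(n_intervals)
        ]
    return out
```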

  4. Real-Time Video Stylization Using Object Flows.

    Science.gov (United States)

    Lu, Cewu; Xiao, Yao; Tang, Chi-Keung

    2017-05-05

    We present a real-time video stylization system and demonstrate a variety of painterly styles rendered on real video inputs. The key technical contribution lies in the object flow, which is robust to inaccurate optical flow, unknown object transformations, and partial occlusion. Since object flows relate regions of the same object across frames, the shower-door effect can be effectively reduced where painterly strokes and textures are rendered on video objects. The construction of object flows is performed in real time and automatically after applying metric learning. To reduce temporal flickering, we extend bilateral filtering into motion bilateral filtering. We propose quantitative metrics to measure the temporal coherence of structures and textures in our stylized videos, and perform extensive experiments to compare our stylized results with baseline systems and prior works specializing in watercolor and abstraction.

  5. Development of an Earthquake Early Warning System Using Real-Time Strong Motion Signals

    Directory of Open Access Journals (Sweden)

    Hiroo Kanamori

    2008-01-01

    Full Text Available As urbanization progresses worldwide, earthquakes pose a serious threat to lives and properties for urban areas near major active faults on land or subduction zones offshore. Earthquake Early Warning (EEW) can be a useful tool for reducing earthquake hazards, if the spatial relation between cities and earthquake sources is favorable for such warning and their citizens are properly trained to respond to earthquake warning messages. An EEW system forewarns an urban area of forthcoming strong shaking, normally with a few sec to a few tens of sec of warning time, i.e., before the arrival of the destructive S-wave part of the strong ground motion. Even a few seconds of advance warning time will be useful for pre-programmed emergency measures for various critical facilities, such as rapid-transit vehicles and high-speed trains to avoid potential derailment; it will also be useful for orderly shutoff of gas pipelines to minimize fire hazards, controlled shutdown of high-technological manufacturing operations to reduce potential losses, and safe-guarding of computer facilities to avoid loss of vital databases. We explored a practical approach to EEW with the use of a ground-motion period parameter τc and a high-pass filtered vertical displacement amplitude parameter Pd from the initial 3 sec of the P waveforms. At a given site, an earthquake magnitude could be determined from τc and the peak ground-motion velocity (PGV) could be estimated from Pd. In this method, incoming strong motion acceleration signals are recursively converted to ground velocity and displacement. A P-wave trigger is constantly monitored. When a trigger occurs, τc and Pd are computed. The earthquake magnitude and the on-site ground-motion intensity could be estimated and the warning could be issued. In an ideal situation, such warnings would be available within 10 sec of the origin time of a large earthquake whose subsequent ground motion may last for tens of seconds.

  6. Design of real-time communication system for image recognition based colony picking instrument

    Science.gov (United States)

    Wang, Qun; Zhang, Rongfu; Yan, Hua; Wu, Huamin

    2017-11-01

    In order to achieve automated observation and picking of monoclonal colonies, an overall design and realization of a real-time communication system based on a high-throughput monoclonal auto-picking instrument is proposed. The real-time communication system is composed of a PC-PLC communication system and a Central Control Computer (CCC)-PLC communication system. A set of dedicated short-range communication protocols between the PC and PLC was developed based on the RS232 synchronous serial communication method. Furthermore, the system uses a SQL SERVER database to realize the data interaction between the PC and the CCC. Moreover, the communication between the CCC and the PC adopts Socket Ethernet communication based on the TCP/IP protocol, with a TCP full-duplex data channel to ensure real-time data exchange as well as improve system reliability and security. We tested the communication system using specially developed test software; the test results show that the system can realize communication in an efficient, safe and stable way between the PLC, PC and CCC, and maintain real-time control of the PLC and colony information collection.

  7. Real time eye tracking using Kalman extended spatio-temporal context learning

    Science.gov (United States)

    Munir, Farzeen; Minhas, Fayyaz ul Amir Asfar; Jalil, Abdul; Jeon, Moongu

    2017-06-01

    Real time eye tracking has numerous applications in human computer interaction such as a mouse cursor control in a computer system. It is useful for persons with muscular or motion impairments. However, tracking the movement of the eye is complicated by occlusion due to blinking, head movement, screen glare, rapid eye movements, etc. In this work, we present the algorithmic and construction details of a real time eye tracking system. Our proposed system is an extension of Spatio-Temporal context learning through Kalman Filtering. Spatio-Temporal Context Learning offers state of the art accuracy in general object tracking but its performance suffers due to object occlusion. Addition of the Kalman filter allows the proposed method to model the dynamics of the motion of the eye and provide robust eye tracking in cases of occlusion. We demonstrate the effectiveness of this tracking technique by controlling the computer cursor in real time by eye movements.
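
    A constant-velocity Kalman filter is one standard way to realize the prediction step used here: when the eye is occluded, for example by a blink, the tracker can fall back on the filter's prediction instead of the missing measurement. The sketch below shows such a filter for a 2D eye position; the state model and noise values are generic assumptions, not the parameters of the proposed tracker.

```python
import numpy as np

class ConstantVelocityKalman:
    """2D constant-velocity Kalman filter for smoothing/predicting an eye position."""

    def __init__(self, dt=1 / 30, process_var=1e-2, meas_var=4.0):
        self.x = np.zeros(4)                           # state: [px, py, vx, vy]
        self.P = np.eye(4) * 100.0                     # initial state uncertainty
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], float)      # constant-velocity motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)       # we only measure position
        self.Q = np.eye(4) * process_var
        self.R = np.eye(2) * meas_var

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                              # predicted eye position

    def update(self, z):
        # Skip this call when the eye is occluded (e.g., during a blink)
        y = np.asarray(z, float) - self.H @ self.x     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```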

  8. A method for real-time memory efficient implementation of blob detection in large images

    Directory of Open Access Journals (Sweden)

    Petrović Vladimir L.

    2017-01-01

    Full Text Available In this paper we propose a method for real-time blob detection in large images with low memory cost. The method is suitable for implementation on specialized parallel hardware such as multi-core platforms, FPGAs and ASICs. It uses parallelism to speed up the blob detection. The input image is divided into blocks of equal size, to which the maximally stable extremal regions (MSER) blob detector is applied in parallel. We propose the use of multiresolution analysis for detection of large blobs which are not detected by processing the small blocks. This method can find its place in many applications such as medical imaging and text recognition, as well as video surveillance or wide area motion imagery (WAMI). We also explored the possibility of using the detected blobs for feature-based image alignment. When large images are processed, our approach is 10 to over 20 times more memory efficient than the state-of-the-art hardware implementation of the MSER.
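
    The block-parallel idea can be prototyped in software with OpenCV's MSER detector run independently on image tiles. The sketch below uses threads as a stand-in for the parallel hardware blocks and omits the multiresolution pass for large blobs described above; the block size and the use of ThreadPoolExecutor are illustrative choices.

```python
import cv2
from concurrent.futures import ThreadPoolExecutor

def detect_blobs_blockwise(gray, block=512):
    """Run OpenCV's MSER detector on fixed-size tiles of a large grayscale image
    and return blob bounding boxes (x, y, w, h) in full-image coordinates."""
    def run_block(y, x):
        mser = cv2.MSER_create()                          # one detector instance per tile
        _, boxes = mser.detectRegions(gray[y:y + block, x:x + block])
        return [(x + bx, y + by, bw, bh) for bx, by, bw, bh in boxes]

    tiles = [(y, x) for y in range(0, gray.shape[0], block)
                    for x in range(0, gray.shape[1], block)]
    with ThreadPoolExecutor() as pool:                    # tiles processed in parallel
        results = pool.map(lambda t: run_block(*t), tiles)
    return [b for boxes in results for b in boxes]
```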

  9. Real-Time Laser Ultrasound Tomography for Profilometry of Solids

    Science.gov (United States)

    Zarubin, V. P.; Bychkov, A. S.; Karabutov, A. A.; Simonova, V. A.; Kudinov, I. A.; Cherepetskaya, E. B.

    2018-01-01

    We studied the possibility of applying laser ultrasound tomography for profilometry of solids. The proposed approach provides high spatial resolution and efficiency, as well as profilometry of contaminated objects or objects submerged in liquids. Algorithms for the construction of tomograms and recognition of the profiles of the studied objects using the NVIDIA CUDA parallel programming technology are proposed. A prototype of the real-time laser ultrasound profilometer was used to obtain the profiles of solid surfaces of revolution. The proposed method allows the real-time determination of the surface position for cylindrical objects with an approximation accuracy of up to 16 μm.

  10. Measurement of time delays in gated radiotherapy for realistic respiratory motions

    International Nuclear Information System (INIS)

    Chugh, Brige P.; Quirk, Sarah; Conroy, Leigh; Smith, Wendy L.

    2014-01-01

    Purpose: Gated radiotherapy is used to reduce internal motion margins, escalate target dose, and limit normal tissue dose; however, its temporal accuracy is limited. Beam-on and beam-off time delays can lead to treatment inefficiencies and/or geographic misses; therefore, AAPM Task Group 142 recommends verifying the temporal accuracy of gating systems. Many groups use sinusoidal phantom motion for this, under the tacit assumption that use of sinusoidal motion for determining time delays produces negligible error. The authors test this assumption by measuring gating time delays for several realistic motion shapes with increasing degrees of irregularity. Methods: Time delays were measured on a linear accelerator with a real-time position management system (Varian TrueBeam with RPM system version 1.7.5) for seven motion shapes: regular sinusoidal; regular realistic-shape; large (40%) and small (10%) variations in amplitude; large (40%) variations in period; small (10%) variations in both amplitude and period; and baseline drift (30%). Film streaks of radiation exposure were generated for each motion shape using a programmable motion phantom. Beam-on and beam-off time delays were determined from the difference between the expected and observed streak length. Results: For the system investigated, all sine, regular realistic-shape, and slightly irregular amplitude variation motions had beam-off and beam-on time delays within the AAPM recommended limit of less than 100 ms. In phase-based gating, even small variations in period resulted in some time delays greater than 100 ms. Considerable time delays over 1 s were observed with highly irregular motion. Conclusions: Sinusoidal motion shapes can be considered a reasonable approximation to the more complex and slightly irregular shapes of realistic motion. When using phase-based gating with predictive filters even small variations in period can result in time delays over 100 ms. Clinical use of these systems for patients

  11. Measurement of time delays in gated radiotherapy for realistic respiratory motions

    Energy Technology Data Exchange (ETDEWEB)

    Chugh, Brige P.; Quirk, Sarah; Conroy, Leigh; Smith, Wendy L., E-mail: Wendy.Smith@albertahealthservices.ca [Department of Medical Physics, Tom Baker Cancer Centre, Calgary, Alberta T2N 4N2 (Canada)

    2014-09-15

    Purpose: Gated radiotherapy is used to reduce internal motion margins, escalate target dose, and limit normal tissue dose; however, its temporal accuracy is limited. Beam-on and beam-off time delays can lead to treatment inefficiencies and/or geographic misses; therefore, AAPM Task Group 142 recommends verifying the temporal accuracy of gating systems. Many groups use sinusoidal phantom motion for this, under the tacit assumption that use of sinusoidal motion for determining time delays produces negligible error. The authors test this assumption by measuring gating time delays for several realistic motion shapes with increasing degrees of irregularity. Methods: Time delays were measured on a linear accelerator with a real-time position management system (Varian TrueBeam with RPM system version 1.7.5) for seven motion shapes: regular sinusoidal; regular realistic-shape; large (40%) and small (10%) variations in amplitude; large (40%) variations in period; small (10%) variations in both amplitude and period; and baseline drift (30%). Film streaks of radiation exposure were generated for each motion shape using a programmable motion phantom. Beam-on and beam-off time delays were determined from the difference between the expected and observed streak length. Results: For the system investigated, all sine, regular realistic-shape, and slightly irregular amplitude variation motions had beam-off and beam-on time delays within the AAPM recommended limit of less than 100 ms. In phase-based gating, even small variations in period resulted in some time delays greater than 100 ms. Considerable time delays over 1 s were observed with highly irregular motion. Conclusions: Sinusoidal motion shapes can be considered a reasonable approximation to the more complex and slightly irregular shapes of realistic motion. When using phase-based gating with predictive filters even small variations in period can result in time delays over 100 ms. Clinical use of these systems for patients

  12. Microcomputer-based real-time optical signal processing system

    Science.gov (United States)

    Yu, F. T. S.; Cao, M. F.; Ludman, J. E.

    1986-01-01

    A microcomputer-based real-time programmable optical signal processing system utilizing a Magneto-Optic Spatial Light Modulator (MOSLM) and a Liquid Crystal Light Valve (LCLV) is described. This system can perform a myriad of complicated optical operations, such as image correlation, image subtraction, matrix multiplication and many others. The important assets of this proposed system are its programmability and its capability for real-time addressing. The design specifications and the progress toward practical implementation of this proposed system are discussed. Some preliminary experimental demonstrations are conducted. The feasible applications of this proposed system to image correlation for optical pattern recognition, image subtraction for IC chip inspection and matrix multiplication for optical computing are demonstrated.

  13. Developments in real-time monitoring for geologic hazard warnings (Invited)

    Science.gov (United States)

    Leith, W. S.; Mandeville, C. W.; Earle, P. S.

    2013-12-01

    Real-time data from global, national and local sensor networks enable prompt alerts and warnings of earthquakes, tsunami, volcanic eruptions, geomagnetic storms, broad-scale crustal deformation and landslides. State-of-the-art seismic systems can locate and evaluate earthquake sources in seconds, enabling 'earthquake early warnings' to be broadcast ahead of the damaging surface waves so that protective actions can be taken. Strong motion monitoring systems in buildings now support near-real-time structural damage detection systems, and in quiet times can be used for state-of-health monitoring. High-rate GPS data are being integrated with seismic strong motion data, allowing accurate determination of earthquake displacements in near-real time. GPS data, combined with rainfall, groundwater and geophone data, are now used for near-real-time landslide monitoring and warnings. Real-time sea-floor water pressure data are key for assessing tsunami generation by large earthquakes. For monitoring remote volcanoes that lack local ground-based instrumentation, the USGS uses new technologies such as infrasound arrays and the worldwide lightning detection array to detect eruptions in progress. A new real-time UV-camera system for measuring the two dimensional SO2 flux from volcanic plumes will allow correlations with other volcano monitoring data streams to yield fundamental data on changes in gas flux as an eruption precursor, and how magmas de-gas prior to and during eruptions. Precision magnetic field data support the generation of real-time indices of geomagnetic disturbances (Dst, K and others), and can be used to model electrical currents in the crust and bulk power system. Ground-induced electrical current monitors are used to track those currents so that power grids can be effectively managed during geomagnetic storms. Beyond geophysical sensor data, USGS is using social media to rapidly detect possible earthquakes and to collect firsthand accounts of the impacts of

  14. Fast leaf-fitting with generalized underdose/overdose constraints for real-time MLC tracking

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Douglas, E-mail: douglas.moore@utsouthwestern.edu; Sawant, Amit [Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, Texas 75390 (United States); Ruan, Dan [Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2016-01-15

    Purpose: Real-time multileaf collimator (MLC) tracking is a promising approach to the management of intrafractional tumor motion during thoracic and abdominal radiotherapy. MLC tracking is typically performed in two steps: transforming a planned MLC aperture in response to patient motion and refitting the leaves to the newly generated aperture. One of the challenges of this approach is the inability to faithfully reproduce the desired motion-adapted aperture. This work presents an optimization-based framework with which to solve this leaf-fitting problem in real-time. Methods: This optimization framework is designed to facilitate the determination of leaf positions in real-time while accounting for the trade-off between coverage of the PTV and avoidance of organs at risk (OARs). Derived within this framework, an algorithm is presented that can account for general linear transformations of the planned MLC aperture, particularly 3D translations and in-plane rotations. This algorithm, together with algorithms presented in Sawant et al. [“Management of three-dimensional intrafraction motion through real-time DMLC tracking,” Med. Phys. 35, 2050–2061 (2008)] and Ruan and Keall [Presented at the 2011 IEEE Power Engineering and Automation Conference (PEAM) (2011) (unpublished)], was applied to apertures derived from eight lung intensity modulated radiotherapy plans subjected to six-degree-of-freedom motion traces acquired from lung cancer patients using the kilovoltage intrafraction monitoring system developed at the University of Sydney. A quality-of-fit metric was defined, and each algorithm was evaluated in terms of quality-of-fit and computation time. Results: This algorithm is shown to perform leaf-fittings of apertures, each with 80 leaf pairs, in 0.226 ms on average as compared to 0.082 and 64.2 ms for the algorithms of Sawant et al. and of Ruan and Keall, respectively. The algorithm shows approximately 12% improvement in quality-of-fit over the Sawant et al

  15. Fast leaf-fitting with generalized underdose/overdose constraints for real-time MLC tracking

    International Nuclear Information System (INIS)

    Moore, Douglas; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: Real-time multileaf collimator (MLC) tracking is a promising approach to the management of intrafractional tumor motion during thoracic and abdominal radiotherapy. MLC tracking is typically performed in two steps: transforming a planned MLC aperture in response to patient motion and refitting the leaves to the newly generated aperture. One of the challenges of this approach is the inability to faithfully reproduce the desired motion-adapted aperture. This work presents an optimization-based framework with which to solve this leaf-fitting problem in real-time. Methods: This optimization framework is designed to facilitate the determination of leaf positions in real-time while accounting for the trade-off between coverage of the PTV and avoidance of organs at risk (OARs). Derived within this framework, an algorithm is presented that can account for general linear transformations of the planned MLC aperture, particularly 3D translations and in-plane rotations. This algorithm, together with algorithms presented in Sawant et al. [“Management of three-dimensional intrafraction motion through real-time DMLC tracking,” Med. Phys. 35, 2050–2061 (2008)] and Ruan and Keall [Presented at the 2011 IEEE Power Engineering and Automation Conference (PEAM) (2011) (unpublished)], was applied to apertures derived from eight lung intensity modulated radiotherapy plans subjected to six-degree-of-freedom motion traces acquired from lung cancer patients using the kilovoltage intrafraction monitoring system developed at the University of Sydney. A quality-of-fit metric was defined, and each algorithm was evaluated in terms of quality-of-fit and computation time. Results: This algorithm is shown to perform leaf-fittings of apertures, each with 80 leaf pairs, in 0.226 ms on average as compared to 0.082 and 64.2 ms for the algorithms of Sawant et al. and of Ruan and Keall, respectively. The algorithm shows approximately 12% improvement in quality-of-fit over the Sawant et al

  16. Real-time animation software for customized training to use motor prosthetic systems.

    Science.gov (United States)

    Davoodi, Rahman; Loeb, Gerald E

    2012-03-01

    Research on control of human movement and development of tools for restoration and rehabilitation of movement after spinal cord injury and amputation can benefit greatly from software tools for creating precisely timed animation sequences of human movement. Despite its ability to create sophisticated animation and high-quality rendering, existing animation software is not adapted for application to neural prostheses and rehabilitation of human movement. We have developed a software tool known as MSMS (MusculoSkeletal Modeling Software) that can be used to develop models of human or prosthetic limbs and the objects with which they interact and to animate their movement using motion data from a variety of offline and online sources. The motion data can be read from a motion file containing synthesized motion data or recordings from a motion capture system. Alternatively, motion data can be streamed online from a real-time motion capture system, a physics-based simulation program, or any program that can produce real-time motion data. Further, animation sequences of daily life activities can be constructed using the intuitive user interface of Microsoft's PowerPoint software. The latter allows expert and nonexpert users alike to assemble primitive movements into a complex motion sequence with precise timing by simply arranging the order of the slides and editing their properties in PowerPoint. The resulting motion sequence can be played back in an open-loop manner for demonstration and training or in closed-loop virtual reality environments where the timing and speed of animation depends on user inputs. These versatile animation utilities can be used in any application that requires precisely timed animations but they are particularly suited for research and rehabilitation of movement disorders. MSMS's modeling and animation tools are routinely used in a number of research laboratories around the country to study the control of movement and to develop and test

  17. Evaluation of the Effectiveness of the Stereotactic Body Frame in Reducing Respiratory Intrafractional Organ Motion Using the Real-Time Tumor-Tracking Radiotherapy System

    International Nuclear Information System (INIS)

    Bengua, Gerard; Ishikawa, Masayori; Sutherland, Kenneth; Horita, Kenji; Yamazaki, Rie; Fujita, Katsuhisa; Onimaru, Rikiya; Katoh, Noriwo; Inoue, Tetsuya; Onodera, Shunsuke; Shirato, Hiroki

    2010-01-01

    Purpose: To evaluate the effectiveness of the stereotactic body frame (SBF), with or without a diaphragm press or a breathing cycle monitoring device (Abches), in controlling the range of lung tumor motion, by tracking the real-time position of fiducial markers. Methods and Materials: The trajectories of gold markers in the lung were tracked with the real-time tumor-tracking radiotherapy system. The SBF was used for patient immobilization and the diaphragm press and Abches were used to actively control breathing and for self-controlled respiration, respectively. Tracking was performed in five setups, with and without immobilization and respiration control. The results were evaluated using the effective range, which was defined as the range that includes 95% of all the recorded marker positions in each setup. Results: The SBF, with or without a diaphragm press or Abches, did not yield effective ranges of marker motion which were significantly different from setups that did not use these materials. The differences in the effective marker ranges in the upper lobes for all the patient setups were less than 1mm. Larger effective ranges were obtained for the markers in the middle or lower lobes. Conclusion: The effectiveness of controlling respiratory-induced organ motion by using the SBF+diaphragm press or SBF + Abches patient setups were highly dependent on the individual patient reaction to the use of these materials and the location of the markers. They may be considered for lung tumors in the lower lobes, but are not necessary for tumors in the upper lobes.
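
    The evaluation metric above, the effective range containing 95% of all recorded marker positions, can be realized, for example, as the central 95% interval per axis. The sketch below makes that symmetric-interval assumption, which is one of several ways such a range could be defined, and is not the authors' code.

```python
import numpy as np

def effective_range(marker_positions_mm, coverage=0.95):
    """Per-axis range containing the central `coverage` fraction of recorded marker
    positions (marker_positions_mm: N x 3 array, e.g. LR/CC/AP in mm)."""
    lower = (1.0 - coverage) / 2.0 * 100.0
    upper = 100.0 - lower
    lo = np.percentile(marker_positions_mm, lower, axis=0)
    hi = np.percentile(marker_positions_mm, upper, axis=0)
    return hi - lo        # effective range per axis, in mm
```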

  18. Real-time Monitoring of High Intensity Focused Ultrasound (HIFU) Ablation of In Vitro Canine Livers Using Harmonic Motion Imaging for Focused Ultrasound (HMIFU).

    Science.gov (United States)

    Grondin, Julien; Payen, Thomas; Wang, Shutao; Konofagou, Elisa E

    2015-11-03

    Harmonic Motion Imaging for Focused Ultrasound (HMIFU) is a technique that can perform and monitor high-intensity focused ultrasound (HIFU) ablation. An oscillatory motion is generated at the focus of a 93-element and 4.5 MHz center frequency HIFU transducer by applying a 25 Hz amplitude-modulated signal using a function generator. A 64-element and 2.5 MHz imaging transducer with 68kPa peak pressure is confocally placed at the center of the HIFU transducer to acquire the radio-frequency (RF) channel data. In this protocol, real-time monitoring of thermal ablation using HIFU with an acoustic power of 7 W on canine livers in vitro is described. HIFU treatment is applied on the tissue during 2 min and the ablated region is imaged in real-time using diverging or plane wave imaging up to 1,000 frames/second. The matrix of RF channel data is multiplied by a sparse matrix for image reconstruction. The reconstructed field of view is of 90° for diverging wave and 20 mm for plane wave imaging and the data are sampled at 80 MHz. The reconstruction is performed on a Graphical Processing Unit (GPU) in order to image in real-time at a 4.5 display frame rate. 1-D normalized cross-correlation of the reconstructed RF data is used to estimate axial displacements in the focal region. The magnitude of the peak-to-peak displacement at the focal depth decreases during the thermal ablation which denotes stiffening of the tissue due to the formation of a lesion. The displacement signal-to-noise ratio (SNRd) at the focal area for plane wave was 1.4 times higher than for diverging wave showing that plane wave imaging appears to produce better displacement maps quality for HMIFU than diverging wave imaging.
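
    The displacement estimation step described above, 1-D normalized cross-correlation of the reconstructed RF data, can be sketched as a windowed lag search between pre- and post-push RF lines. The window length, overlap, and search range below are illustrative, and the conversion from sample lag to depth (c / (2·fs)) is left to the caller.

```python
import numpy as np

def axial_displacement(rf_pre, rf_post, window=128, overlap=0.5, max_lag=32):
    """Estimate axial displacement (in samples) between two RF lines using
    1-D normalized cross-correlation over sliding windows."""
    step = int(window * (1.0 - overlap))
    shifts = []
    for start in range(0, len(rf_pre) - window - max_lag, step):
        ref = rf_pre[start:start + window]
        ref = (ref - ref.mean()) / (ref.std() + 1e-12)        # zero-mean, unit-variance window
        best_lag, best_cc = 0, -np.inf
        for lag in range(-max_lag, max_lag + 1):
            if start + lag < 0:
                continue
            seg = rf_post[start + lag:start + lag + window]
            if len(seg) < window:
                continue
            seg = (seg - seg.mean()) / (seg.std() + 1e-12)
            cc = float(np.dot(ref, seg)) / window             # normalized correlation coefficient
            if cc > best_cc:
                best_cc, best_lag = cc, lag
        shifts.append(best_lag)                               # lag of maximum correlation
    return np.array(shifts)
```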

  19. Real-Time Capable Micro-Doppler Signature Decomposition of Walking Human Limbs

    OpenAIRE

    Abdulatif, Sherif; Aziz, Fady; Kleiner, Bernhard; Schneider, Urs

    2017-01-01

    Unique micro-Doppler signature ($\boldsymbol{\mu}$-D) of a human body motion can be analyzed as the superposition of different body parts $\boldsymbol{\mu}$-D signatures. Extraction of human limbs $\boldsymbol{\mu}$-D signatures in real-time can be used to detect, classify and track human motion, especially for safety applications. In this paper, two methods are combined to simulate $\boldsymbol{\mu}$-D signatures of a walking human. Furthermore, a novel limbs $\mu$-D signature time independent...

  20. SU-G-BRA-01: A Real-Time Tumor Localization and Guidance Platform for Radiotherapy Using US and MRI

    International Nuclear Information System (INIS)

    Bednarz, B; Culberson, W; Bassetti, M; McMillan, A; Matrosic, C; Shepard, A; Zagzebski, J; Smith, S; Lee, W; Mills, D; Cao, K; Wang, B; Fiveland, E; Darrow, R; Foo, T

    2016-01-01

    Purpose: To develop and validate a real-time motion management platform for radiotherapy that directly tracks tumor motion using ultrasound and MRI. This will be a cost-effective and non-invasive real-time platform combining the excellent temporal resolution of ultrasound with the excellent soft-tissue contrast of MRI. Methods: A 4D planar ultrasound acquisition during the treatment is coupled to a pre-treatment calibration training image set consisting of a simultaneous 4D ultrasound and 4D MRI acquisition. The image sets will be rapidly matched using advanced image and signal processing algorithms, allowing the display of virtual MR images of the tumor/organ motion in real-time from an ultrasound acquisition. Results: The completion of this work will result in several innovations including: a (2D) patch-like, MR and LINAC compatible 4D planar ultrasound transducer that is electronically steerable for hands-free operation to provide real-time virtual MR and ultrasound imaging for motion management during radiation therapy; a multi-modal tumor localization strategy that uses ultrasound and MRI; and fast and accurate image processing algorithms that provide real-time information about the motion and location of tumor or related soft-tissue structures within the patient. Conclusion: If successful, the proposed approach will provide real-time guidance for radiation therapy without degrading image or treatment plan quality. The approach would be equally suitable for image-guided proton beam or heavy ion-beam therapy. This work is partially funded by NIH grant R01CA190298.

  1. SU-G-BRA-01: A Real-Time Tumor Localization and Guidance Platform for Radiotherapy Using US and MRI

    Energy Technology Data Exchange (ETDEWEB)

    Bednarz, B; Culberson, W; Bassetti, M; McMillan, A; Matrosic, C; Shepard, A; Zagzebski, J [University of Wisconsin, Madison, WI (United States); Smith, S; Lee, W; Mills, D; Cao, K; Wang, B; Fiveland, E; Darrow, R; Foo, T [GE Global Research Center, Niskayuna, NY (United States)

    2016-06-15

    Purpose: To develop and validate a real-time motion management platform for radiotherapy that directly tracks tumor motion using ultrasound and MRI. This will be a cost-effective and non-invasive real-time platform combining the excellent temporal resolution of ultrasound with the excellent soft-tissue contrast of MRI. Methods: A 4D planar ultrasound acquisition during the treatment is coupled to a pre-treatment calibration training image set consisting of a simultaneous 4D ultrasound and 4D MRI acquisition. The image sets will be rapidly matched using advanced image and signal processing algorithms, allowing the display of virtual MR images of the tumor/organ motion in real-time from an ultrasound acquisition. Results: The completion of this work will result in several innovations including: a (2D) patch-like, MR and LINAC compatible 4D planar ultrasound transducer that is electronically steerable for hands-free operation to provide real-time virtual MR and ultrasound imaging for motion management during radiation therapy; a multi-modal tumor localization strategy that uses ultrasound and MRI; and fast and accurate image processing algorithms that provide real-time information about the motion and location of tumor or related soft-tissue structures within the patient. Conclusion: If successful, the proposed approach will provide real-time guidance for radiation therapy without degrading image or treatment plan quality. The approach would be equally suitable for image-guided proton beam or heavy ion-beam therapy. This work is partially funded by NIH grant R01CA190298.

  2. Real-time systems

    OpenAIRE

    Badr, Salah M.; Bruztman, Donald P.; Nelson, Michael L.; Byrnes, Ronald Benton

    1992-01-01

    This paper presents an introduction to the basic issues involved in real-time systems. Both real-time operating systems and real-time programming languages are explored. Concurrent programming and process synchronization and communication are also discussed. The real-time requirements of the Naval Postgraduate School Autonomous Underwater Vehicle (AUV) are then examined. Autonomous underwater vehicle (AUV), hard real-time system, real-time operating system, real-time programming language, real-time sy...

  3. An Indoor Scene Recognition-Based 3D Registration Mechanism for Real-Time AR-GIS Visualization in Mobile Applications

    Directory of Open Access Journals (Sweden)

    Wei Ma

    2018-03-01

    Full Text Available Mobile Augmented Reality (MAR) systems are becoming ideal platforms for visualization, permitting users to better comprehend and interact with spatial information. This technological development, in turn, has prompted efforts to enhance mechanisms for registering virtual objects in real-world contexts. Most existing AR 3D registration techniques lack the scene recognition capabilities needed to accurately describe the positioning of virtual objects in scenes representing reality. Moreover, the application of such registration methods in indoor AR-GIS systems is further impeded by the limited capacity of these systems to detect the geometry and semantic information in indoor environments. In this paper, we propose a novel method for fusing virtual objects and indoor scenes, based on indoor scene recognition technology. To accomplish scene fusion in AR-GIS, we first detect key points in reference images. Then, we perform interior layout extraction using a Fully Connected Networks (FCN) algorithm to acquire layout coordinate points for the tracking targets. We detect and recognize the target scene in a video frame image to track targets and estimate the camera pose. In this method, virtual 3D objects are fused precisely to a real scene, according to the camera pose and the previously extracted layout coordinate points. Our results demonstrate that this approach enables accurate fusion of virtual objects with representations of real-world indoor environments. Based on this fusion technique, users can better grasp virtual three-dimensional representations on an AR-GIS platform.

  4. Real-Time Visualization System for Deep-Sea Surveying

    Directory of Open Access Journals (Sweden)

    Yujie Li

    2014-01-01

    Full Text Available Remote robotic exploration holds vast potential for gaining knowledge about extreme environments that are difficult for humans to access. In the last two decades, various underwater devices were developed for detecting mines and mine-like objects in the deep-sea environment. However, recent equipment suffers from problems such as poor accuracy in mineral object detection, a lack of real-time processing, and low resolution of underwater video frames. Consequently, underwater object recognition is a difficult task, because the physical properties of the medium seriously distort the captured video frames. In this paper, we consider the use of modern image processing methods to determine mineral locations and to recognize minerals with low computational complexity. We first analyze recent underwater imaging models and propose a novel underwater optical imaging model, which is much closer to the light propagation model in the underwater environment. In our imaging system, we remove the electrical noise by dual-tree complex wavelet transform. We then correct the nonuniform illumination of artificial lights by a fast guided trilateral bilateral filter and recover the image color through automatic color equalization. Finally, a shape-based mineral recognition algorithm is proposed for underwater object detection. These methods are designed for real-time execution on limited-memory platforms. In our experience, this pipeline is suitable for detecting underwater objects in practice. The initial results are presented and experiments demonstrate the effectiveness of the proposed real-time visualization system.
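
    As a rough, illustrative stand-in for the enhancement pipeline summarized above (denoising, then illumination/colour correction, then recognition), the sketch below chains a median filter and a gray-world colour balance using NumPy and SciPy. It is not the authors' dual-tree complex wavelet transform, guided trilateral bilateral filter, or automatic color equalization; names and parameters are placeholders.

```python
import numpy as np
from scipy.ndimage import median_filter

def enhance_underwater(frame):
    """Very rough stand-in for the enhancement pipeline: denoise each channel,
    then apply a gray-world colour balance.  frame: float RGB array in [0, 1]."""
    # 1) simple noise suppression (the paper uses a dual-tree complex wavelet transform)
    den = np.stack([median_filter(frame[..., c], size=3) for c in range(3)], axis=-1)
    # 2) gray-world colour equalisation (the paper uses automatic color equalization)
    channel_means = den.reshape(-1, 3).mean(axis=0)
    balanced = den * (channel_means.mean() / np.maximum(channel_means, 1e-6))
    return np.clip(balanced, 0.0, 1.0)

# toy usage with a random frame standing in for an underwater video frame
frame = np.random.default_rng(1).random((120, 160, 3))
out = enhance_underwater(frame)
```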

  5. An Analysis of Intrinsic and Extrinsic Hand Muscle EMG for Improved Pattern Recognition Control.

    Science.gov (United States)

    Adewuyi, Adenike A; Hargrove, Levi J; Kuiken, Todd A

    2016-04-01

    Pattern recognition control combined with surface electromyography (EMG) from the extrinsic hand muscles has shown great promise for control of multiple prosthetic functions for transradial amputees. There is, however, a need to adapt this control method when implemented for partial-hand amputees, who possess both a functional wrist and information-rich residual intrinsic hand muscles. We demonstrate that combining EMG data from both intrinsic and extrinsic hand muscles to classify hand grasps and finger motions allows up to 19 classes of hand grasps and individual finger motions to be decoded, with an accuracy of 96% for non-amputees and 85% for partial-hand amputees. We evaluated real-time pattern recognition control of three hand motions in seven different wrist positions. We found that a system trained with both intrinsic and extrinsic muscle EMG data, collected while statically and dynamically varying wrist position, increased completion rates from 73% to 96% for partial-hand amputees and from 88% to 100% for non-amputees when compared to a system trained with only extrinsic muscle EMG data collected in a neutral wrist position. Our study shows that incorporating intrinsic muscle EMG data and wrist motion can significantly improve the robustness of pattern recognition control for application to partial-hand prosthetic control.
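
    A minimal sketch of the kind of pattern recognition pipeline described above: time-domain features are extracted from every EMG channel (intrinsic and extrinsic alike) and fed to a linear discriminant classifier. The feature set, the LDA choice, the channel counts, and the synthetic data are assumptions for illustration, not the authors' exact configuration; scikit-learn and NumPy are assumed available.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def td_features(window):
    """Common time-domain EMG features per channel: mean absolute value,
    waveform length and zero crossings.  window shape: (samples, channels)."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    zc = np.sum(np.diff(np.signbit(window).astype(np.int8), axis=0) != 0, axis=0)
    return np.concatenate([mav, wl, zc])

rng = np.random.default_rng(0)
n_windows, n_samples, n_extrinsic, n_intrinsic = 300, 200, 6, 4
# synthetic windows standing in for combined extrinsic + intrinsic recordings
emg = rng.standard_normal((n_windows, n_samples, n_extrinsic + n_intrinsic))
labels = rng.integers(0, 3, size=n_windows)      # e.g. three hand motions

X = np.array([td_features(w) for w in emg])      # features from *all* channels
clf = LinearDiscriminantAnalysis().fit(X[:200], labels[:200])
print("held-out accuracy:", clf.score(X[200:], labels[200:]))
```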

  6. The dosimetric impact of inversely optimized arc radiotherapy plan modulation for real-time dynamic MLC tracking delivery

    DEFF Research Database (Denmark)

    Falk, Marianne; Larsson, Tobias; Keall, P.

    2012-01-01

    Purpose: Real-time dynamic multileaf collimator (MLC) tracking for management of intrafraction tumor motion can be challenging for highly modulated beams, as the leaves need to travel far to adjust for target motion perpendicular to the leaf travel direction. The plan modulation can be reduced......-to-peak displacement of 2 cm and a cycle time of 6 s. The delivery was adjusted to the target motion using MLC tracking, guided in real-time by an infrared optical system. The dosimetric results were evaluated using gamma index evaluation with static target measurements as reference. Results: The plan quality...

  7. Near real-time shadow detection and removal in aerial motion imagery application

    Science.gov (United States)

    Silva, Guilherme F.; Carneiro, Grace B.; Doth, Ricardo; Amaral, Leonardo A.; Azevedo, Dario F. G. de

    2018-06-01

    This work presents a method to automatically detect and remove shadows in urban aerial images and its application in an aerospace remote monitoring system requiring near real-time processing. Our detection method generates shadow masks and is accelerated by GPU programming. To obtain the shadow masks, we converted images from RGB to the CIELCh model, calculated a modified Specthem ratio, and applied multilevel thresholding. Morphological operations were used to reduce shadow mask noise. The shadow masks are used in the process of removing shadows from the original images using the illumination ratio of the shadow/non-shadow regions. We obtained a shadow detection accuracy of around 93% and shadow removal results comparable to the state of the art while maintaining execution time under real-time constraints.
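
    The sketch below is a toy stand-in for the shadow-mask stage described above: it flags dark, blue-tinted pixels and cleans the mask with a morphological opening. The actual method converts to CIELCh, computes a modified Specthem ratio, and applies multilevel thresholding; the ratio and thresholds here are illustrative assumptions (NumPy and SciPy assumed).

```python
import numpy as np
from scipy.ndimage import binary_opening

def shadow_mask(rgb):
    """Toy shadow detector: dark, blue-dominant regions are flagged as shadow.
    The paper instead uses a CIELCh conversion, a modified Specthem ratio and
    multilevel thresholding; this only mimics the overall shape of that stage."""
    rgb = rgb.astype(np.float64)
    luminance = rgb.mean(axis=-1)
    ratio = rgb[..., 2] / (luminance + 1e-6)      # blue tends to dominate in shadow
    candidate = (luminance < np.percentile(luminance, 30)) & (ratio > 1.05)
    # morphological opening to reduce mask noise, as in the paper
    return binary_opening(candidate, structure=np.ones((3, 3)))

mask = shadow_mask(np.random.default_rng(2).random((100, 100, 3)))
```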

  8. Automatic online and real-time tumour motion monitoring during stereotactic liver treatments on a conventional linac by combined optical and sparse monoscopic imaging with kilovoltage x-rays (COSMIK)

    Science.gov (United States)

    Bertholet, Jenny; Toftegaard, Jakob; Hansen, Rune; Worm, Esben S.; Wan, Hanlin; Parikh, Parag J.; Weber, Britta; Høyer, Morten; Poulsen, Per R.

    2018-03-01

    The purpose of this study was to develop, validate and clinically demonstrate fully automatic tumour motion monitoring on a conventional linear accelerator by combined optical and sparse monoscopic imaging with kilovoltage x-rays (COSMIK). COSMIK combines auto-segmentation of implanted fiducial markers in cone-beam computed tomography (CBCT) projections and intra-treatment kV images with simultaneous streaming of an external motion signal. A pre-treatment CBCT is acquired with simultaneous recording of the motion of an external marker block on the abdomen. The 3-dimensional (3D) marker motion during the CBCT is estimated from the auto-segmented positions in the projections and used to optimize an external correlation model (ECM) of internal motion as a function of external motion. During treatment, the ECM estimates the internal motion from the external motion at 20 Hz. KV images are acquired every 3 s, auto-segmented, and used to update the ECM for baseline shifts between internal and external motion. The COSMIK method was validated using Calypso-recorded internal tumour motion with simultaneous camera-recorded external motion for 15 liver stereotactic body radiotherapy (SBRT) patients. The validation included phantom experiments and simulations hereof for 12 fractions and further simulations for 42 fractions. The simulations compared the accuracy of COSMIK with ECM-based monitoring without model updates and with model updates based on stereoscopic imaging as well as continuous kilovoltage intrafraction monitoring (KIM) at 10 Hz without an external signal. Clinical real-time tumour motion monitoring with COSMIK was performed offline for 14 liver SBRT patients (41 fractions) and online for one patient (two fractions). The mean 3D root-mean-square error for the four monitoring methods was 1.61 mm (COSMIK), 2.31 mm (ECM without updates), 1.49 mm (ECM with stereoscopic updates) and 0.75 mm (KIM). COSMIK is the first combined kV/optical real-time motion
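
    The external correlation model (ECM) idea described above - predict internal 3D motion from an external surrogate signal and correct the baseline when sparse kV images disagree - can be sketched as a simple linear model. The code below assumes NumPy, uses synthetic signals, and applies an illustrative gain-based baseline update; it is a simplified stand-in, not the published COSMIK model.

```python
import numpy as np

class ExternalCorrelationModel:
    """Minimal linear external-correlation model: internal 3D position is
    modelled as slope * external + offset, with the offset nudged whenever a
    sparse kV observation reveals a baseline shift."""

    def fit(self, external, internal):
        # external: (N,) surrogate signal; internal: (N, 3) positions from the CBCT period
        X = np.column_stack([external, np.ones_like(external)])
        coef, *_ = np.linalg.lstsq(X, internal, rcond=None)
        self.slope, self.offset = coef[0], coef[1]
        return self

    def predict(self, external_value):
        return self.slope * external_value + self.offset

    def update_baseline(self, external_value, kv_position, gain=0.5):
        # move the offset toward the position measured in the latest kV image
        residual = kv_position - self.predict(external_value)
        self.offset = self.offset + gain * residual

rng = np.random.default_rng(0)
ext = np.sin(np.linspace(0, 20, 400))                 # external marker signal
true_int = np.outer(ext, [1.0, 0.3, 8.0])             # internal 3D motion (mm)
ecm = ExternalCorrelationModel().fit(ext, true_int)
ecm.update_baseline(0.2, ecm.predict(0.2) + np.array([0.0, 0.0, 2.0]))  # 2 mm SI shift
```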

  9. Real-time image mosaicing for medical applications.

    Science.gov (United States)

    Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth

    2007-01-01

    In this paper we describe the development of a robotically-assisted image mosaicing system for medical applications. The processing occurs in real-time due to a fast initial image alignment provided by robotic position sensing. Near-field imaging, defined by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.

  10. Real-time control of electronic motion: Application to NaI

    DEFF Research Database (Denmark)

    Grønager, Michael; Henriksen, Niels Engholm

    1998-01-01

    We study theoretically the electronic and nuclear dynamics in NaI. After a femtosecond pulse has prepared a wave packet in the first excited state, we consider the adiabatic and the nonadiabatic electronic dynamics and demonstrate explicitly that a nonstationary electron is created in Na... ...+ + I- depends on the electron distribution (i.e., where the electron "sits") prior to the time where the bond is broken by a subpicosecond half-cycle unipolar electromagnetic pulse. Thus we control, in real time, which nucleus one of the valence electrons will follow after the bond is broken. (C) 1998

  11. Planning Study Comparison of Real-Time Target Tracking and Four-Dimensional Inverse Planning for Managing Patient Respiratory Motion

    International Nuclear Information System (INIS)

    Zhang Peng; Hugo, Geoffrey D.; Yan Di

    2008-01-01

    Purpose: Real-time target tracking (RT-TT) and four-dimensional inverse planning (4D-IP) are two potential methods to manage respiratory target motion. In this study, we evaluated each method using the cumulative dose-volume criteria in lung cancer radiotherapy. Methods and Materials: Respiration-correlated computed tomography scans were acquired for 4 patients. Deformable image registration was applied to generate a displacement mapping for each phase image of the respiration-correlated computed tomography images. First, the dose distribution for the organs of interest obtained from an idealized RT-TT technique was evaluated, assuming perfect knowledge of organ motion and beam tracking. Inverse planning was performed on each phase image separately. The treatment dose to the organs of interest was then accumulated from the optimized plans. Second, 4D-IP was performed using the probability density function of respiratory motion. The beam arrangement, prescription dose, and objectives were consistent in both planning methods. The dose-volume and equivalent uniform dose in the target volume, lung, heart, and spinal cord were used for the evaluation. Results: The cumulative dose in the target was similar for both techniques. The equivalent uniform dose of the lung, heart, and spinal cord was 4.6 ± 2.2, 11 ± 4.4, and 11 ± 6.6 Gy for RT-TT with a 0-mm target margin, 5.2 ± 3.1, 12 ± 5.9, and 12 ± 7.8 Gy for RT-TT with a 2-mm target margin, and 5.3 ± 2.3, 11.9 ± 5.0, and 12 ± 5.6 Gy for 4D-IP, respectively. Conclusion: The results of our study have shown that 4D-IP can achieve plans similar to those achieved by RT-TT. Considering clinical implementation, 4D-IP could be a more reliable and practical method to manage patient respiration-induced motion

  12. [A review of progress of real-time tumor tracking radiotherapy technology based on dynamic multi-leaf collimator].

    Science.gov (United States)

    Liu, Fubo; Li, Guangjun; Shen, Jiuling; Li, Ligin; Bai, Sen

    2017-02-01

    During radiation treatment of patients with tumors in the thorax and abdomen, further improvement of radiation accuracy is restricted by intra-fractional tumor motion due to respiration. Real-time tumor tracking radiotherapy is an optimal solution to intra-fractional tumor motion. The present review covers the progress of real-time dynamic multi-leaf collimator (DMLC) tracking, including DMLC tracking methods, the time lag of DMLC tracking systems, and dosimetric verification.

  13. Viewpoint Manifolds for Action Recognition

    Directory of Open Access Journals (Sweden)

    Souvenir Richard

    2009-01-01

    Full Text Available Abstract Action recognition from video is a problem that has many important applications to human motion analysis. In real-world settings, the viewpoint of the camera cannot always be fixed relative to the subject, so view-invariant action recognition methods are needed. Previous view-invariant methods use multiple cameras in both the training and testing phases of action recognition or require storing many examples of a single action from multiple viewpoints. In this paper, we present a framework for learning a compact representation of primitive actions (e.g., walk, punch, kick, sit that can be used for video obtained from a single camera for simultaneous action recognition and viewpoint estimation. Using our method, which models the low-dimensional structure of these actions relative to viewpoint, we show recognition rates on a publicly available dataset previously only achieved using multiple simultaneous views.

  14. Main real time software for high-energy physics experiments

    International Nuclear Information System (INIS)

    Tikhonov, A.N.

    1985-01-01

    The general problems of organizing software complexes, as well as the development of typical algorithms and packages of applied programs for real-time systems used in experiments with charged-particle accelerators, are discussed. It is noted that numerous qualitatively different real-time tasks are solved by parallel programming of the processes of data acquisition, equipment control, data exchange with remote terminals, express data processing and accumulation, interpretation of operator instructions, and generation and buffering of resulting files for data output and information processing, all realized on a multicomputer system. Further development of software for experiments is associated with improving the algorithms for automatic recognition and analysis of events with complex topology and with the standardization of applied program packages.

  15. Real-time colour hologram generation based on ray-sampling plane with multi-GPU acceleration.

    Science.gov (United States)

    Sato, Hirochika; Kakue, Takashi; Ichihashi, Yasuyuki; Endo, Yutaka; Wakunami, Koki; Oi, Ryutaro; Yamamoto, Kenji; Nakayama, Hirotaka; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2018-01-24

    Although electro-holography can reconstruct three-dimensional (3D) motion pictures, its computational cost is too heavy to allow for real-time reconstruction of 3D motion pictures. This study explores accelerating colour hologram generation using light-ray information on a ray-sampling (RS) plane with a graphics processing unit (GPU) to realise a real-time holographic display system. We refer to an image corresponding to light-ray information as an RS image. Colour holograms were generated from three RS images with resolutions of 2,048 × 2,048; 3,072 × 3,072 and 4,096 × 4,096 pixels. The computational results indicate that the generation of the colour holograms using multiple GPUs (NVIDIA Geforce GTX 1080) was approximately 300-500 times faster than those generated using a central processing unit. In addition, the results demonstrate that 3D motion pictures were successfully reconstructed from RS images of 3,072 × 3,072 pixels at approximately 15 frames per second using an electro-holographic reconstruction system in which colour holograms were generated from RS images in real time.

  16. GPU-based real-time trinocular stereo vision

    Science.gov (United States)

    Yao, Yuanbin; Linton, R. J.; Padir, Taskin

    2013-01-01

    Most stereovision applications are binocular, using information from a two-camera array to perform stereo matching and compute the depth image. Trinocular stereovision with a three-camera array has been proved to provide higher accuracy in stereo matching, which could benefit applications like distance finding, object recognition, and detection. This paper presents a real-time stereovision algorithm implemented on a GPGPU (general-purpose graphics processing unit) using a trinocular stereovision camera array. The algorithm employs a winner-take-all method to fuse disparities computed in different directions, following various image processing techniques, to obtain the depth information. The goal of the algorithm is to achieve real-time processing speed with the help of a GPGPU, using the Open Source Computer Vision Library (OpenCV) in C++ and the NVIDIA CUDA GPGPU solution. The results are compared in accuracy and speed to verify the improvement.
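
    A minimal sketch of the winner-take-all fusion step mentioned above: per-direction matching-cost volumes are summed and, for each pixel, the disparity with the lowest combined cost wins. The cost volumes here are random stand-ins and the additive fusion rule is an assumption for illustration (NumPy assumed); the paper's GPU implementation is not reproduced.

```python
import numpy as np

def winner_take_all(cost_volumes):
    """Fuse matching-cost volumes from several camera pairs/directions and
    pick, per pixel, the disparity with the lowest combined cost.
    cost_volumes: list of arrays of shape (D, H, W)."""
    combined = np.sum(cost_volumes, axis=0)          # simple additive fusion
    return np.argmin(combined, axis=0)               # (H, W) disparity map

rng = np.random.default_rng(0)
costs_horizontal = rng.random((32, 48, 64))          # stand-ins for SAD/SSD cost volumes
costs_vertical = rng.random((32, 48, 64))
disparity = winner_take_all([costs_horizontal, costs_vertical])
```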

  17. Onboard Risk-Aware Real-Time Motion Planning Algorithms for Spacecraft Maneuvering

    Data.gov (United States)

    National Aeronautics and Space Administration — Unlocking the next generation of complex missions for autonomous spacecraft will require significant advances in robust motion planning. The aim of motion planning...

  18. Visual detectability of elastic contrast in real-time ultrasound images

    Science.gov (United States)

    Miller, Naomi R.; Bamber, Jeffery C.; Doyley, Marvin M.; Leach, Martin O.

    1997-04-01

    Elasticity imaging (EI) has recently been proposed as a technique for imaging the mechanical properties of soft tissue. However, dynamic features, known as compressibility and mobility, are already employed to distinguish between different tissue types in ultrasound breast examination. This method, which involves the subjective interpretation of tissue motion seen in real-time B-mode images during palpation, is hereafter referred to as differential motion imaging (DMI). The purpose of this study was to develop the methodology required to perform a series of perception experiments to measure elastic lesion detectability by means of DMI and to obtain preliminary results for elastic contrast thresholds for different lesion sizes. Simulated sequences of real-time B-scans of tissue moving in response to an applied force were generated. A two-alternative forced choice (2-AFC) experiment was conducted and the measured contrast thresholds were compared with published results for lesions detected by EI. Although the trained observer was found to be quite skilled at the task of differential motion perception, it would appear that lesion detectability is improved when motion information is detected by computer processing and converted to gray scale before presentation to the observer. In particular, for lesions containing fewer than eight speckle cells, a signal detection rate of 100% could not be achieved even when the elastic contrast was very high.

  19. Development and validation of real-time simulation of X-ray imaging with respiratory motion.

    Science.gov (United States)

    Vidal, Franck P; Villard, Pierre-Frédéric

    2016-04-01

    We present a framework that combines evolutionary optimisation, soft tissue modelling and ray tracing on GPU to simultaneously compute the respiratory motion and X-ray imaging in real-time. Our aim is to provide validated building blocks with high fidelity to closely match both the human physiology and the physics of X-rays. A CPU-based set of algorithms is presented to model organ behaviours during respiration. Soft tissue deformation is computed with an extension of the Chain Mail method. Rigid elements move according to kinematic laws. A GPU-based surface rendering method is proposed to compute the X-ray image using the Beer-Lambert law. It is provided as an open-source library. A quantitative validation study is provided to objectively assess the accuracy of both components: (i) the respiration against anatomical data, and (ii) the X-ray against the Beer-Lambert law and the results of Monte Carlo simulations. Our implementation can be used in various applications, such as interactive medical virtual environment to train percutaneous transhepatic cholangiography in interventional radiology, 2D/3D registration, computation of digitally reconstructed radiograph, simulation of 4D sinograms to test tomography reconstruction tools. Copyright © 2015 Elsevier Ltd. All rights reserved.
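
    The X-ray component described above is based on the Beer-Lambert law; a minimal worked example of attenuating a single ray through a few materials is sketched below (NumPy assumed; the attenuation coefficients are illustrative values, not taken from the paper).

```python
import numpy as np

def beer_lambert(incident_intensity, mu, path_lengths):
    """Beer-Lambert attenuation along one ray:
    I = I0 * exp(-sum_i mu_i * d_i), with mu in 1/cm and d in cm."""
    return incident_intensity * np.exp(-np.sum(mu * path_lengths))

# a ray crossing roughly 3 cm of soft tissue and 1 cm of bone (illustrative coefficients)
I = beer_lambert(1.0, mu=np.array([0.2, 0.5]), path_lengths=np.array([3.0, 1.0]))
```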

  20. Real-time Kalman filter: Cooling of an optically levitated nanoparticle

    Science.gov (United States)

    Setter, Ashley; Toroš, Marko; Ralph, Jason F.; Ulbricht, Hendrik

    2018-03-01

    We demonstrate that a Kalman filter applied to estimate the position of an optically levitated nanoparticle, and operated in real-time within a field programmable gate array, is sufficient to perform closed-loop parametric feedback cooling of the center-of-mass motion to sub-Kelvin temperatures. The translational center-of-mass motion along the optical axis of the trapped nanoparticle has been cooled by 3 orders of magnitude, from a temperature of 300 K to a temperature of 162 ±15 mK.
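
    As an illustration of the kind of estimator described above, the sketch below runs a minimal constant-velocity Kalman filter over a noisy 1D position signal. The state model, noise parameters, and sampling step are illustrative assumptions and do not reproduce the published FPGA implementation or trap parameters (NumPy assumed).

```python
import numpy as np

def kalman_1d(measurements, dt=1e-5, q=1e-4, r=1e-2):
    """Constant-velocity Kalman filter for a noisy 1D position signal.
    Returns the filtered position estimates.  Parameters are illustrative."""
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                  # only the position is measured
    Q = q * np.eye(2)                           # process noise covariance
    R = np.array([[r]])                         # measurement noise covariance
    x, P = np.zeros(2), np.eye(2)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

t = np.arange(0, 2e-3, 1e-5)
noisy = np.sin(2 * np.pi * 1e3 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
estimate = kalman_1d(noisy)
```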

  1. Real-time Kalman filter: cooling of an optically levitated nanoparticle

    OpenAIRE

    Setter, Ashley; Toros, Marko; Ralph, Jason; Ulbricht, Hendrik

    2018-01-01

    We demonstrate that a Kalman filter applied to estimate the position of an optically levitated nanoparticle, and operated in real-time within a Field Programmable Gate Array (FPGA), is sufficient to perform closed-loop parametric feedback cooling of the centre of mass motion to sub-Kelvin temperatures. The translational centre of mass motion along the optical axis of the trapped nanoparticle has been cooled by three orders of magnitude, from a temperature of 300K to a temperature of 162 +/- 1...

  2. Toward fast feature adaptation and localization for real-time face recognition systems

    NARCIS (Netherlands)

    Zuo, F.; With, de P.H.N.; Ebrahimi, T.; Sikora, T.

    2003-01-01

    In a home environment, video surveillance employing face detection and recognition is attractive for new applications. Facial feature (e.g. eyes and mouth) localization in the face is an essential task for face recognition because it constitutes an indispensable step for face geometry normalization.

  3. Neuromorphic Configurable Architecture for Robust Motion Estimation

    Directory of Open Access Journals (Sweden)

    Guillermo Botella

    2008-01-01

    Full Text Available The robustness of the human visual system in recovering motion estimates in almost any visual situation is enviable, performing enormous calculation tasks continuously, robustly, efficiently, and effortlessly. There is obviously a great deal we can learn from our own visual system. Currently, there are several optical flow algorithms, although none of them deals efficiently with noise, illumination changes, second-order motion, occlusions, and so on. The main contribution of this work is the efficient implementation of a biologically inspired motion algorithm that borrows templates from nature as inspiration in the design of architectures and makes use of a specific model of human visual motion perception: the Multichannel Gradient Model (McGM). This novel customizable architecture for neuromorphic robust optical flow can be constructed with an FPGA or ASIC device using properties of the cortical motion pathway, constituting a useful framework for building future complex bioinspired systems running in real time with high computational complexity. This work includes resource usage and performance data, and a comparison with current systems. This hardware has many application fields, like object recognition, navigation, or tracking in difficult environments, due to its bioinspired nature and robustness.

  4. MO-FG-CAMPUS-JeP3-04: Feasibility Study of Real-Time Ultrasound Monitoring for Abdominal Stereotactic Body Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Su, Lin; Kien Ng, Sook; Zhang, Ying; Herman, Joseph; Wong, John; Ding, Kai [Department of Radiation Oncology, Johns Hopkins University, Baltimore, MD (United States); Ji, Tianlong [Department of Radiation Oncology, The First Hospital of China Medical University, Shenyang, Liaoning (China); Iordachita, Iulian [Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD (United States); Tutkun Sen, H.; Kazanzides, Peter; Lediju Bell, Muyinatu A. [Department of Computer Science, Johns Hopkins University, Baltimore, MD (United States)

    2016-06-15

    Purpose: Ultrasound is ideal for real-time monitoring in radiotherapy owing to its high soft-tissue contrast, lack of ionizing radiation, portability, and cost effectiveness. Few studies have investigated the clinical application of real-time ultrasound monitoring for abdominal stereotactic body radiation therapy (SBRT). This study aims to demonstrate the feasibility of real-time monitoring of 3D target motion using 4D ultrasound. Methods: An ultrasound probe holding system was designed to allow the clinician to freely move and lock the ultrasound probe. For the phantom study, an abdominal ultrasound phantom was secured on a 2D programmable respiratory motion stage. One side of the stage was elevated relative to the other side to generate 3D motion. The motion stage made periodic breath-hold movements. Phantom movement tracked by an infrared camera was considered the ground truth. For the volunteer study, three healthy subjects underwent the same setup used for abdominal SBRT with active breath control (ABC). 4D ultrasound B-mode images were acquired for both phantom and volunteers for real-time monitoring. Ten breath-hold cycles were monitored for each experiment. For the phantom, the target motion tracked by ultrasound was compared with the motion tracked by the infrared camera. For the healthy volunteers, the reproducibility of ABC breath-holds was evaluated. Results: The volunteer study showed that the ultrasound system fitted well into the clinical SBRT setup. The reproducibility over 10 breath-holds was less than 2 mm in all three directions for all three volunteers. For the phantom study, the motion between inspiration and expiration captured by the camera (ground truth) was 2.35±0.02 mm, 1.28±0.04 mm, and 8.85±0.03 mm in the LR, AP, and SI directions, respectively. The motion monitored by ultrasound was 2.21±0.07 mm, 1.32±0.12 mm, and 9.10±0.08 mm, respectively. The motion monitoring error in any direction was less than 0.5 mm. Conclusion: The volunteer study proved the clinical feasibility of real-time ultrasound monitoring for abdominal SBRT. The phantom and volunteer ABC

  5. MO-FG-CAMPUS-JeP3-04: Feasibility Study of Real-Time Ultrasound Monitoring for Abdominal Stereotactic Body Radiation Therapy

    International Nuclear Information System (INIS)

    Su, Lin; Kien Ng, Sook; Zhang, Ying; Herman, Joseph; Wong, John; Ding, Kai; Ji, Tianlong; Iordachita, Iulian; Tutkun Sen, H.; Kazanzides, Peter; Lediju Bell, Muyinatu A.

    2016-01-01

    Purpose: Ultrasound is ideal for real-time monitoring in radiotherapy owing to its high soft-tissue contrast, lack of ionizing radiation, portability, and cost effectiveness. Few studies have investigated the clinical application of real-time ultrasound monitoring for abdominal stereotactic body radiation therapy (SBRT). This study aims to demonstrate the feasibility of real-time monitoring of 3D target motion using 4D ultrasound. Methods: An ultrasound probe holding system was designed to allow the clinician to freely move and lock the ultrasound probe. For the phantom study, an abdominal ultrasound phantom was secured on a 2D programmable respiratory motion stage. One side of the stage was elevated relative to the other side to generate 3D motion. The motion stage made periodic breath-hold movements. Phantom movement tracked by an infrared camera was considered the ground truth. For the volunteer study, three healthy subjects underwent the same setup used for abdominal SBRT with active breath control (ABC). 4D ultrasound B-mode images were acquired for both phantom and volunteers for real-time monitoring. Ten breath-hold cycles were monitored for each experiment. For the phantom, the target motion tracked by ultrasound was compared with the motion tracked by the infrared camera. For the healthy volunteers, the reproducibility of ABC breath-holds was evaluated. Results: The volunteer study showed that the ultrasound system fitted well into the clinical SBRT setup. The reproducibility over 10 breath-holds was less than 2 mm in all three directions for all three volunteers. For the phantom study, the motion between inspiration and expiration captured by the camera (ground truth) was 2.35±0.02 mm, 1.28±0.04 mm, and 8.85±0.03 mm in the LR, AP, and SI directions, respectively. The motion monitored by ultrasound was 2.21±0.07 mm, 1.32±0.12 mm, and 9.10±0.08 mm, respectively. The motion monitoring error in any direction was less than 0.5 mm. Conclusion: The volunteer study proved the clinical feasibility of real-time ultrasound monitoring for abdominal SBRT. The phantom and volunteer ABC

  6. Real-time well condition monitoring in extended reach wells

    Energy Technology Data Exchange (ETDEWEB)

    Kucs, R.; Spoerker, H.F. [OMV Austria Exploration and Production GmbH, Gaenserndorf (Austria); Thonhauser, G. [Montanuniversitaet Leoben (Austria)

    2008-10-23

    Ever-rising daily operating costs for offshore operations make the risk of running into drilling problems due to torque and drag developments in extended-reach applications a growing concern. One option to reduce costs related to torque and drag problems is to monitor torque and drag trends in real time without additional workload on the platform drilling team. To evaluate observed torque or drag trends it is necessary to automatically recognize operations and to have a 'standard value' to compare the measurements to. The presented systematic approach features both options - fully automated operations recognition and real-time analysis. Trends can be discussed between rig- and shore-based teams, and decisions can be based on up-to-date information. Since the system is focused on visualization of real-time torque and drag trends, instead of highly complex and repeated simulations, calculation time is reduced by comparing the real-time rig data against predictions imported from a commercial drilling engineering application. The system allows reacting to emerging stuck-pipe situations or developing cuttings beds long before the situations become severe enough to result in substantial lost time. The ability to compare real-time data with historical data from the same or other wells makes the system a valuable tool in supporting a learning organization. The system has been developed in a joint research initiative for field application on the development of an offshore heavy oil field in New Zealand. (orig.)

  7. Grayscale optical correlator for real-time onboard ATR

    Science.gov (United States)

    Chao, Tien-Hsin; Zhou, Hanying; Reyes, George F.

    2001-03-01

    The Jet Propulsion Laboratory has been developing a grayscale optical correlator (GOC) for a variety of automatic target recognition (ATR) applications. As reported in previous papers, a 128 X 128 camcorder-sized GOC has been demonstrated for real-time field ATR demos. In this paper, we will report the recent development of a prototype 512 X 512 GOC utilizing a new miniature ferroelectric liquid crystal spatial light modulator with a 7-micrometer pixel pitch. An experimental demonstration of ATR applications using this new GOC will be presented. The potential of developing a matchbox-sized GOC will also be discussed. A new application of synthesizing complex-valued correlation filters using this real-axis 512 X 512 SLM will also be included.

  8. Design of a compact low-power human-computer interaction equipment for hand motion

    Science.gov (United States)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demands for convenience, endurance, responsiveness, and naturalness. This paper describes the design of a compact, wearable, low-power HCI device applied to gesture recognition. The system combines multi-modal sensing signals - a vision signal and a motion signal - and the equipment is fitted with a depth camera and a motion sensor. The device (40 mm × 30 mm) is compact and portable after tight integration. The system is built on a modular layered framework, which supports real-time collection (60 fps), processing, and transmission via synchronous fusion with asynchronous concurrent collection and wireless Bluetooth 4.0 transmission. To minimize the equipment's energy consumption, the system makes use of low-power components, dynamic management of peripheral states, intelligent switching into idle mode, pulse-width modulation (PWM) of the NIR LEDs of the depth camera, and algorithm optimization driven by the motion sensor. To test the equipment's function and performance, a gesture recognition algorithm was applied to the system. The results show that overall energy consumption can be as low as 0.5 W.

  9. Human Action Recognition Using Ordinal Measure of Accumulated Motion

    Directory of Open Access Journals (Sweden)

    Kim Wonjun

    2010-01-01

    Full Text Available This paper presents a method for recognizing human actions from a single query action video. We propose an action recognition scheme based on the ordinal measure of accumulated motion, which is robust to variations of appearances. To this end, we first define the accumulated motion image (AMI using image differences. Then the AMI of the query action video is resized to a subimage by intensity averaging and a rank matrix is generated by ordering the sample values in the sub-image. By computing the distances from the rank matrix of the query action video to the rank matrices of all local windows in the target video, local windows close to the query action are detected as candidates. To find the best match among the candidates, their energy histograms, which are obtained by projecting AMI values in horizontal and vertical directions, respectively, are compared with those of the query action video. The proposed method does not require any preprocessing task such as learning and segmentation. To justify the efficiency and robustness of our approach, the experiments are conducted on various datasets.
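
    A minimal sketch of the representation described above: the accumulated motion image (AMI) is built from absolute frame differences, reduced to a small grid by block averaging, converted to ordinal ranks, and compared by a simple distance. The grid size, the L1 distance, and the random clip are illustrative assumptions (NumPy assumed).

```python
import numpy as np

def accumulated_motion_image(frames):
    """AMI: sum of absolute frame differences over the clip.  frames: (T, H, W)."""
    return np.sum(np.abs(np.diff(frames.astype(np.float64), axis=0)), axis=0)

def rank_matrix(ami, grid=(8, 8)):
    """Downsize the AMI to a grid by block averaging, then replace the values
    by their ordinal ranks (the ordinal measure used for matching)."""
    h, w = ami.shape
    gh, gw = grid
    blocks = ami[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    sub = blocks.mean(axis=(1, 3))
    return np.argsort(np.argsort(sub.ravel())).reshape(grid)

def ordinal_distance(query_ranks, target_ranks):
    """L1 distance between rank matrices; smaller means more similar motion."""
    return np.abs(query_ranks - target_ranks).sum()

clip = np.random.default_rng(0).random((30, 64, 64))    # stand-in for a query clip
query_ranks = rank_matrix(accumulated_motion_image(clip))
```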

  10. Improved Hip-Based Individual Recognition Using Wearable Motion Recording Sensor

    Science.gov (United States)

    Gafurov, Davrondzhon; Bours, Patrick

    In today's society the demand for reliable verification of a user's identity is increasing. Although biometric technologies based on fingerprint or iris can provide accurate and reliable recognition performance, they are inconvenient for periodic or frequent re-verification. In this paper we propose a hip-based user recognition method which can be suitable for implicit and periodic re-verification of identity. In our approach we use a wearable accelerometer sensor attached to the hip of the person, and the measured hip motion signal is then analysed for identity verification purposes. The main analysis steps consist of detecting gait cycles in the signal and matching two sets of detected gait cycles. Evaluating the approach on a hip data set consisting of 400 gait sequences (samples) from 100 subjects, we obtained an equal error rate (EER) of 7.5%, and the identification rate at rank 1 was 81.4%. These numbers are improvements of 37.5% and 11.2%, respectively, over a previous study using the same data set.
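
    The two analysis steps described above - detecting gait cycles in the hip acceleration signal and matching two sets of cycles - can be sketched roughly as below. The peak-picking cycle detector, the fixed-length resampling, and the Euclidean distance between mean cycles are simplifying assumptions for illustration, not the authors' matching procedure (NumPy assumed).

```python
import numpy as np

def detect_cycles(acc_magnitude, fs=100, min_cycle_s=0.7):
    """Split a hip acceleration-magnitude signal into gait cycles by picking
    local maxima that are at least min_cycle_s apart (a crude heel-strike proxy)."""
    x = np.asarray(acc_magnitude, dtype=np.float64)
    peaks = [i for i in range(1, len(x) - 1) if x[i] > x[i - 1] and x[i] >= x[i + 1]]
    starts, last = [], -np.inf
    for p in peaks:
        if p - last >= min_cycle_s * fs:
            starts.append(p)
            last = p
    return [x[a:b] for a, b in zip(starts[:-1], starts[1:])]

def average_cycle(cycles, length=100):
    """Resample every detected cycle to a fixed length and average them."""
    resampled = [np.interp(np.linspace(0, 1, length),
                           np.linspace(0, 1, len(c)), c) for c in cycles]
    return np.mean(resampled, axis=0)

def recording_distance(sig_a, sig_b, fs=100):
    """Distance between two recordings = Euclidean distance of their mean cycles;
    a small distance suggests the same walker."""
    a = average_cycle(detect_cycles(sig_a, fs))
    b = average_cycle(detect_cycles(sig_b, fs))
    return np.linalg.norm(a - b)

# synthetic 10 s "walks" standing in for real hip accelerometer data
t = np.arange(0, 10, 1.0 / 100)
walk_a = 1.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t)
walk_b = 1.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t + 0.3)
print(recording_distance(walk_a, walk_b))
```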

  11. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology.

    Science.gov (United States)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-02-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D Registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.

  12. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph [Medical University of Vienna (Austria). Center of Medical Physics and Biomedical Engineering] [and others

    2012-07-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D Registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference X-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 x 512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. (orig.)

  13. Self-Management of Patient Body Position, Pose, and Motion Using Wide-Field, Real-Time Optical Measurement Feedback: Results of a Volunteer Study

    International Nuclear Information System (INIS)

    Parkhurst, James M.; Price, Gareth J.; Sharrock, Phil J.; Jackson, Andrew S.N.; Stratford, Julie; Moore, Christopher J.

    2013-01-01

    Purpose: We present the results of a clinical feasibility study, performed in 10 healthy volunteers undergoing a simulated treatment over 3 sessions, to investigate the use of a wide-field visual feedback technique intended to help patients control their pose while reducing motion during radiation therapy treatment. Methods and Materials: An optical surface sensor is used to capture wide-area measurements of a subject's body surface with visualizations of these data displayed back to them in real time. In this study we hypothesize that this active feedback mechanism will enable patients to control their motion and help them maintain their setup pose and position. A capability hierarchy of 3 different level-of-detail abstractions of the measured surface data is systematically compared. Results: Use of the device enabled volunteers to increase their conformance to a reference surface, as measured by decreased variability across their body surfaces. The use of visual feedback also enabled volunteers to reduce their respiratory motion amplitude to 1.7 ± 0.6 mm compared with 2.7 ± 1.4 mm without visual feedback. Conclusions: The use of live feedback of their optically measured body surfaces enabled a set of volunteers to better manage their pose and motion when compared with free breathing. The method is suitable to be taken forward to patient studies

  14. Exploratory data analysis of acceleration signals to select light-weight and accurate features for real-time activity recognition on smartphones.

    Science.gov (United States)

    Khan, Adil Mehmood; Siddiqi, Muhammad Hameed; Lee, Seok-Won

    2013-09-27

    Smartphone-based activity recognition (SP-AR) recognizes users' activities using the embedded accelerometer sensor. Only a small number of previous works can be classified as online systems, i.e., the whole process (pre-processing, feature extraction, and classification) is performed on the device. Most of these online systems use either a high sampling rate (SR) or long data-window (DW) to achieve high accuracy, resulting in short battery life or delayed system response, respectively. This paper introduces a real-time/online SP-AR system that solves this problem. Exploratory data analysis was performed on acceleration signals of 6 activities, collected from 30 subjects, to show that these signals are generated by an autoregressive (AR) process, and an accurate AR-model in this case can be built using a low SR (20 Hz) and a small DW (3 s). The high within class variance resulting from placing the phone at different positions was reduced using kernel discriminant analysis to achieve position-independent recognition. Neural networks were used as classifiers. Unlike previous works, true subject-independent evaluation was performed, where 10 new subjects evaluated the system at their homes for 1 week. The results show that our features outperformed three commonly used features by 40% in terms of accuracy for the given SR and DW.
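
    A minimal sketch of the autoregressive modelling described above: an order-p AR model is fitted by least squares to one 3 s window sampled at 20 Hz, and its coefficients serve as features for the classifier. The order, the single-axis window, and the synthetic signal are illustrative assumptions (NumPy assumed).

```python
import numpy as np

def ar_coefficients(x, order=4):
    """Fit x[n] = a1*x[n-1] + ... + ap*x[n-p] + e[n] by least squares and
    return the coefficients (a1..ap), which serve as window features."""
    x = np.asarray(x, dtype=np.float64)
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# one 3 s window at 20 Hz (60 samples) of a single accelerometer axis
rng = np.random.default_rng(0)
window = np.sin(2 * np.pi * 1.5 * np.arange(60) / 20) + 0.05 * rng.standard_normal(60)
features = ar_coefficients(window, order=4)
```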

  15. The active blind spot camera: hard real-time recognition of moving objects from a moving camera

    OpenAIRE

    Van Beeck, Kristof; Goedemé, Toon; Tuytelaars, Tinne

    2014-01-01

    This PhD research focuses on visual object recognition under specific demanding conditions. The object to be recognized as well as the camera move, and the time available for the recognition task is extremely short. This generic problem is applied here on a specific problem: the active blind spot camera. Statistics show a large number of accidents with trucks are related to the so-called blind spot, the area around the vehicle in which vulnerable road users are hard to perceive by the truck d...

  16. Real-time high dynamic range laser scanning microscopy

    Science.gov (United States)

    Vinegoni, C.; Leon Swisher, C.; Fumene Feruglio, P.; Giedt, R. J.; Rousso, D. L.; Stapleton, S.; Weissleder, R.

    2016-04-01

    In conventional confocal/multiphoton fluorescence microscopy, images are typically acquired under ideal settings and after extensive optimization of parameters for a given structure or feature, often resulting in information loss from other image attributes. To overcome the problem of selective data display, we developed a new method that extends the imaging dynamic range in optical microscopy and improves the signal-to-noise ratio. Here we demonstrate how real-time and sequential high dynamic range microscopy facilitates automated three-dimensional neural segmentation. We address reconstruction and segmentation performance on samples with different size, anatomy and complexity. Finally, in vivo real-time high dynamic range imaging is also demonstrated, making the technique particularly relevant for longitudinal imaging in the presence of physiological motion and/or for quantification of in vivo fast tracer kinetics during functional imaging.

  17. Motion artifacts in functional near-infrared spectroscopy: a comparison of motion correction techniques applied to real cognitive data

    Science.gov (United States)

    Brigadoi, Sabrina; Ceccherini, Lisa; Cutini, Simone; Scarpa, Fabio; Scatturin, Pietro; Selb, Juliette; Gagnon, Louis; Boas, David A.; Cooper, Robert J.

    2013-01-01

    Motion artifacts are a significant source of noise in many functional near-infrared spectroscopy (fNIRS) experiments. Despite this, there is no well-established method for their removal. Instead, functional trials of fNIRS data containing a motion artifact are often rejected completely. However, in most experimental circumstances the number of trials is limited, and multiple motion artifacts are common, particularly in challenging populations. Many methods have been proposed recently to correct for motion artifacts, including principal component analysis, spline interpolation, Kalman filtering, wavelet filtering and correlation-based signal improvement. The performance of different techniques has often been compared in simulations, but only rarely has it been assessed on real functional data. Here, we compare the performance of these motion correction techniques on real functional data acquired during a cognitive task, which required the participant to speak aloud, leading to a low-frequency, low-amplitude motion artifact that is correlated with the hemodynamic response. To compare the efficacy of these methods, objective metrics related to the physiology of the hemodynamic response have been derived. Our results show that it is always better to correct for motion artifacts than reject trials, and that wavelet filtering is the most effective approach to correcting this type of artifact, reducing the area under the curve where the artifact is present in 93% of the cases. Our results therefore support previous studies that have shown wavelet filtering to be the most promising and powerful technique for the correction of motion artifacts in fNIRS data. The analyses performed here can serve as a guide for others to objectively test the impact of different motion correction algorithms and therefore select the most appropriate for the analysis of their own fNIRS experiment. PMID:23639260
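
    A rough sketch in the spirit of the wavelet-filtering approach discussed above: the channel is decomposed, detail coefficients that are outliers relative to their level are zeroed, and the signal is reconstructed. The thresholding rule, wavelet, and decomposition level here are illustrative assumptions rather than the method evaluated in the paper; PyWavelets and NumPy are assumed available.

```python
import numpy as np
import pywt

def wavelet_despike(signal, wavelet="db4", level=4, k=3.0):
    """Attenuate motion-artifact-like outliers in an fNIRS channel:
    decompose, zero detail coefficients larger than k robust standard
    deviations of their level, and reconstruct.  The exact thresholding
    rule used in the literature differs; this is only illustrative."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    cleaned = [coeffs[0]]                                        # keep the approximation
    for d in coeffs[1:]:
        sigma = 1.4826 * np.median(np.abs(d - np.median(d)))     # robust std (MAD)
        cleaned.append(np.where(np.abs(d) > k * sigma, 0.0, d))
    return pywt.waverec(cleaned, wavelet)[: len(signal)]

t = np.arange(0, 60, 0.1)                      # a 10 Hz fNIRS-like channel
channel = 0.05 * np.sin(2 * np.pi * 0.1 * t)
channel[300] += 1.0                            # a spike-like motion artifact
corrected = wavelet_despike(channel)
```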

  18. Toward real-time regional earthquake simulation of Taiwan earthquakes

    Science.gov (United States)

    Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.

    2013-12-01

    We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.

  19. Video-based real-time on-street parking occupancy detection system

    Science.gov (United States)

    Bulan, Orhan; Loce, Robert P.; Wu, Wencheng; Wang, YaoRong; Bernal, Edgar A.; Fan, Zhigang

    2013-10-01

    Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smart phone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method performs in real-time at 5 frames/s and achieves better than 90% detection accuracy across several days of videos captured in a busy street block under various weather conditions such as sunny, cloudy, and rainy, among others.
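
    As a toy illustration of the background-subtraction component mentioned above, the sketch below keeps a running-average background for one parking-stall region of interest and reports the stall occupied when enough pixels deviate from it. The thresholds, adaptation rate, and ROI are assumptions for illustration; the deployed system additionally handles shadows, rain, occlusions, camera motion, and vehicle detection.

```python
import numpy as np

class StallOccupancyDetector:
    """Toy running-average background model for one parking stall ROI.
    A stall is reported occupied when enough ROI pixels deviate from the
    background estimate."""

    def __init__(self, roi_shape, alpha=0.02, diff_thresh=25, occ_fraction=0.3):
        self.background = np.zeros(roi_shape, dtype=np.float64)
        self.alpha = alpha                  # background adaptation rate
        self.diff_thresh = diff_thresh      # per-pixel foreground threshold
        self.occ_fraction = occ_fraction    # fraction of pixels needed to call "occupied"
        self.initialised = False

    def update(self, roi_gray):
        roi = roi_gray.astype(np.float64)
        if not self.initialised:
            self.background[:] = roi
            self.initialised = True
        foreground = np.abs(roi - self.background) > self.diff_thresh
        occupied = foreground.mean() > self.occ_fraction
        if not occupied:    # only adapt the background when the stall looks empty
            self.background += self.alpha * (roi - self.background)
        return occupied

detector = StallOccupancyDetector((40, 80))
frame_roi = np.random.default_rng(0).integers(0, 255, size=(40, 80))
print(detector.update(frame_roi))
```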

  20. Interacting with mobile devices by fusion eye and hand gestures recognition systems based on decision tree approach

    Science.gov (United States)

    Elleuch, Hanene; Wali, Ali; Samet, Anis; Alimi, Adel M.

    2017-03-01

    Two systems for eye and hand gesture recognition are used to control mobile devices. Based on a real-time video stream captured by the device's camera, the first system recognizes the motion of the user's eyes and the second one detects static hand gestures. To avoid any confusion between natural and intentional movements, we developed a system to fuse the decisions coming from the eye and hand gesture recognition systems. The fusion phase is based on a decision tree approach. We conducted a study on 5 volunteers and the results show that our system is robust and competitive.
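
    A minimal sketch of the decision-level fusion described above: the predicted class and confidence from the eye recognizer and the hand recognizer are stacked into a feature vector, and a decision tree maps them to the final command. The feature layout, the synthetic labels, and the tree depth are illustrative assumptions (scikit-learn and NumPy assumed).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 400
# outputs of the two recognisers: predicted class and its confidence
eye_pred = rng.integers(0, 4, size=n)        # e.g. 4 eye-motion classes
eye_conf = rng.random(n)
hand_pred = rng.integers(0, 5, size=n)       # e.g. 5 static hand gestures
hand_conf = rng.random(n)

# fusion features and the intended command (synthetic stand-in labels)
X = np.column_stack([eye_pred, eye_conf, hand_pred, hand_conf])
y = (hand_conf > eye_conf).astype(int) * hand_pred + (hand_conf <= eye_conf) * eye_pred

fusion = DecisionTreeClassifier(max_depth=4).fit(X[:300], y[:300])
print("held-out accuracy:", fusion.score(X[300:], y[300:]))
```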

  1. High-speed railway real-time localization auxiliary method based on deep neural network

    Science.gov (United States)

    Chen, Dongjie; Zhang, Wensheng; Yang, Yang

    2017-11-01

    A high-speed railway intelligent monitoring and management system is composed of schedule integration, geographic information, location services, and data mining technology for the integration of time and space data. Auxiliary localization is a significant submodule of the intelligent monitoring system. In practical applications, the general approach is to capture image sequences of the components with a high-definition camera and to apply digital image processing, target detection, tracking, and even behavior analysis methods. In this paper, we present an end-to-end character recognition method, based on a deep CNN called YOLO-toc, for high-speed railway pillar plate numbers. Different from other deep CNNs, YOLO-toc is an end-to-end multi-target detection framework; furthermore, it exhibits state-of-the-art performance in real-time detection, achieving nearly 50 fps on a GPU (GTX 960). Finally, we realize a real-time, high-accuracy pillar plate number recognition system and integrate natural-scene OCR into a dedicated classification YOLO-toc model.

  2. Real Time Revisited

    Science.gov (United States)

    Allen, Phillip G.

    1985-12-01

    The call for abolishing photo reconnaissance in favor of real time is once more being heard. Ten years ago the same cries were being heard with the introduction of the Charge Coupled Device (CCD). The real time system problems that existed then and stopped real time proliferation have not been solved. The lack of an organized program by either DoD or industry has hampered any efforts to solve the problems, and as such, very little has happened in real time in the last ten years. Real time is not a replacement for photo, just as photo is not a replacement for infra-red or radar. Operational real time sensors can be designed only after their role has been defined and improvements made to the weak links in the system. Plodding ahead on a real time reconnaissance suite without benefit of evaluation of utility will allow this same paper to be used ten years from now.

  3. Low-level processing for real-time image analysis

    Science.gov (United States)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.

  4. PROMO – Real-time Prospective Motion Correction in MRI using Image-based Tracking

    Science.gov (United States)

    White, Nathan; Roddey, Cooper; Shankaranarayanan, Ajit; Han, Eric; Rettmann, Dan; Santos, Juan; Kuperman, Josh; Dale, Anders

    2010-01-01

    Artifacts caused by patient motion during scanning remain a serious problem in most MRI applications. The prospective motion correction technique attempts to address this problem at its source by keeping the measurement coordinate system fixed with respect to the patient throughout the entire scan process. In this study, a new image-based approach for prospective motion correction is described, which utilizes three orthogonal 2D spiral navigator acquisitions (SP-Navs) along with a flexible image-based tracking method based on the Extended Kalman Filter (EKF) algorithm for online motion measurement. The SP-Nav/EKF framework offers the advantages of image-domain tracking within patient-specific regions-of-interest and reduced sensitivity to off-resonance-induced corruption of rigid-body motion estimates. The performance of the method was tested using offline computer simulations and online in vivo head motion experiments. In vivo validation results covering a broad range of staged head motions indicate a steady-state error of the SP-Nav/EKF motion estimates of less than 10 % of the motion magnitude, even for large compound motions that included rotations over 15 degrees. A preliminary in vivo application in 3D inversion recovery spoiled gradient echo (IR-SPGR) and 3D fast spin echo (FSE) sequences demonstrates the effectiveness of the SP-Nav/EKF framework for correcting 3D rigid-body head motion artifacts prospectively in high-resolution 3D MRI scans. PMID:20027635
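
    The online tracking component here is an EKF over rigid-body motion parameters driven by navigator-derived measurements. The sketch below is a generic EKF skeleton applied to a one-parameter constant-velocity toy state, not the actual SP-Nav/EKF implementation; the model functions, noise covariances, and simulated readings are all assumptions.

    ```python
    # Generic EKF skeleton (not the PROMO implementation): state = [position,
    # velocity] of one rigid-body parameter, measurement = a noisy
    # navigator-derived position; f, h and their Jacobians are user-supplied.
    import numpy as np

    def ekf_step(x, P, z, f, F, h, H, Q, R):
        # Predict
        x_pred = f(x)
        F_k = F(x)
        P_pred = F_k @ P @ F_k.T + Q
        # Update
        H_k = H(x_pred)
        y = z - h(x_pred)                      # innovation
        S = H_k @ P_pred @ H_k.T + R
        K = P_pred @ H_k.T @ np.linalg.inv(S)  # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
        return x_new, P_new

    dt = 0.1
    f = lambda x: np.array([x[0] + dt * x[1], x[1]])   # constant-velocity model
    F = lambda x: np.array([[1.0, dt], [0.0, 1.0]])
    h = lambda x: np.array([x[0]])                     # only position is measured
    H = lambda x: np.array([[1.0, 0.0]])
    Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])

    x, P = np.zeros(2), np.eye(2)
    for z in [0.05, 0.12, 0.21, 0.33]:                 # simulated navigator readings
        x, P = ekf_step(x, P, np.array([z]), f, F, h, H, Q, R)
    print("estimated position/velocity:", x)
    ```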

  5. Real time control of the flexible dynamics of orbital launch vehicles

    NARCIS (Netherlands)

    Bos, van den J.; Steinbuch, M.; Gutierrez, H.M.

    2011-01-01

    During this traineeship the flexible dynamics of orbital launch vehicles are estimated and controlled in real time, using distributed fiber-Bragg sensor arrays for motion estimation and cold gas thrusters for control. The use of these cold-gas thrusters to actively control flexible modes is the main

  6. Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices

    Science.gov (United States)

    Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun

    2014-05-01

    With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on the fusion of multiple motion sensors. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, an approximately 45-times speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures reaches 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device arbitrarily while completing the specified gestures. Although a few percent lower than the best conventional results, this still provides competitive accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device arbitrarily while performing the predefined gestures, which substantially enhances the user experience.

  7. Accuracy of Real-time Couch Tracking During 3-dimensional Conformal Radiation Therapy, Intensity Modulated Radiation Therapy, and Volumetric Modulated Arc Therapy for Prostate Cancer

    International Nuclear Information System (INIS)

    Wilbert, Juergen; Baier, Kurt; Hermann, Christian; Flentje, Michael; Guckenberger, Matthias

    2013-01-01

    Purpose: To evaluate the accuracy of real-time couch tracking for prostate cancer. Methods and Materials: Intrafractional motion trajectories of 15 prostate cancer patients were the basis for this phantom study; prostate motion had been monitored with the Calypso System. An industrial robot moved a phantom along these trajectories, motion was detected via an infrared camera system, and the robotic HexaPOD couch was used for real-time counter-steering. Residual phantom motion during real-time tracking was measured with the infrared camera system. Film dosimetry was performed during delivery of 3-dimensional conformal radiation therapy (3D-CRT), step-and-shoot intensity modulated radiation therapy (IMRT), and volumetric modulated arc therapy (VMAT). Results: Motion of the prostate was largest in the anterior–posterior direction, with systematic (Σ) and random (σ) errors of 2.3 mm and 2.9 mm, respectively; the prostate was outside a threshold of 5 mm (3D vector) for 25.0%±19.8% of treatment time. Real-time tracking reduced prostate motion to Σ = 0.01 mm and σ = 0.55 mm in the anterior–posterior direction; the prostate remained within a 1-mm and 5-mm threshold for 93.9%±4.6% and 99.7%±0.4% of the time, respectively. Without real-time tracking, pass rates based on a γ index of 2%/2 mm in film dosimetry ranged between 66% and 72% for 3D-CRT, IMRT, and VMAT, on average. Real-time tracking increased pass rates to a minimum of 98% on average for 3D-CRT, IMRT, and VMAT. Conclusions: Real-time couch tracking resulted in submillimeter accuracy for prostate cancer, which transferred into high dosimetric accuracy independently of whether 3D-CRT, IMRT, or VMAT was used.
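
    The population statistics quoted above (Σ, σ, and the fraction of time beyond a displacement threshold) can be reproduced from tracked trajectories. The sketch below assumes the usual radiotherapy conventions (Σ as the SD of per-patient mean displacements, σ as the root-mean-square of per-patient SDs) and uses synthetic trajectories for illustration only.

    ```python
    # Population motion statistics from per-patient displacement traces.
    import numpy as np

    def population_errors(trajectories):
        """Sigma = SD of per-patient means, sigma = RMS of per-patient SDs (mm)."""
        means = np.array([t.mean() for t in trajectories])
        sds = np.array([t.std(ddof=1) for t in trajectories])
        return means.std(ddof=1), np.sqrt(np.mean(sds ** 2))

    def fraction_beyond(trajectory_3d, threshold_mm=5.0):
        """Fraction of samples whose 3D displacement vector exceeds the threshold."""
        return np.mean(np.linalg.norm(trajectory_3d, axis=1) > threshold_mm)

    rng = np.random.default_rng(1)
    patients = [rng.normal(loc=rng.normal(0, 2.3), scale=2.9, size=600) for _ in range(15)]
    print("Sigma, sigma (mm):", np.round(population_errors(patients), 2))
    print("fraction of time beyond 5 mm:", fraction_beyond(rng.normal(0, 2.5, size=(600, 3))))
    ```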

  8. Robust real-time extraction of respiratory signals from PET list-mode data.

    Science.gov (United States)

    Salomon, Andre; Zhang, Bin; Olivier, Patrick; Goedicke, Andreas

    2018-05-01

    Respiratory motion, which typically cannot simply be suspended during PET image acquisition, affects lesions' detection and quantitative accuracy inside or in close vicinity to the lungs. Some motion compensation techniques address this issue via pre-sorting ("binning") of the acquired PET data into a set of temporal gates, where each gate is assumed to be minimally affected by respiratory motion. Tracking respiratory motion is typically realized using dedicated hardware (e.g. using respiratory belts and digital cameras). Extracting respiratory signals directly from the acquired PET data simplifies the clinical workflow as it avoids handling additional signal measurement equipment. We introduce a new data-driven method "Combined Local Motion Detection" (CLMD). It uses the Time-of-Flight (TOF) information provided by state-of-the-art PET scanners in order to enable real-time respiratory signal extraction without additional hardware resources. CLMD applies center-of-mass detection in overlapping regions based on simple back-positioned TOF event sets acquired in short time frames. Following a signal filtering and quality-based pre-selection step, the remaining extracted individual position information over time is then combined to generate a global respiratory signal. The method is evaluated using 7 measured FDG studies from single and multiple scan positions of the thorax region, and it is compared to other software-based methods regarding quantitative accuracy and statistical noise stability. Correlation coefficients around 90% between the reference and the extracted signal have been found for those PET scans where motion-affected features such as tumors or hot regions were present in the PET field-of-view. For PET scans with a quarter of the typically applied radiotracer dose, the CLMD method still provides similarly high correlation coefficients, which indicates its robustness to noise. Each CLMD processing run needed less than 0.4 s in total on a standard multi-core CPU.
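
    The core of CLMD, a per-frame center of mass of back-projected TOF event positions filtered into a respiratory trace, can be illustrated on synthetic data. The sketch below is an assumption-laden toy: the event counts, single-region layout, and filter band are illustrative choices, not the published parameters.

    ```python
    # Toy center-of-mass respiratory trace from simulated per-frame event sets,
    # band-pass filtered to the breathing band (parameters are illustrative).
    import numpy as np
    from scipy.signal import butter, filtfilt

    rng = np.random.default_rng(2)
    fs = 10.0                                       # short time frames per second
    t = np.arange(0, 60, 1 / fs)                    # 60 s acquisition
    breathing = 5.0 * np.sin(2 * np.pi * 0.25 * t)  # reference axial motion (mm)

    # One axial center-of-mass sample per frame from 2000 noisy event positions.
    com = np.array([np.mean(rng.normal(z0, 30.0, 2000)) for z0 in breathing])

    # Band-pass to ~0.1-0.5 Hz to suppress drift and statistical noise.
    b, a = butter(2, [0.1, 0.5], btype="band", fs=fs)
    resp_signal = filtfilt(b, a, com)

    print(f"correlation with reference: {np.corrcoef(resp_signal, breathing)[0, 1]:.2f}")
    ```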

  9. Robust real-time extraction of respiratory signals from PET list-mode data

    Science.gov (United States)

    Salomon, André; Zhang, Bin; Olivier, Patrick; Goedicke, Andreas

    2018-06-01

    Respiratory motion, which typically cannot simply be suspended during PET image acquisition, affects lesions’ detection and quantitative accuracy inside or in close vicinity to the lungs. Some motion compensation techniques address this issue via pre-sorting (‘binning’) of the acquired PET data into a set of temporal gates, where each gate is assumed to be minimally affected by respiratory motion. Tracking respiratory motion is typically realized using dedicated hardware (e.g. using respiratory belts and digital cameras). Extracting respiratory signals directly from the acquired PET data simplifies the clinical workflow as it avoids handling additional signal measurement equipment. We introduce a new data-driven method ‘combined local motion detection’ (CLMD). It uses the time-of-flight (TOF) information provided by state-of-the-art PET scanners in order to enable real-time respiratory signal extraction without additional hardware resources. CLMD applies center-of-mass detection in overlapping regions based on simple back-positioned TOF event sets acquired in short time frames. Following a signal filtering and quality-based pre-selection step, the remaining extracted individual position information over time is then combined to generate a global respiratory signal. The method is evaluated using seven measured FDG studies from single and multiple scan positions of the thorax region, and it is compared to other software-based methods regarding quantitative accuracy and statistical noise stability. Correlation coefficients around 90% between the reference and the extracted signal have been found for those PET scans where motion affected features such as tumors or hot regions were present in the PET field-of-view. For PET scans with a quarter of typically applied radiotracer doses, the CLMD method still provides similar high correlation coefficients which indicates its robustness to noise. Each CLMD processing needed less than 0.4 s in total on a standard

  10. Comparative Study on Interaction of Form and Motion Processing Streams by Applying Two Different Classifiers in Mechanism for Recognition of Biological Movement

    Science.gov (United States)

    2014-01-01

    Research on psychophysics, neurophysiology, and functional imaging shows a particular representation of biological movements that involves two pathways. The visual perception of biological movements is formed through two streams of the visual system, the dorsal and ventral processing streams. The ventral processing stream is associated with the extraction of form information; the dorsal processing stream, on the other hand, provides motion information. The active basic model (ABM), a hierarchical representation of the human object, introduced novelty in the form pathway by applying a Gabor-based supervised object recognition method, creating more biological plausibility along with similarity to the original model. A fuzzy inference system is used for motion pattern information in the motion pathway, creating more robustness in the recognition process. Moreover, the interaction of these pathways is intriguing and has been considered in many studies across various fields. Here, the interaction of the pathways has been investigated to obtain more appropriate results. An extreme learning machine (ELM) is employed as the classification unit of this model because it retains the main properties of artificial neural networks while substantially reducing the difficulty of long training times. A comparison is made between two different configurations, interaction using a synergetic neural network and interaction using ELM, in terms of accuracy and compatibility. PMID:25276860

  11. Comparative Study on Interaction of Form and Motion Processing Streams by Applying Two Different Classifiers in Mechanism for Recognition of Biological Movement

    Directory of Open Access Journals (Sweden)

    Bardia Yousefi

    2014-01-01

    Full Text Available Research on psychophysics, neurophysiology, and functional imaging shows a particular representation of biological movements that involves two pathways. The visual perception of biological movements is formed through two streams of the visual system, the dorsal and ventral processing streams. The ventral processing stream is associated with the extraction of form information; the dorsal processing stream, on the other hand, provides motion information. The active basic model (ABM), a hierarchical representation of the human object, introduced novelty in the form pathway by applying a Gabor-based supervised object recognition method, creating more biological plausibility along with similarity to the original model. A fuzzy inference system is used for motion pattern information in the motion pathway, creating more robustness in the recognition process. Moreover, the interaction of these pathways is intriguing and has been considered in many studies across various fields. Here, the interaction of the pathways has been investigated to obtain more appropriate results. An extreme learning machine (ELM) is employed as the classification unit of this model because it retains the main properties of artificial neural networks while substantially reducing the difficulty of long training times. A comparison is made between two different configurations, interaction using a synergetic neural network and interaction using ELM, in terms of accuracy and compatibility.
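
    The classification unit mentioned above, an extreme learning machine, has a particularly compact training step: a random hidden layer followed by a closed-form least-squares solve for the output weights. The sketch below is a generic ELM on stand-in feature vectors; the hidden-layer size, activation, and data are assumptions, not the model's actual configuration.

    ```python
    # Generic extreme learning machine: random hidden layer, closed-form
    # least-squares output weights (sizes and data are stand-ins).
    import numpy as np

    class ELM:
        def __init__(self, n_hidden=64, seed=0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X, y):
            T = np.eye(y.max() + 1)[y]                    # one-hot targets
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)              # random hidden features
            self.beta = np.linalg.pinv(H) @ T             # closed-form output weights
            return self

        def predict(self, X):
            return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 20))                # stand-in form/motion feature vectors
    y = (X[:, 0] + X[:, 1] > 0).astype(int)       # toy two-class labels
    model = ELM().fit(X[:200], y[:200])
    print("held-out accuracy:", np.mean(model.predict(X[200:]) == y[200:]))
    ```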

  12. Real-time non-rigid target tracking for ultrasound-guided clinical interventions

    NARCIS (Netherlands)

    Zachiu, Cornel; Ries, Mario G; Ramaekers, Pascal; Guey, Jean-Luc; Moonen, Chrit T W; de Senneville, Baudouin Denis

    2017-01-01

    Biological motion is a problem for non- or mini-invasive interventions when conducted in mobile/deformable organs due to the targeted pathology moving/deforming with the organ. This may lead to high miss rates and/or incomplete treatment of the pathology. Therefore, real-time tracking of the target

  13. Probabilistic recognition of human faces from video

    DEFF Research Database (Denmark)

    Zhou, Shaohua; Krüger, Volker; Chellappa, Rama

    2003-01-01

    Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal information in a probe video, which simultaneously characterizes the kinematics and identity using a motion vector and an identity variable, respectively. The joint posterior distribution of the motion vector and the identity variable is estimated at each time instant and then propagated to the next time instant; the posterior distribution of the identity variable produces the recognition result. The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using videos collected by NIST/USF and CMU illustrate the effectiveness of this approach for both still-to-video and video-to-video recognition.

  14. Action Recognition Using Motion Primitives and Probabilistic Edit Distance

    DEFF Research Database (Denmark)

    Fihl, Preben; Holte, Michael Boelstoft; Moeslund, Thomas B.

    2006-01-01

    In this paper we describe a recognition approach based on the notion of primitives. As opposed to recognizing actions based on temporal trajectories or temporal volumes, primitive-based recognition is based on representing a temporal sequence containing an action by only a few characteristic time...... into a string containing a sequence of symbols, each representing a primitive. After pruning the string, a probabilistic Edit Distance classifier is applied to identify which action best describes the pruned string. The approach is evaluated on five one-arm gestures and the recognition rate is 91...
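
    Classifying a pruned symbol string against action templates by edit distance can be sketched in a few lines. The version below uses plain Levenshtein distance rather than the probabilistic variant described above, and the primitive templates are invented for illustration.

    ```python
    # Nearest-template action classification of a primitive string by plain
    # Levenshtein edit distance (the paper uses a probabilistic variant).
    def edit_distance(a, b):
        d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
             for i in range(len(a) + 1)]
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d[i][j] = min(d[i - 1][j] + 1,                           # deletion
                              d[i][j - 1] + 1,                           # insertion
                              d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        return d[len(a)][len(b)]

    # Hypothetical action templates: each action is a short primitive string.
    templates = {"point": "AABBC", "wave": "ABABAB", "raise": "CCDD"}

    def classify(pruned_string):
        return min(templates, key=lambda k: edit_distance(pruned_string, templates[k]))

    print(classify("ABABB"))   # closest template wins -> 'wave'
    ```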

  15. MO-A-BRD-08: Radiosurgery Beyond Cancer: Real-Time Target Localization and Treatment Planning for Cardiac Radiosurgery Under MRI Guidance

    Energy Technology Data Exchange (ETDEWEB)

    Ipsen, S [University of Luebeck, Luebeck, SH (Germany); University of Sydney, Camperdown (Australia); Blanck, O [CyberKnife Zentrum Norddeutschland, Guestrow, MV (Germany); Oborn, B [Illawarra Cancer Care Centre, Wollongong, NSW (Australia); Bode, F [Medical Clinic II, Section for Electrophysiology, UKSH, Luebeck, SH (Germany); Liney, G [Ingham Institute for Applied Medical Research, Liverpool, NSW (United Kingdom); Keall, P [University of Sydney, Camperdown (Australia)

    2014-06-15

    Purpose: Atrial fibrillation (AF) is the most common cardiac arrhythmia, affecting >2.5M Americans and >4.5M Europeans. AF is usually treated with minimally invasive, time-consuming catheter ablation techniques. Radiosurgery of the pulmonary veins (PV) has been proposed for AF treatment; however, it is challenging due to the complex respiratory and cardiac motion patterns. We hypothesize that an MRI-linac could solve the difficult real-time targeting and adaptation problem. In this study we quantified target motion ranges on cardiac MRI and analyzed the dosimetric benefits of margin reduction assuming real-time MRI tracking was applied. Methods: For the motion study, four human subjects underwent real-time cardiac MRI under free breathing. The target motion on coronal and axial cine planes was analyzed using a template matching algorithm. For the planning study, an ablation line at each PV antrum was defined as the target on an AF patient scheduled for catheter ablation. Various safety margins ranging from 0 mm (perfect tracking) to 8 mm (untracked motion) were added to the target, defining the PTV. 30 Gy single-fraction IMRT plans were then generated. Finally, the influence of a 1 T magnetic field on treatment beam delivery was calculated using the Geant4 Monte Carlo algorithm to simulate the dosimetric impact of MRI guidance. Results: The motion study showed that the mean respiratory motion of the target area on MRI was 8.4 mm (SI), 1.7 mm (AP), and 0.3 mm (LR). Cardiac motion was small (<2 mm). The planning study showed that with increasing safety margins to encompass untracked motion, dose tolerances for OARs such as the esophagus and airways were exceeded by >100%. The magnetic field had little impact on the dose distribution. Conclusion: Our results indicate that real-time MRI tracking of the PVs seems feasible. Accurate image guidance for high-dose AF radiosurgery is essential since safety margins covering untracked target motion will result in unacceptable treatment plans.

  16. Deficient Biological Motion Perception in Schizophrenia: Results from a Motion Noise Paradigm

    Directory of Open Access Journals (Sweden)

    Jejoong Kim

    2013-07-01

    Full Text Available Background: Schizophrenia patients exhibit deficient processing of perceptual and cognitive information. However, it is not well understood how basic perceptual deficits contribute to higher level cognitive problems in this mental disorder. Perception of biological motion, a motion-based cognitive recognition task, relies on both basic visual motion processing and social cognitive processing, thus providing a useful paradigm to evaluate the potentially hierarchical relationship between these two levels of information processing. Methods: In this study, we designed a biological motion paradigm in which basic visual motion signals were manipulated systematically by incorporating different levels of motion noise. We measured the performances of schizophrenia patients (n=21) and healthy controls (n=22) in this biological motion perception task, as well as in coherent motion detection, theory of mind, and a widely used biological motion recognition task. Results: Schizophrenia patients performed the biological motion perception task with significantly lower accuracy than healthy controls when perceptual signals were moderately degraded by noise. A more substantial degradation of perceptual signals, through using additional noise, impaired biological motion perception in both groups. Performance levels on biological motion recognition, coherent motion detection and theory of mind tasks were also reduced in patients. Conclusion: The results from the motion-noise biological motion paradigm indicate that in the presence of visual motion noise, the processing of biological motion information in schizophrenia is deficient. Combined with the results of poor basic visual motion perception (coherent motion task) and biological motion recognition, the association between basic motion signals and biological motion perception suggests a need to incorporate the improvement of visual motion perception in social cognitive remediation.

  17. Semantic Segmentation of Real-time Sensor Data Stream for Complex Activity Recognition

    OpenAIRE

    Triboan, Darpan; Chen, Liming; Chen, Feng; Wang, Zumin

    2016-01-01

    Data segmentation plays a critical role in performing human activity recognition (HAR) in the ambient assistant living (AAL) systems. It is particularly important for complex activity recognition when the events occur in short bursts with attributes of multiple sub-tasks. Althou...

  18. Noncontact optical motion sensing for real-time analysis

    Science.gov (United States)

    Fetzer, Bradley R.; Imai, Hiromichi

    1990-08-01

    The adaptation of an image dissector tube (IDT) within the OPTFOLLOW system provides high resolution displacement measurement of a light discontinuity. Due to the high speed response of the IDT and the advanced servo loop circuitry, the system is capable of real time analysis of the object under test. The image of the discontinuity may be contoured by direct or reflected light and ranges spectrally within the field of visible light. The image is monitored to 500 kHz through a lens configuration which transposes the optical image upon the photocathode of the IDT. The photoelectric effect accelerates the resultant electrons through a photomultiplier and an enhanced current is emitted from the anode. A servo loop controls the electron beam, continually centering it within the IDT using magnetic focusing of deflection coils. The output analog voltage from the servo amplifier is thereby proportional to the displacement of the target. The system is controlled by a microprocessor with a 32kbyte memory and provides a digital display as well as instructional readout on a color monitor allowing for offset image tracking and automatic system calibration.

  19. Real-time knee adduction moment feedback training using an elliptical trainer.

    Science.gov (United States)

    Kang, Sang Hoon; Lee, Song Joo; Ren, Yupeng; Zhang, Li-Qun

    2014-03-01

    The external knee adduction moment (EKAM) is associated with knee osteoarthritis (OA) in many respects, including the presence, progression, and severity of knee OA. Despite its importance, there is a lack of EKAM estimation methods that can provide patients with knee OA with real-time EKAM biofeedback for training and clinical evaluation without using a motion analysis laboratory. A practical real-time EKAM estimation method, which utilizes kinematics measured by a simple six-degree-of-freedom goniometer and kinetics measured by a multi-axis force sensor underneath the foot, was developed to provide real-time feedback of the EKAM to patients during stepping on an elliptical trainer, which can potentially be used to control and alter the EKAM. High reliability (ICC(2,1): 0.9580) of the real-time EKAM estimation method was verified through stepping trials of seven subjects without musculoskeletal disorders. Combined with the advantages of elliptical trainers, including functional weight-bearing stepping and mitigation of impulsive forces, the real-time EKAM estimation method is expected to help patients with knee OA better control frontal-plane knee loading and reduce knee OA development and progression.
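
    For orientation, the EKAM is often approximated as the frontal-plane component of the moment that the ground reaction force produces about the knee joint center. The sketch below is a simplified rigid-lever illustration with made-up numbers, not the validated goniometer/force-sensor pipeline of this study.

    ```python
    # Simplified EKAM estimate: frontal-plane component of the moment of the
    # ground reaction force about the knee joint center (all values invented).
    import numpy as np

    def external_knee_adduction_moment(knee_center, cop, grf, anterior_axis):
        lever = cop - knee_center                 # moment arm from knee to center of pressure (m)
        moment = np.cross(lever, grf)             # external moment about the knee (N*m)
        return float(np.dot(moment, anterior_axis))   # component about the anterior axis

    knee = np.array([0.10, 0.00, 0.45])           # knee joint center in the lab frame (m)
    cop = np.array([0.16, 0.02, 0.00])            # center of pressure under the foot (m)
    grf = np.array([-20.0, 5.0, 700.0])           # ground reaction force during stance (N)
    anterior = np.array([0.0, 1.0, 0.0])          # unit vector pointing forward

    print(f"EKAM ~ {external_knee_adduction_moment(knee, cop, grf, anterior):.1f} N*m")
    ```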

  20. Validation of Magnetic Reconstruction Codes for Real-Time Applications

    International Nuclear Information System (INIS)

    Mazon, D.; Murari, A.; Boulbe, C.; Faugeras, B.; Blum, J.; Svensson, J.; Quilichini, T.; Gelfusa, M.

    2010-01-01

    The real-time reconstruction of the plasma magnetic equilibrium in a tokamak is a key point to access high-performance regimes. Indeed, the shape of the plasma current density profile is a direct output of the reconstruction and has a leading effect on reaching a steady-state high-performance regime of operation. The challenge is thus to develop real-time methods and algorithms that reconstruct the magnetic equilibrium with the perspective of using these outputs for feedback control purposes. In this paper the validation of the JET real-time equilibrium reconstruction codes, using both a Bayesian approach and a full equilibrium solver named Equinox, is detailed, the comparison being performed with the off-line equilibrium code EFIT (equilibrium fitting) or the real-time boundary reconstruction code XLOC (X-point local expansion). In this way a significant database, a methodology, and a strategy for the validation are presented. The validation of the results has been performed using a validated database of 130 JET discharges with a large variety of magnetic configurations. Internal measurements such as polarimetry and motional Stark effect have also been used for the Equinox validation, including some magnetohydrodynamic signatures for the assessment of the reconstructed safety factor profile and current density. (authors)

  1. Exploratory Data Analysis of Acceleration Signals to Select Light-Weight and Accurate Features for Real-Time Activity Recognition on Smartphones

    Directory of Open Access Journals (Sweden)

    Seok-Won Lee

    2013-09-01

    Full Text Available Smartphone-based activity recognition (SP-AR) recognizes users' activities using the embedded accelerometer sensor. Only a small number of previous works can be classified as online systems, i.e., the whole process (pre-processing, feature extraction, and classification) is performed on the device. Most of these online systems use either a high sampling rate (SR) or a long data-window (DW) to achieve high accuracy, resulting in short battery life or delayed system response, respectively. This paper introduces a real-time/online SP-AR system that solves this problem. Exploratory data analysis was performed on acceleration signals of 6 activities, collected from 30 subjects, to show that these signals are generated by an autoregressive (AR) process, and an accurate AR model in this case can be built using a low SR (20 Hz) and a small DW (3 s). The high within-class variance resulting from placing the phone at different positions was reduced using kernel discriminant analysis to achieve position-independent recognition. Neural networks were used as classifiers. Unlike previous works, true subject-independent evaluation was performed, where 10 new subjects evaluated the system at their homes for 1 week. The results show that our features outperformed three commonly used features by 40% in terms of accuracy for the given SR and DW.
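
    The AR-coefficient features described above can be computed per window with a Yule-Walker fit. The sketch below assumes a 3 s window at 20 Hz and an arbitrary model order of 4 on a synthetic signal; it illustrates only the feature-extraction step, not the full recognition pipeline.

    ```python
    # Yule-Walker AR-coefficient features for one 3 s window sampled at 20 Hz
    # (model order 4 is an assumption); the signal here is synthetic.
    import numpy as np

    def ar_coefficients(window, order=4):
        x = window - window.mean()
        # Biased autocorrelation estimates at lags 0..order
        r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)]) / len(x)
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        return np.linalg.solve(R, r[1:order + 1])   # Yule-Walker equations

    fs, win_s = 20, 3
    rng = np.random.default_rng(0)
    t = np.arange(0, win_s, 1 / fs)
    accel = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.normal(size=t.size)
    print("AR feature vector:", np.round(ar_coefficients(accel), 3))
    ```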

  2. A multimodal interface for real-time soldier-robot teaming

    Science.gov (United States)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools toward robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with those of human-human teams for the successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smartphones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g., response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.

  3. Internal models of target motion: expected dynamics overrides measured kinematics in timing manual interceptions.

    Science.gov (United States)

    Zago, Myrka; Bosco, Gianfranco; Maffei, Vincenzo; Iosa, Marco; Ivanenko, Yuri P; Lacquaniti, Francesco

    2004-04-01

    Prevailing views on how we time the interception of a moving object assume that the visual inputs are informationally sufficient to estimate the time-to-contact from the object's kinematics. Here we present evidence in favor of a different view: the brain makes the best estimate about target motion based on measured kinematics and an a priori guess about the causes of motion. According to this theory, a predictive model is used to extrapolate time-to-contact from expected dynamics (kinetics). We projected a virtual target moving vertically downward on a wide screen with different randomized laws of motion. In the first series of experiments, subjects were asked to intercept this target by punching a real ball that fell hidden behind the screen and arrived in synchrony with the visual target. Subjects systematically timed their motor responses consistent with the assumption of gravity effects on an object's mass, even when the visual target did not accelerate. With training, the gravity model was not switched off but adapted to nonaccelerating targets by shifting the time of motor activation. In the second series of experiments, there was no real ball falling behind the screen. Instead, the subjects were required to intercept the visual target by clicking a mouse button. In this case, subjects timed their responses consistent with the assumption of uniform motion in the absence of forces, even when the target actually accelerated. Overall, the results are in accord with the theory that motor responses evoked by visual kinematics are modulated by a prior of the target dynamics. The prior appears surprisingly resistant to modifications based on performance errors.
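
    The contrast between the two internal models can be made concrete with a small worked example: the predicted time-to-contact for a target seen at distance d moving at speed v, extrapolated either with constant velocity or with gravitational acceleration. The numbers are illustrative only.

    ```python
    # Expected time-to-contact for a descending target at distance d (m) with
    # current speed v (m/s), under the two internal models contrasted above.
    import math

    def ttc_constant_velocity(d, v):
        return d / v

    def ttc_gravity(d, v, g=9.81):
        # Positive root of d = v*t + 0.5*g*t**2
        return (-v + math.sqrt(v * v + 2 * g * d)) / g

    d, v = 1.0, 2.0
    print(f"constant-velocity model: {ttc_constant_velocity(d, v):.3f} s")
    print(f"gravity model:           {ttc_gravity(d, v):.3f} s")
    # The gravity model predicts an earlier arrival, consistent with the
    # anticipatory responses observed for non-accelerating visual targets.
    ```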

  4. An Internal Data Non-hiding Type Real-time Kernel and its Application to the Mechatronics Controller

    Science.gov (United States)

    Yoshida, Toshio

    For mechatronics equipment controllers that control robots and machine tools, high-speed motion control processing is essential. The software system of the controller, like that of other embedded systems, is composed of three software layers, a real-time kernel layer, a middleware layer, and an application software layer, on the dedicated hardware. The application layer at the top is composed of a large number of tasks, and the application function of the system is realized by the cooperation between these tasks. In this paper we propose an internal data non-hiding type real-time kernel in which the task control can be customized only by changing the program code on the task side, without any changes in the program code of the real-time kernel. Reducing the overhead caused by the real-time kernel's task control is necessary to speed up the motion control of the mechatronics equipment, and this requires customizing the task control function. We developed the internal data non-hiding type real-time kernel ZRK to evaluate this method and applied it to the control of a multi-system automatic lathe. The speed-up of the task cooperation processing was confirmed by combining task control processing in the task-side program code using the internal data non-hiding type real-time kernel ZRK.

  5. Real-time orbit feedback at the APS

    International Nuclear Information System (INIS)

    Carwardine, J.

    1998-01-01

    A real-time orbit feedback system has been implemented at the Advanced Photon Source in order to meet the stringent orbit stability requirements. The system reduces global orbit motion below 30 Hz by a factor of four to below 5 μm rms horizontally and 2 μm rms vertically. This paper focuses on dynamic orbit stability and describes the all-digital orbit feedback system that has been implemented at the APS. Implementation of the global orbit feedback system is described and its latest performance is presented. Ultimately, the system will provide local feedback at each x-ray source point using installed photon BPMs to measure x-ray beam position and angle directly. Technical challenges associated with local feedback and with dynamics of the associated corrector magnets are described. The unique diagnostic capabilities provided by the APS system are discussed with reference to their use in identifying sources of the underlying orbit motion.

  6. Dual-EKF-Based Real-Time Celestial Navigation for Lunar Rover

    Directory of Open Access Journals (Sweden)

    Li Xie

    2012-01-01

    Full Text Available A key requirement of lunar rover autonomous navigation is to acquire state information accurately in real time during its motion and to set up a gradual parameter-based nonlinear kinematics model for the rover. In this paper, we propose a dual-extended-Kalman-filter- (dual-EKF-) based real-time celestial navigation (RCN) method. The proposed method considers the rover position and velocity on the lunar surface as the system parameters and establishes a constant velocity (CV) model. In addition, the attitude quaternion is considered as the system state, and the quaternion differential equation is established as the state equation, which incorporates the output of the angular rate gyroscope. Therefore, the measurement equation can be established with the sun direction vector from the sun sensor and the speed observation from the speedometer. The continuous gyro output ensures real-time operation of the algorithm. Finally, we use the dual-EKF method to solve the system equations. Simulation results show that the proposed method can acquire the rover position and heading information in real time and greatly improve the navigation accuracy. Our method overcomes the disadvantage of cumulative error in inertial navigation.

  7. A Preliminary Examination of the Second Generation CMORPH Real-time Production

    Science.gov (United States)

    Joyce, R.; Xie, P.; Wu, S.

    2017-12-01

    The second generation CMORPH (CMORPH2) has started test real-time production of 30-minute precipitation estimates on a 0.05° lat/lon grid over the entire globe, from pole to pole. The CMORPH2 is built upon the Kalman Filter based CMORPH algorithm of Joyce and Xie (2011). Inputs to the system include rainfall and snowfall rate retrievals from passive microwave (PMW) measurements aboard all available low earth orbit (LEO) satellites, precipitation estimates derived from infrared (IR) observations of geostationary (GEO) and LEO platforms, and precipitation simulations from the NCEP operational global forecast system (GFS). Inputs from the various sources are first inter-calibrated to ensure quantitative consistencies in representing precipitation events of different intensities through PDF calibration against a common reference standard. The inter-calibrated PMW retrievals and IR-based precipitation estimates are then propagated from their respective observation times to the target analysis time along the motion vectors of the precipitating clouds. Motion vectors are first derived separately from the satellite IR based precipitation estimates and the GFS precipitation fields. These individually derived motion vectors are then combined through a 2D-VAR technique to form an analyzed field of cloud motion vectors over the entire globe. The propagated PMW and IR based precipitation estimates are finally integrated into a single field of global precipitation through the Kalman Filter framework. A set of procedures has been established to examine the performance of the CMORPH2 real-time production. CMORPH2 satellite precipitation estimates are compared against the CPC daily gauge analysis, Stage IV radar precipitation over the CONUS, and numerical model forecasts to discover potential shortcomings and quantify improvements against the first generation CMORPH. Special attention has been focused on the CMORPH2 behavior over high-latitude areas beyond the coverage of the first generation CMORPH.

  8. Real time ray tracing based on shader

    Science.gov (United States)

    Gui, JiangHeng; Li, Min

    2017-07-01

    Ray tracing is a rendering algorithm that generates an image by tracing rays of light through an image plane; it can simulate complicated optical phenomena such as refraction, depth of field, and motion blur. Compared with rasterization, ray tracing can achieve more realistic rendering results, but at a greater computational cost: even a simple scene can take a long time to render. With the improvement of GPU performance and the advent of the programmable rendering pipeline, complicated algorithms can also be implemented directly in shaders. This paper therefore proposes a new method that implements ray tracing directly in a fragment shader, mainly including surface intersection, importance sampling, and progressive rendering. With the GPU's powerful throughput, it can achieve real-time rendering of simple scenes.
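
    The per-pixel loop (intersect, shade, progressively average jittered samples) can be illustrated off-GPU. The sketch below is a CPU analogue in Python rather than actual fragment-shader (GLSL) code; the scene, resolution, and shading values are placeholders.

    ```python
    # CPU analogue of a per-pixel ray-tracing loop: ray-sphere intersection
    # with progressive averaging of jittered samples (scene is a placeholder).
    import numpy as np

    def hit_sphere(origin, direction, center, radius):
        oc = origin - center
        b = np.dot(oc, direction)                 # direction is unit length
        disc = b * b - (np.dot(oc, oc) - radius * radius)
        return disc >= 0 and -b - np.sqrt(disc) > 0

    def render(width=64, height=48, samples=8, seed=0):
        rng = np.random.default_rng(seed)
        center, radius = np.array([0.0, 0.0, -3.0]), 1.0
        image = np.zeros((height, width))
        for s in range(samples):                  # progressive refinement
            for y in range(height):
                for x in range(width):
                    u = (x + rng.random()) / width * 2 - 1    # jittered pixel sample
                    v = (y + rng.random()) / height * 2 - 1
                    d = np.array([u, v, -1.0])
                    d /= np.linalg.norm(d)
                    sample = 1.0 if hit_sphere(np.zeros(3), d, center, radius) else 0.1
                    image[y, x] += (sample - image[y, x]) / (s + 1)   # running mean
        return image

    img = render()
    print("rendered", img.shape, "mean intensity:", round(float(img.mean()), 3))
    ```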

  9. Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake

    Science.gov (United States)

    Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten

    2014-05-01

    In recent decades, great efforts have been expended in real-time seismology aiming at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on the rapid estimate of P-wave magnitude, which contains generally large uncertainties and the known saturation problem. In the case of the 2011 Mw9.0 Tohoku earthquake, JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. The following updates of the magnitude even decreased to M6.3-6.6. Finally, the magnitude estimate stabilized at M8.1 after about two minutes. This led consequently to the underestimated tsunami heights. By using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test for the real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that we had been theoretically able to image the complex rupture process of the 2011 Tohoku earthquake automatically soon after or even during the rupture process. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s by using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique is helpful to reduce false and missing warnings, and therefore should play an important role in future tsunami early warning and earthquake rapid response systems.

  10. Stability Analysis and Variational Integrator for Real-Time Formation Based on Potential Field

    Directory of Open Access Journals (Sweden)

    Shengqing Yang

    2014-01-01

    Full Text Available This paper investigates a framework for real-time formation of autonomous vehicles using a potential field and a variational integrator. Real-time formation requires vehicles to have coordinated motion and efficient computation. Interactions described by a potential field can meet the former requirement, but result in a nonlinear system whose stability analysis is difficult. Our stability analysis is carried out on the error dynamic system. A transformation of coordinates from the inertial frame to the body frame lets the stability analysis focus on the structure instead of particular coordinates; the Jacobian of the reduced system can then be calculated. It can be proved that the formation is stable at the equilibrium point of the error dynamic system under the effect of a damping force. For computational efficiency, a variational integrator is introduced, which is equivalent to solving algebraic equations. The forced Euler-Lagrange equation in discrete form is used to construct a forced variational integrator for vehicles in a potential field with obstacles. By applying the forced variational integrator to the computation of the vehicles' motion, real-time formation of vehicles in an obstacle environment can be implemented. An algorithm based on the forced variational integrator is designed for a leader-follower formation.
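
    For a single vehicle in an attractive goal potential with a repulsive obstacle term, a forced variational integrator reduces to a Verlet-type discrete Euler-Lagrange update. The sketch below is a one-vehicle toy with invented gains, not the leader-follower algorithm of the paper.

    ```python
    # Verlet-type forced variational integrator for one vehicle in a potential
    # field (attractive goal + repulsive obstacle) with damping; gains invented.
    import numpy as np

    def grad_potential(q, goal, obstacle, k_goal=1.0, k_obs=0.2):
        # Gradient of 0.5*k_goal*|q-goal|^2 + 0.5*k_obs/|q-obstacle|^2
        d = q - obstacle
        r2 = np.dot(d, d) + 1e-6
        return k_goal * (q - goal) - k_obs * d / r2 ** 2

    def step(q_prev, q_curr, h, m, goal, obstacle, c_damp=0.4):
        v = (q_curr - q_prev) / h                       # discrete velocity
        force = -grad_potential(q_curr, goal, obstacle) - c_damp * v
        # Discrete forced Euler-Lagrange: m*(q_next - 2*q_curr + q_prev)/h^2 = force
        return 2 * q_curr - q_prev + (h * h / m) * force

    h, m = 0.05, 1.0
    goal, obstacle = np.array([5.0, 0.0]), np.array([2.5, 0.5])
    q_prev = q_curr = np.array([0.0, 0.0])
    for _ in range(400):
        q_prev, q_curr = q_curr, step(q_prev, q_curr, h, m, goal, obstacle)
    print("final position (should settle near the goal):", np.round(q_curr, 2))
    ```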

  11. The Effects of Musical and Linguistic Components in Recognition of Real-World Musical Excerpts by Cochlear Implant Recipients and Normal-Hearing Adults

    Science.gov (United States)

    Gfeller, Kate; Jiang, Dingfeng; Oleson, Jacob; Driscoll, Virginia; Olszewski, Carol; Knutson, John F.; Turner, Christopher; Gantz, Bruce

    2011-01-01

    Background Cochlear implants (CI) are effective in transmitting salient features of speech, especially in quiet, but current CI technology is not well suited in transmission of key musical structures (e.g., melody, timbre). It is possible, however, that sung lyrics, which are commonly heard in real-world music may provide acoustical cues that support better music perception. Objective The purpose of this study was to examine how accurately adults who use CIs (n=87) and those with normal hearing (NH) (n=17) are able to recognize real-world music excerpts based upon musical and linguistic (lyrics) cues. Results CI recipients were significantly less accurate than NH listeners on recognition of real-world music with or, in particular, without lyrics; however, CI recipients whose devices transmitted acoustic plus electric stimulation were more accurate than CI recipients reliant upon electric stimulation alone (particularly items without linguistic cues). Recognition by CI recipients improved as a function of linguistic cues. Methods Participants were tested on melody recognition of complex melodies (pop, country, classical styles). Results were analyzed as a function of: hearing status and history, device type (electric only or acoustic plus electric stimulation), musical style, linguistic and musical cues, speech perception scores, cognitive processing, music background, age, and in relation to self-report on listening acuity and enjoyment. Age at time of testing was negatively correlated with recognition performance. Conclusions These results have practical implications regarding successful participation of CI users in music-based activities that include recognition and accurate perception of real-world songs (e.g., reminiscence, lyric analysis, listening for enjoyment). PMID:22803258

  12. The effects of musical and linguistic components in recognition of real-world musical excerpts by cochlear implant recipients and normal-hearing adults.

    Science.gov (United States)

    Gfeller, Kate; Jiang, Dingfeng; Oleson, Jacob J; Driscoll, Virginia; Olszewski, Carol; Knutson, John F; Turner, Christopher; Gantz, Bruce

    2012-01-01

    Cochlear implants (CI) are effective in transmitting salient features of speech, especially in quiet, but current CI technology is not well suited in transmission of key musical structures (e.g., melody, timbre). It is possible, however, that sung lyrics, which are commonly heard in real-world music may provide acoustical cues that support better music perception. The purpose of this study was to examine how accurately adults who use CIs (n = 87) and those with normal hearing (NH) (n = 17) are able to recognize real-world music excerpts based upon musical and linguistic (lyrics) cues. CI recipients were significantly less accurate than NH listeners on recognition of real-world music with or, in particular, without lyrics; however, CI recipients whose devices transmitted acoustic plus electric stimulation were more accurate than CI recipients reliant upon electric stimulation alone (particularly items without linguistic cues). Recognition by CI recipients improved as a function of linguistic cues. Participants were tested on melody recognition of complex melodies (pop, country, & classical styles). Results were analyzed as a function of: hearing status and history, device type (electric only or acoustic plus electric stimulation), musical style, linguistic and musical cues, speech perception scores, cognitive processing, music background, age, and in relation to self-report on listening acuity and enjoyment. Age at time of testing was negatively correlated with recognition performance. These results have practical implications regarding successful participation of CI users in music-based activities that include recognition and accurate perception of real-world songs (e.g., reminiscence, lyric analysis, & listening for enjoyment).

  13. Real-Time Energy Management Control for Hybrid Electric Powertrains

    Directory of Open Access Journals (Sweden)

    Mohamed Zaher

    2013-01-01

    Full Text Available This paper focuses on embedded control of hybrid powertrain concepts for mobile vehicle applications. An optimal robust control approach is used to develop a real-time energy management strategy. The main idea is to store the normally wasted mechanical regenerative energy in energy storage devices for later use. The regenerative energy recovery opportunity exists in any condition where the speed of motion is in the opposite direction to the applied force or torque. This is the case when the vehicle is braking or decelerating, or when the motion is driven by gravitational force or by the load. There are three main concepts for energy storage devices in hybrid vehicles: electric, hydraulic, and mechanical (flywheel). The real-time control challenge is to balance the system power demands from the engine and the hybrid storage device without depleting the energy storage device or stalling the engine in any work cycle. In the worst-case scenario, only the engine is used and the hybrid system is completely disabled. A rule-based control algorithm is developed, is tuned for different work cycles, and can be linked to a gain scheduling algorithm. The gain scheduling algorithm identifies the cycle being performed by the work machine and its position via GPS and maps both of them to the gains.
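
    A toy version of such a rule-based supervisory controller is sketched below: it decides per time step whether to charge the storage device from regenerative events, assist the engine, or run engine-only, without depleting storage. The thresholds and signal names are illustrative assumptions.

    ```python
    # Toy rule-based supervisory controller (thresholds and signals assumed):
    # charge storage during regenerative events, assist the engine when the
    # state of charge allows, otherwise run engine-only.
    def energy_management(engine_power_demand, wheel_speed, wheel_torque,
                          state_of_charge, soc_min=0.2, soc_max=0.9):
        regenerating = wheel_speed * wheel_torque < 0   # motion opposes the applied torque
        if regenerating and state_of_charge < soc_max:
            return "charge_storage"
        if engine_power_demand > 0 and state_of_charge > soc_min:
            return "assist_engine"
        return "engine_only"

    # One simulated work cycle: (power demand kW, wheel speed rad/s, wheel torque N*m, SOC)
    cycle = [(40, 10, 300, 0.50), (5, 12, -200, 0.50), (60, 8, 500, 0.85), (30, 6, 250, 0.15)]
    for sample in cycle:
        print(energy_management(*sample))
    ```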

  14. Loss pattern identification in near-real-time accounting systems

    International Nuclear Information System (INIS)

    Argentesi, F.

    1983-01-01

    To maximize the benefits from an advanced safeguards technique such as near-real-time accounting, sophisticated methods of analysing sequential material accounting data are necessary. The methods must be capable of controlling the overall false-alarm rate while assuring good power of detection against all possible diversion scenarios. A method drawn from the field of pattern recognition and related to the alarm-sequence chart appears to be promising. Power curves based on Monte Carlo calculations illustrate the improvements over more conventional methods. (author)

  15. On-chip real-time single-copy polymerase chain reaction in picoliter droplets

    Energy Technology Data Exchange (ETDEWEB)

    Beer, N R; Hindson, B; Wheeler, E; Hall, S B; Rose, K A; Kennedy, I; Colston, B

    2007-04-20

    The first lab-on-chip system for picoliter droplet generation and PCR amplification with real-time fluorescence detection has performed PCR in isolated droplets at volumes 10⁶ times smaller than those of commercial real-time PCR systems. The system utilized a shearing T-junction in a silicon device to generate a stream of monodisperse picoliter droplets that were isolated from the microfluidic channel walls and from each other by the oil-phase carrier. An off-chip valving system stopped the droplets on-chip, allowing them to be thermally cycled through the PCR protocol without droplet motion. With this system a 10-pL droplet, encapsulating less than one copy of viral genomic DNA through Poisson statistics, showed real-time PCR amplification curves with a cycle threshold of ≈18, twenty cycles earlier than commercial instruments. This combination of the established real-time PCR assay with digital microfluidics is ideal for isolating single-copy nucleic acids in a complex environment.
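
    The Poisson-statistics claim, that suitable dilution leaves most droplets with zero or one template copy, is a one-line calculation. The mean occupancy used below is an assumed illustrative value.

    ```python
    # Poisson occupancy of template copies per droplet at an assumed mean
    # loading of 0.1 copies per 10 pL droplet.
    from math import exp, factorial

    def poisson_pmf(k, lam):
        return lam ** k * exp(-lam) / factorial(k)

    lam = 0.1
    for k in range(3):
        print(f"P({k} copies) = {poisson_pmf(k, lam):.4f}")
    print(f"P(>=2 copies) = {1 - poisson_pmf(0, lam) - poisson_pmf(1, lam):.5f}")
    ```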

  16. Activity Recognition for Personal Time Management

    Science.gov (United States)

    Prekopcsák, Zoltán; Soha, Sugárka; Henk, Tamás; Gáspár-Papanek, Csaba

    We describe an accelerometer based activity recognition system for mobile phones with a special focus on personal time management. We compare several data mining algorithms for the automatic recognition task in the case of single user and multiuser scenario, and improve accuracy with heuristics and advanced data mining methods. The results show that daily activities can be recognized with high accuracy and the integration with the RescueTime software can give good insights for personal time management.

  17. Observations on Real-Time Prostate Gland Motion Using Electromagnetic Tracking

    International Nuclear Information System (INIS)

    Langen, Katja M.; Willoughby, Twyla R.; Meeks, Sanford L.; Santhanam, Anand; Cunningham, Alexis; Levine, Lisa; Kupelian, Patrick A.

    2008-01-01

    Purpose: To quantify and describe the real-time movement of the prostate gland in a large data set of patients treated with radiotherapy. Methods and Materials: The Calypso four-dimensional localization system was used for target localization in 17 patients, with electromagnetic markers implanted in the prostate of each patient. We analyzed a total of 550 continuous tracking sessions. The fraction of time that the prostate was displaced by >3, >5, >7, and >10 mm was calculated for each session and patient. The frequencies of displacements after initial patient positioning were analyzed over time. Results: Averaged over all patients, the prostate was displaced >3 and >5 mm for 13.6% and 3.3% of the total treatment time, respectively. For individual patients, the corresponding maximal values were 36.2% and 10.9%. For individual fractions, the corresponding maximal values were 98.7% and 98.6%. Displacements >3 mm were observed at 5 min after initial alignment in about one-eighth of the observations, and increased to one-quarter by 10 min. For individual patients, the maximal value of the displacements >3 mm at 5 and 10 min after initial positioning was 43% and 75%, respectively. Conclusion: On average, the prostate was displaced by >3 mm and >5 mm approximately 14% and 3% of the time, respectively. For individual patients, these values were up to three times greater. After the initial positioning, the likelihood of displacement of the prostate gland increased with elapsed time. This highlights the importance of initiating treatment shortly after initially positioning the patient

  18. Real-time PCR assay using fine-needle aspirates and tissue biopsy specimens for rapid diagnosis of mycobacterial lymphadenitis in children

    NARCIS (Netherlands)

    Bruijnesteijn van Coppenraet, E. S.; Lindeboom, J. A.; Prins, J. M.; Peeters, M. F.; Claas, E. C. J.; Kuijper, E. J.

    2004-01-01

    A real-time PCR assay was developed to diagnose and identify the causative agents of suspected mycobacterial lymphadenitis. Primers and probes for the real-time PCR were designed on the basis of the internal transcribed spacer sequence, enabling the recognition of the genus Mycobacterium and the

  19. Real Time Earthquake Information System in Japan

    Science.gov (United States)

    Doi, K.; Kato, T.

    2003-12-01

    An early earthquake notification system in Japan had been developed by the Japan Meteorological Agency (JMA) as a governmental organization responsible for issuing earthquake information and tsunami forecasts. The system was primarily developed for prompt provision of a tsunami forecast to the public with locating an earthquake and estimating its magnitude as quickly as possible. Years after, a system for a prompt provision of seismic intensity information as indices of degrees of disasters caused by strong ground motion was also developed so that concerned governmental organizations can decide whether it was necessary for them to launch emergency response or not. At present, JMA issues the following kinds of information successively when a large earthquake occurs. 1) Prompt report of occurrence of a large earthquake and major seismic intensities caused by the earthquake in about two minutes after the earthquake occurrence. 2) Tsunami forecast in around three minutes. 3) Information on expected arrival times and maximum heights of tsunami waves in around five minutes. 4) Information on a hypocenter and a magnitude of the earthquake, the seismic intensity at each observation station, the times of high tides in addition to the expected tsunami arrival times in 5-7 minutes. To issue information above, JMA has established; - An advanced nationwide seismic network with about 180 stations for seismic wave observation and about 3,400 stations for instrumental seismic intensity observation including about 2,800 seismic intensity stations maintained by local governments, - Data telemetry networks via landlines and partly via a satellite communication link, - Real-time data processing techniques, for example, the automatic calculation of earthquake location and magnitude, the database driven method for quantitative tsunami estimation, and - Dissemination networks, via computer-to-computer communications and facsimile through dedicated telephone lines. JMA operationally

  20. A hierarchical graph neuron scheme for real-time pattern recognition.

    Science.gov (United States)

    Nasution, B B; Khan, A I

    2008-02-01

    The hierarchical graph neuron (HGN) implements a single-cycle memorization and recall operation through a novel algorithmic design. The HGN is an improvement on the previously published original graph neuron (GN) algorithm. The improved approach recognizes incomplete/noisy patterns and also resolves the crosstalk problem within closely matched patterns that was identified in previous publications. To accomplish this, the HGN links multiple GN networks to filter noise and crosstalk out of pattern data inputs. Intrinsically, the HGN is a lightweight in-network processing algorithm that does not require expensive floating-point computations; hence, it is very suitable for real-time applications and tiny devices such as wireless sensor networks. This paper shows that the HGN's pattern matching capability and its small response time remain insensitive to increases in the number of stored patterns. Moreover, the HGN requires neither the definition of rules nor the setting of thresholds by the operator to achieve the desired results, nor does it require heuristics entailing iterative operations for memorization and recall of patterns.

  1. Real-time orbit feedback at the APS

    International Nuclear Information System (INIS)

    Carwardine, J.A.; Lenkszus, F.R.

    1998-01-01

    A real-time orbit feedback system has been implemented at the Advanced Photon Source in order to meet the stringent orbit stability requirements. The system reduces global orbit motion below 30 Hz by a factor of four to below 5 μm rms horizontally and 2 μm rms vertically. This paper focuses on dynamic orbit stability and describes the all-digital orbit feedback system that has been implemented at the APS. Implementation of the global orbit feedback system is described and its latest performance is presented. Ultimately, the system will provide local feedback at each x-ray source point using installed photon BPMs to measure x-ray beam position and angle directly. Technical challenges associated with local feedback and with dynamics of the associated corrector magnets are described. The unique diagnostic capabilities provided by the APS system are discussed with reference to their use in identifying sources of the underlying orbit motion. copyright 1998 American Institute of Physics

  2. Real-time orbit feedback at the APS.

    Energy Technology Data Exchange (ETDEWEB)

    Carwardine, J.

    1998-06-18

    A real-time orbit feedback system has been implemented at the Advanced Photon Source in order to meet the stringent orbit stability requirements. The system reduces global orbit motion below 30 Hz by a factor of four to below 5 μm rms horizontally and 2 μm rms vertically. This paper focuses on dynamic orbit stability and describes the all-digital orbit feedback system that has been implemented at the APS. Implementation of the global orbit feedback system is described and its latest performance is presented. Ultimately, the system will provide local feedback at each x-ray source point using installed photon BPMs to measure x-ray beam position and angle directly. Technical challenges associated with local feedback and with dynamics of the associated corrector magnets are described. The unique diagnostic capabilities provided by the APS system are discussed with reference to their use in identifying sources of the underlying orbit motion.

  3. project SENSE : multimodal simulation with full-body real-time verbal and nonverbal interactions

    NARCIS (Netherlands)

    Miri, Hossein; Kolkmeier, Jan; Taylor, Paul Jonathon; Poppe, Ronald; Heylen, Dirk; Poppe, Ronald; Meyer, John-Jules; Veltkamp, Remco; Dastani, Mehdi

    2016-01-01

    This paper presents a multimodal simulation system, project-SENSE, that combines virtual reality and full-body motion capture technologies with real-time verbal and nonverbal communication. We introduce the technical setup and employed hardware and software of a first prototype. We discuss the

  4. Loss-pattern identification in near-real-time accounting systems

    International Nuclear Information System (INIS)

    Argentesi, F.; Hafer, J.F.; Markin, J.T.; Shipley, J.P.

    1982-01-01

    To maximize the benefits from an advanced safeguards technique such as near-real-time accounting (NRTA), sophisticated methods of analyzing sequential materials accounting data are necessary. The methods must be capable of controlling the overall false-alarm rate while assuring good power of detection against all possible diversion scenarios. A method drawn from the field of pattern recognition and related to the alarm-sequence chart appears to be promising. Power curves based on Monte Carlo calculations illustrate the improvements over more conventional methods. 3 figures, 2 tables

  5. Real-time determination of magnetic island location for neoclassical tearing mode control in DIII-D

    International Nuclear Information System (INIS)

    Park, Y S; Welander, A S

    2006-01-01

    Accurate measurement of island location is crucial for efficient suppression of the neoclassical tearing mode by electron cyclotron current drive (ECCD). In the control system on DIII-D the contour of the resonant q-surface is measured in real time based on real-time magnetohydrodynamic reconstructions, EFITs, that include motional Stark effect measurements of pitch angle in the analysis. A new method for determination of the radial position of the q-surface using a 40 channel electron cyclotron emission radiometer has been developed. This method analyses localized temperature fluctuations caused by motion of the island and can be used by the plasma control system as a complementary measurement of the radial position of the q-surface contour for faster and more accurate alignment of the ECCD

  6. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    Science.gov (United States)

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging, where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
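
    To make the speckle-variance quantity itself concrete, the sketch below computes the basic per-pixel interframe variance over a small gate of registered frames in NumPy. It is only an illustration of the SV measure mentioned above; the array names, shapes, and synthetic data are assumptions, and the published GPU pipeline is not reproduced.

```python
# A minimal interframe speckle-variance sketch, assuming a stack of N registered
# structural frames is available as a NumPy array. Names, shapes and the
# synthetic data are illustrative; the paper's GPU pipeline is not reproduced.
import numpy as np

def speckle_variance(frames: np.ndarray) -> np.ndarray:
    """frames: (N, rows, cols) consecutive registered frames (linear intensity).
    Returns the per-pixel variance across the N frames; decorrelating (flowing)
    scatterers give high values, static tissue gives low values."""
    mean = frames.mean(axis=0)
    return ((frames - mean) ** 2).mean(axis=0)

# Synthetic example: a 4-frame gate (as in the n = 4 display above) of 512 x 512
# pixels in which one small region fluctuates from frame to frame ("vessel").
rng = np.random.default_rng(0)
stack = 1.0 + 0.01 * rng.standard_normal((4, 512, 512))
stack[:, 200:220, 200:220] += rng.standard_normal((4, 20, 20))
sv_map = speckle_variance(stack)
print(sv_map[210, 210] > sv_map[50, 50])  # True: the fluctuating region stands out
```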

  7. Mutual Information Based Dynamic Integration of Multiple Feature Streams for Robust Real-Time LVCSR

    Science.gov (United States)

    Sato, Shoei; Kobayashi, Akio; Onoe, Kazuo; Homma, Shinichi; Imai, Toru; Takagi, Tohru; Kobayashi, Tetsunori

    We present a novel method of integrating the likelihoods of multiple feature streams, representing different acoustic aspects, for robust speech recognition. The integration algorithm dynamically calculates a frame-wise stream weight so that a higher weight is given to a stream that is robust to a variety of noisy environments or speaking styles. Such a robust stream is expected to show discriminative ability. A conventional method proposed for the recognition of spoken digits calculates the weights from the entropy of the whole set of HMM states. This paper extends the dynamic weighting to a real-time large-vocabulary continuous speech recognition (LVCSR) system. The proposed weight is calculated in real time from the mutual information between an input stream and the active HMM states in the search space, without an additional likelihood calculation. Furthermore, the mutual information takes the width of the search space into account by calculating the marginal entropy from the number of active states. In this paper, we integrate three features that are extracted through auditory filters by taking into account the human auditory system's ability to extract amplitude and frequency modulations; accordingly, features representing energy, amplitude drift, and resonant frequency drift are integrated. These features are expected to provide complementary clues for speech recognition. Speech recognition experiments on field reports and spontaneous commentary from Japanese broadcast news showed that the proposed method reduced word errors by 9.2% in field reports and 4.7% in spontaneous commentaries relative to the best result obtained from a single stream.
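
    The weighting idea can be sketched with a simplified entropy-based surrogate for the paper's mutual-information rule: streams whose active-state posteriors have low (normalized) entropy are treated as more discriminative and receive higher weight. All names and the toy likelihoods below are illustrative assumptions.

```python
# Simplified entropy-based stream weighting (a surrogate for the paper's
# mutual-information formulation). Only the idea "low entropy -> high weight"
# is shown; the exact weighting rule and data are illustrative.
import numpy as np

def stream_weights(stream_likelihoods):
    """stream_likelihoods: one 1-D array per feature stream, holding the
    likelihoods of the currently active HMM states in the search space.
    Returns one weight per stream, normalised to sum to one."""
    confidences = []
    for lik in stream_likelihoods:
        p = lik / lik.sum()                     # posterior over active states
        h = -(p * np.log(p + 1e-12)).sum()      # Shannon entropy
        h_max = np.log(len(p))                  # marginal entropy for this search width
        confidences.append(1.0 - h / h_max)     # discriminative streams score higher
    w = np.asarray(confidences)
    return w / w.sum()

def combined_log_likelihood(stream_loglikes, weights):
    """Frame score as a weighted sum of per-stream log-likelihoods."""
    return sum(wi * ll for wi, ll in zip(weights, stream_loglikes))

# Toy example: stream 0 is sharply peaked (discriminative), stream 1 is flat.
s0 = np.array([0.70, 0.10, 0.10, 0.10])
s1 = np.array([0.26, 0.25, 0.25, 0.24])
print(stream_weights([s0, s1]))   # stream 0 receives the larger weight
```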

  8. Real-time classification of auditory sentences using evoked cortical activity in humans

    Science.gov (United States)

    Moses, David A.; Leonard, Matthew K.; Chang, Edward F.

    2018-06-01

    Objective. Recent research has characterized the anatomical and functional basis of speech perception in the human auditory cortex. These advances have made it possible to decode speech information from activity in brain regions like the superior temporal gyrus, but no published work has demonstrated this ability in real-time, which is necessary for neuroprosthetic brain-computer interfaces. Approach. Here, we introduce a real-time neural speech recognition (rtNSR) software package, which was used to classify spoken input from high-resolution electrocorticography signals in real-time. We tested the system with two human subjects implanted with electrode arrays over the lateral brain surface. Subjects listened to multiple repetitions of ten sentences, and rtNSR classified what was heard in real-time from neural activity patterns using direct sentence-level and HMM-based phoneme-level classification schemes. Main results. We observed single-trial sentence classification accuracies of 90% or higher for each subject with less than 7 minutes of training data, demonstrating the ability of rtNSR to use cortical recordings to perform accurate real-time speech decoding in a limited vocabulary setting. Significance. Further development and testing of the package with different speech paradigms could influence the design of future speech neuroprosthetic applications.

  9. MOCA: A Low-Power, Low-Cost Motion Capture System Based on Integrated Accelerometers

    Directory of Open Access Journals (Sweden)

    Elisabetta Farella

    2007-01-01

    Full Text Available Human-computer interaction (HCI) and virtual reality applications pose the challenge of enabling real-time interfaces for natural interaction. Gesture recognition based on body-mounted accelerometers has been proposed as a viable solution to translate patterns of movements that are associated with user commands, thus substituting point-and-click methods or other cumbersome input devices. On the other hand, cost and power constraints make the implementation of a natural and efficient interface suitable for consumer applications a critical task. Even though several gesture recognition solutions exist, their use in the HCI context has been poorly characterized. For this reason, in this paper, we consider a low-cost/low-power wearable motion tracking system based on integrated accelerometers called motion capture with accelerometers (MOCA) that we evaluated for navigation in virtual spaces. Recognition is based on a geometric algorithm that enables efficient and robust detection of rotational movements. Our objective is to demonstrate that such a low-cost, low-power implementation is suitable for HCI applications. To this purpose, we characterized the system from both quantitative and qualitative points of view. First, we performed static and dynamic assessments of movement recognition accuracy. Second, we evaluated the effectiveness of the user experience using a 3D game application as a test bed.
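
    One simple geometric treatment of rotational movements from a tri-axial accelerometer is sketched below: tilt angles are estimated from the gravity direction and a gesture is flagged when the tilt change exceeds a threshold. The threshold, axis conventions, and sample window are assumptions for illustration and are not claimed to be MOCA's published algorithm.

```python
# A simple geometric tilt/rotation detector for a body-worn tri-axial
# accelerometer. Threshold, axis conventions and the window are illustrative
# assumptions, not MOCA's published algorithm.
import math

def pitch_roll(ax, ay, az):
    """Tilt angles (radians) from one quasi-static accelerometer sample in g."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

def detect_rotation(samples, threshold_deg=30.0):
    """True if pitch or roll varies by more than threshold_deg over the window."""
    angles = [pitch_roll(*s) for s in samples]
    pitch_range = max(a[0] for a in angles) - min(a[0] for a in angles)
    roll_range = max(a[1] for a in angles) - min(a[1] for a in angles)
    return math.degrees(max(pitch_range, roll_range)) > threshold_deg

# A wrist rolling from flat (gravity on +z) towards its side (gravity on +y).
window = [(0.0, 0.0, 1.0), (0.0, 0.3, 0.95), (0.0, 0.6, 0.8), (0.0, 0.85, 0.5)]
print(detect_rotation(window))  # True: roll changes by well over 30 degrees
```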

  10. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Directory of Open Access Journals (Sweden)

    Vasanthan Maruthapillai

    Full Text Available In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance from each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.

  11. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Science.gov (United States)

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance from each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
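
    The feature extraction described in both records above can be sketched as follows, assuming the eight tracked marker positions are available per frame; the synthetic coordinates and the exact feature layout are illustrative.

```python
# Marker-based feature extraction as described above: distances of eight tracked
# virtual markers to the face centre and their displacement from the initial
# frame, reduced to mean/variance/RMS. Coordinates here are synthetic.
import numpy as np

def marker_features(tracks: np.ndarray, face_center: np.ndarray) -> np.ndarray:
    """tracks: (T, 8, 2) marker positions over T frames; face_center: (2,).
    Returns a flat vector of statistics suitable for a k-NN or PNN classifier."""
    dist_to_center = np.linalg.norm(tracks - face_center, axis=2)     # (T, 8)
    change_from_start = np.linalg.norm(tracks - tracks[0], axis=2)    # (T, 8)
    feats = []
    for signal in (dist_to_center, change_from_start):
        feats += [signal.mean(axis=0),
                  signal.var(axis=0),
                  np.sqrt((signal ** 2).mean(axis=0))]                # RMS
    return np.concatenate(feats)   # 8 markers x 2 signals x 3 statistics = 48

rng = np.random.default_rng(1)
tracks = 100.0 + rng.normal(size=(30, 8, 2))    # 30 frames of 8 tracked markers
print(marker_features(tracks, np.array([100.0, 100.0])).shape)   # (48,)
```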

  12. Relationship of Imaging Frequency and Planning Margin to Account for Intrafraction Prostate Motion: Analysis Based on Real-Time Monitoring Data

    International Nuclear Information System (INIS)

    Curtis, William; Khan, Mohammad; Magnelli, Anthony; Stephans, Kevin; Tendulkar, Rahul; Xia, Ping

    2013-01-01

    Purpose: Correction for intrafraction prostate motion becomes important for hypofractionated treatment of prostate cancer. The purpose of this study was to estimate an ideal planning margin to account for intrafraction prostate motion as a function of imaging and repositioning frequency in the absence of continuous prostate motion monitoring. Methods and Materials: For 31 patients receiving intensity modulated radiation therapy treatment, prostate positions sampled at 10 Hz during treatment using the Calypso system were analyzed. Using these data, we simulated multiple, less frequent imaging protocols, including intervals of every 10, 15, 20, 30, 45, 60, 90, 120, 180, and 240 seconds. For each imaging protocol, the prostate displacement at the imaging time was corrected by subtracting prostate shifts from the subsequent displacements in that fraction. Furthermore, we conducted a principal component analysis to quantify the direction of prostate motion. Results: Averaging histograms of every 240 and 60 seconds for all patients, vector displacements of the prostate were, respectively, within 3 and 2 mm for 95% of the treatment time. A vector margin of 1 mm achieved 91.2% coverage of the prostate with 30-second imaging. The principal component analysis for all fractions showed the largest variance in prostate position in the midsagittal plane at 54° from the anterior direction, indicating that anterosuperior to inferoposterior is the direction of greatest motion. The smallest prostate motion is in the left-right direction. Conclusions: The magnitudes of intrafraction prostate motion along the superior-inferior and anterior-posterior directions are comparable, and the smallest motion is in the left-right direction. In the absence of continuous prostate motion monitoring, and under ideal circumstances, 1-, 2-, and 3-mm vector planning margins require a respective imaging frequency of every 15, 60, and 240 seconds to account for intrafraction prostate motion while achieving
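
    The simulation protocol described in the Methods can be approximated in a few lines of code: corrections are applied at a chosen imaging interval by zeroing the displacement observed at each imaging time for the remainder of the trace, and the fraction of time within a margin is then tallied. The displacement trace below is synthetic; only the correction-and-coverage logic mirrors the description above.

```python
# A simplified re-implementation of the imaging-frequency simulation: correct the
# displacement trace at each imaging time and report how often the residual stays
# within a margin. The trace below is synthetic; only the logic follows the text.
import numpy as np

def coverage_with_imaging(displacement, fs_hz, interval_s, margin_mm):
    """displacement: (N, 3) prostate displacement in mm sampled at fs_hz.
    Returns the fraction of samples whose residual vector displacement is
    within margin_mm when repositioning is applied every interval_s seconds."""
    corrected = displacement.copy()
    step = int(round(interval_s * fs_hz))
    for start in range(0, len(corrected), step):
        corrected[start:] -= corrected[start].copy()   # reposition at imaging time
    residual = np.linalg.norm(corrected, axis=1)
    return (residual <= margin_mm).mean()

# Synthetic trace at 10 Hz for 10 minutes: slow drift plus a periodic component.
t = np.arange(0, 600, 0.1)
trace = np.stack([0.005 * t,                        # slow drift
                  0.5 * np.sin(2 * np.pi * t / 4),  # periodic component
                  0.008 * t], axis=1)               # slow drift
for interval in (30, 60, 240):
    print(interval, round(coverage_with_imaging(trace, 10, interval, 1.0), 3))
# Longer imaging intervals leave larger residual motion, hence lower coverage.
```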

  13. Design and Voluntary Motion Intention Estimation of a Novel Wearable Full-Body Flexible Exoskeleton Robot

    Directory of Open Access Journals (Sweden)

    Chunjie Chen

    2017-01-01

    Full Text Available The wearable full-body exoskeleton robot developed in this study is one application of a mobile cyberphysical system (CPS), which is a complex mobile system integrating mechanics, electronics, computer science, and artificial intelligence. Steel wire was used as the flexible transmission medium, and a group of special wire-locking structures was designed. Additionally, we designed passive joints for some of the exoskeleton's joints. Finally, we proposed a novel gait phase recognition method for full-body exoskeletons using only joint angle sensors, plantar pressure sensors, and inclination sensors. The method consists of four procedures. Firstly, we classified the three types of main motion patterns: normal walking on the ground, stair-climbing and stair-descending, and sit-to-stand movement. Secondly, we segmented the experimental data into individual gait cycles. Thirdly, we divided each gait cycle into eight gait phases. Finally, we built a gait phase recognition model based on k-Nearest Neighbor perception and trained it with the phase-labeled gait data. The experimental results show that the model has a 98.52% average correct classification rate for the main motion patterns on the testing set and a 95.32% average correct rate of phase recognition on the testing set. Thus, the exoskeleton robot can recognize human motion intention in real time and coordinate its movement with the wearer.
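
    The final classification step lends itself to a compact sketch: a k-nearest-neighbour model trained on phase-labelled sensor frames and queried on each incoming frame. The feature dimensionality, the use of scikit-learn, and the synthetic data are assumptions for illustration; the paper's segmentation into eight phases is not reproduced.

```python
# A k-nearest-neighbour gait-phase classifier trained on phase-labelled frames.
# Feature dimensionality, class separation, k and the use of scikit-learn are
# assumptions for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic training set: 800 ten-dimensional sensor frames (joint angles,
# plantar pressures, inclination), 100 per gait phase, labelled 0..7.
X_train = rng.normal(size=(800, 10)) + 3.0 * np.repeat(np.arange(8), 100)[:, None]
y_train = np.repeat(np.arange(8), 100)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# At run time each incoming frame is mapped to a phase so the controller can
# coordinate the exoskeleton's motion with the wearer.
frame = 3.0 * 3 + rng.normal(size=(1, 10))
print(clf.predict(frame))   # [3]: the frame lies near the phase-3 cluster
```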

  14. Adaptive Kalman filtering for real-time mapping of the visual field

    Science.gov (United States)

    Ward, B. Douglas; Janik, John; Mazaheri, Yousef; Ma, Yan; DeYoe, Edgar A.

    2013-01-01

    This paper demonstrates the feasibility of real-time mapping of the visual field for clinical applications. Specifically, three aspects of this problem were considered: (1) experimental design, (2) statistical analysis, and (3) display of results. Proper experimental design is essential to achieving a successful outcome, particularly for real-time applications. A random-block experimental design was shown to have less sensitivity to measurement noise, as well as greater robustness to error in modeling of the hemodynamic impulse response function (IRF) and greater flexibility than common alternatives. In addition, random encoding of the visual field allows for the detection of voxels that are responsive to multiple, not necessarily contiguous, regions of the visual field. Due to its recursive nature, the Kalman filter is ideally suited for real-time statistical analysis of visual field mapping data. An important feature of the Kalman filter is that it can be used for nonstationary time series analysis. The capability of the Kalman filter to adapt, in real time, to abrupt changes in the baseline arising from subject motion inside the scanner and other external system disturbances is important for the success of clinical applications. The clinician needs real-time information to evaluate the success or failure of the imaging run and to decide whether to extend, modify, or terminate the run. Accordingly, the analytical software provides real-time displays of (1) brain activation maps for each stimulus segment, (2) voxel-wise spatial tuning profiles, (3) time plots of the variability of response parameters, and (4) time plots of activated volume. PMID:22100663
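
    A generic linear Kalman-filter recursion of the kind relied on above is sketched below for a single voxel, tracking a slowly drifting baseline plus a block-design response amplitude. The state layout, noise levels, and regressor are illustrative assumptions rather than the authors' model.

```python
# A generic linear Kalman-filter recursion tracking, for one voxel, a drifting
# baseline plus a block-design response amplitude. All design values are
# illustrative assumptions.
import numpy as np

def kalman_step(x, P, z, H, F, Q, R):
    """One predict/update cycle for a scalar measurement z with row vector H."""
    x = F @ x                                   # predict state
    P = F @ P @ F.T + Q                         # predict covariance
    S = H @ P @ H + R                           # innovation variance (scalar)
    K = P @ H / S                               # Kalman gain
    x = x + K * (z - H @ x)                     # update state
    P = (np.eye(len(x)) - np.outer(K, H)) @ P   # update covariance
    return x, P

F = np.eye(2)                     # state: [baseline, response amplitude]
Q = np.diag([1e-3, 1e-4])         # random-walk process noise (baseline may drift)
R = 0.5                           # measurement noise variance
x, P = np.zeros(2), np.eye(2)

rng = np.random.default_rng(0)
for t in range(200):
    stim = 1.0 if (t // 20) % 2 else 0.0          # block-design regressor
    z = 100.0 + 0.02 * t + 2.0 * stim + rng.normal(scale=0.7)
    x, P = kalman_step(x, P, z, np.array([1.0, stim]), F, Q, R)

print(np.round(x, 1))   # roughly [drifted baseline, response amplitude near 2]
```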

  15. Activity Recognition Invariant to Sensor Orientation with Wearable Motion Sensors.

    Science.gov (United States)

    Yurtman, Aras; Barshan, Billur

    2017-08-09

    Most activity recognition studies that employ wearable sensors assume that the sensors are attached at pre-determined positions and orientations that do not change over time. Since this is not the case in practice, it is of interest to develop wearable systems that operate invariantly to sensor position and orientation. We focus on invariance to sensor orientation and develop two alternative transformations to remove the effect of absolute sensor orientation from the raw sensor data. We test the proposed methodology in activity recognition with four state-of-the-art classifiers using five publicly available datasets containing various types of human activities acquired by different sensor configurations. While the ordinary activity recognition system cannot handle incorrectly oriented sensors, the proposed transformations allow the sensors to be worn at any orientation at a given position on the body, and achieve nearly the same activity recognition performance as the ordinary system for which the sensor units are not rotatable. The proposed techniques can be applied to existing wearable systems without much effort, by simply transforming the time-domain sensor data at the pre-processing stage.
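
    As a hedged illustration of orientation removal (not necessarily either of the paper's two transformations), the sketch below rotates each accelerometer window so that its mean, gravity-dominated acceleration aligns with a fixed axis; the gravity-axis component of the result is then independent of how the unit was tilted at its body position.

```python
# Rotate each accelerometer window so its mean (gravity-dominated) acceleration
# aligns with +z. This is one common orientation-removal transformation, shown
# only to fix ideas.
import numpy as np

def rotation_to_z(g):
    """Rotation matrix mapping unit(g) onto the +z axis (Rodrigues' formula)."""
    g = g / np.linalg.norm(g)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(g, z)
    c = float(g @ z)
    if np.allclose(v, 0.0):                    # already (anti-)aligned with z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

def gravity_align(window):
    """window: (T, 3) raw accelerations -> same window in a gravity-aligned frame."""
    R = rotation_to_z(window.mean(axis=0))
    return window @ R.T

# The gravity-axis component of the transformed signal is the same whether the
# unit was worn upright or re-oriented arbitrarily at the same body position.
rng = np.random.default_rng(0)
upright = np.array([0.0, 0.0, 9.81]) + 0.3 * rng.standard_normal((100, 3))
reoriented = upright @ rotation_to_z(np.array([1.0, 1.0, 1.0]))  # same movement, tilted unit
print(np.allclose(gravity_align(upright)[:, 2], gravity_align(reoriented)[:, 2]))  # True
```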

  16. Design and Implementation of Real-Time Vehicular Camera for Driver Assistance and Traffic Congestion Estimation.

    Science.gov (United States)

    Son, Sanghyun; Baek, Yunju

    2015-08-18

    As society has developed, the number of vehicles has increased and road conditions have become complicated, increasing the risk of crashes. Therefore, a service that provides safe vehicle control and various types of information to the driver is urgently needed. In this study, we designed and implemented a real-time traffic information system and a smart camera device for smart driver assistance systems. We selected a commercial device for the smart driver assistance systems, and applied a computer vision algorithm to perform image recognition. For application to the dynamic region of interest, dynamic frame skip methods were implemented to perform parallel processing in order to enable real-time operation. In addition, we designed and implemented a model to estimate congestion by analyzing traffic information. The performance of the proposed method was evaluated using images of a real road environment. We found that the processing time improved by 15.4 times when all the proposed methods were applied in the application. Further, we found experimentally that there was little or no change in the recognition accuracy when the proposed method was applied. Using the traffic congestion estimation model, we also found that the average error rate of the proposed model was 5.3%.
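
    One plausible form of the dynamic frame-skip idea is sketched below: when processing a frame takes longer than the camera's frame period, just enough subsequent frames are dropped to keep the pipeline real time. The timing model and skip rule are assumptions, not the paper's implementation.

```python
# One plausible dynamic frame-skip policy: after a slow frame, drop just enough
# of the following frames to keep up with the camera rate.
import time

def run_pipeline(frames, process, frame_period_s=1.0 / 30.0):
    """Process a frame stream, skipping frames whenever processing falls behind."""
    results, skip = [], 0
    for frame in frames:
        if skip > 0:                          # drop this frame to catch up
            skip -= 1
            continue
        start = time.perf_counter()
        results.append(process(frame))
        elapsed = time.perf_counter() - start
        skip = int(elapsed // frame_period_s) # frames that arrived while busy
    return results

def slow_recognizer(frame):
    time.sleep(0.07)        # pretend recognition takes about two frame periods
    return frame

print(len(run_pipeline(range(30), slow_recognizer)))   # roughly a third are processed
```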

  17. Design and Implementation of Real-Time Vehicular Camera for Driver Assistance and Traffic Congestion Estimation

    Directory of Open Access Journals (Sweden)

    Sanghyun Son

    2015-08-01

    Full Text Available As society has developed, the number of vehicles has increased and road conditions have become complicated, increasing the risk of crashes. Therefore, a service that provides safe vehicle control and various types of information to the driver is urgently needed. In this study, we designed and implemented a real-time traffic information system and a smart camera device for smart driver assistance systems. We selected a commercial device for the smart driver assistance systems, and applied a computer vision algorithm to perform image recognition. For application to the dynamic region of interest, dynamic frame skip methods were implemented to perform parallel processing in order to enable real-time operation. In addition, we designed and implemented a model to estimate congestion by analyzing traffic information. The performance of the proposed method was evaluated using images of a real road environment. We found that the processing time improved by 15.4 times when all the proposed methods were applied in the application. Further, we found experimentally that there was little or no change in the recognition accuracy when the proposed method was applied. Using the traffic congestion estimation model, we also found that the average error rate of the proposed model was 5.3%.

  18. Face customization in a real-time digiTV stream

    Science.gov (United States)

    Lugmayr, Artur R.; Creutzburg, Reiner; Kalli, Seppo; Tsoumanis, Andreas

    2002-03-01

    The challenge in digital, interactive TV (digiTV) is to move the consumer from the refiguration state to the configuration state, where he can influence the story flow, the choice of characters, and other narrative elements. Besides restructuring narrative and interactivity methodologies, one major task is content manipulation that gives the audience the ability to predefine the actors it wants to have in its virtual story universe. Current solutions in broadcast video provide content as a monolithic structure, composed of graphics, narration, special effects, etc., compressed into one high-bit-rate MPEG-2 stream. More personalized and interactive TV requires a contemporary approach to segmenting video data in real time to customize content. Our research work emphasizes techniques for exchanging faces/bodies with virtual anchors in real-time-constrained broadcast video streams. The aim of our paper is to show and point out solutions for realizing real-time face and avatar customization. The major task for the broadcaster is metadata extraction, by applying face detection/tracking/recognition algorithms, and transmission of this information to the client side. At the client side, our system provides the facility to pre-select virtual avatars stored in a local database and to synchronize their movements and expressions with the current digiTV content.

  19. Compact holographic optical neural network system for real-time pattern recognition

    Science.gov (United States)

    Lu, Taiwei; Mintzer, David T.; Kostrzewski, Andrew A.; Lin, Freddie S.

    1996-08-01

    One of the important characteristics of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced in the commercial market. The number of parallel channels they can handle is limited because of the limited parallel interconnections that can be implemented with 1D electronic wires. High-resolution pattern recognition problems can require a large number of neurons for parallel processing of an image. This paper describes a holographic optical neural network (HONN) that is based on high- resolution volume holographic materials and is capable of performing massive 3D parallel interconnection of tens of thousands of neurons. A HONN with more than 16,000 neurons packaged in an attache case has been developed. Rotation- shift-scale-invariant pattern recognition operations have been demonstrated with this system. System parameters such as the signal-to-noise ratio, dynamic range, and processing speed are discussed.

  20. Time-dependent reliability sensitivity analysis of motion mechanisms

    International Nuclear Information System (INIS)

    Wei, Pengfei; Song, Jingwen; Lu, Zhenzhou; Yue, Zhufeng

    2016-01-01

    Reliability sensitivity analysis aims at identifying the source of structure/mechanism failure and quantifying the effects of each random source, or of their distribution parameters, on the failure probability or reliability. In this paper, time-dependent parametric reliability sensitivity (PRS) analysis as well as global reliability sensitivity (GRS) analysis is introduced for motion mechanisms. The PRS indices are defined as the partial derivatives of the time-dependent reliability w.r.t. the distribution parameters of each random input variable, and they quantify the effect of a small change in each distribution parameter on the time-dependent reliability. The GRS indices are defined for quantifying the individual, interaction, and total contributions of the uncertainty in each random input variable to the time-dependent reliability. The envelope function method combined with a first-order approximation of the motion error function is introduced for efficiently estimating the time-dependent PRS and GRS indices. Both the time-dependent PRS and GRS analysis techniques can be especially useful for reliability-based design. The significance of the proposed methods, as well as the effectiveness of the envelope function method for estimating the time-dependent PRS and GRS indices, is demonstrated with a four-bar mechanism and a car rack-and-pinion steering linkage. - Highlights: • Time-dependent parametric reliability sensitivity analysis is presented. • Time-dependent global reliability sensitivity analysis is presented for mechanisms. • The proposed method is especially useful for enhancing the kinematic reliability. • An envelope method is introduced for efficiently implementing the proposed methods. • The proposed method is demonstrated by two real planar mechanisms.
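
    For readers unfamiliar with these indices, a compact notational sketch (generic LaTeX, amsmath assumed; the symbols are chosen here rather than taken from the paper) is:

```latex
% Generic notation: R(t) is the time-dependent reliability, \theta_i^{(k)} the
% k-th distribution parameter of random input X_i, and 1_F(t) the indicator that
% the motion error exceeds its threshold somewhere on [t_0, t].
\[
  \mathrm{PRS}_i^{(k)}(t) \;=\; \frac{\partial R(t)}{\partial \theta_i^{(k)}},
  \qquad
  R(t) \;=\; 1 - \Pr\{\, 1_F(t) = 1 \,\}.
\]
\[
  S_i(t) \;=\;
  \frac{\mathrm{Var}_{X_i}\!\left( \mathrm{E}\!\left[\, 1_F(t) \mid X_i \,\right] \right)}
       {\mathrm{Var}\!\left( 1_F(t) \right)}
\]
```

    The derivative quantifies the effect of a small change in one distribution parameter on R(t); the variance ratio is one common first-order form of a global reliability sensitivity index and is shown here only to fix ideas, not as the paper's exact definition.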

  1. First Demonstration of Combined kV/MV Image-Guided Real-Time Dynamic Multileaf-Collimator Target Tracking

    International Nuclear Information System (INIS)

    Cho, Byungchul; Poulsen, Per R.; Sloutsky, Alex; Sawant, Amit; Keall, Paul J.

    2009-01-01

    Purpose: For intrafraction motion management, a real-time tracking system was developed by combining fiducial marker-based tracking via simultaneous kilovoltage (kV) and megavoltage (MV) imaging and a dynamic multileaf collimator (DMLC) beam-tracking system. Methods and Materials: The integrated tracking system employed a Varian Trilogy system equipped with kV/MV imaging systems and a Millennium 120-leaf MLC. A gold marker in elliptical motion (2-cm superior-inferior, 1-cm left-right, 10 cycles/min) was simultaneously imaged by the kV and MV imagers at 6.7 Hz and segmented in real time. With these two-dimensional projections, the tracking software triangulated the three-dimensional marker position and repositioned the MLC leaves to follow the motion. Phantom studies were performed to evaluate time delay from image acquisition to MLC adjustment, tracking error, and dosimetric impact of target motion with and without tracking. Results: The time delay of the integrated tracking system was ∼450 ms. The tracking error using a prediction algorithm was 0.9 ± 0.5 mm for the elliptical motion. The dose distribution with tracking showed better target coverage and less dose to surrounding region over no tracking. The failure rate of the gamma test (3%/3-mm criteria) was 22.5% without tracking but was reduced to 0.2% with tracking. Conclusion: For the first time, a complete tracking system combining kV/MV image-guided target tracking and DMLC beam tracking was demonstrated. The average geometric error was less than 1 mm, and the dosimetric error was negligible. This system is a promising method for intrafraction motion management.
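
    The triangulation step can be illustrated with simple ray geometry: each 2D detection defines a source-to-detection ray, and the 3D marker position is taken near the closest approach of the kV and MV rays. The geometry below is simplified and illustrative; the residual gap between the rays also serves as a natural consistency check on the two segmentations.

```python
# Closest-approach triangulation of a marker from two back-projected rays.
# The source positions and marker location are purely illustrative.
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Rays r_i(t) = p_i + t * d_i. Returns (midpoint of the shortest segment
    between the rays, length of that segment)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                 # ~0 only for (nearly) parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1, q2 = p1 + t1 * d1, p2 + t2 * d2   # closest points on each ray
    return (q1 + q2) / 2.0, float(np.linalg.norm(q1 - q2))

# Two roughly orthogonal sources, both seeing a marker near (0, 20, -5) mm.
marker = np.array([0.0, 20.0, -5.0])
src_kv, src_mv = np.array([1000.0, 0.0, 0.0]), np.array([0.0, -1000.0, 0.0])
pos, gap = triangulate(src_kv, marker - src_kv, src_mv, marker - src_mv)
print(np.round(pos, 2), round(gap, 3))   # ~[0. 20. -5.] and a gap of ~0 mm
```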

  2. TU-EF-210-03: Real-Time Ablation Monitoring and Lesion Quantification Using Harmonic Motion Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Konofagou, E. [Columbia University (United States)

    2015-06-15

    The use of therapeutic ultrasound to provide targeted therapy is an active research area that has a broad application scope. The invited talks in this session will address currently implemented strategies and protocols for both hyperthermia and ablation applications using therapeutic ultrasound. The role of both ultrasound and MRI in the monitoring and assessment of these therapies will be explored in both pre-clinical and clinical applications. Katherine Ferrara: High Intensity Focused Ultrasound, Drug Delivery, and Immunotherapy Rajiv Chopra: Translating Localized Doxorubicin Delivery to Pediatric Oncology using MRI-guided HIFU Elisa Konofagou: Real-time Ablation Monitoring and Lesion Quantification using Harmonic Motion Imaging Keyvan Farahani: AAPM Task Groups in Interventional Ultrasound Imaging and Therapy Learning Objectives: Understand the role of ultrasound in localized drug delivery and the effects of immunotherapy when used in conjunction with ultrasound therapy. Understand potential targeted drug delivery clinical applications including pediatric oncology. Understand the technical requirements for performing targeted drug delivery. Understand how radiation-force approaches can be used to both monitor and assess high intensity focused ultrasound ablation therapy. Understand the role of AAPM task groups in ultrasound imaging and therapies. Chopra: Funding from Cancer Prevention and Research Initiative of Texas (CPRIT), Award R1308 Evelyn and M.R. Hudson Foundation; Research Support from Research Contract with Philips Healthcare; COI are Co-founder of FUS Instruments Inc Ferrara: Supported by NIH, UCDavis and California (CIRM and BHCE) Farahani: In-kind research support from Philips Healthcare.

  3. TU-EF-210-03: Real-Time Ablation Monitoring and Lesion Quantification Using Harmonic Motion Imaging

    International Nuclear Information System (INIS)

    Konofagou, E.

    2015-01-01

    The use of therapeutic ultrasound to provide targeted therapy is an active research area that has a broad application scope. The invited talks in this session will address currently implemented strategies and protocols for both hyperthermia and ablation applications using therapeutic ultrasound. The role of both ultrasound and MRI in the monitoring and assessment of these therapies will be explored in both pre-clinical and clinical applications. Katherine Ferrara: High Intensity Focused Ultrasound, Drug Delivery, and Immunotherapy Rajiv Chopra: Translating Localized Doxorubicin Delivery to Pediatric Oncology using MRI-guided HIFU Elisa Konofagou: Real-time Ablation Monitoring and Lesion Quantification using Harmonic Motion Imaging Keyvan Farahani: AAPM Task Groups in Interventional Ultrasound Imaging and Therapy Learning Objectives: Understand the role of ultrasound in localized drug delivery and the effects of immunotherapy when used in conjunction with ultrasound therapy. Understand potential targeted drug delivery clinical applications including pediatric oncology. Understand the technical requirements for performing targeted drug delivery. Understand how radiation-force approaches can be used to both monitor and assess high intensity focused ultrasound ablation therapy. Understand the role of AAPM task groups in ultrasound imaging and therapies. Chopra: Funding from Cancer Prevention and Research Initiative of Texas (CPRIT), Award R1308 Evelyn and M.R. Hudson Foundation; Research Support from Research Contract with Philips Healthcare; COI are Co-founder of FUS Instruments Inc Ferrara: Supported by NIH, UCDavis and California (CIRM and BHCE) Farahani: In-kind research support from Philips Healthcare

  4. Tracking errors in a prototype real-time tumour tracking system

    International Nuclear Information System (INIS)

    Sharp, Gregory C; Jiang, Steve B; Shimizu, Shinichi; Shirato, Hiroki

    2004-01-01

    In motion-compensated radiation therapy, radio-opaque markers can be implanted in or near a tumour and tracked in real-time using fluoroscopic imaging. Tracking these implanted markers gives highly accurate position information, except when tracking fails due to poor or ambiguous imaging conditions. This study investigates methods for automatic detection of tracking errors, and assesses the frequency and impact of tracking errors on treatments using the prototype real-time tumour tracking system. We investigated four indicators for automatic detection of tracking errors, and found that the distance between corresponding rays was most effective. We also found that tracking errors cause a loss of gating efficiency of between 7.6 and 10.2%. The incidence of treatment beam delivery during tracking errors was estimated at between 0.8% and 1.25%

  5. A novel dataset for real-life evaluation of facial expression recognition methodologies

    NARCIS (Netherlands)

    Siddiqi, Muhammad Hameed; Ali, Maqbool; Idris, Muhammad; Banos Legran, Oresti; Lee, Sungyoung; Choo, Hyunseung

    2016-01-01

    One limitation seen among most of the previous methods is that they were evaluated under settings that are far from real-life scenarios. The reason is that the existing facial expression recognition (FER) datasets are mostly pose-based and assume a predefined setup. The expressions in these datasets

  6. Neutron beam applications - A development of real-time imaging processing for neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Whoi Yul; Lee, Sang Yup; Choi, Min Seok; Hwang, Sun Kyu; Han, Il Ho; Jang, Jae Young [Hanyang University, Seoul (Korea)

    1999-08-01

    This research is sponsored and supported by KAERI as a part of 'Application of Neutron Radiography Beam'. The main theme of the research is to develop a non-destructive inspection system for studying the real-time behaviour of dynamic motion using a neutron beam, with the aid of a special-purpose real-time image processing system that allows an image of the internal structure of a specimen to be captured. Currently, most off-the-shelf image processing programs designed for visible light or X-rays are not adequate for applications that require the neutron beam generated by the experimental nuclear reactor. In addition, the study of the dynamic motion of a specimen is severely constrained by such image processing systems. In this research, a special image processing system suited to such applications was developed, which not only supplements commercial image processing systems but also allows the neutron beam to be used directly in the system for such studies. 18 refs., 21 figs., 1 tab. (Author)

  7. Highly-Accelerated Real-Time Cardiac Cine MRI Using k-t SPARSE-SENSE

    Science.gov (United States)

    Feng, Li; Srichai, Monvadi B.; Lim, Ruth P.; Harrison, Alexis; King, Wilson; Adluru, Ganesh; Dibella, Edward VR.; Sodickson, Daniel K.; Otazo, Ricardo; Kim, Daniel

    2012-01-01

    For patients with impaired breath-hold capacity and/or arrhythmias, real-time cine MRI may be more clinically useful than breath-hold cine MRI. However, commercially available real-time cine MRI methods using parallel imaging typically yield relatively poor spatio-temporal resolution due to their low image acquisition speed. We sought to achieve relatively high spatial resolution (~2.5 mm × 2.5 mm) and temporal resolution (~40 ms), to produce high-quality real-time cine MR images that could be applied clinically for wall motion assessment and measurement of left ventricular (LV) function. In this work, we present an 8-fold accelerated real-time cardiac cine MRI pulse sequence using a combination of compressed sensing and parallel imaging (k-t SPARSE-SENSE). Compared with reference, breath-hold cine MRI, our 8-fold accelerated real-time cine MRI produced significantly worse qualitative grades (1–5 scale), but its image quality and temporal fidelity scores were above 3.0 (adequate) and artifacts and noise scores were below 3.0 (moderate), suggesting that acceptable diagnostic image quality can be achieved. Additionally, both 8-fold accelerated real-time cine and breath-hold cine MRI yielded comparable LV function measurements. Real-time cine MRI with k-t SPARSE-SENSE is a promising modality for rapid imaging of myocardial function. PMID:22887290

  8. Highly accelerated real-time cardiac cine MRI using k-t SPARSE-SENSE.

    Science.gov (United States)

    Feng, Li; Srichai, Monvadi B; Lim, Ruth P; Harrison, Alexis; King, Wilson; Adluru, Ganesh; Dibella, Edward V R; Sodickson, Daniel K; Otazo, Ricardo; Kim, Daniel

    2013-07-01

    For patients with impaired breath-hold capacity and/or arrhythmias, real-time cine MRI may be more clinically useful than breath-hold cine MRI. However, commercially available real-time cine MRI methods using parallel imaging typically yield relatively poor spatio-temporal resolution due to their low image acquisition speed. We sought to achieve relatively high spatial resolution (∼2.5 × 2.5 mm(2)) and temporal resolution (∼40 ms), to produce high-quality real-time cine MR images that could be applied clinically for wall motion assessment and measurement of left ventricular function. In this work, we present an eightfold accelerated real-time cardiac cine MRI pulse sequence using a combination of compressed sensing and parallel imaging (k-t SPARSE-SENSE). Compared with reference, breath-hold cine MRI, our eightfold accelerated real-time cine MRI produced significantly worse qualitative grades (1-5 scale), but its image quality and temporal fidelity scores were above 3.0 (adequate) and artifacts and noise scores were below 3.0 (moderate), suggesting that acceptable diagnostic image quality can be achieved. Additionally, both eightfold accelerated real-time cine and breath-hold cine MRI yielded comparable left ventricular function measurements. Real-time cine MRI with k-t SPARSE-SENSE is a promising modality for rapid imaging of myocardial function. Copyright © 2012 Wiley Periodicals, Inc.

  9. Handling Real-World Context Awareness, Uncertainty and Vagueness in Real-Time Human Activity Tracking and Recognition with a Fuzzy Ontology-Based Hybrid Method

    Science.gov (United States)

    Díaz-Rodríguez, Natalia; Cadahía, Olmo León; Cuéllar, Manuel Pegalajar; Lilius, Johan; Calvo-Flores, Miguel Delgado

    2014-01-01

    Human activity recognition is a key task in ambient intelligence applications to achieve proper ambient assisted living. There has been remarkable progress in this domain, but some challenges still remain to obtain robust methods. Our goal in this work is to provide a system that allows the modeling and recognition of a set of complex activities in real life scenarios involving interaction with the environment. The proposed framework is a hybrid model that comprises two main modules: a low level sub-activity recognizer, based on data-driven methods, and a high-level activity recognizer, implemented with a fuzzy ontology to include the semantic interpretation of actions performed by users. The fuzzy ontology is fed by the sub-activities recognized by the low level data-driven component and provides fuzzy ontological reasoning to recognize both the activities and their influence in the environment with semantics. An additional benefit of the approach is the ability to handle vagueness and uncertainty in the knowledge-based module, which substantially outperforms the treatment of incomplete and/or imprecise data with respect to classic crisp ontologies. We validate these advantages with the public CAD-120 dataset (Cornell Activity Dataset), achieving an accuracy of 90.1% and 91.07% for low-level and high-level activities, respectively. This entails an improvement over fully data-driven or ontology-based approaches. PMID:25268914

  10. Handling Real-World Context Awareness, Uncertainty and Vagueness in Real-Time Human Activity Tracking and Recognition with a Fuzzy Ontology-Based Hybrid Method

    Directory of Open Access Journals (Sweden)

    Natalia Díaz-Rodríguez

    2014-09-01

    Full Text Available Human activity recognition is a key task in ambient intelligence applications to achieve proper ambient assisted living. There has been remarkable progress in this domain, but some challenges still remain to obtain robust methods. Our goal in this work is to provide a system that allows the modeling and recognition of a set of complex activities in real life scenarios involving interaction with the environment. The proposed framework is a hybrid model that comprises two main modules: a low level sub-activity recognizer, based on data-driven methods, and a high-level activity recognizer, implemented with a fuzzy ontology to include the semantic interpretation of actions performed by users. The fuzzy ontology is fed by the sub-activities recognized by the low level data-driven component and provides fuzzy ontological reasoning to recognize both the activities and their influence in the environment with semantics. An additional benefit of the approach is the ability to handle vagueness and uncertainty in the knowledge-based module, which substantially outperforms the treatment of incomplete and/or imprecise data with respect to classic crisp ontologies. We validate these advantages with the public CAD-120 dataset (Cornell Activity Dataset), achieving an accuracy of 90.1% and 91.07% for low-level and high-level activities, respectively. This entails an improvement over fully data-driven or ontology-based approaches.

  11. Real-time shadows

    CERN Document Server

    Eisemann, Elmar; Assarsson, Ulf; Wimmer, Michael

    2011-01-01

    Important elements of games, movies, and other computer-generated content, shadows are crucial for enhancing realism and providing important visual cues. In recent years, there have been notable improvements in visual quality and speed, making high-quality realistic real-time shadows a reachable goal. Real-Time Shadows is a comprehensive guide to the theory and practice of real-time shadow techniques. It covers a large variety of different effects, including hard, soft, volumetric, and semi-transparent shadows.The book explains the basics as well as many advanced aspects related to the domain

  12. Dependable Real-Time Systems

    Science.gov (United States)

    1991-09-30

    PI e-mail addresses: krithi@nirvan.cs.umass.edu, stankovic@cs.umass.edu. Grant or Contract Title: Dependable Real-Time Systems. Grant or Contract Number: N00014-85-k-0398. Reporting Period: 1 Oct 87 - 30 Sep 91. Summary of Accomplishments: (1) ... developing a sound approach to scheduling tasks in complex real-time systems, (2) developed a real-time operating system kernel, a preliminary

  13. Real-time digital angiocardiography using a temporal high-pass filter

    International Nuclear Information System (INIS)

    Hardin, C.W.; Kruger, R.A.; Anderson, F.L.; Bray, B.F.; Nelson, J.A.

    1984-01-01

    A temporal high-pass filtration technique for digital subtraction angiocardiography was studied, using real-time digital studies performed simultaneously with routine cineangiocardiography (cine) for qualitative image comparison. The digital studies showed increased contrast and suppression of background anatomy and also enhanced detection of wall motion abnormalities when compared with cine. The digital images are comparable with, and in some cases better than, cine images. Clinical efficacy of this digital technique is currently being evaluated
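
    The core of temporal high-pass filtration can be sketched in a few lines: subtract a running temporal average (a low-pass background estimate) from each frame so that stationary anatomy cancels while contrast dynamics remain. The window length and synthetic sequence below are illustrative assumptions.

```python
# Temporal high-pass filtration: subtract a running temporal average (low-pass
# background estimate) from each frame so stationary anatomy cancels.
import numpy as np

def temporal_highpass(frames: np.ndarray, window: int = 8) -> np.ndarray:
    """frames: (T, rows, cols) image sequence.
    Returns each frame minus a causal moving average over up to `window`
    preceding frames (plus the current one)."""
    out = np.zeros_like(frames, dtype=float)
    for t in range(frames.shape[0]):
        lo = max(0, t - window)
        background = frames[lo:t + 1].mean(axis=0)   # temporal low-pass
        out[t] = frames[t] - background              # high-pass residue
    return out

# Static anatomy (constant) plus a transient contrast bolus in a small region.
seq = np.full((20, 64, 64), 100.0)
seq[10:14, 30:34, 30:34] += 50.0                     # bolus appears at frame 10
hp = temporal_highpass(seq)
print(round(hp[10, 31, 31], 1), round(hp[5, 31, 31], 1))  # bolus enhanced, background ~0
```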

  14. Real Time Adaptive Stream-oriented Geo-data Filtering

    Directory of Open Access Journals (Sweden)

    A. A. Golovkov

    2016-01-01

    Full Text Available Cutting-edge engineering maintenance software systems for various objects are aimed at processing geo-location data coming from employees' mobile devices in real time. To reduce the amount of transmitted data, such systems usually apply various filtering methods to the geo-coordinates recorded directly on the mobile devices. The paper identifies the reasons for errors in geo-data coming from different sources and proposes an adaptive dynamic method for filtering geo-location data. Compared with the static method previously described in the literature [1], the approach adaptively aligns the filtering threshold with the changing characteristics of coordinates from many sources of geo-location data. To evaluate the efficiency of the developed filtering method, about 400 thousand points were used, representing motion paths of different types (on foot, by car, and by high-speed train) and parking (indoors, outdoors, near high-rise buildings), with data taken from different mobile devices. Analysis of the results has shown that the benefits of the proposed method are more precise location during long parking (up to 6 hours) and while the user is in motion, and the capability to provide stream-oriented filtering of data from different sources, which allows the approach to be used in geo-information systems that provide continuous location monitoring with stream-oriented data processing in real time. The disadvantage is a somewhat higher computational complexity and an increased number of points in the final track compared with other filtering techniques. In general, the developed approach enables a significant improvement in the quality of the displayed paths of moving mobile objects.
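
    A hedged sketch of such adaptive, stream-oriented filtering is given below: each incoming point is forwarded only if it moved farther from the last accepted point than a threshold that adapts to the reported accuracy and a smoothed speed estimate. The adaptation rule, constants, and class name are illustrative assumptions, not the method of [1].

```python
# Adaptive filtering of incoming geo-points: forward a point only if it moved
# farther from the last accepted point than an adaptive threshold.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class AdaptiveGeoFilter:
    def __init__(self, base_threshold_m=10.0):
        self.base = base_threshold_m
        self.last = None          # last accepted (lat, lon)
        self.speed = 0.0          # exponentially smoothed speed estimate, m/s

    def accept(self, lat, lon, accuracy_m, dt_s):
        if self.last is None:
            self.last = (lat, lon)
            return True
        d = haversine_m(*self.last, lat, lon)
        self.speed = 0.8 * self.speed + 0.2 * d / max(dt_s, 1e-3)
        # Poor accuracy raises the threshold; fast movement lowers it.
        threshold = max(self.base, accuracy_m) / (1.0 + self.speed / 5.0)
        if d >= threshold:
            self.last = (lat, lon)
            return True
        return False

f = AdaptiveGeoFilter()
points = [(55.7512, 37.6184, 5), (55.75121, 37.61841, 30), (55.7520, 37.6190, 5)]
print([f.accept(lat, lon, acc, dt_s=5.0) for lat, lon, acc in points])
# [True, False, True]: the jittery low-accuracy fix is dropped, real motion kept.
```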

  15. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    Directory of Open Access Journals (Sweden)

    Li Yao

    2016-01-01

    Full Text Available Both static features and motion features have shown promising performance in human activity recognition tasks. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting the relational information between static features and motion features for human activity recognition. The videos are represented by a classical Bag-of-Words (BoW) model, which has proven useful in many works. To get a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between the words of the different feature sets. We then use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on several datasets and obtain very promising results.

  16. Concepts of real time and semi-real time material control

    International Nuclear Information System (INIS)

    Lovett, J.E.

    1975-01-01

    After a brief consideration of the traditional material balance accounting on an MBA basis, this paper explores the basic concepts of real time and semi-real time material control, together with some of the major problems to be solved. Three types of short-term material control are discussed: storage, batch processing, and continuous processing. (DLC)

  17. Real Time Systems

    DEFF Research Database (Denmark)

    Christensen, Knud Smed

    2000-01-01

    Describes the fundamentals of parallel programming and a kernel for it, along with methods for modelling and checking parallel problems and real-time problems.

  18. Real time expert systems

    International Nuclear Information System (INIS)

    Asami, Tohru; Hashimoto, Kazuo; Yamamoto, Seiichi

    1992-01-01

    Recently, aiming at applications such as plant control for nuclear reactors and traffic and communication control, research on, and practical use of, expert systems suited to real-time processing have become prominent. This report presents the functional requirements for controlling an object that changes dynamically within a limited time, and explains, with actual examples and from a theoretical viewpoint, the technical differences between real-time expert systems developed to satisfy these requirements and conventional expert systems. Conventional expert systems have their technical basis in the problem-solving machinery originating in STRIPS. Real-time expert systems are applied to fields involving surveillance and control, to which conventional expert systems are difficult to apply. The report explains the requirements for real-time expert systems and gives examples of them, and, as techniques for realizing real-time processing, describes the implementation of interrupt handling, distributed processing, and mechanisms for maintaining the consistency of knowledge. (K.I.)

  19. Evaluation of highly accelerated real-time cardiac cine MRI in tachycardia.

    Science.gov (United States)

    Bassett, Elwin C; Kholmovski, Eugene G; Wilson, Brent D; DiBella, Edward V R; Dosdall, Derek J; Ranjan, Ravi; McGann, Christopher J; Kim, Daniel

    2014-02-01

    Electrocardiogram (ECG)-gated breath-hold cine MRI is considered to be the gold standard test for the assessment of cardiac function. However, it may fail in patients with arrhythmia, impaired breath-hold capacity and poor ECG gating. Although ungated real-time cine MRI may mitigate these problems, commercially available real-time cine MRI pulse sequences using parallel imaging typically yield relatively poor spatiotemporal resolution because of their low image acquisition efficiency. As an extension of our previous work, the purpose of this study was to evaluate the diagnostic quality and accuracy of eight-fold-accelerated real-time cine MRI with compressed sensing (CS) for the quantification of cardiac function in tachycardia, where it is challenging for real-time cine MRI to provide sufficient spatiotemporal resolution. We evaluated the performances of eight-fold-accelerated cine MRI with CS, three-fold-accelerated real-time cine MRI with temporal generalized autocalibrating partially parallel acquisitions (TGRAPPA) and ECG-gated breath-hold cine MRI in 21 large animals with tachycardia (mean heart rate, 104 beats per minute) at 3T. For each cine MRI method, two expert readers evaluated the diagnostic quality in four categories (image quality, temporal fidelity of wall motion, artifacts and apparent noise) using a Likert scale (1-5, worst to best). One reader evaluated the left ventricular functional parameters. The diagnostic quality scores were significantly different between the three cine pulse sequences, except for the artifact level between CS and TGRAPPA real-time cine MRI. Both ECG-gated breath-hold cine MRI and eight-fold accelerated real-time cine MRI yielded all four scores of ≥ 3.0 (acceptable), whereas three-fold-accelerated real-time cine MRI yielded all scores below 3.0, except for artifact (3.0). The left ventricular ejection fraction (LVEF) measurements agreed better between ECG-gated cine MRI and eight-fold-accelerated real-time cine MRI

  20. Real-time dose compensation methods for scanned ion beam therapy of moving tumors

    International Nuclear Information System (INIS)

    Luechtenborg, Robert

    2012-01-01

    Scanned ion beam therapy provides highly tumor-conformal treatments. So far, only tumors showing no considerable motion during therapy have been treated, because tumor motion and dynamic beam delivery interfere, causing dose deteriorations. One proposed technique to mitigate these deteriorations is beam tracking (BT), which adapts the beam position to the moving tumor. Despite the application of BT, dose deviations can occur in the case of non-translational motion. In this work, real-time dose compensation combined with beam tracking (RDBT) has been implemented into the control system to compensate for these dose changes by adapting the nominal particle numbers during irradiation. Compared to BT, significantly reduced dose deviations were measured using RDBT. Treatment planning studies for lung cancer patients including the increased biological effectiveness of ions revealed a significantly reduced overdose level (3/5 patients) as well as significantly improved dose homogeneity (4/5 patients) for RDBT. Based on these findings, real-time dose compensated rescanning (RDRS) has been proposed, which potentially supersedes the technically complex fast energy adaptation necessary for BT and RDBT. Significantly improved conformity compared to rescanning, i.e., averaging of dose deviations by repeated irradiation, was measured in film irradiations. Simulations comparing RDRS to BT revealed reduced under- and overdoses for the former method.
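
    The compensation idea can be caricatured in one dimension: before each raster point is delivered, its particle number is rescaled so that the accumulated dose at its peak voxel is driven back towards the plan, even though earlier spots may have landed on shifted voxels. Every quantity below (kernel, units, geometry) is an illustrative assumption, not the implemented treatment-control logic.

```python
# A one-dimensional caricature of online dose compensation by particle-number
# adaptation. All quantities are illustrative assumptions.
import numpy as np

def compensated_delivery(planned_particles, kernel, voxel_shift):
    """planned_particles: nominal particles per raster point.
    kernel: (n_points, n_voxels) dose per particle for the nominal geometry.
    voxel_shift: per-point shift of the target (in voxels) during delivery.
    Returns the particle numbers actually delivered after compensation."""
    delivered_dose = np.zeros(kernel.shape[1])
    planned_dose = np.zeros(kernel.shape[1])
    delivered = np.zeros(len(planned_particles))
    for i, n_nominal in enumerate(planned_particles):
        planned_dose += n_nominal * kernel[i]
        peak = int(np.argmax(kernel[i]))
        deficit = planned_dose[peak] - delivered_dose[peak]
        delivered[i] = max(deficit / kernel[i, peak], 0.0)
        # The dose actually lands on shifted voxels because the target moved.
        delivered_dose += delivered[i] * np.roll(kernel[i], voxel_shift[i])
    return delivered

# Three raster points, ten voxels, target shifted by one voxel during point 2.
kernel = np.array([np.roll([1.0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0], i) for i in range(3)])
print(compensated_delivery(np.array([100.0, 100.0, 100.0]), kernel, [0, 1, 0]))
# [100. 100.  50.]: point 3 is reduced because the shifted point 2 pre-dosed it.
```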

  1. Perception of biological motion from size-invariant body representations

    Directory of Open Access Journals (Sweden)

    Markus eLappe

    2015-03-01

    Full Text Available The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception, posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.

  2. Real-Time 3D Image Guidance Using a Standard LINAC: Measured Motion, Accuracy, and Precision of the First Prospective Clinical Trial of Kilovoltage Intrafraction Monitoring-Guided Gating for Prostate Cancer Radiation Therapy

    DEFF Research Database (Denmark)

    Keall, Paul J; Ng, Jin Aun; Juneja, Prabhjot

    2016-01-01

    for prostate cancer radiation therapy. In this paper we report on the measured motion accuracy and precision using real-time KIM-guided gating. METHODS AND MATERIALS: Imaging and motion information from the first 200 fractions from 6 patient prostate cancer radiation therapy volumetric modulated arc therapy...... treatments were analyzed. A 3-mm/5-second action threshold was used to trigger a gating event where the beam is paused and the couch position adjusted to realign the prostate to the treatment isocenter. To quantify the in vivo accuracy and precision, KIM was compared with simultaneously acquired k...

  3. Real-Time Location-Based Rendering of Urban Underground Pipelines

    Directory of Open Access Journals (Sweden)

    Wei Li

    2018-01-01

    Full Text Available The concealment and complex spatial relationships of urban underground pipelines make them challenging to manage. Recently, augmented reality (AR) has become a hot topic around the world, because it can enhance our perception of reality by overlaying information about the environment and its objects onto the real world. Using AR, underground pipelines can be displayed accurately, intuitively, and in real time. We analyzed the characteristics of AR and its application to underground pipeline management. We mainly focused on an AR pipeline rendering procedure based on the BeiDou Navigation Satellite System (BDS) and simultaneous localization and mapping (SLAM) technology. First, to improve the spatial accuracy of pipeline rendering, we used differential corrections received from the Ground-Based Augmentation System to compute the precise coordinates of users in real time, which helped us accurately retrieve and draw pipelines near the users; scene recognition can further improve this accuracy. Second, in terms of pipeline rendering, we used Visual-Inertial Odometry (VIO) to track the rendered objects and made some improvements to the visual effects, which provides steady dynamic tracking of pipelines even in relatively markerless environments and outdoors. Finally, we used an occlusion method based on real-time 3D reconstruction to realistically express the immersion effect of underground pipelines. We compared our methods to existing methods and concluded that the method proposed in this research improves the spatial accuracy of pipeline rendering and the portability of the equipment. Moreover, the rendering procedure updates as the user's location changes, so we achieve dynamic rendering of pipelines in the real environment.

  4. Novel real-time tumor-contouring method using deep learning to prevent mistracking in X-ray fluoroscopy.

    Science.gov (United States)

    Terunuma, Toshiyuki; Tokui, Aoi; Sakae, Takeji

    2018-03-01

    Robustness to obstacles is the most important factor necessary to achieve accurate tumor tracking without fiducial markers. Some high-density structures, such as bone, are enhanced on X-ray fluoroscopic images, which can cause tumor mistracking. Tumor tracking should be performed by controlling "importance recognition": the understanding that soft tissue is an important tracking feature whereas bone structure is not. We propose a new real-time tumor-contouring method that uses deep learning with importance recognition control. The novelty of the proposed method is the combination of the devised random overlay method and supervised deep learning to induce the recognition of structures in tumor contouring as important or unimportant. This method can be used for tumor contouring because it uses deep learning to perform image segmentation. Our results from a simulated fluoroscopy model showed accurate tracking of a low-visibility tumor with an error of approximately 1 mm, even when enhanced bone structure acted as an obstacle. A high similarity of approximately 0.95 on the Jaccard index was observed between the segmented and ground truth tumor regions. A short processing time of 25 ms was achieved. The results of this simulated fluoroscopy model support the feasibility of robust real-time tumor contouring with fluoroscopy. Further studies using clinical fluoroscopy are highly anticipated.
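    The ~0.95 similarity reported above is the Jaccard index, a standard set-overlap measure between segmentation masks. The sketch below is a minimal illustration of how such a score can be computed (Python with NumPy; the binary masks are hypothetical and this is not the authors' code):

        import numpy as np

        def jaccard_index(pred_mask, gt_mask):
            """Intersection-over-union between two binary segmentation masks."""
            pred = np.asarray(pred_mask, dtype=bool)
            gt = np.asarray(gt_mask, dtype=bool)
            union = np.logical_or(pred, gt).sum()
            if union == 0:
                return 1.0
            return np.logical_and(pred, gt).sum() / union

        # Hypothetical 256x256 masks from one simulated fluoroscopy frame
        pred = np.zeros((256, 256), dtype=bool)
        pred[100:140, 100:150] = True
        gt = np.zeros((256, 256), dtype=bool)
        gt[102:142, 98:148] = True
        print(f"Jaccard index: {jaccard_index(pred, gt):.3f}")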

  5. Combining Real-Time Seismic and GPS Data for Earthquake Early Warning (Invited)

    Science.gov (United States)

    Boese, M.; Heaton, T. H.; Hudnut, K. W.

    2013-12-01

    Scientists at Caltech, UC Berkeley, the Univ. of SoCal, the Univ. of Washington, the US Geological Survey, and ETH Zurich have developed an earthquake early warning (EEW) demonstration system for California and the Pacific Northwest. To quickly determine the earthquake magnitude and location, 'ShakeAlert' currently processes and interprets real-time data-streams from ~400 seismic broadband and strong-motion stations within the California Integrated Seismic Network (CISN). Based on these parameters, the 'UserDisplay' software predicts and displays the arrival and intensity of shaking at a given user site. Real-time ShakeAlert feeds are currently shared with around 160 individuals, companies, and emergency response organizations to educate potential users about EEW and to identify needs and applications of EEW in a future operational warning system. Recently, scientists at the contributing institutions have started to develop algorithms for ShakeAlert that make use of high-rate real-time GPS data to improve the magnitude estimates for large earthquakes (M>6.5) and to determine slip distributions. Knowing the fault slip in (near) real-time is crucial for users relying on or operating distributed systems, such as for power, water or transportation, especially if these networks run close to or across large faults. As shown in an earlier study, slip information is also useful to predict (in a probabilistic sense) how far a fault rupture will propagate, thus enabling more robust probabilistic ground-motion predictions at distant locations. Finally, fault slip information is needed for tsunami warning, such as in the Cascadia subduction-zone. To handle extended fault-ruptures of large earthquakes in real-time, Caltech and USGS Pasadena are currently developing and testing a two-step procedure that combines seismic and geodetic data; in the first step, high-frequency strong-motion amplitudes are used to rapidly classify near- and far-source stations. Then, the location and

  6. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network

    Science.gov (United States)

    Bukhari, W.; Hong, S.-M.

    2016-03-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+, first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+. The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function, albeit
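    As a rough illustration of the first stage of EKF-GPRN+, the sketch below runs a per-coordinate Kalman prediction of a respiratory trace at a fixed lookahead. The constant-velocity state model, noise covariances and lookahead handling are assumptions made for the example; the GPRN correction and gating stages are omitted:

        import numpy as np

        # Constant-velocity state model per coordinate (an assumption for this
        # sketch; the paper's actual EKF state model may differ).
        dt = 0.192                                   # 192 ms sampling / lookahead
        F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition
        H = np.array([[1.0, 0.0]])                   # only position is observed
        Q = 1e-4 * np.eye(2)                         # process noise (assumed)
        R = np.array([[1e-3]])                       # measurement noise (assumed)

        def kalman_predict_trace(z, n_ahead=1):
            """Filter a 1D respiratory trace z and return n_ahead-step predictions."""
            x = np.array([[z[0]], [0.0]])
            P = np.eye(2)
            preds = []
            for zk in z:
                # measurement update
                y = zk - (H @ x)[0, 0]
                S = (H @ P @ H.T + R)[0, 0]
                K = P @ H.T / S
                x = x + K * y
                P = (np.eye(2) - K @ H) @ P
                # lookahead prediction from the updated state
                x_pred = x.copy()
                for _ in range(n_ahead):
                    x_pred = F @ x_pred
                preds.append(x_pred[0, 0])
                # time update to the next sample
                x = F @ x
                P = F @ P @ F.T + Q
            return np.array(preds)

        # Hypothetical 4 s-period breathing trace sampled every 192 ms
        t = np.arange(0, 60, dt)
        trace = 10.0 * np.sin(2 * np.pi * t / 4.0)
        print(kalman_predict_trace(trace, n_ahead=1)[:5])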

  7. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network

    International Nuclear Information System (INIS)

    Bukhari, W; Hong, S-M

    2016-01-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+, first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+. The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function

  8. Attraction of posture and motion-trajectory elements of conspecific biological motion in medaka fish.

    Science.gov (United States)

    Shibai, Atsushi; Arimoto, Tsunehiro; Yoshinaga, Tsukasa; Tsuchizawa, Yuta; Khureltulga, Dashdavaa; Brown, Zuben P; Kakizuka, Taishi; Hosoda, Kazufumi

    2018-06-05

    Visual recognition of conspecifics is necessary for a wide range of social behaviours in many animals. Medaka (Japanese rice fish), a commonly used model organism, are known to be attracted by the biological motion of conspecifics. However, biological motion is a composite of both body-shape motion and the entire-field motion trajectory (i.e., posture and motion-trajectory elements, respectively), and it has not been revealed which element mediates the attractiveness. Here, we show that either posture or motion-trajectory elements alone can attract medaka. We decomposed the biological motion of medaka into the two elements and synthesized visual stimuli that contain both, either, or none of the two elements. We found that medaka were attracted by visual stimuli that contain at least one of the two elements. Combined with other known static visual cues of medaka, this further expands the range of information potentially used for conspecific recognition. Our strategy of decomposing biological motion into these partial elements is applicable to other animals, and further studies using this technique will enhance the basic understanding of visual recognition of conspecifics.

  9. TH-EF-BRA-05: A Method of Near Real-Time 4D MRI Using Volumetric Dynamic Keyhole (VDK) in the Presence of Respiratory Motion for MR-Guided Radiotherapy

    International Nuclear Information System (INIS)

    Lewis, B; Kim, S; Kim, T

    2016-01-01

    Purpose: To develop a novel method that enables 4D MR imaging in near real-time for continuous monitoring of tumor motion in MR-guided radiotherapy. Methods: This method is based on the idea of expanding the dynamic keyhole technique to full volumetric image acquisition. In the VDK approach introduced in this study, a library of peripheral volumetric k-space data is generated in advance for a given number of respiratory phases (5 and 10 in this study). For 4D MRI at any given time, only the central volumetric k-space data are acquired in real time and combined with the pre-acquired peripheral volumetric k-space data in the library corresponding to the respiratory phase (or amplitude). The combined k-space data are Fourier-transformed into MR images. For the simulation study, the MRXCAT program was used to generate synthetic MR images of the thorax with the desired respiratory motion, contrast levels, and spatial and temporal resolution. Twenty phases of volumetric MR images, with 200 ms temporal resolution over a 4 s respiratory period, were generated using a balanced steady-state free precession MR pulse sequence. The total acquisition time was 21.5 s/phase with a voxel size of 3×3×5 mm³ and an image matrix of 128×128×56. Image similarity was evaluated with difference maps between the reference and reconstructed images. The VDK, conventional keyhole, and zero-filling methods were compared in this simulation study. Results: Using 80% of the ky data and 70% of the kz data from the library resulted in a 12.20% average intensity difference from the reference, and threshold pixel differences of 21.60% and 28.45% for conventional keyhole and zero filling, respectively. The imaging time would be reduced from 21.5 s to 1.3 s per volume using the VDK method. Conclusion: Near real-time 4D MR imaging can be achieved using the volumetric dynamic keyhole method, which makes it possible to utilize 4D MRI during MR-guided radiotherapy.
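    The keyhole step itself, overwriting the central k-space of a phase-matched library volume with freshly acquired data and Fourier-transforming the result, can be sketched compactly. The example below is a 2D toy illustration under assumed matrix sizes and a rectangular central region; it is not the VDK implementation:

        import numpy as np

        def keyhole_reconstruct(central_kspace, library_kspace, keep_frac=0.3):
            """Overwrite the central phase-encode lines of the phase-matched
            library k-space with real-time data, then reconstruct the image."""
            ny, nx = library_kspace.shape
            combined = library_kspace.copy()
            half = int(ny * keep_frac / 2)
            centre = ny // 2
            combined[centre - half:centre + half, :] = \
                central_kspace[centre - half:centre + half, :]
            return np.abs(np.fft.ifft2(np.fft.ifftshift(combined)))

        # Hypothetical 128x128 centred k-space data (library phase vs. current frame)
        rng = np.random.default_rng(0)
        library = np.fft.fftshift(np.fft.fft2(rng.random((128, 128))))
        current = np.fft.fftshift(np.fft.fft2(rng.random((128, 128))))
        image = keyhole_reconstruct(current, library, keep_frac=0.3)
        print(image.shape)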

  10. Time-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2015-01-01

    Full Text Available The classification of emotional speech is a major topic in speech-related research on human-computer interaction (HCI). The purpose of this paper is to present a novel feature extraction based on multi-resolution texture image information (MRTII). The MRTII feature set is derived from multi-resolution texture analysis for the characterization and classification of different emotions in a speech signal. The motivation is that emotions have different intensity values in different frequency bands. In terms of human visual perception, the multi-resolution texture properties of the emotional speech spectrogram should provide a good feature set for emotion classification in speech. Furthermore, multi-resolution texture analysis can give a clearer discrimination between emotions than uniform-resolution analysis. In order to provide high accuracy of emotional discrimination, especially in real life, an acoustic activity detection (AAD) algorithm is applied within the MRTII-based feature extraction. Considering the presence of many blended emotions in real life, this paper makes use of two corpora of naturally occurring dialogs recorded in real-life call centers. Compared with the traditional Mel-scale Frequency Cepstral Coefficients (MFCC) and state-of-the-art features, the MRTII features can also improve the correct classification rates of the proposed systems across different language databases. Experimental results show that the proposed MRTII-based feature information, inspired by human visual perception of the spectrogram image, can provide significant classification performance for real-life emotional recognition in speech.
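    As a hedged illustration of the acoustic activity detection step that gates the feature extraction, the sketch below flags frames whose short-time energy exceeds an estimated noise floor. The frame length, hop size and threshold rule are assumptions for the example, not the paper's AAD algorithm:

        import numpy as np

        def acoustic_activity_mask(signal, fs, frame_ms=25, hop_ms=10, margin_db=10.0):
            """Mark frames whose short-time energy exceeds the estimated noise
            floor by margin_db (a simple energy-based activity detector)."""
            frame = int(fs * frame_ms / 1000)
            hop = int(fs * hop_ms / 1000)
            n_frames = 1 + (len(signal) - frame) // hop
            energy_db = np.empty(n_frames)
            for i in range(n_frames):
                seg = signal[i * hop:i * hop + frame]
                energy_db[i] = 10 * np.log10(np.mean(seg ** 2) + 1e-12)
            noise_floor = np.percentile(energy_db, 10)    # assumed noise estimate
            return energy_db > noise_floor + margin_db

        # Hypothetical 1 s recording: low-level noise with a louder voiced burst
        fs = 16000
        x = 0.01 * np.random.randn(fs)
        x[6000:10000] += 0.2 * np.sin(2 * np.pi * 440 * np.arange(4000) / fs)
        print(acoustic_activity_mask(x, fs).astype(int))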

  11. Real Time Energy Management Control Strategies for Hybrid Powertrains

    Science.gov (United States)

    Zaher, Mohamed Hegazi Mohamed

    In order to improve fuel efficiency and reduce emissions of mobile vehicles, various hybrid powertrain concepts have been developed over the years. This thesis focuses on embedded control of hybrid powertrain concepts for mobile vehicle applications. An optimal robust control approach is used to develop a real-time energy management strategy for continuous operation. The main idea is to store normally wasted mechanical regenerative energy in energy storage devices for later use. The regenerative energy recovery opportunity exists in any condition where the speed of motion is in the opposite direction to the applied force or torque. This is the case when the vehicle is braking or decelerating, or when the motion is driven by gravity or by the load. There are three main concepts for regenerative energy storage devices in hybrid vehicles: electric, hydraulic, and flywheel. The real-time control challenge is to balance the system power demand between the engine and the hybrid storage device, without depleting the energy storage device or stalling the engine in any work cycle, while making optimal use of the energy saving opportunities in a given operational, often repetitive, cycle. In the worst-case scenario, only the engine is used and the hybrid system is completely disabled. A rule-based control is developed and tuned for different work cycles and linked to a gain scheduling algorithm. The gain scheduling algorithm identifies the cycle being performed by the machine and its position via GPS, and maps them to the gains.
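    A toy sketch of the rule-based power split described above is given below; it decides, per control step, how the demanded power is shared between the engine and the storage device subject to state-of-charge limits. All limits and rules are illustrative assumptions, not the thesis's controller:

        def power_split(p_demand_kw, soc, engine_max_kw=150.0,
                        storage_max_kw=50.0, soc_min=0.2, soc_max=0.9):
            """Toy rule-based energy management step (all limits are assumptions).
            Negative demand (braking or load-driven motion) charges the storage if
            possible; positive demand is served from storage first, then the engine."""
            if p_demand_kw < 0:                          # regenerative opportunity
                stored = p_demand_kw if soc < soc_max else 0.0
                return {"engine_kw": 0.0, "storage_kw": stored}
            if soc > soc_min:                            # assist from storage
                storage = min(p_demand_kw, storage_max_kw)
                return {"engine_kw": min(p_demand_kw - storage, engine_max_kw),
                        "storage_kw": storage}
            return {"engine_kw": min(p_demand_kw, engine_max_kw), "storage_kw": 0.0}

        # Example work-cycle samples: (power demand in kW, state of charge)
        for demand, soc in [(-40.0, 0.5), (120.0, 0.8), (120.0, 0.15)]:
            print(demand, soc, power_split(demand, soc))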

  12. Temporal logic motion planning

    CSIR Research Space (South Africa)

    Seotsanyana, M

    2010-01-01

    Full Text Available In this paper, a critical review on temporal logic motion planning is presented. The review paper aims to address the following problems: (a) In a realistic situation, the motion planning problem is carried out in real-time, in a dynamic, uncertain...

  13. Tightly-coupled real-time analysis of GPS and accelerometer data for translational and rotational ground motions and application to earthquake and tsunami early warning

    Science.gov (United States)

    Geng, J.; Bock, Y.; Melgar, D.; Hasse, J.; Crowell, B. W.

    2013-12-01

    High-rate GPS can play an important role in earthquake early warning (EEW) systems for large (>M6) events by providing permanent displacements immediately as they are achieved, to be used in source inversions that can be repeatedly updated as more information becomes available. This is most valuable to implement at a site very near the potential source rupture, where broadband seismometers are likely to clip, and accelerometer data cannot be objectively integrated to produce reliable displacements in real time. At present, more than 525 real-time GPS stations have been established in western North America, which are being integrated into EEW systems. Our analysis technique relies on a tightly-coupled combination of GPS and accelerometer data, an extension of precise point positioning with ambiguity resolution (PPP-AR). We operate a PPP service based on North American stations available through the IGS and UNAVCO/PBO. The service provides real-time satellite clock and fractional-cycle bias products that allow us to position individual client stations in the zone of deformation. The service reference stations are chosen to be further than 200 km from the primary zones of tectonic deformation in the western U.S. to avoid contamination of the satellite products during a large seismic event. At client stations, accelerometer data are applied as tight constraints on the positions between epochs in PPP-AR, which improves cycle-slip repair and rapid ambiguity resolution after GPS outages. Furthermore, we estimate site displacements, seismic velocities, and coseismic ground tilts to facilitate the analysis of ground motion characteristics and the inversion for source mechanisms. The seismogeodetic displacement and velocity waveforms preserve the detection of P wave arrivals, and provide P-wave arrival displacements, which are key new information for EEW. Our innovative solution method for coseismic tilts mitigates an error source that has continually plagued strong motion

  14. [Recognition of walking stance phase and swing phase based on moving window].

    Science.gov (United States)

    Geng, Xiaobo; Yang, Peng; Wang, Xinran; Geng, Yanli; Han, Yu

    2014-04-01

    Wearing a transfemoral prosthesis is the only way for transfemoral amputees to complete daily physical activities. Motion pattern recognition is important for prosthesis control, especially for recognizing the swing and stance phases. This paper reports the use of surface electromyography (sEMG) signals for swing and stance phase recognition. The sEMG signals of the related muscles were sampled with the Infiniti system of a Canadian company. The sEMG signal was then filtered with a weighted filtering window and analyzed with a height-permitted window. The starting times of the stance and swing phases are determined by analyzing specific muscles: the sEMG signal of the rectus femoris is used for stance phase recognition and the sEMG signal of the tibialis anterior is used for swing phase recognition. Within a certain tolerance range, the double-window approach, combining the weighted filtering window and the height-permitted window, can reach a high accuracy rate. The experiments show that the sEMG signals of the related muscles reflect the subject's actual walking intention. Using the related muscles to recognize the swing and stance phases is therefore feasible, and the approach used in this paper is useful for sEMG signal analysis and practical prosthesis control.
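    A hedged sketch of the double-window idea follows: a weighted (moving-average) filtering window smooths the rectified sEMG, and a height-permitted (threshold) window flags the onset of muscle activity that marks a phase transition. Window shape, length and threshold are assumptions for the example:

        import numpy as np

        def weighted_filter(semg, width=50):
            """Weighted moving-average envelope of the rectified sEMG signal."""
            weights = np.hanning(width)                  # assumed weighting
            weights /= weights.sum()
            return np.convolve(np.abs(semg), weights, mode="same")

        def phase_onsets(envelope, threshold):
            """Indices where the envelope first rises above the permitted height."""
            above = envelope > threshold
            return np.flatnonzero(above[1:] & ~above[:-1]) + 1

        # Hypothetical rectus femoris sEMG: noise plus a burst marking stance onset
        fs = 1000
        semg = 0.02 * np.random.randn(2 * fs)
        semg[800:1200] += 0.3 * np.random.randn(400)
        envelope = weighted_filter(semg)
        print("stance-phase onset samples:", phase_onsets(envelope, threshold=0.05))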

  15. Real-Time Continuous Response Spectra Exceedance Calculation Displayed in a Web-Browser Enables Rapid and Robust Damage Evaluation by First Responders

    Science.gov (United States)

    Franke, M.; Skolnik, D. A.; Harvey, D.; Lindquist, K.

    2014-12-01

    A novel and robust approach is presented that provides near real-time earthquake alarms for critical structures at distributed locations and large facilities using real-time estimation of response spectra obtained from near free-field motions. Influential studies dating back to the 1980s identified spectral response acceleration as a key ground motion characteristic that correlates well with observed damage in structures. Thus, monitoring and reporting on exceedance of spectra-based thresholds are useful tools for assessing the potential for damage to facilities or multi-structure campuses based on input ground motions only. With as little as one strong-motion station per site, this scalable approach can provide rapid alarms on the damage status of remote towns, critical infrastructure (e.g., hospitals, schools) and points of interests (e.g., bridges) for a very large number of locations enabling better rapid decision making during critical and difficult immediate post-earthquake response actions. Details on the novel approach are presented along with an example implementation for a large energy company. Real-time calculation of PSA exceedance and alarm dissemination are enabled with Bighorn, an extension module based on the Antelope software package that combines real-time spectral monitoring and alarm capabilities with a robust built-in web display server. Antelope is an environmental data collection software package from Boulder Real Time Technologies (BRTT) typically used for very large seismic networks and real-time seismic data analyses. The primary processing engine produces continuous time-dependent response spectra for incoming acceleration streams. It utilizes expanded floating-point data representations within object ring-buffer packets and waveform files in a relational database. This leads to a very fast method for computing response spectra for a large number of channels. A Python script evaluates these response spectra for exceedance of one or more
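    The alarm logic reduces to computing a pseudo-spectral acceleration (PSA) for a set of oscillator periods from the incoming acceleration stream and testing it against site-specific thresholds. The sketch below uses a Newmark average-acceleration SDOF integration and made-up thresholds; it is an illustration of the principle, not the Bighorn implementation:

        import numpy as np

        def psa(accel, dt, period, zeta=0.05):
            """Pseudo-spectral acceleration of an SDOF oscillator (m = 1),
            integrated with the Newmark average-acceleration scheme."""
            w = 2 * np.pi / period
            c, k = 2 * zeta * w, w ** 2
            beta, gamma = 0.25, 0.5
            keff = k + gamma / (beta * dt) * c + 1 / (beta * dt ** 2)
            u = v = umax = 0.0
            a = -accel[0]                      # initial relative acceleration
            for ag in accel[1:]:
                peff = (-ag
                        + u / (beta * dt ** 2) + v / (beta * dt)
                        + (1 / (2 * beta) - 1) * a
                        + c * (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                               + dt * (gamma / (2 * beta) - 1) * a))
                u_new = peff / keff
                a_new = ((u_new - u) / (beta * dt ** 2) - v / (beta * dt)
                         - (1 / (2 * beta) - 1) * a)
                v += dt * ((1 - gamma) * a + gamma * a_new)
                u, a = u_new, a_new
                umax = max(umax, abs(u))
            return w ** 2 * umax

        # Hypothetical per-period thresholds (m/s^2) and a toy acceleration record
        thresholds = {0.3: 2.0, 1.0: 1.0, 3.0: 0.5}
        dt = 0.01
        accel = 0.5 * np.sin(2 * np.pi * 2.0 * np.arange(0, 20, dt))
        alarms = {T: psa(accel, dt, T) > limit for T, limit in thresholds.items()}
        print(alarms)                          # True marks a threshold exceedance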

  16. An innate immune system-mimicking, real-time biosensing of infectious bacteria.

    Science.gov (United States)

    Seo, Sung-Min; Jeon, Jin-Woo; Kim, Tae-Yong; Paek, Se-Hwan

    2015-09-07

    An animal cell-based biosensor was investigated to monitor bacterial contamination in an unattended manner by mimicking the innate immune response. The cells (RAW 264.7 cell line) were first attached onto the solid surfaces of a 96-well microtiter plate and co-incubated in the culture medium with a sample that might contain bacterial contaminants. As Toll-like receptors were present on the cell membrane surfaces, they acted as a sentinel by binding to pathogen-associated molecular patterns (PAMPs) of any contaminant. Such biological recognition initiates signal transmission along various pathways to produce different proinflammatory mediators, one of which, tumor necrosis factor-α (TNF-α), was measured using an immunosensor. To demonstrate automated bacterium monitoring, a capture antibody specific for TNF-α was immobilized on an optical fiber sensor tip and then used to measure complex formation in a label-free sensor system (e.g., Octet Red). The sensor response time depended significantly on the degree of agitation of the culture medium, which controls the biological recognition and further autocrine/paracrine signaling by cytokines. The response, particularly under non-agitated conditions, was also influenced by the medium volume, revealing a local gradient change of the cytokine concentration and also acidity, caused by bacterial growth near the bottom surfaces. A biosensor system retaining 50 μL medium and not employing agitation could be used for the early detection of bacterial contamination. This novel biosensing model was applied to the real-time monitoring of different bacteria: Shigella sonnei, Staphylococcus aureus, and Listeria monocytogenes. The sensor responded to these different bacterial species, supporting the concept of non-targeted real-time bacterial monitoring. This technique was further applied to real sample testing (e.g., with milk) to exemplify, for example, the food quality control process without using any additional sample pretreatment such as magnetic concentration.

  17. Using real-time stereopsis for mobile robot control

    Science.gov (United States)

    Bonasso, R. P.; Nishihara, H. K.

    1991-02-01

    This paper describes on-going work in using range and motion data generated at video-frame rates as the basis for long-range perception in a mobile robot. A current approach in the artificial intelligence community to achieve time-critical perception for situated reasoning is to use low-level perception for motor reflex-like activity and higher-level, but more computationally intense, perception for path planning, reconnaissance, and retrieval activities. Typically, inclinometers and a compass or an infra-red beacon system provide stability and orientation maintenance, and ultrasonic or infra-red sensors serve as proximity detectors for obstacle avoidance. For distant ranging and area occupancy determination, active imaging systems such as laser scanners can be prohibitively expensive, and heretofore passive systems typically performed more slowly than the cycle time of the control system, causing the robot to halt periodically along its way. However, a recent stereo system developed by Nishihara, known as PRISM (Practical Real-time Imaging Stereo Matcher), matches stereo pairs using a sign-correlation technique that gives range and motion at video frame rates. We are integrating this technique with constant-time control software for distant ranging and object detection at a speed that is comparable with the cycle times of the low-level sensors. Possibilities for a variety of uses in a leader-follower mobile robot situation are discussed.

  18. Performance enhancement of various real-time image processing techniques via speculative execution

    Science.gov (United States)

    Younis, Mohamed F.; Sinha, Purnendu; Marlowe, Thomas J.; Stoyenko, Alexander D.

    1996-03-01

    In real-time image processing, an application must satisfy a set of timing constraints while ensuring the semantic correctness of the system. Because of the natural structure of digital data, pure data and task parallelism have been used extensively in real-time image processing to accelerate the handling time of image data. These types of parallelism are based on splitting the execution load performed by a single processor across multiple nodes. However, execution of all parallel threads is mandatory for correctness of the algorithm. On the other hand, speculative execution is an optimistic execution of part(s) of the program based on assumptions on program control flow or variable values. Rollback may be required if the assumptions turn out to be invalid. Speculative execution can enhance average, and sometimes worst-case, execution time. In this paper, we target various image processing techniques to investigate applicability of speculative execution. We identify opportunities for safe and profitable speculative execution in image compression, edge detection, morphological filters, and blob recognition.
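    The pattern described above can be illustrated with a small sketch: a speculative thread starts the fast processing path under an assumed control-flow outcome while the guard condition is evaluated, and the speculative result is discarded (rolled back) if the assumption fails. The functions and threshold are hypothetical stand-ins:

        from concurrent.futures import ThreadPoolExecutor

        def cheap_filter(frame):
            # hypothetical fast path, assumed valid only for low-noise frames
            return [p // 2 for p in frame]

        def robust_filter(frame):
            # hypothetical slow path that is always valid
            return [max(0, p - 1) // 2 for p in frame]

        def estimate_noise(frame):
            # stand-in for an expensive guard-condition evaluation
            return sum(abs(a - b) for a, b in zip(frame, frame[1:])) / len(frame)

        def process_frame(frame, noise_limit=5.0):
            """Run the fast path speculatively while the guard condition is
            evaluated; discard (roll back) the result if the assumption fails."""
            with ThreadPoolExecutor(max_workers=1) as pool:
                speculative = pool.submit(cheap_filter, frame)   # optimistic start
                noisy = estimate_noise(frame) > noise_limit      # actual condition
                if not noisy:
                    return speculative.result()                  # speculation holds
                speculative.cancel()         # best-effort rollback: result discarded
                return robust_filter(frame)

        print(process_frame([10, 12, 11, 13, 12, 14]))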

  19. Process algebra with timing : real time and discrete time

    NARCIS (Netherlands)

    Baeten, J.C.M.; Middelburg, C.A.; Bergstra, J.A.; Ponse, A.J.; Smolka, S.A.

    2001-01-01

    We present real time and discrete time versions of ACP with absolute timing and relative timing. The starting-point is a new real time version with absolute timing, called ACPsat, featuring urgent actions and a delay operator. The discrete time versions are conservative extensions of the discrete

  20. Process algebra with timing: Real time and discrete time

    NARCIS (Netherlands)

    Baeten, J.C.M.; Middelburg, C.A.

    1999-01-01

    We present real time and discrete time versions of ACP with absolute timing and relative timing. The starting-point is a new real time version with absolute timing, called ACPsat, featuring urgent actions and a delay operator. The discrete time versions are conservative extensions of the discrete

  1. Real-time motion analysis reveals cell directionality as an indicator of breast cancer progression.

    Directory of Open Access Journals (Sweden)

    Michael C Weiger

    Full Text Available Cancer cells alter their migratory properties during tumor progression to invade surrounding tissues and metastasize to distant sites. However, it remains unclear how migratory behaviors differ between tumor cells of different malignancy and whether these migratory behaviors can be utilized to assess the malignant potential of tumor cells. Here, we analyzed the migratory behaviors of cell lines representing different stages of breast cancer progression using conventional migration assays or time-lapse imaging and particle image velocimetry (PIV) to capture migration dynamics. We find that the number of migrating cells in transwell assays, and the distance and speed of migration in unconstrained 2D assays, show no correlation with malignant potential. However, the directionality of cell motion during 2D migration nicely distinguishes benign and tumorigenic cell lines, with tumorigenic cell lines harboring less directed, more random motion. Furthermore, the migratory behaviors of epithelial sheets observed under basal conditions and in response to stimulation with epidermal growth factor (EGF) or lysophosphatidic acid (LPA) are distinct for each cell line with regard to cell speed, directionality, and spatiotemporal motion patterns. Surprisingly, treatment with LPA promotes a more cohesive, directional sheet movement in lung colony forming MCF10CA1a cells compared to basal conditions or EGF stimulation, implying that the LPA signaling pathway may alter the invasive potential of MCF10CA1a cells. Together, our findings identify cell directionality as a promising indicator for assessing the tumorigenic potential of breast cancer cell lines and show that LPA induces more cohesive motility in a subset of metastatic breast cancer cells.

  2. Real-time radiography

    International Nuclear Information System (INIS)

    Bossi, R.H.; Oien, C.T.

    1981-01-01

    Real-time radiography is used for imaging both dynamic events and static objects. Fluorescent screens play an important role in converting radiation to light, which is then observed directly or intensified and detected. The radiographic parameters for real-time radiography are similar to those of conventional film radiography, with special emphasis on statistics and magnification. Direct-viewing fluoroscopy uses the human eye as a detector of fluorescent screen light or the light from an intensifier. Remote-viewing systems replace the human observer with a television camera. The remote-viewing systems have many advantages over direct-viewing conditions, such as safety, image enhancement, and the capability to produce permanent records. This report reviews real-time imaging system parameters and components

  3. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  4. Real-time skin feature identification in a time-sequential video stream

    Science.gov (United States)

    Kramberger, Iztok

    2005-04-01

    Skin color can be an important feature when tracking skin-colored objects. This is particularly the case for computer-vision-based human-computer interfaces (HCI). Humans have a highly developed sense of space and, therefore, it is reasonable to support this within intelligent HCI, where the importance of augmented reality can be foreseen. Incorporating human-like interaction techniques within multimodal HCI could become a feature of modern mobile telecommunication devices. On the other hand, real-time processing plays an important role in achieving more natural and physically intuitive ways of human-machine interaction. The main scope of this work is the development of a stereoscopic, hardware-accelerated computer-vision framework for real-time skin feature identification as a single-pass image segmentation process. The hardware-accelerated preprocessing stage performs color and spatial filtering, where the skin color model within the hue-saturation-value (HSV) color space is given by a polyhedron of threshold values that forms the basis of the filter model. An adaptive filter management unit is suggested to achieve better segmentation results; this enables adaptation of the filter parameters to the current scene conditions. The suggested hardware structure is implemented on field programmable system-level integrated circuit (FPSLIC) devices using an embedded microcontroller as their main feature. A stereoscopic cue is obtained from a time-sequential video stream, which makes no difference to the real-time processing requirements in terms of hardware complexity. Experimental results for the hardware-accelerated preprocessing stage are given by estimating the efficiency of the presented hardware structure using a simple motion-detection algorithm based on a binary function.
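    A software sketch of the filter model follows: pixels are kept when their HSV values fall inside a box of thresholds, the simplest form of the polyhedron described above. The threshold values and the use of matplotlib's colour conversion are assumptions for illustration; the paper implements this stage in hardware:

        import numpy as np
        from matplotlib.colors import rgb_to_hsv

        def skin_mask(rgb_image,
                      h_range=(0.0, 0.12), s_range=(0.2, 0.7), v_range=(0.35, 1.0)):
            """Single-pass skin segmentation: keep pixels whose HSV values fall
            inside an axis-aligned threshold box (illustrative values)."""
            hsv = rgb_to_hsv(rgb_image.astype(float) / 255.0)
            h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
            return ((h_range[0] <= h) & (h <= h_range[1]) &
                    (s_range[0] <= s) & (s <= s_range[1]) &
                    (v_range[0] <= v) & (v <= v_range[1]))

        # Hypothetical 4x4 test image with a skin-like patch in the top-left corner
        img = np.zeros((4, 4, 3), dtype=np.uint8)
        img[:2, :2] = (205, 150, 125)
        print(skin_mask(img).astype(int))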

  5. Real-time registration of 3D to 2D ultrasound images for image-guided prostate biopsy.

    Science.gov (United States)

    Gillies, Derek J; Gardi, Lori; De Silva, Tharindu; Zhao, Shuang-Ren; Fenster, Aaron

    2017-09-01

    During image-guided prostate biopsy, needles are targeted at tissues that are suspicious of cancer to obtain specimens for histological examination. Unfortunately, patient motion causes targeting errors when using an MR-transrectal ultrasound (TRUS) fusion approach to augment the conventional biopsy procedure. This study aims to develop an automatic motion correction algorithm approaching the frame rate of an ultrasound system to be used in fusion-based prostate biopsy systems. Two modes of operation have been investigated for the clinical implementation of the algorithm: motion compensation using a single user-initiated correction performed prior to biopsy, and real-time continuous motion compensation performed automatically as a background process. Retrospective 2D and 3D TRUS patient images acquired prior to biopsy gun firing were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization. 2D and 3D images were downsampled and cropped to estimate the optimal amount of image information that would perform registrations quickly and accurately. The optimal search order during optimization was also analyzed to avoid local optima in the search space. Error in the algorithm was computed using target registration errors (TREs) from manually identified homologous fiducials in a clinical patient dataset. The algorithm was evaluated for real-time performance using the two different modes of clinical implementation by way of user-initiated and continuous motion compensation methods on a tissue-mimicking prostate phantom. After implementation in a TRUS-guided system with an image downsampling factor of 4, the proposed approach resulted in a mean ± std TRE and computation time of 1.6 ± 0.6 mm and 57 ± 20 ms, respectively. The user-initiated mode performed registrations for in-plane, out-of-plane, and roll motions with computation times of 108 ± 38 ms, 60 ± 23 ms, and 89 ± 27 ms, respectively, and corresponding
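    A simplified sketch of the intensity-based step is shown below: normalized cross-correlation between a fixed image and a translated version of the moving image, maximised with Powell's method. Only in-plane translation is optimised here, and SciPy routines stand in for the clinical implementation:

        import numpy as np
        from scipy.ndimage import gaussian_filter, shift
        from scipy.optimize import minimize

        def ncc(a, b):
            """Normalized cross-correlation between two equally sized images."""
            a = (a - a.mean()) / (a.std() + 1e-9)
            b = (b - b.mean()) / (b.std() + 1e-9)
            return float(np.mean(a * b))

        def register_translation(fixed, moving, x0=(0.0, 0.0)):
            """Find the in-plane shift of `moving` that maximises NCC with `fixed`."""
            cost = lambda t: -ncc(fixed, shift(moving, t, order=1, mode="nearest"))
            result = minimize(cost, x0, method="Powell")
            return result.x, -result.fun

        # Toy example: a smooth random image and a copy shifted by (-3, +2) pixels
        rng = np.random.default_rng(1)
        fixed = gaussian_filter(rng.random((64, 64)), sigma=4)
        moving = shift(fixed, (-3.0, 2.0), order=1, mode="nearest")
        params, score = register_translation(fixed, moving)
        print(np.round(params, 1), round(score, 3))   # expect roughly [ 3. -2.]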

  6. A deformable surface model for real-time water drop animation.

    Science.gov (United States)

    Zhang, Yizhong; Wang, Huamin; Wang, Shuai; Tong, Yiying; Zhou, Kun

    2012-08-01

    A water drop behaves differently from a large water body because of its strong viscosity and surface tension under the small scale. Surface tension causes the motion of a water drop to be largely determined by its boundary surface. Meanwhile, viscosity makes the interior of a water drop less relevant to its motion, as the smooth velocity field can be well approximated by an interpolation of the velocity on the boundary. Consequently, we propose a fast deformable surface model to realistically animate water drops and their flowing behaviors on solid surfaces. Our system efficiently simulates water drop motions in a Lagrangian fashion, by reducing 3D fluid dynamics over the whole liquid volume to a deformable surface model. In each time step, the model uses an implicit mean curvature flow operator to produce surface tension effects, a contact angle operator to change droplet shapes on solid surfaces, and a set of mesh connectivity updates to handle topological changes and improve mesh quality over time. Our numerical experiments demonstrate a variety of physically plausible water drop phenomena at a real-time rate, including capillary waves when water drops collide, pinch-off of water jets, and droplets flowing over solid materials. The whole system performs orders-of-magnitude faster than existing simulation approaches that generate comparable water drop effects.

  7. Real-Time Unsteady Loads Measurements Using Hot-Film Sensors

    Science.gov (United States)

    Mangalam, Arun S.; Moes, Timothy R.

    2004-01-01

    Several flight-critical aerodynamic problems such as buffet, flutter, stall, and wing rock are strongly affected or caused by abrupt changes in unsteady aerodynamic loads and moments. Advanced sensing and flow diagnostic techniques have made possible the simultaneous identification and tracking, in real time, of critical surface, viscosity-related aerodynamic phenomena under both steady and unsteady flight conditions. The wind tunnel study reported here correlates surface hot-film measurements of the leading-edge stagnation point and separation point with unsteady aerodynamic loads on a NACA 0015 airfoil. Lift predicted from the correlation model matches lift obtained from pressure sensors for an airfoil undergoing harmonic pitch-up and pitch-down motions. An analytical model was developed that demonstrates expected stall trends for pitch-up and pitch-down motions. This report demonstrates an ability to obtain unsteady aerodynamic loads in real time, which could lead to advances in air vehicle safety, performance, ride quality, control, and health management.

  8. Tokamak equilibrium reconstruction code LIUQE and its real time implementation

    International Nuclear Information System (INIS)

    Moret, J.-M.; Duval, B.P.; Le, H.B.; Coda, S.; Felici, F.; Reimerdes, H.

    2015-01-01

    Highlights: • Vertical stabilisation of the algorithm using a linear parametrisation of the current density. • Experimentally derived model of the vacuum vessel to account for vessel currents. • Real-time contouring algorithm for flux surface averaged 1.5 D transport equations. • Full real time implementation coded in SIMULINK runs in less than 200 μs. • Applications: shape control, safety factor profile control, coupling with RAPTOR. - Abstract: Equilibrium reconstruction consists in identifying, from experimental measurements, a distribution of the plasma current density that satisfies the pressure balance constraint. The LIUQE code adopts a computationally efficient method to solve this problem, based on an iterative solution of the Poisson equation coupled with a linear parametrisation of the plasma current density. This algorithm is unstable against gross vertical motion of the plasma column for elongated shapes, and its application to highly shaped plasmas on TCV requires a particular treatment of this instability. TCV's continuous vacuum vessel has a low resistance designed to enhance passive stabilisation of the vertical position. The eddy currents in the vacuum vessel have a sizeable influence on the equilibrium reconstruction and must be taken into account. A real-time version of LIUQE has been implemented on TCV's distributed digital control system with a cycle time shorter than 200 μs for a full spatial grid of 28 by 65, using all 133 experimental measurements and including the flux surface averages of quantities necessary for the real-time solution of the 1.5 D transport equations. This performance was achieved through a thoughtful choice of numerical methods and code optimisation techniques at every step of the algorithm, and the code was written in MATLAB and SIMULINK for the off-line and real-time versions, respectively

  9. Environmental Sound Recognition Using Time-Frequency Intersection Patterns

    Directory of Open Access Journals (Sweden)

    Xuan Guo

    2012-01-01

    Full Text Available Environmental sound recognition is an important function of robots and intelligent computer systems. In this research, we use a multistage perceptron neural network system for environmental sound recognition. The input data are a combination of the time-variance pattern of instantaneous power and the frequency-variance pattern given by the instantaneous spectrum at the power peak, referred to as a time-frequency intersection pattern. The spectra of many environmental sounds change more slowly than those of speech or voice, so the intersectional time-frequency pattern preserves the major features of environmental sounds with drastically reduced data requirements. Two experiments were conducted using an original database and an open database created by the RWCP project. The recognition rate for 20 kinds of environmental sounds was 92%. The recognition rate of the new method was about 12% higher than that of methods using only an instantaneous spectrum. The results are also comparable with those of HMM-based methods, although those methods need to treat the time variance of an input vector series with more complicated computations.
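    A minimal sketch of assembling the intersection pattern follows: the time-variance pattern is the framewise power envelope, and the frequency-variance pattern is the spectrum of the frame at the power peak. Frame size and windowing are assumptions for the example; the paper feeds such vectors to a multistage perceptron:

        import numpy as np

        def time_frequency_intersection(signal, fs, frame_ms=32):
            """Return (power envelope over time, spectrum at the power peak)."""
            frame = int(fs * frame_ms / 1000)
            n_frames = len(signal) // frame
            frames = signal[:n_frames * frame].reshape(n_frames, frame)
            power = (frames ** 2).mean(axis=1)            # time-variance pattern
            peak = int(np.argmax(power))                  # frame of maximum power
            spectrum = np.abs(np.fft.rfft(frames[peak] * np.hanning(frame)))
            return power, spectrum                        # frequency-variance pattern

        # Hypothetical environmental sound: a 1 kHz burst in background noise
        fs = 16000
        x = 0.01 * np.random.randn(fs)
        x[4000:8000] += 0.3 * np.sin(2 * np.pi * 1000 * np.arange(4000) / fs)
        power, spectrum = time_frequency_intersection(x, fs)
        print(power.shape, spectrum.shape, spectrum.argmax())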

  10. An active robot vision system for real-time 3-D structure recovery

    Energy Technology Data Exchange (ETDEWEB)

    Juvin, D. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d'Electronique et d'Instrumentation Nucleaire]; Boukir, S.; Chaumette, F.; Bouthemy, P. [Rennes-1 Univ., 35 (France)]

    1993-10-01

    This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is realized by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders. Therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. The method has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  11. An active robot vision system for real-time 3-D structure recovery

    International Nuclear Information System (INIS)

    Juvin, D.

    1993-01-01

    This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is realized by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders. Therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. The method has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up

  12. Memory controllers for real-time embedded systems predictable and composable real-time systems

    CERN Document Server

    Akesson, Benny

    2012-01-01

      Verification of real-time requirements in systems-on-chip becomes more complex as more applications are integrated. Predictable and composable systems can manage the increasing complexity using formal verification and simulation.  This book explains the concepts of predictability and composability and shows how to apply them to the design and analysis of a memory controller, which is a key component in any real-time system. This book is generally intended for readers interested in Systems-on-Chips with real-time applications.   It is especially well-suited for readers looking to use SDRAM memories in systems with hard or firm real-time requirements. There is a strong focus on real-time concepts, such as predictability and composability, as well as a brief discussion about memory controller architectures for high-performance computing. Readers will learn step-by-step how to go from an unpredictable SDRAM memory, offering highly variable bandwidth and latency, to a predictable and composable shared memory...

  13. [Real-time feedback systems for improvement of resuscitation quality].

    Science.gov (United States)

    Lukas, R P; Van Aken, H; Engel, P; Bohn, A

    2011-07-01

    The quality of chest compression is a determinant of survival after cardiac arrest. Therefore, the European Resuscitation Council (ERC) 2010 guidelines on resuscitation strongly focus on compression quality. Despite its impact on survival, observational studies have shown that adequate chest compression quality is often not achieved, even by professional rescue teams. Real-time feedback devices for resuscitation can measure chest compression during an ongoing resuscitation attempt through a sternal sensor equipped with a motion and pressure detection system. In addition to the electrocardiogram (ECG), ventilation can be detected by transthoracic impedance monitoring. In cases of quality deviation, such as shallow chest compression depth or hyperventilation, feedback systems produce visual or acoustic alarms. Rescuers can thereby be supported and guided to the required quality of chest compression and ventilation. Feedback technology is currently available both as so-called stand-alone devices and as an integrated feature in monitor/defibrillator units. Multiple studies have demonstrated sustainable improvement in resuscitation education due to the use of real-time feedback technology. There is evidence that real-time feedback for resuscitation, combined with training and debriefing strategies, can improve both resuscitation quality and patient survival. Chest compression quality is an independent predictor of survival in resuscitation and should therefore be measured and documented in further clinical multicenter trials.

  14. Development of real time abdominal compression force monitoring and visual biofeedback system

    Science.gov (United States)

    Kim, Tae-Ho; Kim, Siyong; Kim, Dong-Su; Kang, Seong-Hee; Cho, Min-Seok; Kim, Kyeong-Hyeon; Shin, Dong-Seok; Suh, Tae-Suk

    2018-03-01

    In this study, we developed and evaluated a system that could monitor abdominal compression force (ACF) in real time and provide a surrogate signal, even under abdominal compression. The system could also provide visual biofeedback (VBF). The real-time ACF monitoring system developed consists of an abdominal compression device, an ACF monitoring unit and a control system including an in-house ACF management program. We anticipated that ACF variation information caused by respiratory abdominal motion could be used as a respiratory surrogate signal. Four volunteers participated in a test to obtain correlation coefficients between ACF variation and tidal volume. A simulation study with another group of six volunteers was performed to evaluate the feasibility of the proposed system. In the simulation, we investigated the reproducibility of the compression setup and proposed a further enhanced shallow breathing (ESB) technique using VBF by intentionally reducing the amplitude of the breathing range under abdominal compression. The correlation coefficient between the ACF variation caused by the respiratory abdominal motion and the tidal volume signal was evaluated for each volunteer, and R² values ranged from 0.79 to 0.84. The ACF variation was similar to a respiratory pattern, and slight variations of the ACF ranges were observed among sessions. An average ACF control rate (i.e., compliance) of about 73-77% over five trials was observed in all volunteer subjects except one (64%) when there was no VBF. The targeted ACF range was intentionally reduced to achieve ESB for the VBF simulation. With VBF, in spite of the reduced target range, the overall ACF control rate improved by about 20% in all volunteers except one (4%), demonstrating the effectiveness of VBF. The developed monitoring system could help reduce the inter-fraction ACF setup error and the intra-fraction ACF variation. With the capability of providing a real-time surrogate signal and VBF under compression, it could

  15. Lexical Leverage: Category Knowledge Boosts Real-Time Novel Word Recognition in 2-Year-Olds

    Science.gov (United States)

    Borovsky, Arielle; Ellis, Erica M.; Evans, Julia L.; Elman, Jeffrey L.

    2016-01-01

    Recent research suggests that infants tend to add words to their vocabulary that are semantically related to other known words, though it is not clear why this pattern emerges. In this paper, we explore whether infants leverage their existing vocabulary and semantic knowledge when interpreting novel label-object mappings in real time. We initially…

  16. Tsunami Amplitude Estimation from Real-Time GNSS.

    Science.gov (United States)

    Jeffries, C.; MacInnes, B. T.; Melbourne, T. I.

    2017-12-01

    Tsunami early warning systems currently comprise modeling of observations from the global seismic network, deep-ocean DART buoys, and a global distribution of tide gauges. While these tools work well for tsunamis traveling teleseismic distances, saturation of seismic magnitude estimation in the near field can result in significant underestimation of tsunami excitation for local warning. Moreover, DART buoy and tide gauge observations cannot be used to rectify the underestimation in the available time, typically 10-20 minutes, before local runup occurs. Real-time GNSS measurements of coseismic offsets may be used to estimate finite faulting within 1-2 minutes and, in turn, tsunami excitation for local warning purposes. We describe here a tsunami amplitude estimation algorithm, implemented for the Cascadia subduction zone, that uses continuous GNSS position streams to estimate finite faulting. The system is based on a time-domain convolution of fault slip with a pre-computed catalog of hydrodynamic Green's functions generated with the GeoClaw shallow-water wave simulation software; it maps seismic slip along each section of the fault to points located off the Cascadia coast in 20 m of water depth and relies on the linearity of tsunami wave propagation. The system draws continuous slip estimates from a message broker and convolves the slip with the appropriate Green's functions, which are then superimposed to produce the wave amplitude at each coastal location. The maximum amplitude and its arrival time are then passed into a database for subsequent monitoring and display. We plan to test this system using a suite of synthetic earthquakes calculated for Cascadia whose ground motions are simulated at 500 existing Cascadia GPS sites, as well as real earthquakes for which we have continuous GNSS time series and surveyed runup heights, including Maule, Chile 2010 and Tohoku, Japan 2011. This system has been implemented in the CWU Geodesy Lab for the Cascadia
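    Because of the linearity assumption, the coastal wave amplitude can be assembled as a slip-weighted, time-shifted superposition of pre-computed unit-slip Green's functions. The sketch below is a toy illustration of that superposition step under assumed array shapes; it is not the Cascadia implementation:

        import numpy as np

        def coastal_amplitude(slip_per_segment, onset_steps, greens):
            """Superpose pre-computed unit-slip Green's functions.

            slip_per_segment : (n_seg,) slip on each fault section (m)
            onset_steps      : (n_seg,) time step at which each section slips
            greens           : (n_seg, n_coast, n_t) unit-slip waveforms at each
                               coastal point (pre-computed, e.g. with GeoClaw)
            Returns the (n_coast, n_t) summed wave-amplitude time series.
            """
            n_seg, n_coast, n_t = greens.shape
            total = np.zeros((n_coast, n_t))
            for s in range(n_seg):
                shifted = np.roll(greens[s], onset_steps[s], axis=-1)
                shifted[:, :onset_steps[s]] = 0.0      # no signal before slip onset
                total += slip_per_segment[s] * shifted
            return total

        # Hypothetical 3-segment rupture observed at 2 coastal points, 600 time steps
        rng = np.random.default_rng(2)
        greens = 0.1 * rng.random((3, 2, 600))
        waves = coastal_amplitude(np.array([2.0, 5.0, 1.0]),
                                  np.array([0, 20, 40]), greens)
        print(waves.max(axis=1), waves.argmax(axis=1))  # peak amplitude and its time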

  17. Four-dimensional real-time sonographically guided cauterization of the umbilical cord in a case of twin-twin transfusion syndrome.

    Science.gov (United States)

    Timor-Tritsch, Ilan E; Rebarber, Andrei; MacKenzie, Andrew; Caglione, Christopher F; Young, Bruce K

    2003-07-01

    In the past decade, three-dimensional (3D) sonographic technology has matured from a static imaging modality to near-real-time imaging. One of the more notable improvements in this technology has been the speed with which the imaged volume is acquired and displayed. This has enabled the birth of the near-real-time or four-dimensional (4D) sonographic concept. Using the 4D feature of current 3D sonography machines allows moving structures, such as the fetus, to be followed in almost real time. Shortly after the emergence of 3D and 4D technology as a clinical imaging tool, its use in guiding needles into structures was explored by other investigators. We present a case in which we used the 4D feature of our sonographic equipment to follow the course and motion of an instrument inserted into the uterus to occlude the umbilical cord of a fetus in a case of twin-twin transfusion syndrome.

  18. Real time speckle monitoring to control retinal photocoagulation

    Science.gov (United States)

    Bliedtner, Katharina; Seifert, Eric; Brinkmann, Ralf

    2017-07-01

    Photocoagulation is a treatment modality for several retinal diseases. Intra- and inter-individual variations of retinal absorption, ocular transmission, and light scattering make it impossible to achieve a uniform effective exposure with one set of laser parameters. To guarantee uniform damage throughout the therapy, real-time control is highly desirable. Here, an approach to real-time optical feedback using dynamic speckle analysis in vivo is presented. A 532 nm continuous-wave Nd:YAG laser is used for coagulation. During coagulation, speckle dynamics are monitored under coherent object illumination with a 633 nm diode laser and analyzed with a CMOS camera at frame rates up to 1 kHz. An algorithm is presented that can discriminate between different categories of retinal pigment epithelial damage ex vivo in enucleated porcine eyes and that appears to be robust to noise in vivo. Tissue changes in rabbits during retinal coagulation could be observed for different lesion strengths. This algorithm can run on an FPGA and is able to calculate a feedback value which is correlated with the thermally and coagulation-induced tissue motion, and thus with the achieved damage.
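
    One common way to quantify speckle dynamics of this kind is a frame-to-frame decorrelation measure. The sketch below computes a simple normalized cross-correlation between consecutive camera frames; it illustrates the general idea only and is not the algorithm of the paper.

```python
import numpy as np

def speckle_decorrelation(prev_frame, frame):
    """Return 1 - normalized cross-correlation of two speckle frames.

    Values near 0 indicate a static speckle pattern; values rise as
    thermal or coagulation-induced tissue motion decorrelates the speckle.
    """
    a = prev_frame.astype(float) - prev_frame.mean()
    b = frame.astype(float) - frame.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        return 0.0
    return 1.0 - (a * b).sum() / denom

# Feeding frames from a 1 kHz camera stream and accumulating the
# decorrelation values yields a per-exposure feedback signal.
```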

  19. Dynamics in two-elevator traffic system with real-time information

    Energy Technology Data Exchange (ETDEWEB)

    Nagatani, Takashi, E-mail: wadokeioru@yahoo.co.jp

    2013-12-17

    We study the dynamics of a traffic system with two elevators using an elevator-choice scenario. The two-elevator traffic system with real-time information is similar to the two-route vehicular traffic system. The dynamics of the two-elevator traffic system are described by a two-dimensional nonlinear map. One elevator runs a neck-and-neck race with the other. The motion of the two elevators displays complex behavior such as quasi-periodic motion. The return map of the two-dimensional map is a piecewise map.

  20. Deep neural networks to enable real-time multimessenger astrophysics

    Science.gov (United States)

    George, Daniel; Huerta, E. A.

    2018-02-01

    Gravitational wave astronomy has set in motion a scientific revolution. To further enhance the science reach of this emergent field of research, there is a pressing need to increase the depth and speed of the algorithms used to enable these ground-breaking discoveries. We introduce Deep Filtering, a new scalable machine learning method for end-to-end time-series signal processing. Deep Filtering is based on deep learning with two deep convolutional neural networks, designed for classification and regression, to detect gravitational wave signals in highly noisy time-series data streams and also estimate the parameters of their sources in real time. Acknowledging that some of the most sensitive algorithms for the detection of gravitational waves are based on implementations of matched filtering, and that a matched filter is the optimal linear filter in Gaussian noise, the application of Deep Filtering to whitened signals in Gaussian noise is investigated in this foundational article. The results indicate that Deep Filtering outperforms conventional machine learning techniques and achieves performance similar to matched filtering while being several orders of magnitude faster, allowing real-time signal processing with minimal resources. Furthermore, we demonstrate that Deep Filtering can detect and characterize waveform signals emitted from new classes of eccentric or spin-precessing binary black holes, even when trained with data sets of only quasicircular binary black hole waveforms. The results presented in this article, and the recent use of deep neural networks for the identification of optical transients in telescope data, suggest that deep learning can facilitate real-time searches for gravitational wave sources and their electromagnetic and astroparticle counterparts. In the subsequent article, the framework introduced herein is directly applied to identify and characterize gravitational wave events in real LIGO data.
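
    As an illustration of the kind of network involved, the sketch below defines a small 1D convolutional classifier for fixed-length whitened time series. The layer sizes are arbitrary placeholders and do not reproduce the Deep Filtering architecture described in the article.

```python
import torch
import torch.nn as nn

class TimeSeriesCNN(nn.Module):
    """Toy 1D CNN for classifying fixed-length, whitened time-series segments
    (signal present vs. noise only). Layer sizes are illustrative only."""
    def __init__(self, n_samples=8192, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=16, stride=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=8, stride=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        with torch.no_grad():  # infer the flattened feature size once
            n_feat = self.features(torch.zeros(1, 1, n_samples)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(n_feat, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):          # x: (batch, 1, n_samples)
        return self.classifier(self.features(x))

# A second network with a regression head (e.g. nn.Linear(64, 2) and an
# MSE loss) would estimate source parameters such as the component masses.
```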

  1. Real time MHD mode control using ECCD in KSTAR: Plan and requirements

    Energy Technology Data Exchange (ETDEWEB)

    Joung, M.; Woo, M. H.; Jeong, J. H.; Hahn, S. H.; Yun, S. W.; Lee, W. R.; Bae, Y. S.; Oh, Y. K.; Kwak, J. G.; Yang, H. L. [National Fusion Research Institute, 52 Eoeun-dong, Yuseong-gu, Daejeon (Korea, Republic of); Namkung, W.; Park, H.; Cho, M. H. [Department of Physics, POSTECH, Hyoja-dong, Nam-gu, Pohang, Gyeongangbuk-do (Korea, Republic of); Kim, M. H.; Kim, K. J.; Na, Y. S. [Department of Nuclear Engineering, Seoul National University, Daehak-dong, Gwanak-gu, Seoul (Korea, Republic of); Hosea, J.; Ellis, R. [Princeton Plasma Physics Laboratory, Princeton (United States)

    2014-02-12

    For a high-performance, advanced tokamak mode in KSTAR, we have been developing a real-time control system for MHD modes such as the sawtooth and the neoclassical tearing mode (NTM) using ECH/ECCD. An active feedback loop linking the mirror position to real-time detection of the mode position will also be added. This year, for the stabilization of the NTM, which is crucial to plasma performance, we have implemented an open-loop ECH antenna control system in the KSTAR Plasma Control System (PCS) for ECH mirror movement during a single plasma discharge. The KSTAR 170 GHz ECH launcher, designed and fabricated in collaboration with PPPL and POSTECH, has a final mirror that is steerable both poloidally and toroidally. Only the poloidal steering motion is controlled in the real-time NTM control system; its maximum steering speed is 10 degrees/s using a DC motor. However, the latency of the mirror control system and the readback period of the ECH antenna mirror angle are limited because the existing launcher mirror control system is based on a PLC connected to the KSTAR machine network through a serial-to-LAN converter. In this paper, we present the design of the real-time NTM control system, the ECH requirements, and the upgrade plan.

  2. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    Science.gov (United States)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

    Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic) and, because of this, are ideal for real-time monitoring of fault slip in a region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks but only measure accelerations or velocities, putting them at a significant disadvantage for ascertaining the full extent of slip during a large earthquake in real time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan's GEONET, consisting of about 1200 stations, during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore of Hokkaido Island; the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size estimate through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.

  3. CamOn: A Real-Time Autonomous Camera Control System

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

    This demonstration presents CamOn, an autonomous camera control system for real-time 3D games. CamOn employs multiple Artificial Potential Fields (APFs), a robot motion planning technique, to control both the location and orientation of the camera. Scene geometry from the 3D environment contributes to the potential field that is used to determine position and movement of the camera. Composition constraints for the camera are modelled as potential fields for controlling the view target of the camera. CamOn combines the compositional benefits of constraint-based camera systems, and improves...
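
    A minimal sketch of the potential-field idea for camera placement is given below, assuming a single attractive goal (the desired viewpoint) and a set of repulsive obstacle points from the scene geometry; the gain values are arbitrary assumptions and not those used by CamOn.

```python
import numpy as np

def apf_step(cam_pos, goal, obstacles, k_att=1.0, k_rep=5.0,
             influence=3.0, step=0.05):
    """One gradient-descent step of a simple artificial potential field.

    cam_pos, goal : (3,) arrays; obstacles : (n, 3) array of scene points.
    Returns the updated camera position.
    """
    # Attractive force pulls the camera toward the goal viewpoint.
    force = k_att * (goal - cam_pos)
    # Repulsive forces push the camera away from nearby scene geometry.
    for obs in obstacles:
        diff = cam_pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:
            force += k_rep * (1.0 / d - 1.0 / influence) * diff / d**3
    return cam_pos + step * force

# Iterating apf_step each frame moves the camera smoothly toward a
# composition goal while steering it away from walls and occluders.
```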

  4. Essays in real-time forecasting

    OpenAIRE

    Liebermann, Joelle

    2012-01-01

    This thesis contains three essays in the field of real-time econometrics, and more particularly forecasting. The issue of using data as available in real time to forecasters, policymakers or financial markets is an important one which has only recently been taken on board in the empirical literature. Data available and used in real time are preliminary and differ from ex-post revised data, and given that data revisions may be quite substantial, the use of latest available instead of real-time can s...

  5. Galileo and the Problems of Motion

    Science.gov (United States)

    Hooper, Wallace Edd

    Galileo's science of motion changed natural philosophy. His results initiated a broad human awakening to the intricate new world of physical order found in the midst of familiar operations of nature. His thinking was always based squarely on the academic traditions of the spiritual old world. He advanced physics by new standards of judgment drawn from mechanics and geometry, and disciplined observation of the world. My study first determines the order of composition of the earliest essays on motion and physics, ca. 1588-1592, from internal and bibliographic evidence. There are clear signs of a Platonist critique of Aristotle, supported by Archimedes, in the Ten Section Version of On Motion, written ca. 1588, and probably the earliest of his treatises on motion or physics. He expanded upon his opening Platonic-Archimedean position by investigating the ideas of scholastic critics of Aristotle, including the Doctores Parisienses, found in his readings of the Jesuit professors at the Collegio Romano. Their influences surfaced clearly in Galileo's Memoranda on Motion and the Dialogue on Motion, and in On Motion, which followed, ca. 1590-1592. At the end of his sojourn in Pisa, Galileo opened the road to the new physics by solving an important problem in the mechanics of Pappus, concerning motion along inclined planes. My study investigates why Galileo gave up attempts to establish a ratio between speed and weight, and why he began to seek the ratios of time, distance, and speed by 1602. It also reconstructs Galileo's development of the 1604 principle, seeking to outline its invention, elaboration, and abandonment. Then, I try to show that we have a record of Galileo's moment of recognition of the direct relation between the time of fall and the accumulated speed of motion--that great affinity between time and motion and the key to the new science of motion established before 1610. Evidence also ties the discovery of the time affinity directly to Galileo

  6. USE OF IMAGE ENHANCEMENT TECHNIQUES FOR IMPROVING REAL TIME FACE RECOGNITION EFFICIENCY ON WEARABLE GADGETS

    Directory of Open Access Journals (Sweden)

    MUHAMMAD EHSAN RANA

    2017-01-01

    The objective of this research is to study the effects of image enhancement techniques on the face recognition performance of wearable gadgets, with an emphasis on recognition rate. In this research, a number of image enhancement techniques are selected, including brightness normalization, contrast normalization, sharpening, smoothing, and various combinations of these. Subsequently, test images are obtained from the AT&T database and the Yale Face Database B to investigate the effect of these image enhancement techniques under various conditions, such as changes of illumination, face orientation and expression. The evaluation of data collected during this research revealed that the effect of image pre-processing techniques on face recognition depends strongly on the illumination conditions under which the images are taken. The benefit of applying image enhancement techniques to face images is best seen when there is high variation of illumination among images. Results also indicate that the highest recognition rate is achieved when images are taken under low-light conditions, image contrast is enhanced using histogram equalization, and image noise is then reduced using a median smoothing filter. Additionally, the combination of contrast normalization and a mean smoothing filter shows good results in all scenarios. Results obtained from the test cases illustrate up to 75% improvement in face recognition rate when image enhancement is applied to images in the given scenarios.
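
    The best-performing combination reported here (histogram equalization followed by median smoothing) is straightforward to reproduce with OpenCV. The sketch below is a generic illustration, not the authors' pipeline, and the kernel size is an assumption.

```python
import cv2

def enhance_for_recognition(gray_image, median_ksize=3):
    """Contrast enhancement via histogram equalization followed by median
    filtering to suppress noise, as a pre-processing step before feeding
    the image to a face recognizer. Expects an 8-bit grayscale image."""
    equalized = cv2.equalizeHist(gray_image)
    smoothed = cv2.medianBlur(equalized, median_ksize)
    return smoothed

# Usage:
# img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
# pre = enhance_for_recognition(img)
```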

  7. An audiovisual emotion recognition system

    Science.gov (United States)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many bio-symbols; speech and facial expression are two of them. Both are regarded as emotional information, which plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time practice and is supported by several integrated modules. These modules include speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier. Rough-set-based feature selection is a good method for dimensionality reduction, so 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected when the synchronized speech and video streams are fused together. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also show that multimodule fused recognition will become the trend in emotion recognition in the future.

  8. Biogeography-based combinatorial strategy for efficient autonomous underwater vehicle motion planning and task-time management

    Science.gov (United States)

    Zadeh, S. M.; Powers, D. M. W.; Sammut, K.; Yazdani, A. M.

    2016-12-01

    Autonomous Underwater Vehicles (AUVs) are capable of spending long periods of time carrying out various underwater missions and marine tasks. In this paper, a novel conflict-free motion planning framework is introduced to enhance an underwater vehicle's mission performance by completing the maximum number of highest-priority tasks in a limited time across a large-scale, waypoint-cluttered operating field, while ensuring safe deployment during the mission. The proposed combinatorial route-path planner model takes advantage of the Biogeography-Based Optimization (BBO) algorithm to satisfy the objectives of both the higher- and lower-level motion planners and guarantees maximization of mission productivity for a single-vehicle operation. The performance of the model is investigated under different scenarios, including particular cost constraints in time-varying operating fields. To show the reliability of the proposed model, the performance of each motion planner is assessed separately, and statistical analysis is then undertaken to evaluate the total performance of the entire model. The simulation results indicate the stability of the contributed model and its feasibility for real experiments.

  9. Real-Time Observation of Target Search by the CRISPR Surveillance Complex Cascade

    Directory of Open Access Journals (Sweden)

    Chaoyou Xue

    2017-12-01

    CRISPR-Cas systems defend bacteria and archaea against infection by bacteriophages and other threats. The central components of these systems are surveillance complexes that use guide RNAs to bind specific regions of foreign nucleic acids, marking them for destruction. Surveillance complexes must locate targets rapidly to ensure a timely immune response, but the mechanism of this search process remains unclear. Here, we used single-molecule FRET to visualize how the type I-E surveillance complex Cascade searches DNA in real time. Cascade rapidly and randomly samples DNA through nonspecific electrostatic contacts, pausing at short PAM recognition sites that may be adjacent to the target. We identify Cascade motifs that are essential for either nonspecific sampling or positioning and readout of the PAM. Our findings provide a comprehensive structural and kinetic model for the Cascade target-search mechanism, revealing how CRISPR surveillance complexes can rapidly search large amounts of genetic material en route to target recognition.

  10. Development of real-time tumor tracking system for stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Yamanaka, Seiji; Sasagawa, Tsuyoshi; Uno, Yukimichi

    2011-01-01

    We are now developing a real-time tumor tracking system for stereotactic radiotherapy (SRT) to provide precise information on the location of a tumor and to reduce the irradiation of healthy tissue in a patient. The system has the following features: A motion tracking and processing unit recognizes a gold marker inserted in or near a tumor in real time by pattern matching between a predetermined template image and the acquired X-ray fluoroscopic images. When the gold marker is within a planned area, that is, when the tumor enters the target irradiation area, a gate signal is sent to the linear accelerator. A railway unit is equipped with two X-ray tubes and two detectors, which are controlled separately with their own drive mechanisms. They travel with high accuracy and reproducibility to the best position for monitoring the gold marker. A synchronization controller controls the timing of the X-ray fluoroscopy and the gate signals to the linear accelerator. The controller works with two types of detectors: a color X-ray detector and a flat panel detector (FPD). (author)
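
    Gating on a template-matched marker position can be sketched in a few lines with OpenCV. This is a generic illustration under assumed variable names and thresholds, not the system described above.

```python
import cv2

def marker_in_gate(fluoro_frame, template, gate_rect, score_threshold=0.7):
    """Locate a gold-marker template in a fluoroscopic frame and decide
    whether to assert the beam-on gate signal.

    gate_rect : (x_min, y_min, x_max, y_max) planned gating window in pixels.
    Returns (gate_on, (cx, cy), score) with (cx, cy) the marker centre.
    """
    result = cv2.matchTemplate(fluoro_frame, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)       # best-match location
    cx = top_left[0] + template.shape[1] // 2
    cy = top_left[1] + template.shape[0] // 2
    x0, y0, x1, y1 = gate_rect
    gate_on = score >= score_threshold and x0 <= cx <= x1 and y0 <= cy <= y1
    return gate_on, (cx, cy), score
```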

  11. Generalized Hough transform based time invariant action recognition with 3D pose information

    Science.gov (United States)

    Muench, David; Huebner, Wolfgang; Arens, Michael

    2014-10-01

    Human action recognition has emerged as an important field in the computer vision community due to its large number of applications, such as automatic video surveillance, content-based video search and human-robot interaction. In order to cope with the challenges that this large variety of applications presents, recent research has focused more on developing classifiers able to detect several actions in more natural and unconstrained video sequences. The invariance-discrimination tradeoff in action recognition has been addressed by utilizing a Generalized Hough Transform. As a basis for action representation we transform 3D poses into a robust feature space, referred to as pose descriptors. For each action class a one-dimensional temporal voting space is constructed. Votes are generated by associating pose descriptors with their position in time relative to the end of an action sequence. Training data consist of manually segmented action sequences. In the detection phase valid human 3D poses are assumed as input, e.g. originating from 3D sensors or monocular pose reconstruction methods. The human 3D poses are normalized to gain view-independence and transformed into (i) relative limb-angle space to ensure independence of non-adjacent joints or (ii) geometric features. In (i) an action descriptor consists of the relative angles between limbs and their temporal derivatives. In (ii) the action descriptor consists of different geometric features. In order to circumvent the problem of time-warping we propose to use a codebook of prototypical 3D poses which is generated from sample sequences of 3D motion capture data. This idea is in accordance with the concept of equivalence classes in action space. Results of the codebook method are presented using the Kinect sensor and the CMU Motion Capture Database.
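
    The one-dimensional temporal voting can be sketched as follows: each frame's pose descriptor is matched to codebook entries, and every matching entry casts votes for candidate action end-times. The names, array layouts, and matching criterion are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def temporal_hough_votes(frame_descriptors, codebook, offsets, n_frames,
                         match_threshold=0.5):
    """Accumulate votes in a 1D temporal voting space for one action class.

    frame_descriptors : (n_frames, d) pose descriptor per frame
    codebook          : (k, d) prototypical pose descriptors for the class
    offsets           : list of k lists; offsets[j] holds the time offsets
                        (frames until action end) observed for codeword j
                        in the training data
    Returns a vote histogram over candidate action end-frames.
    """
    votes = np.zeros(n_frames)
    for t, desc in enumerate(frame_descriptors):
        dists = np.linalg.norm(codebook - desc, axis=1)
        j = int(np.argmin(dists))                 # nearest codeword
        if dists[j] > match_threshold:
            continue
        for dt in offsets[j]:                     # cast votes at t + dt
            end = t + dt
            if 0 <= end < n_frames:
                votes[end] += 1
    return votes

# Peaks in `votes` above a learned threshold indicate detected action
# instances ending at the corresponding frame.
```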

  12. Timely loss recognition and termination of unprofitable projects

    Directory of Open Access Journals (Sweden)

    Anup Srivastava

    2015-09-01

    Ideally, firms should discontinue projects that become unprofitable. Managers, however, continue to operate such projects because of their limited employment horizons and empire-building motivations (Jensen, 1986; Ball, 2001). Prior studies suggest that timely loss recognition in accounting earnings enables lenders, shareholders, and boards of directors to identify unprofitable projects, thereby enabling them to force managers to discontinue such projects before large value erosion occurs. However, this conjecture has not been tested empirically. Consistent with this notion, we find that timely loss recognition increases the likelihood of timely closures of unprofitable projects. Moreover, managers, by announcing late discontinuations of such projects, reveal their inability to select good projects and/or to contain losses when projects turn unprofitable. Accordingly, fund providers and boards of directors are thereafter likely to demand improved timeliness of loss recognition and stringent scrutiny of firms’ capital expenditure plans. Consistent with this, we find that firms that announce large discontinuation losses reduce capital expenditures and improve the timeliness of loss recognition in subsequent years. Our study provides evidence that timely loss reporting affects “real” economic decisions and creates economic benefits.

  13. Quantification of Artifact Reduction With Real-Time Cine Four-Dimensional Computed Tomography Acquisition Methods

    International Nuclear Information System (INIS)

    Langner, Ulrich W.; Keall, Paul J.

    2010-01-01

    Purpose: To quantify the magnitude and frequency of artifacts in simulated four-dimensional computed tomography (4D CT) images using three real-time acquisition methods (direction-dependent displacement acquisition, simultaneous displacement and phase acquisition, and simultaneous displacement and velocity acquisition) and to compare these methods with commonly used retrospective phase sorting. Methods and Materials: Image acquisition for the four 4D CT methods was simulated with different displacement and velocity tolerances for spheres with radii of 0.5 cm, 1.5 cm, and 2.5 cm, using 58 patient-measured tumors and respiratory motion traces. The magnitude and frequency of artifacts, CT doses, and acquisition times were computed for each method. Results: The mean artifact magnitude was 50% smaller for the three real-time methods than for retrospective phase sorting. The dose was ∼50% lower, but the acquisition time was 20% to 100% longer for the real-time methods than for retrospective phase sorting. Conclusions: Real-time acquisition methods can reduce the frequency and magnitude of artifacts in 4D CT images, as well as the imaging dose, but they increase the image acquisition time. The results suggest that direction-dependent displacement acquisition is the preferred real-time 4D CT acquisition method, because on average, the lowest dose is delivered to the patient and the acquisition time is the shortest for the resulting number and magnitude of artifacts.

  14. Ovation Prime Real-Time

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Ovation Prime Real-Time (OPRT) product is a real-time forecast and nowcast model of auroral power and is an operational implementation of the work by Newell et...

  15. Supporting Fourth Graders' Ability to Interpret Graphs through Real-Time Graphing Technology: A Preliminary Study

    Science.gov (United States)

    Deniz, Hasan; Dulger, Mehmet F.

    2012-01-01

    This study examined to what extent inquiry-based instruction supported with real-time graphing technology improves fourth graders' ability to interpret graphs as representations of physical science concepts such as motion and temperature. This study also examined whether there is any difference between inquiry-based instruction supported with…

  16. Impact of a voice recognition system on report cycle time and radiologist reading time

    Science.gov (United States)

    Melson, David L.; Brophy, Robert; Blaine, G. James; Jost, R. Gilbert; Brink, Gary S.

    1998-07-01

    Because of its exciting potential to improve clinical service, as well as reduce costs, a voice recognition system for radiological dictation was recently installed at our institution. This system will be clinically successful if it dramatically reduces radiology report turnaround time without substantially affecting radiologist dictation and editing time. This report summarizes an observer study currently under way in which radiologist reporting times using the traditional transcription system and the voice recognition system are compared. Four radiologists are observed interpreting portable intensive care unit (ICU) chest examinations at a workstation in the chest reading area. Data are recorded with the radiologists using the transcription system and using the voice recognition system. The measurements distinguish between time spent performing clerical tasks and time spent actually dictating the report. Editing time and the number of corrections made are recorded. Additionally, statistics are gathered to assess the voice recognition system's impact on the report cycle time -- the time from report dictation to availability of an edited and finalized report -- and the length of reports.

  17. Real-Time 3D Face Acquisition Using Reconfigurable Hybrid Architecture

    Directory of Open Access Journals (Sweden)

    Mitéran Johel

    2007-01-01

    Acquiring 3D data of the human face is a general problem with applications in face recognition, virtual reality, and many other fields. It can be solved using stereovision, a technique that acquires three-dimensional data from two cameras. The aim is to implement an algorithmic chain that makes it possible to obtain a three-dimensional space from two two-dimensional spaces: the two images coming from the two cameras. Several implementations have already been considered. We propose a new, simple real-time implementation based on a hybrid architecture (FPGA-DSP), allowing embedded and reconfigurable processing. We then show how our method provides a dense and reliable depth map of the face that can be implemented on an embedded architecture. A study of various architectures led us to a judicious choice allowing us to obtain the desired result. The real-time data processing is implemented in an embedded architecture. We obtain a dense face disparity map, precise enough for the considered applications (multimedia, virtual worlds, biometrics), using a reliable method.
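
    On a desktop, the disparity-map stage of such a stereo chain can be prototyped with OpenCV's block matcher before committing to an FPGA-DSP design. The sketch below is a generic illustration with assumed parameter values, not the authors' embedded implementation.

```python
import cv2

def face_disparity(left_gray, right_gray, num_disparities=64, block_size=15):
    """Compute a dense disparity map from a rectified stereo pair using
    simple block matching. Depth is inversely proportional to disparity
    once the camera baseline and focal length are known."""
    matcher = cv2.StereoBM_create(numDisparities=num_disparities,
                                  blockSize=block_size)
    disparity = matcher.compute(left_gray, right_gray)  # fixed-point, x16
    return disparity.astype("float32") / 16.0

# Usage (rectified 8-bit grayscale images):
# disp = face_disparity(cv2.imread("left.png", 0), cv2.imread("right.png", 0))
```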

  18. Improved motion description for action classification

    Directory of Open Access Journals (Sweden)

    Mihir eJain

    2016-01-01

    Even though the importance of explicitly integrating motion characteristics in video descriptions has been demonstrated by several recent papers on action classification, our current work concludes that adequately decomposing visual motion into dominant and residual motions, i.e. camera and scene motion, significantly improves action recognition algorithms. This holds true both for the extraction of the space-time trajectories and for the computation of descriptors. We designed a new motion descriptor, the DCS descriptor, that captures additional information on local motion patterns, enhancing results based on differential motion scalar quantities: divergence, curl and shear features. Finally, applying the recent VLAD coding technique proposed in image retrieval provides a substantial improvement for action recognition. These findings are complementary to each other and they outperformed all previously reported results by a significant margin on three challenging datasets, Hollywood 2, HMDB51 and Olympic Sports, as reported in Jain et al. (2013). These results were further improved by Oneata et al. (2013), Wang and Schmid (2013) and Zhu et al. (2013) through the use of the Fisher vector encoding. We therefore also employ the Fisher vector in this paper and further enhance our approach by combining trajectories from both optical flow and compensated flow. We also provide additional details of the DCS descriptors, including visualization. To extend the evaluation, a novel dataset with 101 action classes, UCF101, was added.
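
    The differential quantities underlying the DCS descriptor can be computed directly from a dense flow field with finite differences. The sketch below is a minimal illustration of those quantities only, not the full descriptor pipeline.

```python
import numpy as np

def flow_divergence_curl_shear(flow):
    """Compute per-pixel divergence, curl and shear magnitudes from a
    dense optical flow field of shape (H, W, 2), where flow[..., 0] = u
    (horizontal) and flow[..., 1] = v (vertical)."""
    u, v = flow[..., 0], flow[..., 1]
    du_dy, du_dx = np.gradient(u)    # np.gradient differentiates per axis
    dv_dy, dv_dx = np.gradient(v)
    divergence = du_dx + dv_dy
    curl = dv_dx - du_dy
    # Shear magnitude from the symmetric, trace-free part of the gradient.
    shear = np.sqrt((du_dx - dv_dy) ** 2 + (du_dy + dv_dx) ** 2)
    return divergence, curl, shear

# These scalar maps can then be aggregated over space-time cells and
# encoded (e.g. with VLAD or Fisher vectors) to form the final descriptor.
```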

  19. VERSE - Virtual Equivalent Real-time Simulation

    Science.gov (United States)

    Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel

    2005-01-01

    Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher fidelity modeling and more comprehensive debugging capabilities, combined with a limited amount of computational resources, calls for a non-real-time simulation environment that mimics the real-time environment. By creating a non-real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: the Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event-driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting RTAI's many capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint, together with use of the same API, allows users to easily run the same application in both real-time and virtual-time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.

  20. Optical Pattern Recognition

    Science.gov (United States)

    Yu, Francis T. S.; Jutamulia, Suganda

    2008-10-01

    Contributors; Preface; 1. Pattern recognition with optics Francis T. S. Yu and Don A. Gregory; 2. Hybrid neural networks for nonlinear pattern recognition Taiwei Lu; 3. Wavelets, optics, and pattern recognition Yao Li and Yunglong Sheng; 4. Applications of the fractional Fourier transform to optical pattern recognition David Mendlovic, Zeev Zalesky and Haldum M. Oxaktas; 5. Optical implementation of mathematical morphology Tien-Hsin Chao; 6. Nonlinear optical correlators with improved discrimination capability for object location and recognition Leonid P. Yaroslavsky; 7. Distortion-invariant quadratic filters Gregory Gheen; 8. Composite filter synthesis as applied to pattern recognition Shizhou Yin and Guowen Lu; 9. Iterative procedures in electro-optical pattern recognition Joseph Shamir; 10. Optoelectronic hybrid system for three-dimensional object pattern recognition Guoguang Mu, Mingzhe Lu and Ying Sun; 11. Applications of photorefractive devices in optical pattern recognition Ziangyang Yang; 12. Optical pattern recognition with microlasers Eung-Gi Paek; 13. Optical properties and applications of bacteriorhodopsin Q. Wang Song and Yu-He Zhang; 14. Liquid-crystal spatial light modulators Aris Tanone and Suganda Jutamulia; 15. Representations of fully complex functions on real-time spatial light modulators Robert W. Cohn and Laurence G. Hassbrook; Index.

  1. Star pattern recognition algorithm aided by inertial information

    Science.gov (United States)

    Liu, Bao; Wang, Ke-dong; Zhang, Chao

    2011-08-01

    Star pattern recognition is one of the key problems of celestial navigation. Traditional star pattern recognition approaches, such as the triangle algorithm and the star angular distance algorithm, are all-sky matching methods whose recognition speed is low and whose success rate is limited. The real-time performance and reliability of a CNS (Celestial Navigation System) are therefore reduced to some extent, especially for a maneuvering spacecraft. However, if the direction of the camera optical axis can be estimated by another navigation system such as an INS (Inertial Navigation System), star pattern recognition can be carried out in the vicinity of the estimated direction of the optical axis. The benefits of the INS-aided star pattern recognition algorithm include improved matching speed and an improved success rate. In this paper, the direction of the camera optical axis, the local matching sky, and the projection of stars on the image plane are first estimated with the aid of the INS. Then, the local star catalog for star pattern recognition is established dynamically in real time. The star images extracted in the camera plane are matched against the local sky. Compared to traditional all-sky star pattern recognition algorithms, the memory required to store the star catalog is reduced significantly. Finally, the INS-aided star pattern recognition algorithm is validated by simulations. The simulation results show that the algorithm's computation time is reduced sharply and its matching success rate is improved greatly.
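
    Building the local catalog amounts to keeping only the stars within the camera's field of view around the INS-estimated boresight. A minimal sketch with assumed array layouts is shown below.

```python
import numpy as np

def local_star_catalog(catalog_unit_vecs, boresight_unit_vec, fov_half_angle_deg):
    """Select catalog stars within a cone around the estimated optical axis.

    catalog_unit_vecs  : (n, 3) unit direction vectors of catalog stars
    boresight_unit_vec : (3,) INS-estimated camera boresight (unit vector)
    Returns the indices of stars inside the field-of-view cone.
    """
    cos_limit = np.cos(np.radians(fov_half_angle_deg))
    cos_angles = catalog_unit_vecs @ boresight_unit_vec
    return np.where(cos_angles >= cos_limit)[0]

# The returned subset is then used for angular-distance or triangle matching
# against the stars extracted from the camera image.
```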

  2. Ground motion input in seismic evaluation studies

    International Nuclear Information System (INIS)

    Sewell, R.T.; Wu, S.C.

    1996-07-01

    This report documents research pertaining to conservatism and variability in seismic risk estimates. Specifically, it examines whether or not artificial motions produce unrealistic evaluation demands, i.e., demands significantly inconsistent with those expected from real earthquake motions. To study these issues, two types of artificial motions are considered: (a) motions with smooth response spectra, and (b) motions with realistic variations in spectral amplitude across vibration frequency. For both types of artificial motion, time histories are generated to match target spectral shapes. For comparison, empirical motions representative of those that might result from strong earthquakes in the Eastern U.S. are also considered. The study findings suggest that artificial motions resulting from typical simulation approaches (aimed at matching a given target spectrum) are generally adequate and appropriate in representing the peak-response demands that may be induced in linear structures and equipment responding to real earthquake motions. Also, given similar input Fourier energies at high frequencies, the levels of input Fourier energy at low frequencies observed for artificial motions are substantially similar to those noted in real earthquake motions. In addition, the study reveals specific problems resulting from the application of Western-U.S.-type motions for seismic evaluation of Eastern U.S. nuclear power plants

  3. ISTTOK real-time architecture

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, Ivo S., E-mail: ivoc@ipfn.ist.utl.pt; Duarte, Paulo; Fernandes, Horácio; Valcárcel, Daniel F.; Carvalho, Pedro J.; Silva, Carlos; Duarte, André S.; Neto, André; Sousa, Jorge; Batista, António J.N.; Hekkert, Tiago; Carvalho, Bernardo B.

    2014-03-15

    Highlights: • All real-time diagnostics and actuators were integrated in the same control platform. • A 100 μs control cycle was achieved under the MARTe framework. • Time-windows based control with several event-driven control strategies implemented. • AC discharges with exception handling on iron core flux saturation. • An HTML discharge configuration was developed for configuring the MARTe system. - Abstract: The ISTTOK tokamak was upgraded with a plasma control system based on the Advanced Telecommunications Computing Architecture (ATCA) standard. This control system was designed to improve the discharge stability and to extend the operational space to alternate plasma current (AC) discharges as part of the ISTTOK scientific program. In order to accomplish these objectives, all ISTTOK diagnostics and actuators relevant for real-time operation were integrated in the control system. The control system was programmed in C++ over the Multi-threaded Application Real-Time executor (MARTe), which provides, among other features, a real-time scheduler, an interrupt handler, an intercommunication interface between code blocks and a clearly bounded interface with the external devices. As a complement to the MARTe framework, the BaseLib2 library provides the foundations for data and code introspection and also a Hypertext Transfer Protocol (HTTP) server service. Taking advantage of the modular nature of MARTe, the algorithms for each diagnostic's data processing, discharge timing, context switching, control, and actuator output reference generation run on well-defined blocks of code named Generic Application Modules (GAMs). This approach allows reusability of the code and simplified simulation, replacement or editing without changing the remaining GAMs. The ISTTOK control system GAMs run sequentially each 100 μs cycle on an Intel® Q8200 4-core processor running at 2.33 GHz located in the ATCA crate. Two boards (inside the ATCA crate) with 32 analog

  5. Real-time Collision Avoidance and Path Optimizer for Semi-autonomous UAVs.

    Science.gov (United States)

    Hawary, A. F.; Razak, N. A.

    2018-05-01

    Whilst a UAV offers a potentially cheaper and more localized observation platform than current satellite or land-based approaches, it requires an advanced path planner to reveal its true potential, particularly in real-time missions. Manual control by a human operator is limited by line of sight and prone to errors due to carelessness and fatigue. A good alternative is to equip the UAV with semi-autonomous capabilities able to navigate via a pre-planned route in real time. In this paper, we propose an easy and practical path optimizer based on the classical Travelling Salesman Problem that adopts a brute-force search method to re-optimize the route in the event of collisions detected by a range-finder sensor. The former utilizes a Simple Genetic Algorithm and the latter uses the Nearest Neighbour algorithm. Both algorithms are combined to optimize the route and avoid collisions at once. Although many researchers have proposed various path planning algorithms, we find that they are difficult to integrate on a basic UAV model and often lack a real-time collision-detection optimizer. Therefore, we explore the practical benefit of this approach using on-board Arduino and Ardupilot controllers by manually emulating the motion of an actual UAV model prior to tests at the flying site. The results showed that the range-finder sensor provides real-time data to the algorithm to find a collision-free path and eventually optimize the route successfully.
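
    The re-planning step described here is essentially a nearest-neighbour tour over the remaining waypoints. A minimal sketch under assumed data structures follows; the GA-based global optimization is omitted.

```python
import numpy as np

def nearest_neighbour_route(current_pos, waypoints, blocked=None):
    """Greedy nearest-neighbour ordering of the remaining waypoints,
    skipping any waypoint indices reported as blocked by the range finder.

    current_pos : (2,) or (3,) current UAV position
    waypoints   : (n, d) remaining waypoint coordinates
    blocked     : optional set of waypoint indices to avoid
    Returns the visiting order as a list of waypoint indices.
    """
    blocked = blocked or set()
    remaining = [i for i in range(len(waypoints)) if i not in blocked]
    route, pos = [], np.asarray(current_pos, dtype=float)
    while remaining:
        dists = [np.linalg.norm(waypoints[i] - pos) for i in remaining]
        nxt = remaining.pop(int(np.argmin(dists)))
        route.append(nxt)
        pos = waypoints[nxt]
    return route

# When the range finder flags an obstacle on the current leg, the affected
# waypoint can be added to `blocked` and the remaining route re-computed.
```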

  6. The first clinical implementation of real-time image-guided adaptive radiotherapy using a standard linear accelerator.

    Science.gov (United States)

    Keall, Paul J; Nguyen, Doan Trang; O'Brien, Ricky; Caillet, Vincent; Hewson, Emily; Poulsen, Per Rugaard; Bromley, Regina; Bell, Linda; Eade, Thomas; Kneebone, Andrew; Martin, Jarad; Booth, Jeremy T

    2018-04-01

    Until now, real-time image guided adaptive radiation therapy (IGART) has been the domain of dedicated cancer radiotherapy systems. The purpose of this study was to clinically implement and investigate real-time IGART using a standard linear accelerator. We developed and implemented two real-time technologies for standard linear accelerators: (1) Kilovoltage Intrafraction Monitoring (KIM) that finds the target and (2) multileaf collimator (MLC) tracking that aligns the radiation beam to the target. Eight prostate SABR patients were treated with this real-time IGART technology. The feasibility, geometric accuracy and the dosimetric fidelity were measured. Thirty-nine out of forty fractions with real-time IGART were successful (95% confidence interval 87-100%). The geometric accuracy of the KIM system was -0.1 ± 0.4, 0.2 ± 0.2 and -0.1 ± 0.6 mm in the LR, SI and AP directions, respectively. The dose reconstruction showed that real-time IGART more closely reproduced the planned dose than that without IGART. For the largest motion fraction, with real-time IGART 100% of the CTV received the prescribed dose; without real-time IGART only 95% of the CTV would have received the prescribed dose. The clinical implementation of real-time image-guided adaptive radiotherapy on a standard linear accelerator using KIM and MLC tracking is feasible. This achievement paves the way for real-time IGART to be a mainstream treatment option. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    Science.gov (United States)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  8. Real-time 3D-surface-guided head refixation useful for fractionated stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Li Shidong; Liu Dezhi; Yin Gongjie; Zhuang Ping; Geng, Jason

    2006-01-01

    Accurate and precise head refixation in fractionated stereotactic radiotherapy has been achieved through alignment of real-time 3D-surface images with a reference surface image. The reference surface image is either a 3D optical surface image taken at simulation with the desired treatment position, or a CT/MRI-surface rendering in the treatment plan with corrections for patient motion during CT/MRI scans and partial volume effects. The real-time 3D surface images are rapidly captured by using a 3D video camera mounted on the ceiling of the treatment vault. Any facial expression such as mouth opening that affects surface shape and location can be avoided using a new facial monitoring technique. The image artifacts on the real-time surface can generally be removed by setting a threshold of jumps at the neighboring points while preserving detailed features of the surface of interest. Such a real-time surface image, registered in the treatment machine coordinate system, provides a reliable representation of the patient head position during the treatment. A fast automatic alignment between the real-time surface and the reference surface using a modified iterative-closest-point method leads to an efficient and robust surface-guided target refixation. Experimental and clinical results demonstrate the excellent efficacy of <2 min set-up time, the desired accuracy and precision of <1 mm in isocenter shifts, and <1 deg. in rotation
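
    The surface alignment step is a rigid-body registration in the spirit of iterative closest point. The sketch below is a bare-bones ICP using an SVD-based rigid fit; it illustrates the principle only and is not the modified algorithm used clinically here.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, reference, n_iter=30):
    """Align a real-time surface point cloud to a reference surface."""
    tree = cKDTree(reference)
    R_total, t_total = np.eye(3), np.zeros(3)
    current = source.copy()
    for _ in range(n_iter):
        _, idx = tree.query(current)              # closest reference points
        R, t = rigid_fit(current, reference[idx])
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total                        # maps source -> reference

# The recovered rotation and translation give the couch/isocenter shifts
# needed to restore the planned head position.
```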

  9. 3D Display of Spacecraft Dynamics Using Real Telemetry

    Directory of Open Access Journals (Sweden)

    Sanguk Lee

    2002-12-01

    3D display of spacecraft motion using telemetry data received from the satellite in real time is described. Telemetry data are converted to the appropriate form for 3D display by the real-time preprocessor. Stored playback telemetry data can also be processed for the display. 3D display of spacecraft motion using real telemetry data provides an intuitive comprehension of spacecraft dynamics.

  10. Track recognition in the central drift chamber of the SAPHIR detector at ELSA and first reconstruction of real tracks

    International Nuclear Information System (INIS)

    Korn, P.

    1991-02-01

    The FORTRAN program for pattern recognition in the central drift chamber of SAPHIR has been modified in order to find tracks with more than one missing wire signal and has been optimized for resolving the left/right ambiguities. The second part of this report deals with the reconstruction of some real tracks (γ → e⁺e⁻) measured with SAPHIR. The efficiency of the central drift chamber and the space-to-drift-time relation are discussed. (orig.)

  11. Semantic Activity Recognition

    OpenAIRE

    Thonnat , Monique

    2008-01-01

    Extracting semantics automatically from visual data is a real challenge. We describe in this paper how recent work in cognitive vision leads to significant results in activity recognition for visual surveillance and video monitoring. In particular, we present work performed in the domain of video understanding in our PULSAR team at INRIA in Sophia Antipolis. Our main objective is to analyse in real-time video streams captured by static video cameras and to recogniz...

  12. Automatic speech recognition (zero crossing method). Automatic recognition of isolated vowels

    International Nuclear Information System (INIS)

    Dupeyrat, Benoit

    1975-01-01

    This note describes a method for recognizing isolated vowels, using preprocessing of the vocal signal. The processing extracts the extrema of the vocal signal and the time intervals separating them (the zero-crossing distances of the first derivative of the signal). The recognition of vowels uses normalized histograms of the values of these intervals. The program determines a distance between the histogram of the sound to be recognized and histogram models built during a learning phase. The results, processed in real time on a minicomputer, are relatively independent of the speaker, provided the fundamental frequency does not vary too much (i.e. speakers of the same sex). (author) [fr
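
    A minimal sketch of the interval-histogram idea is shown below, assuming a 1-D signal array and per-vowel model histograms learned beforehand; the bin edges and the distance measure are illustrative choices, not those of the original note.

```python
import numpy as np

def extrema_interval_histogram(signal, sample_rate, bins):
    """Histogram of time intervals between successive extrema of the signal
    (i.e. zero crossings of its first derivative), normalized to sum to 1."""
    deriv = np.diff(signal)
    sign_change = np.where(np.diff(np.sign(deriv)) != 0)[0]  # extrema indices
    intervals = np.diff(sign_change) / sample_rate
    hist, _ = np.histogram(intervals, bins=bins)
    total = hist.sum()
    return hist / total if total else hist.astype(float)

def classify_vowel(signal, sample_rate, bins, models):
    """Return the vowel label whose model histogram is closest (L1 distance)."""
    h = extrema_interval_histogram(signal, sample_rate, bins)
    return min(models, key=lambda label: np.abs(h - models[label]).sum())

# models = {"a": hist_a, "e": hist_e, ...} built during the learning phase.
```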

  13. Concept, Implementation and Testing of PRESTo: Real-time experimentation in Southern Italy and worldwide applications

    Science.gov (United States)

    Zollo, Aldo; Emolo, Antonio; Festa, Gaetano; Picozzi, Matteo; Elia, Luca; Martino, Claudio; Colombelli, Simona; Brondi, Piero; Caruso, Alessandro

    2016-04-01

    The past two decades have witnessed huge progress in the development, implementation and testing of Earthquake Early Warning Systems (EEWS) worldwide, as the result of a joint effort of the seismological and earthquake engineering communities to set up robust and efficient methodologies for real-time seismic risk mitigation. This work presents an overview of the worldwide applications of the system PRESTo (PRobabilistic and Evolutionary early warning SysTem), a highly configurable and easily portable platform for Earthquake Early Warning developed by the RISSCLab group of the University of Naples Federico II. In particular, we first present the results of the real-time experimentation of PRESTo on the data streams of the Irpinia Seismic Network (ISNet) in Southern Italy. ISNet is a dense, high-dynamic-range earthquake observing system which operates in true real-time mode, thanks to a mixed data transmission system based on proprietary digital terrestrial links and standard ADSL and UMTS technologies. Using the seedlink protocol, data are transferred to the network center unit running the software platform PRESTo, which processes the real-time data streams, estimates source parameters and issues the alert. The software platform PRESTo uses a P-wave, network-based approach which has evolved and improved over time since its first release. In its original version it consisted of a series of modules aimed at event detection/picking, probabilistic real-time earthquake location and magnitude estimation, and prediction of peak ground motion at distant sites through ground motion prediction equations for the area. In recent years, PRESTo has also been implemented at accelerometric and broad-band seismic networks in South Korea, Romania, North-East Italy, and Turkey, and tested off-line in the Iberian Peninsula, Israel, and Japan. Moreover, the feasibility of a PRESTo-based EEWS at national scale in Italy has been tested

  14. Real-time segmentation of multiple implanted cylindrical liver markers in kilovoltage and megavoltage x-ray images

    DEFF Research Database (Denmark)

    Fledelius, Walther; Worm, Esben Schjødt; Høyer, Morten

    2014-01-01

    (CBCT) projections, for real-time motion management. Thirteen patients treated with conformal stereotactic body radiation therapy in three fractions had 2-3 cylindrical gold markers implanted in the liver prior to treatment. At each fraction, the projection images of a pre-treatment CBCT scan were used for automatic generation of a 3D marker model that consisted of the size, orientation, and estimated 3D trajectory of each marker during the CBCT scan. The 3D marker model was used for real-time template-based segmentation in subsequent x-ray images by projecting each marker's 3D shape and likely 3D motion range onto the imager plane. The segmentation was performed in intra-treatment kV images (526 marker traces, 92 097 marker projections) and MV images (88 marker traces, 22 382 marker projections), and in post-treatment CBCT projections (42 CBCT scans, 71 381 marker projections). 227 kV marker traces...

  15. Merged Real Time GNSS Solutions for the READI System

    Science.gov (United States)

    Santillan, V. M.; Geng, J.

    2014-12-01

    Real-time measurements from increasingly dense Global Navigation Satellite System (GNSS) networks located throughout the western US offer a substantial, albeit largely untapped, contribution towards the mitigation of seismic and other natural hazards. Analyzed continuously in real time, over 600 instruments currently blanket the San Andreas and Cascadia fault systems of the North American plate boundary and can provide on-the-fly characterization of transient ground displacements, highly complementary to traditional seismic strong-motion monitoring. However, the utility of GNSS systems depends on their resolution, and merged solutions from two or more independent estimation strategies have been shown to offer lower scatter and higher resolution. Towards this end, independent real-time GNSS solutions produced by Scripps Inst. of Oceanography and Central Washington University (PANGA) are now being formally combined in pursuit of NASA's Real-Time Earthquake Analysis for Disaster Mitigation (READI) positioning goals. CWU produces precise point positioning (PPP) solutions while SIO produces ambiguity-resolved PPP solutions (PPP-AR). The PPP-AR solutions have a ~5 mm RMS scatter in the horizontal and ~10 mm in the vertical; however, PPP-AR solutions can take tens of minutes to re-converge in the case of data gaps. The PPP solutions produced by CWU use pre-cleaned data in which biases are estimated as non-integer ambiguities prior to formal positioning with GIPSY 6.2 using a real-time stream editor developed at CWU. These solutions show ~20 mm RMS scatter in the horizontal and ~50 mm RMS scatter in the vertical but re-converge within 2 min or less following cycle slips or data outages. We have implemented the formal combination of the CWU and SCRIPPS ENU displacements using the independent solutions as input measurements to a simple 3-element state Kalman filter plus white noise. We are now merging solutions from 90 stations, including 30 in Cascadia, 39 in the Bay Area, and 21
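
    Merging two displacement streams with a Kalman filter can be sketched per component as below: the state is the E, N or U displacement, both solutions enter as independent measurements, and the noise variances are illustrative assumptions rather than the operational values.

```python
import numpy as np

def merge_streams(z_ppp, z_ppp_ar, r_ppp=20.0**2, r_ppp_ar=5.0**2, q=1.0):
    """Combine two displacement time series (one state component, e.g. East)
    with a random-walk Kalman filter treating both streams as independent
    measurements of the same displacement.

    z_ppp, z_ppp_ar : equal-length arrays of epoch-by-epoch displacements (mm)
    r_*             : measurement variances (mm^2); q : process-noise variance
    Returns the merged estimate at each epoch.
    """
    x, p = 0.0, 1e6               # diffuse initial state
    merged = np.empty(len(z_ppp))
    for k in range(len(z_ppp)):
        p += q                    # predict: random-walk state
        for z, r in ((z_ppp[k], r_ppp), (z_ppp_ar[k], r_ppp_ar)):
            if np.isnan(z):       # tolerate gaps in either stream
                continue
            gain = p / (p + r)
            x += gain * (z - x)
            p *= (1.0 - gain)
        merged[k] = x
    return merged
```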

  16. Evaluation of Real-Time Performance of the Virtual Seismologist Earthquake Early Warning Algorithm in Switzerland and California

    Science.gov (United States)

    Behr, Y.; Cua, G. B.; Clinton, J. F.; Heaton, T. H.

    2012-12-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms, together with ElarmS (Allen and Kanamori, 2003) and On-Site (Wu and Kanamori, 2005; Boese et al., 2008), that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS will be installed and tested at other European networks. VS has been running in real time on stations of the Southern California Seismic Network (SCSN) since July 2008, and on stations of the Berkeley Digital Seismic Network (BDSN) and the USGS Menlo Park strong motion network in northern California since February 2009. In Switzerland, VS has been running in real time on stations monitored by the Swiss Seismological Service (including stations from Austria, France, Germany, and Italy) since 2010. We present summaries of the real-time performance of VS in Switzerland and California over the past two and three years, respectively. The empirical relationships used by VS to estimate magnitudes and ground motion, originally derived from southern California data, are demonstrated to perform well in northern California and Switzerland. Implementation in real time and off-line testing in Europe will potentially be extended to southern Italy, western Greece, Istanbul, Romania, and Iceland. Integration of the VS algorithm into both the CISN Advanced

  17. Operational tracking of lava lake surface motion at Kīlauea Volcano, Hawai‘i

    Science.gov (United States)

    Patrick, Matthew R.; Orr, Tim R.

    2018-03-08

    Surface motion is an important component of lava lake behavior, but previous studies of lake motion have focused on short time intervals. In this study, we implement the first continuous, real-time operational routine for tracking lava lake surface motion, applying the technique to the persistent lava lake in Halema‘uma‘u Crater at the summit of Kīlauea Volcano, Hawai‘i. We measure lake motion using a fixed thermal camera positioned on the crater rim, which transmits its images to the Hawaiian Volcano Observatory (HVO) in real time. We use an existing optical flow toolbox in Matlab to calculate motion vectors, and we track both the position of lava upwelling in the lake and the intensity of spattering on the lake surface. Over the past 2 years, real-time tracking of lava lake surface motion at Halema‘uma‘u has been an important part of monitoring the lake’s activity, serving as another valuable tool in the volcano monitoring suite at HVO.
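
    The operational routine relies on a Matlab optical flow toolbox; purely for illustration, the Python sketch below applies OpenCV's Farneback dense optical flow to two consecutive thermal frames and uses the divergence of the flow field as a crude proxy for the upwelling location. The parameter values and the divergence heuristic are assumptions, not HVO's actual processing.

        import cv2
        import numpy as np

        def lake_motion(prev_frame, curr_frame):
            """Dense optical flow between consecutive grayscale thermal frames."""
            flow = cv2.calcOpticalFlowFarneback(
                prev_frame, curr_frame, None,
                pyr_scale=0.5, levels=3, winsize=15,
                iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
            u, v = flow[..., 0], flow[..., 1]
            # Positive divergence marks regions where the lake surface spreads
            # apart, i.e. a candidate upwelling zone.
            div = np.gradient(u, axis=1) + np.gradient(v, axis=0)
            div = cv2.GaussianBlur(div.astype(np.float32), (21, 21), 0)
            upwelling_yx = np.unravel_index(np.argmax(div), div.shape)
            mean_speed = np.hypot(u, v).mean()     # overall surface motion
            return flow, upwelling_yx, mean_speed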

  18. Eye Tracking Reveals a Crucial Role for Facial Motion in Recognition of Faces by Infants

    Science.gov (United States)

    Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-01-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was…

  19. Real-Time Cosmology with Gaia: Developing the Theory to Use Extragalactic Proper Motions to Make Dynamical Cosmological Tests, to Measure Geometric Distances, and to Detect Primordial Gravitational Waves

    Science.gov (United States)

    Darling, Jeremy

    A new field of study, "real-time cosmology," is now possible. It involves observing a dynamic universe that can be seen to change over human timescales. Most cosmological observations are geometrical, using standard candles or rulers to measure the expansion history and curvature as light propagates through the universe. Real-time cosmological measurements are dynamical, revealing the changing geometry of the universe - thus often providing geometrical distances independent of the canonical cosmological distance ladder - and are typically orthogonal to customary cosmological tests. This field of inquiry is no longer far-fetched: this proposal demonstrates, using extant data, that many types of measurement are now within a factor of a few of being detectable, but the theory will very soon lag the observational capabilities. The Gaia mission will provide astrometry and proper motions with precisions of roughly 100 microarcseconds per year for half a million quasars by the end of its 5-year mission, but the theory for how to employ these data for cosmological tests has not been established. This project will develop the theory, models, and methods needed to make optimal use of the Gaia extragalactic proper motion measurements and to make significant new cosmological tests, distance measurements, and mass measurements. Gaia data can provide rich cosmological tests that are nearly model-independent. This work will build the theoretical framework enabling Gaia to measure or constrain: (1) the real-time growth and recession of structures, providing mass and distance measurements; (2) extragalactic parallax for a statistical sample and for individual galaxies, thus providing geometric distances; (3) the primordial stochastic long-period gravitational wave background, which deflects quasar light in a quadrupolar proper motion pattern; and (4) cosmic shear, rotation, bulk motion, and local voids that may manifest as an apparent acceleration attributed to dark energy. One can also test the
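
    For context - this formalism is standard in astrometric gravitational-wave work and is not quoted from the proposal - the quasar proper-motion field on the sky is commonly expanded in E- and B-mode vector spherical harmonics,

        \[
        \boldsymbol{\mu}(\hat{\mathbf n}) \;=\; \sum_{\ell \ge 1}\sum_{m=-\ell}^{\ell}
        \left[\, s_{\ell m}\,\mathbf{Y}^{E}_{\ell m}(\hat{\mathbf n})
               + t_{\ell m}\,\mathbf{Y}^{B}_{\ell m}(\hat{\mathbf n}) \,\right],
        \]

    and the quadrupolar gravitational-wave signature mentioned above corresponds to power appearing at \(\ell = 2\), with comparable contributions expected in both the E (curl-free) and B (divergence-free) modes, which helps distinguish it from aberration-like dipole effects.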

  20. Real-time segmentation of multiple implanted cylindrical liver markers in kilovoltage and megavoltage x-ray images

    International Nuclear Information System (INIS)

    Fledelius, W; Worm, E; Høyer, M; Grau, C; Poulsen, P R

    2014-01-01

    Gold markers implanted in or near a tumor can be used as x-ray visible landmarks for image based tumor localization. The aim of this study was to develop and demonstrate fast and reliable real-time segmentation of multiple liver tumor markers in intra-treatment kV and MV images and in cone-beam CT (CBCT) projections, for real-time motion management. Thirteen patients treated with conformal stereotactic body radiation therapy in three fractions had 2–3 cylindrical gold markers implanted in the liver prior to treatment. At each fraction, the projection images of a pre-treatment CBCT scan were used for automatic generation of a 3D marker model that consisted of the size, orientation, and estimated 3D trajectory of each marker during the CBCT scan. The 3D marker model was used for real-time template based segmentation in subsequent x-ray images by projecting each marker's 3D shape and likely 3D motion range onto the imager plane. The segmentation was performed in intra-treatment kV images (526 marker traces, 92 097 marker projections) and MV images (88 marker traces, 22 382 marker projections), and in post-treatment CBCT projections (42 CBCT scans, 71 381 marker projections). 227 kV marker traces with low mean contrast-to-noise ratio were excluded as markers were not visible due to MV scatter. Online segmentation times measured for a limited dataset were used for estimating real-time segmentation times for all images. The percentage of detected markers was 94.8% (kV), 96.1% (MV), and 98.6% (CBCT). For the detected markers, the real-time segmentation was erroneous in 0.2–0.31% of the cases. The mean segmentation time per marker was 5.6 ms [2.1–12 ms] (kV), 5.5 ms [1.6–13 ms] (MV), and 6.5 ms [1.8–15 ms] (CBCT). Fast and reliable real-time segmentation of multiple liver tumor markers in intra-treatment kV and MV images and in CBCT projections was demonstrated for a large dataset. (paper)
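
    The segmentation described above is template based: the 3D marker model supplies both the expected 2D appearance of each marker and its likely search region in every incoming frame. The Python sketch below illustrates that idea with normalized cross-correlation in OpenCV; the window size, scoring function, and rejection threshold are illustrative assumptions rather than the authors' implementation.

        import cv2
        import numpy as np

        def segment_marker(image, template, predicted_xy, half_window=40):
            """Locate one marker near its predicted position by template matching.

            image        : intra-treatment kV/MV frame (grayscale)
            template     : expected 2D marker appearance, e.g. the 3D marker
                           shape projected onto the imager plane
            predicted_xy : (x, y) position predicted from the 3D motion model
            half_window  : half-size of the search region in pixels
            """
            x0, y0 = predicted_xy
            th, tw = template.shape
            # Restrict the search to the marker's likely motion range.
            y_min, x_min = max(0, y0 - half_window), max(0, x0 - half_window)
            roi = image[y_min:y0 + half_window + th, x_min:x0 + half_window + tw]
            score = cv2.matchTemplate(roi.astype(np.float32),
                                      template.astype(np.float32),
                                      cv2.TM_CCOEFF_NORMED)
            _, peak, _, peak_loc = cv2.minMaxLoc(score)
            # Convert the best match (template top-left corner) back to image
            # coordinates of the marker centre.
            cx = x_min + peak_loc[0] + tw // 2
            cy = y_min + peak_loc[1] + th // 2
            return (cx, cy), peak            # reject if peak falls below a threshold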