WorldWideScience

Sample records for video frame rates

  1. High Resolution, High Frame Rate Video Technology

    Science.gov (United States)

    1990-01-01

    Papers and working group summaries presented at the High Resolution, High Frame Rate Video (HHV) Workshop are compiled. The HHV system is intended for future use on the Space Shuttle and Space Station Freedom. The Workshop was held for the dual purpose of: (1) allowing potential scientific users to assess the utility of the proposed system for monitoring microgravity science experiments; and (2) letting technical experts from industry recommend improvements to the proposed near-term HHV system. The following topics are covered: (1) state of the art in video system performance; (2) development plan for the HHV system; (3) advanced technology for image gathering, coding, and processing; (4) data compression applied to HHV; (5) data transmission networks; and (6) results of the users' requirements survey conducted by NASA.

  2. Robust gait recognition from extremely low frame-rate videos

    OpenAIRE

    Guan, Yu; Li, Chang-Tsun; Choudhury, Sruti Das

    2013-01-01

    In this paper, we propose a gait recognition method for extremely low frame-rate videos. Different from the popular temporal reconstruction-based methods, the proposed method uses the average gait over the whole sequence as the input feature template. Assuming that the effects caused by an extremely low frame rate or large gait fluctuations are intra-class variations that the gallery data fails to capture, we build a general model based on the random subspace method. More specifically, a number of weak classi...

  3. Effects Of Frame Rates In Video Displays

    Science.gov (United States)

    Kellogg, Gary V.; Wagner, Charles A.

    1991-01-01

    This report describes an experiment on the subjective effects of the rates at which a display on a cathode-ray tube (CRT) in a flight simulator is updated and refreshed. The experiment was conducted to learn more about the jumping, blurring, flickering, and multiple lines that an observer perceives when a line moves at high speed across the screen of a calligraphic CRT.

  4. Reducing video frame rate increases remote optimal focus time

    Science.gov (United States)

    Haines, Richard F.

    1993-01-01

    Twelve observers made best optical focus adjustments to a microscope whose high-resolution pattern was video monitored and displayed first on a National Television System Committee (NTSC) analog color monitor and second on a digitally compressed computer monitor screen at frame rates ranging (in six steps) from 1.5 to 30 frames per second (fps). This was done to determine whether reducing the frame rate affects the image focus. Reducing frame rate has been shown to be an effective and acceptable means of reducing transmission bandwidth of dynamic video imagery sent from Space Station Freedom (SSF) to ground scientists. Three responses were recorded per trial: time to complete the focus adjustment, number of changes of focus direction, and subjective rating of final image quality. It was found that: the average time to complete the focus setting increases from 4.5 sec at 30 fps to 7.9 sec at 1.5 fps (statistical probability = 1.2 × 10^-7); there is no significant difference in the number of changes in the direction of focus adjustment across these frame rates; and there is no significant change in subjectively determined final image quality across these frame rates. These data can be used to help pre-plan future remote optical-focus operations on SSF.

  5. Frame Rate versus Spatial Quality: Which Video Characteristics Do Matter?

    DEFF Research Database (Denmark)

    Korhonen, Jari; Reiter, Ulrich; Ukhanova, Ann

    2013-01-01

    Several studies have shown that the relationship between perceived video quality and frame rate is dependent on the video content. In this paper, we have analyzed the content characteristics and compared them against the subjective results derived from preference decisions between different spatial and temporal quality levels. We also propose simple yet powerful metrics for characterizing spatial and temporal properties of a video sequence, and demonstrate how these metrics can be applied for evaluating the relative impact of spatial and temporal quality on the perceived overall quality.

  6. The effects of frame-rate and image quality on perceived video quality in videoconferencing

    OpenAIRE

    Thakur, Aruna; Gao, Chaunsi; Larsson, Andreas; Parnes, Peter

    2001-01-01

    This report discusses the effect of frame-rate and image quality on the perceived video quality in a specific videoconferencing application (MarratechPro). Subjects with various levels of videoconferencing experience took part in four experiments wherein they gave their opinions on the video quality under variations in frame-rate and image quality. The results of the experiments showed that the subjects preferred high frame rate over high image quality under the condition of limited bandwidth. ...

  7. A video event trigger for high frame rate, high resolution video technology

    Science.gov (United States)

    Williams, Glenn L.

    1991-01-01

    When video replaces film, the digitized video data accumulates very rapidly, leading to a difficult and costly data storage problem. One solution exists for cases when the video images represent continuously repetitive 'static scenes' containing negligible activity, occasionally interrupted by short events of interest. Minutes or hours of redundant video frames can be ignored, and not stored, until activity begins. A new, highly parallel digital state machine generates a digital trigger signal at the onset of a video event. High-capacity random access memory storage, coupled with newly available fuzzy logic devices, permits the monitoring of a video image stream for long-term or short-term changes caused by spatial translation, dilation, appearance, disappearance, or color change in a video object. Pre-trigger and post-trigger storage techniques are then adaptable for archiving the digital stream from only the significant video images.

  8. FMO-based H.264 frame layer rate control for low bit rate video transmission

    Science.gov (United States)

    Cajote, Rhandley D.; Aramvith, Supavadee; Miyanaga, Yoshikazu

    2011-12-01

    The use of flexible macroblock ordering (FMO) in H.264/AVC improves error resiliency at the expense of reduced coding efficiency with added overhead bits for slice headers and signalling. The trade-off is most severe at low bit rates, where header bits occupy a significant portion of the total bit budget. To better manage the rate and improve coding efficiency, we propose enhancements to the H.264/AVC frame layer rate control, which take into consideration the effects of using FMO for video transmission. In this article, we propose a new header bits model, an enhanced frame complexity measure, a bit allocation and a quantization parameter adjustment scheme. Simulation results show that the proposed improvements achieve better visual quality compared with the JM 9.2 frame layer rate control with FMO enabled using a different number of slice groups. Using FMO as an error resilient tool with better rate management is suitable in applications that have limited bandwidth and in error prone environments such as video transmission for mobile terminals.
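
    As a rough illustration of the kind of frame-layer bit budgeting the abstract describes (not the authors' actual model; every name and constant below is illustrative), one can split the remaining bit budget across the remaining frames, adjust for buffer fullness, and take an estimate of the FMO slice-header/signalling bits off the top:

```python
def frame_bit_target(budget_bits, frames_left, buffer_fullness,
                     header_bits_est, min_bits=200):
    """Hypothetical frame-layer bit allocation sketch.

    budget_bits:     bits still available for the remaining frames
    frames_left:     frames left to encode
    buffer_fullness: current encoder buffer occupancy in bits
    header_bits_est: predicted FMO slice-header/signalling overhead
    """
    base = budget_bits / frames_left          # even split of the remaining budget
    adjusted = base - 0.5 * buffer_fullness   # spend less when the buffer is full
    texture = adjusted - header_bits_est      # header bits come off the top
    return max(min_bits, int(texture))

# 120 kbit left for 30 frames, 1 kbit in the buffer, ~300 header bits expected
print(frame_bit_target(120_000, 30, 1_000, 300))  # 3200
```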

  9. FMO-based H.264 frame layer rate control for low bit rate video transmission

    Directory of Open Access Journals (Sweden)

    Miyanaga Yoshikazu

    2011-01-01

    The use of flexible macroblock ordering (FMO) in H.264/AVC improves error resiliency at the expense of reduced coding efficiency with added overhead bits for slice headers and signalling. The trade-off is most severe at low bit rates, where header bits occupy a significant portion of the total bit budget. To better manage the rate and improve coding efficiency, we propose enhancements to the H.264/AVC frame layer rate control, which take into consideration the effects of using FMO for video transmission. In this article, we propose a new header bits model, an enhanced frame complexity measure, a bit allocation and a quantization parameter adjustment scheme. Simulation results show that the proposed improvements achieve better visual quality compared with the JM 9.2 frame layer rate control with FMO enabled using a different number of slice groups. Using FMO as an error resilient tool with better rate management is suitable in applications that have limited bandwidth and in error prone environments such as video transmission for mobile terminals.

  10. A Framework for the Assessment of Temporal Artifacts in Medium Frame-Rate Binary Video Halftones

    Directory of Open Access Journals (Sweden)

    Rehman Hamood-Ur

    2010-01-01

    Display of a video having a higher number of bits per pixel than that available on the display device requires quantization prior to display. Video halftoning performs this quantization so as to reduce visibility of certain artifacts. In many cases, visibility of one set of artifacts is decreased at the expense of increasing the visibility of another set. In this paper, we focus on two key temporal artifacts, flicker and dirty-window-effect, in binary video halftones. We quantify the visibility of these two artifacts when the video halftone is displayed at medium frame rates (15 to 30 frames per second). We propose new video halftoning methods to reduce visibility of these artifacts. The proposed contributions are (1) an enhanced measure of perceived flicker, (2) a new measure of perceived dirty-window-effect, (3) a new video halftoning method to reduce flicker, and (4) a new video halftoning method to reduce dirty-window-effect.
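
    The flicker the abstract targets can be illustrated with a deliberately naive measure (not the authors' enhanced metric): the mean fraction of pixels that toggle between consecutive binary halftone frames.

```python
import numpy as np

def naive_flicker(frames):
    """Mean fraction of pixels that toggle between consecutive binary frames.

    frames: (T, H, W) array of 0/1 halftone frames. A crude stand-in for a
    perceptual flicker measure: real metrics weight toggles by visual sensitivity.
    """
    f = np.asarray(frames, dtype=np.int8)
    toggles = np.abs(np.diff(f, axis=0))   # (T-1, H, W), 1 where a pixel flipped
    return toggles.mean()

# Example: two 2x2 halftone frames where one of four pixels toggles.
video = np.array([[[0, 1], [1, 0]],
                  [[0, 1], [1, 1]]])
print(naive_flicker(video))  # 0.25
```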

  11. FULL-SEARCH MOTION ESTIMATION WITH HALF-PIXEL ACCURACY FOR VIDEO FRAME RATE UP-CONVERSION

    Directory of Open Access Journals (Sweden)

    Ary Satya Prabhawa

    2014-10-01

    Digital video technology is now widely used in entertainment applications, for example digital TV in HD format. At high frame rates, video coding produces higher bit rates, up to 15-30 fps. The problem is that the transmission channel has limited capacity. The solution is to lower the bit rate by reducing the number of video frames sent to the receiver. This scheme is known as video Frame Rate Up-Conversion (FRUC), in which frames dropped at the encoder are reconstructed at the decoder by generating intermediate frames (FI). Intermediate frames are generated with Motion Compensation Interpolation (MCI). In connection with the FRUC method, this study proposes a unidirectional MCI scheme with half-pixel-accuracy motion search. In this scheme, a candidate motion vector (MV) is searched in the reference frame; motion estimation is performed by inserting interpolated pixels between the existing pixels. The goal is to improve the accuracy of the candidate MV. Simulation results show that the proposed method is better by up to 3.21 dB and 3.11 dB for search ranges of 7 and 15 pixels, respectively, compared with the frame repetition method for the foreman and hall monitor video sequences.
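
    The half-pixel motion search in this record relies on interpolating samples between existing pixels. A minimal sketch of bilinear half-pel upsampling (a common choice; the record does not specify the exact interpolation filter, so this is an assumption) is:

```python
import numpy as np

def half_pel_upsample(frame):
    """Bilinearly interpolate a frame onto a half-pixel grid.

    Returns an array of shape (2H-1, 2W-1): even indices hold the original
    integer-pel samples, odd indices the inserted half-pel samples.
    """
    f = np.asarray(frame, dtype=np.float64)
    H, W = f.shape
    up = np.zeros((2 * H - 1, 2 * W - 1))
    up[::2, ::2] = f                                   # integer positions
    up[::2, 1::2] = (f[:, :-1] + f[:, 1:]) / 2         # horizontal half-pels
    up[1::2, ::2] = (f[:-1, :] + f[1:, :]) / 2         # vertical half-pels
    up[1::2, 1::2] = (f[:-1, :-1] + f[:-1, 1:] +
                      f[1:, :-1] + f[1:, 1:]) / 4      # diagonal half-pels
    return up

f = np.array([[0.0, 4.0], [8.0, 12.0]])
print(half_pel_upsample(f))
```

    A motion search at half-pel accuracy would then evaluate candidate MVs on this denser grid.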

  12. Data compression techniques applied to high resolution high frame rate video technology

    Science.gov (United States)

    Hartz, William G.; Alexovich, Robert E.; Neustadter, Marc S.

    1989-01-01

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression described in the open literature was conducted, examining compression methods employing digital computing. The results of the survey are presented; they include a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term future technology for implementation of video data compression in high-speed imaging systems, and its results are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented and specific compression techniques and implementations are recommended.

  13. A study of video frame rate on the perception of moving imagery detail

    Science.gov (United States)

    Haines, Richard F.; Chuang, Sherry L.

    1993-01-01

    The rate at which each frame of color moving video imagery is displayed was varied in small steps to determine the minimal acceptable frame rate for life scientists viewing white rats within a small enclosure. Two 25-second-long scenes (slow and fast animal motions) were evaluated by nine NASA principal investigators and animal care technicians. The mean minimum acceptable frame rate across these subjects was 3.9 fps for both the slow and fast moving animal scenes. The highest single-trial frame rate averaged across all subjects for the slow and the fast scene was 6.2 and 4.8 fps, respectively. Further research is called for in which frame rate, image size, and color/gray-scale depth are covaried during the same observation period.

  14. Rate-Distortion Optimized Frame Dropping for Multiuser Streaming and Conversational Videos

    OpenAIRE

    Eckehard Steinbach; Jacob Chakareski; Wei Tu

    2008-01-01

    We consider rate-distortion optimized strategies for dropping frames from multiple conversational and streaming videos sharing limited network node resources. The dropping strategies are based on side information that is extracted during encoding and is sent along the regular bitstream. The additional transmission overhead and the computational complexity of the proposed frame dropping schemes are analyzed. Our experimental results show that a significant improvement in end-to-end performance...

  15. Audiovisual presentation of video-recorded stimuli at a high frame rate.

    Science.gov (United States)

    Lidestam, Björn

    2014-06-01

    A method for creating and presenting video-recorded synchronized audiovisual stimuli at a high frame rate, which would be highly useful for psychophysical studies on, for example, just-noticeable differences and gating, is presented. Methods for accomplishing this include recording audio and video separately using an exact synchronization signal, editing the recordings and finding exact synchronization points, and presenting the synchronized audiovisual stimuli with a desired frame rate on a cathode ray tube display using MATLAB and Psychophysics Toolbox 3. The methods from an empirical gating study (Moradi, Lidestam, & Rönnberg, Frontiers in Psychology 4:359, 2013) are presented as an example of the implementation of playback at 120 fps.

  16. Rate-Distortion Optimized Frame Dropping for Multiuser Streaming and Conversational Videos

    Directory of Open Access Journals (Sweden)

    Wei Tu

    2008-01-01

    We consider rate-distortion optimized strategies for dropping frames from multiple conversational and streaming videos sharing limited network node resources. The dropping strategies are based on side information that is extracted during encoding and is sent along the regular bitstream. The additional transmission overhead and the computational complexity of the proposed frame dropping schemes are analyzed. Our experimental results show that a significant improvement in end-to-end performance is achieved compared to priority-based random early dropping.

  17. Bayesian foreground and shadow detection in uncertain frame rate surveillance videos.

    Science.gov (United States)

    Benedek, C; Sziranyi, T

    2008-04-01

    In this paper, we propose a new model for foreground and shadow detection in video sequences. The model works without detailed a priori object-shape information, and it is also appropriate for low and unstable frame rate video sources. Contribution is presented in three key issues: 1) we propose a novel adaptive shadow model, and show the improvements versus previous approaches in scenes with difficult lighting and coloring effects; 2) we give a novel description for the foreground based on spatial statistics of the neighboring pixel values, which enhances the detection of background or shadow-colored object parts; 3) we show how microstructure analysis can be used in the proposed framework as additional feature components improving the results. Finally, a Markov random field model is used to enhance the accuracy of the separation. We validate our method on outdoor and indoor sequences including real surveillance videos and well-known benchmark test sets.

  18. High resolution, high frame rate video technology development plan and the near-term system conceptual design

    Science.gov (United States)

    Ziemke, Robert A.

    1990-01-01

    The objective of the High Resolution, High Frame Rate Video Technology (HHVT) development effort is to provide technology advancements to remove constraints on the amount of high speed, detailed optical data recorded and transmitted for microgravity science and application experiments. These advancements will enable the development of video systems capable of high resolution, high frame rate video data recording, processing, and transmission. Techniques such as multichannel image scan, video parameter tradeoff, and the use of dual recording media were identified as methods of making the most efficient use of the near-term technology.

  19. A change detection approach to moving object detection in low frame-rate video

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory; Harvey, Neal R [Los Alamos National Laboratory; Theiler, James P [Los Alamos National Laboratory

    2009-01-01

    Moving object detection is of significant interest in temporal image analysis, since it is a first step in many object identification and tracking applications. A key component in almost all moving object detection algorithms is a pixel-level classifier, where each pixel is predicted to be either part of a moving object or part of the background. In this paper we investigate a change detection approach to the pixel-level classification problem and evaluate its impact on moving object detection. The change detection approach that we investigate was previously applied to multi- and hyper-spectral datasets, where images were typically taken several days, or months, apart. In this paper, we apply the approach to low frame-rate (1-2 frames per second) video datasets.
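
    A minimal stand-in for such a pixel-level classifier, assuming simple frame differencing rather than the multispectral change-detection machinery the authors actually transfer, is:

```python
import numpy as np

def pixel_change_mask(prev, curr, thresh=25):
    """Classify each pixel as moving (True) or background (False) by
    thresholding the absolute grayscale difference between two frames.
    A deliberately simple illustration; the threshold value is arbitrary.
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 200          # a small bright object appears
print(pixel_change_mask(prev, curr).sum())  # 4 pixels flagged as moving
```

    At 1-2 fps, objects move many pixels between frames, which is exactly why per-pixel change detection (rather than motion-model tracking) becomes attractive.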

  20. Pixel-Level and Robust Vibration Source Sensing in High-Frame-Rate Video Analysis

    Directory of Open Access Journals (Sweden)

    Mingjun Jiang

    2016-11-01

    We investigate the effect of appearance variations on the detectability of vibration feature extraction with pixel-level digital filters for high-frame-rate videos. In particular, we consider robust vibrating object tracking, which is clearly different from conventional appearance-based object tracking with spatial pattern recognition in a high-quality image region of a certain size. For 512 × 512 videos of a rotating fan located at different positions and orientations and captured at 2000 frames per second with different lens settings, we verify how many pixels are extracted as vibrating regions with pixel-level digital filters. The effectiveness of dynamics-based vibration features is demonstrated by examining the robustness against changes in aperture size and the focal condition of the camera lens, the apparent size and orientation of the object being tracked, and its rotational frequency, as well as complexities and movements of background scenes. Tracking experiments for a flying multicopter with rotating propellers are also described to verify the robustness of localization under complex imaging conditions in outside scenarios.
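
    As a sketch of pixel-level vibration sensing under stated assumptions (a per-pixel spectral test around a known target frequency; the authors' actual digital filters may differ), consider:

```python
import numpy as np

def vibrating_pixels(stack, fps, f_target, band=2.0, power_ratio=0.5):
    """Flag pixels whose temporal spectrum concentrates power near f_target.

    stack: (T, H, W) intensity sequence sampled at fps. A pixel is flagged
    when more than `power_ratio` of its AC power lies within `band` Hz of
    the target vibration frequency.
    """
    x = stack - stack.mean(axis=0)                 # remove DC per pixel
    spec = np.abs(np.fft.rfft(x, axis=0)) ** 2     # per-pixel power spectrum
    freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / fps)
    in_band = np.abs(freqs - f_target) <= band
    ratio = spec[in_band].sum(axis=0) / (spec.sum(axis=0) + 1e-12)
    return ratio > power_ratio

T, fps = 200, 2000.0
t = np.arange(T) / fps
stack = np.zeros((T, 2, 2))
stack[:, 0, 0] = np.sin(2 * np.pi * 100.0 * t)     # one pixel vibrates at 100 Hz
print(vibrating_pixels(stack, fps, f_target=100.0))
```

    Because the test is purely temporal, it is insensitive to the apparent size or orientation of the object, which is the robustness property the abstract emphasizes.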

  1. Objective assessment of the impact of frame rate on video quality

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Korhonen, Jari; Forchhammer, Søren

    2012-01-01

    In this paper, we present a novel objective quality metric that takes the impact of frame rate into account. The proposed metric uses PSNR, frame rate and a content dependent parameter that can easily be obtained from spatial and temporal activity indices. The results have been validated on data from a subjective quality study, where the test subjects have been choosing the preferred path from the lowest quality to the best quality, at each step making a choice in favor of higher frame rate or lower distortion. A comparison with other relevant objective metrics shows that the proposed metric...

  2. Objective assessment of the impact of frame rate on video quality

    OpenAIRE

    Ukhanova, Ann; Korhonen, Jari; Forchhammer, Søren

    2012-01-01

    In this paper, we present a novel objective quality metric that takes the impact of frame rate into account. The proposed metric uses PSNR, frame rate and a content dependent parameter that can easily be obtained from spatial and temporal activity indices. The results have been validated on data from a subjective quality study, where the test subjects have been choosing the preferred path from the lowest quality to the best quality, at each step making a choice in favor of higher frame rate o...

  3. Influence of acquisition frame-rate and video compression techniques on pulse-rate variability estimation from vPPG signal.

    Science.gov (United States)

    Cerina, Luca; Iozzia, Luca; Mainardi, Luca

    2017-11-14

    In this paper, common time- and frequency-domain variability indexes obtained from pulse rate variability (PRV) series extracted from the video-photoplethysmographic signal (vPPG) were compared with heart rate variability (HRV) parameters calculated from synchronized ECG signals. The dual focus of this study was to analyze the effect on PRV parameter estimation of different video acquisition frame-rates, from 60 frames per second (fps) down to 7.5 fps, and of different video compression techniques using both lossless and lossy codecs. Video recordings were acquired through an off-the-shelf GigE Sony XCG-C30C camera on 60 young, healthy subjects (age 23±4 years) in the supine position. A fully automated signal extraction method based on the Kanade-Lucas-Tomasi (KLT) algorithm for region-of-interest (ROI) detection and tracking, in combination with a zero-phase principal component analysis (ZCA) signal separation technique, was employed to convert the video frame sequence into a pulsatile signal. Frame-rate degradation was simulated on the video recordings by directly sub-sampling the ROI tracking and signal extraction modules, to correctly mimic videos recorded at a lower speed. The compression of the videos was configured to avoid any frame rejection caused by codec quality leveling; the FFV1 codec was used for lossless compression and H.264 with a variable quality parameter as the lossy codec. The results showed that a reduced frame-rate leads to inaccurate tracking of ROIs, increased time-jitter in the signal dynamics, and local peak displacements, which degrade performance on all the PRV parameters. The root mean square of successive differences (RMSSD) and the proportion of successive differences greater than 50 ms (PNN50) indexes in the time domain, and the low frequency (LF) and high frequency (HF) power in the frequency domain, were the parameters that degraded most with frame-rate reduction. Such a degradation can be partially mitigated by up-sampling the measured...
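
    The RMSSD and PNN50 indexes that the study found most sensitive to frame-rate reduction have standard definitions, sketched here for a series of pulse-to-pulse intervals in milliseconds (the interval values below are made up for illustration):

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive interval differences (ms)."""
    d = np.diff(np.asarray(rr_ms, dtype=float))
    return np.sqrt(np.mean(d ** 2))

def pnn50(rr_ms):
    """Percentage of successive interval differences exceeding 50 ms."""
    d = np.abs(np.diff(np.asarray(rr_ms, dtype=float)))
    return 100.0 * np.mean(d > 50)

rr = [800, 810, 860, 845, 900]   # illustrative pulse-to-pulse intervals (ms)
print(round(rmssd(rr), 1), pnn50(rr))  # 38.2 25.0
```

    Both indexes depend on differences between adjacent beats, so the local peak displacements caused by a low frame rate inflate them directly.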

  4. A Framework for the Assessment of Temporal Artifacts in Medium Frame-Rate Binary Video Halftones

    OpenAIRE

    Hamood-Ur Rehman; Evans, Brian L.

    2010-01-01

    Display of a video having a higher number of bits per pixel than that available on the display device requires quantization prior to display. Video halftoning performs this quantization so as to reduce visibility of certain artifacts. In many cases, visibility of one set of artifacts is decreased at the expense of increasing the visibility of another set. In this paper, we focus on two key temporal artifacts, flicker and dirty-window-effect, in binary video halftones. We quantify the visibil...

  5. Audiovisual presentation of video-recorded stimuli at a high frame rate

    National Research Council Canada - National Science Library

    Lidestam, Björn

    2014-01-01

    .... Methods for accomplishing this include recording audio and video separately using an exact synchronization signal, editing the recordings and finding exact synchronization points, and presenting...

  6. Frame Rate Exclusive Sync Management of Live Video Streams in Collaborative Mobile Production Environment

    OpenAIRE

    Mughal, Mudassar Ahmad; Zoric, Goranka; Juhlin, Oskar

    2014-01-01

    We discuss the synchronization problem in an emerging type of multimedia application called live mobile collaborative video production systems. The mobile character of the production system allows a director to be present at the site, where he/she can see the event directly as well as through the mixer display. In such a situation, production of a consistent broadcast is sensitive to delay and asynchrony of video streams in the mixer console. In this paper, we propose an algorithm for this situat...

  7. Intra Frame Coding In Advanced Video Coding Standard (H.264) to Obtain Consistent PSNR and Reduce Bit Rate for Diagonal Down Left Mode Using Gaussian Pulse

    Science.gov (United States)

    Manjanaik, N.; Parameshachari, B. D.; Hanumanthappa, S. N.; Banu, Reshma

    2017-08-01

    The intra prediction process of the H.264 video coding standard codes the first frame of a video (the intra frame) and obtains good coding efficiency compared with previous video coding standards. Intra-frame coding reduces spatial pixel redundancy within the current frame, reduces computational complexity, and provides better rate-distortion performance. The standard codes the intra frame with the Rate Distortion Optimization (RDO) method, which increases computational complexity, increases bit rate, and reduces picture quality, making it difficult to implement in real-time applications; many researchers have therefore developed fast mode decision algorithms for intra-frame coding. Previous work on intra-frame coding in H.264 using fast mode decision intra prediction algorithms based on different techniques suffered increased bit rate and degraded picture quality (PSNR) for different quantization parameters. Many earlier fast mode decision approaches only reduced computational complexity or saved encoding time, at the cost of increased bit rate and loss of picture quality. To avoid an increase in bit rate and loss of picture quality, a better approach was developed. This paper develops that approach, a Gaussian pulse for intra-frame coding using the diagonal down-left intra prediction mode, to achieve higher coding efficiency in terms of PSNR and bit rate. In the proposed method, a Gaussian pulse is multiplied with each 4x4 block of frequency-domain coefficients of the 4x4 sub-macroblocks of the current frame before quantization. Multiplying each 4x4 integer-transformed coefficient block by the Gaussian pulse scales the information of the coefficients in a reversible manner; frequency samples are modified in a known and controllable way without intermixing of coefficients, which avoids...
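
    A minimal sketch of the reversible coefficient scaling described above, with an assumed Gaussian window (the abstract does not give the exact pulse parameters, so the `sigma` value here is illustrative):

```python
import numpy as np

def gaussian_window(n=4, sigma=1.5):
    """2-D Gaussian weighting window; sigma is an assumed parameter, since
    the paper's exact pulse shape is not specified in the abstract."""
    ax = np.arange(n) - (n - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return np.outer(g, g)

def scale_coeffs(block4x4, g):
    """Element-wise scaling of a 4x4 transform-coefficient block."""
    return block4x4 * g

def unscale_coeffs(scaled, g):
    """Inverse of scale_coeffs: the operation is reversible before quantization."""
    return scaled / g

g = gaussian_window()
coeffs = np.arange(16, dtype=float).reshape(4, 4)   # stand-in transform coefficients
restored = unscale_coeffs(scale_coeffs(coeffs, g), g)
print(np.allclose(restored, coeffs))  # True
```

    Element-wise scaling never mixes one coefficient into another, which is the "without intermixing of coefficients" property the abstract claims.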

  8. A Comparison of Visual Recognition of the Laryngopharyngeal Structures Between High and Standard Frame Rate Videos of the Fiberoptic Endoscopic Evaluation of Swallowing.

    Science.gov (United States)

    Aghdam, Mehran Alizadeh; Ogawa, Makoto; Iwahashi, Toshihiko; Hosokawa, Kiyohito; Kato, Chieri; Inohara, Hidenori

    2017-04-29

    The purpose of this study was to assess whether or not high frame rate (HFR) videos recorded using high-speed digital imaging (HSDI) improve the visual recognition of the motions of the laryngopharyngeal structures during pharyngeal swallow in fiberoptic endoscopic evaluation of swallowing (FEES). Five healthy subjects were asked to swallow 0.5 ml of water under fiberoptic nasolaryngoscopy. The endoscope was connected to a high-speed camera, which recorded the laryngopharyngeal view throughout the swallowing process at 4000 frames/s (fps). Each HFR video was then copied and downsampled into a standard frame rate (SFR) video version (30 fps). Fifteen otorhinolaryngologists observed all of the HFR/SFR videos in random order and rated each on a four-point ordinal scale reflecting the degree of visual recognition of the rapid laryngopharyngeal structure motions just before the 'white-out' phenomenon. Significantly higher scores, reflecting better visibility, were seen for the HFR videos compared with the SFR videos for the following laryngopharyngeal structures: the posterior pharyngeal wall (p = 0.001), left pharyngeal wall (p = 0.015), right lateral pharyngeal wall (p = 0.035), tongue base (p = 0.005), and epiglottis tilting (p = 0.005). However, whether visualized with HFR or SFR, 'certainly clear observation' of the laryngeal structures was achieved in <50% of cases, because not all of the motions were necessarily captured in each video. These results demonstrate that the use of HSDI in FEES makes motion perception of the laryngopharyngeal structures during pharyngeal swallow easier in comparison to SFR videos with equivalent image quality, owing to the ability of HSDI to depict the laryngopharyngeal motions in a continuous manner.

  9. Video scan rate conversion method and apparatus for achieving same

    Science.gov (United States)

    Mills, George T.

    1992-12-01

    In a video system, a video signal operates at a vertical scan rate to generate a first video frame characterized by a first number of lines per frame. A method and apparatus are provided to convert the first video frame into a second video frame characterized by a second number of lines per frame. The first video frame is stored at the vertical scan rate as digital samples. A portion of the stored digital samples from each line of the first video frame are retrieved at the vertical scan rate. The number of digital samples in the retrieved portion from each line of the first video frame is governed by a ratio equal to the second number divided by the first number, such that the retrieved portion from the first video frame is the second video frame.
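
    Taken literally, the stated rule keeps a portion of each line's samples sized by the ratio of output to input line counts. A sketch of that rule as described (not the patented apparatus; the even-spacing choice is an assumption) is:

```python
import numpy as np

def convert_frame(frame, lines_out, lines_in):
    """Retrieve a portion of each line's samples, sized by the ratio
    lines_out / lines_in, as the abstract describes.

    frame: (lines, samples_per_line) array of digitized video samples.
    """
    ratio = lines_out / lines_in
    n = frame.shape[1]
    keep = max(1, int(round(n * ratio)))
    idx = np.linspace(0, n - 1, keep).round().astype(int)  # evenly spaced picks
    return frame[:, idx]

frame = np.tile(np.arange(8), (4, 1))       # 4 lines x 8 samples per line
print(convert_frame(frame, 3, 4).shape)     # (4, 6): 6/8 = 3/4 of each line kept
```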

  10. Interactive streaming of stored multiview video using redundant frame structures.

    Science.gov (United States)

    Cheung, Gene; Ortega, Antonio; Cheung, Ngai-Man

    2011-03-01

    While much of multiview video coding focuses on the rate-distortion performance of compressing all frames of all views for storage or non-interactive video delivery over networks, we address the problem of designing a frame structure to enable interactive multiview streaming, where clients can interactively switch views during video playback. Thus, as a client is playing back successive frames (in time) for a given view, it can send a request to the server to switch to a different view while continuing uninterrupted temporal playback. Noting that standard tools for random access (i.e., I-frame insertion) can be bandwidth-inefficient for this application, we propose a redundant representation of I-, P-, and "merge" frames, where each original picture can be encoded into multiple versions, appropriately trading off expected transmission rate with storage, to facilitate view switching. We first present ad hoc frame structures with good performance when the view-switching probabilities are either very large or very small. We then present optimization algorithms that generate more general frame structures with better overall performance for the general case. We show in our experiments that we can generate redundant frame structures offering a range of tradeoff points between transmission and storage, e.g., outperforming simple I-frame insertion structures by up to 45% in terms of bandwidth efficiency at twice the storage cost.

  11. Ultra-scale vehicle tracking in low spatial-resolution and low frame-rate overhead video

    Energy Technology Data Exchange (ETDEWEB)

    Carrano, C J

    2009-05-20

    Overhead persistent surveillance systems are becoming more capable of acquiring wide-field image sequences over long time spans. The need to exploit this data is becoming ever greater. The ability to track a single vehicle of interest, or to track all observable vehicles (which may number in the thousands) over large, cluttered regions while they persist in the imagery, either in real time or quickly on demand, is very desirable. With this ability we can begin to answer a number of interesting questions, such as: what are the normal traffic patterns in a particular region, or where did that truck come from? There are many challenges associated with processing this type of data, some of which we address in this paper. Wide-field image sequences are very large, with many thousands of pixels on a side, and are characterized by lower resolutions (e.g., worse than 0.5 meters/pixel) and lower frame rates (e.g., a few Hz or less). The objects in the scenery can vary in size, density, and contrast with respect to the background. At the same time, the background scenery provides a number of clutter sources, both man-made and natural. We describe our current implementation of an ultrascale-capable multiple-vehicle tracking algorithm for overhead persistent surveillance imagery, and discuss the tracking and timing performance of the currently implemented algorithm, which is aimed at utilizing grayscale electro-optical image sequences alone for track segment generation.

  12. Sub-Frame Crossing for Streaming Video over Wireless Networks

    OpenAIRE

    Aziz, Hussein Muzahim; Grahn, Håkan; Lundberg, Lars

    2010-01-01

    Transmitting real-time streaming video over a wireless network cannot guarantee that all frames will be received by the mobile devices. The characteristics of a wireless network in terms of available bandwidth, frame delay, and frame losses cannot be known in advance. In this work, we propose a new mechanism for streaming video over a wireless channel. The proposed mechanism prevents frozen frames on the mobile devices. This is done by splitting the video frame into two sub-frames...
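As far as the truncated abstract describes it, the sub-frame idea can be sketched with a simple row-interleaved split; the split axis and the concealment rule below are assumptions for illustration, not necessarily the paper's exact scheme:

```python
def split_subframes(frame):
    """Split a frame (list of rows) into even-row and odd-row sub-frames."""
    return frame[0::2], frame[1::2]

def merge_subframes(even, odd):
    """Re-interleave the two sub-frames when both arrive."""
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

def conceal_from_even(even):
    """If the odd sub-frame is lost, repeat each even row instead of freezing."""
    out = []
    for row in even:
        out.extend([row, list(row)])
    return out

frame = [[r] * 4 for r in range(6)]
even, odd = split_subframes(frame)
assert merge_subframes(even, odd) == frame   # lossless when both sub-frames arrive
```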

  13. Tiny videos: a large data set for nonparametric video retrieval and frame classification.

    Science.gov (United States)

    Karpenko, Alexandre; Aarabi, Parham

    2011-03-01

    In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation-an exemplar-based clustering algorithm-achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.

  14. An Algorithm of Extracting I-Frame in Compressed Video

    Directory of Open Access Journals (Sweden)

    Zhu Yaling

    2015-01-01

    Full Text Available MPEG video data includes three types of frames: I-frames, P-frames, and B-frames. The I-frame records the main information of the video data, while the P-frame and B-frame are merely motion-compensated predictions from the I-frame. This paper presents an approach that analyzes the MPEG video stream in the compressed domain and finds the key frames of the stream by extracting the I-frames. Experiments indicated that this method can be performed automatically on compressed MPEG video, laying a foundation for future video processing.

  15. Stop-Frame Removal Improves Web Video Classification

    NARCIS (Netherlands)

    Habibian, A.; Snoek, C.G.M.

    2014-01-01

    Web videos available in sharing sites like YouTube, are becoming an alternative to manually annotated training data, which are necessary for creating video classifiers. However, when looking into web videos, we observe they contain several irrelevant frames that may randomly appear in any video,

  16. Frame Rate and Human Vision

    Science.gov (United States)

    Watson, Andrew B.

    2012-01-01

    To enhance the quality of the theatre experience, the film industry is interested in achieving higher frame rates for capture and display. In this talk I will describe the basic spatio-temporal sensitivities of human vision, and how they respond to the time sequence of static images that is fundamental to cinematic presentation.

  17. Full-frame video stabilization with motion inpainting.

    Science.gov (United States)

    Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung

    2006-07-01

    Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up with producing smaller size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.

  18. Estimating Body Related Soft Biometric Traits in Video Frames

    Directory of Open Access Journals (Sweden)

    Olasimbo Ayodeji Arigbabu

    2014-01-01

    Full Text Available Soft biometrics can be used as a prescreening filter, either with a single trait or by combining several traits, to aid the performance of recognition systems in an unobtrusive way. In many practical visual surveillance scenarios, facial information is difficult to acquire reliably due to several varying challenges. However, the visual appearance of a subject at a distance can be efficiently inferred, providing the possibility of estimating body-related information. This paper presents an approach for estimating body-related soft biometrics; specifically, we propose a new approach based on body measurements and an artificial neural network for predicting the body weight of subjects, and incorporate an existing single-view metrology technique for height estimation in videos with low frame rates. Our evaluation on 1120 frame sets of 80 subjects from a newly compiled dataset shows that these soft biometric traits can be adequately predicted from sets of frames.

  19. Coding B-Frames of Color Videos with Fuzzy Transforms

    Directory of Open Access Journals (Sweden)

    Ferdinando Di Martino

    2013-01-01

    Full Text Available We use a new method based on discrete fuzzy transforms for coding/decoding frames of color videos, in which the GOP sequences are determined dynamically. Frames are differentiated into intra frames, predictive frames, and bidirectional frames; we consider particular frames, called Δ-frames (resp., R-frames), for coding P-frames (resp., B-frames) by using two similarity measures based on the Łukasiewicz t-norm. Moreover, a preprocessing phase is proposed to determine similarity thresholds for classifying the above frame types. The proposed method provides acceptable results in terms of the quality of the reconstructed videos, to a certain extent comparable with the classical F-transform-based method and the standard MPEG-4.

  20. Inertial Frames and Clock Rates

    CERN Document Server

    Kak, Subhash

    2012-01-01

    This article revisits the historiography of the problem of inertial frames. Specifically, the case of the twins in the clock paradox is considered to see that some resolutions implicitly assume inertiality for the non-accelerating twin. If inertial frames are explicitly identified by motion with respect to the large scale structure of the universe, it makes it possible to consider the relative inertiality of different frames.

  1. A video rate laser scanning confocal microscope

    Science.gov (United States)

    Ma, Hongzhou; Jiang, James; Ren, Hongwu; Cable, Alex E.

    2008-02-01

    A video-rate laser scanning microscope was developed as an imaging engine to integrate with other photonic building blocks for various microscopic imaging applications. The system is equipped with a diode laser source, a resonant scanner, a galvo scanner, control electronics, and a computer loaded with data acquisition boards and imaging software. Based on an open-frame design, the system can be combined with various optics to perform fluorescence confocal microscopy, multi-photon microscopy, and backscattering confocal microscopy. Mounted to the camera port, it allows a traditional microscope to obtain confocal images at video rate. In this paper, we describe the design principle and demonstrate example applications.

  2. Need for Speed: A Benchmark for Higher Frame Rate Object Tracking

    OpenAIRE

    Galoogahi, Hamed Kiani; Fagg, Ashton; Huang, Chen; Ramanan, Deva; Lucey, Simon

    2017-01-01

    In this paper, we propose the first higher frame rate video dataset (called Need for Speed - NfS) and benchmark for visual object tracking. The dataset consists of 100 videos (380K frames) captured with now commonly available higher frame rate (240 FPS) cameras from real world scenarios. All frames are annotated with axis aligned bounding boxes and all sequences are manually labelled with nine visual attributes - such as occlusion, fast motion, background clutter, etc. Our benchmark provides ...

  3. Rate-adaptive compressive video acquisition with sliding-window total-variation-minimization reconstruction

    Science.gov (United States)

    Liu, Ying; Pados, Dimitris A.

    2013-05-01

    We consider a compressive video acquisition system where frame blocks are sensed independently. Varying block sparsity is exploited in the form of individual per-block open-loop sampling rate allocation with minimal system overhead. At the decoder, video frames are reconstructed via sliding-window inter-frame total variation minimization. Experimental results demonstrate that such rate-adaptive compressive video acquisition improves noticeably the rate-distortion performance of the video stream over fixed-rate acquisition approaches.
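A toy sketch of per-block open-loop rate allocation; using block variance as the sparsity proxy and a linear mapping between variance and sampling rate are assumptions made here for illustration, not the paper's actual allocation rule:

```python
def allocate_block_rates(blocks, base_rate=0.1, max_rate=0.7):
    """Give less-sparse (higher-variance) blocks a larger sampling rate."""
    def variance(b):
        m = sum(b) / len(b)
        return sum((x - m) ** 2 for x in b) / len(b)
    v = [variance(b) for b in blocks]
    vmax = max(v) or 1.0          # guard against all-flat blocks
    return [base_rate + (max_rate - base_rate) * x / vmax for x in v]

blocks = [[0, 0, 0, 0], [0, 8, 0, 120]]   # a flat block and a detailed block
rates = allocate_block_rates(blocks)
```

The flat block receives the minimal rate while the detailed block gets the maximal rate, mirroring the idea that varying block sparsity drives the per-block sampling budget.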

  4. VLSI Architecture Design for H.264/AVC Intra-frame Video Encoding

    National Research Council Canada - National Science Library

    Kuo, Huang-Chih; Lin, Youn-Long

    2013-01-01

    Intra-frame encoding is useful for many video applications such as security surveillance, digital cinema, and video conferencing because it supports random access to every video frame for easy editing...

  5. A Novel Key-Frame Extraction Approach for Both Video Summary and Video Index

    Science.gov (United States)

    Lei, Shaoshuai; Xie, Gang; Yan, Gaowei

    2014-01-01

    Existing key-frame extraction methods are basically video-summary oriented, while the video-index task of key-frames is ignored. This paper presents a novel key-frame extraction approach that serves both video summary and video index. First, a dynamic distance separability algorithm is proposed to divide a shot into sub-shots based on semantic structure; then appropriate key-frames are extracted from each sub-shot by SVD decomposition. Finally, three evaluation indicators are proposed to evaluate the performance of the new approach. Experimental results show that the proposed approach achieves good semantic structure for semantics-based video index and meanwhile produces video summaries consistent with human perception. PMID:24757431

  6. Authentication of Surveillance Videos: Detecting Frame Duplication Based on Residual Frame.

    Science.gov (United States)

    Fadl, Sondos M; Han, Qi; Li, Qiong

    2017-10-16

    Nowadays, surveillance systems are used to help control crime, so the authenticity of digital video affects the decision to admit it as legal evidence. Inter-frame duplication is the most common type of video forgery. Many existing methods have been proposed for detecting this type of forgery, but they require high computational time and are impractical. In this study, we propose an efficient inter-frame duplication detection algorithm based on the standard deviation of residual frames. The standard deviation of each residual frame is used to select some frames and ignore others that represent a static scene. Then, the entropy of the discrete cosine transform coefficients is calculated for each selected residual frame to represent its discriminating feature. Duplicated frames are then detected exactly using subsequence feature analysis. The experimental results demonstrate that the proposed method effectively identifies inter-frame duplication forgery with localization and acceptable running time. © 2017 American Academy of Forensic Sciences.
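A much-simplified sketch of the selection and feature steps: the standard deviation of each residual frame drops static-scene residuals, and an entropy feature is computed for the rest. Using the entropy of raw residual values (rather than DCT coefficients) and flat pixel lists are simplifications made here, not the paper's exact pipeline:

```python
import math

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def entropy(xs):
    counts = {}
    for x in xs:
        counts[x] = counts.get(x, 0) + 1
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# frames as flat pixel lists; the middle pair is an exact duplicate
frames = [[1, 2, 3, 4], [5, 1, 2, 8], [5, 1, 2, 8], [9, 9, 0, 1]]
features = []
for prev, cur in zip(frames, frames[1:]):
    residual = [a - b for a, b in zip(cur, prev)]
    if std(residual) > 0:          # skip static/duplicated residuals
        features.append(entropy(residual))
```

In the full method these features would then feed the subsequence analysis that localizes the duplicated run.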

  7. The Theoretical Highest Frame Rate of Silicon Image Sensors

    Directory of Open Access Journals (Sweden)

    Takeharu Goji Etoh

    2017-02-01

    Full Text Available The frame rate of the digital high-speed video camera was 2000 frames per second (fps) in 1989, and has been increasing exponentially. A simulation study showed that a silicon image sensor made with a 130 nm process technology can achieve about 10^10 fps. The frame rate thus seems to be approaching its upper bound. Rayleigh proposed an expression for the theoretical spatial resolution limit as the resolution of lenses approached that limit. In this paper, the temporal resolution limit of silicon image sensors is analyzed theoretically. It is revealed that the limit is mainly governed by the mixing of charges with different travel times, caused by the distribution of the penetration depth of light. The derived expression for the limit is extremely simple, yet accurate. For example, the limit for green light of 550 nm incident on silicon image sensors at 300 K is 11.1 picoseconds. Therefore, the theoretical highest frame rate is 90.1 Gfps (about 10^11 fps).
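The closing figure follows directly from the reciprocal of the temporal resolution limit:

```python
t_limit = 11.1e-12            # temporal resolution limit (s) for 550 nm light at 300 K
fps_limit = 1.0 / t_limit     # highest theoretical frame rate
print(f"{fps_limit / 1e9:.1f} Gfps")  # 90.1 Gfps, i.e. about 10**11 fps
```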

  8. De-framing video games from the light of cinema

    Directory of Open Access Journals (Sweden)

    Bernard Perron

    2015-09-01

    Full Text Available In this essay, we shall try to step back from a blinding cinema-centric approach in order to examine the impact such a framing has caused, to question its limitations, and to reflect on the interpretive communities that have relied on film (communities we are part of, due to our film studies background to position video games as an important cultural phenomenon as well as an object worthy of scholarly attention. Using Gaudreault and Marion’s notion of cultural series and wishing to spread a French theoretical approach we find very relevant to the discussion, we will question the bases on which we frame video games as cinema. This inquiry will focus on the audiovisual nature of both media and highlight their differing technical and aesthetic aspects, which will lead us to consider video games as being closer to other forms of audiovisual media.

  9. Inter-frame Collusion Attack in SS-N Video Watermarking System

    OpenAIRE

    Yaser Mohammad Taheri; Alireza Zolghadr–asli; Mehran Yazdi

    2009-01-01

    Video watermarking is usually considered as watermarking of a set of still images. In a frame-by-frame watermarking approach, each video frame is treated as a single watermarked image, so collusion attacks are more critical in video watermarking. If the same or a redundant watermark is embedded in every frame of a video, the watermark can be estimated and then removed by a watermark estimation remodulation (WER) attack. Also, if uncorrelated watermarks are used for every frame, these watermarks c...

  10. Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.

    Science.gov (United States)

    Lu, Guo; Zhang, Xiaoyun; Chen, Li; Gao, Zhiyong

    2018-02-01

    Frame rate up conversion (FRUC) can improve visual quality by interpolating new intermediate frames. However, high-frame-rate videos produced by FRUC suffer from either higher bitrate consumption or annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and High Efficiency Video Coding (HEVC) is proposed based on rate-distortion optimization, so that interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded into the coding loop and employs the original motion search strategy of HEVC coding. Then, frame interpolation is formulated as a rate-distortion optimization problem in which both coding bitrate consumption and visual quality are taken into account. Because the original frames are absent, the distortion model for interpolated frames is established from the motion vector reliability and the coding quantization error. Experimental results demonstrate that the proposed framework achieves a 21% ~ 42% reduction in BD-rate compared with traditional methods of FRUC cascaded with coding.
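The core rate-distortion decision can be illustrated with a toy Lagrangian comparison; the cost values and lambda below are invented for illustration, and the paper's actual distortion model for interpolated frames (built from motion-vector reliability and quantization error) is not reproduced here:

```python
def choose_mode(d_coded, r_coded, d_interp, r_interp, lam):
    """Pick whichever option minimizes the Lagrangian cost J = D + lambda * R."""
    j_coded = d_coded + lam * r_coded
    j_interp = d_interp + lam * r_interp
    return "interpolate" if j_interp <= j_coded else "code"

# an interpolated frame with slightly worse distortion but far fewer bits wins
mode = choose_mode(d_coded=2.0, r_coded=1000, d_interp=6.0, r_interp=10, lam=0.01)
print(mode)  # interpolate
```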

  11. Low Bit Rate Video Coding | Mishra | Nigerian Journal of Technology

    African Journals Online (AJOL)

    ... length bit rate (VLBR) broadly encompasses video coding which mandates a temporal frequency of 10 frames per second (fps) or less. Object-based video coding represents a very promising option for VLBR coding, though the problems of object identification and segmentation need to be addressed by further research.

  12. Statistical Analysis of Video Frame Size Distribution Originating from Scalable Video Codec (SVC)

    Directory of Open Access Journals (Sweden)

    Sima Ahmadpour

    2017-01-01

    Full Text Available Designing an effective and high-performance network requires accurate characterization and modeling of network traffic. Modeling of video frame sizes is normally applied in simulation studies and mathematical analysis, and in generating streams for testing and compliance purposes. Besides, video traffic is assumed to be a major source of multimedia traffic in future heterogeneous networks. Therefore, the statistical distribution of video data can be used as an input for performance modeling of networks. The findings of this paper comprise the theoretical definition of the distribution that seems most relevant to the video trace in terms of its statistical properties, identified using both a graphical method and a hypothesis test. The data set used in this article consists of layered video traces generated with the Scalable Video Codec (SVC) video compression technique from three different movies.

  13. Motion-Compensated Coding and Frame-Rate Up-Conversion: Models and Analysis

    OpenAIRE

    Dar, Yehuda; Bruckstein, Alfred M.

    2014-01-01

    Block-based motion estimation (ME) and compensation (MC) techniques are widely used in modern video processing algorithms and compression systems. The great variety of video applications and devices results in numerous compression specifications. Specifically, there is a diversity of frame-rates and bit-rates. In this paper, we study the effect of frame-rate and compression bit-rate on block-based ME and MC as commonly utilized in inter-frame coding and frame-rate up conversion (FRUC). This j...

  14. Hardware/Software Issues for Video Guidance Systems: The Coreco Frame Grabber

    Science.gov (United States)

    Bales, John W.

    1996-01-01

    The F64 frame grabber is a high-performance video image acquisition and processing board utilizing the TMS320C40 and TMS34020 processors. The hardware is designed for the 16-bit ISA bus and supports multiple digital or analog cameras. It has an acquisition rate of 40 million pixels per second, with a variable sampling frequency of 510 kHz to 40 MHz. The board has a 4 MB frame buffer memory expandable to 32 MB, and has simultaneous acquisition and processing capability. It supports both VGA and RGB displays, and accepts all analog and digital video input standards.

  15. Recovery of lost color and depth frames in multiview videos.

    Science.gov (United States)

    Lin, Ting-Lan; Wang, Chuan-Jia; Ding, Tsai-Ling; Huang, Gui-Xiang; Tsai, Wei-Lin; Chang, Tsung-En; Yang, Neng-Chieh

    2017-08-29

    In this paper, we consider an integrated error concealment system for lost color frames and lost depth frames in multiview videos with depth. We first propose a pixel-based color error-concealment method that uses depth information. Instead of assuming that the same moving object has minimal depth difference across consecutive frames, as is done in a state-of-the-art method, we consider the more realistic situation in which the same moving object can lie at different depths in consecutive frames. In the derived motion vector candidate set, we consider all candidate motion vectors and weight the reference pixels by their depth differences to obtain the final recovered pixel. Compared to two state-of-the-art methods, the proposed method achieves average PSNR gains of up to 8.73 dB and 3.98 dB, respectively. Second, we propose an iterative depth frame error-concealment method. The initial recovered depth frame is obtained by depth-image-based rendering (DIBR) from another available view. The holes in the recovered depth frame are then filled in a proposed priority order. Preprocessing steps (depth difference compensation and inconsistent pixel removal) are performed to improve performance. Compared with a method that uses the available motion vector of the color frame to recover the lost depth pixels, with the HMVE (hybrid motion vector extrapolation) method, and with an inpainting method, the proposed method has gains of up to 4.31 dB, 10.29 dB, and 6.04 dB, respectively. Finally, for the situation in which the color and depth frames are lost at the same time, our two methods jointly perform well, with a gain of up to 7.79 dB.
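The depth-weighted recovery idea can be sketched as follows; the inverse-distance weighting form and the candidate format are assumptions made here for illustration, not the paper's exact weighting function:

```python
def recover_pixel(candidates, lost_depth):
    """Blend candidate reference pixels, weighting each by how close its
    depth is to the depth at the lost pixel (candidates: (value, depth))."""
    eps = 1e-6                                    # avoid division by zero
    weights = [1.0 / (abs(d - lost_depth) + eps) for _, d in candidates]
    total = sum(weights)
    return sum(w * v for w, (v, _) in zip(weights, candidates)) / total

# a candidate at a similar depth dominates one from a distant object
value = recover_pixel([(100, 5.0), (200, 50.0)], lost_depth=5.0)
print(round(value))  # 100
```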

  16. 3-D model-based frame interpolation for distributed video coding of static scenes.

    Science.gov (United States)

    Maitre, Matthieu; Guillemot, Christine; Morin, Luce

    2007-05-01

    This paper addresses the problem of side information extraction for distributed coding of videos captured by a camera moving in a 3-D static environment. Examples of targeted applications are augmented reality, remote-controlled robots operating in hazardous environments, or remote exploration by drones. It explores the benefits of the structure-from-motion paradigm for distributed coding of this type of video content. Two interpolation methods constrained by the scene geometry, based either on block matching along epipolar lines or on 3-D mesh fitting, are first developed. These techniques are based on a robust algorithm for sub-pel matching of feature points, which leads to semi-dense correspondences between key frames. However, their rate-distortion (RD) performances are limited by misalignments between the side information and the actual Wyner-Ziv (WZ) frames due to the assumption of linear motion between key frames. To cope with this problem, two feature point tracking techniques are introduced, which recover the camera parameters of the WZ frames. A first technique, in which the frames remain encoded separately, performs tracking at the decoder and leads to significant RD performance gains. A second technique further improves the RD performances by allowing a limited tracking at the encoder. As an additional benefit, statistics on tracks allow the encoder to adapt the key frame frequency to the video motion content.

  17. Rate-Adaptive Video Compression (RAVC) Universal Video Stick (UVS)

    Science.gov (United States)

    Hench, David L.

    2009-05-01

    The H.264 video compression standard, aka MPEG-4 Part 10, aka Advanced Video Coding (AVC), allows new flexibility in the use of video on the battlefield. This standard necessitates encoder chips to effectively utilize the increased capabilities. Such chips are designed to cover the full range of the standard, with designers of individual products given the capability of selecting the parameters that differentiate a broadcast system from a video conferencing system. The SmartCapture commercial product and the Universal Video Stick (UVS) military version are about the size of a thumb drive, with analog video input and USB (Universal Serial Bus) output, and allow the user to select the imaging parameters. This lets the user adjust video bandwidth (and video quality) along four dimensions of quality, on the fly, without stopping video transmission. The four dimensions are: 1) spatial, changing from 720 x 480 pixels to 320 x 360 pixels to 160 x 180 pixels; 2) temporal, changing from 30 frames/sec to 5 frames/sec; 3) transform quality, with a 5-to-1 range; and 4) Group of Pictures (GOP) size, which affects noise immunity. The host processor simply wraps the H.264 network abstraction layer packets into the appropriate network packets. We also discuss the recently adopted scalable amendment to H.264, which will allow limited RAVC at any point in the communication chain by throwing away preselected packets.

  18. Reliability of video-based identification of footstrike pattern and video time frame at initial contact in recreational runners

    DEFF Research Database (Denmark)

    Damsted, Camma; Larsen, L H; Nielsen, R.O.

    2015-01-01

    and video time frame at initial contact during treadmill running using two-dimensional (2D) video recordings. METHODS: Thirty-one recreational runners were recorded twice, 1 week apart, with a high-speed video camera. Two blinded raters evaluated each video twice with an interval of at least 14 days...

  19. Dynamic frame resizing with convolutional neural network for efficient video compression

    Science.gov (United States)

    Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon

    2017-09-01

    In the past, video codecs such as VC-1 and H.263 used techniques that encode reduced-resolution video and restore the original resolution at the decoder to improve coding efficiency. These techniques, from VC-1 and H.263 Annex Q, are called dynamic frame resizing and reduced-resolution update mode, respectively. However, they have not been widely used due to limited performance improvements that appear only under specific conditions. In this paper, a video frame resizing (reduction/restoration) technique based on machine learning is proposed to improve coding efficiency. The proposed method produces low-resolution video with a convolutional neural network (CNN) at the encoder and reconstructs the original resolution using a CNN at the decoder. The proposed method shows improved subjective performance on the high-resolution videos that dominate current consumption. To assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the subjective metric. Moreover, to assess general performance, diverse bitrates were tested. Experimental results show that the VMAF-based BD-rate improved by about 51% compared to conventional HEVC. VMAF values improved especially at low bitrates. When tested subjectively, the method also produced better visual quality at similar bit rates.

  20. Chest compression rate measurement from smartphone video.

    Science.gov (United States)

    Engan, Kjersti; Hinna, Thomas; Ryen, Tom; Birkenes, Tonje S; Myklebust, Helge

    2016-08-11

    Out-of-hospital cardiac arrest is a life-threatening situation in which the first person performing cardiopulmonary resuscitation (CPR) is most often a bystander without medical training. Some existing smartphone apps can call the emergency number and provide, for example, the global positioning system (GPS) location, like the Hjelp 113-GPS app by the Norwegian air ambulance. We propose to extend the functionality of such apps by using the built-in camera of a smartphone to capture video of the CPR performed, primarily to estimate the duration and rate of the chest compressions executed, if any. All calculations are done in real time, and both the caller and the dispatcher receive the compression rate feedback when it is detected. The proposed algorithm is based on finding a dynamic region of interest in the video frames and thereafter evaluating the power spectral density by computing the fast Fourier transform over sliding windows. The power of the dominating frequencies is compared to the power in the frequency area of interest. The system was tested on different persons, male and female, in different scenarios addressing target compression rates, background disturbances, compression with mouth-to-mouth ventilation, various background illuminations, and phone placements. All tests were done on a recording Laerdal manikin, providing true compression rates for comparison. Overall, the algorithm is promising, and it manages a number of disturbances and light situations. For target rates of 110 cpm, as recommended during CPR, the mean error in compression rate (standard deviation over tests in parentheses) is 3.6 (0.8) cpm for short-haired bystanders, and 8.7 (6.0) cpm when medium- and long-haired bystanders are included.
    The presented method shows that it is feasible to detect the rate of chest compressions performed by a bystander by placing the smartphone close to the patient and using the built-in camera combined with a video processing algorithm running in real time on the device.
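The frequency-analysis step can be sketched with a naive DFT over a synthetic compression signal; the window length, camera frame rate, and sinusoidal motion signal below are illustrative assumptions (the app works on region-of-interest intensities from real video):

```python
import cmath, math

def dominant_rate_cpm(signal, fs):
    """Return the dominant frequency of the signal in cycles per minute (naive DFT)."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        if abs(coeff) ** 2 > best_power:
            best_k, best_power = k, abs(coeff) ** 2
    return best_k * fs / n * 60.0

fs = 30.0                         # camera frame rate, frames per second
f = 110.0 / 60.0                  # target: 110 compressions per minute
sig = [math.sin(2 * math.pi * f * t / fs) for t in range(360)]  # 12 s window
print(dominant_rate_cpm(sig, fs))  # 110.0
```

In practice an FFT would replace the quadratic DFT loop, matching the sliding-window power spectral density described in the abstract.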

  1. Video Inter-frame Forgery Identification Based on Optical Flow Consistency

    Directory of Open Access Journals (Sweden)

    Qi Wang

    2014-03-01

    Full Text Available Identifying inter-frame forgery is a hot topic in video forensics. In this paper, we propose a method based on the assumption that optical flow is consistent in an original video, while in forgeries this consistency is destroyed. We first extract optical flow from the frames of a video and then calculate the optical flow consistency, after normalization and quantization, as a distinguishing feature to identify inter-frame forgeries. We train a Support Vector Machine to classify original videos and video forgeries using the optical flow consistency feature of sample videos, and test the classification accuracy on a large database. Experimental results show that the proposed method is efficient in classifying original videos and forgeries. Furthermore, the proposed method also performs well in classifying frame insertion and frame deletion forgeries.
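A drastically simplified proxy for the consistency feature: the mean absolute inter-frame difference stands in for optical-flow magnitude, and a relative jump between neighbouring differences flags a possible insertion point. The real method computes dense optical flow; this stand-in is only meant to convey the consistency idea:

```python
def mean_abs_diff(f1, f2):
    return sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)

def consistency_scores(frames):
    """Relative change between successive motion-magnitude estimates."""
    d = [mean_abs_diff(a, b) for a, b in zip(frames, frames[1:])]
    return [abs(d[i + 1] - d[i]) / (d[i] + 1e-6) for i in range(len(d) - 1)]

original = [[t] * 4 for t in range(6)]             # steady motion: scores stay near 0
forged = [[t] * 4 for t in [0, 1, 2, 7, 3, 4]]     # an inserted frame breaks consistency
print(max(consistency_scores(original)) < max(consistency_scores(forged)))  # True
```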

  2. Video Inter-frame Forgery Identification Based on Optical Flow Consistency

    OpenAIRE

    Qi Wang; Zhaohong Li; Zhenzhen Zhang; Qinglong Ma

    2014-01-01

    Identifying inter-frame forgery is a hot topic in video forensics. In this paper, we propose a method based on the assumption that the optical flows are consistent in an original video, while in forgeries the consistency will be destroyed. We first extract optical flow from frames of videos and then calculate the optical flow consistency after normalization and quantization as distinguishing feature to identify inter-frame forgeries. We train the Support Vector Machine to classify original vi...

  3. Frame Filtering and Skipping for Point Cloud Data Video Transmission

    Directory of Open Access Journals (Sweden)

    Carlos Moreno

    2017-01-01

    Full Text Available Sensors for collecting 3D spatial data from the real world are becoming more important. They are a prime research topic and have applications in consumer markets such as medicine, entertainment, and robotics. However, a primary concern with collecting this data is the vast amount of information being generated, which needs to be processed before being transmitted. To address the issue, we propose the use of filtering methods and frame skipping. To collect the 3D spatial data, called point clouds, we used the Microsoft Kinect sensor. In addition, we utilized the Point Cloud Library to process and filter the data being generated by the Kinect. Two different computers were used: a client, which collects, filters, and transmits the point clouds; and a server, which receives and visualizes them. The client also checks for similarity between consecutive frames, skipping those that reach a similarity threshold. In order to compare the filtering methods and test the effectiveness of the frame-skipping technique, quality of service (QoS) metrics such as frame rate and percentage of points filtered were introduced. These metrics indicate how well a certain combination of filtering method and frame skipping accomplishes the goal of transmitting point clouds from one location to another. We found that the pass-through filter in conjunction with frame skipping provides the best relative QoS. However, the results also show that there is still too much data for a satisfactory QoS. For a real-time system to provide reasonable end-to-end quality, dynamic compression and progressive transmission need to be utilized.
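A minimal sketch of the two ideas in this record, a pass-through filter and similarity-based frame skipping, using plain NumPy arrays as stand-ins for point clouds. The axis bounds and the similarity measure are hypothetical; PCL's actual PassThrough filter and the paper's similarity check are not reproduced here.

```python
import numpy as np

def pass_through(points, axis=2, lo=0.5, hi=4.0):
    """Pass-through filter: keep points whose coordinate along `axis`
    (e.g. depth in metres) lies in [lo, hi]. Bounds are arbitrary."""
    keep = (points[:, axis] >= lo) & (points[:, axis] <= hi)
    return points[keep]

def should_skip(prev, curr, threshold=0.95):
    """Skip transmission when consecutive frames are too similar.
    Similarity here is a crude proxy (point-count overlap weighted by
    centroid distance); a real system would use a registration or
    voxel-occupancy metric."""
    if prev is None:
        return False
    n_sim = min(len(prev), len(curr)) / max(len(prev), len(curr))
    c_dist = np.linalg.norm(prev.mean(axis=0) - curr.mean(axis=0))
    return n_sim * np.exp(-c_dist) >= threshold

rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 6.0, size=(1000, 3))   # synthetic x, y, z points
filtered = pass_through(cloud)
print(len(filtered) < len(cloud))               # out-of-range points removed
print(should_skip(filtered, filtered.copy()))   # identical frames -> skip
```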

  4. Video rate near-field scanning optical microscopy

    Science.gov (United States)

    Bukofsky, S. J.; Grober, R. D.

    1997-11-01

    The enhanced transmission efficiency of chemically etched near-field optical fiber probes makes it possible to greatly increase the scanning speed of near-field optical microscopes. This increase in system bandwidth allows sub-diffraction limit imaging of samples at video rates. We demonstrate image acquisition at 10 frames/s, rate-limited by mechanical resonances in our scanner. It is demonstrated that the optical signal to noise ratio is large enough for megahertz single pixel acquisition rates.

  5. GPU accelerated processing of astronomical high frame-rate videosequences

    Science.gov (United States)

    Vítek, Stanislav; Švihlík, Jan; Krasula, Lukáš; Fliegel, Karel; Páta, Petr

    2015-09-01

    Astronomical instruments located around the world are producing an incredibly large amount of potentially interesting scientific data. Astronomical research is expanding toward large and highly sensitive telescopes, and the total volume of data produced per night of operations increases with the quality and resolution of state-of-the-art CCD/CMOS detectors. Since many ground-based astronomical experiments are placed in remote locations with limited access to the Internet, it is necessary to solve the problem of data storage. This mostly means that current data acquisition, processing and analysis algorithms require review: decisions about the importance of the data have to be made in a very short time. This work deals with GPU-accelerated processing of high frame-rate astronomical video sequences, mostly originating from the experiment MAIA (Meteor Automatic Imager and Analyser), an instrument primarily focused on observing faint meteoric events with high time resolution. The instrument, priced below 2000 euro, consists of an image intensifier and a gigabit Ethernet camera running at 61 fps. With resolution better than VGA, the system produces up to 2 TB of scientifically valuable video data per night. The main goal of the paper is not to optimize any single GPU algorithm, but to propose and evaluate parallel GPU algorithms able to process huge amounts of video sequences in order to discard all uninteresting data.

  6. Laryngeal High-Speed Videoendoscopy: Sensitivity of Objective Parameters towards Recording Frame Rate

    Directory of Open Access Journals (Sweden)

    Anne Schützenberger

    2016-01-01

    Full Text Available The current use of laryngeal high-speed videoendoscopy in clinical settings involves subjective visual assessment of vocal fold vibratory characteristics. However, objective quantification of vocal fold vibrations for evidence-based diagnosis and therapy is desired, and objective parameters assessing laryngeal dynamics have therefore been suggested. This study investigated the sensitivity of the objective parameters and their dependence on recording frame rate. A total of 300 endoscopic high-speed videos with recording frame rates between 1000 and 15 000 fps were analyzed for a vocally healthy female subject during sustained phonation. Twenty parameters, representing laryngeal dynamics, were computed. Four different parameter characteristics were found: parameters showing no change with increasing frame rate; parameters changing up to a certain frame rate, but then remaining constant; parameters remaining constant within a particular range of recording frame rates; and parameters changing with nearly every frame rate. The results suggest that (1) parameter values are influenced by recording frame rates and different parameters have varying sensitivities to recording frame rate; (2) normative values should be determined based on recording frame rates; and (3) the typically used recording frame rate of 4000 fps seems to be too low to distinguish accurately certain characteristics of the human phonation process in detail.

  7. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
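The closed-form step of this record (modelling PSNR as a quadratic in bit-depth and setting the derivative to zero) can be sketched as follows; the quadratic coefficients, the linear rate model and the Lagrange multiplier are all hypothetical placeholders, not the paper's fitted values.

```python
def optimal_bit_depth(alpha, beta, n_samples, lam):
    """Sketch of block-level RDO: model PSNR(d) = -alpha*d**2 + beta*d + c
    as a quadratic in the quantization bit-depth d, with a linear rate
    model R(d) = n_samples * d. Maximizing the Lagrangian
    PSNR(d) - lam * R(d) and setting its derivative to zero gives
    d* = (beta - lam * n_samples) / (2 * alpha), clamped to a sane range."""
    d = (beta - lam * n_samples) / (2.0 * alpha)
    return max(1, min(16, round(d)))

# Hypothetical block: 100 CS samples, illustrative model coefficients
print(optimal_bit_depth(alpha=0.05, beta=1.2, n_samples=100, lam=0.0005))
```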

  8. The effects of frame rate and resolution on users playing first person shooter games

    Science.gov (United States)

    Claypool, Mark; Claypool, Kajal; Damaa, Feissal

    2006-01-01

    The rates and resolutions of frames rendered in a computer game directly impact player performance, influencing both overall playability and the game's enjoyability. Insights into the effects of frame rates and resolutions can guide users in their choice of game settings and new hardware purchases, and inform system designers in their development of new hardware, especially for embedded devices that often must make trade-offs between resolution and frame rate. While there have been studies detailing the effects of frame rate and resolution on streaming video and other multimedia applications, to the best of our knowledge there have been no studies quantifying their effects on user performance in computer games. This paper presents the results of a carefully designed user study that measures the impact of frame rate and frame resolution on user performance in a first person shooter game. Contrary to previous results for streaming video, frame rate has a marked impact on both player performance and game enjoyment, while resolution has little impact on performance and some impact on enjoyment.

  9. Low bit rate video coding

    African Journals Online (AJOL)

    eobe

    The bottom-up merging procedure first finds the motion vector of the current frame by any kind of motion estimation algorithm. Once the motion vectors are available to the motion compensation module, the bottom-up merging process is implemented in two steps. Firstly, the VBMC merges macro-blocks into bigger blocks, and ...

  10. Parallel Key Frame Extraction for Surveillance Video Service in a Smart City.

    Directory of Open Access Journals (Sweden)

    Ran Zheng

    Full Text Available Surveillance video service (SVS) is one of the most important services provided in a smart city, and its utilization depends on the design of efficient surveillance video analysis techniques. Key frame extraction is a simple yet effective technique to achieve this goal. In surveillance video applications, key frames are typically used to summarize important video content, so it is essential to extract them accurately and efficiently. A novel approach is proposed to extract key frames from traffic surveillance videos based on GPUs (graphics processing units) to ensure high efficiency and accuracy. For the determination of key frames, motion is a salient feature in presenting actions or events, especially in surveillance videos. The motion feature is extracted on the GPU to reduce running time. It is also smoothed to reduce noise, and the frames with local maxima of motion information are selected as the final key frames. The experimental results show that this approach can extract key frames more accurately and efficiently than several other methods.
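The selection rule in this record (smooth the motion signal, keep local maxima) can be sketched on the CPU in a few lines; the motion values and smoothing width below are made up, and the GPU acceleration is omitted.

```python
import numpy as np

def key_frames(motion, smooth=3):
    """Select key frames as local maxima of a smoothed per-frame motion
    signal: moving-average smoothing suppresses noise, then interior
    indices that exceed both neighbours are kept."""
    kernel = np.ones(smooth) / smooth
    s = np.convolve(motion, kernel, mode="same")
    return [i for i in range(1, len(s) - 1) if s[i - 1] < s[i] >= s[i + 1]]

# Hypothetical motion signal with two bursts of activity
motion = [0.1, 0.2, 0.9, 0.3, 0.1, 0.2, 0.8, 0.7, 0.1]
print(key_frames(motion))  # → [2, 6]
```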

  11. Analyzing animal behavior via classifying each video frame using convolutional neural networks

    Science.gov (United States)

    Stern, Ulrich; He, Ruo; Yang, Chung-Hui

    2015-01-01

    High-throughput analysis of animal behavior requires software to analyze videos. Such software analyzes each frame individually, detecting animals’ body parts. But the image analysis rarely attempts to recognize “behavioral states”—e.g., actions or facial expressions—directly from the image instead of using the detected body parts. Here, we show that convolutional neural networks (CNNs)—a machine learning approach that recently became the leading technique for object recognition, human pose estimation, and human action recognition—were able to recognize directly from images whether Drosophila were “on” (standing or walking) or “off” (not in physical contact with) egg-laying substrates for each frame of our videos. We used multiple nets and image transformations to optimize accuracy for our classification task, achieving a surprisingly low error rate of just 0.072%. Classifying one of our 8 h videos took less than 3 h using a fast GPU. The approach enabled uncovering a novel egg-laying-induced behavior modification in Drosophila. Furthermore, it should be readily applicable to other behavior analysis tasks. PMID:26394695

  12. Stitching Stabilizer: Two-frame-stitching Video Stabilization for Embedded Systems

    OpenAIRE

    Satoh, Masaki

    2016-01-01

    In conventional electronic video stabilization, the stabilized frame is obtained by cropping the input frame to cancel camera shake. While a small cropping size results in strong stabilization, it does not provide us satisfactory results from the viewpoint of image quality, because it narrows the angle of view. By fusing several frames, we can effectively expand the area of input frames, and achieve strong stabilization even with a large cropping size. Several methods for doing so have been s...

  13. High-speed three-frame image recording system using colored flash units and low-cost video equipment

    Science.gov (United States)

    Racca, Roberto G.; Scotten, Larry N.

    1995-05-01

    This article describes a method that allows the digital recording of sequences of three black-and-white images at rates of several thousand frames per second using a system consisting of an ordinary CCD camcorder, three flash units with color filters, a PC-based frame grabber board and some additional electronics. The maximum framing rate is determined by the duration of the flashtube emission, and for common photographic flash units lasting about 20 microseconds it can exceed 10,000 frames per second in actual use. The subject under study is strobe-illuminated using a red, a green and a blue flash unit controlled by a special sequencer, and the three images are captured by a color CCD camera on a single video field. The video output for that particular field will contain three individual scenes, one for each primary color component, which potentially can be resolved with no crosstalk between them. The output is electronically decoded into the primary color channels, frame grabbed and stored into digital memory, yielding three time-resolved images of the subject. A synchronization pulse provided by the flash sequencer triggers the frame grabbing so that the correct video field is acquired. A scheme involving the use of videotape as intermediate storage allows the frame grabbing to be performed using a monochrome video digitizer. Ideally each flash-illuminated scene would be confined to one color channel, but in practice various factors, both optical and electronic, affect color separation. Correction equations have been derived that counteract these effects in the digitized images and minimize 'ghosting' between frames. Once the appropriate coefficients have been established through a calibration procedure that needs to be performed only once for a given configuration of the equipment, the correction process is carried out transparently in software every time a
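The crosstalk correction described in this record amounts to inverting a per-setup mixing matrix. The sketch below assumes a hypothetical 3×3 crosstalk matrix; in practice its coefficients come from the one-off calibration the article mentions.

```python
import numpy as np

# Hypothetical crosstalk matrix M: measured RGB = M @ true exposures.
# Off-diagonal terms model optical/electronic leakage between the red,
# green and blue flash-lit scenes. Values here are illustrative only.
M = np.array([[1.00, 0.12, 0.05],
              [0.08, 1.00, 0.10],
              [0.03, 0.09, 1.00]])
M_inv = np.linalg.inv(M)

def deghost(rgb_pixel):
    """Recover the three time-resolved exposures from one RGB pixel by
    applying the inverse of the calibrated crosstalk matrix."""
    return M_inv @ np.asarray(rgb_pixel, dtype=float)

true_exposures = np.array([0.8, 0.2, 0.5])
measured = M @ true_exposures                    # simulate leakage
print(np.allclose(deghost(measured), true_exposures))  # round trip recovers frames
```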

  14. Reliability of video-based identification of footstrike pattern and video time frame at initial contact in recreational runners.

    Science.gov (United States)

    Damsted, C; Larsen, L H; Nielsen, R O

    2015-06-01

    Two-dimensional video recordings are used in clinical practice to identify footstrike pattern. However, knowledge about the reliability of this method of identification is limited. To evaluate intra- and inter-rater reliability of visual identification of footstrike pattern and video time frame at initial contact during treadmill running using two-dimensional (2D) video recordings. Thirty-one recreational runners were recorded twice, 1 week apart, with a high-speed video camera. Two blinded raters evaluated each video twice with an interval of at least 14 days. Kappa values for within-day identification of footstrike pattern revealed intra-rater agreement of 0.83-0.88 and inter-rater agreement of 0.50-0.63. Corresponding figures for between-day identification of footstrike pattern were 0.63-0.69 and 0.41-0.53, respectively. Identification of video time frame at initial contact ranged from five frames to 12 frames (95% limits of agreement). For clinical use, the intra-rater within-day identification of footstrike pattern is highly reliable (kappa > 0.80). For the inter-rater between-day identification, inconsistencies may, in the worst case, occur in 36% of the identifications (kappa = 0.41). The 95% limits of agreement for identification of video time frame at initial contact may, at times, allow for different identification of footstrike pattern. Clinicians should, therefore, be encouraged to continue using clinical 2D video setups for intra-rater identification of footstrike pattern, but bear in mind the restrictions related to between-day identifications. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Violence in Teen-Rated Video Games

    Science.gov (United States)

    Haninger, Kevin; Ryan, M. Seamus; Thompson, Kimberly M

    2004-01-01

    Context: Children's exposure to violence in the media remains a source of public health concern; however, violence in video games rated T (for “Teen”) by the Entertainment Software Rating Board (ESRB) has not been quantified. Objective: To quantify and characterize the depiction of violence and blood in T-rated video games. According to the ESRB, T-rated video games may be suitable for persons aged 13 years and older and may contain violence, mild or strong language, and/or suggestive themes. Design: We created a database of all 396 T-rated video game titles released on the major video game consoles in the United States by April 1, 2001 to identify the distribution of games by genre and to characterize the distribution of content descriptors for violence and blood assigned to these games. We randomly sampled 80 game titles (which included 81 games because 1 title included 2 separate games), played each game for at least 1 hour, and quantitatively assessed the content. Given the release of 2 new video game consoles, Microsoft Xbox and Nintendo GameCube, and a significant number of T-rated video games released after we drew our random sample, we played and assessed 9 additional games for these consoles. Finally, we assessed the content of 2 R-rated films, The Matrix and The Matrix: Reloaded, associated with the T-rated video game Enter the Matrix. Main Outcome Measures: Game genre; percentage of game play depicting violence; depiction of injury; depiction of blood; number of human and nonhuman fatalities; types of weapons used; whether injuring characters, killing characters, or destroying objects is rewarded or is required to advance in the game; and content that may raise concerns about marketing T-rated video games to children. Results: Based on analysis of the 396 T-rated video game titles, 93 game titles (23%) received content descriptors for both violence and blood, 280 game titles (71%) received only a content descriptor for violence, 9 game titles (2

  16. Violence in teen-rated video games.

    Science.gov (United States)

    Haninger, Kevin; Ryan, M Seamus; Thompson, Kimberly M

    2004-03-11

    Children's exposure to violence in the media remains a source of public health concern; however, violence in video games rated T (for "Teen") by the Entertainment Software Rating Board (ESRB) has not been quantified. To quantify and characterize the depiction of violence and blood in T-rated video games. According to the ESRB, T-rated video games may be suitable for persons aged 13 years and older and may contain violence, mild or strong language, and/or suggestive themes. We created a database of all 396 T-rated video game titles released on the major video game consoles in the United States by April 1, 2001 to identify the distribution of games by genre and to characterize the distribution of content descriptors for violence and blood assigned to these games. We randomly sampled 80 game titles (which included 81 games because 1 title included 2 separate games), played each game for at least 1 hour, and quantitatively assessed the content. Given the release of 2 new video game consoles, Microsoft Xbox and Nintendo GameCube, and a significant number of T-rated video games released after we drew our random sample, we played and assessed 9 additional games for these consoles. Finally, we assessed the content of 2 R-rated films, The Matrix and The Matrix: Reloaded, associated with the T-rated video game Enter the Matrix. Game genre; percentage of game play depicting violence; depiction of injury; depiction of blood; number of human and nonhuman fatalities; types of weapons used; whether injuring characters, killing characters, or destroying objects is rewarded or is required to advance in the game; and content that may raise concerns about marketing T-rated video games to children. 
Based on analysis of the 396 T-rated video game titles, 93 game titles (23%) received content descriptors for both violence and blood, 280 game titles (71%) received only a content descriptor for violence, 9 game titles (2%) received only a content descriptor for blood, and 14 game titles

  17. A Multi-Frame Post-Processing Approach to Improved Decoding of H.264/AVC Video

    DEFF Research Database (Denmark)

    Huang, Xin; Li, Huiying; Forchhammer, Søren

    2007-01-01

    Video compression techniques may yield visually annoying artifacts for limited bitrate coding. In order to improve video quality, a multi-frame based motion compensated filtering algorithm is reported based on combining multiple pictures to form a single super-resolution picture and decimation...

  18. Content and ratings of mature-rated video games.

    Science.gov (United States)

    Thompson, Kimberly M; Tepichin, Karen; Haninger, Kevin

    2006-04-01

    To quantify the depiction of violence, blood, sexual themes, profanity, substances, and gambling in video games rated M (for "mature") and to measure agreement between the content observed and the rating information provided to consumers on the game box by the Entertainment Software Rating Board. We created a database of M-rated video game titles, selected a random sample, recorded at least 1 hour of game play, quantitatively assessed the content, performed statistical analyses to describe the content, and compared our observations with the Entertainment Software Rating Board content descriptors and results of our prior studies. Harvard University, Boston, Mass. Authors and 1 hired game player. M-rated video games. Percentages of game play depicting violence, blood, sexual themes, gambling, alcohol, tobacco, or other drugs; use of profanity in dialogue, song lyrics, or gestures. Although the Entertainment Software Rating Board content descriptors for violence and blood provide a good indication of such content in the game, we identified 45 observations of content that could warrant a content descriptor in 29 games (81%) that lacked these content descriptors. M-rated video games are significantly more likely to contain blood, profanity, and substances; depict more severe injuries to human and nonhuman characters; and have a higher rate of human deaths than video games rated T (for "teen"). Parents and physicians should recognize that popular M-rated video games contain a wide range of unlabeled content and may expose children and adolescents to messages that may negatively influence their perceptions, attitudes, and behaviors.

  19. Finding and Improving the Key-Frames of Long Video Sequences for Face Recognition

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2010-01-01

    Face recognition systems are very sensitive to the quality and resolution of their input face images. This makes such systems unreliable when working with long surveillance video sequences without employing some selection and enhancement algorithms. On the other hand, processing all the frames of such video sequences by any enhancement or even face recognition algorithm is demanding. Thus, there is a need for a mechanism to summarize the input video sequence to a set of key-frames and then apply an enhancement algorithm to this subset. This paper presents a system doing exactly this. The system uses face quality assessment to select the key-frames and a hybrid super-resolution to enhance the face image quality. The suggested system, which employs a linear associator face recognizer to evaluate the enhanced results, has been tested on real surveillance video sequences and the experimental results...

  20. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.

  1. High frame rate imaging based photometry

    DEFF Research Database (Denmark)

    Harpsøe, Kennet Bomann West; Jørgensen, Uffe Gråe; Andersen, Michael Ingemann

    2012-01-01

    an EMCCD is not normally distributed. Also, the readout process generates spurious charges in any CCD, but in EMCCD data, these charges are visible as opposed to the conventional CCD. Furthermore we aim to eliminate the photon waste associated with lucky imaging by combining this method with shift...... of the photometry, corrected frames of a crowded field are reduced with a PSF fitting photometry package, where a lucky image is used as a reference. We find that it is possible to develop an algorithm that elegantly reduces EMCCD data and produces stable photometry at the 1% level in an extremely crowded field....

  2. Beyond the frame rate: measuring high-frequency fluctuations with light-intensity modulation.

    Science.gov (United States)

    Wong, Wesley P; Halvorsen, Ken

    2009-02-01

    Power-spectral-density measurements of any sampled signal are typically restricted by both acquisition rate and frequency response limitations of instruments, which can be particularly prohibitive for video-based measurements. We have developed a new method called intensity modulation spectral analysis that circumvents these limitations, dramatically extending the effective detection bandwidth. We demonstrate this by video tracking an optically trapped microsphere while oscillating an LED illumination source. This approach allows us to quantify fluctuations of the microsphere at frequencies over 10 times higher than the Nyquist frequency, mimicking a significantly higher frame rate.
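The mixing arithmetic behind intensity modulation can be checked numerically: a fluctuation far above the camera's Nyquist limit, multiplied by a modulated light source at a nearby frequency, produces a beat the slow camera can resolve. All frequencies below are illustrative, and the camera is idealized as a point sampler.

```python
import numpy as np

fs_cam = 100.0                       # camera frame rate (Nyquist = 50 Hz)
f_sig, f_mod = 1200.0, 1190.0        # signal and LED modulation, both far above Nyquist
t = np.arange(0, 2.0, 1.0 / 1e6)     # "continuous" time before the camera samples

# The recorded intensity is the product of the fluctuation and the
# modulated illumination; it contains a beat at |f_sig - f_mod| = 10 Hz.
product = np.cos(2 * np.pi * f_sig * t) * np.cos(2 * np.pi * f_mod * t)
frames = product[::int(1e6 / fs_cam)]        # camera samples at fs_cam

spec = np.abs(np.fft.rfft(frames - frames.mean()))
freqs = np.fft.rfftfreq(len(frames), d=1.0 / fs_cam)
print(freqs[np.argmax(spec)])        # spectral peak at the 10 Hz beat
```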

  3. Object Tracking in Frame-Skipping Video Acquired Using Wireless Consumer Cameras

    Directory of Open Access Journals (Sweden)

    Anlong Ming

    2012-10-01

    Full Text Available Object tracking is an important and fundamental task in computer vision and its high-level applications, e.g., intelligent surveillance, motion-based recognition, video indexing, traffic monitoring and vehicle navigation. However, the recent widespread use of wireless consumer cameras often produces low-quality videos with frame skipping, and this makes object tracking difficult. Previous tracking methods, for example, generally depend heavily on object appearance or motion continuity and cannot be directly applied to frame-skipping videos. In this paper, we propose an improved particle filter for object tracking to overcome the frame-skipping difficulties. The novelty of our particle filter lies in using the detection result of erratic motion to ameliorate the transition model for a better trial distribution. Experimental results show that the proposed approach improves the tracking accuracy in comparison with state-of-the-art methods, even when both the object and the consumer camera are in motion.

  4. Experimental Investigation on Minimum Frame Rate Requirements of High-Speed Videoendoscopy for Clinical Voice Assessment.

    Science.gov (United States)

    Deliyski, Dimitar D; Powell, Maria Eg; Zacharias, Stephanie Rc; Gerlach, Terri Treman; de Alarcon, Alessandro

    2015-03-01

    This study investigated the impact of high-speed videoendoscopy (HSV) frame rates on the assessment of nine clinically relevant vocal-fold vibratory features. Fourteen adult patients with voice disorders and 14 adult normal controls were recorded using monochromatic rigid HSV at a rate of 16000 frames per second (fps) and a spatial resolution of 639×639 pixels. The 16000-fps data were downsampled to 16 other rate denominations. Using a paired-comparison design, nine common clinical vibratory features were visually compared between the downsampled and the original images. Three raters reported the thresholds at which: (1) a detectable difference between the two videos was first noticed, and (2) differences between the two videos would result in a change of clinical rating. Results indicated that glottal edge, mucosal wave magnitude and extent, aperiodicity, and contact and loss of contact of the vocal folds were the vibratory features most sensitive to frame rate. Of these, the glottal edge was selected for further analysis due to its higher rating reliability, universal prevalence and consistent definition. Rates of 8000 fps were found to be free from visually perceivable feature degradation, and at 5333 fps degradation was minimal. For rates of 4000 fps and higher, clinical assessments of glottal edge were not affected. Rates of 2000 fps changed the clinical ratings in over 16% of the samples, which could lead to inaccurate functional assessment.
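The rate denominations quoted in this record (8000, 5333, 4000, 2000 fps) are consistent with integer decimation of the 16000-fps master recording, i.e. keeping every k-th frame; a small sketch under that assumption:

```python
def downsample_rates(base_fps=16000, max_factor=8):
    """Frame rates obtainable by keeping every k-th frame of a
    base_fps recording (an assumption about how the downsampled rate
    denominations in the study arise; rounded to whole fps)."""
    return [round(base_fps / k) for k in range(1, max_factor + 1)]

print(downsample_rates())  # → [16000, 8000, 5333, 4000, 3200, 2667, 2286, 2000]
```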

  5. Image Enhancement for High frame-rate Neutron Radiography

    OpenAIRE

    Saito, Y; Ito, D.

    2015-01-01

    High frame-rate neutron radiography has been utilized to investigate two-phase flow in a metallic duct. However, images obtained by high frame-rate neutron radiography suffer from severe statistical noise due to the short exposure time. In this study, a spatio-temporal filter was applied to reduce the noise in the image sequences obtained by high frame-rate neutron radiography. Experiments were performed at the B4 port of the Research Reactor Institute, Kyoto University, which has a thermal...

  6. A Novel Rate Control Scheme for Constant Bit Rate Video Streaming

    Directory of Open Access Journals (Sweden)

    Venkata Phani Kumar M

    2015-08-01

    Full Text Available In this paper, a novel rate control mechanism is proposed for constant bit rate video streaming. The initial quantization parameter used for encoding a video sequence is determined from the average spatio-temporal complexity of the sequence, its resolution and the target bit rate. Simple linear estimation models are then used to predict the number of bits that would be necessary to encode a frame for a given complexity and quantization parameter. The experimental results demonstrate that the proposed rate control mechanism significantly outperforms the existing rate control scheme in the Joint Model (JM) reference software in terms of Peak Signal to Noise Ratio (PSNR) and consistent perceptual visual quality while achieving the target bit rate. Furthermore, the proposed scheme is validated through implementation on a miniature test-bed.
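The linear bit-estimation idea in this record can be sketched as follows; the model coefficients, the quantization-step grid and the complexity value are hypothetical placeholders, not the paper's fitted values.

```python
def predict_bits(complexity, qstep, a=120.0, b=40.0):
    """Linear estimation model in the spirit of the record: predicted
    bits for a frame grow with spatio-temporal complexity and shrink
    with the quantization step. Coefficients a, b are hypothetical and
    would be fit/updated from previously coded frames."""
    return a * complexity / qstep + b

def choose_qstep(complexity, target_bits, qsteps=range(1, 52)):
    """Pick the smallest quantization step whose predicted frame size
    fits the per-frame bit budget of a constant-bit-rate stream."""
    for q in qsteps:
        if predict_bits(complexity, q) <= target_bits:
            return q
    return max(qsteps)

print(choose_qstep(complexity=8.0, target_bits=200.0))  # → 6
```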

  7. Determination of Optimum Frame Rates for Observation of Construction Operations from Time-Lapse Movies

    Directory of Open Access Journals (Sweden)

    Y. M. Ibrahim

    2012-07-01

    Construction professionals have been using time-lapse movies in monitoring construction operations. However, some amount of detail is always lost in the interval between two consecutive frames of a time-lapse movie. This poses the question: by how much can the frame rate be lowered from the standard 30 fps (frames per second) while still allowing accurate observation of construction operations from a time-lapse movie? This paper addresses the problem by establishing the optimum frame rates for observation of activities related to mortar mixing and block handling. The activities were first recorded at the standard rate of 30 fps. Using the Adobe Premiere Pro video editing software, the recordings were then segregated into still images from which 15 different time-lapse movies of various time intervals were generated. The movies were then shown to 25 construction managers. A structured questionnaire was employed to capture the level of accuracy with which the construction managers could interpret the job-site situation from each movie. The results suggest that 1 fpm (frame per minute) is sufficient for accurate tracking of labourers involved in mortar mixing, while 1 frame every 20 seconds is sufficient for accurate identification of the number of cement bags used. However, for tracking the number of blocks off-loaded, and those damaged, 1 frame every 2 seconds is required.

  8. A Blind Video Watermarking Scheme Robust To Frame Attacks Combined With MPEG2 Compression

    Directory of Open Access Journals (Sweden)

    C. Cruz-Ramos

    2010-12-01

    In this paper, we propose a robust digital video watermarking scheme with a completely blind extraction process, in which neither the original video data, the original watermark, nor any other information derived from them is required to retrieve the embedded watermark. The proposed algorithm embeds 2D binary visually recognizable patterns, such as company trademarks and owner's logotypes, in the DWT domain of the video frames for copyright protection. Before the embedding process, only two numerical keys are required to transform the watermark data into a noise-like pattern using the chaotic mixing method, which helps to increase security. The main advantages of the proposed scheme are its completely blind detection, its robustness against common and combined video attacks, and its low-complexity implementation. The combined attacks consist of MPEG-2 compression together with common video attacks such as noise contamination, collusion attacks, frame dropping and frame swapping. Extensive simulation results also show that the watermark imperceptibility and robustness outperform other previously reported methods. The watermark data extracted from the watermarked video sequences remain clear enough even after the watermarked video has suffered several attacks.

  9. Digitized reality: Effects of high frame rate on visual perception

    OpenAIRE

    Loertscher, Miriam Laura; Iseli, Christian

    2016-01-01

    The digital revolution changed film production and its aesthetics in many ways. Although motion is a defining feature of moving images, it is also one of their most problematic aspects because of blurred images or other signal processing artifacts. An artistic research project was conducted to test the effects of high frame rates (HFR) on visual perception. Typical camera movements were recorded in different frame rates (24 / 48 / 96 fps) to generate test sequences for a cinema experiment. 69...

  10. Automatic 3D face synthesis using single 2D video frame

    OpenAIRE

    Sheng, Y; Sadka, AH; Kondoz, KM

    2004-01-01

    3D face synthesis has been extensively used in many applications over the last decade. Although many methods have been reported, automatic 3D face synthesis from a single video frame still remains unsolved. An automatic 3D face synthesis algorithm is proposed, which resolves a number of existing bottlenecks.

  11. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy.

    Science.gov (United States)

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md Shamin; Wahid, Khan A

    2015-04-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At the receiver end, the colour information is extracted from the colour frame and then added to colourise the grey-scale frames. After a certain number of grey-scale frames, another colour frame is sent followed by the same number of grey-scale frames. This process is repeated until the end of the video sequence to maintain the colour similarity. As a result, over 50% of RF transmission power can be saved using the proposed scheme, which will eventually lead to a battery life extension of the capsule by 4-7 h. The reproduced colour images have been evaluated both statistically and subjectively by professional gastroenterologists. The algorithm is finally implemented using a WCE prototype and the performance is validated using an ex-vivo trial.
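
The colour-then-grey transmission scheme above can be sketched as follows: build a luminance-to-chroma lookup from the most recent colour frame, then reuse it to colourise the grey-scale frames that follow. This construction is a deliberately simplified assumption; the paper's dictionary-based scheme is more elaborate than a per-level mean:

```python
import numpy as np

def build_colour_dictionary(y_ref, cb_ref, cr_ref):
    """Map each luminance level in the colour frame to its mean chroma."""
    dictionary = {}
    for level in np.unique(y_ref):
        mask = y_ref == level
        dictionary[int(level)] = (float(cb_ref[mask].mean()),
                                  float(cr_ref[mask].mean()))
    return dictionary

def colourise(y_grey, dictionary, default=(128.0, 128.0)):
    """Add chroma to a grey-scale frame; unseen levels fall back to neutral."""
    cb = np.empty(y_grey.shape)
    cr = np.empty(y_grey.shape)
    for (i, j), level in np.ndenumerate(y_grey):
        cb[i, j], cr[i, j] = dictionary.get(int(level), default)
    return cb, cr
```

The RF saving comes from the transmitter sending only the Y plane for most frames; the receiver regenerates Cb/Cr locally from the dictionary.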

  12. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy

    Science.gov (United States)

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md. Shamin

    2015-01-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At the receiver end, the colour information is extracted from the colour frame and then added to colourise the grey-scale frames. After a certain number of grey-scale frames, another colour frame is sent followed by the same number of grey-scale frames. This process is repeated until the end of the video sequence to maintain the colour similarity. As a result, over 50% of RF transmission power can be saved using the proposed scheme, which will eventually lead to a battery life extension of the capsule by 4–7 h. The reproduced colour images have been evaluated both statistically and subjectively by professional gastroenterologists. The algorithm is finally implemented using a WCE prototype and the performance is validated using an ex-vivo trial. PMID:26609405

  13. Violence in E-rated video games.

    Science.gov (United States)

    Thompson, K M; Haninger, K

    2001-08-01

    Children's exposure to violence, alcohol, tobacco and other substances, and sexual messages in the media is a source of public health concern; however, content in video games commonly played by children has not been quantified. To quantify and characterize the depiction of violence, alcohol, tobacco and other substances, and sex in video games rated E (for "Everyone"), analogous to the G rating of films, which suggests suitability for all audiences. We created a database of all existing E-rated video games available for rent or sale in the United States by April 1, 2001, to identify the distribution of games by genre and to characterize the distribution of content descriptors associated with these games. We played and assessed the content of a convenience sample of 55 E-rated video games released for major home video game consoles between 1985 and 2000. Game genre; duration of violence; number of fatalities; types of weapons used; whether injuring characters or destroying objects is rewarded or is required to advance in the game; depiction of alcohol, tobacco and other substances; and sexual content. Based on analysis of the 672 current E-rated video games played on home consoles, 77% were in sports, racing, or action genres and 57% did not receive any content descriptors. We found that 35 of the 55 games we played (64%) involved intentional violence for an average of 30.7% of game play (range, 1.5%-91.2%), and we noted significant differences in the amount of violence among game genres. Injuring characters was rewarded or required for advancement in 33 games (60%). The presence of any content descriptor for violence (n = 23 games) was significantly correlated with the presence of intentional violence in the game (at a 5% significance level based on a 2-sided Wilcoxon rank-sum test, t(53) = 2.59). Notably, 14 of 32 games (44%) that did not receive a content descriptor for violence contained acts of violence. Action and shooting games led to the largest numbers of …

  14. Variable Frame Rate and Length Analysis for Data Compression in Distributed Speech Recognition

    DEFF Research Database (Denmark)

    Kraljevski, Ivan; Tan, Zheng-Hua

    2014-01-01

    at the signal level, and then increases the length of the selected frame according to the number of non-selected preceding frames to find the right time-frequency resolution at the frame level. It produces high frame rate and small frame length in rapidly changing regions and low frame rate and large frame...

  15. Capsule endoscopy capture rate: Has 4 frames-per-second any impact over 2 frames-per-second?

    Science.gov (United States)

    Fernandez-Urien, Ignacio; Carretero, Cristina; Borobio, Erika; Borda, Ana; Estevez, Emilio; Galter, Sara; Gonzalez-Suarez, Begoña; Gonzalez, Benito; Lujan, Marisol; Martinez, Jose Luis; Martínez, Vanessa; Menchén, Pedro; Navajas, Javier; Pons, Vicente; Prieto, Cesar; Valle, Julio

    2014-01-01

    AIM: To compare the current capsule and a new prototype at 2 and 4 frames-per-second, respectively, in terms of clinical and therapeutic impact. METHODS: One hundred patients with an indication for capsule endoscopy were included in the study. All procedures were performed with the new device (SB24). After an exhaustive evaluation of the SB24 videos, they were then converted to “SB2-like” videos for their evaluation. Findings, frames per finding, and clinical and therapeutic impact derived from video visualization were analyzed. The kappa index for interobserver agreement and χ2 and Student’s t tests for qualitative/quantitative variables, respectively, were used. Values of P under 0.05 were considered statistically significant. RESULTS: Eighty-nine out of 100 cases included in the study were ultimately included in the analysis. The SB24 videos detected the anatomical landmarks (Z-line and duodenal papilla) and lesions in more patients than the “SB2-like” videos. On the other hand, the SB24 videos detected more frames per landmark/lesion than the “SB2-like” videos. However, these differences were not statistically significant (P > 0.05). Both clinical and therapeutic impacts were similar between SB24 and “SB2-like” videos (K = 0.954). The time spent by readers was significantly higher for SB24 video visualization (P < 0.05) than for “SB2-like” videos when all images captured by the capsule were considered. However, these differences become non-significant if we only take into account small bowel images (P > 0.05). CONCLUSION: More frames-per-second detect more landmarks, lesions, and frames per landmark/lesion, but are time consuming and have a very low impact on clinical and therapeutic management. PMID:25339834

  16. Distant Measurement of Plethysmographic Signal in Various Lighting Conditions Using Configurable Frame-Rate Camera

    Directory of Open Access Journals (Sweden)

    Przybyło Jaromir

    2016-12-01

    Videoplethysmography is currently recognized as a promising noninvasive heart rate measurement method, advantageous for ubiquitous monitoring of humans in natural living conditions. Although the method is considered for application in several areas including telemedicine, sports and assisted living, its dependence on lighting conditions and camera performance is still not investigated enough. In this paper we report on research into various image acquisition aspects including the lighting spectrum, frame rate and compression. In the experimental part, we recorded five video sequences in various lighting conditions (fluorescent artificial light, dim daylight, infrared light, incandescent light bulb) using a programmable frame rate camera and a pulse oximeter as the reference. For the video sequence-based heart rate measurement we implemented a pulse detection algorithm based on the power spectral density, estimated using Welch’s technique. The results showed that lighting conditions and selected video camera settings, including compression and the sampling frequency, influence the heart rate detection accuracy. The average heart rate error also varies, from 0.35 beats per minute (bpm) for fluorescent light to 6.6 bpm for dim daylight.
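
The spectral pulse-detection step can be sketched as picking the dominant peak of the power spectrum within the physiological pulse band and converting it to beats per minute. To keep the sketch dependency-free it uses a plain Hann-windowed periodogram rather than the full Welch averaging the study used:

```python
import numpy as np

def estimate_heart_rate(signal, fs, lo=0.75, hi=4.0):
    """Estimate pulse rate (bpm) as the dominant spectral peak in 45-240 bpm.

    `signal` is the mean skin-pixel intensity over time, sampled at `fs` Hz.
    """
    x = np.asarray(signal, dtype=float)
    x -= x.mean()                              # remove the DC component
    window = np.hanning(len(x))                # taper to reduce leakage
    spectrum = np.abs(np.fft.rfft(x * window)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)       # restrict to plausible pulse rates
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak
```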

  17. Intelligent Stale-Frame Discards for Real-Time Video Streaming over Wireless Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Sheu Tsang-Ling

    2009-01-01

    This paper presents intelligent early packet discards (I-EPD) for real-time video streaming over a multihop wireless ad hoc network. In a multihop wireless ad hoc network, the quality of transferred real-time video streams can be seriously degraded, since every intermediate node (IN), which functions like a relay device, does not possess a large buffer or sufficient bandwidth. Even worse, a selected relay node could leave or power off unexpectedly, which breaks the route to the destination. Thus, a stale video frame is useless even if it can reach the destination after network traffic becomes smooth or a failed route is reconfigured. In the proposed I-EPD, an IN can intelligently determine whether a buffered video packet should be discarded early. For the purpose of validation, we implemented the I-EPD on Linux-based embedded systems. Via comparisons of performance metrics (packet/frame discard ratios, PSNR, etc.), we demonstrate that video quality over a wireless ad hoc network can be substantially improved and unnecessary bandwidth wastage greatly reduced.
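
The early-discard idea can be sketched as a queue filter at an intermediate node: a frame that has already waited longer than its useful lifetime is dropped rather than forwarded. The simple age-threshold policy below is an assumption for illustration; the paper's I-EPD applies its own intelligence about frame types and network state:

```python
def should_discard(enqueue_time, now, max_queueing_delay):
    """Discard a buffered video packet once it has waited too long to be useful."""
    return (now - enqueue_time) > max_queueing_delay

def filter_queue(queue, now, max_queueing_delay):
    """Split a list of (enqueue_time, frame) into (kept, discarded) frames."""
    kept, discarded = [], []
    for enqueue_time, frame in queue:
        (discarded if should_discard(enqueue_time, now, max_queueing_delay)
         else kept).append(frame)
    return kept, discarded
```

Dropping stale frames at the relay, rather than at the receiver, is what saves the downstream bandwidth the abstract refers to.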

  18. Intelligent Stale-Frame Discards for Real-Time Video Streaming over Wireless Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Yung-Shih Chi

    2009-01-01

    This paper presents intelligent early packet discards (I-EPD) for real-time video streaming over a multihop wireless ad hoc network. In a multihop wireless ad hoc network, the quality of transferred real-time video streams can be seriously degraded, since every intermediate node (IN), which functions like a relay device, does not possess a large buffer or sufficient bandwidth. Even worse, a selected relay node could leave or power off unexpectedly, which breaks the route to the destination. Thus, a stale video frame is useless even if it can reach the destination after network traffic becomes smooth or a failed route is reconfigured. In the proposed I-EPD, an IN can intelligently determine whether a buffered video packet should be discarded early. For the purpose of validation, we implemented the I-EPD on Linux-based embedded systems. Via comparisons of performance metrics (packet/frame discard ratios, PSNR, etc.), we demonstrate that video quality over a wireless ad hoc network can be substantially improved and unnecessary bandwidth wastage greatly reduced.

  19. Complex pulsing schemes for high frame rate imaging

    DEFF Research Database (Denmark)

    Misaridis, Thanassis; Fink, Mathias; Jensen, Jørgen Arendt

    2002-01-01

    High frame rate ultrasound imaging can be achieved by simultaneous transmission of multiple focused beams along different directions. However, image quality degrades due to the interference among beams. An alternative approach is to transmit spherical waves of a basic short pulse with frequency coding … With the proposed imaging strategy of pulse train excitation, a whole image can be formed with only two emissions, making it possible to obtain high quality images at a frame rate 20 to 25 times higher than that of conventional phased array imaging.

  20. Perancangan Video Motion Graphic Tentang Pentingnya Rating Dalam Video Game Bagi Orangtua (Design of a Motion Graphic Video About the Importance of Video Game Ratings for Parents)

    OpenAIRE

    Nata, Vincent Ferian; Hagijanto, Andrian Dektisa; Christianna, Aniendya Christianna

    2016-01-01

    Video games are an entertainment medium enjoyed by many segments of society, old and young. Video games carry a wide variety of content tailored to their target audiences. Yet children sometimes play video games whose content is not appropriate for their age, even though the content of video games is regulated through a rating system. This happens because of a lack of parental supervision of, and understanding about, video games. The authors therefore created a multi...

  1. Space-time encoding for high frame rate ultrasound imaging

    DEFF Research Database (Denmark)

    Misaridis, Thanssis; Jensen, Jørgen Arendt

    2002-01-01

    Frame rate in ultrasound imaging can be dramatically increased by using sparse synthetic transmit aperture (STA) beamforming techniques. The two main drawbacks of the method are the low signal-to-noise ratio (SNR) and the motion artifacts that degrade the image quality. In this paper we propose … , due to the orthogonality of the temporal encoded wavefronts. Thus, with this method, the frame rate is doubled compared to previous systems. Another advantage is the utilization of temporal codes, which are more robust to attenuation. With the proposed technique it is possible to obtain images …

  2. Very low bit rate video coding standards

    Science.gov (United States)

    Zhang, Ya-Qin

    1995-04-01

    Very low bit rate video coding has received considerable attention in academia and industry in terms of both coding algorithms and standards activities. In addition to the earlier ITU-T efforts on H.320 standardization for video conferencing from 64 kbps to 1.544 Mbps in the ISDN environment, ITU-T/SG15 has formed an expert group on low bitrate coding (LBC) for visual telephony below 64 kbps. The ITU-T/SG15/LBC work consists of two phases: near-term and long-term. The near-term standard, H.32P/N, based on existing compression technologies, mainly addresses the issues related to visual telephony below 28.8 kbps, the V.34 modem rate used in the existing Public Switched Telephone Network (PSTN). H.32P/N will be technically frozen in January '95. The long-term standard, H.32P/L, relying on fundamentally new compression technologies with much improved performance, will address video telephony in both PSTN and mobile environments. ISO/IEC SC29/WG11, after its highly visible and successful MPEG-1/2 work, is starting to focus on the next-generation audiovisual multimedia coding standard, MPEG-4. With the recent change of direction, MPEG-4 intends to provide an audiovisual coding standard allowing for interactivity, high compression, and/or universal accessibility, with a high degree of flexibility and extensibility. This paper briefly summarizes these ongoing standards activities undertaken by ITU-T/LBC and ISO/MPEG-4 as of December 1994.

  3. Key Frame Extraction for Text Based Video Retrieval Using Maximally Stable Extremal Regions

    Directory of Open Access Journals (Sweden)

    Werachard Wattanarachothai

    2015-04-01

    This paper presents a new approach for a text-based video content retrieval system. The proposed scheme consists of three main processes: key frame extraction, text localization and keyword matching. For the key-frame extraction, we propose a Maximally Stable Extremal Region (MSER)-based feature oriented to segmenting the video into shots with different text contents. In the text localization process, in order to form the text lines, the MSERs in each key frame are clustered based on their similarity in position, size, color, and stroke width. Then, the Tesseract OCR engine is used for recognizing the text regions. In this work, to improve the recognition results, we input four images obtained from different pre-processing methods to the Tesseract engine. Finally, the target keyword for querying is matched with the OCR results based on an approximate string search scheme. The experiments show that, by using the MSER feature, the videos can be segmented using an efficient number of shots, providing better precision and recall in comparison with sum-of-absolute-difference and edge-based methods.
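
The clustering step that forms text lines from detected regions can be sketched with bounding boxes alone: boxes with similar vertical centres and heights are grouped onto one line. This is a simplification — the paper also uses colour and stroke-width cues, which are omitted here, and the tolerances below are assumed values:

```python
def group_into_lines(boxes, y_tol=0.5, h_tol=0.4):
    """Cluster character bounding boxes (x, y, w, h) into text lines.

    A box joins an existing line if its vertical centre and height are
    close to those of the line's last box; otherwise it starts a new line.
    """
    lines = []
    for box in sorted(boxes, key=lambda b: b[0]):   # left-to-right order
        x, y, w, h = box
        cy = y + h / 2
        for line in lines:
            lx, ly, lw, lh = line[-1]
            lcy = ly + lh / 2
            if (abs(cy - lcy) < y_tol * max(h, lh)
                    and abs(h - lh) < h_tol * max(h, lh)):
                line.append(box)
                break
        else:
            lines.append([box])
    return lines
```

In the full pipeline the MSER detector (e.g. OpenCV's) would supply the boxes, and each grouped line would then be cropped and passed to the OCR engine.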

  4. A model of R-D performance evaluation for Rate-Distortion-Complexity evaluation of H.264 video coding

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren

    2007-01-01

    This paper considers a method for evaluation of the Rate-Distortion-Complexity (R-D-C) performance of video coding. A statistical model of the transformed coefficients is used to estimate the Rate-Distortion (R-D) performance. A model framework for the rate, distortion and slope of the R-D curve for inter- and intra-frames is presented. Assumptions are given for analyzing an R-D model for fast R-D-C evaluation. The theoretical expressions are combined with H.264 video coding, and confirmed by experimental results. The complexity framework is applied to the integer motion estimation.

  5. Content and ratings of teen-rated video games.

    Science.gov (United States)

    Haninger, Kevin; Thompson, Kimberly M

    2004-02-18

    Children's exposure to violence, blood, sexual themes, profanity, substances, and gambling in the media remains a source of public health concern. However, content in video games played by older children and adolescents has not been quantified or compared with the rating information provided to consumers by the Entertainment Software Rating Board (ESRB). To quantify and characterize the content in video games rated T (for "Teen") and to measure the agreement between the content observed in game play and the ESRB-assigned content descriptors displayed on the game box. We created a database of all 396 T-rated video game titles released on the major video game consoles in the United States by April 1, 2001, to identify the distribution of games by genre and to characterize the distribution of ESRB-assigned content descriptors. We randomly sampled 80 video game titles (which included 81 games because 1 title included 2 separate games), played each game for at least 1 hour, quantitatively assessed the content, and compared the content we observed with the content descriptors assigned by the ESRB. Depictions of violence, blood, sexual themes, gambling, and alcohol, tobacco, or other drugs; whether injuring or killing characters is rewarded or is required to advance in the game; characterization of gender associated with sexual themes; and use of profanity in dialogue, lyrics, or gestures. Analysis of all content descriptors assigned to the 396 T-rated video game titles showed 373 (94%) received content descriptors for violence, 102 (26%) for blood, 60 (15%) for sexual themes, 57 (14%) for profanity, 26 (7%) for comic mischief, 6 (2%) for substances, and none for gambling. In the random sample of 81 games we played, we found that 79 (98%) involved intentional violence for an average of 36% of game play, 73 (90%) rewarded or required the player to injure characters, 56 (69%) rewarded or required the player to kill, 34 (42%) depicted blood, 22 (27%) depicted sexual themes …

  6. High-frame-rate echocardiography with reduced sidelobe level.

    Science.gov (United States)

    Hasegawa, Hideyuki; Kanai, Hiroshi

    2012-11-01

    Echocardiography has become an indispensable modality for diagnosis of the heart. It enables observation of the shape of the heart and estimation of global heart function based on B-mode and M-mode imaging. Methods for echocardiographic estimation of myocardial strain and strain rate have also been developed to evaluate regional heart function. Furthermore, it has been recently shown that echocardiographic measurements of transmural transition of myocardial contraction/relaxation and propagation of vibration caused by closure of the heart valve would be useful for evaluation of myocardial function and viscoelasticity. However, such measurements require a frame rate (typically >200 Hz) much higher than that achieved by conventional ultrasonic diagnostic equipment. We have recently realized a high frame rate of about 300 Hz with a full field of view of 90° using diverging transmit beams and parallel receive beamforming. Although high-frame-rate imaging was made possible by this method, the side lobe level was slightly larger than that of the conventional method. To reduce the side lobe level, phase coherence imaging has recently been developed. Using this method, the spatial resolution is improved and the side lobe level is also reduced. However, speckle-like echoes, for example, echoes from the inside of the heart wall, are also suppressed. In the present study, a method for reducing the side lobe level while preserving speckle-like echoes was developed. The side lobe level was evaluated using a wire phantom. The side lobe level of the high-frame-rate imaging using unfocused diverging beams was improved by 13.3 dB by the proposed method. In in vivo measurements, a B-mode image of the heart of a 23-year-old healthy male could be obtained while preserving the speckle pattern in the heart wall at a frame rate of 316 Hz with a full field of view of 90°.
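
The coherence-based sidelobe suppression the abstract builds on can be illustrated with the standard coherence factor, which weights each beamformed sample by how well the per-element signals agree in phase. Note this is textbook background, not the paper's proposed method — the paper develops a modified weighting that also preserves speckle-like echoes, which this sketch does not reproduce:

```python
import numpy as np

def coherence_factor(channel_samples):
    """Coherence factor of per-element receive samples at one image point.

    CF = |sum_i s_i|^2 / (N * sum_i |s_i|^2): 1 for perfectly coherent
    (on-axis) echoes, falling toward 0 for incoherent sidelobe energy.
    """
    s = np.asarray(channel_samples, dtype=complex)
    denom = len(s) * np.sum(np.abs(s) ** 2)
    return float(np.abs(np.sum(s)) ** 2 / denom) if denom else 0.0
```

Multiplying each beamformed pixel by its CF darkens sidelobe-dominated regions; the drawback noted in the abstract is that partially incoherent speckle from tissue is attenuated too.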

  7. High Frame Rate Synthetic Aperture 3D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Holbek, Simon; Stuart, Matthias Bo

    2016-01-01

    , current volumetric ultrasonic flow methods are limited to one velocity component or restricted to a reduced field of view (FOV), e.g. fixed imaging planes, in exchange for higher temporal resolutions. To solve these problems, a previously proposed accurate 2-D high frame rate vector flow imaging (VFI...

  8. Minimum Variance Beamforming for High Frame-Rate Ultrasound Imaging

    DEFF Research Database (Denmark)

    Holfort, Iben Kraglund; Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

    emissions. An increase in resolution is seen when using only one single emission. Furthermore, it is seen that an increase of the number of emissions does not alter the FWHM. Thus, the MV beamformer introduces the possibility for high frame-rate imaging with increased resolution....

  9. Seafloor video footage and still-frame grabs from U.S. Geological Survey cruises in Hawaiian nearshore waters

    Science.gov (United States)

    Gibbs, Ann E.; Cochran, Susan A.; Tierney, Peter W.

    2013-01-01

    Underwater video footage was collected in Hawaiian nearshore waters during four USGS cruises. This release presents the video footage from those cruises and more than 10,200 still images extracted from the videos, including still frames from every 10 seconds along transect lines, and still frames showing both an overview and a near-bottom view from fixed stations. Environmental Systems Research Institute (ESRI) shapefiles of individual video and still-image locations, and Google Earth kml files with explanatory text and links to the video and still images, are included. This report documents the various camera systems and methods used to collect the videos, and the techniques and software used to convert the analog video tapes into digital data in order to process the images for optimum viewing and to extract the still images, along with a brief summary of each survey cruise.

  10. Rate-distortion optimised video transmission using pyramid vector quantisation.

    Science.gov (United States)

    Bokhari, Syed; Nix, Andrew R; Bull, David R

    2012-08-01

    Conventional video compression relies on interframe prediction (motion estimation), intra frame prediction and variable-length entropy encoding to achieve high compression ratios but, as a consequence, produces an encoded bitstream that is inherently sensitive to channel errors. In order to ensure reliable delivery over lossy channels, it is necessary to invoke various additional error detection and correction methods. In contrast, techniques such as Pyramid Vector Quantisation have the ability to prevent error propagation through the use of fixed length codewords. This paper introduces an efficient rate distortion optimisation algorithm for intra-mode PVQ which offers similar compression performance to intra H.264/AVC and Motion JPEG 2000 while offering inherent error resilience. The performance of our enhanced codec is evaluated for HD content in the context of a realistic (IEEE 802.11n) wireless environment. We show that PVQ provides high tolerance to corrupted data compared to the state of the art while obviating the need for complex encoding tools.
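
The fixed-length-codeword property that gives PVQ its error resilience comes from enumerating the points of the pyramid lattice: every length-L integer vector whose absolute values sum to K can be indexed with the same number of bits. The count follows Fischer's recurrence, shown below as standard background rather than as this paper's specific codec:

```python
from functools import lru_cache
from math import ceil, log2

@lru_cache(maxsize=None)
def pyramid_points(l, k):
    """Number of integer vectors of dimension l with sum of |x_i| equal to k.

    Fischer's recurrence: N(l, k) = N(l-1, k) + N(l-1, k-1) + N(l, k-1),
    with N(l, 0) = 1 and N(0, k) = 0 for k > 0.
    """
    if k == 0:
        return 1
    if l == 0:
        return 0
    return (pyramid_points(l - 1, k)
            + pyramid_points(l - 1, k - 1)
            + pyramid_points(l, k - 1))

def codeword_bits(l, k):
    """Fixed codeword length (bits) for an (l, k) pyramid codebook."""
    return ceil(log2(pyramid_points(l, k)))
```

Because every codeword has the same length, a bit error corrupts only the vector it lands in instead of desynchronizing the whole bitstream, unlike variable-length entropy codes.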

  11. Scanning probe microscopes go video rate and beyond

    Science.gov (United States)

    Rost, M. J.; Crama, L.; Schakel, P.; van Tol, E.; van Velzen-Williams, G. B. E. M.; Overgauw, C. F.; ter Horst, H.; Dekker, H.; Okhuijsen, B.; Seynen, M.; Vijftigschild, A.; Han, P.; Katan, A. J.; Schoots, K.; Schumm, R.; van Loo, W.; Oosterkamp, T. H.; Frenken, J. W. M.

    2005-05-01

    In this article we introduce a, video-rate, control system that can be used with any type of scanning probe microscope, and that allows frame rates up to 200images/s. These electronics are capable of measuring in a fast, completely analog mode as well as in the more conventional digital mode. The latter allows measurements at low speeds and options, such as, e.g., atom manipulation, current-voltage spectroscopy, or force-distance curves. For scanning tunneling microscope (STM) application we implemented a hybrid mode between the well-known constant-height and constant-current modes. This hybrid mode not only increases the maximum speed at which the surface can be imaged, but also improves the resolution at lower speeds. Acceptable image quality at high speeds could only be obtained by pushing the performance of each individual part of the electronics to its limit: we developed a preamplifier with a bandwidth of 600kHz, a feedback electronics with a bandwidth of 1MHz, a home-built bus structure for the fast data transfer, fast analog to digital converters, and low-noise drivers. Future improvements and extensions to the control electronics can be realized easily and quickly, because of its open architecture with its modular plug-in units. In the second part of this article we show our high-speed results. The ultrahigh vacuum application of these control electronics on our (UHV)-STM enabled imaging speeds up to 0.3mm/s, while still obtaining atomic step resolution. At high frame rates, the images suffered from noticeable distortions, which we have been able to analyze by virtue of the unique access to the error (dZ) signal. The distortions have all been associated with mechanical resonances in the scan head of the UHV-STM. In order to reduce such resonance effects, we have designed and built a scan head with high resonance frequencies (⩾64kHz), especially for the purpose of testing the fast electronics. Using this scanner we have reached video-rate imaging speeds

  12. Variable frame rate analysis for automatic speech recognition

    Science.gov (United States)

    Tan, Zheng-Hua

    2007-09-01

    In this paper we investigate the use of variable frame rate (VFR) analysis in automatic speech recognition (ASR). First, we review the VFR technique and analyze its behavior. It is experimentally shown that VFR improves ASR performance for signals with low signal-to-noise ratios, since it generates improved acoustic models and substantially reduces insertion and substitution errors, although it may increase deletion errors. It is also underlined that the match between the average frame rate and the number of hidden Markov model states is critical in implementing VFR. Second, we analyze an effective VFR method that uses a cumulative, weighted cepstral-distance criterion for frame selection and present a revision of it. Last, the revised VFR method is combined with spectral- and cepstral-domain enhancement methods, including minimum statistics noise estimation (MSNE) based spectral subtraction and the cepstral mean subtraction, variance normalization and ARMA filtering (MVA) process. Experiments on the Aurora 2 database confirm that VFR is highly complementary to the enhancement methods. Enhancement of speech both facilitates the frame selection in VFR and provides de-noised speech for recognition.
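    The frame-selection idea above can be sketched in a few lines. This is a minimal illustration of distance-based variable frame rate analysis, not the paper's exact cumulative SNR-weighted cepstral criterion; the plain Euclidean distance and the threshold value are assumptions:

```python
import numpy as np

def select_frames(features, threshold):
    """Variable frame rate selection: keep a frame whenever the
    accumulated spectral distance since the last kept frame exceeds
    a threshold.  `features` is (n_frames, n_coeffs)."""
    kept = [0]                      # always keep the first frame
    accumulated = 0.0
    for t in range(1, len(features)):
        # Euclidean distance between consecutive feature vectors
        accumulated += np.linalg.norm(features[t] - features[t - 1])
        if accumulated >= threshold:
            kept.append(t)
            accumulated = 0.0       # reset after selecting a frame
    return kept

# A signal that changes quickly at the start and is then steady:
feats = np.array([[0.0], [1.0], [2.0], [2.0], [2.0], [2.0]])
print(select_frames(feats, threshold=1.0))  # -> [0, 1, 2]
```

    Rapidly changing regions keep more frames and steady regions fewer, which is how the average frame rate can be tuned to match the number of HMM states.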

  13. JPEG XS-based frame buffer compression inside HEVC for power-aware video compression

    Science.gov (United States)

    Willème, Alexandre; Descampe, Antonin; Rouvroy, Gaël; Pellegrin, Pascal; Macq, Benoit

    2017-09-01

    With the emergence of Ultra-High Definition video, reference frame buffers (FBs) inside HEVC-like encoders and decoders have to sustain huge bandwidth. The power consumed by these external memory accesses accounts for a significant share of the codec's total consumption. This paper describes a solution that significantly decreases the FB's bandwidth, making the HEVC encoder more suitable for use in power-aware applications. The proposed prototype consists of integrating an embedded lightweight, low-latency and visually lossless codec at the FB interface inside HEVC in order to store each reference frame as several compressed bitstreams. As opposed to previous works, our solution compresses large picture areas (ranging from a CTU to a frame stripe) independently in order to better exploit the spatial redundancy found in the reference frame. This work investigates two data reuse schemes, namely Level-C and Level-D. Our approach is made possible thanks to simplified motion estimation mechanisms that further reduce the FB's bandwidth while inducing very low quality degradation. In this work, we integrated JPEG XS, the upcoming standard for lightweight low-latency video compression, inside HEVC. In practice, the proposed implementation is based on HM 16.8 and on XSM 1.1.2 (the JPEG XS Test Model). Through this paper, the architecture of our HEVC with JPEG XS-based frame buffer compression is described. Then its performance is compared to the HM encoder. Compared to previous works, our prototype provides significant external memory bandwidth reduction. Depending on the reuse scheme, one can expect bandwidth and FB size reductions ranging from 50% to 83.3% without significant quality degradation.
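    For intuition, the bandwidth figures quoted above follow directly from the compression ratio at which reference frames are stored: a ratio of 2:1 saves 50% of the traffic and 6:1 saves 83.3%. This is simple arithmetic, not a formula taken from the paper:

```python
def bandwidth_reduction(compression_ratio):
    """Fraction of frame-buffer traffic saved when reference frames
    are stored at a given compression ratio r:1."""
    return 1.0 - 1.0 / compression_ratio

for ratio in (2, 4, 6):
    print(f"{ratio}:1 compression -> {bandwidth_reduction(ratio):.1%} less bandwidth")
```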

  14. Mode shape analysis using a commercially available peak store video frame buffer

    Science.gov (United States)

    Snow, Walter L.; Childers, Brooks A.

    1994-01-01

    Time exposure photography, sometimes coupled with strobe illumination, is an accepted method for motion analysis that bypasses frame-by-frame analysis and resynthesis of data. Garden-variety video cameras can now exploit this technique using a unique frame buffer: a non-integrating memory that compares incoming data with that already stored. The device continuously outputs an analog video signal of the stored contents, which can then be redigitized and analyzed using conventional equipment. Historically, photographic time exposures have been used to record the displacement envelope of harmonically oscillating structures to show mode shape. Mode shape analysis is crucial, for example, in aeroelastic testing of wind tunnel models. Aerodynamic, inertial, and elastic forces can couple together, leading to catastrophic failure of a poorly designed aircraft. This paper explores the usefulness of the peak store device as a videometric tool and in particular discusses methods for analyzing a targeted vibrating plate using the 'peak store' in conjunction with calibration methods familiar to the close-range videometry community. Results for the first three normal modes are presented.
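    The peak-store principle, a non-integrating memory that keeps the brighter of the stored and incoming pixel values, can be simulated in a few lines. This is an illustrative sketch, not the commercial device's implementation:

```python
import numpy as np

def peak_store(frames):
    """Non-integrating peak-hold buffer: each stored pixel is replaced
    only when the incoming pixel is brighter, so the result records the
    displacement envelope of a moving target, like a time exposure."""
    stored = np.zeros_like(frames[0])
    for frame in frames:
        stored = np.maximum(stored, frame)   # per-pixel compare-and-keep
    return stored

# A bright spot sweeping across a 1x5 line over three frames:
frames = [np.array([9, 0, 0, 0, 0]),
          np.array([0, 9, 0, 0, 0]),
          np.array([0, 0, 9, 0, 0])]
print(peak_store(frames))   # the whole trajectory survives in one image
```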

  15. Visible light communication using mobile-phone camera with data rate higher than frame rate

    National Research Council Canada - National Science Library

    Chow, Chi-Wai; Chen, Chung-Yen; Chen, Shih-Hao

    2015-01-01

    ...). However, using these CMOS image sensors are challenging. In this work, we propose and demonstrate a VLC link using mobile-phone camera with data rate higher than frame rate of the CMOS image sensor...

  16. Joint variable frame rate and length analysis for speech recognition under adverse conditions

    DEFF Research Database (Denmark)

    Tan, Zheng-Hua; Kraljevski, Ivan

    2014-01-01

    -to-noise (SNR) ratio weighted energy distance and increases the length of the selected frames, according to the number of non-selected preceding frames. It assigns a higher frame rate and a normal frame length to a rapidly changing and high SNR region of a speech signal, and a lower frame rate and an increased...

  17. Compact Beamformer Design with High Frame Rate for Ultrasound Imaging

    Directory of Open Access Journals (Sweden)

    Jun Luo

    2014-04-01

    In the medical field, two-dimensional ultrasound images are widely used in clinical diagnosis. The beamformer is critical in determining the complexity and performance of an ultrasound imaging system. Unlike traditional designs implemented with separate chips, a compact beamformer with 64 effective channels in a single moderate-size Field Programmable Gate Array is presented in this paper. The compactness is achieved by employing receive synthetic aperture, harmonic imaging, time sharing and linear interpolation. In addition, a multi-beam method is used to improve the frame rate of the ultrasound imaging system. Online dynamic configuration is employed to extend the system's flexibility to two kinds of transducers with multiple scanning modes. The design is verified on a prototype scanner board. Simulation results show that on-chip memories can be saved and the frame rate improved in the case of 64 effective channels, which meets the requirements of real-time applications.
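    The receive-side delay-and-sum with linear interpolation mentioned above can be sketched as follows. This is a toy two-channel example with assumed fractional sample delays, not the FPGA implementation:

```python
import numpy as np

def delay_and_sum(rf, delays_samples):
    """Delay-and-sum beamforming for one focal point.
    rf: (n_channels, n_samples) echo data; delays_samples may be
    fractional and are handled by linear interpolation, which avoids
    storing finely oversampled data on chip."""
    n_channels, n_samples = rf.shape
    out = 0.0
    for ch in range(n_channels):
        d = delays_samples[ch]
        i = int(np.floor(d))
        frac = d - i
        if i + 1 < n_samples:
            # interpolate between the two neighboring samples
            out += (1 - frac) * rf[ch, i] + frac * rf[ch, i + 1]
    return out

rf = np.array([[0., 1., 0., 0.],
               [0., 0., 1., 0.]])
print(delay_and_sum(rf, [1.0, 2.0]))  # aligned echoes add coherently -> 2.0
```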

  18. GPU accelerated OCT processing at megahertz axial scan rate and high resolution video rate volumetric rendering

    Science.gov (United States)

    Jian, Yifan; Wong, Kevin; Sarunic, Marinko V.

    2013-03-01

    In this report, we describe how to highly optimize a CUDA-based platform to perform real-time optical coherence tomography data processing and 3D volumetric rendering using commercially available, cost-effective graphics processing units (GPUs). The maximum complete attainable axial scan processing rate (including memory transfer and rendering of the frame) was 2.2 megahertz for 16-bit pixel depth and 2048 pixels/A-scan; the maximum 3D volumetric rendering speed is 23 volumes/second (size: 1024×256×200). To the best of our knowledge, this is the fastest processing rate reported to date with a single-chip GPU and the first implementation of real-time video-rate volumetric OCT processing and rendering that is capable of matching ultrahigh-speed OCT acquisition rates.
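    As a plausibility check, the quoted figures imply the following raw data throughput. This is simple arithmetic on the numbers above; the two bytes per sample follow from the 16-bit pixel depth:

```python
# Raw sample throughput implied by the quoted A-scan figures
a_scan_rate = 2.2e6        # A-scans per second
samples_per_ascan = 2048   # pixels per A-scan
bytes_per_sample = 2       # 16-bit pixel depth

throughput_gb_s = a_scan_rate * samples_per_ascan * bytes_per_sample / 1e9
print(f"{throughput_gb_s:.1f} GB/s of raw samples")   # ~9.0 GB/s

# Rendering load: voxels pushed per second at 23 volumes/s
voxels_per_volume = 1024 * 256 * 200
print(f"{voxels_per_volume * 23 / 1e9:.1f} Gvoxel/s")
```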

  19. Videopanorama Frame Rate Requirements Derived from Visual Discrimination of Deceleration During Simulated Aircraft Landing

    Science.gov (United States)

    Furnstenau, Norbert; Ellis, Stephen R.

    2015-01-01

    In order to determine the required visual frame rate (FR) for minimizing prediction errors with out-the-window video displays at remote/virtual airport towers, thirteen active air traffic controllers viewed high-dynamic-fidelity simulations of landing aircraft and decided whether the aircraft would stop as if able to make a turnoff, or whether a runway excursion would be expected. The viewing conditions and simulation dynamics replicated visual rates and environments of transport aircraft landing at small commercial airports. The required frame rate was estimated using Bayes inference on prediction errors by linear FR extrapolation of event probabilities conditional on predictions (stop, no-stop). Furthermore, estimates were obtained from exponential model fits to the parametric and non-parametric perceptual discriminabilities d' and A (average area under ROC curves) as dependent on FR. Decision errors are biased towards preference of overshoot and appear due to an illusory increase in speed at low frame rates. Both the Bayes and A extrapolations yield a frame rate requirement of 35-40 Hz for minimizing decision errors. Definitive recommendations require further experiments with FR > 30 Hz.
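    The parametric discriminability d' used above is computed from hit and false-alarm rates via the inverse of the standard normal CDF; a minimal sketch (the example rates are hypothetical, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection discriminability d' = z(H) - z(F), where z is
    the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# 84% hits, 16% false alarms: z(0.84) ~ +1 and z(0.16) ~ -1, so d' ~ 2
print(round(d_prime(0.84, 0.16), 2))
```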

  20. Deep Frame Interpolation

    OpenAIRE

    Samsonov, Vladislav

    2017-01-01

    This work presents a supervised learning based approach to the computer vision problem of frame interpolation. The presented technique could also be used in cartoon animations, since drawing each individual frame consumes a noticeable amount of time. Most existing solutions to this problem use unsupervised methods and focus only on real-life videos with an already high frame rate. However, the experiments show that such methods do not work as well when the frame rate becomes low and objec...

  1. Leica solution: CARS microscopy at video rates

    Science.gov (United States)

    Lurquin, V.

    2008-02-01

    Confocal and multiphoton microscopy are powerful techniques to study morphology and dynamics in cells and tissue, if fluorescent labeling is possible or autofluorescence is strong. For non-fluorescent molecules, coherent anti-Stokes Raman scattering (CARS) microscopy provides chemical contrast based on intrinsic and highly specific vibrational properties of molecules, eliminating the need for labeling. Just as other multiphoton techniques, CARS microscopy possesses three-dimensional sectioning capabilities. Leica Microsystems has combined the CARS imaging technology with its TCS SP5 confocal microscope to provide several advantages for CARS imaging. For CARS microscopy, two picosecond near-infrared lasers are overlapped spatially and temporally and sent into the scanhead of the confocal system. The software allows programmed, automatic switching between these light sources for multi-modal imaging. Furthermore, the Leica TCS SP5 can be equipped with a non-descanned detector which significantly enhances the signal. The Leica TCS SP5 scanhead combines two technologies in one system: a conventional scanner for maximum resolution and a resonant scanner for high time resolution. The fast scanner allows imaging speeds as high as 25 images per second at a resolution of 512×512 pixels. This corresponds to true video rate, allowing processes at these time scales to be followed as well as the acquisition of three-dimensional stacks in a few seconds. This time resolution is critical for studying live animals or human patients, for which heart beat and muscle movements lead to a blurring of the image if the acquisition time is long. Furthermore, with the resonant scanhead the sectioning is truly confocal and does not suffer from spatial leakage. In summary, CARS microscopy combined with the tandem scanner makes the Leica TCS SP5 a powerful tool for three-dimensional, label-free imaging of chemical and biological samples in vitro and in vivo.

  2. Online multispectral fluorescence lifetime values estimation and overlay onto tissue white-light video frames

    Science.gov (United States)

    Gorpas, Dimitris; Ma, Dinglong; Bec, Julien; Yankelevich, Diego R.; Marcu, Laura

    2016-03-01

    Fluorescence lifetime imaging has been shown to be a robust technique for biochemical and functional characterization of tissues and to present great potential for intraoperative tissue diagnosis and guidance of surgical procedures. We report a technique for real-time mapping of fluorescence parameters (i.e. lifetime values) onto the locations from which the fluorescence measurements were taken. This is achieved by merging a 450 nm aiming beam generated by a diode laser with the excitation light in a single delivery/collection fiber and by continuously imaging the region of interest with a color CMOS camera. The interrogated locations are then extracted from the acquired frames via color-based segmentation of the aiming beam. Assuming a Gaussian profile of the imaged aiming beam, the segmentation results are fitted to ellipses that are dynamically scaled to the full width at three automatically estimated thresholds (50%, 75%, 90%) of the Gaussian distribution's maximum value. This enables the dynamic augmentation of the white-light video frames with the corresponding fluorescence decay parameters. A fluorescence phantom and fresh tissue samples were used to evaluate this method with motorized and hand-held scanning measurements. At 640×512 pixels resolution, the area of interest augmented with fluorescence decay parameters can be imaged at an average of 34 frames per second. The developed method has the potential to become a valuable tool for real-time display of optical spectroscopy data during continuous scanning applications that can subsequently be used for tissue characterization and diagnosis.
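    The full width of a Gaussian beam profile at a given fraction of its peak, used above for the 50%, 75% and 90% ellipse scaling, follows from the standard Gaussian formula. A short sketch (the sigma value is hypothetical):

```python
import math

def full_width_at_fraction(sigma, fraction):
    """Full width of a Gaussian profile at `fraction` of its peak:
    w = 2 * sigma * sqrt(2 * ln(1/fraction)).
    At fraction = 0.5 this is the familiar FWHM = 2.3548 * sigma."""
    return 2.0 * sigma * math.sqrt(2.0 * math.log(1.0 / fraction))

sigma = 10.0  # pixels; a hypothetical value from the ellipse fit
for fraction in (0.50, 0.75, 0.90):
    print(f"{fraction:.0%}: {full_width_at_fraction(sigma, fraction):.1f} px")
```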

  3. Intelligent real-time CCD data processing system based on variable frame rate

    Science.gov (United States)

    Chen, Su-ting

    2009-07-01

    In order to meet the need for image shooting with a CCD in unmanned aerial vehicles, a real-time high-resolution CCD data processing system based on a variable frame rate is designed. The system consists of three modules: a CCD control module, a data processing module and a data display module. In the CCD control module, real-time flight parameters (e.g. flight height, velocity and longitude) are received from GPS through a UART (Universal Asynchronous Receiver Transmitter), and the variable frame rate is calculated from the corresponding flight parameters. Based on the calculated variable frame rate, the CCD external synchronization control impulse signal is generated under the control of the FPGA, and the CCD data is then read out. In the data processing module, data segmentation is designed to extract the ROI (region of interest), whose resolution is equal to the valid data resolution of the HDTV standard conforming to SMPTE (1080i). On one hand, a ping-pong SRAM storage controller is designed in the FPGA to store ROI data in real time. On the other hand, according to the needs of intelligent observation, a changeable window position is designed, and a flexible area of interest is obtained. In the data display module, a special video encoder is used to accomplish data format conversion. Data after storage is packetized to HDTV format by creating the corresponding format information in the FPGA. Through inner register configuration, a high-definition video analog signal is implemented. The entire system has been implemented in an FPGA and validated. It has been used in various real-time CCD data processing situations.
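    A frame-rate calculation of the kind described might look as follows. The footprint/overlap model and every parameter value here are assumptions for illustration only; the abstract does not give the system's actual formula:

```python
def frame_rate(velocity, height, focal_length, pixel_pitch,
               n_pixels_along_track, overlap=0.6):
    """Hypothetical model: frame rate needed so consecutive images of
    the ground keep a given forward overlap.  The along-track ground
    footprint of the sensor is height * pixel_pitch * n_pixels / f."""
    footprint = height * pixel_pitch * n_pixels_along_track / focal_length
    return velocity / (footprint * (1.0 - overlap))

# e.g. 50 m/s at 1000 m altitude, 50 mm lens, 7 um pixels, 1080 rows:
fr = frame_rate(velocity=50.0, height=1000.0, focal_length=0.05,
                pixel_pitch=7e-6, n_pixels_along_track=1080, overlap=0.6)
print(f"{fr:.2f} frames/s")
```

    Faster or lower flight shrinks the time between overlapping footprints, so the required frame rate rises; that is the sense in which the rate is "variable" with the GPS flight parameters.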

  4. Joint Machine Learning and Game Theory for Rate Control in High Efficiency Video Coding.

    Science.gov (United States)

    Gao, Wei; Kwong, Sam; Jia, Yuheng

    2017-08-25

    In this paper, a joint machine learning and game theory modeling (MLGT) framework is proposed for inter-frame coding tree unit (CTU) level bit allocation and rate control (RC) optimization in High Efficiency Video Coding (HEVC). First, a support vector machine (SVM) based multi-classification scheme is proposed to improve the prediction accuracy of the CTU-level rate-distortion (R-D) model. The legacy "chicken-and-egg" dilemma in video coding is overcome by the learning-based R-D model. Second, a mixed R-D model based cooperative bargaining game theory is proposed for bit allocation optimization, where the convexity of the mixed R-D model based utility function is proved, and the Nash bargaining solution (NBS) is achieved by the proposed iterative solution search method. The minimum utility is adjusted by the reference coding distortion and the frame-level quantization parameter (QP) change. Last, the intra-frame QP and the inter-frame adaptive bit ratios are adjusted to give inter frames more bit resources, to maintain smooth quality and bit consumption in the bargaining game optimization. Experimental results demonstrate that the proposed MLGT based RC method achieves much better R-D performance, quality smoothness, bit rate accuracy, buffer control results and subjective visual quality than other state-of-the-art one-pass RC methods, and the achieved R-D performance is very close to the performance limit of the FixedQP method.

  5. A Rate Control Scheme of the Even Low Bit-Rate Video Encoder

    OpenAIRE

    Bing Zhou; Shi-Mei Su; Xingjin Zhang; Xiaoqiang Li

    2009-01-01

    Rate control plays an important role in transmitting low-delay and high-quality images over channels of very low bandwidth. The rate control algorithms in MPEG-4 and H.26x only define a rate control model for P-frames and do not introduce a rate control model for I-frames, as they assume that only the first frame is an I-frame and all the others are P-frames. However, in practical applications, a certain number of I-frames have to be inserted to meet the demand for fault-tolerant transmission...

  6. Heartbeat Rate Measurement from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    by combining a ‘Good feature to track’ and a ‘Supervised descent method’ in order to overcome the limitations of currently available facial video based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face...

  7. What does video-camera framing say during the news? A look at contemporary forms of visual journalism

    Directory of Open Access Journals (Sweden)

    Juliana Freire Gutmann

    2012-12-01

    In order to contribute to the discussion about audiovisual processing of journalistic information, this article examines connections between the uses of video framing on the television news stage, contemporary senses, public interest and the distinction values of journalism, addressed here through the perspective of the concepts of conversation and participation. The article identifies recurring video framing techniques used by 15 Brazilian television newscasts, accounting for contemporary forms of audiovisual telejournalism, responsible for new types of spatial-temporal configurations. From a methodological perspective, this article seeks to contribute to the study of the television genre by understanding the uses of these audiovisual techniques as a strategy for newscast communicability.

  9. The use of freeze frame (slow scan) video for health professional education.

    Science.gov (United States)

    Dunn, E V; Fisher, M

    1985-03-01

    Continuing education in the professions is receiving increased emphasis, and the economic and effective delivery of programmes must be a priority for the future. Freeze frame video, one of the newer telecommunication technologies, is a promising method for delivering continuing medical education (CME) over distance for those who have difficulty in regularly attending educational update programmes, especially those in rural and isolated areas. The system uses two telephone lines to transmit both voice and a still picture simultaneously to one or several sites. The video portion can be a view of the patient, text, 35-mm slides, microscopic slides, or any other still object. Five years' experience with a slow-scan system used for education is outlined. Three types of programme format were presented with this technology: consultations, discussion/case presentations, and lectures. The best use of the system was for small groups, with discussion of their unique problems. The fully interactive nature of the slow-scan system assisted in the presentations and allowed all sites in multisite conferences to be fully involved. Because most teachers are not familiar with the technology in their everyday lives, it requires more orientation and experience to accomplish a skilled programme than with other telecommunication systems such as the telephone or television.

  10. Rate Allocation in predictive video coding using a Convex Optimization Framework.

    Science.gov (United States)

    Fiengo, Aniello; Chierchia, Giovanni; Cagnazzo, Marco; Pesquet-Popescu, Beatrice

    2016-10-26

    Optimal rate allocation is among the most challenging tasks to perform in the context of predictive video coding, because of the dependencies between frames induced by motion compensation. In this paper, using a recursive rate-distortion model that explicitly takes these dependencies into account, we approach the frame-level rate allocation as a convex optimization problem. This technique is integrated into the recent HEVC encoder and tested on several standard sequences. Experiments indicate that the proposed rate allocation ensures better performance (in the rate-distortion sense) than the standard HEVC rate control, with a small loss w.r.t. an optimal exhaustive search that is largely compensated by a much shorter execution time.
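    For a flavor of frame-level convex rate allocation, consider the classical independent-frame model D_i(R_i) = a_i 2^(-2 R_i): equalizing the marginal distortions under a total-rate constraint gives a closed form. This sketch deliberately ignores the motion-compensation dependencies that the paper's recursive model captures, and the a_i values are hypothetical:

```python
import math

def allocate_rates(a, total_rate):
    """Rate allocation minimizing sum_i a_i * 2^(-2 R_i) subject to
    sum_i R_i = total_rate.  Lagrangian optimality equalizes marginal
    distortions, giving R_i = R/N + 0.5*log2(a_i / geometric_mean(a)).
    (Clipping to nonnegative rates is omitted for brevity.)"""
    n = len(a)
    log_gm = sum(math.log2(x) for x in a) / n   # log2 of geometric mean
    return [total_rate / n + 0.5 * (math.log2(ai) - log_gm) for ai in a]

rates = allocate_rates([4.0, 1.0], total_rate=4.0)
print(rates)        # the harder frame (larger a_i) gets more bits
print(sum(rates))   # the budget is met exactly
```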

  11. Space-time encoding for high frame rate ultrasound imaging.

    Science.gov (United States)

    Misaridis, Thanassis X; Jensen, Jørgen A

    2002-05-01

    The frame rate in ultrasound imaging can be dramatically increased by using sparse synthetic transmit aperture (STA) beamforming techniques. The two main drawbacks of the method are the low signal-to-noise ratio (SNR) and the motion artifacts that degrade the image quality. In this paper we propose a spatio-temporal encoding for STA imaging based on simultaneous transmission of two quasi-orthogonal tapered linear FM signals. The excitation signals are an up- and a down-chirp with frequency division and a cross-talk of -55 dB. The received signals are first cross-correlated with the appropriate code, then spatially decoded and finally beamformed for each code, yielding two images per emission. The spatial encoding is a Hadamard encoding previously suggested by Chiao et al. [in: Proceedings of the IEEE Ultrasonics Symposium, 1997, p. 1679]. The Hadamard matrix has half the size of the transmit element groups, due to the orthogonality of the temporally encoded wavefronts. Thus, with this method, the frame rate is doubled compared to previous systems. Another advantage is the utilization of temporal codes, which are more robust to attenuation. With the proposed technique it is possible to obtain images dynamically focused in both transmit and receive with only two firings. This reduces the problem of motion artifacts. The method has been tested with extensive simulations using Field II. Resolution and SNR are compared with uncoded STA imaging and conventional phased-array imaging. The range resolution remains the same for coded STA imaging with four emissions and is slightly degraded for STA imaging with two emissions due to the -55 dB cross-talk between the signals. The additional proposed temporal encoding adds more than 15 dB to the SNR gain, yielding an SNR of the same order as in phased-array imaging.
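    The Hadamard spatial encoding/decoding step can be illustrated with a toy 2x2 example; real decoding operates on cross-correlated RF traces rather than the single-sample 'echoes' used here:

```python
import numpy as np

# 2x2 Hadamard spatial encoding: two emissions, two transmit groups.
H = np.array([[1,  1],
              [1, -1]], dtype=float)

responses = np.array([3.0, 5.0])   # per-group echo (toy, one sample each)

# Each emission fires both groups with Hadamard signs; the receiver
# records the summed echo of that emission.
received = H @ responses

# Because H is orthogonal (H^T H = 2 I), decoding recovers the
# individual group responses from the two recorded emissions.
decoded = H.T @ received / 2
print(decoded)   # -> [3. 5.]
```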

  12. Real-time video imaging of gas plumes using a DMD-enabled full-frame programmable spectral filter

    Science.gov (United States)

    Graff, David L.; Love, Steven P.

    2016-02-01

    Programmable spectral filters based on digital micromirror devices (DMDs) are typically restricted to imaging a 1D line across a scene, analogous to conventional "push-broom scanning" hyperspectral imagers. In previous work, however, we demonstrated that, by placing the diffraction grating at a telecentric image plane rather than at the more conventional location in collimated space, a spectral plane can be created at which light from the entire 2D scene focuses to a unique location for each wavelength. A DMD placed at this spectral plane can then spectrally manipulate an entire 2D image at once, enabling programmable matched filters to be applied to real-time video imaging. We have adapted this concept to imaging rapidly evolving gas plumes. We have constructed a high spectral resolution programmable spectral imager operating in the shortwave infrared region, capable of resolving the rotational-vibrational line structure of several gases at sub-nm spectral resolution. This ability to resolve the detailed gas-phase line structure enables implementation of highly selective filters that unambiguously separate the gas spectrum from background spectral clutter. On-line and between-line multi-band spectral filters, with bands individually weighted using the DMD's duty-cycle-based grayscale capability, are alternately uploaded to the DMD, the resulting images differenced, and the result displayed in real time at rates of several frames per second to produce real-time video of the turbulent motion of the gas plume.
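    The on-line/between-line differencing described above can be sketched as a pair of weighted band sums applied to a hyperspectral cube; the weights and the toy scene are assumptions for illustration, not the instrument's actual DMD patterns:

```python
import numpy as np

def plume_image(cube, on_weights, off_weights):
    """Difference of two spectrally weighted images of a cube of shape
    (rows, cols, bands): an on-line filter matched to the gas absorption
    lines minus a between-line filter capturing the background continuum."""
    on = np.tensordot(cube, on_weights, axes=([2], [0]))
    off = np.tensordot(cube, off_weights, axes=([2], [0]))
    return on - off

# Toy 1x2 scene with 3 bands; pixel 0 has extra signal in band 1 (the gas line):
cube = np.array([[[1.0, 3.0, 1.0], [1.0, 1.0, 1.0]]])
on_w = np.array([0.0, 1.0, 0.0])    # weight on the gas line
off_w = np.array([0.5, 0.0, 0.5])   # weights between the lines
print(plume_image(cube, on_w, off_w))  # gas pixel stands out, background cancels
```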

  13. Unequal Protection of Video Streaming through Adaptive Modulation with a Trizone Buffer over Bluetooth Enhanced Data Rate

    Directory of Open Access Journals (Sweden)

    Rouzbeh Razavi

    2007-12-01

    The Bluetooth enhanced data rate wireless channel can support higher-quality video streams than previous versions of Bluetooth. Packet loss when transmitting compressed data has an effect on the delivered video quality that endures over multiple frames. To reduce the impact of radio frequency noise and interference, this paper proposes adaptive modulation based on content type at the video frame level and content importance at the macroblock level. Because the bit rate of protected data is reduced, the paper proposes buffer management to reduce the risk of buffer overflow. A trizone buffer is introduced, with a varying unequal protection policy in each zone. Application of this policy together with adaptive modulation results in up to 4 dB improvement in objective video quality compared to a fixed-rate scheme for an additive white Gaussian noise channel, and around 10 dB for a Gilbert-Elliott channel. The paper also reports a consistent improvement in video quality over a scheme that adapts to channel conditions by varying the data rate without accounting for the video frame packet type or buffer congestion.

  15. Application of high-frame-rate neutron radiography to fluid measurement

    Energy Technology Data Exchange (ETDEWEB)

    Mishima, Kaichiro; Hibiki, Takashi [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst.

    1997-02-01

    To apply the neutron radiography (NR) technique to multiphase flow research, high-frame-rate NR was developed by assembling up-to-date technologies for the neutron source, scintillator, high-speed video and image intensifier. This imaging system has several advantages, such as a long recording time (up to 21 minutes), high-frame-rate (up to 1000 frames/s) imaging and no need for a triggering signal. Visualization studies of air-water two-phase flow in a metallic duct and of molten metal-water interaction were performed at recording speeds of 250, 500 and 1000 frames/s. The quality of the consecutive images was good enough to observe the flow pattern and behavior. It was also demonstrated that some characteristics of two-phase flow could be measured from those images in combination with image processing techniques. By utilizing geometrical information extracted from NR images, data on flow regime, rising velocity of bubbles, and wave height and interfacial area in annular flow could be obtained. By utilizing the attenuation characteristics of neutrons in materials, measurements of the void profile and average void fraction could be performed. For this purpose, a quantification method, the Σ-scaling method, was proposed based upon consideration of the effect of scattered neutrons. This method was tested against known void profiles and compared with existing measurement methods and a correlation for void fraction. It was confirmed that this new technique has significant advantages both in visualizing and measuring high-speed fluid phenomena. (J.P.N.)
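    Void-fraction measurement from neutron attenuation follows Beer-Lambert absorption by the liquid. A minimal sketch: the macroscopic cross section used is a nominal thermal-neutron value for water, and this one-line inversion omits the scattered-neutron correction that motivates the Σ-scaling method:

```python
import math

def void_fraction(i_measured, i_empty, sigma_water, thickness):
    """Path-averaged void fraction through a duct of given thickness,
    from Beer-Lambert attenuation by the liquid phase:
        I = I_empty * exp(-sigma_water * (1 - alpha) * thickness)
    so  alpha = 1 + ln(I / I_empty) / (sigma_water * thickness)."""
    return 1.0 + math.log(i_measured / i_empty) / (sigma_water * thickness)

# Nominal Sigma for water ~3.6 1/cm, a 1 cm duct, and a measured
# intensity attenuated by exp(-1.8) relative to the empty duct:
alpha = void_fraction(i_measured=math.exp(-1.8), i_empty=1.0,
                      sigma_water=3.6, thickness=1.0)
print(round(alpha, 2))   # half the path is gas -> 0.5
```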

  16. Visible light communication using mobile-phone camera with data rate higher than frame rate.

    Science.gov (United States)

    Chow, Chi-Wai; Chen, Chung-Yen; Chen, Shih-Hao

    2015-10-05

    Complementary Metal-Oxide-Semiconductor (CMOS) image sensors are widely used in mobile phones and cameras. Hence, it is attractive if these image sensors can be used as visible light communication (VLC) receivers (Rxs). However, using these CMOS image sensors is challenging. In this work, we propose and demonstrate a VLC link using a mobile-phone camera with a data rate higher than the frame rate of the CMOS image sensor. We first discuss and analyze the features of using a CMOS image sensor as a VLC Rx, including the rolling shutter effect, the overlapping of the exposure times of adjacent rows of pixels, the frame-to-frame processing time gap, and the image sensor "blooming" effect. Then, we describe the procedure of synchronization and demodulation. This includes file format conversion, grayscale conversion, column matrix selection avoiding blooming, and polynomial fitting for threshold location. Finally, the evaluation of the bit-error rate (BER) is performed, satisfying the forward error correction (FEC) limit.
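    The row-band thresholding at the heart of the rolling-shutter demodulation can be sketched as follows. This toy version uses a constant mean threshold on one image column, whereas the paper fits a polynomial to track a location-dependent threshold:

```python
import numpy as np

def decode_rolling_shutter(column, rows_per_bit):
    """Recover an on-off-keyed bit stream from one column of a
    rolling-shutter frame: each transmitted bit illuminates a band of
    consecutive rows; threshold each band against the column mean."""
    threshold = column.mean()
    bits = []
    for start in range(0, len(column), rows_per_bit):
        band = column[start:start + rows_per_bit]
        bits.append(1 if band.mean() > threshold else 0)
    return bits

# Simulated column: 4 bits, 3 rows each, bright rows encode logic 1:
column = np.array([9, 9, 9, 1, 1, 1, 9, 9, 9, 1, 1, 1], dtype=float)
print(decode_rolling_shutter(column, rows_per_bit=3))  # -> [1, 0, 1, 0]
```

    Because each frame exposes its rows sequentially, many bits fit inside a single frame, which is exactly how the data rate exceeds the camera's frame rate.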

  17. Web-Based Video and Frame Theory in the Professional Development of Teachers: Some Implications for Distance Education

    Science.gov (United States)

    Fong, Cresencia; Woodruff, Earl

    2003-01-01

    This study explores the use of video vignettes as a tool for the professional development of teachers. It is postulated that teachers' professional frames prime them to view vignettes through multiple "lenses," and that teachers may not recognize exemplary practice when presented with it. Think-aloud and interview data are collected as 11…

  18. Framing violence: the effect of survey context and question framing on reported rates of partner violence

    OpenAIRE

    Regan, Katherine V.

    2008-01-01

    In this dissertation, I investigated two explanations for the variability in levels of partner violence found by large community surveys. In Study 1, I examined the effect of how questions about partner violence are introduced (question framing: conflict, violence-in-relationships, or attacks) on reports of partner violence. Although there was not a reliable effect of question framing, the pattern of findings was consistent across 3 of 4 analyses. Counter to predictions, an attacks question f...

  19. An infrared high rate video imager for various space applications

    Science.gov (United States)

    Svedhem, Håkan; Koschny, Detlef

    2010-05-01

    Modern spacecraft with high data transmission capabilities have opened up the possibility to fly video-rate imagers in space. Several fields concerned with observations of transient phenomena can benefit significantly from imaging at video frame rate. Some applications are observations and characterization of bolides/meteors, sprites, lightning, volcanic eruptions, and impacts on airless bodies. Applications can be found both on low and high Earth-orbiting spacecraft as well as on planetary and lunar orbiters. The optimum wavelength range varies depending on the application, but we will focus here on the near infrared, partly since it allows exploration of a new field and partly because it, in many cases, allows operation both during day and night. Such an instrument has, to our knowledge, never flown in space so far. The only sensors of a similar kind fly on US defense satellites for monitoring launches of ballistic missiles. The data from these sensors, however, is largely inaccessible to scientists. We have developed a bread-board version of such an instrument, the SPOSH-IR. The instrument is based on an earlier technology development - SPOSH, a Smart Panoramic Optical Sensor Head for operation in the visible range - but with the sensor replaced by a cooled IR detector and new optics. The instrument uses a Sofradir 320x256 pixel HgCdTe detector array with 30 µm pixel size, mounted directly on top of a four-stage thermoelectric Peltier cooler. The detector-cooler combination is integrated into an evacuated closed package with a glass window on its front side. The detector has a sensitive range between 0.8 and 2.5 µm. The optical part is a seven-lens design with a focal length of 6 mm and a FOV of 90° by 72°, optimized for use at SWIR. The detector operates at 200 K while the optics operates at ambient temperature. The optics and electronics for the bread-board have been designed and built by Jena-Optronik, Jena, Germany. 
This talk will present the design and the

  20. GOP-based channel rate allocation using genetic algorithm for scalable video streaming over error-prone networks.

    Science.gov (United States)

    Fang, Tao; Chau, Lap-Pui

    2006-06-01

    In this paper, we address the problem of unequal error protection (UEP) for scalable video transmission over a wireless packet-erasure channel. Unequal amounts of protection are allocated to the different frames (I- or P-frames) of a group of pictures (GOP), and within each frame, unequal amounts of protection are allocated to the progressive bit-stream of scalable video to provide a graceful degradation of video quality as the packet loss rate varies. We use a genetic algorithm (GA) to quickly find the allocation pattern, which is hard to obtain with conventional methods such as hill climbing. Theoretical analysis and experimental results both demonstrate the advantage of the proposed algorithm.
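    A toy version of such a GA-based allocation might look like the following; the concave fitness function, the frame weights and the GA parameters are invented placeholders standing in for the paper's expected-distortion model, not its actual formulation:

```python
import random

def ga_allocate(weights, budget, pop=30, gens=60, seed=1):
    """Toy GA for per-frame parity allocation: split `budget` packets
    among frames so as to maximize sum(w * sqrt(x)), a made-up concave
    utility standing in for the paper's expected-distortion model."""
    rng = random.Random(seed)
    n = len(weights)

    def random_alloc():
        # n non-negative integers that sum exactly to the budget
        cuts = sorted(rng.randint(0, budget) for _ in range(n - 1))
        return [b - a for a, b in zip([0] + cuts, cuts + [budget])]

    def fitness(x):
        return sum(w * xi ** 0.5 for w, xi in zip(weights, x))

    population = [random_alloc() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] += budget - sum(child)  # repair budget
            children.append(child if min(child) >= 0 else random_alloc())
        population = elite + children
    return max(population, key=fitness)

# Three frames per GOP, the I-frame weighted highest
alloc = ga_allocate(weights=[4.0, 2.0, 1.0], budget=30)
```

    With a concave utility the GA converges toward giving the heavily weighted I-frame most of the parity budget, which is the qualitative behavior UEP aims for.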

  1. Region-of-interest based rate control for UAV video coding

    Science.gov (United States)

    Zhao, Chun-lei; Dai, Ming; Xiong, Jing-ying

    2016-05-01

    To meet the requirement of high-quality transmission of videos captured by unmanned aerial vehicles (UAVs) over low bandwidth, a novel rate control (RC) scheme based on region-of-interest (ROI) is proposed. First, the ROI information is sent to an encoder based on the latest High Efficiency Video Coding (HEVC) standard to generate an ROI map. Then, using the ROI map, bit allocation methods are developed at the frame level and the large coding unit (LCU) level to avoid the inaccurate bit allocation produced by camera movement. Finally, using a more robust R-λ model, the quantization parameter (QP) for each LCU is calculated. The experimental results show that the proposed RC method achieves a lower bit-rate error and higher quality for the reconstructed video by choosing appropriate pixel weights on the HEVC platform.

  2. High frame rate retrospectively triggered Cine MRI for assessment of murine diastolic function

    NARCIS (Netherlands)

    Coolen, Bram F.; Abdurrachim, Desiree; Motaal, Abdallah G.; Nicolay, Klaas; Prompers, Jeanine J.; Strijkers, Gustav J.

    2013-01-01

    To assess left ventricular (LV) diastolic function in mice with Cine MRI, a high frame rate (>60 frames per cardiac cycle) is required. For conventional electrocardiography-triggered Cine MRI, the frame rate is inversely proportional to the pulse repetition time (TR). However, TR cannot be lowered

  3. High frame rate retrospectively triggered Cine MRI for assessment of murine diastolic function.

    Science.gov (United States)

    Coolen, Bram F; Abdurrachim, Desiree; Motaal, Abdallah G; Nicolay, Klaas; Prompers, Jeanine J; Strijkers, Gustav J

    2013-03-01

    To assess left ventricular (LV) diastolic function in mice with Cine MRI, a high frame rate (>60 frames per cardiac cycle) is required. For conventional electrocardiography-triggered Cine MRI, the frame rate is inversely proportional to the pulse repetition time (TR). However, TR cannot be lowered at will to increase the frame rate because of gradient hardware, spatial resolution, and signal-to-noise limitations. To overcome these limitations of electrocardiography-triggered Cine MRI, in this paper we introduce a retrospectively triggered Cine MRI protocol capable of producing high-resolution, high frame rate Cine MRI of the mouse heart for addressing left ventricular diastolic function. Simulations were performed to investigate the influence of MRI sequence parameters and the k-space filling trajectory in relation to the desired number of frames per cardiac cycle. An optimized protocol was applied in vivo and compared with electrocardiography-triggered Cine, for which a high frame rate could only be achieved by several interleaved acquisitions. Retrospective high frame rate Cine MRI proved superior to the interleaved electrocardiography-triggered protocols. High spatial-resolution Cine movies with frame rates up to 80 frames per cardiac cycle were obtained in 25 min. Analysis of left ventricular filling rate curves allowed accurate determination of early and late filling rates and revealed subtle impairments in left ventricular diastolic function of diabetic mice in comparison with nondiabetic mice. Copyright © 2012 Wiley Periodicals, Inc.

  4. Low-Complexity Variable Frame Rate Analysis for Speech Recognition and Voice Activity Detection

    DEFF Research Database (Denmark)

    Tan, Zheng-Hua; Lindberg, Børge

    2010-01-01

    Frame based speech processing inherently assumes a stationary behavior of speech signals in a short period of time. Over a long time, the characteristics of the signals can change significantly and frames are not equally important, underscoring the need for frame selection. In this paper, we......, and the use of a posteriori SNR weighting emphasizes the reliable regions in noisy speech signals. It is experimentally found that the approach is able to assign a higher frame rate to fast changing events such as consonants, a lower frame rate to steady regions like vowels and no frames to silence, even...... for scalable source coding schemes in distributed speech recognition where the target bit rate is met by adjusting the frame rate. Thirdly, it is applied to voice activity detection. Very encouraging results are obtained for all three speech processing tasks....

  5. The right frame of reference makes it simple: an example of introductory mechanics supported by video analysis of motion

    Science.gov (United States)

    Klein, P.; Gröber, S.; Kuhn, J.; Fleischhauer, A.; Müller, A.

    2015-01-01

    The selection and application of coordinate systems is an important issue in physics. However, considering different frames of reference in a given problem sometimes seems unintuitive and is difficult for students. We present a concrete problem of projectile motion which vividly demonstrates the value of considering different frames of reference. We use this example to explore the effectiveness of video-based motion analysis (VBMA) as an instructional technique at university level in enhancing students’ understanding of the abstract concept of coordinate systems. A pilot study with 47 undergraduate students indicates that VBMA instruction improves conceptual understanding of this issue.

  6. Impact of Constant Rate Factor on Objective Video Quality Assessment

    Directory of Open Access Journals (Sweden)

    Juraj Bienik

    2017-01-01

    Full Text Available This paper deals with the impact of the constant rate factor value on objective video quality assessment using PSNR and SSIM metrics. The compression efficiency of the H.264 and H.265 codecs at different constant rate factor (CRF) values was tested. The assessment was done for eight types of video sequences, differing in content, at High Definition (HD), Full HD (FHD) and Ultra HD (UHD) resolutions. Finally, the performance of both codecs was compared with emphasis on compression ratio and coding efficiency.
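    As a reminder of how one of the cited metrics is computed, a minimal PSNR implementation (illustrative, not the authors' code) is:

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference frame and a
    distorted frame; infinite for identical frames."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(dist, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
dist = ref + 16.0          # a uniform error of 16 grey levels -> MSE = 256
value = psnr(ref, dist)    # 10 * log10(255**2 / 256) ~ 24.05 dB
```

    Raising the CRF lowers the bitrate and, for metrics like this, monotonically lowers the score; the paper's comparison is essentially PSNR/SSIM-versus-CRF curves per codec and content type.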

  7. Frame-rate performance modeling of software MPEG decoder

    Science.gov (United States)

    Ramamoorthy, Victor

    1997-01-01

    A software MPEG decoder, though attractive in terms of performance and cost, opens up new technical challenges. The most critical question is: When does a software decoder drop a frame? How to predict its timing performance well ahead of its implementation? It is not easy to answer these questions without introducing a stochastic model of the decoding time. With a double buffering scheme, fluctuations in decoding time can be smoothed out to a large extent. However, dropping of frames can not be totally eliminated. New ideas of slip and asymptotic synchronous locking are shown to answer critical design questions of a software decoder. Beneath the troubled world of frame droppings lies the beauty and harmony of our stochastic formulation.

  8. Adaptation of hidden Markov models for recognizing speech of reduced frame rate.

    Science.gov (United States)

    Lee, Lee-Min; Jean, Fu-Rong

    2013-12-01

    The frame rate of the observation sequence in distributed speech recognition applications may be reduced to suit a resource-limited front-end device. In order to use models trained on full-frame-rate data for the recognition of reduced frame-rate (RFR) data, we propose a method for adapting the transition probabilities of hidden Markov models (HMMs) to match the frame rate of the observation. Experiments on the recognition of clean and noisy connected digits are conducted to evaluate the proposed method. Experimental results show that the proposed method can effectively compensate for the frame-rate mismatch between the training and the test data. Using our adapted model to recognize the RFR speech data, one can significantly reduce the computation time and achieve the same level of accuracy as a method that restores the frame rate using data interpolation.
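    One natural way to adapt transition probabilities to a decimated frame rate is sketched below; treating one RFR step as `decimation` original steps and raising the transition matrix to that power is an assumption standing in for the paper's exact mapping:

```python
import numpy as np

def adapt_transitions(A, decimation):
    """One reduced-frame-rate step spans `decimation` original frames,
    so a natural adaptation raises the transition matrix to that power
    (a sketch of the idea; the paper's exact mapping may differ)."""
    return np.linalg.matrix_power(np.asarray(A, float), decimation)

# Two-state left-to-right model; halving the frame rate roughly doubles
# the per-step chance of having left state 0.
A = np.array([[0.9, 0.1],
              [0.0, 1.0]])
A2 = adapt_transitions(A, 2)   # A2[0, 0] = 0.81, A2[0, 1] = 0.19
```

    The adapted matrix stays stochastic (rows still sum to one), so it can be dropped into a standard Viterbi decoder unchanged.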

  9. Robust Adaptable Video Copy Detection

    DEFF Research Database (Denmark)

    Assent, Ira; Kremer, Hardy

    2009-01-01

    Video copy detection should be capable of identifying video copies subject to alterations e.g. in video contrast or frame rates. We propose a video copy detection scheme that allows for adaptable detection of videos that are altered temporally (e.g. frame rate change) and/or visually (e.g. change in contrast). Our query processing combines filtering and indexing structures for efficient multistep computation of video copies under this model. We show that our model successfully identifies altered video copies and does so more reliably than existing models.

  10. Content-Aware Scalability-Type Selection for Rate Adaptation of Scalable Video

    Directory of Open Access Journals (Sweden)

    Tekalp A Murat

    2007-01-01

    Full Text Available Scalable video coders provide different scaling options, such as temporal, spatial, and SNR scalability, where rate reduction by discarding enhancement layers of different scalability types results in different kinds and/or levels of visual distortion depending on the content and bitrate. This dependency between scalability type, video content, and bitrate is not well investigated in the literature. To this effect, we first propose an objective function that quantifies the flatness, blockiness, blurriness, and temporal jerkiness artifacts caused by rate reduction through spatial size, frame rate, and quantization parameter scaling. Next, the weights of this objective function are determined for different content (shot) types and different bitrates using a training procedure with subjective evaluation. Finally, a method is proposed for choosing the best scaling type for each temporal segment that results in minimum visual distortion according to this objective function, given the content type of the temporal segments. Two subjective tests have been performed to validate the proposed procedure for content-aware selection of the best scalability type on soccer videos. Soccer videos scaled from 600 kbps to 100 kbps by the proposed content-aware selection of scalability type were found visually superior to those scaled using a single scalability option over the whole sequence.

  11. Correction of Line Interleaving Displacement in Frame Captured Aerial Video Imagery

    Science.gov (United States)

    B. Cooke; A. Saucier

    1995-01-01

    Scientists with the USDA Forest Service are currently assessing the usefulness of aerial video imagery for various purposes including midcycle inventory updates. The potential of video image data for these purposes may be compromised by scan line interleaving displacement problems. Interleaving displacement problems cause features in video raster datasets to have...

  12. Juegos de videos: Investigacion, puntajes y recomendaciones (Video Games: Research, Ratings and Recommendations). ERIC Digest.

    Science.gov (United States)

    Cesarone, Bernard

    This Spanish-language digest reviews research on the demographics and effects of video game playing, discusses game rating systems, and offers recommendations for parents. The digest begins by discussing research on the time children spend playing electronic games, which shows that younger children's game playing at home (90% of fourth-graders…

  13. An Evaluation of an LCD Display With 240 Hz Frame Rate for Visual Psychophysics Experiments.

    Science.gov (United States)

    Shi, Lin

    2017-01-01

    Recently, a few LCD displays with a 240 Hz frame rate have appeared on the market. I evaluated an LCD display with a 240 Hz frame rate in terms of its temporal characteristics, progression between frames, and chromatic characteristics. The display showed (a) accurate frame durations at the millisecond level, (b) gradual transition between adjacent frames, and (c) acceptable chromatic characteristics.

  14. Efficient Video Transcoding from H.263 to H.264/AVC Standard with Enhanced Rate Control

    Directory of Open Access Journals (Sweden)

    Nguyen Viet-Anh

    2006-01-01

    Full Text Available A new video coding standard, H.264/AVC, has recently been developed and standardized. The standard represents a number of advances in video coding technology in terms of both coding efficiency and flexibility and is expected to replace existing standards such as H.263 and MPEG-1/2/4 in many applications. In this paper we investigate and present efficient syntax transcoding and downsizing transcoding methods from the H.263 to the H.264/AVC standard. Specifically, we propose an efficient motion vector re-estimation scheme using vector median filtering and a fast intra-prediction mode selection scheme based on coarse edge information obtained from integer-transform coefficients. Furthermore, an enhanced rate control method based on a quadratic model is proposed for selecting quantization parameters at the sequence and frame levels, together with a new frame-layer bit allocation scheme based on the side information in the precoded video. Extensive experiments have been conducted and the results show the efficiency and effectiveness of the proposed methods.
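    The classic quadratic rate model behind such rate control can be sketched as follows; the coefficient values are illustrative, and the closed-form root simply inverts R = a/Q + b/Q² for the quantization step Q:

```python
import math

def q_from_rate(target_bits, a, b):
    """Invert the quadratic rate model R = a/Q + b/Q^2 for the
    quantization step Q, i.e. take the positive root of
    R*Q^2 - a*Q - b = 0. Coefficients a and b are illustrative,
    not fitted values from the paper."""
    return (a + math.sqrt(a * a + 4.0 * target_bits * b)) / (2.0 * target_bits)

q = q_from_rate(1000.0, a=2.0e4, b=5.0e5)
# Plugging q back in satisfies the model: a/q + b/q**2 == 1000 bits
```

    In a real encoder a and b are re-estimated frame by frame from past (Q, R) pairs, and the resulting Q is mapped to the codec's QP scale.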

  15. Frame rate vs resolution: A subjective evaluation of spatiotemporal perceived quality under varying computational budgets

    OpenAIRE

    Debattista, K.; Bugeja, K.; Spina, S.; Bashford-Rogers, T.; Hulusic, V.

    2017-01-01

    Maximising performance for rendered content requires making compromises on quality parameters depending on the computational resources available. Yet, it is currently unclear which parameters best maximise perceived quality. This work investigates perceived quality across computational budgets for the primary spatio-temporal parameters of resolution and frame rate. Three experiments are conducted. Experiment 1 (n = 26) shows that participants prefer fixed frame rates of 60 frames per second (...

  16. Frame rate vs resolution : a subjective evaluation of spatiotemporal perceived quality under varying computational budgets

    OpenAIRE

    Debattista, Kurt; Bugeja, Keith; Spina, Sandro; Bashford-Rogers, Thomas; Hulusić, Vedad

    2017-01-01

    Maximizing performance for rendered content requires making compromises on quality parameters depending on the computational resources available. Yet, it is currently unclear which parameters best maximize perceived quality. This work investigates perceived quality across computational budgets for the primary spatiotemporal parameters of resolution and frame rate. Three experiments are conducted. Experiment 1 (n = 26) shows that participants prefer fixed frame rates of 60 frames per second (...

  17. Video-rate optical coherence tomography imaging with smart pixels

    Science.gov (United States)

    Beer, Stephan; Waldis, Severin; Seitz, Peter

    2003-10-01

    A novel concept for video-rate parallel acquisition of optical coherence tomography imaging is presented based on in-pixel demodulation. The main restrictions for parallel detection such as data rate, power consumption, circuit size and poor sensitivity are overcome with a smart pixel architecture incorporating an offset compensation circuit, a synchronous sampling stage, programmable time averaging and random pixel accessing, allowing envelope and phase detection in large 1D and 2D arrays.

  18. Region of interest video coding for low bit-rate transmission of carotid ultrasound videos over 3G wireless networks.

    Science.gov (United States)

    Tsapatsoulis, Nicolas; Loizou, Christos; Pattichis, Constantinos

    2007-01-01

    Efficient medical video transmission over 3G wireless is of great importance for fast diagnosis and on-site medical staff training purposes. In this paper we present a region-of-interest based ultrasound video compression study which shows that a significant reduction of the bit rate required for transmission can be achieved without altering the design of existing video codecs. Simple preprocessing of the original videos to define visually and clinically important areas is the only requirement.

  19. Multi-Frame Rate Based Multiple-Model Training for Robust Speaker Identification of Disguised Voice

    DEFF Research Database (Denmark)

    Prasad, Swati; Tan, Zheng-Hua; Prasad, Ramjee

    2013-01-01

    Speaker identification systems are prone to attack when voice disguise is adopted by the user. To address this issue, our paper studies the effect of using different frame rates on the accuracy of the speaker identification system for disguised voice. In addition, a multi-frame rate based multiple-model training method is proposed. The experimental results show the superior performance of the proposed method compared to the commonly used single frame rate method for three types of disguised voice taken from the CHAINS corpus.

  20. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy

    OpenAIRE

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md. Shamin; Wahid, Khan A.

    2015-01-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At ...

  1. Resolution enhancement of low quality videos using a high-resolution frame

    NARCIS (Netherlands)

    Pham, T.Q.; Van Vliet, L.J.; Schutte, K.

    2006-01-01

    This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of

  2. Detection of distorted frames in retinal video-sequences via machine learning

    Science.gov (United States)

    Kolar, Radim; Liberdova, Ivana; Odstrcilik, Jan; Hracho, Michal; Tornow, Ralf P.

    2017-07-01

    This paper describes the detection of distorted frames in retinal sequences based on a set of global features extracted from each frame. The feature vector is subsequently used in a classification step, in which three types of classifiers are tested. The best classification accuracy of 96% was achieved with the support vector machine approach.

  3. The Impact of Silent and Freeze-Frame Viewing Techniques of Video Materials on the Intermediate EFL Learners’ Listening Comprehension

    Directory of Open Access Journals (Sweden)

    Sara Shahani

    2015-05-01

    Full Text Available The use of modern technologies has been widely prevalent among language learners, and video in particular, as a valuable learning tool, provides learners with comprehensible input. The present study investigated the effect of silent and freeze-frame viewing techniques of video materials on intermediate English as a foreign language (EFL) learners’ listening comprehension. To this end, 45 intermediate EFL learners participated in this quasi-experimental study. The results of a one-way ANOVA revealed that there was a statistically significant difference between the experimental groups (using the two types of viewing techniques) and the control group. While the difference between the two experimental groups was not statistically significant, the experimental groups outperformed the control group significantly.

  4. High frame-rate en face optical coherence tomography system using KTN optical beam deflector

    Science.gov (United States)

    Ohmi, Masato; Shinya, Yusuke; Imai, Tadayuki; Toyoda, Seiji; Kobayashi, Junya; Sakamoto, Tadashi

    2017-02-01

    We developed high frame-rate en face optical coherence tomography (OCT) system using KTa1-xNbxO3 (KTN) optical beam deflector. In the imaging system, the fast scanning was performed at 200 kHz by the KTN optical beam deflector, while the slow scanning was performed at 800 Hz by the galvanometer mirror. As a preliminary experiment, we succeeded in obtaining en face OCT images of human fingerprint with a frame rate of 800 fps. This is the highest frame-rate obtained using time-domain (TD) en face OCT imaging. The 3D-OCT image of sweat gland was also obtained by our imaging system.

  5. Rate Adaptive Selective Segment Assignment for Reliable Wireless Video Transmission

    Directory of Open Access Journals (Sweden)

    Sajid Nazir

    2012-01-01

    Full Text Available A reliable video communication system is proposed based on the data partitioning feature of H.264/AVC, used to create a layered stream, and LT codes for erasure protection. The proposed scheme, termed rate adaptive selective segment assignment (RASSA), is an adaptive low-complexity solution to varying channel conditions. A comparison of the results of the proposed scheme is also provided for slice-partitioned H.264/AVC data. Simulation results show the competitiveness of the proposed scheme compared to optimized unequal and equal error protection solutions. The simulation results also demonstrate that high visual quality video transmission can be maintained despite the adverse effect of varying channel conditions and that the number of decoding failures can be reduced.

  6. 47 CFR 76.1504 - Rates, terms and conditions for carriage on open video systems.

    Science.gov (United States)

    2010-10-01

    ... system operator may charge different rates to different classes of video programming providers, provided... open video systems. 76.1504 Section 76.1504 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) BROADCAST RADIO SERVICES MULTICHANNEL VIDEO AND CABLE TELEVISION SERVICE Open Video Systems § 76...

  7. Heart rate measurement based on face video sequence

    Science.gov (United States)

    Xu, Fang; Zhou, Qin-Wu; Wu, Peng; Chen, Xing; Yang, Xiaofeng; Yan, Hong-jian

    2015-03-01

    This paper proposes a new non-contact heart rate measurement method based on photoplethysmography (PPG) theory. With this method we can measure heart rate remotely with a camera and ambient light. We collected video sequences of subjects and detected remote PPG signals from the video sequences. The remote PPG signals were analyzed with two methods, Blind Source Separation Technology (BSST) and Cross Spectral Power Technology (CSPT). BSST is a commonly used method, while CSPT is used for the first time in the study of remote PPG signals in this paper. Both methods can acquire heart rate, but compared with BSST, CSPT has a clearer physical meaning and lower computational complexity. Our work shows that heart rates detected by the CSPT method have good consistency with those measured by a finger-clip oximeter. With good accuracy and low computational complexity, the CSPT method has good prospects for application in home medical devices and mobile health devices.
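    A much simplified frequency-domain estimate of heart rate from a face-ROI trace, an illustrative stand-in for the BSST/CSPT processing described above, could be:

```python
import numpy as np

def heart_rate_bpm(green_trace, fps):
    """Heart rate from a mean green-channel trace of a face ROI:
    remove the mean, then take the dominant FFT peak within the
    physiological 0.7-4 Hz band. A simplified illustration; the
    paper itself uses BSS and cross-spectral methods."""
    x = np.asarray(green_trace, float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(x))
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

fps = 30
t = np.arange(fps * 10) / fps                  # a 10 s clip
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)      # 1.2 Hz pulse -> 72 bpm
noise = 0.05 * np.random.default_rng(0).standard_normal(t.size)
bpm = heart_rate_bpm(pulse + noise, fps)
```

    Restricting the search to the physiological band is what keeps illumination drift and camera flicker from being mistaken for the pulse.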

  8. Framed bit error rate testing for 100G ethernet equipment

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2010-01-01

    rate. As the need for 100 Gigabit Ethernet equipment rises, so does the need for equipment, which can properly test these systems during development, deployment and use. This paper presents early results from a work-in-progress academia-industry collaboration project and elaborates on the challenges...

  9. Ultrasonic acoustic levitation for fast frame rate X-ray protein crystallography at room temperature

    National Research Council Canada - National Science Library

    Tsujino, Soichiro; Tomizaki, Takashi

    2016-01-01

    ... of biomolecules and structure-based drug developments. Using lysozyme model crystals, we demonstrated the rapid acquisition of X-ray diffraction datasets by combining a high frame rate pixel array detector with ultrasonic acoustic levitation of protein...

  10. In vivo sub-femtoliter resolution photoacoustic microscopy with higher frame rates

    National Research Council Canada - National Science Library

    Lee, Szu-Yu; Lai, Yu-Hung; Huang, Kai-Chih; Cheng, Yu-Hsiang; Tseng, Tzu-Fang; Sun, Chi-Kuang

    2015-01-01

    .... In this paper, based on the two-photon photoacoustic mechanism, we demonstrated an in vivo label-free laser-scanning photoacoustic imaging modality featuring high frame rates and sub-femtoliter 3D...

  11. Video Synchronization With Bit-Rate Signals and Correntropy Function.

    Science.gov (United States)

    Pereira, Igor; Silveira, Luiz F; Gonçalves, Luiz

    2017-09-04

    We propose an approach for the synchronization of video streams using correntropy. Essentially, the time offset is calculated on the basis of the instantaneous transfer rates of the video streams that are extracted in the form of a univariate signal known as variable bit-rate (VBR). The state-of-the-art approach uses a window segmentation strategy that is based on consensual zero-mean normalized cross-correlation (ZNCC). This strategy has an elevated computational complexity, making its application to synchronizing online data streaming difficult. Hence, our proposal uses a different window strategy that, together with the correntropy function, allows the synchronization to be performed for online applications. This provides equivalent synchronization scores with a rapid offset determination as the streams come into the system. The efficiency of our approach has been verified through experiments that demonstrate its viability with values that are as precise as those obtained by ZNCC. The proposed approach scored 81 % in time reference classification against the equivalent 81 % of the state-of-the-art approach, requiring much less computational power.
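    The offset-by-correntropy idea can be sketched as follows; the Gaussian kernel width, the window handling and the toy VBR signals are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Sample correntropy: mean Gaussian-kernel similarity of two
    aligned signals (equal to 1.0 for identical signals)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.mean(np.exp(-d ** 2 / (2.0 * sigma ** 2))))

def best_offset(vbr_a, vbr_b, max_lag):
    """Time offset of vbr_b relative to vbr_a, chosen as the lag that
    maximizes correntropy (the paper's windowing details are omitted)."""
    n = min(len(vbr_a), len(vbr_b)) - max_lag
    scores = [correntropy(vbr_a[:n], vbr_b[lag:lag + n])
              for lag in range(max_lag + 1)]
    return int(np.argmax(scores))

rng = np.random.default_rng(2)
a = rng.normal(size=300)                      # VBR signal of stream A
b = np.concatenate([rng.normal(size=7), a])   # stream B lags A by 7 frames
offset = best_offset(a, b, max_lag=20)        # -> 7
```

    Unlike plain cross-correlation, the Gaussian kernel bounds each sample's contribution, which is what makes correntropy robust to the occasional large bitrate spikes typical of VBR streams.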

  12. Video Synchronization With Bit-Rate Signals and Correntropy Function

    Directory of Open Access Journals (Sweden)

    Igor Pereira

    2017-09-01

    Full Text Available We propose an approach for the synchronization of video streams using correntropy. Essentially, the time offset is calculated on the basis of the instantaneous transfer rates of the video streams, which are extracted in the form of a univariate signal known as variable bit-rate (VBR). The state-of-the-art approach uses a window segmentation strategy that is based on consensual zero-mean normalized cross-correlation (ZNCC). This strategy has an elevated computational complexity, making its application to synchronizing online data streaming difficult. Hence, our proposal uses a different window strategy that, together with the correntropy function, allows the synchronization to be performed for online applications. This provides equivalent synchronization scores with a rapid offset determination as the streams come into the system. The efficiency of our approach has been verified through experiments that demonstrate its viability with values that are as precise as those obtained by ZNCC. The proposed approach scored 81% in time reference classification against the equivalent 81% of the state-of-the-art approach, while requiring much less computational power.

  13. Video-rate processing in tomographic phase microscopy of biological cells using CUDA.

    Science.gov (United States)

    Dardikman, Gili; Habaza, Mor; Waller, Laura; Shaked, Natan T

    2016-05-30

    We suggest a new implementation for rapid reconstruction of three-dimensional (3-D) refractive index (RI) maps of biological cells acquired by tomographic phase microscopy (TPM). The TPM computational reconstruction process is extremely time consuming, making the analysis of large data sets unreasonably slow and the real-time 3-D visualization of the results impossible. Our implementation uses new phase extraction, phase unwrapping and Fourier slice algorithms, suitable for efficient CPU or GPU implementations. The experimental setup includes an external off-axis interferometric module connected to an inverted microscope illuminated coherently. We used single cell rotation by micro-manipulation to obtain interferometric projections from 73 viewing angles over a 180° angular range. Our parallel algorithms were implemented using Nvidia's CUDA C platform, running on Nvidia's Tesla K20c GPU. This implementation yields, for the first time to our knowledge, a 3-D reconstruction rate higher than video rate of 25 frames per second for 256 × 256-pixel interferograms with 73 different projection angles (64 × 64 × 64 output). This allows us to calculate additional cellular parameters, while still processing faster than video rate. This technique is expected to find uses for real-time 3-D cell visualization and processing, while yielding fast feedback for medical diagnosis and cell sorting.
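The phase-extraction step of such an off-axis interferometric pipeline can be illustrated in one dimension: isolate the +1 diffraction order in the Fourier domain, shift out the carrier, and unwrap. This is a hedged numpy sketch of the generic Fourier-filtering method, not the paper's CUDA implementation; the band-pass width is an assumption.

```python
import numpy as np

def extract_phase_1d(interferogram, carrier_freq):
    """Recover the sample phase from a 1-D off-axis interferogram:
    keep only the +1 diffraction order in the Fourier domain, remove
    the linear carrier phase, and unwrap the residual."""
    n = len(interferogram)
    spec = np.fft.fft(interferogram)
    freqs = np.fft.fftfreq(n)
    # Band-pass around the carrier (+1 order); DC and -1 order are zeroed.
    mask = np.abs(freqs - carrier_freq) < carrier_freq / 2
    analytic = np.fft.ifft(spec * mask)
    wrapped = np.angle(analytic)
    x = np.arange(n)
    return np.unwrap(wrapped - 2 * np.pi * carrier_freq * x)
```

A 2-D version applies the same masking to a 2-D FFT before the tomographic (Fourier slice) step.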

  14. Replication Rate, Framing, and Format Affect Attitudes and Decisions about Science Claims

    Science.gov (United States)

    Barnes, Ralph M.; Tobin, Stephanie J.; Johnston, Heather M.; MacKenzie, Noah; Taglang, Chelsea M.

    2016-01-01

    A series of five experiments examined how the evaluation of a scientific finding was influenced by information about the number of studies that had successfully replicated the initial finding. The experiments also tested the impact of frame (negative, positive) and numeric format (percentage, natural frequency) on the evaluation of scientific findings. In Experiments 1 through 4, an attitude difference score served as the dependent measure, while a measure of choice served as the dependent measure in Experiment 5. Results from a diverse sample of 188 non-institutionalized U.S. adults (Experiment 2) and 730 undergraduate college students (Experiments 1, 3, and 4) indicated that attitudes became more positive as the replication rate increased and attitudes were more positive when the replication information was framed positively. The results also indicate that the manner in which replication rate was framed had a greater impact on attitude than the replication rate itself. The large effect for frame was attenuated somewhat when information about replication was presented in the form of natural frequencies rather than percentages. A fifth study employing 662 undergraduate college students in a task in which choice served as the dependent measure confirmed the framing effect and replicated the replication rate effect in the positive frame condition, but provided no evidence that the use of natural frequencies diminished the effect. PMID:27920743

  15. Adaptive Bit Rate Video Streaming Through an RF/Free Space Optical Laser Link

    Directory of Open Access Journals (Sweden)

    A. Akbulut

    2010-06-01

    Full Text Available This paper presents a channel-adaptive video streaming scheme which adjusts video bit rate according to channel conditions and transmits video through a hybrid RF/free space optical (FSO laser communication system. The design criteria of the FSO link for video transmission to 2.9 km distance have been given and adaptive bit rate video streaming according to the varying channel state over this link has been studied. It has been shown that the proposed structure is suitable for uninterrupted transmission of videos over the hybrid wireless network with reduced packet delays and losses even when the received power is decreased due to weather conditions.
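The core decision in any such channel-adaptive scheme is picking a video bit rate that fits the current channel state. The sketch below is a generic, hypothetical rate-ladder selector (the `LADDER` rungs and safety `margin` are invented for illustration), not the scheme from the paper.

```python
# Hypothetical bitrate ladder (kbit/s) and safety margin, for illustration.
LADDER = [250, 500, 1000, 2000, 4000]

def pick_bitrate(measured_throughput_kbps, margin=0.8):
    """Choose the highest ladder rung that fits within a fraction
    (margin) of the measured channel throughput; fall back to the
    lowest rung when the channel is worse than all rungs."""
    usable = measured_throughput_kbps * margin
    candidates = [r for r in LADDER if r <= usable]
    return candidates[-1] if candidates else LADDER[0]
```

In an RF/FSO hybrid, the throughput estimate would come from the received-power measurements that degrade under adverse weather.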

  16. Feasibility of pulse wave velocity estimation from low frame rate US sequences in vivo

    Science.gov (United States)

    Zontak, Maria; Bruce, Matthew; Hippke, Michelle; Schwartz, Alan; O'Donnell, Matthew

    2017-03-01

The pulse wave velocity (PWV) is considered one of the most important clinical parameters to evaluate CV risk, vascular adaptation, etc. There has been substantial work attempting to measure the PWV in peripheral vessels using ultrasound (US). This paper presents a fully automatic algorithm for PWV estimation from the human carotid using US sequences acquired with a Logic E9 scanner (modified for RF data capture) and a 9L probe. Our algorithm samples the pressure wave in time by tracking wall displacements over the sequence, and estimates the PWV by calculating the temporal shift between two sampled waves at two distinct locations. Several recent studies have utilized similar ideas along with speckle tracking tools and high frame rate (above 1 kHz) sequences to estimate the PWV. To explore PWV estimation in a more typical clinical setting, we used focused-beam scanning, which yields relatively low frame rates and small fields of view (e.g., 200 Hz for a 16.7 mm field of view). For our application, a 200 Hz frame rate is low. In particular, the sub-frame temporal accuracy required for PWV estimation between locations 16.7 mm apart ranges from 0.82 of a frame for 4 m/s to 0.33 for 10 m/s. When the distance is further reduced (to 0.28 mm between two beams), the sub-frame precision is in parts per thousand (ppt) of the frame (5 ppt for 10 m/s). As such, the contributions of our algorithm and this paper are: 1. Ability to work with a low frame rate (200 Hz) and a decreased lateral field of view. 2. Fully automatic segmentation of the wall intima (using raw RF images). 3. Collaborative speckle tracking of 2D axial and lateral carotid wall motion. 4. Outlier-robust PWV calculation from multiple votes using RANSAC. 5. Algorithm evaluation on volunteers of different ages and health conditions.
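The sub-frame temporal shift between two wall-displacement waveforms is the quantity that demands such precision. A standard way to get fractional-frame delays (a generic sketch, not the authors' algorithm) is integer cross-correlation followed by three-point parabolic interpolation around the peak:

```python
import numpy as np

def subframe_delay(w1, w2):
    """Estimate the delay of w2 relative to w1 in (possibly fractional)
    frames: integer lag from the cross-correlation peak, refined by
    fitting a parabola through the peak and its two neighbours."""
    n = len(w1)
    corr = np.correlate(w2 - w2.mean(), w1 - w1.mean(), mode="full")
    k = np.argmax(corr)
    lag = float(k - (n - 1))
    # Parabolic (three-point) interpolation around the peak.
    if 0 < k < len(corr) - 1:
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        denom = y0 - 2 * y1 + y2
        if denom != 0:
            lag += 0.5 * (y0 - y2) / denom
    return lag
```

Dividing the beam separation by `delay / frame_rate` would then give a PWV estimate; pooling many such votes through RANSAC rejects outliers, as the paper describes.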

  17. High frame-rate resolution of cell division during Candida albicans filamentation

    OpenAIRE

    Thomson, Darren D; Berman, Judith; Brand, Alexandra C

    2016-01-01

    The commensal yeast, Candida albicans, is an opportunistic pathogen in humans and forms filaments called hyphae and pseudohyphae, in which cell division requires precise temporal and spatial control to produce mononuclear cell compartments. High-frame-rate live-cell imaging (1 frame/min) revealed that nuclear division did not occur across the septal plane. We detected the presence of nucleolar fragments that may be extrachromosomal molecules carrying the ribosomal RNA genes. Cells occasionall...

  18. An improved mixture-of-Gaussians background model with frame difference and blob tracking in video stream.

    Science.gov (United States)

    Yao, Li; Ling, Miaogen

    2014-01-01

Modeling the background and segmenting moving objects are significant techniques for computer vision applications. The Mixture-of-Gaussians (MoG) background model is commonly used for foreground extraction in video streams. However, considering the case in which objects enter the scene and stay for a while, foreground extraction fails as the objects stay still and gradually merge into the background. In this paper, we adopt a blob tracking method to cope with this situation. To construct the MoG model more quickly, we add a frame difference method to the foreground extracted from MoG for very crowded situations. What is more, a new shadow removal method based on the RGB color space is proposed.

  19. An Improved Mixture-of-Gaussians Background Model with Frame Difference and Blob Tracking in Video Stream

    Directory of Open Access Journals (Sweden)

    Li Yao

    2014-01-01

Full Text Available Modeling the background and segmenting moving objects are significant techniques for computer vision applications. The Mixture-of-Gaussians (MoG) background model is commonly used for foreground extraction in video streams. However, considering the case in which objects enter the scene and stay for a while, foreground extraction fails as the objects stay still and gradually merge into the background. In this paper, we adopt a blob tracking method to cope with this situation. To construct the MoG model more quickly, we add a frame difference method to the foreground extracted from MoG for very crowded situations. What is more, a new shadow removal method based on the RGB color space is proposed.
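The combination of a per-pixel background model with a frame-difference term can be sketched as follows. To stay short, this uses a single Gaussian per pixel rather than a full mixture, and omits blob tracking and shadow removal; the learning rate `alpha`, threshold `k`, and `diff_thresh` are assumed values.

```python
import numpy as np

class RunningGaussianBackground:
    """Simplified per-pixel background model (single Gaussian, not a
    full MoG) combined with frame differencing."""
    def __init__(self, first_frame, alpha=0.05, k=2.5, diff_thresh=15):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full(first_frame.shape, 50.0)
        self.prev = first_frame.astype(np.float64)
        self.alpha, self.k, self.diff_thresh = alpha, k, diff_thresh

    def apply(self, frame):
        f = frame.astype(np.float64)
        d = np.abs(f - self.mean)
        foreground = d > self.k * np.sqrt(self.var)
        # Frame difference flags fast motion before the model adapts.
        moving = np.abs(f - self.prev) > self.diff_thresh
        mask = foreground | moving
        # Update the model only where the pixel looks like background,
        # so stationary objects are not absorbed immediately.
        bg = ~mask
        self.mean[bg] += self.alpha * (f[bg] - self.mean[bg])
        self.var[bg] += self.alpha * (d[bg] ** 2 - self.var[bg])
        self.prev = f
        return mask
```

Gating the update by the foreground mask is the same idea the paper's blob tracking serves: keep lingering objects out of the background model.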

  20. Graphics processing unit accelerated optical coherence tomography processing at megahertz axial scan rate and high resolution video rate volumetric rendering.

    Science.gov (United States)

    Jian, Yifan; Wong, Kevin; Sarunic, Marinko V

    2013-02-01

In this report, we describe how to highly optimize a compute unified device architecture (CUDA) based platform to perform real-time processing of optical coherence tomography interferometric data and three-dimensional (3-D) volumetric rendering using a commercially available, cost-effective, graphics processing unit (GPU). The maximum complete attainable axial scan processing rate, including memory transfer and displaying the B-scan frame, was 2.24 MHz for 16-bit pixel depth and a 2048-point fast Fourier transform size; the maximum 3-D volumetric rendering rate, including B-scan, en face view display, and 3-D rendering, was ~23 volumes/second (volume size: 1024×256×200). To the best of our knowledge, this is the fastest processing rate reported to date with a single-chip GPU and the first implementation of real-time video-rate volumetric optical coherence tomography (OCT) processing and rendering that is capable of matching the acquisition rates of ultrahigh-speed OCT.

  1. Graphics processing unit accelerated optical coherence tomography processing at megahertz axial scan rate and high resolution video rate volumetric rendering

    Science.gov (United States)

    Jian, Yifan; Wong, Kevin; Sarunic, Marinko V.

    2013-02-01

In this report, we describe how to highly optimize a compute unified device architecture (CUDA) based platform to perform real-time processing of optical coherence tomography interferometric data and three-dimensional (3-D) volumetric rendering using a commercially available, cost-effective, graphics processing unit (GPU). The maximum complete attainable axial scan processing rate, including memory transfer and displaying the B-scan frame, was 2.24 MHz for 16-bit pixel depth and a 2048-point fast Fourier transform size; the maximum 3-D volumetric rendering rate, including B-scan, en face view display, and 3-D rendering, was ~23 volumes/second (volume size: 1024×256×200). To the best of our knowledge, this is the fastest processing rate reported to date with a single-chip GPU and the first implementation of real-time video-rate volumetric optical coherence tomography (OCT) processing and rendering that is capable of matching the acquisition rates of ultrahigh-speed OCT.
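The per-A-scan processing that such pipelines accelerate reduces, at its core, to background removal, windowing, and an FFT per spectral line. The numpy sketch below shows that minimal spectral-domain OCT step (CPU-side, for clarity; it omits resampling and dispersion compensation, which a real pipeline would include):

```python
import numpy as np

def process_ascans(spectra, fft_size=2048):
    """Minimal spectral-domain OCT step: remove the DC background,
    apply a Hann window, and take the FFT magnitude of each spectral
    line to form depth profiles (A-scans).
    spectra: 2-D array, one spectral line per row."""
    spectra = spectra.astype(np.float64)
    spectra -= spectra.mean(axis=1, keepdims=True)   # DC / background removal
    window = np.hanning(spectra.shape[1])
    depth = np.abs(np.fft.rfft(spectra * window, n=fft_size, axis=1))
    return depth
```

On a GPU the same three stages become element-wise kernels plus a batched FFT, which is what makes MHz-rate throughput feasible.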

  2. High-Frame-Rate Deformation Imaging in Two Dimensions Using Continuous Speckle-Feature Tracking.

    Science.gov (United States)

    Andersen, Martin V; Moore, Cooper; Arges, Kristine; Søgaard, Peter; Østergaard, Lasse R; Schmidt, Samuel E; Kisslo, Joseph; Von Ramm, Olaf T

    2016-11-01

    The study describes a novel algorithm for deriving myocardial strain from an entire cardiac cycle using high-frame-rate ultrasound images. Validation of the tracking algorithm was conducted in vitro prior to the application to patient images. High-frame-rate ultrasound images were acquired in vivo from 10 patients, and strain curves were derived in six myocardial regions around the left ventricle from the apical four-chamber view. Strain curves derived from high-frame-rate images had a higher frequency content than those derived using conventional methods, reflecting improved temporal sampling. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  3. [Improvement on high frame rate ultrasonic imaging system based on linear frequency-modulated signal].

    Science.gov (United States)

    Han, Xuemei; Peng, Hu; Cai, Bo

    2009-08-01

The high frame rate (HFR) ultrasonic imaging system based on linear frequency-modulated (LFM) signals constructs images at a high frame rate, and the signal-to-noise ratio (SNR) of this system can also be improved. Unfortunately, pulse compression methods that increase the SNR usually cause range sidelobe artifacts. In an imaging situation, the effects of the sidelobes extending on either side of the compressed pulse are self-noise along the axial direction and masking of weaker echoes. An improvement to the LFM-based HFR ultrasonic imaging system is considered in this paper. In the proposed scheme, a predistorted LFM signal is used as the excitation signal and a mismatched filter is applied at the receiving end. The results show that the proposed HFR ultrasonic imaging system can achieve a higher SNR, and the axial resolution is also improved.
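The baseline that predistortion and mismatched filtering improve upon is plain LFM pulse compression: transmit a chirp, then correlate the received trace with it. A minimal numpy sketch (ordinary matched filtering, not the paper's predistorted/mismatched variant; the frequency sweep and sampling rate below are assumed values):

```python
import numpy as np

def lfm_chirp(f0, f1, duration, fs):
    """Linear frequency-modulated (LFM) pulse sweeping f0 -> f1 Hz."""
    t = np.arange(int(duration * fs)) / fs
    k = (f1 - f0) / duration                      # sweep rate, Hz/s
    return np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))

def matched_filter(rx, tx):
    """Pulse compression: correlate the received trace with the
    transmitted chirp (convolution with the time-reversed pulse)."""
    return np.convolve(rx, tx[::-1], mode="same")
```

The compressed output concentrates the chirp's energy at the echo location; the range sidelobes around that peak are exactly what a mismatched filter trades SNR to suppress.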

  4. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Science.gov (United States)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

    Video Photoplethysmography (VPPG) is a numerical technique to process standard RGB video data of exposed human skin and extracting the heart-rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of subject's heart-rate, respiratory rate, and even the heart rate variability of human subjects with potential applications ranging from infant monitors, remote healthcare and psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations in HR obtained using VPPG algorithms to HR measured using the gold-standard electrocardiograph, others have reported that these correlations are dependent on controlling for duration of the video-data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG-algorithms in extraction of human heart-rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired by using an electrocardiogram or a commercially available pulseoximeter. The two VPPG-algorithms were applied with and without KLT-facial feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG based numerical approaches have the ability to provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to conditions of video acquisition including subject motion, the location, size and averaging techniques applied to regions-of-interest as well as to the number of video frames used for data processing.
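The simplest VPPG estimator averages the green channel over a region of interest per frame, detrends, and reads the heart rate off the dominant spectral peak in a plausible cardiac band. This is a generic sketch of that idea, not either of the specific algorithms compared in the study; the 0.7-3.0 Hz band is an assumed physiological range.

```python
import numpy as np

def estimate_hr_bpm(green_means, fps, lo=0.7, hi=3.0):
    """Estimate heart rate (beats/min) from the per-frame mean of the
    green channel: remove the mean, FFT, and take the dominant
    frequency inside the cardiac band lo-hi Hz (42-180 bpm)."""
    x = np.asarray(green_means, dtype=np.float64)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    return 60.0 * freqs[band][np.argmax(spec[band])]
```

The study's sensitivity findings map directly onto this sketch: subject motion and lighting corrupt `green_means`, and the number of frames sets the spectral resolution of the estimate.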

  5. Movie Ratings and the Content of Adult Videos: The Sex-Violence Ratio.

    Science.gov (United States)

    Yang, Ni; Linz, Daniel

    1990-01-01

    Quantifies sexual, violent, sexually violent, and prosocial behaviors in a sample of R-rated and X-rated videocassettes. Finds the predominant behavior in both X- and XXX-rated videos is sexual. Finds the predominant behavior in R-rated videos was violence followed by prosocial behavior. (RS)

  6. High frame-rate blood vector velocity imaging using plane waves: simulations and preliminary experiments

    DEFF Research Database (Denmark)

    Udesen, J.; Gran, F.; Hansen, K.L.

    2008-01-01

    Conventional ultrasound methods for acquiring color images of blood velocity are limited by a relatively low frame-rate and are restricted to give velocity estimates along the ultrasound beam direction only. To circumvent these limitations, the method presented in this paper uses 3 techniques: 1...... carotid artery of a healthy male was scanned with a scan sequence that satisfies the limits set by the Food and Drug Administration. Vector velocity images were obtained with a frame-rate of 100 Hz where 40 speckle images are used for each vector velocity image. It was found that the blood flow...

  7. High Frame-Rate Blood Vector Velocity Imaging Using Plane Waves: Simulations and Preliminary Experiments

    DEFF Research Database (Denmark)

    Udesen, Jesper; Gran, Fredrik; Hansen, Kristoffer Lindskov

    2008-01-01

    Conventional ultrasound methods for acquiring color images of blood velocity are limited by a relatively low frame-rate and are restricted to give velocity estimates along the ultrasound beam direction only. To circumvent these limitations, the method presented in this paper uses 3 techniques: 1...... carotid artery of a healthy male was scanned with a scan sequence that satisfies the limits set by the Food and Drug Administration. Vector velocity images were obtained with a frame-rate of 100 Hz where 40 speckle images are used for each vector velocity image. It was found that the blood flow...

  8. 3D video bit rate adaptation decision taking using ambient illumination context

    Directory of Open Access Journals (Sweden)

    G. Nur Yilmaz

    2014-09-01

Full Text Available 3-Dimensional (3D) video adaptation decision taking is an open field in which not many researchers have carried out investigations yet, compared to 3D video display, coding, etc. Moreover, utilizing ambient illumination as an environmental context for 3D video adaptation decision taking has not been studied in the literature to date. In this paper, a user perception model, which is based on determining the perception characteristics of a user for 3D video content viewed under a particular ambient illumination condition, is proposed. Using the proposed model, a 3D video bit rate adaptation decision taking technique is developed to determine the adapted bit rate for the 3D video content so as to maintain 3D video quality perception as the ambient illumination condition changes. Experimental results demonstrate that the proposed technique is capable of exploiting changes in ambient illumination level to use network resources more efficiently without sacrificing 3D video quality perception.

  9. Frame, bit and chip error rate evaluation for a DSSS communication system

    Directory of Open Access Journals (Sweden)

    F.R. Castillo–Soria

    2008-07-01

Full Text Available The relation between chip, bit and frame error rates in the Additive White Gaussian Noise (AWGN) channel for a Direct Sequence Spread Spectrum (DSSS) system, under Multiple Access Interference (MAI) conditions, is evaluated. A simple error-correction code (ECC) is used for the Frame Error Rate (FER) evaluation. 64-bit (chip) Pseudo Noise (PN) sequences are employed for the spread spectrum transmission. An iterative Monte Carlo (stochastic) simulation is used to evaluate how many chip errors are introduced by channel effects and how they are related to bit errors. It can be observed how the bit errors may eventually cause a frame error, i.e., a CODEC or communication error. These results are useful for academics, engineers, and professionals alike.
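The chip-to-bit-to-frame error cascade can be reproduced with a small Monte Carlo experiment. The sketch below is a simplified stand-in for the paper's setup: BPSK chips over AWGN, hard despreading by summing chips, and a frame counted as erroneous if any bit is wrong (no ECC and no MAI, both of which the paper includes).

```python
import numpy as np

def simulate_fer(n_frames, bits_per_frame, chips_per_bit, ebn0_db, rng):
    """Monte Carlo sketch of a DSSS link over AWGN: BPSK chips, hard
    despreading by summing the chips of each bit, and a frame declared
    in error if any of its bits is wrong (uncoded)."""
    # Energy per bit is spread over chips_per_bit chips.
    ecn0 = 10 ** (ebn0_db / 10) / chips_per_bit
    sigma = np.sqrt(1.0 / (2.0 * ecn0))
    frame_errors = 0
    for _ in range(n_frames):
        bits = rng.integers(0, 2, bits_per_frame)
        chips = np.repeat(2 * bits - 1, chips_per_bit)      # BPSK spreading
        rx = chips + sigma * rng.normal(size=chips.size)
        # Despread: sum the chips of each bit, then hard decision.
        decided = rx.reshape(bits_per_frame, chips_per_bit).sum(axis=1) > 0
        frame_errors += int(np.any(decided != bits.astype(bool)))
    return frame_errors / n_frames
```

Despreading recovers the full bit-level SNR (summing N chips gains a factor N in decision SNR), which is why the chip error rate can be far higher than the resulting bit error rate.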

  10. Frame rate required for speckle tracking echocardiography: A quantitative clinical study with open-source, vendor-independent software.

    Science.gov (United States)

    Negoita, Madalina; Zolgharni, Massoud; Dadkho, Elham; Pernigo, Matteo; Mielewczik, Michael; Cole, Graham D; Dhutia, Niti M; Francis, Darrel P

    2016-09-01

To determine the optimal frame rate at which reliable heart wall velocities can be assessed by speckle tracking. Assessing left ventricular function with speckle tracking is useful in patient diagnosis but requires a temporal resolution that can follow myocardial motion. In this study we investigated the effect of different frame rates on the accuracy of speckle tracking results, highlighting the temporal resolution where reliable results can be obtained. 27 patients were scanned at two different frame rates at their resting heart rate. From all acquired loops, lower temporal resolution image sequences were generated by dropping frames, decreasing the frame rate by up to 10-fold. Tissue velocities were estimated by automated speckle tracking. Above 40 frames/s the peak velocity was reliably measured. When the frame rate was lower, the inter-frame interval containing the instant of highest velocity also contained lower velocities, and therefore the average velocity in that interval was an underestimate of the clinically desired instantaneous maximum velocity. The higher the frame rate, the more accurately maximum velocities are identified by speckle tracking, up to about 40 frames/s, above which there is little further increase in measured peak velocity. We provide in an online supplement the vendor-independent software we used for automatic speckle-tracked velocity assessment to help others working in this field. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
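The underestimation mechanism described above is easy to reproduce numerically: averaging a velocity transient over longer inter-frame intervals clips its peak. A hedged toy model (a synthetic Gaussian velocity pulse, not the study's patient data; frame intervals emulated by block averaging):

```python
import numpy as np

def peak_velocity_at_rate(velocity, native_fps, target_fps):
    """Emulate acquisition at a lower frame rate by averaging the
    native samples inside each target-rate frame interval, then return
    the peak of the downsampled trace."""
    step = int(round(native_fps / target_fps))
    n = (len(velocity) // step) * step
    frames = velocity[:n].reshape(-1, step).mean(axis=1)
    return frames.max()
```

Running this on a short velocity transient shows the measured peak shrinking monotonically as the frame rate drops, mirroring the study's finding that peaks are reliable only above roughly 40 frames/s.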

  11. The Effect of Motion Analysis Activities in a Video-Based Laboratory in Students' Understanding of Position, Velocity and Frames of Reference

    Science.gov (United States)

    Koleza, Eugenia; Pappas, John

    2008-01-01

    In this article, we present the results of a qualitative research project on the effect of motion analysis activities in a Video-Based Laboratory (VBL) on students' understanding of position, velocity and frames of reference. The participants in our research were 48 pre-service teachers enrolled in Education Departments with no previous strong…

  12. Observer accuracy in the detection of pulmonary nodules on CT: effect of cine frame rate.

    Science.gov (United States)

    Copley, S J; Bryant, T H; Chambers, A A; Harvey, C J; Hodson, J M; Graham, A; Lynch, M J; Paley, M R; Partridge, W J; Rangi, P; Schmitz, S; Win, Z; Todd, J J; Desai, S R

    2010-02-01

    To assess the effect of cine frame rate on the accuracy of the detection of pulmonary nodules at computed tomography (CT). CT images of 15 consecutive patients with (n = 13) or without (n = 2) pulmonary metastases were identified. Initial assessment by two thoracic radiologists provided the "actual" or reference reading. Subsequently, 10 radiologists [board certified radiologists (n = 4) or radiology residents (n = 6)] used different fixed cine frame rates for nodule detection. Within-subjects analysis of variance (ANOVA) was used to evaluate the data. Eighty-nine nodules were identified by the thoracic radiologists (median 8, range 0-29 per patient; median diameter 9 mm, range 4-40 mm). There was a non-statistically significant trend to reduced accuracy at higher frame rates (p=0.113) with no statistically significant difference between experienced observers and residents (p = 0.79). The accuracy of pulmonary nodule detection at higher cine frame rates is reduced, unrelated to observer experience. Copyright 2009 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  13. High frame-rate multichannel beam-scanning microscopy based on Lissajous trajectories.

    Science.gov (United States)

    Sullivan, Shane Z; Muir, Ryan D; Newman, Justin A; Carlsen, Mark S; Sreehari, Suhas; Doerge, Chris; Begue, Nathan J; Everly, R Michael; Bouman, Charles A; Simpson, Garth J

    2014-10-06

    A simple beam-scanning optical design based on Lissajous trajectory imaging is described for achieving up to kHz frame-rate optical imaging on multiple simultaneous data acquisition channels. In brief, two fast-scan resonant mirrors direct the optical beam on a circuitous trajectory through the field of view, with the trajectory repeat-time given by the least common multiplier of the mirror periods. Dicing the raw time-domain data into sub-trajectories combined with model-based image reconstruction (MBIR) 3D in-painting algorithms allows for effective frame-rates much higher than the repeat time of the Lissajous trajectory. Since sub-trajectory and full-trajectory imaging are simply different methods of analyzing the same data, both high-frame rate images with relatively low resolution and low frame rate images with high resolution are simultaneously acquired. The optical hardware required to perform Lissajous imaging represents only a minor modification to established beam-scanning hardware, combined with additional control and data acquisition electronics. Preliminary studies based on laser transmittance imaging and polarization-dependent second harmonic generation microscopy support the viability of the approach both for detection of subtle changes in large signals and for trace-light detection of transient fluctuations.
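The geometry of such a scan is easy to sketch: two sinusoids at the resonant mirror frequencies trace the Lissajous pattern, and samples taken along it are binned into pixels. The numpy toy below shows only that trajectory-and-binning step (unvisited pixels stay NaN); it omits the paper's MBIR 3D in-painting and sub-trajectory dicing, and the frequencies and grid size are assumed values.

```python
import numpy as np

def lissajous_image(signal_fn, fx, fy, sample_rate, duration, size):
    """Scan a Lissajous trajectory (two resonant axes at fx, fy Hz),
    sample signal_fn(x, y) along it, and bin the samples into a
    size x size image. Returns the image (NaN where unvisited) and
    the fraction of pixels visited."""
    t = np.arange(int(sample_rate * duration)) / sample_rate
    x = 0.5 * (1 + np.sin(2 * np.pi * fx * t))      # normalized [0, 1]
    y = 0.5 * (1 + np.sin(2 * np.pi * fy * t))
    ix = np.minimum((x * size).astype(int), size - 1)
    iy = np.minimum((y * size).astype(int), size - 1)
    counts = np.zeros((size, size))
    acc = np.zeros((size, size))
    np.add.at(counts, (iy, ix), 1)
    np.add.at(acc, (iy, ix), signal_fn(x, y))
    img = np.full((size, size), np.nan)
    visited = counts > 0
    img[visited] = acc[visited] / counts[visited]
    return img, visited.mean()
```

The trajectory repeats with period 1/gcd(fx, fy); dicing it into shorter sub-trajectories trades the fraction of visited pixels (to be in-painted) against effective frame rate, exactly the trade-off the paper exploits.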

  14. Use of modulated excitation signals in medical ultrasound. Part III: High frame rate imaging

    DEFF Research Database (Denmark)

    Misaridis, Thanassis; Jensen, Jørgen Arendt

    2005-01-01

    For pt.II, see ibid., vol.52, no.2, p.192-207 (2005). This paper, the last from a series of three papers on the application of coded excitation signals in medical ultrasound, investigates the possibility of increasing the frame rate in ultrasound imaging by using modulated excitation signals. Lin...

  15. Accelerated high-frame-rate mouse heart cine-MRI using compressed sensing reconstruction

    NARCIS (Netherlands)

    Motaal, Abdallah G.; Coolen, Bram F.; Abdurrachim, Desiree; Castro, Rui M.; Prompers, Jeanine J.; Florack, Luc M. J.; Nicolay, Klaas; Strijkers, Gustav J.

    2013-01-01

We introduce a new protocol to obtain very high-frame-rate cinematographic (Cine) MRI movies of the beating mouse heart within a reasonable measurement time. The method is based on a self-gated accelerated fast low-angle shot (FLASH) acquisition and compressed sensing reconstruction. Key to our

  16. Parents rate the ratings: a test of the validity of the American movie, television, and video game ratings.

    Science.gov (United States)

    Walsh, D A; Gentile, D A; Van Brederode, T M

    2002-02-01

    Numerous studies have documented the potential effects on young audiences of violent content in media products, including movies, television programs, and computer and video games. Similar studies have evaluated the effects associated with sexual content and messages. Cumulatively, these effects represent a significant public health risk for increased aggressive and violent behavior, spread of sexually transmitted diseases, and pediatric pregnancy. In partial response to these risks and to public and legislative pressure, the movie, television, and gaming industries have implemented ratings systems intended to provide information about the content and appropriate audiences for different films, shows, and games. We conducted a panel study to test the validity of the current movie, television, and video game rating systems. Participants used the KidScore media evaluation tool, which evaluates films, television shows, and video and computer games on 10 aspects, including the appropriateness of the media product for children on the basis of age. Results revealed that when an entertainment industry rates a product as inappropriate for children, parent raters agree that it is inappropriate for children. However, parent raters disagree with industry usage of many of the ratings designating material suitable for children of different ages. Products rated as appropriate for adolescents are of the greatest concern. The level of disagreement varies from industry to industry and even from rating to rating. Analysis indicates that the amount of violent content and portrayals of violence are the primary markers for disagreement between parent raters and industry ratings. Short-term and long-term recommendations are suggested.

  17. In-Vivo Synthetic Aperture and Plane Wave High Frame Rate Cardiac Imaging

    DEFF Research Database (Denmark)

    Stuart, Matthias Bo; Jensen, Jonas; Brandt, Andreas Hjelm

    2014-01-01

    A comparison of synthetic aperture imaging using spherical and plane waves with low number of emission events is presented. For both wave types, a 90 degree sector is insonified using 15 emission events giving a frame rate of 200 frames per second. Field II simulations of point targets show simil.......43 for spherical and 0.70 for plane waves. All measures are well within FDA limits for cardiac imaging. In-vivo images of the heart of a healthy 28-year old volunteer are shown....

  18. Initial Demonstration of 9-MHz Framing Camera Rates on the FAST UV Drive Laser Pulse Trains

    Energy Technology Data Exchange (ETDEWEB)

    Lumpkin, A. H. [Fermilab; Edstrom Jr., D. [Fermilab; Ruan, J. [Fermilab

    2016-10-09

We report the configuration of a Hamamatsu C5680 streak camera as a framing camera to record transverse spatial information of green-component laser micropulses at 3- and 9-MHz rates for the first time. The latter is near the time scale of the ~7.5-MHz revolution frequency of the Integrable Optics Test Accelerator (IOTA) ring and its expected synchrotron radiation source temporal structure. The 2-D images are recorded with a Gig-E readout CCD camera. We also report a first proof of principle with an OTR source using the linac streak camera in a semi-framing mode.

  19. Improved contrast for high frame rate imaging using coherent compounding combined with spatial matched filtering.

    Science.gov (United States)

    Lou, Yang; Yen, Jesse T

    2017-07-01

The concept of high frame rate ultrasound imaging (typically greater than 1000 frames per second) has inspired new fields of clinical applications for ultrasound imaging such as fast cardiovascular imaging, fast Doppler imaging and real-time 3D imaging. Coherent plane-wave compounding is a promising beamforming technique to achieve high frame rate imaging. By combining echoes from plane waves with different angles, dynamic transmit focusing is efficiently accomplished at all points in the image field. Meanwhile, the image frame rate can still be kept at a high level. Spatial matched filtering (SMF) with plane-wave insonification is a novel ultrafast beamforming method. An analytical study shows that SMF is equivalent to synthetic aperture methods that can provide dynamic transmit-receive focusing throughout the field of view. Experimental results show that plane-wave SMF has better performance than dynamic-receive focusing. In this paper, we propose integrating coherent plane-wave compounding with SMF to obtain greater image contrast. By using a combination of SMF-beamformed images, image contrast is improved without degrading the high frame rate capability. The performance of compounded SMF (CSMF) is evaluated and compared with that of synthetic aperture focusing technique (SAFT) beamforming and compounded dynamic-receive-focus (CDRF) beamforming. The image quality of the different beamforming methods was quantified in terms of contrast-to-noise ratio (CNR). Our results show that the new SMF-based plane-wave compounding method provides better contrast than the DAS-based compounding method. Also, CSMF can obtain a contrast level similar to that of dynamic transmit-receive focusing with only 21 transmit events. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    Science.gov (United States)

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

The purpose of this study was to establish intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers using the GAITRite walkway and a single video camera between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.

  1. Video-rate in vivo fluorescence imaging with a line-scanned dual-axis confocal microscope

    Science.gov (United States)

    Chen, Ye; Wang, Danni; Khan, Altaz; Wang, Yu; Borwege, Sabine; Sanai, Nader; Liu, Jonathan T. C.

    2015-10-01

    Video-rate optical-sectioning microscopy of living organisms would allow for the investigation of dynamic biological processes and would also reduce motion artifacts, especially for in vivo imaging applications. Previous feasibility studies, with a slow stage-scanned line-scanned dual-axis confocal (LS-DAC) microscope, have demonstrated that LS-DAC microscopy is capable of imaging tissues with subcellular resolution and high contrast at moderate depths of up to several hundred microns. However, the sensitivity and performance of a video-rate LS-DAC imaging system, with low-numerical aperture optics, have yet to be demonstrated. Here, we report on the construction and validation of a video-rate LS-DAC system that possesses sufficient sensitivity to visualize fluorescent contrast agents that are topically applied or systemically delivered in animal and human tissues. We present images of murine oral mucosa that are topically stained with methylene blue, and images of protoporphyrin IX-expressing brain tumor from glioma patients that have been administered 5-aminolevulinic acid prior to surgery. In addition, we demonstrate in vivo fluorescence imaging of red blood cells trafficking within the capillaries of a mouse ear, at frame rates of up to 30 fps. These results can serve as a benchmark for miniature in vivo microscopy devices under development.

  2. Efficient Hybrid Watermarking Scheme for Security and Transmission Bit Rate Enhancement of 3D Color-Plus-Depth Video Communication

    Science.gov (United States)

    El-Shafai, W.; El-Rabaie, S.; El-Halawany, M.; Abd El-Samie, F. E.

    2018-03-01

    Three-Dimensional Video-plus-Depth (3DV + D) comprises diverse video streams captured by different cameras around an object. Therefore, there is a great need to fulfill efficient compression to transmit and store the 3DV + D content in compressed form to attain future resource bounds whilst preserving a decisive reception quality. Also, the security of the transmitted 3DV + D is a critical issue for protecting its copyright content. This paper proposes an efficient hybrid watermarking scheme for securing the 3DV + D transmission, which is the homomorphic transform based Singular Value Decomposition (SVD) in Discrete Wavelet Transform (DWT) domain. The objective of the proposed watermarking scheme is to increase the immunity of the watermarked 3DV + D to attacks and achieve adequate perceptual quality. Moreover, the proposed watermarking scheme reduces the transmission-bandwidth requirements for transmitting the color-plus-depth 3DV over limited-bandwidth wireless networks through embedding the depth frames into the color frames of the transmitted 3DV + D. Thus, it saves the transmission bit rate and subsequently it enhances the channel bandwidth-efficiency. The performance of the proposed watermarking scheme is compared with those of the state-of-the-art hybrid watermarking schemes. The comparisons depend on both the subjective visual results and the objective results; the Peak Signal-to-Noise Ratio (PSNR) of the watermarked frames and the Normalized Correlation (NC) of the extracted watermark frames. Extensive simulation results on standard 3DV + D sequences have been conducted in the presence of attacks. The obtained results confirm that the proposed hybrid watermarking scheme is robust in the presence of attacks. It achieves not only very good perceptual quality with appreciated PSNR values and saving in the transmission bit rate, but also high correlation coefficient values in the presence of attacks compared to the existing hybrid watermarking schemes.
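The core transform-domain idea can be sketched in a few lines. The toy below embeds watermark bits additively in the LL subband of a one-level 2-D Haar DWT and extracts them non-blindly by comparison with the original; the paper's actual scheme additionally applies a homomorphic transform and SVD, which are omitted here, and the embedding strength `alpha` is an arbitrary choice:

```python
def haar1d(x):
    """One-level 1-D Haar transform: averages first, details second."""
    n = len(x) // 2
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(n)]
    dif = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(n)]
    return avg + dif

def ihaar1d(y):
    n = len(y) // 2
    out = []
    for a, d in zip(y[:n], y[n:]):
        out += [a + d, a - d]
    return out

def transpose(m):
    return [list(c) for c in zip(*m)]

def haar2d(img):
    """One-level 2-D Haar DWT (rows, then columns); LL is the top-left quadrant."""
    rows = [haar1d(r) for r in img]
    return transpose([haar1d(c) for c in transpose(rows)])

def ihaar2d(coef):
    cols = transpose([ihaar1d(c) for c in transpose(coef)])
    return [ihaar1d(r) for r in cols]

def embed(img, bits, alpha=4.0):
    """Embed one bit per LL coefficient by a signed perturbation of size alpha."""
    c = haar2d(img)
    h, w = len(img) // 2, len(img[0]) // 2
    k = 0
    for i in range(h):
        for j in range(w):
            if k < len(bits):
                c[i][j] += alpha if bits[k] else -alpha
                k += 1
    return ihaar2d(c)

def extract(marked, original, nbits):
    """Non-blind extraction: compare LL coefficients against the original."""
    cm, co = haar2d(marked), haar2d(original)
    h, w = len(marked) // 2, len(marked[0]) // 2
    bits = []
    for i in range(h):
        for j in range(w):
            if len(bits) < nbits:
                bits.append(1 if cm[i][j] > co[i][j] else 0)
    return bits
```

Because the watermark sits in low-frequency coefficients, it tends to survive mild filtering and compression better than spatial-domain LSB embedding, which is the robustness property the paper's hybrid scheme is after.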

  3. Extremely low-frame-rate digital fluoroscopy in catheter ablation of atrial fibrillation: A comparison of 2 versus 4 frame rate.

    Science.gov (United States)

    Lee, Ji Hyun; Kim, Jun; Kim, Minsu; Hwang, Jongmin; Hwang, You Mi; Kang, Joon-Won; Nam, Gi-Byoung; Choi, Kee-Joon; Kim, You-Ho

    2017-06-01

Despite the technological advance in 3-dimensional (3D) mapping, radiation exposure during catheter ablation of atrial fibrillation (AF) continues to be a major concern for both patients and physicians. Previous studies reported substantial radiation exposure (7369-8690 cGy·cm²) during AF catheter ablation with fluoroscopic settings of 7.5 frames per second (FPS) under 3D mapping system guidance. We evaluated the efficacy and safety of a low-frame-rate fluoroscopy protocol for catheter ablation for AF. Retrospective analysis of data on 133 patients who underwent AF catheter ablation with 3-D electro-anatomic mapping at our institute from January 2014 to May 2015 was performed. Since January 2014, a fluoroscopy frame rate of 4 FPS was implemented at our institute, which was further decreased to 2 FPS in September 2014. We compared the radiation exposure quantified as dose area product (DAP) and effective dose (ED) between the 4-FPS (n = 57) and 2-FPS (n = 76) groups. The 4-FPS group showed higher median DAP (599.9 cGy·cm²; interquartile range [IR], 371.4-1337.5 cGy·cm² vs. 392.0 cGy·cm²; IR, 289.7-591.4 cGy·cm²; P < .01) and higher median ED (1.1 mSv; IR, 0.7-2.5 mSv vs. 0.7 mSv; IR, 0.6-1.1 mSv; P < .01) compared with the 2-FPS group. No major procedure-related complications such as cardiac tamponade were observed in either group. Over follow-up durations of 331 ± 197 days, atrial tachyarrhythmia recurred in 20 patients (35.1%) in the 4-FPS group and in 27 patients (35.5%) in the 2-FPS group (P = .96). Kaplan-Meier survival analysis revealed no significant difference between the 2 groups (log rank, P = .25). In conclusion, both the 4-FPS and 2-FPS settings were feasible and emitted a relatively low level of radiation compared with that historically reported for DAP in a conventional fluoroscopy setting.
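The two dose metrics reported above are related by a conversion coefficient (ED ≈ k · DAP). The sketch below sanity-checks the paper's medians using k ≈ 0.18 mSv per Gy·cm², a typical literature figure for cardiac fluoroscopy that is an assumption here, not a value taken from the study:

```python
def effective_dose_msv(dap_cgy_cm2, k_msv_per_gy_cm2=0.18):
    """Approximate effective dose from dose-area product.
    DAP is given in cGy*cm^2 as in the abstract; k is an assumed
    conversion coefficient for cardiac fluoroscopy."""
    return dap_cgy_cm2 / 100.0 * k_msv_per_gy_cm2
```

With this k, the 4-FPS median DAP of 599.9 cGy·cm² maps to roughly 1.1 mSv and the 2-FPS median of 392.0 cGy·cm² to roughly 0.7 mSv, consistent with the reported effective doses.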

  4. An adaptive scan of high frequency subbands for dyadic intra frame in MPEG4-AVC/H.264 scalable video coding

    Science.gov (United States)

    Shahid, Z.; Chaumont, M.; Puech, W.

    2009-01-01

This paper develops a new adaptive scanning methodology for an intra frame scalable coding framework based on a subband/wavelet (DWTSB) coding approach for MPEG-4 AVC/H.264 scalable video coding (SVC). It attempts to take advantage of prior knowledge of the frequencies which are present in different higher-frequency subbands. We propose a dyadic intra frame coding method with adaptive scan (DWTSB-AS) for each subband, as the traditional zigzag scan is not suitable for high-frequency subbands. Thus, by merely modifying the scan order of the intra frame scalable coding framework of H.264, we can obtain better compression. The proposed algorithm has been theoretically justified and is thoroughly evaluated against the current SVC test model JSVM and DWTSB through extensive coding experiments for scalable coding of intra frames. The simulation results show that the proposed scanning algorithm consistently outperforms JSVM and DWTSB in PSNR performance. This results in extra compression for intra frames, along with spatial scalability. Thus, image and video coding applications, traditionally serviced by separate coders, can be efficiently provided by an integrated coding system.
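The intuition behind subband-adapted scanning can be shown with a toy example (this is not the paper's DWTSB-AS algorithm, just the general principle): in a subband whose significant coefficients cluster along one orientation, a scan that follows that orientation pushes the zeros to the end of the scanned sequence, which shortens what the entropy coder must represent.

```python
def zigzag_order(n):
    """Classic JPEG-style zigzag visiting order for an n x n block."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

def column_order(n):
    """Column-wise scan, suited to coefficients clustered in few columns."""
    return [(i, j) for j in range(n) for i in range(n)]

def trailing_zeros(block, order):
    """How many zeros end the scanned sequence (cheap for entropy coding)."""
    vals = [block[i][j] for i, j in order]
    k = len(vals)
    while k and vals[k - 1] == 0:
        k -= 1
    return len(vals) - k
```

For a block whose nonzeros all lie in one column, the column-wise scan leaves twice as many trailing zeros as the zigzag, so the adapted scan terminates the significant part of the sequence much earlier.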

  5. Data rate enhancement of optical camera communications by compensating inter-frame gaps

    Science.gov (United States)

    Nguyen, Duy Thong; Park, Youngil

    2017-07-01

Optical camera communications (OCC) is a convenient way of transmitting data between LED lamps and the image sensors that are included in most smart devices. Although many schemes have been suggested to increase the data rate of the OCC system, it is still much lower than that of photodiode-based LiFi systems. One major cause of this low data rate is the inter-frame gap (IFG) of the image sensor system, that is, the time gap between consecutive image frames. In this paper, we propose a way to compensate for the IFG efficiently with an interleaved Hamming coding scheme. The proposed scheme is implemented and its performance is measured.
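The idea of interleaved Hamming coding can be sketched as follows (the code parameters and the mapping of bits to frames are illustrative assumptions, not the paper's exact design): bits lost in a burst spanning the IFG are spread by the interleaver so that each Hamming(7,4) codeword sees at most one error, which it can correct.

```python
def hamming74_encode(d):
    """Hamming(7,4): data bits d1..d4 -> codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one bit error via the syndrome, return the data bits."""
    p1, p2, d1, p3, d2, d3, d4 = c
    s1 = p1 ^ d1 ^ d2 ^ d4
    s2 = p2 ^ d1 ^ d3 ^ d4
    s3 = p3 ^ d2 ^ d3 ^ d4
    pos = s1 + 2 * s2 + 4 * s3          # 1-based position of the error, 0 if none
    if pos:
        c = c[:]
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

def interleave(codewords):
    """Transmit column-wise so a short burst hits at most 1 bit per codeword."""
    return [codewords[r][k] for k in range(7) for r in range(len(codewords))]

def deinterleave(stream, n):
    cws = [[0] * 7 for _ in range(n)]
    it = iter(stream)
    for k in range(7):
        for r in range(n):
            cws[r][k] = next(it)
    return cws
```

Transmitting column-wise means a burst of up to n consecutive corrupted bits (n = interleaving depth) lands in n different codewords, each of which can repair its single error.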

  6. High frame rate multi-resonance imaging refractometry with distributed feedback dye laser sensor

    DEFF Research Database (Denmark)

    Vannahme, Christoph; Dufva, Martin; Kristensen, Anders

    2015-01-01

    High frame rate and highly sensitive imaging of refractive index changes on a surface is very promising for studying the dynamics of dissolution, mixing and biological processes without the need for labeling. Here, a highly sensitive distributed feedback (DFB) dye laser sensor for high frame rate...... by analyzing laser light from all areas in parallel with an imaging spectrometer. With this multi-resonance imaging refractometry method, the spatial position in one direction is identified from the horizontal, i.e., spectral position of the multiple laser lines which is obtained from the spectrometer charged...... coupled device (CCD) array. The orthogonal spatial position is obtained from the vertical spatial position on the spectrometer CCD array as in established spatially resolved spectroscopy. Here, the imaging technique is demonstrated by monitoring the motion of small sucrose molecules upon dissolution...

  7. Increased Frame Rate for Plane Wave Imaging Without Loss of Image Quality

    DEFF Research Database (Denmark)

    Jensen, Jonas; Stuart, Matthias Bo; Jensen, Jørgen Arendt

    2015-01-01

    in the near field for λ-pitch transducers. Artefacts can only partly be suppressed by increasing the number of emissions, and this paper demonstrates how the frame rate can be increased without loss of image quality by using λ/2-pitch transducers. The number of emissions and steering angles are optimized...... in a simulation study to get the best images with as high a frame rate as possible. The optimal setup for a simulated 4.1 MHz λ-pitch transducer is 73 emissions and a maximum steering of 22◦ . The achieved FWHM is 1.3λ and the cystic resolution is -25 dB for a scatter at 9 mm. Only 37 emissions are necessary...... are scanned and show the performance using the optimized sequences for the transducers. Measurements confirm results from simulations, and the λ-pitch transducer show artefacts at undesirable strengths of -25 dB for a low number of emissions....

  8. Layer-based buffer aware rate adaptation design for SHVC video streaming

    Science.gov (United States)

    Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan

    2016-09-01

This paper proposes a layer-based buffer-aware rate adaptation design which is able to avoid abrupt video quality fluctuation, reduce re-buffering latency and improve bandwidth utilization when compared to a conventional simulcast based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, dependencies among video layers and layer buffer fullness. Scalable HEVC video coding is the latest state-of-the-art video coding technique that can alleviate various issues caused by simulcast based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layer-based coding structure allows fine-granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first use case is to stream HD SHVC video over a wireless network where available bandwidth varies, and a performance comparison between the proposed layer-based streaming approach and the conventional simulcast streaming approach is provided. The second use case is to stream 4K/UHD SHVC video over a hybrid access network that consists of a 5G millimeter wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer based rate adaptation approach is able to utilize the bandwidth more efficiently. As a result, a more consistent viewing experience with higher quality video content and minimal video quality fluctuations can be presented to the user.
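A minimal sketch of such a layer-based, buffer-aware decision rule is below. The function name, thresholds, and exact policy are assumptions for illustration; what it shares with the paper's scheduler is the principle of weighing estimated bandwidth against layer dependencies and per-layer buffer fullness:

```python
def next_request(buffers, est_bw, layer_rates, bl_safety=4.0):
    """Pick the layer whose segment to request next.
    buffers: seconds of buffered media per layer, index 0 = base layer (BL).
    est_bw: estimated bandwidth; layer_rates: bitrate of each layer.
    Policy: keep the BL safely buffered first, then request the highest
    enhancement layer whose (a) cumulative rate fits the bandwidth estimate
    and (b) lower layers are already buffered at least as far."""
    if buffers[0] < bl_safety:
        return 0
    choice, cum = 0, 0.0
    for layer, rate in enumerate(layer_rates):
        cum += rate
        deps_ok = all(buffers[l] >= buffers[layer] for l in range(layer))
        if cum <= est_bw and deps_ok:
            choice = layer
    return choice
```

Because the base layer is always topped up first, a bandwidth drop degrades quality gradually (fewer enhancement layers) instead of stalling playback, which is exactly the fluctuation/re-buffering trade-off the design targets.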

  9. Laryngeal High-Speed Videoendoscopy: Sensitivity of Objective Parameters towards Recording Frame Rate

    OpenAIRE

    Anne Schützenberger; Melda Kunduk; Michael Döllinger; Christoph Alexiou; Denis Dubrovskiy; Marion Semmler; Anja Seger; Christopher Bohr

    2016-01-01

    The current use of laryngeal high-speed videoendoscopy in clinic settings involves subjective visual assessment of vocal fold vibratory characteristics. However, objective quantification of vocal fold vibrations for evidence-based diagnosis and therapy is desired, and objective parameters assessing laryngeal dynamics have therefore been suggested. This study investigated the sensitivity of the objective parameters and their dependence on recording frame rate. A total of 300 endoscopic high-sp...

  10. Backscanning step and stare imaging system with high frame rate and wide coverage.

    Science.gov (United States)

    Sun, Chongshang; Ding, Yalin; Wang, Dejiang; Tian, Dapeng

    2015-06-01

Step and stare imaging with staring arrays has become the main approach to realizing wide-area coverage and high-resolution imagery of potential targets. In this paper, a backscanning step and stare imaging system is described. Compared with traditional step and stare imaging systems, this system achieves a much higher frame rate by using a small-sized array. In order to meet the staring requirements, a fast steering mirror is employed to provide backscan motion that compensates for the image motion caused by the continuous scanning of the gimbal platform. According to the working principle, the control system is designed to step/stare the line of sight at a high frame rate with high accuracy. Then a proof-of-concept backscanning step and stare imaging system is established with a CMOS camera. Finally, the modulation transfer function of the imaging system is measured by the slanted-edge method, and a quantitative analysis is made to evaluate the performance of image motion compensation. Experimental results confirm that both a high frame rate and image quality improvement can be achieved by adopting this method.
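The geometry behind backscanning is simple to state: while the gimbal keeps slewing, the fold mirror counter-rotates so the line of sight (LOS) stays fixed during integration. A sketch of the arithmetic, assuming the common layout where a mirror rotation deflects the LOS by twice the mirror angle (the paper's actual optical design may differ):

```python
import math

def backscan_mirror_rate(gimbal_rate_dps):
    """Mirror counter-rotates at half the gimbal rate, because a fold-mirror
    rotation deflects the line of sight by twice the mirror angle
    (sign convention and optical layout are assumptions)."""
    return -gimbal_rate_dps / 2.0

def residual_smear_pixels(gimbal_rate_dps, mirror_rate_dps, stare_s, ifov_urad):
    """Image smear during one stare, in pixels, for a given compensation."""
    los_rate = gimbal_rate_dps + 2.0 * mirror_rate_dps   # net LOS rate, deg/s
    smear_rad = abs(los_rate) * stare_s * math.pi / 180.0
    return smear_rad / (ifov_urad * 1e-6)
```

For example, a 10 deg/s gimbal scan with a 10 ms stare and a 50 µrad instantaneous field of view would smear the image by tens of pixels without backscan, and by zero pixels with ideal mirror compensation.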

  11. Fine-Grained Rate Shaping for Video Streaming over Wireless Networks

    Directory of Open Access Journals (Sweden)

    Chen Tsuhan

    2004-01-01

Video streaming over wireless networks faces the challenges of a time-varying packet loss rate and fluctuating bandwidth. In this paper, we focus on streaming precoded video that is both source and channel coded. Dynamic rate shaping has been proposed to “shape” the precompressed video to adapt to the fluctuating bandwidth. In our earlier work, rate shaping was extended to shape channel-coded precompressed video, and to take into account the time-varying packet loss rate as well as the fluctuating bandwidth of wireless networks. However, prior work on rate shaping can only adjust the rate coarsely. In this paper, we propose “fine-grained rate shaping” (FGRS) to allow for bandwidth adaptation over a wide range of bandwidths and packet loss rates in fine granularities. The video is precoded with fine granularity scalability (FGS) followed by channel coding. Utilizing the fine granularity property of FGS and channel coding, FGRS selectively drops part of the precoded video and still yields a decodable bit-stream at the decoder. Moreover, FGRS optimizes video streaming rather than achieving heuristic objectives as conventional methods do. A two-stage rate-distortion (RD) optimization algorithm is proposed for FGRS. Promising results for FGRS are shown.
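The bitplane structure of FGS is what makes the shaping step simple: the enhancement bitstream can be cut at any point and remain decodable. A hypothetical shaper along those lines (the paper's two-stage R-D optimization is omitted; the names and the per-frame budget model are assumptions):

```python
def shape_rate(base_bits, el_bits, channel_budget):
    """Hypothetical fine-grained shaper: always send the base layer, then
    truncate the FGS enhancement-layer bitstream of each frame to whatever
    budget remains. Returns (base, enhancement) bits sent per frame."""
    sent = []
    for b, e in zip(base_bits, el_bits):
        room = max(0, channel_budget - b)   # budget left after the base layer
        sent.append((b, min(e, room)))
    return sent
```

A real FGRS shaper would instead pick the truncation points jointly across frames to minimize total distortion under the rate and loss constraints, rather than greedily filling each frame's budget.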

  12. stil113_0401r -- Point coverage of locations of still frames extracted from video imagery which depict sediment types

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  13. Development of scintillator for a high-frame-rate neutron radiography

    Science.gov (United States)

    Matsubayashi, Masahito; Katagiri, Masaki

    2004-08-01

The properties of a Ni-doped ZnS(Ag) scintillator for high-frame-rate neutron radiography and also for a high-counting-rate neutron scintillation detector were examined and confirmed to be promising. Although deterioration of the emission spectrum and of the light transmission property was observed, a slow component in the scintillation decay was well suppressed. The decrease of a few percent in the thermal neutron detection efficiency due to the deteriorated optical properties was recoverable by replacing the neutron converter, e.g. 6LiF with 10B2O3.

  14. Development of scintillator for a high-frame-rate neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Matsubayashi, Masahito E-mail: matsu3@popsvr.tokai.jaeri.go.jp; Katagiri, Masaki

    2004-08-21

The properties of a Ni-doped ZnS(Ag) scintillator for high-frame-rate neutron radiography and also for a high-counting-rate neutron scintillation detector were examined and confirmed to be promising. Although deterioration of the emission spectrum and of the light transmission property was observed, a slow component in the scintillation decay was well suppressed. The decrease of a few percent in the thermal neutron detection efficiency due to the deteriorated optical properties was recoverable by replacing the neutron converter, e.g. {sup 6}LiF with {sup 10}B{sub 2}O{sub 3}.

  15. Accelerated high-frame-rate mouse heart cine-MRI using compressed sensing reconstruction.

    Science.gov (United States)

    Motaal, Abdallah G; Coolen, Bram F; Abdurrachim, Desiree; Castro, Rui M; Prompers, Jeanine J; Florack, Luc M J; Nicolay, Klaas; Strijkers, Gustav J

    2013-04-01

    We introduce a new protocol to obtain very high-frame-rate cinematographic (Cine) MRI movies of the beating mouse heart within a reasonable measurement time. The method is based on a self-gated accelerated fast low-angle shot (FLASH) acquisition and compressed sensing reconstruction. Key to our approach is that we exploit the stochastic nature of the retrospective triggering acquisition scheme to produce an undersampled and random k-t space filling that allows for compressed sensing reconstruction and acceleration. As a standard, a self-gated FLASH sequence with a total acquisition time of 10 min was used to produce single-slice Cine movies of seven mouse hearts with 90 frames per cardiac cycle. Two times (2×) and three times (3×) k-t space undersampled Cine movies were produced from 2.5- and 1.5-min data acquisitions, respectively. The accelerated 90-frame Cine movies of mouse hearts were successfully reconstructed with a compressed sensing algorithm. The movies had high image quality and the undersampling artifacts were effectively removed. Left ventricular functional parameters, i.e. end-systolic and end-diastolic lumen surface areas and early-to-late filling rate ratio as a parameter to evaluate diastolic function, derived from the standard and accelerated Cine movies, were nearly identical. Copyright © 2012 John Wiley & Sons, Ltd.
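The key enabler described above is that retrospective self-gating fills k-t space pseudo-randomly, which is exactly the incoherent sampling that compressed sensing reconstruction needs. A sketch of such a mask generator (the uniform-random line selection and the fully sampled k-space center are modeling assumptions, not the paper's measured distribution):

```python
import random

def kt_mask(n_ky, n_frames, accel, n_center=4, seed=0):
    """Random k-t undersampling mask at acceleration `accel`: each cardiac
    frame keeps n_ky/accel phase-encode lines, drawn at random per frame but
    always including the central lines. Returns a list of 0/1 rows, one per
    frame."""
    rng = random.Random(seed)
    keep_per_frame = n_ky // accel
    center = set(range(n_ky // 2 - n_center // 2, n_ky // 2 + n_center // 2))
    mask = []
    for _ in range(n_frames):
        others = [k for k in range(n_ky) if k not in center]
        picked = set(rng.sample(others, keep_per_frame - len(center)))
        mask.append([1 if (k in center or k in picked) else 0
                     for k in range(n_ky)])
    return mask
```

Varying the selected lines from frame to frame makes the aliasing incoherent along the temporal dimension, so the sparsity-promoting reconstruction can suppress it, which is why the 2× and 3× accelerated movies remain artifact-free.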

  16. Using a Graphics Turing Test to Evaluate the Effect of Frame Rate and Motion Blur on Telepresence of Animated Objects

    DEFF Research Database (Denmark)

    Borg, Mathias; Johansen, Stine Schmieg; Krog, Kim Srirat

    2013-01-01

    A limited Graphics Turing Test is used to determine the frame rate that is required to achieve telepresence of an animated object. For low object velocities of 2.25 and 4.5 degrees of visual angle per second at 60 frames per second a rotating object with no added motion blur is able to pass...... the test. The results of the experiments confirm previous results in psychophysics and show that the Graphics Turing Test is a useful tool in computer graphics. Even with simulated motion blur, our Graphics Turing Test could not be passed with frame rates of 30 and 20 frames per second. Our results suggest...

  17. Revisiting video game ratings: Shift from content-centric to parent-centric approach

    Directory of Open Access Journals (Sweden)

    Jiow Hee Jhee

    2017-01-01

The rapid adoption of video gaming among children has placed tremendous strain on parents’ ability to manage their children’s consumption. While parents refer to online video game ratings (VGR) information to support their mediation efforts, there are many difficulties associated with such practice. This paper explores the popular VGR sites, and highlights the inadequacies of VGRs in capturing parents’ concerns, such as time displacement, social interactions, financial spending and various video game effects, beyond the widespread panics over content issues, which are subjective, ever-changing and irrelevant. As such, this paper argues for a shift from a content-centric to a parent-centric approach in VGRs, one that captures the evolving nature of video gaming and supports parents, the main users of VGRs, in their management of their young video gaming children. This paper proposes a Video Games Repository for Parents to represent that shift.

  18. Video-assisted instruction improves the success rate for tracheal intubation by novices.

    Science.gov (United States)

    Howard-Quijano, K J; Huang, Y M; Matevosian, R; Kaplan, M B; Steadman, R H

    2008-10-01

    Tracheal intubation via laryngoscopy is a fundamental skill, particularly for anaesthesiologists. However, teaching this skill is difficult since direct laryngoscopy allows only one individual to view the larynx during the procedure. The purpose of this study was to determine if video-assisted laryngoscopy improves the effectiveness of tracheal intubation training. In this prospective, randomized, crossover study, 37 novices with less than six prior intubation attempts were randomized into two groups, video-assisted followed by traditional instruction (Group V/T) and traditional instruction followed by video-assisted instruction (Group T/V). Novices performed intubations on three patients, switched groups, and performed three more intubations. All trainees received feedback during the procedure from an attending anaesthesiologist based on standard cues. Additionally, during the video-assisted part of the study, the supervising anaesthesiologist incorporated feedback based on the video images obtained from the fibreoptic camera located in the laryngoscope. During video-assisted instruction, novices were successful at 69% of their intubation attempts whereas those trained during the non-video-assisted portion were successful in 55% of their attempts (P=0.04). Oesophageal intubations occurred in 3% of video-assisted intubation attempts and in 17% of traditional attempts (P<0.01). The improved rate of successful intubation and the decreased rate of oesophageal intubation support the use of video laryngoscopy for tracheal intubation training.

  19. Running wavelet archetype aids the determination of heart rate from the video photoplethysmogram during motion.

    Science.gov (United States)

    Addison, Paul S; Foo, David M H; Jacquel, Dominique

    2017-07-01

    The extraction of heart rate from a video-based biosignal during motion using a novel wavelet-based ensemble averaging method is described. Running Wavelet Archetyping (RWA) allows for the enhanced extraction of pulse information from the time-frequency representation, from which a video-based heart rate (HRvid) can be derived. This compares favorably to a reference heart rate derived from a pulse oximeter.

  20. Video-Based Physiologic Monitoring During an Acute Hypoxic Challenge: Heart Rate, Respiratory Rate, and Oxygen Saturation.

    Science.gov (United States)

    Addison, Paul S; Jacquel, Dominique; Foo, David M H; Antunes, André; Borg, Ulf R

    2017-09-01

    The physiologic information contained in the video photoplethysmogram is well documented. However, extracting this information during challenging conditions requires new analysis techniques to capture and process the video image streams to extract clinically useful physiologic parameters. We hypothesized that heart rate, respiratory rate, and oxygen saturation trending can be evaluated accurately from video information during acute hypoxia. Video footage was acquired from multiple desaturation episodes during a porcine model of acute hypoxia using a standard visible light camera. A novel in-house algorithm was used to extract photoplethysmographic cardiac pulse and respiratory information from the video image streams and process it to extract a continuously reported video-based heart rate (HRvid), respiratory rate (RRvid), and oxygen saturation (SvidO2). This information was then compared with HR and oxygen saturation references from commercial pulse oximetry and the known rate of respiration from the ventilator. Eighty-eight minutes of data were acquired during 16 hypoxic episodes in 8 animals. A linear mixed-effects regression showed excellent responses relative to a nonhypoxic reference signal with slopes of 0.976 (95% confidence interval [CI], 0.973-0.979) for HRvid; 1.135 (95% CI, 1.101-1.168) for RRvid, and 0.913 (95% CI, 0.905-0.920) for video-based oxygen saturation. These results were obtained while maintaining continuous uninterrupted vital sign monitoring for the entire study period. Video-based monitoring of HR, RR, and oxygen saturation may be performed with reasonable accuracy during acute hypoxic conditions in an anesthetized porcine hypoxia model using standard visible light camera equipment. However, the study was conducted during relatively low motion. A better understanding of the effect of motion and the effect of ambient light on the video photoplethysmogram may help refine this monitoring technology for use in the clinical environment.
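The paper's in-house algorithm is not described in detail, but the basic video-PPG pipeline it builds on can be sketched: spatially average a skin region's green channel in each frame, then find the dominant spectral peak in the physiologic band. Everything below (the band limits and the plain DFT search) is a simplified stand-in, not the authors' method:

```python
import math

def estimate_hr_bpm(green_means, fps, lo_hz=0.7, hi_hz=3.0):
    """Estimate heart rate from per-frame spatial means of the green channel.
    Searches DFT bins inside the physiologic band (0.7-3.0 Hz, i.e.
    42-180 beats/min) for the strongest cardiac pulsation."""
    n = len(green_means)
    mean = sum(green_means) / n
    x = [v - mean for v in green_means]          # remove the DC level
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fps / n
        if lo_hz <= f <= hi_hz:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            p = re * re + im * im
            if p > best_p:
                best_f, best_p = f, p
    return 60.0 * best_f
```

A 1.2 Hz pulsatile modulation sampled at 30 fps, for example, yields an estimate of 72 beats/min; motion and ambient-light changes corrupt `green_means`, which is why the study flags low motion as a limitation.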

  1. Facial attractiveness ratings from video-clips and static images tell the same story.

    Science.gov (United States)

    Rhodes, Gillian; Lie, Hanne C; Thevaraja, Nishta; Taylor, Libby; Iredell, Natasha; Curran, Christine; Tan, Shi Qin Claire; Carnemolla, Pia; Simmons, Leigh W

    2011-01-01

    Most of what we know about what makes a face attractive and why we have the preferences we do is based on attractiveness ratings of static images of faces, usually photographs. However, several reports that such ratings fail to correlate significantly with ratings made to dynamic video clips, which provide richer samples of appearance, challenge the validity of this literature. Here, we tested the validity of attractiveness ratings made to static images, using a substantial sample of male faces. We found that these ratings agreed very strongly with ratings made to videos of these men, despite the presence of much more information in the videos (multiple views, neutral and smiling expressions and speech-related movements). Not surprisingly, given this high agreement, the components of video-attractiveness were also very similar to those reported previously for static-attractiveness. Specifically, averageness, symmetry and masculinity were all significant components of attractiveness rated from videos. Finally, regression analyses yielded very similar effects of attractiveness on success in obtaining sexual partners, whether attractiveness was rated from videos or static images. These results validate the widespread use of attractiveness ratings made to static images in evolutionary and social psychological research. We speculate that this validity may stem from our tendency to make rapid and robust judgements of attractiveness.

  2. High-frame-rate echocardiography using diverging transmit beams and parallel receive beamforming.

    Science.gov (United States)

    Hasegawa, Hideyuki; Kanai, Hiroshi

    2011-07-01

    Echocardiography is a widely used modality for diagnosis of the heart. It enables observation of the shape of the heart and estimation of global heart function based on B-mode and M-mode imaging. Subsequently, methods for estimating myocardial strain and strain rate have been developed to evaluate regional heart function. Furthermore, it has recently been shown that measurements of transmural transition of myocardial contraction/relaxation and propagation of vibration caused by closure of a heart valve would be useful for evaluation of myocardial function and viscoelasticity. However, such measurements require a frame rate much higher than that achieved by conventional ultrasonic diagnostic equipment. In the present study, a method based on parallel receive beamforming was developed to achieve high-frame-rate (over 300 Hz) echocardiography. To increase the frame rate, the number of transmits was reduced to 15 with angular intervals of 6°, and 16 receiving beams were created for each transmission to obtain the same number and density of scan lines as realized by conventional sector scanning. In addition, several transmits were compounded to obtain each scan line to reduce the differences in transmit-receive sensitivities among scan lines. The number of transmits for compounding was determined by considering the width of the transmit beam. For transmission, plane waves and diverging waves were investigated. Diverging waves showed better performance than plane waves because the widths of plane waves did not increase with the range distance from the ultrasonic probe, whereas lateral intervals of scan lines increased with range distance. The spatial resolution of the proposed method was validated using fine nylon wires. Although the widths at half-maxima of the point spread functions obtained by diverging waves were slightly larger than those obtained by conventional beamforming and parallel beamforming with plane waves, point spread functions very similar to those
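The frame-rate arithmetic behind this design follows from the acoustic round-trip time: each transmission must wait 2·depth/c before the next. Assuming a cardiac imaging depth of about 15 cm (an assumption; the paper's depth is not restated here) and c = 1540 m/s, 15 transmissions per frame permit a rate above 300 Hz, whereas building the same 240 scan lines one transmit at a time would allow only about 21 Hz:

```python
def max_frame_rate(n_transmits, depth_m, c=1540.0):
    """Upper bound on ultrasound frame rate: one round trip (2*depth/c)
    per transmission, n_transmits transmissions per frame."""
    return 1.0 / (n_transmits * 2.0 * depth_m / c)
```

This is why forming 16 receive beams in parallel per emission (15 × 16 = 240 lines) is what makes the over-300 Hz target reachable at conventional line density.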

  3. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced wavelet algorithms, and storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high-resolution video up to 4 Mpixels @ 60 fps or high-frame-rate video up to about 1000 fps @ 512×512 pixels.

  4. Multi-wavelength laser sensor surface for high frame rate imaging refractometry (Conference Presentation)

    Science.gov (United States)

    Kristensen, Anders; Vannahme, Christoph; Sørensen, Kristian T.; Dufva, Martin

    2016-09-01

A highly sensitive distributed feedback (DFB) dye laser sensor for high-frame-rate imaging refractometry without moving parts is presented. The laser sensor surface comprises areas of different grating periods. Imaging in two dimensions of space is enabled by analyzing laser light from all areas in parallel with an imaging spectrometer. Refractive index imaging of a 2 mm by 2 mm surface is demonstrated with a spatial resolution of 10 μm, a detection limit of 8 × 10⁻⁶ RIU, and a frame rate of 12 Hz, limited by the CCD camera. Label-free imaging of dissolution dynamics is demonstrated.

  5. In Vivo High Frame Rate Vector Flow Imaging Using Plane Waves and Directional Beamforming

    DEFF Research Database (Denmark)

    Jensen, Jonas; Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo

    2016-01-01

    … oscillation (TO) estimators and only 3 directional beamformed lines. The suggested DB vector flow estimator is employed with steered plane wave transmissions for high frame rate imaging. Two distinct plane wave sequences are used: a short sequence (3 angles) for fast flow and an interleaved long sequence (21 …). The long sequence has a higher sensitivity, and when used for estimation of slow flow with a peak velocity of 0.04 m/s, the SD is 2.5 % and the bias is 0.1 %. This is a factor of 4 better than if the short sequence is used. The carotid bifurcation was scanned on a healthy volunteer, and the short sequence …

  6. Accurate Angle Estimator for High-Frame-rate 2-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Lindskov Hansen, Kristoffer

    2016-01-01

    This paper presents a novel approach for estimating 2-D flow angles using a high-frame-rate ultrasound method. The angle estimator features high accuracy and low standard deviation (SD) over the full 360° range. The method is validated on Field II simulations and phantom measurements using the ex… measurement is performed on a carotid bifurcation of a healthy individual. A 3-s acquisition during three heart cycles is captured. A consistent and repetitive vortex is observed in the carotid bulb during systoles. …

  7. High Frame Rate Vector Velocity Estimation using Plane Waves and Transverse Oscillation

    DEFF Research Database (Denmark)

    Jensen, Jonas; Stuart, Matthias Bo; Jensen, Jørgen Arendt

    2015-01-01

    This paper presents a method for estimating 2-D vector velocities using plane waves and transverse oscillation. The approach uses emission of a low number of steered plane waves, which results in a high frame rate and continuous acquisition of data for the whole image. A transverse oscillating field is obtained by filtering the beamformed RF images in the Fourier domain using a Gaussian filter centered at a desired oscillation frequency. Performance of the method is quantified through measurements with the experimental scanner SARUS and the BK 2L8 linear array transducer. Constant parabolic flow …

  8. Reduction in Fall Rate in Dementia Managed Care Through Video Incident Review: Pilot Study.

    Science.gov (United States)

    Bayen, Eleonore; Jacquemot, Julien; Netscher, George; Agrawal, Pulkit; Tabb Noyce, Lynn; Bayen, Alexandre

    2017-10-17

    Falls of individuals with dementia are frequent, dangerous, and costly. Early detection and access to the history of a fall is crucial for efficient care and secondary prevention in cognitively impaired individuals. However, most falls remain unwitnessed events. Furthermore, understanding why and how a fall occurred is a challenge. Video capture and secure transmission of real-world falls thus stands as a promising assistive tool. The objective of this study was to analyze how continuous video monitoring and review of falls of individuals with dementia can support better quality of care. A pilot observational study (July-September 2016) was carried out in a Californian memory care facility. Falls were video-captured (24×7) by 43 wall-mounted cameras (deployed in all common areas and in 10 out of 40 private bedrooms of consenting residents and families). Video review was provided to facility staff via a customized mobile device app. The outcome measures were the count of residents' falls happening in the video-covered areas, the acceptability of video recording, the analysis of video review, and video replay possibilities for care practice. Over 3 months, 16 falls were video-captured. A drop in fall rate was observed in the last month of the study. Acceptability was good. Video review enabled screening for the severity of falls and fall-related injuries. Video replay enabled identifying cognitive-behavioral deficiencies and environmental circumstances contributing to the fall. This allowed for secondary prevention in high-risk multi-faller individuals and for updated facility care policies regarding a safer living environment for all residents. Video monitoring offers high potential to support conventional care in memory care facilities.

  9. Modeling the time-varying subjective quality of HTTP video streams with rate adaptations.

    Science.gov (United States)

    Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C

    2014-05-01

    Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
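The Hammerstein-Wiener structure used here for TVSQ prediction is simple to sketch: a static input nonlinearity, a linear dynamic block, and a static output nonlinearity in series. The toy below is not the paper's fitted model (its nonlinearities and filter coefficients are learned from subjective data); it only illustrates the cascade, with tanh and a normalized exponential FIR kernel chosen arbitrarily.

```python
import numpy as np

def hammerstein_wiener(x, b, f_in=np.tanh, f_out=lambda v: v):
    """Hammerstein-Wiener cascade: static input nonlinearity -> linear FIR
    filter with coefficients b -> static output nonlinearity."""
    u = f_in(np.asarray(x, dtype=float))   # Hammerstein (input) block
    w = np.convolve(u, b)[: len(u)]        # linear dynamic block
    return f_out(w)                        # Wiener (output) block

# Per-chunk quality scores smoothed with a short memory kernel, mimicking
# the hysteresis of viewer judgements described in the abstract.
b = 0.5 ** np.arange(5)
b /= b.sum()
tvsq = hammerstein_wiener(np.ones(50), b)
```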

  10. FPGA-based voltage and current dual drive system for high frame rate electrical impedance tomography.

    Science.gov (United States)

    Khan, Shadab; Manwaring, Preston; Borsic, Andrea; Halter, Ryan

    2015-04-01

    Electrical impedance tomography (EIT) is used to image the electrical property distribution of a tissue under test. An EIT system comprises complex hardware and software modules, which are typically designed for a specific application. Upgrading these modules is a time-consuming process, and requires rigorous testing to ensure proper functioning of new modules with the existing ones. To this end, we developed a modular and reconfigurable data acquisition (DAQ) system using National Instruments' (NI) hardware and software modules, which offer inherent compatibility over generations of hardware and software revisions. The system can be configured to use up to 32 channels. This EIT system can be used to interchangeably apply current or voltage signals, and measure the tissue response in a semi-parallel fashion. A novel signal-averaging algorithm and a 512-point fast Fourier transform (FFT) computation block were implemented on the FPGA. FFT output bins were classified as signal or noise. Signal bins constitute a tissue's response to a pure or mixed tone signal. Signal bins' data can be used for traditional applications, as well as synchronous frequency-difference imaging. Noise bins were used to compute noise power on the FPGA. Noise power represents a metric of signal quality, and can be used to ensure proper tissue-electrode contact. Allocation of these computationally expensive tasks to the FPGA reduced the required bandwidth between the PC and the FPGA for high frame rate EIT. In the 16-channel configuration, with a signal-averaging factor of 8, the DAQ frame rate at 100 kHz exceeded 110 frames s⁻¹, and the signal-to-noise ratio exceeded 90 dB across the spectrum. Reciprocity error was found to be … for frequencies up to 1 MHz. Static imaging experiments were performed on a high-conductivity inclusion placed in a saline-filled tank; the inclusion was clearly localized in the reconstructions obtained for both absolute current and voltage mode data.
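The bin-classification step lends itself to a short sketch. The code below is an illustrative host-side version, not the FPGA implementation: it averages repeated acquisitions, takes a 512-point FFT, and splits bins into signal and noise by comparing each bin to the median bin power (the 20 dB margin is an assumed parameter, not taken from the record).

```python
import numpy as np

def classify_bins(samples, n_avg=8, n_fft=512, thresh_db=20.0):
    """Average n_avg repeated acquisitions, take an n_fft-point FFT, and
    split bins into signal and noise relative to the median bin power."""
    x = samples.reshape(n_avg, n_fft).mean(axis=0)    # coherent averaging
    spec = np.abs(np.fft.rfft(x)) ** 2                # one-sided power spectrum
    floor = np.median(spec)                           # noise-floor estimate
    signal = spec > floor * 10.0 ** (thresh_db / 10.0)
    snr_db = 10.0 * np.log10(spec[signal].sum() / spec[~signal].sum())
    return signal, snr_db
```

The noise-bin power sum plays the role of the record's contact-quality metric; the signal bins carry the tissue response at the stimulation tones.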

  11. Media Politics in Framing Women (A Framing Analysis of the Reporting on the Yahya Zaini and Maria Eva Porn Video Case in the Kompas and Suara Merdeka Dailies)

    Directory of Open Access Journals (Sweden)

    Mite Setiansah

    2013-12-01

    Full Text Available Abstract: This is a qualitative descriptive study that aims to explain how mass media reconstruct reality, the various framing devices they use, and how women are represented in Kompas and Suara Merdeka reports on the circulation of the Yahya Zaini-Maria Eva porn video. The study uses framing analysis to characterize how each newspaper tells the story. Data were collected through qualitative content analysis of Kompas and Suara Merdeka news articles published during December 2006, with units of analysis determined by the Pan and Kosicki framing model. Data validity was checked by triangulation, and the data were analyzed with the interactive analysis technique. The results show that Kompas and Suara Merdeka reconstructed the case from different points of view: Kompas reported cautiously, while Suara Merdeka was more market oriented. Both newspapers used the same framing devices, including syntactic, script, thematic, and rhetorical structures. In representing women, both Kompas and Suara Merdeka tended to frame women in unfavorable ways.

  12. Compressed sensing for high frame rate, high resolution and high contrast ultrasound imaging.

    Science.gov (United States)

    Jing Liu; Qiong He; Jianwen Luo

    2015-08-01

    Compressed sensing (CS), or compressive sampling, allows much lower sampling frequency than the Nyquist sampling frequency. In this paper, we propose a novel technique, named compressed sensing based synthetic transmit aperture (CS-STA), to speed up the acquisition of ultrasound imaging. The ultrasound transducer transmits plane waves with random apodizations several times and receives the corresponding echoes. The full dataset of STA is then recovered from the recorded echoes using a CS reconstruction algorithm. Finally, a standard STA beamforming is performed on the dataset to form a B-mode image. When the number of CS-STA firings is smaller than the number of STA firings, a higher frame rate is achieved. In addition, CS-STA maintains the high resolution of STA because of the CS-recovered full STA dataset, and improves the contrast due to plane wave firings. Computer simulations and phantom experiments are carried out to investigate the feasibility and performance of the proposed CS-STA method. The CS-STA method is proven to be capable of simultaneously obtaining high frame rate, high resolution and high contrast ultrasound imaging.

  13. High frame rate synthetic aperture vector flow imaging for transthoracic echocardiography

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Stuart, Matthias Bo; Bechsgaard, Thor

    2016-01-01

    This work presents the first in vivo results of 2-D high frame rate vector velocity imaging for transthoracic cardiac imaging. Measurements are made on a healthy volunteer using the SARUS experimental ultrasound scanner connected to an intercostal phased-array probe. Two parasternal long-axis views (PLAX) are obtained, one centred at the aortic valve and another centred at the left ventricle. The acquisition sequence was composed of 3 diverging waves for high frame rate synthetic aperture flow imaging. For verification a phantom measurement is performed on a transverse straight 5 mm diameter vessel at a depth of 100 mm in a tissue-mimicking phantom. A flow pump produced a 2 ml/s constant flow with a peak velocity of 0.2 m/s. The average estimated flow angle in the ROI was 86.22° ± 6.66° with a true flow angle of 90°. A relative velocity bias of −39% with a standard deviation of 13% was found. …

  14. Real-time intravascular photoacoustic-ultrasound imaging of lipid-laden plaque at speed of video-rate level

    Science.gov (United States)

    Hui, Jie; Cao, Yingchun; Zhang, Yi; Kole, Ayeeshik; Wang, Pu; Yu, Guangli; Eakins, Gregory; Sturek, Michael; Chen, Weibiao; Cheng, Ji-Xin

    2017-03-01

    Intravascular photoacoustic-ultrasound (IVPA-US) imaging is an emerging hybrid modality for the detection of lipid-laden plaques by providing simultaneous morphological and lipid-specific chemical information of an artery wall. The clinical utility of IVPA-US technology requires real-time imaging and display at video-rate speed. Here, we demonstrate a compact and portable IVPA-US system capable of imaging at up to 25 frames per second in real-time display mode. This unprecedented imaging speed was achieved by concurrent innovations in the excitation laser source, rotary joint assembly, 1 mm IVPA-US catheter, differentiated A-line strategy, and real-time image processing and display algorithms. By imaging pulsatile motion at different imaging speeds, 16 frames per second was deemed adequate to suppress motion artifacts from cardiac pulsation for in vivo applications. Our lateral resolution results further verified the number of A-lines used for a cross-sectional IVPA image reconstruction. The translational capability of this system for the detection of lipid-laden plaques was validated by ex vivo imaging of an atherosclerotic human coronary artery at 16 frames per second, which showed strong correlation to gold-standard histopathology.

  15. How Fast Is Your Body Motion? Determining a Sufficient Frame Rate for an Optical Motion Tracking System Using Passive Markers

    National Research Council Canada - National Science Library

    Song, Min-Ho; Godøy, Rolf Inge

    2016-01-01

    … When using passive markers for optical motion tracking, avoiding identity confusion between the markers becomes a problem as the speed of motion increases, necessitating a higher frame rate. …

  16. Nanoimprinted distributed feedback dye laser sensors for high frame rate refractometric imaging of dissolution and fluid flow

    DEFF Research Database (Denmark)

    Vannahme, Christoph; Sørensen, Kristian Tølbøl; Gade, Carsten

    2015-01-01

    High frame rate refractometric dissolution and fluid flow monitoring in one and two dimensions of space with distributed feedback dye laser sensors is presented. The sensors provide both low detection limits and high spatial resolution. © 2015 OSA.

  17. Video Rating in Neurodegenerative Disease Clinical Trials: The Experience of PRION-1

    Directory of Open Access Journals (Sweden)

    Christopher Carswell

    2012-08-01

    Full Text Available Background/Aims: Large clinical trials including patients with uncommon diseases involve assessors in different geographical locations, resulting in considerable inter-rater variability in assessment scores. As video recordings of examinations, which can be individually rated, may eliminate such variability, we measured the agreement between a single video rater and multiple examining physicians in the context of PRION-1, a clinical trial of the antimalarial drug quinacrine in human prion diseases. Methods: We analysed a 43-component neurocognitive assessment battery, on 101 patients with Creutzfeldt-Jakob disease, focusing on the correlation and agreement between examining physicians and a single video rater. Results: In total, 335 videos of examinations of 101 patients who were video-recorded over the 4-year trial period were assessed. For neurocognitive examination, inter-observer concordance was generally excellent. Highly visual neurological examination domains (e.g. finger-nose-finger assessment of ataxia) had good inter-rater correlation, whereas those dependent on non-visual clues (e.g. power or reflexes) correlated poorly. Some non-visual neurological domains were surprisingly concordant, such as limb muscle tone. Conclusion: Cognitive assessments and selected neurological domains can be practically and accurately recorded in a clinical trial using video rating. Video recording of examinations is a valuable addition to any trial provided appropriate selection of assessment instruments is used and rigorous training of assessors is undertaken.
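Inter-rater agreement of the kind reported here is commonly quantified with chance-corrected statistics. As an illustration (the record does not state which statistic PRION-1 used), here is Cohen's kappa for two raters scoring the same examinations:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical scores."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                                       # observed
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)  # by chance
    return (po - pe) / (1.0 - pe)
```

Kappa is 1 for perfect agreement, 0 for chance-level agreement, and negative for systematic disagreement.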

  18. Power consumption analysis of constant bit rate video transmission over 3G networks

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Belyaev, Evgeny; Wang, Le

    2012-01-01

    This paper presents an analysis of the power consumption of constant-bit-rate video transmission over 3G mobile wireless networks. The work includes a description of the radio resource control (RRC) transition state machine in 3G networks, followed by a detailed power consumption analysis … for the 3GPP transition state machine that allows power consumption on a mobile device to be decreased, taking signaling traffic, buffer size and latency restrictions into account. Furthermore, we discuss the trade-off between power consumption and PSNR for transmitted video and show the possibility of performing power consumption management based on the requirements for the video quality.
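The role of the RRC state machine in the energy budget can be sketched with a toy tail-energy model. The states, inactivity timers and power draws below are illustrative placeholders (typical of 3G radios, not values from the paper): the radio stays in CELL_DCH for t1 seconds after a burst, drops to CELL_FACH for t2 seconds, then to IDLE.

```python
def rrc_energy(bursts, t_total, p_dch=800.0, p_fach=460.0, p_idle=1.0,
               t1=5.0, t2=12.0):
    """Energy of a 3G radio under a simplified RRC state machine.
    bursts: list of (start_s, duration_s); powers in mW -> energy in mJ."""
    def tail(gap):
        # Tail energy after a burst: DCH tail, then FACH tail, then IDLE.
        return (min(gap, t1) * p_dch
                + min(max(gap - t1, 0.0), t2) * p_fach
                + max(gap - t1 - t2, 0.0) * p_idle)
    energy, t, active = 0.0, 0.0, False
    for start, dur in bursts:
        gap = start - t
        energy += tail(gap) if active else gap * p_idle
        energy += dur * p_dch              # transmission itself, in CELL_DCH
        t, active = start + dur, True
    gap = t_total - t
    energy += tail(gap) if active else gap * p_idle
    return energy
```

The model makes the paper's point visible: with a 2 s burst in a 30 s window, most of the energy is spent in the tails, which is why tuning the timers against buffer size and latency pays off.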

  19. Dynamic phase-sensitive optical coherence elastography at a true kilohertz frame-rate

    Science.gov (United States)

    Singh, Manmohan; Wu, Chen; Liu, Chih-Hao; Li, Jiasong; Schill, Alexander; Nair, Achuth; Larin, Kirill V.

    2016-03-01

    Dynamic optical coherence elastography (OCE) techniques have rapidly emerged as a noninvasive way to characterize the biomechanical properties of tissue. However, clinical applications of the majority of these techniques have been unfeasible due to the extended acquisition time caused by multiple temporal OCT acquisitions (M-B mode). Moreover, multiple excitations, large datasets, and prolonged laser exposure prohibit their translation to the clinic, where patient discomfort and safety are critical criteria. Here, we demonstrate the feasibility of noncontact true kilohertz frame-rate dynamic optical coherence elastography by directly imaging a focused air-pulse-induced elastic wave with a home-built phase-sensitive OCE system. The OCE system was based on a 4X buffered Fourier Domain Mode Locked swept source laser with an A-scan rate of ~1.5 MHz, and imaged the elastic wave propagation at a frame rate of ~7.3 kHz. Because the elastic wave is directly imaged, only a single excitation was utilized for one line scan measurement. Rather than acquiring multiple temporal scans at successive spatial locations as with previous techniques, here, successive B-scans were acquired over the measurement region (B-M mode). Preliminary measurements were taken on tissue-mimicking agar phantoms of various concentrations, and the results showed good agreement with uniaxial mechanical compression testing. Then, the elasticity of an in situ porcine cornea in the whole eye-globe configuration at various intraocular pressures was measured. The results showed that this technique can acquire a depth-resolved elastogram in milliseconds. Furthermore, the ultra-fast acquisition ensured that the laser safety exposure limit for the cornea was not exceeded.
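Elastograms like these are derived from the measured elastic wave speed. A minimal sketch of that final step, assuming the common incompressible shear-wave relation E = 3ρc² (which the record itself does not spell out): fit the wave's arrival time against lateral position, invert the slope to get the speed, and convert to a modulus.

```python
import numpy as np

def youngs_modulus_from_wave(lateral_m, arrival_s, rho=1000.0):
    """Fit elastic wave speed c from arrival times at lateral positions,
    then estimate Young's modulus as E = 3*rho*c^2 (assumed shear-wave
    relation for nearly incompressible soft tissue)."""
    slope, _ = np.polyfit(lateral_m, arrival_s, 1)  # seconds per metre
    c = 1.0 / slope                                 # wave speed, m/s
    return 3.0 * rho * c ** 2

# Hypothetical example: a wave crossing 5 mm in 2.5 ms travels at 2 m/s,
# giving E = 3 * 1000 * 2^2 = 12 kPa, a soft-tissue-like stiffness.
x = np.linspace(0.0, 5e-3, 6)
E = youngs_modulus_from_wave(x, x / 2.0)
```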

  20. stil119_0601a -- Point coverage of locations of still frames extracted from video imagery which depict sediment types

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Canadian ROPOS remotely operated vehicle (ROV) outfitted with video equipment (and other devices) was deployed from the NOAA Ship McAurthurII during May-June...

  1. Video quality assessment for web content mirroring

    Science.gov (United States)

    He, Ye; Fei, Kevin; Fernandez, Gustavo A.; Delp, Edward J.

    2014-03-01

    Due to increasing user expectations for the watching experience, moving high-quality web video streaming content from the small screen of mobile devices to the larger TV screen has become popular. It is crucial to develop video quality metrics to measure the quality change under various devices or network conditions. In this paper, we propose an automated scoring system to quantify user satisfaction. We compare the quality of local videos with the videos transmitted to a TV. Four video quality metrics, namely Image Quality, Rendering Quality, Freeze Time Ratio and Rate of Freeze Events, are used to measure video quality change during web content mirroring. To measure image quality and rendering quality, we compare the matched frames between the source video and the destination video using barcode tools. Freeze time ratio and rate of freeze events are measured after extracting video timestamps. Several user studies are conducted to evaluate the impact of each objective video quality metric on the subjective user watching experience.
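The two timestamp-based metrics are straightforward to compute once presentation timestamps are extracted. The sketch below is an assumed formalisation (the record does not give the paper's exact freeze-detection threshold): a frame gap more than twice the nominal frame interval counts as a freeze.

```python
import numpy as np

def freeze_metrics(timestamps, nominal_fps=30.0, factor=2.0):
    """Freeze Time Ratio and Rate of Freeze Events from presentation
    timestamps (seconds). A gap > factor/nominal_fps is a freeze."""
    ts = np.asarray(timestamps, dtype=float)
    gaps = np.diff(ts)
    freeze = gaps > factor / nominal_fps
    duration = ts[-1] - ts[0]
    freeze_time_ratio = gaps[freeze].sum() / duration     # fraction frozen
    rate_of_freeze_events = freeze.sum() / duration       # events per second
    return freeze_time_ratio, rate_of_freeze_events
```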

  2. Dynamic Programming Optimization of Multi-rate Multicast Video-Streaming Services

    Directory of Open Access Journals (Sweden)

    Nestor Michael Caños Tiglao

    2010-06-01

    Full Text Available In large-scale IP Television (IPTV) and Mobile TV distributions, the video signal is typically encoded and transmitted using several quality streams, over IP Multicast channels, to several groups of receivers, which are classified in terms of their reception rate. As the number of video streams is usually constrained by both the number of TV channels and the maximum capacity of the content distribution network, it is necessary to find the selection of video stream transmission rates that maximizes the overall user satisfaction. In order to efficiently solve this problem, this paper proposes the Dynamic Programming Multi-rate Optimization (DPMO) algorithm. The latter was comparatively evaluated considering several user distributions, featuring different access rate patterns. The experimental results reveal that DPMO is significantly more efficient than exhaustive search, while presenting slightly higher execution times than the non-optimal Multi-rate Step Search (MSS) algorithm.
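The underlying selection problem can be stated as: choose k stream rates so that each user receives the highest chosen rate not exceeding its access rate, maximizing aggregate satisfaction. The record gives no detail of DPMO itself, so the dynamic program below is a generic sketch of that problem, with total received rate as an assumed satisfaction measure.

```python
import bisect

def best_rates(candidates, capacities, k):
    """Choose up to k stream rates maximizing total received rate, where
    each user receives the highest chosen rate <= its access capacity."""
    rs = sorted(set(candidates))
    caps = sorted(capacities)
    # at_least[i] = number of users able to receive rate rs[i]
    at_least = [len(caps) - bisect.bisect_left(caps, r) for r in rs]
    n, NEG = len(rs), float("-inf")
    # dp[j][i]: best total with j selected rates, the highest being rs[i]
    dp = [[NEG] * n for _ in range(k + 1)]
    for i in range(n):
        dp[1][i] = rs[i] * at_least[i]
    for j in range(2, k + 1):
        for i in range(n):
            for p in range(i):
                if dp[j - 1][p] > NEG:
                    # adding rs[i] upgrades every user with capacity >= rs[i]
                    cand = dp[j - 1][p] + (rs[i] - rs[p]) * at_least[i]
                    dp[j][i] = max(dp[j][i], cand)
    return max(max(row) for row in dp[1:])

# Two streams for five users: selecting 300 and 1000 kbit/s is optimal here.
best = best_rates([100, 300, 600, 1000], [150, 350, 350, 700, 1200], k=2)
```

The O(k·n²) table is what makes this far cheaper than exhaustively scoring all rate subsets.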

  3. Very high frame rate volumetric integration of depth images on mobile devices.

    Science.gov (United States)

    Kähler, Olaf; Adrian Prisacariu, Victor; Yuheng Ren, Carl; Sun, Xin; Torr, Philip; Murray, David

    2015-11-01

    Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. Running such methods on mobile devices, giving users freedom of movement and instantaneous reconstruction feedback, remains challenging, however. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure, and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system achieves frame rates up to 47 Hz on a Nvidia Shield Tablet and 910 Hz on a Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation.
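At the heart of voxel block hashing is a spatial hash mapping the coordinates of small voxel blocks to buckets of a hash table, so only observed space is ever allocated. A minimal sketch (the three-prime XOR hash is the one commonly used in this literature; the 5 mm voxel size, 8³ block and table size are assumed values, not the paper's):

```python
import math

def world_to_block(x, y, z, voxel_size=0.005, block_side=8):
    """Map a world-space point (metres) to its voxel-block coordinates."""
    s = voxel_size * block_side
    return (math.floor(x / s), math.floor(y / s), math.floor(z / s))

def block_hash(bx, by, bz, n_buckets=2 ** 20):
    """Spatial hash of block coordinates: XOR of the coordinates multiplied
    by three large primes, modulo the table size."""
    P1, P2, P3 = 73856093, 19349669, 83492791
    return ((bx * P1) ^ (by * P2) ^ (bz * P3)) % n_buckets
```

During integration, each depth sample is mapped to a block, the block's bucket is looked up (allocating on a miss), and the block's voxels are updated; raycasting walks the same table in reverse.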

  4. Fast frame rate rodent cardiac x-ray imaging using scintillator lens coupled to CMOS camera

    Science.gov (United States)

    Swathi Lakshmi, B.; Sai Varsha, M. K. N.; Kumar, N. Ashwin; Dixit, Madhulika; Krishnamurthi, Ganapathy

    2017-03-01

    Micro-Computed Tomography (MCT) systems for small animal imaging play a critical role in monitoring disease progression and therapy evaluation. In this work, an in-house built micro-CT system equipped with an X-ray scintillator lens-coupled to a commercial CMOS camera was used to test the feasibility of its application to Digital Subtraction Angiography (DSA). The literature reports such studies being done with clinical X-ray tubes that can be pulsed rapidly or with rotating gantry systems, thus increasing the cost and infrastructural requirements. The feasibility of DSA was evaluated by injecting iodinated contrast agent (ICA) through the tail vein of a mouse. Projection images of the heart were acquired pre and post contrast using the high frame rate X-ray detector, and processing was done to visualize the transit of ICA through the heart.

  5. Digital holographic interferometry accelerated with GPU: application in mechanical micro-deformation measurement operating at video rate

    Science.gov (United States)

    Múnera Ortiz, N.; Trujillo, C. A.; García-Sucerquia, J.

    2013-11-01

    The quantification of the deformations presented by mechanical parts is a useful tool for several applications in engineering; regularly this quantification is performed a posteriori. In this work, a digital holographic interferometer for measuring micro-deformation at video rate is presented. The interferometer is developed using the parallel paradigm of CUDA™ (Compute Unified Device Architecture). A commercial Graphics Processing Unit (GPU) is used to accelerate phase processing from the recorded holograms. The proposed method can process recorded holograms of 1024×1024 pixels in 48 milliseconds. At its best performance, the method processes 21 frames per second (FPS), 133 times faster than the best performance of the method on a regular CPU.

  6. High frame rate synthetic aperture vector flow imaging for transthoracic echocardiography

    Science.gov (United States)

    Villagómez-Hoyos, Carlos A.; Stuart, Matthias B.; Bechsgaard, Thor; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt

    2016-04-01

    This work presents the first in vivo results of 2-D high frame rate vector velocity imaging for transthoracic cardiac imaging. Measurements are made on a healthy volunteer using the SARUS experimental ultrasound scanner connected to an intercostal phased-array probe. Two parasternal long-axis views (PLAX) are obtained, one centred at the aortic valve and another centred at the left ventricle. The acquisition sequence was composed of 3 diverging waves for high frame rate synthetic aperture flow imaging. For verification a phantom measurement is performed on a transverse straight 5 mm diameter vessel at a depth of 100 mm in a tissue-mimicking phantom. A flow pump produced a 2 ml/s constant flow with a peak velocity of 0.2 m/s. The average estimated flow angle in the ROI was 86.22° ± 6.66° with a true flow angle of 90°. A relative velocity bias of -39% with a standard deviation of 13% was found. In vivo acquisitions show complex flow patterns in the heart. In the aortic valve view, blood is seen exiting the left ventricle cavity through the aortic valve into the aorta during the systolic phase of the cardiac cycle. In the left ventricle view, blood flow is seen entering the left ventricle cavity through the mitral valve and splitting in two directions when approaching the left ventricle wall. The work presents 2-D velocity estimates of the heart from a non-invasive transthoracic scan. The ability of the method to detect flow regardless of the beam angle could potentially reveal a more complete view of the flow patterns present in the heart.

  7. Development of a 3D Flash LADAR Video Camera for Entry, Decent and Landing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced Scientific Concepts, Inc. (ASC) has developed a 128 × 128 frame, 3D Flash LADAR video camera capable of a 30 Hz frame rate. Because Flash LADAR captures an...

  8. Framing the conversation: use of PRECIS-2 ratings to advance understanding of pragmatic trial design domains.

    Science.gov (United States)

    Lipman, Paula Darby; Loudon, Kirsty; Dluzak, Leanora; Moloney, Rachael; Messner, Donna; Stoney, Catherine M

    2017-11-10

    There continues to be debate about what constitutes a pragmatic trial and how it is distinguished from more traditional explanatory trials. The NIH Pragmatic Trials Collaborative Project, which includes five trials and a coordinating unit, has adopted the Pragmatic-Explanatory Continuum Indicator Summary (PRECIS-2) instrument. The purpose of the study was to collect PRECIS-2 ratings at two points in time to assess whether the tool was sensitive to change in trial design, and to explore with investigators the rationale for rating shifts. A mixed-methods design included sequential collection and analysis of quantitative data (PRECIS-2 ratings) and qualitative data. Ratings were collected at two annual, in-person project meetings, and subsequent interviews conducted with investigators were recorded, transcribed, and coded using NVivo 11 Pro for Windows. Rating shifts were coded as either (1) actual change (reflects a change in procedure or protocol), (2) primarily a rating shift reflecting rater variability, or (3) themes that reflect important concepts about the tool and/or pragmatic trial design. Based on PRECIS-2 ratings, each trial was highly pragmatic at the planning phase and remained so 1 year later in the early phases of trial implementation. Over half of the 45 paired ratings for the nine PRECIS-2 domains indicated a rating change from Time 1 to Time 2 (N = 24, 53%). Of the 24 rating changes, only three represented a true change in the design of the trial. Analysis of rationales for rating shifts identified critical themes associated with the tool or pragmatic trial design more generally. Each trial contributed one or more relevant comments, with Eligibility, Flexibility of Adherence, and Follow-up each accounting for more than one. PRECIS-2 has proved useful for "framing the conversation" about trial design among members of the Pragmatic Trials Collaborative Project. Our findings suggest that design elements assessed by the PRECIS-2 tool may represent

  9. High frame rate ultrasound monitoring of high intensity focused ultrasound-induced temperature changes: a novel asynchronous approach.

    Science.gov (United States)

    Liu, Hao-Li; Huang, Sheng-Min; Li, Meng-Lin

    2010-11-01

    When applying diagnostic ultrasound to guide focused ultrasound (FUS) thermal therapy, high-frame-rate ultrasonic temperature monitoring is valuable for better treatment control and dose monitoring. However, one of the potential problems encountered when performing ultrasonic temperature monitoring of a FUS procedure is interference between the FUS and imaging systems. Potential means of overcoming this problem include switching between the FUS system and the imaging system (limited by a reduced frame rate of thermal imaging) or developing complex synchronization protocols between the FUS therapeutic system and the ultrasonic imaging apparatus (limited by the implementation effort for both software and hardware designs, and low potential for widespread diffusion). In this paper, we apply an asynchronous approach to retrieve high-frame-rate, FUS-interference-free thermal imaging during FUS thermal therapy. A tone-burst delivery mode of the FUS energy is employed in our method, and the imaging and FUS systems are purposely operated asynchronously. Such asynchronous operation causes the FUS interference to saturate sequential image frames at different A-lines; thus, clean A-lines from several image frames can be extracted by a total-energy-thresholding technique and then combined to reconstruct interference-free B-mode images at a high frame rate for temperature estimation. The performance of the proposed method is demonstrated by phantom experiments. Relationships of the FUS duty cycle with the maximum reconstructed frame rate of thermal imaging and the corresponding maximum temperature increase are also studied. Its performance was also evaluated and compared with the existing manually synchronous and synchronous approaches. By proper selection of the FUS duty cycle, using our method, the frame rate of thermal imaging can be increased up to tenfold compared with that provided by the manually synchronous approach. Our method is capable of pushing the frame
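A minimal numpy sketch of the total-energy-thresholding idea described above; the frame count, the interference band pattern, and the 3× median threshold are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_lines, n_samples = 8, 64, 256

# Clean echo data: the same underlying scene in every frame (illustrative).
scene = rng.normal(0.0, 1.0, (n_lines, n_samples))
frames = np.tile(scene, (n_frames, 1, 1))

# Asynchronous FUS bursts corrupt a different band of A-lines in each frame.
for f in range(n_frames):
    start = (f * 11) % n_lines
    frames[f, start:start + 16, :] += 50.0  # interference saturates these lines

# Total-energy thresholding: flag A-lines whose energy is far above the median.
energy = np.sum(frames**2, axis=2)          # shape (n_frames, n_lines)
clean = energy < 3.0 * np.median(energy)

# Combine clean A-lines across frames into one interference-free image.
weights = clean[:, :, None].astype(float)
recon = (frames * weights).sum(axis=0) / weights.sum(axis=0)
```

Because the bursts land on different A-lines in different frames, every A-line is clean in at least one frame, and the combined image is free of the interference.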

  10. Video Tape Recording Evaluation Protocol Behavior Rating Form - Part 1: Communication.

    Science.gov (United States)

    Curtis, W. Scott; Donlon, Edward T.

    Presented is the behavior rating scale designed for use with a video tape protocol for examination of multiply handicapped deaf blind children, whose development and evaluation are discussed in EC 040 599. The behavioral rating scale consists of five sections: unstructured orientation of child in examining area, child's task orientation and…

  11. Fair rate allocation of scalable multiple description video for many clients

    Science.gov (United States)

    Taal, Jacco R.; Lagendijk, Reginald L.

    2005-07-01

    Peer-to-peer networks (P2P) form a distributed communication infrastructure that is particularly well matched to video streaming using multiple description coding. We form M descriptions using MDC-FEC, building on a scalable version of the "Dirac" video coder. The M descriptions are streamed via M different application layer multicast (ALM) trees embedded in the P2P network. Client nodes (peers in the network) receive a number of descriptions m ≤ M, and hence different video qualities, depending on the distribution of the clients' bandwidth. We propose three "fairness" criteria to define the criterion to be optimized. Numerical results illustrate the effects of the different fairness criteria and client bandwidth distributions on the rates allocated to the compressed video layers and multiple descriptions.
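The abstract does not detail the three fairness criteria, so the sketch below stands in proportional fairness (sum of log received rates) as one classical criterion; the function names, candidate rates, and bandwidth distribution are all hypothetical:

```python
import math
import numpy as np

def received_rates(bandwidths, desc_rate, m_total):
    """Per-client received video rate: the number of equal-rate descriptions
    the client's bandwidth admits (capped at M) times the description rate."""
    m = np.minimum(m_total, np.asarray(bandwidths) // desc_rate)
    return m * desc_rate

def pick_desc_rate(bandwidths, m_total, candidate_rates):
    """Choose a description rate under a proportional-fairness objective
    (sum of log received rates) -- a classical stand-in for the paper's
    three fairness criteria, which the abstract does not specify."""
    def utility(r):
        rates = received_rates(bandwidths, r, m_total)
        return -math.inf if (rates == 0).any() else np.log(rates).sum()
    return max(candidate_rates, key=utility)

# Skewed client bandwidth distribution (kbit/s), M = 4 descriptions:
best = pick_desc_rate([400, 800, 800, 3200], 4, [100, 200, 400, 800])
```

A too-low rate wastes the fast clients' capacity, a too-high rate starves the slow client entirely (log utility rejects it), so an intermediate rate wins.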

  12. Modeling fault diagnosis as the activation and use of a frame system. [for pilot problem-solving rating

    Science.gov (United States)

    Smith, Philip J.; Giffin, Walter C.; Rockwell, Thomas H.; Thomas, Mark

    1986-01-01

    Twenty pilots with instrument flight ratings were asked to perform a fault-diagnosis task for which they had relevant domain knowledge. The pilots were asked to think out loud as they requested and interpreted information. Performances were then modeled as the activation and use of a frame system. Cognitive biases, memory distortions and losses, and failures to correctly diagnose the problem were studied in the context of this frame system model.

  13. Effects of frame rate on two-dimensional speckle tracking-derived measurements of myocardial deformation in premature infants.

    Science.gov (United States)

    Sanchez, Aura A; Levy, Philip T; Sekarski, Timothy J; Hamvas, Aaron; Holland, Mark R; Singh, Gautam K

    2015-05-01

    Frame rate (FR) of image acquisition is an important determinant of the reliability of 2-dimensional speckle tracking echocardiography (2DSTE)-derived myocardial strain. Premature infants have relatively high heart rates (HR). The aim was to analyze the effects of varying FR on the reproducibility of 2DSTE-derived right ventricle (RV) and left ventricle (LV) longitudinal strain (LS) and strain rate (LSR) in premature infants. RV and LV LS and LSR were measured by 2DSTE in the apical four-chamber view in 20 premature infants (26 ± 1 weeks) with HR 163 ± 13 bpm. For each subject, 4 sets of cine loops were acquired at FR of 130 frames/sec. Two observers measured LS and LSR. Inter- and intra-observer reproducibility was assessed using Bland-Altman analysis, coefficient of variation, and linear regression. Intra-observer reproducibility for RV and LV LS was higher at FR >110 frames/sec, and optimum at FR >130 frames/sec. The highest inter-observer reproducibility for RV and LV LS were at FR >130 and >110 frames/s, respectively. The highest reproducibility for RV and LV systolic and early diastolic LSR was at FR >110 frames/sec. FR/HR ratio >0.7 frames/sec per bpm yielded optimum reproducibility for RV and LV deformation imaging. The reliability of 2DSTE-derived RV and LV deformation imaging in premature infants is affected by the FR of image acquisition. Reproducibility is most robust when cine loops are obtained with FR/HR ratio between 0.7 and 0.9 frames/sec per bpm, which likely results from optimal myocardial speckle tracking and mechanical event timing. © 2014, Wiley Periodicals, Inc.
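The FR/HR criterion above can be turned into a simple bedside calculation; a small sketch (the function name is ours, the 0.7-0.9 frames/sec per bpm window is the study's):

```python
def required_frame_rate(heart_rate_bpm, fr_hr_ratio=0.7):
    """Minimum 2DSTE acquisition frame rate (frames/sec) implied by the
    study's FR/HR criterion of 0.7-0.9 frames/sec per bpm."""
    return fr_hr_ratio * heart_rate_bpm

# For the study's mean heart rate of 163 bpm, the optimal acquisition window:
low = required_frame_rate(163, 0.7)
high = required_frame_rate(163, 0.9)
```

For a heart rate of 163 bpm this gives roughly 114-147 frames/sec, consistent with the abstract's finding that reproducibility was optimal above 110-130 frames/sec.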

  14. Real time three-dimensional space video rate sensors for millimeter waves imaging based very inexpensive plasma LED lamps

    Science.gov (United States)

    Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir

    2014-10-01

    In recent years, much effort has been invested in developing inexpensive but sensitive Millimeter Wave (MMW) detectors that can be used in focal plane arrays (FPAs) in order to implement real-time MMW imaging. Real-time MMW imaging systems are required for many applications in fields such as homeland security, medicine, communications, military products, and space technology, mainly because this radiation has high penetration and good navigability through dust storms, fog, heavy rain, dielectric materials, biological tissue, and diverse materials. Moreover, the atmospheric attenuation in this range of the spectrum is relatively low, and the scattering is also low compared to NIR and VIS. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced the research and development of sensors using very inexpensive (30-50 cents) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three kinds of GDD-lamp-based Focal Plane Arrays (FPAs). The three cameras differ in the number of detectors, scanning operation, and detection method. The 1st and 2nd generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array, respectively, both for direct detection and limited to fixed imaging. The latest sensor is a multiplexing 16 × 16 GDD FPA. It permits real-time video-rate imaging at 30 frames/sec and comprehensive 3D MMW imaging. The principle of detection in this sensor is frequency-modulated continuous-wave (FMCW), with each of the 16 GDD pixel lines sampled simultaneously. Direct detection is also possible and can be done with a user-friendly interface. This FPA sensor is built from 256 commercial GDD lamps (International Light, Inc., Peabody, MA, model 527 Ne indicator lamps, 3 mm diameter) as pixel detectors. All three sensors are fully supported

  15. RST-Resilient Video Watermarking Using Scene-Based Feature Extraction

    OpenAIRE

    Jung Han-Seung; Lee Young-Yoon; Lee Sang Uk

    2004-01-01

    Watermarking for video sequences should consider additional attacks, such as frame averaging, frame-rate change, frame shuffling or collusion attacks, as well as those of still images. Also, since video is a sequence of analogous images, video watermarking is subject to interframe collusion. In order to cope with these attacks, we propose a scene-based temporal watermarking algorithm. In each scene, segmented by scene-change detection schemes, a watermark is embedded temporally to one-dimens...

  16. Estimation of Heartbeat Peak Locations and Heartbeat Rate from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2017-01-01

    Available systems for heartbeat signal estimation from facial video only provide an average Heartbeat Rate (HR) over a period of time. However, physicians require Heartbeat Peak Locations (HPL) to assess a patient's heart condition by detecting cardiac events and measuring different physiological parameters, including HR and its variability. This paper proposes a new method of HPL estimation from facial video using Empirical Mode Decomposition (EMD), which provides clearly visible heartbeat peaks in a decomposed signal. The method also provides the notion of both color- and motion-based HR from facial videos, even when there are voluntary internal and external head motions in the videos. The employed signal processing technique has resulted in a system that could significantly advance, among others, health-monitoring technologies.
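EMD itself is not part of the standard scientific Python stack, so the sketch below shows only the downstream payoff of having peak locations rather than an average HR: per-beat instantaneous rate and a variability measure. The function name and the choice of SDNN are ours, not the paper's:

```python
import numpy as np

def hr_and_sdnn(peak_times_s):
    """Quantities that Heartbeat Peak Locations enable beyond an average HR:
    per-beat instantaneous heart rate and a simple variability measure
    (SDNN, the standard deviation of inter-beat intervals)."""
    ibi = np.diff(peak_times_s)          # inter-beat intervals, seconds
    hr_bpm = 60.0 / ibi                  # instantaneous HR per interval
    sdnn_ms = np.std(ibi, ddof=1) * 1e3  # variability in milliseconds
    return hr_bpm, sdnn_ms
```

For example, peaks every 0.8 s give a constant 75 bpm with zero SDNN; irregular peak spacing shows up directly in the SDNN, which an averaged HR cannot reveal.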

  17. Noiseless imaging detector for adaptive optics with kHz frame rates

    CERN Document Server

    Vallerga, J V; Mikulec, Bettina; Tremsin, A; Clark, Allan G; Siegmund, O H W; CERN. Geneva

    2004-01-01

    A new hybrid optical detector is described that has many of the attributes desired for the next generation of AO wavefront sensors. The detector consists of a proximity-focused MCP read out by four multi-pixel application-specific integrated circuit (ASIC) chips developed at CERN ("Medipix2") with individual pixels that amplify, discriminate, and count input events. The detector has 512 x 512 pixels, zero readout noise (photon counting), and can be read out at 1 kHz frame rates. The Medipix2 readout chips can be electronically shuttered down to a temporal window of a few microseconds with an accuracy of 10 nanoseconds. When used in a Shack-Hartmann style wavefront sensor, it should be able to centroid approximately 5000 spots using 7 x 7 pixel sub-apertures, resulting in very linear, off-null error correction terms. The quantum efficiency depends on the optical photocathode chosen for the bandpass of interest. A three-year development effort for this detector technology has just been funded as part of the...

  18. High-frame-rate intensified fast optically shuttered TV cameras with selected imaging applications

    Energy Technology Data Exchange (ETDEWEB)

    Yates, G.J.; King, N.S.P.

    1994-08-01

    This invited paper focuses on high-speed electronic/electro-optic camera development by the Applied Physics Experiments and Imaging Measurements Group (P-15) of Los Alamos National Laboratory's Physics Division over the last two decades. The evolution of TV and image intensifier sensors and fast-readout, fast-shuttered cameras is discussed. Their use in nuclear, military, and medical imaging applications is presented. Several salient characteristics and anomalies associated with single-pulse and high-repetition-rate performance of the cameras/sensors are included from earlier studies to emphasize their effects on the radiometric accuracy of electronic framing cameras. The Group's test and evaluation capabilities for characterization of imaging-type electro-optic sensors and sensor components, including focal plane arrays, gated image intensifiers, microchannel plates, and phosphors, are discussed. Two new unique facilities, the High Speed Solid State Imager Test Station (HSTS) and the Electron Gun Vacuum Test Chamber (EGTC), are described. A summary of the Group's current and developmental camera designs and R&D initiatives is included.

  19. Validation of a pediatric vocal fold nodule rating scale based on digital video images.

    Science.gov (United States)

    Nuss, Roger C; Ward, Jessica; Recko, Thomas; Huang, Lin; Woodnorth, Geralyn Harvey

    2012-01-01

    We sought to create a validated scale of vocal fold nodules in children, based on digital video clips obtained during diagnostic fiberoptic laryngoscopy. We developed a 4-point grading scale of vocal fold nodules in children, based upon short digital video clips. A tutorial for use of the scale, including schematic drawings of nodules, static images, and 10-second video clips, was presented to 36 clinicians with various levels of experience. The clinicians then reviewed 40 short digital video samples from pediatric patients evaluated in a voice clinic and rated the nodule size. Statistical analysis of the ratings provided inter-rater reliability scores. Thirty-six clinicians with various levels of experience rated a total of 40 short video clips. The ratings of experienced raters (14 pediatric otolaryngology attending physicians and pediatric otolaryngology fellows) were compared with those of inexperienced raters (22 nurses, medical students, otolaryngology residents, physician assistants, and pediatric speech-language pathologists). The overall intraclass correlation coefficient for the ratings of nodule size was quite good (0.62; 95% confidence interval, 0.52 to 0.74). The p value for experienced raters versus inexperienced raters was 0.1345, indicating no statistically significant difference in the ratings by these two groups. The intraclass correlation coefficient for intra-rater reliability was very high (0.89). The use of a dynamic scale of pediatric vocal fold nodule size most realistically represents the clinical assessment of nodules during an office visit. The results of this study show a high level of agreement between experienced and inexperienced raters. This scale can be used with a high level of reliability by clinicians with various levels of experience. A validated grading scale will help to assess long-term outcomes of pediatric patients with vocal fold nodules.

  20. Video-rate two-photon excited fluorescence lifetime imaging system with interleaved digitization.

    Science.gov (United States)

    Dow, Ximeng Y; Sullivan, Shane Z; Muir, Ryan D; Simpson, Garth J

    2015-07-15

    A fast (up to video rate) two-photon excited fluorescence lifetime imaging system based on interleaved digitization is demonstrated. The system is compatible with existing beam-scanning microscopes with minor electronics and software modification. Proof-of-concept demonstrations were performed using laser dyes and biological tissue.

  1. Spectral optical coherence tomography in video-rate and 3D imaging of contact lens wear.

    Science.gov (United States)

    Kaluzny, Bartlomiej J; Fojt, Wojciech; Szkulmowska, Anna; Bajraszewski, Tomasz; Wojtkowski, Maciej; Kowalczyk, Andrzej

    2007-12-01

    To present the applicability of spectral optical coherence tomography (SOCT) for video-rate and three-dimensional imaging of a contact lens on the eye surface. The SOCT prototype instrument constructed at Nicolaus Copernicus University (Torun, Poland) is based on Fourier domain detection, which enables high sensitivity (96 dB) and increases the speed of imaging 60 times compared with conventional optical coherence tomography techniques. Consequently, video-rate imaging and three-dimensional reconstructions can be achieved, preserving the high quality of the image. The instrument operates under clinical conditions in the Ophthalmology Department (Collegium Medicum Nicolaus Copernicus University, Bydgoszcz, Poland). A total of three eyes fitted with different contact lenses were examined with the aid of the instrument. Before SOCT measurements, slit lamp examinations were performed. Data, which are representative for each imaging mode, are presented. The instrument provided high-resolution (4 microm axial x 10 microm transverse) tomograms with an acquisition time of 40 microseconds per A-scan. Video-rate imaging allowed the simultaneous quantitative evaluation of the movement of the contact lens and assessment of the fitting relationship between the lens and the ocular surface. Three-dimensional scanning protocols further improved lens visualization and fit evaluation. SOCT allows video-rate and three-dimensional cross-sectional imaging of the eye fitted with a contact lens. The analysis of both imaging modes suggests the future applicability of this technology to the contact lens field.

  2. Differences in, and Frames of Reference of, Indigenous Australians' Self-rated General and Oral Health.

    Science.gov (United States)

    Chand, Reshika; Parker, Eleanor; Jamieson, Lisa

    2017-01-01

    To compare general and oral health perceptions between Indigenous and non-Indigenous Australians and to quantify Indigenous Australian health-related frames of reference. A mixed-methods approach was used. The quantitative component comprised data from four convenience studies of Indigenous oral health and one national oral health survey stratified by Indigenous status. Qualitative data with questions pertaining to frames of reference were collected from 19 Indigenous Australian interviews. Among the Indigenous studies, deficits in perceptions of excellent, very good, or good general health and excellent, very good, or good oral health ranged from 10.5% to 43.8%. Among the non-Indigenous population, the deficit was 5%. Frames of reference appeared to underpin a biomedical conceptual outlook. The deficit in perceived oral health compared with general health was far greater among Indigenous Australians. The frames of reference underpinning Indigenous Australians' perceptions of health reflect those of the general Australian population.

  3. How Fast Is Your Body Motion? Determining a Sufficient Frame Rate for an Optical Motion Tracking System Using Passive Markers.

    Science.gov (United States)

    Song, Min-Ho; Godøy, Rolf Inge

    2016-01-01

    This paper addresses how to determine a sufficient frame (sampling) rate for an optical motion tracking system using passive reflective markers. When using passive markers for the optical motion tracking, avoiding identity confusion between the markers becomes a problem as the speed of motion increases, necessitating a higher frame rate to avoid a failure of the motion tracking caused by marker confusions and/or dropouts. Initially, one might believe that the Nyquist-Shannon sampling rate estimated from the assumed maximal temporal variation of a motion (i.e. a sampling rate at least twice that of the maximum motion frequency) could be the complete solution to the problem. However, this paper shows that also the spatial distance between the markers should be taken into account in determining the suitable frame rate of an optical motion tracking with passive markers. In this paper, a frame rate criterion for the optical tracking using passive markers is theoretically derived and also experimentally verified using a high-quality optical motion tracking system. Both the theoretical and the experimental results showed that the minimum frame rate is proportional to the ratio between the maximum speed of the motion and the minimum spacing between markers, and may also be predicted precisely if the proportional constant is known in advance. The inverse of the proportional constant is here defined as the tracking efficiency constant and it can be easily determined with some test measurements. Moreover, this newly defined constant can provide a new way of evaluating the tracking algorithm performance of an optical tracking system.
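The derived criterion can be sketched directly. The tracking efficiency constant is system-specific and must be calibrated from test measurements, so the value below is a placeholder assumption, as are the example numbers:

```python
def min_frame_rate(v_max, d_min, tracking_efficiency=1.0):
    """Minimum tracking frame rate from the paper's criterion: proportional
    to the ratio of maximum marker speed (m/s) to minimum marker spacing (m).
    The tracking efficiency constant is the inverse of the proportionality
    constant; 1.0 here is a placeholder, not a calibrated value."""
    return v_max / (tracking_efficiency * d_min)

# Hand markers 0.02 m apart moving at up to 4 m/s (illustrative numbers):
rate = min_frame_rate(v_max=4.0, d_min=0.02)
```

With these illustrative numbers the criterion asks for about 200 frames/sec, far above what a Nyquist argument on the motion's temporal frequency alone would suggest, which is the paper's point.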

  4. Using high frame rate CMOS sensors for three-dimensional eye tracking.

    Science.gov (United States)

    Clarke, A H; Ditterich, J; Drüen, K; Schönfeld, U; Steineke, C

    2002-11-01

    A novel three-dimensional eye tracker is described and its performance evaluated. In contrast to previous devices based on conventional video standards, the present eye tracker is based on programmable CMOS image sensors interfaced directly to digital processing circuitry to permit real-time image acquisition and processing. This architecture provides a number of important advantages, including image sampling rates of up to 400/sec, direct pixel addressing for preprocessing and acquisition, and hard-disk storage of relevant image data. The reconfigurable digital processing circuitry also facilitates inline optimization of the front-end, time-critical processes. The primary acquisition algorithm for tracking the pupil and other eye features is designed around the generalized Hough transform. The tracker permits comprehensive measurement of eye movement (three degrees of freedom) and head movement (six degrees of freedom), and thus provides the basis for many types of vestibulo-oculomotor and visual research. The device has been qualified by the German Space Agency (DLR) and NASA for deployment on the International Space Station. It is foreseen that the device will be used together with appropriate stimulus generators as a general-purpose facility for visual and vestibular experiments. Initial verification studies with an artificial eye demonstrate a measurement resolution of better than 0.1 degrees in all three components (system noise for each of the components was measured as 0.006 degrees H, 0.005 degrees V, and 0.016 degrees T). Over a range of +/-20 degrees eye rotation, linearity was found to be <0.5% (H), <0.5% (V), and <2.0% (T). A comparison with the scleral search coil technique yielded near-equivalent values for the system noise and the thickness of Listing's plane.

  5. Operator-Adjustable Frame Rate, Resolution, and Gray Scale Tradeoff in Fixed-Bandwidth Remote Manipulator Control.

    Science.gov (United States)

    1980-09-01

    easier. During the Campeche blowout, both a manned and an unmanned submersible were sent for. The remotely controlled TREAC submersible was loaded...hose into a socket would be easiest with high resolution and relatively low gray scale and frame rate, while selecting the blue valve from a bank of

  6. Multi-channel beam-scanning imaging at kHz frame rates by Lissajous trajectory microscopy.

    Science.gov (United States)

    Newman, Justin A; Sullivan, Shane Z; Muir, Ryan D; Sreehari, Suhas; Bouman, Charles A; Simpson, Garth J

    2015-03-09

    A beam-scanning microscope based on Lissajous trajectory imaging is described for achieving streaming 2D imaging with continuous frame rates up to 1.4 kHz. The microscope utilizes two fast-scan resonant mirrors to direct the optical beam on a circuitous trajectory through the field of view. By separating the full Lissajous trajectory time-domain data into sub-trajectories (partial, undersampled trajectories), effective frame rates much higher than the repeat rate of the full Lissajous trajectory are achieved, with many unsampled pixels present. A model-based image reconstruction (MBIR) 3D in-painting algorithm is then used to interpolate the missing data for the unsampled pixels to recover full images. The MBIR algorithm uses maximum a posteriori estimation with a generalized Gaussian Markov random field prior model for image interpolation. Because images are acquired using photomultiplier tubes or photodiodes, parallelization for multi-channel imaging is straightforward. Preliminary results show that, when combined with the MBIR in-painting algorithm, this technique can generate kHz-frame-rate images across 6 total dimensions of space, time, and polarization for SHG, TPEF, and confocal reflective birefringence data on a multimodal imaging platform for biomedical imaging. The use of a multi-channel data acquisition card allows for multimodal imaging with perfect image overlay. Image blur due to sample motion was also reduced by using higher frame rates.
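A minimal sketch of why sub-trajectory frames need in-painting: with assumed mirror frequencies and pixel clock (illustrative values, not the instrument's), a single 1.4 kHz sub-frame visits only a few percent of a 128 × 128 field of view, and the MBIR step (not implemented here) fills in the rest.

```python
import numpy as np

# Two resonant mirrors at fixed frequencies trace a Lissajous pattern; a
# short sub-trajectory window leaves most pixels unsampled.
fx, fy = 2000.0, 1900.0                      # mirror frequencies, Hz (assumed)
fs = 1.0e6                                   # pixel sampling clock, Hz (assumed)
t = np.arange(0.0, 1.0 / 1400.0, 1.0 / fs)   # one 1.4 kHz sub-frame window

nx = ny = 128
x = np.rint((np.sin(2 * np.pi * fx * t) + 1) / 2 * (nx - 1)).astype(int)
y = np.rint((np.sin(2 * np.pi * fy * t) + 1) / 2 * (ny - 1)).astype(int)

sampled = np.zeros((ny, nx), dtype=bool)
sampled[y, x] = True                 # pixels visited by this sub-trajectory
coverage = sampled.mean()            # fraction of the frame actually sampled
```

Only the full (much slower) trajectory eventually covers the field densely; the sub-frame mask above is exactly the kind of sparse sampling the generalized Gaussian Markov random field prior is used to interpolate.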

  7. Sexual content in video games: an analysis of the Entertainment Software Rating Board classification from 1994 to 2013.

    Science.gov (United States)

    Vidaña-Pérez, Dèsirée; Braverman-Bronstein, Ariela; Basto-Abreu, Ana; Barrientos-Gutierrez, Inti; Hilscher, Rainer; Barrientos-Gutierrez, Tonatiuh

    2018-01-11

    Background: Video games are widely used by children and adolescents and have become a significant source of exposure to sexual content. Despite evidence of the important role of media in the development of sexual attitudes and behaviours, little attention has been paid to monitoring sexual content in video games. Methods: Data on sexual content and ratings were obtained for 23,722 video games from 1994 to 2013 from the Entertainment Software Rating Board database; release dates and information on the top 100 selling video games were also obtained. The yearly prevalence of sexual content according to rating categories was calculated. Trends and comparisons were estimated using Joinpoint regression. Results: Sexual content was present in 13% of the video games. Games rated 'Mature' had the highest prevalence of sexual content (34.5%), followed by 'Teen' (30.7%) and 'E10+' (21.3%). Over time, sexual content decreased in the 'Everyone' category, 'E10+' maintained a low prevalence, and 'Teen' and 'Mature' showed a marked increase. Both top and non-top video games showed constant increases, with top selling video games having 10.1% more sexual content across the period of study. Conclusion: Over the last 20 years, the prevalence of sexual content has increased in video games with a 'Teen' or 'Mature' rating. Further studies are needed to quantify the potential association between sexual content in video games and sexual behaviour in children and adolescents.

  8. Source and Channel Adaptive Rate Control for Multicast Layered Video Transmission Based on a Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Viéron

    2004-03-01

    This paper introduces source-channel adaptive rate control (SARC), a new congestion control algorithm for layered video transmission in large multicast groups. In order to solve the well-known feedback implosion problem in large multicast groups, we first present a mechanism for filtering RTCP receiver reports sent from receivers to the whole session. The proposed filtering mechanism provides a classification of receivers according to a predefined similarity measure. An end-to-end source and FEC rate control based on this distributed feedback aggregation mechanism, coupled with a layered video coding system, is then described. The number of layers, their rates, and their levels of protection are adapted dynamically to the aggregated feedback. The algorithms have been validated with the NS2 network simulator.

  9. Full-Field Spectroscopy at Megahertz-frame-rates: Application of Coherent Time-Stretch Transform

    Science.gov (United States)

    DeVore, Peter Thomas Setsuda

    Outliers or rogue events are found extensively in our world and have incredible effects. Also called rare events, they arise in the distribution of wealth (e.g., Pareto index), finance, network traffic, ocean waves, and e-commerce (selling less of more). Interest in rare optical events exploded after the sighting of optical rogue waves in laboratory experiments at UCLA. Detecting such tail events in fast streams of information necessitates real-time measurements. The Coherent Time-Stretch Transform chirps a pulsed source of radiation so that its temporal envelope matches its spectral profile (analogous to the far field regime of spatial diffraction), and the mapped spectral electric field is slow enough to be captured by a real-time digitizer. Combining this technique with spectral encoding, the time stretch technique has enabled a new class of ultra-high performance spectrometers and cameras (30+ MHz), and analog-to-digital converters that have led to the discovery of optical rogue waves and detection of cancer cells in blood with one in a million sensitivity. Conventionally, the Coherent Time-Stretch Transform maps the spectrum into the temporal electric field, but the time-dilation process along with inherent fiber losses results in reduction of peak power and loss of sensitivity, a problem exacerbated by extremely narrow molecular linewidths. The loss issue notwithstanding, in many cases the requisite dispersive optical device is not available. By extending the Coherent Time-Stretch Transform to the temporal near field, I have demonstrated, for the first time, phase-sensitive absorption spectroscopy of a gaseous sample at millions of frames per second. As the Coherent Time-Stretch Transform may capture both near and far field optical waves, it is a complete spectro-temporal optical characterization tool. This is manifested as an amplitude-dependent chirp, which implies the ability to measure the complex refractive index dispersion at megahertz frame rates. This

  10. Video rate nine-band multispectral short-wave infrared sensor.

    Science.gov (United States)

    Kutteruf, Mary R; Yetzbacher, Michael K; DePrenger, Michael J; Novak, Kyle M; Miller, Corey A; Downes, Trijntje Valerie; Kanaev, Andrey V

    2014-05-01

    Short-wave infrared (SWIR) imaging sensors are increasingly being used in surveillance and reconnaissance systems due to the reduced scatter in haze and the spectral response of materials over this wavelength range. Typically SWIR images have been provided either as full motion video from framing panchromatic systems or as spectral data cubes from line-scanning hyperspectral or multispectral systems. Here, we describe and characterize a system that bridges this divide, providing nine-band spectral images at 30 Hz. The system integrates a custom array of filters onto a commercial SWIR InGaAs array. We measure the filter placement and spectral response. We demonstrate a simple simulation technique to facilitate optimization of band selection for future sensors.

  11. A field-programmable gate array based system for high frame rate laser Doppler blood flow imaging.

    Science.gov (United States)

    Nguyen, H C; Hayes-Gill, B R; Morgan, S P; Zhu, Y; Boggett, D; Huang, X; Potter, M

    2010-01-01

    This paper presents a general embedded processing system implemented in a field-programmable gate array (FPGA), providing high frame rate and high accuracy for a laser Doppler blood flow imaging system. The proposed system achieves a basic flow-image frame rate of 1 frame/sec for 256 x 256 images with 1024 fast Fourier transform (FFT) points used in the processing algorithm. Mixed fixed- and floating-point calculations are utilized to achieve high accuracy with reasonable resource usage. The implementation has a root mean square deviation of the relative difference in flow values below 0.1% when compared with a double-precision floating-point implementation. The system can contain one or more processing units to obtain the required frame rate and accuracy. The performance of the system is significantly higher than other methods reported to date. Furthermore, a dedicated FPGA board has been designed to test the proposed processing system. The board is linked with a laser line scanning system, which uses a 64 x 1 photodetector array. Test results with various operating parameters show that the performance of the new system is better, in terms of noise and imaging speed, than has previously been achieved.
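As a sketch of the per-pixel FFT-based processing such a system implements, the standard laser Doppler flowmetry flow estimate is the first moment of the photocurrent power spectrum; the sampling rate, bandwidth handling, and normalization below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def ldf_flow(photocurrent, fs, n_fft=1024):
    """Per-pixel laser Doppler flow estimate: the first moment of the
    photocurrent power spectrum from an n_fft-point FFT. This is the
    textbook LDF flow algorithm; the paper's exact bandwidth limits and
    fixed/floating-point pipeline are not reproduced here."""
    spec = np.abs(np.fft.rfft(photocurrent, n=n_fft)) ** 2
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    return np.sum(freqs * spec) / np.sum(spec)
```

Faster-moving scatterers broaden the Doppler spectrum toward higher frequencies, so the first moment rises with flow; computing this per pixel is the workload the FPGA processing units parallelize.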

  12. Ensuring QoS with Adaptive Frame Rate and Feedback Control ...

    African Journals Online (AJOL)

    Video over best-effort packet networks is encumbered by a number of factors, including unknown and time-varying bandwidth, delay, and losses, as well as many additional issues such as how to fairly share the network resources amongst many flows and how to efficiently perform one-to-many communication for popular ...

  13. Framing Video Games and Internet Bullying on the ‘Smarter’ Channel of the ‘Debating Europe’ Platform

    Directory of Open Access Journals (Sweden)

    Tulia Maria Cășvean

    2017-07-01

    The new digital world may propagate old subjects, such as violence, in and through new media. Violent behavior is a topic of concern for academia, EU institutions, and the general public, and it can be debated on online platforms that take citizens' questions and comments directly to policy makers for them to respond. 'Debating Europe' is a multi-channel online platform that encourages citizens to debate diverse topics, including violent behavior. Acknowledging that participants may have their own interests, divergent from those of the institution, legitimating or delegitimating the topic, our intention is to observe and analyze, through the lens of frame analysis, citizens' communicative practice on the SMARTER channel of the Debating Europe platform and their perceptions of and attitudes towards violent behavior in Europe.

  14. Determining the Discharge Rate from a Submerged Oil Leak using ROV Video and CFD Study

    Science.gov (United States)

    Saha, Pankaj; Shaffer, Frank; Shahnam, Mehrdad; Savas, Omer; Devites, Dave; Steffeck, Timothy

    2016-11-01

    The current paper reports a technique to measure the discharge rate of a submerged oil leak by analyzing video from a Remotely Operated Vehicle (ROV). The technique uses instantaneous images from ROV video to measure the velocity of visible features (turbulent eddies) along the boundary of an oil leak jet; the classical theory of turbulent jets is then applied to determine the discharge rate. The Flow Rate Technical Group (FRTG) Plume Team developed this technique, which manually tracked the visible features and produced the first accurate government estimates of the oil discharge rate from the Deepwater Horizon (DWH). For practical application, this approach needs automated control. Experiments were conducted at UC Berkeley and OHMSETT that recorded high-speed, high-resolution video of submerged dye-colored water or oil jets; velocity data were subsequently measured employing LDA and PIV software. Numerical simulations have been carried out for the experimental submerged turbulent oil jet flow conditions, employing LES turbulence closure and the VOF interface-capturing technique in the OpenFOAM solver. The CFD results captured the jet spreading angle and jet structures in close agreement with the experimental observations. The work was funded by NETL and the DOI Bureau of Safety and Environmental Enforcement (BSEE).

  15. Quantitative assessment of effects of phase aberration and noise on high-frame-rate imaging.

    Science.gov (United States)

    Chen, Hong; Lu, Jian-yu

    2013-01-01

    The goal of this paper is to quantitatively study the effects of phase aberration and noise on high-frame-rate (HFR) imaging using a set of traditional and new parameters. These parameters include the traditional -6-dB lateral resolution and new parameters called the energy ratio (ER) and the sidelobe ratio (SR). ER is the ratio between the total energy of the sidelobe and the total energy of the mainlobe of a point spread function (PSF) of an imaging system. SR is the ratio between the peak value of the sidelobe and the peak value of the mainlobe of the PSF. In the paper, both simulation and experiment are conducted for a quantitative assessment and comparison of the effects of phase aberration and noise on the HFR and the conventional delay-and-sum (D&S) imaging methods with this set of parameters. In the HFR imaging method, steered plane waves (SPWs) and limited-diffraction beams (LDBs) are used in transmission, and received signals are processed with the fast Fourier transform to reconstruct images. In the D&S imaging method, beams focused at a fixed depth are used in transmission and dynamically focused beams are used in reception for image reconstruction. The simulation results show that the average differences between the -6-dB lateral beam widths of the HFR and D&S imaging methods are -0.1337 mm for SPW and -0.1481 mm for LDB; since the values are negative, the HFR imaging method has a higher lateral image resolution than the D&S imaging method. In experiments, the average differences are also negative, i.e., -0.2804 mm for SPW and -0.3365 mm for LDB. The changes in ER and SR between the HFR and D&S imaging methods have negative values, too. After introducing phase aberration and noise, both simulations and experiments show that the HFR imaging method also has less change in the -6-dB lateral resolution, ER, and SR than the conventional D&S imaging method. This means that the HFR imaging method is less sensitive to phase aberration and noise than the conventional D&S imaging method.
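
    The ER and SR parameters defined above are straightforward to compute from a beam profile. The following sketch illustrates them on a 1-D PSF; the null-finding heuristic used to delimit the mainlobe is an assumption, not the paper's exact definition:

    ```python
    import numpy as np

    def er_sr(psf):
        """Energy ratio (ER) and sidelobe ratio (SR) of a 1-D PSF profile.
        Mainlobe = contiguous samples between the first nulls around the
        peak (a heuristic assumption); everything else is sidelobe."""
        p = np.abs(np.asarray(psf, float))
        k = int(np.argmax(p))
        lo = k
        while lo > 0 and p[lo - 1] < p[lo]:      # walk left to the first null
            lo -= 1
        hi = k
        while hi < len(p) - 1 and p[hi + 1] < p[hi]:  # walk right to the first null
            hi += 1
        main = p[lo:hi + 1]
        side = np.concatenate([p[:lo], p[hi + 1:]])
        er = np.sum(side ** 2) / np.sum(main ** 2)    # sidelobe/mainlobe energy
        sr = (side.max() / main.max()) if side.size else 0.0
        return er, sr
    ```

    For a sinc-shaped beam, SR recovers the familiar first-sidelobe level of about 0.22 (-13 dB), and ER captures the fraction of energy outside the mainlobe.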

  16. Framing car fuel efficiency: linearity heuristic for fuel consumption and fuel-efficiency ratings

    NARCIS (Netherlands)

    Schouten, T.M.; Bolderdijk, J.W.; Steg, L.

    2014-01-01

    People are sensitive to the way information on fuel efficiency is conveyed. When the fuel efficiency of cars is framed in terms of fuel per distance (FPD; e.g. l/100 km), instead of distance per units of fuel (DPF; e.g. km/l), people have a more accurate perception of potential fuel savings. People

  17. The FAST module: An add-on unit for driving commercial scanning probe microscopes at video rate and beyond

    Science.gov (United States)

    Esch, Friedrich; Dri, Carlo; Spessot, Alessio; Africh, Cristina; Cautero, Giuseppe; Giuressi, Dario; Sergo, Rudi; Tommasini, Riccardo; Comelli, Giovanni

    2011-05-01

    We present the design and the performance of the FAST (Fast Acquisition of SPM Timeseries) module, an add-on instrument that can drive commercial scanning probe microscopes (SPM) at and beyond video rate image frequencies. In the design of this module, we adopted and integrated several technical solutions previously proposed by different groups in order to overcome the problems encountered when driving SPMs at high scanning frequencies. The fast probe motion control and signal acquisition are implemented in a way that is totally transparent to the existing control electronics, allowing the user to switch immediately and seamlessly to the fast scanning mode when imaging in the conventional slow mode. The unit provides a completely non-invasive, fast scanning upgrade to common SPM instruments that are not specifically designed for high speed scanning. To test its performance, we used this module to drive a commercial scanning tunneling microscope (STM) system in a quasi-constant height mode to frame rates of 100 Hz and above, demonstrating extremely stable and high resolution imaging capabilities. The module is extremely versatile and its application is not limited to STM setups but can, in principle, be generalized to any scanning probe instrument.

  18. FAST rate allocation through steepest descent for JPEG2000 video transmission.

    Science.gov (United States)

    Aulí-Llinàs, Francesc; Bilgin, Ali; Marcellin, Michael W

    2011-04-01

    This work addresses the transmission of pre-encoded JPEG2000 video within a video-on-demand scenario. The primary requirement for the rate allocation algorithm deployed in the server is to match the real-time processing demands of the application. Scalability in terms of complexity must be provided to supply a valid solution by a given instant of time. The FAst rate allocation through STeepest descent (FAST) method introduced in this work selects an initial (and possibly poor) solution, and iteratively improves it until time is exhausted or the algorithm finishes execution. Experimental results suggest that FAST commonly achieves solutions close to the global optimum while employing very few computational resources.
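
    The improve-until-time-runs-out structure described above can be sketched as a greedy marginal-analysis loop. The data layout here (per-frame lists of convex (rate cost, distortion gain) increments) is a hypothetical stand-in for JPEG2000 quality layers, not the paper's actual algorithm:

    ```python
    import heapq
    import time

    def fast_allocate(frames, budget, deadline=None):
        """Greedy steepest-descent sketch of rate allocation.
        frames[i]: list of (rate_cost, dist_gain) increments for frame i,
        assumed convex (decreasing gain per bit). Repeatedly take the
        increment with the steepest dist_gain/rate_cost slope until the
        rate budget or the wall-clock deadline is exhausted."""
        heap = []
        alloc = [0] * len(frames)
        for i, layers in enumerate(frames):
            if layers:
                r, d = layers[0]
                heapq.heappush(heap, (-d / r, i))  # max-heap on slope
        spent = 0
        while heap:
            if deadline is not None and time.monotonic() > deadline:
                break  # scalable in complexity: stop with the current solution
            _, i = heapq.heappop(heap)
            r, d = frames[i][alloc[i]]
            if spent + r > budget:
                continue  # this increment no longer fits
            spent += r
            alloc[i] += 1
            if alloc[i] < len(frames[i]):
                r2, d2 = frames[i][alloc[i]]
                heapq.heappush(heap, (-d2 / r2, i))
        return alloc, spent
    ```

    Because each iteration only refines the current allocation, the loop can be interrupted at any instant and still return a valid (if suboptimal) solution, which is the scalability property the abstract emphasizes.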

  19. Improved depth resolution in video-rate line-scanning multiphoton microscopy using temporal focusing

    Science.gov (United States)

    Tal, Eran; Oron, Dan; Silberberg, Yaron

    2005-07-01

    By introducing spatiotemporal pulse shaping techniques to multiphoton microscopy it is possible to obtain video-rate images with depth resolution similar to point-by-point scanning multiphoton microscopy while mechanically scanning in only one dimension. This is achieved by temporal focusing of the illumination pulse: The pulsed excitation field is compressed as it propagates through the sample, reaching its shortest duration (and highest peak intensity) at the focal plane before stretching again beyond it. This method is applied to produce, in a simple and scalable setup, video-rate two-photon excitation fluorescence images of Drosophila egg chambers with nearly 100,000 effective pixels and 1.5 μm depth resolution.

  20. Cross-Layer Design of Source Rate Control and Congestion Control for Wireless Video Streaming

    Directory of Open Access Journals (Sweden)

    Peng Zhu

    2007-01-01

    Full Text Available Cross-layer design has been used in streaming video over wireless channels to optimize overall system performance. In this paper, we extend our previous work on the joint design of source rate control and congestion control for video streaming over the wired channel, and propose a cross-layer design approach for wireless video streaming. First, we extend the QoS-aware congestion control mechanism (TFRCC) proposed in our previous work to the wireless scenario, and provide a detailed discussion of how to enhance the overall performance in terms of rate smoothness and responsiveness of the transport protocol. Then, we extend our previous joint design work to the wireless scenario and conduct a thorough performance evaluation. Simulation results show that by jointly designing source rate control at the application layer and congestion control at the transport layer, and by taking advantage of MAC layer information, our approach can avoid the throughput degradation caused by wireless link errors and better support the QoS requirements of the application. Thus, playback quality is significantly improved, while good performance of the transport protocol is preserved.

  1. Synchronous-digitization for Video Rate Polarization Modulated Beam Scanning Second Harmonic Generation Microscopy.

    Science.gov (United States)

    Sullivan, Shane Z; DeWalt, Emma L; Schmitt, Paul D; Muir, Ryan M; Simpson, Garth J

    2015-03-09

    Fast beam-scanning non-linear optical microscopy, coupled with fast (8 MHz) polarization modulation and analytical modeling, has enabled simultaneous nonlinear optical Stokes ellipsometry (NOSE) and linear Stokes ellipsometry imaging at video rate (15 Hz). NOSE enables recovery of the complex-valued Jones tensor that describes the polarization-dependent observables, in contrast to polarimetry, in which the polarization state of the exciting beam is recorded. Each data acquisition consists of 30 images (10 for each detector, with three detectors operating in parallel), each of which corresponds to polarization-dependent results. Processing of this image set by linear fitting contracts each set of 10 images down to a set of 5 parameters per detector in second harmonic generation (SHG) and three parameters for the transmittance of the fundamental laser beam. Using these parameters, it is possible to recover the Jones tensor elements of the sample at video rate. Video rate imaging is enabled by performing synchronous digitization (SD), in which a PCIe digital oscilloscope card is synchronized to the laser (the laser is the master clock). Fast polarization modulation was achieved by modulating an electro-optic modulator synchronously with the laser and digitizer, with a simple sine wave at 1/10th the period of the laser, producing a repeating pattern of 10 polarization states. This approach was validated using Z-cut quartz, and NOSE microscopy was performed for micro-crystals of naproxen.

  2. Synchronous-digitization for video rate polarization modulated beam scanning second harmonic generation microscopy

    Science.gov (United States)

    Sullivan, Shane Z.; DeWalt, Emma L.; Schmitt, Paul D.; Muir, Ryan D.; Simpson, Garth J.

    2015-03-01

    Fast beam-scanning non-linear optical microscopy, coupled with fast (8 MHz) polarization modulation and analytical modeling, has enabled simultaneous nonlinear optical Stokes ellipsometry (NOSE) and linear Stokes ellipsometry imaging at video rate (15 Hz). NOSE enables recovery of the complex-valued Jones tensor that describes the polarization-dependent observables, in contrast to polarimetry, in which the polarization state of the exciting beam is recorded. Each data acquisition consists of 30 images (10 for each detector, with three detectors operating in parallel), each of which corresponds to polarization-dependent results. Processing of this image set by linear fitting contracts each set of 10 images down to a set of 5 parameters per detector in second harmonic generation (SHG) and three parameters for the transmittance of the fundamental laser beam. Using these parameters, it is possible to recover the Jones tensor elements of the sample at video rate. Video rate imaging is enabled by performing synchronous digitization (SD), in which a PCIe digital oscilloscope card is synchronized to the laser (the laser is the master clock). Fast polarization modulation was achieved by modulating an electro-optic modulator synchronously with the laser and digitizer, with a simple sine wave at 1/10th the period of the laser, producing a repeating pattern of 10 polarization states. This approach was validated using Z-cut quartz, and NOSE microscopy was performed for micro-crystals of naproxen.

  3. The learning rate in three dimensional high definition video assisted microvascular anastomosis in a rat model.

    Science.gov (United States)

    Kotsougiani, Dimitra; Hundepool, Caroline A; Bulstra, Liselotte F; Shin, Delaney M; Shin, Alexander Y; Bishop, Allen T

    2016-11-01

    Three-dimensional (3D) high definition (HD) video systems are changing microsurgical practice by providing stereoscopic imaging not only for the surgeon and first assistant using the binocular microscope, but also for others involved in the surgery. The purpose of this study was to evaluate the potential of such systems to replace the binocular microscope for microarterial anastomoses and to assess the rate of learning based on surgeons' experience. Two experienced and two novice microsurgeons performed a total of 88 rat femoral arterial anastomoses: 44 using a 3D HD video device ('Trenion', Carl Zeiss Meditech) and 44 using a binocular microscope. We evaluated anastomosis time and modified OSATS scores, as well as the subjects' preference for comfort, image adequacy, and technical ease. Experienced microsurgeons showed a steep learning curve for anastomosis times, with equivalent OSATS scores for both systems. However, prolonged anastomosis times were required when using the novel 3D HD system rather than direct binocular vision. Comparable learning rates for anastomosis time were demonstrated for novice microsurgeons, and modified OSATS scores did not differ between the viewing technologies. All microsurgeons reported improved comfort with the 3D HD video system but found the image quality of the conventional microscope superior, facilitating technical ease. The present study demonstrates the potential of 3D HD video systems to replace current binocular microscopes, offering qualitatively equivalent microvascular anastomosis with improved comfort for experienced microsurgeons. However, image quality was rated inferior with the 3D HD system, resulting in prolonged anastomosis times. Microsurgical skill acquisition in novice microsurgeons was not influenced by the viewing system used. Copyright © 2016. Published by Elsevier Ltd.

  4. Application of X-Y separable 2-D array beamforming for increased frame rate and energy efficiency in handheld devices.

    Science.gov (United States)

    Owen, Kevin; Fuller, Michael; Hossack, John

    2012-07-01

    Two-dimensional arrays present significant beamforming computational challenges because of their high channel count and data rate. These challenges are even more stringent when incorporating a 2-D transducer array into a battery-powered handheld device, placing significant demands on power efficiency. Previous work in sonar and ultrasound indicates that 2-D array beamforming can be decomposed into two separable line-array beamforming operations. This has been used in conjunction with frequency-domain phase-based focusing to achieve fast volume imaging. In this paper, we analyze the imaging and computational performance of approximate near-field separable beamforming for high-quality delay-and-sum (DAS) beamforming and for a low-cost, phase-rotation-only beamforming method known as direct-sampled in-phase quadrature (DSIQ) beamforming. We show that when high-quality time-delay interpolation is used, separable DAS focusing introduces no noticeable imaging degradation under practical conditions. Similar results are observed for DSIQ focusing. In addition, a slight modification to the DSIQ focusing method greatly increases imaging contrast, making it comparable to that of DAS, despite a wider main lobe and higher side lobes resulting from the limitations of phase-only time-delay interpolation. Compared with non-separable 2-D imaging, up to a 20-fold increase in frame rate is possible with the separable method. When implemented on a smartphone-oriented processor to focus data from a 60 x 60 channel array using a 40 x 40 aperture, the frame rate per C-mode volume slice increases from 16 to 255 Hz for DAS, and from 11 to 193 Hz for DSIQ. Energy usage per frame is similarly reduced from 75 to 4.8 mJ/frame for DAS, and from 107 to 6.3 mJ/frame for DSIQ. We also show that the separable method outperforms 2-D FFT-based focusing by a factor of 1.64 at these data sizes. These data indicate that with the optimal design choices, separable 2-D beamforming can
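
    The separability idea above, one 2-D focusing sum factored into two 1-D passes, can be illustrated with phase-rotation focusing, where the factorization is exact whenever the focusing phase separates as φ(x, y) = φx(x) + φy(y). This is a sketch of the principle, not the paper's DAS/DSIQ implementation:

    ```python
    import numpy as np

    def separable_focus(rf, phase_x, phase_y):
        """Focus a 2-D aperture in two 1-D passes: phase-rotate and sum
        each row along x, then phase-rotate and sum the row outputs along y.
        rf: complex channel samples, shape (ny, nx)."""
        rows = (rf * np.exp(1j * phase_x)[None, :]).sum(axis=1)  # nx-point sums
        return (rows * np.exp(1j * phase_y)).sum()               # one ny-point sum
    ```

    Per focal point this costs on the order of nx + ny multiply-accumulates for the second stage instead of nx * ny for the full 2-D sum, which is the source of the frame-rate and energy savings the abstract reports; the near-field approximation lies in assuming the delays separate in x and y.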

  5. Game-Theoretic Rate-Distortion-Complexity Optimization of High Efficiency Video Coding

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Milani, Simone; Forchhammer, Søren

    2013-01-01

    This paper presents an algorithm for rate-distortion-complexity optimization for the emerging High Efficiency Video Coding (HEVC) standard, whose high computational requirements urge the need for low-complexity optimization algorithms. Optimization approaches need to specify different complexity profiles in order to tailor the computational load to the different hardware and power-supply resources of devices. In this work, we focus on optimizing the quantization parameter and partition depth in HEVC via a game-theoretic approach. The proposed rate control strategy alone provides a 0.2 dB improvement...

  6. A High Frame Rate Test System for the HEPS-BPIX Based on NI-sbRIO Board

    Science.gov (United States)

    Gu, Jingzi; Zhang, Jie; Wei, Wei; Ning, Zhe; Li, Zhenjie; Jiang, Xiaoshan; Fan, Lei; Shen, Wei; Ren, Jiayi; Ji, Xiaolu; Lan, Allan K.; Lu, Yunpeng; Ouyang, Qun; Liu, Peng; Zhu, Kejun; Wang, Zheng

    2017-06-01

    HEPS-BPIX is a silicon pixel detector designed for the future large scientific facility, the High Energy Photon Source (HEPS) in Beijing, China. It is a high-frame-rate hybrid pixel detector that works in single-photon-counting mode. The high frame rate leads to a much higher readout data bandwidth than in former systems, which is the main difficulty of the design. To test and calibrate the pixel detector, a test system based on the National Instruments single-board RIO 9626 and the LabVIEW programming environment has been designed. A series of tests has been carried out with an X-ray machine as well as on the Beijing Synchrotron Radiation Facility 1W2B beamline. The test results show that the threshold uniformity is better than 60 electrons and the equivalent noise charge is less than 120 electrons. In addition, the required maximum frame rate of 1.2 kHz has been achieved. This paper elaborates on the test system design and presents the latest test results of the HEPS-BPIX system.

  7. Performance Analysis of Video Transmission Using Sequential Distortion Minimization Method for Digital Video Broadcasting Terrestrial

    Directory of Open Access Journals (Sweden)

    Novita Astin

    2016-12-01

    Full Text Available This paper presents the transmission of a Digital Video Broadcasting system with streaming video at 640x480 resolution under different IQ rates and modulations. In video transmission, distortion often occurs, so the received video has poor quality. Key-frame selection algorithms are flexible to changes in the video, but with these methods the temporal information of a video sequence is omitted. To minimize distortion between the original video and the received video, we added a methodology using a sequential distortion minimization algorithm. Its aim was to create a new video, better than the received video, without significant loss of content relative to the original, corrected sequentially. The reliability of video transmission was observed based on a constellation diagram, with the best result at an IQ rate of 2 MHz and 8 QAM modulation. Video transmission was also investigated with and without SEDIM (the Sequential Distortion Minimization Method). The experimental results showed that with SEDIM the average PSNR (Peak Signal to Noise Ratio) of the video transmission increased from 19.855 dB to 48.386 dB and the average SSIM (Structural Similarity) increased by 10.49%. The experimental results and comparison of the proposed method showed good performance. A USRP board was used as the RF front-end at 2.2 GHz.
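
    The PSNR figures quoted above follow the standard definition over the mean squared error between original and received frames. A minimal sketch, assuming 8-bit video (peak value 255):

    ```python
    import numpy as np

    def psnr(ref, deg, peak=255.0):
        """Peak Signal to Noise Ratio in dB between a reference frame
        and a degraded (received) frame: 10*log10(peak^2 / MSE)."""
        mse = np.mean((ref.astype(float) - deg.astype(float)) ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    ```

    In practice the per-frame values are averaged over the sequence, which is how the 19.855 dB and 48.386 dB averages above would be obtained.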

  8. Distributed video coding with multiple side information

    DEFF Research Database (Denmark)

    Huang, Xin; Brites, C.; Ascenso, J.

    2009-01-01

    Distributed Video Coding (DVC) is a new video coding paradigm which mainly exploits the source statistics at the decoder, based on the availability of some decoder side information. The quality of the side information has a major impact on the DVC rate-distortion (RD) performance, in the same way the quality of the predictions had a major impact in predictive video coding. In this paper, a DVC solution exploiting multiple side information is proposed; the multiple side information is generated by frame interpolation and frame extrapolation, targeting to improve the side information of a single...

  9. A validity test of movie, television, and video-game ratings.

    Science.gov (United States)

    Walsh, D A; Gentile, D A

    2001-06-01

    Numerous studies have documented the potential effects on young audiences of violent content in media products, including movies, television programs, and computer and video games. Similar studies have evaluated the effects associated with sexual content and messages. Cumulatively, these effects represent a significant public health risk for increased aggressive and violent behavior, spread of sexually transmitted diseases, and pediatric pregnancy. In partial response to these risks and to public and legislative pressure, the movie, television, and gaming industries have implemented ratings systems intended to provide information about the content and appropriate audiences for different films, shows, and games. To test the validity of the current movie-, television-, and video-game-rating systems. Panel study. Participants used the KidScore media evaluation tool, which evaluates films, television shows, and video games on 10 aspects, including the appropriateness of the media product for children based on age. When an entertainment industry rates a product as inappropriate for children, parent raters agree that it is inappropriate for children. However, parent raters disagree with industry usage of many of the ratings designating material suitable for children of different ages. Products rated as appropriate for adolescents are of the greatest concern. The level of disagreement varies from industry to industry and even from rating to rating. Analysis indicates that the amount of violent content and portrayals of violence are the primary markers for disagreement between parent raters and industry ratings. As one part of a solution to the complex public health problems posed by violent and sexually explicit media products, ratings can have value if used with caution. Parents and caregivers relying on the ratings systems to guide their children's use of media products should continue to monitor content independently.
Industry ratings systems should be revised with input

  10. Temporal compressive imaging for video

    Science.gov (United States)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, for applications such as gunpowder blasting analysis and the observation of high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed, and the reconstruction qualities using TwIST and GMM are compared.
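
    The TCI measurement process described above collapses T = 8 coded high-speed frames into one snapshot. The forward model is a per-pixel masked sum; a sketch follows (the reconstruction itself, via TwIST or GMM, is the hard inverse problem and is not shown):

    ```python
    import numpy as np

    T = 8  # temporal compression ratio, as in the abstract

    def tci_snapshot(frames, masks):
        """TCI forward model: one coded compressive measurement
        y = sum over t of M_t * x_t, where x_t are the T high-speed
        frames and M_t the per-frame coded masks (same shape as a frame)."""
        return np.sum(masks * frames, axis=0)
    ```

    A reconstruction algorithm then inverts this underdetermined map patch by patch (8×8 patches in the paper), exploiting sparsity (TwIST) or a learned prior (GMM).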

  11. Probe Oscillation Shear Elastography (PROSE): A High Frame-Rate Method for Two-Dimensional Ultrasound Shear Wave Elastography.

    Science.gov (United States)

    Mellema, Daniel C; Song, Pengfei; Kinnick, Randall R; Urban, Matthew W; Greenleaf, James F; Manduca, Armando; Chen, Shigao

    2016-09-01

    Ultrasound shear wave elastography (SWE) utilizes the propagation of induced shear waves to characterize the shear modulus of soft tissue. Many methods rely on an acoustic radiation force (ARF) "push beam" to generate shear waves. However, specialized hardware is required to generate the push beams, and the thermal stress that is placed upon the ultrasound system, transducer, and tissue by the push beams currently limits the frame-rate to about 1 Hz. These constraints have limited the implementation of ARF to high-end clinical systems. This paper presents Probe Oscillation Shear Elastography (PROSE) as an alternative method to measure tissue elasticity. PROSE generates shear waves using a harmonic mechanical vibration of an ultrasound transducer, while simultaneously detecting motion with the same transducer under pulse-echo mode. Motion of the transducer during detection produces a "strain-like" compression artifact that is coupled with the observed shear waves. A novel symmetric sampling scheme is proposed such that pulse-echo detection events are acquired when the ultrasound transducer returns to the same physical position, allowing the shear waves to be decoupled from the compression artifact. Full field-of-view (FOV) two-dimensional (2D) shear wave speed images were obtained by applying a local frequency estimation (LFE) technique, capable of generating a 2D map from a single frame of shear wave motion. The shear wave imaging frame rate of PROSE is comparable to the vibration frequency, which can be an order of magnitude higher than ARF based techniques. PROSE was able to produce smooth and accurate shear wave images from three homogeneous phantoms with different moduli, with an effective frame rate of 300 Hz. An inclusion phantom study showed that increased vibration frequencies improved the accuracy of inclusion imaging, and allowed targets as small as 6.5 mm to be resolved with good contrast (contrast-to-noise ratio ≥ 19 dB) between the target and
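
    The local frequency estimation (LFE) step above maps a single frame of shear wave motion to a 2-D speed image via c = 2πf/k. The sketch below uses a simple phase-gradient wavenumber estimate as a stand-in for the paper's filter-bank LFE; the function name and sampling setup are illustrative assumptions:

    ```python
    import numpy as np

    def shear_speed_lfe(phase_map, dx, f_vib):
        """Estimate local shear wave speed from a 2-D phase map of
        harmonic motion at vibration frequency f_vib (Hz).
        Local wavenumber k = |d(phase)/dx|, then c = 2*pi*f_vib / k."""
        unwrapped = np.unwrap(phase_map, axis=1)
        k = np.abs(np.gradient(unwrapped, dx, axis=1))  # rad/m
        return 2 * np.pi * f_vib / np.maximum(k, 1e-9)  # m/s, guarded
    ```

    For a plane shear wave, the phase gradient recovers the wavenumber exactly, and stiffer (faster) tissue yields a smaller k and hence a larger speed estimate.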

  12. Expulsion and continuation rates after postabortion insertion of framed IUDs versus frameless IUDs – review of the literature

    Directory of Open Access Journals (Sweden)

    Wildemeersch D

    2015-07-01

    with conventional (framed) IUDs were between 33.8% and 80% at 1 year for studies providing 1-year rates, and between 68% and 94.1% for studies reporting continuation rates at 6 months. Studies utilizing frameless IUDs reported a 1-year continuation rate over 95%. Conclusion: Frameless IUDs, due to their attachment to the uterine fundus, appear to be better retained by the postabortal uterus than conventional framed IUDs. The absence of a frame ensures compatibility with the anatomical dimensions of the uterine cavity, and may therefore result in improved acceptability and continuation rates in comparison with framed IUDs. Both of these characteristics of the frameless IUD could help reduce the number of repeat unwanted pregnancies and subsequent abortions in some cases. Keywords: IUD, abortion, frameless IUD, expulsion, continuation, repeat abortion, unintended pregnancy

  13. Quality Adaptive Video Streaming Mechanism Using the Temporal Scalability

    Science.gov (United States)

    Lee, Sunhun; Chung, Kwangsue

    In video streaming applications over the Internet, TCP-friendly rate control schemes are useful for improving network stability and inter-protocol fairness. However, they do not always guarantee smooth video streaming. To simultaneously satisfy both network and user requirements, video streaming applications should be quality-adaptive. In this paper, we propose a new quality adaptation mechanism that adjusts the quality of a congestion-controlled video stream by controlling the frame rate. Based on the current network condition, it controls the frame rate of the video stream and the sending rate in a TCP-friendly manner. Through simulation, we show that our adaptation mechanism appropriately adjusts the quality of the video stream while improving network stability.
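
    The TCP-friendly sending rate that drives such frame-rate adaptation is commonly computed from the TCP throughput equation (the RFC 5348 form with one acknowledged packet per ACK, b = 1, is used here); the frame-rate clamp below is a hypothetical illustration of the adaptation step, not this paper's mechanism:

    ```python
    from math import sqrt

    def tfrc_rate(s, rtt, p, t_rto=None):
        """TCP throughput equation used by TFRC (RFC 5348, b = 1):
        X = s / (R*sqrt(2p/3) + t_RTO * 3*sqrt(3p/8) * p * (1 + 32*p^2))
        s: packet size (bits), rtt: round-trip time (s), p: loss event rate."""
        if t_rto is None:
            t_rto = 4 * rtt  # the RFC's recommended simplification
        denom = rtt * sqrt(2 * p / 3) \
            + t_rto * 3 * sqrt(3 * p / 8) * p * (1 + 32 * p * p)
        return s / denom  # fair-share sending rate, bits/s

    def adapt_frame_rate(send_rate_bps, bits_per_frame, fps_max=30):
        """Hedged sketch: pick the highest frame rate the fair-share
        sending rate can sustain for the given average frame size."""
        return max(1, min(fps_max, int(send_rate_bps // bits_per_frame)))
    ```

    As the loss event rate p rises, the fair-share rate falls and the sender drops frames rather than degrading every frame, which is the quality-adaptation trade-off the abstract describes.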

  14. Reduction of frame rate in full-field swept-source optical coherence tomography by numerical motion correction [Invited].

    Science.gov (United States)

    Pfäffle, Clara; Spahr, Hendrik; Hillmann, Dierck; Sudkamp, Helge; Franke, Gesa; Koch, Peter; Hüttmann, Gereon

    2017-03-01

    Full-field swept-source optical coherence tomography (FF-SS-OCT) was recently shown to allow new and exciting applications for imaging the human eye that were previously not possible using current scanning OCT systems. However, especially when using cameras that do not acquire data with hundreds of kHz frame rate, uncorrected phase errors due to axial motion of the eye lead to a drastic loss in image quality of the reconstructed volumes. Here we first give a short overview of recent advances in techniques and applications of parallelized OCT and finally present an iterative and statistical algorithm that estimates and corrects motion-induced phase errors in the FF-SS-OCT data. The presented algorithm is in many aspects adopted from the phase gradient autofocus (PGA) method, which is frequently used in synthetic aperture radar (SAR). Following this approach, the available phase errors can be estimated based on the image information that remains in the data, and no parametrization with few degrees of freedom is required. Consequently, the algorithm is capable of compensating even strong motion artifacts. Efficacy of the algorithm was tested on simulated data with motion containing varying frequency components. We show that even in strongly blurred data, the actual image information remains intact, and the algorithm can identify the phase error and correct it. Furthermore, we use the algorithm to compensate real phase error in FF-SS-OCT imaging of the human retina. Acquisition rates can be reduced by a factor of three (from 60 to 20 kHz frame rate) with an image quality that is even higher compared to uncorrected volumes recorded at the maximum acquisition rate. The presented algorithm for axial motion correction decreases the high requirements on the camera frame rate and thus brings FF-SS-OCT closer to clinical applications.

  15. Video Analysis of Rolling Cylinders

    Science.gov (United States)

    Phommarach, S.; Wattanakasiwich, P.; Johnston, I.

    2012-01-01

    In this work, we studied the rolling motion of solid and hollow cylinders down an inclined plane at different angles. The motions were captured on video at 300 frames s⁻¹, and the videos were analyzed frame by frame using video analysis software. Data from the real motion were compared with the theory of rolling down an inclined…

  16. Validity of Borg Ratings of Perceived Exertion During Active Video Game Play

    Science.gov (United States)

    POLLOCK, BRANDON S.; BARKLEY, JACOB E.; POTENZINI, NICK; DESALVO, RENEE M.; BUSER, STACEY L.; OTTERSTETTER, RONALD; JUVANCIC-HELTZEL, JUDITH A.

    2013-01-01

    During physically interactive video game play (e.g., Nintendo Wii), users are exposed to potential distracters (e.g., video, music), which may decrease their ratings of perceived exertion (RPE) throughout game play. The purpose of this investigation was to determine the association between RPE scores and heart rate while playing the Nintendo Wii. Healthy adults (N = 13, 53.5 ± 5.4 years old) participated in two exercise sessions using the Nintendo Wii Fit Plus. During each session participants played a five-minute warm-up game (Basic Run), two separate Wii Fit Plus games (Yoga, Strength Training, Aerobics or Balance Training) for fifteen minutes each, and then a five-minute cool down game (Basic Run). Borg RPE and heart rate were assessed during the final 30 seconds of the warm up and cool down, as well as during the final 30 seconds of play for each Wii Fit Plus game. Correlation analysis combining data from both exercise sessions indicated a moderate positive relationship between heart rate and RPE (r = 0.32). Mixed-effects model regression analyses demonstrated that RPE scores were significantly associated with heart rate (p < 0.001). The average percentage of age-predicted heart rate maximum achieved (58 ± 6%) was significantly greater (p = 0.001) than the percentage of maximum RPE indicated (43 ± 11%). Borg RPE scores were positively associated with heart rate in adults during exercise sessions using the Wii Fit Plus. However, this relationship was weaker than that observed in past research assessing RPE validity during different modes of exercise (e.g., walking, running) without distracters. PMID:27293499
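    The reported correlation between heart rate and RPE is an ordinary Pearson r. The sketch below computes it from scratch on invented paired observations (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical paired observations: heart rate (bpm) and Borg RPE (6-20 scale)
hr  = [92, 101, 110, 98, 118, 125, 104, 131]
rpe = [9, 10, 12, 9, 13, 12, 11, 14]
print(round(pearson_r(hr, rpe), 2))
```

    With real distracter-laden game play the study found a much weaker r of 0.32, which is the point of the validity concern.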

  18. Fast Aerial Video Stitching

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-10-01

    Full Text Available The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system which has a unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as FPGA or CUDA, our system runs on a normal personal computer. To achieve this, after careful comparison of the existing invariant features, we choose the FAST corner and binary descriptor for efficient feature extraction and representation, and present a spatially and temporally coherent filter to fuse the UAV motion information into the feature matching. The proposed filter can remove the majority of feature correspondence outliers and significantly increase the speed of robust feature matching by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key frame-based stitching framework is used to reduce the accumulation errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate an accurate stitching image for aerial video stitching tasks.
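    The speed of the FAST-corner/binary-descriptor pipeline comes from matching with cheap Hamming distances instead of Euclidean ones. A minimal sketch of brute-force binary matching with a ratio test (toy 256-bit descriptors, not the paper's motion-coherent filter):

```python
import numpy as np

rng = np.random.default_rng(1)

def hamming(a, b):
    """Hamming distance between two bit-packed uint8 descriptors."""
    return int(np.unpackbits(a ^ b).sum())

def match(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest neighbour with a ratio test to drop ambiguous matches."""
    out = []
    for i, d in enumerate(desc_a):
        dists = sorted((hamming(d, e), j) for j, e in enumerate(desc_b))
        if dists[0][0] < ratio * dists[1][0]:
            out.append((i, dists[0][1]))
    return out

# Toy data: frame B's descriptors are frame A's with ~2% of bits flipped
desc_a = rng.integers(0, 256, size=(20, 32), dtype=np.uint8)   # 20 descriptors, 256 bits
bits = np.unpackbits(desc_a, axis=1)
flips = (rng.random(bits.shape) < 0.02).astype(np.uint8)
desc_b = np.packbits(bits ^ flips, axis=1)

matches = match(desc_a, desc_b)
print(len(matches), all(i == j for i, j in matches))
```

    In practice the paper's coherence filter further restricts candidate pairs using the UAV motion prediction, which is where the reported 20x matching speed-up comes from.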

  19. A critical analysis of the effect of view mode and frame rate on reading time and lesion detection during capsule endoscopy.

    Science.gov (United States)

    Nakamura, Masanao; Murino, Alberto; O'Rourke, Aine; Fraser, Chris

    2015-06-01

    Factors influencing reading time and detection of lesions include the view mode (VM) and frame rate (FR) applied during reading of small bowel capsule endoscopy images. The aims of this study were to examine the impact of VM and FR on reading time and lesion detection using a standardized, single-type lesion model. A selected video clip containing a known number of positive images (n = 60) of small bowel angioectasias was read using nine different combinations of VM and FR (VM1, VM2, and VM4 × FR10, FR15, and FR25) in randomized order by six capsule endoscopists. Readers were asked to count all positive images of angioectasias (maximum number of positive images, MPIs) seen during reading. The main outcome measurements were the effect of VM and FR on reading time and lesion detection. Mean MPIs for all VM2 and VM4 were 36 (60 %) and 38 (64 %), significantly higher than the 24 (40 %) for VM1 (P = 0.011, 0.008). A statistical difference was found when the total MPIs at FR10 were compared to FR15 (P = 0.008) and to FR25 (P …
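    Ignoring pauses and re-reading, the nominal review time scales inversely with both VM (images shown simultaneously) and FR (playback frames per second). A toy calculation over a hypothetical 144,000-frame study shows the trade-off the study probes:

```python
def reading_time_minutes(total_frames, view_mode, frame_rate):
    """Idealised lower bound on review time: `view_mode` images shown at once,
    advancing at `frame_rate` frames per second."""
    return total_frames / (view_mode * frame_rate) / 60.0

# Hypothetical capsule study length: 144,000 recorded frames
for vm in (1, 2, 4):
    row = [round(reading_time_minutes(144000, vm, fr), 1) for fr in (10, 15, 25)]
    print(f"VM{vm}:", row)   # minutes at FR10, FR15, FR25
```

    The study's point is that this time saving is not free: lesion detection dropped at the fastest settings, so VM and FR must be chosen to balance throughput against miss rate.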

  20. Effect and Analysis of Sustainable Cell Rate using MPEG video Traffic in ATM Networks

    Directory of Open Access Journals (Sweden)

    Sakshi Kaushal

    2006-04-01

    Full Text Available Broadband networks have the capability to carry multiple types of traffic – voice, video and data – but these services need to be controlled according to the traffic contract negotiated at connection time to maintain the desired quality of service. Such control techniques use traffic descriptors to evaluate performance and effectiveness. In the case of Variable Bit Rate (VBR) services, the Peak Cell Rate (PCR) and its Cell Delay Variation Tolerance (CDVTPCR) are mandatory descriptors. In addition to these, the ATM Forum proposed the Sustainable Cell Rate (SCR) and its Cell Delay Variation Tolerance (CDVTSCR). In this paper, we evaluated the impact of specific SCR and CDVTSCR values on Usage Parameter Control (UPC) performance for measured MPEG traffic, for improving the efficiency…

  1. Imaging of vaporised sub-micron phase change contrast agents with high frame rate ultrasound and optics.

    Science.gov (United States)

    Lin, Shengtao; Zhang, Ge; Jamburidze, Akaki; Chee, Melisse; Leow, Chee Hau; Garbin, Valeria; Tang, Meng-Xing

    2018-01-31

    Phase-change ultrasound contrast agents (PCCAs), or nanodroplets, show promise as an alternative to conventional microbubble agents over a wide range of diagnostic applications. In the meantime, high-frame-rate (HFR) ultrasound imaging with microbubbles enables unprecedented temporal resolution compared with traditional contrast-enhanced ultrasound imaging. The combination of HFR ultrasound imaging and PCCAs offers opportunities to observe and better understand PCCA behaviour after vaporisation, capturing fast phenomena at high temporal resolution. In this study, we utilised HFR ultrasound at frame rates in the kilohertz range (5-20 kHz) to image native and size-selected PCCA populations immediately after vaporisation in vitro with clinical acoustic parameters. The size-selected PCCAs obtained through filtration are shown to preserve a submicron-sized (mean diameter < 200 nm) population without the micron-sized outliers (> 1 µm) originally present in the native PCCA emulsion. The results demonstrate imaging signals with different amplitude and temporal features compared with those of microbubbles. Compared with microbubbles, both B-mode and Pulse-Inversion (PI) signals from vaporised PCCA populations were reduced significantly in the first tens of milliseconds, while only B-mode signals from the PCCAs recovered during the next 400 ms, suggesting significant changes to the size distribution of PCCAs after vaporisation. It is also shown that such recovery in signal over time is not evident when using size-selective PCCAs. Furthermore, it was found that signals from the vaporised PCCA populations are affected by the amplitude and frame rate of the HFR ultrasound imaging. Using high-speed optical camera observation (30 kHz), we observed the particle size change in the vaporised PCCA populations exposed to the HFR ultrasound imaging pulses. These findings can benefit the understanding of PCCA behaviour under HFR ultrasound imaging.

  2. Olympic Coast National Marine Sanctuary - stil120_0602a - Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during September 2006. Video data...

  3. still116_0501n-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  4. still116_0501d-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  5. still116_0501c-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  6. still116_0501s-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  7. still114_0402c-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  8. still115_0403-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  9. still114_0402b-- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  10. Plane-wave transverse oscillation for high-frame-rate 2-D vector flow imaging.

    Science.gov (United States)

    Lenge, Matteo; Ramalli, Alessandro; Tortoli, Piero; Cachard, Christian; Liebgott, Hervé

    2015-12-01

    Transverse oscillation (TO) methods introduce oscillations in the pulse-echo field (PEF) along the direction transverse to the ultrasound propagation direction. This may be exploited to extend flow investigations toward multidimensional estimates. In this paper, the TOs are coupled with the transmission of plane waves (PWs) to reconstruct high-frame-rate RF images with bidirectional oscillations in the PEF. Such RF images are then processed by a 2-D phase-based displacement estimator to produce 2-D vector flow maps at thousands of frames per second. First, the capability of generating TOs after PW transmissions was thoroughly investigated by varying the lateral wavelength, the burst length, and the transmission frequency. Over the entire region of interest, the generated lateral wavelengths, compared with the designed ones, presented bias and standard deviation of -3.3 ± 5.7% and 10.6 ± 7.4% in simulations and experiments, respectively. The performance of the ultrafast vector flow mapping method was also assessed by evaluating the differences between the estimated velocities and the expected ones. Both simulations and experiments show overall biases lower than 20% when varying the beam-to-flow angle, the peak velocity, and the depth of interest. In vivo applications of the method on the common carotid and the brachial arteries are also presented.
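    A phase-based estimator exploits the oscillation as a known carrier: displacing the scatterers shifts the phase of the lag-zero correlation between successive frames by 2π·d/λ. A 1-D sketch of that principle (illustrative wavelength, pixel pitch and shift, not the paper's 2-D estimator):

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 0.5e-3                     # oscillation wavelength in the image, m (illustrative)
k = 2 * np.pi / lam
x = np.arange(512) * 25e-6       # lateral sample grid, m
disp = 40e-6                     # true inter-frame displacement, m

# Analytic (complex) image lines: slowly varying speckle envelope times the
# oscillation carrier; the second frame is the first shifted by `disp`.
# (Envelope decorrelation is neglected: disp << speckle size.)
env = np.convolve(rng.normal(size=x.size), np.ones(25) / 25, mode="same")
s1 = env * np.exp(1j * k * x)
s2 = env * np.exp(1j * k * (x - disp))

# Phase of the lag-zero correlation gives the displacement as a fraction of lam
phase = np.angle(np.sum(s2 * np.conj(s1)))
disp_hat = -phase / k
print(round(disp_hat * 1e6, 2))   # micrometres; matches the true 40 µm shift
```

    The estimate is unambiguous only for |d| < λ/2 per frame, which is why such estimators pair naturally with the kilohertz frame rates of plane-wave imaging.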

  11. Development and Reliability Evaluation of the Movement Rating Instrument for Virtual Reality Video Game Play.

    Science.gov (United States)

    Levac, Danielle; Nawrotek, Joanna; Deschenes, Emilie; Giguere, Tia; Serafin, Julie; Bilodeau, Martin; Sveistrup, Heidi

    2016-06-01

    Virtual reality active video games are increasingly popular physical therapy interventions for children with cerebral palsy. However, physical therapists require educational resources to support decision making about game selection to match individual patient goals. Quantifying the movements elicited during virtual reality active video game play can inform individualized game selection in pediatric rehabilitation. The objectives of this study were to develop and evaluate the feasibility and reliability of the Movement Rating Instrument for Virtual Reality Game Play (MRI-VRGP). Item generation occurred through an iterative process of literature review and sample videotape viewing. The MRI-VRGP includes 25 items quantifying upper extremity, lower extremity, and total body movements. A total of 176 videotaped 90-second game play sessions involving 7 typically developing children and 4 children with cerebral palsy were rated by 3 raters trained in MRI-VRGP use. Children played 8 games on 2 virtual reality and active video game systems. Intraclass correlation coefficients (ICCs) determined intrarater and interrater reliability. Excellent intrarater reliability was evidenced by ICCs of >0.75 for 17 of the 25 items across the 3 raters. Interrater reliability estimates were less precise. Excellent interrater reliability was achieved for far-reach upper extremity movements (ICC=0.92 for right and ICC=0.90 for left) and for squat (ICC=0.80) and jump (ICC=0.99) items, with 9 items achieving ICCs of >0.70, 12 items achieving ICCs of between 0.40 and 0.70, and 4 items achieving poor reliability: close-reach upper extremity (ICC=0.14 for right and ICC=0.07 for left) and single-leg stance (ICC=0.55 for right and ICC=0.27 for left). Poor video quality, differing item interpretations between raters, and difficulty quantifying the high-speed movements involved in game play affected reliability. With item definition clarification and further psychometric property evaluation, the MRI-VRGP…
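    For reference, an ICC can be computed from a simple one-way ANOVA decomposition. The sketch below implements the ICC(1,1) form on invented ratings; the abstract does not state which ICC variant the authors used, so this is only illustrative:

```python
import numpy as np

def icc_1_1(scores):
    """One-way random-effects ICC(1,1) for an (n_subjects, n_raters) array."""
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    msb = k * np.sum((subj_means - grand) ** 2) / (n - 1)              # between subjects
    msw = np.sum((scores - subj_means[:, None]) ** 2) / (n * (k - 1))  # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical ratings: 3 raters scoring 6 game-play videos on one movement item
scores = np.array([
    [4, 4, 5],
    [2, 2, 2],
    [5, 5, 4],
    [1, 1, 1],
    [3, 3, 3],
    [4, 5, 5],
], dtype=float)
print(round(icc_1_1(scores), 2))   # > 0.75, i.e. "excellent" by the study's cutoff
```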

  12. Comoving frame models of hot star winds. II. Reduction of O star wind mass-loss rates in global models

    Science.gov (United States)

    Krtička, J.; Kubát, J.

    2017-10-01

    We calculate global (unified) wind models of main-sequence, giant, and supergiant O stars from our Galaxy. The models are calculated by solving hydrodynamic, kinetic equilibrium (also known as NLTE) and comoving frame (CMF) radiative transfer equations from the (nearly) hydrostatic photosphere to the supersonic wind. For given stellar parameters, our models predict the photosphere and wind structure and in particular the wind mass-loss rates without any free parameters. Our predicted mass-loss rates are a factor of 2-5 lower than the commonly used predictions. A possible cause of the difference is the abandonment of the Sobolev approximation in the calculation of the radiative force, because our models agree with predictions of CMF NLTE radiative transfer codes. Our predicted mass-loss rates agree nicely with the mass-loss rates derived from observed near-infrared and X-ray line profiles and are slightly lower than mass-loss rates derived from combined UV and Hα diagnostics. The empirical mass-loss rate estimates corrected for clumping may therefore be reconciled with theoretical predictions in such a way that the average ratio between individual mass-loss rate estimates is not higher than about 1.6. On the other hand, our predictions are a factor of 4.7 lower than pure Hα mass-loss rate estimates and can be reconciled with these values only by assuming a microclumping factor of at least eight.

  13. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  14. Video-based heart rate monitoring across a range of skin pigmentations during an acute hypoxic challenge.

    Science.gov (United States)

    Addison, Paul S; Jacquel, Dominique; Foo, David M H; Borg, Ulf R

    2017-11-09

    The robust monitoring of heart rate from the video-photoplethysmogram (video-PPG) during challenging conditions requires new analysis techniques. The work reported here extends current research in this area by applying a motion tolerant algorithm to extract high quality video-PPGs from a cohort of subjects undergoing marked heart rate changes during a hypoxic challenge, and exhibiting a full range of skin pigmentation types. High uptimes in reported video-based heart rate (HRvid) were targeted, while retaining high accuracy in the results. Ten healthy volunteers were studied during a double desaturation hypoxic challenge. Video-PPGs were generated from the acquired video image stream and processed to generate heart rate. HRvid was compared to the pulse rate posted by a reference pulse oximeter device (HRp). Agreement between video-based heart rate and that provided by the pulse oximeter was as follows: Bias = - 0.21 bpm, RMSD = 2.15 bpm, least squares fit gradient = 1.00 (Pearson R = 0.99, p < 0.0001), with a 98.78% reporting uptime. The difference between the HRvid and HRp exceeded 5 and 10 bpm, for 3.59 and 0.35% of the reporting time respectively, and at no point did these differences exceed 25 bpm. Excellent agreement was found between the HRvid and HRp in a study covering the whole range of skin pigmentation types (Fitzpatrick scales I-VI), using standard room lighting and with moderate subject motion. Although promising, further work should include a larger cohort with multiple subjects per Fitzpatrick class combined with a more rigorous motion and lighting protocol.
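    The agreement statistics quoted above (bias and RMSD between HRvid and HRp) are straightforward to compute from paired readings. A sketch with invented values, not the study's data:

```python
import math

def agreement_stats(ref, test):
    """Bias (mean difference) and RMSD between paired rate estimates."""
    diffs = [t - r for r, t in zip(ref, test)]
    n = len(diffs)
    bias = sum(diffs) / n
    rmsd = math.sqrt(sum(d * d for d in diffs) / n)
    return bias, rmsd

hr_ref = [61, 72, 85, 96, 104, 90, 78, 66]   # pulse-oximeter rate, bpm (hypothetical)
hr_vid = [60, 73, 84, 97, 103, 91, 77, 65]   # video-derived rate, bpm (hypothetical)
bias, rmsd = agreement_stats(hr_ref, hr_vid)
print(bias, rmsd)   # → -0.25 1.0
```

    A near-zero bias with a small RMSD, as in the study's -0.21/2.15 bpm result, indicates the video method neither systematically over- nor under-reads and stays tightly clustered around the reference.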

  15. Ultraminiature video-rate forward-view spectrally encoded endoscopy with straight axis configuration

    Science.gov (United States)

    Wang, Zhuo; Wu, Tzu-Yu; Hamm, Mark A.; Altshuler, Alexander; Mach, Anderson T.; Gilbody, Donald I.; Wu, Bin; Ganesan, Santosh N.; Chung, James P.; Ikuta, Mitsuhiro; Brauer, Jacob S.; Takeuchi, Seiji; Honda, Tokuyuki

    2017-02-01

    As one of the smallest endoscopes that have been demonstrated, the spectrally encoded endoscope (SEE) shows potential for use in minimally invasive surgeries. While the original SEE is designed for side-view applications, the forward-view (FV) scope is more desired by physicians for many clinical applications because it provides a more natural navigation. Several FV SEEs have been designed in the past, which involve either multiple optical elements or one optical element with multiple optically active surfaces. Here we report a complete FV SEE which comprises a rotating illumination probe within a drive cable, a sheath and a window to cover the optics, a customized spectrometer, hardware controllers for both motor control and synchronization, and a software suite to capture, process and store images and videos. In this solution, the optical axis is straight and the dispersion element, i.e. the grating, is designed such that the slightly focused light after the focusing element will be dispersed by the grating, covering forward view angles with high diffraction efficiencies. As such, the illumination probe is fabricated with a diameter of only 275 μm. The two-dimensional video-rate image acquisition is realized by rotating the illumination optics at 30 Hz. In one finished design, the scope diameter including the window assembly is 1.2 mm.

  16. How Fast Is Your Body Motion? Determining a Sufficient Frame Rate for an Optical Motion Tracking System Using Passive Markers: e0150993

    National Research Council Canada - National Science Library

    Min-Ho Song; Rolf Inge Godøy

    2016-01-01

    .... When using passive markers for the optical motion tracking, avoiding identity confusion between the markers becomes a problem as the speed of motion increases, necessitating a higher frame rate...

  17. Happiness and arousal: framing happiness as arousing results in lower happiness ratings for older adults

    OpenAIRE

    Bjalkebring, Par; Västfjäll, Daniel; Johansson, Boo E. A.

    2015-01-01

    Older adults have been shown to describe their happiness as lower in arousal when compared to younger adults. In addition, older adults prefer low arousal positive emotions over high arousal positive emotions in their daily lives. We experimentally investigated whether or not changing a few words in the description of happiness could influence a person’s rating of their happiness. We randomly assigned 193 participants, aged 22–92 years, to one of three conditions (high arousal, low arousal, o...

  18. A Video Rate Confocal Laser Beam Scanning Light Microscope Using An Image Dissector

    Science.gov (United States)

    Goldstein, Seth R.; Hubin, Thomas; Rosenthal, Scott; Washburn, Clayton

    1989-12-01

    A video rate confocal reflected light microscope with no moving parts has been developed. Return light from an acousto-optically raster scanned laser beam is imaged from the microscope stage onto the photocathode of an Image Dissector Tube (IDT). Confocal operation is achieved by appropriately raster scanning with the IDT x and y deflection coils so as to continuously "sample" that portion of the photocathode that is being instantaneously illuminated by the return image of the scanning laser spot. Optimum IDT scan parameters and geometric distortion correction parameters are determined under computer control within seconds and are then continuously applied to insure system alignment. The system is operational and reflected light images from a variety of objects have been obtained. The operating principle can be extended to fluorescence and transmission microscopy.

  19. Video-rate bioluminescence imaging of matrix metalloproteinase-2 secreted from a migrating cell.

    Directory of Open Access Journals (Sweden)

    Takahiro Suzuki

    Full Text Available BACKGROUND: Matrix metalloproteinase-2 (MMP-2 plays an important role in cancer progression and metastasis. MMP-2 is secreted as a pro-enzyme, which is activated by the membrane-bound proteins, and the polarized distribution of secretory and the membrane-associated MMP-2 has been investigated. However, the real-time visualizations of both MMP-2 secretion from the front edge of a migration cell and its distribution on the cell surface have not been reported. METHODOLOGY/PRINCIPAL FINDINGS: The method of video-rate bioluminescence imaging was applied to visualize exocytosis of MMP-2 from a living cell using Gaussia luciferase (GLase as a reporter. The luminescence signals of GLase were detected by a high speed electron-multiplying charge-coupled device camera (EM-CCD camera with a time resolution within 500 ms per image. The fusion protein of MMP-2 to GLase was expressed in a HeLa cell and exocytosis of MMP-2 was detected in a few seconds along the leading edge of a migrating HeLa cell. The membrane-associated MMP-2 was observed at the specific sites on the bottom side of the cells, suggesting that the sites of MMP-2 secretion are different from that of MMP-2 binding. CONCLUSIONS: We were the first to successfully demonstrate secretory dynamics of MMP-2 and the specific sites for polarized distribution of MMP-2 on the cell surface. The video-rate bioluminescence imaging using GLase is a useful method to investigate distribution and dynamics of secreted proteins on the whole surface of polarized cells in real time.

  20. Spatiotemporal super-resolution for low bitrate H.264 video

    OpenAIRE

    Anantrasirichai, N; Canagarajah, CN

    2010-01-01

    Super-resolution and frame interpolation enhance low resolution low-framerate videos. Such techniques are especially important for limited bandwidth communications. This paper proposes a novel technique to up-scale videos compressed with H.264 at low bit-rate both in spatial and temporal dimensions. A quantisation noise model is used in the super-resolution estimator, designed for low bitrate video, and a weighting map for decreasing inaccuracy of motion estimation are proposed. Results show ...

  1. COMPARATIVE STUDY OF COMPRESSION TECHNIQUES FOR SYNTHETIC VIDEOS

    OpenAIRE

    Ayman Abdalla; Ahmad Mazhar; Mosa Salah

    2014-01-01

    We evaluate the performance of three state of the art video codecs on synthetic videos. The evaluation is based on both subjective and objective quality metrics. The subjective quality of the compressed video sequences is evaluated using the Double Stimulus Impairment Scale (DSIS) assessment metric while the Peak Signal-to-Noise Ratio (PSNR) is used for the objective evaluation. An extensive number of experiments are conducted to study the effect of frame rate and resolution o...

  2. Framing the frame

    Directory of Open Access Journals (Sweden)

    Todd McElroy

    2007-08-01

    Full Text Available We examined how the goal of a decision task influences the perceived positive, negative valence of the alternatives and thereby the likelihood and direction of framing effects. In Study 1 we manipulated the goal to increase, decrease or maintain the commodity in question and found that when the goal of the task was to increase the commodity, a framing effect consistent with those typically observed in the literature was found. When the goal was to decrease, a framing effect opposite to the typical findings was observed whereas when the goal was to maintain, no framing effect was found. When we examined the decisions of the entire population, we did not observe a framing effect. In Study 2, we provided participants with a similar decision task except in this situation the goal was ambiguous, allowing us to observe participants' self-imposed goals and how they influenced choice preferences. The findings from Study 2 demonstrated individual variability in imposed goal and provided a conceptual replication of Study 1.

  3. PillCam® SB3 capsule: Does the increased frame rate eliminate the risk of missing lesions?

    Science.gov (United States)

    Monteiro, Sara; de Castro, Francisca Dias; Carvalho, Pedro Boal; Moreira, Maria João; Rosa, Bruno; Cotter, José

    2016-03-14

    Since its emergence in 2000, small bowel capsule endoscopy (SBCE) has assumed a pivotal role as an investigation method for small bowel diseases. The PillCam® SB2-ex offers 12 h of battery time, 4 more than the previous version (SB2). Rahman et al. recently found that the PillCam® SB2-ex has a significantly increased completion rate, although without higher diagnostic yield, compared with the SB2. We would like to discuss these somewhat surprising results and the new potentialities of the PillCam® SB3 regarding the diagnostic yield of small bowel studies. PillCam® SB3 offers improved image resolution and a faster adaptable frame rate over previous versions of SBCE. We recently compared the major duodenal papilla detection rate obtained with PillCam® SB3 and SB2 as a surrogate indicator of diagnostic yield in the proximal small bowel. The PillCam® SB3 had a significantly higher major duodenal papilla detection rate than the PillCam® SB2 (42.7% vs 24%, P = 0.015). Thus, the most recent version of the PillCam® capsule, SB3, may increase diagnostic yield, particularly in the proximal segments of the small bowel.
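    The significance of the 42.7% vs 24% detection-rate difference can be checked with a standard two-proportion z-test. The counts below are hypothetical (the abstract reports only percentages, not cohort sizes):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical cohorts of 75 each, matching the reported 42.7% and 24% rates
z = two_proportion_z(32, 75, 18, 75)
print(round(z, 2))   # |z| > 1.96 implies P < 0.05 (two-sided)
```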

  4. Frame Permutation Quantization

    OpenAIRE

    Nguyen, Ha Q.; Goyal, Vivek K; Lav R Varshney

    2009-01-01

    Frame permutation quantization (FPQ) is a new vector quantization technique using finite frames. In FPQ, a vector is encoded using a permutation source code to quantize its frame expansion. This means that the encoding is a partial ordering of the frame expansion coefficients. Compared to ordinary permutation source coding, FPQ produces a greater number of possible quantization rates and a higher maximum rate. Various representations for the partitions induced by FPQ are presented, and recons...

  5. Stand-Alone Front-End System for High-Frequency, High-Frame-Rate Coded Excitation Ultrasonic Imaging

    Science.gov (United States)

    Park, Jinhyoung; Hu, Changhong; Shung, K. Kirk

    2012-01-01

    A stand-alone front-end system for high-frequency coded excitation imaging was implemented to achieve a wider dynamic range. The system included an arbitrary waveform amplifier, an arbitrary waveform generator, an analog receiver, a motor position interpreter, a motor controller and power supplies. The digitized arbitrary waveforms at a sampling rate of 150 MHz could be programmed and converted to an analog signal. The pulse was subsequently amplified to excite an ultrasound transducer, and the maximum output voltage level achieved was 120 Vpp. The bandwidth of the arbitrary waveform amplifier was from 1 to 70 MHz. The noise figure of the preamplifier was less than 7.7 dB and the bandwidth was 95 MHz. Phantoms and biological tissues were imaged at a frame rate as high as 68 frames per second (fps) to evaluate the performance of the system. During the measurement, 40-MHz lithium niobate (LiNbO3) single-element lightweight (<0.28 g) transducers were utilized. The wire target measurement showed that the −6-dB axial resolution of a chirp-coded excitation was 50 µm and lateral resolution was 120 µm. The echo signal-to-noise ratios were found to be 54 and 65 dB for the short burst and coded excitation, respectively. The contrast resolution in a sphere phantom study was estimated to be 24 dB for the chirp-coded excitation and 15 dB for the short burst modes. In an in vivo study, zebrafish and mouse hearts were imaged. Boundaries of the zebrafish heart in the image could be differentiated because of the low-noise operation of the implemented system. In mouse heart images, valves and chambers could be readily visualized with the coded excitation. PMID:23443698
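    The echo SNR gain reported here for chirp-coded excitation (65 dB vs. 54 dB for the short burst) comes from pulse compression: a long linear-FM chirp is transmitted and the receive line is matched-filtered against it. A minimal numerical sketch of that principle (the chirp band, echo amplitude, and noise level below are illustrative, not the paper's settings):

```python
import numpy as np

def linear_chirp(f0, f1, duration, fs):
    """Linear-FM chirp sweeping f0 -> f1 Hz over `duration` seconds."""
    n = int(round(duration * fs))
    t = np.arange(n) / fs
    k = (f1 - f0) / duration                       # sweep rate, Hz/s
    return np.cos(2.0 * np.pi * (f0 * t + 0.5 * k * t * t))

def pulse_compress(rx, tx):
    """Matched filter: cross-correlate the receive line with the chirp."""
    return np.correlate(rx, tx, mode="same")

fs = 150e6                                         # 150 MHz sampling, as in the paper
tx = linear_chirp(30e6, 50e6, 2e-6, fs)            # illustrative 30-50 MHz, 2 us chirp

rng = np.random.default_rng(0)
rx = 0.5 * rng.standard_normal(4096)               # receiver noise
rx[2048:2048 + tx.size] += 0.3 * tx                # weak echo buried in the noise

compressed = pulse_compress(rx, tx)
peak = int(np.argmax(np.abs(compressed)))          # echo location after compression
```

    After compression, the spread-out echo energy collapses into a sharp correlation peak at the echo position, which is what buys the coded mode its extra echo signal-to-noise ratio.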

  6. Good clean fun? A content analysis of profanity in video games and its prevalence across game systems and ratings.

    Science.gov (United States)

    Ivory, James D; Williams, Dmitri; Martins, Nicole; Consalvo, Mia

    2009-08-01

    Although violent video game content and its effects have been examined extensively by empirical research, verbal aggression in the form of profanity has received less attention. Building on preliminary findings from previous studies, an extensive content analysis of profanity in video games was conducted using a sample of the 150 top-selling video games across all popular game platforms (including home consoles, portable consoles, and personal computers). The frequency of profanity, both in general and across three profanity categories, was measured and compared to games' ratings, sales, and platforms. Generally, profanity was found in about one in five games and appeared primarily in games rated for teenagers or above. Games containing profanity, however, tended to contain it frequently. Profanity was not found to be related to games' sales or platforms.

  7. Frames and semi-frames

    OpenAIRE

    Antoine, Jean-Pierre; Balazs, Peter

    2011-01-01

    Loosely speaking, a semi-frame is a generalized frame for which one of the frame bounds is absent. More precisely, given a total sequence in a Hilbert space, we speak of an upper (resp. lower) semi-frame if only the upper (resp. lower) frame bound is valid. Equivalently, for an upper semi-frame, the frame operator is bounded, but has an unbounded inverse, whereas a lower semi-frame has an unbounded frame operator, with bounded inverse. We study mostly upper semi-frames, bot...

  8. Effect of scanline orientation on ventricular flow propagation: assessment using high frame-rate color Doppler echocardiography

    Science.gov (United States)

    Greenberg, N. L.; Castro, P. L.; Drinko, J.; Garcia, M. J.; Thomas, J. D.

    2000-01-01

    Color M-mode echocardiography has recently been utilized to describe diastolic flow propagation velocity (Vp) in the left ventricle. While increasing temporal resolution from 15 to 200 Hz, this M-mode technique requires the user to select a single scanline, potentially limiting quantification of Vp due to the complex three-dimensional inflow pattern. We previously performed computational fluid dynamics simulations to demonstrate the insignificance of the scanline orientation; however, geometric complexity was limited. The purpose of this study was to utilize high frame-rate 2D color Doppler images to investigate the importance of scanline selection in patients for the quantification of Vp. 2D color Doppler images were digitally acquired at 50 frames/s in 6 subjects from the apical 4-chamber window (System 5, GE/Vingmed, Milwaukee, WI). Vp was determined for a set of scanlines positioned through 5 locations across the mitral annulus (from the anterior to posterior mitral annulus). An analysis of variance was performed to examine the differences in Vp as a function of scanline position. Vp was not affected by scanline position in sampled locations from the center of the mitral valve towards the posterior annulus. Although not statistically significant, there was a trend toward slower propagation velocities on the anterior side of the valve (60.8 +/- 16.7 vs. 54.4 +/- 13.6 cm/s). This study clinically validates our previous numerical experiment showing that Vp is insensitive to small perturbations of the scanline through the mitral valve. However, further investigation is necessary to examine the impact of ventricular geometry in pathologies including dilated cardiomyopathy.

  9. Optimal types of probe, and tissue Doppler frame rates, for use during tissue Doppler recording and off-line analysis of strain and strain rate in neonates at term.

    Science.gov (United States)

    Nestaas, Eirik; Støylen, Asbjørn; Fugelseth, Drude

    2008-10-01

    Measurements of strain and strain rate, obtained by tissue Doppler, might provide new parameters for assessing cardiac function in neonates. The noise-to-signal ratio is high. We investigated the effect of the frequency of the probe used, and the settings for tissue Doppler frame rate, on the noise in the analyses in three series of tissue Doppler images. In the first series, we used the 5S probe, with a frequency of 2.4 MHz, and the default frame rate. We used the 10S probe, with a frequency of 8.0 MHz, in the other two series, one with a low and one with the default frame rate. The noise was lower using the 5S rather than the 10S probe, and lower when using the low frame rate rather than the default rate with the 10S probe. Using the settings eligible for two-segment analyses with the lowest noise for each series, the noise was from 36 to 42% higher when using the 10S probe at the default frame rate, and from 13 to 14% higher when using the 10S probe at the low frame rate, compared to the 5S probe at the default frame rate. There were no differences in peak systolic strain or strain rate between the series. We found, therefore, that use of the 5S probe with the default setting for frame rate, along with a length of 1 mm and width of 2 mm for the region of interest, and a strain length of 10 mm, provided the optimal settings for two-segment analyses in this study.

  10. Error resilient H.264/AVC Video over Satellite for low Packet Loss Rates

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren; Andersen, Jakob Dahl

    2007-01-01

    The performance of video over satellite is simulated. The error resilience tools of intra macroblock refresh and slicing are optimized for live broadcast video over satellite. The improved performance using feedback, using a cross- layer approach, over the satellite link is also simulated. The new...

  11. Fair rate allocation of scalable multiple description video for many clients

    NARCIS (Netherlands)

    Taal, R.J.; Lagendijk, R.L.

    2005-01-01

    Peer-to-peer networks (P2P) form a distributed communication infrastructure that is particularly well matched to video streaming using multiple description coding. We form M descriptions using MDC-FEC building on a scalable version of the “Dirac” video coder. The M descriptions are streamed via M

  12. Dynamic Power-Saving Method for Wi-Fi Direct Based IoT Networks Considering Variable-Bit-Rate Video Traffic.

    Science.gov (United States)

    Jin, Meihua; Jung, Ji-Young; Lee, Jung-Ryun

    2016-10-12

    With the arrival of the era of the Internet of Things (IoT), Wi-Fi Direct is becoming an emerging wireless technology that allows devices to communicate through a direct connection anytime, anywhere. In Wi-Fi Direct-based IoT networks, all devices are categorized as either group owner (GO) or client. Since portability is emphasized in Wi-Fi Direct devices, it is essential to control the energy consumption of a device very efficiently. In order to avoid unnecessary power consumption by the GO, the Wi-Fi Direct standard defines two power-saving methods: Opportunistic Power Save and Notice of Absence (NoA). In this paper, we suggest an algorithm to enhance the energy efficiency of Wi-Fi Direct power saving, considering the characteristics of multimedia video traffic. The proposed algorithm utilizes the statistical distribution of video frame sizes and adjusts the lengths of the awake intervals in a beacon interval dynamically. In addition, considering the inter-dependency among video frames, the proposed algorithm ensures that a video frame having high priority is transmitted with higher probability than frames having low priority. Simulation results show that the proposed method outperforms the traditional NoA method in terms of average delay and energy efficiency.
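    The core idea, adapting the GO's awake window to the video-frame-size statistics and frame priority, can be sketched as follows; the quantile rule, headroom factor, link rate, and priority weights are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

BEACON_INTERVAL_MS = 100.0
LINK_RATE_BPS = 54e6              # assumed PHY rate

def awake_interval_ms(frame_sizes_bytes, quantile=0.95, priority=1.0, headroom=10.0):
    """Awake time sized to drain a high-quantile video frame at the link
    rate; higher-priority frames (e.g. I-frames) get a larger window."""
    q_bytes = np.quantile(frame_sizes_bytes, quantile)
    tx_time_ms = priority * (8.0 * q_bytes / LINK_RATE_BPS) * 1e3
    return float(min(BEACON_INTERVAL_MS, max(1.0, headroom * tx_time_ms)))

rng = np.random.default_rng(1)
sizes = rng.lognormal(mean=9.5, sigma=0.6, size=500)    # VBR-like frame sizes (bytes)
i_awake = awake_interval_ms(sizes, priority=2.0)        # I-frames: more airtime
b_awake = awake_interval_ms(sizes, priority=1.0)        # B-frames: baseline
```

    Sizing the window from a high quantile rather than the mean is what lets the GO sleep through most of the beacon interval while still absorbing the bursts of a variable-bit-rate stream.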

  13. Olympic Coast National Marine Sanctuary - stil110_0204c - Still frame shots of sediment extracted from video for survey area 110_0204c

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom-built camera sled outfitted with video equipment, lasers and lights was deployed from the NOAA research vessel Tatoosh during the month of September 2006 and...

  14. Olympic Coast National Marine Sanctuary - stil110_0204a - Still frame shots of sediment extracted from video for survey area 110_0204a.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom-built camera sled outfitted with video equipment, lasers and lights was deployed from the NOAA research vessel Tatoosh during the month of September 2006 and...

  15. stil113_0401p -- Still frame locations of sediment extracted from video imagery collected by Delta submersible in September 2001.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Delta submersible vehicle, outfitted with video equipment (and other devices), was deployed from the R/V Auriga during September 2001 to monitor seafloor...

  16. Slow motion replay detection of tennis video based on color auto-correlogram

    Science.gov (United States)

    Zhang, Xiaoli; Zhi, Min

    2012-04-01

    In this paper, an effective slow-motion replay detection method for tennis videos containing logo transitions is proposed. The method is based on the color auto-correlogram and proceeds in the following steps: first, detect candidate logo-transition areas in the video frame sequence; second, generate a logo template; then use the color auto-correlogram for similarity matching between video frames and the logo template within the candidate logo-transition areas; finally, select logo frames according to the matching results and locate the borders of the slow motion accurately by using the brightness change during the logo transition. Experiments show that, unlike previous approaches, this method greatly improves border-locating accuracy and can also be used for other sports videos that contain logo transitions. In addition, because the algorithm only processes the central area of each video frame, its speed is greatly improved.
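    A color auto-correlogram captures, for each quantized color and distance d, the probability that a pixel at distance d from a pixel of that color has the same color. A single-channel sketch (the published descriptor works on quantized color images; the bin count and distance set here are illustrative):

```python
import numpy as np

def color_autocorrelogram(img, n_bins=8, distances=(1, 3, 5)):
    """For each quantized level c and distance d, the probability that a
    pixel at offset d from a level-c pixel also has level c. Returns an
    (n_bins x len(distances)) feature matrix for a single-channel image."""
    q = (img.astype(np.int64) * n_bins) // 256            # uniform quantization
    h, w = q.shape
    feat = np.zeros((n_bins, len(distances)))
    for j, d in enumerate(distances):
        same = np.zeros(n_bins)
        total = np.zeros(n_bins)
        # compare each pixel with its four axis-aligned neighbours at offset d
        for dy, dx in ((d, 0), (-d, 0), (0, d), (0, -d)):
            a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
            b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
            for c in range(n_bins):
                mask = a == c
                total[c] += mask.sum()
                same[c] += (mask & (b == c)).sum()
        feat[:, j] = np.divide(same, total, out=np.zeros(n_bins), where=total > 0)
    return feat
```

    Matching a frame against the logo template then reduces to an L1 (or similar) distance between their feature matrices.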

  17. Multisensor data fusion for enhanced respiratory rate estimation in thermal videos.

    Science.gov (United States)

    Pereira, Carina B; Xinchi Yu; Blazek, Vladimir; Venema, Boudewijn; Leonhardt, Steffen

    2016-08-01

    Scientific studies have demonstrated that an atypical respiratory rate (RR) is frequently one of the earliest and major indicators of physiological distress. However, it is also described in the literature as "the neglected vital parameter", mainly due to shortcomings of clinically available monitoring techniques, which require attachment of sensors to the patient's body. The current paper introduces a novel approach that uses multisensor data fusion for enhanced RR estimation in thermal videos. It considers not only the temperature variation around the nostrils and mouth, but also the upward and downward movement of both shoulders. In order to analyze the performance of our approach, two experiments were carried out on five healthy candidates. While during phase A the subjects breathed normally, during phase B they simulated different breathing patterns. Thoracic effort was chosen as the gold standard to validate our algorithm. Our results show an excellent agreement between infrared thermography (IRT) and ground truth. While in phase A a mean correlation of 0.983 and a root-mean-square error of 0.240 bpm (breaths per minute) were obtained, in phase B they hovered around 0.995 and 0.890 bpm, respectively. In sum, IRT may be a promising clinical alternative to conventional sensors. Additionally, multisensor data fusion contributes to an enhancement of RR estimation and robustness.
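    The spectral core of such an RR estimator can be sketched as follows: each sensor signal (nostril temperature, shoulder motion) is detrended and its dominant frequency in the physiological band is taken as the rate. The naive averaging fusion, frame rate, and simulated signals below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def respiratory_rate_bpm(signal, fs):
    """Dominant frequency of the detrended signal within 0.1-1.0 Hz
    (6-60 breaths/min), returned in breaths per minute."""
    x = signal - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 1.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

fs = 30.0                                  # assumed thermal-camera frame rate
t = np.arange(0, 60, 1.0 / fs)             # one minute of video
nose = np.sin(2 * np.pi * 0.25 * t)        # nostril temperature, 15 bpm breathing
rng = np.random.default_rng(2)
shoulder = np.sin(2 * np.pi * 0.25 * t + 0.8) + 0.3 * rng.standard_normal(t.size)

# naive fusion: average the two per-sensor estimates
rr = 0.5 * (respiratory_rate_bpm(nose, fs) + respiratory_rate_bpm(shoulder, fs))
```

    Averaging the per-sensor estimates is the simplest possible fusion; combining the sensors makes the estimate robust when one modality (e.g. the nostril region during head motion) is temporarily unreliable.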

  18. Realization of a video-rate distributed aperture millimeter-wave imaging system using optical upconversion

    Science.gov (United States)

    Schuetz, Christopher; Martin, Richard; Dillon, Thomas; Yao, Peng; Mackrides, Daniel; Harrity, Charles; Zablocki, Alicia; Shreve, Kevin; Bonnett, James; Curt, Petersen; Prather, Dennis

    2013-05-01

    Passive imaging using millimeter waves (mmWs) has many advantages and applications in the defense and security markets. All terrestrial bodies emit mmW radiation and these wavelengths are able to penetrate smoke, fog/clouds/marine layers, and even clothing. One primary obstacle to imaging in this spectrum is that longer wavelengths require larger apertures to achieve the resolutions desired for many applications. Accordingly, lens-based focal plane systems and scanning systems tend to require large aperture optics, which increase the size and weight of such systems beyond what can be supported by many applications. To overcome this limitation, a distributed aperture detection scheme is used in which the effective aperture size can be increased without the associated volumetric increase in imager size. This distributed aperture system is realized through conversion of the received mmW energy into sidebands on an optical carrier. This conversion serves, in essence, to scale the mmW sparse aperture array signals onto a complementary optical array. The sidebands are subsequently stripped from the optical carrier and recombined to provide a real-time snapshot of the mmW signal. Using this technique, we have constructed a real-time, video-rate imager operating at 75 GHz. A distributed aperture consisting of 220 upconversion channels is used to realize 2.5k pixels with passive sensitivity. Details of the construction and operation of this imager as well as field testing results are presented herein.

  19. May I continue or should I stop? the effects of regulatory focus and message framings on video game players’ self-control

    OpenAIRE

    Ho, Shu-Hsun; Putthiwanit, Chutinon; Lin, Chia-Yin

    2011-01-01

    Two types of motivations exist in terms of regulatory focus: a promotion orientation concerned with advancement and achievement and a prevention orientation concerned with safety and security. The central premise of this research is that promotion-focused and prevention-focused players differ in their sensitivity to message frames and therefore respond with different levels of self-control. This study adopted a 2 (message frames: positive vs. negative) × 2 (regulatory focus: promotion vs. pre...

  20. Myocardial Strain Rate by Anatomic Doppler Spectrum: First Clinical Experience Using Retrospective Spectral Tissue Doppler from Ultra-High Frame Rate Imaging.

    Science.gov (United States)

    Lervik, Lars Christian Naterstad; Brekke, Birger; Aase, Svein Arne; Lønnebakken, Mai Tone; Stensvåg, Dordi; Amundsen, Brage H; Torp, Hans; Støylen, Asbjorn

    2017-09-01

    Strain rate imaging by tissue Doppler (TDI) is vulnerable to stationary reverberations and noise (clutter). Anatomic Doppler spectrum (ADS) presents retrospective spectral Doppler from ultra-high frame rate imaging (UFR-TDI) data for a region of interest, that is, ventricular wall or segment, at one time instance. This enables spectral assessment of strain rate (SR) without the influence of clutter. In this study, we assessed SR with ADS and conventional TDI in 20 patients with a recent myocardial infarction and 10 healthy volunteers. ADS-based SR correlated with fraction of scarred myocardium of the left ventricle (r = 0.68, p < 0.001), whereas SR by conventional TDI did not (r = 0.23, p = 0.30). ADS identified scarred myocardium and ADS Visual was the only method that differentiated transmural from non-transmural distribution of myocardial scar on a segmental level (p = 0.002). Finally, analysis of SR by ADS was feasible in a larger number of segments compared with SR by conventional TDI (p < 0.001). Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  1. Statistical conditional sampling for variable-resolution video compression.

    Directory of Open Access Journals (Sweden)

    Alexander Wong

    Full Text Available In this study, we investigate a variable-resolution approach to video compression based on Conditional Random Fields (CRFs) and statistical conditional sampling in order to further improve the compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the video shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability of the CRF model, using the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution.
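    The first step, identifying representative key-frames within a shot, can be sketched with a simple greedy histogram criterion; this stand-in criterion and its threshold are illustrative, not the paper's key-frame detector:

```python
import numpy as np

def select_keyframes(frames, n_bins=16, threshold=0.25):
    """Greedy key-frame selection: a frame becomes a key-frame when the L1
    distance between its normalized intensity histogram and that of the
    last key-frame exceeds the threshold."""
    keys = [0]
    ref = np.histogram(frames[0], bins=n_bins, range=(0, 256))[0]
    ref = ref / ref.sum()
    for i, frame in enumerate(frames[1:], start=1):
        h = np.histogram(frame, bins=n_bins, range=(0, 256))[0]
        h = h / h.sum()
        if np.abs(h - ref).sum() > threshold:
            keys.append(i)
            ref = h
    return keys

# two near-duplicate dark frames, then two near-duplicate bright frames
frames = [np.full((8, 8), v, dtype=np.uint8) for v in (10, 12, 200, 202)]
keys = select_keyframes(frames)
```

    The frames kept at full resolution by such a rule are exactly the ones from which the region-based dictionary would later be built.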

  2. Categorization of Fetal Heart Rate Decelerations in American and European Practice: Importance and Imperative of Avoiding Framing and Confirmation Biases.

    Science.gov (United States)

    Sholapurkar, Shashikant L

    2015-09-01

    Interpretation of electronic fetal monitoring (EFM) remains controversial and unsatisfactory. Fetal heart rate (FHR) decelerations are the commonest aberrant feature on cardiotocographs and considered "center-stage" in the interpretation of EFM. A recent American study suggested that the lack of correlation of American three-tier system to neonatal acidemia may be due to the current peculiar nomenclature of FHR decelerations leading to loss of meaning. The pioneers like Hon and Caldeyro-Barcia classified decelerations based primarily on time relationship to contractions and not on etiology per se. This critical analysis debates pros and cons of significant anchoring/framing and confirmation biases in defining different types of decelerations based primarily on the shape (slope) or time of descent. It would be important to identify benign early decelerations correctly to avoid unnecessary intervention as well as to improve the positive predictive value of the other types of decelerations. Currently the vast majority of decelerations are classed as "variable". This review shows that the most common rapid decelerations during contractions with trough corresponding to peak of contraction cannot be explained by "cord-compression" hypothesis but by direct/pure (defined here as not mediated through baro-/chemoreceptors) or non-hypoxic vagal reflex. These decelerations are benign, most likely and mainly a result of head-compression and hence should be called "early" rather than "variable". Standardization is important but should be appropriate and withstand scientific scrutiny. Significant framing and confirmation biases are necessarily unscientific and the succeeding three-tier interpretation systems and structures embodying these biases would be dysfunctional and clinically unhelpful. 
Clinical/pathophysiological analysis and avoidance of flaws/biases suggest that a more physiological and scientific categorization of decelerations should be based on time relationship to

  3. Biased representation of disturbance rates in the roadside sampling frame in boreal forests: implications for monitoring design

    Directory of Open Access Journals (Sweden)

    Steven L. Van Wilgenburg

    2015-12-01

    Full Text Available The North American Breeding Bird Survey (BBS is the principal source of data to inform researchers about the status of and trend for boreal forest birds. Unfortunately, little BBS coverage is available in the boreal forest, where increasing concern over the status of species breeding there has increased interest in northward expansion of the BBS. However, high disturbance rates in the boreal forest may complicate roadside monitoring. If the roadside sampling frame does not capture variation in disturbance rates because of either road placement or the use of roads for resource extraction, biased trend estimates might result. In this study, we examined roadside bias in the proportional representation of habitat disturbance via spatial data on forest "loss," forest fires, and anthropogenic disturbance. In each of 455 BBS routes, the area disturbed within multiple buffers away from the road was calculated and compared against the area disturbed in degree blocks and BBS strata. We found a nonlinear relationship between bias and distance from the road, suggesting forest loss and forest fires were underrepresented below 75 and 100 m, respectively. In contrast, anthropogenic disturbance was overrepresented at distances below 500 m and underrepresented thereafter. After accounting for distance from road, BBS routes were reasonably representative of the degree blocks they were within, with only a few strata showing biased representation. In general, anthropogenic disturbance is overrepresented in southern strata, and forest fires are underrepresented in almost all strata. Similar biases exist when comparing the entire road network and the subset sampled by BBS routes against the amount of disturbance within BBS strata; however, the magnitude of biases differed. 
Based on our results, we recommend that spatial stratification and rotating panel designs be used to spread limited BBS and off-road sampling effort in an unbiased fashion and that new BBS routes

  4. H.264/AVC Video Compressed Traces: Multifractal and Fractal Analysis

    Directory of Open Access Journals (Sweden)

    Samčović Andreja

    2006-01-01

    Full Text Available Publicly available long video traces encoded according to H.264/AVC were analyzed from the fractal and multifractal points of view. It was shown that such video traces, like other compressed videos (H.261, H.263, and MPEG-4 Version 2), exhibit an inherent long-range dependency, that is, fractal, property. Moreover, they have high bit rate variability, particularly at higher compression ratios. Such signals may be better characterized by multifractal (MF) analysis, since this approach describes both local and global features of the process. The multifractal spectra of the frame-size video traces showed that a higher compression ratio produces broader and less regular MF spectra, indicating a stronger MF nature and the existence of additive components in the video traces. Considering the individual frames (I, P, and B) and their MF spectra, one can confirm the additive nature of compressed video and the particular influence of these frame types on the whole MF spectrum. Since compressed video occupies a major part of the transmission bandwidth, the results obtained from MF analysis of compressed video may contribute to more accurate modeling of modern teletraffic. Moreover, by an appropriate choice of the method for estimating MF quantities, an inverse MF analysis is possible; that is, from a derived MF spectrum of an observed signal it is possible to recognize and extract the parts of the signal characterized by particular values of the multifractal parameters. Intensive simulations and the results obtained confirm the applicability and efficiency of MF analysis of compressed video.

  5. VIDEO-BASED POINT CLOUD GENERATION USING MULTIPLE ACTION CAMERAS

    Directory of Open Access Journals (Sweden)

    T. Teo

    2015-05-01

    Full Text Available Due to the development of action cameras, the use of video technology for collecting geo-spatial data has become an important trend. The objective of this study is to compare the image mode and video mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations while videos are taken along continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As action cameras usually have a large FOV in wide viewing mode, camera calibration plays an important role in correcting the lens distortion before image matching. Once the cameras had been calibrated, the authors used them to take video in an indoor environment. The videos were then converted into multiple frame images based on their frame rates. To overcome time-synchronization issues between videos from different viewpoints, an additional timer app was used to determine the time-shift factor between cameras for time alignment. A structure from motion (SfM) technique was utilized to obtain the image orientations. Then, the semi-global matching (SGM) algorithm was adopted to obtain dense 3D point clouds. The preliminary results indicated that the 3D points from 4K video are similar to those from 12 MP images, but the data acquisition performance of 4K video is more efficient than that of 12 MP digital images.
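    The time-alignment step, once the time-shift factor between two cameras is known, reduces to index arithmetic over the frame rates. `aligned_frame_indices` below is a hypothetical helper, not code from the paper:

```python
def aligned_frame_indices(fps_a, fps_b, shift_s, duration_s):
    """For each frame of camera A, find the frame index of camera B closest
    in time, given that B's clock starts `shift_s` seconds after A's."""
    pairs = []
    n_a, n_b = int(duration_s * fps_a), int(duration_s * fps_b)
    for i in range(n_a):
        t = i / fps_a                       # timestamp on A's clock
        j = round((t - shift_s) * fps_b)    # nearest B frame for that instant
        if 0 <= j < n_b:
            pairs.append((i, j))
    return pairs

# e.g. two 30 fps cameras, camera B started 0.1 s later
pairs = aligned_frame_indices(30.0, 30.0, 0.1, 1.0)
```

    The resulting index pairs are the synchronized multi-view frames that would then be fed to the SfM and dense-matching stages.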

  6. MedlinePlus FAQ: Framing

    Science.gov (United States)

    URL of this page: https://medlineplus.gov/faq/framing.html I'd like to link to MedlinePlus, but only if I can frame it. Why don't you allow this? To ...

  7. Feasibility of Radon projection acquisition for compressive imaging in MMW region based new video rate 16×16 GDD FPA camera

    Science.gov (United States)

    Levanon, Assaf; Konstantinovsky, Michael; Kopeika, Natan S.; Yitzhaky, Yitzhak; Stern, A.; Turak, Svetlana; Abramovich, Amir

    2015-05-01

    In this article we present preliminary results for the combination of two fields that have attracted much interest in recent years: (1) compressed imaging (CI), a joint sensing and compression process that attempts to exploit the large redundancy in typical images in order to capture fewer samples than usual; and (2) millimeter wave (MMW) imaging. MMW-based imaging systems are required for a large variety of applications in many growing fields such as medical treatments, homeland security, concealed weapon detection, and space technology. Moreover, the possibility of reliable imaging in low-visibility conditions such as heavy cloud, smoke, fog, and sandstorms in the MMW region generates high interest from military groups. The lack of inexpensive room-temperature imaging sensors makes it difficult to provide a suitable MMW system for many of the above applications. A system based on Glow Discharge Detector (GDD) Focal Plane Arrays (FPAs) can be very efficient for real-time imaging with significant results. The GDD is located in free space and can detect MMW radiation almost isotropically. In this article, we present a new approach to MMW image reconstruction by rotational scanning of the target. The collection process, based on Radon projections, allows implementation of compressive sensing principles in the MMW region. Feasibility of the concept was demonstrated with Radon line-imaging results. MMW imaging results with our recent sensor are also presented for the first time. The multiplexing frame rate of the 16×16 GDD FPA permits real-time video-rate imaging of 30 frames per second and comprehensive 3D MMW imaging. It uses commercial GDD lamps with 3 mm diameter, Ne indicator lamps, as pixel detectors. The combination of these two fields should yield significant improvement in MMW imaging research and open various new possibilities for compressive sensing techniques.
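    The acquisition geometry, one Radon projection per rotation angle followed by reconstruction, can be sketched with scikit-image; the phantom and the sparse angle set are illustrative, not the paper's sensor data:

```python
import numpy as np
from skimage.transform import radon, iradon

# Rotating the target in front of a line of detectors yields one Radon
# projection (sinogram column) per angle; the image is then recovered
# with filtered back-projection.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0                              # stand-in for a MMW-bright target

angles = np.linspace(0.0, 180.0, 45, endpoint=False)   # sparse angular sampling
sinogram = radon(image, theta=angles)                  # one column per rotation angle
recon = iradon(sinogram, theta=angles)                 # filtered back-projection
```

    With a compressive-sensing reconstruction in place of plain filtered back-projection, even fewer rotation angles would be needed, which is the point of combining the two fields.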

  8. Personalized video summarization based on group scoring

    OpenAIRE

    Darabi, K; G. Ghinea

    2014-01-01

    In this paper an expert-based model for generation of personalized video summaries is suggested. The video frames are initially scored and annotated by multiple video experts. Thereafter, the scores for the video segments that have been assigned the higher priorities by end users will be upgraded. Considering the required summary length, the highest scored video frames will be inserted into a personalized final summary. For evaluation purposes, the video summaries generated by our system have...

  9. Video Classification and Adaptive QoP/QoS Control for Multiresolution Video Applications on IPTV

    Directory of Open Access Journals (Sweden)

    Huang Shyh-Fang

    2012-01-01

    Full Text Available With the development of heterogeneous networks and video coding standards, multiresolution video applications over networks have become important. It is critical to ensure the service quality of the network for time-sensitive video services. Worldwide Interoperability for Microwave Access (WIMAX) is a good candidate for delivering video signals because through WIMAX the delivery quality based on the quality-of-service (QoS) setting can be guaranteed. The selection of suitable QoS parameters is, however, not trivial for service users. Instead, what a video service user is really concerned with is the video quality of presentation (QoP), which includes the video resolution, the fidelity, and the frame rate. In this paper, we present a quality control mechanism for multiresolution video coding structures over WIMAX networks and also investigate the relationship between QoP and QoS in end-to-end connections. Consequently, the video presentation quality can be simply mapped to the network requirements by a mapping table, and the end-to-end QoS is thereby achieved. We performed experiments with multiresolution MPEG coding over WIMAX networks. In addition to the QoP parameters, video characteristics such as the picture activity and the video mobility also affect the QoS significantly.
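    The mapping-table idea, translating a user-facing QoP request into a network QoS reservation, can be sketched as a simple lookup; the table entries, service classes, and fallback rule below are illustrative assumptions, not the paper's table:

```python
# Illustrative QoP -> QoS mapping: the user states presentation quality
# (resolution, frame rate) and the system derives the network reservation.
QOP_TO_QOS = {
    ("CIF", 15): {"class": "nrtPS", "bandwidth_kbps": 384},
    ("CIF", 30): {"class": "rtPS",  "bandwidth_kbps": 768},
    ("SD",  30): {"class": "rtPS",  "bandwidth_kbps": 2000},
    ("HD",  30): {"class": "UGS",   "bandwidth_kbps": 6000},
}

def qos_for(resolution, frame_rate):
    """Exact lookup, else fall back to the highest tabled rate below it."""
    rates = [f for (r, f) in QOP_TO_QOS if r == resolution and f <= frame_rate]
    if not rates:
        raise KeyError(f"no QoS mapping for {resolution}@{frame_rate} fps")
    return QOP_TO_QOS[(resolution, max(rates))]
```

    A table like this is what lets end users reason in presentation terms (resolution, fps) while the network layer reserves the corresponding service class and bandwidth.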

  10. still108_0201 -- Point coverage of locations of still frames extracted from video imagery which depict sediment types at various locations.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A Phantom DH2+2 remotely operated vehicle (ROV) outfitted with video equipment (and other devices) was deployed from the NOAA Ship McArthur II (AR04-04) in an...

  11. PanDAR: a wide-area, frame-rate, and full color lidar with foveated region using backfilling interpolation upsampling

    Science.gov (United States)

    Mundhenk, T. Nathan; Kim, Kyungnam; Owechko, Yuri

    2015-01-01

LIDAR devices for on-vehicle use need a wide field of view and good fidelity. For instance, a LIDAR for avoidance of landing collisions by a helicopter needs to see a wide field of view and show reasonable details of the area. The same is true for an online LIDAR scanning device placed on an automobile. In this paper, we describe a LIDAR system with full color and enhanced resolution that has an effective vertical scanning range of 60 degrees with a central 20-degree fovea. The extended range with fovea is achieved by using two standard Velodyne 32-HDL LIDARs placed head to head and counter rotating. The HDL LIDARs each scan 40 degrees vertically and a full 360 degrees horizontally with an outdoor effective range of 100 meters. Positioned head to head, they overlap by 20 degrees, creating a double-density fovea. The LIDAR returns from the two Velodyne sensors do not natively contain color. In order to add color, a Point Grey LadyBug panoramic camera is used to gather color data of the scene. In the first stage of our system, the two LIDAR point clouds and the LadyBug video are fused in real time at a frame rate of 10 Hz. A second stage is used to intelligently interpolate the point cloud and increase its resolution by approximately four times while maintaining accuracy with respect to the 3D scene. By using GPGPU programming, we can compute this at 10 Hz. Our backfilling interpolation method works by first computing local linear approximations from the perspective of the LIDAR depth map. The color features from the image are used to select point cloud support points that are the best points in a local group for building the local linear approximations. This makes the colored point cloud more detailed while maintaining fidelity to the 3D scene. Our system also makes objects appearing in the PanDAR display easier to recognize for a human operator.
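
The local-linear upsampling step described above can be sketched as follows. This is a loose, CPU-only illustration of fitting a small plane around each upsampled depth sample; it is not the authors' GPGPU implementation, and it omits the color-guided selection of support points.

```python
import numpy as np

def upsample_depth_local_linear(depth, factor=2):
    """Illustrative backfilling-style interpolation: upsample a LIDAR depth
    map by fitting a local linear (planar) model z ~ a*y + b*x + c around
    each target sample. Hypothetical simplification of the paper's method."""
    h, w = depth.shape
    out = np.zeros((h * factor, w * factor), dtype=float)
    for i in range(h * factor):
        for j in range(w * factor):
            # Target coordinates expressed on the source-map grid
            y, x = i / factor, j / factor
            y0 = int(np.clip(y, 1, h - 2))
            x0 = int(np.clip(x, 1, w - 2))
            # 3x3 neighborhood of support points around the target
            ys, xs = np.mgrid[y0 - 1:y0 + 2, x0 - 1:x0 + 2]
            A = np.column_stack([ys.ravel(), xs.ravel(), np.ones(9)])
            z = depth[y0 - 1:y0 + 2, x0 - 1:x0 + 2].ravel()
            # Least-squares plane fit, then evaluate at the target point
            coef, *_ = np.linalg.lstsq(A, z, rcond=None)
            out[i, j] = coef[0] * y + coef[1] * x + coef[2]
    return out
```

On locally planar surfaces (roads, walls) this reproduces the scene exactly; the paper's color-feature weighting would additionally keep the fit from smearing across depth discontinuities.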

  12. Applying high frame-rate digital radiography and dual-energy distributed-sources for advanced tomosynthesis

    Science.gov (United States)

    Travish, Gil; Rangel, Felix J.; Evans, Mark A.; Schmiedehausen, Kristin

    2013-09-01

    Conventional radiography uses a single point x-ray source with a fan or cone beam to visualize various areas of the human body. An imager records the transmitted photons—historically film and now increasingly digital radiography (DR) flat panel detectors—followed by optional image post-processing. Some post-processing techniques of particular interest are tomosynthesis, and dual energy subtraction. Tomosynthesis adds the ability to recreate quasi-3D images from a series of 2D projections. These exposures are typically taken along an arc or other path; and, tomosynthesis reconstruction is used to form a three-dimensional representation of the area of interest. Dual-energy radiography adds the ability to enhance or "eliminate" structures based on their different attenuation of well-separated end-point energies in two exposures. These advanced capabilities come at a high cost in terms of complexity, imaging time, capital equipment, space, and potentially reduced image quality due to motion blur if acquired sequentially. Recently, the prospect of creating x-ray sources, which are composed of arrays of micro-emitters, has been put forward. These arrays offer a flat-panel geometry and may afford advantages in fabrication methodology, size and cost. They also facilitate the use of the dual energy technology. Here we examine the possibility of using such an array of x-ray sources combined with high frame-rate (~kHz) DR detectors to produce advanced medical images without the need for moving gantries or other complex motion systems. Combining the advantages of dual energy imaging with the ability to determine the relative depth location of anatomical structures or pathological findings from imaging procedures should prove to be a powerful diagnostic tool. We also present use cases that would benefit from the capabilities of this modality.

  13. Intracardiac Vortex Dynamics by High-Frame-Rate Doppler Vortography-In Vivo Comparison With Vector Flow Mapping and 4-D Flow MRI.

    Science.gov (United States)

    Faurie, Julia; Baudet, Mathilde; Assi, Kondo Claude; Auger, Dominique; Gilbert, Guillaume; Tournoux, Francois; Garcia, Damien

    2017-02-01

    Recent studies have suggested that intracardiac vortex flow imaging could be of clinical interest to early diagnose the diastolic heart function. Doppler vortography has been introduced as a simple color Doppler method to detect and quantify intraventricular vortices. This method is able to locate a vortex core based on the recognition of an antisymmetric pattern in the Doppler velocity field. Because the heart is a fast-moving organ, high frame rates are needed to decipher the whole blood vortex dynamics during diastole. In this paper, we adapted the vortography method to high-frame-rate echocardiography using circular waves. Time-resolved Doppler vortography was first validated in vitro in an ideal forced vortex. We observed a strong correlation between the core vorticity determined by high-frame-rate vortography and the ground-truth vorticity. Vortography was also tested in vivo in ten healthy volunteers using high-frame-rate duplex ultrasonography. The main vortex that forms during left ventricular filling was tracked during two-three successive cardiac cycles, and its core vorticity was determined at a sampling rate up to 80 duplex images per heartbeat. Three echocardiographic apical views were evaluated. Vortography-derived vorticities were compared with those returned by the 2-D vector flow mapping approach. Comparison with 4-D flow magnetic resonance imaging was also performed in four of the ten volunteers. Strong intermethod agreements were observed when determining the peak vorticity during early filling. It is concluded that high-frame-rate Doppler vortography can accurately investigate the diastolic vortex dynamics.
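
The core idea of vortography, locating a vortex core by the antisymmetric pattern it imprints on the Doppler velocity field, can be illustrated with a small sketch. The scoring rule below (anti-correlation of a window with its point-reflected copy) is a plausible stand-in chosen for clarity, not the published estimator.

```python
import numpy as np

def vortex_core(doppler, half=4):
    """Score each pixel by how well its neighborhood satisfies the vortex
    antisymmetry v(core + r) ~ -v(core - r), and return the best location.
    Illustrative sketch only; not the published vortography algorithm."""
    h, w = doppler.shape
    best, core = -np.inf, None
    for i in range(half, h - half):
        for j in range(half, w - half):
            win = doppler[i - half:i + half + 1, j - half:j + half + 1]
            reflected = win[::-1, ::-1]        # point reflection about center
            denom = np.sum(win * win)
            if denom == 0:
                continue
            # Score is 1 for a perfectly antisymmetric (vortex-like) window
            score = -np.sum(win * reflected) / denom
            if score > best:
                best, core = score, (i, j)
    return core, best
```

Because the reflected window has the same energy as the original, the score is bounded by 1 (Cauchy-Schwarz) and reaches it only where the field is exactly antisymmetric.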

  14. Subjective rating and objective evaluation of the acoustic and indoor climate conditions in video conferencing rooms

    DEFF Research Database (Denmark)

    Hauervig-Jørgensen, Charlotte; Jeong, Cheol-Ho; Toftum, Jørn

    2017-01-01

Today, face-to-face meetings are frequently replaced by video conferences in order to reduce costs and carbon footprint related to travels and to increase company efficiency. Yet, complaints about the difficulty of understanding the speech of the participants in both rooms of the video conference occur. The aim of this study is to find out the main causes of difficulties in speech communication. Correlation studies between subjective perceptions, collected through questionnaires, and objective acoustic and indoor climate parameters related to video conferencing were conducted. Based on four single-room and three combined-room measurements, it was found that the traditional measure of speech, such as the speech transmission index, was not correlated with the subjective classifications. Thus, a correlation analysis was conducted as an attempt to find the hidden factors behind the subjective perceptions...

  15. Selling Gender: Associations of Box Art Representation of Female Characters With Sales for Teen- and Mature-rated Video Games

    Science.gov (United States)

    Near, Christopher E.

    2012-01-01

    Content analysis of video games has consistently shown that women are portrayed much less frequently than men and in subordinate roles, often in “hypersexualized” ways. However, the relationship between portrayal of female characters and videogame sales has not previously been studied. In order to assess the cultural influence of video games on players, it is important to weight differently those games seen by the majority of players (in the millions), rather than a random sample of all games, many of which are seen by only a few thousand people. Box art adorning the front of video game boxes is a form of advertising seen by most game customers prior to purchase and should therefore predict sales if indeed particular depictions of female and male characters influence sales. Using a sample of 399 box art cases from games with ESRB ratings of Teen or Mature released in the US during the period of 2005 through 2010, this study shows that sales were positively related to sexualization of non-central female characters among cases with women present. In contrast, sales were negatively related to the presence of any central female characters (sexualized or non-sexualized) or the presence of female characters without male characters present. These findings suggest there is an economic motive for the marginalization and sexualization of women in video game box art, and that there is greater audience exposure to these stereotypical depictions than to alternative depictions because of their positive relationship to sales. PMID:23467816

  16. Selling Gender: Associations of Box Art Representation of Female Characters With Sales for Teen- and Mature-rated Video Games.

    Science.gov (United States)

    Near, Christopher E

    2013-02-01

    Content analysis of video games has consistently shown that women are portrayed much less frequently than men and in subordinate roles, often in "hypersexualized" ways. However, the relationship between portrayal of female characters and videogame sales has not previously been studied. In order to assess the cultural influence of video games on players, it is important to weight differently those games seen by the majority of players (in the millions), rather than a random sample of all games, many of which are seen by only a few thousand people. Box art adorning the front of video game boxes is a form of advertising seen by most game customers prior to purchase and should therefore predict sales if indeed particular depictions of female and male characters influence sales. Using a sample of 399 box art cases from games with ESRB ratings of Teen or Mature released in the US during the period of 2005 through 2010, this study shows that sales were positively related to sexualization of non-central female characters among cases with women present. In contrast, sales were negatively related to the presence of any central female characters (sexualized or non-sexualized) or the presence of female characters without male characters present. These findings suggest there is an economic motive for the marginalization and sexualization of women in video game box art, and that there is greater audience exposure to these stereotypical depictions than to alternative depictions because of their positive relationship to sales.

  17. Are pedicle screw perforation rates influenced by distance from the reference frame in multilevel registration using a computed tomography-based navigation system in the setting of scoliosis?

    Science.gov (United States)

    Uehara, Masashi; Takahashi, Jun; Ikegami, Shota; Kuraishi, Shugo; Shimizu, Masayuki; Futatsugi, Toshimasa; Oba, Hiroki; Kato, Hiroyuki

    2017-04-01

Pedicle screw fixation is commonly employed for the surgical correction of scoliosis but carries a risk of serious injury to neurovascular or visceral structures during screw insertion. To avoid these complications, we have been using a computed tomography (CT)-based navigation system during pedicle screw placement. As this could also prolong operation time, multilevel registration for pedicle screw insertion for posterior scoliosis surgery was developed to register three consecutive vertebrae at one time with CT-based navigation. The reference frame was set either at the caudal end of three consecutive vertebrae or at one or two vertebrae inferior to the most caudal registered vertebra, and then pedicle screws were inserted into the three consecutive registered vertebrae and into the one or two adjacent vertebrae. This study investigated the perforation rates of vertebrae at zero, one, two, three, or four or more levels above or below the vertebra at which the reference frame was set. This is a retrospective, single-center, single-surgeon study. One hundred sixty-one scoliosis patients who had undergone pedicle screw fixation were reviewed. Screw perforation rates were evaluated by postoperative CT. We evaluated 161 scoliosis patients (34 boys and 127 girls; mean±standard deviation age: 14.6±2.8 years) who underwent pedicle screw fixation guided by a CT-based navigation system between March 2006 and December 2015. A total of 2,203 pedicle screws were inserted into T2-L5 using multilevel registration with CT-based navigation. The overall perforation rates for Grade 1, 2, or 3, Grade 2 or 3 (major perforations), and Grade 3 perforations (violations) were as follows: vertebrae at which the reference frame was set: 15.9%, 6.1%, and 2.5%; one vertebra above or below the reference frame vertebra: 16.5%, 4.0%, and 1.2%; two vertebrae above or below the reference frame vertebra: 20.7%, 8.7%, and 2.3%; three vertebrae above or below the reference frame vertebra: 23

  18. Influence of image compression on the quality of UNB pan-sharpened imagery: a case study with security video image frames

    Science.gov (United States)

    Adhamkhiabani, Sina Adham; Zhang, Yun; Fathollahi, Fatemeh

    2014-05-01

UNB Pan-sharp, also named FuzeGo, is an image fusion technique to produce high resolution color satellite images by fusing a high resolution panchromatic (monochrome) image and a low resolution multispectral (color) image. This is an effective solution that modern satellites have been using to capture high resolution color images at an ultra-high speed. Initial research on security camera systems shows that the UNB Pan-sharp technique can also be utilized to produce high resolution and highly sensitive color video images for various imaging and monitoring applications. Based on the UNB Pan-sharp technique, a video camera prototype system, called the UNB Super-camera system, was developed that captures high resolution panchromatic images and low resolution color images simultaneously, and produces real-time high resolution color video images on the fly. In a separate study, it was proved that the UNB Super Camera outperforms conventional 1-chip and 3-chip color cameras in image quality, especially when the illumination is low, such as in room lighting. In this research, the influence of image compression on the quality of UNB Pan-sharped high resolution color images is evaluated, since image compression is widely used in still and video cameras to reduce data volume and speed up data transfer. The results demonstrate that UNB Pan-sharp can consistently produce high resolution color images that have the same detail as the input high resolution panchromatic image and the same color as the input low resolution color image, regardless of the compression ratio and lighting condition. In addition, the high resolution color images produced by UNB Pan-sharp have higher sensitivity (signal to noise ratio) and better edge sharpness and color rendering than those of the same generation 1-chip color camera, regardless of the compression ratio and lighting condition.
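
Pan-sharpening in general injects high-resolution panchromatic detail into upsampled low-resolution color bands. UNB Pan-sharp itself is a proprietary, statistics-based method, so the sketch below uses the generic Brovey transform purely to illustrate the fusion idea it builds on.

```python
import numpy as np

def brovey_pansharpen(pan, ms):
    """Generic Brovey-transform pan-sharpening sketch (NOT the UNB method).
    pan: (H, W) float array of panchromatic intensities.
    ms:  (3, h, w) float array of color bands, with H = k*h and W = k*w."""
    k = pan.shape[0] // ms.shape[1]
    # Nearest-neighbour upsample of each color band to the pan grid
    up = ms.repeat(k, axis=1).repeat(k, axis=2)
    # Rescale every band by the ratio of pan detail to synthetic intensity
    intensity = up.mean(axis=0)
    ratio = pan / np.maximum(intensity, 1e-6)
    return up * ratio
```

The output keeps the spatial detail of the pan image while preserving the band ratios (hue) of the color input, which is the property the abstract's quality evaluation checks for.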

  19. The influence of frame rate on two-dimensional speckle-tracking strain measurements: a study on silico-simulated models and images recorded in patients.

    Science.gov (United States)

    Rösner, Assami; Barbosa, Daniel; Aarsæther, Erling; Kjønås, Didrik; Schirmer, Henrik; D'hooge, Jan

    2015-10-01

    Ultrasound-derived myocardial strain can render valuable diagnostic and prognostic information. However, acquisition settings can have an important impact on the measurements. Frame rate (i.e. temporal resolution) seems to be of particular importance. The aim of this study was to find the optimal range of frame rates needed for most accurate and reproducible 2D strain measurements using a 2D speckle-tracking software package. Synthetic two dimensional (2D) ultrasound grey-scale images of the left ventricle (LV) were generated in which the strain in longitudinal, circumferential, and radial direction were precisely known from the underlying kinematic LV model. Four different models were generated at frame rates between 20 and 110 Hz. The resulting images were repeatedly analysed. Results of the synthetic data were validated in 66 patients, where long- and short-axis recordings at different frame rates were analysed. In simulated data, accurate strain estimates could be achieved at >30 frames per cycle (FpC) for longitudinal and circumferential strains. Lower FpC underestimated strain systematically. Radial strain estimates were less accurate and less reproducible. Patient strain displayed the same plateaus as in the synthetic models. Higher noise and the presence of artefacts in patient data were followed by higher measurement variability. Standard machine settings with a FR of 50-60 Hz allow correct assessment of peak global longitudinal and circumferential strain. Correct definition of the region of interest within the myocardium as well as the reduction of noise and artefacts seem to be of highest importance for accurate 2D strain estimation. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.

  20. Organ donation on Web 2.0: content and audience analysis of organ donation videos on YouTube.

    Science.gov (United States)

    Tian, Yan

    2010-04-01

    This study examines the content of and audience response to organ donation videos on YouTube, a Web 2.0 platform, with framing theory. Positive frames were identified in both video content and audience comments. Analysis revealed a reciprocity relationship between media frames and audience frames. Videos covered content categories such as kidney, liver, organ donation registration process, and youth. Videos were favorably rated. No significant differences were found between videos produced by organizations and individuals in the United States and those produced in other countries. The findings provide insight into how new communication technologies are shaping health communication in ways that differ from traditional media. The implications of Web 2.0, characterized by user-generated content and interactivity, for health communication and health campaign practice are discussed.

  1. Framing Gangnam Style

    Directory of Open Access Journals (Sweden)

    Hyunsun Catherine Yoon

    2017-08-01

Full Text Available This paper examines the way in which news about Gangnam Style was framed in the Korean press. First released on 15th July 2012, it became the first video to pass two billion views on YouTube. Four hundred news articles published between July 2012 and March 2013 in two South Korean newspapers, Chosun Ilbo and Hankyoreh, were analyzed using the frame analysis method in five categories: industry/economy, globalization, cultural interest, criticism, and competition. The right-left opinion cleavage is important because news frames interact with official discourses, audience frames and prior knowledge, which consequently mediate effects on public opinion, policy debates, social movements and individual interpretations. Whilst the existing literature on Gangnam Style took a rather holistic approach, this study aimed to fill the lacuna, considering this phenomenon as a dynamic process, by segmenting different stages - recognition, spread, peak and continuation. Both newspapers acknowledged that Gangnam Style was an epochal event, but their perspectives and news frames were different; the globalization frame was most frequently used in Chosun Ilbo whereas the cultural interest frame was most often used in Hankyoreh. Although more critical approaches were found in Hankyoreh, reflecting the right-left opinion cleavage, both papers lacked critical appraisal and analysis of Gangnam Style’s reception in the broader context of the new Korean Wave.

  2. High-frame-rate Imaging of a Carotid Bifurcation using a Low-complexity Velocity Estimation Approach

    DEFF Research Database (Denmark)

    di Ianni, Tommaso; Villagómez Hoyos, Carlos Armando; Ewertsen, Caroline

    2017-01-01

    In this paper, a 2-D vector flow imaging (VFI) method developed by combining synthetic aperture sequential beamforming and directional transverse oscillation is used to image a carotid bifurcation. Ninety-six beamformed lines are sent from the probe to the host system for each VFI frame, enabling...

  3. Media Framing

    DEFF Research Database (Denmark)

    Pedersen, Rasmus T.

    2017-01-01

The concept of media framing refers to the way in which the news media organize and provide meaning to a news story by emphasizing some parts of reality and disregarding other parts. These patterns of emphasis and exclusion in news coverage create frames that can have considerable effects on news consumers’ perceptions and attitudes regarding the given issue or event. This entry briefly elaborates on the concept of media framing, presents key types of media frames, and introduces the research on media framing effects.

  4. Obscene Video Recognition Using Fuzzy SVM and New Sets of Features

    Directory of Open Access Journals (Sweden)

    Alireza Behrad

    2013-02-01

Full Text Available In this paper, a novel approach for identifying normal and obscene videos is proposed. In order to classify different episodes of a video independently and discard the need to process all frames, key frames are first extracted, and skin regions are detected for groups of video frames starting with key frames. In the second step, three different features are extracted for each episode of video: (1) structural features based on single-frame information, (2) features based on the spatiotemporal volume, and (3) motion-based features. The PCA-LDA method is then applied to reduce the size of the structural features and select more distinctive features. In the final step, we use a fuzzy or Weighted Support Vector Machine (WSVM) classifier to identify video episodes. We also employ a multilayer Kohonen network as an initial clustering algorithm to improve the discrimination of the extracted features into two classes of videos. Features based on motion and periodicity characteristics increase the efficiency of the proposed algorithm in videos with bad illumination and skin colour variation. The proposed method is evaluated using 1100 videos in different environmental and illumination conditions. The experimental results show a correct recognition rate of 94.2% for the proposed algorithm.
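
The first step of the pipeline, key-frame extraction, is not specified in detail in the abstract; a common stand-in, sketched here purely for illustration, is to flag frames whose grey-level histogram changes sharply from the previous frame.

```python
import numpy as np

def key_frames(frames, threshold=0.3):
    """Hypothetical key-frame extractor (not the paper's): mark a frame as a
    key frame when the total-variation distance between its grey-level
    histogram and the previous frame's exceeds a threshold.
    frames: iterable of 2-D uint8 arrays."""
    keys, prev = [], None
    for idx, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=32, range=(0, 256))
        hist = hist / hist.sum()               # normalized histogram
        if prev is None or 0.5 * np.abs(hist - prev).sum() > threshold:
            keys.append(idx)
        prev = hist
    return keys
```

Each key frame would then seed a group of frames for the skin-region detection and per-episode feature extraction that the abstract describes.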

  5. Exploring survival rates of companies in the UK video-games industry: An empirical study

    OpenAIRE

    Cabras, I.; Goumagias, N. D.; Fernandes, K.; Cowling, P.; Li, F.; Kudenko, D.; Devlin, S.; Nucciarelli, A.

    2016-01-01

    The study presented in this paper investigates companies operating in the UK video-game industry with regard to their levels of survivability. Using a unique dataset of companies founded between 2009 and 2014, and combining elements and theories from the fields of Organisational Ecology and Industrial Organisation, the authors develop a set of hierarchical logistic regressions to explore and examine the effects of a range of variables such as industry concentration, market size and density on...

  6. Coronary artery disease is associated with an increased mortality rate following video-assisted thoracoscopic lobectomy

    DEFF Research Database (Denmark)

    Sandri, Alberto; Petersen, Rene Horsleben; Decaluwé, Herbert

    2017-01-01

OBJECTIVE: To compare the incidence of major adverse cardiac events (MACE) and mortality following video-assisted thoracoscopic surgery (VATS) lobectomy in patients with and without coronary artery disease (CAD). METHODS: Multicentre retrospective analysis of 1699 patients undergoing VATS lobectomy (January 2012-March 2015). CAD definition: previous acute myocardial infarct (AMI), angina, percutaneous coronary intervention (PCI) or coronary artery bypass graft (CABG). MACE definition: postoperative acute myocardial ischemia, cardiac arrest or any cardiac death. Propensity score analysis was performed...

  7. Video-rate computational super-resolution and integral imaging at longwave-infrared wavelengths

    OpenAIRE

    Preciado, Miguel A.; Carles, Guillem; Harvey, Andrew R.

    2017-01-01

    We report the first computational super-resolved, multi-camera integral imaging at long-wave infrared (LWIR) wavelengths. A synchronized array of FLIR Lepton cameras was assembled, and computational super-resolution and integral-imaging reconstruction employed to generate video with light-field imaging capabilities, such as 3D imaging and recognition of partially obscured objects, while also providing a four-fold increase in effective pixel count. This approach to high-resolution imaging enab...

  8. A high resolution, high frame rate detector based on a microchannel plate read out with the Medipix2 counting CMOS pixel chip.

    CERN Document Server

    Mikulec, Bettina; McPhate, J B; Tremsin, A S; Siegmund, O H W; Clark, Allan G; CERN. Geneva

    2005-01-01

    The future of ground-based optical astronomy lies with advancements in adaptive optics (AO) to overcome the limitations that the atmosphere places on high resolution imaging. A key technology for AO systems on future very large telescopes are the wavefront sensors (WFS) which detect the optical phase error and send corrections to deformable mirrors. Telescopes with >30 m diameters will require WFS detectors that have large pixel formats (512x512), low noise (<3 e-/pixel) and very high frame rates (~1 kHz). These requirements have led to the idea of a bare CMOS active pixel device (the Medipix2 chip) functioning in counting mode as an anode with noiseless readout for a microchannel plate (MCP) detector and at 1 kHz continuous frame rate. First measurement results obtained with this novel detector are presented both for UV photons and beta particles.

  9. Backside illuminated CCD operating at 16,000,000 frames per second with sub-ten-photon sensitivity

    Science.gov (United States)

    Etoh, Takeharu G.; Dung, Nguyen H.; Dao, Son V. T.; Vo Le, Cuong; Tanaka, Masatoshi

    2011-08-01

    An ultra-high-speed and very high sensitivity video camera is developed. The highest frame rate reaches 16,000,000 frames per second (16 Mfps). The full well capacity is 22,000 e- at frame rates up to 4 Mfps, and 8000 e- at 16 Mfps. The pixel count is 165,072 (362×456) pixels. The total number of consecutive frames is 117, which can be doubled to 234 by interlaced imaging operation. The sensitivity is less than 10 photons/pixel.

  10. SECRETS OF SONG VIDEO

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2014-04-01

Full Text Available The article focuses on the origins of the song video as a TV and Internet genre. It also considers the creation of screen images in relation to a song's musical form and lyrics, in connection with the relevant principles of accent-based and phraseological video editing, filming techniques, and the use of additional frames and sound elements.

  11. High Frame-Rate, High Resolution Ultrasound Imaging With Multi-Line Transmission and Filtered-Delay Multiply And Sum Beamforming.

    Science.gov (United States)

    Matrone, Giulia; Ramalli, Alessandro; Savoia, Alessandro Stuart; Tortoli, Piero; Magenes, Giovanni

    2017-02-01

    Multi-Line Transmission (MLT) was recently demonstrated as a valuable tool to increase the frame rate of ultrasound images. In this approach, the multiple beams that are simultaneously transmitted may determine cross-talk artifacts that are typically reduced, although not eliminated, by the use of Tukey apodization on both transmission and reception apertures, which unfortunately worsens the image lateral resolution. In this paper we investigate the combination, and related performance, of Filtered-Delay Multiply And Sum (F-DMAS) beamforming with MLT for high frame-rate ultrasound imaging. F-DMAS is a non-linear beamformer based on the computation of the receive aperture spatial autocorrelation, which was recently proposed for use in ultrasound B-mode imaging by some of the authors. The main advantages of such beamformer are the improved contrast resolution, obtained by lowering the beam side lobes and narrowing the main lobe, and the increased noise rejection. This study shows that in MLT images, compared to standard Delay And Sum (DAS) beamforming including Tukey apodization, F-DMAS beamforming yields better suppression of cross-talk and improved lateral resolution. The method's effectiveness is demonstrated by simulations and phantom experiments. Preliminary in vivo cardiac images also show that the frame rate can be improved up to 8-fold by combining F-DMAS and MLT without affecting the image quality.
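
The DMAS combination rule at the heart of F-DMAS sums the signed square roots of all pairwise channel products, y(t) = Σ_{i<j} sign(s_i s_j)·√(|s_i s_j|). A compact sketch is below; it uses the algebraic identity that turns the pairwise sum into squares of sums, and it omits the band-pass filter around twice the transmit frequency that the "F" in F-DMAS denotes.

```python
import numpy as np

def dmas(signals):
    """Delay Multiply And Sum over pre-delayed channel data.
    signals: (n_channels, n_samples) array, already delay-aligned to the
    focal point. The published F-DMAS additionally band-pass filters the
    output around twice the transmit frequency (omitted here)."""
    # Signed square root preserves the sign of each sample
    s = np.sign(signals) * np.sqrt(np.abs(signals))
    total = s.sum(axis=0)
    # Identity: sum_{i<j} s_i*s_j = ((sum_i s_i)^2 - sum_i s_i^2) / 2,
    # which avoids the explicit O(N^2) loop over channel pairs
    return 0.5 * (total ** 2 - (s ** 2).sum(axis=0))
```

The squaring is what narrows the main lobe and suppresses side lobes relative to plain DAS, at the cost of the output spectrum shifting toward the second harmonic, hence the subsequent band-pass filter.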

  12. Analog Frame Store Memory.

    Science.gov (United States)

    1980-01-15

enhancement when coupled with the video acquisition performed by the Frame Store Memory in mode 2. [Scanned-report residue; recoverable details: Fairchild Imaging Systems, Syosset, NY; "Analog Frame Store Memory," January 1980, unclassified; contents list an Analog Frame Store Memory, an Analog Field Storage Device, and an Image Analyzer Digital Display (IADD).]

  13. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Benoit Macq

    2008-07-01

Full Text Available Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for robust streaming of images and videos over MANET. The proposed packet-based scheme has a low complexity and is compliant with JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application, and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  14. A video annotation methodology for interactive video sequence generation

    NARCIS (Netherlands)

    C.A. Lindley; R.A. Earnshaw; J.A. Vince

    2001-01-01

The FRAMES project within the RDN CRC (Cooperative Research Centre for Research Data Networks) has developed an experimental environment for dynamic virtual video sequence synthesis from databases of video data. A major issue for the development of dynamic interactive video applications

  15. Framing politics

    NARCIS (Netherlands)

    Lecheler, S.K.

    2010-01-01

    This dissertation supplies a number of research findings that add to a theory of news framing effects, and also to the understanding of the role media effects play in political communication. We show that researchers must think more about what actually constitutes a framing effect, and that a

  16. Framing theory

    NARCIS (Netherlands)

    de Vreese, C.H.; Lecheler, S.; Mazzoleni, G.; Barnhurst, K.G.; Ikeda, K.; Maia, R.C.M.; Wessler, H.

    2016-01-01

    Political issues can be viewed from different perspectives and they can be defined differently in the news media by emphasizing some aspects and leaving others aside. This is at the core of news framing theory. Framing originates within sociology and psychology and has become one of the most used

  17. Technological Frames

    Directory of Open Access Journals (Sweden)

    Karin Olesen

    2014-03-01

    Full Text Available The purpose of this article is to identify and explain the barriers that prevented the case study organization, an Australasian university, from implementing a groupware package. This is an insider action research case study, using qualitative semi-structured interviews and group and individual training to look at users’ technological frames around the implementation and use of a groupware product. Technological frames were used to enable a systematic examination of the assumptions, expectations, and knowledge of technology; in particular, the use of technological frames reveals aspects of user resistance. While addressing criticisms of the technological frames genre, this study uses technological frames as a lens to examine the underlying drivers and impediments to information systems (IS) implementation. In this case study, changes to a groupware product failed to be implemented, not because of user resistance to the product, but because of organizational politics. This study demonstrates how the culture of an organization may stifle the implementation of IS.

  18. Visual instance mining of news videos using a graph-based approach

    OpenAIRE

    Almendros Gutiérrez, David

    2014-01-01

    The aim of this thesis is to design a tool that performs visual instance mining for news video summarization, that is, extracting the relevant content of the video in order to recognize the storyline of the news. Initially, the video is sampled to obtain frames at a desired rate. Then, relevant content is detected in each frame, focusing on faces, text, and several objects that the user can select. Next, we use a graph-based clusteri...

  19. Localized specific absorption rate in the human head in metal-framed spectacles for 1.5 GHz hand-held mobile telephones

    Energy Technology Data Exchange (ETDEWEB)

    Wang, J.; Joko, T.; Fujikawa, O. [Nagoya Institute of Technology, Nagoya (Japan)

    1998-11-01

    Enhancements of the localized specific absorption rate (SAR) caused by metal-framed spectacles are analyzed numerically for 1.5 GHz hand-held mobile telephones. The finite-difference time-domain (FDTD) method and an anatomically based human head model are employed in the analysis. Enhancements up to 1.2 times for the ten-gram-averaged spatial peak SAR in the head and up to 2.75 times for the one-gram-averaged SAR in the eye are found, whereas there is no significant variation in the absorbed power or the averaged SAR in the whole head. The mechanism of localized SAR enhancement is clarified to be due to an induced current on the metal frame. 17 refs., 8 figs., 2 tabs.

  20. Error Transmission in Video Coding with Gaussian Noise

    Directory of Open Access Journals (Sweden)

    A Purwadi

    2015-06-01

    Full Text Available In video transmission, there is a possibility of packet loss and large load variation in the bandwidth. These are sources of network congestion, which can interfere with the communication data rate. The coding system used is a video coding standard, either MPEG-2 or H.263 with SNR scalability. The algorithms used for motion compensation and for reducing temporal and spatial redundancy are based on the Discrete Cosine Transform (DCT) and quantization. The transmission error is simulated by adding Gaussian noise (error) to the motion vectors. From the simulation results, the SNR and Peak Signal-to-Noise Ratio (PSNR) of the noisy video frames decline by an average of 3 dB, and the Mean Square Error (MSE) of the received video frames increases.
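
    The MSE/PSNR figures above follow from the standard distortion definitions; a minimal numpy sketch (toy frame and noise level, not the paper's MPEG-2/H.263 setup) of measuring the quality drop caused by additive Gaussian noise:

    ```python
    import numpy as np

    # Toy sketch: quality drop when Gaussian noise is added to an 8-bit frame,
    # using the standard MSE and PSNR definitions the abstract refers to.
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

    noisy = frame + rng.normal(0.0, 5.0, size=frame.shape)   # sigma = 5 noise
    noisy = np.clip(noisy, 0, 255)                           # keep 8-bit range

    mse = np.mean((frame - noisy) ** 2)
    psnr = 10.0 * np.log10(255.0 ** 2 / mse)                 # dB, peak = 255
    ```

    With a noise standard deviation of sigma, the MSE is roughly sigma squared, so larger noise on the motion-compensated signal maps directly to a lower PSNR.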

  1. Generation of super-resolution stills from video

    CSIR Research Space (South Africa)

    Duvenhage, B

    2014-11-01

    Full Text Available The real-time super-resolution technique discussed in this paper increases the effective pixel density of an image sensor by combining consecutive image frames from a video. In surveillance, the higher pixel density lowers the Nyquist rate...
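
    As a toy illustration of the idea (not the CSIR real-time algorithm), shift-and-add super-resolution interleaves sub-pixel-shifted low-resolution frames onto a denser grid; in this noise-free sketch with known half-pixel offsets the recovery is exact:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    hi = rng.random((32, 32))                  # "true" high-res scene

    # Four low-res frames, each a 2x downsample at a known half-pixel offset
    frames = {(dy, dx): hi[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}

    # Shift-and-add: interleave the frames back onto the 2x grid
    sr = np.zeros_like(hi)
    for (dy, dx), lo in frames.items():
        sr[dy::2, dx::2] = lo
    ```

    Real footage needs sub-pixel registration and noise handling between these two steps, but the effective pixel-density gain comes from exactly this interleaving.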

  2. M-Rated Video Games and Aggressive or Problem Behavior among Young Adolescents

    Science.gov (United States)

    Olson, Cheryl K.; Kutner, Lawrence A.; Baer, Lee; Beresin, Eugene V.; Warner, Dorothy E.; Nicholi, Armand M., II

    2009-01-01

    This research examined the potential relationship between adolescent problem behaviors and amount of time spent with violent electronic games. Survey data were collected from 1,254 7th and 8th grade students in two states. A "dose" of exposure to Mature-rated games was calculated using Entertainment Software Rating Board ratings of…

  3. Flash X-Ray Cinematography At Framing Rates Up To 4×10⁷ Images/Sec: Application To Terminal Ballistics

    Science.gov (United States)

    Jamet, Francis; Hatterer, Francis

    1983-08-01

    The two following cinematographic methods can be considered: 1. generation of an X-ray pulse train with a single X-ray tube, where the deionization time between two discharges limits the upper value of the frame repetition rate; 2. cineradiography with multiple tubes, where the time interval has no lower limit, but errors can result from parallax, making this method unsuitable for visualizing phenomena lacking axial symmetry. We present a novel tube including four flash X-ray sources equally spaced by only 20 mm, so that parallax can be neglected in most cases. The electrodes consist of four cathodes and a single anode on which deflectors are mounted in order to prevent the vapors produced in one discharge space from reaching the other discharge spaces. The shortest time interval separating two successive discharges is limited only by the X-ray pulse duration, i.e., 25 ns. The novel flash X-ray tube is used in connection with four 500 kV, 60 J Marx-surge generators. A dose of 1.94 μC/kg per pulse at 1 meter from the anode is reached. The frame recording method uses a fluorescent screen converting X-rays into light photons and a high speed electronic camera ("Imacon 790"). This new flash X-ray cinematographic device has been used for recording phenomena in terminal ballistics. Examples are given of frames showing the penetration of projectiles into a target.

  4. Transmission of compressed video

    Science.gov (United States)

    Pasch, H. L.

    1990-09-01

    An overview of video coding is presented. The aim is not to give a technical summary of possible coding techniques, but to address subjects related to video compression in general and to the transmission of compressed video in more detail. Bit rate reduction is in general possible by removing redundant information, removing information the eye does not use anyway, and reducing the quality of the video. The codecs used for reducing the bit rate can be divided into two groups: Constant Bit rate Codecs (CBCs), which keep the bit rate constant but vary the video quality; and Variable Bit rate Codecs (VBCs), which keep the video quality constant by varying the bit rate. VBCs can in general reach a higher video quality than CBCs while using less bandwidth, but they need a transmission system that allows the bandwidth of a connection to fluctuate in time. The current and the next generation of the PSTN do not allow this; ATM might. Several factors influence the quality of video: the bit error rate of the transmission channel, slip rate, packet loss rate/packet insertion rate, end-to-end delay, phase shift between voice and video, and bit rate. Based on the bit rate of the coded video, the following classification of coded video can be made: High Definition Television (HDTV); Broadcast Quality Television (BQTV); video conferencing; and video telephony. The properties of these classes are given. The video conferencing and video telephony equipment available now and in the next few years can be divided into three categories: conforming to the 1984 CCITT standard for video conferencing; conforming to the 1988 CCITT standard; and conforming to no standard.

  5. Framing truths

    OpenAIRE

    Kuester, Martin

    1992-01-01

    Framing truths: parodic structures in contemporary English-Canadian historical novels. - Toronto et al.: Univ. of Toronto Pr., 1992. - VIII, 192 pp. - Also: Winnipeg, Univ. of Manitoba, diss., 1990. - (Theory/culture; 12)

  6. Summarization of human activity videos via low-rank approximation

    OpenAIRE

    Mademlis, Ioannis; Tefas, Anastasios; Nikolaidis, Nikos; Pitas, Ioannis

    2017-01-01

    Summarization of videos depicting human activities is a timely problem with important applications, e.g., in the domains of surveillance or film/TV production, that steadily becomes more relevant. Research on video summarization has mainly relied on global clustering or local (frame-by-frame) saliency methods to provide automated algorithmic solutions for key-frame extraction. This work presents a method based on selecting as key-frames video frames able to optimally reconstruct the entire vi...

  7. Video special effects editing in MPEG-2 compressed video

    OpenAIRE

    Fernando, WAC; Canagarajah, CN; Bull, David

    2000-01-01

    With the increase of digital technology in video production, several types of complex video special effects editing have begun to appear in video clips. In this paper we consider fade-out and fade-in special effects editing in MPEG-2 compressed video without full frame decompression and motion estimation. We estimate the DCT coefficients and use these coefficients together with the existing motion vectors to produce these special effects in the compressed domain. Results show that both o...

  8. Five-Factor Personality Traits and Age Trajectories of Self-Rated Health: The Role of Question Framing

    Science.gov (United States)

    Löckenhoff, Corinna E.; Terracciano, Antonio; Ferrucci, Luigi; Costa, Paul T.

    2011-01-01

    We examined the influence of personality traits on mean levels and age trends in four single-item measures of self-rated health: general rating, comparison to age peers, comparison to past health, and expectations for future health. Community-dwelling participants (N = 1,683) completed 7,474 self-rated health assessments over a period of up to 19 years. In hierarchical linear modeling analyses, age-associated declines differed across the four health items. Across age groups, high neuroticism, low conscientiousness, low extraversion, and low openness were associated with worse health ratings, with notable differences across the four health items. Furthermore, high neuroticism predicted steeper declines in health ratings involving temporal comparisons. We consider theoretical implications regarding the mechanisms behind associations among personality traits and self-rated health. PMID:21299558

  9. Facial esthetics and the assignment of personality traits before and after orthognathic surgery rated on video clips.

    Science.gov (United States)

    Sinko, Klaus; Jagsch, Reinhold; Drog, Claudio; Mosgoeller, Wilhelm; Wutzl, Arno; Millesi, Gabriele; Klug, Clemens

    2018-01-01

    Typically, faces are assessed before and after surgical correction on still images by surgeons, orthodontists, the patients, and family members. We hypothesized that judgment of faces in motion and by naïve raters may more closely reflect the impact on patients' real life and the treatment impact on e.g. career chances. Therefore we assessed faces of dysgnathic patients (Class II, III and Laterognathia) on video clips. Class I faces served as anchor and controls. Each patient's face was assessed twice before and after treatment in changing sequence, by 155 naïve raters of similar age to the patients. The raters provided independent estimates on aesthetic trait pairs like ugly/beautiful and personality trait pairs like dominant/flexible. Furthermore, the perception of attractiveness, intelligence, health, the person's erotic aura, faithfulness, and five additional items were rated. We estimated the significance of the perceived treatment-related differences and the respective effect sizes by general linear models for repeated measures. The obtained results were comparable to our previous ratings on still images. There was an overall trend for faces in video clips to be rated along common stereotypes to a lesser extent than photographs. We observed significant class differences and treatment-related changes in most aesthetic traits (e.g. beauty, attractiveness); these were comparable for intelligence, erotic aura and, to some extent, healthy appearance. While some personality traits (e.g. faithfulness) did not differ between the classes or between baseline and after treatment, we found that the intervention significantly and effectively altered the perception of the personality trait self-confidence. The effect size was highest in Class III patients, smallest in Class II patients, and in between for patients with Laterognathia. All dysgnathic patients benefitted from orthognathic surgery. We conclude that motion can mitigate marked stereotypes but does not entirely

  10. Generic Film Forms for Dynamic Virtual Video Synthesis

    NARCIS (Netherlands)

    C.A. Lindley

    1999-01-01

    The FRAMES project within the RDN CRC (Cooperative Research Centre for Research Data Networks) is developing an experimental environment for video content-based retrieval and dynamic virtual video synthesis from archives of video data. The FRAMES research prototype is a video synthesis

  11. Non-intrusive burning rate measurement under pressure by evaluation of video data

    OpenAIRE

    Weiser, V.; Ebeling, H.; Weindel, M.; Eckl, W.; Klahn, T.

    2004-01-01

    A non-intrusive and simple-to-use method to determine the burning rate of propellants, fragile gas generators, and liquid or bulk materials in tubes is introduced. Minimal preparation is required. Via a high-speed CCD camera or an ordinary DV camcorder, the combustion process in a strand burner is digitised and automatically analysed. Good agreement with Crawford measurements is found. The method allows controlling the quality of high-precision burning rate measurements during eac...

  12. The Impact of Video Compression on Remote Cardiac Pulse Measurement Using Imaging Photoplethysmography

    Science.gov (United States)

    2017-05-30

    quality is human subjective perception assessed by a Mean Opinion Score (MOS). Alternatively, video quality may be assessed using one of numerous...cameras. Synchronization of the image capture from the array was achieved using a PCIe-6323 data acquisition card (National Instruments, Austin...large reductions of either video resolution or frame rate did not strongly impact iPPG pulse rate measurements [9]. A balanced approach may yield

  13. MPEG2 video parameter and no reference PSNR estimation

    DEFF Research Database (Denmark)

    Li, Huiying; Forchhammer, Søren

    2009-01-01

    MPEG coded video may be processed for quality assessment, postprocessed to reduce coding artifacts, or transcoded. Utilizing information about the MPEG stream may be useful for these tasks. This paper deals with estimating MPEG parameter information from the decoded video stream without access to the MPEG stream. This may be used in systems and applications where the coded stream is not accessible. Detection of MPEG I-frames and DCT (discrete cosine transform) block size is presented. For the I-frames, the quantization parameters are estimated. Combining these with statistics of the reconstructed DCT coefficients, the PSNR is estimated from the decoded video without reference images. Tests on decoded fixed-rate MPEG2 sequences demonstrate perfect detection rates and good performance of the PSNR estimation.
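
    The quantization-parameter route to a no-reference PSNR estimate can be sketched with the textbook uniform-quantization noise model, MSE ≈ q²/12 (illustrative only; the paper additionally combines the estimated parameters with DCT-coefficient statistics):

    ```python
    import numpy as np

    # If the quantizer step q can be recovered from the decoded stream,
    # uniform-quantization theory predicts MSE ~= q^2 / 12, which yields a
    # PSNR estimate without any reference image. Toy signal, not real MPEG.
    rng = np.random.default_rng(2)
    x = rng.uniform(0, 255, size=100_000)

    q = 8.0
    x_hat = np.round(x / q) * q                    # quantize/dequantize

    mse_true = np.mean((x - x_hat) ** 2)           # needs the reference
    mse_model = q ** 2 / 12.0                      # no-reference model
    psnr_est = 10.0 * np.log10(255.0 ** 2 / mse_model)
    ```

    The two MSE values agree closely for a well-spread signal, which is what makes the no-reference estimate workable.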

  14. An Introduction to Recording, Editing, and Streaming Picture-in-Picture Ultrasound Videos.

    Science.gov (United States)

    Rajasekaran, Sathish; Hall, Mederic M; Finnoff, Jonathan T

    2016-08-01

    This paper describes the process by which high-definition resolution (up to 1920 × 1080 pixels) ultrasound video can be captured in conjunction with high-definition video of the transducer position (picture-in-picture). In addition, we describe how to edit the recorded video feeds to combine both feeds, and to crop, resize, split, stitch, cut, annotate videos, and also change the frame rate, insert pictures, edit the audio feed, and use chroma keying. We also describe how to stream a picture-in-picture ultrasound feed during a videoconference. Copyright © 2016 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  15. Read noise for a 2.5 μm cutoff Teledyne H2RG at 1–1000 Hz frame rates

    Science.gov (United States)

    Smith, Roger M.; Hale, David

    2012-07-01

    A camera operating a Teledyne H2RG in H and Ks bands is under construction at Caltech to serve as a near-infrared tip-tilt sensor for the Keck-1 Laser Guide Star Adaptive Optics system. After imaging the full field for acquisition, small readout windows are placed around one or more natural guide stars anywhere in the AO corrected field of view. Windowed data may be streamed to RAM in the host for a limited time then written to disk as a single file, analogous to a “film strip”, or be transmitted indefinitely via a second fiber optic output to a dedicated computer providing real time control of the AO system. The various windows can be visited at differing cadences, depending on signal levels. We describe a readout algorithm that maximizes exposure duty cycle, minimizes latency, and achieves very low noise by resetting infrequently then synthesizing exposures from Sample Up The Ramp data. To illustrate which noise sources dominate under various conditions, noise measurements are presented as a function of synthesized frame rate and window sizes for a range of detector temperatures. The consequences of spatial variation in noise properties, and dependence on frame rate and temperature are discussed, together with probable causes of statistical outliers.
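
    The "synthesizing exposures from Sample Up The Ramp data" step can be sketched as a per-pixel straight-line fit to the non-destructive reads, which averages down read noise without resetting (all numbers below are invented for illustration, not the H2RG camera's actual parameters):

    ```python
    import numpy as np

    # Sample Up The Ramp: the pixel integrates flux while being read
    # non-destructively; a least-squares slope through the reads estimates
    # the flux with lower effective read noise than two-point sampling.
    rng = np.random.default_rng(3)
    n_reads, flux, read_noise = 32, 10.0, 15.0     # reads, e-/read, e- rms

    t = np.arange(n_reads, dtype=np.float64)
    ramp = flux * t + rng.normal(0.0, read_noise, size=n_reads)

    slope = np.polyfit(t, ramp, 1)[0]              # least-squares flux estimate
    ```

    The slope uncertainty falls roughly as n^(-3/2) with the number of reads, which is why infrequent resets plus ramp fitting achieve very low noise.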

  16. Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video.

    Science.gov (United States)

    Lee, Gil-Beom; Lee, Myeong-Jin; Lee, Woo-Kyung; Park, Joo-Heon; Kim, Tae-Hwan

    2017-03-22

    Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object's vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos.
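
    A toy rendering of the vertical-histogram partitioning step (the light-source orientation check is omitted, and the mask layout is invented):

    ```python
    import numpy as np

    # Count foreground pixels per column of a binary object mask and split
    # the object where the profile is lowest, roughly separating an upright
    # body (tall columns) from a flat, attached shadow-like region.
    mask = np.zeros((10, 12), dtype=int)
    mask[2:9, 1:5] = 1          # upright object: tall columns
    mask[7:9, 5:11] = 1         # low, flat region attached to it

    hist = mask.sum(axis=0)     # vertical histogram, one count per column
    interior = hist[1:-1]
    split = 1 + int(np.argmin(np.where(interior > 0, interior, np.inf)))
    ```

    On this layout the object columns count 7 pixels and the flat region 2, so the split lands at the first low column; a real detector would then test the flat part's orientation against the light-source regions.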

  17. TEM Video Compressive Sensing

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.

    2015-08-01

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental conditions
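
    The coded-aperture measurement model can be written y = Σ_t C_t ⊙ x_t: each sub-frame is masked by a different binary code and the masked sub-frames integrate into one camera frame. A minimal forward-model sketch (sizes and masks are arbitrary; the statistical CS inversion used for reconstruction is omitted):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    T, H, W = 8, 16, 16
    x = rng.random((T, H, W))                  # true sub-frames in [0, 1)
    C = rng.integers(0, 2, size=(T, H, W))     # binary coded-aperture masks

    y = (C * x).sum(axis=0)                    # one integrated camera frame
    ```

    Each camera pixel thus holds a code-weighted sum of T temporal samples; the inversion recovers the T sub-frames from this single frame by exploiting their sparsity.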

  18. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    Directory of Open Access Journals (Sweden)

    Pengyu Liu

    2013-01-01

    Full Text Available A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is adopted as the characteristics of visual perception analysis. Firstly, the algorithm employs motion vectors (MVs) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Secondly, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. Then, it combines the spatiotemporal saliency detection results to define the video region of interest (VROI). The simulation results validate that the proposed algorithm avoids a large amount of computation in the visual perception analysis compared with other existing algorithms; it also has better performance in saliency detection for videos and can realize fast saliency detection. It can be used as part of a standard video codec at medium-to-low bit-rates or combined with other algorithms in fast video coding.
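
    A toy version of the temporal-saliency step (MV noise filtering plus a translational-motion check; the field, filter, and threshold are invented for illustration, not taken from the paper):

    ```python
    import numpy as np

    # Median-filter a motion-vector field to suppress MV noise, subtract the
    # dominant translational (global) motion, and keep blocks whose residual
    # motion is large -- those form the temporal saliency region.
    rng = np.random.default_rng(8)
    mv = np.zeros((16, 16, 2))
    mv[...] = (2.0, 0.0)                      # global pan: 2 px right
    mv[5:9, 5:9] = (-3.0, 1.0)                # locally moving object
    mv += 0.1 * rng.normal(size=mv.shape)     # MV noise

    # 3x3 median filter per component (plain loop, no scipy needed)
    f = np.empty_like(mv)
    p = np.pad(mv, ((1, 1), (1, 1), (0, 0)), mode="edge")
    for i in range(16):
        for j in range(16):
            f[i, j] = np.median(p[i:i + 3, j:j + 3].reshape(-1, 2), axis=0)

    global_mv = np.median(f.reshape(-1, 2), axis=0)   # translational component
    residual = np.linalg.norm(f - global_mv, axis=2)
    salient = residual > 1.0                          # temporal saliency mask
    ```

    Because both steps reuse motion vectors the encoder already computed, the extra cost over plain encoding stays low, which is the point of the approach.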

  19. An Affect-Responsive Interactive Photo Frame

    NARCIS (Netherlands)

    Dibeklioğlu, H.; Kosunen, I.; Ortega Hortas, M.; Salah, A.A.; Zuzánek, P.; Salah, A.A.; Gevers, T.

    2010-01-01

    We develop an interactive photo-frame system in which a series of videos of a single person are automatically segmented and a response logic is derived to interact with the user in real-time. The system is composed of five modules. The first module analyzes the uploaded videos and prepares segments

  20. Online sparse representation for remote sensing compressed-sensed video sampling

    Science.gov (United States)

    Wang, Jie; Liu, Kun; Li, Sheng-liang; Zhang, Li

    2014-11-01

    Most recently, the emerging Compressed Sensing (CS) theory has brought a major breakthrough for data acquisition and recovery. It asserts that a signal which is highly compressible in a known basis can be reconstructed with high probability from samples taken well below the Nyquist sampling frequency. When applying CS to Remote Sensing (RS) video imaging, compressed image data can be acquired directly and efficiently by randomly projecting the original data to obtain linear and non-adaptive measurements. In this paper, with the help of a distributed video coding scheme, which is a low-complexity technique for resource-limited sensors, the frames of an RS video sequence are divided into Key frames (K frames) and Non-Key frames (CS frames). In other words, the input video sequence consists of many groups of pictures (GOPs), and each GOP consists of one K frame followed by several CS frames. Both are measured block-wise, but at different sampling rates. In this way, the major encoding computation burden is shifted to the decoder. At the decoder, the Side Information (SI) is generated for the CS frames using the traditional Motion-Compensated Interpolation (MCI) technique applied to the reconstructed key frames. The over-complete dictionary is trained by dictionary learning methods based on the SI. These learning methods include ICA-like, PCA, K-SVD, MOD, etc. Using these dictionaries, the CS frames can be reconstructed according to the sparse-land model. In the numerical experiments, the reconstruction performance of the ICA algorithm, often evaluated by Peak Signal-to-Noise Ratio (PSNR), has been compared with other online sparse representation algorithms. The simulation results show its advantages in reducing reconstruction time and its robustness in reconstruction performance when applying the ICA algorithm to remote sensing video reconstruction.
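
    The GOP sampling layout described above can be sketched as follows: one key frame per GOP is measured block-wise at a high rate, the CS frames at a low rate (the GOP length, block size, and rates below are made-up illustration values, not the paper's):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    gop, n_frames, block = 4, 12, 8          # GOP length, frames, 8x8 blocks
    rate_key, rate_cs = 0.7, 0.2             # per-block sampling rates

    n = block * block
    m_key = int(rate_key * n)                # measurements per key-frame block
    m_cs = int(rate_cs * n)

    Phi_key = rng.normal(size=(m_key, n))    # random measurement matrices
    Phi_cs = rng.normal(size=(m_cs, n))

    measurements = []
    for i in range(n_frames):
        x = rng.random(n)                    # one vectorized 8x8 block per frame
        Phi = Phi_key if i % gop == 0 else Phi_cs
        measurements.append(Phi @ x)         # linear, non-adaptive measurements

    sizes = [len(m) for m in measurements]
    ```

    The decoder then pays the heavy cost: it reconstructs key frames first, interpolates side information for the CS frames, and solves the sparse recovery per block.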

  1. Relating pressure measurements to phenomena observed in high speed video recordings during tests of explosive charges in a semi-confined blast chamber

    CSIR Research Space (South Africa)

    Mostert, FJ

    2012-09-01

    Full Text Available High speed video recordings of the fireball and post-detonative behaviour of the explosive products were obtained from the open end of the chamber. The framing rate of the video camera was 10 000 fps and the pressure measurements were obtained for at least 10 ms after...

  2. A game-theoretical pricing mechanism for multiuser rate allocation for video over WiMAX

    Science.gov (United States)

    Chen, Chao-An; Lo, Chi-Wen; Lin, Chia-Wen; Chen, Yung-Chang

    2010-07-01

    In multiuser rate allocation in a wireless network, strategic users can bias the rate allocation by misrepresenting their bandwidth demands to a base station, leading to an unfair allocation. Game-theoretical approaches have been proposed to address the unfair allocation problems caused by strategic users. However, existing approaches rely on a time-consuming iterative negotiation process. Besides, they cannot completely prevent unfair allocations caused by inconsistent strategic behaviors. To address these problems, we propose a Search Based Pricing Mechanism to reduce the communication time and to capture a user's strategic behavior. Our simulation results show that the proposed method significantly reduces the communication time and converges stably to an optimal allocation.

  3. Robust video transmission with distributed source coded auxiliary channel.

    Science.gov (United States)

    Wang, Jiajun; Majumdar, Abhik; Ramchandran, Kannan

    2009-12-01

    We propose a novel solution to the problem of robust, low-latency video transmission over lossy channels. Predictive video codecs, such as MPEG and H.26x, are very susceptible to prediction mismatch between encoder and decoder or "drift" when there are packet losses. These mismatches lead to a significant degradation in the decoded quality. To address this problem, we propose an auxiliary codec system that sends additional information alongside an MPEG or H.26x compressed video stream to correct for errors in decoded frames and mitigate drift. The proposed system is based on the principles of distributed source coding and uses the (possibly erroneous) MPEG/H.26x decoder reconstruction as side information at the auxiliary decoder. The distributed source coding framework depends upon knowing the statistical dependency (or correlation) between the source and the side information. We propose a recursive algorithm to analytically track the correlation between the original source frame and the erroneous MPEG/H.26x decoded frame. Finally, we propose a rate-distortion optimization scheme to allocate the rate used by the auxiliary encoder among the encoding blocks within a video frame. We implement the proposed system and present extensive simulation results that demonstrate significant gains in performance both visually and objectively (on the order of 2 dB in PSNR over forward error correction based solutions and 1.5 dB in PSNR over intrarefresh based solutions for typical scenarios) under tight latency constraints.

  4. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  5. A motion-tolerant approach for monitoring SpO2 and heart rate using photoplethysmography signal with dual frame length processing and multi-classifier fusion.

    Science.gov (United States)

    Fan, Feiyi; Yan, Yuepeng; Tang, Yongzhong; Zhang, Hao

    2017-12-01

    Monitoring pulse oxygen saturation (SpO2) and heart rate (HR) using photoplethysmography (PPG) signal contaminated by a motion artifact (MA) remains a difficult problem, especially when the oximeter is not equipped with a 3-axis accelerometer for adaptive noise cancellation. In this paper, we report a pioneering investigation on the impact of altering the frame length of Molgedey and Schuster independent component analysis (ICAMS) on performance, design a multi-classifier fusion strategy for selecting the PPG correlated signal component, and propose a novel approach to extract SpO2 and HR readings from PPG signal contaminated by strong MA interference. The algorithm comprises multiple stages, including dual frame length ICAMS, a multi-classifier-based PPG correlated component selector, line spectral analysis, tree-based HR monitoring, and post-processing. Our approach is evaluated by multi-subject tests. The root mean square error (RMSE) is calculated for each trial. Three statistical metrics are selected as performance evaluation criteria: mean RMSE, median RMSE and the standard deviation (SD) of RMSE. The experimental results demonstrate that a shorter ICAMS analysis window probably results in better performance in SpO2 estimation. Notably, the designed multi-classifier signal component selector achieved satisfactory performance. The subject tests indicate that our algorithm outperforms other baseline methods regarding accuracy under most criteria. The proposed work can contribute to improving the performance of current pulse oximetry and personal wearable monitoring devices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Framing scales and scaling frames

    NARCIS (Netherlands)

    Lieshout, van M.; Dewulf, A.; Aarts, M.N.C.; Termeer, C.J.A.M.

    2009-01-01

    Policy problems are not just out there. Actors highlight different aspects of a situation as problematic and situate the problem on different scales. In this study we will analyse the way actors apply scales in their talk (or texts) to frame the complex decision-making process of the establishment

  7. A comparative study of scalable video coding schemes utilizing wavelet technology

    Science.gov (United States)

    Schelkens, Peter; Andreopoulos, Yiannis; Barbarien, Joeri; Clerckx, Tom; Verdicchio, Fabio; Munteanu, Adrian; van der Schaar, Mihaela

    2004-02-01

    Video transmission over variable-bandwidth networks requires instantaneous bit-rate adaptation at the server site to provide an acceptable decoding quality. For this purpose, recent developments in video coding aim at providing a fully embedded bit-stream with seamless adaptation capabilities in bit-rate, frame-rate and resolution. A new promising technology in this context is wavelet-based video coding. Wavelets have already demonstrated their potential for quality and resolution scalability in still-image coding. This led to the investigation of various schemes for the compression of video, exploiting similar principles to generate embedded bit-streams. In this paper we present scalable wavelet-based video-coding technology with competitive rate-distortion behavior compared to standardized non-scalable technology.

  8. Robust Shot Boundary Detection from Video Using Dynamic Texture

    Directory of Open Access Journals (Sweden)

    Peng Taile

    2014-03-01

    Full Text Available Video shot boundary detection is a fundamental problem in computer vision and is important for video analysis and video understanding. Existing boundary-detection methods tend to be effective only for certain types of video data and therefore generalize poorly. We present a novel shot boundary detection algorithm based on video dynamic texture. First, two adjacent frames are read from a given video and normalized to the same size. Second, the frames are divided into sub-domains on a common grid, and the average gradient direction of each sub-domain is calculated to form the dynamic texture. Finally, the dynamic textures of adjacent frames are compared. Experiments on different types of video data show that our method has high generalization ability, achieving higher average precision and average recall than competing algorithms across video types.
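A minimal sketch of the sub-domain gradient-direction idea described above (the grid size, gradient operator and decision threshold are illustrative assumptions, not the authors' parameters):

```python
import numpy as np

def dynamic_texture(frame, grid=(4, 4)):
    """Average gradient direction per sub-domain (a crude 'dynamic texture').

    The frame is split into grid[0] x grid[1] blocks; for each block the
    mean gradient orientation (in radians) is recorded.
    """
    gy, gx = np.gradient(frame.astype(float))   # axis-0 then axis-1 gradients
    angles = np.arctan2(gy, gx)
    h, w = frame.shape
    bh, bw = h // grid[0], w // grid[1]
    tex = np.empty(grid)
    for i in range(grid[0]):
        for j in range(grid[1]):
            tex[i, j] = angles[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].mean()
    return tex

def is_shot_boundary(frame_a, frame_b, threshold=0.5):
    """Declare a cut when the mean absolute texture change exceeds threshold."""
    d = np.abs(dynamic_texture(frame_a) - dynamic_texture(frame_b)).mean()
    return d > threshold

# Frames with a dominant horizontal gradient vs. a vertical one: a brightness
# change within the same shot leaves the texture unchanged; a rotated scene
# (standing in for a cut) changes every block's orientation.
ramp_x = np.tile(np.arange(64), (64, 1)).astype(float)
ramp_y = ramp_x.T
brighter = ramp_x + 10.0                       # same shot, exposure change
print(is_shot_boundary(ramp_x, brighter))      # -> False
print(is_shot_boundary(ramp_x, ramp_y))        # -> True
```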

  9. Framing Feminism

    OpenAIRE

    Mendes, Kaitlynn

    2011-01-01

    This article analyses the framing in four British and American newspapers of the second wave feminist movement during its most politically active period (1968–1982). Using content and critical discourse analysis of 555 news articles, the article investigates how movement members were represented, what problems and solutions to women’s oppression/inequality were posed and whose voices were used. This paper identifies: opposition to the movement, support for the movement, conflict and movement ...

  10. Does rating the operation videos with a checklist score improve the effect of E-learning for bariatric surgical training? Study protocol for a randomized controlled trial.

    Science.gov (United States)

    De La Garza, Javier Rodrigo; Kowalewski, Karl-Friedrich; Friedrich, Mirco; Schmidt, Mona Wanda; Bruckner, Thomas; Kenngott, Hannes Götz; Fischer, Lars; Müller-Stich, Beat-Peter; Nickel, Felix

    2017-03-21

    Laparoscopic training has become an important part of surgical education. Laparoscopic Roux-en-Y gastric bypass (RYGB) is the most common bariatric procedure performed. Surgeons must be well trained prior to operating on a patient. Multimodality training is vital for bariatric surgery. E-learning with videos is a standard approach for training. The present study investigates whether scoring the operation videos with performance checklists improves learning effects and transfer to a simulated operation. This is a monocentric, two-arm, randomized controlled trial. The trainees are medical students from the University of Heidelberg in their clinical years with no prior laparoscopic experience. After a laparoscopic basic virtual reality (VR) training, 80 students are randomized into one of two arms in a 1:1 ratio to the checklist group (group A) and control group without a checklist (group B). After all students are given an introduction of the training center, VR trainer and laparoscopic instruments, they start with E-learning while watching explanations and videos of RYGB. Only group A will perform ratings with a modified Bariatric Objective Structured Assessment of Technical Skill (BOSATS) scale checklist for all videos watched. Group B watches the same videos without rating. Both groups will then perform an RYGB in the VR trainer as a primary endpoint and small bowel suturing as an additional test in the box trainer for evaluation. This study aims to assess if E-learning and rating bariatric surgical videos with a modified BOSATS checklist will improve the learning curve for medical students in an RYGB VR performance. This study may help in future laparoscopic and bariatric training courses. German Clinical Trials Register, DRKS00010493 . Registered on 20 May 2016.

  11. Mechanisms of video-game epilepsy.

    Science.gov (United States)

    Fylan, F; Harding, G F; Edson, A S; Webb, R M

    1999-01-01

    We aimed to elucidate the mechanisms underlying video-game epilepsy by comparing the flicker- and spatial-frequency ranges over which photic and pattern stimulation elicited photoparoxysmal responses in two different populations: (a) 25 patients with a history of seizures experienced while playing video games; and (b) 25 age- and medication-matched controls with a history of photosensitive epilepsy, but no history of video-game seizures. Abnormality ranges were determined by measuring photoparoxysmal EEG abnormalities as a function of the flicker frequency of patterned and diffuse intermittent photic stimulation (IPS) and the spatial frequency of patterns on a raster display. There was no significant difference between the groups in respect of the abnormality ranges elicited by patterned or diffuse IPS or by spatial patterns. When the groups were compared at one specific IPS frequency (50 Hz, the flicker frequency of European television displays), however, the video-game patients were significantly more likely to be sensitive. The results suggest that video-game seizures are a manifestation of photosensitive epilepsy. The increased sensitivity of video-game patients to IPS at 50 Hz indicates that display flicker may underlie video-game seizures. The similarity in photic- and pattern-stimulation ranges over which abnormalities are elicited in video-game patients and controls suggests that all patients with photosensitive epilepsy may be predisposed toward video-game-induced seizures. Photosensitivity screening should therefore include assessment by using both IPS at 50 Hz and patterns displayed on a television or monitor with a 50-Hz frame rate.

  12. Open-source telemedicine platform for wireless medical video communication.

    Science.gov (United States)

    Panayides, A; Eleftheriou, I; Pantziaris, M

    2013-01-01

    An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings.

  13. Open-Source Telemedicine Platform for Wireless Medical Video Communication

    Directory of Open Access Journals (Sweden)

    A. Panayides

    2013-01-01

    Full Text Available An m-health system for real-time wireless communication of medical video based on open-source software is presented. The objective is to deliver a low-cost telemedicine platform which will allow for reliable remote diagnosis m-health applications such as emergency incidents, mass population screening, and medical education purposes. The performance of the proposed system is demonstrated using five atherosclerotic plaque ultrasound videos. The videos are encoded at the clinically acquired resolution, in addition to lower, QCIF, and CIF resolutions, at different bitrates, and four different encoding structures. Commercially available wireless local area network (WLAN) and 3.5G high-speed packet access (HSPA) wireless channels are used to validate the developed platform. Objective video quality assessment is based on PSNR ratings, following calibration using the variable frame delay (VFD) algorithm that removes temporal mismatch between original and received videos. Clinical evaluation is based on atherosclerotic plaque ultrasound video assessment protocol. Experimental results show that adequate diagnostic quality wireless medical video communications are realized using the designed telemedicine platform. HSPA cellular networks provide for ultrasound video transmission at the acquired resolution, while VFD algorithm utilization bridges objective and subjective ratings.
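The objective quality metric used in both records above is PSNR between temporally aligned original and received frames. A minimal per-frame implementation is sketched below (the VFD alignment step itself is not reproduced; the frames are assumed already matched):

```python
import numpy as np

def psnr(original, received, peak=255.0):
    """Peak signal-to-noise ratio in dB between two frames."""
    mse = np.mean((original.astype(float) - received.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def mean_video_psnr(orig_frames, recv_frames):
    """Average per-frame PSNR; assumes the sequences are temporally aligned
    (the role VFD calibration plays in the paper)."""
    return float(np.mean([psnr(a, b) for a, b in zip(orig_frames, recv_frames)]))

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0              # uniform error of 10 grey levels
print(psnr(ref, noisy))         # 10*log10(255^2/100) ≈ 28.1 dB
```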

  14. Context based Coding of Quantized Alpha Planes for Video Objects

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren

    2002-01-01

    In object based video, each frame is a composition of objects that are coded separately. The composition is performed through the alpha plane that represents the transparency of the object. We present an alternative to MPEG-4 for coding of alpha planes that considers their specific properties. Comparisons in terms of rate and distortion are provided, showing that the proposed coding scheme for still alpha planes is better than the algorithms for I-frames used in MPEG-4.

  15. Computer-based video digitizer analysis of surface extension in maize roots: kinetics of growth rate changes during gravitropism

    Science.gov (United States)

    Ishikawa, H.; Hasenstein, K. H.; Evans, M. L.

    1991-01-01

    We used a video digitizer system to measure surface extension and curvature in gravistimulated primary roots of maize (Zea mays L.). Downward curvature began about 25 +/- 7 min after gravistimulation and resulted from a combination of enhanced growth along the upper surface and reduced growth along the lower surface relative to growth in vertically oriented controls. The roots curved at a rate of 1.4 +/- 0.5 degrees min-1 but the pattern of curvature varied somewhat. In about 35% of the samples the roots curved steadily downward and the rate of curvature slowed as the root neared 90 degrees. A final angle of about 90 degrees was reached 110 +/- 35 min after the start of gravistimulation. In about 65% of the samples there was a period of backward curvature (partial reversal of curvature) during the response. In some cases (about 15% of those showing a period of reverse bending) this period of backward curvature occurred before the root reached 90 degrees. Following transient backward curvature, downward curvature resumed and the root approached a final angle of about 90 degrees. In about 65% of the roots showing a period of reverse curvature, the roots curved steadily past the vertical, reaching maximum curvature about 205 +/- 65 min after gravistimulation. The direction of curvature then reversed back toward the vertical. After one or two oscillations about the vertical the roots obtained a vertical orientation and the distribution of growth within the root tip became the same as that prior to gravistimulation. The period of transient backward curvature coincided with and was evidently caused by enhancement of growth along the concave and inhibition of growth along the convex side of the curve, a pattern opposite to that prevailing in the earlier stages of downward curvature. There were periods during the gravitropic response when the normally unimodal growth-rate distribution within the elongation zone became bimodal with two peaks of rapid elongation separated by

  16. Automated Tracking of Whiskers in Videos of Head Fixed Rodents

    Science.gov (United States)

    Clack, Nathan G.; O'Connor, Daniel H.; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W.

    2012-01-01

    We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception. PMID:22792058

  17. Automated tracking of whiskers in videos of head fixed rodents.

    Science.gov (United States)

    Clack, Nathan G; O'Connor, Daniel H; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W

    2012-01-01

    We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception.
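The quoted throughput figures are mutually consistent, which is easy to verify:

```python
# Sanity check of the reported throughput: 8 Mpx/s per CPU processing
# 640 x 352 px frames should give roughly the quoted 35 frames per second.
frame_px = 640 * 352               # 225,280 pixels per frame
throughput_px_s = 8_000_000        # 8 Mpx/s/cpu as reported
fps = throughput_px_s / frame_px
print(round(fps, 1))               # -> 35.5, matching the quoted ~35 fps
```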

  18. Long-term video surveillance and automated analyses of hibernating bats in Virginia and Indiana, winters 2011-2014.

    Science.gov (United States)

    Hayman, David T.S.; Cryan, Paul; Fricker, Paul D.; Dannemiller, Nicholas G.

    2017-01-01

    This data release includes video files and image-processing results used to conduct the analyses of hibernation patterns in groups of bats reported by Hayman et al. (2017), "Long-term video surveillance and automated analyses reveal arousal patterns in groups of hibernating bats.”  Thermal-imaging surveillance video cameras were used to observe little brown bats (Myotis lucifugus) in a cave in Virginia and Indiana bats (M. sodalis) in a cave in Indiana during three winters between 2011 and 2014.  There are 740 video files used for analysis (‘Analysis videos’), organized into 7 folders by state/site and winter.  Total size of the video data set is 14.1 gigabytes.  Each video file in this analysis set represents one 24-hour period of observation, time-lapsed at a rate of one frame per 30 seconds of real time (video plays at 30 frames per second).  A folder of illustrative videos is also included, which shows all of the analysis days for one winter of monitoring merged into a single video clip, time-lapsed at a rate of one frame per two hours of real time.  The associated image-processing results are included in 7 data files, each representing computer derived values of mean pixel intensity in every 10th frame of the 740 time-lapsed video files, concatenated by site and winter of observation.  Details on the format of these data, as well as how they were processed and derived are included in Hayman et al. (2017) and with the project metadata on Science Base.Hayman, DTS, Cryan PM, Fricker PD, Dannemiller NG. 2017. Long-term video surveillance and automated analyses reveal arousal patterns in groups of hibernating bats. Methods Ecol Evol. 2017;00:1-9. https://doi.org/10.1111/2041-210X.12823
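The image-processing output described above (mean pixel intensity of every 10th frame) reduces each surveillance video to a one-dimensional time series in which arousal and movement events show up as intensity excursions. A minimal sketch, with synthetic frames standing in for the thermal video:

```python
import numpy as np

def mean_intensity_series(frames, step=10):
    """Mean pixel intensity of every `step`-th frame, as in the bat study,
    where frame-to-frame intensity changes flag arousal/movement events."""
    return np.array([frames[i].mean() for i in range(0, len(frames), step)])

# Synthetic stand-in for a thermal time-lapse: a constant background with a
# brief "arousal" (warm, bright region) in the middle of the recording.
frames = [np.full((48, 64), 20.0) for _ in range(100)]
for i in range(50, 60):
    frames[i][10:20, 10:20] += 100.0   # warm region raises the frame mean

series = mean_intensity_series(frames)
print(round(series.max() - series.min(), 2))   # -> 3.26, the event's jump
```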

  19. Wii, Kinect, and Move. Heart Rate, Oxygen Consumption, Energy Expenditure, and Ventilation due to Different Physically Active Video Game Systems in College Students

    OpenAIRE

    SCHEER, KRISTA S.; SIEBRANT, SARAH M.; Brown, Gregory A.; Brandon S. Shaw; Shaw, Ina

    2014-01-01

    Nintendo Wii, Sony Playstation Move, and Microsoft XBOX Kinect are home video gaming systems that involve player movement to control on-screen game play. Numerous investigations have demonstrated that playing Wii is moderate physical activity at best, but Move and Kinect have not been as thoroughly investigated. The purpose of this study was to compare heart rate, oxygen consumption, and ventilation while playing the games Wii Boxing, Kinect Boxing, and Move Gladiatorial Combat. Heart rate, o...

  20. Assessment of short-term variability in human spontaneous blink rate during video observation with or without head / chin support.

    Science.gov (United States)

    Doughty, Michael J

    2016-03-01

    The aim was to assess the variability of spontaneous blink rate (SBR) with and without a chin and forehead support. Forty-eight healthy non-contact lens wearers, aged from 20 to 39 years, had five-minute video recordings made under ambient lighting of 350 to 400 lux, while directing their gaze to a distant target at head height. Half the subjects (group 1) were seated resting against the chair head rest and the other half (group 2) were seated with chin and forehead at a slitlamp. The first 35 blinks were analysed in detail. As assessed over five minutes, 35 to 111 blinks were counted, with SBR between 6.9 and 21.8 blinks per minute (average 13.9 per minute). Over the initial 35 blinks, the momentary SBR values (calculated from the inter-blink intervals) averaged 24.8 blinks per minute in group 1 and 19.3 blinks per minute in group 2 (not significantly different, p = 0.273), but there was a statistically significant time-related change in momentary SBR across the 35 blinks in group 2. The variability in momentary SBR values, as assessed from successive blinks, had coefficient of variation (COV) values of 80 and 78 per cent, respectively, over 35 blinks. Averaged spontaneous blink rates over short time periods (that is, five minutes) should be suitable to compare various experimental paradigms, but if very short periods are used (for example, one minute or less), then there could be significant time-related changes, especially when a subject is seated with chin and forehead support. © 2016 Optometry Australia.
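The momentary SBR and its coefficient of variation are straightforward to compute from a list of blink times; the blink times below are illustrative, not data from the study:

```python
import numpy as np

def momentary_rates(blink_times_s):
    """Momentary blink rate (blinks/min) from successive inter-blink intervals."""
    ibi = np.diff(blink_times_s)      # seconds between successive blinks
    return 60.0 / ibi

def cov_percent(x):
    """Coefficient of variation as a percentage (population SD / mean)."""
    return 100.0 * np.std(x) / np.mean(x)

# Illustrative blink times (s) with large blink-to-blink scatter, of the
# kind that produces the ~80% COV values reported in the study.
times = np.array([0.0, 2.0, 7.0, 8.5, 14.0, 15.5, 21.0])
rates = momentary_rates(times)
print(rates.round(1))                 # per-interval rates, blinks/min
print(round(cov_percent(rates), 1))   # their variability, in per cent
```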

  1. Live, video-rate super-resolution microscopy using structured illumination and rapid GPU-based parallel processing.

    Science.gov (United States)

    Lefman, Jonathan; Scott, Keana; Stranick, Stephan

    2011-04-01

    Structured illumination fluorescence microscopy is a powerful super-resolution method that is capable of achieving a resolution below 100 nm. Each super-resolution image is computationally constructed from a set of differentially illuminated images. However, real-time application of structured illumination microscopy (SIM) has generally been limited due to the computational overhead needed to generate super-resolution images. Here, we have developed a real-time SIM system that incorporates graphic processing unit (GPU) based in-line parallel processing of raw/differentially illuminated images. By using GPU processing, the system has achieved a 90-fold increase in processing speed compared to performing equivalent operations on a multiprocessor computer--the total throughput of the system is limited by data acquisition speed, but not by image processing. Overall, more than 350 raw images (16-bit depth, 512 × 512 pixels) can be processed per second, resulting in a maximum frame rate of 39 super-resolution images per second. This ultrafast processing capability is used to provide immediate feedback of super-resolution images for real-time display. These developments are increasing the potential for sophisticated super-resolution imaging applications.
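The two throughput figures quoted above are consistent if roughly nine raw images compose each super-resolution frame, which matches the common 2D-SIM acquisition of 3 grating angles x 3 phases (an assumption; the abstract does not state the per-frame raw-image count):

```python
# Sanity check of the reported rates, assuming the common 2D-SIM scheme of
# 9 raw images (3 grating angles x 3 phases) per super-resolution frame.
raw_images_per_s = 350
raw_per_sim_frame = 9          # assumption: 3 angles x 3 phases
sim_fps = raw_images_per_s / raw_per_sim_frame
print(round(sim_fps, 1))       # -> 38.9, consistent with the quoted 39 fps
```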

  2. The Impact of a Prenatal Education Video on Rates of Breastfeeding Initiation and Exclusivity during the Newborn Hospital Stay in a Low-income Population.

    Science.gov (United States)

    Kellams, Ann L; Gurka, Kelly K; Hornsby, Paige P; Drake, Emily; Riffon, Mark; Gellerson, Daphne; Gulati, Gauri; Coleman, Valerie

    2016-02-01

    Guidelines recommend prenatal education to improve breastfeeding rates; however, effective educational interventions targeted at low-income, minority populations are needed as they remain less likely to breastfeed. To determine whether a low-cost prenatal education video improves hospital rates of breastfeeding initiation and exclusivity in a low-income population. A total of 522 low-income women were randomized during a prenatal care visit occurring in the third trimester to view an educational video on either breastfeeding or prenatal nutrition and exercise. Using multivariable analyses, breastfeeding initiation rates and exclusivity during the hospital stay were compared. Exposure to the intervention did not affect breastfeeding initiation rates or duration during the hospital stay. The lack of an effect on breastfeeding initiation persisted even after controlling for partner, parent, or other living at home and infant complications (adjusted odds ratio [OR] = 1.05, 95% CI, 0.70-1.56). In addition, breastfeeding exclusivity rates during the hospital stay did not differ between the groups (P = .87). This study suggests that an educational breastfeeding video alone is ineffective in improving the hospital breastfeeding practices of low-income women. Increasing breastfeeding rates in this at-risk population likely requires a multipronged effort begun early in pregnancy or preconception. © The Author(s) 2015.

  3. A Transparent Loss Recovery Scheme Using Packet Redirection for Wireless Video Transmissions

    Directory of Open Access Journals (Sweden)

    Chi-Huang Shih

    2008-05-01

    Full Text Available With the wide deployment of wireless networks and the rapid integration of various emerging networking technologies nowadays, Internet video applications must be updated on a sufficiently timely basis to support high end-to-end quality of service (QoS) levels over heterogeneous infrastructures. However, updating legacy applications to provide QoS support is both complex and expensive, since the video applications must communicate with the underlying architectures when carrying out QoS provisioning and, furthermore, should be both aware of and adaptive to variations in the network conditions. Accordingly, this paper presents a transparent loss recovery scheme to support robust video transmission on behalf of real-time streaming video applications. The proposed scheme includes the following two modules: (i) a transparent QoS mechanism which enables the QoS setup of video applications without requiring any modification of the existing legacy applications, through its use of an efficient packet redirection scheme; and (ii) an instant frame-level FEC technique which performs online FEC bandwidth allocation within TCP-friendly rate constraints on a frame-by-frame basis to minimize the additional FEC processing delay. The experimental results show that the proposed scheme achieves nearly the same video quality as optimal frame-level FEC under varying network conditions while maintaining low end-to-end delay.
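A simple way to see what per-frame FEC bandwidth allocation involves is an erasure-code sizing rule: add enough parity packets to cover the expected loss, capped by the rate budget. This is an illustrative sketch, not the allocation algorithm of the cited paper:

```python
import math

def parity_packets(k_data, loss_rate, max_total=None):
    """Minimal parity count so the expected number of surviving packets
    still covers the k_data source packets (simple erasure-code sizing).

    n is chosen so that n * (1 - loss_rate) >= k_data, optionally capped
    by a rate budget max_total (standing in for the TCP-friendly
    constraint). Illustrative only.
    """
    n = math.ceil(k_data / (1.0 - loss_rate))
    if max_total is not None:
        n = min(n, max_total)
    return max(n - k_data, 0)

# A 10-packet video frame over a channel with 20% packet loss:
print(parity_packets(10, 0.20))                  # -> 3 parity packets (n = 13)
# The same frame when the budget caps the frame at 12 packets total:
print(parity_packets(10, 0.20, max_total=12))    # -> 2
```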

  4. Selling Gender: Associations of Box Art Representation of Female Characters With Sales for Teen- and Mature-rated Video Games

    OpenAIRE

    Near, Christopher E.

    2012-01-01

    Content analysis of video games has consistently shown that women are portrayed much less frequently than men and in subordinate roles, often in “hypersexualized” ways. However, the relationship between portrayal of female characters and videogame sales has not previously been studied. In order to assess the cultural influence of video games on players, it is important to weight differently those games seen by the majority of players (in the millions), rather than a random sample of all games...

  5. Resource-Constrained Low-Complexity Video Coding for Wireless Transmission

    DEFF Research Database (Denmark)

    Ukhanova, Ann

    Constrained resources like memory, power, bandwidth and delay requirements in many mobile systems pose limitations for video applications. Standard approaches for video compression and transmission do not always satisfy system requirements. In this thesis we have shown that it is possible to modify standard approaches to improve video quality. We proposed a new metric for objective quality assessment that considers frame rate. As many applications deal with wireless video transmission, we performed an analysis of compression and transmission systems with a focus on the power-distortion trade-off. We proposed an approach for rate-distortion-complexity optimization of the upcoming video compression standard HEVC. We also provided a new method allowing a decrease of power consumption on mobile devices in 3G networks. Finally, we proposed low-delay and low-power approaches for video transmission over wireless personal area networks, including...

  6. Transform domain Wyner-Ziv video coding with refinement of noise residue and side information

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2010-01-01

    Distributed Video Coding (DVC) is a video coding paradigm which mainly exploits the source statistics at the decoder based on the availability of side information at the decoder. This paper considers feedback channel based Transform Domain Wyner-Ziv (TDWZ) DVC. The coding efficiency of TDWZ video coding does not match that of conventional video coding yet, mainly due to the quality of side information and inaccurate noise estimation. In this context, a novel TDWZ video decoder with noise residue refinement (NRR) and side information refinement (SIR) is proposed. The proposed refinement schemes successively update the estimated noise residue for noise modeling and the side information frame quality during decoding. Experimental results show that the proposed decoder can improve the Rate-Distortion (RD) performance of a state-of-the-art Wyner-Ziv video codec for the set of test sequences.

  7. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    Directory of Open Access Journals (Sweden)

    Michiaki Inoue

    2017-10-01

    Full Text Available This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time.

  8. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror.

    Science.gov (United States)

    Inoue, Michiaki; Gu, Qingyi; Jiang, Mingjun; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-10-29

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time.
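The benefit of the short exposure is direct to quantify: motion blur extent is the apparent image speed multiplied by the exposure time, which is why the mirror-based tracking (which reduces apparent speed) and the 0.33 ms exposure together suppress blur. The 3000 px/s speed below is an illustrative figure, not from the paper:

```python
# Motion blur extent = (apparent image speed) x (exposure time).
# At the system's 0.33 ms exposure, a target sweeping across the sensor at
# 3000 px/s smears by about 1 px; a conventional 1/60 s exposure would
# smear the same target across about 50 px.
def blur_px(speed_px_per_s, exposure_s):
    return speed_px_per_s * exposure_s

print(round(blur_px(3000, 0.00033), 2))   # -> 0.99 px
print(round(blur_px(3000, 1 / 60), 1))    # -> 50.0 px
```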

  9. A Study of Vehicle Detection and Counting System Based on Video

    Directory of Open Access Journals (Sweden)

    Shuang XU

    2014-10-01

    Full Text Available This paper studies a video-based vehicle detection and counting system comprising vehicle detection, image processing of vehicle targets, and vehicle counting. Vehicle detection uses the inter-frame difference method together with vehicle shadow segmentation. The image-processing stage applies grey-scale conversion of the colour image, image segmentation, mathematical morphology analysis and image filling to the detected targets, after which the target vehicle is extracted. The counting stage counts the detected vehicles: the system detects vehicles with the inter-frame difference method and completes counting by framing each vehicle and comparing it against a boundary line, achieving a high recognition rate, fast operation, and ease of use. The purpose of this work is to raise the level of modernization and automation in traffic management. The study can serve as a reference for the future development of related applications.
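A minimal sketch of the inter-frame difference detection step described above, with a simple connected-region count standing in for the paper's vehicle extraction and counting (threshold and region sizes are illustrative):

```python
from collections import deque
import numpy as np

def frame_difference_mask(prev, curr, threshold=25):
    """Binary motion mask from the absolute inter-frame difference."""
    return np.abs(curr.astype(int) - prev.astype(int)) > threshold

def count_blobs(mask, min_pixels=20):
    """Count 4-connected motion regions at least min_pixels large
    (a stand-in for the paper's vehicle extraction and counting)."""
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                size, q = 0, deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                       # flood fill one region
                    y, x = q.popleft()
                    size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if size >= min_pixels:
                    count += 1
    return count

# Static road, then two bright "vehicles" enter the frame.
prev = np.zeros((60, 80), dtype=np.uint8)
curr = prev.copy()
curr[10:20, 10:20] = 200      # vehicle 1 (100 px)
curr[35:45, 50:60] = 200      # vehicle 2 (100 px)
print(count_blobs(frame_difference_mask(prev, curr)))  # -> 2
```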

  10. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
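
    The supervision data in this record comes from averaging neighboring high-frame-rate frames to synthesize motion blur. A toy sketch of that averaging step (function name and frames are illustrative, not from the paper):

    ```python
    import numpy as np

    def synthesize_blur(frames):
        """Average a window of sharp high-frame-rate frames to mimic a long exposure."""
        return np.mean(np.stack(frames), axis=0)

    # Three sharp frames of a bright dot sliding one pixel per frame.
    frames = []
    for x in (2, 3, 4):
        f = np.zeros((5, 8))
        f[2, x] = 1.0
        frames.append(f)

    blurred = synthesize_blur(frames)
    print(blurred[2, 2:5])  # the dot is smeared across three samples of about 0.33
    ```

    The sharp center frame then serves as the training target for the blurred input.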

  11. High frame rate and high line density ultrasound imaging for local pulse wave velocity estimation using motion matching: A feasibility study on vessel phantoms.

    Science.gov (United States)

    Li, Fubing; He, Qiong; Huang, Chengwu; Liu, Ke; Shao, Jinhua; Luo, Jianwen

    2016-04-01

    Pulse wave imaging (PWI) is an ultrasound-based method to visualize the propagation of pulse wave and to quantitatively estimate regional pulse wave velocity (PWV) of the arteries within the imaging field of view (FOV). To guarantee the reliability of PWV measurement, high frame rate imaging is required, which can be achieved by reducing the line density of ultrasound imaging or transmitting plane wave at the expense of spatial resolution and/or signal-to-noise ratio (SNR). In this study, a composite, full-view imaging method using motion matching was proposed with both high temporal and spatial resolution. Ultrasound radiofrequency (RF) data of 4 sub-sectors, each with 34 beams, including a common beam, were acquired successively to achieve a frame rate of ∼507 Hz at an imaging depth of 35 mm. The acceleration profiles of the vessel wall estimated from the common beam were used to reconstruct the full-view (38-mm width, 128-beam) image sequence. The feasibility of mapping local PWV variation along the artery using PWI technique was preliminarily validated on both homogeneous and inhomogeneous polyvinyl alcohol (PVA) cryogel vessel phantoms. Regional PWVs for the three homogeneous phantoms measured by the proposed method were in accordance with the sparse imaging method (38-mm width, 32-beam) and plane wave imaging method. Local PWV was estimated using the above-mentioned three methods on 3 inhomogeneous phantoms, and good agreement was obtained in both the softer (1.91±0.24 m/s, 1.97±0.27 m/s and 1.78±0.28 m/s) and the stiffer region (4.17±0.46 m/s, 3.99±0.53 m/s and 4.27±0.49 m/s) of the phantoms. In addition to the improved spatial resolution, higher precision of local PWV estimation in low SNR circumstances was also obtained by the proposed method as compared with the sparse imaging method. The proposed method might be helpful in disease detections through mapping the local PWV of the vascular wall. Copyright © 2016 Elsevier B.V. All rights reserved.
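
    Regional PWV is conventionally the slope of wall-motion position versus pulse arrival time along the vessel. A hedged sketch of that regression; the beam positions and ideal arrival times below are synthetic, not data from the study:

    ```python
    import numpy as np

    def estimate_pwv(positions_m, arrival_times_s):
        """Fit position vs. pulse arrival time; the slope is the PWV in m/s."""
        slope, _intercept = np.polyfit(arrival_times_s, positions_m, 1)
        return slope

    # Synthetic onsets along a 38 mm imaging window for a 4 m/s pulse wave.
    positions = np.linspace(0.0, 0.038, 8)   # beam positions (m)
    arrivals = positions / 4.0               # ideal arrival times (s)
    print(estimate_pwv(positions, arrivals))  # close to 4.0
    ```

    With noisy arrival times the regression averages out jitter, which is why a high frame rate (dense, precise arrival-time samples) matters for reliable PWV.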

  12. A Computational Framework for Vertical Video Editing

    OpenAIRE

    Gandhi, Vineet; Ronfard, Rémi

    2015-01-01

    International audience; Vertical video editing is the process of digitally editing the image within the frame as opposed to horizontal video editing, which arranges the shots along a timeline. Vertical editing can be a time-consuming and error-prone process when using manual key-framing and simple interpolation. In this paper, we present a general framework for automatically computing a variety of cinematically plausible shots from a single input video suitable to the special case of live per...

  13. Video temporal alignment for object viewpoint

    OpenAIRE

    Papazoglou, Anestis; Del Pero, Luca; Ferrari, Vittorio

    2017-01-01

    We address the problem of temporally aligning semantically similar videos, for example two videos of cars on different tracks. We present an alignment method that establishes frame-to-frame correspondences such that the two cars are seen from a similar viewpoint (e.g. facing right), while also being temporally smooth and visually pleasing. Unlike previous works, we do not assume that the videos show the same scripted sequence of events. We compare against three alternative methods, including ...

  14. VBR video traffic models

    CERN Document Server

    Tanwir, Savera

    2014-01-01

    There has been a phenomenal growth in video applications over the past few years. An accurate traffic model of Variable Bit Rate (VBR) video is necessary for performance evaluation of a network design and for generating synthetic traffic that can be used for benchmarking a network. A large number of models for VBR video traffic have been proposed in the literature for different types of video in the past 20 years. Here, the authors have classified and surveyed these models and have also evaluated the models for H.264 AVC and MVC encoded video and discussed their findings.

  15. Optical coherence elastography based on high speed imaging of single-shot laser-induced acoustic waves at 16 kHz frame rate

    Science.gov (United States)

    Song, Shaozhen; Hsieh, Bao-Yu; Wei, Wei; Shen, Tueng; Pelivanov, Ivan; O'Donnell, Matthew; Wang, Ruikang K.

    2016-03-01

    Shear wave OCE (SW-OCE) is a novel technique that relies on the detection of the localized shear wave speed to map tissue elasticity. In this study, we demonstrate high speed imaging to capture single-shot transient shear wave propagation for SW-OCE. The fast imaging speed is achieved using a Fourier domain mode-locked (FDML) high-speed swept-source OCT (SS-OCT) system. The frame rate of shear wave imaging is 16 kHz, at an A-line rate of ~1.62 MHz, enabling the detection of high-frequency shear waves up to 8 kHz in bandwidth. Several measures are taken to improve the phase-stability of the SS-OCT system, and the measured displacement sensitivity is ~10 nanometers. To facilitate non-contact elastography, shear waves are generated with the photo-thermal effect using an ultra-violet pulsed laser. High frequency shear waves launched by the pulsed laser contain shorter wavelengths and carry rich localized elasticity information. Benefiting from single-shot acquisition, each SWI scan only takes 2.5 milliseconds, and the reconstruction of the elastogram can be performed in real-time with ~20 Hz refresh rate. SW-OCE measurements are demonstrated on porcine cornea ex vivo. This study is the first demonstration of an all-optical method to perform real-time 3D SW-OCE. It is hoped that this technique will be applicable in the clinic to obtain high-resolution localized quantitative measurements of tissue biomechanical properties.
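
    The measured shear wave speed maps to elasticity through the standard relation μ = ρc², with E ≈ 3μ for nearly incompressible soft tissue. A small sketch of that conversion; the density and speed values are illustrative:

    ```python
    def shear_modulus_pa(density_kg_m3, shear_speed_m_s):
        """mu = rho * c^2; returns (shear modulus, ~Young's modulus) in Pa."""
        mu = density_kg_m3 * shear_speed_m_s ** 2
        return mu, 3.0 * mu  # E ~= 3 * mu for nearly incompressible soft tissue

    mu, youngs = shear_modulus_pa(1000.0, 2.0)  # tissue-like density, c = 2 m/s
    print(mu, youngs)  # 4000.0 12000.0 (Pa)
    ```

    This is why a localized speed map, as produced by SW-OCE, is directly a localized elasticity map.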

  16. Satellite Video Stabilization with Geometric Distortion

    OpenAIRE

    Wang, Xia; Zhang, Guo; Shen, Xin; Li, Beibei; Jiang, Yonghua

    2016-01-01

    There is an exterior orientation difference in each satellite video frame, so corresponding points have different image locations in adjacent frames, which introduces geometric distortion. The projection model, affine model, and other classical image-stabilization registration models therefore cannot accurately describe the relationship between adjacent frames. This paper proposes a new satellite video image stabilization method with geometric distortion to solve the problem, based on the simulate...

  17. Motion-compensated scan conversion of interlaced video sequences

    Science.gov (United States)

    Schultz, Richard R.; Stevenson, Robert L.

    1996-03-01

    When an interlaced image sequence is viewed at the rate of sixty frames per second, the human visual system interpolates the data so that the missing fields are not noticeable. However, if frames are viewed individually, interlacing artifacts are quite prominent. This paper addresses the problem of deinterlacing image sequences for the purposes of analyzing video stills and generating high-resolution hardcopy of individual frames. Multiple interlaced frames are temporally integrated to estimate a single progressively-scanned still image, with motion compensation used between frames. A video observation model is defined which incorporates temporal information via estimated interframe motion vectors. The resulting ill-posed inverse problem is regularized through Bayesian maximum a posteriori (MAP) estimation, utilizing a discontinuity-preserving prior model for the spatial data. Progressively-scanned estimates computed from interlaced image sequences are shown at several spatial interpolation factors, since the multiframe Bayesian scan conversion algorithm is capable of simultaneously deinterlacing the data and enhancing spatial resolution. Problems encountered in the estimation of motion vectors from interlaced frames are addressed.
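
    As a point of contrast with the motion-compensated MAP method in this record, the simplest spatial-only deinterlacing baseline just weaves one field into a frame and interpolates the missing lines from their vertical neighbors. A minimal sketch, with an illustrative function name:

    ```python
    import numpy as np

    def deinterlace_line_average(field):
        """Weave one (top) field into a progressive frame, filling the missing
        lines by averaging their vertical neighbors. A spatial-only baseline,
        far simpler than the motion-compensated MAP approach."""
        h, w = field.shape
        frame = np.zeros((2 * h, w), dtype=float)
        frame[0::2] = field                              # keep the field lines
        frame[1:-1:2] = 0.5 * (field[:-1] + field[1:])   # interpolate missing lines
        frame[-1] = field[-1]                            # replicate the last line
        return frame

    field = np.array([[0.0, 0.0], [2.0, 2.0]])           # one 2-line field
    frame = deinterlace_line_average(field)
    print(frame[:, 0])  # smooth vertical ramp: 0, 1, 2, 2
    ```

    The paper's approach improves on this by borrowing the missing lines from temporally adjacent fields via motion compensation rather than inventing them spatially.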

  18. Practical, Real-Time, and Robust Watermarking on the Spatial Domain for High-Definition Video Contents

    Science.gov (United States)

    Kim, Kyung-Su; Lee, Hae-Yeoun; Im, Dong-Hyuck; Lee, Heung-Kyu

    Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. A DRM system uses watermarking to provide copyright protection and ownership authentication of multimedia contents. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. In particular, our approach takes a practical perspective to satisfy perceptual quality, real-time processing, and robustness requirements. We simplify and optimize the human visual system mask for real-time performance and also apply a dithering technique for invisibility. Extensive experiments are performed to prove that the proposed scheme satisfies the invisibility, real-time processing, and robustness requirements against video processing attacks. We concentrate upon video processing attacks that commonly occur when HD quality videos are displayed on portable devices. These attacks include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame-rate change.

  19. Robust video object cosegmentation.

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Li, Xuelong; Porikli, Fatih

    2015-10-01

    With ever-increasing volumes of video data, automatic extraction of salient object regions became even more significant for visual analytic solutions. This surge has also opened up opportunities for taking advantage of collective cues encapsulated in multiple videos in a cooperative manner. However, it also brings up major challenges, such as handling of drastic appearance, motion pattern, and pose variations, of foreground objects as well as indiscriminate backgrounds. Here, we present a cosegmentation framework to discover and segment out common object regions across multiple frames and multiple videos in a joint fashion. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity into an energy optimization framework that does not make restrictive assumptions on foreground appearance and motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor to integrate across-video correspondence from the conventional SIFT-flow into interframe motion flow from optical flow. This novel spatio-temporal SIFT flow generates reliable estimations of common foregrounds over the entire video data set. Experimental results show that our method outperforms the state-of-the-art on a new extensive data set (ViCoSeg).

  20. Target Area Extension in Synthetic Aperture Array Signal Processing for High-Frame-Rate Estimate of Two-Dimensional Motion Vector In vivo

    Science.gov (United States)

    Yagi, Shin-ichi; Yokoyama, Ryouta; Tamura, Kiyoshi; Sato, Masakazu

    2011-07-01

    A strategic synthetic aperture radar (SAR) scheme along a flight path has been developed, offering potential extensibility to a wide-range target area and excellent spatial resolution by utilizing two-way range stacking, matched filtering, and chirp-signal transmission. For simultaneous ultrahigh-frame-rate ultrasonic imaging of microdynamics in living tissue, one-way synthetic aperture array processing of a real-time-received two-dimensional (2D) echo signal followed by successive transmission is indispensable, in which the range stacking in SAR should be modified for the pulsed ultrasonic irradiation generated by the array transducer. Therefore, a modification of range stacking was proposed for pulsed radiation from a flexible point ultrasonic source. First, a one-way receiving range-stacking algorithm was described in the spatiotemporal frequency domain; it was then extended to account for the forward-range- and cross-range-dependent time delay of the 2D echo signal in each range bin for reconstruction of the target area. The overall system performance for a linear array transducer having 256 elements with a 3.0 MHz center frequency and a 0.25 mm pitch was verified on the reconstructed images in a numerical simulation and a hardware experiment.

  1. Wireless medical ultrasound video transmission through noisy channels.

    Science.gov (United States)

    Panayides, A; Pattichis, M S; Pattichis, C S

    2008-01-01

    Recent advances in video compression, such as the current state-of-the-art H.264/AVC standard, in conjunction with the increasing bitrates available through new technologies like 3G and WiMax, have brought mobile health (m-Health) healthcare systems and services closer to reality. Despite this momentum towards m-Health systems and especially e-Emergency systems, wireless channels remain error prone, while the absence of objective quality metrics limits the ability to provide medical video of adequate diagnostic quality at a required bitrate. In this paper we investigate different encoding schemes and loss rates in medical ultrasound video transmission and draw conclusions involving efficiency and the trade-off between bitrate and quality, while we highlight the relationship linking video quality and the error ratio of corrupted P and B frames. More specifically, we investigate IPPP, IBPBP, and IBBPBBP coding structures under packet loss rates of 2%, 5%, 8%, and 10% and find that the latter attains higher SNR ratings in all tested cases. A preliminary clinical evaluation shows that for SNR ratings higher than 30 dB, video diagnostic quality may be adequate, while above 30.5 dB the diagnostic information available in the reconstructed ultrasound video is close to that of the original.
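
    The SNR figures quoted in this record follow the usual peak signal-to-noise definition. A minimal sketch of the computation; the frame values are illustrative:

    ```python
    import numpy as np

    def psnr_db(reference, reconstructed, peak=255.0):
        """Peak signal-to-noise ratio (dB) between two frames of equal shape."""
        mse = np.mean((reference.astype(float) - reconstructed.astype(float)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    ref = np.full((8, 8), 128.0)
    rec = ref + 4.0                      # a uniform error of 4 grey levels
    print(round(psnr_db(ref, rec), 1))   # about 36.1 dB
    ```

    By this scale the paper's 30 dB diagnostic threshold corresponds to a root-mean-square error of roughly 8 grey levels per pixel.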

  2. Design, implementation and evaluation of a point cloud codec for Tele-Immersive Video

    NARCIS (Netherlands)

    R.N. Mekuria (Rufael); C.L. Blom (Kees); P.S. Cesar Garcia (Pablo Santiago)

    2017-01-01

    We present a generic and real-time time-varying point cloud codec for 3D immersive video. This codec is suitable for mixed reality applications where 3D point clouds are acquired at a fast rate. In this codec, intra frames are coded progressively in an octree subdivision. To further

  3. Color spaces in digital video

    Energy Technology Data Exchange (ETDEWEB)

    Gaunt, R.

    1997-05-01

    Whether it's photography, computer graphics, publishing, or video, each medium has a defined color space, or gamut, which defines the extent to which a given set of RGB colors can be mixed. When converting from one medium to another, an image must go through some form of conversion which maps colors into the destination color space. The conversion process isn't always straightforward, easy, or reversible. In video, two common analog composite color spaces are Y'UV (used in PAL) and Y'IQ (used in NTSC). These two color spaces have been around since the beginning of color television, and are primarily used in video transmission. Another analog scheme used in broadcast studios is Y', R'-Y', B'-Y' (used in Betacam and MII), which is a component format. Y', R'-Y', B'-Y' maintains the color information of RGB but in less space. On this, the digital component video specification, ITU-R Rec. 601-4 (formerly CCIR Rec. 601), was based. The color space for Rec. 601 is symbolized as Y'CbCr. Digital video formats such as DV, D1, Digital-S, etc., use Rec. 601 to define their color gamut. Digital composite video (for D2 tape) is digitized analog Y'UV and is seeing decreased use. Because so much information is contained in video, segments of any significant length usually require some form of data compression. All of the above-mentioned analog video formats are a means of reducing the bandwidth of RGB video. Video bulk storage devices, such as digital disk recorders, usually store frames in Y'CbCr format, even if no other compression method is used. Computer graphics and computer animations originate in RGB format because RGB must be used to calculate lighting and shadows. But storage of long animations in RGB format is usually cost prohibitive, and a 30 frame-per-second data rate of uncompressed RGB is beyond most computers. By taking advantage of certain aspects of the human visual system, true color 24-bit RGB video images can be compressed with minimal loss of visual information.
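
    The Rec. 601 conversion mentioned in this record has a familiar analog form. A sketch using the standard luma weights and chroma scale factors, for normalized [0,1] signals with chroma centered at 0.5 (no quantization or offset to digital code values):

    ```python
    def rgb_to_ycbcr(r, g, b):
        """Rec. 601 analog-form conversion from normalized [0,1] R'G'B'.
        Chroma is centered at 0.5; no quantization to digital code values."""
        y = 0.299 * r + 0.587 * g + 0.114 * b   # luma weights from Rec. 601
        cb = 0.564 * (b - y) + 0.5              # Cb ~ (B' - Y') / 1.772, offset
        cr = 0.713 * (r - y) + 0.5              # Cr ~ (R' - Y') / 1.402, offset
        return y, cb, cr

    # White maps to full luma and centered (neutral) chroma.
    print(rgb_to_ycbcr(1.0, 1.0, 1.0))  # approximately (1.0, 0.5, 0.5)
    ```

    Because most image detail lives in Y', the Cb/Cr channels can then be subsampled (as in 4:2:2 or 4:2:0) with little visible loss, which is the bandwidth saving the abstract describes.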

  4. Wall-motion tracking in fetal echocardiography-Influence of frame rate on longitudinal strain analysis assessed by two-dimensional speckle tracking.

    Science.gov (United States)

    Enzensberger, Christian; Achterberg, Friederike; Graupner, Oliver; Wolter, Aline; Herrmann, Johannes; Axt-Fliedner, Roland

    2017-06-01

    Frame rates (FR) used for strain analysis assessed by speckle tracking in fetal echocardiography show considerable variation. The aim of this study was to investigate the influence of the FR on strain analysis in 2D speckle tracking. Fetal echocardiography was performed prospectively on a Toshiba Aplio 500 system and a Toshiba Artida system, respectively. Based on an apical or basal four-chamber view of the fetal heart, cine loops were stored with a FR of 30 fps (Aplio 500) and 60 fps (Artida/Aplio 500). For both groups (30 fps and 60 fps), global and segmental longitudinal peak systolic strain (LPSS) values of both the left (LV) and right ventricle (RV) were assessed by 2D wall-motion tracking. A total of 101 fetuses, distributed across three study groups, were included. The mean gestational age was 25.2±5.0 weeks. Mean global LPSS values for the RV in the 30 fps group and in the 60 fps group were -16.07% and -16.47%, respectively. Mean global LPSS values for the LV in the 30 fps group and in the 60 fps group were -17.54% and -17.06%, respectively. Comparing global and segmental LPSS values of both the RV and LV did not show any statistically significant differences between the two groups. Myocardial 2D strain analysis by wall-motion tracking was feasible at 30 and 60 fps. The obtained global and segmental LPSS values of both ventricles were relatively independent of acquisition rate. © 2017, Wiley Periodicals, Inc.

  5. Surveillance Video Synopsis in GIS

    Directory of Open Access Journals (Sweden)

    Yujia Xie

    2017-10-01

    Full Text Available Surveillance videos contain a considerable amount of data, wherein information of interest to the user is sparsely distributed. Researchers construct video synopses that contain the key information extracted from a surveillance video for efficient browsing and analysis. The geospatial-temporal information of a surveillance video plays an important role in the efficient description of video content. Meanwhile, current approaches to video synopsis lack the introduction and analysis of geospatial-temporal information. Owing to the preceding problems, this paper proposes an approach called "surveillance video synopsis in GIS". Based on an integration model of video moving objects and GIS, the virtual visual field and the expression model of the moving object are constructed by spatially locating and clustering the trajectory of the moving object. The subgraphs of the moving object are reconstructed frame by frame in a virtual scene. Results show that the described approach comprehensively analyzed and created fusion expression patterns between video dynamic information and geospatial-temporal information in GIS and reduced the playback time of video content.

  6. A Framework for Advanced Video Traces: Evaluating Visual Quality for Video Transmission Over Lossy Networks

    Directory of Open Access Journals (Sweden)

    Reisslein Martin

    2006-01-01

    Full Text Available Conventional video traces (which characterize the video encoding frame sizes in bits and frame quality in PSNR) are limited to evaluating loss-free video transmission. To evaluate robust video transmission schemes for lossy network transport, experiments with actual video are generally required. To circumvent the need for experiments with actual videos, we propose in this paper an advanced video trace framework. The two main components of this framework are (i) advanced video traces, which combine the conventional video traces with a parsimonious set of visual content descriptors, and (ii) quality prediction schemes that, based on the visual content descriptors, provide an accurate prediction of the quality of the reconstructed video after lossy network transport. We conduct extensive evaluations using a perceptual video quality metric as well as the PSNR, in which we compare the visual quality predicted on the basis of the advanced video traces with the visual quality determined from experiments with actual video. We find that the advanced video trace methodology accurately predicts the quality of the reconstructed video after frame losses.

  7. CHARACTER RECOGNITION OF VIDEO SUBTITLES

    Directory of Open Access Journals (Sweden)

    Satish S Hiremath

    2016-11-01

    Full Text Available An important task in content-based video indexing is to extract text information from videos. The challenges involved in text extraction and recognition are variation of illumination across video frames containing text, text present on complex backgrounds, and differing font sizes of the text. Using various image processing algorithms such as morphological operations, blob detection, and histogram of oriented gradients, the character recognition of video subtitles is implemented. Segmentation, feature extraction, and classification are the major steps of character recognition. Several experimental results are shown to demonstrate the performance of the proposed algorithm.

  8. Twofold Video Hashing with Automatic Synchronization

    OpenAIRE

    Li, Mu; Monga, Vishal

    2014-01-01

    Video hashing finds a wide array of applications in content authentication, robust retrieval and anti-piracy search. While much of the existing research has focused on extracting robust and secure content descriptors, a significant open challenge still remains: Most existing video hashing methods are fallible to temporal desynchronization. That is, when the query video results by deleting or inserting some frames from the reference video, most existing methods assume the positions of the dele...

  9. Quantifying the effect of disruptions to temporal coherence on the intelligibility of compressed American Sign Language video

    Science.gov (United States)

    Ciaramello, Frank M.; Hemami, Sheila S.

    2009-02-01

    Communication of American Sign Language (ASL) over mobile phones would be very beneficial to the Deaf community. ASL video encoded to achieve the rates provided by current cellular networks must be heavily compressed, and appropriate assessment techniques are required to analyze the intelligibility of the compressed video. As an extension to a purely spatial measure of intelligibility, this paper quantifies the effect of temporal compression artifacts on sign language intelligibility. These artifacts can be the result of motion-compensation errors that distract the observer or of frame rate reductions. They reduce the perception of smooth motion and disrupt the temporal coherence of the video. Motion-compensation errors that affect temporal coherence are identified by measuring the block-level correlation between co-located macroblocks in adjacent frames. The impact of frame rate reductions was quantified through experimental testing. A subjective study was performed in which fluent ASL participants rated the intelligibility of sequences encoded at 5 different frame rates and with 3 different levels of distortion. The subjective data are used to parameterize an objective intelligibility measure which is highly correlated with subjective ratings at multiple frame rates.
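
    The block-level correlation measure described in this record can be sketched as follows. A hedged NumPy illustration, where the block size and the skipping of flat blocks are assumptions for the sketch, not details from the paper:

    ```python
    import numpy as np

    def block_coherence(frame_a, frame_b, block=4):
        """Mean correlation between co-located blocks of two adjacent frames.
        Low values flag temporal-coherence breaks such as motion-compensation errors."""
        h, w = frame_a.shape
        scores = []
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                a = frame_a[y:y+block, x:x+block].ravel().astype(float)
                b = frame_b[y:y+block, x:x+block].ravel().astype(float)
                if a.std() == 0 or b.std() == 0:
                    continue  # flat blocks carry no correlation information
                scores.append(np.corrcoef(a, b)[0, 1])
        return float(np.mean(scores))

    rng = np.random.default_rng(0)
    f1 = rng.random((8, 8))
    print(round(block_coherence(f1, f1), 3))  # identical frames score 1.0
    ```

    In the paper's setting, dips in such a score over time would mark the frames whose artifacts disrupt the perception of smooth motion.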

  10. Frame sequences analysis technique of linear objects movement

    Science.gov (United States)

    Oshchepkova, V. Y.; Berg, I. A.; Shchepkin, D. V.; Kopylova, G. V.

    2017-12-01

    Obtaining data by noninvasive methods is often needed in many fields of science and engineering. This is achieved through video recording at various frame rates and in various light spectra. In doing so, quantitative analysis of the movement of the objects being studied becomes an important component of the research. This work discusses analysis of the motion of linear objects on the two-dimensional plane. The complexity of this problem increases when the frame contains numerous objects whose images may overlap. This study uses a sequence containing 30 frames at a resolution of 62 × 62 pixels and a frame rate of 2 Hz. It was required to determine the average velocity of the objects' motion. This velocity was found as an average over 8-12 objects with an error of 15%. After processing, dependencies of the average velocity on the control parameters were found. The processing was performed in the software environment GMimPro with subsequent approximation of the obtained data using the Hill equation.
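
    The average-velocity computation in this record reduces to mean per-frame displacement times the frame rate. A minimal sketch using the study's 2 Hz rate; the positions are illustrative:

    ```python
    def mean_velocity(positions, frame_rate_hz):
        """Average speed from per-frame positions sampled at a fixed frame rate."""
        steps = [abs(b - a) for a, b in zip(positions, positions[1:])]
        return sum(steps) / len(steps) * frame_rate_hz  # units: position/second

    # An object advancing 2 px per frame, sampled at the study's 2 Hz rate.
    print(mean_velocity([0, 2, 4, 6, 8], 2.0))  # 4.0 px/s
    ```

    Averaging such estimates over the 8-12 tracked objects gives the per-condition velocity the study reports.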

  11. On the definition of adapted audio/video profiles for high-quality video calling services over LTE/4G

    Science.gov (United States)

    Ndiaye, Maty; Quinquis, Catherine; Larabi, Mohamed Chaker; Le Lay, Gwenael; Saadane, Hakim; Perrine, Clency

    2014-01-01

    During the last decade, important advances in and the widespread availability of mobile technology (operating systems, GPUs, terminal resolution, and so on) have encouraged fast development of voice and video services such as video calling. While multimedia services have grown considerably on mobile devices, the resulting increase in data consumption is leading to the saturation of mobile networks. In order to provide data at high bit-rates and maintain performance as close as possible to that of traditional networks, the 3GPP (3rd Generation Partnership Project) worked on a high-performance standard for mobile networks called Long Term Evolution (LTE). In this paper, we aim to express recommendations related to audio and video media profiles (selection of audio and video codecs, bit-rates, frame-rates, audio and video formats) for a typical video-calling service held over LTE/4G mobile networks. These profiles are defined according to the targeted devices (smartphones, tablets), so as to ensure the best possible quality of experience (QoE). The obtained results indicate that for the CIF format (352 x 288 pixels), which is usually used for smartphones, the VP8 codec provides better image quality than the H.264 codec at low bitrates (from 128 to 384 kbps). However, for sequences with high motion, H.264 in slow mode is preferred. Regarding audio, better results are globally achieved using wideband codecs, which offer good quality, except for the Opus codec (at 12.2 kbps).

  12. Scanning laser video camera/ microscope

    Science.gov (United States)

    Wang, C. P.; Bow, R. T.

    1984-10-01

    A laser scanning system capable of scanning at standard video rate has been developed. The scanning mirrors, circuit design and system performance, as well as its applications to video cameras and ultra-violet microscopes, are discussed.

  13. EgoSampling: Wide View Hyperlapse from Egocentric Videos

    OpenAIRE

    Halperin, Tavi; Poleg, Yair; Arora, Chetan; Peleg, Shmuel

    2016-01-01

    The possibility of sharing one's point of view makes use of wearable cameras compelling. These videos are often long, boring and coupled with extreme shake, as the camera is worn on a moving person. Fast forwarding (i.e. frame sampling) is a natural choice for quick video browsing. However, this accentuates the shake caused by natural head motion in an egocentric video, making the fast forwarded video useless. We propose EgoSampling, an adaptive frame sampling that gives stable, fast forwarde...

  14. High-Quality Real-Time Video Inpainting with PixMix.

    Science.gov (United States)

    Herling, Jan; Broll, Wolfgang

    2014-06-01

    While image inpainting has recently become widely available in image manipulation tools, existing approaches to video inpainting typically do not even achieve interactive frame rates yet as they are highly computationally expensive. Further, they either apply severe restrictions on the movement of the camera or do not provide a high-quality coherent video stream. In this paper we will present our approach to high-quality real-time capable image and video inpainting. Our PixMix approach even allows for the manipulation of live video streams, providing the basis for real Diminished Reality (DR) applications. We will show how our approach generates coherent video streams dealing with quite heterogeneous background environments and non-trivial camera movements, even applying constraints in real-time.

  15. On frame multiresolution analysis

    DEFF Research Database (Denmark)

    Christensen, Ole

    2003-01-01

    We use the freedom in frame multiresolution analysis to construct tight wavelet frames (even in the case where the refinable function does not generate a tight frame). In cases where a frame multiresolution does not lead to a construction of a wavelet frame we show how one can nevertheless...

  16. A portable wireless power transmission system for video capsule endoscopes.

    Science.gov (United States)

    Shi, Yu; Yan, Guozheng; Zhu, Bingquan; Liu, Gang

    2015-01-01

    Wireless power transmission (WPT) technology can solve the energy shortage problem of video capsule endoscopes (VCE) powered by button batteries, but fixed platforms have limited its clinical application. This paper presents a portable WPT system for VCE. Besides portability, power transfer efficiency and stability are considered the main indexes in the optimization design of the system, which consists of the transmitting coil structure, portable control box, operating frequency, magnetic core, and winding of the receiving coil. Following the above principles, the relevant parameters are measured, compared, and chosen. Finally, through experiments on the platform, the methods are tested and evaluated. In the gastrointestinal tract of a small pig, the VCE is supplied with sufficient energy by the WPT system, and the energy conversion efficiency is 2.8%. The video obtained is clear, with a resolution of 320×240 and a frame rate of 30 frames per second. The experiments verify the feasibility of the design scheme, and directions for further improvement are discussed.

  17. Robust Watermarking of Video Streams

    Directory of Open Access Journals (Sweden)

    T. Polyák

    2006-01-01

    Full Text Available In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files on their computer. This makes possible the unauthorized use of digital media, and without adequate protection systems the authors and distributors have no means to prevent it. Digital watermarking techniques can help these systems to be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can involve copyright data, access control etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.

  18. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon, and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  19. Tuning of Synchronous-Frame PI Current Controllers in Grid-Connected Converters Operating at a Low Sampling Rate by MIMO Root Locus

    DEFF Research Database (Denmark)

    Fernandez, Francisco Daniel Freijedo; Vidal, Ana; Yepes, Alejandro G.

    2015-01-01

    affects achievable control time constant. With this perspective, this paper presents a systematic procedure for accurate dynamics assessment and tuning of synchronous-frame PI current controllers, which is based on linear control for multiple input multiple output (MIMO) systems. The dominant eigenvalues...

  20. SnapVideo: Personalized Video Generation for a Sightseeing Trip.

    Science.gov (United States)

    Zhang, Luming; Jing, Peiguang; Su, Yuting; Zhang, Chao; Shaoz, Ling

    2017-11-01

    Leisure tourism is an indispensable activity in urban people's life. Due to the popularity of intelligent mobile devices, a large number of photos and videos are recorded during a trip. Therefore, the ability to vividly and interestingly display these media data is a useful technique. In this paper, we propose SnapVideo, a new method that intelligently converts a personal album describing a trip into a comprehensive, aesthetically pleasing, and coherent video clip. The proposed framework contains three main components. The scenic spot identification model first personalizes the video clips based on multiple prespecified audience classes. We then search for auxiliary related videos from YouTube 1 according to the selected photos. To comprehensively describe a scenic spot, the view generation module clusters the crawled video frames into a number of views. Finally, a probabilistic model is developed to fit the frames from multiple views into an aesthetically pleasing and coherent video clip, which optimally captures the semantics of a sightseeing trip. Extensive user studies demonstrated the competitiveness of our method from an aesthetic point of view. Moreover, quantitative analysis reflects that semantically important spots are well preserved in the final video clip. 1 https://www.youtube.com/.

  1. As time passes by: Observed motion-speed and psychological time during video playback.

    Science.gov (United States)

    Nyman, Thomas Jonathan; Karlsson, Eric Per Anders; Antfolk, Jan

    2017-01-01

    Research shows that psychological time (i.e., the subjective experience and assessment of the passage of time) is malleable and that the central nervous system re-calibrates temporal information in accordance with situational factors so that psychological time flows slower or faster. Observed motion-speed (e.g., the visual perception of a rolling ball) is an important situational factor which influences the production of time estimates. The present study examines previous findings showing that observed slow and fast motion-speed during video playback respectively results in over- and underproductions of intervals of time. Here, we investigated through three separate experiments: a) the main effect of observed motion-speed during video playback on a time production task and b) the interactive effect of the frame rate (frames per second; fps) and motion-speed during video playback on a time production task. No main effect of video playback-speed or interactive effect between video playback-speed and frame rate was found on time production.

  2. High-Performance Motion Estimation for Image Sensors with Video Compression

    OpenAIRE

    Weizhi Xu; Shouyi Yin; Leibo Liu; Zhiyong Liu; Shaojun Wei

    2015-01-01

    It is important to reduce the time cost of video compression for image sensors in video sensor network. Motion estimation (ME) is the most time-consuming part in video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed...

  3. Video Stabilization Using Feature Point Matching

    Science.gov (United States)

    Kulkarni, Shamsundar; Bormane, D. S.; Nalbalwar, S. L.

    2017-01-01

    Video capturing by non-professionals will lead to unanticipated effects, such as image distortion and image blurring. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper an algorithm is proposed to stabilize jittery videos. A stable output video is attained without the jitter caused by the shaking of a handheld camera during recording. First, salient points are identified in each frame of the input video and processed, followed by optimization to stabilize the video. Optimization determines the quality of the video stabilization. This method has shown good results in terms of stabilization and removes distortion from output videos recorded in different circumstances.

  4. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. By far the most informative analog and digital video reference available, it includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all-new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all-new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  5. Gaze location prediction for broadcast football video.

    Science.gov (United States)

    Cheng, Qin; Agrafiotis, Dimitris; Achim, Alin M; Bull, David R

    2013-12-01

    The sensitivity of the human visual system decreases dramatically with increasing distance from the fixation location in a video frame. Accurate prediction of a viewer's gaze location has the potential to improve bit allocation, rate control, error resilience, and quality evaluation in video compression. Commercially, delivery of football video content is of great interest because of the very high number of consumers. In this paper, we propose a gaze location prediction system for high definition broadcast football video. The proposed system uses knowledge about the context, extracted through analysis of a gaze tracking study that we performed, to build a suitable prior map. We further classify the complex context into different categories through shot classification thus allowing our model to prelearn the task pertinence of each object category and build the prior map automatically. We thus avoid the limitation of assigning the viewers a specific task, allowing our gaze prediction system to work under free-viewing conditions. Bayesian integration of bottom-up features and top-down priors is finally applied to predict the gaze locations. Results show that the prediction performance of the proposed model is better than that of other top-down models that we adapted to this context.
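
    The Bayesian integration step described above can be sketched in a few lines: a bottom-up saliency map is fused multiplicatively with a top-down prior map, and the maximum of the normalized posterior is taken as the predicted gaze location. This is only a minimal illustration, not the paper's implementation; the map values and the 3×3 example are hypothetical.

```python
import numpy as np

def predict_gaze(saliency, prior, eps=1e-9):
    """Fuse a bottom-up saliency map with a top-down prior map (both HxW,
    non-negative) and return the most likely gaze cell and the posterior."""
    posterior = saliency * prior               # Bayesian fusion: likelihood x prior
    posterior /= posterior.sum() + eps         # normalise to a probability map
    idx = np.unravel_index(np.argmax(posterior), posterior.shape)
    return idx, posterior

# Toy 3x3 example: saliency peaks at (0, 2), but the prior favours the centre row,
# so the fused prediction lands on the centre cell.
saliency = np.array([[0.1, 0.2, 0.9],
                     [0.3, 0.6, 0.4],
                     [0.1, 0.1, 0.1]])
prior = np.array([[0.1, 0.1, 0.1],
                  [0.4, 0.4, 0.4],
                  [0.1, 0.1, 0.1]])
(gy, gx), post = predict_gaze(saliency, prior)
```

    The same multiplicative fusion extends to several bottom-up channels (saliency, motion, contrast) by taking their product before applying the prior.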

  6. Online scene change detection of multicast (MBone) video

    Science.gov (United States)

    Zhou, Wensheng; Shen, Ye; Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

    Many multimedia applications, such as multimedia data management systems and communication systems, require efficient representation of multimedia content. Thus semantic interpretation of video content has been a popular research area. Currently, most content-based video representation involves the segmentation of video based on key frames, which are generated using scene change detection techniques as well as camera/object motion. Video features can then be extracted from key frames. However, most of this research performs off-line video processing, in which the whole video scope is known a priori, allowing multiple scans of the stored video files during processing. In comparison, relatively little research has been done on on-line video processing, which is crucial in video communication applications such as on-line collaboration and news broadcasts. Our research investigates on-line real-time scene change detection of multicast video over the Internet. Our on-line processing system is designed to meet the requirements of real-time video multicasting over the Internet and to utilize the successful video parsing techniques available today. The proposed algorithms extract key frames from video bitstreams sent through the MBone network, and the extracted key frames are multicasted as annotations or metadata over a separate channel to assist in content filtering, such as that anticipated to be in use by on-line filtering proxies in the Internet. The performance of the proposed algorithms is demonstrated and discussed in this paper.
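
    Key-frame extraction by scene change detection, as used in pipelines like the one above, is commonly based on histogram differences between consecutive frames. A minimal sketch follows; the bin count, threshold, and toy frames are assumptions for illustration, not the authors' algorithm.

```python
def frame_histogram(frame, bins=4, max_val=256):
    """Grey-level histogram of a frame given as a flat list of pixel values."""
    hist = [0] * bins
    step = max_val // bins
    for p in frame:
        hist[min(p // step, bins - 1)] += 1
    return hist

def detect_key_frames(frames, threshold=0.5):
    """Return indices of frames whose histogram differs strongly from the
    previous frame (a classic shot-boundary cue); frame 0 is always a key frame."""
    keys = [0]
    prev = frame_histogram(frames[0])
    n = len(frames[0])
    for i in range(1, len(frames)):
        hist = frame_histogram(frames[i])
        # normalised L1 histogram distance, in [0, 1]
        diff = sum(abs(a - b) for a, b in zip(prev, hist)) / (2 * n)
        if diff > threshold:
            keys.append(i)
        prev = hist
    return keys

# Toy sequence: two dark frames, two bright frames, one dark frame.
dark = [10] * 16
bright = [200] * 16
keys = detect_key_frames([dark, dark, bright, bright, dark])
```

    An on-line system would apply the same comparison to each frame as it arrives, so no second pass over the stream is needed.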

  7. Attention modeling for video quality assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2010-01-01

    This paper proposes to evaluate video quality by balancing two quality components: global quality and local quality. The global quality results from subjects allocating their attention equally to all regions in a frame and all frames in a video. It is evaluated by image quality metrics (IQM) with averaged spatiotemporal pooling. The local quality is derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in modeling visual attention, which is then integrated into IQMs to calculate the local quality of a video frame. Experiments show that the proposed quality modeling algorithm can improve the performance of image quality metrics on video quality assessment compared to the normal averaged spatiotemporal pooling scheme.

  8. Improved side information generation for distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2008-01-01

    As a new coding paradigm, distributed video coding (DVC) deals with lossy source coding using side information to exploit the statistics at the decoder and reduce computational demands at the encoder. The performance of DVC highly depends on the quality of the side information: with a better side information generation method, fewer bits are requested from the encoder and more reliable decoded frames are obtained. In this paper, a side information generation method is introduced to further improve the rate-distortion (RD) performance of transform-domain distributed video coding. The algorithm consists of variable-block-size motion estimation on the Y, U and V components and an adaptive weighted overlapped block motion compensation (OBMC). The proposal is tested and compared with the results of an executable DVC codec released by the DISCOVER group (DIStributed COding for Video sERvices).

  9. Smart Streaming for Online Video Services

    OpenAIRE

    Chen, Liang; Zhou, Yipeng; Chiu, Dah Ming

    2013-01-01

    Bandwidth consumption is a significant concern for online video service providers. Practical video streaming systems usually use some form of HTTP streaming (progressive download) to let users download the video at a faster rate than the video bitrate. Since users may quit before viewing the complete video, however, much of the downloaded video will be "wasted". To the extent that users' departure behavior can be predicted, we develop smart streaming that can be used to improve user QoE with ...

  10. Learning to Segment Moving Objects in Videos

    OpenAIRE

    Fragkiadaki, Katerina; Arbelaez, Pablo; Felsen, Panna; Malik, Jitendra

    2014-01-01

    We segment moving objects in videos by ranking spatio-temporal segment proposals according to "moving objectness": how likely they are to contain a moving object. In each video frame, we compute segment proposals using multiple figure-ground segmentations on per frame motion boundaries. We rank them with a Moving Objectness Detector trained on image and motion fields to detect moving objects and discard over/under segmentations or background parts of the scene. We extend the top ranked segmen...

  11. Multicore-based 3D-DWT video encoder

    Science.gov (United States)

    Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Migallón, Hector

    2013-12-01

    Three-dimensional wavelet transform (3D-DWT) encoders are good candidates for applications like professional video editing, video surveillance, multi-spectral satellite imaging, etc. where a frame must be reconstructed as quickly as possible. In this paper, we present a new 3D-DWT video encoder based on a fast run-length coding engine. Furthermore, we present several multicore optimizations to speed-up the 3D-DWT computation. An exhaustive evaluation of the proposed encoder (3D-GOP-RL) has been performed, and we have compared the evaluation results with other video encoders in terms of rate/distortion (R/D), coding/decoding delay, and memory consumption. Results show that the proposed encoder obtains good R/D results for high-resolution video sequences with nearly in-place computation using only the memory needed to store a group of pictures. After applying the multicore optimization strategies over the 3D DWT, the proposed encoder is able to compress a full high-definition video sequence in real-time.
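
    As an illustration of the transform underlying such encoders, one level of a separable 3D-DWT over a group of pictures can be sketched with Haar filters. This is a simplified stand-in: production codecs typically use biorthogonal filters and multiple decomposition levels.

```python
import numpy as np

def haar_1d(x, axis):
    """One level of the Haar DWT along the given axis (length must be even):
    returns the low-pass (average) and high-pass (difference) subbands."""
    a = np.take(x, range(0, x.shape[axis], 2), axis=axis)
    b = np.take(x, range(1, x.shape[axis], 2), axis=axis)
    return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

def dwt3d_level(gop):
    """One 3D-DWT level over a group of pictures shaped (T, H, W): transform
    temporally, then vertically, then horizontally, yielding 8 subbands keyed
    by one 'L'/'H' letter per axis (e.g. 'LLL' is the coarse approximation)."""
    bands = {'': gop}
    for axis in (0, 1, 2):
        bands = {k + s: c
                 for k, v in bands.items()
                 for s, c in zip('LH', haar_1d(v, axis))}
    return bands

gop = np.ones((4, 4, 4))    # constant GOP: all energy ends up in the LLL band
bands = dwt3d_level(gop)
```

    After the transform, an encoder like the one above would run-length code the (mostly near-zero) detail subbands; the multicore strategies in the paper parallelize exactly this per-axis filtering.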

  12. Content-Aware Video Adaptation under Low-Bitrate Constraint

    Directory of Open Access Journals (Sweden)

    Hsiao Ming-Ho

    2007-01-01

    Full Text Available With the development of wireless networks and the improvement of mobile device capability, video streaming is more and more widespread in such environments. Under the conditions of limited resources and inherent constraints, appropriate video adaptation has become one of the most important and challenging issues in wireless multimedia applications. In this paper, we propose a novel content-aware video adaptation in order to effectively utilize resources and improve visual perceptual quality. First, the attention model is derived from analyzing the characteristics of brightness, location, motion vector, and energy features in the compressed domain to reduce computational complexity. Then, through the integration of the attention model, the capability of the client device, and a correlational statistical model, attractive regions of video scenes are derived. The information-object- (IOB-) weighted rate distortion model is used for adjusting the bit allocation. Finally, the video adaptation scheme dynamically adjusts the video bitstream at the frame level and object level. Experimental results validate that the proposed scheme achieves better visual quality effectively and efficiently.

  13. Prime tight frames

    DEFF Research Database (Denmark)

    Lemvig, Jakob; Miller, Christopher; Okoudjou, Kasso A.

    2014-01-01

    We introduce a class of finite tight frames called prime tight frames and prove some of their elementary properties. In particular, we show that any finite tight frame can be written as a union of prime tight frames. We then characterize all prime harmonic tight frames and use this characterization to suggest effective analysis and synthesis computation strategies for such frames. Finally, we describe all prime frames constructed from the spectral tetris method, and, as a byproduct, we obtain a characterization of when the spectral tetris construction works for redundancies below two.

  14. Improved Side Information Generation for Distributed Video Coding by Exploiting Spatial and Temporal Correlations

    Directory of Open Access Journals (Sweden)

    Ye Shuiming

    2009-01-01

    Full Text Available Distributed video coding (DVC) is a video coding paradigm allowing low-complexity encoding for emerging applications such as wireless video surveillance. Side information (SI) generation is a key function in the DVC decoder and plays a key role in determining the performance of the codec. This paper proposes an improved SI generation for DVC, which exploits both spatial and temporal correlations in the sequences. Partially decoded Wyner-Ziv (WZ) frames, based on initial SI by motion-compensated temporal interpolation, are exploited to improve the performance of the whole SI generation. More specifically, an enhanced temporal frame interpolation is proposed, including motion vector refinement and smoothing, optimal compensation mode selection, and a new matching criterion for motion estimation. The improved SI technique is also applied to a new hybrid spatial and temporal error concealment scheme to conceal errors in WZ frames. Simulation results show that the proposed scheme can achieve up to 1.0 dB improvement in rate-distortion performance in WZ frames for video with high motion, when compared to state-of-the-art DVC. In addition, both the objective and perceptual qualities of the corrupted sequences are significantly improved by the proposed hybrid error concealment scheme, outperforming both spatial and temporal concealment alone.
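
    The motion-compensated temporal interpolation at the heart of SI generation can be sketched as block matching between two key frames, placing the averaged block halfway along the motion path. This toy version (exhaustive search, fixed block size, integer motion) omits the paper's refinement, smoothing, mode selection, and error concealment steps.

```python
import numpy as np

def mcti_side_information(prev, nxt, block=4, search=2):
    """Toy motion-compensated temporal interpolation: for each block of the
    previous key frame, find its best match in the next key frame by SAD,
    then place the averaged block halfway along the motion path."""
    h, w = prev.shape
    si = np.zeros_like(prev, dtype=float)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ref = prev[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y and y + block <= h and 0 <= x and x + block <= w:
                        sad = np.abs(ref - nxt[y:y + block, x:x + block]).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            dy, dx = best
            # integer halving of the motion vector places the block mid-path
            y, x = by + dy // 2, bx + dx // 2
            si[y:y + block, x:x + block] = (ref + nxt[by + dy:by + dy + block,
                                                      bx + dx:bx + dx + block]) / 2
    return si

# Sanity check: with identical key frames, the interpolated frame is the frame itself.
frames_equal = np.arange(64.).reshape(8, 8)
si = mcti_side_information(frames_equal, frames_equal)
```

    A real TDWZ decoder would then use this SI as a noisy prediction of the WZ frame and request parity bits until decoding succeeds.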

  15. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes noise residual learning techniques that take residues from previously decoded frames into account to estimate the decoding residue more precisely. Moreover, the techniques calculate a number of candidate noise residual distributions within a frame to adaptively optimize the soft side information during decoding. A residual refinement step is also introduced to take advantage of the correlation of DCT coefficients. Experimental results show that the proposed techniques robustly improve the coding efficiency of TDWZ DVC, and for GOP=2, bit-rate savings of up to 35% on WZ frames are achieved compared with DISCOVER.
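
    TDWZ decoders commonly model the correlation noise between the side information and the WZ frame as Laplacian. A minimal sketch of fitting the scale parameter from residual samples and turning it into soft input is shown below; the moment-based estimate is a textbook formula, not the paper's adaptive per-frame refinement.

```python
import math

def laplacian_alpha(residuals):
    """Fit the scale parameter of a zero-mean Laplacian noise model to a list
    of residual samples: alpha = sqrt(2) / sigma, the standard moment-based
    estimate used in correlation-noise modelling."""
    n = len(residuals)
    var = sum(r * r for r in residuals) / n     # zero-mean variance estimate
    return math.sqrt(2.0 / var)

def soft_input(r, alpha):
    """Laplacian likelihood of a residual value, usable as soft information
    for the channel decoder."""
    return 0.5 * alpha * math.exp(-alpha * abs(r))

res = [1.0, -1.0, 2.0, -2.0]                    # toy residual samples
alpha = laplacian_alpha(res)
```

    The "noise residual learning" of the paper goes further by drawing these residual samples from previously decoded frames and keeping several candidate distributions per frame.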

  16. Framing the Realist Novel

    OpenAIRE

    Blunden, Ralph

    2017-01-01

    Framing the Realist Novel develops a theory of framing the realist novel, with a distinction made between internal framing devices and the external frames of ontology and ethics. An encompassing methodological frame of critical rationality is developed. Chapter one discusses what the realist novel is, and a distinction is drawn between the historical period in the second half of the nineteenth century and the realism that pervades realist narrative more generally. Chapter three applies the th...

  17. Authentication of digital video evidence

    Science.gov (United States)

    Beser, Nicholas D.; Duerr, Thomas E.; Staisiunas, Gregory P.

    2003-11-01

    In response to a requirement from the United States Postal Inspection Service, the Technical Support Working Group tasked The Johns Hopkins University Applied Physics Laboratory (JHU/APL) to develop a technique that will ensure the authenticity, or integrity, of digital video (DV). Verifiable integrity is needed if DV evidence is to withstand a challenge to its admissibility in court on the grounds that it can be easily edited. Specifically, the verification technique must detect additions, deletions, or modifications to DV and satisfy the two-part criteria pertaining to scientific evidence as articulated in Daubert et al. v. Merrell Dow Pharmaceuticals Inc., 43 F3d (9th Circuit, 1995). JHU/APL has developed a prototype digital video authenticator (DVA) that generates digital signatures based on public key cryptography at the frame level of the DV. Signature generation and recording are accomplished at the same time as DV is recorded by the camcorder. Throughput supports the consumer-grade camcorder data rate of 25 Mbps. The DVA software is implemented on a commercial laptop computer, which is connected to a commercial digital camcorder via the IEEE-1394 serial interface. A security token provides agent identification and the interface to the public key infrastructure (PKI) that is needed for management of the public keys central to DV integrity verification.
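
    The frame-level signing idea can be sketched with the standard library. Note the simplification: HMAC-SHA256 stands in here for the public-key signatures and PKI token the DVA actually uses, and the frame payloads are hypothetical byte strings.

```python
import hashlib
import hmac

KEY = b"demo-signing-key"   # stand-in: a real DVA signs with a private key via PKI

def sign_frames(frames, key=KEY):
    """Hash each frame and sign the digest; one signature per frame, generated
    at record time in the real system."""
    return [hmac.new(key, hashlib.sha256(f).digest(), hashlib.sha256).digest()
            for f in frames]

def verify_frames(frames, sigs, key=KEY):
    """Return indices of frames whose signature no longer matches, revealing
    edits to the footage."""
    return [i for i, (f, s) in enumerate(zip(frames, sigs))
            if not hmac.compare_digest(
                hmac.new(key, hashlib.sha256(f).digest(),
                         hashlib.sha256).digest(), s)]

frames = [b"frame-0", b"frame-1", b"frame-2"]
sigs = sign_frames(frames)
tampered = [b"frame-0", b"frame-X", b"frame-2"]   # frame 1 edited
bad = verify_frames(tampered, sigs)
```

    Because each frame carries its own signature, verification pinpoints which frames were altered rather than merely flagging the file as a whole.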

  18. A Method for Estimating Surveillance Video Georeferences

    Directory of Open Access Journals (Sweden)

    Aleksandar Milosavljević

    2017-07-01

    Full Text Available The integration of a surveillance camera video with a three-dimensional (3D) geographic information system (GIS) requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM) of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., the position and orientation of the camera.
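
    The Levenberg–Marquardt fitting step can be illustrated on a toy 2D camera model (rotation plus translation instead of the full 3D pose; the matched points and parameters below are synthetic, and the damping factor is fixed rather than adapted).

```python
import numpy as np

def project(params, pts):
    """Map 2D ground points into frame coordinates with a rotation plus
    translation model (a toy stand-in for the full 3D camera pose)."""
    th, tx, ty = params
    c, s = np.cos(th), np.sin(th)
    R = np.array([[c, -s], [s, c]])
    return pts @ R.T + np.array([tx, ty])

def levenberg_marquardt(pts, obs, p0, lam=1e-3, iters=50):
    """Minimal Levenberg-Marquardt loop with a forward-difference Jacobian."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = (obs - project(p, pts)).ravel()          # residuals of the matches
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6
            J[:, j] = ((obs - project(p + dp, pts)).ravel() - r) / 1e-6
        # damped normal equations: (J^T J + lam I) delta = -J^T r
        delta = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        p += delta
    return p

# Synthetic matches generated from a known pose, then recovered from scratch.
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 1.]])
true = np.array([0.3, 5.0, -2.0])                    # theta, tx, ty
obs = project(true, pts)
est = levenberg_marquardt(pts, obs, p0=[0.0, 0.0, 0.0])
```

    The paper's problem is the same shape: residuals between observed frame coordinates of point features and their projections, minimized over the camera position and orientation.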

  19. Classifying smoke in laparoscopic videos using SVM

    Directory of Open Access Journals (Sweden)

    Alshirbaji Tamer Abdulbaki

    2017-09-01

    Full Text Available Smoke in laparoscopic videos usually appears due to the use of electrocautery when cutting or coagulating tissues. Therefore, detecting smoke can be used for event-based annotation in laparoscopic surgeries by retrieving the events associated with the electrocauterization. Furthermore, smoke detection can also be used for automatic smoke removal. However, detecting smoke in laparoscopic video is a challenge because of the changeability of smoke patterns, the moving camera, and the different lighting conditions. In this paper, we present a video-based smoke detection algorithm to detect smoke of different densities, such as fog and low and high density, in laparoscopic videos. The proposed method depends on extracting various visual features from the laparoscopic images and providing them to a support vector machine (SVM) classifier. Features are based on motion, colour, and texture patterns of the smoke. We validated our algorithm using experimental evaluation on four laparoscopic cholecystectomy videos. These four videos were manually annotated by defining every frame as a smoke or non-smoke frame. The algorithm was applied to the videos using different feature combinations for classification. Experimental results show that the combination of all proposed features gives the best classification performance. The overall accuracy (i.e., correctly classified frames) is around 84%, while the sensitivity (i.e., correctly detected smoke frames) and the specificity (i.e., correctly detected non-smoke frames) are 89% and 80%, respectively.

  20. [Video documentation in forensic practice].

    Science.gov (United States)

    Schyma, C; Schyma, P

    1995-01-01

    The authors report in Part 1 on their experiences with the Canon Ex1 Hi camcorder and the possibilities of documentation offered by modern video technology. Application examples in legal medicine and criminalistics are described: autopsy, crime scene, reconstruction of crimes, etc. The online video documentation of microscopic sessions makes the discussion of findings easier. Video films used for instruction have been well received. The use of video documentation can be extended by digitizing (Part 2). Two frame grabbers are presented, with which we obtained good results in digitizing images captured from video. The best image quality is achieved by online use of an image analysis chain. Corel 5.0 and PicEd Cora 4.0 allow complete image processing and analysis. Digital image processing influences the objectivity of the documentation. The applications of image libraries are discussed.

  1. Predicting present-day rates of glacial isostatic adjustment using a smoothed GPS velocity field for the reconciliation of NAD83 reference frames in Canada

    Science.gov (United States)

    Craymer, M. R.; Henton, J. A.; Piraszewski, M.

    2008-12-01

    Glacial isostatic adjustment following the last glacial period is the dominant source of crustal deformation in Canada east of the Rocky Mountains. The present-day vertical component of motion associated with this process may exceed 1 cm/y and is being directly measured with the Global Positioning System (GPS). A consequence of this steady deformation is that high accuracy coordinates at one epoch may not be compatible with those at another epoch. For example, modern precise point positioning (PPP) methods provide coordinates at the epoch of observation while NAD83, the officially adopted reference frame in Canada and the U.S., is expressed at some past reference epoch. The PPP positions are therefore incompatible with coordinates in such a realization of the reference frame and need to be propagated back to the frame's reference epoch. Moreover, the realizations of NAD83 adopted by the provincial geodetic agencies in Canada are referenced to different coordinate epochs; either 1997.0 or 2002.0. Proper comparison of coordinates between provinces therefore requires propagating them from one reference epoch to another. In an effort to reconcile PPP results and different realizations of NAD83, we empirically represent crustal deformation throughout Canada using a velocity field based solely on high accuracy continuous and episodic GPS observations. The continuous observations from 2001 to 2007 were obtained from nearly 100 permanent GPS stations, predominately operated by Natural Resources Canada (NRCan) and provincial geodetic agencies. Many of these sites are part of the International GNSS Service (IGS) global network. Episodic observations from 1994 to 2006 were obtained from repeated occupations of the Canadian Base Network (CBN), which consists of approximately 160 stable pillar-type monuments across the entire country. The CBN enables a much denser spatial sampling of crustal motions although coverage in the far north is still rather sparse. 
NRCan solutions of
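
    The epoch-propagation problem described above is a linear correction by the site velocity: a coordinate observed at one epoch is carried to another reference epoch along the station's motion. A minimal sketch with hypothetical station values (the velocity and coordinates below are invented for illustration):

```python
def propagate(coord, velocity, epoch_from, epoch_to):
    """Propagate a station coordinate (metres) from one reference epoch to
    another using its constant velocity (metres per year)."""
    dt = epoch_to - epoch_from
    return tuple(c + v * dt for c, v in zip(coord, velocity))

# Hypothetical station east of the Rockies: 8 mm/yr uplift from glacial
# isostatic adjustment, small horizontal motion.
enu_1997 = (0.000, 0.000, 100.000)     # east, north, up in metres at epoch 1997.0
vel = (0.001, -0.002, 0.008)           # m/yr
enu_2002 = propagate(enu_1997, vel, 1997.0, 2002.0)
```

    The same correction, with the sign of the time difference reversed, carries modern PPP positions back to a past reference epoch such as NAD83 at 1997.0.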

  2. In-network adaptation of SHVC video in software-defined networks

    Science.gov (United States)

    Awobuluyi, Olatunde; Nightingale, James; Wang, Qi; Alcaraz Calero, Jose Maria; Grecos, Christos

    2016-04-01

    Software Defined Networks (SDN), when combined with Network Function Virtualization (NFV), represent a paradigm shift in how future networks will behave and be managed. SDNs are expected to provide the underpinning technologies for future innovations such as 5G mobile networks and the Internet of Everything. The SDN architecture offers features that facilitate an abstracted and centralized global network view in which packet forwarding or dropping decisions are based on application flows. Software Defined Networks facilitate a wide range of network management tasks, including the adaptation of real-time video streams as they traverse the network. SHVC, the scalable extension to the recent H.265 standard, is a new video encoding standard that supports ultra-high-definition video streams with spatial resolutions of up to 7680×4320 and frame rates of 60 fps or more. The massive increase in bandwidth required to deliver these U-HD video streams dwarfs the bandwidth requirements of current high-definition (HD) video. Such large bandwidth increases pose very significant challenges for network operators. In this paper we go substantially beyond the limited number of existing implementations and proposals for video streaming in SDNs, all of which have primarily focused on traffic engineering solutions such as load balancing. By implementing and empirically evaluating an SDN-enabled Media Adaptation Network Entity (MANE) we provide a valuable empirical insight into the benefits and limitations of SDN-enabled video adaptation for real-time video applications. The SDN-MANE is the video adaptation component of our Video Quality Assurance Manager (VQAM) SDN control plane application, which also includes an SDN monitoring component to acquire network metrics and a decision making engine using algorithms to determine the optimum adaptation strategy for any real-time video application flow given the current network conditions.
Our proposed VQAM application has been implemented and

  3. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

    The process of digitally capturing, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more reasonable to purchase, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. Techniques for backing up and archiving the completed projects and files are also outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  4. Research of Video Steganalysis Algorithm Based on H265 Protocol

    Directory of Open Access Journals (Sweden)

    Wu Kaicheng

    2015-01-01

    This paper researches an LSB-matching video steganalysis algorithm (VSA) based on the H.265 protocol, with 26 original video sequences as the research background. It first extracts classification features from training samples as input to an SVM and trains the SVM to obtain a high-quality classification model, and then tests whether there is suspicious information in a video sample. The experimental results show that the VSA based on LSB matching is practical for detecting secret information embedded across all frames of a carrier video as well as in local frames. In addition, the VSA adopts a frame-by-frame method with strong robustness against attacks in the corresponding time domain.
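
    Feature extraction for LSB-matching steganalysis can be illustrated with two simple per-frame statistics of the LSB plane. This hypothetical feature set is only a stand-in for the paper's SVM input features: embedding by LSB matching tends to randomize the LSB plane, which such statistics can pick up.

```python
def lsb_features(frame_bytes):
    """Simple per-frame features for LSB-matching steganalysis: the mean of
    the LSB plane and the LSB transition rate between neighbouring bytes
    (a hypothetical feature set, not the paper's)."""
    bits = [b & 1 for b in frame_bytes]
    mean = sum(bits) / len(bits)
    transitions = sum(a != b for a, b in zip(bits, bits[1:])) / (len(bits) - 1)
    return mean, transitions

# A smooth frame region has a quiet LSB plane; embedding makes LSBs flip often.
clean = bytes([10, 10, 12, 12, 14, 14, 16, 16])
noisy = bytes([10, 11, 12, 13, 15, 14, 17, 16])
m_clean, t_clean = lsb_features(clean)
m_noisy, t_noisy = lsb_features(noisy)
```

    In a full pipeline these per-frame feature vectors, computed frame by frame, would be fed to the trained SVM to label each frame as clean or suspicious.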

  5. The Video Drift Method to Measure Double Stars

    Science.gov (United States)

    Nugent, Richard

    2017-06-01

    A new video method has been developed to measure double stars. The double star components are video recorded as they drift across the camera’s field of view from east to west with the telescope’s motor drive turned off. Using an existing software program that was specifically written for the analysis of occultation videos (Limovie - Light Measurement Tool for Occultation Observation using Video Recorder), standard (x,y) coordinates are extracted for each component star in each video frame. An Excel program written by author RLN (VidPro - Video Drift Program Reduction) analyzes the (x,y) positions to determine position angle (PA), separation and other statistical quantities. Unlike other double star reduction methods, no star catalogue or calibration doubles are needed, as each video drift is self-calibrating. The duration of a typical video for an f/10 telescope system ranges between 20 seconds and 1 minute; this, along with a 30 frame/sec recording rate, produces hundreds to thousands of (x,y) pairs for analysis. The video chip’s offset from the true east-west direction (drift angle) is computed simultaneously along with a scale factor for each video. The drift angle and scale factor are used with all (x,y) positions to generate a unique position angle and separation. For the 1,800+ doubles measured to date, typical standard deviations (our own internal precision) are 1.1° for position angles and 0.35" for separations. A comparison was made with Washington Double Star catalog (WDS) entries that had little or no change in PA and separation for 120+ years. For these doubles our PAs and separations differed by an average of 0.2° and 0.2" respectively. Sources of error are discussed, along with tips to maximize the quality of the (x,y) data produced by Limovie. The Limovie and VidPro programs are available as free downloads.
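The core reduction described here, turning drift-corrected (x,y) offsets into a position angle and separation, can be sketched as follows. This is an illustrative reconstruction, not the actual VidPro code; the sign conventions, sample offsets, and plate scale are assumptions for the example.

```python
import math

def drift_correct(dx, dy, drift_deg, scale):
    """Rotate raw chip offsets (pixels) by the measured drift angle so
    +x aligns with east and +y with north, then convert to arcseconds.
    `scale` is the plate scale in arcsec per pixel."""
    t = math.radians(drift_deg)
    de = (dx * math.cos(t) - dy * math.sin(t)) * scale  # east offset
    dn = (dx * math.sin(t) + dy * math.cos(t)) * scale  # north offset
    return de, dn

def pa_and_separation(de, dn):
    """Position angle (degrees, measured from north through east) and
    separation of the companion relative to the primary."""
    sep = math.hypot(de, dn)
    pa = math.degrees(math.atan2(de, dn)) % 360.0
    return pa, sep

# Hypothetical measurement: companion 10 px east and 10 px north of the
# primary on a perfectly aligned chip with a 0.5 arcsec/px scale.
de, dn = drift_correct(10.0, 10.0, 0.0, 0.5)
pa, sep = pa_and_separation(de, dn)
# → PA 45 deg, separation ~7.07 arcsec
```

In the actual method these quantities would be averaged over the hundreds to thousands of (x,y) pairs a single drift provides, which is where the quoted internal precision comes from.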

  6. Deteksi Keaslian Video Pada Handycam Dengan Metode Localization Tampering

    Directory of Open Access Journals (Sweden)

    Dewi Yunita Sari

    2017-07-01

    Full Text Available Video is a form of digital evidence, one source of which is the handycam. In criminal cases, video is often manipulated to remove the evidence it contains, so forensic analysis is needed to detect the authenticity of the video. In this study, videos were manipulated with cropping, zooming, rotation, and grayscale attacks in order to compare original video recordings with tampered ones. The recordings were analyzed using the tampering-localization method, a detection technique that indicates which parts of a video have been manipulated, by analyzing frames, histogram calculations, and histogram graphs. With tampering localization, the frames and durations of the tampered portions of the video can be identified.
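The histogram-comparison idea behind tampering localization can be sketched in a few lines. This is an illustrative sketch, not the study's exact procedure; the bin count and threshold are arbitrary assumptions. Frames whose intensity histogram departs sharply from the corresponding reference frame are flagged as tampered.

```python
import numpy as np

def frame_hist(frame, bins=32):
    """Normalized grayscale intensity histogram of one frame."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def flag_tampered(original, suspect, thresh=0.2):
    """Return indices of frames whose histogram L1-distance from the
    reference exceeds `thresh` (threshold chosen for illustration)."""
    flagged = []
    for i, (a, b) in enumerate(zip(original, suspect)):
        if np.abs(frame_hist(a) - frame_hist(b)).sum() > thresh:
            flagged.append(i)
    return flagged

rng = np.random.default_rng(1)
video = [rng.integers(0, 256, (24, 32), dtype=np.uint8) for _ in range(5)]
tampered = [f.copy() for f in video]
tampered[3] = np.zeros_like(tampered[3])   # simulate a blanked-out frame

assert flag_tampered(video, tampered) == [3]
```

The returned indices localize the manipulation in time, which is the essence of the localization step described in the abstract.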

  7. Toward real-time remote processing of laparoscopic video.

    Science.gov (United States)

    Ronaghi, Zahra; Duffy, Edward B; Kwartowitz, David M

    2015-10-01

    Laparoscopic surgery is a minimally invasive surgical technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform surgical procedures. However, the benefit of small incisions comes with the drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivery of therapy. Image-guided surgery uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One laparoscopic camera system of particular interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, California). Its video streams generate approximately 360 MB of data per second, demonstrating a trend toward increased data sizes in medicine, driven primarily by higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging, and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We have run image processing algorithms on a high-definition head phantom video (1920 × 1080 pixels) and transferred the video using a message-passing interface. The total transfer time is around 53 ms, or 19 fps. We will optimize and parallelize these algorithms to reduce the total time to 30 ms.
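The figures in this abstract are easy to check with a little arithmetic: 11.9 MB per frame at 30 fps is close to the quoted 360 MB/s, and a 53 ms per-frame transfer corresponds to about 19 fps against a ~33 ms real-time budget.

```python
frame_mb = 11.9        # MB per video frame (figure from the abstract)
fps_target = 30        # target real-time rate

throughput_mb_s = frame_mb * fps_target   # sustained rate the link must carry
budget_ms = 1000 / fps_target             # per-frame round-trip deadline
measured_ms = 53                          # reported transfer time
achieved_fps = 1000 / measured_ms

print(f"{throughput_mb_s:.0f} MB/s, {budget_ms:.1f} ms budget, "
      f"{achieved_fps:.0f} fps achieved")
# → 357 MB/s, 33.3 ms budget, 19 fps achieved
```

This makes clear why the authors aim to cut the total time to 30 ms: only then does the pipeline meet the 1/30 s per-frame deadline.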

  8. ANALISA OPTIMALISASI TEKNIK ESTIMASI DAN KOMPENSASI GERAK PADA ENKODER VIDEO H.263

    Directory of Open Access Journals (Sweden)

    Oka Widyantara

    2009-05-01

    Full Text Available The baseline mode of the H.263 video encoder applies motion estimation and compensation with one motion vector per macroblock. The search procedure uses a full search with half-pixel accuracy over the range [-16, 15.5], so motion at the edges of the frame cannot be predicted well. The interframe-prediction coding performance of the H.263 video encoder was improved by optimizing motion estimation and compensation, implemented by extending the search area to [-31.5, 31.5] (unrestricted motion vector mode, Annex D) and using 4 motion vectors per macroblock (advanced prediction mode, Annex F). The results show that the advanced mode increases SNR by 0.03 dB for the Claire video sequence, 0.2 dB for Foreman, and 0.041 dB for Glasgow, and also reduces the coding bit rate by 2.3% for Claire, 15.63% for Foreman, and 9.8% for Glasgow, compared with the single-motion-vector implementation in baseline-mode coding.
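Baseline motion estimation of the kind described, one vector per macroblock found by exhaustive search, can be sketched at integer-pixel accuracy. Half-pixel refinement and the Annex D/F extensions are omitted; the block size, search radius, and test frames are illustrative assumptions.

```python
import numpy as np

def full_search(ref, cur_block, top, left, radius):
    """Integer-pixel full-search block matching: find the motion vector
    (dy, dx) within ±radius that minimizes the sum of absolute
    differences (SAD) against the reference frame."""
    n = cur_block.shape[0]
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(ref[y:y+n, x:x+n].astype(int)
                         - cur_block.astype(int)).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
# current 16x16 block is the reference block shifted by (+3, -2)
cur = ref[19:35, 14:30]
mv, sad = full_search(ref, cur, 16, 16, 7)
assert mv == (3, -2) and sad == 0
```

The boundary check in the inner loop is exactly where baseline mode struggles at frame edges; Annex D relaxes it by allowing vectors that point outside the picture.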

  9. The influence of motion quality on responses towards video playback stimuli

    Directory of Open Access Journals (Sweden)

    Emma Ware

    2015-07-01

    Full Text Available Visual motion, a critical cue in communication, can be manipulated and studied using video playback methods. A primary concern for the video playback researcher is the degree to which objects presented on video appear natural to the non-human subject. Here we argue that the quality of motion cues on video, as determined by the video's image presentation rate (IPR), is of particular importance in determining a subject's social response behaviour. We present an experiment testing the effect of variations in IPR on pigeon (Columba livia) response behaviour towards video images of courting opposite-sex partners. Male and female pigeons were presented with three video playback stimuli, each containing a different social partner. Each stimulus was then modified to appear at one of three IPRs: 15, 30 or 60 progressive (p) frames per second. The results showed that courtship behaviour became significantly longer in duration as IPR increased. This finding implies that the IPR significantly affects the perceived quality of motion cues impacting social behaviour. In males we found that the duration of courtship also depended on the social partner viewed and that this effect interacted with the effects of IPR on behaviour. Specifically, the effect of social partner reached statistical significance only when the stimuli were displayed at 60 p, demonstrating the potential for erroneous results when insufficient IPRs are used. In addition to demonstrating the importance of IPR in video playback experiments, these findings help to highlight and describe the role of visual motion processing in communication behaviour.

  10. Search the Audio, Browse the Video—A Generic Paradigm for Video Collections

    Directory of Open Access Journals (Sweden)

    Efrat Alon

    2003-01-01

    Full Text Available The amount of digital video being shot, captured, and stored is growing at a rate faster than ever before. The large amount of stored video is not penetrable without efficient video indexing, retrieval, and browsing technology. Most prior work in the field can be roughly categorized into two classes. One class is based on image processing techniques, often called content-based image and video retrieval, in which video frames are indexed and searched for visual content. The other class is based on spoken document retrieval, which relies on automatic speech recognition and text queries. Both approaches have major limitations. In the first approach, semantic queries pose a great challenge, while the second, speech-based approach, does not support efficient video browsing. This paper describes a system where speech is used for efficient searching and visual data for efficient browsing, a combination that takes advantage of both approaches. A fully automatic indexing and retrieval system has been developed and tested. Automated speech recognition and phonetic speech indexing support text-to-speech queries. New browsable views are generated from the original video. A special synchronized browser allows instantaneous, context-preserving switching from one view to another. The system was successfully used to produce searchable-browsable video proceedings for three local conferences.

  11. Search the Audio, Browse the Video—A Generic Paradigm for Video Collections

    Science.gov (United States)

    Amir, Arnon; Srinivasan, Savitha; Efrat, Alon

    2003-12-01

    The amount of digital video being shot, captured, and stored is growing at a rate faster than ever before. The large amount of stored video is not penetrable without efficient video indexing, retrieval, and browsing technology. Most prior work in the field can be roughly categorized into two classes. One class is based on image processing techniques, often called content-based image and video retrieval, in which video frames are indexed and searched for visual content. The other class is based on spoken document retrieval, which relies on automatic speech recognition and text queries. Both approaches have major limitations. In the first approach, semantic queries pose a great challenge, while the second, speech-based approach, does not support efficient video browsing. This paper describes a system where speech is used for efficient searching and visual data for efficient browsing, a combination that takes advantage of both approaches. A fully automatic indexing and retrieval system has been developed and tested. Automated speech recognition and phonetic speech indexing support text-to-speech queries. New browsable views are generated from the original video. A special synchronized browser allows instantaneous, context-preserving switching from one view to another. The system was successfully used to produce searchable-browsable video proceedings for three local conferences.

  12. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  13. Enhancing the quality of service of video streaming over MANETs using MDC and FEC

    Science.gov (United States)

    Zang, Weihua; Guo, Rui

    2012-04-01

    Path and server diversity have been used to guarantee reliable video streaming over wireless networks. In this paper, server diversity over mobile wireless ad hoc networks (MANETs) is implemented. In particular, multipoint-to-point transmission together with multiple description coding (MDC) and a forward error correction (FEC) technique is used to enhance the quality of service of video streaming over lossy wireless networks. Additionally, the dynamic source routing (DSR) protocol is used to discover maximally disjoint routes for each sender and to distribute the workload evenly within the MANET for video streaming applications. An NS-2 simulation study demonstrates the feasibility of the proposed mechanism and shows that the approach achieves better video streaming quality in terms of playable frame rate, reliability, and real-time performance on the receiving side.
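The FEC component can be illustrated with the simplest possible scheme: a single XOR parity packet per group, which lets the receiver rebuild any one lost packet. The paper's actual MDC/FEC design is more elaborate; this is only a sketch of the principle, and the packet contents are made up for the example.

```python
from functools import reduce

def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(packets):
    """Append one XOR parity packet to a group of equal-length packets."""
    return packets + [reduce(xor_bytes, packets)]

def recover(group, lost_idx):
    """Rebuild the single missing packet from the survivors: XORing all
    remaining packets (including the parity) yields the lost one."""
    survivors = [p for i, p in enumerate(group) if i != lost_idx]
    return reduce(xor_bytes, survivors)

data = [b"desc-0__", b"desc-1__", b"desc-2__"]   # e.g. MDC descriptions
group = add_parity(data)
assert recover(group, 1) == b"desc-1__"
```

Combined with MDC over disjoint routes, a loss on one path then degrades, rather than destroys, the decoded frame rate at the receiver.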

  14. People detection in nuclear plants by video processing for safety purpose

    Energy Technology Data Exchange (ETDEWEB)

    Jorge, Carlos Alexandre F.; Mol, Antonio Carlos A., E-mail: calexandre@ien.gov.b, E-mail: mol@ien.gov.b [Instituto de Engenharia Nuclear (IEN/CNEN), Rio de Janeiro, RJ (Brazil); Seixas, Jose M.; Silva, Eduardo Antonio B., E-mail: seixas@lps.ufrj.b, E-mail: eduardo@lps.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Eletrica; Cota, Raphael E.; Ramos, Bruno L., E-mail: brunolange@poli.ufrj.b [Universidade Federal do Rio de Janeiro (EP/UFRJ), RJ (Brazil). Dept. de Engenharia Eletronica e de Computacao

    2011-07-01

    This work describes the development of a surveillance system for safety purposes in nuclear plants. The final objective is to track people online in videos, in order to estimate the dose received by personnel during the execution of working tasks in nuclear plants. The estimation will be based on their tracked positions and on dose rate mapping in a real nuclear plant at Instituto de Engenharia Nuclear, the Argonauta nuclear research reactor. Cameras have been installed within Argonauta's room, supplying the data needed. Both video processing and statistical signal processing techniques may be used for detecting, segmenting, and tracking people in video. This first paper reports on people segmentation in video using background subtraction, by two different approaches: frame differencing, and blind signal separation based on the independent component analysis method. Results are discussed, along with perspectives for further work. (author)
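The frame-differencing approach mentioned for people segmentation can be sketched as background subtraction with a fixed threshold. The threshold value and the toy frames below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def foreground_mask(background, frame, thresh=25):
    """Boolean mask of pixels that differ from the background frame by
    more than `thresh` grey levels (threshold chosen for illustration)."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > thresh

bg = np.full((8, 8), 100, dtype=np.uint8)     # static empty scene
frm = bg.copy()
frm[2:5, 3:6] = 200                           # a "person" enters the scene
mask = foreground_mask(bg, frm)
assert mask.sum() == 9                        # the 3x3 foreground region
```

The resulting mask gives the person's image position, which, combined with a dose rate map of the room, is what enables the dose estimation described above.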

  15. No-Reference Video Quality Assessment using MPEG Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2013-01-01

    We present a method for No-Reference (NR) Video Quality Assessment (VQA) of decoded video without access to the bitstream. This is achieved by extracting and pooling features from an NR image quality assessment method applied frame by frame. We also present methods to identify the video coding...... and estimate the video coding parameters for MPEG-2 and H.264/AVC, which can be used to improve the VQA. The analysis differs from most other video coding analysis methods since it is done without access to the bitstream. The results show that our proposed method is competitive with other recent NR VQA methods...
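The frame-by-frame extraction and pooling strategy can be sketched as follows. The per-frame feature here is a crude sharpness proxy standing in for the paper's NR features, and the pooling statistics (mean plus a low percentile, so that poor frames weigh on the score) are illustrative choices, not the authors' exact pooling.

```python
import numpy as np

def frame_feature(frame):
    """Stand-in per-frame feature: mean absolute horizontal gradient
    (a crude sharpness proxy; the paper's NR features are richer)."""
    return np.abs(np.diff(frame.astype(float), axis=1)).mean()

def pool(features):
    """Pool per-frame features into clip-level statistics: the mean and
    the 10th percentile, since a few bad frames dominate perception."""
    f = np.asarray(features)
    return {"mean": f.mean(), "p10": np.percentile(f, 10)}

rng = np.random.default_rng(3)
clip = [rng.integers(0, 256, (16, 16), dtype=np.uint8) for _ in range(10)]
scores = pool([frame_feature(f) for f in clip])
```

A regression model would then map such pooled statistics to a predicted quality score, optionally conditioned on the detected codec and coding parameters.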

  16. Video Texture Synthesis Based on Flow-Like Stylization Painting

    Directory of Open Access Journals (Sweden)

    Qian Wenhua

    2014-01-01

    Full Text Available The paper presents an NP-video (non-photorealistic video) rendering system based on natural phenomena. It provides a simple non-photorealistic video synthesis system in which the user can obtain a flow-like stylized painting and an infinite video scene. First, based on anisotropic Kuwahara filtering in conjunction with line integral convolution, the phenomena in the video scene are rendered as a flow-like stylized painting. Second, frame division and patch synthesis are used to synthesize infinitely playing video. Given selected examples of different natural video textures, the system can generate stylized flow-like and infinite video scenes. Visual discontinuities between neighboring frames are decreased, while the features and details of the frames are preserved. The rendering system is easy and simple to implement.

  17. Internet video telephony allows speech reading by deaf individuals and improves speech perception by cochlear implant users.

    Directory of Open Access Journals (Sweden)

    Georgios Mantokoudis

    Full Text Available OBJECTIVE: To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. METHODS: Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. RESULTS: Higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). CONCLUSION: Webcameras have the potential to improve telecommunication of hearing-impaired individuals.

  18. Internet video telephony allows speech reading by deaf individuals and improves speech perception by cochlear implant users.

    Science.gov (United States)

    Mantokoudis, Georgios; Dähler, Claudia; Dubach, Patrick; Kompis, Martin; Caversaccio, Marco D; Senn, Pascal

    2013-01-01

    To analyze speech reading through Internet video calls by profoundly hearing-impaired individuals and cochlear implant (CI) users. Speech reading skills of 14 deaf adults and 21 CI users were assessed using the Hochmair Schulz Moser (HSM) sentence test. We presented video simulations using different video resolutions (1280 × 720, 640 × 480, 320 × 240, 160 × 120 px), frame rates (30, 20, 10, 7, 5 frames per second (fps)), speech velocities (three different speakers), webcameras (Logitech Pro9000, C600 and C500) and image/sound delays (0-500 ms). All video simulations were presented with and without sound and in two screen sizes. Additionally, scores for live Skype™ video connection and live face-to-face communication were assessed. Higher frame rate (>7 fps), higher camera resolution (>640 × 480 px) and shorter picture/sound delay (<100 ms) were associated with increased speech perception scores. Scores were strongly dependent on the speaker but were not influenced by physical properties of the camera optics or the full screen mode. There is a significant median gain of +8.5%pts (p = 0.009) in speech perception for all 21 CI-users if visual cues are additionally shown. CI users with poor open set speech perception scores (n = 11) showed the greatest benefit under combined audio-visual presentation (median speech perception +11.8%pts, p = 0.032). Webcameras have the potential to improve telecommunication of hearing-impaired individuals.

  19. Another frame, another game? : Explaining framing effects in economic games

    NARCIS (Netherlands)

    Gerlach, Philipp; Jaeger, B.; Hopfensitz, A.; Lori, E.

    2016-01-01

    Small changes in the framing of games (i.e., the way in which the game situation is described to participants) can have large effects on players' choices. For example, referring to a prisoner's dilemma game as the "Community Game" as opposed to the "Wall Street Game" can double the cooperation rate.

  20. Filter Bank Fusion Frames

    OpenAIRE

    Chebira, Amina; Fickus, Matthew; Mixon, Dustin G.

    2010-01-01

    In this paper we characterize and construct novel oversampled filter banks implementing fusion frames. A fusion frame is a sequence of orthogonal projection operators whose sum can be inverted in a numerically stable way. When properly designed, fusion frames can provide redundant encodings of signals which are optimally robust against certain types of noise and erasures. However, up to this point, few implementable constructions of such frames were known; we show how to construct them using ...
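The defining property of a fusion frame, that the sum of the subspace projections is invertible with stable bounds, can be checked numerically. Below is a minimal illustration with three one-dimensional subspaces of R^3; real fusion frames allow higher-dimensional subspaces and weights, and the spanning directions here are arbitrary choices for the example.

```python
import numpy as np

def projection(basis):
    """Orthogonal projection onto the span of the columns of `basis`."""
    q, _ = np.linalg.qr(basis)
    return q @ q.T

# Three one-dimensional subspaces of R^3, spanned by arbitrary directions.
dirs = [np.array([[1.0], [0.0], [0.0]]),
        np.array([[1.0], [1.0], [0.0]]),
        np.array([[0.0], [1.0], [1.0]])]

S = sum(projection(d) for d in dirs)   # the fusion frame operator

# All eigenvalues bounded away from 0 => S is stably invertible,
# so these subspaces form a fusion frame for R^3.
eig = np.linalg.eigvalsh(S)
assert eig.min() > 0.1
```

The ratio of the largest to the smallest eigenvalue governs the numerical stability of inverting S, which is exactly the robustness property the filter bank constructions in the paper aim for.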