WorldWideScience

Sample records for live video capture

  1. Brownian motion using video capture

    International Nuclear Information System (INIS)

    Salmon, Reese; Robbins, Candace; Forinash, Kyle

    2002-01-01

    Although other researchers had previously observed the random motion of pollen grains suspended in water through a microscope, Robert Brown's name is associated with this behaviour based on observations he made in 1828. It was not until Einstein's work in the early 1900s, however, that the origin of this irregular motion was established to be the result of collisions with molecules so small as to be invisible in a light microscope (Einstein A 1965 Investigations on the Theory of the Brownian Movement ed R Furth (New York: Dover) (transl. Cowper A D) (5 papers)). Jean Perrin in 1908 (Perrin J 1923 Atoms (New York: Van Nostrand-Reinhold) (transl. Hammick D)) was able, through a series of painstaking experiments, to establish the validity of Einstein's equation. We describe here the details of a junior-level undergraduate physics laboratory experiment in which students used a microscope, a video camera and video capture software to verify Einstein's famous calculation of 1905. (author)
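
    The following is a minimal sketch (not the authors' code) of the analysis such an exercise implies: positions tracked frame-by-frame from the captured video are turned into a mean squared displacement curve, and Einstein's 2D relation ⟨r²⟩ = 4Dt is fit to estimate the diffusion coefficient. The pixel scale, frame rate, and the synthetic track below are assumed for illustration.

```python
import numpy as np

def mean_squared_displacement(x, y):
    """MSD as a function of lag time from a single 2D particle track."""
    n = len(x)
    msd = np.zeros(n - 1)
    for lag in range(1, n):
        dx = x[lag:] - x[:-lag]
        dy = y[lag:] - y[:-lag]
        msd[lag - 1] = np.mean(dx**2 + dy**2)
    return msd

# Assumed calibration: micrometres per pixel and seconds per video frame.
UM_PER_PIXEL = 0.2
DT = 1.0 / 30.0  # 30 fps capture

# Hypothetical track (pixel coordinates) as exported from video-capture software.
rng = np.random.default_rng(0)
steps = rng.normal(scale=1.0, size=(2, 500))
x_px, y_px = np.cumsum(steps, axis=1)

msd = mean_squared_displacement(x_px * UM_PER_PIXEL, y_px * UM_PER_PIXEL)
lags = np.arange(1, len(msd) + 1) * DT

# For 2D Brownian motion <r^2> = 4 D t, so the slope of a linear fit gives D.
D = np.polyfit(lags[:50], msd[:50], 1)[0] / 4.0
print(f"Estimated diffusion coefficient: {D:.3f} um^2/s")
```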

  2. Video Liveness for Citizen Journalism: Attacks and Defenses

    OpenAIRE

    Rahman, Mahmudur; Azimpourkivi, Mozhgan; Topkara, Umut; Carbunar, Bogdan

    2017-01-01

    The impact of citizen journalism raises important video integrity and credibility issues. In this article, we introduce Vamos, the first user transparent video "liveness" verification solution based on video motion, that accommodates the full range of camera movements, and supports videos of arbitrary length. Vamos uses the agreement between video motion and camera movement to corroborate the video authenticity. Vamos can be integrated into any mobile video capture application without requiri...

  3. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to the demand for high-quality digital images; digital still cameras now offer several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera, so high resolution and a high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors with different spatio-temporal resolutions in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the camera's utility.
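
    A purely illustrative sketch of how two such streams could be combined (this is not the authors' reconstruction method): each low-resolution, high-frame-rate frame is upsampled and the high-frequency detail of the nearest high-resolution keyframe is added back. The scale factor and the random frames below are assumed.

```python
import numpy as np

def upsample(frame, factor):
    """Nearest-neighbour upsampling of a grayscale frame by an integer factor."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

def fuse(low_res_frame, high_res_keyframe, factor):
    """Add the keyframe's high-frequency detail to the upsampled low-resolution frame."""
    up = upsample(low_res_frame.astype(np.float64), factor)
    key = high_res_keyframe.astype(np.float64)
    h, w = key.shape
    # Block averages approximate what the low-resolution sensor would have seen.
    block_mean = key.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    detail = key - upsample(block_mean, factor)   # keyframe detail lost at low resolution
    return np.clip(up + detail, 0, 255).astype(np.uint8)

rng = np.random.default_rng(4)
low_res = rng.integers(0, 256, size=(50, 50), dtype=np.uint8)     # high-frame-rate stream
keyframe = rng.integers(0, 256, size=(200, 200), dtype=np.uint8)  # high-resolution stream

enhanced = fuse(low_res, keyframe, factor=4)
print(enhanced.shape)   # (200, 200): a full-resolution frame at the high frame rate
```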

  4. The LivePhoto Physics videos and video analysis site

    Science.gov (United States)

    Abbott, David

    2009-09-01

    The LivePhoto site is similar to an archive of short films for video analysis. Some videos have Flash tools for analyzing the video embedded in the movie. Most of the videos address mechanics topics with titles like Rolling Pencil (check this one out for pedagogy and content knowledge—nicely done!), Juggler, Yo-yo, Puck and Bar (this one is an inelastic collision with rotation), but there are a few titles in other areas (E&M, waves, thermo, etc.).

  5. Capture and playback synchronization in video conferencing

    Science.gov (United States)

    Shae, Zon-Yin; Chang, Pao-Chi; Chen, Mon-Song

    1995-03-01

    Packet-switching based video conferencing has emerged as one of the most important multimedia applications. Lip synchronization can be disrupted in the packet network as a result of the network's properties: packet delay jitter at the capture end, network delay jitter, packet loss, packets arriving out of sequence, local clock mismatch, and video playback overlay with the graphics system. The synchronization problem becomes more demanding given the real-time and multiparty requirements of video conferencing applications. Some of the above-mentioned problems can be solved by more advanced network architectures, as ATM has promised. This paper presents some solutions to these problems that can be useful at the end-station terminals in today's massively deployed packet-switching networks. The playback scheme in the end station consists of two units: a compression-domain buffer management unit and a pixel-domain buffer management unit. The pixel-domain buffer management unit is responsible for removing the annoying frame-shearing effect in the display. The compression-domain buffer management unit is responsible for parsing the incoming packets to identify the complete data blocks in the compressed data stream which can be decoded independently. The compression-domain buffer management unit is also responsible for concealing the effects of clock mismatch, lip synchronization, packet loss, out-of-sequence packets, and network jitter. This scheme can also be applied to the multiparty teleconferencing environment. Some of the schemes presented in this paper have been implemented in the Multiparty Multimedia Teleconferencing (MMT) system prototype at the IBM Watson Research Center.
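
    As a rough illustration of the playback-side buffering described above (a sketch only, not the MMT implementation), the following fixed-delay jitter buffer reorders packets that arrive late or out of sequence and releases them on a fixed playout schedule. The frame interval and playout delay are assumed.

```python
import heapq

class JitterBuffer:
    """Reorders packets by sequence number and releases them after a fixed playout delay."""

    def __init__(self, playout_delay):
        self.playout_delay = playout_delay  # seconds of buffering before playback
        self.heap = []                      # (sequence_number, payload)

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self, now, capture_time_of):
        """Release packets whose scheduled playout time has passed."""
        ready = []
        while self.heap and capture_time_of(self.heap[0][0]) + self.playout_delay <= now:
            ready.append(heapq.heappop(self.heap))
        return ready

# Hypothetical usage: 20 ms frames, a 120 ms playout delay absorbs network jitter.
buf = JitterBuffer(playout_delay=0.120)
for seq in (3, 1, 2, 0):              # packets arrive out of sequence
    buf.push(seq, payload=f"frame-{seq}")
frames = buf.pop_ready(now=0.200, capture_time_of=lambda s: s * 0.020)
print([seq for seq, _ in frames])     # [0, 1, 2, 3] in playback order
```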

  6. General Video Game AI: Learning from Screen Capture

    OpenAIRE

    Kunanusont, Kamolwan; Lucas, Simon M.; Perez-Liebana, Diego

    2017-01-01

    General Video Game Artificial Intelligence is a general game playing framework for Artificial General Intelligence research in the video-games domain. In this paper, we propose for the first time a screen capture learning agent for General Video Game AI framework. A Deep Q-Network algorithm was applied and improved to develop an agent capable of learning to play different games in the framework. After testing this algorithm using various games of different categories and difficulty levels, th...

  7. Capturing, annotating and reflecting video footage

    DEFF Research Database (Denmark)

    Eckardt, Max Roald; Wagner, Johannes

    A presentation of field-data capturing setups for uninterrupted long-term capture of interaction data. Two setups are described: the AMU forklift driving school with 17 cameras, and Digital Days 2016 at University College Nord in Aalborg with 16 cameras, 14 audio recorders, and two HDMI recorders.

  8. Acute Pectoralis Major Rupture Captured on Video

    Directory of Open Access Journals (Sweden)

    Alejandro Ordas Bayon

    2016-01-01

    Pectoralis major (PM) ruptures are uncommon injuries, although they are becoming more frequent. We report a case of a PM rupture in a young male who presented with axillary pain and absence of the anterior axillary fold after he perceived a snap while lifting 200 kg in the bench press. Diagnosis of PM rupture was suspected clinically and confirmed with imaging studies. The patient was treated surgically, reinserting the tendon to the humerus with suture anchors. One-year follow-up showed excellent results. The patient was recording his training on video, so we can observe in detail the most common mechanism of injury of PM rupture.

  9. Quality and noise measurements in mobile phone video capture

    Science.gov (United States)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
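
    A hedged sketch of the kind of full-reference measurement the study describes: PSNR is computed between an encoder-input frame and a degraded version, with a simple additive Gaussian term standing in for a low-light capture model. The noise level and frame contents are invented for illustration; this is not the authors' system model.

```python
import numpy as np

def psnr(reference, distorted, max_value=255.0):
    """Full-reference peak signal-to-noise ratio between two frames."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value**2 / mse)

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)   # stand-in encoder input

# Simple additive low-light noise model (assumed), clipped back to the 8-bit range.
sigma = 8.0
noisy = np.clip(frame + rng.normal(0.0, sigma, frame.shape), 0, 255).astype(np.uint8)

print(f"PSNR of noisy capture vs. reference: {psnr(frame, noisy):.2f} dB")
```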

  10. Self-Recognition in Live Videos by Young Children: Does Video Training Help?

    Science.gov (United States)

    Demir, Defne; Skouteris, Helen

    2010-01-01

    The overall aim of the experiment reported here was to establish whether self-recognition in live video can be facilitated when live video training is provided to children aged 2-2.5 years. While the majority of children failed the test of live self-recognition prior to video training, more than half exhibited live self-recognition post video…

  11. Objective analysis of image quality of video image capture systems

    Science.gov (United States)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moire patterns. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using them. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray, and do not resolve the lines, or give…

  12. Live Action: Can Young Children Learn Verbs from Video?

    Science.gov (United States)

    Roseberry, Sarah; Hirsh-Pasek, Kathy; Parish-Morris, Julia; Golinkoff, Roberta M.

    2009-01-01

    The availability of educational programming aimed at infants and toddlers is increasing, yet the effect of video on language acquisition remains unclear. Three studies of 96 children aged 30-42 months investigated their ability to learn verbs from video. Study 1 asked whether children could learn verbs from video when supported by live social…

  13. Live Action: Can Young Children Learn Verbs From Video?

    OpenAIRE

    Roseberry, Sarah; Hirsh-Pasek, Kathy; Parish-Morris, Julia; Golinkoff, Roberta Michnick

    2009-01-01

    The availability of educational programming aimed at infants and toddlers is increasing, yet the effect of video on language acquisition remains unclear. Three studies of 96 children aged 30–42 months investigated their ability to learn verbs from video. Study 1 asked whether children could learn verbs from video when supported by live social interaction. Study 2 tested whether children could learn verbs from video alone. Study 3 clarified whether the benefits of social interaction remained w...

  14. Virtual Environments Using Video Capture for Social Phobia with Psychosis

    Science.gov (United States)

    White, Richard; Clarke, Timothy; Turner, Ruth; Fowler, David

    2013-01-01

    A novel virtual environment (VE) system was developed and used as an adjunct to cognitive behavior therapy (CBT) with six socially anxious patients recovering from psychosis. The novel aspect of the VE system is that it uses video capture so the patients can see a life-size projection of themselves interacting with a specially scripted and digitally edited filmed environment played in real time on a screen in front of them. Within-session process outcomes (subjective units of distress and belief ratings on individual behavioral experiments), as well as patient feedback, generated the hypothesis that this type of virtual environment can potentially add value to CBT by helping patients understand the role of avoidance and safety behaviors in the maintenance of social anxiety and paranoia and by boosting their confidence to carry out “real-life” behavioral experiments. PMID:23659722

  15. An openstack-based flexible video transcoding framework in live

    Science.gov (United States)

    Shi, Qisen; Song, Jianxin

    2017-08-01

    With the rapid development of the mobile live-streaming business, transcoding HD video is often a challenge for mobile devices due to their limited processing capability and bandwidth-constrained network connection. For live service providers, it is wasteful to deploy a large number of transcoding servers because some of them are idle at times. To deal with this issue, this paper proposes an OpenStack-based flexible transcoding framework to achieve real-time video adaptation for mobile devices and to use computing resources efficiently. To this end, we introduce a method of video stream splitting and VM resource scheduling based on access-pressure prediction, which is forecast by an AR model.
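
    A minimal sketch of the access-pressure forecasting step mentioned above, using an autoregressive (AR) model fit by least squares. The model order, the viewer-count history, and the VM sizing rule are assumptions made for illustration, not the paper's scheduler.

```python
import numpy as np

def fit_ar(history, order=3):
    """Least-squares fit of an AR(order) model: x_t = a1*x_{t-1} + ... + a_p*x_{t-p}."""
    X = np.column_stack([history[order - k - 1 : len(history) - k - 1] for k in range(order)])
    y = history[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(history, coeffs):
    """One-step-ahead forecast from the most recent `order` observations."""
    order = len(coeffs)
    return float(np.dot(coeffs, history[-1 : -order - 1 : -1]))

# Hypothetical per-minute viewer counts for a live channel.
viewers = np.array([120, 135, 150, 170, 160, 180, 210, 230, 220, 250], dtype=float)
coeffs = fit_ar(viewers, order=3)
forecast = predict_next(viewers, coeffs)

# Assumed sizing rule: one transcoding VM per 100 predicted viewers.
vms_needed = int(np.ceil(forecast / 100.0))
print(f"Predicted load: {forecast:.0f} viewers -> schedule {vms_needed} transcoding VM(s)")
```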

  16. The live service of video geo-information

    Science.gov (United States)

    Xue, Wu; Zhang, Yongsheng; Yu, Ying; Zhao, Ling

    2016-03-01

    In disaster rescue, emergency response and other occasions, traditional aerial photogrammetry has difficulty meeting real-time monitoring and dynamic tracking demands. To achieve live service of video geo-information, a system is designed and realized: an unmanned helicopter equipped with a video sensor, POS, and a high-band radio. This paper briefly introduces the concept and design of the system. The workflow of the video geo-information live service is listed. Related experiments and some products are shown. In the end, conclusions and an outlook are given.

  17. The Students Experiences With Live Video-Streamed Teaching Classes

    DEFF Research Database (Denmark)

    Jelsbak, Vibe Alopaeus; Ørngreen, Rikke; Buus, Lillian

    2017-01-01

    The Bachelor's Degree Programme of Biomedical Laboratory Science at VIA Faculty of Health Sciences offers a combination of live video-streamed and traditional teaching. It is the student's individual choice whether to attend classes on-site or to attend classes from home via live video-stream. Our … previous studies revealed that the live-streamed sessions compared to on-site teaching reduced interaction and dialogue between attendants, and that the main reasons were technological issues and the teacher's choice of teaching methods. One of our goals therefore became to develop methods and implement … transparency in the live video-streamed teaching sessions during a 5-year period of continuous development of technological and pedagogical solutions for live-streamed teaching. Data describing students' experiences were gathered in a longitudinal study of four sessions from 2012 to 2017 using a qualitative …

  18. Reduced attentional capture in action video game players

    NARCIS (Netherlands)

    Chisholm, J; Hickey, C.; Theeuwes, J.; Kingstone, A.

    2010-01-01

    Recent studies indicate that playing action video games improves performance on a number of attention-based tasks. However, it remains unclear whether action video game experience primarily affects endogenous or exogenous forms of spatial orienting. To examine this issue, action video game players

  19. A Novel Mobile Video Community Discovery Scheme Using Ontology-Based Semantical Interest Capture

    Directory of Open Access Journals (Sweden)

    Ruiling Zhang

    2016-01-01

    Leveraging network virtualization technologies, community-based video systems rely on the measurement of common interests to define stable relationships between community members, which promotes video sharing performance and improves the scalability of the community structure. In this paper, we propose a novel mobile Video Community discovery scheme using Ontology-based Semantical Interest capture (VCOSI). An ontology-based semantical extension approach is proposed, which describes video content and measures video similarity according to video keyword selection methods. In order to reduce the calculation load of video similarity, VCOSI designs a prefix-filtering-based estimation algorithm to decrease the energy consumption of mobile nodes. VCOSI further proposes a member relationship estimation method to construct scalable and resilient node communities, which promotes the video sharing capacity of video systems with flexible and economical community maintenance. Extensive tests show how VCOSI obtains better performance results in comparison with other state-of-the-art solutions.
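
    To illustrate the prefix-filtering idea referenced above (a generic sketch, not the VCOSI algorithm): if video interests are keyword sets ordered by a global token ranking, two sets can only reach a given Jaccard similarity threshold if their prefixes share at least one token, so most pairs are pruned before the exact similarity is computed. The keyword sets and threshold below are invented.

```python
import math
from collections import Counter

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def prefix(tokens, threshold):
    """Prefix for Jaccard threshold t: keep the first len - ceil(t*len) + 1 tokens."""
    keep = len(tokens) - math.ceil(threshold * len(tokens)) + 1
    return tokens[:keep]

def similar_pairs(keyword_sets, threshold=0.6):
    """Keep only candidate pairs whose prefixes overlap, then verify exactly."""
    # A global token order (rarest token first) fixes the prefixes across all sets.
    freq = Counter(tok for s in keyword_sets for tok in s)
    ordered = [sorted(s, key=lambda t: (freq[t], t)) for s in keyword_sets]

    results = []
    for i in range(len(ordered)):
        for j in range(i + 1, len(ordered)):
            if set(prefix(ordered[i], threshold)) & set(prefix(ordered[j], threshold)):
                sim = jaccard(ordered[i], ordered[j])      # verification step
                if sim >= threshold:
                    results.append((i, j, round(sim, 2)))
    return results

videos = [
    {"football", "goal", "league", "highlights"},   # hypothetical per-video keyword sets
    {"football", "goal", "league", "replay"},
    {"cooking", "recipe", "pasta", "sauce"},
]
print(similar_pairs(videos))   # only the two football videos form a community pair
```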

  20. Point-of-View Recording Devices for Intraoperative Neurosurgical Video Capture

    Directory of Open Access Journals (Sweden)

    Jose Luis Porras

    2016-10-01

    Introduction: The ability to record and stream neurosurgery is an unprecedented opportunity to further research, medical education, and quality improvement. Here, we appraise the ease of implementation of existing POV devices when capturing and sharing procedures from the neurosurgical operating room, and detail their potential utility in this context. Methods: Our neurosurgical team tested and critically evaluated features of the Google Glass and Panasonic HX-A500 cameras including ergonomics, media quality, and media sharing in both the operating theater and the angiography suite. Results: Existing devices boast several features that facilitate live recording and streaming of neurosurgical procedures. Given that their primary application is not intended for the surgical environment, we identified a number of concrete, yet improvable, limitations. Conclusion: The present study suggests that neurosurgical video capture and live streaming represents an opportunity to contribute to research, education, and quality improvement. Despite this promise, shortcomings render existing devices impractical for serious consideration. We describe the features that future recording platforms should possess to improve upon existing technology.

  1. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    Directory of Open Access Journals (Sweden)

    Steven Nicholas Graves, MA

    2015-02-01

    Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  2. Scheduling Heuristics for Live Video Transcoding on Cloud Edges

    Institute of Scientific and Technical Information of China (English)

    Panagiotis Oikonomou; Maria G. Koziri; Nikos Tziritas; Thanasis Loukopoulos; XU Cheng-Zhong

    2017-01-01

    Efficient video delivery involves the transcoding of the original sequence into various resolutions, bitrates and standards, in order to match viewers' capabilities. Since video coding and transcoding are computationally demanding, performing a portion of these tasks at the network edges promises to decrease both the workload and network traffic towards the data centers of media providers. Motivated by the increasing popularity of live casting on social media platforms, in this paper we focus on the case of live video transcoding. Specifically, we investigate scheduling heuristics that decide which jobs should be assigned to an edge mini-datacenter and which to a backend datacenter. Through simulation experiments with different QoS requirements we conclude on the best alternative.
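
    A toy sketch of one such heuristic (an illustration, not the paper's algorithms): each incoming live-transcoding job is placed on the edge mini-datacenter while its remaining capacity covers the job, otherwise it falls back to the backend datacenter, assumed here to have ample capacity but higher delivery latency.

```python
from dataclasses import dataclass

@dataclass
class Job:
    stream_id: str
    cpu_demand: float   # normalized CPU cores needed for live transcoding

def schedule(jobs, edge_capacity):
    """Greedy first-fit: prefer the edge until it is full, then use the backend."""
    placement, edge_load = {}, 0.0
    for job in sorted(jobs, key=lambda j: j.cpu_demand, reverse=True):
        if edge_load + job.cpu_demand <= edge_capacity:
            placement[job.stream_id] = "edge"
            edge_load += job.cpu_demand
        else:
            placement[job.stream_id] = "backend"
    return placement

jobs = [Job("streamA", 2.0), Job("streamB", 1.5), Job("streamC", 3.0), Job("streamD", 0.5)]
print(schedule(jobs, edge_capacity=4.0))
# {'streamC': 'edge', 'streamA': 'backend', 'streamB': 'backend', 'streamD': 'edge'}
```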

  3. Simultaneous Class-based and Live Video Streamed Teaching

    DEFF Research Database (Denmark)

    Ørngreen, Rikke; Levinsen, Karin Ellen Tweddell; Jelsbak, Vibe Alopaeus

    2015-01-01

    The Bachelor Programme in Biomedical Laboratory Analysis at VIA's healthcare university college in Aarhus has established a blended class which combines traditional and live broadcast teaching (via an innovative choice of video conferencing system). On the so-called net-days, students have … In this paper a participatory action research … sheds light on the pedagogical challenges, the educational designs possible, the opportunities and constraints associated with video conferencing as a pedagogical practice, as well as the technological, structural and organisational conditions involved. From here a number of general principles and perspectives were derived for the specific programme which can be useful to contemplate in general for similar educations. It is concluded that the blended class model using live video stream represents a viable pedagogical solution for the Bachelor Programme …

  4. Capturing Better Photos and Video with your iPhone

    CERN Document Server

    Thomas, J Dennis; Sammon, Rick

    2011-01-01

    Offers unique advice for taking great photos and videos with your iPod or iPhone! Packed with unique advice, tips, and tricks, this one-of-a-kind, full-color reference presents step-by-step guidance for taking the best possible quality photos and videos using your iPod or iPhone. This unique book walks you through everything from composing a picture, making minor edits, and posting content to using apps to create more dynamic images. You'll quickly put to use this up-to-date coverage of executing both common and uncommon photo and video tasks on your mobile device.

  5. Evaluation of video capture equipment for secondary image acquisition in the PACS.

    Science.gov (United States)

    Sukenobu, Yoshiharu; Sasagaki, Michihiro; Hirabuki, Norio; Naito, Hiroaki; Narumi, Yoshifumi; Inamura, Kiyonari

    2002-01-01

    There are many cases in which picture archiving and communication systems (PACS) are built with old-type existing modalities with no DICOM output. One of the methods for interfacing them to the PACS is to implement video capture (frame grabber) equipment. This equipment takes the analog video signal output from medical imaging modalities, and the video signal is A/D converted and supplied to the PACS. In this report, we measured and evaluated the accuracy with which this video capture equipment could capture the image. From the physical evaluation, we found the pixel values of an original image and its captured image were almost equal in gray level from 20% to 90%. The change in the pixel values of a captured image was +/-3 on average. The change of gray level concentration was acceptable and had an average standard deviation of around 0.63. As for resolution, degradation was observed at the highest physical level. In a subjective evaluation, the evaluation value of the CT image had a grade of 2.81 on average (the same quality for a reference image was set to a grade of 3.0). Abnormalities in heads, chests, and abdomens were judged not to influence diagnostic accuracy. Some small differences were seen when comparing captured and reference images, but they are recognized as having no influence on the diagnoses.
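
    A brief sketch of the kind of pixel-level comparison reported above (illustrative only): given an original digital test image and its video-captured counterpart, compute the mean error and the standard deviation of the error inside the usable 20%-90% gray-level window. The test frames below are synthetic.

```python
import numpy as np

def capture_accuracy(original, captured, low=0.20, high=0.90, max_value=255):
    """Mean and standard deviation of the pixel-value error inside a gray-level window."""
    original = original.astype(np.float64)
    captured = captured.astype(np.float64)
    mask = (original >= low * max_value) & (original <= high * max_value)
    diff = captured[mask] - original[mask]
    return diff.mean(), diff.std()

# Hypothetical frames: a gray ramp and a slightly noisy, offset "captured" version.
rng = np.random.default_rng(2)
original = np.tile(np.linspace(0, 255, 256, dtype=np.uint8), (64, 1))
captured = np.clip(original + rng.normal(0.5, 0.6, original.shape), 0, 255)

mean_err, std_err = capture_accuracy(original, captured)
print(f"mean error {mean_err:+.2f}, std dev {std_err:.2f} gray levels")
```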

  6. Optimization of video capturing and tone mapping in video camera systems

    NARCIS (Netherlands)

    Cvetkovic, S.D.

    2011-01-01

    Image enhancement techniques are widely employed in many areas of professional and consumer imaging, machine vision and computational imaging. Image enhancement techniques used in surveillance video cameras are complex systems involving controllable lenses, sensors and advanced signal processing. In

  7. Video Lecture Capture Technology Helps Students Study without Affecting Attendance in Large Microbiology Lecture Courses?

    OpenAIRE

    McLean, Jennifer L.; Suchman, Erica L.

    2016-01-01

    Recording lectures using video lecture capture software and making them available for students to watch anytime, from anywhere, has become a common practice in many universities across many disciplines. The software has become increasingly easy to use and is commonly provided and maintained by higher education institutions. Several studies have reported that students use lecture capture to enhance their learning and study for assessments, as well as to catch up on material they miss when they...

  8. Integrating Video-Capture Virtual Reality Technology into a Physically Interactive Learning Environment for English Learning

    Science.gov (United States)

    Yang, Jie Chi; Chen, Chih Hung; Jeng, Ming Chang

    2010-01-01

    The aim of this study is to design and develop a Physically Interactive Learning Environment, the PILE system, by integrating video-capture virtual reality technology into a classroom. The system is designed for elementary school level English classes where students can interact with the system through physical movements. The system is designed to…

  9. Video Lecture Capture Technology Helps Students Study without Affecting Attendance in Large Microbiology Lecture Courses

    Directory of Open Access Journals (Sweden)

    Jennifer Lynn McLean

    2016-12-01

    Recording lectures using video lecture capture software and making them available for students to watch anytime, from anywhere, has become a common practice in many universities across many disciplines. The software has become increasingly easy to use and is commonly provided and maintained by higher education institutions. Several studies have reported that students use lecture capture to enhance their learning and study for assessments, as well as to catch up on material they miss when they cannot attend class due to extenuating circumstances. Furthermore, students with disabilities and students from non-English Speaking Backgrounds (NESB) may benefit from being able to watch the video lecture captures at their own pace. Yet, the effect of this technology on class attendance remains a controversial topic and largely unexplored in undergraduate microbiology education. Here, we show that when video lecture captures were available in our large-enrollment general microbiology courses, attendance did not decrease. In fact, the majority of students reported that having the videos available did not encourage them to skip class, but rather they used them as a study tool. When we surveyed NESB students and nontraditional students about their attitudes toward this technology, they found it helpful for their learning and for keeping up with the material.

  10. Video capture on student-owned mobile devices to facilitate psychomotor skills acquisition: A feasibility study.

    Science.gov (United States)

    Hinck, Glori; Bergmann, Thomas F

    2013-01-01

    Objective: We evaluated the feasibility of using mobile device technology to allow students to record their own psychomotor skills so that these recordings can be used for self-reflection and formative evaluation. Methods: Students were given the choice of using DVD recorders, zip drive video capture equipment, or their personal mobile phone, device, or digital camera to record specific psychomotor skills. During the last week of the term, they were asked to complete a 9-question survey regarding their recording experience, including details of mobile phone ownership, technology preferences, technical difficulties, and satisfaction with the recording experience and video critique process. Results: Of those completing the survey, 83% currently owned a mobile phone with video capability. Of the mobile phone owners, 62% reported having email capability on their phone and that they could transfer their video recording successfully to their computer, making it available for upload to the learning management system. Viewing the video recording of the psychomotor skill was valuable to 88% of respondents. Conclusions: Our results suggest that mobile phones are a viable technology to use for the video capture and critique of psychomotor skills, as most students own this technology and their satisfaction with this method is high.

  11. Video capture virtual reality as a flexible and effective rehabilitation tool

    Directory of Open Access Journals (Sweden)

    Katz Noomi

    2004-12-01

    Video capture virtual reality (VR) uses a video camera and software to track movement in a single plane without the need to place markers on specific bodily locations. The user's image is thereby embedded within a simulated environment such that it is possible to interact with animated graphics in a completely natural manner. Although this technology first became available more than 25 years ago, it is only within the past five years that it has been applied in rehabilitation. The objective of this article is to describe the way this technology works, to review its assets relative to other VR platforms, and to provide an overview of some of the major studies that have evaluated the use of video capture technologies for rehabilitation.

  12. Visual Self-Recognition in Mirrors and Live Videos: Evidence for a Developmental Asynchrony

    Science.gov (United States)

    Suddendorf, Thomas; Simcock, Gabrielle; Nielsen, Mark

    2007-01-01

    Three experiments (N = 123) investigated the development of live-video self-recognition using the traditional mark test. In Experiment 1, 24-, 30- and 36-month-old children saw a live video image of equal size and orientation as a control group saw in a mirror. The video version of the test was more difficult than the mirror version with only the…

  13. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    Science.gov (United States)

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  14. Implementation of nuclear material surveillance system based on the digital video capture card and counter

    International Nuclear Information System (INIS)

    Lee, Sang Yoon; Song, Dae Yong; Ko, Won Il; Ha, Jang Ho; Kim, Ho Dong

    2003-07-01

    In this paper, the implementation techniques of a nuclear material surveillance system based on a digital video capture board and a digital counter are described. The surveillance system to be developed consists of CCD cameras, neutron monitors, and a PC for data acquisition. To develop the system, the properties of the PCI-based capture board and counter were investigated, and the characteristics of the related SDK library were summarized. This report can be used by developers who want to build surveillance systems for various experimental environments based on DVRs and sensors using Borland C++ Builder.

  15. Live video monitoring robot controlled by web over internet

    Science.gov (United States)

    Lokanath, M.; Akhil Sai, Guruju

    2017-11-01

    The future is all about robots: robots can perform tasks where humans cannot. Robots have huge applications in military and industrial areas, for lifting heavy weights, for accurate placement, and for repeating the same task many times where humans are not efficient. In general, a robot is a mix of electronic, electrical and mechanical engineering and can perform tasks automatically on its own or under the supervision of humans. The camera is the eye of the robot; this robovision helps in monitoring security systems and can also reach places the human eye cannot. This paper presents the development of a live video streaming robot controlled from a website. We designed the web control for the robot to move left, right, forward and back while streaming video. As we move to the smart environment, or IoT (Internet of Things) of smart devices, the system developed here connects over the internet and can be operated with a smart mobile phone using a web browser. A Raspberry Pi model B chip acts as the heart of this robot; the motors and the R Pi 2 surveillance camera are connected to the Raspberry Pi.

  16. Comparison of Video and Live Modeling in Teaching Response Chains to Children with Autism

    Science.gov (United States)

    Ergenekon, Yasemin; Tekin-Iftar, Elif; Kapan, Alper; Akmanoglu, Nurgul

    2014-01-01

    Research has shown that video and live modeling are both effective in teaching new skills to children with autism. An adapted alternating treatments design was used to compare the effectiveness and efficiency of video and live modeling in teaching response chains to three children with autism. Each child was taught two chained skills; one skill…

  17. Deep learning for quality assessment in live video streaming

    NARCIS (Netherlands)

    Torres Vega, M.; Mocanu, D.C.; Famaey, J.; Stavrou, S.; Liotta, A.

    Video content providers put stringent requirements on the quality assessment methods realized on their services. They need to be accurate, real-time, adaptable to new content, and scalable as the video set grows. In this letter, we introduce a novel automated and computationally efficient video

  18. Re-living anatomy: medical student use of lecture capture

    OpenAIRE

    Diss, L; Sharp, A; Scott, F; Moore, L; Daniel, P; Memon, S; Smith, C

    2017-01-01

    Lecture capture resources have become commonplace within UK higher education to enhance and support learning in addition to the traditional lecture. These resources can be particularly useful for medical students in anatomy teaching, where time dedicated to anatomy within the curriculum has been reduced compared to previous generations(1). This study aimed to investigate how lecture capture aided student learning. Qualitative feedback was also collected in view to further improve the reso...

  19. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    OpenAIRE

    Steven Nicholas Graves, MA; Deana Saleh Shenaq, MD; Alexander J. Langerman, MD; David H. Song, MD, MBA

    2015-01-01

    Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to ...

  20. Examining the Effects of Video Modeling and Prompts to Teach Activities of Daily Living Skills.

    Science.gov (United States)

    Aldi, Catarina; Crigler, Alexandra; Kates-McElrath, Kelly; Long, Brian; Smith, Hillary; Rehak, Kim; Wilkinson, Lisa

    2016-12-01

    Video modeling has been shown to be effective in teaching a number of skills to learners diagnosed with autism spectrum disorders (ASD). In this study, we taught two young men diagnosed with ASD three different activities of daily living skills (ADLS) using point-of-view video modeling. Results indicated that both participants met criterion for all ADLS. Participants did not maintain mastery criterion at a 1-month follow-up, but did score above baseline at maintenance with and without video modeling. • Point-of-view video models may be an effective intervention to teach daily living skills. • Video modeling with handheld portable devices (Apple iPod or iPad) can be just as effective as video modeling with stationary viewing devices (television or computer). • The use of handheld portable devices (Apple iPod and iPad) makes video modeling accessible and possible in a wide variety of environments.

  1. Adaptive live multicast video streaming of SVC with UEP FEC

    Science.gov (United States)

    Lev, Avram; Lasry, Amir; Loants, Maoz; Hadar, Ofer

    2014-09-01

    Ideally, video streaming systems should provide the best quality video a user's device can handle without compromising on downloading speed. In this article, an improved video transmission system is presented which dynamically enhances the video quality based on a user's current network state and repairs errors from data lost in the video transmission. The system incorporates three main components: Scalable Video Coding (SVC) with three layers, multicast based on Receiver Layered Multicast (RLM), and an Unequal Forward Error Correction (FEC) algorithm. The SVC provides an efficient method for providing different levels of video quality, stored as enhancement layers. In the presented system, a proportional-integral-derivative (PID) controller was implemented to dynamically adjust the video quality, adding or subtracting quality layers as appropriate. In addition, an FEC algorithm was added to compensate for data lost in transmission. A two-dimensional FEC was used; the FEC algorithm came from the Pro MPEG code of practice #3 release 2. Several bit error scenarios (step function, cosine wave) with different bandwidth sizes and error values were simulated. The suggested scheme, which includes SVC video encoding with 3 layers over IP multicast with an unequal FEC algorithm, was investigated under different channel conditions, variable bandwidths and different bit error rates. The results indicate improvement of the video quality in terms of PSNR over previous transmission schemes.
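
    The abstract describes a PID controller that adds or removes SVC quality layers as network conditions change. A minimal, generic sketch of that idea follows; the controller gains, layer bitrates, and bandwidth trace are assumed and are not taken from the paper.

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt=1.0):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

class LayerAdapter:
    """Tracks a smoothed sending-rate target and maps it to an SVC layer count."""
    # Assumed cumulative bitrates (kbps) for base layer, +enhancement 1, +enhancement 2.
    CUMULATIVE_RATES = [500, 1500, 3000]

    def __init__(self, pid, target_utilization=0.8):
        self.pid = pid
        self.target_utilization = target_utilization
        self.rate_target = self.CUMULATIVE_RATES[0]   # start at the base layer

    def select(self, measured_bw):
        error = self.target_utilization * measured_bw - self.rate_target
        self.rate_target += self.pid.update(error)
        fitting = [i for i, r in enumerate(self.CUMULATIVE_RATES) if r <= self.rate_target]
        return max(fitting) if fitting else 0

adapter = LayerAdapter(PID(kp=0.5, ki=0.1, kd=0.05))
for bw in [4000, 3800, 1200, 900, 2600]:           # simulated available bandwidth, kbps
    print(f"bandwidth {bw:4d} kbps -> send SVC layers 0..{adapter.select(bw)}")
```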

  2. High-emulation mask recognition with high-resolution hyperspectral video capture system

    Science.gov (United States)

    Feng, Jiao; Fang, Xiaojing; Li, Shoufeng; Wang, Yongjin

    2014-11-01

    We present a method for distinguishing a human face from a high-emulation mask, which is increasingly used by criminals for activities such as stealing card numbers and passwords at ATMs. Traditional facial recognition techniques have difficulty detecting such camouflaged criminals. In this paper, we use a high-resolution hyperspectral video capture system to detect high-emulation masks. An RGB camera is used for traditional facial recognition. A prism and a grayscale camera are used to capture spectral information of the observed face. Experiments show that a mask made of silica gel has a different spectral reflectance from human skin. As a multispectral image offers additional spectral information about physical characteristics, a high-emulation mask can be easily recognized.
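
    A schematic sketch of the decision step implied above (the band set, reference spectra, and threshold are all assumed, not measured): the reflectance spectrum of a face region is compared against reference spectra for skin and for silica gel using a spectral-angle measure.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two reflectance vectors; smaller means more similar."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical mean reflectance values over a few visible/NIR bands (not measured data).
REFERENCE_SKIN = [0.25, 0.32, 0.38, 0.45, 0.55]
REFERENCE_SILICA_MASK = [0.40, 0.41, 0.42, 0.43, 0.44]

def classify_face(measured, threshold=0.08):
    d_skin = spectral_angle(measured, REFERENCE_SKIN)
    d_mask = spectral_angle(measured, REFERENCE_SILICA_MASK)
    if d_skin < d_mask and d_skin < threshold:
        return "real skin"
    return "suspected mask"

print(classify_face([0.26, 0.33, 0.37, 0.46, 0.54]))   # close to the skin reference
print(classify_face([0.41, 0.40, 0.43, 0.42, 0.45]))   # close to the flat mask spectrum
```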

  3. Advantages of Live Microscope Video for Laboratory and Teaching Applications

    Science.gov (United States)

    Michels, Kristin K.; Michels, Zachary D.; Hotchkiss, Sara C.

    2016-01-01

    Although spatial reasoning and penetrative thinking skills are essential for many disciplines, these concepts are difficult for students to comprehend. In microscopy, traditional educational materials (i.e., photographs) are static. Conversely, video-based training methods convey dimensionality. We implemented a real-time digital video imaging…

  4. How to implement live video recording in the clinical environment: A practical guide for clinical services.

    Science.gov (United States)

    Lloyd, Adam; Dewar, Alistair; Edgar, Simon; Caesar, Dave; Gowens, Paul; Clegg, Gareth

    2017-06-01

    The use of video in healthcare is becoming more common, particularly in simulation and educational settings. However, video recording live episodes of clinical care is far less routine. To provide a practical guide for clinical services to embed live video recording. Using Kotter's 8-step process for leading change, we provide a 'how to' guide to navigate the challenges required to implement a continuous video-audit system based on our experience of video recording in our emergency department resuscitation rooms. The most significant hurdles in installing continuous video audit in a busy clinical area involve change management rather than equipment. Clinicians are faced with considerable ethical, legal and data protection challenges which are the primary barriers for services that pursue video recording of patient care. Existing accounts of video use rarely acknowledge the organisational and cultural dimensions that are key to the success of establishing a video system. This article outlines core implementation issues that need to be addressed if video is to become part of routine care delivery. By focussing on issues such as staff acceptability, departmental culture and organisational readiness, we provide a roadmap that can be pragmatically adapted by all clinical environments, locally and internationally, that seek to utilise video recording as an approach to improving clinical care. © 2017 John Wiley & Sons Ltd.

  5. Using the Periscope Live Video-Streaming Application for Global Pathology Education: A Brief Introduction.

    Science.gov (United States)

    Fuller, Maren Y; Mukhopadhyay, Sanjay; Gardner, Jerad M

    2016-07-21

    Periscope is a live video-streaming smartphone application (app) that allows any individual with a smartphone to broadcast live video simultaneously to multiple smartphone users around the world. The aim of this review is to describe the potential of this emerging technology for global pathology education. To our knowledge, since the launch of the Periscope app (2015), only a handful of educational presentations by pathologists have been streamed as live video via Periscope. This review includes links to these initial attempts, a step-by-step guide for those interested in using the app for pathology education, and a summary of the pros and cons, including ethical/legal issues. We hope that pathologists will appreciate the potential of Periscope for sharing their knowledge, expertise, and research with a live (and potentially large) audience without the barriers associated with traditional video equipment and standard classroom/conference settings.

  6. A Simple FSPN Model of P2P Live Video Streaming System

    OpenAIRE

    Kotevski, Zoran; Mitrevski, Pece

    2011-01-01

    Peer-to-Peer (P2P) live streaming is a relatively new paradigm that aims at streaming live video to a large number of clients at low cost. Many such applications already exist in the market, but prior to creating such a system it is necessary to analyze its performance via a representative model that can provide good insight into the system's behavior. Modeling and performance analysis of P2P live video streaming systems is a challenging task which requires addressing many properties and issues of P2P s...

  7. Recognition of Indian Sign Language in Live Video

    Science.gov (United States)

    Singha, Joyeeta; Das, Karen

    2013-05-01

    Sign language recognition has emerged as one of the important areas of research in computer vision. The difficulty faced by researchers is that instances of signs vary in both motion and appearance. Thus, in this paper a novel approach for recognizing various alphabets of Indian Sign Language is proposed, where continuous video sequences of the signs are considered. The proposed system comprises three stages: preprocessing, feature extraction, and classification. The preprocessing stage includes skin filtering and histogram matching. Eigenvalues and eigenvectors are used for the feature extraction stage, and finally an eigenvalue-weighted Euclidean distance is used to recognize the sign. The system deals with bare hands, thus allowing the user to interact with it in a natural way. We considered 24 different alphabets in the video sequences and attained a success rate of 96.25%.
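
    As an illustration of the final classification stage described above (a generic eigen-feature sketch under invented data, not the authors' implementation): feature vectors are projected onto the eigenvectors of the training set and compared to class templates with a Euclidean distance weighted by the eigenvalues.

```python
import numpy as np

def eigen_basis(train_vectors, k=3):
    """Eigenvalues/eigenvectors of the training covariance matrix (largest k kept)."""
    X = np.asarray(train_vectors, float)
    cov = np.cov(X - X.mean(axis=0), rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:k]
    return vals[order], vecs[:, order], X.mean(axis=0)

def weighted_distance(a, b, eigenvalues):
    """Euclidean distance in eigenspace, weighting each axis by its eigenvalue."""
    return float(np.sqrt(np.sum(eigenvalues * (a - b) ** 2)))

# Hypothetical 4-dimensional hand-shape features for two sign classes.
rng = np.random.default_rng(3)
class_a = rng.normal([1.0, 0.2, 0.5, 0.1], 0.05, size=(20, 4))
class_b = rng.normal([0.2, 1.0, 0.1, 0.6], 0.05, size=(20, 4))

vals, vecs, mean = eigen_basis(np.vstack([class_a, class_b]))

def project(x):
    return (np.asarray(x, float) - mean) @ vecs

templates = {"A": project(class_a.mean(axis=0)), "B": project(class_b.mean(axis=0))}
query = project([0.95, 0.25, 0.45, 0.12])        # an unseen frame's feature vector
label = min(templates, key=lambda c: weighted_distance(query, templates[c], vals))
print(f"recognized sign: {label}")
```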

  8. Live lecture versus video podcast in undergraduate medical education: A randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Fukuta Junaid

    2010-10-01

    Background: Information technology is finding an increasing role in the training of medical students. We compared information recall and student experience and preference after live lectures and video podcasts in undergraduate medical education. Methods: We performed a crossover randomised controlled trial. 100 students were randomised to live lecture or video podcast for one clinical topic. Live lectures were given by the same instructor as the narrator of the video podcasts. The video podcasts comprised Powerpoint™ slides narrated using the same script as the lecture. They were then switched to the other group for a second clinical topic. Knowledge was assessed using multiple choice questions and qualitative information was collected using a questionnaire. Results: No significant difference was found on multiple choice questioning immediately after the session. The subjects enjoyed the convenience of the video podcast and the ability to stop, review and repeat it, but found it less engaging as a teaching method. They expressed a clear preference for the live lecture format. Conclusions: We suggest that video podcasts are not ready to replace traditional teaching methods, but may have an important role in reinforcing learning and aiding revision.

  9. Live lecture versus video podcast in undergraduate medical education: A randomised controlled trial.

    Science.gov (United States)

    Schreiber, Benjamin E; Fukuta, Junaid; Gordon, Fabiana

    2010-10-08

    Information technology is finding an increasing role in the training of medical students. We compared information recall and student experience and preference after live lectures and video podcasts in undergraduate medical education. We performed a crossover randomised controlled trial. 100 students were randomised to live lecture or video podcast for one clinical topic. Live lectures were given by the same instructor as the narrator of the video podcasts. The video podcasts comprised Powerpoint™ slides narrated using the same script as the lecture. They were then switched to the other group for a second clinical topic. Knowledge was assessed using multiple choice questions and qualitative information was collected using a questionnaire. No significant difference was found on multiple choice questioning immediately after the session. The subjects enjoyed the convenience of the video podcast and the ability to stop, review and repeat it, but found it less engaging as a teaching method. They expressed a clear preference for the live lecture format. We suggest that video podcasts are not ready to replace traditional teaching methods, but may have an important role in reinforcing learning and aiding revision.

  10. The interrupted learner: How distractions during live and video lectures influence learning outcomes.

    Science.gov (United States)

    Zureick, Andrew H; Burk-Rafel, Jesse; Purkiss, Joel A; Hortsch, Michael

    2017-11-27

    New instructional technologies have been increasingly incorporated into the medical school learning environment, including lecture video recordings as a substitute for live lecture attendance. The literature presents varying conclusions regarding how this alternative experience impacts students' academic success. Previously, a multi-year study of the first-year medical histology component at the University of Michigan found that live lecture attendance was positively correlated with learning success, while lecture video use was negatively correlated. Here, three cohorts of first-year medical students (N = 439 respondents, 86.6% response rate) were surveyed in greater detail regarding lecture attendance and video usage, focusing on study behaviors that may influence histology learning outcomes. Students who reported always attending lectures or viewing lecture videos had higher average histology scores than students who employed an inconsistent strategy (i.e., mixing live attendance and video lectures). Several behaviors were negatively associated with histology performance. Students who engaged in "non-lecture activities" (e.g., social media use), students who reported being interrupted while watching the lecture video, or feeling sleepy/losing focus had lower scores than their counterparts not engaging in these behaviors. This study suggests that interruptions and distractions during medical learning activities-whether live or recorded-can have an important impact on learning outcomes. Anat Sci Educ 00: 000-000. © 2017 American Association of Anatomists.

  11. Live lecture versus video-recorded lecture: are students voting with their feet?

    Science.gov (United States)

    Cardall, Scott; Krupat, Edward; Ulrich, Michael

    2008-12-01

    In light of educators' concerns that lecture attendance in medical school has declined, the authors sought to assess students' perceptions, evaluations, and motivations concerning live lectures compared with accelerated, video-recorded lectures viewed online. The authors performed a cross-sectional survey study of all first- and second-year students at Harvard Medical School. Respondents answered questions regarding their lecture attendance; use of class and personal time; use of accelerated, video-recorded lectures; and reasons for viewing video-recorded and live lectures. Other questions asked students to compare how well live and video-recorded lectures satisfied learning goals. Of the 353 students who received questionnaires, 204 (58%) returned responses. Collectively, students indicated watching 57.2% of lectures live, 29.4% recorded, and 3.8% using both methods. All students have watched recorded lectures, and most (88.5%) have used video-accelerating technologies. When using accelerated, video-recorded lecture as opposed to attending lecture, students felt they were more likely to increase their speed of knowledge acquisition (79.3% of students), look up additional information (67.7%), stay focused (64.8%), and learn more (63.7%). Live attendance remains the predominant method for viewing lectures. However, students find accelerated, video-recorded lectures equally or more valuable. Although educators may be uncomfortable with the fundamental change in the learning process represented by video-recorded lecture use, students' responses indicate that their decisions to attend lectures or view recorded lectures are motivated primarily by a desire to satisfy their professional goals. A challenge remains for educators to incorporate technologies students find useful while creating an interactive learning culture.

  12. Real-Time Human Detection for Aerial Captured Video Sequences via Deep Models

    Directory of Open Access Journals (Sweden)

    Nouar AlDahoul

    2018-01-01

    Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on utilizing handcrafted features, which are problem-dependent and optimal for specific tasks. Moreover, they are highly susceptible to dynamical events such as illumination changes, camera jitter, and variations in object sizes. On the other hand, the proposed feature learning approaches are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need of expert knowledge. In this paper, we utilize automatic feature learning methods which combine optical flow and three different deep models (i.e., a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine) for human detection in videos captured using a nonstatic camera on an aerial platform with varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset. The comparison between these models in terms of training, testing accuracy, and learning speed is analyzed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrated that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. Using a normal Central Processing Unit (CPU), H-ELM's training takes 445 seconds. Learning in S-CNN takes 770 seconds with a high-performance Graphical Processing Unit (GPU).

  13. 4kUHD H264 Wireless Live Video Streaming Using CUDA

    Directory of Open Access Journals (Sweden)

    A. O. Adeyemi-Ejeye

    2014-01-01

    Ultrahigh definition video streaming has been explored in recent years. Most recently, the possibility of 4kUHD video streaming over wireless 802.11n was presented, using pre-encoded video. Live encoding for streaming using x264 has proven to be very slow. The use of parallel encoding has been explored to speed up the process using CUDA. However, there has not been a parallel implementation for video streaming. We therefore present for the first time a novel implementation of 4kUHD live encoding for streaming over a wireless network at low bitrate indoors, using CUDA for parallel H264 encoding. Our experimental results are used to verify our claim.

  14. Video games, cinema, Bazin, and the myth of simulated lived experience

    Directory of Open Access Journals (Sweden)

    Mark J.P. Wolf

    2015-09-01

    Video game theory has advanced far enough that we can use it to reevaluate film theory as a result, en route to broader, transmedial theorizing. This essay looks particularly at how video games can be seen as participating in and advancing Andre Bazin's "Myth of Total Cinema", and perhaps recontextualizing it as the Myth of Simulated Lived Experience.

  15. Quality of Service Analysis of Live Video Streaming on the Universitas Telkom Local Network

    Directory of Open Access Journals (Sweden)

    Anggelina I Diwi

    2014-09-01

    Full Text Available Streaming is a technology that allows a file to be played directly, without waiting for the download to complete, and to run continuously without interruption. To deploy video streaming on a network, the available bandwidth must first be calculated in order to support the data transmission. Bandwidth is an important parameter for streaming over a network: the larger the available bandwidth, the better the quality of the displayed video. This study aims to determine the bandwidth required for a live video streaming service. The method used is direct measurement of network performance in the field, on the LAN of the Universitas Telkom campus. The streaming server-client implementation in this study uses different video files, based on the number of frames sent per second (fps). The streaming scenarios are run with varying background traffic to observe its effect on the network QoS parameters. The Quality of Service (QoS) performance of this live video streaming implementation is tested using the Wireshark network analyzer software. The results show that video with a frame rate greater than 15 fps also produces correspondingly large jitter and throughput.
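
    A minimal sketch of how the reported QoS indicators could be computed offline from a Wireshark capture of the streaming session; it is not the authors' analysis script, and the capture file name and the restriction to UDP packets are assumptions.

```python
from scapy.all import rdpcap, UDP

packets = rdpcap("stream.pcap")                 # hypothetical capture exported from Wireshark
udp = [p for p in packets if UDP in p]          # keep only the streaming traffic

duration = float(udp[-1].time - udp[0].time)
total_bytes = sum(len(p) for p in udp)
throughput_kbps = total_bytes * 8 / duration / 1000

gaps = [float(b.time - a.time) for a, b in zip(udp, udp[1:])]
mean_gap = sum(gaps) / len(gaps)
jitter_ms = sum(abs(g - mean_gap) for g in gaps) / len(gaps) * 1000

print(f"throughput: {throughput_kbps:.1f} kbps, mean jitter: {jitter_ms:.2f} ms")
```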

  16. The use of video capture virtual reality in burn rehabilitation: the possibilities.

    Science.gov (United States)

    Haik, Josef; Tessone, Ariel; Nota, Ayala; Mendes, David; Raz, Liat; Goldan, Oren; Regev, Elli; Winkler, Eyal; Mor, Elisheva; Orenstein, Arie; Hollombe, Ilana

    2006-01-01

    We independently explored the use of the Sony PlayStation II EyeToy (Sony Corporation, Foster City, CA) as a tool for use in the rehabilitation of patients with severe burns. Intensive occupational and physical therapy is crucial in minimizing and preventing long-term disability for the burn patient; however, the therapist faces a difficult challenge combating the agonizing pain experienced by the patient during therapy. The Sony PlayStation II EyeToy is a projected, video-capture system that, although initially developed as a gaming environment for children, may be a useful application in a rehabilitative context. As compared with other virtual reality systems the EyeToy is an efficient rehabilitation tool that is sold commercially at a relatively low cost. This report presents the potential advantages for use of the EyeToy as an innovative rehabilitative tool with mitigating effects on pain in burn rehabilitation. This new technology represents a challenging and motivating way for the patient to immerse himself or herself in an alternate reality while undergoing treatment, thereby reducing the pain and discomfort he or she experiences. This simple, affordable technique may prove to heighten the level of patient cooperation and therefore speed the process of rehabilitation and return of functional ability.

  17. Video surveillance captures student hand hygiene behavior, reactivity to observation, and peer influence in Kenyan primary schools.

    Directory of Open Access Journals (Sweden)

    Amy J Pickering

    Full Text Available In-person structured observation is considered the best approach for measuring hand hygiene behavior, yet is expensive, time consuming, and may alter behavior. Video surveillance could be a useful tool for objectively monitoring hand hygiene behavior if validated against current methods. Student hand cleaning behavior was monitored with video surveillance and in-person structured observation, both simultaneously and separately, at four primary schools in urban Kenya over a study period of 8 weeks. Video surveillance and in-person observation captured similar rates of hand cleaning (absolute difference <5%, p = 0.74). Video surveillance documented higher hand cleaning rates (71%) when at least one other person was present at the hand cleaning station, compared to when a student was alone (48%); rate ratio = 1.14 [95% CI 1.01-1.28]. Students increased hand cleaning rates during simultaneous video and in-person monitoring as compared to single-method monitoring, suggesting reactivity to each method of monitoring. This trend was documented at schools receiving a handwashing with soap intervention, but not at schools receiving a sanitizer intervention. Video surveillance of hand hygiene behavior yields results comparable to in-person observation among schools in a resource-constrained setting. Video surveillance also has certain advantages over in-person observation, including rapid data processing and the capability to capture new behavioral insights. Peer influence can significantly improve student hand cleaning behavior and, when possible, should be exploited in the design and implementation of school hand hygiene programs.

  18. Surgeon-Manipulated Live Surgery Video Recording Apparatuses: Personal Experience and Review of Literature.

    Science.gov (United States)

    Kapi, Emin

    2017-06-01

    Visual recording of surgical procedures is used quite frequently in plastic surgery practice. While presentations containing photographs are common in education seminars and congresses, video-containing presentations find more favour. For this reason, presenting surgical procedures as real-time video has increased, especially recently. Appropriate technical equipment for video recording is not available in most hospitals, so external apparatuses need to be set up in the operating room. Options include head-mounted video cameras, chest-mounted cameras, and tripod-mountable cameras. The head-mounted video camera is capable of capturing high-resolution, detailed close-up footage. The tripod-mountable camera enables video capture from a fixed point. Certain user-specific modifications can be made to overcome some of these restrictions; among these modifications, custom-made applications are one of the most effective solutions. This article presents the features of, and experiences with, a combination of a head- or chest-mounted action camera, a custom-made portable tripod apparatus with versatile features, and an underwater camera. The apparatuses described are easy to assemble, quickly installed, and inexpensive; they do not require specific technical knowledge and can be manipulated by the surgeon personally in all procedures. The author believes that, in the near future, video recording apparatuses will be integrated more into the operating room, become standard practice, and become easier for the surgeon to manipulate personally. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  19. Design and approach of the Living Organ Video Educated Donors (LOVED) program to promote living kidney donation in African Americans.

    Science.gov (United States)

    Sieverdes, John C; Price, Matthew; Ruggiero, Kenneth J; Baliga, Prabhakar K; Chavin, Kenneth D; Brunner-Jackson, Brenda; Patel, Sachin; Treiber, Frank A

    2017-10-01

    To describe the rationale, methodology, design, and interventional approach of a mobile health education program designed for African Americans with end-stage renal disease (ESRD) to increase knowledge and self-efficacy in approaching others about their need for a living donor kidney transplant (LDKT). The Living Organ Video Educated Donors (LOVED) program is a theory-guided, iteratively designed, mixed-methods study incorporating three phases: 1) a formative evaluation using focus groups to develop program content and approach; 2) a 2-month proof-of-concept trial (n=27) primarily investigating acceptability, tolerability, and increases in LDKT knowledge and self-efficacy; and 3) a 6-month, 2-arm, 60-person feasibility randomized controlled trial (RCT) primarily investigating increases in LDKT knowledge and self-efficacy and, secondarily, increases in the number of living donor inquiries, medical evaluations, and LDKTs. The 8-week LOVED program includes an interactive web-based app delivered on a 10″ tablet computer, incorporating weekly interactive video education modules, weekly group video chat sessions with an African American navigator who has had an LDKT, and other group interactions to provide support and improve strategies for promoting participants' need for a kidney. Phases 1 and 2 have been completed and the program is currently enrolling for the feasibility RCT. Phase 2 achieved a 100% retention rate, with 91% adherence in completing the video modules and 88% minimum adherence to the video chat sessions. We are in the early stages of an RCT to evaluate the LOVED program; to date, Phase 2 has shown high tolerability. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Directory of Open Access Journals (Sweden)

    Chen Homer H

    2007-01-01

    Full Text Available The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.

  1. A Complexity-Aware Video Adaptation Mechanism for Live Streaming Systems

    Science.gov (United States)

    Lu, Meng-Ting; Yao, Jason J.; Chen, Homer H.

    2007-12-01

    The paradigm shift of network design from performance-centric to constraint-centric has called for new signal processing techniques to deal with various aspects of resource-constrained communication and networking. In this paper, we consider the computational constraints of a multimedia communication system and propose a video adaptation mechanism for live video streaming of multiple channels. The video adaptation mechanism includes three salient features. First, it adjusts the computational resource of the streaming server block by block to provide a fine control of the encoding complexity. Second, as far as we know, it is the first mechanism to allocate the computational resource to multiple channels. Third, it utilizes a complexity-distortion model to determine the optimal coding parameter values to achieve global optimization. These techniques constitute the basic building blocks for a successful application of wireless and Internet video to digital home, surveillance, IPTV, and online games.
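
    A toy sketch of complexity-aware allocation across channels under an assumed complexity-distortion model D_i(c) = a_i / c; the model form, weights, and closed-form split are illustrative assumptions, not the paper's exact formulation.

```python
import math

def allocate_cycles(total_cycles, channel_weights):
    """Split total_cycles across channels to minimize the summed model distortion.

    Minimizing sum_i a_i / c_i subject to sum_i c_i = C gives
    c_i proportional to sqrt(a_i) (a standard Lagrange-multiplier argument).
    """
    roots = [math.sqrt(a) for a in channel_weights]
    scale = total_cycles / sum(roots)
    return [scale * r for r in roots]

# Three live channels with different content-complexity weights.
budget = allocate_cycles(total_cycles=1_000_000, channel_weights=[4.0, 1.0, 9.0])
print([round(c) for c in budget])   # harder channels receive more encoding cycles
```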

  2. Using Text Mining to Uncover Students' Technology-Related Problems in Live Video Streaming

    Science.gov (United States)

    Abdous, M'hammed; He, Wu

    2011-01-01

    Because of their capacity to sift through large amounts of data, text mining and data mining are enabling higher education institutions to reveal valuable patterns in students' learning behaviours without having to resort to traditional survey methods. In an effort to uncover live video streaming (LVS) students' technology-related problems and to…

  3. A New Motion Capture System For Automated Gait Analysis Based On Multi Video Sequence Analysis

    DEFF Research Database (Denmark)

    Jensen, Karsten; Juhl, Jens

    There is an increasing demand for assessing foot malpositions and an interest in monitoring the effect of treatment. In recent decades several different motion capture systems have been used. This abstract describes a new low-cost motion capture system.

  4. Capturing lived experiences in movement educational contexts through videographic participation and visual narratives

    DEFF Research Database (Denmark)

    Svendler Nielsen, Charlotte; Degerbøl, Stine Mikés

    visualizing and communicating the meaning-making of the participants and emphasizes the role of the researcher’s embodied involvement when ‘looking for lived experiences’. The paper exemplifies the use of videographic participation and presents (audio)visual narratives from two educational contexts: children...... of how meaning-making of the participants can be captured and disseminated through (audio)visual narratives....

  5. An Automatic Video Meteor Observation Using UFO Capture at the Showa Station

    Science.gov (United States)

    Fujiwara, Y.; Nakamura, T.; Ejiri, M.; Suzuki, H.

    2012-05-01

    The goal of our study is to clarify meteor activities in the southern hemisphere by continuous optical observations with video cameras with automatic meteor detection and recording at Syowa Station, Antarctica.

  6. Live lectures or online videos: students' resource choices in a first-year university mathematics module

    Science.gov (United States)

    Howard, Emma; Meehan, Maria; Parnell, Andrew

    2018-05-01

    In Maths for Business, a mathematics module for non-mathematics specialists, students are given the choice of completing the module content via short online videos, live lectures or a combination of both. In this study, we identify students' specific usage patterns with both of these resources and discuss their reasons for the preferences they exhibit. In 2015-2016, we collected quantitative data on each student's resource usage (attendance at live lectures and access of online videos) for the entire class of 522 students and employed model-based clustering which identified four distinct resource usage patterns with lectures and/or videos. We also collected qualitative data on students' perceptions of resource usage through a survey administered at the end of the semester, to which 161 students responded. The 161 survey responses were linked to each cluster and analysed using thematic analysis. Perceived benefits of videos include flexibility of scheduling and pace, and avoidance of large, long lectures. In contrast, the main perceived advantages of lectures are the ability to engage in group tasks, to ask questions, and to learn 'gradually'. Students in the two clusters with high lecture attendance achieved, on average, higher marks in the module.
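
    A small sketch in the spirit of the model-based clustering reported above, using a Gaussian mixture on a synthetic usage matrix; the feature columns, the number of components, and the random data are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical usage matrix: one row per student,
# columns = [fraction of lectures attended, fraction of videos watched].
usage = rng.uniform(0.0, 1.0, size=(522, 2))

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
labels = gmm.fit_predict(usage)

for k in range(4):
    members = usage[labels == k]
    print(f"cluster {k}: n={len(members)}, mean usage={members.mean(axis=0).round(2)}")
```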

  7. Teaching Daily Living Skills to Seven Individuals with Severe Intellectual Disabilities: A Comparison of Video Prompting to Video Modeling

    Science.gov (United States)

    Cannella-Malone, Helen I.; Fleming, Courtney; Chung, Yi-Cheih; Wheeler, Geoffrey M.; Basbagill, Abby R.; Singh, Angella H.

    2011-01-01

    We conducted a systematic replication of Cannella-Malone et al. by comparing the effects of video prompting to video modeling for teaching seven students with severe disabilities to do laundry and wash dishes. The video prompting and video modeling procedures were counterbalanced across tasks and participants and compared in an alternating…

  8. Enhancing the Dialogue in Simultaneous Class-Based and Live Video-Streamed Teaching

    DEFF Research Database (Denmark)

    Jelsbak, Vibe Alopaeus; Ørngreen, Rikke; Thorsen, Jonas

    2015-01-01

    teaching. This paper describes a work-in-progress project focused on developing possibilities for a more dialogue-based approach to live video-streamed teaching. We present our new setup and argue for educational designs which this is believed to support, and we outline the research design for collecting...... and analysing data. The first analysis and interpretations will be discussed at the ECEL 2015 conference poster session....

  9. Social learning in nest-building birds watching live-streaming video demonstrators.

    Science.gov (United States)

    Guillette, Lauren M; Healy, Susan D

    2018-02-13

    Determining the role that social learning plays in construction behaviours, such as nest building or tool manufacture, could be improved if more experimental control could be gained over the exact public information that is provided by the demonstrator, to the observing individual. Using video playback allows the experimenter to choose what information is provided, but will only be useful in determining the role of social learning if observers attend to, and learn from, videos in a manner that is similar to live demonstration. The goal of the current experiment was to test whether live-streamed video presentations of nest building by zebra finches Taeniopygia guttata would lead observers to copy the material choice demonstrated to them. Here, males that had not previously built a nest were given an initial preference test between materials of two colours. Those observers then watched live-stream footage of a familiar demonstrator building a nest with material of the colour that the observer did not prefer. After this experience, observers were given the chance to build a nest with materials of the two colours. Although two-thirds of the observer males preferred material of the demonstrated colour after viewing the demonstrator build a nest with material of that colour more than they had previously, their preference for the demonstrated material was not as strong as that of observers that had viewed live demonstrator builders in a previous experiment. Our results suggest researchers should proceed with caution before using video demonstration in tests of social learning. This article is protected by copyright. All rights reserved.

  10. Enhancing the Dialogue in Simultaneous Class-Based and Live Video-Streamed Teaching

    DEFF Research Database (Denmark)

    Jelsbak, Vibe Alopaeus; Bendsen, Thomas; Thorsen, Jonas

    Abstract: The bachelor programme in biomedical laboratory analysis at VIA University College in Aarhus has established a blended class concept which combines traditional and live broadcast teaching. 1-2 days a week students have the choice either to attend teaching sessions in the traditional way...... or to work from home via the Internet. In live video-streamed teaching classes teachers tend to choose one-way communication instead of dialogue. We know from our early findings that technology issues are one of the main reasons for this, since the same teachers use dialogue and discussions in traditional...

  11. Creating a Video Documentary as a Tool for Reflection and Assessment: Capturing Guerilla Marketing in Action

    Science.gov (United States)

    Lee, Seung Hwan; Hoffman, K. Douglas; Chowdhury, Shahin A.; Sergueeva, Ksenia

    2018-01-01

    In this exercise, students were asked to devise a guerilla marketing campaign that achieved the four primary goals of guerilla marketing: message, unconventionality, hedonics, and value. Students documented their guerilla marketing event (via a video documentary) and discussed how they achieved their four objectives using the concepts and theories…

  12. Using video-reflexive ethnography to capture the complexity of leadership enactment in the healthcare workplace.

    Science.gov (United States)

    Gordon, Lisi; Rees, Charlotte; Ker, Jean; Cleland, Jennifer

    2017-12-01

    Current theoretical thinking asserts that leadership should be distributed across many levels of healthcare organisations to improve the patient experience and staff morale. However, much healthcare leadership education focusses on the training and competence of individuals and little attention is paid to the interprofessional workplace and how its inherent complexities might contribute to the emergence of leadership. Underpinned by complexity theory, this research aimed to explore how interprofessional healthcare teams enact leadership at a micro-level through influential acts of organising. A whole (interprofessional) team workplace-based study utilising video-reflexive ethnography occurred in two UK clinical sites. Thematic framework analyses of the video data (video-observation and video-reflexivity sessions) were undertaken, followed by in-depth analyses of human-human and human-material interactions. Data analysis revealed a complex interprofessional environment where leadership is a dynamic process, negotiated and renegotiated in various ways throughout interactions (both formal and informal). Being able to "see" themselves at work gave participants the opportunity to discuss and analyse their everyday leadership practices and challenge some of their sometimes deeply entrenched values, beliefs, practices and assumptions about healthcare leadership. These study findings therefore indicate a need to redefine the way that medical and healthcare educators facilitate leadership development and argue for new approaches to research which shifts the focus from leaders to leadership.

  13. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    Science.gov (United States)

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
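
    A short numerical illustration of the aliasing effect the abstract discusses: a spoke frequency above the Nyquist limit of a 30 fps camera folds into the baseband, so the wheel appears to turn slowly backwards. The frame rate and spoke rate are illustrative values.

```python
import numpy as np

frame_rate = 30.0    # camera frames per second (Nyquist limit = 15 Hz)
spoke_rate = 28.0    # true spoke-passing frequency in Hz

# Phase advance of the wheel between consecutive frames, wrapped to (-pi, pi].
step = 2 * np.pi * spoke_rate / frame_rate
wrapped = np.angle(np.exp(1j * step))
apparent_rate = wrapped * frame_rate / (2 * np.pi)

print(f"true rate: {spoke_rate} Hz, apparent rate on video: {apparent_rate:+.1f} Hz")
# The 28 Hz motion aliases to about -2 Hz: the wheel seems to spin backwards.
```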

  14. A multiframe soft x-ray camera with fast video capture for the LSX field reversed configuration (FRC) experiment

    International Nuclear Information System (INIS)

    Crawford, E.A.

    1992-01-01

    Soft x-ray pinhole imaging has proven to be an exceptionally useful diagnostic for qualitative observation of impurity radiation from field reversed configuration plasmas. We used a four-frame device, similar in design to those discussed in an earlier paper [E. A. Crawford, D. P. Taggart, and A. D. Bailey III, Rev. Sci. Instrum. 61, 2795 (1990)] as a routine diagnostic during the last six months of the Large s Experiment (LSX) program. Our camera is an improvement over earlier implementations in several significant aspects. It was designed and used from the onset of the LSX experiments with a video frame capture system so that an instant visual record of the shot was available to the machine operator as well as facilitating quantitative interpretation of intensity information recorded in the images. The camera was installed in the end region of the LSX on axis approximately 5.5 m from the plasma midplane. Experience with bolometers on LSX showed serious problems with "particle dumps" at the axial location at various times during the plasma discharge. Therefore, the initial implementation of the camera included an effective magnetic sweeper assembly. Overall performance of the camera, video capture system, and sweeper is discussed.

  15. Recovery After Psychosis: Qualitative Study of Service User Experiences of Lived Experience Videos on a Recovery-Oriented Website.

    Science.gov (United States)

    Williams, Anne; Fossey, Ellie; Farhall, John; Foley, Fiona; Thomas, Neil

    2018-05-08

    Digital interventions offer an innovative way to make the experiences of people living with mental illness available to others. As part of the Self-Management And Recovery Technology (SMART) research program on the use of digital resources in mental health services, an interactive website was developed including videos of people with lived experience of mental illness discussing their recovery. These peer videos were designed to be watched on a tablet device with a mental health worker, or independently. Our aim was to explore how service users experienced viewing the lived experience videos on this interactive website, as well as its influence on their recovery journey. In total, 36 service users with experience of using the website participated in individual semistructured qualitative interviews. All participants had experience of psychosis. Data analysis occurred alongside data collection, following principles of constructivist grounded theory methodology. According to participants, engaging with lived experience videos was a pivotal experience of using the website. Participants engaged with peers through choosing and watching the videos and reflecting on their own experience in discussions that opened up with a mental health worker. Benefits of seeing others talking about their experience included "being inspired," "knowing I'm not alone," and "believing recovery is possible." Experiences of watching the videos were influenced by the participants' intrapersonal context, particularly their ways of coping with life and use of technology. The interpersonal context of watching the videos with a worker, who guided website use and facilitated reflection, enriched the experience. Engaging with lived experience videos was powerful for participants, contributing to their feeling connected and hopeful. Making websites with lived experience video content available to service users and mental health workers demonstrates strong potential to support service users' recovery

  16. GIFT-Grab: Real-time C++ and Python multi-channel video capture, processing and encoding API

    Directory of Open Access Journals (Sweden)

    Dzhoshkun Ismail Shakir

    2017-10-01

    Full Text Available GIFT-Grab is an open-source API for acquiring, processing and encoding video streams in real time. GIFT-Grab supports video acquisition using various frame-grabber hardware as well as from standard-compliant network streams and video files. The current GIFT-Grab release allows for multi-channel video acquisition and encoding at the maximum frame rate of supported hardware – 60 frames per second (fps). GIFT-Grab builds on well-established highly configurable multimedia libraries including FFmpeg and OpenCV. GIFT-Grab exposes a simplified high-level API, aimed at facilitating integration into client applications with minimal coding effort. The core implementation of GIFT-Grab is in C++11. GIFT-Grab also features a Python API compatible with the widely used scientific computing packages NumPy and SciPy. GIFT-Grab was developed for capturing multiple simultaneous intra-operative video streams from medical imaging devices. Yet due to the ubiquity of video processing in research, GIFT-Grab can be used in many other areas. GIFT-Grab is hosted and managed on the software repository of the Centre for Medical Image Computing (CMIC) at University College London, and is also mirrored on GitHub. In addition it is available for installation from the Python Package Index (PyPI) via the pip installation tool. Funding statement: This work was supported through an Innovative Engineering for Health award by the Wellcome Trust [WT101957], the Engineering and Physical Sciences Research Council (EPSRC) [NS/A000027/1] and a National Institute for Health Research Biomedical Research Centre UCLH/UCL High Impact Initiative. Sébastien Ourselin receives funding from the EPSRC (EP/H046410/1, EP/J020990/1, EP/K005278) and the MRC (MR/J01107X/1). Luis C. García-Peraza-Herrera is supported by the EPSRC-funded UCL Centre for Doctoral Training in Medical Imaging (EP/L016478/1).
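
    For orientation only, a minimal multi-channel capture-and-encode loop written against OpenCV; this is not the GIFT-Grab API, and the device indices, codec, and frame count are assumptions.

```python
import cv2

sources = [0, 1]                                   # two hypothetical capture devices
captures = [cv2.VideoCapture(idx) for idx in sources]
writers = []
for i, cap in enumerate(captures):
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) or 640
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) or 480
    fourcc = cv2.VideoWriter_fourcc(*"XVID")
    writers.append(cv2.VideoWriter(f"channel_{i}.avi", fourcc, 30.0, (width, height)))

try:
    for _ in range(300):                           # roughly 10 s at 30 fps
        for cap, writer in zip(captures, writers):
            ok, frame = cap.read()
            if ok:
                writer.write(frame)
finally:
    for cap in captures:
        cap.release()
    for writer in writers:
        writer.release()
```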

  17. Use of Video Modeling and Video Prompting Interventions for Teaching Daily Living Skills to Individuals with Autism Spectrum Disorders: A Review

    Science.gov (United States)

    Gardner, Stephanie; Wolfe, Pamela

    2013-01-01

    Identifying methods to increase the independent functioning of individuals with autism spectrum disorders (ASD) is vital in enhancing their quality of life; teaching students with ASD daily living skills can foster independent functioning. This review examines interventions that implement video modeling and/or prompting to teach individuals with…

  18. A methodology to leverage cross-sectional accelerometry to capture weather's influence in active living research.

    Science.gov (United States)

    Katapally, Tarun R; Rainham, Daniel; Muhajarine, Nazeem

    2016-06-27

    While active living interventions focus on modifying urban design and built environment, weather variation, a phenomenon that perennially interacts with these environmental factors, is consistently underexplored. This study's objective is to develop a methodology to link weather data with existing cross-sectional accelerometry data in capturing weather variation. Saskatoon's neighbourhoods were classified into grid-pattern, fractured grid-pattern and curvilinear neighbourhoods. Thereafter, 137 Actical accelerometers were used to derive moderate to vigorous physical activity (MVPA) and sedentary behaviour (SB) data from 455 children in 25 sequential one-week cycles between April and June, 2010. This sequential deployment was necessary to overcome the difference in the ratio between the sample size and the number of accelerometers. A data linkage methodology was developed, where each accelerometry cycle was matched with localized (Saskatoon-specific) weather patterns derived from Environment Canada. Statistical analyses were conducted to depict the influence of urban design on MVPA and SB after factoring in localized weather patterns. Integration of cross-sectional accelerometry with localized weather patterns allowed the capture of weather variation during a single seasonal transition. Overall, during the transition from spring to summer in Saskatoon, MVPA increased and SB decreased during warmer days. After factoring in localized weather, a recurring observation was that children residing in fractured grid-pattern neighbourhoods accumulated significantly lower MVPA and higher SB. The proposed methodology could be utilized to link globally available cross-sectional accelerometry data with place-specific weather data to understand how built and social environmental factors interact with varying weather patterns in influencing active living.
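
    A minimal sketch of the kind of data linkage described above: per-day activity summaries joined to station weather records on date, then summarized by temperature band. File names, column names, and the bands are illustrative assumptions.

```python
import pandas as pd

# Hypothetical per-child, per-day activity summaries derived from accelerometry.
activity = pd.read_csv("activity_daily.csv", parse_dates=["date"])
# Hypothetical daily weather export for the same city and period.
weather = pd.read_csv("weather_daily.csv", parse_dates=["date"])

linked = activity.merge(
    weather[["date", "mean_temp_c", "total_precip_mm"]],
    on="date",
    how="left",
)

# Example summary: mean MVPA minutes by temperature band.
linked["temp_band"] = pd.cut(linked["mean_temp_c"], bins=[-10, 5, 15, 30])
print(linked.groupby("temp_band", observed=True)["mvpa_minutes"].mean())
```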

  19. The Role of Live Video Capture Production in the Development of Student Communication Skills

    Science.gov (United States)

    O'Donoghue, Michael; Cochrane, Tom A.

    2010-01-01

    Civil and natural resources engineering students at the University of Canterbury, New Zealand, take specific courses requiring small group research projects and the presentation of findings to staff and peers. Although one of the aims of these presentations is to assist in the development of the students' communication skills, staff have raised…

  20. Promoting Reflexive Thinking and Adaptive Expertise through Video Capturing to Challenge Postgraduate Primary Student Teachers to Think, Know, Feel, and Act Like a Teacher

    Science.gov (United States)

    Sexton, Steven S.; Williamson-Leadley, Sandra

    2017-01-01

    This article reports on a study of how a 1-year, course-taught, master's level initial teacher education (ITE) program challenged primary student teachers (n = 4) in developing their sense of self-as-teacher. This study examined how the program's incorporation of video capturing technology impacted on these student teachers' development of…

  1. Anatomical knowledge gain through a clay-modeling exercise compared to live and video observations.

    Science.gov (United States)

    Kooloos, Jan G M; Schepens-Franke, Annelieke N; Bergman, Esther M; Donders, Rogier A R T; Vorstenbosch, Marc A T M

    2014-01-01

    Clay modeling is increasingly used as a teaching method other than dissection. The haptic experience during clay modeling is supposed to correspond to the learning effect of manipulations during exercises in the dissection room involving tissues and organs. We questioned this assumption in two pretest-post-test experiments. In these experiments, the learning effects of clay modeling were compared to either live observations (Experiment I) or video observations (Experiment II) of the clay-modeling exercise. The effects of learning were measured with multiple choice questions, extended matching questions, and recognition of structures on illustrations of cross-sections. Analysis of covariance with pretest scores as the covariate was used to elaborate the results. Experiment I showed a significantly higher post-test score for the observers, whereas Experiment II showed a significantly higher post-test score for the clay modelers. This study shows that (1) students who perform clay-modeling exercises show less gain in anatomical knowledge than students who attentively observe the same exercise being carried out and (2) performing a clay-modeling exercise is better in anatomical knowledge gain compared to the study of a video of the recorded exercise. The most important learning effect seems to be the engagement in the exercise, focusing attention and stimulating time on task. © 2014 American Association of Anatomists.

  2. Live Video Classroom Observation: An Effective Approach to Reducing Reactivity in Collecting Observational Information for Teacher Professional Development

    Science.gov (United States)

    Liang, Jiwen

    2015-01-01

    This paper examines the significance of live video classroom observations of teaching practice to reduce reactivity (the observer effect) so as to obtain more credible observational information for teacher professional development in a secondary school in the largest city in southern China. Although much has been discussed regarding the use of…

  3. Two Dimensional Array Based Overlay Network for Balancing Load of Peer-to-Peer Live Video Streaming

    Science.gov (United States)

    Faruq Ibn Ibrahimy, Abdullah; Rafiqul, Islam Md; Anwar, Farhat; Ibn Ibrahimy, Muhammad

    2013-12-01

    Live video data is usually streamed over a tree-based or a mesh-based overlay network. If a peer with additional upload bandwidth departs, the overlay network becomes very vulnerable to churn. In this paper, a two dimensional array-based overlay network is proposed for streaming live video data. Because there is always a peer or a live video streaming server available to upload the live video stream data, the overlay network is very stable and very robust to churn. Peers are placed according to their upload and download bandwidth, which enhances load balance and performance. The overlay network utilizes the additional upload bandwidth of peers to minimize chunk delivery delay and to maximize load balance. The procedure used for distributing the additional upload bandwidth of the peers distributes it to heterogeneous-strength peers in a fair treat distribution approach and to homogeneous-strength peers in a uniform distribution approach. The proposed overlay network has been simulated with QualNet from Scalable Network Technologies and results are presented in this paper.

  4. Live Lectures or Online Videos: Students' Resource Choices in a First-Year University Mathematics Module

    Science.gov (United States)

    Howard, Emma; Meehan, Maria; Parnell, Andrew

    2018-01-01

    In "Maths for Business", a mathematics module for non-mathematics specialists, students are given the choice of completing the module content via short online videos, live lectures or a combination of both. In this study, we identify students' specific usage patterns with both of these resources and discuss their reasons for the…

  5. Two Dimensional Array Based Overlay Network for Balancing Load of Peer-to-Peer Live Video Streaming

    International Nuclear Information System (INIS)

    Ibrahimy, Abdullah Faruq Ibn; Rafiqul, Islam Md; Anwar, Farhat; Ibrahimy, Muhammad Ibn

    2013-01-01

    Live video data is usually streamed over a tree-based or a mesh-based overlay network. If a peer with additional upload bandwidth departs, the overlay network becomes very vulnerable to churn. In this paper, a two dimensional array-based overlay network is proposed for streaming live video data. Because there is always a peer or a live video streaming server available to upload the live video stream data, the overlay network is very stable and very robust to churn. Peers are placed according to their upload and download bandwidth, which enhances load balance and performance. The overlay network utilizes the additional upload bandwidth of peers to minimize chunk delivery delay and to maximize load balance. The procedure used for distributing the additional upload bandwidth of the peers distributes it to heterogeneous-strength peers in a fair treat distribution approach and to homogeneous-strength peers in a uniform distribution approach. The proposed overlay network has been simulated with QualNet from Scalable Network Technologies and results are presented in this paper.
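
    A small sketch of placing peers into a two-dimensional overlay array keyed by upload bandwidth (rows) and download bandwidth (columns); the bucket boundaries and the placement rule are illustrative assumptions, not the paper's exact procedure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Peer:
    name: str
    upload_kbps: int
    download_kbps: int

@dataclass
class OverlayGrid:
    row_edges: List[int]                       # upload-bandwidth bucket boundaries
    col_edges: List[int]                       # download-bandwidth bucket boundaries
    cells: Dict[Tuple[int, int], List[Peer]] = field(default_factory=dict)

    def _bucket(self, value: int, edges: List[int]) -> int:
        return sum(value >= edge for edge in edges)

    def place(self, peer: Peer) -> Tuple[int, int]:
        cell = (self._bucket(peer.upload_kbps, self.row_edges),
                self._bucket(peer.download_kbps, self.col_edges))
        self.cells.setdefault(cell, []).append(peer)
        return cell

grid = OverlayGrid(row_edges=[512, 1024, 2048], col_edges=[1024, 4096])
for p in [Peer("a", 300, 900), Peer("b", 1500, 5000), Peer("c", 4000, 8000)]:
    print(p.name, "->", grid.place(p))
```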

  6. Video-Recorded Validation of Wearable Step Counters under Free-living Conditions.

    Science.gov (United States)

    Toth, Lindsay P; Park, Susan; Springer, Cary M; Feyerabend, McKenzie D; Steeves, Jeremy A; Bassett, David R

    2018-06-01

    The purpose of this study was to determine the accuracy of 14 step counting methods under free-living conditions. Twelve adults (mean ± SD age, 35 ± 13 yr) wore a chest harness that held a GoPro camera pointed down at the feet during all waking hours for 1 d. The GoPro continuously recorded video of all steps taken throughout the day. Simultaneously, participants wore two StepWatch (SW) devices on each ankle (all programmed with different settings), one activPAL on each thigh, four devices at the waist (Fitbit Zip, Yamax Digi-Walker SW-200, New Lifestyles NL-2000, and ActiGraph GT9X (AG)), and two devices on the dominant and nondominant wrists (Fitbit Charge and AG). The GoPro videos were downloaded to a computer and researchers counted steps using a hand tally device, which served as the criterion method. The SW devices recorded between 95.3% and 102.8% of actual steps taken throughout the day (P > 0.05). Eleven step counting methods estimated less than 100% of actual steps; Fitbit Zip, Yamax Digi-Walker SW-200, and AG with the moving average vector magnitude algorithm on both wrists recorded 71% to 91% of steps (P > 0.05), whereas the activPAL, New Lifestyles NL-2000, and AG (without low-frequency extension (no-LFE), moving average vector magnitude) worn on the hip, and Fitbit Charge recorded 69% to 84% of steps (P < 0.05), whereas the AG (LFE) on both wrists and the hip recorded 128% to 220% of steps (P < 0.05). Across all waking hours of 1 d, step counts differ between devices. The SW, regardless of settings, was the most accurate method of counting steps.

  7. An Analysis of Quality of Service (QoS In Live Video Streaming Using Evolved HSPA Network Media

    Directory of Open Access Journals (Sweden)

    Achmad Zakaria Azhar

    2016-10-01

    Full Text Available Evolved High Speed Packet Access (HSPA+) is a mobile telecommunication system technology and an evolution of HSPA. It provides a packet-data-based service with downlink speeds up to 21.1 Mbps and uplink speeds up to 11.5 Mbps in a 5 MHz bandwidth. The technology is expected to support information needs involving all aspects of multimedia such as video and audio, especially live video streaming, making it easier to communicate information in real time, for example monitoring the situation at a house, news coverage of a particular area, and other events. This thesis aims to identify and test the Quality of Service (QoS) performance of a network used for live video streaming, using the parameters throughput, delay, jitter, and packet loss. The data traffic of the live video streaming network was monitored with the Wireshark network analyzer. The test results show that the average throughput of provider B is 5,295 Kbps higher than that of provider A, the average delay of provider B is 0.618 ms lower than that of provider A, the average jitter of provider B is 0.420 ms lower than that of provider A, and the average packet loss of provider B is 0.451% lower than that of provider A.

  8. Mobile Video in Everyday Social Interactions

    Science.gov (United States)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. Internet connectivity of mobile phones enables fluent sharing of captured material even real-time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera in the social environment, everyday life situations, mainly based on a study where four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours, relating to real-time mobile video communication and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their special characteristics, live video being used as a virtual window between places whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between participating persons in all roles, largely due to the widely spreading possibilities of videos. Video in a social situation affects cameramen (who record), targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations) but also the other way around, the participants affect the video by their varying and evolving personal and communicational motivations for recording.

  9. Remediating childhood recollection: facilitating intermedial theatre based on lived-experience, recollection and remediation of digital video

    OpenAIRE

    Kelly, Jeremy

    2016-01-01

    This paper offers a critically informed report examining ways in which nondirective pedagogy can be an effective learning agency for Level 5 and 6 undergraduate performance makers. I draw on two case studies to illustrate different themes for student devised intermedial practice – one, Gardens Of Eden, is a response to a Bible Class the other, Together Again is a re-framing and remediation of family videos with live performer. The examples given are developed through a nondirective pedagogica...

  10. What speakers do and what addressees look at: Visual attention to gestures in human interaction live and on video

    OpenAIRE

    Gullberg, Marianne; Holmqvist, Kenneth

    2006-01-01

    This study investigates whether addressees visually attend to speakers’ gestures in interaction and whether attention is modulated by changes in social setting and display size. We compare a live face-to-face setting to two video conditions. In all conditions, the face dominates as a fixation target and only a minority of gestures draw fixations. The social and size parameters affect gaze mainly when combined and in the opposite direction from the predicted with fewer gestures fixated on vide...

  11. Does methodology matter in eyewitness identification research? The effect of live versus video exposure on eyewitness identification accuracy.

    Science.gov (United States)

    Pozzulo, Joanna D; Crescini, Charmagne; Panton, Tasha

    2008-01-01

    The present study examined the effect of mode of target exposure (live versus video) on eyewitness identification accuracy. Adult participants (N=104) were exposed to a staged crime that they witnessed either live or on videotape. Participants were then asked to rate their stress and arousal levels prior to being presented with either a target-present or -absent simultaneous lineup. Across target-present and -absent lineups, mode of target exposure did not have a significant effect on identification accuracy. However, mode of target exposure was found to have a significant effect on stress and arousal levels. Participants who witnessed the crime live had higher levels of stress and arousal than those who were exposed to the videotaped crime. A higher level of arousal was significantly related to poorer identification accuracy for those in the video condition. For participants in the live condition however, stress and arousal had no effect on eyewitness identification accuracy. Implications of these findings in regards to the generalizability of laboratory-based research on eyewitness testimony to real-life crime are discussed.

  12. Data from: Acquired versus innate prey capturing skills in super-precocial live-bearing fish

    NARCIS (Netherlands)

    Lankheet, M.J.M.; Stoffers, Twan; Leeuwen, van J.L.; Pollux, B.J.A.

    2016-01-01

    Live-bearing fish start hunting for mobile prey within hours after birth, an example of extreme precociality. Because prenatal, in utero, development of this behaviour is constrained by the lack of free-swimming sensory-motor interactions, immediate success after birth depends on innate,

  13. Acquired versus innate prey capturing skills in super-precocial live-bearing fish

    NARCIS (Netherlands)

    Lankheet, Martin J.; Stoffers, Twan; Leeuwen, van Johan L.; Pollux, Bart J.A.

    2016-01-01

    Live-bearing fish start hunting for mobile prey within hours after birth, an example of extreme precociality. Because prenatal, in utero, development of this behaviour is constrained by the lack of free-swimming sensory-motor interactions, immediate success after birth depends on innate,

  14. The everyday lives of video game developers: Experimentally understanding underlying systems/structures

    Directory of Open Access Journals (Sweden)

    Casey O'Donnell

    2009-03-01

    Full Text Available This essay examines how tensions between work and play for video game developers shape the worlds they create. The worlds of game developers, whose daily activity is linked to larger systems of experimentation and technoscientific practice, provide insights that transcend video game development work. The essay draws on ethnographic material from over 3 years of fieldwork with video game developers in the United States and India. It develops the notion of creative collaborative practice based on work in the fields of science and technology studies, game studies, and media studies. The importance of, the desire for, or the drive to understand underlying systems and structures has become fundamental to creative collaborative practice. I argue that the daily activity of game development embodies skills fundamental to creative collaborative practice and that these capabilities represent fundamental aspects of critical thought. Simultaneously, numerous interests have begun to intervene in ways that endanger these foundations of creative collaborative practice.

  15. Narratives of health and illness: Arts-based research capturing the lived experience of dementia.

    Science.gov (United States)

    Moss, Hilary; O'Neill, Desmond

    2017-01-01

    Introduction: This paper presents three artists' residencies in a geriatric medicine unit in a teaching hospital. The aim of the residencies was creation of new work of high artistic quality reflecting the lived experience of the person with dementia and greater understanding of service user experience of living with dementia. This paper also explores arts-based research methodologies in a medical setting. Method: Arts-based research and narrative enquiry were the methods used in this study. Artists had extensive access to service users with dementia, family carers and clinical team. Projects were created through collaboration between clinical staff, arts and health director, artist, patients and family carers. Each performance was accompanied by a public seminar discussing dementia. Evaluations were undertaken following each residency. The process of creating artistic responses to dementia is outlined, presented and discussed. Results: The artworks were well received with repeat performances and exhibitions requested. Evaluations of each residency indicated increased understanding of dementia. The narratives within the artworks aided learning about dementia. The results are a new chamber music composition, a series of visual artworks created collaboratively between visual artist and patients and family carers and a dance film inspired by a dancer's residency, all created through narrative enquiry. These projects support the role of arts-based research as creative process and qualitative research method which contributes to illuminating and exploring the lived experience of dementia. The arts act as a reflective tool for learning and understanding a complex health condition, as well as creating opportunities for increased understanding and public awareness of dementia. Issues arising in arts-based research in medical settings are highlighted, including ethical issues, the importance of service user narrative and multidisciplinary collaboration in arts and health

  16. A pilot project in distance education: nurse practitioner students' experience of personal video capture technology as an assessment method of clinical skills.

    Science.gov (United States)

    Strand, Haakan; Fox-Young, Stephanie; Long, Phil; Bogossian, Fiona

    2013-03-01

    This paper reports on a pilot project aimed at exploring postgraduate distance students' experiences using personal video capture technology to complete competency assessments in physical examination. A pre-intervention survey gathered demographic data from nurse practitioner students (n=31) and measured their information communication technology fluency. Subsequently, thirteen (13) students were allocated a hand-held video camera to use in their clinical setting. Those participating in the trial completed a post-intervention survey and further data were gathered using semi-structured interviews. Data were analysed by descriptive statistics and deductive content analysis, and the Unified Theory of Acceptance and Use of Technology (Venkatesh et al., 2003) was used to guide the project. Uptake of the intervention was high (93%) as students recognised the potential benefit. Students were video recorded while performing physical examinations. They described high levels of stress and some anxiety, which decreased rapidly while assessment was underway. Barriers experienced were in the areas of facilitating conditions (technical character e.g. upload of files) and social influence (e.g. local ethical approval). Students valued the opportunity to reflect on their recorded performance with their clinical mentors and by themselves. This project highlights the demands and difficulties of introducing technology to support work-based learning. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Monitoring of processes with gamma-rays of neutron capture and short-living radionuclides

    International Nuclear Information System (INIS)

    Aripov, G.A.; Kurbanov, B.I.; Allamuratova, G.

    2004-01-01

    Element content is a fundamental parameter of a substance, on which all its properties, as well as the character of physical, chemical, biological, technological and ecological processes, depend. Monitoring of element content (in the course of a technological process - on line; in natural conditions - in situ; or in living organisms - in vivo) therefore becomes necessary for investigating such processes. This problem can be successfully solved by using the methods of prompt gamma activation analysis (PGAA) and instrumental neutron activation analysis (INAA) of short-lived radionuclides. These methods do not depend on the type of substance (biological, geological, technological, etc.), since the content is determined from the gamma radiation of nuclei, and they meet the serious requirement of minimal irradiation of the object and minimal residual activity. In this work, minimum determinable concentrations of various elements are estimated (based on experimental data) for PGAA using a 252Cf radionuclide neutron source with a yield of 10⁸ neutrons/sec on an experimental device with preliminary focusing of neutrons /1/, and also from data on the determination of elements by their isotopes with maximum time efficiency /2,3/ by INAA. (author)

  18. Viscous dipping, application to the capture of fluids in living organisms

    Science.gov (United States)

    Lechantre, Amandine; Michez, Denis; Damman, Pascal

    Some insects, birds and mammals use flower nectar as their energy resources. For this purpose, they developed specific skills to ingest viscous fluids. Depending on the sugar content, i.e., the viscosity, different strategies are observed in vivo. Indeed, butterflies use simple suction for low viscosity nectars; hummingbirds have a tongue made from two thin flexible sheets that bend to form a tube when immersed in a fluid; other animals exhibit in contrast complex papillary structures. We focus on this last method generally used for very viscous nectars. More specifically, bees and bats possess a tongue decorated with microstructures that, according to biologists, would be optimized for fluid capture by viscous dipping. In this talk, we will discuss this assumption by comparing physical models of viscous dipping to in vivo measurements. To mimic the tongue morphology, we used various rod shapes obtained by 3D printing. The influence of the type and size of lateral microstructures was then investigated and used to build a global framework describing viscous dipping for structured rods/tongues.

  19. Mirroring the videos of Anonymous:cloud activism, living networks, and political mimesis

    OpenAIRE

    Fish, Adam Richard

    2016-01-01

    Mirrors describe the multiplication of data across a network. In this article, I examine the politics of mirroring as practiced on videos by the hacktivist network Anonymous. Mirrors are designed to retain visibility on social media platforms and motivate viewers towards activism. They emerge from a particular social structure and propagate a specific symbolic system. Furthermore, mirrors are not exact replicas nor postmodern representations. Rather, mirroring maps a contestation over visibil...

  20. Video Modeling for Teaching Daily Living Skills to Children with Autism Spectrum Disorder: A Pilot Study

    Science.gov (United States)

    Meister, Christine; Salls, Joyce

    2015-01-01

    This pilot study investigated the efficacy of point-of-view video modeling as an intervention strategy to improve self-help skills in children with autism spectrum disorder (ASD). A single-subject A-B design was implemented with eight school-aged children ages 7.5 years to 13.5 years. Six of the students participated in general education classes…

  1. Compliance with dental treatment recommendations by rural paediatric patients after a live-video teledentistry consultation: A preliminary report.

    Science.gov (United States)

    McLaren, Sean W; Kopycka-Kedzierawski, Dorota T

    2016-04-01

    The purpose of this research was to assess the compliance rate with recommended dental treatment by rural paediatric dental patients after a live-video teledentistry consultation. A retrospective dental chart review was completed for 251 rural paediatric patients from the Finger Lakes region of New York State who had an initial teledentistry appointment with a paediatric dentist located remotely at the Eastman Institute for Oral Health in Rochester, NY. The recommended treatment modalities were tabulated and comprehensive dental treatment completion rates were obtained. The recommended treatment modality options of: treatment in the paediatric dental clinic; treatment using nitrous oxide anxiolysis; treatment with oral sedation; treatment in the operating room with general anaesthesia; or teleconsultation were identified for the 251 patients. Compliance rates for completed dental treatment based on initial teleconsultation recommendations were: 100% for treatment in the paediatric dental clinic; 56% for nitrous oxide patients; 87% for oral sedation; 93% for operating room; and 90% for teleconsultations. The differences in the compliance rates for all treatment modalities were not statistically significant (Fisher's exact test, p > 0.05). Compliance rates for completed comprehensive dental treatment for this rural population of paediatric dental patients were quite high, ranging from 56% to 100%, and tended to be higher when treatment was completed in fewer visits. Live-video teledentistry consultations conducted among rural paediatric patients and a paediatric dentist in the specialty clinic were feasible options for increasing dental treatment compliance rates when treating complex paediatric dental cases. © The Author(s) 2015.

  2. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to ov...

  3. System architecture for ubiquitous live video streaming in university network environment

    CSIR Research Space (South Africa)

    Dludla, AG

    2013-09-01

    Full Text Available This paper presents an architecture which supports ubiquitous live streaming for university or campus networks, using a modified Bluetooth inquiry mechanism with extended ID, integrated end-user device usage and adaptation to heterogeneous networks. Riding on that architecture...

  4. LIVE AUTHORITY IN THE CLASSROOM IN VIDEO CONFERENCE-BASED SYNCHRONOUS DISTANCE EDUCATION: The Teaching Assistant

    Directory of Open Access Journals (Sweden)

    Hasan KARAL

    2010-07-01

    Full Text Available The aim of this study was to define the role of the assistant in a classroom environment where students are taught using video conference-based synchronous distance education. Qualitative research approach was adopted and, among purposeful sampling methods, criterion sampling method was preferred in the scope of the study. The study was carried out during the spring semester of the 2008-2009 academic years. A teaching assistant and a total of 9 sophomore or senior students from the Department of City and Regional Development, Faculty of Architecture, Karadeniz Technical University, participated as subjects. The students included in the study sampling were taking lessons from the Middle East Technical University on the basis of synchronous distance education. Among the qualitative research methods, case study method was used and the study data were obtained from the semi-structured interview and observation results. Study data were analyzed with descriptive analysis methods. Data obtained at the end of the study were found to support the suggestion that there should be an authority in the video conference-based synchronous distance education. Findings obtained during the interviews made with the students revealed that some of the teacher’s classroom management related responsibilities are transferred to the assistant present in the classroom during the synchronous distance education. It was concluded at the end of the interviews that a teaching assistant’s presence should be obligatory in the undergraduate synchronous distance classroom environment. However, it was also concluded that there may not be any need for an authority in the classroom environment at the postgraduate education level due to the profile and expectations of the student, which differ from those of students at lower educational levels.

  5. Understanding Motion Capture for Computer Animation

    CERN Document Server

    Menache, Alberto

    2010-01-01

    The power of today's motion capture technology has taken animated characters and special effects to amazing new levels of reality. And with the release of blockbusters like Avatar and Tintin, audiences continually expect more from each new release. To live up to these expectations, film and game makers, particularly technical animators and directors, need to be at the forefront of motion capture technology. In this extensively updated edition of Understanding Motion Capture for Computer Animation and Video Games, an industry insider explains the latest research developments in digital design

  6. The Role of Books, Television, Computers and Video Games in Children's Day to Day Lives.

    Science.gov (United States)

    Welch, Alicia J.

    A study assessed the role of various mass media in the day-to-day lives of school-aged children. Research questions dealt with the nature of children's media experiences at home, how use of media impacts school activities, the social context of media use, interior responses to different media, and whether gender or socioeconomic differences among…

  7. Neutron-captures in Low Mass Stars and the Early Solar System Record of Short-lived Radioactivities

    Science.gov (United States)

    Busso, Maurizio; Vescovi, Diego; Trippella, Oscar; Palmerini, Sara; Cristallo, Sergio; Piersanti, Luciano

    2018-01-01

    Noticeable improvements were recently introduced in the modelling of n-capture nucleosynthesis in the advanced evolutionary stages of giant stars (Asymptotic Giant Branch, or AGB, stars). Two such improvements are closely linked together and concern the introduction of non-parameterized, physical models for extended mixing processes and the adoption of accurate reaction rates for H- and He-burning reactions, including the one for the main neutron source 13C(α,n)16O. These improvements profited from a longstanding collaboration between stellar physicists and C. Spitaleri's team, and from his seminal work both as a leader in the Nuclear Astrophysics scenario and as a talent-scout in the recruitment of young researchers in the field. We present an example of the innovative results that can be obtained thanks to the novelties introduced, by estimating the contributions from a nearby AGB star to the synthesis of short-lived (t1/2 ≤ 10 Myr) radioactive nuclei which were alive in early Solar System condensates. We find that the scenario indicating an AGB star as the source of such radioactivities, discussed for many years by researchers in this field, now appears to be no longer viable when the mentioned improvements of AGB models and nuclear parameters are considered.

  8. The effects of video modeling in teaching functional living skills to persons with ASD: A meta-analysis of single-case studies.

    Science.gov (United States)

    Hong, Ee Rea; Ganz, Jennifer B; Mason, Rose; Morin, Kristi; Davis, John L; Ninci, Jennifer; Neely, Leslie C; Boles, Margot B; Gilliland, Whitney D

    2016-10-01

    Many individuals with autism spectrum disorders (ASD) show deficits in functional living skills, leading to low independence, limited community involvement, and poor quality of life. With development of mobile devices, utilizing video modeling has become more feasible for educators to promote functional living skills of individuals with ASD. This article aims to review the single-case experimental literature and aggregate results across studies involving the use of video modeling to improve functional living skills of individuals with ASD. The authors extracted data from single-case experimental studies and evaluated them using the Tau-U effect size measure. Effects were also differentiated by categories of potential moderators and other variables, including age of participants, concomitant diagnoses, types of video modeling, and outcome measures. Results indicate that video modeling interventions are overall moderately effective with this population and dependent measures. While significant differences were not found between categories of moderators and other variables, effects were found to be at least moderate for most of them. It is apparent that more single-case experiments are needed in this area, particularly with preschool and secondary-school aged participants, participants with ASD-only and those with high-functioning ASD, and for video modeling interventions addressing community access skills. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  10. Interim report on research between Oak Ridge National Laboratory and Japan Nuclear Cycle Development Institute on neutron-capture cross sections by long-lived fission product nuclides

    International Nuclear Information System (INIS)

    Furutaka, Kazuyoshi; Nakamura, Shoji; Harada, Hideo

    2004-03-01

    Neutron capture cross sections of long-lived fission products (LLFP) are important quantities as fundamental data for the study of nuclear transmutation of radioactive wastes. Previously obtained thermal-neutron capture gamma-ray data were analyzed to deduce the partial neutron-capture cross sections of LLFPs including 99Tc, 93Zr, and 107Pd for thermal neutrons. By comparing the decay gamma-ray data and prompt gamma-ray data for 99Tc, the relation between the neutron-capture cross sections deduced by the two different methods was studied. For the isotopes 93Zr and 107Pd, thermal neutron-capture gamma-ray production cross sections were deduced for the first time. The level schemes of 99Tc, 93Zr, and 107Pd have also been constructed from the analyzed data and compared with previously reported levels. This work has been done under the cooperative program 'Neutron Capture Cross Sections of Long-Lived Fission products (LLFPs)' by Japan Nuclear Cycle Development Institute (JNC) and Oak Ridge National Laboratory (ORNL). (author)

  11. Using a Hero as a Model in Video Instruction to Improve the Daily Living Skills of an Elementary-Aged Student with Autism Spectrum Disorder: A Pilot Study

    Science.gov (United States)

    Ohtake, Yoshihisa

    2015-01-01

    The present pilot study investigated the impact of video hero modelling (VHM) on the daily living skills of an elementary-aged student with autism spectrum disorder. The VHM, in which a character much admired by the student exhibited a correct response, was shown to the participant immediately before the situation where he needed to exhibit the…

  12. A multi-environment dataset for activity of daily living recognition in video streams.

    Science.gov (United States)

    Borreo, Alessandro; Onofri, Leonardo; Soda, Paolo

    2015-08-01

    Public datasets have played a key role in the increasing level of interest that vision-based human action recognition has attracted in recent years. While the production of such datasets has been influenced by the variability introduced by various actors performing the actions, the different modalities of interaction with the environment introduced by the variation of the scenes around the actors have scarcely been taken into account. As a consequence, public datasets do not provide a proper test-bed for recognition algorithms that aim at achieving high accuracy irrespective of the environment where actions are performed. This is all the more so when systems are designed to recognize activities of daily living (ADL), which are characterized by a high level of human-environment interaction. For that reason, we present in this manuscript the MEA dataset, a new multi-environment ADL dataset, which permitted us to show how a change of scenario can affect the performance of state-of-the-art approaches for action recognition.

  13. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms.

    Science.gov (United States)

    Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro

    2012-09-10

    We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The Fast Fourier transform is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
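
    The per-frame pipeline described above (enlarge each IP elemental image, then generate the hologram with a 2D FFT) can be illustrated in a few lines. The sketch below uses NumPy rather than the authors' CUDA/CUFFT implementation, and the wavelength, pixel pitch, propagation distance and single-FFT Fresnel formulation are assumed values for illustration only.

        import numpy as np

        def fresnel_hologram(elemental_image, wavelength=532e-9, pitch=4.8e-6, z=0.1):
            """Generate a toy hologram patch from one elemental image with a single 2D FFT."""
            ny, nx = elemental_image.shape
            yy, xx = np.indices((ny, nx))
            x = (xx - nx / 2) * pitch                    # physical coordinates on the panel
            y = (yy - ny / 2) * pitch
            k = 2 * np.pi / wavelength
            chirp = np.exp(1j * k / (2 * z) * (x**2 + y**2))   # Fresnel quadratic phase
            field = np.fft.fftshift(np.fft.fft2(elemental_image * chirp))
            reference = np.max(np.abs(field))            # on-axis plane reference wave
            return np.abs(field + reference) ** 2        # amplitude hologram (intensity)

        # Enlarging an elemental image (one of the two methods mentioned above) can be as
        # simple as pixel replication before the FFT step (np.kron with a 2x2 block of ones).
        elemental = np.random.rand(256, 256)             # stand-in for one elemental image
        hologram = fresnel_hologram(np.kron(elemental, np.ones((2, 2))))

    In the actual system this computation is repeated for every elemental image on a GPU cluster with CUFFT, which is what makes 12 frames per second feasible at 8K resolution.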

  14. Growth, development, reproduction, physiological and behavioural studies on living organisms, human adults and children exposed to radiation from video displays

    International Nuclear Information System (INIS)

    Laverdure, A.M.; Surbeck, J.; North, M.O.; Tritto, J.

    2001-01-01

    Various living organisms, human workers and children were tested for any biological action resulting from exposure to radiation from video display terminals (VDTs). VDTs were powered by a 50-Hz alternating voltage of 220 V. Measured electric and magnetic fields were 13 V/m and 50 nT, respectively. Living organisms were maintained under their normal breeding conditions and control values were obtained before switching on the VDT. Various effects related to the irradiation time were demonstrated, i.e. growth delay in algae and Drosophila, a body weight deficiency in rats, abnormal peaks of mortality in Daphnia and Drosophila, teratological effects in chick embryos and behavioural disturbances in rats. The embryonic and neonatal periods showed a high sensitivity to the VDT radiation. In humans, after 4 h of working in front of a VDT screen, an increase in tiredness and a decrease in the resistance of the immune system were observed in workers. In prepubertal children, 20 min of exposure was sufficient to induce neuropsychological disturbances; prepubertal young people appear to be particularly sensitive to the effect of the radiation. In human testicular biopsies cultured in vitro for 24 h in front of a VDT screen, mitotic and meiotic disturbances, the appearance of degeneration in some aspects of the cells and significant disorganisation of the seminiferous tubules were demonstrated and related to modification of the metabolism of the sample. An experimental apparatus has been developed and tested that aims to prevent the harm from VDT radiation. Known commercially as the 'emf-Bioshield', it ensures effective protection against harmful biological effects of VDT radiation. (author)

  15. Novel aspects of live intestinal epithelial cell function revealed using a custom time-lapse video microscopy apparatus.

    Science.gov (United States)

    Papetti, Michael; Kozlowski, Piotr

    2018-04-01

    Many aspects of cell physiology, including migration, membrane function, and cell division, are best understood by observing live cell dynamics over time using video microscopy. To probe these phenomena in colon epithelial cells using simple components with a limited budget, we have constructed an inexpensive apparatus based on a PID (proportional-integrative-derivative) controller contained within a 0.077 m³ insulated acrylic box. Temperature, humidity, pH, and proliferative capacity of colon epithelial cells in this system mimic those in a standard tissue culture incubator for over four days. Our system offers significant advantages over existing cost-prohibitive commercially available and custom-made devices because of its very low cost, use of PID temperature control, lack of reliance on constant infusion of external humidified, heated air or carbon dioxide, ability to directly measure cell culture medium temperature, and combination of exquisite cellular detail with minimal focus drift under physiological conditions for extended periods of time. Using this apparatus, coupled with an inverted microscope equipped with phase contrast optics and a programmable digital camera, we have observed many events in colon epithelial cells not visible by static imaging, including kinetics of normal and abnormal mitoses, dynamic membrane structures, intracellular vesicle movements, and cell migration. © 2018 International Society for Advancement of Cytometry.

  16. Video Sharing System Based on Wi-Fi Camera

    OpenAIRE

    Qidi Lin; Hewei Yu; Jinbin Huang; Weile Liang

    2015-01-01

    This paper introduces a video sharing platform based on WiFi, which consists of a camera, a mobile phone and a PC server. The platform receives the wireless signal from the camera and shows the live video captured by the camera on the mobile phone. In addition, it is able to send commands to the camera and make the camera's holder rotate. The platform can be applied to interactive teaching, monitoring of dangerous areas and so on. Testing results show that the platform can share ...

  17. Cage-based performance capture

    CERN Document Server

    Savoye, Yann

    2014-01-01

    Nowadays, highly-detailed animations of live-actor performances are increasingly easy to acquire, and 3D Video has attracted considerable attention in visual media production. In this book, we address the problem of extracting or acquiring and then reusing non-rigid parametrization for video-based animations. At first sight, a crucial challenge is to reproduce plausible boneless deformations while preserving global and local captured properties of dynamic surfaces with a limited number of controllable, flexible and reusable parameters. To solve this challenge, we directly rely on a skin-detached dimension reduction thanks to the well-known cage-based paradigm. First, we achieve Scalable Inverse Cage-based Modeling by transposing the inverse kinematics paradigm to surfaces. Thus, we introduce a cage inversion process with user-specified screen-space constraints. Secondly, we convert non-rigid animated surfaces into a sequence of optimal cage parameters via Cage-based Animation Conversion. Building upon this re...

  18. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution – an application in higher education

    NARCIS (Netherlands)

    Jan Kuijten; Ajda Ortac; Hans Maier; Gert de Heer

    2015-01-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels).

  19. Physics and Video Analysis

    Science.gov (United States)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.

  20. Comparison of neutron capture cross sections obtained from two Hauser-Feshbach statistical models on a short-lived nucleus using experimentally constrained input

    Science.gov (United States)

    Lewis, Rebecca; Liddick, Sean; Spyrou, Artemis; Crider, Benjamin; Dombos, Alexander; Naqvi, Farheen; Prokop, Christopher; Quinn, Stephen; Larsen, Ann-Cecilie; Crespo Campo, Lucia; Guttormsen, Magne; Renstrom, Therese; Siem, Sunniva; Bleuel, Darren; Couture, Aaron; Mosby, Shea; Perdikakis, George

    2017-09-01

    A majority of the abundance of the elements above iron is produced by neutron capture reactions, and, in explosive stellar processes, many of these reactions take place on unstable nuclei. Direct neutron capture experiments can only be performed on stable and long-lived nuclei, requiring indirect methods for the remaining isotopes. Statistical neutron capture can be described using the nuclear level density (NLD), the γ strength function (γSF), and an optical model. The NLD and γSF can be obtained using the β-Oslo method. The NLD and γSF were recently determined for 74Zn using the β-Oslo method, and were used in both TALYS and CoH to calculate the 73Zn(n, γ)74Zn neutron capture cross section. The cross sections calculated in TALYS and CoH are expected to be identical if the inputs for both codes are the same; however, after a thorough investigation into the inputs for the 73Zn(n, γ)74Zn reaction, there is still a factor-of-two discrepancy between the two codes.

  1. Medical application of neutron capture γ-ray spectroscopy: measurement of cadmium and nitrogen in living human subjects

    International Nuclear Information System (INIS)

    Vartsky, D.; Ellis, K.J.; Cohn, S.H.

    1978-01-01

    In-vivo measurement of small quantities of Cd is possible due to the high radiative neutron-capture cross-section of 113Cd (12.3%, 20000 b). Under slow neutron capture in 113Cd, the excited 114Cd decays by prompt emission of a cascade of gamma-rays, of which the most intense is the 559 keV transition from the first excited state to the ground state. For a total kidney or liver dose of 670 mrem, the detection limits are 2.5 mg or 1.5 μg/g respectively. A table shows the results of a study on normal subjects with smoking and non-smoking history. The study indicates higher cadmium levels in the group of smokers. The method of measuring body N utilizes the 14N(n,γ)15N reaction. The total energy available on slow neutron capture is 10.83 MeV and approximately 15% of the de-excitations take place directly to the ground state of 15N. The irradiation facility is basically the same as that described for the measurement of Cd. The Cd collimator, however, is replaced by a second collimator designed to provide a wide beam, 13 x 60 cm, at the level of the bed. During the irradiation the subject lies on a motorized bed which moves across the neutron beam. The precision, or reproducibility, of the measurements was evaluated using an Alderson phantom. For a standard 70 kg man having 2000 g of N, the accuracy of the measurement is ±2% with an error of 1.3% for reproducibility, based on several measurements over a 6-month period. The total radiation dose for a bilateral irradiation is 45 mrem. Initial clinical studies will concentrate on sequential measurements of body N

  3. Petmanship: Understanding Elderly Filipinos' Self-Perceived Health and Self-Esteem Captured from Their Lived Experiences with Pet Companions

    Science.gov (United States)

    de Guzman, Allan B.; Cucueco, Denise S.; Cuenco, Ian Benedict V.; Cunanan, Nigel Gerome C.; Dabandan, Robel T.; Dacanay, Edgar Joseph E.

    2009-01-01

    Understanding of the lived experiences of geriatric clients with pets, particularly in Western cultures, has been the subject of many studies. However, little is known about how Asian cultures, particularly the Filipino elderly, view their experiences with their pets in regard to their self-esteem and self-perceived health. This…

  4. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    Science.gov (United States)

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To analyze, interpret and evaluate microscopic images, used in medical diagnostics and forensic science, video images for educational purposes were made with a very high resolution of 4096 × 2160 pixels (4K), which is four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented high resolution makes it possible to see details that remain invisible to any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera which is attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make it suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.

  5. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Science.gov (United States)

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, because the Gauss-Jordan elimination method is employed, considerable computational complexity can be imposed on peers when decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that guarantees there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one instead of n coefficients entries into each generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations, so peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.

  6. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Directory of Open Access Journals (Sweden)

    Behrang Barekatain

    Full Text Available In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as a header has been the most important challenge of RNC. Moreover, because the Gauss-Jordan elimination method is employed, considerable computational complexity can be imposed on peers when decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method that guarantees there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one instead of n coefficients entries into each generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations, so peers sustain very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.
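
    The abstracts above do not spell out MATIN's coefficients matrix construction, but the two properties they describe (no linear dependency, and a one-entry header instead of an n-entry coefficients vector) are exactly what a classical Vandermonde construction over a prime field provides, so the sketch below uses that as a stand-in; the field size, generation size and seed values are arbitrary choices for illustration, not MATIN's actual parameters.

        # Illustrative stand-in, not the MATIN algorithm: a Vandermonde coefficients
        # matrix over GF(p). Each encoded packet carries only its seed x_i as header;
        # the receiver regenerates the full row (1, x_i, x_i^2, ..., x_i^(n-1)).
        # Distinct non-zero seeds guarantee linear independence (invertible matrix).
        P = 257   # prime field size (arbitrary for the sketch)
        N = 8     # number of original blocks per generation

        def coeff_row(seed, n=N, p=P):
            """Rebuild one packet's coefficient row from its single-entry header."""
            return [pow(seed, k, p) for k in range(n)]

        def rank_mod_p(rows, p=P):
            """Rank over GF(p) via Gaussian elimination, to verify independence."""
            a = [row[:] for row in rows]
            rank = 0
            for col in range(len(a[0])):
                pivot = next((r for r in range(rank, len(a)) if a[r][col] % p), None)
                if pivot is None:
                    continue
                a[rank], a[pivot] = a[pivot], a[rank]
                inv = pow(a[rank][col], p - 2, p)          # modular inverse (Fermat)
                a[rank] = [(v * inv) % p for v in a[rank]]
                for r in range(len(a)):
                    if r != rank and a[r][col] % p:
                        f = a[r][col]
                        a[r] = [(v - f * w) % p for v, w in zip(a[r], a[rank])]
                rank += 1
            return rank

        seeds = [3, 5, 7, 11, 12, 19, 23, 29]              # one-entry headers of 8 packets
        matrix = [coeff_row(s) for s in seeds]
        print(rank_mod_p(matrix) == N)                     # True: generation is decodable

    Decoding a generation then reduces to inverting this matrix once, which is the step the MATIN abstracts claim can be done with only a few simple arithmetic operations.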

  7. Madonna: Feminist or Antifeminist? Domination of Sex in Her Music Videos and Live Performances From the 20th Century to the Present Day

    Directory of Open Access Journals (Sweden)

    Katarina Mitić

    2015-10-01

    Full Text Available After a nearly four-decade career, Madonna has not stopped with a modernist/postmodernist strategy of shock, which provides the reader or viewer the possibility of different interpretations of her art. While some art theorists condemn her as a ‘total antifeminist’, others praise Madonna’s work and point out her feminist side, through which she represents the ideal of a strong, independent and successful woman, confirming her own power and sexuality. Breaking conventional stereotypes through her videos and concert performances, the ‘Queen of Pop’ constantly demonstrates sexual dominance over both genders. In this paper, based on the contemporary research of Douglas Kellner and other theorists, I will analyze music videos and live performances from the 1990s until the recent video for the song Bitch, I’m Madonna and consider why Madonna can be interpreted in two ways – as someone who ‘undermines her own feminism’ or as someone who is transparently presented as a feminist in the world of pop culture.

  8. Continuous Video Modeling to Assist with Completion of Multi-Step Home Living Tasks by Young Adults with Moderate Intellectual Disability

    Science.gov (United States)

    Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.

    2014-01-01

    The current study evaluated a relatively new video-based procedure, continuous video modeling (CVM), to teach multi-step cleaning tasks to high school students with moderate intellectual disability. CVM in contrast to video modeling and video prompting allows repetition of the video model (looping) as many times as needed while the user completes…

  9. Monte Carlo analysis of the long-lived fission product neutron capture rates at the Transmutation by Adiabatic Resonance Crossing (TARC) experiment

    International Nuclear Information System (INIS)

    Abánades, A.; Álvarez-Velarde, F.; González-Romero, E.M.; Ismailov, K.; Lafuente, A.; Nishihara, K.; Saito, M.; Stanculescu, A.; Sugawara, T.

    2013-01-01

    Highlights: ► TARC experiment benchmark capture rates results. ► Utilization of updated databases, including ADSLib. ► Self-shielding effect in reactor design for transmutation. ► Effect of Lead nuclear data. - Abstract: The design of Accelerator Driven Systems (ADS) requires the development of simulation tools that are able to describe in a realistic way their nuclear performance and transmutation rate capability. In this publication, we present an evaluation of state-of-the-art Monte Carlo design tools to assess their performance concerning transmutation of long-lived fission products. This work, performed under the umbrella of the International Atomic Energy Agency, analyses two important aspects for transmutation systems: moderation in Lead and neutron captures of 99Tc, 127I and 129I. The analysis of the results shows how shielding effects due to the resonances at epithermal energies of these nuclides strongly affect their transmutation rates. The results suggest that some research effort should be undertaken to improve the quality of Iodine nuclear data at epithermal and fast neutron energies to obtain a reliable transmutation estimation.

  10. Monte Carlo analysis of the long-lived fission product neutron capture rates at the Transmutation by Adiabatic Resonance Crossing (TARC) experiment

    Energy Technology Data Exchange (ETDEWEB)

    Abanades, A., E-mail: abanades@etsii.upm.es [Grupo de Modelizacion de Sistemas Termoenergeticos, ETSII, Universidad Politecnica de Madrid, c/Ramiro de Maeztu, 7, 28040 Madrid (Spain); Alvarez-Velarde, F.; Gonzalez-Romero, E.M. [Centro de Investigaciones Medioambientales y Tecnologicas (CIEMAT), Avda. Complutense, 40, Ed. 17, 28040 Madrid (Spain); Ismailov, K. [Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550 (Japan); Lafuente, A. [Grupo de Modelizacion de Sistemas Termoenergeticos, ETSII, Universidad Politecnica de Madrid, c/Ramiro de Maeztu, 7, 28040 Madrid (Spain); Nishihara, K. [Transmutation Section, J-PARC Center, JAEA, Tokai-mura, Ibaraki-ken 319-1195 (Japan); Saito, M. [Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550 (Japan); Stanculescu, A. [International Atomic Energy Agency (IAEA), Vienna (Austria); Sugawara, T. [Transmutation Section, J-PARC Center, JAEA, Tokai-mura, Ibaraki-ken 319-1195 (Japan)

    2013-01-15

    Highlights: ► TARC experiment benchmark capture rates results. ► Utilization of updated databases, including ADSLib. ► Self-shielding effect in reactor design for transmutation. ► Effect of Lead nuclear data. - Abstract: The design of Accelerator Driven Systems (ADS) requires the development of simulation tools that are able to describe in a realistic way their nuclear performance and transmutation rate capability. In this publication, we present an evaluation of state-of-the-art Monte Carlo design tools to assess their performance concerning transmutation of long-lived fission products. This work, performed under the umbrella of the International Atomic Energy Agency, analyses two important aspects for transmutation systems: moderation in Lead and neutron captures of 99Tc, 127I and 129I. The analysis of the results shows how shielding effects due to the resonances at epithermal energies of these nuclides strongly affect their transmutation rates. The results suggest that some research effort should be undertaken to improve the quality of Iodine nuclear data at epithermal and fast neutron energies to obtain a reliable transmutation estimation.

  11. Lifetime of the long-lived isomer of 236Np from α-, β- and electron-capture decay measurements

    Energy Technology Data Exchange (ETDEWEB)

    Lindner, M.; Dupzyk, R.J.; Hoff, R.W.; Nagle, R.J. (California Univ., Livermore (USA). Lawrence Livermore National Lab.)

    1981-01-01

    The half-life of long-lived 236Np, due to α, β and electron-capture decay, was found to be 1.55 × 10^5 yr. Of all decays, 88% populate excited states in 236U and 12% populate levels in 236Pu. Lifetimes measured by growth of the ground states of 236U and 236Pu agree with values from the corresponding γ de-excitations in these daughter nuclei. Therefore, nearly all the electron-capture decays populate the 6+ level of the ground-state band in 236U. Similarly, essentially all the β- decay populates an analogous 6+ level in 236Pu, which de-excites through a previously unreported transition of 158.3 keV. If a very weak γ-ray at 894 keV can be ascribed to a level in 232U populated by β decay of 232Pa, its existence establishes a 0.2% α-branching decay in 236Np.

  12. High-frequency video capture and a computer program with frame-by-frame angle determination functionality as tools that support judging in artistic gymnastics.

    Science.gov (United States)

    Omorczyk, Jarosław; Nosiadek, Leszek; Ambroży, Tadeusz; Nosiadek, Andrzej

    2015-01-01

    The main aim of this study was to verify the usefulness of selected simple methods of recording and fast biomechanical analysis performed by judges of artistic gymnastics in assessing a gymnast's movement technique. The study participants comprised six artistic gymnastics judges, who assessed back handsprings using two methods: a real-time observation method and a frame-by-frame video analysis method. They also determined flexion angles of knee and hip joints using the computer program. In the case of the real-time observation method, the judges gave a total of 5.8 error points with an arithmetic mean of 0.16 points for the flexion of the knee joints. In the high-speed video analysis method, the total amounted to 8.6 error points and the mean value amounted to 0.24 error points. For the excessive flexion of hip joints, the sum of the error values was 2.2 error points and the arithmetic mean was 0.06 error points during real-time observation. The sum obtained using the frame-by-frame analysis method equaled 10.8 and the mean equaled 0.30 error points. Error values obtained through the frame-by-frame video analysis of movement technique were higher than those obtained through the real-time observation method. The judges were able to indicate the number of the frame in which the maximal joint flexion occurred with good accuracy. Both the real-time observation method and high-speed video analysis performed without determining the exact angle were found to be insufficient tools for improving the quality of judging of movement technique.
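
    The frame-by-frame angle determination described above reduces, for each paused frame, to measuring the angle between two body segments from digitised landmarks. The short sketch below illustrates that computation; the landmark coordinates and the hip-knee-ankle choice are made-up values for illustration, not data from the study.

        import math

        def joint_angle(proximal, joint, distal):
            """Angle (degrees) at `joint` between the two segments; 180 = fully extended."""
            ax, ay = proximal[0] - joint[0], proximal[1] - joint[1]
            bx, by = distal[0] - joint[0], distal[1] - joint[1]
            cos_angle = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
            return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

        # Hypothetical pixel coordinates digitised from one paused video frame.
        hip, knee, ankle = (412, 230), (430, 340), (415, 455)
        flexion = 180.0 - joint_angle(hip, knee, ankle)   # deviation from a straight leg
        print(f"knee flexion: {flexion:.1f} deg")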

  13. Guerrilla Video: A New Protocol for Producing Classroom Video

    Science.gov (United States)

    Fadde, Peter; Rich, Peter

    2010-01-01

    Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…

  14. A mobile phone food record app to digitally capture dietary intake for adolescents in a free-living environment: usability study.

    Science.gov (United States)

    Casperson, Shanon L; Sieling, Jared; Moon, Jon; Johnson, LuAnn; Roemmich, James N; Whigham, Leah

    2015-03-13

    Mobile technologies are emerging as valuable tools to collect and assess dietary intake. Adolescents readily accept and adopt new technologies; thus, a food record app (FRapp) may be a useful tool to better understand adolescents' dietary intake and eating patterns. We sought to determine the amenability of adolescents, in a free-living environment with minimal parental input, to use the FRapp to record their dietary intake. Eighteen community-dwelling adolescents (11-14 years) received detailed instructions to record their dietary intake for 3-7 days using the FRapp. Participants were instructed to capture before and after images of all foods and beverages consumed and to include a fiducial marker in the image. Participants were also asked to provide text descriptors including amount and type of all foods and beverages consumed. Eight of 18 participants were able to follow all instructions: included pre- and post-meal images, a fiducial marker, and a text descriptor and collected diet records on 2 weekdays and 1 weekend day. Dietary intake was recorded on average for 3.2 (SD 1.3 days; 68% weekdays and 32% weekend days) with an average of 2.2 (SD 1.1) eating events per day per participant. A total of 143 eating events were recorded, of which 109 had at least one associated image and 34 were recorded with text only. Of the 109 eating events with images, 66 included all foods, beverages and a fiducial marker and 44 included both a pre- and post-meal image. Text was included with 78 of the captured images. Of the meals recorded, 36, 33, 35, and 39 were breakfasts, lunches, dinners, and snacks, respectively. These data suggest that mobile devices equipped with an app to record dietary intake will be used by adolescents in a free-living environment; however, a minority of participants followed all directions. User-friendly mobile food record apps may increase participant amenability, increasing our understanding of adolescent dietary intake and eating patterns. To

  15. Real Time Synchronization of Live Broadcast Streams with User Generated Content and Social Network Streams

    NARCIS (Netherlands)

    Stokking, H.M.; Kaptein, A.M.; Veenhuizen, A.T.; Spitters, M.M.; Niamut, O.A.

    2013-01-01

    This paper describes the work in the FP7 STEER project on augmenting a live broadcast with live user generated content. This user generated content consists of both video content, captured with mobile devices, and social network content, such as Facebook or Twitter messages. To enable multi-source

  16. Impact of Mini-drone based Video Surveillance on Invasion of Privacy

    OpenAIRE

    Korshunov, Pavel; Bonetto, Margherita; Ebrahimi, Touradj; Ramponi, Giovanni

    2015-01-01

    An increase in adoption of video surveillance, affecting many aspects of daily lives, raises public concern about an intrusion into individual privacy. New sensing and surveillance technologies, such as mini-drones, threaten to eradicate boundaries of private space even more. Therefore, it is important to study the effect of mini-drones on privacy intrusion and to understand how existing protection privacy filters perform on a video captured by a mini-drone. To this end, we have built a publi...

  17. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device is discussed. The number of panoramas is much too high for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in one Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrated that immersive photogrammetry seems to be a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  18. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional quality sound on a limited budget! Harness all-new, Hollywood-style audio techniques to bring your independent film and video productions to the next level. In Sound for Digital Video, Second Edition, industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chock-full of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  19. Evaluation of the DTBird video-system at the Smoela wind-power plant. Detection capabilities for capturing near-turbine avian behaviour

    Energy Technology Data Exchange (ETDEWEB)

    Roel, May; Hamre, Oeyvind; Vang, Roald; Nygaard, Torgeir

    2012-07-01

    Collisions between birds and wind turbines can be a problem at wind-power plants both onshore and offshore, and the presence of endangered bird species or proximity to key functional bird areas can have a major impact on the choice of site or location of wind turbines. There is international consensus that one of the main challenges in the development of measures to reduce bird collisions is the lack of good methods for assessment of the efficacy of interventions. In order to be better able to assess the efficacy of mortality-reducing measures, Statkraft wishes to find a system that can be operated under Norwegian conditions and that renders objective and quantitative information on collisions and near-flying birds. DTBird, developed by Liquen Consultoria Ambiental S.L., is such a system, based on video-recording bird flights near turbines during the daylight period (light levels>200 lux). DTBird is a self-working system developed to detect flying birds and to take programmed actions (i.e. warning, dissuasion, collision registration, and turbine stop control) linked to real-time bird detection. This report evaluates how well the DTBird system is able to detect birds in the vicinity of a wind turbine, and assesses to what extent it can be utilized to study near-turbine bird flight behaviour and possible deterrence. The evaluation was based on the video sequences recorded with the DTBird systems installed at turbine 21 and turbine 42 at the Smoela wind-power plant between March 2 2012 and September 30 2012, together with GPS telemetry data on white-tailed eagles and avian radar data. The average number of falsely triggered video sequences (false positive rate) was 1.2 per day, and during daytime the DTBird system recorded between 76% and 96% of all bird flights in the vicinity of the turbines. Visually estimated distances of recorded bird flights in the video sequences were in general assessed to be farther from the turbines compared to the distance settings used within

  20. Video-rate confocal microscopy for single-molecule imaging in live cells and superresolution fluorescence imaging.

    Science.gov (United States)

    Lee, Jinwoo; Miyanaga, Yukihiro; Ueda, Masahiro; Hohng, Sungchul

    2012-10-17

    There is no confocal microscope optimized for single-molecule imaging in live cells and superresolution fluorescence imaging. By combining the swiftness of the line-scanning method and the high sensitivity of wide-field detection, we have developed a, to our knowledge, novel confocal fluorescence microscope with good optical-sectioning capability (1.0 μm), fast frame rates, and high fluorescence detection efficiency. Full compatibility of the microscope with conventional cell-imaging techniques allowed us to do single-molecule imaging with great ease at arbitrary depths in live cells. With the new microscope, we monitored the diffusion motion of fluorescently labeled cAMP receptors of Dictyostelium discoideum at both the basal and apical surfaces and obtained superresolution fluorescence images of microtubules of COS-7 cells at depths in the range 0-85 μm from the surface of a coverglass. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  1. Capturing Thoughts, Capturing Minds?

    DEFF Research Database (Denmark)

    Nielsen, Janni

    2004-01-01

    Think Aloud is cost effective, promises access to the user's mind, and is a widely applied usability technique. But 'keep talking' is difficult; besides, the multimodal interface is visual, not verbal. Eye-tracking seems to get around the verbalisation problem. It captures the visual focus of attention...

  2. Reduction in Fall Rate in Dementia Managed Care Through Video Incident Review: Pilot Study.

    Science.gov (United States)

    Bayen, Eleonore; Jacquemot, Julien; Netscher, George; Agrawal, Pulkit; Tabb Noyce, Lynn; Bayen, Alexandre

    2017-10-17

    Falls of individuals with dementia are frequent, dangerous, and costly. Early detection and access to the history of a fall is crucial for efficient care and secondary prevention in cognitively impaired individuals. However, most falls remain unwitnessed events. Furthermore, understanding why and how a fall occurred is a challenge. Video capture and secure transmission of real-world falls thus stands as a promising assistive tool. The objective of this study was to analyze how continuous video monitoring and review of falls of individuals with dementia can support better quality of care. A pilot observational study (July-September 2016) was carried out in a Californian memory care facility. Falls were video-captured (24×7), thanks to 43 wall-mounted cameras (deployed in all common areas and in 10 out of 40 private bedrooms of consenting residents and families). Video review was provided to facility staff, thanks to a customized mobile device app. The outcome measures were the count of residents' falls happening in the video-covered areas, the acceptability of video recording, the analysis of video review, and video replay possibilities for care practice. Over 3 months, 16 falls were video-captured. A drop in fall rate was observed in the last month of the study. Acceptability was good. Video review enabled screening for the severity of falls and fall-related injuries. Video replay enabled identifying cognitive-behavioral deficiencies and environmental circumstances contributing to the fall. This allowed for secondary prevention in high-risk multi-faller individuals and for updated facility care policies regarding a safer living environment for all residents. Video monitoring offers high potential to support conventional care in memory care facilities. ©Eleonore Bayen, Julien Jacquemot, George Netscher, Pulkit Agrawal, Lynn Tabb Noyce, Alexandre Bayen. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 17.10.2017.

  3. Carbon Capture and Storage

    NARCIS (Netherlands)

    Benson, S.M.; Bennaceur, K.; Cook, P.; Davison, J.; Coninck, H. de; Farhat, K.; Ramirez, C.A.; Simbeck, D.; Surles, T.; Verma, P.; Wright, I.

    2012-01-01

    Emissions of carbon dioxide, the most important long-lived anthropogenic greenhouse gas, can be reduced by Carbon Capture and Storage (CCS). CCS involves the integration of four elements: CO2 capture, compression of the CO2 from a gas to a liquid or a denser gas, transportation of pressurized CO2

  4. Women with fibromyalgia's experience with three motion-controlled video game consoles and indicators of symptom severity and performance of activities of daily living.

    Science.gov (United States)

    Mortensen, Jesper; Kristensen, Lola Qvist; Brooks, Eva Petersson; Brooks, Anthony Lewis

    2015-01-01

    Little is known of Motion-Controlled Video Games (MCVGs) as an intervention for people with chronic pain. The aim of this study was to explore the experience women with fibromyalgia syndrome (FMS) had using commercially available MCVGs, and to investigate indicators of symptom severity and performance of activities of daily living (ADL). Of 15 female participants diagnosed with FMS, 7 completed a program of five sessions with Nintendo Wii (Wii), five sessions with PlayStation 3 Move (PS3 Move) and five sessions with Microsoft Xbox Kinect (Xbox Kinect). Interviews were conducted at baseline and post-intervention and were supported by data from observation and self-reported assessment. Participants experienced play with MCVGs as a way to get distraction from pain symptoms while doing fun and manageable exercise. They enjoyed the slow pace and familiarity of Wii, while some considered PS3 Move to be too fast paced. Xbox Kinect was reported as the best console for exercise. There was no indication of general improvement in symptom severity or performance of ADL. This study demonstrated MCVG as an effective healthcare intervention for the women with FMS who completed the program, with regard to temporary pain relief and enjoyable low-impact exercise. Implications for Rehabilitation: Exercise is recommended in the management of fibromyalgia syndrome (FMS). People with FMS often find it counterintuitive to exercise because of pain exacerbation, which may influence adherence to an exercise program. Motion-controlled video games may offer temporary pain relief and fun low-impact exercise for women with FMS.

  5. Identification of IL-28B Genotype Modification in Hepatocytes after Living Donor Liver Transplantation by Laser Capture Microdissection and Pyrosequencing Analysis

    Directory of Open Access Journals (Sweden)

    King-Wah Chiu

    2018-01-01

    Full Text Available The aim of this study is to elucidate the biogenetic modification of donor and recipient interleukin-28B (IL-28B) genotypes in liver graft biopsies after living donor liver transplantation (LDLT) for chronic hepatitis C virus- (HCV-) related, end-stage liver disease. Fifty liver graft biopsies were collected from recipients during LDLT treatment for HCV-related, end-stage liver disease. DNA was extracted from all 50 liver tissues, and the IL-28B single-nucleotide polymorphisms (SNPs) rs8099917 and rs12979860 were studied for allelic discrimination by real-time PCR analysis. Blood samples were obtained from donors and recipients on postoperative day 0 (POD0), POD7, and POD30. We randomly selected five liver biopsies and isolated the hepatocytes by laser capture microdissection (LCM) to evaluate genotype modifications resulting from LDLT. After LDLT, the IL-28B SNP rs8099917 was identified not only in the liver graft biopsies and donors’ sera (TT = 41 : 43; GT = 9 : 5; GG = 0 : 2), but also in liver graft biopsies and recipients’ sera on POD0 (TT = 41 : 44; GT = 9 : 4; GG = 0 : 2), POD7 (TT = 41 : 30; GT = 9 : 18; GG = 0 : 2), and POD30 (TT = 41 : 29; GT = 9 : 19; GG = 0 : 2). A significant difference was observed between the rs8099917 allele frequencies of liver graft biopsies and recipients’ sera on POD30 (p=0.039). In addition, a significant difference was also noted between the rs12979860 allele frequencies of liver graft biopsies and donors’ sera (CT = 49 : 39; TT = 1 : 10) (p=0.012) and of liver graft biopsies and recipients’ sera on POD0 (CT = 49 : 39; TT = 1 : 11) (p=0.002), POD7 (CT = 49 : 42; TT = 1 : 8) (p=0.016), and POD30 (CT = 49 : 41; TT = 1 : 9) (p=0.008). This phenomenon was confirmed by pyrosequencing of hepatocytes isolated by LCM. Following LDLT, the TT-to-GT IL-28B genotype modification predominated in rs8099917, and the CC-to-CT modification predominated

  6. Multimodal Semantics Extraction from User-Generated Videos

    Directory of Open Access Journals (Sweden)

    Francesco Cricri

    2012-01-01

    Full Text Available User-generated video content has grown tremendously fast, to the point of outpacing professional content creation. In this work we develop methods that analyze contextual information of multiple user-generated videos in order to obtain semantic information about public happenings (e.g., sport and live music events) being recorded in these videos. One of the key contributions of this work is a joint utilization of different data modalities, including those captured by auxiliary sensors during the video recording performed by each user. In particular, we analyze GPS data, magnetometer data, accelerometer data, video-content data and audio-content data. We use these data modalities to infer information about the event being recorded, in terms of layout (e.g., stadium), genre, indoor versus outdoor scene, and the main area of interest of the event. Furthermore, we propose a method that automatically identifies the optimal set of cameras to be used in a multicamera video production. Finally, we detect the camera users who fall within the field of view of other cameras recording at the same public happening. We show that the proposed multimodal analysis methods perform well on various recordings obtained in real sport events and live music performances.
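    One small piece of such a pipeline can be illustrated in code: deciding whether one recording user falls inside another camera's field of view from GPS positions and a magnetometer-derived heading. The sketch below is only an illustration of the idea, not the authors' implementation; the function names, the assumed 60° horizontal field of view and the bearing formula are all assumptions.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def in_field_of_view(cam_a, cam_b, heading_a_deg, h_fov_deg=60.0):
    """True if camera B lies inside camera A's horizontal field of view.

    cam_a, cam_b: (lat, lon) tuples from GPS; heading_a_deg: A's compass heading
    from the magnetometer; h_fov_deg: assumed horizontal field of view of camera A.
    """
    target = bearing_deg(*cam_a, *cam_b)
    # Smallest signed angular difference between heading and target bearing.
    diff = (target - heading_a_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= h_fov_deg / 2.0

# Hypothetical example: is user B visible from user A's recording position?
print(in_field_of_view((60.1699, 24.9384), (60.1702, 24.9390), heading_a_deg=45.0))
```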

  7. Video frame processor

    International Nuclear Information System (INIS)

    Joshi, V.M.; Agashe, Alok; Bairi, B.R.

    1993-01-01

    This report provides a technical description of the Video Frame Processor (VFP) developed at Bhabha Atomic Research Centre. The instrument captures video images available in CCIR format. Two memory planes, each with a capacity of 512 x 512 x 8 bits, enable storage of two video image frames. The stored image can be processed on-line, and on-line image subtraction can also be carried out for image comparisons. The VFP is a PC add-on board and is I/O mapped within the host IBM PC/AT compatible computer. (author). 9 refs., 4 figs., 19 photographs
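    The report's on-line image subtraction between the two stored frames is easy to picture with a short sketch; NumPy arrays stand in here for the 512 x 512 x 8-bit memory planes, which in the VFP itself are dedicated hardware, and the variable names are illustrative only.

```python
import numpy as np

# Two stored frames, standing in for the VFP's 512 x 512 x 8-bit memory planes.
frame_a = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
frame_b = np.random.randint(0, 256, (512, 512), dtype=np.uint8)

# On-line image subtraction for comparison: take the absolute difference in a
# wider integer type to avoid uint8 wrap-around, then convert back to 8 bits.
diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16)).astype(np.uint8)

print("maximum pixel difference:", diff.max())
```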

  8. Digital video clips for improved pedagogy and illustration of scientific research — with illustrative video clips on atomic spectrometry

    Science.gov (United States)

    Michel, Robert G.; Cavallari, Jennifer M.; Znamenskaia, Elena; Yang, Karl X.; Sun, Tao; Bent, Gary

    1999-12-01

    This article is an electronic publication in Spectrochimica Acta Electronica (SAE), a section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by an electronic archive, stored on the CD-ROM accompanying this issue. The archive contains video clips. The main article discusses the scientific aspects of the subject and explains the purpose of the video files. Short, 15-30 s, digital video clips are easily controllable at the computer keyboard, which gives a speaker the ability to show fine details through the use of slow motion. Also, they are easily accessed from the computer hard drive for rapid extemporaneous presentation. In addition, they are easily transferred to the Internet for dissemination. From a pedagogical point of view, the act of making a video clip by a student allows for development of powers of observation, while the availability of the technology to make digital video clips gives a teacher the flexibility to demonstrate scientific concepts that would otherwise have to be done as 'live' demonstrations, with all the likely attendant misadventures. Our experience with digital video clips has been through their use in computer-based presentations by undergraduate and graduate students in analytical chemistry classes, and by high school and middle school teachers and their students in a variety of science and non-science classes. In physics teaching laboratories, we have used the hardware to capture digital video clips of dynamic processes, such as projectiles and pendulums, for later mathematical analysis.

  9. The Effectiveness of Classroom Capture Technology

    Science.gov (United States)

    Ford, Maire B.; Burns, Colleen E.; Mitch, Nathan; Gomez, Melissa M.

    2012-01-01

    The use of classroom capture systems (systems that capture audio and video footage of a lecture and attempt to replicate a classroom experience) is becoming increasingly popular at the university level. However, research on the effectiveness of classroom capture systems in the university classroom has been limited due to the recent development and…

  10. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network still remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l2,1-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l2,1-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
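    The flavour of the representative-selection step can be sketched with a simplified, uncapped l2,1-regularized self-expression model solved by iteratively reweighted least squares; the paper's joint embedding term and the capping of the norm are deliberately left out, so this is only a toy illustration under those assumptions.

```python
import numpy as np

def select_representatives(X, lam=1.0, n_iter=30, eps=1e-8):
    """Row-sparse representative selection: min ||X - X Z||_F^2 + lam * sum_i ||Z_i,:||_2,
    solved by iteratively reweighted least squares. Frames whose rows of Z keep a
    large l2 norm act as representatives. (The paper's embedding term and the
    'capping' of the l2,1 norm are omitted here for brevity.)"""
    n = X.shape[1]               # X: features x frames
    G = X.T @ X                  # Gram matrix, n x n
    Z = np.eye(n)
    for _ in range(n_iter):
        row_norms = np.linalg.norm(Z, axis=1) + eps
        D = np.diag(1.0 / (2.0 * row_norms))      # reweighting from current row norms
        Z = np.linalg.solve(G + lam * D, G)       # closed-form update for fixed weights
        # (a capped l2,1 norm would additionally clip the weights for outlier rows)
    scores = np.linalg.norm(Z, axis=1)
    return np.argsort(scores)[::-1]               # frames ranked by representativeness

# Example: 40 frames with 64-dimensional features; take the top 5 as a summary.
X = np.random.randn(64, 40)
print(select_representatives(X, lam=5.0)[:5])
```

The intuition is that frames many other frames rely on for their own reconstruction end up with large row norms in Z, which is why they are treated as the summary.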

  11. Video gallery of educational lectures integrated in faculty's portal

    Directory of Open Access Journals (Sweden)

    Jaroslav Majerník

    2013-05-01

    Full Text Available This paper presents a web-based exhibition of educational video clips created to share various archived lectures with medical students, health care professionals and the general public. The presentation of closely related topics was developed as a video gallery and is based solely on free or open source tools, so that it is available for wide academic and/or non-commercial use. Although the educational video records can be embedded in any website, we preferred to use our faculty’s portal, which should be a central point offering various multimedia educational materials. The system was integrated and tested to offer open access to infectology lectures that were captured and archived from live-streamed sessions and from videoconferences.

  12. Digital Video Revisited: Storytelling, Conferencing, Remixing

    Science.gov (United States)

    Godwin-Jones, Robert

    2012-01-01

    Five years ago in the February, 2007, issue of LLT, I wrote about developments in digital video of potential interest to language teachers. Since then, there have been major changes in options for video capture, editing, and delivery. One of the most significant has been the rise in popularity of video-based storytelling, enabled largely by…

  13. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  14. The effects of video observation of chewing during lunch on masticatory ability, food intake, cognition, activities of daily living, depression, and quality of life in older adults with dementia: a study protocol of an adjusted randomized controlled trial.

    Science.gov (United States)

    Douma, Johanna G; Volkers, Karin M; Vuijk, Pieter Jelle; Scherder, Erik J A

    2016-02-04

    Masticatory functioning alters with age. However, mastication has been found to be related to, for example, cognitive functioning, food intake, and some aspects of activities of daily living. Since cognitive functioning and activities of daily living show a decline in older adults with dementia, improving masticatory functioning may be of relevance to them. A possible way to improve mastication may be showing videos of people who are chewing. Observing chewing movements may activate the mirror neuron system, which also becomes activated during the execution of that same movement. The primary hypothesis is that the observation of chewing has a beneficial effect on masticatory functioning, or, more specifically, the masticatory ability of older adults with dementia. Secondarily, the intervention is hypothesized to have beneficial effects on food intake, cognition, activities of daily living, depression, and quality of life. An adjusted parallel randomized controlled trial is being performed in dining rooms of residential care settings. Older adults with dementia, for whom additional eligibility criteria also apply, are randomly assigned to the experimental (videos of chewing people) or control condition (videos of nature and buildings), by drawing folded pieces of paper. Participants who are able to watch each other's videos are assigned to the same study condition. The intervention takes place during lunchtime, from Monday to Friday, for 3 months. During four moments of measurement, masticatory ability, food intake, cognitive functioning, activities of daily living, depression, and quality of life are assessed. Test administrators blind to the group allocation administer the tests to participants. The goal of this study is to examine the effects of video observation of chewing on masticatory ability and several secondary outcome measures. In this study, the observation of chewing is added to the execution of the same action (i.e., during eating). Beneficial effects on

  15. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon, and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  16. Gravitational capture

    International Nuclear Information System (INIS)

    Bondi, H.

    1979-01-01

    In spite of the strength of gravitational forces between celestial bodies, gravitational capture is not a simple concept. The principles of conservation of linear momentum and of conservation of angular momentum always impose severe constraints, while conservation of energy and the vital distinction between dissipative and non-dissipative systems allow one to rule out capture in a wide variety of cases. In complex systems, especially those without dissipation, long dwell time is a more significant concept than permanent capture. (author)

  17. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available, and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  18. Digital video recording and archiving in ophthalmic surgery

    Directory of Open Access Journals (Sweden)

    Raju Biju

    2006-01-01

    Full Text Available Currently most ophthalmic operating rooms are equipped with an analog video recording system [an analog Charge-Coupled Device camera for video grabbing and a Video Cassette Recorder for recording]. We discuss the various advantages of a digital video capture device, its archiving capabilities and our experience during the transition from analog to digital video recording and archiving. The basic terminology and concepts related to analog and digital video, along with the choice of hardware, software and formats for archiving, are discussed.

  19. High Definition Video Streaming Using H.264 Video Compression

    OpenAIRE

    Bechqito, Yassine

    2009-01-01

    This thesis presents high definition video streaming using an H.264 codec implementation. The experiment carried out in this study was done for offline video streaming, but a model for live high definition streaming is introduced as well. Prior to the actual experiment, this study describes digital media streaming. Also, the different technologies involved in video streaming are covered. These include streaming architecture and a brief overview of the H.264 codec as well as high definition t...

  20. Video pedagogy

    OpenAIRE

    Länsitie, Janne; Stevenson, Blair; Männistö, Riku; Karjalainen, Tommi; Karjalainen, Asko

    2016-01-01

    The short film is an introduction to the concept of video pedagogy. The five categories of video pedagogy further elaborate how videos can be used as part of the instruction and learning process. Most pedagogical videos represent more than one category. A video itself doesn’t necessarily define the category – the ways in which the video is used as part of the pedagogical script are more defining factors. What five categories did you find? Did you agree with the categories, or are more...

  1. Social Properties of Mobile Video

    Science.gov (United States)

    Mitchell, April Slayden; O'Hara, Kenton; Vorbau, Alex

    Mobile video is now an everyday possibility with a wide array of commercially available devices, services, and content. These new technologies have created dramatic shifts in the way video-based media can be produced, consumed, and delivered by people beyond the familiar behaviors associated with fixed TV and video technologies. Such technology revolutions change the way users behave and change their expectations with regard to their mobile video experiences. Building upon earlier studies of mobile video, this paper reports on a study using diary techniques and ethnographic interviews to better understand how people are using commercially available mobile video technologies in their everyday lives. Drawing on reported episodes of mobile video behavior, the study identifies the social motivations and values underpinning these behaviors that help characterize mobile video consumption beyond the simplistic notion of viewing video only to kill time. This paper also discusses the significance of user-generated content and the usage of video in social communities through the description of two mobile video technology services that allow users to create and share content. Implications for adoption and design of mobile video technologies and services are discussed as well.

  2. "You're Not Just Learning It, You're Living It!" Constructing the "Good Life" in Australian University Online Promotional Videos

    Science.gov (United States)

    Gottschall, Kristina; Saltmarsh, Sue

    2017-01-01

    Online promotional videos on Australian university websites are a form of institutional branding and marketing that construct university experience in a variety of ways. Here we consider how these multimedia texts represent student lifestyles, identities and aspirations in terms of the "good life." We consider how the "promise of…

  3. Technology survey on video face tracking

    Science.gov (United States)

    Zhang, Tong; Gomes, Herman Martins

    2014-03-01

    With the pervasiveness of monitoring cameras installed in public areas, schools, hospitals, work places and homes, video analytics technologies for interpreting these video contents are becoming increasingly relevant to people's lives. Among such technologies, human face detection and tracking (and face identification in many cases) are particularly useful in various application scenarios. While plenty of research has been conducted on face tracking and many promising approaches have been proposed, there are still significant challenges in recognizing and tracking people in videos with uncontrolled capturing conditions, largely due to pose and illumination variations, as well as occlusions and cluttered background. It is especially complex to track and identify multiple people simultaneously in real time due to the large amount of computation involved. In this paper, we present a survey of literature and software published or developed during recent years on the face tracking topic. The survey covers the following topics: 1) mainstream and state-of-the-art face tracking methods, including features used to model the targets and metrics used for tracking; 2) face identification and face clustering from face sequences; and 3) software packages or demonstrations that are available for algorithm development or trial. A number of publicly available databases for face tracking are also introduced.

  4. US Spacesuit Knowledge Capture

    Science.gov (United States)

    Chullen, Cinda; Thomas, Ken; McMann, Joe; Dolan, Kristi; Bitterly, Rose; Lewis, Cathleen

    2011-01-01

    The ability to learn from both the mistakes and successes of the past is vital to assuring success in the future. Due to the close physical interaction between spacesuit systems and human beings as users, spacesuit technology and usage lends itself rather uniquely to the benefits realized from the skillful organization of historical information; its dissemination; the collection and identification of artifacts; and the education of those in the field. The National Aeronautics and Space Administration (NASA), other organizations and individuals have been performing United States (U.S.) Spacesuit Knowledge Capture since the beginning of space exploration. Avenues used to capture the knowledge have included publication of reports; conference presentations; specialized seminars; and classes usually given by veterans in the field. More recently, the effort has become more concentrated and formalized, and a new avenue of spacesuit knowledge capture has been added to the archives: videotaped sessions in which both current and retired specialists in the field present technical scope specifically for education and preservation of knowledge. With video archiving, all these avenues of learning can now be brought to life with the real experts presenting their wealth of knowledge on screen for future learners to enjoy. Scope and topics of U.S. spacesuit knowledge capture have included lessons learned in spacesuit technology, experience from the Gemini, Apollo, Skylab and Shuttle programs, hardware certification, design, development and other program components, spacesuit evolution and experience, failure analysis and resolution, and aspects of program management. Concurrently, U.S. spacesuit knowledge capture activities have progressed to a level where NASA, the National Air and Space Museum (NASM), Hamilton Sundstrand (HS) and the spacesuit community are now working together to provide a comprehensive closed-looped spacesuit knowledge capture system which includes

  5. Protecting embryos from stress: Corticosterone effects and the corticosterone response to capture and confinement during pregnancy in a live-bearing lizard (Hoplodactylus maculatus)

    Science.gov (United States)

    Cree, A.; Tyrrell, C.L.; Preest, M.R.; Thorburn, D.; Guillette, L.J.

    2003-01-01

    Hormones in the embryonic environment, including those of the hypothalamo-pituitary-adrenal (HPA) axis, have profound effects on development in eutherian mammals. However, little is known about their effects in reptiles that have independently evolved viviparity. We investigated whether exogenous corticosterone affected embryonic development in the viviparous gecko Hoplodactylus maculatus, and whether pregnant geckos have a corticosterone response to capture and confinement that is suppressed relative to that in non-pregnant (vitellogenic) females and males. Corticosterone implants (5 mg, slow-release) administered to females in mid-pregnancy caused a large elevation of corticosterone in maternal plasma (P<0.001), probable reductions in embryonic growth and development (P=0.069-0.073), developmental abnormalities and eventual abortions. Cool temperature produced similar reductions in embryonic growth and development (P ≤ 0.036 cf. warm controls), but pregnancies were eventually successful. Despite the potentially harmful effects of elevated plasma corticosterone, pregnant females did not suppress their corticosterone response to capture and confinement relative to vitellogenic females, and both groups of females had higher responses than males. Future research should address whether lower maternal doses of corticosterone produce non-lethal effects on development that could contribute to phenotypic plasticity. Corticosterone implants also led to increased basking in pregnant females (P<0.001), and basal corticosterone in wild geckos (independent of reproductive condition) was positively correlated with body temperature (P<0.001). Interactions between temperature and corticosterone may have broad significance to other terrestrial ectotherms, and body temperature should be considered as a variable influencing plasma corticosterone concentrations in all future studies on reptiles. © 2003 Elsevier Inc. All rights reserved.

  6. An interactive sports video game as an intervention for rehabilitation of community-living patients with schizophrenia: A controlled, single-blind, crossover study

    OpenAIRE

    Shimizu, Nobuko; Umemura, Tomohiro; Matsunaga, Masahiro; Hirai, Takayoshi

    2017-01-01

    Hypofrontality is a state of decreased cerebral blood flow in the prefrontal cortex during executive function performance; it is commonly observed in patients with schizophrenia. Cognitive dysfunction, as well as the psychological symptoms of schizophrenia, influences the ability of patients to reintegrate into society. The current study investigated the effects of an interactive sports video game (IVG; Nintendo Wii™ Sports Resort) on frontal lobe function of patients with schizophrenia. A sa...

  7. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
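    The supervision trick, generating blur from high-frame-rate footage, can be approximated by averaging a short window of consecutive frames to mimic a longer exposure. The sketch below (OpenCV, a hypothetical input file, no alignment step) is a rough illustration of that idea rather than the authors' data pipeline.

```python
import cv2
import numpy as np

def synthesize_blur(video_path, window=7):
    """Approximate motion blur by averaging `window` consecutive high-frame-rate
    frames, yielding (blurred, sharp-centre-frame) pairs for supervision.
    Illustrative only; the paper also aligns frames and trains a CNN on such pairs."""
    cap = cv2.VideoCapture(video_path)
    frames, pairs = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame.astype(np.float32))
        if len(frames) == window:
            blurred = np.mean(frames, axis=0).astype(np.uint8)
            sharp = frames[window // 2].astype(np.uint8)   # centre frame as ground truth
            pairs.append((blurred, sharp))
            frames.pop(0)                                   # slide the window by one frame
    cap.release()
    return pairs

# pairs = synthesize_blur("highfps_clip.mp4")  # hypothetical 240 fps clip
```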

  8. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. In the first place it analyzes the current academic discussion on this subject and confronts different opinions of both supporters and objectors of the idea that video games can be a full-fledged art form. The second point of this paper is to analyze the properties that are inherent to video games, in order to find the reason why the cultural elite considers video games as i...

  9. Face Recognition and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Swapnil Vitthal Tathe

    2017-07-01

    Full Text Available Advancements in computer vision technology and the availability of video capturing devices such as surveillance cameras have evoked new video processing applications. Research in video face recognition is mostly biased towards law enforcement applications. Applications involve human recognition based on face and iris, human-computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework that is capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance to changes due to illumination, environmental factors, scale, pose and orientation.
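    A compact sketch of the detect-and-track portion of such a framework is shown below, using OpenCV's stock Haar cascade and a constant-velocity Kalman filter over the face centre; the Gabor-feature recognition and correlation-matching stages are omitted, and the noise covariances are arbitrary illustrative values.

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Constant-velocity Kalman filter over the face centre: state (x, y, vx, vy).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

cap = cv2.VideoCapture(0)            # webcam; a video file path also works
while True:
    ok, frame = cap.read()
    if not ok:
        break
    prediction = kf.predict()        # predicted face centre for this frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]        # take the first detection for simplicity
        centre = np.array([[x + w / 2], [y + h / 2]], np.float32)
        kf.correct(centre)           # fuse the measurement into the track
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.circle(frame, (int(prediction[0, 0]), int(prediction[1, 0])), 4, (0, 0, 255), -1)
    cv2.imshow("face tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```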

  10. What Online User Innovation Communities Can Teach Us about Capturing the Experiences of Patients Living with Chronic Health Conditions. A Scoping Review.

    Directory of Open Access Journals (Sweden)

    Julia Amann

    Full Text Available In order to adapt to societal changes, healthcare systems need to switch from a disease orientation to a patient-centered approach. Virtual patient networks are a promising tool to favor this switch and much can be learned from the open and user innovation literature where the involvement of online user communities in the innovation process is well-documented. The objectives of this study were 1) to describe the use of online communities as a tool to capture and harness innovative ideas of end users or consumers; and 2) to point to the potential value and challenges of these virtual platforms to function as a tool to inform and promote patient-centered care in the context of chronic health conditions. A scoping review was conducted. A total of seven databases were searched for scientific articles published in English between 1995 and 2014. The search strategy was refined through an iterative process. A total of 144 studies were included in the review. Studies were coded inductively according to their research focus to identify groupings of papers. The first set of studies focused on the interplay of factors related to user roles, motivations, and behaviors that shape the innovation process within online communities. Studies of the second set examined the role of firms in online user innovation initiatives, identifying different organizational strategies and challenges. The third set of studies focused on the idea selection process and measures of success with respect to online user innovation initiatives. Finally, the findings from the review are presented in the light of the particularities and challenges discussed in current healthcare research. The present paper highlights the potential of virtual patient communities to inform and promote patient-centered care, describes the key challenges involved in this process, and makes recommendations on how to address them.

  11. What Online User Innovation Communities Can Teach Us about Capturing the Experiences of Patients Living with Chronic Health Conditions. A Scoping Review.

    Science.gov (United States)

    Amann, Julia; Zanini, Claudia; Rubinelli, Sara

    2016-01-01

    In order to adapt to societal changes, healthcare systems need to switch from a disease orientation to a patient-centered approach. Virtual patient networks are a promising tool to favor this switch and much can be learned from the open and user innovation literature where the involvement of online user communities in the innovation process is well-documented. The objectives of this study were 1) to describe the use of online communities as a tool to capture and harness innovative ideas of end users or consumers; and 2) to point to the potential value and challenges of these virtual platforms to function as a tool to inform and promote patient-centered care in the context of chronic health conditions. A scoping review was conducted. A total of seven databases were searched for scientific articles published in English between 1995 and 2014. The search strategy was refined through an iterative process. A total of 144 studies were included in the review. Studies were coded inductively according to their research focus to identify groupings of papers. The first set of studies focused on the interplay of factors related to user roles, motivations, and behaviors that shape the innovation process within online communities. Studies of the second set examined the role of firms in online user innovation initiatives, identifying different organizational strategies and challenges. The third set of studies focused on the idea selection process and measures of success with respect to online user innovation initiatives. Finally, the findings from the review are presented in the light of the particularities and challenges discussed in current healthcare research. The present paper highlights the potential of virtual patient communities to inform and promote patient-centered care, describes the key challenges involved in this process, and makes recommendations on how to address them.

  12. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This chapter focuses on methodological issues that arise when using (digital) video for research communication, not least online. Video has long been used in research for data collection and for research communication. With digitization and the internet ...

  13. SnapVideo: Personalized Video Generation for a Sightseeing Trip.

    Science.gov (United States)

    Zhang, Luming; Jing, Peiguang; Su, Yuting; Zhang, Chao; Shaoz, Ling

    2017-11-01

    Leisure tourism is an indispensable activity in urban people's lives. Due to the popularity of intelligent mobile devices, a large number of photos and videos are recorded during a trip. Therefore, the ability to vividly and interestingly display these media data is a useful technique. In this paper, we propose SnapVideo, a new method that intelligently converts a personal album describing a trip into a comprehensive, aesthetically pleasing, and coherent video clip. The proposed framework contains three main components. The scenic spot identification model first personalizes the video clips based on multiple prespecified audience classes. We then search for auxiliary related videos from YouTube 1 according to the selected photos. To comprehensively describe a scenic spot, the view generation module clusters the crawled video frames into a number of views. Finally, a probabilistic model is developed to fit the frames from multiple views into an aesthetically pleasing and coherent video clip, which optimally captures the semantics of a sightseeing trip. Extensive user studies demonstrated the competitiveness of our method from an aesthetic point of view. Moreover, quantitative analysis reflects that semantically important spots are well preserved in the final video clip. 1 https://www.youtube.com/.
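    The view-generation idea, grouping crawled frames of a scenic spot into a handful of views, can be roughed out with k-means over colour histograms; this is a stand-in illustration rather than the paper's model, and the histogram features and cluster count are assumptions.

```python
import cv2
import numpy as np

def cluster_frames_into_views(frames, n_views=4, attempts=5):
    """Group frames of a scenic spot into candidate 'views' by k-means on
    hue-saturation histograms (a stand-in for the view generation module)."""
    feats = []
    for f in frames:
        hsv = cv2.cvtColor(f, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [16, 16], [0, 180, 0, 256])
        feats.append(cv2.normalize(hist, None).flatten())
    feats = np.float32(feats)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-3)
    _, labels, _ = cv2.kmeans(feats, n_views, None, criteria, attempts,
                              cv2.KMEANS_PP_CENTERS)
    return labels.flatten()          # view index per frame

# labels = cluster_frames_into_views(list_of_bgr_frames)  # hypothetical crawled frames
```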

  14. Living with Lupus (For Parents)

    Science.gov (United States)

    Living With Lupus (KidsHealth, For Parents). ... disease for both doctors and their patients. About Lupus: A healthy immune system produces proteins called antibodies ...

  15. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  16. Video games.

    Science.gov (United States)

    Funk, Jeanne B

    2005-06-01

    The video game industry insists that it is doing everything possible to provide information about the content of games so that parents can make informed choices; however, surveys indicate that ratings may not reflect consumer views of the nature of the content. This article describes some of the currently popular video games, as well as developments that are on the horizon, and discusses the status of research on the positive and negative impacts of playing video games. Recommendations are made to help parents ensure that children play games that are consistent with their values.

  17. Video Tutorial of Continental Food

    Science.gov (United States)

    Nurani, A. S.; Juwaedah, A.; Mahmudatussa'adah, A.

    2018-02-01

    This research is motivated by the belief in the importance of media in the learning process. Media, as an intermediary, serve to focus the attention of learners. Selection of appropriate learning media strongly influences how successfully information is delivered, in terms of cognitive, affective and skill outcomes. Continental food is a course that studies food originating from Europe and is very complex. To reduce verbalism and provide more realistic learning, tutorial media are needed. Tutorial media that are audio-visual can provide a more concrete learning experience. The purpose of this research is to develop tutorial media in the form of video. The method used is the development method, with the stages of analyzing the learning objectives, creating a storyboard, validating the storyboard, revising the storyboard and making the video tutorial media. The results show that the making of storyboards should be very thorough and detailed, in accordance with the learning objectives, to reduce errors in video capture and so save time, cost and effort. In video capturing, lighting, shooting angles, and soundproofing make an excellent contribution to the quality of the tutorial video produced. Shooting should focus more on tools, materials, and processing. Video tutorials should be interactive and two-way.

  18. Video-cued narrative reflection: a research approach for articulating tacit, relational, and embodied understandings.

    Science.gov (United States)

    Raingruber, Bonnie

    2003-10-01

    The author's purpose in this article is to describe the effectiveness of video-cued narrative reflection as a research approach for accessing relational, practice-based, and lived understandings. Video-cued narrative reflection provides moment-by-moment access to tacit experience. The immediate nature of the videotape captures emotional nuances, embodied perceptions, spatial influences, relational understandings, situational factors, and temporal manifestations. By watching videotaped interactions, participants are able to re-collect, re-experience, and interpret their life world. Video-cued narrative reflection allows participants to be simultaneously engaged and reflective while describing significant understandings. By inserting audiotaped reflective commentary of participants into the original videotape transcript, contextual meanings can be located and articulated more easily. Although not appropriate for all types of research, this approach offers promise for certain studies.

  19. Fast Aerial Video Stitching

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-10-01

    Full Text Available The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system which has a unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as an FPGA or CUDA, our system runs on a normal personal computer. To achieve this, after careful comparison of the existing invariant features, we choose the FAST corner and a binary descriptor for efficient feature extraction and representation, and present a spatially and temporally coherent filter to fuse the UAV motion information into the feature matching. The proposed filter can remove the majority of feature correspondence outliers and significantly increase the speed of robust feature matching, by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key-frame-based stitching framework is used to reduce the accumulation of errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate an accurate stitched image for aerial video stitching tasks.
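    The feature-based core of such a stitcher can be sketched with ORB (FAST keypoints plus a binary descriptor), brute-force Hamming matching and RANSAC homography estimation in OpenCV; the paper's motion-prior correspondence filter and key-frame logic are not reproduced here, and the frame file names are hypothetical.

```python
import cv2
import numpy as np

def stitch_pair(img_prev, img_next, n_features=1000):
    """Estimate the homography mapping img_next onto img_prev using ORB
    (FAST keypoints + binary descriptors) and RANSAC, then warp and paste."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_next, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_prev.shape[:2]
    canvas = cv2.warpPerspective(img_next, H, (w * 2, h * 2))  # oversized canvas
    canvas[:h, :w] = img_prev                                  # naive overwrite blend
    return canvas

# mosaic = stitch_pair(cv2.imread("frame_000.png"), cv2.imread("frame_001.png"))
```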

  20. Video Comparator

    International Nuclear Information System (INIS)

    Rose, R.P.

    1978-01-01

    The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system is manufactured by ITP of Chatsworth, CA and is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckleler micrometer equipped with a digital readout, and a second feature aligned with the reference line and the distance moved obtained from the digital display

  1. Student Perceptions of Online Tutoring Videos

    Science.gov (United States)

    Sligar, Steven R.; Pelletier, Christopher D.; Bonner, Heidi Stone; Coghill, Elizabeth; Guberman, Daniel; Zeng, Xiaoming; Newman, Joyce J.; Muller, Dorothy; Dennis, Allen

    2017-01-01

    Online tutoring is made possible by using videos to replace or supplement face to face services. The purpose of this research was to examine student reactions to the use of lecture capture technology in a university tutoring setting and to assess student knowledge of some features of Tegrity lecture capture software. A survey was administered to…

  2. Advanced real-time manipulation of video streams

    CERN Document Server

    Herling, Jan

    2014-01-01

    Diminished Reality is a new fascinating technology that removes real-world content from live video streams. This sensational live video manipulation actually removes real objects and generates a coherent video stream in real-time. Viewers cannot detect modified content. Existing approaches are restricted to moving objects and static or almost static cameras and do not allow real-time manipulation of video content. Jan Herling presents a new and innovative approach for real-time object removal with arbitrary camera movements.

  3. An interactive sports video game as an intervention for rehabilitation of community-living patients with schizophrenia: A controlled, single-blind, crossover study.

    Science.gov (United States)

    Shimizu, Nobuko; Umemura, Tomohiro; Matsunaga, Masahiro; Hirai, Takayoshi

    2017-01-01

    Hypofrontality is a state of decreased cerebral blood flow in the prefrontal cortex during executive function performance; it is commonly observed in patients with schizophrenia. Cognitive dysfunction, as well as the psychological symptoms of schizophrenia, influences the ability of patients to reintegrate into society. The current study investigated the effects of an interactive sports video game (IVG; Nintendo Wii™ Sports Resort) on frontal lobe function of patients with schizophrenia. A sample of eight patients (6 male and 2 female; mean age = 46.7 years, standard deviation (SD) = 13.7) engaged in an IVG every week for 3 months in a controlled, single-blind, crossover study. Before and after the intervention we examined frontal lobe blood-flow volume using functional near-infrared spectroscopy (fNIRS), and assessed functional changes using the Frontal Assessment Battery, Health-Related Quality of Life scale, and behaviorally-assessed physical function tests. fNIRS revealed that prefrontal activity during IVG performance significantly increased in the IVG period compared with the control period. Furthermore, significant correlations between cerebral blood flow changes in different channels were observed during IVG performance. In addition, we observed intervention-related improvement in health-related quality of life following IVG. IVG intervention was associated with increased prefrontal cortex activation and improved health-related quality of life performance in patients with schizophrenia. Patients with chronic schizophrenia are characterized by withdrawal and a lack of social responsiveness or interest in others. Interventions using IVG may provide a useful low-cost rehabilitation method for such patients, without the need for specialized equipment.

  4. An interactive sports video game as an intervention for rehabilitation of community-living patients with schizophrenia: A controlled, single-blind, crossover study.

    Directory of Open Access Journals (Sweden)

    Nobuko Shimizu

    Full Text Available Hypofrontality is a state of decreased cerebral blood flow in the prefrontal cortex during executive function performance; it is commonly observed in patients with schizophrenia. Cognitive dysfunction, as well as the psychological symptoms of schizophrenia, influences the ability of patients to reintegrate into society. The current study investigated the effects of an interactive sports video game (IVG; Nintendo Wii™ Sports Resort) on frontal lobe function of patients with schizophrenia. A sample of eight patients (6 male and 2 female; mean age = 46.7 years, standard deviation (SD) = 13.7) engaged in an IVG every week for 3 months in a controlled, single-blind, crossover study. Before and after the intervention we examined frontal lobe blood-flow volume using functional near-infrared spectroscopy (fNIRS), and assessed functional changes using the Frontal Assessment Battery, Health-Related Quality of Life scale, and behaviorally-assessed physical function tests. fNIRS revealed that prefrontal activity during IVG performance significantly increased in the IVG period compared with the control period. Furthermore, significant correlations between cerebral blood flow changes in different channels were observed during IVG performance. In addition, we observed intervention-related improvement in health-related quality of life following IVG. IVG intervention was associated with increased prefrontal cortex activation and improved health-related quality of life performance in patients with schizophrenia. Patients with chronic schizophrenia are characterized by withdrawal and a lack of social responsiveness or interest in others. Interventions using IVG may provide a useful low-cost rehabilitation method for such patients, without the need for specialized equipment.

  5. Video Golf

    Science.gov (United States)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  6. Geotail Video News Release

    Science.gov (United States)

    1992-01-01

    The Geotail mission, part of the International Solar Terrestrial Physics (ISTP) program, measures global energy flow and transformation in the magnetotail to increase understanding of fundamental magnetospheric processes. The satellite was launched on July 24, 1992 onboard a Delta II rocket. This video shows, with animation, the solar wind and its effect on the Earth. The narrator explains that the Geotail spacecraft was designed and built by the Institute of Space and Astronautical Science (ISAS), the Japanese Space Agency. The mission objectives are reviewed by one of the scientists in a live view. The video also shows an animation of the orbit, while the narrator explains the orbit and the reason for the small launch window.

  7. The Aesthetics of the Ambient Video Experience

    Directory of Open Access Journals (Sweden)

    Jim Bizzocchi

    2008-01-01

    Full Text Available Ambient Video is an emergent cultural phenomenon, with roots that go deeply into the history of experimental film and video art. Ambient Video, like Brian Eno's ambient music, is video that "must be as easy to ignore as notice" [9]. This minimalist description conceals the formidable aesthetic challenge that faces this new form. Ambient video art works will hang on the walls of our living rooms, corporate offices, and public spaces. They will play in the background of our lives, living video paintings framed by the new generation of elegant, high-resolution flat-panel display units. However, they cannot command attention like a film or television show. They will patiently play in the background of our lives, yet they must always be ready to justify our attention in any given moment. In this capacity, ambient video works need to be equally proficient at rewarding a fleeting glance, a more direct look, or a longer contemplative gaze. This paper connects a series of threads that collectively illuminate the aesthetics of this emergent form: its history as a popular culture phenomenon, its more substantive artistic roots in avant-garde cinema and video art, its relationship to new technologies, the analysis of the viewer's conditions of reception, and the work of current artists who practice within this form.

  8. Learning from Narrated Instruction Videos.

    Science.gov (United States)

    Alayrac, Jean-Baptiste; Bojanowski, Piotr; Agrawal, Nishant; Sivic, Josef; Laptev, Ivan; Lacoste-Julien, Simon

    2017-09-05

    Automatic assistants could guide a person or a robot in performing new tasks, such as changing a car tire or repotting a plant. Creating such assistants, however, is non-trivial and requires understanding of the visual and verbal content of a video. Towards this goal, we here address the problem of automatically learning the main steps of a task from a set of narrated instruction videos. We develop a new unsupervised learning approach that takes advantage of the complementary nature of the input video and the associated narration. The method sequentially clusters textual and visual representations of a task, where the two clustering problems are linked by joint constraints to obtain a single coherent sequence of steps in both modalities. To evaluate our method, we collect and annotate a new challenging dataset of real-world instruction videos from the Internet. The dataset contains videos for five different tasks with complex interactions between people and objects, captured in a variety of indoor and outdoor settings. We experimentally demonstrate that the proposed method can automatically discover, learn and localize the main steps of a task from input videos.

  9. Enzymes in CO2 Capture

    DEFF Research Database (Denmark)

    Fosbøl, Philip Loldrup; Gladis, Arne; Thomsen, Kaj

    The enzyme Carbonic Anhydrase (CA) can accelerate the absorption rate of CO2 into aqueous solutions by several-fold. It exists in almost all living organisms and catalyses important processes such as CO2 transport, respiration and the acid-base balance. A new technology in the field of carbon capture is the application of enzymes for the acceleration of typically slow ternary amines or inorganic carbonates. There is a hidden potential to revive currently infeasible amines which have an interestingly low energy consumption for regeneration but too slow kinetics for viable CO2 capture. The aim of this work is to discuss the measurement of kinetic properties for CA-promoted CO2 capture solvent systems. The development of a rate-based model for enzymes will be discussed, showing the principles of implementation and the results of using a well-known ternary amine for CO2 capture. Conclusions...

  10. VAP/VAT: video analytics platform and test bed for testing and deploying video analytics

    Science.gov (United States)

    Gorodnichy, Dmitry O.; Dubrofsky, Elan

    2010-04-01

    Deploying Video Analytics in operational environments is extremely challenging. This paper presents a methodological approach developed by the Video Surveillance and Biometrics Section (VSB) of the Science and Engineering Directorate (S&E) of the Canada Border Services Agency (CBSA) to resolve these problems. A three-phase approach to enable VA deployment within an operational agency is presented and the Video Analytics Platform and Testbed (VAP/VAT) developed by the VSB section is introduced. In addition to allowing the integration of third-party and in-house built VA codes into an existing video surveillance infrastructure, VAP/VAT also allows the agency to conduct an unbiased performance evaluation of the cameras and VA software available on the market. VAP/VAT consists of two components: EventCapture, which serves to automatically detect a "Visual Event", and EventBrowser, which serves to display and peruse the "Visual Details" captured at the "Visual Event". To deal with open-architecture as well as closed-architecture cameras, two video-feed capture mechanisms have been developed within the EventCapture component: IPCamCapture and ScreenCapture.
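    The division of labour between EventCapture and EventBrowser maps naturally onto a motion-triggered capture loop; the sketch below uses plain frame differencing as a stand-in for whatever analytics module is plugged in, and the thresholds and file name are made up for illustration.

```python
import cv2

def capture_visual_events(video_path, threshold=25, min_changed_pixels=5000):
    """Toy EventCapture-style loop: flag frames whose difference from the previous
    frame exceeds a pixel-change threshold, and return their indices for later
    browsing. Real analytics would replace the frame-differencing test."""
    cap = cv2.VideoCapture(video_path)
    events, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (5, 5), 0)
        if prev is not None:
            diff = cv2.absdiff(gray, prev)
            mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)[1]
            if cv2.countNonZero(mask) > min_changed_pixels:
                events.append(idx)            # a "Visual Event" worth reviewing later
        prev, idx = gray, idx + 1
    cap.release()
    return events

# print(capture_visual_events("corridor_feed.avi"))  # hypothetical surveillance clip
```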

  11. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    Nowadays, emerging applications such as wireless visual sensor networks and wireless video surveillance are requiring lightweight video encoding with high coding efficiency and error-resilience. Distributed Video Coding (DVC) is a new coding paradigm which exploits the source statistics … and noise modeling and also learn from the previously decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate for the weaknesses of the block-based SI generation and also utilizes clustering of DCT blocks to capture cross-band correlation and increase local adaptivity in noise modeling. During decoding, the updated information is used to iteratively reestimate the motion and reconstruction in the proposed motion and reconstruction reestimation (MORE) scheme. The MORE scheme not only reestimates the motion vectors...

  12. Blind prediction of natural video quality.

    Science.gov (United States)

    Saad, Michele A; Bovik, Alan C; Charrier, Christophe

    2014-03-01

    We propose a blind (no reference or NR) video quality evaluation model that is nondistortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top performing reduced and full reference VQA algorithms.
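    The spirit of the spatio-temporal features, statistics of DCT coefficients of frame differences, can be illustrated with a much-simplified block-DCT sketch; this is not the published Video BLIINDS feature set, and the dispersion statistic chosen here is an assumption.

```python
import cv2
import numpy as np

def dct_diff_features(frame_a, frame_b, block=8):
    """Toy spatio-temporal NSS-style features: 8x8 block DCT of the frame difference,
    then the mean AC energy across blocks and its coefficient of variation. The
    published Video BLIINDS model fits richer statistics; this only illustrates the idea."""
    diff = frame_a.astype(np.float32) - frame_b.astype(np.float32)
    h, w = diff.shape
    ac_energies = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs = cv2.dct(diff[y:y + block, x:x + block])
            coeffs[0, 0] = 0.0                       # drop the DC term
            ac_energies.append(np.sum(coeffs ** 2))  # AC energy of the block
    ac = np.array(ac_energies)
    return ac.mean(), ac.std() / (ac.mean() + 1e-8)

# Example with two synthetic grayscale frames:
a = np.random.randint(0, 256, (144, 176), dtype=np.uint8)
b = np.roll(a, 2, axis=1)                            # simulate small horizontal motion
print(dct_diff_features(a, b))
```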

  13. The energy expenditure of an activity-promoting video game compared to sedentary video games and TV watching.

    Science.gov (United States)

    Mitre, Naim; Foster, Randal C; Lanningham-Foster, Lorraine; Levine, James A

    2011-01-01

    In the present study we investigated the effect of television watching and the use of activity-promoting video games on energy expenditure in obese and lean children. Energy expenditure and physical activity were measured while participants were watching television, playing a video game on a traditional sedentary video game console, and playing the same video game on an activity-promoting video game console. Energy expenditure was significantly greater when children played the video game on the activity-promoting console than when they watched television or played the same game on the sedentary console. When examining movement with accelerometry, children moved significantly more when playing the video game on the Nintendo Wii console. Activity-promoting video games have been shown to increase movement and can be an important tool for raising energy expenditure by 50% compared with sedentary activities of daily living.

  14. Revolutionize Propulsion Test Facility High-Speed Video Imaging with Disruptive Computational Photography Enabling Technology

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced rocket propulsion testing requires high-speed video recording that can capture essential information for NASA during rocket engine flight certification...

  15. Evaluating the effectiveness of methods for capturing meetings

    OpenAIRE

    Hall, Mark John; Bermell-Garcia, Pablo; McMahon, Chris A.; Johansson, Anders; Gonzalez-Franco, Mar

    2015-01-01

    The purpose of this paper is to evaluate the effectiveness of commonly used methods to capture synchronous meetings for information and knowledge retrieval. Four methods of capture are evaluated in the form of a case study whereby a technical design meeting was captured by: (i) transcription; (ii) diagrammatic argumentation; (iii) meeting minutes; and (iv) video. The paper describes an experiment where participants undertook an information retrieval task and provided feedback on the methods. ...

  16. Hand Hygiene Saves Lives: Patient Admission Video

    Centers for Disease Control (CDC) Podcasts

    2008-05-01

    This podcast is for hospital patients and visitors. It emphasizes two key points to help prevent infections: the importance of practicing hand hygiene while in the hospital, and that it's appropriate to ask or remind healthcare providers to practice hand hygiene.  Created: 5/1/2008 by National Center for Preparedness, Detection, and Control of Infectious Diseases (NCPDCID).   Date Released: 6/19/2008.

  18. Violence and video games in youngsters' lives

    OpenAIRE

    Cachide, Olga Rute Gil Lemos de Albuquerque Carvalho

    2009-01-01

    The present work aims to report the impact that video games have on the lives of pre-adolescents and adolescents, which video games they prefer, and the control that parents and shops exercise over these adolescents' use of video games. This work also seeks to contribute to the ongoing debate about a possible association between the consumption of violent video games and aggressive and violent behaviour. After studying the existing literature on the possible effects of...

  19. Glyph-Based Video Visualization for Semen Analysis

    KAUST Repository

    Duffy, Brian; Thiyagalingam, Jeyarajan; Walton, Simon; Smith, David J.; Trefethen, Anne; Kirkman-Brown, Jackson C.; Gaffney, Eamonn A.; Chen, Min

    2015-01-01

    scientists and researchers to interpret, compare and correlate the multidimensional and time-varying measurements captured from video data. In this work, we use glyphs to encode a collection of numerical measurements taken at a regular interval

  20. Celiac Family Health Education Video Series

    Medline Plus

    Full Text Available ... Boston Children's Hospital will teach you and your family about a healthful celiac lifestyle. Education is key in making parents feel more at ease and allowing children with celiac disease to live happy and productive lives. Each of our video segments ... I. Introduction : Experiencing ...

  1. Identification, synchronisation and composition of user-generated videos

    OpenAIRE

    Bano, Sophia

    2016-01-01

    Joint doctorate (cotutela) between Universitat Politècnica de Catalunya and Queen Mary University of London. The increasing availability of smartphones makes it easy for people to capture videos of their experiences when attending events such as concerts, sports competitions and public rallies. Smartphones are equipped with inertial sensors which could be beneficial for event understanding. The captured User-Generated Videos (UGVs) are made available on media sharing websites. Searching and mining of UGVs of the same eve...

  2. Automatic mashup generation of multiple-camera videos

    NARCIS (Netherlands)

    Shrestha, P.

    2009-01-01

    The amount of user generated video content is growing enormously with the increase in availability and affordability of technologies for video capturing (e.g. camcorders, mobile-phones), storing (e.g. magnetic and optical devices, online storage services), and sharing (e.g. broadband internet,

  3. Video Journaling as a Method of Reflective Practice

    Science.gov (United States)

    Parikh, Sejal B.; Janson, Christopher; Singleton, Tiffany

    2012-01-01

    The purpose of this phenomenological study was to examine seven school counseling students' experiences of creating reflective video journals during their first internship course. Specifically, this study focused on capturing the essence of the experiences related to personal reactions, feelings, and thoughts about creating two video journal…

  4. Cellphones in Classrooms Land Teachers on Online Video Sites

    Science.gov (United States)

    Honawar, Vaishali

    2007-01-01

    Videos of teachers that students taped in secrecy are all over online sites like YouTube and MySpace. Angry teachers, enthusiastic teachers, teachers clowning around, singing, and even dancing are captured, usually with camera phones, for the whole world to see. Some students go so far as to create elaborately edited videos, shot over several…

  5. Airborne Video Surveillance

    National Research Council Canada - National Science Library

    Blask, Steven

    2002-01-01

    The DARPA Airborne Video Surveillance (AVS) program was established to develop and promote technologies to make airborne video more useful, providing capabilities that achieve a UAV force multiplier...

  6. Implications of the law on video recording in clinical practice.

    Science.gov (United States)

    Henken, Kirsten R; Jansen, Frank Willem; Klein, Jan; Stassen, Laurents P S; Dankelman, Jenny; van den Dobbelsteen, John J

    2012-10-01

    Technological developments allow for a variety of applications of video recording in health care, including endoscopic procedures. Although the value of video registration is recognized, medicolegal concerns regarding the privacy of patients and professionals are growing. A clear understanding of the legal framework is lacking. Therefore, this research aims to provide insight into the juridical position of patients and professionals regarding video recording in health care practice. Jurisprudence was searched to exemplify legislation on video recording in health care. In addition, legislation was translated for different applications of video in health care found in the literature. Three principles in Western law are relevant for video recording in health care practice: (1) regulations on privacy regarding personal data, which apply to the gathering and processing of video data in health care settings; (2) the patient record, in which video data can be stored; and (3) professional secrecy, which protects the privacy of patients including video data. Practical implementation of these principles in video recording in health care does not exist. Practical regulations on video recording in health care for different specifically defined purposes are needed. Innovations in video capture technology that enable video data to be made anonymous automatically can contribute to protection for the privacy of all the people involved.

  7. NEI You Tube Videos: Amblyopia

    Medline Plus

  8. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system
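
    The sample-point comparison at the heart of the technique can be sketched as follows; the shared-seed point selection, tolerance and agreement threshold below are illustrative assumptions rather than values from the paper.

```python
# Sketch of gray-scale sample-point authentication between the camera-side
# image and the recorder-side image. Thresholds and sample counts are
# illustrative assumptions, not values from the referenced technique.
import numpy as np

def sample_points(shape, n, seed):
    """Both ends derive the same pseudo-random sample positions from a shared seed."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, shape[0], n), rng.integers(0, shape[1], n)

def authenticate(camera_img, recorder_img, seed, n=64, tol=8, min_agree=0.9):
    ys, xs = sample_points(camera_img.shape, n, seed)
    diff = np.abs(camera_img[ys, xs].astype(int) - recorder_img[ys, xs].astype(int))
    return np.mean(diff <= tol) >= min_agree

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    genuine = rng.integers(0, 256, (480, 640))
    noisy = np.clip(genuine + rng.integers(-3, 4, genuine.shape), 0, 255)
    substituted = rng.integers(0, 256, (480, 640))
    print(authenticate(genuine, noisy, seed=42))        # True: image authenticated
    print(authenticate(genuine, substituted, seed=42))  # False: substitution detected
```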

  9. Broadcast-quality-stereoscopic video in a time-critical entertainment and corporate environment

    Science.gov (United States)

    Gay, Jean-Philippe

    1995-03-01

    'reality present: Peter Gabriel and Cirque du Soleil' is a 12 minute original work directed and produced by Doug Brown, Jean-Philippe Gay & A. Coogan, which showcases creative content applications of commercial stereoscopic video equipment. For production, a complete equipment package including a Steadicam mount was used in support of the Ikegami LK-33 camera. Remote production units were fielded in the time-critical, on-stage and off-stage environments of 2 major live concerts: Peter Gabriel's Secret World performance at the San Diego Sports Arena, and Cirque du Soleil's Saltimbanco performance in Chicago. Twin 60 Hz video channels were captured on Beta SP for maximum post production flexibility. Digital post production and field sequential mastering were effected in D-2 format at studio facilities in Los Angeles. The program was world premiered to a large public at the World of Music, Arts and Dance festivals in Los Angeles and San Francisco, in late 1993. It was presented to the artists in Los Angeles, Montreal and Washington D.C. Additional presentations have been made using a broad range of commercial and experimental stereoscopic video equipment, including projection systems, LCD and passive eyewear, and digital signal processors. Technical packages for live presentation have been fielded on site and off, through to the present.

  10. Development of an emergency medical video multiplexing transport system. Aiming at the nation wide prehospital care on ambulance.

    Science.gov (United States)

    Nagatuma, Hideaki

    2003-04-01

    The Emergency Medical Video Multiplexing Transport System (EMTS) is designed to support prehospital care by delivering high-quality live video streams of patients in an ambulance to emergency doctors in a remote hospital via satellite communications. Its key feature is that EMTS divides a patient's live video scene into four pieces and transports the four video streams on four separate network channels. By multiplexing four video streams, EMTS is able to transport high-quality video through networks with low data transmission rates, such as satellite communications and cellular phone networks. In order to transport live video streams continuously, EMTS adopts the Real-time Transport Protocol/Real-time Control Protocol as its network protocol, and video stream data are compressed in Moving Picture Experts Group 4 (MPEG-4) format. Because EMTS recombines the four video streams by checking video frame numbers, it uses a refresh packet that initializes the server's frame numbers to keep the four streams synchronized.
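
    The quadrant-splitting idea is easy to sketch; in the toy code below a "channel" is just a list of packets and the frame numbering is simplified, whereas the real system transports MPEG-4 streams over RTP/RTCP.

```python
# Sketch of splitting one frame into four tiles for transport over four
# channels and reassembling them by frame number. The packet format here is a
# plain tuple; the real system uses RTP/RTCP streams and MPEG-4 encoding.
import numpy as np

def split_frame(frame, frame_no):
    h, w = frame.shape[:2]
    tiles = {
        "tl": frame[: h // 2, : w // 2],
        "tr": frame[: h // 2, w // 2:],
        "bl": frame[h // 2:, : w // 2],
        "br": frame[h // 2:, w // 2:],
    }
    return [(frame_no, key, tile) for key, tile in tiles.items()]  # one packet per channel

def reassemble(packets, frame_no):
    tiles = {key: tile for no, key, tile in packets if no == frame_no}
    top = np.hstack([tiles["tl"], tiles["tr"]])
    bottom = np.hstack([tiles["bl"], tiles["br"]])
    return np.vstack([top, bottom])

if __name__ == "__main__":
    frame = np.arange(16 * 16).reshape(16, 16)
    channels = split_frame(frame, frame_no=7)
    restored = reassemble(channels, frame_no=7)
    assert np.array_equal(frame, restored)
```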

  11. Content-based retrieval in videos from laparoscopic surgery

    Science.gov (United States)

    Schoeffmann, Klaus; Beecks, Christian; Lux, Mathias; Uysal, Merih Seran; Seidl, Thomas

    2016-03-01

    In the field of medical endoscopy more and more surgeons are changing over to recording and storing videos of their endoscopic procedures for long-term archival. These endoscopic videos are a good source of information for explanations to patients and follow-up operations. As the endoscope is the "eye of the surgeon", the video shows the same information the surgeon has seen during the operation, and can describe the situation inside the patient much more precisely than an operation report would do. Recorded endoscopic videos can also be used for training young surgeons and in some countries the long-term archival of video recordings from endoscopic procedures is even enforced by law. A major challenge, however, is to efficiently access these very large video archives for later purposes. One problem, for example, is to locate specific images in the videos that show important situations, which are additionally captured as static images during the procedure. This work addresses this problem and focuses on content-based video retrieval in data from laparoscopic surgery. We propose to use feature signatures, which can appropriately and concisely describe the content of laparoscopic images, and show that by using this content descriptor with an appropriate metric, we are able to efficiently perform content-based retrieval in laparoscopic videos. In a dataset with 600 captured static images from 33 hours of recordings, we are able to find the correct video segment for more than 88% of these images.
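
    As a rough stand-in for the feature signatures and metric described above, the following sketch ranks stored key-frames against a query image using plain colour histograms and an L1 distance; the descriptor and distance are simplifications, not the paper's adaptive feature signatures.

```python
# Simplified content-based retrieval sketch: colour-histogram descriptors and
# L1 distance, standing in for the feature signatures of the referenced paper.
import numpy as np

def descriptor(image, bins=8):
    """Joint RGB histogram, normalised to unit sum."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / hist.sum()

def rank_frames(query_image, keyframes):
    q = descriptor(query_image)
    dists = [np.abs(q - descriptor(f)).sum() for f in keyframes]
    return np.argsort(dists)  # indices of best-matching key-frames first

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    keyframes = [rng.integers(0, 256, (64, 64, 3)) for _ in range(5)]
    query = np.clip(keyframes[3] + rng.integers(-2, 3, (64, 64, 3)), 0, 255)
    print(rank_frames(query, keyframes))  # index 3 should rank first
```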

  12. Rare Disease Video Portal

    OpenAIRE

    Sánchez Bocanegra, Carlos Luis

    2011-01-01

    Rare Disease Video Portal (RD Video) is a web portal that contains videos from YouTube, including full details from 12 YouTube channels.

  13. Adolescents and Video Games: Consumption of Leisure and the Social Construction of the Peer Group.

    Science.gov (United States)

    Panelas, Tom

    1983-01-01

    Addresses the role of video games in the lives of adolescents. Considers the significance of video games both as cultural "texts" and organized social activities. Examines motivations, practical interests, and behavior of suppliers and consumers of these products. (CMG)

  14. Deep hierarchical attention network for video description

    Science.gov (United States)

    Li, Shuohao; Tang, Min; Zhang, Jun

    2018-03-01

    Pairing video to natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model to reduce a visual scene to a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to encoder-decoder models previously used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on standard datasets show that our model outperforms state-of-the-art techniques.
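
    A minimal sketch of the overall encoder-decoder shape is given below (PyTorch assumed available). It uses precomputed CNN frame features, a bidirectional LSTM encoder and a single-layer temporal attention decoder; the paper's hierarchical attention network and training procedure are not reproduced here.

```python
# Minimal encoder-decoder sketch: bidirectional LSTM over precomputed CNN frame
# features plus a single-layer temporal attention decoder. The referenced paper
# uses a *hierarchical* attention network; this only illustrates the shape.
import torch
import torch.nn as nn

class VideoCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, hidden=256, vocab=5000):
        super().__init__()
        self.hidden = hidden
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.embed = nn.Embedding(vocab, hidden)
        self.attn = nn.Linear(2 * hidden + hidden, 1)
        self.decoder = nn.LSTMCell(2 * hidden + hidden, hidden)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, feats, captions):
        enc, _ = self.encoder(feats)                    # (B, T, 2*hidden)
        h = feats.new_zeros(feats.size(0), self.hidden)
        c = feats.new_zeros(feats.size(0), self.hidden)
        logits = []
        for t in range(captions.size(1)):
            word = self.embed(captions[:, t])           # (B, hidden)
            expanded = word.unsqueeze(1).expand(-1, enc.size(1), -1)
            alpha = torch.softmax(self.attn(torch.cat([enc, expanded], dim=-1)), dim=1)
            context = (alpha * enc).sum(dim=1)          # temporal attention over frames
            h, c = self.decoder(torch.cat([context, word], dim=-1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)               # (B, L, vocab)

if __name__ == "__main__":
    model = VideoCaptioner()
    feats = torch.randn(2, 20, 2048)                    # 20 frames of CNN features
    caps = torch.randint(0, 5000, (2, 12))
    print(model(feats, caps).shape)                     # torch.Size([2, 12, 5000])
```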

  15. Captured by Aliens

    Science.gov (United States)

    Achenbach, Joel

    2000-03-01

    Captured by Aliens is a long and twisted voyage from science to the supernatural and back again. I hung out in Roswell, N.M., spent time with the Mars Society, met a guy who was figuring out the best way to build a spaceship to go to Alpha Centauri. I visited the set of the X-Files and talked to Mulder and Scully. One day over breakfast I was told by NASA administrator Dan Goldin, We live in a fog, man! He wants the big answers to the big questions. I spent a night in the base of a huge radio telescope in the boondocks of West Virginia, awaiting the signal from the aliens. I was hypnotized in a hotel room by someone who suspected that I'd been abducted by aliens and that this had triggered my interest in the topic. In the last months of his life, I talked to Carl Sagan, who believed that the galaxy riots with intelligent civilizations. He's my hero, for his steadfast adherence to the scientific method. What I found in all this is that the big question that needs immediate attention is not what's out THERE, but what's going on HERE, on Earth, and why we think the way we do, and how we came to be here in the first place.

  16. Radiative electron capture

    International Nuclear Information System (INIS)

    Biggerstaff, J.A.; Appleton, B.R.; Datz, S.; Moak, C.D.; Neelavathi, V.N.; Noggle, T.S.; Ritchie, R.H.; VerBeek, H.

    1975-01-01

    Some data are presented for radiative electron capture by fast-moving ions. The radiative electron capture spectrum is shown for O8+ in Ag, along with the energy dependence of the capture cross-section. A discrepancy between earlier data, theoretical prediction, and the present data is pointed out. (3 figs) (U.S.)

  17. Web-based video monitoring of CT and MRI procedures

    Science.gov (United States)

    Ratib, Osman M.; Dahlbom, Magdalena; Kho, Hwa T.; Valentino, Daniel J.; McCoy, J. Michael

    2000-05-01

    A web-based video transmission of images from CT and MRI consoles was implemented in an Intranet environment for real-time monitoring of ongoing procedures. Images captured from the consoles are compressed to video resolution and broadcast through a web server. When called upon, the attending radiologists can view these live images on any computer within the secured Intranet network. With adequate compression, these images can be displayed simultaneously in different locations at a rate of 2 to 5 images/sec through standard LAN. Although the image quality is insufficient for diagnostic purposes, our user survey showed that the images were suitable for supervising a procedure, positioning the imaging slices, and performing routine quality checks before completion of a study. The system was implemented at UCLA to monitor 9 CTs and 6 MRIs distributed in 4 buildings. This system significantly improved the radiologists' productivity by saving precious time spent in trips between reading rooms and examination rooms. It also improved patient throughput by reducing the waiting time for the radiologists to come to check a study before moving the patient from the scanner.
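
    The console-side capture loop can be sketched roughly as follows: grab a frame, downscale it to video resolution, JPEG-compress it and throttle the rate to a few images per second before handing it to the web server. The frame source, the publish() callback, and the chosen rate and quality are placeholders, not details of the deployed system.

```python
# Sketch of a console-side capture loop: downscale, JPEG-encode and throttle
# frames for web monitoring. OpenCV (cv2) is assumed to be available; the frame
# source and the publish() callback are placeholders.
import time
import cv2

def monitor_loop(get_frame, publish, fps=3, size=(640, 480), quality=60, max_frames=None):
    interval = 1.0 / fps
    sent = 0
    while max_frames is None or sent < max_frames:
        start = time.time()
        frame = get_frame()                       # placeholder: grab from the console video output
        small = cv2.resize(frame, size)
        ok, jpeg = cv2.imencode(".jpg", small, [cv2.IMWRITE_JPEG_QUALITY, quality])
        if ok:
            publish(jpeg.tobytes())               # placeholder: hand off to the intranet web server
            sent += 1
        time.sleep(max(0.0, interval - (time.time() - start)))
```

    In practice the publish step could simply overwrite the latest JPEG served by an intranet web page that refreshes every few hundred milliseconds.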

  18. Video repairing under variable illumination using cyclic motions.

    Science.gov (United States)

    Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung

    2006-05-01

    This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We experimented on our system with some difficult examples with variable illumination, where the capturing camera can be stationary or in motion.

  19. Prey capture kinematics and four-bar linkages in the bay pipefish, Syngnathus leptorhynchus.

    Science.gov (United States)

    Flammang, Brooke E; Ferry-Graham, Lara A; Rinewalt, Christopher; Ardizzone, Daniele; Davis, Chante; Trejo, Tonatiuh

    2009-01-01

    Because of their modified cranial morphology, syngnathid pipefishes have been described as extreme suction feeders. The presumption is that these fishes use their elongate snout much like a pipette in capturing planktonic prey. In this study, we quantify the contribution of suction to the feeding strike and quantitatively describe the prey capture mechanics of the bay pipefish Syngnathus leptorhynchus, focusing specifically on the role of both cranial elevation and snout movement. We used high-speed video to capture feeding sequences from nine individuals feeding on live brine shrimp. Sequences were digitized in order to calculate kinematic variables that could be used to describe prey capture. Prey capture was very rapid, from 2 to 6 ms from the onset of cranial rotation. We found that suction contributed at most about one-eighth as much as ram to the reduction of the distance between predator and prey. This movement of the predator was due almost exclusively to movement of the snout and neurocranium rather than movement of the whole body. The body was positioned ventral and posterior to the prey and the snout was rotated dorsally by as much as 21 degrees, thereby placing the mouth immediately behind the prey for capture. The snout did not follow the identical trajectory as the neurocranium, however, and reached a maximum angle of only about 10 degrees. The snout consists, in part, of elongate suspensorial elements and the linkages among these elements are retained despite changes in shape. Thus, when the neurocranium is rotated, the four-bar linkage that connects this action with hyoid depression simultaneously acts to expand and straighten the snout relative to the neurocranium. We confirm the presence of a four-bar linkage that facilitates these kinematics by couplings between the pectoral girdle, urohyal, hyoid complex, and the neurocranium-suspensorium complex.

  20. Capture cross sections on unstable nuclei

    Science.gov (United States)

    Tonchev, A. P.; Escher, J. E.; Scielzo, N.; Bedrossian, P.; Ilieva, R. S.; Humby, P.; Cooper, N.; Goddard, P. M.; Werner, V.; Tornow, W.; Rusev, G.; Kelley, J. H.; Pietralla, N.; Scheck, M.; Savran, D.; Löher, B.; Yates, S. W.; Crider, B. P.; Peters, E. E.; Tsoneva, N.; Goriely, S.

    2017-09-01

    Accurate neutron-capture cross sections on unstable nuclei near the line of beta stability are crucial for understanding the s-process nucleosynthesis. However, neutron-capture cross sections for short-lived radionuclides are difficult to measure due to the fact that the measurements require both highly radioactive samples and intense neutron sources. Essential ingredients for describing the γ decays following neutron capture are the γ-ray strength function and level densities. We will compare different indirect approaches for obtaining the most relevant observables that can constrain Hauser-Feshbach statistical-model calculations of capture cross sections. Specifically, we will consider photon scattering using monoenergetic and 100% linearly polarized photon beams. Challenges that exist on the path to obtaining neutron-capture cross sections for reactions on isotopes near and far from stability will be discussed.

  1. Capture cross sections on unstable nuclei

    Directory of Open Access Journals (Sweden)

    Tonchev A.P.

    2017-01-01

    Full Text Available Accurate neutron-capture cross sections on unstable nuclei near the line of beta stability are crucial for understanding the s-process nucleosynthesis. However, neutron-capture cross sections for short-lived radionuclides are difficult to measure due to the fact that the measurements require both highly radioactive samples and intense neutron sources. Essential ingredients for describing the γ decays following neutron capture are the γ-ray strength function and level densities. We will compare different indirect approaches for obtaining the most relevant observables that can constrain Hauser-Feshbach statistical-model calculations of capture cross sections. Specifically, we will consider photon scattering using monoenergetic and 100% linearly polarized photon beams. Challenges that exist on the path to obtaining neutron-capture cross sections for reactions on isotopes near and far from stability will be discussed.

  2. Commonwealth Edison captures intruders on screen

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    Commonwealth Edison has developed three software programs, with the supporting hardware, that significantly upgrade security monitoring capabilities at nuclear power stations. These are Video Capture, the Alternate Perimeter Alarm Reporting System, and the Redundant Access Control System. Conventional video systems only display what is happening at the moment and rewinding a VCR to discover what occurred earlier takes time. With Video Capture the images can be instantly restored to the monitor screen and printed out. When one of the security devices used to monitor the perimeter of a Commonwealth Edison nuclear power station is tripped, the Video Capture program stores the visual image digitally. This is done using technology similar to that employed in fax machines. The security staff are thus able to distinguish immediately between disturbances taking place simultaneously at different security zones. They can magnify and compare the stored images and print them out. The Alternate Perimeter Alarm Reporting System was developed to speed the transmission of alarm signals from the security sensors to the security computer. The Redundant Access Control System (RACS) was originally developed to meet the requirement of the Nuclear Regulatory Commission (NRC) for a secondary computer-operated security measure to monitor employee access to a nuclear power station. When employee drug testing became an additional NRC requirement, the Nuclear Division of Commonwealth Edison asked their programmers to modify RACS to generate a random list of personnel to be tested for substance abuse. RACS was then further modified to produce numerous station operating reports that had been previously compiled manually. (author)

  3. ATLAS Live: Collaborative Information Streams

    CERN Document Server

    Goldfarb, S; The ATLAS collaboration

    2010-01-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using the SCALA digital signage software system. The system is robust and flexible, allowing for the usage of scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intrascreen divisibility. The video is made available to the collaboration or public through the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video t...

  4. Robust efficient video fingerprinting

    Science.gov (United States)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.

  5. Human recognition at a distance in video

    CERN Document Server

    Bhanu, Bir

    2010-01-01

    Most biometric systems employed for human recognition require physical contact with, or close proximity to, a cooperative subject. Far more challenging is the ability to reliably recognize individuals at a distance, when viewed from an arbitrary angle under real-world environmental conditions. Gait and face data are the two biometrics that can be most easily captured from a distance using a video camera. This comprehensive and logically organized text/reference addresses the fundamental problems associated with gait and face-based human recognition, from color and infrared video data that are

  6. Heartbeat Rate Measurement from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    Heartbeat Rate (HR) reveals a person’s health condition. This paper presents an effective system for measuring HR from facial videos acquired in a more realistic environment than the testing environment of current systems. The proposed method utilizes a facial feature point tracking method...... by combining a ‘Good feature to track’ and a ‘Supervised descent method’ in order to overcome the limitations of currently available facial video based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face...
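
    A heavily simplified stand-in for such a pipeline is sketched below: OpenCV's 'good features to track' are followed with pyramidal Lucas-Kanade optical flow, and the heart rate is read from the dominant spectral peak of the vertical motion trace. The face detection, the supervised descent method and the robustness measures of the actual system are omitted, and the video path is a placeholder.

```python
# Simplified HR-from-facial-video sketch: track feature points, build a motion
# trace, and pick the dominant frequency in the heart-rate band. This is a
# stand-in for the referenced method, not a reimplementation of it.
import numpy as np
import cv2

def heart_rate_from_video(path, fps=30.0):
    cap = cv2.VideoCapture(path)                      # placeholder path to a facial video
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01, minDistance=10)
    trace = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        trace.append(np.mean(nxt[status.ravel() == 1, 0, 1]))  # mean vertical position
        prev, pts = gray, nxt
    trace = np.asarray(trace) - np.mean(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(trace))
    band = (freqs > 0.75) & (freqs < 4.0)             # roughly 45-240 beats per minute
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```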

  7. Veterans Crisis Line: Videos About Reaching out for Help

    Medline Plus

  8. Stimulus-driven capture and contingent capture

    NARCIS (Netherlands)

    Theeuwes, J.; Olivers, C.N.L.; Belopolsky, A.V.

    2010-01-01

    Whether or not certain physical events can capture attention has been one of the most debated issues in the study of attention. This discussion is concerned with how goal-directed and stimulus-driven processes interact in perception and cognition. On one extreme of the spectrum is the idea that

  9. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    This paper outlines R and D activities on high-speed video cameras that have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the need to develop a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991, and a video camera with triple sensors, using the same sensor as the previous camera, followed in 1996. Its frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capture. The idea of a 1-million-fps video camera based on an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Design of a prototype ISIS is currently under way and it will, hopefully, be fabricated in the near future. Epoch-making cameras developed by others in the history of high-speed video cameras are also briefly reviewed.

  10. Acoustic Neuroma Educational Video

    Medline Plus

  11. Videos, Podcasts and Livechats

    Medline Plus

  12. Video quality pooling adaptive to perceptual distortion severity.

    Science.gov (United States)

    Park, Jincheol; Seshadrinathan, Kalpana; Lee, Sanghoon; Bovik, Alan Conrad

    2013-02-01

    It is generally recognized that severe video distortions that are transient in space and/or time have a large effect on overall perceived video quality. In order to understand this phenomenon, we study the distribution of spatio-temporally local quality scores obtained from several video quality assessment (VQA) algorithms on videos suffering from compression and lossy transmission over communication channels. We propose a content-adaptive spatial and temporal pooling strategy based on the observed distribution. Our method adaptively emphasizes "worst" scores along both the spatial and temporal dimensions of a video sequence and also considers the perceptual effect of large-area cohesive motion flow such as egomotion. We demonstrate the efficacy of the method by testing it using three different VQA algorithms on the LIVE Video Quality database and the EPFL-PoliMI video quality database.
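
    The worst-case emphasis can be illustrated with a simple percentile-pooling sketch over a matrix of local quality scores; the percentile values below are arbitrary illustrations, not the content-adaptive parameters of the proposed method.

```python
# Sketch of worst-case pooling over spatio-temporally local quality scores.
# scores[t, i] is the local quality (higher is better) of spatial block i in
# frame t. The percentile values are illustrative only.
import numpy as np

def pooled_quality(scores, spatial_pct=10, temporal_pct=20):
    # For each frame, average only the worst spatial_pct percent of block scores.
    k = max(1, int(scores.shape[1] * spatial_pct / 100))
    frame_scores = np.sort(scores, axis=1)[:, :k].mean(axis=1)
    # Across frames, average only the worst temporal_pct percent of frame scores.
    m = max(1, int(len(frame_scores) * temporal_pct / 100))
    return np.sort(frame_scores)[:m].mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = rng.uniform(0.7, 1.0, (300, 100))
    scores[120:130, :40] = 0.2       # a brief, spatially localized severe distortion
    print(pooled_quality(scores))     # pooled score is dominated by the transient distortion
```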

  13. Video Design Games

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer

    We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop...... in which 25 educators as part of a digital fabrication and design program were able to critically reflect on their teaching practice....

  14. Production-Level Facial Performance Capture Using Deep Convolutional Neural Networks

    OpenAIRE

    Laine, Samuli; Karras, Tero; Aila, Timo; Herva, Antti; Saito, Shunsuke; Yu, Ronald; Li, Hao; Lehtinen, Jaakko

    2016-01-01

    We present a real-time deep learning framework for video-based facial performance capture -- the dense 3D tracking of an actor's face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5-10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular v...

  15. The Children's Video Marketplace.

    Science.gov (United States)

    Ducey, Richard V.

    This report examines a growing submarket, the children's video marketplace, which comprises broadcast, cable, and video programming for children 2 to 11 years old. A description of the tremendous growth in the availability and distribution of children's programming is presented, the economics of the children's video marketplace are briefly…

  16. Video Self-Modeling

    Science.gov (United States)

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  17. Capture ready study

    Energy Technology Data Exchange (ETDEWEB)

    Minchener, A.

    2007-07-15

    There are a large number of ways in which the capture of carbon as carbon dioxide (CO{sub 2}) can be integrated into fossil fuel power stations, most being applicable for both gas and coal feedstocks. To add to the choice of technology is the question of whether an existing plant should be retrofitted for capture, or whether it is more attractive to build totally new. This miscellany of choices adds considerably to the commercial risk of investing in a large power station. An intermediate stage between the non-capture and full capture state would be advantageous in helping to determine the best way forward and hence reduce those risks. In recent years the term 'carbon capture ready' or 'capture ready' has been coined to describe such an intermediate stage plant and is now widely used. However a detailed and all-encompassing definition of this term has never been published. All fossil fuel consuming plant produce a carbon dioxide gas byproduct. There is a possibility of scrubbing it with an appropriate CO{sub 2} solvent. Hence it could be said that all fossil fuel plant is in a condition for removal of its CO{sub 2} effluent and therefore already in a 'capture ready' state. Evidently, the practical reality of solvent scrubbing could cost more than the rewards offered by such as the ETS (European Trading Scheme). In which case, it can be said that although the possibility exists of capturing CO{sub 2}, it is not a commercially viable option and therefore the plant could not be described as ready for CO{sub 2} capture. The boundary between a capture ready and a non-capture ready condition using this definition cannot be determined in an objective and therefore universally acceptable way and criteria must be found which are less onerous and less potentially contentious to assess. 16 refs., 2 annexes.

  18. Take-home video for adult literacy

    Science.gov (United States)

    Yule, Valerie

    1996-01-01

    In the past, it has not been possible to "teach oneself to read" at home, because learners could not read the books to teach them. Videos and interactive compact discs have changed that situation and challenge current assumptions of the pedagogy of literacy. This article describes an experimental adult literacy project using video technology. The language used is English, but the basic concepts apply to any alphabetic or syllabic writing system. A half-hour cartoon video can help adults and adolescents with learning difficulties. Computer-animated cartoon graphics are attractive to look at, and simplify complex material in a clear, lively way. This video technique is also proving useful for distance learners, children, and learners of English as a second language. Methods and principles are to be extended using interactive compact discs.

  19. A Method for Estimating Surveillance Video Georeferences

    Directory of Open Access Journals (Sweden)

    Aleksandar Milosavljević

    2017-07-01

    Full Text Available The integration of a surveillance camera video with a three-dimensional (3D) geographic information system (GIS) requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM) of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., position and orientation of the camera.
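
    The estimation step can be sketched with SciPy's Levenberg-Marquardt solver: given matched 2D frame coordinates and 3D geographic locations, fit the camera position and orientation that minimise reprojection error. The simple pinhole model, the Euler-angle parameterisation and the known focal length below are simplifying assumptions.

```python
# Sketch of camera-pose estimation by Levenberg-Marquardt: fit position and
# orientation so that projected 3D points match their video-frame coordinates.
# A pinhole camera with known focal length is assumed for illustration.
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import least_squares

def project(params, points3d, focal):
    pos, angles = params[:3], params[3:6]
    cam = Rotation.from_euler("xyz", angles).apply(points3d - pos)
    return focal * cam[:, :2] / cam[:, 2:3]           # (N, 2) image coordinates

def estimate_pose(points2d, points3d, focal, initial):
    residual = lambda p: (project(p, points3d, focal) - points2d).ravel()
    return least_squares(residual, initial, method="lm").x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true = np.array([10.0, -5.0, 2.0, 0.1, -0.05, 0.3])   # position + Euler angles
    pts3d = rng.uniform(-50, 50, (20, 3)) + np.array([0, 0, 100.0])
    pts2d = project(true, pts3d, focal=1000.0)
    print(estimate_pose(pts2d, pts3d, focal=1000.0, initial=np.zeros(6)))  # should approach `true`
```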

  20. Medical students' perceptions of video-linked lectures and video-streaming

    Directory of Open Access Journals (Sweden)

    Karen Mattick

    2010-12-01

    Full Text Available Video-linked lectures allow healthcare students across multiple sites, and between university and hospital bases, to come together for the purposes of shared teaching. Recording and streaming video-linked lectures allows students to view them at a later date and provides an additional resource to support student learning. As part of a UK Higher Education Academy-funded Pathfinder project, this study explored medical students' perceptions of video-linked lectures and video-streaming, and their impact on learning. The methodology involved semi-structured interviews with 20 undergraduate medical students across four sites and five year groups. Several key themes emerged from the analysis. Students generally preferred live lectures at the home site and saw interaction between sites as a major challenge. Students reported that their attendance at live lectures was not affected by the availability of streamed lectures and tended to be influenced more by the topic and speaker than the technical arrangements. These findings will inform other educators interested in employing similar video technologies in their teaching. Keywords: video-linked lecture; video-streaming; student perceptions; decision-making; cross-campus teaching.

  1. CAPTURED India Country Evaluation

    NARCIS (Netherlands)

    O'Donoghue, R.; Brouwers, J.H.A.M.

    2012-01-01

    This report provides the findings of the India Country Evaluation and is produced as part of the overall CAPTURED End Evaluation. After five years of support from the CAPTURED project, the End Evaluation has assessed the results as commendable. I-AIM was able to design an approach in which health

  2. Interatomic Coulombic electron capture

    International Nuclear Information System (INIS)

    Gokhberg, K.; Cederbaum, L. S.

    2010-01-01

    In a previous publication [K. Gokhberg and L. S. Cederbaum, J. Phys. B 42, 231001 (2009)] we presented the interatomic Coulombic electron capture process--an efficient electron capture mechanism by atoms and ions in the presence of an environment. In the present work we derive and discuss the mechanism in detail. We demonstrate thereby that this mechanism belongs to a family of interatomic electron capture processes driven by electron correlation. In these processes the excess energy released in the capture event is transferred to the environment and used to ionize (or to excite) it. This family includes the processes where the capture is into the lowest or into an excited unoccupied orbital of an atom or ion and proceeds in step with the ionization (or excitation) of the environment, as well as the process where an intermediate autoionizing excited resonance state is formed in the capturing center which subsequently deexcites to a stable state transferring its excess energy to the environment. Detailed derivation of the asymptotic cross sections of these processes is presented. The derived expressions make clear that the environment assisted capture processes can be important for many systems. Illustrative examples are presented for a number of model systems for which the data needed to construct the various capture cross sections are available in the literature.

  3. VBR video traffic models

    CERN Document Server

    Tanwir, Savera

    2014-01-01

    There has been a phenomenal growth in video applications over the past few years. An accurate traffic model of Variable Bit Rate (VBR) video is necessary for performance evaluation of a network design and for generating synthetic traffic that can be used for benchmarking a network. A large number of models for VBR video traffic have been proposed in the literature for different types of video in the past 20 years. Here, the authors have classified and surveyed these models and have also evaluated the models for H.264 AVC and MVC encoded video and discussed their findings.

  4. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  5. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the incapability of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects.The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  6. Video Spectroscopy with the RSpec Explorer

    Science.gov (United States)

    Lincoln, James

    2018-01-01

    The January 2018 issue of "The Physics Teacher" saw two articles that featured the RSpec Explorer as a supplementary lab apparatus. The RSpec Explorer provides live video spectrum analysis with which teachers can demonstrate how to investigate features of a diffracted light source. In this article I provide an introduction to the device…

  7. The benefits of playing video games

    NARCIS (Netherlands)

    Granic, I.; Lobel, A.M.; Engels, R.C.M.E.

    2014-01-01

    Video games are a ubiquitous part of almost all children’s and adolescents’ lives, with 97% playing for at least one hour per day in the United States. The vast majority of research by psychologists on the effects of “gaming” has been on its negative impact: the potential harm related to violence,

  8. Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.

    Science.gov (United States)

    Inchang Choi; Seung-Hwan Baek; Kim, Min H

    2017-11-01

    For extending the dynamic range of video, it is a common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in nature. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial information of detail in differently exposed rows is often available via interlacing, we make use of the information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and also adopt multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher dynamic range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with the state-of-the-art high-dynamic-range video methods.
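
    The interlaced-exposure idea itself is easy to sketch: even rows carry a short exposure, odd rows a long one, and each frame is merged into a radiance estimate after the clipped rows are filled in. The naive row substitution below is precisely the kind of deinterlacing that produces the jaggy artifacts the paper's learned approach is designed to avoid; the exposure values are illustrative.

```python
# Naive sketch of merging an interlaced-exposure frame into an HDR estimate:
# even rows are short exposure, odd rows are long exposure, and clipped long
# rows fall back to the short-exposure row above. The paper replaces this
# naive filling with learned deinterlacing and denoising.
import numpy as np

def merge_interlaced(frame, short_exp=1.0, long_exp=8.0):
    """frame: 2D array with even height; even rows short exposure, odd rows long exposure."""
    frame = frame.astype(float)
    short = frame[0::2] / short_exp          # radiance from short-exposure rows
    long_ = frame[1::2] / long_exp           # radiance from long-exposure rows
    # Where the long-exposure rows clip, copy the short-exposure estimate from
    # the row above; this crude substitution is what causes jaggy artifacts.
    long_ = np.where(frame[1::2] >= 254, short, long_)
    hdr = np.empty_like(frame)
    hdr[0::2], hdr[1::2] = short, long_
    return hdr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.uniform(0, 250, (8, 8))                  # "true" radiance
    frame = np.empty_like(scene)
    frame[0::2] = np.clip(scene[0::2] * 1.0, 0, 255)     # short-exposure rows
    frame[1::2] = np.clip(scene[1::2] * 8.0, 0, 255)     # long-exposure rows (mostly clipped)
    print(merge_interlaced(frame).round(1))
```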

  9. PROTOTIPE VIDEO EDITOR DENGAN MENGGUNAKAN DIRECT X DAN DIRECT SHOW

    Directory of Open Access Journals (Sweden)

    Djoni Haryadi Setiabudi

    2004-01-01

    Full Text Available Technological development has given people the chance to capture their memorable moments in video format. A high-quality digital video is the result of a good editing process, which in turn creates the need for an editor application. To address this need, this work describes the process of building a simple application for video editing. The application development uses programming techniques often applied in multimedia applications, especially video. The first part of the application deals with video file compression and decompression, followed by the editing of the digital video file; the application is also equipped with the facilities needed for the editing process. The application was made with Microsoft Visual C++ and DirectX technology, particularly DirectShow, and provides basic facilities that help in editing a digital video file, producing an AVI format file after the editing process is finished. Testing showed that the application can 'cut' and 'insert' video files in AVI, MPEG, MPG and DAT formats; the 'cut' and 'insert' operations can only be done in static order. The application also provides transition effects for each clip, and finally saves the newly edited video file in AVI format.

  10. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edi

  11. Optional carbon capture

    Energy Technology Data Exchange (ETDEWEB)

    Alderson, T.; Scott, S.; Griffiths, J. [Jacobs Engineering, London (United Kingdom)

    2007-07-01

    In the case of IGCC power plants, carbon capture can be carried out before combustion. The carbon monoxide in the syngas is catalytically shifted to carbon dioxide and then captured in a standard gas absorption system. However, the insertion of a shift converter into an existing IGCC plant with no shift would mean a near total rebuild of the gasification waste heat recovery, gas treatment system and HRSG, with only the gasifier and gas turbine retaining most of their original features. To reduce the extent, cost and time taken for the revamping, the original plant could incorporate the shift, and the plant would then be operated without capture to advantage, and converted to capture mode of operation when commercially appropriate. This paper examines this concept of placing a shift converter into an IGCC plant before capture is required, and operating the same plant first without and then later with CO{sub 2} capture in a European context. The advantages and disadvantages of this 'capture ready' option are discussed. 6 refs., 2 figs., 4 tabs.

  12. Iodine neutron capture therapy

    Science.gov (United States)

    Ahmed, Kazi Fariduddin

    A new technique, Iodine Neutron Capture Therapy (INCT) is proposed to treat hyperthyroidism in people. Present thyroid therapies, surgical removal and 131I treatment, result in hypothyroidism and, for 131I, involve protracted treatment times and excessive whole-body radiation doses. The new technique involves using a low energy neutron beam to convert a fraction of the natural iodine stored in the thyroid to radioactive 128I, which has a 24-minute half-life and decays by emitting 2.12-MeV beta particles. The beta particles are absorbed in and damage some thyroid tissue cells and consequently reduce the production and release of thyroid hormones to the blood stream. Treatment times and whole-body radiation doses are thus reduced substantially. This dissertation addresses the first of the several steps needed to obtain medical profession acceptance and regulatory approval to implement this therapy. As with other such programs, initial feasibility is established by performing experiments on suitable small mammals. Laboratory rats were used and their thyroids were exposed to the beta particles coming from small encapsulated amounts of 128I. Masses of 89.0 mg reagent-grade elemental iodine crystals have been activated in the ISU AGN-201 reactor to provide 0.033 mBq of 128I. This activity delivers 0.2 Gy to the thyroid gland of 300-g male rats having fresh thyroid tissue masses of ˜20 mg. Larger iodine masses are used to provide greater doses. The activated iodine is encapsulated to form a thin (0.16 cm 2/mg) patch that is then applied directly to the surgically exposed thyroid of an anesthetized rat. Direct neutron irradiation of a rat's thyroid was not possible due to its small size. Direct in-vivo exposure of the thyroid of the rat to the emitted radiation from 128I is allowed to continue for 2.5 hours (6 half-lives). Pre- and post-exposure blood samples are taken to quantify thyroid hormone levels. The serum T4 concentration is measured by radioimmunoassay at

  13. Hierarchical Context Modeling for Video Event Recognition.

    Science.gov (United States)

    Wang, Xiaoyang; Ji, Qiang

    2016-10-11

    Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts from three levels including image level, semantic level, and prior level. At the image level, we introduce two types of contextual features including the appearance context features and interaction context features to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on a deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts including scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at different levels. Through the hierarchical context model, contexts at different levels jointly contribute to the event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts at each level can improve event recognition performance, and jointly integrating three levels of contexts through our hierarchical model achieves the best performance.

  14. Video Toroid Cavity Imager

    Energy Technology Data Exchange (ETDEWEB)

    Gerald, Rex E. II; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  15. Digital Video in Research

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2012-01-01

    Is video becoming “the new black” in academia, if so, what are the challenges? The integration of video in research methodology (for collection, analysis) is well-known, but the use of “academic video” for dissemination is relatively new (Eriksson and Sørensen). The focus of this paper is academic......). In the video, I appear (along with other researchers) and two Danish film directors, and excerpts from their film. My challenges included how to edit the academic video and organize the collaborative effort. I consider video editing as a semiotic, transformative process of “reassembling” voices....... In the discussion, I review academic video in terms of relevance and implications for research practice. The theoretical background is social constructivist, combining social semiotics (Kress, van Leeuwen, McCloud), visual anthropology (Banks, Pink) and dialogic theory (Bakhtin). The Bakhtinian notion of “voices...

  16. Reflections on academic video

    Directory of Open Access Journals (Sweden)

    Thommy Eriksson

    2012-11-01

    Full Text Available As academics we study, research and teach audiovisual media, yet rarely disseminate and mediate through it. Today, developments in production technologies have enabled academic researchers to create videos and mediate audiovisually. In academia it is taken for granted that everyone can write a text. Is it now time to assume that everyone can make a video essay? Using the online journal of academic videos Audiovisual Thinking and the videos published in it as a case study, this article seeks to reflect on the emergence and legacy of academic audiovisual dissemination. Anchoring academic video and audiovisual dissemination of knowledge in two critical traditions, documentary theory and semiotics, we will argue that academic video is in fact already present in a variety of academic disciplines, and that academic audiovisual essays are bringing trends and developments that have long been part of academic discourse to their logical conclusion.

  17. ATLAS Live: Collaborative Information Streams

    Energy Technology Data Exchange (ETDEWEB)

    Goldfarb, Steven [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States); Collaboration: ATLAS Collaboration

    2011-12-23

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  18. ATLAS Live: Collaborative Information Streams

    International Nuclear Information System (INIS)

    Goldfarb, Steven

    2011-01-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  19. ATLAS Live: Collaborative Information Streams

    CERN Document Server

    Goldfarb, S; The ATLAS collaboration

    2011-01-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at th...

  20. The Use of Lecture Capture and Student Performance in Physiology

    Science.gov (United States)

    Hadgu, Rim Mekonnen; Huynh, Sophia; Gopalan, Chaya

    2016-01-01

    Lecture capture (LC) technology is fairly new and has gained interest among higher education institutions, faculty and students alike. The live lecture (LL) is captured in real time, and this recording (the LC) is made available for students to access later, whether for review purposes or to replace a missed class. Student performance was compared between…

  1. A Super-resolution Reconstruction Algorithm for Surveillance Video

    Directory of Open Access Journals (Sweden)

    Jian Shao

    2017-01-01

    Full Text Available Recent technological developments have resulted in surveillance video becoming a primary method of preserving public security. Many city crimes are observed in surveillance video, and the most abundant evidence collected by the police is also acquired through surveillance video sources. Because surveillance footage offers very strong support for solving criminal cases, creating effective policies and applying useful methods for the retrieval of additional evidence is becoming increasingly important. However, surveillance video has its failings, namely footage captured in low resolution (LR) and with poor visual quality. In this paper, we discuss the characteristics of surveillance video and develop a super-resolution reconstruction method that combines manual feature registration, maximum a posteriori (MAP) estimation, and projection onto convex sets (POCS) to improve the quality of surveillance video. The method makes optimal use of the information contained in the LR video frames while clearly preserving image edges and controlling the convergence of the algorithm. Finally, we suggest how to adapt the algorithm by analyzing prior information about the target image.
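
    The record describes a MAP/POCS-style reconstruction; a minimal single-frame stand-in is sketched below using iterative back-projection with a clipping constraint (one simple convex-set projection). It assumes OpenCV is available and is not the registration-based multi-frame method of the paper.

    ```python
    import numpy as np
    import cv2  # OpenCV, assumed available

    def iterative_backprojection(lr_frame, scale=2, n_iter=20):
        """Upscale one LR frame by repeatedly simulating the LR observation from
        the current HR estimate and back-projecting the residual (a basic
        POCS-style constraint step)."""
        h, w = lr_frame.shape[:2]
        hr = cv2.resize(lr_frame, (w * scale, h * scale),
                        interpolation=cv2.INTER_CUBIC).astype(np.float64)
        lr = lr_frame.astype(np.float64)
        for _ in range(n_iter):
            simulated = cv2.resize(hr, (w, h), interpolation=cv2.INTER_AREA)
            residual = lr - simulated
            hr += cv2.resize(residual, (w * scale, h * scale),
                             interpolation=cv2.INTER_CUBIC)
            np.clip(hr, 0, 255, out=hr)  # projection onto the valid-intensity set
        return hr.astype(np.uint8)
    ```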

  2. A Novel High Efficiency Fractal Multiview Video Codec

    Directory of Open Access Journals (Sweden)

    Shiping Zhu

    2015-01-01

    Full Text Available Multiview video, one of the main types of three-dimensional (3D) video signals, is captured by a set of video cameras from various viewpoints and has attracted much interest recently. Data compression for multiview video has become a major issue. In this paper, a novel high-efficiency fractal multiview video codec is proposed. First, an intraframe algorithm based on the H.264/AVC intra-prediction modes and a combined fractal and motion compensation (CFMC) algorithm, in which range blocks are predicted from domain blocks in the previously decoded frame using translational motion with a gray-value transformation, are proposed for compressing the anchor viewpoint video. Then a temporal-spatial prediction structure and a fast disparity estimation algorithm exploiting parallax distribution constraints are designed to compress the multiview video data. The proposed fractal multiview video codec can adequately exploit temporal and spatial correlations. Experimental results show that it obtains about a 0.36 dB increase in decoding quality and a 36.21% decrease in encoding bitrate compared with JMVC 8.5, while saving 95.71% of the encoding time. Rate-distortion comparisons with other multiview video coding methods also demonstrate the superiority of the proposed scheme.
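
    The core fractal step mentioned above (predicting a range block from a domain block of the previously decoded frame via a gray-value transformation) can be sketched as a least-squares fit of a scale and offset. This illustrates the principle only, not the codec's actual search or entropy coding.

    ```python
    import numpy as np

    def fit_gray_transform(domain_block, range_block):
        """Least-squares fit of range ≈ s * domain + o, the gray-value
        transformation applied when a range block is predicted from a
        candidate domain block."""
        d = domain_block.astype(np.float64).ravel()
        r = range_block.astype(np.float64).ravel()
        A = np.column_stack([d, np.ones_like(d)])
        (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
        prediction = s * domain_block.astype(np.float64) + o
        mse = float(np.mean((prediction - range_block) ** 2))
        return s, o, mse

    # In a fractal coder, each range block is matched against many candidate
    # domain blocks (here, from the previously decoded frame) and the (s, o)
    # pair with the lowest error is transmitted instead of the pixels.
    ```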

  3. Teasing Apart Complex Motions using VideoPoint

    Science.gov (United States)

    Fischer, Mark

    2002-10-01

    Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the two-dimensional motion of an object filmed by a camera that is itself moving and rotating in the same plane, will be discussed. Methods for extracting the desired object motion will be given, as well as suggestions for shooting more easily analyzable video clips.
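
    One way to tease such a motion apart, sketched below under the assumption that the object and two background reference points have already been digitized frame by frame (e.g. exported from VideoPoint), is to use the reference points to remove the camera's translation and in-plane rotation.

    ```python
    import numpy as np

    def to_ground_frame(obj_xy, ref1_xy, ref2_xy):
        """Remove camera translation and in-plane rotation from digitized positions.

        obj_xy, ref1_xy, ref2_xy: arrays of shape (n_frames, 2) in image coordinates.
        ref1 and ref2 are two points fixed in the lab; their apparent motion is
        entirely due to the camera, so it defines the per-frame transform to undo.
        """
        obj, r1, r2 = (np.asarray(a, dtype=float) for a in (obj_xy, ref1_xy, ref2_xy))
        baseline = r2 - r1
        angle = np.arctan2(baseline[:, 1], baseline[:, 0])   # camera roll per frame
        c, s = np.cos(-angle), np.sin(-angle)
        rel = obj - r1                                        # remove translation
        x = c * rel[:, 0] - s * rel[:, 1]                     # remove rotation
        y = s * rel[:, 0] + c * rel[:, 1]
        return np.column_stack([x, y])
    ```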

  4. High-Speed Video Analysis in a Conceptual Physics Class

    Science.gov (United States)

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software.2,3 Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to easily be captured at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper will discuss using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this will involve the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
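
    A minimal sketch of the analysis step described above, assuming the rocket's vertical positions have already been digitized from the high-frame-rate clip and scaled to metres; the frame rate and data here are illustrative.

    ```python
    import numpy as np

    def boost_acceleration(y_positions_m, fps=240.0):
        """Estimate velocity and acceleration during the boost phase from
        frame-by-frame vertical positions (metres) in high-frame-rate video."""
        y = np.asarray(y_positions_m, dtype=float)
        dt = 1.0 / fps
        v = np.gradient(y, dt)   # central-difference velocity
        a = np.gradient(v, dt)   # central-difference acceleration
        return v, a

    # Synthetic check: constant 50 m/s^2 boost sampled at 240 fps
    t = np.arange(0, 0.25, 1 / 240.0)
    _, a = boost_acceleration(0.5 * 50.0 * t**2, fps=240.0)
    print(round(a[5:-5].mean(), 1))   # 50.0
    ```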

  5. Sending Safety Video over WiMAX in Vehicle Communications

    Directory of Open Access Journals (Sweden)

    Jun Steed Huang

    2013-10-01

    Full Text Available This paper reports on the design of an OPNET simulation platform to test the performance of sending real-time safety video over VANET (Vehicular Ad hoc NETwork) using WiMAX technology. To provide a more realistic environment for streaming real-time video, a video model was created based on the study of video traffic traces captured from a realistic vehicular camera, and different design considerations were taken into account. A practical controller over real-time streaming protocol is implemented to control data traffic congestion for future road safety development. Our driving video model was then integrated with the WiMAX OPNET model along with a mobility model based on real road maps. Using this simulation platform, different mobility cases have been studied and the performance evaluated in terms of end-to-end delay, jitter and visual experience.

  6. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. The content of massive multimedia is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are a set of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environmental change. Detecting lakes above ice suffers from diverse image quality and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, which led to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of video captured by home web cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  7. Range-Measuring Video Sensors

    Science.gov (United States)

    Howard, Richard T.; Briscoe, Jeri M.; Corder, Eric L.; Broderick, David

    2006-01-01

    Optoelectronic sensors of a proposed type would perform the functions of both electronic cameras and triangulation-type laser range finders. That is to say, these sensors would both (1) generate ordinary video or snapshot digital images and (2) measure the distances to selected spots in the images. These sensors would be well suited to use on robots that are required to measure distances to targets in their work spaces. In addition, these sensors could be used for all the purposes for which electronic cameras have been used heretofore. The simplest sensor of this type, illustrated schematically in the upper part of the figure, would include a laser, an electronic camera (either video or snapshot), a frame-grabber/image-capturing circuit, an image-data-storage memory circuit, and an image-data processor. There would be no moving parts. The laser would be positioned at a lateral distance d to one side of the camera and would be aimed parallel to the optical axis of the camera. When the range of a target in the field of view of the camera was required, the laser would be turned on and an image of the target would be stored and preprocessed to locate the angle (a) between the optical axis and the line of sight to the centroid of the laser spot.
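
    For the geometry described (laser offset laterally by d and aimed parallel to the optical axis), the range follows from R = d / tan(a). A small sketch, assuming the focal length in pixels is known from calibration and the laser-spot centroid has already been located:

    ```python
    import math

    def range_from_laser_spot(spot_px, principal_point_px, focal_length_px, d_m):
        """Triangulation range for the sensor geometry described above.

        spot_px            : pixel coordinate of the laser-spot centroid along the
                             axis in which the laser is offset
        principal_point_px : pixel coordinate of the optical axis on that axis
        focal_length_px    : focal length in pixels (assumed known from calibration)
        d_m                : lateral laser-to-camera offset in metres
        """
        # angle between the optical axis and the line of sight to the spot
        alpha = math.atan((spot_px - principal_point_px) / focal_length_px)
        return d_m / math.tan(alpha)

    print(range_from_laser_spot(660.0, 640.0, 1000.0, 0.05))  # 2.5 (metres)
    ```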

  8. Geographic Video 3d Data Model And Retrieval

    Science.gov (United States)

    Han, Z.; Cui, C.; Kong, Y.; Wu, H.

    2014-04-01

    Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly and the trend of this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction; they are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with video contents. The raw spatial information is synthesized into point, line, polygon and solid geometries according to camcorder parameters such as focal length and angle of view. For video segments and video frames, we defined three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can then be queried by computing the spatial relations between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView and VFFovCone. We designed the query methods in detail using the Structured Query Language (SQL). The experiments indicate that the model is a multi-purpose, integrated, loosely coupled, flexible and extensible data model for the management of geographic stereo video.
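
    An illustrative query in the spirit of this model is sketched below. The schema, column names and helper function are hypothetical; the spatial functions (ST_Contains, ST_DWithin, ST_GeomFromText) follow the OGC Simple Features style found in spatially enabled databases such as PostGIS or SpatiaLite.

    ```python
    import sqlite3  # stand-in connection; a spatially enabled database is assumed

    # Hypothetical schema: video_frame(frame_id, video_id, captured_at,
    #                                  vf_location, vf_fov_cone)
    # where vf_location and vf_fov_cone are geometries built from GPS position,
    # azimuth, focal length and angle of view, as in the data model above.
    QUERY = """
    SELECT frame_id, video_id, captured_at
    FROM   video_frame
    WHERE  ST_Contains(vf_fov_cone, ST_GeomFromText(:poi))
      AND  ST_DWithin(vf_location, ST_GeomFromText(:poi), :radius_m)
    ORDER  BY captured_at;
    """

    def frames_showing_point(conn, poi_wkt, radius_m=200.0):
        """Return frames whose field-of-view cone contains a point of interest
        and whose camera position lies within radius_m of that point."""
        return conn.execute(QUERY, {"poi": poi_wkt, "radius_m": radius_m}).fetchall()

    # Usage (hypothetical): frames_showing_point(conn, "POINT(114.05 22.54)")
    ```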

  9. Adiabatic capture and debunching

    International Nuclear Information System (INIS)

    Ng, K.Y.

    2012-01-01

    In the study of beam preparation for the g-2 experiment, adiabatic debunching and adiabatic capture are revisited. The voltage programs for these adiabatic processes are derived and their properties discussed. Comparison is made with other forms of adiabatic capture programs. The muon g-2 experiment at Fermilab calls for intense proton bunches for the creation of muons. A booster batch of 84 bunches is injected into the Recycler Ring, where it is debunched and captured into 4 intense bunches with the 2.5-MHz rf. The experiment requires short bunches with a total width of less than 100 ns. The transport line from the Recycler to the muon-production target has a low momentum aperture of ∼ ±22 MeV. Thus each of the 4 intense proton bunches is required to have an emittance less than ∼3.46 eVs. The incoming booster bunches have a total emittance of ∼8.4 eVs, i.e. each one has an emittance of ∼0.1 eVs. However, there is always an emittance increase when the 84 booster bunches are debunched. There will be an even larger emittance increase during adiabatic capture into the buckets of the 2.5-MHz rf. In addition, the incoming booster bunches may have emittances larger than 0.1 eVs. In this article, we concentrate on the analysis of the adiabatic capture process with the intention of preserving the beam emittance as much as possible. At this moment, a beam preparation experiment is being performed at the Main Injector. Since the Main Injector and the Recycler Ring have roughly the same lattice properties, we refer instead to adiabatic capture in the Main Injector in our discussions.

  10. Dynamic imaging of cell-free and cell-associated viral capture in mature dendritic cells.

    Science.gov (United States)

    Izquierdo-Useros, Nuria; Esteban, Olga; Rodriguez-Plata, Maria T; Erkizia, Itziar; Prado, Julia G; Blanco, Julià; García-Parajo, Maria F; Martinez-Picado, Javier

    2011-12-01

    Dendritic cells (DCs) capture human immunodeficiency virus (HIV) through a non-fusogenic mechanism that enables viral transmission to CD4(+) T cells, contributing to in vivo viral dissemination. Although previous studies have provided important clues to cell-free viral capture by mature DCs (mDCs), dynamic and kinetic insight into this process is still missing. Here, we used three-dimensional video microscopy and single-particle tracking approaches to dynamically dissect both cell-free and cell-associated viral capture by living mDCs. We show that cell-free virus capture by mDCs operates through three sequential phases: virus binding through specific determinants expressed in the viral particle, polarized or directional movements toward concrete regions of the cell membrane and virus accumulation in a sac-like structure where trapped viral particles display a hindered diffusive behavior. Moreover, real-time imaging of cell-associated viral transfer to mDCs showed dynamics similar to those exhibited by cell-free virus endocytosis, leading to viral accumulation in compartments. However, cell-associated HIV type 1 transfer to mDCs was the most effective pathway, boosted through enhanced cellular contacts with infected CD4(+) T cells. Our results suggest that in lymphoid tissues, mDC viral uptake could occur either by encountering cell-free or cell-associated virus produced by infected cells, generating the perfect scenario to promote HIV pathogenesis and impact disease progression. © 2011 John Wiley & Sons A/S.

  11. Motion Capturing Emotions

    OpenAIRE

    Wood Karen; Cisneros Rosemary E.; Whatley Sarah

    2017-01-01

    The paper explores the activities conducted as part of WhoLoDancE: Whole Body Interaction Learning for Dance Education which is an EU-funded Horizon 2020 project. In particular, we discuss the motion capture sessions that took place at Motek, Amsterdam as well as the dancers’ experience of being captured and watching themselves or others as varying visual representations through the HoloLens. HoloLens is Microsoft’s first holographic computer that you wear as you would a pair of glasses. The ...

  12. Nuclear muon capture

    CERN Document Server

    Mukhopadhyay, N C

    1977-01-01

    Our present knowledge of the nuclear muon capture reactions is surveyed. Starting from the formation of the muonic atom, various phenomena, having a bearing on the nuclear capture, are reviewed. The nuclear reactions are then studied from two angles: to learn about the basic muon+nucleon weak interaction process, and to obtain new insights into the nuclear dynamics. Future experimental prospects with the newer generation muon 'factories' are critically examined. Possible modification of the muon+nucleon weak interaction in complex nuclei remains the most important open problem in this field. (380 refs).

  13. Proton capture resonance studies

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, G.E. (North Carolina State University, Raleigh, North Carolina 27695); Bilpuch, E.G. (Duke University, Durham, North Carolina 27708); Bybee, C.R. (North Carolina State University); Cox, J.M.; Fittje, L.M. (Tennessee Technological University, Cookeville, Tennessee 38505); Labonte, M.A.; Moore, E.F.; Shriner, J.D. (North Carolina State University); Shriner, J.F. Jr. (Tennessee Technological University); Vavrina, G.A. (North Carolina State University); Wallace, P.M. (Duke University); all authors also affiliated with the Triangle Universities Nuclear Laboratory, Durham, North Carolina 27708

    1997-02-01

    The fluctuation properties of quantum systems now are used as a signature of quantum chaos. The analyses require data of extremely high quality. The 29Si(p,γ) reaction is being used to establish a complete level scheme of 30P to study chaos and isospin breaking in this nuclide. Determination of the angular momentum J, the parity π, and the isospin T from resonance capture data is considered. Special emphasis is placed on the capture angular distributions and on a geometric description of these angular distributions. © 1997 American Institute of Physics.

  14. Revisiting video game ratings: Shift from content-centric to parent-centric approach

    Directory of Open Access Journals (Sweden)

    Jiow Hee Jhee

    2017-01-01

    Full Text Available The rapid adoption of video gaming among children has placed tremendous strain on parents' ability to manage their children's consumption. While parents refer to online video game rating (VGR) information to support their mediation efforts, there are many difficulties associated with such practice. This paper explores the popular VGR sites and highlights the inadequacy of VGRs in capturing parents' concerns, such as time displacement, social interactions, financial spending and various video game effects, beyond the widespread panic over content issues, which are subjective, ever-changing and often irrelevant. As such, this paper argues for a shift from a content-centric to a parent-centric approach in VGRs, one that captures the evolving nature of video gaming and supports parents, the main users of VGRs, in their management of their young video gaming children. This paper proposes a Video Games Repository for Parents to represent that shift.

  15. The Video Generation.

    Science.gov (United States)

    Provenzo, Eugene F., Jr.

    1992-01-01

    Video games are neither neutral nor harmless but represent very specific social and symbolic constructs. Research on the social content of today's video games reveals that sex bias and gender stereotyping are widely evident throughout the Nintendo games. Violence and aggression also pervade the great majority of the games. (MLF)

  16. SECRETS OF SONG VIDEO

    Directory of Open Access Journals (Sweden)

    Chernyshov Alexander V.

    2014-04-01

    Full Text Available The article focuses on the origins of the song video as a TV and Internet genre. In addition, it considers problems of screen-image creation depending on the musical form and the text of a song, in connection with relevant principles of accent and phraseological video editing and filming techniques, as well as with additional frames and sound elements.

  17. Reviews in instructional video

    NARCIS (Netherlands)

    van der Meij, Hans

    2017-01-01

    This study investigates the effectiveness of a video tutorial for software training whose construction was based on a combination of insights from multimedia learning and Demonstration-Based Training. In the videos, a model of task performance was enhanced with instructional features that were

  18. Videos - The National Guard

    Science.gov (United States)

    Featured videos include: On Every Front (2:17); Always Ready, Always There; National Guard Bureau Diversity and Inclusion (1:04); The ChalleNGe Ep. 5 [Graduation] (3:51).

  19. Video at Sea: Telling the Stories of the International Ocean Discovery Program

    Science.gov (United States)

    Wright, M.; Harned, D.

    2014-12-01

    Seagoing science expeditions offer an ideal opportunity for storytelling. While many disciplines involve fieldwork, few offer the adventure of spending two months at sea on a vessel hundreds of miles from shore with several dozen strangers from all over the world. As a medium, video is nearly ideal for telling these stories; it can capture the thrill of discovery, the agony of disappointment, the everyday details of life at sea, and everything in between. At the International Ocean Discovery Program (IODP, formerly the Integrated Ocean Drilling Program), we have used video as a storytelling medium for several years with great success. Over this timeframe, camera equipment and editing software have become cheaper and easier to use, while web sites such as YouTube and Vimeo have enabled sharing with just a few mouse clicks. When it comes to telling science stories with video, the barriers to entry have never been lower. As such, we have experimented with many different approaches and a wide range of styles. On one end of the spectrum, live "ship-to-shore" broadcasts with school groups - conducted with an iPad and free videoconferencing software such as Skype and Zoom - enable curious minds to engage directly with scientists in real-time. We have also contracted with professional videographers and animators who offer the experience, skill, and equipment needed to produce polished clips of the highest caliber. Amateur videographers (including some scientists looking to make use of their free time on board) have shot and produced impressive shorts using little more than a phone camera. In this talk, I will provide a brief overview of our efforts to connect with the public using video, including a look at how effective certain tactics are for connecting to specific audiences.

  20. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... Patient Webcasts / Rheumatoid Arthritis Educational Video Series. This series of five videos ... member of our patient care team. Managing Your Arthritis; Managing Chronic Pain and Depression ...

  1. Rheumatoid Arthritis Educational Video Series

    Science.gov (United States)

    ... Corner / Patient Webcasts / Rheumatoid Arthritis Educational Video Series. This series of five videos ... Your Arthritis; Managing Chronic Pain and Depression in Arthritis; Nutrition & Rheumatoid Arthritis; Arthritis and Health-related Quality of Life ...

  2. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... Embedded video for NEI YouTube Videos: Amblyopia ...

  3. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... NEI YouTube Videos: Amblyopia Embedded video ...

  4. A Survey of Advances in Vision-Based Human Motion Capture and Analysis

    DEFF Research Database (Denmark)

    Moeslund, Thomas B.; Hilton, Adrian; Krüger, Volker

    2006-01-01

    This survey reviews advances in human motion capture and analysis from 2000 to 2006, following a previous survey of papers up to 2000. Human motion capture continues to be an increasingly active research area in computer vision, with over 350 publications over this period. A number of significant...... actions and behavior. This survey reviews recent trends in video-based human capture and analysis, as well as discussing open problems for future research to achieve automatic visual analysis of human movement....

  5. Social video content delivery

    CERN Document Server

    Wang, Zhi; Zhu, Wenwu

    2016-01-01

    This brief presents new architecture and strategies for distribution of social video content. A primary framework for socially-aware video delivery and a thorough overview of the possible approaches is provided. The book identifies the unique characteristics of socially-aware video access and social content propagation, revealing the design and integration of individual modules that are aimed at enhancing user experience in the social network context. The change in video content generation, propagation, and consumption for online social networks, has significantly challenged the traditional video delivery paradigm. Given the massive amount of user-generated content shared in online social networks, users are now engaged as active participants in the social ecosystem rather than as passive receivers of media content. This revolution is being driven further by the deep penetration of 3G/4G wireless networks and smart mobile devices that are seamlessly integrated with online social networking and media-sharing s...

  6. Online video examination

    DEFF Research Database (Denmark)

    Qvist, Palle

    have large influence on their own teaching, learning and curriculum. The programme offers streamed videos in combination with other learning resources. It is a concept which offers video as pure presentation - video lectures - but also as an instructional tool which gives the students the possibility...... to construct their knowledge, collaboration and communication. In its first years the programme has used Skype video communication for collaboration and communication within and between groups, group members and their facilitators. Also exams have been mediated with the help of Skype and have for all students......, examiners and external examiners been a challenge and opportunity and has brought new knowledge and experience. This paper brings results from a questionnaire focusing on how the students experience the video examination....

  7. An efficient approach for video action classification based on 3d Zernike moments

    OpenAIRE

    Lassoued , Imen; Zagrouba , Ezzedine; Chahir , Youssef

    2011-01-01

    International audience; Action recognition in video and still images is one of the most challenging research topics in pattern recognition and computer vision. This paper proposes a new method for video action classification based on 3D Zernike moments. The latter aim to capture both structural and temporal information of a time-varying sequence. The originality of this approach consists in representing actions in video sequences by a three-dimensional shape obtained from different silhouett...

  8. Muon capture in deuterium

    Czech Academy of Sciences Publication Activity Database

    Ricci, P.; Truhlík, Emil; Mosconi, B.; Smejkal, J.

    2010-01-01

    Roč. 837, - (2010), s. 110-144 ISSN 0375-9474 Institutional research plan: CEZ:AV0Z10480505 Keywords : Negative muon capture * Deuteron * Potential models Subject RIV: BE - Theoretical Physics Impact factor: 1.986, year: 2010

  9. Capture Matrices Handbook

    Science.gov (United States)

    2014-04-01

    materials, the affinity ligand would need identification, as well as chemistries that graft the affinity ligand onto the surface of magnetic... ACTIVE CAPTURE MATRICES FOR THE DETECTION/IDENTIFICATION OF PHARMACEUTICALS... As shown in Figure 2.3-1a, the spectra exhibit similar baselines and the spectral peaks line up. Under these circumstances, the spectral...

  10. Capacitance for carbon capture

    International Nuclear Information System (INIS)

    Landskron, Kai

    2018-01-01

    Metal recycling: A sustainable, capacitance-assisted carbon capture and sequestration method (Supercapacitive Swing Adsorption) can turn scrap metal and CO2 into metal carbonates at an attractive energy cost. (© 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

  11. Capacitance for carbon capture

    Energy Technology Data Exchange (ETDEWEB)

    Landskron, Kai [Department of Chemistry, Lehigh University, Bethlehem, PA (United States)

    2018-03-26

    Metal recycling: A sustainable, capacitance-assisted carbon capture and sequestration method (Supercapacitive Swing Adsorption) can turn scrap metal and CO2 into metal carbonates at an attractive energy cost. (© 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

  12. Embedded enzymes catalyse capture

    Science.gov (United States)

    Kentish, Sandra

    2018-05-01

    Membrane technologies for carbon capture can offer economic and environmental advantages over conventional amine-based absorption, but can suffer from limited gas flux and selectivity to CO2. Now, a membrane based on enzymes embedded in hydrophilic pores is shown to exhibit combined flux and selectivity that challenges the state of the art.

  13. Attention Capture by Faces

    Science.gov (United States)

    Langton, Stephen R. H.; Law, Anna S.; Burton, A. Mike; Schweinberger, Stefan R.

    2008-01-01

    We report three experiments that investigate whether faces are capable of capturing attention when in competition with other non-face objects. In Experiment 1a participants took longer to decide that an array of objects contained a butterfly target when a face appeared as one of the distracting items than when the face did not appear in the array.…

  14. Video as a Metaphorical Eye: Images of Positionality, Pedagogy, and Practice

    Science.gov (United States)

    Hamilton, Erica R.

    2012-01-01

    Considered by many to be cost-effective and user-friendly, video technology is utilized in a multitude of contexts, including the university classroom. One purpose, although not often used, involves recording oneself teaching. This autoethnographic study focuses on the author's use of video and reflective practice in order to capture and examine…

  15. Design considerations for view interpolation in a 3D video coding framework

    NARCIS (Netherlands)

    Morvan, Y.; Farin, D.S.; With, de P.H.N.; Lagendijk, R.L.; Weber, Jos H.; Berg, van den A.F.M.

    2006-01-01

    A 3D video stream typically consists of a set of views capturing simultaneously the same scene. For an efficient transmission of the 3D video, a compression technique is required. In this paper, we describe a coding architecture and appropriate algorithms that enable the compression and

  16. Students' Acceptance of an Educational Videos Platform: A Study in A Portuguese University

    Science.gov (United States)

    Costa, Carolina; Alvelos, Helena; Teixeira, Leonor

    2018-01-01

    Educast is an educational video platform that simultaneously captures video and digital support materials. This paper presents a study on the acceptance of Educast by students, using the Technology Acceptance Model (TAM). The data were collected through a questionnaire applied to 54 students, whose results were analyzed using descriptive…

  17. Age vs. experience : evaluation of a video feedback intervention for newly licensed teen drivers.

    Science.gov (United States)

    2013-02-06

    This project examines the effects of age, experience, and video-based feedback on the rate and type of safety-relevant events captured on video event recorders in the vehicles of three groups of newly licensed young drivers: 1. 14.5- to 15.5-year...

  18. Access Control in Smart Homes by Android-Based Liveness Detection

    Directory of Open Access Journals (Sweden)

    Susanna Spinsante

    2017-05-01

    Full Text Available Technologies for personal safety and security play an increasing role in modern life, and are among the most valuable features expected to be supported by so-called smart homes. This paper presents a low-complexity Android application, designed for both mobile and embedded devices, that exploits the available on-board camera to easily capture two images of a subject and processes them to discriminate a true, live 3D face from a fake or printed 2D one. The liveness detection based on this discrimination provides anti-spoofing capabilities for secure access control based on face recognition. The limited computational complexity of the developed application makes it suitable for practical implementation in video-entry phones based on embedded Android platforms. The results obtained are satisfactory even under different ambient light conditions, and further improvements are being developed to deal with low-precision image acquisition.
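
    A rough sketch of how two captures of the same subject can drive such a 2D-versus-3D check is given below: matched features on a flat printout fit a single planar homography well, while a real face produces many off-plane matches. It uses OpenCV ORB features; the thresholds are illustrative assumptions, not values from the paper.

    ```python
    import cv2
    import numpy as np

    def looks_live(img_a, img_b, min_matches=30, reproj_thresh=3.0, outlier_ratio=0.25):
        """Return True if the two captures are more consistent with a 3D face
        than with a flat 2D reproduction (illustrative thresholds)."""
        orb = cv2.ORB_create(1000)
        kp_a, des_a = orb.detectAndCompute(img_a, None)
        kp_b, des_b = orb.detectAndCompute(img_b, None)
        if des_a is None or des_b is None:
            return False                    # not enough texture to decide
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
        if len(matches) < min_matches:
            return False
        pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
        pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
        _, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, reproj_thresh)
        if mask is None:
            return True                     # no planar model fits at all
        outliers = 1.0 - float(mask.ravel().mean())
        return outliers > outlier_ratio     # many off-plane points suggest depth
    ```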

  19. The Energy Expenditure of an Activity-Promoting Video Game compared to Sedentary Video Games and TV Watching

    Science.gov (United States)

    Mitre, Naim; Foster, Randal C; Lanningham-Foster, Lorraine; Levine, James A.

    2014-01-01

    Background Screen time continues to be a major contributing factor to sedentariness in children. There have been more creative approaches to increase physical activity over the last few years. One approach has been through the use of video games. In the present study we investigated the effect of television watching and the use of activity-promoting video games on energy expenditure and movement in lean and obese children. Our primary hypothesis was that energy expenditure and movement decrease while watching television, in lean and obese children. Our secondary hypothesis was that energy expenditure and movement increase when playing the same game on an activity-promoting video game console compared to a sedentary video game console, in lean and obese children. Methods Eleven boys (10 ± 1 year) and eight girls (9 ± 1 year) ranging in BMI from 14–29 kg/m2 (eleven lean and eight overweight or obese) were recruited. Energy expenditure and physical activity were measured while participants were watching television, playing a video game on a traditional sedentary video game console, and while playing the same video game on an activity-promoting video game (Nintendo Wii) console. Results Energy expenditure was significantly greater when children played the video game on the activity-promoting console than while watching television or playing the same game on a sedentary console (125.3 ± 38.2 Kcal/hr vs. 79.7 ± 20.1 and 79.4 ± 15.7 Kcal/hr). Energy expenditure while watching television and while playing video games on a sedentary video game console was not different. Activity-promoting video games have been shown to increase movement and can be an important tool for raising energy expenditure by 50% when compared to sedentary activities of daily living. PMID:22145458

  20. Video-based Mobile Mapping System Using Smartphones

    Science.gov (United States)

    Al-Hamad, A.; Moussa, A.; El-Sheimy, N.

    2014-11-01

    The last two decades have witnessed a huge growth in the demand for geo-spatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources for geo-spatial data. Mobile Mapping Systems (MMS) are one of the main sources of mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from a moving platform (e.g. cars, airplanes, etc.). Although MMS can provide accurate mapping solutions for different GIS applications, the cost of these systems is not affordable for many users, and only large-scale companies and institutions can benefit from them. The main objective of this paper is to propose a new low-cost MMS with reasonable accuracy using the sensors available in smartphones, including the video camera. Using the smartphone video camera instead of capturing individual images makes the system easier for non-professional users, since the system automatically extracts the highly overlapping frames from the video without user intervention. Results of the proposed system are presented which demonstrate the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared to the results obtained from using separately captured images instead of video.
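
    The automatic extraction of highly overlapping frames mentioned above can be approximated with a simple keyframe selector: keep a new frame once its feature overlap with the previous keyframe drops below a threshold. This sketch uses OpenCV ORB matching; the threshold is an assumption, and a real system would also check geometric consistency.

    ```python
    import cv2

    def select_keyframes(video_path, overlap_threshold=400):
        """Pick frames so that consecutive keyframes still overlap strongly but
        are not redundant, using ORB match counts as a crude overlap measure."""
        cap = cv2.VideoCapture(video_path)
        orb = cv2.ORB_create(1500)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        keyframes, last_des = [], None
        ok, frame = cap.read()
        while ok:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, des = orb.detectAndCompute(gray, None)
            if des is not None:
                if last_des is None:
                    keyframes.append(frame)          # always keep the first frame
                    last_des = des
                elif len(matcher.match(last_des, des)) < overlap_threshold:
                    keyframes.append(frame)          # overlap dropped enough
                    last_des = des
            ok, frame = cap.read()
        cap.release()
        return keyframes
    ```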

  1. A new video programme

    CERN Multimedia

    CERN video productions

    2011-01-01

    "What's new @ CERN?", a new monthly video programme, will be broadcast on the Monday of every month on webcast.cern.ch. Aimed at the general public, the programme will cover the latest CERN news, with guests and explanatory features. Tune in on Monday 3 October at 4 pm (CET) to see the programme in English, and then at 4:20 pm (CET) for the French version.   var flash_video_player=get_video_player_path(); insert_player_for_external('Video/Public/Movies/2011/CERN-MOVIE-2011-129/CERN-MOVIE-2011-129-0753-kbps-640x360-25-fps-audio-64-kbps-44-kHz-stereo', 'mms://mediastream.cern.ch/MediaArchive/Video/Public/Movies/2011/CERN-MOVIE-2011-129/CERN-MOVIE-2011-129-Multirate-200-to-753-kbps-640x360-25-fps.wmv', 'false', 480, 360, 'https://mediastream.cern.ch/MediaArchive/Video/Public/Movies/2011/CERN-MOVIE-2011-129/CERN-MOVIE-2011-129-posterframe-640x360-at-10-percent.jpg', '1383406', true, 'Video/Public/Movies/2011/CERN-MOVIE-2011-129/CERN-MOVIE-2011-129-0600-kbps-maxH-360-25-fps-...

  2. Gamifying Video Object Segmentation.

    Science.gov (United States)

    Spampinato, Concetto; Palazzo, Simone; Giordano, Daniela

    2017-10-01

    Video object segmentation can be considered as one of the most challenging computer vision problems. Indeed, so far, no existing solution is able to effectively deal with the peculiarities of real-world videos, especially in cases of articulated motion and object occlusions; limitations that appear more evident when we compare the performance of automated methods with the human one. However, manually segmenting objects in videos is largely impractical as it requires a lot of time and concentration. To address this problem, in this paper we propose an interactive video object segmentation method, which exploits, on one hand, the capability of humans to identify correctly objects in visual scenes, and on the other hand, the collective human brainpower to solve challenging and large-scale tasks. In particular, our method relies on a game with a purpose to collect human inputs on object locations, followed by an accurate segmentation phase achieved by optimizing an energy function encoding spatial and temporal constraints between object regions as well as human-provided location priors. Performance analysis carried out on complex video benchmarks, and exploiting data provided by over 60 users, demonstrated that our method shows a better trade-off between annotation times and segmentation accuracy than interactive video annotation and automated video object segmentation approaches.

  3. Hierarchical video summarization

    Science.gov (United States)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. In addition, we propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
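
    A simplified stand-in for the coarse-to-fine step is sketched below: given the ordered color histograms of the finest-level key-frames, repeatedly merge the most similar temporally adjacent pair until the desired summary size is reached. This respects the temporal-consecutiveness constraint but is not the paper's exact pairwise K-means procedure.

    ```python
    import numpy as np

    def coarsen_keyframes(histograms, keep):
        """Reduce an ordered list of key-frame color histograms to `keep` entries
        by repeatedly merging the most similar temporally adjacent pair."""
        groups = [[i] for i in range(len(histograms))]
        hists = [np.asarray(h, dtype=float) for h in histograms]
        while len(groups) > keep:
            dists = [np.linalg.norm(hists[i] - hists[i + 1])
                     for i in range(len(hists) - 1)]
            j = int(np.argmin(dists))                  # most similar adjacent pair
            hists[j] = (hists[j] + hists[j + 1]) / 2.0  # merge their histograms
            groups[j] = groups[j] + groups[j + 1]
            del hists[j + 1], groups[j + 1]
        # report the middle frame of each group as its representative key-frame
        return [g[len(g) // 2] for g in groups]
    ```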

  4. Medical video server construction.

    Science.gov (United States)

    Dańda, Jacek; Juszkiewicz, Krzysztof; Leszczuk, Mikołaj; Loziak, Krzysztof; Papir, Zdzisław; Sikora, Marek; Watza, Rafal

    2003-01-01

    The paper discusses two implementation options for a Digital Video Library, a repository used for archiving, accessing, and browsing of video medical records. Two crucial issues to be decided on are a video compression format and a video streaming platform. The paper presents numerous decision factors that have to be taken into account. The compression formats being compared are DICOM as a format representative for medical applications, both MPEGs, and several new formats targeted at IP networking. The comparison includes transmission rates supported, compression rates, and, not least, options for controlling a compression process. The second part of the paper presents the ISDN technique as a solution for provisioning of tele-consultation services between medical parties that are accessing resources uploaded to a digital video library. There are several backbone techniques (like corporate LANs/WANs, leased lines or even radio/satellite links) available; however, the availability of network resources for hospitals was the prevailing choice criterion pointing to ISDN solutions. Another way to provide access to the Digital Video Library is based on radio frequency domain solutions. The paper describes the possibility of using both wireless and cellular networks' data transmission services as a medical video server transport layer. For the cellular-network-based solution, two communication techniques are used: Circuit Switched Data and Packet Switched Data.

  5. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the markets and a new era of visual entertainment starts to get its shape. Since the true presence capturing is still a very new technology, the real technical solutions are just passed a prototyping phase and they vary a lot. Presence capture cameras have still the same quality issues to tackle as previous phases of digital imaging but also numerous new ones. This work concentrates to the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and especially technology which can synchronize output of several sources to a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range create the base of virtual reality stream quality. However, co-operation of several cameras brings a new dimension for these quality factors. Also new quality features can be validated. For example, how the camera streams should be stitched together with 3D experience without noticeable errors and how to validate the stitching? The work describes quality factors which are still valid in the presence capture cameras and defines the importance of those. Moreover, new challenges of presence capture cameras are investigated in image and video quality point of view. The work contains considerations how well current measurement methods can be used in presence capture cameras.

  6. User interface using a 3D model for video surveillance

    Science.gov (United States)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    These days fewer people, who must carry out their tasks quickly and precisely, are required in industrial surveillance and monitoring applications such as plant control or building security. Utilizing multimedia technology is a good approach to meet this need, and we previously developed Media Controller, which is designed for the applications and provides realtime recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarity to the video data and transmits it synchronously with the video stream. This enables the user interface to have such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function which makes surveillance video and 3D model work together using Media Controller with Java and Virtual Reality Modeling Language employed for multi-purpose and intranet use of 3D model.

  7. Music, videos and the risk for CERN

    CERN Multimedia

    Computer Security Team

    2012-01-01

    Do you like listening to music while you work? What about watching videos during your leisure time? Sure this is fun. Having your colleagues participate in this is even more fun. However, this fun is usually not free. There are artists and the music and film companies who earn their living from music and videos.   Thus, if you want to listen to music or watch films at CERN, make sure that you own the proper rights to do so (and that you have the agreement of your supervisor to do this during working hours). Note that these rights are personal: you usually do not have the right to share music or videos with third parties without violating copyrights. Therefore, making copyrighted music and videos public, or sharing music and videos as well as other copyrighted material, is forbidden at CERN and outside CERN. It violates the CERN Computing Rules and it contradicts CERN's Code of Conduct, which expects each of us to behave ethically and honestly, and to credit others for their c...

  8. Music, videos and the risk for CERN

    CERN Multimedia

    IT Department

    2010-01-01

    Do you like listening to music while working? What about watching videos during leisure time? Sure this is fun. Having your colleagues participating in this is even more fun. However, this fun is usually not free. There are music and film companies who earn their living from music and videos. Thus, if you want to listen to music or watch films at CERN, make sure that you own the proper rights to do so (and you have the agreement of your supervisor to do this during working hours). Note that these rights are personal: You usually do not have the right to share this music or these videos with third parties without violating copyrights. Therefore, making copyrighted music and videos public, or sharing music and video files as well as other copyrighted material, is forbidden at CERN --- and also outside CERN. It violates the CERN Computing Rules (http://cern.ch/ComputingRules) and it contradicts CERN's Code of Conduct (https://cern.ch/hr-info/codeofconduct.asp) which expects each of us to behave ethically and be ...

  9. On-board processing of video image sequences

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Chanrion, Olivier Arnaud; Forchhammer, Søren

    2008-01-01

    and evaluated. On-board there are six video cameras each capturing images of 1024×1024 pixels of 12 bpp at a frame rate of 15 fps, thus totalling 1080 Mbit/s. In comparison, the average downlink data rate for these images is projected to be 50 kbit/s. This calls for efficient on-board processing to select...
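
    The quoted raw rate follows directly from the camera parameters; a quick check (the 1080 figure corresponds to binary megabits):

    ```python
    cameras      = 6
    width        = 1024          # pixels
    height       = 1024          # pixels
    bit_depth    = 12            # bits per pixel
    fps          = 15            # frames per second
    downlink_bps = 50e3          # projected average downlink, bits per second

    raw_bps = cameras * width * height * bit_depth * fps
    print(raw_bps / 2**20)         # 1080.0 "binary" Mbit/s, as quoted above
    print(raw_bps / downlink_bps)  # ≈ 22,650x reduction needed on board
    ```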

  10. Video narratives: creativity and growth in teacher education

    NARCIS (Netherlands)

    Admiraal, W.; Boesenkool, F.; van Duin, G.; van de Kamp, M.-T.; Montane, M.; Salazar, J.

    2010-01-01

    Portfolios are widely used as instruments in initial teacher education in order to assess teacher competences. Video footage provides the opportunity to capture the richness and complexity of work practices. This means that not only a larger variety of teacher competences can be demonstrated, but

  11. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in
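
    As a small illustration of the multi-exposure acquisition route mentioned for traditional cameras, OpenCV's Mertens exposure fusion can merge a bracketed set of aligned LDR frames; this is one possible technique, not the specific pipeline of the book.

    ```python
    import cv2

    def fuse_exposures(ldr_frames):
        """Fuse a bracketed set of aligned LDR frames into one HDR-like frame
        using Mertens exposure fusion (tone mapping a true radiance map would
        be an alternative approach)."""
        merger = cv2.createMergeMertens()
        fused = merger.process(ldr_frames)   # float image with values in [0, 1]
        return (fused * 255).clip(0, 255).astype("uint8")

    # Usage: pass e.g. three frames of the same scene shot at -2, 0 and +2 EV.
    ```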

  12. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century.The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  13. Collaborative Video Sketching

    DEFF Research Database (Denmark)

    Henningsen, Birgitte; Gundersen, Peter Bukovica; Hautopp, Heidi

    2017-01-01

This paper introduces what we define as a collaborative video sketching process. This process links various sketching techniques with digital storytelling approaches and creative reflection processes in video productions. Traditionally, sketching has been used by designers across various... findings: 1) They are based on a collaborative approach. 2) The sketches act as a means of externalizing hypotheses and assumptions among the participants. Based on our analysis we present an overview of factors involved in collaborative video sketching and show how the factors relate to steps where the participants shape, record, review and edit their work, leading the participants to new insights about their work.

  14. Categorizing Video Game Audio

    DEFF Research Database (Denmark)

    Westerberg, Andreas Rytter; Schoenau-Fog, Henrik

    2015-01-01

This paper dives into the subject of video game audio and how it can be categorized in order to deliver a message to a player in the most precise way. A new categorization, with a new take on the diegetic spaces, can be used as a tool of inspiration for sound- and game-designers to rethink how they can use audio in video games. The conclusion of this study is that the current models' view of the diegetic spaces, used to categorize video game audio, is not fit to categorize all sounds. This can however possibly be changed through a rethinking of how the player interprets audio.

  15. Using video playbacks to study visual communication in a marine fish, Salaria pavo.

    Science.gov (United States)

    Gonçalves; Oliveira; Körner; Poschadel; Schlupp

    2000-09-01

    Video playbacks have been successfully applied to the study of visual communication in several groups of animals. However, this technique is controversial as video monitors are designed with the human visual system in mind. Differences between the visual capabilities of humans and other animals will lead to perceptually different interpretations of video images. We simultaneously presented males and females of the peacock blenny, Salaria pavo, with a live conspecific male and an online video image of the same individual. Video images failed to elicit appropriate responses. Males were aggressive towards the live male but not towards video images of the same male. Similarly, females courted only the live male and spent more time near this stimulus. In contrast, females of the gynogenetic poecilid Poecilia formosa showed an equal preference for a live and video image of a P. mexicana male, suggesting a response to live animals as strong as to video images. We discuss differences between the species that may explain their opposite reaction to video images. Copyright 2000 The Association for the Study of Animal Behaviour.

  16. First results on video meteors from Crete, Greece

    Science.gov (United States)

    Maravelias, G.

    2012-01-01

This work presents the first systematic video meteor observations from a forthcoming permanent station in Crete, Greece, operating as the first official node within the International Meteor Organization's Video Network. It consists of a Watec 902 H2 Ultimate camera equipped with a Panasonic WV-LA1208 (focal length 12mm, f/0.8) lens running MetRec. The system operated for 42 nights during 2011 (August 19-December 30, 2011) recording 1905 meteors. It is significantly more performant than a previous system used by the author during the Perseids 2010 (DMK camera 21AF04.AS by The Imaging Source, CCTV lens of focal length 2.8 mm, UFO Capture v2.22), which operated for 17 nights (August 4-22, 2010) recording 32 meteors. Differences - according to the author's experience - between the two software packages (MetRec, UFO Capture) are discussed along with a small guide to video meteor hardware.

  17. AMUC: Associated Motion capture User Categories.

    Science.gov (United States)

    Norman, Sally Jane; Lawson, Sian E M; Olivier, Patrick; Watson, Paul; Chan, Anita M-A; Dade-Robertson, Martyn; Dunphy, Paul; Green, Dave; Hiden, Hugo; Hook, Jonathan; Jackson, Daniel G

    2009-07-13

    The AMUC (Associated Motion capture User Categories) project consisted of building a prototype sketch retrieval client for exploring motion capture archives. High-dimensional datasets reflect the dynamic process of motion capture and comprise high-rate sampled data of a performer's joint angles; in response to multiple query criteria, these data can potentially yield different kinds of information. The AMUC prototype harnesses graphic input via an electronic tablet as a query mechanism, time and position signals obtained from the sketch being mapped to the properties of data streams stored in the motion capture repository. As well as proposing a pragmatic solution for exploring motion capture datasets, the project demonstrates the conceptual value of iterative prototyping in innovative interdisciplinary design. The AMUC team was composed of live performance practitioners and theorists conversant with a variety of movement techniques, bioengineers who recorded and processed motion data for integration into the retrieval tool, and computer scientists who designed and implemented the retrieval system and server architecture, scoped for Grid-based applications. Creative input on information system design and navigation, and digital image processing, underpinned implementation of the prototype, which has undergone preliminary trials with diverse users, allowing identification of rich potential development areas.

  18. Gadolinium neutron capture therapy

    International Nuclear Information System (INIS)

    Akine, Yasuyuki; Tokita, Nobuhiko; Tokuuye, Koichi; Satoh, Michinao; Churei, Hisahiko

    1993-01-01

    Gadolinium neutron capture therapy makes use of photons and electrons produced by nuclear reactions between gadolinium and lower-energy neutrons which occur within the tumor. The results of our studies have shown that its radiation effect is mostly of low LET and that the electrons are the significant component in the over-all dose. The dose from gadolinium neutron capture reactions does not seem to increase in proportion to the gadolinium concentration, and the Gd-157 concentration of about 100 μg/ml appears most optimal for therapy. Close contact between gadolinium and the cell is not necessarily required for cell inactivation, however, the effect of electrons released from intracellular gadolinium may be significant. Experimental studies on tumor-bearing mice and rabbits have shown that this is a very promising modality though further improvements in gadolinium delivery to tumors are needed. (author)

  19. Saving Lives through visual health communication: a multidisciplinary team approach.

    Science.gov (United States)

    Wressell, Adrian; Twaites, Heidi; Taylor, Stephen; Hartland, Dan; Gove-Humphries, Theo

    2014-10-01

    Saving Lives is a public health awareness charity that aims to educate the UK public about HIV and encourage testing for the virus. In May 2011 Saving Lives contacted the Medical Illustration department at Heart of England NHS Foundation Trust to discuss the idea of working together to develop a national HIV awareness campaign. A number of local sporting celebrities were invited to a studio photography session. All the sports stars and celebrities were photographed on a Mamiya 645 AFDII camera, with PhaseOne P30 + digital back, using prime 35 mm, 55 mm and 80 mm lenses. During the photography sessions, the team's film maker captured video footage of the subjects being photographed. Once the final avengers' graphical composition had been created, it was applied to the posters, billboards and public transport signs for the campaign. In the three-month period following the campaign launch, survey research was carried out, the initial data being recorded by a questionnaire which was provided to each of the 1800 patients attending the Heartlands Hospital sexual health clinic for HIV testing. Following the launch of the initial campaign, the Saving Lives team continues to produce material to assist in the promotion of the charity and its message. Its success has led to it becoming an on-going long-term project, and to date the team have photographed and filmed 33 sporting stars and visited numerous sporting institutes.

  20. Perceptual learning during action video game playing.

    Science.gov (United States)

    Green, C Shawn; Li, Renjie; Bavelier, Daphne

    2010-04-01

    Action video games have been shown to enhance behavioral performance on a wide variety of perceptual tasks, from those that require effective allocation of attentional resources across the visual scene, to those that demand the successful identification of fleetingly presented stimuli. Importantly, these effects have not only been shown in expert action video game players, but a causative link has been established between action video game play and enhanced processing through training studies. Although an account based solely on attention fails to capture the variety of enhancements observed after action game playing, a number of models of perceptual learning are consistent with the observed results, with behavioral modeling favoring the hypothesis that avid video game players are better able to form templates for, or extract the relevant statistics of, the task at hand. This may suggest that the neural site of learning is in areas where information is integrated and actions are selected; yet changes in low-level sensory areas cannot be ruled out. Copyright © 2009 Cognitive Science Society, Inc.

  1. Video spectroscopy with the RSpec Explorer

    Science.gov (United States)

    Lincoln, James

    2018-03-01

    The January 2018 issue of The Physics Teacher saw two articles that featured the RSpec Explorer as a supplementary lab apparatus. The RSpec Explorer provides live video spectrum analysis with which teachers can demonstrate how to investigate features of a diffracted light source. In this article I provide an introduction to the device as well as a variety of suggestions for using it, some of which go beyond its originally intended design.

  2. The Use of Smart Glasses for Surgical Video Streaming.

    Science.gov (United States)

    Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu

    2017-04-01

    Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.

  3. A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments

    Directory of Open Access Journals (Sweden)

    Ahmad Jalal

    2014-07-01

    Full Text Available Recent advancements in depth video sensors technologies have made human activity recognition (HAR realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.

  4. A depth video sensor-based life-logging human activity recognition system for elderly care in smart indoor environments.

    Science.gov (United States)

    Jalal, Ahmad; Kamal, Shaharyar; Kim, Daijin

    2014-07-02

    Recent advancements in depth video sensors technologies have made human activity recognition (HAR) realizable for elderly monitoring applications. Although conventional HAR utilizes RGB video sensors, HAR could be greatly improved with depth video sensors which produce depth or distance information. In this paper, a depth-based life logging HAR system is designed to recognize the daily activities of elderly people and turn these environments into an intelligent living space. Initially, a depth imaging sensor is used to capture depth silhouettes. Based on these silhouettes, human skeletons with joint information are produced which are further used for activity recognition and generating their life logs. The life-logging system is divided into two processes. Firstly, the training system includes data collection using a depth camera, feature extraction and training for each activity via Hidden Markov Models. Secondly, after training, the recognition engine starts to recognize the learned activities and produces life logs. The system was evaluated using life logging features against principal component and independent component features and achieved satisfactory recognition rates against the conventional approaches. Experiments conducted on the smart indoor activity datasets and the MSRDailyActivity3D dataset show promising results. The proposed system is directly applicable to any elderly monitoring system, such as monitoring healthcare problems for elderly people, or examining the indoor activities of people at home, office or hospital.
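
    The recognition stage described above trains one Hidden Markov Model per activity and classifies new sequences by likelihood. The following is a minimal sketch of that idea, not the authors' implementation; it assumes per-frame skeleton feature vectors have already been extracted from the depth silhouettes, and it uses the third-party hmmlearn package as a stand-in for whatever HMM code the authors used.

        # Minimal sketch of HMM-based activity recognition (illustrative only).
        # training_data maps an activity name to a list of (T_i, D) feature arrays,
        # one array per recorded sequence of that activity.
        import numpy as np
        from hmmlearn import hmm

        def train_activity_models(training_data, n_states=5):
            models = {}
            for activity, sequences in training_data.items():
                X = np.vstack(sequences)                  # concatenate all sequences
                lengths = [len(seq) for seq in sequences] # per-sequence lengths
                model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                                        n_iter=50, random_state=0)
                model.fit(X, lengths)
                models[activity] = model
            return models

        def recognize(models, sequence):
            # Pick the activity whose HMM assigns the highest log-likelihood.
            return max(models, key=lambda a: models[a].score(sequence))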

  5. Secure and Efficient Reactive Video Surveillance for Patient Monitoring

    Directory of Open Access Journals (Sweden)

    An Braeken

    2016-01-01

    Full Text Available Video surveillance is widely deployed for many kinds of monitoring applications in healthcare and assisted living systems. Security and privacy are two promising factors that align the quality and validity of video surveillance systems with the caliber of patient monitoring applications. In this paper, we propose a symmetric key-based security framework for the reactive video surveillance of patients based on the inputs coming from data measured by a wireless body area network attached to the human body. Only authenticated patients are able to activate the video cameras, whereas the patient and authorized people can consult the video data. User and location privacy are at each moment guaranteed for the patient. A tradeoff between security and quality of service is defined in order to ensure that the surveillance system gets activated even in emergency situations. In addition, the solution includes resistance against tampering with the device on the patient’s side.
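
    The abstract does not spell out the protocol, so the following is only a generic illustration of the kind of symmetric-key, authenticated camera activation it describes: a body-area-network gateway and the surveillance controller share a secret key, and the camera is activated only if the gateway's response to a fresh challenge verifies. All names and the key-provisioning step are hypothetical.

        # Generic symmetric-key activation sketch (not the paper's actual protocol).
        import hmac, hashlib, os

        SHARED_KEY = os.urandom(32)   # provisioned out of band in a real deployment

        def make_challenge():
            return os.urandom(16)

        def gateway_response(key, challenge, patient_id):
            return hmac.new(key, challenge + patient_id.encode(), hashlib.sha256).digest()

        def controller_verify(key, challenge, patient_id, response):
            expected = hmac.new(key, challenge + patient_id.encode(), hashlib.sha256).digest()
            return hmac.compare_digest(expected, response)

        challenge = make_challenge()
        resp = gateway_response(SHARED_KEY, challenge, "patient-42")
        print("activate camera:", controller_verify(SHARED_KEY, challenge, "patient-42", resp))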

  6. Parkinson's Disease Videos

    Medline Plus

    Full Text Available ... Nonmotor Symptoms of Parkinson's Disease Expert Briefings: Gait, Balance and Falls in Parkinson's Disease Expert Briefings: Coping ... Library is an extensive collection of books, fact sheets, videos, podcasts, and more. To get started, use ...

  7. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Facts What is acoustic neuroma? Diagnosing Symptoms Side Effects Keywords World Language Videos Questions to ask Choosing ... Surgery What is acoustic neuroma Diagnosing Symptoms Side effects Question To Ask Treatment Options Back Overview Observation ...

  8. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... 8211 info@ANAUSA.org About ANA Mission, Vision & Values Leadership & Staff Annual Reports Shop ANA Home Learn Educational Video English English Arabic Catalan Chinese (Simplified) Chinese ( ...

  9. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... Care Disease Types FAQ Handout for Patients and Families Is It Right for You How to Get ... For the Media For Clinicians For Policymakers For Family Caregivers Glossary Menu In this section Links Videos ...

  10. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... patient kit Treatment Options Overview Observation Radiation Surgery What is acoustic neuroma Diagnosing ... Back Community Patient Stories Share Your Story Video Stories Caregivers Milestones Gallery Submit Your Milestone Team ANA Volunteer ...

  11. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Support Groups Is a support group for me? Find a Group Upcoming Events Video Library Photo Gallery ... Support ANetwork Peer Support Program Community Connections Overview Find a Meeting Host a Meeting Volunteer Become a ...

  12. Photos and Videos

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Observers are required to take photos and/or videos of all incidentally caught sea turtles, marine mammals, seabirds and unusual or rare fish. On the first 3...

  13. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Mission, Vision & Values Shop ANA Leadership & Staff Annual Reports Acoustic Neuroma Association 600 Peachtree Parkway Suite 108 ... About ANA Mission, Vision & Values Leadership & Staff Annual Reports Shop ANA Home Learn Educational Video English English ...

  14. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... Search Search What Is It Definition Pediatric Palliative Care Disease Types FAQ Handout for Patients and Families ... For Family Caregivers Glossary Resources Browse our palliative care resources below: Links Videos Podcasts Webinars For the ...

  15. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... About ANA Mission, Vision & Values Shop ANA Leadership & Staff Annual Reports Acoustic Neuroma Association 600 Peachtree Parkway ... ANAUSA.org About ANA Mission, Vision & Values Leadership & Staff Annual Reports Shop ANA Home Learn Educational Video ...

  16. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... info@ANAUSA.org About ANA Mission, Vision & Values Leadership & Staff Annual Reports Shop ANA Home Learn Educational Video English English Arabic Catalan Chinese (Simplified) Chinese ( ...

  17. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... to your Doctor Find a Provider Meet the Team Blog Articles & Stories News Resources Links Videos Podcasts ... to your Doctor Find a Provider Meet the Team Blog Articles & Stories News Provider Directory Donate Resources ...

  18. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... Click to learn more... LOGIN CALENDAR DONATE NEWS Home Learn Back Learn about acoustic neuroma AN Facts ... Vision & Values Leadership & Staff Annual Reports Shop ANA Home Learn Educational Video English English Arabic Catalan Chinese ( ...

  19. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... the Media For Clinicians For Policymakers For Family Caregivers Glossary Menu In this section Links Videos Podcasts ... the Media For Clinicians For Policymakers For Family Caregivers Glossary Resources Browse our palliative care resources below: ...

  20. Augmented video viewing: transforming video consumption into an active experience

    OpenAIRE

    WIJNANTS, Maarten; Leën, Jeroen; QUAX, Peter; LAMOTTE, Wim

    2014-01-01

Traditional video productions fail to cater to the interactivity standards that the current generation of digitally native customers have become accustomed to. This paper therefore advertises the "activation" of the video consumption process. In particular, it proposes to enhance HTML5 video playback with interactive features in order to transform video viewing into a dynamic pastime. The objective is to enable the authoring of more captivating and rewarding video experiences for end-users. T...

  1. Deception Detection in Videos

    OpenAIRE

    Wu, Zhe; Singh, Bharat; Davis, Larry S.; Subrahmanian, V. S.

    2017-01-01

    We present a system for covert automated deception detection in real-life courtroom trial videos. We study the importance of different modalities like vision, audio and text for this task. On the vision side, our system uses classifiers trained on low level video features which predict human micro-expressions. We show that predictions of high-level micro-expressions can be used as features for deception prediction. Surprisingly, IDT (Improved Dense Trajectory) features which have been widely ...

  2. Industrial-Strength Streaming Video.

    Science.gov (United States)

    Avgerakis, George; Waring, Becky

    1997-01-01

Corporate training, financial services, entertainment, and education are among the top applications for streaming video servers, which send video to the desktop without downloading the whole file to the hard disk, saving time and eliminating copyright questions. Examines streaming video technology, lists ten tips for better net video, and ranks…

  3. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... search for current job openings visit HHS USAJobs Home >> NEI YouTube Videos >> NEI YouTube Videos: Amblyopia Listen NEI YouTube Videos YouTube Videos Home Age-Related Macular Degeneration Amblyopia Animations Blindness Cataract ...

  4. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... Amaurosis Low Vision Refractive Errors Retinopathy of Prematurity Science Spanish Videos Webinars NEI YouTube Videos: Amblyopia Embedded video for NEI YouTube Videos: Amblyopia NEI Home Contact Us A-Z Site Map NEI on Social Media Information in Spanish (Información en español) Website, ...

  5. A Framework for Video Modeling

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, Willem

    In recent years, research in video databases has increased greatly, but relatively little work has been done in the area of semantic content-based retrieval. In this paper, we present a framework for video modelling with emphasis on semantic content of video data. The video data model presented

  6. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... search for current job openings visit HHS USAJobs Home » NEI YouTube Videos » NEI YouTube Videos: Amblyopia Listen NEI YouTube Videos YouTube Videos Home Age-Related Macular Degeneration Amblyopia Animations Blindness Cataract ...

  7. Max Weber Visits America: A Review of the Video

    OpenAIRE

    Michael Wise

    2006-01-01

The North Carolina Sociological Society is proud to announce the long-awaited video of Max Weber's trip to North Carolina as retold by two of his cousins. Max Weber made a trip to visit relatives in Mount Airy, North Carolina, in 1904. This 2004 narrative by Larry Keeter and Stephen Hall is the story of locating and interviewing two living eyewitnesses (1976) to Max Weber's trip. The video includes information about Weber's contributions to modern sociology. Downloadable files are provided...

  8. Robust video object cosegmentation.

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Li, Xuelong; Porikli, Fatih

    2015-10-01

    With ever-increasing volumes of video data, automatic extraction of salient object regions became even more significant for visual analytic solutions. This surge has also opened up opportunities for taking advantage of collective cues encapsulated in multiple videos in a cooperative manner. However, it also brings up major challenges, such as handling of drastic appearance, motion pattern, and pose variations, of foreground objects as well as indiscriminate backgrounds. Here, we present a cosegmentation framework to discover and segment out common object regions across multiple frames and multiple videos in a joint fashion. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity into an energy optimization framework that does not make restrictive assumptions on foreground appearance and motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor to integrate across-video correspondence from the conventional SIFT-flow into interframe motion flow from optical flow. This novel spatio-temporal SIFT flow generates reliable estimations of common foregrounds over the entire video data set. Experimental results show that our method outperforms the state-of-the-art on a new extensive data set (ViCoSeg).

  9. Home Video Telemetry vs inpatient telemetry: A comparative study looking at video quality

    Directory of Open Access Journals (Sweden)

    Sutapa Biswas

Full Text Available Objective: To compare the quality of home video recording with inpatient telemetry (IPT) to evaluate our current Home Video Telemetry (HVT) practice. Method: To assess our HVT practice, a retrospective comparison of the video quality against IPT was conducted with the latter as the gold standard. A pilot study had been conducted in 2008 on 5 patients. Patients (n = 28) were included in each group over a period of one year. The data was collected from referral spreadsheets, King’s EPR and telemetry archive. Scoring of the events captured was by consensus using two scorers. The variables compared included: visibility of the body part of interest, visibility of eyes, time of event, illumination, contrast, sound quality and picture clarity when amplified to 200%. Statistical evaluation was carried out using Shapiro–Wilk and Chi-square tests. The P-value of ⩽0.05 was considered statistically significant. Results: Significant differences were demonstrated in lighting and contrast between the two groups (HVT performed better in both). Amplified picture quality was slightly better in the HVT group. Conclusion: Video quality of HVT is comparable to IPT, even surpassing IPT in certain aspects such as the level of illumination and contrast. Results were reconfirmed in a larger sample of patients with more variables. Significance: Despite the user and environmental variability in HVT, it looks promising and can be seriously considered as a preferable alternative for patients who may require investigation at locations remote from an EEG laboratory. Keywords: Home Video Telemetry, EEG, Home video monitoring, Video quality
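
    The abstract names a Shapiro–Wilk normality check and a chi-square test on the compared variables. Purely as an illustration of how such a comparison can be run, the sketch below uses SciPy with made-up 1-to-5 ratings; the numbers are not the study's data.

        # Illustrative statistical comparison (Shapiro-Wilk + chi-square); data are invented.
        import numpy as np
        from scipy import stats

        hvt_illumination = np.array([4, 5, 4, 5, 3, 5, 4, 4, 5, 4])   # hypothetical ratings
        ipt_illumination = np.array([3, 3, 4, 2, 3, 4, 3, 2, 3, 3])

        # Normality check on each group
        print(stats.shapiro(hvt_illumination))
        print(stats.shapiro(ipt_illumination))

        # Chi-square test on a 2 x k contingency table of rating counts
        ratings = sorted(set(hvt_illumination.tolist()) | set(ipt_illumination.tolist()))
        table = np.array([
            [int(np.sum(hvt_illumination == r)) for r in ratings],
            [int(np.sum(ipt_illumination == r)) for r in ratings],
        ])
        chi2, p, dof, expected = stats.chi2_contingency(table)
        print("chi2 =", chi2, "p =", p)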

  10. Robust Adaptable Video Copy Detection

    DEFF Research Database (Denmark)

    Assent, Ira; Kremer, Hardy

    2009-01-01

Video copy detection should be capable of identifying video copies subject to alterations e.g. in video contrast or frame rates. We propose a video copy detection scheme that allows for adaptable detection of videos that are altered temporally (e.g. frame rate change) and/or visually (e.g. change in contrast). Our query processing combines filtering and indexing structures for efficient multistep computation of video copies under this model. We show that our model successfully identifies altered video copies and does so more reliably than existing models.

  11. Live Ultra-High Definition from the International Space Station

    Science.gov (United States)

    Grubbs, Rodney; George, Sandy

    2017-01-01

    The first ever live downlink of Ultra-High Definition (UHD) video from the International Space Station (ISS) was the highlight of a 'Super Session' at the National Association of Broadcasters (NAB) in April 2017. The Ultra-High Definition video downlink from the ISS all the way to the Las Vegas Convention Center required considerable planning, pushed the limits of conventional video distribution from a space-craft, and was the first use of High Efficiency Video Coding (HEVC) from a space-craft. The live event at NAB will serve as a pathfinder for more routine downlinks of UHD as well as use of HEVC for conventional HD downlinks to save bandwidth. HEVC may also enable live Virtual Reality video downlinks from the ISS. This paper will describe the overall work flow and routing of the UHD video, how audio was synchronized even though the video and audio were received many seconds apart from each other, and how the demonstration paves the way for not only more efficient video distribution from the ISS, but also serves as a pathfinder for more complex video distribution from deep space. The paper will also describe how a 'live' event was staged when the UHD coming from the ISS had a latency of 10+ seconds. Finally, the paper will discuss how NASA is leveraging commercial technologies for use on-orbit vs. creating technology as was required during the Apollo Moon Program and early space age.

  12. Study of Temporal Effects on Subjective Video Quality of Experience.

    Science.gov (United States)

    Bampis, Christos George; Zhi Li; Moorthy, Anush Krishna; Katsavounidis, Ioannis; Aaron, Anne; Bovik, Alan Conrad

    2017-11-01

    HTTP adaptive streaming is being increasingly deployed by network content providers, such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE). We have recently created a new video quality database, which simulates a typical video streaming application, using long video sequences and interesting Netflix content. Going beyond previous efforts, the new database contains highly diverse and contemporary content, and it includes the subjective opinions of a sizable number of human subjects regarding the effects on QoE of both rebuffering and compression distortions. We observed that rebuffering is always obvious and unpleasant to subjects, while bitrate changes may be less obvious due to content-related dependencies. Transient bitrate drops were preferable over rebuffering only on low complexity video content, while consistently low bitrates were poorly tolerated. We evaluated different objective video quality assessment algorithms on our database and found that objective video quality models are unreliable for QoE prediction on videos suffering from both rebuffering events and bitrate changes. This implies the need for more general QoE models that take into account objective quality models, rebuffering-aware information, and memory. The publicly available video content as well as metadata for all of the videos in the new database can be found at http://live.ece.utexas.edu/research/LIVE_NFLXStudy/nflx_index.html.

  13. Healthy Living

    Science.gov (United States)

    ... Health Menu Topics Environment & Health Healthy Living Pollution Reduce, Reuse, Recycle Science – How It Works The Natural World Games ... Lessons Topics Expand Environment & Health Healthy Living Pollution Reduce, Reuse, Recycle Science – How It Works The Natural World Games ...

  14. An Aerial Video Stabilization Method Based on SURF Feature

    Directory of Open Access Journals (Sweden)

    Wu Hao

    2016-01-01

Full Text Available The video captured by a Micro Aerial Vehicle is often degraded due to unexpected random trembling and jitter caused by wind and the shake of the aerial platform. An approach for stabilizing the aerial video based on SURF features and a Kalman filter is proposed. SURF feature points are extracted in each frame, and the feature points between adjacent frames are matched using the Fast Library for Approximate Nearest Neighbors search method. Then the Random Sampling Consensus matching algorithm and the Least Squares Method are used to remove mismatched point pairs and estimate the transformation between the adjacent images. Finally, a Kalman filter is applied to smooth the motion parameters and separate Intentional Motion from Unwanted Motion to stabilize the aerial video. Experimental results show that the approach can stabilize aerial video efficiently with high accuracy, and it is robust to the translation, rotation and zooming motion of the camera.
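
    A stabilization pipeline of this shape (feature matching, robust inter-frame transform estimation, trajectory smoothing) can be prototyped in OpenCV. The sketch below is only an approximation of what the abstract describes, with two substitutions: ORB stands in for SURF (SURF requires the non-free opencv-contrib build) and a moving average stands in for the Kalman filter; the input file name is hypothetical.

        # Approximate stabilization sketch (ORB instead of SURF, moving average instead of Kalman).
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("aerial.mp4")          # hypothetical input
        orb = cv2.ORB_create(1000)
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        transforms = []                                # per-frame (dx, dy, d_angle)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            kp1, des1 = orb.detectAndCompute(prev_gray, None)
            kp2, des2 = orb.detectAndCompute(gray, None)
            matches = bf.match(des1, des2)
            pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
            pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
            # RANSAC-based estimate of the inter-frame similarity transform
            M, inliers = cv2.estimateAffinePartial2D(pts1, pts2, method=cv2.RANSAC)
            transforms.append((M[0, 2], M[1, 2], np.arctan2(M[1, 0], M[0, 0])))
            prev_gray = gray

        # Smooth the accumulated camera trajectory; the smoothed-minus-raw difference
        # gives the per-frame correction one would apply with cv2.warpAffine.
        traj = np.cumsum(np.array(transforms), axis=0)
        kernel = np.ones(15) / 15.0
        smoothed = np.vstack([np.convolve(traj[:, i], kernel, mode="same") for i in range(3)]).T
        correction = smoothed - traj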

  15. Joint Rendering and Segmentation of Free-Viewpoint Video

    Directory of Open Access Journals (Sweden)

    Ishii Masato

    2010-01-01

    Full Text Available Abstract This paper presents a method that jointly performs synthesis and object segmentation of free-viewpoint video using multiview video as the input. This method is designed to achieve robust segmentation from online video input without per-frame user interaction and precomputations. This method shares a calculation process between the synthesis and segmentation steps; the matching costs calculated through the synthesis step are adaptively fused with other cues depending on the reliability in the segmentation step. Since the segmentation is performed for arbitrary viewpoints directly, the extracted object can be superimposed onto another 3D scene with geometric consistency. We can observe that the object and new background move naturally along with the viewpoint change as if they existed together in the same space. In the experiments, our method can process online video input captured by a 25-camera array and show the result image at 4.55 fps.

  16. Max Weber Visits America: A Review of the Video

    Directory of Open Access Journals (Sweden)

    Michael Wise

    2006-11-01

Full Text Available The North Carolina Sociological Society is proud to announce the long-awaited video of Max Weber's trip to North Carolina as retold by two of his cousins. Max Weber made a trip to visit relatives in Mount Airy, North Carolina, in 1904. This 2004 narrative by Larry Keeter and Stephen Hall is the story of locating and interviewing two living eyewitnesses (1976) to Max Weber's trip. The video includes information about Weber's contributions to modern sociology. Downloadable files are provided using the .mp4 format. The video should appeal to students and professors interested in Max Weber. It can be included in courses ranging from introductory sociology to theory.

  17. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... quality of life for people living with congestive heart failure Living well with serious illness: Barbara and Laren’s pancreatic cancer story Living well with serious illness: Gregory’s lung cancer story Living with Kidney Disease: Pain and Itch, and the Role of Palliative ...

  18. Assisted Living

    Science.gov (United States)

    ... it, too. Back to top What is the Cost for Assisted Living? Although assisted living costs less than nursing home ... Primarily, older persons or their families pay the cost of assisted living. Some health and long-term care insurance policies ...

  19. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    Science.gov (United States)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames resulting in a seamless video mosaic. All this processing takes a great deal of resources from the CPU, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft; one video comes from Infrared (IR) and Electro-Optical (EO) cameras. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture of 30 frames per second is feasible.
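
    The SIFT-matching, homography-estimation and warping steps named above can be sketched for two frames with plain CPU OpenCV (no GPU acceleration, and only a naive overwrite in place of real blending); the frame file names are hypothetical.

        # Minimal two-frame mosaicking sketch (CPU OpenCV only; not the paper's GPU code).
        import cv2
        import numpy as np

        img1 = cv2.imread("frame_000.png")   # earlier frame (hypothetical file)
        img2 = cv2.imread("frame_030.png")   # later frame (hypothetical file)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
        kp2, des2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

        # FLANN matching with Lowe's ratio test
        flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
        good = [m for m, n in flann.knnMatch(des1, des2, k=2) if m.distance < 0.7 * n.distance]

        pts_img1 = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        pts_img2 = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # Homography mapping img2 coordinates into img1's frame, estimated with RANSAC
        H, mask = cv2.findHomography(pts_img2, pts_img1, cv2.RANSAC, 5.0)

        h, w = img1.shape[:2]
        canvas = cv2.warpPerspective(img2, H, (w * 2, h))   # crude canvas size
        canvas[0:h, 0:w] = img1                             # naive overwrite instead of blending
        cv2.imwrite("mosaic.png", canvas)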

  20. The Living Textbook of Nuclear Chemistry

    International Nuclear Information System (INIS)

    Loveland, W.; Gallant, A.; Joiner, C.

    2005-01-01

    The Living Textbook of Nuclear Chemistry (http://livingtextbook.orst.edu) is a website, which is a collection of supplemental materials for the teaching of nuclear and radiochemistry. It contains audio-video presentations of the history of nuclear chemistry, tutorial lectures by recognized experts on advanced topics in nuclear and radiochemistry, links to data compilations, articles, and monographs, an audio course on radiochemistry, on-line editions of textbooks, training videos, etc. All content has been refereed. (author)

  1. Video profile monitor diagnostic system for GTA

    International Nuclear Information System (INIS)

Sandoval, D.P.; Garcia, R.C.; Gilpatrick, J.D.; Johnson, K.F.; Shinas, M.A.; Wright, R.; Yuan, V.; Zander, M.E.

    1992-01-01

    This paper describes a video diagnostic system used to measure the beam profile and position of the Ground Test Accelerator 2.5 MeV H - ion beam as it exits the intermediate matching section. Inelastic collisions between H - ions and residual nitrogen in the vacuum chamber cause the nitrogen to fluoresce. The resulting light is captured through transport optics by an intensified CCD camera and is digitized. Real-time beam profile images are displayed and stored for detailed analysis. Analyzed data showing resolutions for both position and profile measurements will also be presented. (Author) 5 refs., 7 figs

  2. Video profile monitor diagnostic system for GTA

    International Nuclear Information System (INIS)

    Sandoval, D.P.; Garcia, R.C.; Gilpatrick, J.D.; Johnson, K.F.; Shinas, M.A.; Wright, R.; Yuan, V.; Zander, M.E.

    1992-01-01

This paper describes a video diagnostic system used to measure the beam profile and position of the Ground Test Accelerator 2.5-MeV H - ion beam as it exits the intermediate matching section. Inelastic collisions between H - ions and residual nitrogen in the vacuum chamber cause the nitrogen to fluoresce. The resulting light is captured through transport optics by an intensified CCD camera and is digitized. Real-time beam-profile images are displayed and stored for detailed analysis. Analyzed data showing resolutions for both position and profile measurements will also be presented

  3. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    Science.gov (United States)

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

    Super resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-Image SR deals with each video frame independently, and ignores intrinsic temporal dependency of video frames which actually plays a very important role in video SR. Multi-Frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly-used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections. So they can greatly reduce the large number of network parameters and well model the temporal dependency in a finer level, i.e., patch-based rather than frame-based, and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminate spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With the powerful temporal dependency modeling, our model can super resolve videos with complex motions and achieve well performance.
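
    The central idea above is replacing an RNN's dense recurrent connections with weight-shared convolutions so that temporal dependency is modeled at the patch level. The toy PyTorch module below illustrates only that idea; it is not the authors' BRCN architecture (no bidirectional pass, no 3D feedforward convolutions), and the layer sizes are arbitrary.

        # Toy weight-shared convolutional recurrence (illustrative, not the BRCN of the paper).
        import torch
        import torch.nn as nn

        class ConvRecurrentSR(nn.Module):
            def __init__(self, channels=1, hidden=32):
                super().__init__()
                self.feedforward = nn.Conv2d(channels, hidden, 3, padding=1)  # input -> hidden
                self.recurrent = nn.Conv2d(hidden, hidden, 3, padding=1)      # hidden_{t-1} -> hidden_t
                self.output = nn.Conv2d(hidden, channels, 3, padding=1)       # hidden -> residual detail

            def forward(self, frames):                    # frames: (T, C, H, W), already upscaled
                h = torch.zeros(1, self.recurrent.out_channels, *frames.shape[-2:])
                outputs = []
                for t in range(frames.shape[0]):
                    x = frames[t:t + 1]                   # keep a batch dimension
                    h = torch.relu(self.feedforward(x) + self.recurrent(h))
                    outputs.append(x + self.output(h))    # predict a residual detail layer
                return torch.cat(outputs, dim=0)

        video = torch.rand(8, 1, 64, 64)                  # 8 bicubically upscaled frames
        print(ConvRecurrentSR()(video).shape)             # torch.Size([8, 1, 64, 64])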

  4. Consumer-based technology for distribution of surgical videos for objective evaluation.

    Science.gov (United States)

    Gonzalez, Ray; Martinez, Jose M; Lo Menzo, Emanuele; Iglesias, Alberto R; Ro, Charles Y; Madan, Atul K

    2012-08-01

    The Global Operative Assessment of Laparoscopic Skill (GOALS) is one validated metric utilized to grade laparoscopic skills and has been utilized to score recorded operative videos. To facilitate easier viewing of these recorded videos, we are developing novel techniques to enable surgeons to view these videos. The objective of this study is to determine the feasibility of utilizing widespread current consumer-based technology to assist in distributing appropriate videos for objective evaluation. Videos from residents were recorded via a direct connection from the camera processor via an S-video output via a cable into a hub to connect to a standard laptop computer via a universal serial bus (USB) port. A standard consumer-based video editing program was utilized to capture the video and record in appropriate format. We utilized mp4 format, and depending on the size of the file, the videos were scaled down (compressed), their format changed (using a standard video editing program), or sliced into multiple videos. Standard available consumer-based programs were utilized to convert the video into a more appropriate format for handheld personal digital assistants. In addition, the videos were uploaded to a social networking website and video sharing websites. Recorded cases of laparoscopic cholecystectomy in a porcine model were utilized. Compression was required for all formats. All formats were accessed from home computers, work computers, and iPhones without difficulty. Qualitative analyses by four surgeons demonstrated appropriate quality to grade for these formats. Our preliminary results show promise that, utilizing consumer-based technology, videos can be easily distributed to surgeons to grade via GOALS via various methods. Easy accessibility may help make evaluation of resident videos less complicated and cumbersome.
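
    The authors describe capturing via an S-video connection and then compressing, rescaling or format-converting the recordings with an off-the-shelf consumer editor. Purely as an illustration, the same re-encode and down-scale step could instead be scripted; the sketch below drives ffmpeg from Python, and the file names and target settings are arbitrary examples rather than anything taken from the study.

        # Illustrative re-encode/down-scale step scripted with ffmpeg (not the authors' workflow).
        import subprocess

        def compress_for_review(src, dst, height=480, crf=28):
            """Re-encode a recorded case to H.264 MP4, scaled down for easy sharing."""
            subprocess.run([
                "ffmpeg", "-y", "-i", src,
                "-vf", f"scale=-2:{height}",      # keep aspect ratio, even-width output
                "-c:v", "libx264", "-crf", str(crf),
                "-an",                            # drop audio
                dst,
            ], check=True)

        compress_for_review("lap_chole_case01.avi", "lap_chole_case01_review.mp4")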

  5. Living Technology

    DEFF Research Database (Denmark)

    2010-01-01

    This book is aimed at anyone who is interested in learning more about living technology, whether coming from business, the government, policy centers, academia, or anywhere else. Its purpose is to help people to learn what living technology is, what it might develop into, and how it might impact...... our lives. The phrase 'living technology' was coined to refer to technology that is alive as well as technology that is useful because it shares the fundamental properties of living systems. In particular, the invention of this phrase was called for to describe the trend of our technology becoming...... increasingly life-like or literally alive. Still, the phrase has different interpretations depending on how one views what life is. This book presents nineteen perspectives on living technology. Taken together, the interviews convey the collective wisdom on living technology's power and promise, as well as its...

  6. Motion Capturing Emotions

    Directory of Open Access Journals (Sweden)

    Wood Karen

    2017-12-01

Full Text Available The paper explores the activities conducted as part of WhoLoDancE: Whole Body Interaction Learning for Dance Education which is an EU-funded Horizon 2020 project. In particular, we discuss the motion capture sessions that took place at Motek, Amsterdam as well as the dancers’ experience of being captured and watching themselves or others as varying visual representations through the HoloLens. HoloLens is Microsoft’s first holographic computer that you wear as you would a pair of glasses. The study embraced four dance genres: Ballet, Contemporary, Flamenco and Greek Folk dance. We are specifically interested in the kinesthetic and emotional engagement with the moving body and what new corporeal awareness may be experienced. Positioning the moving, dancing body as fundamental to technological advancements, we discuss the importance of considering the dancer’s experience in the real and virtual space. Some of the artists involved in the project have offered their experiences, which are included, and they form the basis of the discussion. In addition, we discuss the affect of immersive environments, how these environments expand reality and what effect (emotionally and otherwise) that has on the body. The research reveals insights into relationships between emotion, movement and technology and what new sensorial knowledge this evokes for the dancer.

  7. Synovectomy by Neutron capture

    International Nuclear Information System (INIS)

    Vega C, H.R.; Torres M, C.

    1998-01-01

Synovectomy by neutron capture is intended for the treatment of rheumatoid arthritis, an illness which at present has no definitive cure. This therapy requires a neutron source to irradiate the affected articulation. The energy spectrum and intensity of these neutrons are fundamental, since the neutrons induce capture reactions with Boron-10 inside the articulation, and the energy released by these reactions is transferred to the tissue that produces the synovial liquid, destroying it. In this work the neutron spectra obtained with spherical moderator packings containing a Pu-239/Be source at their center are presented. The calculations were performed with the Monte Carlo method. The moderators assayed were light water, heavy water, and combinations of the two. The spectra obtained, the average energy, the total number of neutrons per neutron emitted by the source, the thermal-neutron percentage and the dose equivalent allow us to suggest that the most adequate moderator packing is the one with a light-water thickness of 0.5 cm (radius 2 cm) and 24.5 cm of heavy water (radius 26.5 cm). (Author)

  8. Recent advances in multiview distributed video coding

    Science.gov (United States)

    Dufaux, Frederic; Ouaret, Mourad; Ebrahimi, Touradj

    2007-04-01

    We consider dense networks of surveillance cameras capturing overlapped images of the same scene from different viewing directions, such a scenario being referred to as multi-view. Data compression is paramount in such a system due to the large amount of captured data. In this paper, we propose a Multi-view Distributed Video Coding approach. It allows for low complexity / low power consumption at the encoder side, and the exploitation of inter-view correlation without communications among the cameras. We introduce a combination of temporal intra-view side information and homography inter-view side information. Simulation results show both the improvement of the side information, as well as a significant gain in terms of coding efficiency.

  9. Content-based video retrieval by example video clip

    Science.gov (United States)

    Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed

    1997-01-01

    This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information (`DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.
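
    The "DC+M" signature idea above (per-frame signatures built from block DC coefficients plus motion information, compared across clips) can be illustrated with a small NumPy sketch. This is only an illustration of the principle, not the authors' exact feature definition, and it approximates DC coefficients by 8x8 block means on decoded luminance frames.

        # Illustrative "DC + motion" clip signatures and a simple clip distance.
        import numpy as np

        def dc_signature(frame):
            """frame: (H, W) luminance array; returns per-8x8-block DC (mean) values."""
            h, w = frame.shape
            blocks = frame[: h - h % 8, : w - w % 8].reshape(h // 8, 8, w // 8, 8)
            return blocks.mean(axis=(1, 3)).ravel()

        def clip_signature(frames):
            sigs, prev = [], None
            for f in frames:
                motion = 0.0 if prev is None else float(np.mean(np.abs(f - prev)))
                sigs.append(np.concatenate([dc_signature(f), [motion]]))
                prev = f
            return np.array(sigs)

        def clip_distance(sig_a, sig_b):
            n = min(len(sig_a), len(sig_b))
            return float(np.mean(np.linalg.norm(sig_a[:n] - sig_b[:n], axis=1)))

        query = clip_signature(np.random.rand(10, 240, 320))       # stand-in decoded frames
        candidate = clip_signature(np.random.rand(10, 240, 320))
        print(clip_distance(query, candidate))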

  10. VideoSET: Video Summary Evaluation through Text

    OpenAIRE

    Yeung, Serena; Fathi, Alireza; Fei-Fei, Li

    2014-01-01

    In this paper we present VideoSET, a method for Video Summary Evaluation through Text that can evaluate how well a video summary is able to retain the semantic information contained in its original video. We observe that semantics is most easily expressed in words, and develop a text-based approach for the evaluation. Given a video summary, a text representation of the video summary is first generated, and an NLP-based metric is then used to measure its semantic distance to ground-truth text ...

  11. Human Motion Capture Data Tailored Transform Coding.

    Science.gov (United States)

    Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

    2015-07-01

    Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish themselves from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
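
    The pipeline above (clip the mocap stream, build a data-dependent orthogonal basis per clip, quantize and entropy-code the coefficients) can be illustrated with an SVD-based transform in NumPy. This sketch shows only the principle, not the paper's exact codec, and it omits the entropy-coding stage.

        # Illustrative data-dependent transform coding of a mocap clip (no entropy coding).
        import numpy as np

        def encode_clip(clip, n_basis=8, step=0.01):
            """clip: (frames, channels) array of joint angles."""
            mean = clip.mean(axis=0)
            centered = clip - mean
            _, _, vt = np.linalg.svd(centered, full_matrices=False)  # data-dependent basis
            basis = vt[:n_basis]                                     # (n_basis, channels)
            coeffs = centered @ basis.T                              # transform coefficients
            q = np.round(coeffs / step).astype(np.int32)             # uniform quantization
            return mean, basis, q, step

        def decode_clip(mean, basis, q, step):
            return (q * step) @ basis + mean

        clip = np.cumsum(np.random.randn(120, 60) * 0.01, axis=0)    # smooth fake mocap clip
        mean, basis, q, step = encode_clip(clip)
        recon = decode_clip(mean, basis, q, step)
        print("RMS error:", np.sqrt(np.mean((clip - recon) ** 2)))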

  12. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yasaman Samei

    2008-08-01

    Full Text Available Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS and wireless communications and also the availability of CMOS cameras, microphones and small-scale array sensors, which may ubiquitously capture multimedia content from the field, have fostered the development of low-cost limited resources Wireless Video-based Sensor Networks (WVSN. With regards to the constraints of videobased sensor nodes and wireless sensor networks, a supporting video stream is not easy to implement with the present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN called Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture. This architecture influences three layers of communication protocol stack and considers wireless video sensor nodes constraints like limited process and energy resources while video quality is preserved in the receiver side. Application, transport, and network layers are the layers in which the compression protocol, transport protocol, and routing protocol are proposed respectively, also a dropping scheme is presented in network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  13. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks.

    Science.gov (United States)

    Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman

    2008-08-04

    Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications and also the availability of CMOS cameras, microphones and small-scale array sensors, which may ubiquitously capture multimedia content from the field, have fostered the development of low-cost limited resources Wireless Video-based Sensor Networks (WVSN). With regards to the constraints of videobased sensor nodes and wireless sensor networks, a supporting video stream is not easy to implement with the present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN called Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of communication protocol stack and considers wireless video sensor nodes constraints like limited process and energy resources while video quality is preserved in the receiver side. Application, transport, and network layers are the layers in which the compression protocol, transport protocol, and routing protocol are proposed respectively, also a dropping scheme is presented in network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  14. YouTube Live and Twitch: A Tour of User-Generated Live Streaming Systems

    OpenAIRE

    Pires , Karine; SIMON , Gwendal

    2015-01-01

    International audience; User-Generated live video streaming systems are services that allow anybody to broadcast a video stream over the Internet. These Over-The-Top services have recently gained popularity, in particular with e-sport, and can now be seen as competitors of the traditional cable TV. In this paper, we present a dataset for further works on these systems. This dataset contains data on the two main user-generated live streaming systems: Twitch and the live service of YouTube. We ...

  15. Capture and fission with DANCE and NEUANCE

    Energy Technology Data Exchange (ETDEWEB)

    Jandel, M.; Baramsai, B.; Bond, E.; Rusev, G.; Walker, C.; Bredeweg, T.A.; Chadwick, M.B.; Couture, A.; Fowler, M.M.; Hayes, A.; Kawano, T.; Mosby, S.; Stetcu, I.; Taddeucci, T.N.; Talou, P.; Ullmann, J.L.; Vieira, D.J.; Wilhelmy, J.B. [Los Alamos National Laboratory, Los Alamos, New Mexico (United States)

    2015-12-15

    A summary of the current and future experimental program at DANCE is presented. Measurements of neutron capture cross sections are planned for many actinide isotopes with the goal to reduce the present uncertainties in nuclear data libraries. Detailed studies of capture gamma rays in the neutron resonance region will be performed in order to derive correlated data on the de-excitation of the compound nucleus. New approaches on how to remove the DANCE detector response from experimental data and retain the correlations between the cascade gamma rays are presented. Studies on ²³⁵U are focused on quantifying the population of short-lived isomeric states in ²³⁶U after neutron capture. For this purpose, a new neutron detector array NEUANCE is under construction. It will be installed in the central cavity of the DANCE array and enable the highly efficient tagging of fission and capture events. In addition, developments of fission fragment detectors are also underway to expand DANCE capabilities to measurements of fully correlated data on fission observables. (orig.)

  16. Brains on video games.

    Science.gov (United States)

    Bavelier, Daphne; Green, C Shawn; Han, Doug Hyun; Renshaw, Perry F; Merzenich, Michael M; Gentile, Douglas A

    2011-11-18

    The popular press is replete with stories about the effects of video and computer games on the brain. Sensationalist headlines claiming that video games 'damage the brain' or 'boost brain power' do not do justice to the complexities and limitations of the studies involved, and create a confusing overall picture about the effects of gaming on the brain. Here, six experts in the field shed light on our current understanding of the positive and negative ways in which playing video games can affect cognition and behaviour, and explain how this knowledge can be harnessed for educational and rehabilitation purposes. As research in this area is still in its early days, the contributors of this Viewpoint also discuss several issues and challenges that should be addressed to move the field forward.

  17. 100 Million Views of Electronic Cigarette YouTube Videos and Counting: Quantification, Content Evaluation, and Engagement Levels of Videos.

    Science.gov (United States)

    Huang, Jidong; Kornfield, Rachel; Emery, Sherry L

    2016-03-18

    The video-sharing website, YouTube, has become an important avenue for product marketing, including tobacco products. It may also serve as an important medium for promoting electronic cigarettes, which have rapidly increased in popularity and are heavily marketed online. While a few studies have examined a limited subset of tobacco-related videos on YouTube, none has explored e-cigarette videos' overall presence on the platform. To quantify e-cigarette-related videos on YouTube, assess their content, and characterize levels of engagement with those videos. Understanding promotion and discussion of e-cigarettes on YouTube may help clarify the platform's impact on consumer attitudes and behaviors and inform regulations. Using an automated crawling procedure and keyword rules, e-cigarette-related videos posted on YouTube and their associated metadata were collected between July 1, 2012, and June 30, 2013. Metadata were analyzed to describe posting and viewing time trends, number of views, comments, and ratings. Metadata were content coded for mentions of health, safety, smoking cessation, promotional offers, Web addresses, product types, top-selling brands, or names of celebrity endorsers. As of June 30, 2013, approximately 28,000 videos related to e-cigarettes were captured. Videos were posted by approximately 10,000 unique YouTube accounts, viewed more than 100 million times, rated over 380,000 times, and commented on more than 280,000 times. More than 2200 new videos were being uploaded every month by June 2013. The top 1% of most-viewed videos accounted for 44% of total views. Text fields for the majority of videos mentioned websites (70.11%); many referenced health (13.63%), safety (10.12%), smoking cessation (9.22%), or top e-cigarette brands (33.39%). The number of e-cigarette-related YouTube videos was projected to exceed 65,000 by the end of 2014, with approximately 190 million views. YouTube is a major information-sharing platform for electronic cigarettes

  18. Robust automated knowledge capture.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications and, in particular, the individual characteristics that underlie adaptive thinking.

  19. Fragment capture device

    Science.gov (United States)

    Payne, Lloyd R.; Cole, David L.

    2010-03-30

    A fragment capture device for use in explosive containment. The device comprises an assembly of at least two rows of bars positioned to eliminate line-of-sight trajectories between the generation point of fragments and a surrounding containment vessel or asset. The device comprises an array of at least two rows of bars, wherein each row is staggered with respect to the adjacent row, and wherein a lateral dimension of each bar and a relative position of each bar in combination provides blockage of a straight-line passage of a solid fragment through the adjacent rows of bars, wherein a generation point of the solid fragment is located within a cavity at least partially enclosed by the array of bars.

  20. Capturing the Daylight Dividend

    Energy Technology Data Exchange (ETDEWEB)

    Peter Boyce; Claudia Hunter; Owen Howlett

    2006-04-30

    Capturing the Daylight Dividend conducted activities to build market demand for daylight as a means of improving indoor environmental quality, overcoming technological barriers to effective daylighting, and informing and assisting state and regional market transformation and resource acquisition program implementation efforts. The program clarified the benefits of daylight by examining whole building systems energy interactions between windows, lighting, heating, and air conditioning in daylit buildings, and daylighting's effect on the human circadian system and productivity. The project undertook work to advance photosensors, dimming systems, and ballasts, and provided technical training in specifying and operating daylighting controls in buildings. Future daylighting work is recommended in metric development, technology development, testing, training, education, and outreach.

  1. Are trees long-lived?

    Science.gov (United States)

    Kevin T. Smith

    2009-01-01

    Trees and tree care can capture the best of people's motivations and intentions. Trees are living memorials that help communities heal at sites of national tragedy, such as Oklahoma City and the World Trade Center. We mark the places of important historical events by the trees that grew nearby even if the original tree, such as the Charter Oak in Connecticut or...

  2. Holovideo: Real-time 3D range video encoding and decoding on GPU

    Science.gov (United States)

    Karpinsky, Nikolaus; Zhang, Song

    2012-02-01

    We present a 3D video-encoding technique called Holovideo that is capable of encoding high-resolution 3D videos into standard 2D videos, and then decoding the 2D videos back into 3D rapidly without significant loss of quality. Due to the nature of the algorithm, 2D video compression such as JPEG encoding with QuickTime Run Length Encoding (QTRLE) can be applied with little quality loss, resulting in an effective way to store 3D video at very small file sizes. We found that under a compression ratio of 134:1, Holovideo to OBJ file format, the 3D geometry quality drops at a negligible level. Several sets of 3D videos were captured using a structured light scanner, compressed using the Holovideo codec, and then uncompressed and displayed to demonstrate the effectiveness of the codec. With the use of OpenGL Shaders (GLSL), the 3D video codec can encode and decode in real time. We demonstrated that for a video size of 512×512, the decoding speed is 28 frames per second (FPS) with a laptop computer using an embedded NVIDIA GeForce 9400M graphics processing unit (GPU). Encoding can be done with this same setup at 18 FPS, making this technology suitable for applications such as interactive 3D video games and 3D video conferencing.
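
    The abstract does not detail the Holovideo encoding itself; as a hedged illustration of the general idea of packing range data into the channels of an ordinary 2D image, the sketch below encodes a depth map into sine/cosine fringe channels plus a coarse channel used for unwrapping. The fringe period and channel layout are assumptions for illustration, not the authors' actual scheme.

```python
import numpy as np

def encode_depth_to_rgb(depth, period=64.0):
    """Pack a normalized depth map (values in [0, 1]) into an 8-bit RGB image:
    two channels hold a sine/cosine fringe of the depth, the third holds a
    coarse copy used to unwrap the fringe on decoding."""
    levels = depth * 255.0
    phase = 2.0 * np.pi * levels / period
    r = (np.sin(phase) * 0.5 + 0.5) * 255.0
    g = (np.cos(phase) * 0.5 + 0.5) * 255.0
    b = levels                                   # coarse channel
    return np.dstack([r, g, b]).round().astype(np.uint8)

def decode_rgb_to_depth(rgb, period=64.0):
    """Recover depth: the fringe gives depth modulo one period, the coarse
    channel selects the correct period."""
    s = rgb[..., 0] / 255.0 * 2.0 - 1.0
    c = rgb[..., 1] / 255.0 * 2.0 - 1.0
    coarse = rgb[..., 2].astype(float)
    frac = (np.arctan2(s, c) / (2.0 * np.pi)) % 1.0      # in [0, 1)
    fine = frac * period                                  # levels modulo period
    order = np.round((coarse - fine) / period)
    return (fine + order * period) / 255.0

depth = np.random.rand(64, 64)
recovered = decode_rgb_to_depth(encode_depth_to_rgb(depth))
assert np.abs(recovered - depth).max() < 2.0 / 255.0      # small round-trip error
```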

  3. Video-based rendering

    CERN Document Server

    Magnor, Marcus A

    2005-01-01

    Driven by consumer-market applications that enjoy steadily increasing economic importance, graphics hardware and rendering algorithms are a central focus of computer graphics research. Video-based rendering is an approach that aims to overcome the current bottleneck in the time-consuming modeling process and has applications in areas such as computer games, special effects, and interactive TV. This book offers an in-depth introduction to video-based rendering, a rapidly developing new interdisciplinary topic employing techniques from computer graphics, computer vision, and telecommunication en

  4. CERN Video News

    CERN Document Server

    2003-01-01

    From Monday you can see on the web the new edition of CERN's Video News. Thanks to a collaboration between the audiovisual teams at CERN and Fermilab, you can see a report made by the American laboratory. The clip concerns the LHC magnets that are being constructed at Fermilab. Also in the programme: the spectacular rotation of one of the ATLAS coils, the arrival at CERN of the first American magnet made at Brookhaven, the story of the discovery 20 years ago of the W and Z bosons at CERN. http://www.cern.ch/video or Bulletin web page.

  5. Contextual analysis of videos

    CERN Document Server

    Thida, Myo; Monekosso, Dorothy

    2013-01-01

    Video context analysis is an active and vibrant research area, which provides means for extracting, analyzing and understanding the behavior of a single target and multiple targets. Over the last few decades, computer vision researchers have been working to improve the accuracy and robustness of algorithms to analyse the context of a video automatically. In general, the research work in this area can be categorized into three major topics: 1) counting the number of people in the scene, 2) tracking individuals in a crowd, and 3) understanding the behavior of a single target or multiple targets in the scene.

  6. CS Seminar Videos

    OpenAIRE

    Ong, Derek; Tona, Glen; Gibb, Kyle; Parbadia, Sivani

    2013-01-01

    Main site for our project can be found at this URL: http://vtechworks.lib.vt.edu/handle/10919/19036. From here you can find videos of all the CS seminars and distinguished lectures given this semester. Each video has its own abstract and description. The files attached in this section are a final report in both raw Word Document and archival PDF formats and a presentation in both raw Powerpoint and archival PDF formats. Computer Science seminars are a very educational and interesting as...

  7. Video library for video imaging detection at intersection stop lines.

    Science.gov (United States)

    2010-04-01

    The objective of this activity was to record video that could be used for controlled evaluation of video image vehicle detection system (VIVDS) products and software upgrades to existing products based on a list of conditions that might be diffic...

  8. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... story Living well with serious illness: Gregory’s lung cancer story Treating pain and breathing challenges: Matt’s palliative care story Living with chronic kidney disease? ... What Is Palliative Care Definition Pediatric Palliative Care ...

  9. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... well with serious illness: Barbara and Laren’s pancreatic cancer story Living well with serious illness: Gregory’s lung cancer story Living with Kidney Disease: Pain and Itch, ...

  10. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... story Living well with serious illness: Gregory’s lung cancer story Treating pain and breathing challenges: Matt’s palliative care story Living with Kidney Disease: Pain and Itch, and the Role of ...

  11. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... quality of life for people living with congestive heart failure Living well with serious illness: Barbara and ... What Is Palliative Care Definition Pediatric Palliative Care Disease Types FAQ Handout for Patients and Families Is ...

  12. Parkinson's Disease Videos

    Medline Plus

    Full Text Available ... PD Library Legal, Financial, & Insurance Matters Blog For Caregivers Living with Parkinson's While living with PD can ... We Really Know? Nurse Webinars: Nursing Solutions: Improving Caregiver Strain through Science and Model Interventions Expert Briefings: ...

  13. Multi-Task Video Captioning with Video and Entailment Generation

    OpenAIRE

    Pasunuru, Ramakanth; Bansal, Mohit

    2017-01-01

    Video captioning, the task of describing the content of a video, has seen some promising improvements in recent years with sequence-to-sequence models, but accurately learning the temporal and logical dynamics involved in the task still remains a challenge, especially given the lack of sufficient annotated data. We improve video captioning by sharing knowledge with two related directed-generation tasks: a temporally-directed unsupervised video prediction task to learn richer context-aware vid...

  14. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  15. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... quality of life for people living with congestive heart failure Living well with serious illness: Barbara and Laren’s pancreatic cancer story Living well with serious illness: Gregory’s lung cancer ... Palliative Care Disease Types FAQ Handout for Patients and Families Is ...

  16. Scalable gastroscopic video summarization via similar-inhibition dictionary selection.

    Science.gov (United States)

    Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin

    2016-01-01

    This paper aims at developing an automated gastroscopic video summarization algorithm to assist clinicians to more effectively go through the abnormal contents of the video. To select the most representative frames from the original video sequence, we formulate the problem of gastroscopic video summarization as a dictionary selection issue. Different from the traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces the similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select the frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of the poor quality images and a segmentation process to reduce the computational complexity. For experiments, we built a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with the state of the art using the content consistency, index consistency and content-index consistency with the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated based on content consistency, 24 of 30 videos evaluated based on index consistency and all videos evaluated based on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model can achieve better performance compared with other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.
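
    The exact similar-inhibition dictionary selection model is not given in the abstract; the sketch below only illustrates the underlying idea with a greedy key-frame selector that rewards coverage and penalizes similarity to frames already chosen. The frame descriptors, the cosine-similarity measure and the inhibition weight are assumptions, not the paper's formulation.

```python
import numpy as np

def select_key_frames(features, k, inhibition=0.5):
    """Greedy key-frame selection: at each step pick the frame that best
    covers the remaining frames while being penalized for similarity to
    frames already selected (the 'similar-inhibition' idea).
    features: (n_frames, d) array of per-frame descriptors, L2-normalized."""
    sim = features @ features.T              # cosine similarity matrix
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(features)):
            if i in selected:
                continue
            coverage = sim[i].mean()                              # how well frame i represents the video
            redundancy = max((sim[i][j] for j in selected), default=0.0)
            score = coverage - inhibition * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

# toy example: 100 frames with 16-dim descriptors
f = np.random.randn(100, 16)
f /= np.linalg.norm(f, axis=1, keepdims=True)
print(select_key_frames(f, k=5))
```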

  17. Facial expression system on video using widrow hoff

    Science.gov (United States)

    Jannah, M.; Zarlis, M.; Mawengkang, H.

    2018-03-01

    Facial expression recognition is an interesting research area. It relates human feeling to computer applications such as human-computer interaction, data compression, facial animation and face detection from video. The purpose of this research is to create a facial expression system that captures images from a video camera. The system uses the Widrow-Hoff learning method for training and testing images with an Adaptive Linear Neuron (ADALINE) approach. The system performance is evaluated by two parameters, detection rate and false positive rate. The system accuracy depends on good technique and on the face positions that are trained and tested.
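
    The Widrow-Hoff (least-mean-squares) rule referred to above updates the ADALINE weights in proportion to the prediction error on each sample. A minimal sketch follows; the toy feature vectors and binary labels are placeholders, since the paper's actual feature extraction from video frames is not described in the abstract.

```python
import numpy as np

def train_adaline(X, y, lr=0.01, epochs=50):
    """Widrow-Hoff (LMS) training of a single ADALINE unit.
    X: (n_samples, n_features) feature vectors, y: targets in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            output = np.dot(w, xi) + b          # linear activation
            error = target - output
            w += lr * error * xi                # Widrow-Hoff update
            b += lr * error
    return w, b

def predict(X, w, b):
    return np.where(X @ w + b >= 0.0, 1, -1)

# toy data standing in for flattened face-region features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)
w, b = train_adaline(X, y)
accuracy = (predict(X, w, b) == y).mean()
```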

  18. Alzheimer’s Disease in Social Media: Content Analysis of YouTube Videos

    OpenAIRE

    Tang, Weizhou; Olscamp, Kate; Choi, Seul Ki; Friedman, Daniela B

    2017-01-01

    Background Approximately 5.5 million Americans are living with Alzheimer’s disease (AD) in 2017. YouTube is a popular platform for disseminating health information; however, little is known about messages specifically regarding AD that are being communicated through YouTube. Objective This study aims to examine video characteristics, content, speaker characteristics, and mobilizing information (cues to action) of YouTube videos focused on AD. Methods Videos uploaded to YouTube from 2013 to 20...

  19. Acceptance factors for the use of video call via smartphone by blind people

    OpenAIRE

    Tamanit Chanjaraspong

    2017-01-01

    Using video calls via smartphone is a new technology for blind people that can be applied to facilitate their daily lives. This video call technology differs from older technology, and technology acceptance has changed users' behavior in society and culture, and especially their attitude toward using new technology. This research studied the intention and the need of blind people to use video calls via smartphone according to the Technology Acceptance Model, a famous and widely-accepted theory f...

  20. Streaming Video--The Wave of the Video Future!

    Science.gov (United States)

    Brown, Laura

    2004-01-01

    Videos and DVDs give teachers more flexibility than slide projectors, filmstrips, and 16mm films, but teachers and students are excited about a new technology called streaming. Streaming allows educators to view videos on demand via the Internet, which works through the transfer of digital media like video, and voice data that is received…

  1. Diabetes HealthSense: Resources for Living Well

    Medline Plus

    Full Text Available ... unmute Watch more videos from NDEP Selected Resources Need help getting started, or feeling overwhelmed? Take a ... Journey for Control This website is filled with information about living with diabetes and developing habits for ...

  2. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Science.gov (United States)

    Lin, Chia-Wen; Chang, Yao-Jen; Wang, Chih-Ming; Chen, Yung-Chang; Sun, Ming-Ting

    2002-12-01

    This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video objects extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.

  3. A Standard-Compliant Virtual Meeting System with Active Video Object Tracking

    Directory of Open Access Journals (Sweden)

    Chang Yao-Jen

    2002-01-01

    Full Text Available This paper presents an H.323 standard compliant virtual video conferencing system. The proposed system not only serves as a multipoint control unit (MCU) for multipoint connection but also provides a gateway function between the H.323 LAN (local-area network) and the H.324 WAN (wide-area network) users. The proposed virtual video conferencing system provides user-friendly object compositing and manipulation features including 2D video object scaling, repositioning, rotation, and dynamic bit-allocation in a 3D virtual environment. A reliable and accurate scheme based on background image mosaics is proposed for real-time extracting and tracking foreground video objects from the video captured with an active camera. Chroma-key insertion is used to facilitate video objects extraction and manipulation. We have implemented a prototype of the virtual conference system with an integrated graphical user interface to demonstrate the feasibility of the proposed methods.
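
    Both versions of this abstract mention chroma-key insertion to ease object extraction. A minimal sketch of chroma-key masking and compositing is given below; the green key color and distance threshold are assumptions for illustration, not values from the paper.

```python
import numpy as np

def chroma_key_mask(frame_rgb, key=(0, 255, 0), tolerance=80.0):
    """Return a boolean foreground mask for an RGB frame shot against a
    solid key color: pixels far from the key color are foreground."""
    diff = frame_rgb.astype(float) - np.array(key, dtype=float)
    distance = np.linalg.norm(diff, axis=-1)
    return distance > tolerance

def composite(foreground, background, mask):
    """Paste the masked foreground object onto a new background."""
    out = background.copy()
    out[mask] = foreground[mask]
    return out

# toy frame: a red object on a green key background
fg = np.zeros((120, 160, 3), dtype=np.uint8)
fg[:, :] = (0, 255, 0)
fg[40:80, 60:100] = (200, 50, 50)
bg = np.full_like(fg, 30)
mask = chroma_key_mask(fg)
result = composite(fg, bg, mask)
```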

  4. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    Science.gov (United States)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.
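
    The abstract describes detecting embedded captions on reduced images reconstructed from compressed MPEG video without full-frame decompression. As a hedged illustration of one simple cue such a detector might use, the sketch below flags rows of a reduced grayscale frame whose density of strong horizontal intensity transitions is high; the thresholds and the toy frame are invented, not the authors' method.

```python
import numpy as np

def detect_caption_rows(reduced_frame, edge_thresh=20.0, density_thresh=0.3):
    """Very rough caption detector on a reduced (e.g. DC-coefficient) grayscale
    image: caption areas tend to have a high density of strong horizontal
    intensity transitions. Returns indices of candidate caption rows."""
    gray = reduced_frame.astype(float)
    edges = np.abs(np.diff(gray, axis=1))          # horizontal gradients
    strong = edges > edge_thresh
    density = strong.mean(axis=1)                  # fraction of edge pixels per row
    return np.where(density > density_thresh)[0]

# toy reduced frame with a high-contrast 'caption band' of alternating pixels
frame = np.full((36, 44), 60.0)
frame[30:34, 4:40] = np.tile([220.0, 40.0], 18)
print(detect_caption_rows(frame))   # rows 30..33 are flagged
```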

  5. Video Vectorization via Tetrahedral Remeshing.

    Science.gov (United States)

    Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping

    2017-02-09

    We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh to achieve high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.

  6. Video Pedagogy as Political Activity.

    Science.gov (United States)

    Higgins, John W.

    1991-01-01

    Asserts that the education of students in the technology of video and audio production is a political act. Discusses the structure and style of production, and the ideologies and values contained therein. Offers alternative approaches to critical video pedagogy. (PRA)

  7. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... will allow you to take a more active role in your care. The information in these videos ... Stategies to Increase your Level of Physical Activity Role of Body Weight in Osteoarthritis Educational Videos for ...

  8. NEI You Tube Videos: Amblyopia

    Medline Plus

    Full Text Available ... Eye Disease Dilated Eye Exam Dry Eye For Kids Glaucoma Healthy Vision Tips Leber Congenital Amaurosis Low Vision Refractive Errors Retinopathy of Prematurity Science Spanish Videos Webinars NEI YouTube Videos: Amblyopia Embedded ...

  9. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... of Body Weight in Osteoarthritis Educational Videos for Patients Rheumatoid Arthritis Educational Video Series Psoriatic Arthritis 101 ... Patient to an Adult Rheumatologist Drug Information for Patients Arthritis Drug Information Sheets Benefits and Risks of ...

  10. Talking Video in 'Everyday Life'

    DEFF Research Database (Denmark)

    McIlvenny, Paul

    For better or worse, video technologies have made their way into many domains of social life, for example in the domain of therapeutics. Techniques such as Marte Meo, Video Interaction Guidance (ViG), Video-Enhanced Reflection on Communication, Video Home Training and Video intervention....../prevention (VIP) all promote the use of video as a therapeutic tool. This paper focuses on media therapeutics and the various in situ uses of video technologies in the mass media for therapeutic purposes. Reality TV parenting programmes such as Supernanny, Little Angels, The House of Tiny Tearaways, Honey, We...... observation and instruction (directives) relayed across different spaces; 2) the use of recorded video by participants to visualise, spatialise and localise talk and action that is distant in time and/or space; 3) the translating, stretching and cutting of social experience in and through the situated use...

  11. Advanced digital video surveillance for safeguard and physical protection

    International Nuclear Information System (INIS)

    Kumar, R.

    2002-01-01

    Full text: Video surveillance is a crucial component of safeguards and physical protection. Digital technology has revolutionized the surveillance scenario and brought in various new capabilities such as better image quality, faster search and retrieval of video images, less storage space for recording, efficient transmission and storage of video, better protection of recorded video images, and easy remote access to live and recorded video. The basic safeguards requirement for verifiably uninterrupted surveillance has remained largely unchanged since its inception. However, changes to the inspection paradigm to admit automated review and remote monitoring have dramatically increased the demands on safeguards surveillance systems. Today's safeguards systems can incorporate intelligent motion detection with a very low false-alarm rate and reduced archiving volume, embedded image processing for object behavior and event-based indexing, object recognition, and efficient querying and report generation. They also demand cryptographically authenticated, encrypted, and highly compressed video data for efficient, secure and tamper-indicating transmission. In physical protection, intelligent and robust video motion detection, real-time moving object detection and tracking from stationary and moving camera platforms, multi-camera cooperative tracking, activity detection and recognition, and human motion analysis are going to play a key role in perimeter security. Incorporation of video imagery exploitation tools such as automatic number plate recognition, vehicle identification and classification, vehicle undercarriage inspection, face recognition, iris recognition and other biometric tools, and gesture recognition makes personnel and vehicle access control robust and foolproof. Innovative digital image enhancement techniques coupled with novel sensor designs make low-cost, omni-directional, all-weather, day-and-night surveillance a reality
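
    Many of the capabilities listed above build on a motion-detection baseline. The sketch below shows a minimal running-average background-subtraction detector of that kind; the adaptation rate and per-pixel threshold are arbitrary illustrative values, not part of the cited work.

```python
import numpy as np

class MotionDetector:
    """Simple running-average background subtraction, the kind of baseline on
    top of which smarter surveillance analytics are usually built."""
    def __init__(self, alpha=0.05, threshold=25.0):
        self.alpha = alpha            # background adaptation rate
        self.threshold = threshold    # per-pixel difference threshold
        self.background = None

    def update(self, gray_frame):
        frame = gray_frame.astype(float)
        if self.background is None:
            self.background = frame
            return np.zeros(frame.shape, dtype=bool)
        diff = np.abs(frame - self.background)
        motion_mask = diff > self.threshold
        # adapt the background slowly so gradual lighting changes are absorbed
        self.background = (1 - self.alpha) * self.background + self.alpha * frame
        return motion_mask

detector = MotionDetector()
still = np.full((48, 64), 100.0)
moving = still.copy()
moving[10:20, 20:30] = 200.0
detector.update(still)
mask = detector.update(moving)
print(mask.sum())   # number of 'motion' pixels (100 in this toy example)
```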

  12. Parkinson's Disease Videos

    Medline Plus

    Full Text Available ... Is Initiated After Diagnosis? CareMAP: When Is It Time to Get Help? Unconditional Love CareMAP: Rest and Sleep: ... CareMAP: Mealtime and Swallowing: Part 1 ... of books, fact sheets, videos, podcasts, and more. To get started, use the search feature or check ...

  13. Fermilab | Publications and Videos

    Science.gov (United States)

    collection of particle physics books and journals. The Library also offers a range of services.

  14. Video processing project

    CSIR Research Space (South Africa)

    Globisch, R

    2009-03-01

    Full Text Available Video processing source code for algorithms and tools used in software media pipelines (e.g. image scalers, colour converters, etc.). The currently available source code is written in C++ with its associated libraries and DirectShow filters.

  15. Videos, Podcasts and Livechats

    Medline Plus

    Full Text Available ... Provider Meet the Team Blog Articles & Stories News Provider Directory Donate Resources Links Videos Podcasts Webinars For the Media For Clinicians For Policymakers For Family Caregivers Glossary Sign Up for ... Us Provider Directory What Is Palliative Care Definition Disease Types ...

  16. Video Game Controversies.

    Science.gov (United States)

    Funk, Jeanne B.; Buchman, Debra D.

    1995-01-01

    Reviews the literature on: (1) health-related effects of video games (VGs), including seizures, physiologic responses, and musculoskeletal injuries; (2) eye-hand coordination in VGs; (3) psychological adjustment related to VGs, including possible psychopathologies and violence-related effects; and (4) the educational impact of VGs. Also examines…

  17. Acoustic Neuroma Educational Video

    Medline Plus

    Full Text Available ... 30041 770-205-8211 info@ANAUSA.org The world’s #1 acoustic neuroma resource Click to learn more... ... is acoustic neuroma? Diagnosing Symptoms Side Effects Keywords World Language Videos Questions to ask Choosing a healthcare ...

  18. Mobiele video voor bedrijfscommunicatie

    NARCIS (Netherlands)

    Niamut, O.A.; Weerdt, C.A. van der; Havekes, A.

    2009-01-01

    The Penta Mobilé project ran from June to November 2009 and aimed to map out the possibilities of mobile video for business communication applications. The research was carried out together with five ('Penta') parties: Business Tales, Condor Digital, European Communication Projects

  19. Characteristics of Instructional Videos

    Science.gov (United States)

    Beheshti, Mobina; Taspolat, Ata; Kaya, Omer Sami; Sapanca, Hamza Fatih

    2018-01-01

    Nowadays, video plays a significant role in education: it is integrated into traditional classes, serves as the principal delivery system of information, particularly in online courses, and forms the foundation of many blended classes. Hence, education is adopting a modern approach of instruction with the target of moving away…

  20. Tracing Sequential Video Production

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Khalid, Md. Saifuddin

    2015-01-01

    , for one week in 2014, and collected and analyzed visual data to learn about scientists’ practices. The visual material that was collected represented the agreed on material artifacts that should aid the students' reflective process to make sense of science technology practices. It was up to the student...... video, nature of the interactional space, and material and spatial semiotics....

  1. Video narrativer i sygeplejerskeuddannelsen

    DEFF Research Database (Denmark)

    Jensen, Inger

    2009-01-01

    The article offers some suggestions on how video narratives can be used in nursing education as triggers that open up discussion and the development of meaningful attitudes towards fellow human beings. It also examines how teachers, in their didactic considerations, can draw on elements from theory on…

  2. Live Well

    Science.gov (United States)

    ... Health Conditions Live Well Mental Health Substance Use Smoking Healthy Diet Physical Activity Family Planning Living with HIV: Travel ... to his or her health and well-being. Smoking - Tobacco use is the ... year. Healthy Diet - No matter your HIV status, healthy eating is ...

  3. Healthy living

    Science.gov (United States)

    ... living URL of this page: //medlineplus.gov/ency/article/002393.htm Healthy living Good health habits can allow you to avoid illness and improve your quality of life. The following steps will help you ...

  4. Recent advances in neutron capture therapy (NCT)

    International Nuclear Information System (INIS)

    Fairchild, R.G.

    1985-01-01

    The application of the ¹⁰B(n,α)⁷Li reaction to cancer radiotherapy (Neutron Capture Therapy, or NCT) has intrigued investigators since the discovery of the neutron. This paper briefly summarizes data describing recently developed boronated compounds with evident tumor specificity and extended biological half-lives. The implication of these compounds for NCT is evaluated in terms of Therapeutic Gain (TG). The optimization of NCT using band-pass filtered beams is described, again in terms of TG, and irradiation times with these less intense beams are estimated. 24 refs., 3 figs., 3 tabs

  5. User Information Needs for Environmental Opinion-forming and Decision-making in Link-enriched Video

    NARCIS (Netherlands)

    A.C. Palumbo; L. Hardman (Lynda)

    2013-01-01

    Link-enriched video can support users in informative processes of environmental opinion-forming and decision-making. To enable this, we need to specify the information that should be captured in an annotation schema for describing the video. We conducted expert interviews to elicit

  6. Keys to Successful Interactive Storytelling: A Study of the Booming "Choose-Your-Own-Adventure" Video Game Industry

    Science.gov (United States)

    Tyndale, Eric; Ramsoomair, Franklin

    2016-01-01

    Video gaming has become a multi-billion dollar industry that continues to capture the hearts, minds and pocketbooks of millions of gamers who span all ages. Narrative and interactive games form part of this market. The popularity of tablet computers and the technological advances of video games have led to a renaissance in the genre for both youth…

  7. 100 Million Views of Electronic Cigarette YouTube Videos and Counting: Quantification, Content Evaluation, and Engagement Levels of Videos

    Science.gov (United States)

    2016-01-01

    Background The video-sharing website, YouTube, has become an important avenue for product marketing, including tobacco products. It may also serve as an important medium for promoting electronic cigarettes, which have rapidly increased in popularity and are heavily marketed online. While a few studies have examined a limited subset of tobacco-related videos on YouTube, none has explored e-cigarette videos’ overall presence on the platform. Objective To quantify e-cigarette-related videos on YouTube, assess their content, and characterize levels of engagement with those videos. Understanding promotion and discussion of e-cigarettes on YouTube may help clarify the platform’s impact on consumer attitudes and behaviors and inform regulations. Methods Using an automated crawling procedure and keyword rules, e-cigarette-related videos posted on YouTube and their associated metadata were collected between July 1, 2012, and June 30, 2013. Metadata were analyzed to describe posting and viewing time trends, number of views, comments, and ratings. Metadata were content coded for mentions of health, safety, smoking cessation, promotional offers, Web addresses, product types, top-selling brands, or names of celebrity endorsers. Results As of June 30, 2013, approximately 28,000 videos related to e-cigarettes were captured. Videos were posted by approximately 10,000 unique YouTube accounts, viewed more than 100 million times, rated over 380,000 times, and commented on more than 280,000 times. More than 2200 new videos were being uploaded every month by June 2013. The top 1% of most-viewed videos accounted for 44% of total views. Text fields for the majority of videos mentioned websites (70.11%); many referenced health (13.63%), safety (10.12%), smoking cessation (9.22%), or top e-cigarette brands (33.39%). The number of e-cigarette-related YouTube videos was projected to exceed 65,000 by the end of 2014, with approximately 190 million views. Conclusions YouTube is a major

  8. Neutron capture therapy

    International Nuclear Information System (INIS)

    Jun, B. J.

    1998-11-01

    The overall state of the art related to neutron capture therapy (NCT) is surveyed. Since the field related to NCT is very wide, it is not intended to survey all related subjects in depth. The primary objective of this report is to help those working on the installation of an NCT facility and a PGNAA (prompt gamma-ray neutron activation analysis) system for boron analysis understand NCT at HANARO as a whole. Therefore, while the reactor neutron source and PGNAA are dealt with in detail, other parts are limited to the level necessary to understand the related fields. For example, the subject of chemical compounds, which requires intensive knowledge of chemistry, is not dealt with as a separate item. However, the requirements of a compound for NCT, the currently available compounds, their characteristics, etc. can be understood through this report. Although the subject of the cancers treated by NCT is beyond the capability of the author, it is dealt with by focusing on its characteristics related to the success of NCT. Each detailed subject is expected to be dealt with in more detail by specialists in the future. This report should be helpful for researchers working on NCT to understand the related fields. (author). 128 refs., 3 tabs., 12 figs

  9. Error resilient H.264/AVC Video over Satellite for low Packet Loss Rates

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Forchhammer, Søren; Andersen, Jakob Dahl

    2007-01-01

    The performance of video over satellite is simulated. The error resilience tools of intra macroblock refresh and slicing are optimized for live broadcast video over satellite. The improved performance using feedback, using a cross-layer approach, over the satellite link is also simulated. The ne...

  10. Costs and financial benefits of video communication compared to usual care at home: a systematic review.

    NARCIS (Netherlands)

    Peeters, J.M.; Mistiaen, P.; Francke, A.L.

    2011-01-01

    We conducted a systematic review of video communication in home care to provide insight into the ratio between the costs and financial benefits (i.e. cost savings). Four databases (PUBMED, EMBASE, COCHRANE LIBRARY, CINAHL) were searched for studies on video communication for patients living at home

  11. Putting Your Camp on Video.

    Science.gov (United States)

    Peterson, Michael

    1997-01-01

    Creating a video to use in marketing camp involves selecting a format, writing the script, determining the video's length, obtaining release forms from campers who appear in the video, determining strategies for filming, choosing a narrator, and renting a studio and a mixing engineer (videotape editor). Includes distribution tips. (LP)

  12. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available ... Corner / Patient Webcasts / Rheumatoid Arthritis Educational Video Series Rheumatoid Arthritis Educational Video Series This series of five videos ... Your Arthritis Managing Chronic Pain and Depression in Arthritis Nutrition & Rheumatoid Arthritis Arthritis and Health-related Quality of Life ...

  13. Contemplation, Subcreation, and Video Games

    Directory of Open Access Journals (Sweden)

    Mark J. P. Wolf

    2018-04-01

    Full Text Available This essay asks how religion and theological ideas might be made manifest in video games, and particularly the creation of video games as a religious activity, looking at contemplative experiences in video games, and the creation and world-building of game worlds as a form of Tolkienian subcreation, which itself leads to contemplation regarding the creation of worlds.

  14. CERN Video News on line

    CERN Multimedia

    2003-01-01

    The latest CERN video news is on line. In this issue : an interview with the Director General and reports on the new home for the DELPHI barrel and the CERN firemen's spectacular training programme. There's also a vintage video news clip from 1954. See: www.cern.ch/video or Bulletin web page

  15. Remote stereoscopic video play platform for naked eyes based on the Android system

    Science.gov (United States)

    Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng

    2014-11-01

    As people's quality of life has improved significantly, traditional 2D video technology cannot meet people's desire for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video play platform. The platform consists of a server and clients. The server is used for transmission of different formats of video and the client is responsible for receiving remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server. Live555 is a cross-platform open source project which provides solutions for streaming media, such as the RTSP protocol, and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player. The Android player, which has all the basic functions of ordinary players and is able to play normal 2D video, is the basic structure for redevelopment. RTSP is also implemented in this structure for communication. In order to achieve stereoscopic display, we need to perform pixel rearrangement in the player's decoding part. The decoding part is native code called through the JNI interface so that video frames can be extracted more efficiently. The video formats that we process are left-right, top-bottom and nine-grid. In the design and development, a number of key technologies from Android application development have been employed, including wireless transmission, pixel restructuring and JNI calls. By employing these key technologies, the design plan has been completed. After some updates and optimizations, the video player can play remote 3D video well anytime and anywhere and meet users' requirements.
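
    The pixel rearrangement step mentioned above converts a transmitted stereo frame into the layout a naked-eye (autostereoscopic) display expects. A hedged sketch for one common case, converting a side-by-side frame into column-interleaved output, follows; the interleaving pattern is display-specific and assumed here, and the paper's own rearrangement for its three formats is not specified in the abstract.

```python
import numpy as np

def side_by_side_to_column_interleaved(frame):
    """Convert a left-right (side-by-side) stereo frame of shape (h, w, 3)
    into a column-interleaved frame of the same width, alternating
    left-view and right-view columns as many lenticular or parallax-barrier
    panels expect. The exact pattern depends on the target display."""
    h, w, _ = frame.shape
    half = w // 2
    left = frame[:, :half]
    right = frame[:, half:half * 2]
    out = np.empty((h, half * 2, 3), dtype=frame.dtype)
    out[:, 0::2] = left       # even output columns from the left view
    out[:, 1::2] = right      # odd output columns from the right view
    return out

# toy side-by-side frame (two 1280-wide views packed into one 2560-wide frame)
sbs = np.random.randint(0, 256, size=(720, 2560, 3), dtype=np.uint8)
interleaved = side_by_side_to_column_interleaved(sbs)
```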

  16. Analysis of User Requirements in Interactive 3D Video Systems

    Directory of Open Access Journals (Sweden)

    Haiyue Yuan

    2012-01-01

    Full Text Available The recent development of three dimensional (3D) display technologies has resulted in a proliferation of 3D video production and broadcasting, attracting a lot of research into capture, compression and delivery of stereoscopic content. However, the predominant design practice of interactions with 3D video content has failed to address its differences and possibilities in comparison to the existing 2D video interactions. This paper presents a study of user requirements related to interaction with the stereoscopic 3D video. The study suggests that the change of view, zoom in/out, dynamic video browsing, and textual information are the most relevant interactions with stereoscopic 3D video. In addition, we identified a strong demand for object selection that resulted in a follow-up study of user preferences in 3D selection using virtual-hand and ray-casting metaphors. These results indicate that interaction modality affects users’ decision of object selection in terms of chosen location in 3D, while user attitudes do not have significant impact. Furthermore, the ray-casting-based interaction modality using Wiimote can outperform the volume-based interaction modality using mouse and keyboard for object positioning accuracy.

  17. Dynamic Textures Modeling via Joint Video Dictionary Learning.

    Science.gov (United States)

    Wei, Xian; Li, Yuanxiang; Shen, Hao; Chen, Fang; Kleinsteuber, Martin; Wang, Zhongfeng

    2017-04-06

    Video representation is an important and challenging task in the computer vision community. In this paper, we consider the problem of modeling and classifying video sequences of dynamic scenes which could be modeled in a dynamic textures (DT) framework. At first, we assume that image frames of a moving scene can be modeled as a Markov random process. We propose a sparse coding framework, named joint video dictionary learning (JVDL), to model a video adaptively. By treating the sparse coefficients of image frames over a learned dictionary as the underlying "states", we learn an efficient and robust linear transition matrix between two adjacent frames of sparse events in time series. Hence, a dynamic scene sequence is represented by an appropriate transition matrix associated with a dictionary. In order to ensure the stability of JVDL, we impose several constraints on such transition matrix and dictionary. The developed framework is able to capture the dynamics of a moving scene by exploring both sparse properties and the temporal correlations of consecutive video frames. Moreover, such learned JVDL parameters can be used for various DT applications, such as DT synthesis and recognition. Experimental results demonstrate the strong competitiveness of the proposed JVDL approach in comparison with state-of-the-art video representation methods. Especially, it performs significantly better in dealing with DT synthesis and recognition on heavily corrupted data.
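
    Stripped of its dictionary learning and stability constraints, the core of the model treats the sparse codes of consecutive frames as states linked by a linear transition matrix. The sketch below estimates such a matrix by ordinary least squares and rolls it forward to synthesize new code vectors; the random placeholder codes and the unconstrained fit are simplifications for illustration, not the JVDL formulation itself.

```python
import numpy as np

def estimate_transition(codes):
    """Given per-frame sparse codes stacked as columns of a (k, T) matrix,
    fit A so that codes[:, t+1] ~= A @ codes[:, t] in the least-squares sense."""
    past = codes[:, :-1]          # x_1 .. x_{T-1}
    future = codes[:, 1:]         # x_2 .. x_T
    # A = future @ pinv(past); JVDL would add stability/structure constraints here
    return future @ np.linalg.pinv(past)

def synthesize(x0, A, steps):
    """Roll the learned dynamics forward to synthesize new 'frames' of codes."""
    out = [x0]
    for _ in range(steps):
        out.append(A @ out[-1])
    return np.stack(out, axis=1)

rng = np.random.default_rng(1)
codes = rng.normal(size=(32, 60))            # placeholder sparse codes for 60 frames
A = estimate_transition(codes)
rollout = synthesize(codes[:, 0], A, steps=10)
```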

  18. CameraCast: flexible access to remote video sensors

    Science.gov (United States)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.
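
    The key idea of a logical device API is that application code reads frames the same way whether the source is local or remote. The sketch below is a user-level, purely hypothetical illustration of that interface; CameraCast itself is a kernel-level system, and the class name, URL handling and read() semantics here are invented for illustration.

```python
import urllib.request

class LogicalCamera:
    """Present local and remote video sources through one read() interface,
    so application code does not care where the frames come from."""
    def __init__(self, source):
        self.source = source

    def read(self, nbytes=4096):
        if self.source.startswith(("http://", "https://")):
            # remote sensor reached over the network
            with urllib.request.urlopen(self.source) as resp:
                return resp.read(nbytes)
        # local device or file
        with open(self.source, "rb") as f:
            return f.read(nbytes)

# application code is identical for both cases:
# local_bytes  = LogicalCamera("/dev/video0").read()
# remote_bytes = LogicalCamera("http://sensor.example.org/stream").read()
```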

  19. Mining Contextual Information for Ephemeral Digital Video Preservation

    Directory of Open Access Journals (Sweden)

    Chirag Shah

    2009-06-01

    Full Text Available Normal 0 For centuries the archival community has understood and practiced the art of adding contextual information while preserving an artifact. The question now is how these practices can be transferred to the digital domain. With the growing expansion of production and consumption of digital objects (documents, audio, video, etc. it has become essential to identify and study issues related to their representation. A cura­tor in the digital realm may be said to have the same responsibilities as one in a traditional archival domain. However, with the mass production and spread of digital objects, it may be difficult to do all the work manually. In the present article this problem is considered in the area of digital video preservation. We show how this problem can be formulated and propose a framework for capturing contextual infor­mation for ephemeral digital video preservation. This proposal is realized in a system called ContextMiner, which allows us to cater to a digital curator's needs with its four components: digital video curation, collection visualization, browsing interfaces, and video harvesting and monitoring. While the issues and systems described here are geared toward digital videos, they can easily be applied to other kinds of digital objects.

  20. Speed Biases With Real-Life Video Clips

    Directory of Open Access Journals (Sweden)

    Federica Rossi

    2018-03-01

    Full Text Available We live almost literally immersed in an artificial visual world, especially motion pictures. In this exploratory study, we asked whether the best speed for reproducing a video is its original, shooting speed. By using adjustment and double staircase methods, we examined speed biases in viewing real-life video clips in three experiments, and assessed their robustness by manipulating visual and auditory factors. With the tested stimuli (short clips of human motion, mixed human-physical motion, physical motion and ego-motion), speed underestimation was the rule rather than the exception, although it depended largely on clip content, ranging on average from 2% (ego-motion) to 32% (physical motion). Manipulating display size or adding arbitrary soundtracks did not modify these speed biases. Estimated speed was not correlated with estimated duration of these same video clips. These results indicate that the sense of speed for real-life video clips can be systematically biased, independently of the impression of elapsed time. Measuring subjective visual tempo may integrate traditional methods that assess time perception: speed biases may be exploited to develop a simple, objective test of reality flow, to be used for example in clinical and developmental contexts. From the perspective of video media, measuring speed biases may help to optimize video reproduction speed and validate “natural” video compression techniques based on sub-threshold temporal squeezing.

  1. The Generic Data Capture Facility

    Science.gov (United States)

    Connell, Edward B.; Barnes, William P.; Stallings, William H.

    1987-01-01

    The Generic Data Capture Facility, which can provide data capture support for a variety of different types of spacecraft while enabling operations costs to be carefully controlled, is discussed. The data capture functions, data protection, isolation of users from data acquisition problems, data reconstruction, and quality and accounting are addressed. The TDM and packet data formats utilized by the system are described, and the development of generic facilities is considered.

  2. Carbon captured from the air

    Energy Technology Data Exchange (ETDEWEB)

    Keith, D. [Calgary Univ., AB (Canada)

    2008-10-15

    This article presented an innovative way to achieve the efficient capture of atmospheric carbon. A team of scientists from the University of Calgary's Institute for Sustainable Energy, Environment and Economy have shown that it is possible to reduce carbon dioxide (CO₂) using a simple machine that can capture the trace amount of CO₂ present in ambient air at any place on the planet. The thermodynamics of capturing the small concentrations of CO₂ from the air is only slightly more difficult than capturing much larger concentrations of CO₂ from power plants. The research is significant because it offers a way to capture CO₂ emissions from transportation sources such as vehicles and airplanes, which represent more than half of the greenhouse gases emitted on Earth. The energy efficient and cost effective air capture technology could complement other approaches for reducing emissions from the transportation sector, such as biofuels and electric vehicles. Air capture differs from carbon capture and storage (CCS) technology used at coal-fired power plants where CO₂ is captured and pipelined for permanent storage underground. Air capture can capture the CO₂ that is present in ambient air and store it wherever it is cheapest. The team at the University of Calgary showed that CO₂ could be captured directly from the air with less than 100 kWh of electricity per tonne of CO₂. A custom-built tower was able to capture the equivalent of 20 tonnes per year of CO₂ on a single square meter of scrubbing material. The team devised a way to use a chemical process from the pulp and paper industry to cut the energy cost of air capture in half. Although the technology is only in its early stage, it appears that CO₂ could be captured from the air with an energy demand comparable to that needed for CO₂ capture from conventional power plants, but costs will be higher. The simple, reliable and scalable technology
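
    The two figures quoted above (under 100 kWh of electricity per tonne of CO₂, and roughly 20 tonnes of CO₂ per year per square metre of scrubbing material) can be combined into a quick back-of-the-envelope estimate. In the sketch below only those two constants come from the article; the 50 m² example installation is an arbitrary assumption.

```python
# Back-of-the-envelope numbers taken from the abstract above.
ELECTRICITY_KWH_PER_TONNE = 100      # upper bound quoted for air capture
CAPTURE_TONNES_PER_M2_YEAR = 20      # prototype tower's capture rate

def annual_capture(scrubber_area_m2):
    """Tonnes of CO2 captured per year and the electricity that would require."""
    tonnes = scrubber_area_m2 * CAPTURE_TONNES_PER_M2_YEAR
    electricity_kwh = tonnes * ELECTRICITY_KWH_PER_TONNE
    return tonnes, electricity_kwh

# hypothetical 50 m^2 installation: 1000 t CO2/year, at most ~100 MWh/year
tonnes, kwh = annual_capture(50)
print(f"{tonnes} t CO2/year, {kwh / 1000:.0f} MWh/year of electricity")
```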

  3. Resource capture by single leaves

    Energy Technology Data Exchange (ETDEWEB)

    Long, S.P.

    1992-05-01

    Leaves show a variety of strategies for maximizing CO₂ and light capture. These are more meaningfully explained if they are considered in the context of maximizing capture relative to the utilization of water, nutrients and carbohydrate reserves. There is considerable variation between crops in their efficiency of CO₂ and light capture at the leaf level. Understanding of these mechanisms indicates some ways in which the efficiency of resource capture could be improved, but capture at the leaf level cannot be meaningfully considered without simultaneous understanding of the implications at the canopy level. 36 refs., 5 figs., 1 tab.

  4. Carbon captured from the air

    International Nuclear Information System (INIS)

    Keith, D.

    2008-01-01

    This article presented an innovative way to achieve the efficient capture of atmospheric carbon. A team of scientists from the University of Calgary's Institute for Sustainable Energy, Environment and Economy have shown that it is possible to reduce carbon dioxide (CO2) using a simple machine that can capture the trace amount of CO2 present in ambient air at any place on the planet. The thermodynamics of capturing the small concentrations of CO2 from the air is only slightly more difficult than capturing much larger concentrations of CO2 from power plants. The research is significant because it offers a way to capture CO2 emissions from transportation sources such as vehicles and airplanes, which represent more than half of the greenhouse gases emitted on Earth. The energy efficient and cost effective air capture technology could complement other approaches for reducing emissions from the transportation sector, such as biofuels and electric vehicles. Air capture differs from carbon capture and storage (CCS) technology used at coal-fired power plants where CO2 is captured and pipelined for permanent storage underground. Air capture can capture the CO2 that is present in ambient air and store it wherever it is cheapest. The team at the University of Calgary showed that CO2 could be captured directly from the air with less than 100 kWhrs of electricity per tonne of CO2. A custom-built tower was able to capture the equivalent of 20 tonnes per year of CO2 on a single square meter of scrubbing material. The team devised a way to use a chemical process from the pulp and paper industry to cut the energy cost of air capture in half. Although the technology is only in its early stage, it appears that CO2 could be captured from the air with an energy demand comparable to that needed for CO2 capture from conventional power plants, but costs will be higher. The simple, reliable and scalable technology offers an opportunity to build a commercial-scale plant. 1 fig

  5. 47 CFR 79.1 - Closed captioning of video programming.

    Science.gov (United States)

    2010-10-01

    ... chapter, and any other distributor of video programming for residential reception that delivers such... intended for viewing. This exemption is to be determined based on the primary reception locations and... subtitles in the language of the target audience may be used in lieu of closed captioning; (3) Live...

  6. Video Stream Retrieval of Unseen Queries using Semantic Memory

    NARCIS (Netherlands)

    Cappallo, S.; Mensink, T.; Snoek, C.G.M.; Wilson, R.C.; Hancock, E.R.; Smith, W.A.P.

    2016-01-01

    Retrieval of live, user-broadcast video streams is an under-addressed and increasingly relevant challenge. The on-line nature of the problem requires temporal evaluation and the unforeseeable scope of potential queries motivates an approach which can accommodate arbitrary search queries. To account

  7. Community-made mobile videos as a mechanism for maternal ...

    African Journals Online (AJOL)

    Aim: This study aimed at evaluating the feasibility of using locally made videos by local community groups in local languages as a channel for increasing knowledge, practices, demand and use of maternal and child health messages among women living in rural communities in Eastern Uganda. Methods: This paper ...

  8. Action video game players' visual search advantage extends to biologically relevant stimuli.

    Science.gov (United States)

    Chisholm, Joseph D; Kingstone, Alan

    2015-07-01

    Research investigating the effects of action video game experience on cognition has demonstrated a host of performance improvements on a variety of basic tasks. Given the prevailing evidence that these benefits result from efficient control of attentional processes, there has been growing interest in using action video games as a general tool to enhance everyday attentional control. However, to date, there is little evidence indicating that the benefits of action video game playing scale up to complex settings with socially meaningful stimuli - one of the fundamental components of our natural environment. The present experiment compared action video game player (AVGP) and non-video game player (NVGP) performance on an oculomotor capture task that presented participants with face stimuli. In addition, the expression of a distractor face was manipulated to assess if action video game experience modulated the effect of emotion. Results indicate that AVGPs experience less oculomotor capture than NVGPs; an effect that was not influenced by the emotional content depicted by distractor faces. It is noteworthy that this AVGP advantage emerged despite participants being unaware that the investigation had to do with video game playing, and participants being equivalent in their motivation and treatment of the task as a game. The results align with the notion that action video game experience is associated with superior attentional and oculomotor control, and provides evidence that these benefits can generalize to more complex and biologically relevant stimuli. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Living PSA

    International Nuclear Information System (INIS)

    Evans, M.G.K.

    1997-01-01

    The aim of this presentation is to gain an understanding of the requirements for a PSA to be considered a Living PSA. The presentation is divided into the following topics: Definition; Planning/Documentation; Task Performance; Maintenance; Management. 4 figs

  10. Parkinson's Disease Videos

    Medline Plus

    Full Text Available ... live well with Parkinson's disease.

  11. Chapman's Reef Oculina Banks Clelia Dive 621 2001 Digital Imagery - Captured from Videotapes taken during Submersible Dives to the Oculina Banks Deep Sea Coral Reefs (NODC Accession 0047190)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Digital imagery, MPEGs and JPEGs, captured from mini-DV magnetic videotapes collected with an underwater 3-chip CCD color video camera, deployed from the research...

  12. Sebastian Pinnacles, Oculina Banks Clelia Dive 615 2001 Digital Imagery - Captured from Videotapes taken during Submersible Dives to the Oculina Banks Deep Sea Coral Reefs (NODC Accession 0047190)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Digital imagery, MPEGs and JPEGs, captured from mini-DV magnetic videotapes collected with an underwater 3-chip CCD color video camera, deployed from the research...

  13. Sebastian Pinnacles Oculina Banks Clelia Dive 619 2001 Digital Imagery - Captured from Videotapes taken during Submersible Dives to the Oculina Banks Deep Sea Coral Reefs (NODC Accession 0047190)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Digital imagery, MPEGs and JPEGs, captured from mini-DV magnetic videotapes collected with an underwater 3-chip CCD color video camera, deployed from the research...

  14. Sebastian Pinnacles Oculina Banks Clelia Dive 618 2001 Digital Imagery - Captured from Videotapes taken during Submersible Dives to the Oculina Banks Deep Sea Coral Reefs (NODC Accession 0047190)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Digital imagery, MPEGs and JPEGs, captured from mini-DV magnetic videotapes collected with an underwater 3-chip CCD color video camera, deployed from the research...

  15. Sebastian Pinnacles, Oculina Banks Clelia Dive 614 2001 Digital Imagery - Captured from Videotapes taken during Submersible Dives to the Oculina Banks Deep Sea Coral Reefs (NODC Accession 0047190)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Digital imagery, MPEGs and JPEGs, captured from mini-DV magnetic videotapes collected with an underwater 3-chip CCD color video camera, deployed from the research...

  16. Jeff's Reef Oculina Banks Clelia Dive 606 2001 Digital Imagery - Captured from Videotapes taken during Submersible Dives to the Oculina Banks Deep Sea Coral Reefs (NODC Accession 0047190)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Digital imagery, MPEGs and JPEGs, captured from mini-DV magnetic videotapes collected with an underwater 3-chip CCD color video camera, deployed from the research...

  17. Eau Gallie Oculina Banks Clelia Dive 609 2001 Digital Imagery - Captured from Videotapes taken during Submersible Dives to the Oculina Banks Deep Sea Coral Reefs (NODC Accession 0047190)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Digital imagery, MPEGs and JPEGs, captured from mini-DV magnetic videotapes collected with an underwater 3-chip CCD color video camera, deployed from the research...

  18. Cocoa Beach Oculina Banks Clelia Dive 617 2001 Digital Imagery - Captured from Videotapes taken during Submersible Dives to the Oculina Banks Deep Sea Coral Reefs (NODC Accession 0047190)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Digital imagery, MPEGs and JPEGs, captured from mini-DV magnetic videotapes collected with an underwater 3-chip CCD color video camera, deployed from the research...

  19. Cape Canaveral Oculina Banks Clelia Dive 616 2001 Digital Imagery - Captured from Videotapes taken during Submersible Dives to the Oculina Banks Deep Sea Coral Reefs (NODC Accession 0047190)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Digital imagery, MPEGs and JPEGs, captured from mini-DV magnetic videotapes collected with an underwater 3-chip CCD color video camera, deployed from the research...

  20. Jeff's Reef Oculina Banks Clelia Dive 607 2001 Digital Imagery - Captured from Videotapes taken during Submersible Dives to the Oculina Banks Deep Sea Coral Reefs (NODC Accession 0047190)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Digital imagery, MPEGs and JPEGs, captured from mini-DV magnetic videotapes collected with an underwater 3-chip CCD color video camera, deployed from the research...

  1. Water surface modeling from a single viewpoint video.

    Science.gov (United States)

    Li, Chuan; Pickup, David; Saunders, Thomas; Cosker, Darren; Marshall, David; Hall, Peter; Willis, Philip

    2013-07-01

    We introduce a video-based approach for producing water surface models. Recent advances in this field output high-quality results but require dedicated capturing devices and only work in limited conditions. In contrast, our method achieves a good tradeoff between visual quality and production cost: it automatically produces a visually plausible animation using a single viewpoint video as the input. Our approach is based on two discoveries: first, shape from shading (SFS) is adequate to capture the appearance and dynamic behavior of the example water; second, a shallow water model can be used to estimate a velocity field that produces complex surface dynamics. We provide a qualitative evaluation of our method and demonstrate its good performance across a wide range of scenes.
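
    The abstract only names its two ingredients; as a rough, hedged illustration of the second one, the sketch below advances a height field with a linearized shallow-water update on a periodic grid. It is a textbook scheme written for this summary, not the authors' solver, and all names in it are ours.

    ```python
    import numpy as np

    def shallow_water_step(h, u, v, dt=0.01, dx=1.0, g=9.81, H=1.0):
        """One explicit step of the linearized shallow-water equations.

        h is the surface height deviation, u and v are depth-averaged velocities.
        Periodic boundaries via np.roll; illustrative only, not the paper's solver.
        """
        dhdx = (np.roll(h, -1, axis=1) - np.roll(h, 1, axis=1)) / (2 * dx)
        dhdy = (np.roll(h, -1, axis=0) - np.roll(h, 1, axis=0)) / (2 * dx)
        u = u - g * dt * dhdx                      # momentum update
        v = v - g * dt * dhdy
        dudx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * dx)
        dvdy = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * dx)
        h = h - H * dt * (dudx + dvdy)             # continuity update
        return h, u, v

    # Tiny demo: a Gaussian bump relaxing into ripples.
    n = 64
    y, x = np.mgrid[0:n, 0:n]
    h = 0.1 * np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / 20.0)
    u, v = np.zeros_like(h), np.zeros_like(h)
    for _ in range(100):
        h, u, v = shallow_water_step(h, u, v)
    print(h.min(), h.max())
    ```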

  2. Digital cinema video compression

    Science.gov (United States)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  3. Studenterproduceret video til eksamen

    DEFF Research Database (Denmark)

    Jensen, Kristian Nøhr; Hansen, Kenneth

    2016-01-01

    The purpose of this article is to show how learning design and scaffolding can be used to create a framework for student-produced video for exams in higher education. The article takes as its starting point a problem in which the educational institutions must handle and coordinate ... media productions. Drawing on the Larnaca Declaration's perspectives on learning design and primarily on Jerome Bruner's principles of scaffolding, a model is assembled for supporting video production by students in higher education. By applying this model to teaching sessions and course modules, ... the subject-specialist and media-specialist teachers gain a tool for focusing and coordinating their efforts towards the goal of having the students produce and use video for their exams ...

  4. Video material and epilepsy.

    Science.gov (United States)

    Harding, G F; Jeavons, P M; Edson, A S

    1994-01-01

    Nine patients who had epileptic attacks while playing computer games were studied in the laboratory. Patients had an EEG recorded as well as their response to intermittent photic stimulation (IPS) at flash rates of 1-60 fps. In addition, pattern sensitivity was assessed in all patients with a grating pattern. Only 2 patients had no previous history of convulsions, and only 2 had a normal basic EEG. All but 1 were sensitive to IPS, and all but 1 were pattern sensitive. Most patients were male; although this appears to conflict with previously published results regarding the sex ratio in photosensitivity, it was due to the male predominance of video game usage. We compared our results with those reported in the literature. Diagnosing video game epilepsy requires performing an EEG with IPS and pattern stimulation. We propose a standard method of testing.

  5. A Physical Activity Reference Data-Set Recorded from Older Adults Using Body-Worn Inertial Sensors and Video Technology—The ADAPT Study Data-Set

    Directory of Open Access Journals (Sweden)

    Alan Kevin Bourke

    2017-03-01

    Full Text Available Physical activity monitoring algorithms are often developed using conditions that do not represent real-life activities, not developed using the target population, or not labelled to a high enough resolution to capture the true detail of human movement. We have designed a semi-structured supervised laboratory-based activity protocol and an unsupervised free-living activity protocol and recorded 20 older adults performing both protocols while wearing up to 12 body-worn sensors. Subjects’ movements were recorded using synchronised cameras (≥25 fps), both deployed in a laboratory environment to capture the in-lab portion of the protocol, and a body-worn camera for out-of-lab activities. Video labelling of the subjects’ movements was performed by five raters using 11 different category labels. The overall level of agreement was high (percentage of agreement >90.05%; Cohen’s Kappa, corrected kappa, Krippendorff’s alpha and Fleiss’ kappa all >0.86). A total of 43.92 h of activities were recorded, including 9.52 h of in-lab and 34.41 h of out-of-lab activities. A total of 88.37% and 152.01% of planned transitions were recorded during the in-lab and out-of-lab scenarios, respectively. This study has produced the most detailed dataset to date of inertial sensor data, synchronised with high frame-rate (≥25 fps) video-labelled data recorded in a free-living environment from older adults living independently. This dataset is suitable for validation of existing activity classification systems and development of new activity classification algorithms.
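
    For readers who want to reproduce the style of agreement statistics quoted above, the snippet below computes raw percentage agreement and Cohen's kappa for two raters; the label sequences are invented stand-ins for the dataset's frame-level activity labels, not values from the study.

    ```python
    from sklearn.metrics import cohen_kappa_score
    import numpy as np

    # Hypothetical frame-by-frame activity labels from two video raters.
    rater_a = np.array(["sit", "sit", "walk", "walk", "stand", "walk", "sit", "lie"])
    rater_b = np.array(["sit", "sit", "walk", "stand", "stand", "walk", "sit", "lie"])

    agreement = np.mean(rater_a == rater_b)        # raw percentage agreement
    kappa = cohen_kappa_score(rater_a, rater_b)    # chance-corrected agreement
    print(f"agreement = {agreement:.2%}, kappa = {kappa:.3f}")
    ```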

  6. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  7. Mining Conversational Social Video

    OpenAIRE

    Biel, Joan-Isaac

    2013-01-01

    The ubiquity of social media in our daily life, the intense user participation, and the explosion of multimedia content have generated an extraordinary interest from computer and social scientists to investigate the traces left by users to understand human behavior online. From this perspective, YouTube can be seen as the largest collection of audiovisual human behavioral data, among which conversational video blogs (vlogs) are one of the basic formats. Conversational vlogs have evolved fro...

  8. Video Bandwidth Compression System.

    Science.gov (United States)

    1980-08-01

    Fragments of the report's front matter and table of contents: a scaling function is located between the inverse DPCM and inverse transform on the decoder matrix multiplier chips; the decoder hardware listed includes the Bit Unpacker and Inverse DPCM Slave Sync Board, Inverse DPCM Loop Boards, Inverse Transform Board, Composite Video Output Board, Display Refresh Memory (memory section, timing and control), Bit Unpacker and Inverse DPCM, and Inverse Transform Processor.

  9. The Future of Video

    OpenAIRE

    Li, F.

    2016-01-01

    Executive Summary: A range of technological innovations (e.g. smart phones and digital cameras), infrastructural advances (e.g. broadband and 3G/4G wireless networks) and platform developments (e.g. YouTube, Facebook, Snapchat, Instagram, Amazon, and Netflix) are collectively transforming the way video is produced, distributed, consumed, archived – and importantly, monetised. Changes have been observed well beyond the mainstream TV and film industries, and these changes are increasingly ...

  10. Video game addiction, ADHD symptomatology, and video game reinforcement.

    Science.gov (United States)

    Mathews, Christine L; Morrell, Holly E R; Molle, Jon E

    2018-06-06

    Up to 23% of people who play video games report symptoms of addiction. Individuals with attention deficit hyperactivity disorder (ADHD) may be at increased risk for video game addiction, especially when playing games with more reinforcing properties. The current study tested whether level of video game reinforcement (type of game) places individuals with greater ADHD symptom severity at higher risk for developing video game addiction. Adult video game players (N = 2,801; Mean age = 22.43, SD = 4.70; 93.30% male; 82.80% Caucasian) completed an online survey. Hierarchical multiple linear regression analyses were used to test type of game, ADHD symptom severity, and the interaction between type of game and ADHD symptomatology as predictors of video game addiction severity, after controlling for age, gender, and weekly time spent playing video games. ADHD symptom severity was positively associated with increased addiction severity (b = .73 and .68, ps < .05). The relationship between ADHD symptom severity and addiction severity did not depend on the type of video game played or preferred most, ps > .05. Gamers who have greater ADHD symptom severity may be at greater risk for developing symptoms of video game addiction and its negative consequences, regardless of type of video game played or preferred most. Individuals who report ADHD symptomatology and also identify as gamers may benefit from psychoeducation about the potential risk for problematic play.
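
    As a hedged sketch of the kind of moderation analysis described above (not the authors' code, data, or exact covariate set), an interaction model can be fit with statsmodels on synthetic values:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    # Synthetic stand-ins for the survey variables; the values are illustrative only.
    df = pd.DataFrame({
        "adhd": rng.normal(0, 1, n),            # ADHD symptom severity (standardized)
        "game_type": rng.integers(0, 2, n),     # 0 = less, 1 = more reinforcing game
        "age": rng.normal(22, 5, n),
        "hours": rng.normal(15, 6, n),          # weekly time spent playing
    })
    df["addiction"] = 0.7 * df["adhd"] + 0.1 * df["hours"] + rng.normal(0, 1, n)

    # Addiction severity regressed on ADHD severity, game type, and their
    # interaction, controlling for age and weekly hours (gender omitted here).
    model = smf.ols("addiction ~ adhd * game_type + age + hours", data=df).fit()
    print(model.summary().tables[1])
    ```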

  11. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    Science.gov (United States)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, semantic information of lanes is very important. This paper proposes a method of automatic detection of lanes and extraction of semantic information from onboard camera videos. The proposed method firstly detects the edges of lanes by the grayscale gradient direction, and improves the Probabilistic Hough transform to fit them; then, it uses the vanishing point principle to calculate the lane geometrical position, and uses lane characteristics to extract lane semantic information by the classification of decision trees. In the experiment, 216 road video images captured by a camera mounted onboard a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
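
    The pipeline summarized above maps naturally onto standard OpenCV primitives; the fragment below is a minimal approximation of the edge-detection and Hough-fitting stages only, not the authors' improved transform or their decision-tree semantics step, and the file name is a placeholder.

    ```python
    import cv2
    import numpy as np

    def detect_lane_segments(frame_bgr):
        """Rough approximation of the edge-detect + probabilistic-Hough-fit stage."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)                 # gradient-based edge map
        # The probabilistic Hough transform fits line segments to the edge map.
        segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                                   minLineLength=40, maxLineGap=20)
        return [] if segments is None else [seg[0] for seg in segments]

    # Usage on one onboard-video frame (path is a placeholder):
    # cap = cv2.VideoCapture("onboard.mp4"); ok, frame = cap.read()
    # print(detect_lane_segments(frame))
    ```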

  12. Content-based analysis and indexing of sports video

    Science.gov (United States)

    Luo, Ming; Bai, Xuesheng; Xu, Guang-you

    2001-12-01

    An explosion of on-line image and video data in digital form is already well underway. With the exponential rise in interactive information exploration and dissemination through the World-Wide Web, the major inhibitors of rapid access to on-line video data are the management of capture and storage, and content-based intelligent search and indexing techniques. This paper proposes an approach for content-based analysis and event-based indexing of sports video. It includes a novel method to organize shots - classifying shots as close shots and far shots, an original idea of blur extent-based event detection, and an innovative local mutation-based algorithm for caption detection and retrieval. Results on extensive real TV programs demonstrate the applicability of our approach.
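
    A common proxy for the blur-extent cue mentioned above is the variance of the Laplacian of a frame; the snippet below is our illustration of that idea rather than the paper's algorithm, and the threshold is arbitrary.

    ```python
    import cv2

    def blur_score(frame_bgr):
        """Variance of the Laplacian: low values indicate a blurred frame."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def is_blurred(frame_bgr, threshold=100.0):
        # The threshold is illustrative and depends on footage and resolution.
        return blur_score(frame_bgr) < threshold
    ```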

  13. Non-Cooperative Facial Recognition Video Dataset Collection Plan

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Marcia L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Erikson, Rebecca L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lombardo, Nicholas J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-08-31

    The Pacific Northwest National Laboratory (PNNL) will produce a non-cooperative (i.e. not posing for the camera) facial recognition video data set for research purposes to evaluate and enhance facial recognition systems technology. The aggregate data set consists of 1) videos capturing PNNL role players and public volunteers in three key operational settings, 2) photographs of the role players for enrolling in an evaluation database, and 3) ground truth data that documents when the role player is within various camera fields of view. PNNL will deliver the aggregate data set to DHS who may then choose to make it available to other government agencies interested in evaluating and enhancing facial recognition systems. The three operational settings that will be the focus of the video collection effort include: 1) unidirectional crowd flow 2) bi-directional crowd flow, and 3) linear and/or serpentine queues.

  14. Subtitled video tutorials, an accessible teaching material

    Directory of Open Access Journals (Sweden)

    Luis Bengochea

    2012-11-01

    Full Text Available The use of short audio-visual tutorials constitutes a very attractive educational resource for young students, who are widely familiar with this type of format, similar to YouTube clips. Considered as "learning pills", these tutorials are intended to strengthen the understanding of complex concepts that, because of their dynamic nature, cannot be represented through texts or diagrams. However, the inclusion of this type of content in eLearning platforms presents accessibility problems for students with visual or hearing disabilities. This paper describes this problem and shows the way in which a teacher could add captions and subtitles to their videos.

  15. Teaching Children with Autism to Play a Video Game Using Activity Schedules and Game-Embedded Simultaneous Video Modeling

    Science.gov (United States)

    Blum-Dimaya, Alyssa; Reeve, Sharon A.; Reeve, Kenneth F.; Hoch, Hannah

    2010-01-01

    Children with autism have severe and pervasive impairments in social interactions and communication that impact most areas of daily living and often limit independent engagement in leisure activities. We taught four children with autism to engage in an age-appropriate leisure skill, playing the video game Guitar Hero II[TM], through the use of (a)…

  16. Video segmentation using keywords

    Science.gov (United States)

    Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet

    2018-04-01

    At the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieved promising results, but they still depend heavily on annotated frames to distinguish between background and foreground, and creating these frames accurately takes a lot of time and effort. In this paper, we introduce a method to segment objects from video based on keywords given by the user. First, we use a real-time object detection system - YOLOv2 - to identify regions containing objects whose labels match the given keywords in the first frame. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which show that our method can handle many popular classes in the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. We suggest wider testing, in combination with other methods, to improve this result in the future.
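
    The orchestration described above can be sketched as follows; detect_objects and segment_region are hypothetical stand-ins for the YOLOv2 detector and the Pyramid Scene Parsing Network, passed in as callables because the real models are not reproduced here.

    ```python
    def first_frame_masks(frame, keywords, detect_objects, segment_region):
        """Keyword-filtered detection followed by per-region pixel labelling.

        detect_objects(frame) is assumed to yield (box, label) pairs with
        box = (x0, y0, x1, y1); segment_region(crop) is assumed to return a
        foreground/background mask for that crop. Both are placeholders.
        """
        masks = []
        for box, label in detect_objects(frame):
            if label in keywords:                   # keep only requested classes
                x0, y0, x1, y1 = box
                crop = frame[y0:y1, x0:x1]
                masks.append((box, segment_region(crop)))
        return masks
    ```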

  17. Utilizing Video Games

    Science.gov (United States)

    Blaize, L.

    Almost from its birth, the computer and video gaming industry has done an admirable job of communicating the vision and attempting to convey the experience of traveling through space to millions of gamers from all cultures and demographics. This paper will propose several approaches the 100 Year Starship Study can take to use the power of interactive media to stir interest in the Starship and related projects among a global population. It will examine successful gaming franchises from the past that are relevant to the mission and consider ways in which the Starship Study could cooperate with game development studios to bring the Starship vision to those franchises and thereby to the public. The paper will examine ways in which video games can be used to crowd-source research aspects for the Study, and how video games are already considering many of the same topics that will be examined by this Study. Finally, the paper will propose some mechanisms by which the 100 Year Starship Study can establish very close ties with the gaming industry and foster cooperation in pursuit of the Study's goals.

  18. Fish welfare in capture fisheries

    NARCIS (Netherlands)

    Veldhuizen, L.J.L.; Berentsen, P.B.M.; Boer, de I.J.M.; Vis, van de J.W.; Bokkers, E.A.M.

    2018-01-01

    Concerns about the welfare of production animals have extended from farm animals to fish, but an overview of the impact of especially capture fisheries on fish welfare is lacking. This review provides a synthesis of 85 articles, which demonstrates that research interest in fish welfare in capture

  19. The role of depth of encoding in attentional capture

    NARCIS (Netherlands)

    Sasin, Edyta; Nieuwenstein, Mark; Johnson, Addie

    2015-01-01

    The aim of the current study was to examine whether depth of encoding influences attentional capture by recently attended objects. In Experiment 1, participants first had to judge whether a word referred to a living or a nonliving thing (deep encoding condition) or whether the word was written in

  20. Automatic Keyframe Summarization of User-Generated Video

    Science.gov (United States)

    2014-06-01

    over longer periods of space and time. Additionally, the storyline may be less crafted or coherent when compared to professional cinema. As such, shot... attention in videos, whether it be their presence, location, identity, actions, or relationships to other humans. In this regard, automatic human capture... among other things. A person AOC has an identity property. Properties of an AOC that a stakeholder considers important are called POCs.

  1. Living Decently

    OpenAIRE

    Peter Travers; Sue Richardson

    1992-01-01

    Our starting point is to re-examine the concept of poverty, in particular its ethical dimensions, in order to understand more clearly exactly what poverty lines are intended to capture. Our concern with poverty lines is twofold. First, they are not credible measures of poverty, because they treat ethical judgments as matters of technical measurement. Second, they are in practice much more to do with inequality at the bottom end of the income distribution than with poverty. We wish to rehabili...

  2. Foucault's Heterotopia and Children's Everyday Lives.

    Science.gov (United States)

    McNamee, Sara

    2000-01-01

    Discusses Foucault's notion of "heterotopia"--real places but which exist unto themselves, such as a floating ship. Considers data on children's use of computer and video games to apply "heterotopia" to children's everyday social lives. Argues that childhood is subject to increasing boundaries, and that children create…

  3. Treatment Considerations in Internet and Video Game Addiction: A Qualitative Discussion.

    Science.gov (United States)

    Greenfield, David N

    2018-04-01

    Internet and video game addiction has been a steadily developing consequence of modern living. Behavioral and process addictions and particularly Internet and video game addiction require specialized treatment protocols and techniques. Recent advances in addiction medicine have improved our understanding of the neurobiology of substance and behavioral addictions. Novel research has expanded the ways we understand and apply well-established addiction treatments as well as newer therapies specific to Internet and video game addiction. This article reviews the etiology, psychology, and neurobiology of Internet and video game addiction and presents treatment strategies and protocols for addressing this growing problem. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. An evaluation of the production effects of video self-modeling.

    Science.gov (United States)

    O'Handley, Roderick D; Allen, Keith D

    2017-12-01

    A multiple baseline across tasks design was used to evaluate the production effects of video self-modeling on three activities of daily living tasks of an adult male with Autism Spectrum Disorder and Intellectual Disability. Results indicated large increases in task accuracy after the production of a self-modeling video for each task, but before the video was viewed by the participant. Results also indicated small increases when the participant was directed to view the same video self-models before being prompted to complete each task. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Parkinson's Disease Videos

    Medline Plus

    Full Text Available ... Caregivers Living with Parkinson's: While living with PD can be challenging, there are many things you can do to maintain and improve your quality of ...

  6. Creep Measurement Video Extensometer

    Science.gov (United States)

    Jaster, Mark; Vickerman, Mary; Padula, Santo, II; Juhas, John

    2011-01-01

    Understanding material behavior under load is critical to the efficient and accurate design of advanced aircraft and spacecraft. Technologies such as the one disclosed here allow accurate creep measurements to be taken automatically, reducing error. The goal was to develop a non-contact, automated system capable of capturing images that could subsequently be processed to obtain the strain characteristics of these materials during deformation, while maintaining adequate resolution to capture the true deformation response of the material. The measurement system comprises a high-resolution digital camera, computer, and software that work collectively to interpret the image.
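
    The core computation behind such a system is simple; as an illustrative fragment (not the disclosed implementation), axial engineering strain can be derived from the tracked pixel separation of two fiducial marks on the specimen:

    ```python
    import numpy as np

    def engineering_strain(p0_ref, p1_ref, p0_now, p1_now):
        """Strain from the change in distance between two tracked gauge marks.

        p0_ref/p1_ref are marker centroids (x, y) in the reference image and
        p0_now/p1_now are the same markers in the current image, in pixels.
        """
        l0 = np.linalg.norm(np.subtract(p1_ref, p0_ref))     # initial gauge length
        l_now = np.linalg.norm(np.subtract(p1_now, p0_now))  # current gauge length
        return (l_now - l0) / l0

    # Example: markers moved 1 pixel further apart over a 200-pixel gauge length.
    print(engineering_strain((100, 50), (300, 50), (100, 50), (301, 50)))  # 0.005
    ```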

  7. Hand Hygiene Saves Lives: Patient Admission Video (Short Version)

    Centers for Disease Control (CDC) Podcasts

    2008-05-01

    This podcast is for hospital patients and visitors. It emphasizes two key points to help prevent infections: the importance of practicing hand hygiene while in the hospital, and that it's appropriate to ask or remind healthcare providers to practice hand hygiene.  Created: 5/1/2008 by National Center for Preparedness, Detection, and Control of Infectious Diseases (NCPDCID).   Date Released: 4/26/2010.

  8. Hand Hygiene Saves Lives: Patient Admission Video (Short Version)

    Centers for Disease Control (CDC) Podcasts

    This podcast is for hospital patients and visitors. It emphasizes two key points to help prevent infections: the importance of practicing hand hygiene while in the hospital, and that it's appropriate to ask or remind healthcare providers to practice hand hygiene.

  9. Doing Things. A Live Action Video for Preschoolers [Videotape].

    Science.gov (United States)

    Bo Peep Productions, Eureka, MT.

    Some preschool teachers have expressed concern regarding the lack of science instructional material for students age 2 through the preschool years. This videotape was developed to help fill this chasm in our educational system. It contains activities from students' everyday life such as eating, washing, and playing. These daily processes are then…

  10. Evaluation of sea otter capture after the Exxon Valdez oil spill, Prince William Sound, Alaska

    Science.gov (United States)

    Bodkin, James L.; Weltz, F.; Bayha, Keith; Kormendy, Jennifer

    1990-01-01

    After the T/V Exxon Valdez oil spill into Prince William Sound, the U.S. Fish and Wildlife Service and Exxon Company, U.S.A., began rescuing sea otters (Enhydra lutris). The primary objective of this operation was to capture live, oiled sea otters for cleaning and rehabilitation. Between 30 March and 29 May 1989, 139 live sea otters were captured in the sound and transported to rehabilitation centers in Valdez, Alaska. Within the first 15 days of capture operations, 122 (88%) otters were captured. Most sea otters were captured near Knight, Green, and Evans islands in the western sound. The primary capture method consisted of dipnetting otters out of water and off beaches. While capture rates declined over time, survival of captured otters increased as the interval from spill date to capture date increased. The relative degree of oiling observed for each otter captured declined over time. Declining capture rates led to the use of tangle nets. The evidence suggests the greatest threat to sea otters in Prince William Sound occurred within the first 3 weeks after the spill. Thus, in the future, the authors believe rescue efforts should begin as soon as possible after an oil spill in sea otter habitat. Further, preemptive capture and relocation of sea otters in Prince William Sound may have increased the number of otters that could have survived this event.

  11. Online Interactive Video Vignettes (IVVs)

    Science.gov (United States)

    Laws, Priscilla

    2016-03-01

    Interest in on-line learning is increasing rapidly. A few years ago members of the LivePhoto Physics Group received collaborative NSF grants to create short, single-topic, on-line activities that invite introductory physics students to make individual predictions about a phenomenon and test them through video observations or analysis. Each Vignette is designed for web delivery as: (1) an ungraded homework assignment or (2) an exercise to prepare for a class or tutorial session. Sample IVVs are available at the ComPadre website http://www.compadre.org/ivv/. Portions of Vignettes on mechanics topics including Projectile Motion, Circular Motion, the Bullet-Block phenomenon, and Newton's Third Law will be presented. Those attending this talk will be asked to guess what predictions students are likely to make about phenomena in various IVVs. These predictions can be compared to those made by students who completed Vignettes. Finally, research on the impact of Vignettes on student learning and attitudes will be discussed. Co-PI Robert Teese, Rochester Institute of Technology.

  12. Joint denoising, demosaicing, and chromatic aberration correction for UHD video

    Science.gov (United States)

    Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank

    2017-09-01

    High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging and digital entertainment. In the last two decades, we have witnessed a dramatic increase in the spatial resolution and maximal frame rate of video capturing devices. Achieving further resolution increases poses numerous challenges. Because the pixel size is reduced, the amount of light collected per pixel also decreases, leading to an increased noise level. Moreover, the reduced pixel size makes lens imprecisions more pronounced, which especially applies to chromatic aberrations; even when high-quality lenses are used, some chromatic aberration artefacts will remain. Noise also increases further at higher frame rates. To reduce the complexity and price of the camera, a single sensor captures all three colors by relying on Color Filter Arrays. To obtain a full-resolution color image, the missing color components have to be interpolated, i.e. demosaicked, which is more challenging than at lower resolutions due to the increased noise and aberrations. In this paper, we propose a new method which jointly performs chromatic aberration correction, denoising and demosaicking. By reducing all artefacts jointly, we reduce the overall complexity of the system and avoid introducing new artefacts. To reduce possible flicker we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.
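
    For background on the demosaicking step mentioned above, a naive bilinear interpolation of an RGGB Bayer mosaic can be written as follows; this is a simple baseline, far from the joint method the paper proposes, and the kernels are the standard textbook ones.

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_demosaic(raw):
        """Naive bilinear demosaic of an RGGB Bayer mosaic (float array, H x W)."""
        h, w = raw.shape
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
        g_mask = 1 - r_mask - b_mask

        k_rb = np.array([[0.25, 0.5, 0.25],   # fills red/blue from 2 or 4 neighbours
                         [0.50, 1.0, 0.50],
                         [0.25, 0.5, 0.25]])
        k_g = np.array([[0.00, 0.25, 0.00],   # fills green from its 4 neighbours
                        [0.25, 1.00, 0.25],
                        [0.00, 0.25, 0.00]])

        r = convolve(raw * r_mask, k_rb, mode="mirror")
        g = convolve(raw * g_mask, k_g, mode="mirror")
        b = convolve(raw * b_mask, k_rb, mode="mirror")
        return np.dstack([r, g, b])
    ```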

  13. Experimental Neutron Capture Rate Constraint Far from Stability.

    Science.gov (United States)

    Liddick, S N; Spyrou, A; Crider, B P; Naqvi, F; Larsen, A C; Guttormsen, M; Mumpower, M; Surman, R; Perdikakis, G; Bleuel, D L; Couture, A; Crespo Campo, L; Dombos, A C; Lewis, R; Mosby, S; Nikas, S; Prokop, C J; Renstrom, T; Rubio, B; Siem, S; Quinn, S J

    2016-06-17

    Nuclear reactions where an exotic nucleus captures a neutron are critical for a wide variety of applications, from energy production and national security, to astrophysical processes, and nucleosynthesis. Neutron capture rates are well constrained near stable isotopes where experimental data are available; however, moving far from the valley of stability, uncertainties grow by orders of magnitude. This is due to the complete lack of experimental constraints, as the direct measurement of a neutron-capture reaction on a short-lived nucleus is extremely challenging. Here, we report on the first experimental extraction of a neutron capture reaction rate on ^{69}Ni, a nucleus that is five neutrons away from the last stable isotope of Ni. The implications of this measurement on nucleosynthesis around mass 70 are discussed, and the impact of similar future measurements on the understanding of the origin of the heavy elements in the cosmos is presented.

  14. Materials For Gas Capture, Methods Of Making Materials For Gas Capture, And Methods Of Capturing Gas

    KAUST Repository

    Polshettiwar, Vivek

    2013-06-20

    In accordance with the purpose(s) of the present disclosure, as embodied and broadly described herein, embodiments of the present disclosure, in one aspect, relate to materials that can be used for gas (e.g., CO2) capture, methods of making such materials, methods of capturing gas (e.g., CO2), and the like.

  15. Videos Designed to Watch but Audience Required Telling stories is a cliché for best practice in videos. Frontier Scientists, a NSF project titled Science in Alaska: using Multimedia to Support Science Education stressed story but faced audience limitations. FS describes project's story process, reach results, and hypothesizes better scenarios.

    Science.gov (United States)

    O'Connell, E. A.

    2016-12-01

    Telling stories is a cliché for best practice in science videos. It's upheld as a method to capture audience attention in many fields. Findings from neurobiology research show character-driven stories cause the release of the neurochemical oxytocin in the brain. Oxytocin motivates cooperation with others and enhances a sense of empathy, in particular the ability to experience others' emotions. Developing character tension- as in our video design showcasing scientists along with their work- holds the viewers' attention, promotes recall of story, and has the potential to clearly broadcast the feelings and behaviors of the scientists. The brain chemical change should help answer the questions: Why should a viewer care about this science? How does it improve the world, or our lives? Is just a story-driven video the solution to science outreach? Answer: Not in our multi-media world. Frontier Scientists (FS) discovered in its three year National Science Foundation project titled 'Science in Alaska: using Multimedia to Support Science Education': the storied video is only part of the effort. Although FS created from scratch and drove a multimedia national campaign throughout the project, major reach was not achieved. Despite FS' dedicated web site, YouTube channel, weekly blog, monthly press release, Facebook and G+ pages, Twitter activity, contact with scientists' institutions, and TV broadcast, monthly activity on the web site seemed to plateau at about 3000 visitors to the FS website per month. Several factors hampered the effort: Inadequate funding for social media limited the ability of FS to get the word to untapped markets: those whose interest might be sparked by ad campaigns but who do not actively explore unfamiliar agencies' science education content. However, when institutions took advantage of promoting their scientists through the FS videos we saw an uptick in video views and the participating scientists were often contacted for additional stories or were

  16. Video Quality Prediction over Wireless 4G

    KAUST Repository

    Lau, Chun Pong

    2013-04-14

    In this paper, we study the problem of video quality prediction over the wireless 4G network. Video transmission data is collected from a real 4G SCM testbed for investigating factors that affect video quality. After feature transformation and selection on video and network parameters, video quality is predicted by solving a regression problem. Experimental results show that the dominant factor in video quality is the channel attenuation, and video quality can be well estimated by our models with small errors.
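
    The modeling step described here is, in essence, ordinary supervised regression; the sketch below trains a linear model on synthetic data whose feature names are invented and are not those of the 4G SCM testbed.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(1)
    n = 400
    # Invented features: channel attenuation (dB), throughput (Mbps), bitrate (Mbps).
    X = np.column_stack([rng.uniform(60, 110, n),
                         rng.uniform(1, 30, n),
                         rng.uniform(0.5, 8, n)])
    # Synthetic quality score dominated by attenuation, mimicking the reported finding.
    y = 5.0 - 0.03 * X[:, 0] + 0.02 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.2, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)
    print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
    ```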

  17. Video Quality Prediction over Wireless 4G

    KAUST Repository

    Lau, Chun Pong; Zhang, Xiangliang; Shihada, Basem

    2013-01-01

    In this paper, we study the problem of video quality prediction over the wireless 4G network. Video transmission data is collected from a real 4G SCM testbed for investigating factors that affect video quality. After feature transformation and selection on video and network parameters, video quality is predicted by solving a regression problem. Experimental results show that the dominant factor in video quality is the channel attenuation, and video quality can be well estimated by our models with small errors.

  18. Electron capture and stellar collapse

    International Nuclear Information System (INIS)

    Chung, K.C.

    1979-01-01

    In order to investigate the function of electron capture in the phenomenon of pre-supernova gravitational collapse, a hydrodynamic calculation was carried out, coupling the capture, decay and nuclear reaction equation system. A simplified star model (homogeneous model) was adopted, using the Fermi ideal gas approximation for the sea of free electrons and neutrons. The non-simplified treatment from quasi-static evolution to collapse is presented. The capture and beta decay rates, as well as delayed neutron emission, were calculated with a crude beta-decay theory, while the other reaction rates were determined by the usual theories. Preliminary results are presented. (M.C.K.) [pt]

  19. Proton capture by magnetic monopoles

    International Nuclear Information System (INIS)

    Olaussen, K.; Olsen, H.A.; Oeverboe, I.; Osland, P.

    1983-09-01

    In the Kazama-Yang approximation, the lowest monopole-proton bound states have binding energies of 938 MeV, 263 keV, 105 eV, and 0.04 eV. The cross section for radiative capture to these states is, for velocities β = 10^-5 - 10^-3, found to be of the order of 10^-28 - 10^-26 cm^2. For the state that has a binding energy of 263 keV, the capture length in water is 171 x (β/10^-4)^0.48 m. Observation of photons from the capture process would indicate the presence of monopoles. (orig.)

  20. Robust Watermarking of Video Streams

    Directory of Open Access Journals (Sweden)

    T. Polyák

    2006-01-01

    Full Text Available In the past few years there has been an explosion in the use of digital video data. Many people have personal computers at home, and with the help of the Internet users can easily share video files on their computer. This makes possible the unauthorized use of digital media, and without adequate protection systems the authors and distributors have no means to prevent it. Digital watermarking techniques can help these systems to be more effective by embedding secret data right into the video stream. This makes minor changes in the frames of the video, but these changes are almost imperceptible to the human visual system. The embedded information can involve copyright data, access control etc. A robust watermark is resistant to various distortions of the video, so it cannot be removed without affecting the quality of the host medium. In this paper I propose a video watermarking scheme that fulfills the requirements of a robust watermark.
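
    As a toy illustration of the general idea of embedding data imperceptibly in frame pixels (not the robust scheme proposed in the paper, which must survive distortions), a least-significant-bit embed into a single frame could look like this:

    ```python
    import numpy as np

    def embed_bits_lsb(frame, bits):
        """Hide a bit sequence in the least-significant bits of a uint8 frame."""
        flat = frame.reshape(-1).copy()
        if len(bits) > flat.size:
            raise ValueError("payload does not fit in this frame")
        flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | np.asarray(bits, dtype=np.uint8)
        return flat.reshape(frame.shape)

    def extract_bits_lsb(frame, n_bits):
        return frame.reshape(-1)[:n_bits] & 1

    frame = np.random.default_rng(2).integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
    payload = [1, 0, 1, 1, 0, 0, 1, 0]
    marked = embed_bits_lsb(frame, payload)
    print(extract_bits_lsb(marked, len(payload)).tolist())  # [1, 0, 1, 1, 0, 0, 1, 0]
    ```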