WorldWideScience

Sample records for video clips featuring

  1. Using Video Clips To Teach Social Psychology.

    Science.gov (United States)

    Roskos-Ewoldsen, David R.; Roskos-Ewoldsen, Beverly

    2001-01-01

    Explores the effectiveness of using short video clips from feature films to highlight theoretical concepts when teaching social psychology. Reveals that short video clips have many of the same advantages as showing full-length films and demonstrates that students saw the use of these clips as an effective tool. (CMK)

  2. Memory Facilitation effect in Interaction between Video Clips and Music

    OpenAIRE

    吉岡, 賢治; 岩永, 誠

    2007-01-01

    Previous studies examined memory for video clips under conditions that combined the affective qualities of pictures and music. Video clips combined with music of a matching emotional impression were easier to remember. The present study aimed to examine memory facilitation for pictures from two perspectives: the strength of affect and the distribution of processing resources. Participants were 39 undergraduate volunteers who were randomly divided into three experimental conditions. ...

  3. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Vidakis, Nikolaos; Kavallakis, George; Triantafyllidis, Georgios

    2017-01-01

    This paper presents a scheme of creating an emotion index of cover song music video clips by recognizing and classifying facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms which are employed for expression recognition, along with the use of a neural network system using the features extracted by the SIFT algorithm. We also support the need for this fusion of different expression recognition algorithms, because of the way that emotions are linked to facial expressions in music video clips.

  4. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Kavallakis, George; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2017-01-01

    This paper presents a scheme of creating an emotion index of cover song music video clips by recognizing and classifying facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms which are employed for expression recognition, along with the use of a neural network system using the features extracted by the SIFT algorithm. We also support the need for this fusion of different expression recognition algorithms, because of the way that emotions are linked to facial expressions in music video clips.

  5. Teaching professionalism to first year medical students using video clips.

    Science.gov (United States)

    Shevell, Allison Haley; Thomas, Aliki; Fuks, Abraham

    2015-01-01

    Medical schools are confronted with the challenge of teaching professionalism during medical training. The aim of this study was to examine medical students' perceptions of using video clips as a beneficial teaching tool to learn professionalism and other aspects of physicianship. As part of the longitudinal Physician Apprenticeship course at McGill University, first year medical students viewed video clips from the television series ER. The study used qualitative description and thematic analysis to interpret responses to questionnaires, which explored the educational merits of this exercise. Completed questionnaires were submitted by 112 students from 21 small groups. A major theme concerned the students' perceptions of the utility of video clips as a teaching tool, and consisted of comments organized into 10 categories: "authenticity and believability", "thought provoking", "skills and approaches", "setting", "medium", "level of training", "mentorship", "experiential learning", "effectiveness" and "relevance to practice". Another major theme reflected the qualities of physicianship portrayed in video clips, and included seven categories: "patient-centeredness", "communication", "physician-patient relationship", "professionalism", "ethical behavior", "interprofessional practice" and "mentorship". This study demonstrated that students perceived the value of using video clips from a television series as a means of teaching professionalism and other aspects of physicianship.

  6. Student Views on Learning Environments Enriched by Video Clips

    Science.gov (United States)

    Kosterelioglu, Ilker

    2016-01-01

    This study intended to identify student views regarding the enrichment of instructional process via video clips based on the goals of the class. The study was conducted in Educational Psychology classes at Amasya University Faculty of Education during the 2012-2013 academic year. The study was implemented on students in the Classroom Teaching and…

  7. The use of video clips in teleconsultation for preschool children with movement disorders.

    Science.gov (United States)

    Gorter, Hetty; Lucas, Cees; Groothuis-Oudshoorn, Karin; Maathuis, Carel; van Wijlen-Hempel, Rietje; Elvers, Hans

    2013-01-01

    To investigate the reliability and validity of video clips in assessing movement disorders in preschool children. The study group included 27 children with neuromotor concerns. The explorative validity group included children with motor problems (n = 21) or with typical development (n = 9). Hempel screening was used for live observation of the child, full recording, and short video clips. The explorative study tested the validity of the clinical classifications "typical" or "suspect." Agreement between live observation and the full recording was almost perfect. Agreement for the clinical classification "typical" or "suspect" was substantial. Agreement between the full recording and short video clips was substantial to moderate. The explorative validity study, based on short video clips and the presence of a neuromotor developmental disorder, showed substantial agreement. Hempel screening enables reliable and valid observation of video clips, but further research is necessary to demonstrate the predictive value.

  8. Electroencephalography Amplitude Modulation Analysis for Automated Affective Tagging of Music Video Clips

    Directory of Open Access Journals (Sweden)

    Andrea Clerico

    2018-01-01

    Full Text Available The quantity of music content is rapidly increasing and automated affective tagging of music video clips can enable the development of intelligent retrieval, music recommendation, automatic playlist generators, and music browsing interfaces tuned to the users' current desires, preferences, or affective states. To achieve this goal, the field of affective computing has emerged, in particular the development of so-called affective brain-computer interfaces, which measure the user's affective state directly from measured brain waves using non-invasive tools, such as electroencephalography (EEG. Typically, conventional features extracted from the EEG signal have been used, such as frequency subband powers and/or inter-hemispheric power asymmetry indices. More recently, the coupling between EEG and peripheral physiological signals, such as the galvanic skin response (GSR, have also been proposed. Here, we show the importance of EEG amplitude modulations and propose several new features that measure the amplitude-amplitude cross-frequency coupling per EEG electrode, as well as linear and non-linear connections between multiple electrode pairs. When tested on a publicly available dataset of music video clips tagged with subjective affective ratings, support vector classifiers trained on the proposed features were shown to outperform those trained on conventional benchmark EEG features by as much as 6, 20, 8, and 7% for arousal, valence, dominance and liking, respectively. Moreover, fusion of the proposed features with EEG-GSR coupling features showed to be particularly useful for arousal (feature-level fusion and liking (decision-level fusion prediction. Together, these findings show the importance of the proposed features to characterize human affective states during music clip watching.
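
    The pipeline sketched below is only an illustration of the kind of features and classifier this abstract describes: per-electrode amplitude-amplitude cross-frequency coupling fed to a support vector classifier. The sampling rate, band edges, epoch length and random data are assumptions, not the study's settings.

```python
# A rough sketch, not the authors' implementation: amplitude-amplitude
# cross-frequency coupling per electrode, classified with an SVM.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 128  # assumed EEG sampling rate (Hz)

def band_envelope(x, lo, hi, fs=FS):
    # Band-pass filter the signal and return its amplitude envelope.
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def aac_features(epoch):
    # Correlation between band envelopes (theta, alpha, beta) for each channel of an
    # (n_channels, n_samples) epoch: one coupling value per band pair and channel.
    bands = [(4, 8), (8, 13), (13, 30)]
    feats = []
    for ch in epoch:
        envs = [band_envelope(ch, lo, hi) for lo, hi in bands]
        for i in range(len(envs)):
            for j in range(i + 1, len(envs)):
                feats.append(np.corrcoef(envs[i], envs[j])[0, 1])
    return np.array(feats)

# Toy data: 40 trials of 32-channel, 30-second EEG with binary arousal labels.
rng = np.random.default_rng(0)
X = np.array([aac_features(rng.standard_normal((32, 30 * FS))) for _ in range(40)])
y = rng.integers(0, 2, size=40)
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```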

  9. Fault Diagnosis of Motor Bearing by Analyzing a Video Clip

    Directory of Open Access Journals (Sweden)

    Siliang Lu

    2016-01-01

    Full Text Available Conventional bearing fault diagnosis methods require specialized instruments to acquire signals that can reflect the health condition of the bearing. For instance, an accelerometer is used to acquire vibration signals, whereas an encoder is used to measure motor shaft speed. This study proposes a new method for simplifying the instruments for motor bearing fault diagnosis. Specifically, a video clip recording of a running bearing system is captured using a cellphone that is equipped with a camera and a microphone. The recorded video is subsequently analyzed to obtain the instantaneous frequency of rotation (IFR. The instantaneous fault characteristic frequency (IFCF of the defective bearing is obtained by analyzing the sound signal that is recorded by the microphone. The fault characteristic order is calculated by dividing IFCF by IFR to identify the fault type of the bearing. The effectiveness and robustness of the proposed method are verified by a series of experiments. This study provides a simple, flexible, and effective solution for motor bearing fault diagnosis. Given that the signals are gathered using an affordable and accessible cellphone, the proposed method is proven suitable for diagnosing the health conditions of bearing systems that are located in remote areas where specialized instruments are unavailable or limited.
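
    The decision rule at the heart of the method, dividing the instantaneous fault characteristic frequency (IFCF) by the instantaneous frequency of rotation (IFR) and matching the resulting order to the bearing's theoretical fault orders, can be sketched as follows; the order values and tolerance are illustrative assumptions, not the paper's numbers.

```python
# Illustrative sketch of the decision rule, not the paper's code: the fault
# characteristic order is IFCF / IFR, matched to the bearing's theoretical fault orders.
def classify_fault(ifcf_hz, ifr_hz, fault_orders, tol=0.05):
    # Return the fault type whose theoretical order is closest to the measured order.
    measured = ifcf_hz / ifr_hz
    name, order = min(fault_orders.items(), key=lambda kv: abs(kv[1] - measured))
    return name if abs(order - measured) / order < tol else "unknown"

# Hypothetical orders for one bearing geometry: outer race, inner race, ball spin.
FAULT_ORDERS = {"BPFO": 3.57, "BPFI": 5.43, "BSF": 2.32}
print(classify_fault(ifcf_hz=107.1, ifr_hz=30.0, fault_orders=FAULT_ORDERS))  # -> BPFO
```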

  10. High-Frequency Electroencephalographic Activity in Left Temporal Area Is Associated with Pleasant Emotion Induced by Video Clips

    Directory of Open Access Journals (Sweden)

    Jukka Kortelainen

    2015-01-01

    Full Text Available Recent findings suggest that specific neural correlates for the key elements of basic emotions do exist and can be identified by neuroimaging techniques. In this paper, electroencephalogram (EEG is used to explore the markers for video-induced emotions. The problem is approached from a classifier perspective: the features that perform best in classifying person’s valence and arousal while watching video clips with audiovisual emotional content are searched from a large feature set constructed from the EEG spectral powers of single channels as well as power differences between specific channel pairs. The feature selection is carried out using a sequential forward floating search method and is done separately for the classification of valence and arousal, both derived from the emotional keyword that the subject had chosen after seeing the clips. The proposed classifier-based approach reveals a clear association between the increased high-frequency (15–32 Hz activity in the left temporal area and the clips described as “pleasant” in the valence and “medium arousal” in the arousal scale. These clips represent the emotional keywords amusement and joy/happiness. The finding suggests the occurrence of a specific neural activation during video-induced pleasant emotion and the possibility to detect this from the left temporal area using EEG.
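
    A minimal sketch of the general approach follows: spectral band powers per channel as candidate features, then sequential forward floating selection (SFFS) wrapped around a classifier. The band edges, the use of mlxtend's selector and the toy data are assumptions rather than the study's code.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier
from mlxtend.feature_selection import SequentialFeatureSelector as SFS

FS = 256
BANDS = {"alpha": (8, 13), "beta": (13, 32)}

def band_powers(epoch):
    # Average spectral power per band for every channel of an (n_ch, n_samples) epoch.
    f, p = welch(epoch, fs=FS, nperseg=FS)
    return np.concatenate([p[:, (f >= lo) & (f < hi)].mean(axis=1) for lo, hi in BANDS.values()])

rng = np.random.default_rng(1)
X = np.array([band_powers(rng.standard_normal((32, 5 * FS))) for _ in range(60)])
y = rng.integers(0, 2, size=60)  # e.g. low vs. high valence

# Sequential forward floating search over the candidate features.
sffs = SFS(KNeighborsClassifier(5), k_features=8, forward=True, floating=True,
           scoring="accuracy", cv=5)
sffs = sffs.fit(X, y)
print("selected feature indices:", sffs.k_feature_idx_)
```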

  11. Public awareness of melioidosis in Thailand and potential use of video clips as educational tools.

    Directory of Open Access Journals (Sweden)

    Praveen Chansrichavala

    Full Text Available Melioidosis causes more than 1,000 deaths in Thailand each year. Infection occurs via inoculation, ingestion or inhalation of the causative organism (Burkholderia pseudomallei) present in soil and water. Here, we evaluated public awareness of melioidosis using a combination of a population-based questionnaire, a public engagement campaign to obtain video clips made by the public, and viewpoints on these video clips as potential educational tools about the disease and its prevention. A questionnaire was developed to evaluate public awareness of melioidosis, and knowledge about its prevention. From 1 March to 31 April 2012, the questionnaire was delivered to five randomly selected adults in each of 928 districts in Thailand. A video clip contest entitled "Melioidosis, an infectious disease that Thais must know" was run between May and October 2012. The best 12 video clips judged by a contest committee were shown to 71 people at risk from melioidosis (diabetics). Focus group interviews were used to evaluate their perceptions of the video clips. Of 4,203 Thais who completed our study questionnaire, 74% had never heard of melioidosis, and 19% had heard of the disease but had no further knowledge. Most participants in all focus group sessions felt that video clips were beneficial and could positively influence them to increase adherence to recommended preventive behaviours, including drinking boiled water and wearing protective gear if in contact with soil or environmental water. Participants suggested that video clips should be presented in the local dialect with simple words rather than medical terms, in a serious manner, with a doctor as the one presenting the facts, and having detailed pictures of each recommended prevention method. In summary, public awareness of melioidosis in Thailand is very low, and video clips could serve as a useful medium to educate people and promote disease prevention. World Melioidosis Congress 2013, Bangkok, Thailand, 18

  12. The effect of music video clips on adolescent boys' body image, mood, and schema activation.

    Science.gov (United States)

    Mulgrew, Kate E; Volcevski-Kostas, Diana; Rendell, Peter G

    2014-01-01

    There is limited research that has examined experimentally the effects of muscular images on adolescent boys' body image, with no research specifically examining the effects of music television. The aim of the current study was to examine the effects of viewing muscular and attractive singers in music video clips on early, mid, and late adolescent boys' body image, mood, and schema activation. Participants were 180 boys in grade 7 (mean age = 12.73 years), grade 9 (mean age = 14.40 years) or grade 11 (mean age = 16.15 years) who completed pre- and post-test measures of mood and body satisfaction after viewing music videos containing male singers of muscular or average appearance. They also completed measures of schema activation and social comparison after viewing the clips. The results showed that the boys who viewed the muscular clips reported poorer upper body satisfaction, lower appearance satisfaction, lower happiness, and more depressive feelings compared to boys who viewed the clips depicting singers of average appearance. There was no evidence of increased appearance schema activation but the boys who viewed the muscular clips did report higher levels of social comparison to the singers. The results suggest that music video clips are a powerful form of media in conveying information about the male ideal body shape and that negative effects are found in boys as young as 12 years.

  13. 78 FR 78319 - Media Bureau Seeks Comment on Application of the IP Closed Captioning Rules to Video Clips

    Science.gov (United States)

    2013-12-26

    ... COMMISSION 47 CFR Part 79 Media Bureau Seeks Comment on Application of the IP Closed Captioning Rules to... seeks updated information on the closed captioning of video clips delivered by Internet protocol ("IP"), including the extent to which industry has voluntarily captioned IP-delivered video clips. The Commission...

  14. Engaging Students: Using Video Clips of Authentic Client Interactions in Pre-Clinical Veterinary Medical Education.

    Science.gov (United States)

    Hafen, McArthur; Siqueira Drake, Adryanna A; Rush, Bonnie R; Sibley, D Scott

    2015-01-01

    The present study evaluated third-year veterinary medical students' perceptions of a communication lab protocol. The protocol used clips of fourth-year veterinary medical students working with authentic clients. These clips supplemented course material. Clips showed examples of proficient communication as well as times of struggle for fourth-year students. Third-year students were asked to critique interactions during class. One hundred and eight third-year students provided feedback about the communication lab. While initial interest in communication proved low, interest in communication training at the end of the course increased substantially. The majority of students cited watching video clips of authentic client interactions as being an important teaching tool.

  15. Microanalysis on selected video clips with focus on communicative response in music therapy

    DEFF Research Database (Denmark)

    Ridder, Hanne Mette Ochsner

    2007-01-01

    This chapter describes a five-step procedure for video analysis where the topic of investigation is the communicative response of clients in music therapy. In this microanalysis procedure only very short video clips are used, and in order to select these clips an overview of each music therapy session is obtained with the help of a session-graph that is a systematic way of collecting video observations from one music therapy session and combining the data in one figure. The systematic procedures do not demand sophisticated computer equipment; only standard programmes such as Excel and a media player. They are based on individual music therapy work with a population who are difficult to engage in joint activities and who show little response (e.g. persons suffering from severe dementia). The video analysis tools might be relevant to other groups of clients where it is important to form a clear...

  16. Peer teaching in medical sciences through video clips – a case study

    African Journals Online (AJOL)

    Background. Anecdotally, 2015 was declared the year of the selfie. The theme of selfies is used as an opportunity to engage neuroanatomy students by drawing from it as a newly created art form by means of models and video clips. Objectives. To provide a synopsis of student perceptions of a team project to inform further ...

  17. The Use of Video Clips in Teleconsultation for Preschool Children With Movement Disorders

    NARCIS (Netherlands)

    Gorter, Hetty; Lucas, Cees; Groothuis-Oudshoorn, Catharina Gerarda Maria; Maathuis, Carel; van Wijlen-Hempel, Rietje; Elvers, Hans

    2013-01-01

    Purpose: To investigate the reliability and validity of video clips in assessing movement disorders in preschool children. Methods: The study group included 27 children with neuromotor concerns. The explorative validity group included children with motor problems (n = 21) or with typical development

  18. Real-time video streaming of sonographic clips using domestic internet networks and free videoconferencing software.

    Science.gov (United States)

    Liteplo, Andrew S; Noble, Vicki E; Attwood, Ben H C

    2011-11-01

    As the use of point-of-care sonography spreads, so too does the need for remote expert over-reading via telesonography. We sought to assess the feasibility of using familiar, widespread, and cost-effective existing technology to allow remote over-reading of sonograms in real time and to compare 4 different methods of transmission and communication for both the feasibility of transmission and image quality. Sonographic video clips were transmitted using 2 different connections (WiFi and 3G) and via 2 different videoconferencing modalities (iChat [Apple Inc, Cupertino, CA] and Skype [Skype Software Sàrl, Luxembourg]), for a total of 4 different permutations. The clips were received at a remote location and recorded and then scored by expert reviewers for image quality, resolution, and detail. Wireless transmission of sonographic clips was feasible in all cases when WiFi was used and when Skype was used over a 3G connection. Images transmitted via a WiFi connection were statistically superior to those transmitted via 3G in all parameters of quality (average P = .031), and those sent by iChat were superior to those sent by Skype but not statistically so (average P = .057). Wireless transmission of sonographic video clips using inexpensive hardware, free videoconferencing software, and domestic Internet networks is feasible with retention of image quality sufficient for interpretation. WiFi transmission results in greater image quality than transmission by a 3G network.

  19. Microsurgical Clipping of an Unruptured Carotid Cave Aneurysm: 3-Dimensional Operative Video.

    Science.gov (United States)

    Tabani, Halima; Yousef, Sonia; Burkhardt, Jan-Karl; Gandhi, Sirin; Benet, Arnau; Lawton, Michael T

    2017-08-01

    Most aneurysms originating from the clinoidal segment of the internal carotid artery (ICA) are nowadays managed conservatively, treated endovascularly with coiling (with or without stenting) or flow diverters. However, microsurgical clip occlusion remains an alternative. This video demonstrates clip occlusion of an unruptured right carotid cave aneurysm measuring 7 mm in a 39-year-old woman. The patient opted for surgery because of concerns about prolonged antiplatelet use associated with endovascular therapy. After patient consent, a standard pterional craniotomy was performed followed by extradural anterior clinoidectomy. After dural opening and sylvian fissure split, a clinoidal flap was opened to enter the extradural space around the clinoidal segment. The dural ring was dissected circumferentially, freeing the medial wall of the ICA down to the sellar region and mobilizing the ICA out of its canal of the clinoidal segment. With the aneurysm neck in view, the aneurysm was clipped with a 45° angled fenestrated clip over the ICA. Indocyanine green angiography confirmed no further filling of the aneurysm and patency of the ICA. Complete aneurysm occlusion was confirmed with postoperative angiography, and the patient had no neurologic deficits (Video 1). This case demonstrates the importance of anterior clinoidectomy and thorough distal dural ring dissection for effective clipping of carotid cave aneurysms. Control of venous bleeding from the cavernous sinus with fibrin glue injection simplifies the dissection, which should minimize manipulation of the optic nerve. Knowledge of this anatomy and proficiency with these techniques is important in an era of declining open aneurysm cases. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Short term exposure to attractive and muscular singers in music video clips negatively affects men's body image and mood.

    Science.gov (United States)

    Mulgrew, K E; Volcevski-Kostas, D

    2012-09-01

    Viewing idealized images has been shown to reduce men's body satisfaction; however no research has examined the impact of music video clips. This was the first study to examine the effects of exposure to muscular images in music clips on men's body image, mood and cognitions. Ninety men viewed 5 min of clips containing scenery, muscular or average-looking singers, and completed pre- and posttest measures of mood and body image. Appearance schema activation was also measured. Men exposed to the muscular clips showed poorer posttest levels of anger, body and muscle tone satisfaction compared to men exposed to the scenery or average clips. No evidence of schema activation was found, although potential problems with the measure are noted. These preliminary findings suggest that even short term exposure to music clips can produce negative effects on men's body image and mood. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Facial attractiveness ratings from video-clips and static images tell the same story.

    Science.gov (United States)

    Rhodes, Gillian; Lie, Hanne C; Thevaraja, Nishta; Taylor, Libby; Iredell, Natasha; Curran, Christine; Tan, Shi Qin Claire; Carnemolla, Pia; Simmons, Leigh W

    2011-01-01

    Most of what we know about what makes a face attractive and why we have the preferences we do is based on attractiveness ratings of static images of faces, usually photographs. However, several reports that such ratings fail to correlate significantly with ratings made to dynamic video clips, which provide richer samples of appearance, challenge the validity of this literature. Here, we tested the validity of attractiveness ratings made to static images, using a substantial sample of male faces. We found that these ratings agreed very strongly with ratings made to videos of these men, despite the presence of much more information in the videos (multiple views, neutral and smiling expressions and speech-related movements). Not surprisingly, given this high agreement, the components of video-attractiveness were also very similar to those reported previously for static-attractiveness. Specifically, averageness, symmetry and masculinity were all significant components of attractiveness rated from videos. Finally, regression analyses yielded very similar effects of attractiveness on success in obtaining sexual partners, whether attractiveness was rated from videos or static images. These results validate the widespread use of attractiveness ratings made to static images in evolutionary and social psychological research. We speculate that this validity may stem from our tendency to make rapid and robust judgements of attractiveness.

  2. The Narrative Analysis of the Discourse on Homosexual BDSM Pornographic Video Clips of The Manhunt Variety

    Directory of Open Access Journals (Sweden)

    Milica Vasić

    2016-02-01

    Full Text Available In this paper we have analyzed the ideal-type model of the story which represents the basic framework of action in Manhunt category pornographic internet video clips, using narrative analysis methods of Claude Bremond. The results have shown that it is possible to apply the theoretical model to elements of visual and mass culture, with certain modifications and taking into account the wider context of the narrative itself. The narrative analysis indicated the significance of researching categories of pornography on the internet, because it leads to a deep analysis of the distribution of power in relations between the categories of heterosexual and homosexual within a virtual environment.

  3. Judgments of Nonverbal Behaviour by Children with High-Functioning Autism Spectrum Disorder: Can They Detect Signs of Winning and Losing from Brief Video Clips?

    Science.gov (United States)

    Ryan, Christian; Furley, Philip; Mulhall, Kathleen

    2016-01-01

    Typically developing children are able to judge who is winning or losing from very short clips of video footage of behaviour between active match play across a number of sports. Inferences from "thin slices" (short video clips) allow participants to make complex judgments about the meaning of posture, gesture and body language. This…

  4. The low interaction of viewers of video clips on the internet The case study of YouTube Spain

    Directory of Open Access Journals (Sweden)

    Ana Jorge-Alonso, Ph.D.

    2010-01-01

    Full Text Available This research study demonstrates that viewers of video clips on the Internet adopt a viewing attitude that is as passive as the one adopted when watching unidirectional and traditional media. Research on the attitude of the viewer of video clips on the Internet is almost non-existent. The authors dispute the widespread myths and the few studies that suggest that most Internet users exercise the interactive potentiality of this medium. The article focuses on Youtube Spain as the main referent of video consumption over the Internet, and demonstrates the initial hypothesis with quantitative data. Their methodology studies the behaviour of Internet users by analysing 278 videos and 650,884,405 visits registered until the end of 2009. These results shed light on many questions, and open other interesting lines of research.

  5. Changes in salivary testosterone concentrations and subsequent voluntary squat performance following the presentation of short video clips.

    Science.gov (United States)

    Cook, Christian J; Crewther, Blair T

    2012-01-01

    Previous studies have shown that visual images can produce rapid changes in testosterone concentrations. We explored the acute effects of video clips on salivary testosterone and cortisol concentrations and subsequent voluntary squat performance in highly trained male athletes (n=12). Saliva samples were collected on 6 occasions immediately before and 15 min after watching a brief video clip (approximately 4 min in duration) on a computer screen. The watching of a sad, erotic, aggressive, training motivational, humorous or a neutral control clip was randomised. Subjects then performed a squat workout aimed at producing a 3 repetition maximum (3RM) lift. Significant (P ...) effects were reported; the pre-workout environment offers an opportunity for understanding the outcomes of hormonal change, athlete behaviour and subsequent voluntary performance. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Using Relevant Video Clips from Popular Media to Enhance Learning in Large Introductory Psychology Classes: A Pilot Study

    Science.gov (United States)

    Rowland-Bryant, Emily; Skinner, Amy L.; Dixon, Lee; Skinner, Christopher H.; Saudargas, Richard

    2011-01-01

    The purpose of this study was to enhance students' learning by supplementing a multimedia lesson with interesting and relevant video clips (VCs). Undergraduate students watched a target material PowerPoint (tmPP) presentation with voice-over lecture covering the Big Five trait theory of personality. Students were randomly assigned to one of four…

  7. The Application of Video Clips with Small Group and Individual Activities to Improve Young Learners' Speaking Performance

    Science.gov (United States)

    Muslem, Asnawi; Mustafa, Faisal; Usman, Bustami; Rahman, Aulia

    2017-01-01

    This study investigated whether the application of video clips with small groups or with individual teaching-learning activities improved the speaking skills of young EFL learners the most; accordingly a quasi-experimental study with a pre-test, post-test design was done. The instrument used in this study was a test in the form of an oral test or…

  8. Integrating customised video clips into The veterinary nursing curriculum to enhance practical competency training and the development of student confidence

    OpenAIRE

    Dunne, Karen; Brereton, Bernadette; Bree, Ronan; Dallat, John

    2015-01-01

    Competency training is a critical aspect of veterinary nursing education, as graduates must complete a practical competency assessment prior to registration as a veterinary nurse. Despite this absolute requirement for practical training across a range of domestic animal species, there is a lack of published literature on optimal teaching approaches. The aim of this project was to assess the value of customised video clips in the practical skills training of veterinary nursing students. The ef...

  9. Habitat diversity in the Northeastern Gulf of Mexico: Selected video clips from the Gulfstream Natural Gas Pipeline digital archive

    Science.gov (United States)

    Raabe, Ellen A.; D'Anjou, Robert; Pope, Domonique K.; Robbins, Lisa L.

    2011-01-01

    This project combines underwater video with maps and descriptions to illustrate diverse seafloor habitats from Tampa Bay, Florida, to Mobile Bay, Alabama. A swath of seafloor was surveyed with underwater video to 100 meters (m) water depth in 1999 and 2000 as part of the Gulfstream Natural Gas System Survey. The U.S. Geological Survey (USGS) in St. Petersburg, Florida, in cooperation with Eckerd College and the Florida Department of Environmental Protection (FDEP), produced an archive of analog-to-digital underwater movies. Representative clips of seafloor habitats were selected from hundreds of hours of underwater footage. The locations of video clips were mapped to show the distribution of habitat and habitat transitions. The numerous benthic habitats in the northeastern Gulf of Mexico play a vital role in the region's economy, providing essential resources for tourism, natural gas, recreational water sports (fishing, boating, scuba diving), materials, fresh food, energy, a source of sand for beach renourishment, and more. These submerged natural resources are important to the economy but are often invisible to the general public. This product provides a glimpse of the seafloor with sample underwater video, maps, and habitat descriptions. It was developed to depict the range and location of seafloor habitats in the region but is limited by depth and by the survey track. It should not be viewed as comprehensive, but rather as a point of departure for inquiries and appreciation of marine resources and seafloor habitats. Further information is provided in the Resources section.

  10. Forest Fire Smoke Video Detection Using Spatiotemporal and Dynamic Texture Features

    Directory of Open Access Journals (Sweden)

    Yaqin Zhao

    2015-01-01

    Full Text Available Smoke detection is a key part of fire recognition in forest fire surveillance video, since the smoke produced by forest fires is visible well before the flames. The performance of smoke video detection algorithms is often degraded by smoke-like objects such as heavy fog. This paper presents a novel forest fire smoke video detection method based on spatiotemporal features and dynamic texture features. First, Kalman filtering is used to segment candidate smoke regions. Then, each candidate smoke region is divided into small blocks. The spatiotemporal energy feature of each block is extracted by computing the energy features of its 8-neighboring blocks in the current frame and its two adjacent frames. The flutter direction angle is computed by analyzing the centroid motion of the segmented regions in one candidate smoke video clip. Local Binary Motion Pattern (LBMP) is used to define dynamic texture features of smoke videos. Finally, smoke video is recognized by the AdaBoost algorithm. The experimental results show that the proposed method can effectively detect smoke images recorded from different scenes.
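
    A rough sketch of the final classification stage is given below: hand-crafted block statistics from candidate smoke regions fed to AdaBoost. The simple temporal-difference features stand in for the paper's spatiotemporal energy and LBMP descriptors, and the toy data are assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def block_energy_features(block_stack):
    # Simple spatiotemporal statistics for a (n_frames, h, w) stack of one candidate block.
    diffs = np.diff(block_stack.astype(float), axis=0)
    return np.array([block_stack.mean(), block_stack.std(),
                     np.abs(diffs).mean(), (diffs ** 2).mean()])

rng = np.random.default_rng(5)
blocks = rng.integers(0, 256, size=(200, 3, 16, 16))   # toy candidate blocks (3 frames each)
X = np.array([block_energy_features(b) for b in blocks])
y = rng.integers(0, 2, size=200)                       # 1 = smoke, 0 = non-smoke
clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
print("training accuracy:", clf.score(X, y))
```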

  11. Adventure Racing and Organizational Behavior: Using Eco Challenge Video Clips to Stimulate Learning

    Science.gov (United States)

    Kenworthy-U'Ren, Amy; Erickson, Anthony

    2009-01-01

    In this article, the Eco Challenge race video is presented as a teaching tool for facilitating theory-based discussion and application in organizational behavior (OB) courses. Before discussing the intricacies of the video series itself, the authors present a pedagogically based rationale for using reality TV-based video segments in a classroom…

  12. Validation of the fifth edition BI-RADS ultrasound lexicon with comparison of fourth and fifth edition diagnostic performance using video clips

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Jung Hyun; Kim, Min Jung; Lee, Hye Sun [Yonsei University College of Medicine, Seoul (Korea, Republic of); Kim, Sung Hun [Dept. of Radiology, Seoul St. Mary's Hospital, The Catholic University of Korea, Seoul (Korea, Republic of); Youk, Ji Hyun [Dept. of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul (Korea, Republic of); Jeong, Sun Hye [Dept. of Radiology, Soonchunhyang University Bucheon Hospital, Bucheon (Korea, Republic of); Kim, You Me [Dept. of Radiology, Dankook University Hospital, Dankook University College of Medicine, Cheonan (Korea, Republic of)

    2016-08-15

    The aim of this study was to evaluate the positive predictive value (PPV) and the diagnostic performance of the ultrasonographic descriptors in the fifth edition of BI-RADS, in comparison with the fourth edition, using video clips. From September 2013 to July 2014, 80 breast masses in 74 women (mean age, 47.5±10.7 years) from five institutions of the Korean Society of Breast Imaging were included. Two radiologists individually reviewed the static and video images and analyzed the images according to the fourth and fifth editions of BI-RADS. The PPV of each descriptor was calculated and diagnostic performances between the fourth and fifth editions were compared. Of the 80 breast masses, 51 (63.8%) were benign and 29 (36.2%) were malignant. Suspicious ultrasonographic features such as irregular shape, non-parallel orientation, angular or spiculated margins, and combined posterior features showed higher PPV in both editions (all P<0.05). No significant differences were found in the diagnostic performances between the two editions (all P>0.05). The area under the receiver operating characteristic curve was higher for the fourth edition than for the fifth (0.708 vs. 0.690), but the difference was not significant (P=0.416). The fifth edition of the BI-RADS ultrasound lexicon showed comparable performance to the fourth edition and can be useful in the differential diagnosis of breast masses using ultrasonography.
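
    The two quantities reported in the study, per-descriptor positive predictive value and the area under the ROC curve, can be illustrated on toy data (not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

malignant = np.array([1, 0, 1, 1, 0, 0, 1, 0])         # toy ground truth for 8 masses
irregular_shape = np.array([1, 0, 1, 1, 1, 0, 1, 0])   # descriptor present (1) / absent (0)
bi_rads_category = np.array([5, 3, 4, 5, 4, 2, 5, 3])  # assessment category used as score

ppv = malignant[irregular_shape == 1].mean()           # PPV = TP / (TP + FP)
auc = roc_auc_score(malignant, bi_rads_category)       # area under the ROC curve
print(f"PPV(irregular shape) = {ppv:.2f}, AUC = {auc:.2f}")
```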

  13. CATEGORIZATION OF PORNOGRAPHIC VIDEO CLIPS ON THE INTERNET: A COGNITIVE ANTHROPOLOGICAL APPROACH

    OpenAIRE

    Vucurovic Vasic, Milica

    2013-01-01

    Anthropological study of the Internet pornography can refer to the cultural communication between the creators of the contents and authors of pornographic sites, as well as between the authors of sites and users, the latter being more relevant to this work as it assumes supracultural activities on the Internet and comprises the pornography users as a distinct population. The aim of this study is to determine, through the categorization of porn clips in the Internet, cognitive schemes and cult...

  14. Video Stabilization Using Feature Point Matching

    Science.gov (United States)

    Kulkarni, Shamsundar; Bormane, D. S.; Nalbalwar, S. L.

    2017-01-01

    Video capture by non-professionals often leads to unwanted effects such as image distortion and image blurring. Hence, many researchers study such drawbacks to enhance the quality of videos. In this paper an algorithm is proposed to stabilize jittery videos: a stable output video is attained without the jitter caused by the shaking of a handheld camera during recording. Firstly, salient points in each frame of the input video are identified and tracked; the estimated motion is then optimized to stabilize the video, and this optimization step governs the quality of the stabilization. The method has shown good results in terms of stabilization and removed distortion from output videos recorded in different circumstances.
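
    A minimal OpenCV sketch of the general idea, not the authors' exact algorithm, is shown below: track salient points between consecutive frames, estimate a rigid transform, smooth the accumulated trajectory and derive a corrective motion per frame. The input filename and smoothing radius are assumptions.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("shaky.mp4")  # assumed input file
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read input video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
transforms = []  # per-frame (dx, dy, rotation) between consecutive frames

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect salient points in the previous frame and track them into the current one.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=30)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_prev, good_next = pts[status.flatten() == 1], nxt[status.flatten() == 1]
    # Estimate the inter-frame rigid motion from the matched points.
    m, _ = cv2.estimateAffinePartial2D(good_prev, good_next)
    transforms.append([m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])])
    prev_gray = gray

# Smooth the cumulative camera trajectory with a moving average to remove jitter.
trajectory = np.cumsum(transforms, axis=0)
kernel = np.ones(15) / 15  # smoothing radius is an assumption
smoothed = np.column_stack([np.convolve(trajectory[:, i], kernel, mode="same") for i in range(3)])
corrections = np.array(transforms) + (smoothed - trajectory)
# Each corrected (dx, dy, angle) row can be rebuilt into a 2x3 affine matrix and
# applied to the corresponding frame with cv2.warpAffine to produce the stable output.
```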

  15. Video clip editing and duration of the shot: analysis of the editing in ‘I Took a Pill in Ibiza’ (Mike Posner)

    Directory of Open Access Journals (Sweden)

    José Patricio Pérez Rufí

    2017-11-01

    Full Text Available Music videos are an audiovisual format that is created by the music industry for a performer's public image and commercial purposes. This research aims to describe common practices in the production and editing of current music videos regarding one particular aspect: the duration of the shot according to its frame. We carried out a textual analysis of Mike Posner's video clip I Took a Pill in Ibiza (Jon Jon Augustavo, 2016). In this case, the most frequent shots in the video clip are close shots, and the average duration is around two seconds. The average length of the shots creates a scale of durations which is equivalent to the scale of the size of the frames.

  16. Microsurgery Simulator of Cerebral Aneurysm Clipping with Interactive Cerebral Deformation Featuring a Virtual Arachnoid.

    Science.gov (United States)

    Shono, Naoyuki; Kin, Taichi; Nomura, Seiji; Miyawaki, Satoru; Saito, Toki; Imai, Hideaki; Nakatomi, Hirofumi; Oyama, Hiroshi; Saito, Nobuhito

    2017-08-01

    A virtual reality simulator for aneurysmal clipping surgery is an attractive research target for neurosurgeons. Brain deformation is one of the most important functionalities necessary for an accurate clipping simulator and is vastly affected by the status of the supporting tissue, such as the arachnoid membrane. However, no virtual reality simulator implementing the supporting tissue of the brain has yet been developed. To develop a virtual reality clipping simulator possessing interactive brain deforming capability closely dependent on arachnoid dissection and apply it to clinical cases. Three-dimensional computer graphics models of cerebral tissue and surrounding structures were extracted from medical images. We developed a new method for modifiable cerebral tissue complex deformation by incorporating a nonmedical image-derived virtual arachnoid/trabecula in a process called multitissue integrated interactive deformation (MTIID). MTIID made it possible for cerebral tissue complexes to selectively deform at the site of dissection. Simulations for 8 cases of actual clipping surgery were performed before surgery and evaluated for their usefulness in surgical approach planning. Preoperatively, each operative field was precisely reproduced and visualized with the virtual brain retraction defined by users. The clear visualization of the optimal approach to treating the aneurysm via an appropriate arachnoid incision was possible with MTIID. A virtual clipping simulator mainly focusing on supporting tissues and less on physical properties seemed to be useful in the surgical simulation of cerebral aneurysm clipping. To our knowledge, this article is the first to report brain deformation based on supporting tissues.

  17. Facial esthetics and the assignment of personality traits before and after orthognathic surgery rated on video clips.

    Science.gov (United States)

    Sinko, Klaus; Jagsch, Reinhold; Drog, Claudio; Mosgoeller, Wilhelm; Wutzl, Arno; Millesi, Gabriele; Klug, Clemens

    2018-01-01

    Typically, before and after surgical correction, faces are assessed on still images by surgeons, orthodontists, the patients, and family members. We hypothesized that judgment of faces in motion and by naïve raters may more closely reflect the impact on patients' real life, and the treatment impact on e.g. career chances. Therefore we assessed faces from dysgnathic patients (Class II, III and Laterognathia) on video clips. Class I faces served as anchor and controls. Each patient's face was assessed twice before and after treatment in changing sequence, by 155 naïve raters of similar age to the patients. The raters provided independent estimates on aesthetic trait pairs like ugly/beautiful, and personality trait pairs like dominant/flexible. Furthermore the perception of attractiveness, intelligence, health, the person's erotic aura, faithfulness, and five additional items were rated. We estimated the significance of the perceived treatment-related differences and the respective effect size by general linear models for repeated measures. The obtained results were comparable to our previous rating on still images. There was an overall trend that faces in video clips are rated along common stereotypes to a lesser extent than photographs. We observed significant class differences and treatment-related changes in most aesthetic traits (e.g. beauty, attractiveness); these were comparable for intelligence, erotic aura and to some extent healthy appearance. While some personality traits (e.g. faithfulness) did not differ between the classes and between baseline and after treatment, we found that the intervention significantly and effectively altered the perception of the personality trait self-confidence. The effect size was highest in Class III patients, smallest in Class II patients, and in between for patients with Laterognathia. All dysgnathic patients benefitted from orthognathic surgery. We conclude that motion can mitigate marked stereotypes but does not entirely

  18. Feature Quantization and Pooling for Videos

    Science.gov (United States)

    2014-05-01

    ... similar. 1.2 Context: Video has become a very popular medium for communication, entertainment, and science. Videos are widely used in educational... The same approach applied to action classification from YouTube videos of sport events shows that BoW approaches on real-world data sets need further... dog videos, where the camera also tracks the people and animals. In Figure 4.38 we compare across action classes how well each segmentation...

  19. Semisupervised feature selection via spline regression for video semantic recognition.

    Science.gov (United States)

    Han, Yahong; Yang, Yi; Yan, Yan; Ma, Zhigang; Sebe, Nicu; Zhou, Xiaofang

    2015-02-01

    To improve both the efficiency and accuracy of video semantic recognition, we can perform feature selection on the extracted video features to select a subset of features from the high-dimensional feature set for a compact and accurate video data representation. Provided the number of labeled videos is small, supervised feature selection could fail to identify the relevant features that are discriminative to target classes. In many applications, abundant unlabeled videos are easily accessible. This motivates us to develop semisupervised feature selection algorithms to better identify the relevant video features, which are discriminative to target classes, by effectively exploiting the information underlying the huge amount of unlabeled video data. In this paper, we propose a framework of video semantic recognition by semisupervised feature selection via spline regression (S(2)FS(2)R). Two scatter matrices are combined to capture both the discriminative information and the local geometry structure of labeled and unlabeled training videos: a within-class scatter matrix encoding discriminative information of labeled training videos and a spline scatter output from a local spline regression encoding data distribution. An l2,1-norm is imposed as a regularization term on the transformation matrix to ensure it is sparse in rows, making it particularly suitable for feature selection. To efficiently solve S(2)FS(2)R, we develop an iterative algorithm and prove its convergence. In the experiments, three typical tasks of video semantic recognition, such as video concept detection, video classification, and human action recognition, are used to demonstrate that the proposed S(2)FS(2)R achieves better performance compared with the state-of-the-art methods.
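
    A tiny illustration of the role of the l2,1-norm mentioned above: when the transformation matrix is regularized to be row-sparse, features whose rows have near-zero norm can be discarded. The matrix here is random and purely illustrative, not the algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(6)
W = rng.standard_normal((50, 4))                     # 50 features mapped to 4 classes
W[rng.choice(50, size=40, replace=False)] *= 0.01    # simulate row sparsity

row_norms = np.linalg.norm(W, axis=1)
l21_norm = row_norms.sum()                           # ||W||_2,1 = sum of row-wise l2 norms
selected = np.argsort(row_norms)[-10:]               # keep the 10 strongest rows (features)
print(f"||W||_2,1 = {l21_norm:.2f}, selected feature indices: {sorted(selected.tolist())}")
```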

  20. Impact of Video Clips on the Development of the Listening Skills in English Classes: A Case Study of Turkish Students

    Science.gov (United States)

    Tekin, Inan; Parmaksiz, Ramazan Sükrü

    2016-01-01

    The purpose of this research is to examine whether using feature films in video lessons has an effect on the development of listening skills of students or not. The research has been conducted at one of the state universities in Black Sea region of Turkey with 126 students. The students watched and listened to only the sentences taken from…

  1. Claim of Clips to change the World

    OpenAIRE

    鼓, みどり

    2016-01-01

    Do video clips make claims about political or social issues? How do the artists represent their opinions in their clips? This paper focuses on the claims made by clips about environmental issues, victims of war and racism. Firstly we investigate clips addressing environmental issues and victims of war. Their message is quite political even though they were made for promotion; the clips present the opinions of the artists. Secondly we look into claims about racism in clips, especially Michael Jackson's. His message is quite s...

  2. Effects of teaching communication skills using a video clip on a smart phone on communication competence and emotional intelligence in nursing students.

    Science.gov (United States)

    Choi, Yeonja; Song, Eunju; Oh, Eunjung

    2015-04-01

    This study aims to evaluate communication skills training for nursing students using a video clip on a smartphone. The study settings were the nursing departments of two universities in South Korea. The study used a quasi-experimental, nonequivalent control group pre-posttest design. The experimental and control groups consisted of second-year nursing students who had taken a communication course; the experimental group included 45 students and the control group 42 students. The experimental group improved significantly more than the control group in communication competence and emotional intelligence. Using a video clip on a smartphone is a helpful method for teaching communication. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Having students create short video clips to help transition from naïve conceptions about mechanics to true Newtonian physics

    Science.gov (United States)

    Corten-Gualtieri, Pascale; Ritter, Christian; Plumat, Jim; Keunings, Roland; Lebrun, Marcel; Raucent, Benoit

    2016-07-01

    Most students enter their first university physics course with a system of beliefs and intuitions which are often inconsistent with the Newtonian frame of reference. This article presents an experiment of collaborative learning aiming at helping first-year students in an engineering programme to transition from their naïve intuition about dynamics to the Newtonian way of thinking. In a first activity, students were asked to critically analyse the contents of two video clips from the point of view of Newtonian mechanics. In a second activity, students had to design and realise their own video clip to illustrate a given aspect of Newtonian mechanics. The preparation of the scenario for the second activity required looking up and assimilating scientific knowledge. The efficiency of the activity was assessed on an enhanced version of the statistical analysis method proposed by Hestenes and Halloun, which relies on a pre-test and a post-test to measure individual learning.

  4. Superior canal dehiscence with tegmen defect revealed by otoscopy: Video clip demonstration of pulsatile tympanic membrane.

    Science.gov (United States)

    Castellucci, Andrea; Brandolini, Cristina; Piras, Gianluca; Fernandez, Ignacio Javier; Giordano, Davide; Pernice, Carmine; Modugno, Giovanni Carlo; Pirodda, Antonio; Ferri, Gian Gaetano

    2018-02-01

    Superior canal dehiscence is a pathologic condition of the otic capsule acting as an aberrant window of the inner ear. It results in a reduction of inner ear impedance and in abnormal exposure of the labyrinthine neuroepithelium to the action of the surrounding structures. The sum of these phenomena leads to the onset of typical cochleo-vestibular symptoms and signs. Among them, pulsatile tinnitus has been attributed to a direct transmission of intracranial vascular activity to the labyrinthine fluids. We present the first video-otoscopic documentation of spontaneous pulse-synchronous movements of the tympanic membrane in two patients with superior canal dehiscence. A pulsating eardrum may represent an additional sign of a third-mobile-window lesion. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Feature Weighting via Optimal Thresholding for Video Analysis (Open Access)

    Science.gov (United States)

    2014-03-03

    ... combine multiple descriptors. For example, the STIP [10] feature combines the HOG descriptor for shape information and the HOF descriptor for motion information... The Dense Trajectories feature [23] is an integration of descriptors of trajectory, HOG, HOF and Motion Boundary Histogram (MBH). In the video action... three features provided by [5]: STIP features with a 5,000-dimensional BoW representation, SIFT features extracted every two seconds with 5,000

  6. Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features.

    Science.gov (United States)

    Zhu, Feida; Yan, Zhicheng; Bu, Jiajun; Yu, Yizhou

    2017-05-10

    Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transform limits the range of artistic styles they can represent. On the other hand, stylistic enhancement needs to apply distinct adjustments to various semantic regions. Such an ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of datasets for image stylization as well as a diverse set of video clips demonstrate the effectiveness of our deep learning architecture.

  7. BB-CLIPS: Blackboard extensions to CLIPS

    Science.gov (United States)

    Orchard, Robert A.; Diaz, Aurora C.

    1990-01-01

    This paper describes a set of extensions made to CLIPS version 4.3 that provide capabilities similar to the blackboard control architecture described by Hayes-Roth. There are three types of additions made to the CLIPS shell. The first extends the syntax to allow the specification of blackboard locations for CLIPS facts. The second implements changes in CLIPS rules and the agenda manager that provide some of the powerful features of the blackboard control architecture. These additions provide dynamic prioritization of rules on the agenda allowing control strategies to be implemented that respond to the changing goals of the system. The final category of changes support the needs of continuous systems, including the ability for CLIPS to continue execution with an empty agenda.

  8. COMPOSITIONAL AND CONTENT-RELATED PARTICULARITIES OF POLITICAL MEDIA TEXTS (THROUGH THE EXAMPLE OF THE TEXTS OF POLITICAL VIDEO CLIPS ISSUED BY THE CANDIDATES FOR PRESIDENCY IN FRANCE IN 2017)

    Directory of Open Access Journals (Sweden)

    Dmitrieva, A.V.

    2017-09-01

    Full Text Available The article examines the texts of political advertising video clips issued by the candidates for presidency in France during the campaign before the first round of elections in 2017. The mentioned examples of media texts are analysed from the compositional point of view as well as from that of the content particularities which are directly connected to the text structure. In general, the majority of the studied clips have a similar structure and consist of three parts: introduction, main part and conclusion. However, as a result of the research, a range of advantages marking well-structured videos was revealed. These include: addressing the voters and stating the speech topic clearly at the beginning of the clip, a relevant attention-grabbing opening phrase, consistency and clarity of the information presentation, appropriate use of additional video plots, conclusion at the end of the clip.

  9. Evaluation of Different Features for Face Recognition in Video

    Science.gov (United States)

    2014-09-01

    Figure caption: Graph presents the performance comparison among different algorithms implemented in OpenCV (Fisherfaces, Eigenfaces and LBPH); all use... for face recognition in video, in particular those available in the OpenCV library [13]. Comparative performance analysis of these algorithms is... videos. The first one used a generic class that exists in OpenCV (version 2.4.1), called FeatureDetector, which allowed the automatic extraction of
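
    A minimal sketch of the kind of comparison the report describes, using the LBPH recognizer from opencv-contrib (cv2.face); the image size, toy data and parameters are assumptions, and the report itself used OpenCV 2.4-era APIs.

```python
import cv2
import numpy as np

rng = np.random.default_rng(2)
# Toy training set: 10 grayscale "face crops" (100x100) for each of 3 subjects.
faces = [rng.integers(0, 256, size=(100, 100), dtype=np.uint8) for _ in range(30)]
labels = np.repeat([0, 1, 2], 10).astype(np.int32)

recognizer = cv2.face.LBPHFaceRecognizer_create(radius=1, neighbors=8, grid_x=8, grid_y=8)
recognizer.train(faces, labels)

# Predict the identity of a new face crop; for LBPH a lower confidence means a closer match.
probe = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)
label, confidence = recognizer.predict(probe)
print(f"predicted subject {label} (LBPH distance {confidence:.1f})")
# Swapping in cv2.face.EigenFaceRecognizer_create() or cv2.face.FisherFaceRecognizer_create()
# lets the same loop compare the three algorithms discussed in the report.
```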

  10. Video Anomaly Detection with Compact Feature Sets for Online Performance.

    Science.gov (United States)

    Leyva, Roberto; Sanchez, Victor; Li, Chang-Tsun

    2017-04-18

    Over the past decade, video anomaly detection has been explored with remarkable results. However, research on methodologies suitable for online performance is still very limited. In this paper, we present an online framework for video anomaly detection. The key aspect of our framework is a compact set of highly descriptive features, which is extracted from a novel cell structure that helps to define support regions in a coarse-to-fine fashion. Based on the scene's activity, only a limited number of support regions are processed, thus limiting the size of the feature set. Specifically, we use foreground occupancy and optical flow features. The framework uses an inference mechanism that evaluates the compact feature set via Gaussian Mixture Models, Markov Chains and Bag-of-Words in order to detect abnormal events. Our framework also considers the joint response of the models in the local spatio-temporal neighborhood to increase detection accuracy. We test our framework on popular existing datasets and on a new dataset comprising a wide variety of realistic videos captured by surveillance cameras. This particular dataset includes surveillance videos depicting criminal activities, car accidents and other dangerous situations. Evaluation results show that our framework outperforms other online methods and attains a very competitive detection performance compared to state-of-the-art non-online methods.
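
    A minimal sketch in the same spirit as the framework above, though not the authors' implementation: per-cell optical-flow and foreground-occupancy features scored by a Gaussian Mixture Model, with low-likelihood cells flagged as anomalous. The grid size, GMM configuration and threshold are assumptions.

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

GRID = 8  # split each frame into GRID x GRID support cells (assumption)

def cell_features(prev_gray, gray):
    # Mean optical-flow magnitude and foreground occupancy for each cell.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    fg = (cv2.absdiff(gray, prev_gray) > 20).astype(float)
    h, w = gray.shape
    feats = []
    for i in range(GRID):
        for j in range(GRID):
            cell = (slice(i * h // GRID, (i + 1) * h // GRID),
                    slice(j * w // GRID, (j + 1) * w // GRID))
            feats.append([mag[cell].mean(), fg[cell].mean()])
    return np.array(feats)

# Toy "normal" clip: in practice these frames would come from anomaly-free video.
rng = np.random.default_rng(3)
frames = rng.integers(0, 256, size=(20, 240, 320), dtype=np.uint8)
normal = np.vstack([cell_features(frames[t], frames[t + 1]) for t in range(len(frames) - 1)])
gmm = GaussianMixture(n_components=5, random_state=0).fit(normal)

# Score a test frame pair: cells whose likelihood falls below a low percentile are flagged.
test = cell_features(frames[-2], frames[-1])
threshold = np.percentile(gmm.score_samples(normal), 1)
print("anomalous cells:", np.flatnonzero(gmm.score_samples(test) < threshold))
```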

  11. Emotion recognition based on EEG features in movie clips with channel selection.

    Science.gov (United States)

    Özerdem, Mehmet Siraç; Polat, Hasan

    2017-07-15

    Emotion plays an important role in human interaction. People can explain their emotions in terms of word, voice intonation, facial expression, and body language. However, brain-computer interface (BCI) systems have not reached the desired level to interpret emotions. Automatic emotion recognition based on BCI systems has been a topic of great research in the last few decades. Electroencephalogram (EEG) signals are one of the most crucial resources for these systems. The main advantage of using EEG signals is that they reflect real emotion and can easily be processed by computer systems. In this study, EEG signals related to positive and negative emotions have been classified with preprocessing of channel selection. The Self-Assessment Manikin was used to determine emotional states. We have employed the discrete wavelet transform and machine learning techniques such as the multilayer perceptron neural network (MLPNN) and the k-nearest neighbor (kNN) algorithm to classify EEG signals. The classifier algorithms were initially used for channel selection. EEG channels for each participant were evaluated separately, and the five EEG channels that offered the best classification performance were determined. Thus, final feature vectors were obtained by combining the features of EEG segments belonging to these channels. The final feature vectors with related positive and negative emotions were classified separately using the MLPNN and kNN algorithms. The classification performances obtained with the two algorithms were computed and compared. The average overall accuracies were 77.14 and 72.92% using MLPNN and kNN, respectively.
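
    A rough sketch of the pipeline outlined above, assuming discrete-wavelet-transform sub-band statistics per selected channel, concatenated into a final feature vector and classified with an MLP and kNN; the wavelet, sub-band statistics and synthetic data are placeholders rather than the study's settings.

    ```python
    # Illustrative sketch of the DWT + MLP/kNN pipeline described above.
    # Wavelet choice, sub-band statistics and the "selected channels" are assumptions.
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    def dwt_features(segment: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
        """Mean absolute value, power and std of each DWT sub-band of one EEG segment."""
        coeffs = pywt.wavedec(segment, wavelet, level=level)
        return np.array([f(c) for c in coeffs for f in (lambda c: np.mean(np.abs(c)),
                                                        lambda c: np.mean(c ** 2),
                                                        np.std)])

    # Fake data: (trials, selected_channels, samples); labels 0 = negative, 1 = positive.
    rng = np.random.default_rng(0)
    X_raw = rng.normal(size=(120, 5, 512))
    y = rng.integers(0, 2, size=120)

    # Final feature vector = concatenation of per-channel DWT features.
    X = np.array([np.concatenate([dwt_features(trial[ch]) for ch in range(trial.shape[0])])
                  for trial in X_raw])

    for name, clf in [("MLP", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)),
                      ("kNN", KNeighborsClassifier(n_neighbors=5))]:
        print(name, cross_val_score(clf, X, y, cv=5).mean())
    ```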

  12. Video Clips for Youtube: Collaborative Video Creation as an Educational Concept for Knowledge Acquisition and Attitude Change Related to Obesity Stigmatization

    Science.gov (United States)

    Zahn, Carmen; Schaeffeler, Norbert; Giel, Katrin Elisabeth; Wessel, Daniel; Thiel, Ansgar; Zipfel, Stephan; Hesse, Friedrich W.

    2014-01-01

    Mobile phones and advanced web-based video tools have pushed forward new paradigms for using video in education: Today, students can readily create and broadcast their own digital videos for others and create entirely new patterns of video-based information structures for modern online-communities and multimedia environments. This paradigm shift…

  13. Microsurgical Clipping of an Anterior Communicating Artery Aneurysm Using a Novel Robotic Visualization Tool in Lieu of the Binocular Operating Microscope: Operative Video.

    Science.gov (United States)

    Klinger, Daniel R; Reinard, Kevin A; Ajayi, Olaide O; Delashaw, Johnny B

    2018-01-01

    The binocular operating microscope has been the visualization instrument of choice for microsurgical clipping of intracranial aneurysms for many decades. Our objective is to discuss recent technological advances that have provided novel visualization tools, which may prove to be superior to the binocular operating microscope in many regards. We present an operative video and our operative experience with the BrightMatterTM Servo System (Synaptive Medical, Toronto, Ontario, Canada) during the microsurgical clipping of an anterior communicating artery aneurysm. To the best of our knowledge, the use of this device for the microsurgical clipping of an intracranial aneurysm has never been described in the literature. The BrightMatterTM Servo System (Synaptive Medical) is a surgical exoscope which avoids many of the ergonomic constraints of the binocular operating microscope, but is associated with a steep learning curve. The BrightMatterTM Servo System (Synaptive Medical) is a maneuverable surgical exoscope that is positioned with a directional aiming device and a surgeon-controlled foot pedal. While utilizing this device comes with a steep learning curve typical of any new technology, the BrightMatterTM Servo System (Synaptive Medical) has several advantages over the conventional surgical microscope, which include a relatively unobstructed surgical field, provision of high-definition images, and visualization of difficult angles/trajectories. This device can easily be utilized as a visualization tool for a variety of cranial and spinal procedures in lieu of the binocular operating microscope. We anticipate that this technology will soon become an integral part of the neurosurgeon's armamentarium.

  14. Audio-video feature correlation: faces and speech

    Science.gov (United States)

    Durand, Gwenael; Montacie, Claude; Caraty, Marie-Jose; Faudemay, Pascal

    1999-08-01

    This paper presents a study of the correlation of features automatically extracted from the audio stream and the video stream of audiovisual documents. In particular, we were interested in finding out whether speech analysis tools could be combined with face detection methods, and to what extent they should be combined. A generic audio signal partitioning algorithm was first used to detect Silence/Noise/Music/Speech segments in a full-length movie. A generic object detection method was applied to the keyframes extracted from the movie in order to detect the presence or absence of faces. The correlation between the presence of a face in the keyframes and of the corresponding voice in the audio stream was studied. A third stream, which is the script of the movie, is warped on the speech channel in order to automatically label faces appearing in the keyframes with the name of the corresponding character. We naturally found that extracted audio and video features were related in many cases, and that significant benefits can be obtained from the joint use of audio and video analysis methods.

  15. Psychogenic Tremor: A Video Guide to Its Distinguishing Features

    Directory of Open Access Journals (Sweden)

    Joseph Jankovic

    2014-08-01

    Full Text Available Background: Psychogenic tremor is the most common psychogenic movement disorder. It has characteristic clinical features that can help distinguish it from other tremor disorders. There is no diagnostic gold standard and the diagnosis is based primarily on clinical history and examination. Despite proposed diagnostic criteria, the diagnosis of psychogenic tremor can be challenging. While there are numerous studies evaluating psychogenic tremor in the literature, there are no publications that provide a video/visual guide that demonstrates the clinical characteristics of psychogenic tremor. Educating clinicians about psychogenic tremor will hopefully lead to earlier diagnosis and treatment. Methods: We selected videos from the database at the Parkinson's Disease Center and Movement Disorders Clinic at Baylor College of Medicine that illustrate classic findings supporting the diagnosis of psychogenic tremor. Results: We include 10 clinical vignettes with accompanying videos that highlight characteristic clinical signs of psychogenic tremor including distractibility, variability, entrainability, suggestibility, and coherence. Discussion: Psychogenic tremor should be considered in the differential diagnosis of patients presenting with tremor, particularly if it is of abrupt onset, intermittent, variable and not congruous with organic tremor. The diagnosis of psychogenic tremor, however, should not be simply based on exclusion of organic tremor, such as essential, parkinsonian, or cerebellar tremor, but on positive criteria demonstrating characteristic features. Early recognition and management are critical for good long-term outcome.

  16. Cell Phone Video Recording Feature as a Language Learning Tool: A Case Study

    Science.gov (United States)

    Gromik, Nicolas A.

    2012-01-01

    This paper reports on a case study conducted at a Japanese national university. Nine participants used the video recording feature on their cell phones to produce weekly video productions. The task required that participants produce one 30-second video on a teacher-selected topic. Observations revealed the process of video creation with a cell…

  17. Feature Extraction in IR Images Via Synchronous Video Detection

    Science.gov (United States)

    Shepard, Steven M.; Sass, David T.

    1989-03-01

    IR video images acquired by scanning imaging radiometers are subject to several problems which make measurement of small temperature differences difficult. Among these problems are 1) aliasing, which occurs when events at frequencies higher than the video frame rate are observed, 2) limited temperature resolution imposed by the 3-bit digitization available in existing commercial systems, and 3) susceptibility to noise and background clutter. Bandwidth narrowing devices (e.g. lock-in amplifiers or boxcar averagers) are routinely used to achieve a high degree of signal-to-noise improvement for time-varying 1-dimensional signals. We will describe techniques which allow similar S/N improvement for 2-dimensional imagery acquired with an off-the-shelf scanning imaging radiometer system. These techniques are implemented in near-real-time, utilizing a microcomputer and specially developed hardware and software. We will also discuss the application of the system to feature extraction in cluttered images, and to acquisition of events which vary faster than the frame rate.

  18. Obscene Video Recognition Using Fuzzy SVM and New Sets of Features

    Directory of Open Access Journals (Sweden)

    Alireza Behrad

    2013-02-01

    Full Text Available In this paper, a novel approach for identifying normal and obscene videos is proposed. In order to classify different episodes of a video independently and discard the need to process all frames, key frames are first extracted and skin regions are detected for groups of video frames starting with key frames. In the second step, three different kinds of features, namely (1) structural features based on single-frame information, (2) features based on the spatiotemporal volume, and (3) motion-based features, are extracted for each episode of the video. The PCA-LDA method is then applied to reduce the size of the structural features and select more distinctive features. In the final step, we use a fuzzy or weighted support vector machine (WSVM) classifier to identify video episodes. We also employ a multilayer Kohonen network as an initial clustering algorithm to improve the discrimination of the extracted features into the two classes of videos. Features based on motion and periodicity characteristics increase the efficiency of the proposed algorithm in videos with bad illumination and skin colour variation. The proposed method is evaluated using 1100 videos in different environmental and illumination conditions. The experimental results show a correct recognition rate of 94.2% for the proposed algorithm.
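
    A minimal sketch of the PCA-LDA reduction followed by a class-weighted SVM, standing in for the paper's fuzzy/weighted SVM; the feature dimensions, synthetic data and class weights are assumed for illustration only.

    ```python
    # Sketch of PCA -> LDA feature reduction followed by a weighted SVM, standing in
    # for the fuzzy/weighted SVM of the paper. Dimensions and weights are assumptions.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 200))          # structural + spatiotemporal + motion features
    y = rng.integers(0, 2, size=400)         # 0 = normal episode, 1 = obscene episode

    clf = make_pipeline(
        PCA(n_components=50),                              # discard low-variance directions
        LinearDiscriminantAnalysis(n_components=1),        # most discriminative projection
        SVC(kernel="rbf", class_weight={0: 1.0, 1: 2.0}),  # weight the positive class
    )
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```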

  19. Content analysis of antismoking videos on YouTube: message sensation value, message appeals, and their relationships with viewer responses

    National Research Council Canada - National Science Library

    Paek, Hye-Jin; Kim, Kyongseok; Hove, Thomas

    2010-01-01

    Focusing on several message features that are prominent in antismoking campaign literature, this content-analytic study examines 934 antismoking video clips on YouTube for the following characteristics...

  20. Comparison of Image Transform-Based Features for Visual Speech Recognition in Clean and Corrupted Videos

    Directory of Open Access Journals (Sweden)

    Ji Ming

    2008-03-01

    Full Text Available We present results of a study into the performance of a variety of different image transform-based feature types for speaker-independent visual speech recognition of isolated digits. This includes the first reported use of features extracted using a discrete curvelet transform. The study will show a comparison of some methods for selecting features of each feature type and show the relative benefits of both static and dynamic visual features. The performance of the features will be tested on both clean video data and also video data corrupted in a variety of ways to assess each feature type's robustness to potential real-world conditions. One of the test conditions involves a novel form of video corruption we call jitter which simulates camera and/or head movement during recording.

  1. A spatiotemporal feature-based approach for facial expression recognition from depth video

    Science.gov (United States)

    Uddin, Md. Zia

    2015-07-01

    In this paper, a novel spatiotemporal feature-based method is proposed to recognize facial expressions from depth video. Independent Component Analysis (ICA) spatial features of the depth faces of facial expressions are first augmented with optical flow motion features. Then, the augmented features are enhanced by Fisher Linear Discriminant Analysis (FLDA) to make them robust. The features are then used to train Hidden Markov Models (HMMs) that model different facial expressions, which are later used to recognize the appropriate expression from a test expression depth video. The experimental results show superior performance of the proposed approach over the conventional methods.

  2. Complete closure of artificial gastric ulcer after endoscopic submucosal dissection by combined use of a single over-the-scope clip and through-the-scope clips (with videos).

    Science.gov (United States)

    Maekawa, Satoshi; Nomura, Ryosuke; Murase, Takayuki; Ann, Yasuyoshi; Harada, Masaru

    2015-02-01

    A 5-7 day hospital stay is usually needed after endoscopic submucosal dissection (ESD) of gastric tumor because of the possibility of delayed perforation or bleeding. The aim of this study was to evaluate the efficacy of combined use of a single over-the-scope clip (OTSC) and through-the-scope clips (TTSCs) to achieve complete closure of artificial gastric ulcer after ESD. We prospectively studied 12 patients with early gastric cancer or gastric adenoma. We performed complete closure of post-ESD artificial gastric ulcer using a combination of a single OTSC and TTSCs. Mean size of post-ESD artificial ulcer was 54.6 mm. The mean operating time for the closure procedure was 15.2 min., and the success rate was 91.7 % (11/12). Patients who underwent complete closure of post-ESD artificial gastric ulcer could be discharged the day after ESD and the closing procedure. Complete closure of post-ESD artificial gastric ulcer using a combination of a single OTSC and TTSCs is useful for shortening the period of hospitalization and reducing treatment cost.

  3. Analysis of only 0-1 min clip or 1-4 min Clip for focal liver lesions ...

    African Journals Online (AJOL)

    video clips is very time-consuming and demanding in terms of technical ... This consisted in choosing a scan in which the lesion was ... A continuous video clip of CEUS was acquired (duration 3-4min) following contrast injection. All investigations were performed in the same standardized way by the same expert operator, ...

  4. A Joint Compression Scheme of Video Feature Descriptors and Visual Content.

    Science.gov (United States)

    Zhang, Xiang; Ma, Siwei; Wang, Shiqi; Zhang, Xinfeng; Sun, Huifang; Gao, Wen

    2017-02-01

    High-efficiency compression of visual feature descriptors has recently emerged as an active topic due to the rapidly increasing demand in mobile visual retrieval over bandwidth-limited networks. However, transmitting only those feature descriptors may largely restrict its application scale due to the lack of necessary visual content. To facilitate the wide spread of feature descriptors, a hybrid framework of jointly compressing the feature descriptors and visual content is highly desirable. In this paper, such a content-plus-feature coding scheme is investigated, aiming to shape the next generation of video compression system toward visual retrieval, where the high-efficiency coding of both feature descriptors and visual content can be achieved by exploiting the interactions between each other. On the one hand, visual feature descriptors can achieve compact and efficient representation by taking advantages of the structure and motion information in the compressed video stream. To optimize the retrieval performance, a novel rate-accuracy optimization technique is proposed to accurately estimate the retrieval performance degradation in feature coding. On the other hand, the already compressed feature data can be utilized to further improve the video coding efficiency by applying feature matching-based affine motion compensation. Extensive simulations have shown that the proposed joint compression framework can offer significant bitrate reduction in representing both feature descriptors and video frames, while simultaneously maintaining the state-of-the-art visual retrieval performance.

  5. Real-time UAV trajectory generation using feature points matching between video image sequences

    Science.gov (United States)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectory using a video image matching system based on SURF (Speeded-Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
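
    A compact sketch of the relative-pose step with OpenCV, using ORB in place of SURF (SURF sits in the non-free contrib module); the camera matrix, ratio-test and RANSAC thresholds, and frame filenames are illustrative assumptions rather than the paper's configuration.

    ```python
    # Sketch of frame-to-frame relative pose estimation with OpenCV. ORB is used
    # here instead of SURF (SURF lives in the non-free contrib module); the camera
    # matrix K, thresholds and frame filenames are illustrative assumptions.
    import cv2
    import numpy as np

    def relative_pose(img1, img2, K):
        """Estimate rotation R and translation direction t between two video frames."""
        orb = cv2.ORB_create(nfeatures=2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)

        # Brute-force Hamming matching with a ratio test to prune bad matches.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        matches = matcher.knnMatch(d1, d2, k=2)
        good = [m for m, n in (p for p in matches if len(p) == 2)
                if m.distance < 0.75 * n.distance]

        pts1 = np.float32([k1[m.queryIdx].pt for m in good])
        pts2 = np.float32([k2[m.trainIdx].pt for m in good])

        # RANSAC separates inliers from outliers while fitting the essential matrix.
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        return R, t

    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
    frame1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder filenames
    frame2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
    R, t = relative_pose(frame1, frame2, K)
    ```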

  6. A reverse-action clip applier for aneurysm surgery.

    Science.gov (United States)

    Sato, Atsushi; Koyama, Jun-ichi; Hanaoka, Yoshiki; Hongo, Kazuhiro

    2015-06-01

    Clipping is an important technique for cerebral aneurysm surgery. Although clip mechanisms and features have been refined, little attention has been paid to clip appliers. Clip closure is traditionally achieved by opening the grip of the clip applier. We reconsidered this motion and identified an important drawback, namely that the standard applier's holding power decreased at the moment of clip release, which could lead to unstable clip application. Our objective was to develop a forceps that addresses this design flaw in clip appliers. The new clip applier has a non-cross-type fulcrum that is closed at the time of clip release, with an action similar to that of a bipolar forceps or scissors. Thus, a surgeon can steadily apply the clip from various angles. We successfully used our clip applier to treat 103 aneurysms. Although training was required to ensure smooth applier use, no difficulties associated with applier use were noted. This clip applier can improve clipping surgery safety because it offers additional stability during clip release.

  7. Exploiting Feature and Class Relationships in Video Categorization with Regularized Deep Neural Networks.

    Science.gov (United States)

    Jiang, Yu-Gang; Wu, Zuxuan; Wang, Jun; Xue, Xiangyang; Chang, Shih-Fu

    2018-02-01

    In this paper, we study the challenging problem of categorizing videos according to high-level semantics such as the existence of a particular human action or a complex event. Although extensive efforts have been devoted in recent years, most existing works combined multiple video features using simple fusion strategies and neglected the utilization of inter-class semantic relationships. This paper proposes a novel unified framework that jointly exploits the feature relationships and the class relationships for improved categorization performance. Specifically, these two types of relationships are estimated and utilized by imposing regularizations in the learning process of a deep neural network (DNN). Through arming the DNN with better capability of harnessing both the feature and the class relationships, the proposed regularized DNN (rDNN) is more suitable for modeling video semantics. We show that rDNN produces better performance over several state-of-the-art approaches. Competitive results are reported on the well-known Hollywood2 and Columbia Consumer Video benchmarks. In addition, to stimulate future research on large scale video categorization, we collect and release a new benchmark dataset, called FCVID, which contains 91,223 Internet videos and 239 manually annotated categories.

  8. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    Science.gov (United States)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.

  9. Scientists feature their work in Arctic-focused short videos by FrontierScientists

    Science.gov (United States)

    Nielsen, L.; O'Connell, E.

    2013-12-01

    Whether they're guiding an unmanned aerial vehicle into a volcanic plume to sample aerosols, or documenting core drilling at a frozen lake in Siberia formed 3.6 million years ago by a massive meteorite impact, Arctic scientists are using video to enhance and expand their science and science outreach. FrontierScientists (FS), a forum for showcasing scientific work, produces and promotes radically different video blogs featuring Arctic scientists. Three- to seven-minute multimedia vlogs help deconstruct researchers' efforts and disseminate stories, communicating scientific discoveries to our increasingly connected world. The videos cover a wide range of current field work being performed in the Arctic. All videos are freely available to view or download from the FrontierScientists.com website, accessible via any internet browser or via the FrontierScientists app. FS' filming process fosters a close collaboration between the scientist and the media maker. Film creation helps scientists reach out to the public, communicate the relevance of their scientific findings, and craft a discussion. Videos keep audiences tuned in; combining field footage, pictures, audio, and graphics with a verbal explanation helps illustrate ideas, allowing one video to reach people with different learning strategies. The scientists' stories are highlighted through social media platforms online. Vlogs grant scientists a voice, letting them illustrate their own work while ensuring accuracy. Each scientific topic on FS has its own project page where easy-to-navigate videos are featured prominently. Video sets focus on different aspects of a researcher's work or follow one of their projects into the field. We help the scientist slip the answers to their five most-asked questions into the casual script in layman's terms in order to free the viewers' minds to focus on new concepts. Videos are accompanied by written blogs intended to systematically demystify related facts so the scientists can focus

  10. Identifying Key Features of Student Performance in Educational Video Games and Simulations through Cluster Analysis

    Science.gov (United States)

    Kerr, Deirdre; Chung, Gregory K. W. K.

    2012-01-01

    The assessment cycle of "evidence-centered design" (ECD) provides a framework for treating an educational video game or simulation as an assessment. One of the main steps in the assessment cycle of ECD is the identification of the key features of student performance. While this process is relatively simple for multiple choice tests, when…

  11. Feature-based fast coding unit partition algorithm for high efficiency video coding

    Directory of Open Access Journals (Sweden)

    Yih-Chuan Lin

    2015-04-01

    Full Text Available High Efficiency Video Coding (HEVC), the newest video coding standard, has been developed for the efficient compression of ultra high definition videos. One of the important features in HEVC is the adoption of a quad-tree based video coding structure, in which each incoming frame is represented as a set of non-overlapped coding tree blocks (CTBs) through a variable-block-size prediction and coding process. To do this, each CTB needs to be recursively partitioned into coding units (CU), prediction units (PU) and transform units (TU) during the coding process, leading to a huge computational load in the coding of each video frame. This paper proposes to extract visual features in a CTB and use them to simplify the coding procedure by reducing the depth of the quad-tree partition for each CTB in HEVC intra coding mode. A measure of the edge strength in a CTB, defined with simple Sobel edge detection, is used to constrain the possible maximum depth of the quad-tree partition of the CTB. With the constrained partition depth, the proposed method can reduce a large amount of encoding time. Experimental results with HM10.1 show that the average time saving is about 13.4% with an increase in BD-rate of only 0.02%, a smaller performance degradation than that of other similar methods.
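
    A small sketch of the idea of gating the maximum quad-tree depth of a 64x64 CTB by its Sobel edge strength; the thresholds and the synthetic frame below are assumptions rather than the paper's calibrated values.

    ```python
    # Sketch of the idea above: a Sobel-based edge-strength measure per 64x64 CTB
    # is mapped to a maximum quad-tree partition depth. Thresholds are assumptions.
    import numpy as np
    import cv2

    def max_partition_depth(ctb: np.ndarray, thresholds=(4.0, 12.0, 30.0)) -> int:
        """Return the maximum CU depth (0..3) allowed for one coding tree block."""
        gx = cv2.Sobel(ctb, cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(ctb, cv2.CV_32F, 0, 1, ksize=3)
        edge_strength = float(np.mean(np.hypot(gx, gy)))   # average gradient magnitude

        depth = 0
        for t in thresholds:            # stronger edges allow deeper partitioning
            if edge_strength > t:
                depth += 1
        return depth

    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(128, 192), dtype=np.uint8)   # stand-in luma frame
    for y in range(0, frame.shape[0] - 63, 64):
        for x in range(0, frame.shape[1] - 63, 64):
            d = max_partition_depth(frame[y:y + 64, x:x + 64])
            print((y, x), "max depth:", d)   # the encoder would skip CU sizes below d
    ```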

  12. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and people's daily lives. Content of massive multimedia is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all, still images and videos are commonly used formats. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are a set of continuous images with low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes above the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea level rise and global environment change. Detecting lakes above ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, including in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes and made new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of various obscene content and the unstable quality of videos captured by home web-cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  13. EEG-based recognition of video-induced emotions: selecting subject-independent feature set.

    Science.gov (United States)

    Kortelainen, Jukka; Seppänen, Tapio

    2013-01-01

    Emotions are fundamental for everyday life, affecting our communication, learning, perception, and decision making. Including emotions in human-computer interaction (HCI) could be seen as a significant step forward, offering great potential for developing advanced future technologies. Because the electrical activity of the brain is affected by emotions, the electroencephalogram (EEG) offers an interesting channel for improving HCI. In this paper, the selection of a subject-independent feature set for EEG-based emotion recognition is studied. We investigate the effect of different feature sets in classifying a person's arousal and valence while watching videos with emotional content. The classification performance is optimized by applying a sequential forward floating search algorithm for feature selection. The best classification rate (65.1% for arousal and 63.0% for valence) is obtained with a feature set containing power spectral features from the frequency band of 1-32 Hz. The proposed approach substantially improves the classification rate reported in the literature. In the future, further analysis of the video-induced EEG changes, including the topographical differences in the spectral features, is needed.
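
    A plain forward-selection loop over candidate spectral features, scored by cross-validated accuracy, gives the flavor of the search described above (the floating variant additionally tries removing features after each addition); the data, classifier and feature meanings are placeholders.

    ```python
    # Sketch of greedy forward feature selection over spectral EEG features, scored
    # by cross-validated accuracy. (The floating variant also tries removing features
    # after each addition.) Data and classifier choice are placeholder assumptions.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 30))        # e.g., band powers from several channels
    y = rng.integers(0, 2, size=200)      # low vs. high arousal (or valence)

    def forward_selection(X, y, max_features=10):
        selected, remaining = [], list(range(X.shape[1]))
        best_score = 0.0
        while remaining and len(selected) < max_features:
            scores = {f: cross_val_score(KNeighborsClassifier(5),
                                         X[:, selected + [f]], y, cv=5).mean()
                      for f in remaining}
            f_best = max(scores, key=scores.get)
            if scores[f_best] <= best_score:          # no improvement -> stop
                break
            best_score = scores[f_best]
            selected.append(f_best)
            remaining.remove(f_best)
        return selected, best_score

    print(forward_selection(X, y))
    ```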

  14. Clipping an Angel's Wings

    NARCIS (Netherlands)

    Nolte, R.J.M.; Rowan, A.E.; Elemans, J.A.A.W.

    2016-01-01

    Glycoluril is a concave molecule with hydrogen donor and acceptor sites. It can be provided with aromatic side walls to form a molecule with a clip-shape structure. This review highlights the host-guest and self-assembling properties of glycoluril clips and discusses their use as building blocks for

  15. Videography-Based Unconstrained Video Analysis.

    Science.gov (United States)

    Li, Kang; Li, Sheng; Oh, Sangmin; Fu, Yun

    2017-05-01

    Video analysis and understanding play a central role in visual intelligence. In this paper, we aim to analyze unconstrained videos, by designing features and approaches to represent and analyze videography styles in the videos. Videography denotes the process of making videos. The unconstrained videos are defined as the long duration consumer videos that usually have diverse editing artifacts and significant complexity of contents. We propose to construct a videography dictionary, which can be utilized to represent every video clip as a sequence of videography words. In addition to semantic features, such as foreground object motion and camera motion, we also incorporate two novel interpretable features to characterize videography, including the scale information and the motion correlations. We then demonstrate that, by using statistical analysis methods, the unique videography signatures extracted from different events can be automatically identified. For real-world applications, we explore the use of videography analysis for three types of applications, including content-based video retrieval, video summarization (both visual and textual), and videography-based feature pooling. In the experiments, we evaluate the performance of our approach and other methods on a large-scale unconstrained video dataset, and show that the proposed approach significantly benefits video analysis in various ways.

  16. The video clip: a tool for focusing on fundamental aspects of scientific argumentation in the classroom

    OpenAIRE

    Ruiz Ortega, Francisco Javier

    2016-01-01

    The research aims to help future teachers in the Biology and Chemistry teaching degree at the Universidad de Caldas identify the fundamental aspects of argumentation, and to this end video episodes are used in the classroom. Evaluation of the data obtained shows that the future teachers manage to identify conceptual, structural and didactic aspects, the latter being the ones that should be emphasized much more in the training processes in order to...

  17. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    National Research Council Canada - National Science Library

    Chang, Yuchou; Lee, DJ; Hong, Yi; Archibald, James

    .... In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection...

  18. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  19. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Directory of Open Access Journals (Sweden)

    Florian Eyben

    Full Text Available Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  20. Modified Clipped LMS Algorithm

    National Research Council Canada - National Science Library

    Lotfizad, Mojtaba; Yazdi, Hadi Sadoghi

    2005-01-01

    A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization...

  1. Use of digital video for documentation of microscopic features of tissue samples.

    Science.gov (United States)

    Melín-Aldana, Héctor; Gasilionis, Valdas; Kapur, Umesh

    2008-05-01

    Digital photography is commonly used to document microscopic features of tissue samples, but it relies on the capture of arbitrarily selected representative areas. Current technologic advances permit the review of an entire sample, some even replicating the use of a microscope. To demonstrate the applicability of digital video to the documentation of histologic samples. A Canon Elura MC40 digital camcorder was mounted on a microscope, glass slide-mounted tissue sections were filmed, and the unedited movies were transferred to an Apple Mac Pro computer. Movies were edited using the software iMovie HD, including placement of a time counter and a voice recording. The finished movies can be viewed in computers, incorporated onto DVDs, or placed on a Web site after compression with Flash software. The final movies range, on average, between 2 and 8 minutes, depending on the size of the sample, and between 50 MB and 1.6 GB, depending on the intended means of distribution, with DVDs providing the best image quality. Digital video is a practical methodology for documentation of entire tissue samples. We propose an affordable method that uses easily available hardware and software and does not require significant computer knowledge. Pathology education can be enhanced by the implementation of digital video technology.

  2. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Hong Yi

    2008-01-01

    Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.

  3. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2008-02-01

    Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.

  4. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
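
    A brief sketch of a clipped-LMS-style update in which the tap-input vector is quantized to three levels (-1, 0, +1) by a dead-zone threshold before entering the weight update; the step size, threshold and test system below are illustrative, not the paper's values.

    ```python
    # Sketch of a clipped-LMS-style update: the tap-input vector is quantized to
    # three levels (-1, 0, +1) by a dead-zone threshold before the weight update.
    # Step size, threshold and the test system are illustrative assumptions.
    import numpy as np

    def mclms_identify(x, d, n_taps=8, mu=0.01, threshold=0.5):
        """Adaptive identification of an FIR system from input x and desired output d."""
        w = np.zeros(n_taps)
        for n in range(n_taps - 1, len(x)):
            u = x[n - n_taps + 1:n + 1][::-1]         # current tap-input vector
            e = d[n] - w @ u                          # a-priori output error
            q = np.where(np.abs(u) > threshold, np.sign(u), 0.0)  # three-level clipping
            w += mu * e * q                           # cheap, multiplier-light update
        return w

    rng = np.random.default_rng(0)
    x = rng.normal(size=5000)
    h_true = np.array([0.6, -0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0])
    d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.normal(size=len(x))
    print(np.round(mclms_identify(x, d), 3))          # should approximate h_true
    ```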

  5. Extracting Road Features from Aerial Videos of Small Unmanned Aerial Vehicles

    Science.gov (United States)

    Rajamohan, D.; Rajan, K. S.

    2013-09-01

    With major aerospace companies showing interest in certifying UAV systems for civilian airspace, their use in commercial remote sensing applications like traffic monitoring, map refinement, agricultural data collection, etc., is on the rise. But ambitious requirements like real-time geo-referencing of data, support for multiple sensor angle-of-views, smaller UAV size and cheaper investment cost have led to challenges in platform stability, sensor noise reduction and increased onboard processing. Especially in small UAVs, the geo-referencing of collected data is only as good as the quality of their localization sensors. This drives a need for developing methods that pick up spatial features from the captured video/image and aid in geo-referencing. This paper presents one such method to identify road segments and intersections based on traffic flow, which compares well with the accuracy of manual observation. Two test video datasets, one each from moving and stationary platforms, were used. The results obtained show a promising average percentage difference of 7.01% and 2.48% for the road segment extraction process using the moving and stationary platforms, respectively. For the intersection identification process, the moving platform shows an accuracy of 75%, whereas the stationary platform data reaches an accuracy of 100%.

  6. Investigation on effectiveness of mid-level feature representation for semantic boundary detection in news video

    Science.gov (United States)

    Radhakrishan, Regunathan; Xiong, Ziyou; Divakaran, Ajay; Raj, Bhiksha

    2003-11-01

    In our past work, we have attempted to use a mid-level feature namely the state population histogram obtained from the Hidden Markov Model (HMM) of a general sound class, for speaker change detection so as to extract semantic boundaries in broadcast news. In this paper, we compare the performance of our previous approach with another approach based on video shot detection and speaker change detection using the Bayesian Information Criterion (BIC). Our experiments show that the latter approach performs significantly better than the former. This motivated us to examine the mid-level feature closely. We found that the component population histogram enabled discovery of broad phonetic categories such as vowels, nasals, fricatives etc, regardless of the number of distinct speakers in the test utterance. In order for it to be useful for speaker change detection, the individual components should model the phonetic sounds of each speaker separately. From our experiments, we conclude that state/component population histograms can only be useful for further clustering or semantic class discovery if the features are chosen carefully so that the individual states represent the semantic categories of interest.
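
    For reference, a sketch of the Bayesian Information Criterion test used by the comparison approach: a window of cepstral features is modeled as one Gaussian versus two Gaussians split at a candidate change point, and a positive Delta-BIC suggests a speaker change. The penalty weight lambda and the synthetic MFCC-like features are assumptions.

    ```python
    # Sketch of the BIC speaker-change test used by the comparison approach: one
    # Gaussian over the window vs. two Gaussians split at a candidate point.
    # The penalty weight (lam) and synthetic features are assumptions.
    import numpy as np

    def delta_bic(X, split, lam=1.0):
        """Positive values suggest a speaker change at `split` (X: frames x features)."""
        n, d = X.shape
        def logdet_cov(Z):
            return np.linalg.slogdet(np.cov(Z, rowvar=False))[1]
        full = n * logdet_cov(X)
        left = split * logdet_cov(X[:split])
        right = (n - split) * logdet_cov(X[split:])
        penalty = lam * 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
        return 0.5 * (full - left - right) - penalty

    rng = np.random.default_rng(0)
    spk1 = rng.normal(0.0, 1.0, size=(300, 12))   # MFCC-like features, speaker A
    spk2 = rng.normal(1.5, 1.2, size=(300, 12))   # speaker B, shifted distribution
    window = np.vstack([spk1, spk2])
    print(delta_bic(window, split=300))            # large positive -> change detected
    print(delta_bic(np.vstack([spk1, spk1]), 300)) # below zero -> same speaker
    ```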

  7. Clinical features and axis I comorbidity of Australian adolescent pathological Internet and video game users.

    Science.gov (United States)

    King, Daniel L; Delfabbro, Paul H; Zwaans, Tara; Kaptsis, Dean

    2013-11-01

    Although there is growing international recognition of pathological technology use (PTU) in adolescence, there has been a paucity of empirical research conducted in Australia. This study was designed to assess the clinical features of pathological video gaming (PVG) and pathological Internet use (PIU) in a normative Australian adolescent population. A secondary objective was to investigate the axis I comorbidities associated with PIU and video gaming. A total of 1287 South Australian secondary school students aged 12-18 years were recruited. Participants were assessed using the PTU checklist, Revised Children's Anxiety and Depression Scale, Social Anxiety Scale for Adolescents, revised UCLA Loneliness Scale, and Teenage Inventory of Social Skills. Adolescents who met the criteria for PVG or PIU or both were compared to normal adolescents in terms of axis I comorbidity. The prevalence rates of PIU and PVG were 6.4% and 1.8%, respectively. A subgroup with co-occurring PIU and PVG was identified (3.3%). The most distinguishing clinical features of PTU were withdrawal, tolerance, lies and secrecy, and conflict. Symptoms of preoccupation, inability to self-limit, and using technology as an escape were commonly reported by adolescents without PTU, and therefore may be less useful as clinical indicators. Depression, panic disorder, and separation anxiety were most prevalent among adolescents with PIU. PTU among Australian adolescents remains an issue warranting clinical concern. These results suggest an emerging trend towards the greater uptake and use of the Internet among female adolescents, with associated PIU. Although there exists an overlap of PTU disorders, adolescents with PIU appear to be at greater risk of axis I comorbidity than adolescents with PVG alone. Further research with an emphasis on validation techniques, such as verified identification of harm, may enable an informed consensus on the definition and diagnosis of PTU.

  8. Sizing algorithm with continuous customizable clipping

    Science.gov (United States)

    Morales, Domingo; Baytelman, Felipe; Araya, Hugo

    2008-10-01

    Polygon sizing is required during Mask Data Preparation in order to generate derived layers and as process bias to account for edge effects of etching. Two main features are required for polygon sizing algorithms to be useful in Mask Data Preparation software: correctness, to avoid data corruption, and clipping of the projection of acute-angle vertices, to limit connectivity modifications. However, currently available solutions are either based on heuristics, producing corrupted results for certain input, or based on algorithms which may fail to maintain the original design's connectivity for certain input. A novel algorithm including customizable clipping is presented.

  9. Clip art rendering of smooth isosurfaces.

    Science.gov (United States)

    Stroila, Matei; Eisemann, Elmar; Hart, John

    2008-01-01

    Clip art is a simplified illustration form consisting of layered filled polygons or closed curves used to convey 3D shape information in a 2D vector graphics format. This paper focuses on the problem of direct conversion of smooth surfaces, ranging from the free-form shapes of art and design to the mathematical structures of geometry and topology, into a clip art form suitable for illustration use in books, papers and presentations. We show how to represent silhouette, shadow, gleam and other surface feature curves as the intersection of implicit surfaces, and derive equations for their efficient interrogation via particle chains. We further describe how to sort, orient, identify and fill the closed regions that overlay to form clip art. We demonstrate the results with numerous renderings used to illustrate the paper itself.

  10. Real-time skin feature identification in a time-sequential video stream

    Science.gov (United States)

    Kramberger, Iztok

    2005-04-01

    Skin color can be an important feature when tracking skin-colored objects. This is particularly the case for computer-vision-based human-computer interfaces (HCI). Humans have a highly developed feeling of space and, therefore, it is reasonable to support this within intelligent HCI, where the importance of augmented reality can be foreseen. Incorporating human-like interaction techniques within multimodal HCI could, or will, become a feature of modern mobile telecommunication devices. On the other hand, real-time processing plays an important role in achieving more natural and physically intuitive ways of human-machine interaction. The main scope of this work is the development of a stereoscopic computer-vision hardware-accelerated framework for real-time skin feature identification in the sense of a single-pass image segmentation process. The hardware-accelerated preprocessing stage is presented with the purpose of color and spatial filtering, where the skin color model within the hue-saturation-value (HSV) color space is given by a polyhedron of threshold values representing the basis of the filter model. An adaptive filter management unit is suggested to achieve better segmentation results. This enables the adaptation of the filter parameters to the current scene conditions. Implementation of the suggested hardware structure is given at the level of field-programmable system-level integrated circuit (FPSLIC) devices using an embedded microcontroller as their main feature. A stereoscopic cue is achieved using a time-sequential video stream, but this makes no difference to the real-time processing requirements in terms of hardware complexity. The experimental results for the hardware-accelerated preprocessing stage are given by efficiency estimation of the presented hardware structure using a simple motion-detection algorithm based on a binary function.
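
    A software rendition of the kind of HSV threshold filter the hardware stage implements, approximating the polyhedral threshold model with a simple HSV box; the bounds and the synthetic frame are placeholder values, not the paper's calibration.

    ```python
    # Software sketch of the HSV skin filter the hardware stage implements. The
    # polyhedral threshold model is approximated here by a simple HSV box, and the
    # bounds below are placeholder values rather than the paper's calibration.
    import cv2
    import numpy as np

    def skin_mask(frame_bgr: np.ndarray) -> np.ndarray:
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 40, 60], dtype=np.uint8)     # H, S, V lower bounds
        upper = np.array([25, 180, 255], dtype=np.uint8)  # H, S, V upper bounds
        mask = cv2.inRange(hsv, lower, upper)
        # Spatial filtering: remove speckle and keep larger skin-colored regions.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)  # stand-in frame
    mask = skin_mask(frame)                      # in practice, frames come from a camera
    print("skin pixels:", int(np.count_nonzero(mask)))
    ```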

  11. Cerebral activation associated with sexual arousal in response to a pornographic clip: A 15O-H2O PET study in heterosexual men.

    Science.gov (United States)

    Bocher, M; Chisin, R; Parag, Y; Freedman, N; Meir Weil, Y; Lester, H; Mishani, E; Bonne, O

    2001-07-01

    This study attempted to use PET and 15O-H2O to measure changes in regional cerebral blood flow (rCBF) during sexual arousal evoked in 10 young heterosexual males while they watched a pornographic video clip, featuring heterosexual intercourse. This condition was compared with other mental setups evoked by noisy, nature, and talkshow audiovisual clips. Immediately after each clip, the participants answered three questions pertaining to what extent they thought about sex, felt aroused, and sensed an erection. They scored their answers using a 1 to 10 scale. SPM was used for data analysis. Sexual arousal was mainly associated with activation of bilateral, predominantly right, inferoposterior extrastriate cortices, of the right inferolateral prefrontal cortex and of the midbrain. The significance of those findings is discussed in the light of current theories concerning selective attention, "mind reading" and mirroring, reinforcement of pleasurable stimuli, and penile erection.

  12. Image Segmentation and Feature Extraction for Recognizing Strokes in Tennis Game Videos

    NARCIS (Netherlands)

    Zivkovic, Z.; van der Heijden, Ferdinand; Petkovic, M.; Jonker, Willem; Langendijk, R.L.; Heijnsdijk, J.W.J.; Pimentel, A.D.; Wilkinson, M.H.F.

    This paper addresses the problem of recognizing human actions from video. Particularly, the case of recognizing events in tennis game videos is analyzed. Driven by our domain knowledge, a robust player segmentation algorithm is developed for real video data. Further, we introduce a number of novel

  13. Gaussian Process Regression-Based Video Anomaly Detection and Localization With Hierarchical Feature Representation.

    Science.gov (United States)

    Cheng, Kai-Wen; Chen, Yie-Tarng; Fang, Wen-Hsien

    2015-12-01

    This paper presents a hierarchical framework for detecting local and global anomalies via hierarchical feature representation and Gaussian process regression (GPR) which is fully non-parametric and robust to the noisy training data, and supports sparse features. While most research on anomaly detection has focused more on detecting local anomalies, we are more interested in global anomalies that involve multiple normal events interacting in an unusual manner, such as car accidents. To simultaneously detect local and global anomalies, we cast the extraction of normal interactions from the training videos as a problem of finding the frequent geometric relations of the nearby sparse spatio-temporal interest points (STIPs). A codebook of interaction templates is then constructed and modeled using the GPR, based on which a novel inference method for computing the likelihood of an observed interaction is also developed. Thereafter, these local likelihood scores are integrated into globally consistent anomaly masks, from which anomalies can be succinctly identified. To the best of our knowledge, it is the first time GPR is employed to model the relationship of the nearby STIPs for anomaly detection. Simulations based on four widespread datasets show that the new method outperforms the main state-of-the-art methods with lower computational burden.
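
    A loose, simplified sketch of the GPR modeling step: regress the observed frequency of interaction templates on their descriptors, then treat a test interaction whose predicted frequency is low relative to the model's uncertainty as a candidate anomaly. The features, targets and scoring rule are assumptions rather than the authors' formulation.

    ```python
    # Loose sketch of GPR over interaction templates: predicted frequency (minus
    # uncertainty) is turned into an anomaly score. Features/targets are assumptions.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    # Descriptors of interaction templates from training video (e.g., relative
    # offsets and motion statistics of nearby interest points), together with how
    # often each template was observed in the normal training data.
    X_train = rng.normal(size=(300, 6))
    freq_train = np.exp(-np.linalg.norm(X_train, axis=1))     # synthetic frequencies

    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                                   normalize_y=True).fit(X_train, freq_train)

    def anomaly_score(x_test: np.ndarray) -> np.ndarray:
        """Low predicted frequency (minus predictive uncertainty) -> high anomaly score."""
        mean, std = gpr.predict(x_test, return_std=True)
        return -(mean - std)

    print(anomaly_score(rng.normal(size=(3, 6))))              # near the training data
    print(anomaly_score(rng.normal(loc=4.0, size=(3, 6))))     # far-from-training points
    ```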

  14. Wireless capsule endoscopy video segmentation using an unsupervised learning approach based on probabilistic latent semantic analysis with scale invariant features.

    Science.gov (United States)

    Shen, Yao; Guturu, Parthasarathy Partha; Buckles, Bill P

    2012-01-01

    Since wireless capsule endoscopy (WCE) is a novel technology for recording the videos of the digestive tract of a patient, the problem of segmenting the WCE video of the digestive tract into subvideos corresponding to the entrance, stomach, small intestine, and large intestine regions is not well addressed in the literature. The few papers addressing this problem follow supervised learning approaches that presume the availability of a large database of correctly labeled training samples. Considering the difficulties in procuring sizable WCE training data sets needed for achieving high classification accuracy, we introduce in this paper an unsupervised learning approach that employs the Scale Invariant Feature Transform (SIFT) for extraction of local image features and the probabilistic latent semantic analysis (pLSA) model used in linguistic content analysis for data clustering. Results of experimentation indicate that this method compares well in classification accuracy with the state-of-the-art supervised classification approaches to WCE video segmentation.

  15. A Sieving ANN for Emotion-Based Movie Clip Classification

    National Research Council Canada - National Science Library

    WATANAPA, Saowaluk C; THIPAKORN, Bundit; CHAROENKITKARN, Nipon

    2008-01-01

    .... Our research attempts to classify movie clips into three groups of commonly elicited emotions, namely excitement, joy and sadness, based on a set of abstract-level semantic features extracted from the film sequence...

  16. HyperCLIPS: A HyperCard interface to CLIPS

    Science.gov (United States)

    Pickering, Brad; Hill, Randall W., Jr.

    1990-01-01

    HyperCLIPS combines the intuitive, interactive user interface of the Apple Macintosh(TM) with the powerful symbolic computation of an expert system interpreter. HyperCard(TM) is an excellent environment for quickly developing the front end of an application with buttons, dialogs, and pictures, while the CLIPS interpreter provides a powerful inference engine for complex problem solving and analysis. By integrating HyperCard and CLIPS the advantages and uses of both packages are made available for a wide range of uses: rapid prototyping of knowledge-based expert systems, interactive simulations of physical systems, and intelligent control of hypertext processes, to name a few. Interfacing HyperCard and CLIPS is natural. HyperCard was designed to be extended through the use of external commands (XCMDs), and CLIPS was designed to be embedded through the use of the I/O router facilities and callable interface routines. With the exception of some technical difficulties which will be discussed later, HyperCLIPS implements this interface in a straightforward manner, using the facilities provided. An XCMD called 'ClipsX' was added to HyperCard to give access to the CLIPS routines: clear, load, reset, and run. And an I/O router was added to CLIPS to handle the communication of data between CLIPS and HyperCard.

  17. Legal drug content in music video programs shown on Australian television on Saturday mornings.

    Science.gov (United States)

    Johnson, Rebecca; Croager, Emma; Pratt, Iain S; Khoo, Natalie

    2013-01-01

    To examine the extent to which legal drug references (alcohol and tobacco) are present in the music video clips shown on two music video programs broadcast in Australia on Saturday mornings. Further, to examine the music genres in which the references appeared and the dominant messages associated with the references. Music video clips shown on the music video programs 'Rage' (ABC TV) and [V] 'Music Video Chart' (Channel [V]) were viewed over 8 weeks from August 2011 to October 2011 and the number of clips containing verbal and/or visual drug references in each program was counted. The songs were classified by genre and the dominant messages associated with drug references were also classified and analysed. A considerable proportion of music videos (approximately one-third) contained drug references. Alcohol featured in 95% of the music videos that contained drug references. References to alcohol generally associated it with fun and humour, and alcohol and tobacco were both overwhelmingly presented in contexts that encouraged, rather than discouraged, their use. In Australia, Saturday morning is generally considered a children's television viewing timeslot, and several broadcaster Codes of Practice dictate that programs shown on Saturday mornings must be appropriate for viewing by audiences of all ages. Despite this, our findings show that music video programs aired on Saturday mornings contain a considerable level of drug-related content.

  18. RST-Resilient Video Watermarking Using Scene-Based Feature Extraction

    OpenAIRE

    Jung Han-Seung; Lee Young-Yoon; Lee Sang Uk

    2004-01-01

    Watermarking for video sequences should consider additional attacks, such as frame averaging, frame-rate change, frame shuffling or collusion attacks, as well as those of still images. Also, since video is a sequence of analogous images, video watermarking is subject to interframe collusion. In order to cope with these attacks, we propose a scene-based temporal watermarking algorithm. In each scene, segmented by scene-change detection schemes, a watermark is embedded temporally to one-dimens...

  19. Tracking of Moving Objects in Video Through Invariant Features in Their Graph Representation

    Directory of Open Access Journals (Sweden)

    Averbuch A

    2008-01-01

    The paper suggests a contour-based algorithm for tracking moving objects in video. The inputs are segmented moving objects. Each segmented frame is transformed into region adjacency graphs (RAGs). The object's contour is divided into subcurves. Contour junctions are derived. These junctions are the unique "signature" of the tracked object. Junctions from two consecutive frames are matched. The junctions' motion is estimated using RAG edges in consecutive frames. Each pair of matched junctions may be connected by several paths (edges) that become candidates to represent a tracked contour. These paths are obtained by the k-shortest paths algorithm between two nodes. The RAG is transformed into a weighted directed graph. The final tracked contour construction is derived by a match between edges (subcurves) and candidate path sets. The RAG constructs the tracked contour, which enables an accurate and unique moving object representation. The algorithm tracks multiple objects, partially covered (occluded) objects, and compound objects that merge/split, such as players in a soccer game, and supports tracking in crowded areas for surveillance applications. We assume that the features of the topological signature of the tracked object stay invariant in two consecutive frames. The algorithm's complexity depends on the RAG's edges and not on the image's size.

  20. The Effect of Typographical Features of Subtitles on Nonnative English Viewers’ Retention and Recall of Lyrics in English Music Videos

    OpenAIRE

    Farshid Tayari Ashtiani

    2017-01-01

    The goal of this study was to test the effect of typographical features of subtitles including size, color and position on nonnative English viewers’ retention and recall of lyrics in music videos. To do so, the researcher played a simple subtitled music video for the participants at the beginning of their classes, and administered a 31-blank cloze test from the lyrics at the end of the classes. In the second test, the control group went through the same procedure but experimental group watch...

  1. Fast Mode Decision in the HEVC Video Coding Standard by Exploiting Region with Dominated Motion and Saliency Features.

    Science.gov (United States)

    Podder, Pallab Kanti; Paul, Manoranjan; Murshed, Manzur

    2016-01-01

    The emerging High Efficiency Video Coding (HEVC) standard introduces a number of innovative and powerful coding tools to achieve better compression efficiency than its predecessor H.264. The encoding time complexity, however, has also increased several-fold, which is not suitable for real-time video coding applications. To address this limitation, this paper employs a novel coding strategy to reduce the time complexity of the HEVC encoder by efficient selection of appropriate block-partitioning modes based on human visual features (HVF). The HVF in the proposed technique comprise a human visual attention modelling-based saliency feature and phase correlation-based motion features. The features are innovatively combined through a fusion process by developing a content-based adaptive weighted cost function to determine the region with dominated motion/saliency (RDMS)-based binary pattern for the current block. The generated binary pattern is then compared with a codebook of predefined binary pattern templates aligned to the HEVC-recommended block-partitioning to estimate a subset of inter-prediction modes. Without exhaustive exploration of all modes available in the HEVC standard, only the selected subset of modes is motion estimated and motion compensated for a particular coding unit. The experimental evaluation reveals that the proposed technique notably reduces the average computational time of the latest HEVC reference encoder by 34% while providing similar rate-distortion (RD) performance for a wide range of video sequences.
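
    The fusion step described above can be pictured, very roughly, as follows: a saliency map and a motion-magnitude map for one coding block are combined with an adaptive weight, thresholded into a binary pattern, and matched against a small codebook of partition templates by Hamming distance. The maps, weighting rule, and templates are placeholder assumptions rather than the paper's actual HEVC-integrated implementation.

    ```python
    # Hypothetical sketch: fuse saliency and motion maps into a binary pattern
    # and pick the closest block-partitioning template by Hamming distance.
    import numpy as np

    rng = np.random.default_rng(0)
    ctu = 8                                    # toy 8x8 grid standing in for a 64x64 CTU

    saliency = rng.random((ctu, ctu))          # visual-attention map (placeholder)
    motion = rng.random((ctu, ctu))            # phase-correlation motion magnitude (placeholder)

    # Content-adaptive weight: give more weight to whichever cue varies more here.
    w = motion.std() / (motion.std() + saliency.std() + 1e-9)
    fused = w * motion + (1.0 - w) * saliency
    pattern = (fused > fused.mean()).astype(np.uint8)     # binary RDMS-style pattern

    # Tiny codebook of partition templates (e.g. left half, top half, whole block).
    templates = {
        "split_vertical": np.tile([1, 1, 1, 1, 0, 0, 0, 0], (ctu, 1)),
        "split_horizontal": np.repeat([[1]] * 4 + [[0]] * 4, ctu, axis=1),
        "no_split": np.ones((ctu, ctu), dtype=np.uint8),
    }

    best = min(templates, key=lambda k: np.count_nonzero(templates[k] != pattern))
    print("closest template:", best)   # only modes tied to this template would be evaluated
    ```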

  2. Music Video: An Analysis at Three Levels.

    Science.gov (United States)

    Burns, Gary

    This paper is an analysis of the different aspects of the music video. Music video is defined as having three meanings: an individual clip, a format, or the "aesthetic" that describes what the clips and format look like. The paper examines interruptions, the dialectical tension and the organization of the work of art, shot-scene…

  3. An Automatic Gastrointestinal Polyp Detection System in Video Endoscopy Using Fusion of Color Wavelet and Convolutional Neural Network Features.

    Science.gov (United States)

    Billah, Mustain; Waheed, Sajjad; Rahman, Mohammad Motiur

    2017-01-01

    Gastrointestinal polyps are considered to be the precursors of cancer development in most cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most commonly used diagnostic modality for gastrointestinal polyps, but because it is an operator-dependent procedure, several human factors can lead to missed polyps. Computer-aided polyp detection can reduce the polyp miss-detection rate and assist doctors in finding the regions most in need of attention. In this paper, an automatic system has been proposed to support gastrointestinal polyp detection. The system captures video streams from endoscopic video and, as output, shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms state-of-the-art methods, achieving accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%.
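
    The feature-fusion step can be sketched as below: per-frame color-wavelet statistics and CNN-derived features are concatenated and fed to a linear SVM. The feature arrays here are random placeholders; the wavelet family, CNN architecture, and preprocessing used by the authors are not reproduced.

    ```python
    # Hypothetical sketch: concatenate color-wavelet and CNN feature vectors
    # per frame and train a linear SVM polyp/non-polyp classifier.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_frames = 300
    cw_features = rng.normal(size=(n_frames, 48))    # stand-in for color-wavelet statistics
    cnn_features = rng.normal(size=(n_frames, 256))  # stand-in for CNN activations
    labels = rng.integers(0, 2, size=n_frames)       # 1 = polyp frame, 0 = normal frame

    fused = np.hstack([cw_features, cnn_features])   # simple early fusion by concatenation

    clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
    scores = cross_val_score(clf, fused, labels, cv=5)
    print("cross-validated accuracy:", scores.mean())
    ```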

  4. High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images

    Science.gov (United States)

    Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2006-10-01

    Information processing and communication technology are progressing quickly and are prevailing throughout various technological fields. The development of such technology should therefore respond to the need to improve quality in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously employs the features of the lecturing scene. While the dynamic lecturing scene is shot by a digital video camera, screen images are stored electronically by PC screen-capture software at relatively long intervals during an actual class. A lecturer and a lecture stick are then extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. In this way, we have succeeded in creating a high-quality and small-capacity (HQ/SC) video-on-demand educational content featuring high image sharpness, small electronic file size, and realistic lecturer motion.

  5. An Automatic Gastrointestinal Polyp Detection System in Video Endoscopy Using Fusion of Color Wavelet and Convolutional Neural Network Features

    Science.gov (United States)

    Waheed, Sajjad; Rahman, Mohammad Motiur

    2017-01-01

    Gastrointestinal polyps are considered to be the precursors of cancer development in most cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most commonly used diagnostic modality for gastrointestinal polyps, but because it is an operator-dependent procedure, several human factors can lead to missed polyps. Computer-aided polyp detection can reduce the polyp miss-detection rate and assist doctors in finding the regions most in need of attention. In this paper, an automatic system has been proposed to support gastrointestinal polyp detection. The system captures video streams from endoscopic video and, as output, shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms state-of-the-art methods, achieving accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%. PMID:28894460

  6. An Automatic Gastrointestinal Polyp Detection System in Video Endoscopy Using Fusion of Color Wavelet and Convolutional Neural Network Features

    Directory of Open Access Journals (Sweden)

    Mustain Billah

    2017-01-01

    Gastrointestinal polyps are considered to be the precursors of cancer development in most cases. Therefore, early detection and removal of polyps can reduce the possibility of cancer. Video endoscopy is the most commonly used diagnostic modality for gastrointestinal polyps, but because it is an operator-dependent procedure, several human factors can lead to missed polyps. Computer-aided polyp detection can reduce the polyp miss-detection rate and assist doctors in finding the regions most in need of attention. In this paper, an automatic system has been proposed to support gastrointestinal polyp detection. The system captures video streams from endoscopic video and, as output, shows the identified polyps. Color wavelet (CW) features and convolutional neural network (CNN) features of video frames are extracted and combined to train a linear support vector machine (SVM). Evaluations on standard public databases show that the proposed system outperforms state-of-the-art methods, achieving accuracy of 98.65%, sensitivity of 98.79%, and specificity of 98.52%.

  7. Motion Entropy Feature and Its Applications to Event-Based Segmentation of Sports Video

    Science.gov (United States)

    Chen, Chen-Yu; Wang, Jia-Ching; Wang, Jhing-Fa; Hu, Yu-Hen

    2008-12-01

    An entropy-based criterion is proposed to characterize the pattern and intensity of object motion in a video sequence as a function of time. By applying a homoscedastic error model-based time series change point detection algorithm to this motion entropy curve, one is able to segment the corresponding video sequence into individual sections, each consisting of a semantically relevant event. The proposed method is tested on six hours of sports videos including basketball, soccer, and tennis. Excellent experimental results are observed.
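
    One way to picture the motion-entropy curve (a sketch under assumptions, not the authors' exact formulation): estimate dense optical flow between consecutive frames with OpenCV, histogram the flow directions weighted by magnitude, and take the Shannon entropy of that histogram for each frame. Change-point detection on the resulting curve would then suggest event boundaries. The video path and histogram settings are placeholders.

    ```python
    # Hypothetical sketch: per-frame motion entropy from dense optical flow.
    import cv2
    import numpy as np

    def motion_entropy_curve(video_path, bins=16):
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        curve = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            # Direction histogram weighted by magnitude, then Shannon entropy.
            hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
            p = hist / (hist.sum() + 1e-12)
            entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
            curve.append(entropy)
            prev_gray = gray
        cap.release()
        return np.array(curve)

    # curve = motion_entropy_curve("match.mp4")   # change-point detection would follow
    ```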

  8. Modeling Timbre Similarity of Short Music Clips.

    Science.gov (United States)

    Siedenburg, Kai; Müllensiefen, Daniel

    2017-01-01

    There is evidence from a number of recent studies that most listeners are able to extract information related to song identity, emotion, or genre from music excerpts with durations in the range of tenths of seconds. Because of these very short durations, timbre as a multifaceted auditory attribute appears to be a plausible candidate for the type of features that listeners make use of when processing short music excerpts. However, the importance of timbre in listening tasks that involve short excerpts has not yet been demonstrated empirically. Hence, the goal of this study was to develop a method that allows exploring to what degree similarity judgments of short music clips can be modeled with low-level acoustic features related to timbre. We utilized similarity data from two large samples of participants: Sample I was obtained via an online survey, used 16 clips of 400 ms length, and contained responses of 137,339 participants. Sample II was collected in a lab environment, used 16 clips of 800 ms length, and contained responses from 648 participants. Our model used two sets of audio features, which included commonly used timbre descriptors and the well-known Mel-frequency cepstral coefficients as well as their temporal derivatives. In order to predict pairwise similarities, the resulting distances between clips in terms of their audio features were used as predictor variables with partial least-squares regression. We found that a sparse selection of three to seven features from both descriptor sets, mainly encoding the coarse shape of the spectrum as well as spectrotemporal variability, best predicted similarities across the two sets of sounds. Notably, the inclusion of non-acoustic predictors of musical genre and record release date allowed much better generalization performance and explained up to 50% of shared variance (R(2)) between observations and model predictions. Overall, the results of this study empirically demonstrate that both acoustic features related to
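
    A compressed sketch of such a pipeline, under stated assumptions: compute MFCCs and their temporal differences for each short clip, turn pairwise feature distances into predictors, and fit a partial least-squares regression against similarity ratings. The synthetic clips and the random rating vector below are placeholders for the real stimuli and participant data.

    ```python
    # Hypothetical sketch: predict pairwise clip similarity from MFCC-based distances
    # using partial least-squares regression (synthetic clips stand in for real stimuli).
    import itertools
    import numpy as np
    import librosa
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    sr = 22050

    def synth_clip(freq, dur=0.8):
        """Placeholder 800 ms clip: a noisy sine tone instead of a real music excerpt."""
        t = np.linspace(0, dur, int(sr * dur), endpoint=False)
        return np.sin(2 * np.pi * freq * t) + 0.1 * rng.normal(size=t.size)

    clips = [synth_clip(220.0 * (1 + i / 8)) for i in range(16)]

    def clip_descriptor(y):
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
        delta = librosa.feature.delta(mfcc)                  # temporal derivatives
        feats = np.vstack([mfcc, delta])
        return np.concatenate([feats.mean(axis=1), feats.std(axis=1)])

    descriptors = np.array([clip_descriptor(c) for c in clips])

    pairs = list(itertools.combinations(range(len(clips)), 2))
    # One predictor per feature dimension: absolute difference between the two clips.
    X = np.array([np.abs(descriptors[i] - descriptors[j]) for i, j in pairs])
    ratings = rng.random(len(pairs))                         # placeholder similarity ratings

    pls = PLSRegression(n_components=5)
    pls.fit(X, ratings)
    pred = pls.predict(X).ravel()
    r2 = 1 - np.sum((ratings - pred) ** 2) / np.sum((ratings - ratings.mean()) ** 2)
    print("R^2 on training pairs:", round(r2, 3))
    ```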

  9. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction

    National Research Council Canada - National Science Library

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    .... Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer...

  10. CERN Video News on line

    CERN Multimedia

    2003-01-01

    The latest CERN video news is on line. In this issue : an interview with the Director General and reports on the new home for the DELPHI barrel and the CERN firemen's spectacular training programme. There's also a vintage video news clip from 1954. See: www.cern.ch/video or Bulletin web page

  11. Mounting clips for panel installation

    Science.gov (United States)

    Cavieres, Andres; Al-Haddad, Tristan; Goodman, Joseph; Valdes, Francisco

    2017-02-14

    An exemplary mounting clip for removably attaching panels to a supporting structure comprises a base, spring locking clips, a lateral flange, a lever flange, and a spring bonding pad. The spring locking clips extend upwardly from the base. The lateral flange extends upwardly from a first side of the base. The lateral flange comprises a slot having an opening configured to receive at least a portion of one of the one or more panels. The lever flange extends outwardly from the lateral flange. The spring bonding flange extends downwardly from the lever flange. At least a portion of the first spring bonding flange comprises a serrated edge for gouging at least a portion of the one or more panels when the one or more panels are attached to the mounting clip to electrically and mechanically couple the one or more panels to the mounting clip.

  12. The Effect of Typographical Features of Subtitles on Nonnative English Viewers’ Retention and Recall of Lyrics in English Music Videos

    Directory of Open Access Journals (Sweden)

    Farshid Tayari Ashtiani

    2017-10-01

    The goal of this study was to test the effect of typographical features of subtitles, including size, color and position, on nonnative English viewers’ retention and recall of lyrics in music videos. To do so, the researcher played a simple subtitled music video for the participants at the beginning of their classes and administered a 31-blank cloze test from the lyrics at the end of the classes. In the second test, the control group went through the same procedure, but the experimental group watched the customized subtitled version of the music video. The results demonstrated no significant difference between the two groups in the first test, but in the second the scores increased remarkably in the experimental group, indicating better retention and recall. This study has implications for English language teachers and material developers, who can use customized bimodal subtitles as a mnemonic tool for better comprehension, retention and recall of aural content in videos via a Computer Assisted Language Teaching approach.

  13. CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (MACINTOSH VERSION)

    Science.gov (United States)

    Culbert, C.

    1994-01-01

    The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches the number of fields. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. The PC version and the Macintosh

  14. CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION)

    Science.gov (United States)

    Riley, G.

    1994-01-01

    The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches the number of fields. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. The PC version and the Macintosh

  15. Word2VisualVec: Image and Video to Sentence Matching by Visual Feature Prediction

    OpenAIRE

    Dong, Jianfeng; Li, Xirong; Snoek, Cees G. M.

    2016-01-01

    This paper strives to find the sentence best describing the content of an image or video. Different from existing works, which rely on a joint subspace for image / video to sentence matching, we propose to do so in a visual space only. We contribute Word2VisualVec, a deep neural network architecture that learns to predict a deep visual encoding of textual input based on sentence vectorization and a multi-layer perceptron. We thoroughly analyze its architectural design, by varying the sentence...

  16. Learning Computational Models of Video Memorability from fMRI Brain Imaging.

    Science.gov (United States)

    Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming

    2015-08-01

    Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained when they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, due to the fact that fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.
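
    The correlation-maximizing joint subspace can be pictured with canonical correlation analysis as a generic stand-in for the paper's learning objective: project audiovisual and fMRI features into maximally correlated subspaces, then predict memorability from the audiovisual projection alone at test time. All matrices below are random placeholders.

    ```python
    # Hypothetical sketch: CCA as a stand-in for correlation-maximizing joint
    # subspace learning between audiovisual and fMRI-derived features.
    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_videos = 120
    av = rng.normal(size=(n_videos, 40))       # low-level audiovisual features (placeholder)
    fmri = rng.normal(size=(n_videos, 25))     # fMRI-derived features (placeholder)
    memorability = rng.random(n_videos)        # ground-truth memorability scores (placeholder)

    cca = CCA(n_components=5)
    av_proj, fmri_proj = cca.fit_transform(av, fmri)   # maximally correlated projections

    # At test time only audiovisual features are available: predict memorability
    # from the audiovisual projection learned jointly with the brain data.
    model = Ridge(alpha=1.0).fit(av_proj, memorability)
    new_av = rng.normal(size=(1, 40))
    new_proj = cca.transform(new_av)                   # project without an fMRI scan
    print("predicted memorability:", model.predict(new_proj))
    ```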

  17. Detection of Double-Compressed H.264/AVC Video Incorporating the Features of the String of Data Bits and Skip Macroblocks

    Directory of Open Access Journals (Sweden)

    Heng Yao

    2017-12-01

    Today’s H.264/AVC-coded videos have high quality and a high data-compression ratio. They also have strong fault tolerance and better network adaptability, and have been widely applied on the Internet. With the popularity of powerful and easy-to-use video editing software, digital videos can be tampered with in various ways. Therefore, detecting double compression in H.264/AVC video can be used as a first step in the study of video-tampering forensics. This paper proposes a simple but effective double-compression detection method that analyzes the periodic features of the string of data bits (SODBs) and the skip macroblocks (S-MBs) for all I-frames and P-frames in a double-compressed H.264/AVC video. For a given suspicious video, the SODBs and S-MBs are extracted for each frame. Both features are then incorporated to generate one enhanced feature to represent the periodic artifact of the double-compressed video. Finally, a time-domain analysis is conducted to detect the periodicity of the features. The primary Group of Pictures (GOP) size is estimated based on an exhaustive strategy. The experimental results demonstrate the efficacy of the proposed method.
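
    A toy illustration of the periodicity analysis only (the actual SODB and S-MB extraction requires an H.264/AVC bitstream parser and is not shown): two per-frame feature sequences are fused into one enhanced sequence, and candidate GOP sizes are searched exhaustively for the strongest periodic autocorrelation. The simulated spikes stand in for real recompression artifacts.

    ```python
    # Hypothetical sketch: estimate the primary GOP size from the periodicity of a
    # fused per-frame feature sequence (simulated values, not real SODB/S-MB data).
    import numpy as np

    rng = np.random.default_rng(0)
    n_frames, true_gop = 300, 12

    # Simulate two per-frame features that spike every true_gop frames after recompression.
    spikes = (np.arange(n_frames) % true_gop == 0).astype(float)
    sodb_len = spikes * 3.0 + rng.normal(0, 0.3, n_frames)     # string-of-data-bits lengths
    skip_ratio = spikes * 0.5 + rng.normal(0, 0.1, n_frames)   # skip-macroblock ratios

    def zscore(x):
        return (x - x.mean()) / (x.std() + 1e-12)

    enhanced = zscore(sodb_len) + zscore(skip_ratio)           # simple feature fusion

    def period_strength(signal, period):
        """Mean autocorrelation of the signal at lags that are multiples of `period`."""
        s = zscore(signal)
        lags = range(period, len(s) // 2, period)
        return float(np.mean([np.mean(s[:-lag] * s[lag:]) for lag in lags]))

    candidates = range(2, 33)                                  # exhaustive GOP-size search
    strengths = {p: period_strength(enhanced, p) for p in candidates}
    best = max(strengths.values())
    estimated = min(p for p, s in strengths.items() if s >= 0.95 * best)
    print("estimated primary GOP size:", estimated)            # expected: 12
    ```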

  18. Mograph Cinema 4d untuk Menunjang Efek Visual Video Klip

    Directory of Open Access Journals (Sweden)

    Ardiyan Ardiyan

    2010-10-01

    This research discusses the advantages of MoGraph, a key feature of the 3D modeling application Cinema 4D, using the Cinta Laura video clip as an implemented example. MoGraph's main advantage is the ability to create coordinated and/or randomized motion effects for multiple objects easily and efficiently, supported by Cinema 4D's clean and relatively fast rendering. These strengths make MoGraph in Cinema 4D well suited to enriching the visual effects of a motion graphics work, and this quality is expected to encourage more creative use of MoGraph. Since today's visual variety is shaped by developments in digital technology, the implementation of MoGraph in Cinema 4D is expected to optimally support creativity in producing video clips with motion graphic art content.

  19. Repurposing Video Documentaries as Features of a Flipped-Classroom Approach to Community-Centered Development

    Science.gov (United States)

    Arbogast, Douglas; Eades, Daniel; Plein, L. Christopher

    2017-01-01

    Online and off-site educational programming is increasingly incorporated by Extension educators to reach their clientele. Models such as the flipped classroom combine online content and in-person learning, allowing clients to both gain information and build peer learning communities. We demonstrate how video documentaries used in traditional…

  20. Proliferative and necrotising otitis externa in a cat without pinnal involvement: video-otoscopic features.

    Science.gov (United States)

    Borio, Stefano; Massari, Federico; Abramo, Francesca; Colombo, Silvia

    2013-04-01

    Proliferative and necrotising otitis externa is a rare and recently described disease affecting the ear canals and concave pinnae of kittens. This article describes a case of proliferative and necrotising otitis externa in a young adult cat. In this case, the lesions did not affect the pinnae, but both ear canals were severely involved. Video-otoscopy revealed a digitally proliferative lesion, growing at 360° all around the ear canals for their entire length, without involvement of the middle ear. Histopathological examination confirmed the diagnosis, and the cat responded completely to once-daily application of 0.1% tacrolimus ointment diluted in mineral oil in the ear canals. Video-otoscopy findings, not described previously, were very peculiar and may help clinicians diagnose this rare disease.

  1. Mounting clips for panel installation

    Energy Technology Data Exchange (ETDEWEB)

    Cavieres, Andres; Al-Haddad, Tristan; Goodman, Joseph

    2017-07-11

    A photovoltaic panel mounting clip comprising a base, central indexing tabs, flanges, lateral indexing tabs, and vertical indexing tabs. The mounting clip removably attaches one or more panels to a beam or the like structure, both mechanically and electrically. It provides secure locking of the panels in all directions, while providing guidance in all directions for accurate installation of the panels to the beam or the like structure.

  2. Prototyping user displays using CLIPS

    Science.gov (United States)

    Kosta, Charles P.; Miller, Ross; Krolak, Patrick; Vesty, Matt

    1990-01-01

    CLIPS is being used as an integral module of a rapid prototyping system. The prototyping system consists of a display manager for object browsing, a graph program for displaying line and bar charts, and a communications server for routing messages between modules. A CLIPS simulation of a physical model provides dynamic control of the user's display. Currently, a project is well underway to prototype the Advanced Automation System (AAS) for the Federal Aviation Administration.

  3. 21 CFR 882.4150 - Scalp clip.

    Science.gov (United States)

    2010-04-01

    Title 21 (Food and Drugs), Neurological Devices, Neurological Surgical Devices, § 882.4150 Scalp clip. (a) Identification. A scalp clip is a plastic or metal clip used to stop bleeding during surgery on the scalp. (b) Classification. Class II...

  4. Video special effects editing in MPEG-2 compressed video

    OpenAIRE

    Fernando, WAC; Canagarajah, CN; Bull, David

    2000-01-01

    With the increase of digital technology in video production, several types of complex video special effects editing have begun to appear in video clips. In this paper we consider fade-out and fade-in special effects editing in MPEG-2 compressed video without full frame decompression and motion estimation. We estimate the DCT coefficients and use them together with the existing motion vectors to produce these special effects in the compressed domain. Results show that both o...

  5. Evaluation and development of a novel binocular treatment (I-BiT™) system using video clips and interactive games to improve vision in children with amblyopia ('lazy eye'): study protocol for a randomised controlled trial

    National Research Council Canada - National Science Library

    Foss, Alexander J; Gregson, Richard M; MacKeith, Daisy; Herbison, Nicola; Ash, Isabel M; Cobb, Sue V; Eastgate, Richard M; Hepburn, Trish; Vivian, Anthony; Moore, Diane; Haworth, Stephen M

    2013-01-01

    .... This treatment is unpopular and compliance is often low. Therefore results can be poor. A novel binocular treatment which uses 3D technology to present specially developed computer games and video footage (I-BiT...

  6. A Snapshot of the Depiction of Electronic Cigarettes in YouTube Videos.

    Science.gov (United States)

    Romito, Laura M; Hurwich, Risa A; Eckert, George J

    2015-11-01

    To assess the depiction of e-cigarettes in YouTube videos. The sample (N = 63) was selected from the top 20 search results for "electronic cigarette," and "e-cig" with each term searched twice by the filters "Relevance" and "View Count." Data collected included title, length, number of views, "likes," "dislikes," comments, and inferred demographics of individuals appearing in the videos. Seventy-six percent of videos included at least one man, 62% included a Caucasian, and 50% included at least one young individual. Video content connotation was coded as positive (76%), neutral (18%), or negative (6%). Videos were categorized as advertisement (33%), instructional (17%), news clip (19%), product review (13%), entertainment (11%), public health (3%), and personal testimonial (3%). Most e-cigarette YouTube videos are non-traditional or covert advertisements featuring young Caucasian men.

  7. Can interface features affect aggression resulting from violent video game play? An examination of realistic controller and large screen size.

    Science.gov (United States)

    Kim, Ki Joon; Sundar, S Shyam

    2013-05-01

    Aggressiveness attributed to violent video game play is typically studied as a function of the content features of the game. However, can interface features of the game also affect aggression? Guided by the General Aggression Model (GAM), we examine the controller type (gun replica vs. mouse) and screen size (large vs. small) as key technological aspects that may affect the state aggression of gamers, with spatial presence and arousal as potential mediators. Results from a between-subjects experiment showed that a realistic controller and a large screen display induced greater aggression, presence, and arousal than a conventional mouse and a small screen display, respectively, and confirmed that trait aggression was a significant predictor of gamers' state aggression. Contrary to GAM, however, arousal showed no effects on aggression; instead, presence emerged as a significant mediator.

  8. Fusion of visual and audio features for person identification in real video

    Science.gov (United States)

    Li, Dongge; Wei, Gang; Sethi, Ishwar K.; Dimitrova, Nevenka

    2001-01-01

    In this research, we studied the joint use of visual and audio information for the problem of identifying persons in real video. A person identification system, which is able to identify characters in TV shows by the fusion of audio and visual information, is constructed based on two different fusion strategies. In the first strategy, speaker identification is used to verify the face recognition result. The second strategy consists of using face recognition and tracking to supplement speaker identification results. To evaluate our system's performance, an information database was generated by manually labeling the speaker and the main person's face in every I-frame of a video segment of the TV show 'Seinfeld'. By comparing the output from our system with our information database, we evaluated the performance of each of the analysis channels and their fusion. The results show that the first fusion strategy is suitable for applications where precision is much more critical than recall. The second fusion strategy, on the other hand, generates the best overall identification performance: it greatly outperforms either of the analysis channels in both precision and recall and is applicable to more general applications, such as, in our case, identifying persons in TV programs.

  9. Concept of the central clip

    DEFF Research Database (Denmark)

    Alegria-Barrero, Eduardo; Chan, Pak Hei; Foin, Nicolas

    2014-01-01

    AIMS: Percutaneous edge-to-edge mitral valve repair with the MitraClip(®) was shown to be a safe and feasible alternative compared to conventional surgical mitral valve repair. We analyse the concept of the central clip and the predictors for the need of more than one MitraClip(®) in our high...... with transoesophageal echocardiographic (TOE) guidance. Device success was defined as placement of one or more MitraClips(®) with a reduction of MR to ≤2+. Patients were followed up clinically and with TOE at one month and one year. From September 2009 to March 2012, 43 patients with severe MR with a mean age of 74.......8±10.7 years (30 males, 13 females; mean logistic EuroSCORE 24.1±11, mean LVEF 47.5±18.5%; mean±SD) were treated. Median follow-up was 385 days (104-630; Q1-Q3). Device implantation success was 93%. All patients were treated following the central clip concept: 52.5% of MR was degenerative in aetiology and 47...

  10. SnapVideo: Personalized Video Generation for a Sightseeing Trip.

    Science.gov (United States)

    Zhang, Luming; Jing, Peiguang; Su, Yuting; Zhang, Chao; Shaoz, Ling

    2017-11-01

    Leisure tourism is an indispensable activity in urban people's lives. Due to the popularity of intelligent mobile devices, a large number of photos and videos are recorded during a trip. Therefore, the ability to vividly and interestingly display these media data is a useful technique. In this paper, we propose SnapVideo, a new method that intelligently converts a personal album describing a trip into a comprehensive, aesthetically pleasing, and coherent video clip. The proposed framework contains three main components. The scenic spot identification model first personalizes the video clips based on multiple prespecified audience classes. We then search for auxiliary related videos from YouTube (https://www.youtube.com/) according to the selected photos. To comprehensively describe a scenery, the view generation module clusters the crawled video frames into a number of views. Finally, a probabilistic model is developed to fit the frames from multiple views into an aesthetically pleasing and coherent video clip, which optimally captures the semantics of a sightseeing trip. Extensive user studies demonstrated the competitiveness of our method from an aesthetic point of view. Moreover, quantitative analysis reflects that semantically important spots are well preserved in the final video clip.

  11. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    Science.gov (United States)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.
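
    The intra/inter coding choice for binary descriptors can be sketched as follows, under simplifying assumptions: a descriptor is predicted from its match in the previous frame by XOR, and the (ideally sparse) residual is sent when that is cheaper than sending the raw descriptor. The descriptor length, sparsity threshold, and payload format are placeholders; no entropy coder is included.

    ```python
    # Hypothetical sketch: inter-frame coding of a binary descriptor by XOR with a
    # reference descriptor; fall back to intra coding when the residual is not sparse.
    import numpy as np

    rng = np.random.default_rng(0)
    D = 256                                        # descriptor length in bits
    prev = rng.integers(0, 2, D, dtype=np.uint8)   # descriptor from the previous frame
    curr = prev.copy()
    curr[rng.choice(D, 10, replace=False)] ^= 1    # the tracked feature changed slightly

    residual = prev ^ curr                         # inter-frame prediction residual
    if residual.sum() < D // 4:                    # sparse residual -> cheap to entropy-code
        mode, payload = "inter", np.flatnonzero(residual)   # send flipped-bit positions
    else:
        mode, payload = "intra", curr                        # send the raw descriptor
    print(mode, "payload size:", payload.size)
    ```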

  12. Designing Socially-Aware Video Exploration Interfaces: A Case Study using School Concert Assets

    NARCIS (Netherlands)

    D.C. Pedrosa; R.L. Guimarães (Rodrigo); P.S. Cesar Garcia (Pablo Santiago); D.C.A. Bulterman (Dick)

    2013-01-01

    Online video sharing systems, such as YouTube, do not provide users enough support to explore community videos that portray people within their social circle. Such services typically consider each video clip as an isolated object, and not as part of a set of related clips. Even though

  13. 76 FR 171 - Paper Clips From China

    Science.gov (United States)

    2011-01-03

    ... COMMISSION Paper Clips From China AGENCY: United States International Trade Commission. ACTION: Institution of a five-year review concerning the antidumping duty order on paper clips from China. SUMMARY: The... on paper clips from China would be likely to lead to continuation or recurrence of material injury...

  14. 76 FR 42730 - Paper Clips From China

    Science.gov (United States)

    2011-07-19

    ... COMMISSION Paper Clips From China Determination On the basis of the record \\1\\ developed in the subject five... order on paper clips from China would be likely to lead to continuation or recurrence of material injury... Publication 4242 (July 2011), entitled Paper Clips from China: Investigation No. 731-TA-663 (Third Review). By...

  15. Preserving Sharp Edges with Volume Clipping

    NARCIS (Netherlands)

    Termeer, M.A.; Oliván Bescós, J.; Telea, A.C.

    2006-01-01

    Volume clipping is a useful aid for exploring volumetric datasets. To maximize the effectiveness of this technique, the clipping geometry should be flexibly specified and the resulting images should not contain artifacts due to the clipping techniques. We present an improvement to an existing

  16. Evaluation of stereoscopic medical video content on an autostereoscopic display for undergraduate medical education

    Science.gov (United States)

    Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin

    2006-02-01

    Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally-invasive fashion. However, the performance of surgery, its possibilities and limitations become difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams with standard and HDTV resolution which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses and laser chordectomy for carcinoma of the larynx). This material was supplemented with 23 clips of a cochlear implantation, which were specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of left and right image was performed at the University Hospital Aachen. The footage was edited stereoscopically at the Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. Then the material was converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips in a file format that does not depend on a television signal such as PAL or NTSC. Twenty-five 4th-year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth clues within the six video clips plus the cochlear implantation clips. Another 25 4th-year students who were shown the material monoscopically on a conventional laptop served as controls. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course. The monoscopic

  17. The 15 March 2007 paroxysm of Stromboli: video-image analysis, and textural and compositional features of the erupted deposit

    Science.gov (United States)

    Andronico, Daniele; Taddeucci, Jacopo; Cristaldi, Antonio; Miraglia, Lucia; Scarlato, Piergiorgio; Gaeta, Mario

    2013-07-01

    On 15 March 2007, a paroxysmal event occurred within the crater terrace of Stromboli, in the Aeolian Islands (Italy). Infrared and visible video recordings from the monitoring network reveal that there was a succession of highly explosive pulses, lasting about 5 min, from at least four eruptive vents. Initially, brief jets with low apparent temperature were simultaneously erupted from the three main vent regions, becoming hotter and transitioning to bomb-rich fountaining that lasted for 14 s. Field surveys estimate the corresponding fallout deposit to have a mass of ~1.9 × 10^7 kg that, coupled with the video information on eruption duration, provides a mean mass eruption rate of ~5.4 × 10^5 kg/s. Textural and chemical analyses of the erupted tephra reveal unexpected complexity, with grain-size bimodality in the samples associated with the different percentages of ash types (juvenile, lithics, and crystals) that reflects almost simultaneous deposition from multiple and evolving plumes. Juvenile glass chemistry ranges from a gas-rich, low porphyricity end member (typical of other paroxysmal events) to a gas-poor high porphyricity one usually associated with low-intensity Strombolian explosions. Integration of our diverse data sets reveals that (1) the 2007 event was a paroxysmal explosion driven by a magma sharing common features with large-scale paroxysms as well as with "ordinary" Strombolian explosions; (2) initial vent opening by the release of a pressurized gas slug and subsequent rapid magma vesiculation and ejection, which were recorded both by the infrared camera and in the texture of fallout products; and (3) lesser paroxysmal events can be highly dynamic and produce surprisingly complex fallout deposits, which would be difficult to interpret from the geological record alone.

  18. A Qualitative Inquiry into the Complex Features of Strained Interactions: Analysis and Implications for Health Care Personnel.

    Science.gov (United States)

    Thunborg, Charlotta; Salzmann-Erikson, Martin

    2017-01-01

    Communication skills are vital for successful relationships between patients and health care professionals. Failure to communicate may lead to a lack of understanding and may result in strained interactions. Our theoretical point of departure was to make use of chaos and complexity theories. To examine the features of strained interactions and to discuss their relevance for health care settings. A netnography study design was applied. Data were purposefully sampled, and video clips (122 minutes from 30 video clips) from public online venues were used. The results are presented in four categories: 1) unpredictability, 2) sensitivity dependence, 3) resistibility, and 4) iteration. They are all features of strained interactions. Strained interactions are a complex phenomenon that exists in health care settings. The findings provide health care professionals guidance to understand the complexity and the features of strained interactions.

  19. Multiple Feature Fusion Based on Co-Training Approach and Time Regularization for Place Classification in Wearable Video

    Directory of Open Access Journals (Sweden)

    Vladislavs Dovgalecs

    2013-01-01

    The analysis of video acquired with a wearable camera is a challenge that the multimedia community is facing with the proliferation of such sensors in various applications. In this paper, we focus on the problem of automatic visual place recognition in a weakly constrained environment, targeting the indexing of video streams by topological place recognition. We propose to combine several machine learning approaches in a time-regularized framework for image-based place recognition indoors. The framework combines the power of multiple visual cues and integrates the temporal continuity information of video. We extend it with a computationally efficient semisupervised method that leverages unlabeled video sequences for improved indexing performance. The proposed approach was applied to challenging video corpora. Experiments on a public and a real-world video sequence database show the gain brought by the different stages of the method.

  20. Automatic Synthesis of Background Music Track Data by Analysis of Video Contents

    Science.gov (United States)

    Modegi, Toshio

    This paper describes an automatic creation technique of background music (BGM) track data for a given video file. Our proposed system is based on a novel BGM synthesizer, called “Matrix Music Player”, which can produce 3125 kinds of high-quality BGM contents by dynamically mixing 5 audio files freely selected from a total of 25 audio waveform files. In order to retrieve appropriate BGM mixing patterns, we have constructed an acoustic analysis database that records the acoustic features of all 3125 synthesized patterns. Developing a video analyzer that generates image parameters of given video data and converts them to acoustic parameters, we access the acoustic analysis database and retrieve an appropriate synthesized BGM signal, which can be included in the audio track of the source video file. Based on our proposed method, we have carried out BGM synthesis experiments using several video clips of around 20 seconds. The automatically inserted BGM audio streams for all of the given video clips were objectively acceptable. In this paper, we briefly describe our proposed BGM synthesis method and its experimental results.
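
    The retrieval step can be pictured as a nearest-neighbour lookup: image-derived parameters of the input video are mapped to target acoustic parameters, and the pre-analyzed database of all 3125 synthesized mixtures is searched for the closest match. The parameter mapping and the database contents below are placeholder assumptions.

    ```python
    # Hypothetical sketch: map video-analysis parameters to acoustic parameters and
    # retrieve the closest pre-analyzed BGM mixing pattern out of 3125 combinations.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    # 5 track groups x 5 candidate files each -> 5**5 = 3125 mixing patterns.
    patterns = list(itertools.product(range(5), repeat=5))
    acoustic_db = rng.random((len(patterns), 3))    # e.g. [tempo, brightness, loudness] per mix

    def video_to_acoustic(motion, brightness, cut_rate):
        """Placeholder mapping from image parameters to target acoustic parameters."""
        return np.array([0.5 * motion + 0.5 * cut_rate, brightness, motion])

    target = video_to_acoustic(motion=0.7, brightness=0.4, cut_rate=0.9)
    best = int(np.argmin(np.linalg.norm(acoustic_db - target, axis=1)))
    print("selected mixing pattern (one file index per track group):", patterns[best])
    ```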

  1. A Depth Video-based Human Detection and Activity Recognition using Multi-features and Embedded Hidden Markov Models for Health Care Monitoring Systems

    Directory of Open Access Journals (Sweden)

    Ahmad Jalal

    2017-08-01

    The increasing number of elderly people living independently calls for special care in the form of healthcare monitoring systems. Recent advancements in depth video technologies have made human activity recognition (HAR) realizable for elderly healthcare applications. In this paper, a depth video-based novel method for HAR is presented using robust multi-features and embedded Hidden Markov Models (HMMs) to recognize daily life activities of elderly people living alone in indoor environments such as smart homes. In the proposed HAR framework, depth maps are initially analyzed by a temporal motion identification method to segment human silhouettes from the noisy background and to compute the depth silhouette area for each activity in order to track human movements in a scene. Several representative features, including invariant, multi-view differentiation and spatiotemporal body-joint features, are fused together to explore gradient orientation change, intensity differentiation, temporal variation and local motion of specific body parts. These features are then processed by the dynamics of their respective class and learned, modeled, trained and recognized with a specific embedded HMM having active feature values. Furthermore, we construct a new online human activity dataset using a depth sensor to evaluate the proposed features. Our experiments on three depth datasets demonstrate that the proposed multi-features are efficient and robust over state-of-the-art features for human action and activity recognition.
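
    The recognition stage can be sketched, under simplifying assumptions, as one hidden Markov model per activity trained on sequences of per-frame feature vectors, with a query sequence assigned to the class whose model yields the highest log-likelihood. hmmlearn's GaussianHMM is used here as a generic stand-in for the embedded HMMs described above, and the feature sequences are synthetic placeholders.

    ```python
    # Hypothetical sketch: per-activity Gaussian HMMs over per-frame feature vectors;
    # classify a sequence by the model with the highest log-likelihood.
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)
    activities = ["walking", "sitting", "cooking"]

    def fake_sequences(offset, n_seq=20, length=40, dim=6):
        """Placeholder for fused multi-feature vectors extracted from depth silhouettes."""
        return [offset + rng.normal(size=(length, dim)) for _ in range(n_seq)]

    train = {a: fake_sequences(i * 2.0) for i, a in enumerate(activities)}

    models = {}
    for activity, seqs in train.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)                      # one HMM per activity class
        models[activity] = m

    query = 2.0 + rng.normal(size=(40, 6))    # unseen sequence, drawn near "sitting"
    scores = {a: m.score(query) for a, m in models.items()}
    print("recognized activity:", max(scores, key=scores.get))
    ```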

  2. Evaluation and development of a novel binocular treatment (I-BiT™) system using video clips and interactive games to improve vision in children with amblyopia ('lazy eye'): study protocol for a randomised controlled trial.

    Science.gov (United States)

    Foss, Alexander J; Gregson, Richard M; MacKeith, Daisy; Herbison, Nicola; Ash, Isabel M; Cobb, Sue V; Eastgate, Richard M; Hepburn, Trish; Vivian, Anthony; Moore, Diane; Haworth, Stephen M

    2013-05-20

    Amblyopia (lazy eye) affects the vision of approximately 2% of all children. Traditional treatment consists of wearing a patch over their 'good' eye for a number of hours daily, over several months. This treatment is unpopular and compliance is often low. Therefore results can be poor. A novel binocular treatment which uses 3D technology to present specially developed computer games and video footage (I-BiT™) has been studied in a small group of patients and has shown positive results over a short period of time. The system is therefore now being examined in a randomised clinical trial. Seventy-five patients aged between 4 and 8 years with a diagnosis of amblyopia will be randomised to one of three treatments with a ratio of 1:1:1 - I-BiT™ game, non-I-BiT™ game, and I-BiT™ DVD. They will be treated for 30 minutes once weekly for 6 weeks. Their visual acuity will be assessed independently at baseline, mid-treatment (week 3), at the end of treatment (week 6) and 4 weeks after completing treatment (week 10). The primary endpoint will be the change in visual acuity from baseline to the end of treatment. Secondary endpoints will be additional visual acuity measures, patient acceptability, compliance and the incidence of adverse events. This is the first randomised controlled trial using the I-BiT™ system. The results will determine if the I-BiT™ system is effective in the treatment of amblyopia and will also determine the optimal treatment for future development. ClinicalTrials.gov identifier: NCT01702727.

  3. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  4. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT, speed-up robust feature (SURF, local binary patterns (LBP, histogram of oriented gradients (HOG, and weighted HOG. Recently, the convolutional neural network (CNN method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  5. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  6. Humorous Videos and Idiom Achievement: Some Pedagogical Considerations for EFL Learners

    Science.gov (United States)

    Neissari, Malihe; Ashraf, Hamid; Ghorbani, Mohammad Reza

    2017-01-01

    Employing a quasi-experimental design, this study examined the efficacy of humorous idiom video clips on the achievement of Iranian undergraduate students studying English as a Foreign Language (EFL). Forty humorous video clips from the English Idiom Series called "The Teacher" from the BBC website were used to teach 120 idioms to 61…

  7. Multimodal Emotion Recognition in Response to Videos

    NARCIS (Netherlands)

    Soleymani, Mohammad; Pantic, Maja; Pun, Thierry

    This paper presents a user-independent emotion recognition method with the goal of recovering affective tags for videos using electroencephalogram (EEG), pupillary response and gaze distance. We first selected 20 video clips with extrinsic emotional content from movies and online resources. Then,

  8. Combining high-speed SVM learning with CNN feature encoding for real-time target recognition in high-definition video for ISR missions

    Science.gov (United States)

    Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus

    2017-05-01

    In Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data which has to be exploited in real time with respect to relevant ground targets by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in Deep Convolutional Neural Networks (CNN) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system, with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) and for efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, provides highly invariant feature extraction. This allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using the established precision-recall diagrams, average precision and runtime figures on representative test data. A comparison to legacy target recognition approaches shows the impressive performance increase achieved by the proposed CNN+SVM machine-learning approach and the capability of real-time high
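    A rough approximation of that pipeline, pre-computed CNN features fed to a fast linear SVM and evaluated with precision-recall metrics, can be sketched with standard tooling. The proprietary frequency-domain GPU SVM is not reproduced here; scikit-learn's LinearSVC stands in for it, and the data shapes are assumptions.

```python
# Sketch: train a fast linear SVM on pre-computed CNN features and report
# average precision, mirroring the precision-recall evaluation described above.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import average_precision_score, precision_recall_curve

def train_and_evaluate(X_train, y_train, X_test, y_test):
    """X_*: (n_samples, n_features) CNN descriptors; y_*: binary target labels."""
    clf = LinearSVC(C=1.0)           # stand-in for the proprietary frequency-domain SVM
    clf.fit(X_train, y_train)
    scores = clf.decision_function(X_test)
    ap = average_precision_score(y_test, scores)
    precision, recall, _ = precision_recall_curve(y_test, scores)
    return ap, precision, recall

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 512))
    y = (X[:, 0] > 0).astype(int)
    ap, _, _ = train_and_evaluate(X[:300], y[:300], X[300:], y[300:])
    print(f"average precision: {ap:.3f}")
```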

  9. CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION WITH CLIPSITS)

    Science.gov (United States)

    Riley, G.

    1994-01-01

    The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches them against this rule network. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. The PC version and the Macintosh
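    To illustrate the forward-chaining, rule-based style that CLIPS provides (facts asserted into working memory, rules fired when their conditions match, possibly asserting new facts), here is a deliberately tiny Python toy. It is not CLIPS and does not implement the Rete algorithm; it only shows the basic assert-match-fire cycle.

```python
# Toy forward-chaining cycle: a rule fires when all of its condition facts are present,
# and may assert a new fact, which can in turn enable further rules.
facts = {("animal", "duck")}

rules = [
    # (name, conditions, fact to assert when all conditions hold)
    ("ducks-have-feathers", {("animal", "duck")}, ("has", "feathers")),
    ("feathered-things-fly", {("has", "feathers")}, ("can", "fly")),
]

changed = True
while changed:                      # keep cycling until no rule adds anything new
    changed = False
    for name, conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            print(f"firing {name} -> asserting {conclusion}")
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```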

  10. CERN Video News

    CERN Multimedia

    2003-01-01

    From Monday you can see on the web the new edition of CERN's Video News. Thanks to a collaboration between the audiovisual teams at CERN and Fermilab, you can see a report made by the American laboratory. The clip concerns the LHC magnets that are being constructed at Fermilab. Also in the programme: the spectacular rotation of one of the ATLAS coils, the arrival at CERN of the first American magnet made at Brookhaven, the story of the discovery 20 years ago of the W and Z bosons at CERN. http://www.cern.ch/video or Bulletin web page.

  11. Tech Tips: Using Video Management/ Analysis Technology in Qualitative Research

    Directory of Open Access Journals (Sweden)

    J.A. Spiers

    2004-03-01

    Full Text Available This article presents tips on how to use video in qualitative research. The author states that, though there are many complex and powerful computer programs for working with video, the work done in qualitative research does not require those programs. For this work, simple editing software is sufficient. Also presented is an easy and efficient method of transcribing video clips.

  12. Tech Tips: Using Video Management/ Analysis Technology in Qualitative Research

    OpenAIRE

    J.A. Spiers

    2004-01-01

    This article presents tips on how to use video in qualitative research. The author states that, though there are many complex and powerful computer programs for working with video, the work done in qualitative research does not require those programs. For this work, simple editing software is sufficient. Also presented is an easy and efficient method of transcribing video clips.

  13. Encoding Concept Prototypes for Video Event Detection and Summarization

    NARCIS (Netherlands)

    Mazloom, M.; Habibian, A.; Liu, D.; Snoek, C.G.M.; Chang, S.F.

    2015-01-01

    This paper proposes a new semantic video representation for few and zero example event detection and unsupervised video event summarization. Different from existing works, which obtain a semantic representation by training concepts over images or entire video clips, we propose an algorithm that

  14. The Effect of Theme Preference on Academic Word List Use: A Case for Smartphone Video Recording Feature

    Science.gov (United States)

    Gromik, Nicolas A.

    2017-01-01

    Sixty-seven Japanese English as a Second Language undergraduate learners completed one smartphone video production per week for 12 weeks, based on a teacher-selected theme. Designed as a case study for this specific context, data from students' oral performances was analyzed on a weekly basis for their use of the Academic Word List (AWL). A…

  15. Animated Video Clips: Learning in the Current Generation

    Science.gov (United States)

    Gurvitch, Rachel; Lund, Jackie

    2014-01-01

    The teaching and learning processes have undergone a variety of changes and modifications in recent years. The overall academic goal is to produce engaged, lifelong learners who are capable of solving complex problems in diverse settings. These desired skills cannot be achieved via the traditional teaching and learning model that requires students…

  16. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (IBM PC VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with

  17. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (MACINTOSH VERSION)

    Science.gov (United States)

    Riley, G.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with

  18. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (UNIX VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with

  19. Data-driven spatio-temporal RGBD feature encoding for action recognition in operating rooms.

    Science.gov (United States)

    Twinanda, Andru P; Alkan, Emre O; Gangi, Afshin; de Mathelin, Michel; Padoy, Nicolas

    2015-06-01

    Context-aware systems for the operating room (OR) provide the possibility to significantly improve surgical workflow through various applications such as efficient OR scheduling, context-sensitive user interfaces, and automatic transcription of medical procedures. Being an essential element of such a system, surgical action recognition is thus an important research area. In this paper, we tackle the problem of classifying surgical actions from video clips that capture the activities taking place in the OR. We acquire recordings using a multi-view RGBD camera system mounted on the ceiling of a hybrid OR dedicated to X-ray-based procedures and annotate clips of the recordings with the corresponding actions. To recognize the surgical actions from the video clips, we use a classification pipeline based on the bag-of-words (BoW) approach. We propose a novel feature encoding method that extends the classical BoW approach. Instead of using the typical rigid grid layout to divide the space of the feature locations, we propose to learn the layout from the actual 4D spatio-temporal locations of the visual features. This results in a data-driven and non-rigid layout which retains more spatio-temporal information compared to the rigid counterpart. We classify multi-view video clips from a new dataset generated from 11-day recordings of real operations. This dataset is composed of 1734 video clips of 15 actions. These include generic actions (e.g., moving patient to the OR bed) and actions specific to the vertebroplasty procedure (e.g., hammering). The experiments show that the proposed non-rigid feature encoding method performs better than the rigid encoding one. The classifier's accuracy is increased by over 4 %, from 81.08 to 85.53 %. The combination of both intensity and depth information from the RGBD data provides more discriminative power in carrying out the surgical action recognition task as compared to using either one of them alone. Furthermore, the proposed non
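    The classical bag-of-words baseline that the paper extends can be sketched as follows: local spatio-temporal descriptors are quantized against a k-means codebook and each clip becomes a histogram of visual words. The descriptor dimensionality and codebook size are assumptions, and the proposed data-driven non-rigid layout is not reproduced here.

```python
# Sketch of the classical BoW baseline: quantize local descriptors with a k-means
# codebook and represent each video clip as a normalized visual-word histogram.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(all_descriptors, n_words=256, seed=0):
    """all_descriptors: (N, D) local spatio-temporal descriptors pooled over training clips."""
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(all_descriptors)

def encode_clip(codebook, clip_descriptors):
    """Return an L1-normalized histogram of visual-word assignments for one clip."""
    words = codebook.predict(clip_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pooled = rng.normal(size=(5000, 64))        # stand-in for RGBD descriptors
    codebook = build_codebook(pooled, n_words=32)
    clip = rng.normal(size=(200, 64))
    print(encode_clip(codebook, clip).shape)    # (32,)
```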

  20. [Absorbable synthetic clips and pulmonary excision].

    Science.gov (United States)

    Nguyen, H; Nguyen, H V; Raut, Y; Briére, J

    1985-01-01

    Since the beginning of this decade, two new second-generation resorbable synthetic materials--polydioxanone and lactomer--can be molded into clips which, in contrast to metallic clips, are fitted with an original locking system that prevents any slipping, either transverse or longitudinal, on the vessels. An experimental study was conducted to confirm their great reliability and complete safety on large vessels (up to 8-9 mm) of large laboratory animals, both pulmonary and systemic. The positive results justified their application to pulmonary vessels in humans during surgery, the pulmonary vessels being at a much lower pressure than those of the systemic circulation. These new clips combine the advantages of metallic clips and resorbable synthetic sutures without their inconveniences: rapidity, simplicity and ease of application, radio-transparency, and complete inertia in magnetic fields, enabling radiotherapy and modern postoperative investigations (scanner, NMR...) to be carried out.

  1. "So Let's Talk. Let's Chat. Let's Start a Dialog": An Analysis of the Conversation Metaphor Employed in Clinton's and Obama's YouTube Campaign Clips

    Science.gov (United States)

    Duman, Steve; Locher, Miriam A.

    2008-01-01

    This paper examines how two American presidential candidates, Barack Obama and Hillary Clinton, make use of a VIDEO EXCHANGE IS CONVERSATION metaphor on YouTube, a channel of communication that allows the exchange of video clips on the Internet. It is argued that the politicians exploit the metaphor for its connotations of creating involvement and…

  2. Video Analysis: Lessons from Professional Video Editing Practice

    Directory of Open Access Journals (Sweden)

    Eric Laurier

    2008-09-01

    Full Text Available In this paper we join a growing body of studies that learn from vernacular video analysts quite what video analysis as an intelligible course of action might be. Rather than pursuing epistemic questions regarding video as a number of other studies of video analysis have done, our concern here is with the crafts of producing the filmic. As such we examine how audio and video clips are indexed and brought to hand during the logging process, how a first assembly of the film is built at the editing bench and how logics of shot sequencing relate to wider concerns of plotting, genre and so on. In its conclusion we make a number of suggestions about the future directions of studying video and film editors at work. URN: urn:nbn:de:0114-fqs0803378

  3. Real-Time FPGA-Based Object Tracker with Automatic Pan-Tilt Features for Smart Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-05-01

    Full Text Available The design of smart video surveillance systems is an active research field among the computer vision community because of their ability to perform automatic scene analysis by selecting and tracking the objects of interest. In this paper, we present the design and implementation of an FPGA-based standalone working prototype system for real-time tracking of an object of interest in live video streams for such systems. In addition to real-time tracking of the object of interest, the implemented system is also capable of providing purposive automatic camera movement (pan-tilt in the direction determined by movement of the tracked object. The complete system, including camera interface, DDR2 external memory interface controller, designed object tracking VLSI architecture, camera movement controller and display interface, has been implemented on the Xilinx ML510 (Virtex-5 FX130T FPGA Board. Our proposed, designed and implemented system robustly tracks the target object present in the scene in real time for standard PAL (720 × 576 resolution color video and automatically controls camera movement in the direction determined by the movement of the tracked object.
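    The automatic pan-tilt behaviour described above amounts to mapping the tracked object's centroid offset from the frame centre to a camera movement command. A software sketch of that decision logic (independent of the FPGA implementation, with an assumed dead-band threshold) is shown below.

```python
# Sketch of the pan-tilt decision logic: compare the tracked object's centroid with
# the frame centre and issue a movement command when the offset exceeds a dead band.
def pan_tilt_command(bbox, frame_w=720, frame_h=576, dead_band=0.1):
    """bbox: (x, y, w, h) of the tracked object in pixels (PAL-resolution frame assumed)."""
    cx = bbox[0] + bbox[2] / 2.0
    cy = bbox[1] + bbox[3] / 2.0
    dx = (cx - frame_w / 2.0) / frame_w     # normalized horizontal offset, about -0.5..0.5
    dy = (cy - frame_h / 2.0) / frame_h     # normalized vertical offset
    pan = "right" if dx > dead_band else "left" if dx < -dead_band else "hold"
    tilt = "down" if dy > dead_band else "up" if dy < -dead_band else "hold"
    return pan, tilt

print(pan_tilt_command((600, 100, 60, 120)))   # object to the upper right -> ('right', 'up')
```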

  4. Teachers' Analyses of Classroom Video Predict Student Learning of Mathematics: Further Explorations of a Novel Measure of Teacher Knowledge

    Science.gov (United States)

    Kersting, Nicole B.; Givvin, Karen B.; Sotelo, Francisco L.; Stigler, James W.

    2010-01-01

    This study explores the relationship between teacher knowledge and student learning in the area of mathematics by developing and evaluating an innovative approach to assessing teacher knowledge. This approach is based on teachers' analyses of classroom video clips. Teachers watched 13 video clips of classroom instruction and then provided written…

  5. CLIPS: A tool for corn disease diagnostic system and an aid to neural network for automated knowledge acquisition

    Science.gov (United States)

    Wu, Cathy; Taylor, Pam; Whitson, George; Smith, Cathy

    1990-01-01

    This paper describes the building of a corn disease diagnostic expert system using CLIPS, and the development of a neural expert system using the fact representation method of CLIPS for automated knowledge acquisition. The CLIPS corn expert system diagnoses 21 diseases from 52 symptoms and signs with certainty factors. CLIPS has several unique features. It allows the facts in rules to be broken down to object-attribute-value (OAV) triples, allows rule-grouping, and fires rules based on pattern-matching. These features combined with the chained inference engine result in a natural user query system and speedy execution. In order to develop a method for automated knowledge acquisition, an Artificial Neural Expert System (ANES) is developed by a direct mapping from the CLIPS system. The ANES corn expert system uses the same OAV triples in the CLIPS system for its facts. The LHS and RHS facts of the CLIPS rules are mapped into the input and output layers of the ANES, respectively; and the inference engine of the rules is embedded in the hidden layer. The fact representation by OAV triples gives a natural grouping of the rules. These features allow the ANES system to automate rule-generation, and make it efficient to execute and easy to expand for a large and complex domain.

  6. Single clips versus multi-firing clip device for closure of mucosal incisions after peroral endoscopic myotomy (POEM)

    NARCIS (Netherlands)

    Verlaan, Tessa; Ponds, Fraukje A. M.; Bastiaansen, Barbara A. J.; Bredenoord, Albert J.; Fockens, Paul

    2016-01-01

    Background and aims: After Peroral Endoscopic Myotomy (POEM), the mucosal incision is closed with endoscopically applied clips. After each clip placement, a subsequent clipping device has to be introduced through the working channel. With the Clipmaster3, three consecutive clips can be placed

  7. Efficacy of Endoscopic Fluorescein Video Angiography in Aneurysm Surgery-Novel and Innovative Assessment of Vascular Blood Flow in the Dead Angles of the Microscope.

    Science.gov (United States)

    Hashimoto, Koji; Kinouchi, Hiroyuki; Yoshioka, Hideyuki; Kanemaru, Kazuya; Ogiwara, Masakazu; Yagi, Takashi; Wakai, Takuma; Fukumoto, Yuichiro

    2017-08-01

    In aneurysm surgery, assessment of the blood flow around the aneurysm is crucial. Recently, intraoperative fluorescence video angiography has been widely adopted for this purpose. However, the observation field of this procedure is limited to the microscopic view, and it is difficult to visualize blood flow obscured by the skull base anatomy, parent arteries, and aneurysm. To demonstrate the efficacy of a new small-caliber endoscopic fluorescence video angiography system employing sodium fluorescein in aneurysm surgery for the first time. Eighteen patients with 18 cerebral aneurysms were enrolled in this study. Both microscopic fluorescence angiography and endoscopic fluorescein video angiography were performed before and after clip placement. Endoscopic fluorescein video angiography provided bright fluorescence imaging even with a 2.7-mm-diameter endoscope and clearly revealed blood flow within the vessels in the dead angle areas of the microscope in all 18 aneurysms. Consequently, it revealed information about aneurysmal occlusion and perforator patency in 15 aneurysms (83.3%) that was not obtainable with microscopic fluorescence video angiography. Furthermore, only endoscopic video angiography detected the incomplete clipping in 2 aneurysms and the occlusion of the perforating branches in 3 aneurysms, which led to the reapplication of clips in 2 aneurysms. The innovative endoscopic fluorescein video angiography system we developed features a small-caliber endoscope and bright fluorescence images. Because it reveals blood flow in the dead angle areas of the microscope, this novel system could contribute to the safety and long-term effectiveness of aneurysm surgery even in a narrow operative field.

  8. Video display terminal workstation improvement program: I. Baseline associations between musculoskeletal discomfort and ergonomic features of workstations.

    Science.gov (United States)

    Demure, B; Luippold, R S; Bigelow, C; Ali, D; Mundt, K A; Liese, B

    2000-08-01

    Associations between selected sites of musculoskeletal discomfort and ergonomic characteristics of the video display terminal (VDT) workstation were assessed in analyses controlling for demographic, psychosocial stress, and VDT use factors in 273 VDT users from a large administrative department. Significant associations with wrist/hand discomfort were seen for female gender; working 7+ hours at a VDT; low job satisfaction; poor keyboard position; use of new, adjustable furniture; and layout of the workstation. Significantly increased odds ratios for neck/shoulder discomfort were observed for 7+ hours at a VDT, less than complete job control, older age (40 to 49 years), and never/infrequent breaks. Lower back discomfort was related marginally to working 7+ hours at a VDT. These results demonstrate that some characteristics of VDT workstations, after accounting for psychosocial stress, can be correlated with musculoskeletal discomfort.

  9. CLIPS based decision support system for water distribution networks

    Directory of Open Access Journals (Sweden)

    K. Sandeep

    2011-10-01

    Full Text Available The difficulty in knowledge representation of a water distribution network (WDN) problem has contributed to the limited use of artificial intelligence (AI) based expert systems (ES) in the management of these networks. This paper presents a design of a Decision Support System (DSS) that facilitates "on-demand" knowledge generation by utilizing results of simulation runs of a suitably calibrated and validated hydraulic model of an existing aged WDN corresponding to emergent or even hypothetical but likely scenarios. The DSS augments the capability of a conventional expert system by integrating the hydraulic modelling features with the heuristics-based knowledge of experts under a common, rules-based expert shell named CLIPS (C Language Integrated Production System). In contrast to previous ES, the knowledge base of the DSS has been designed to be dynamic by superimposing CLIPS on Structured Query Language (SQL). The proposed ES has an inbuilt calibration module that enables calibration of an existing (aged) WDN for the unknown, and unobservable, Hazen-Williams C-values. In addition, the daily run and simulation modules of the proposed ES further enable the CLIPS inference engine to evaluate the network performance for any emergent or suggested test scenarios. An additional feature of the proposed design is that the DSS integrates computational platforms such as MATLAB, an open source Geographical Information System (GIS), and a relational database management system (RDBMS) working under the umbrella of a Microsoft Visual Studio based common user interface. The paper also discusses implementation of the proposed framework on a case study and clearly demonstrates the utility of the application as an able aide for effective management of the study network.

  10. Sacral Tarlov cyst: surgical treatment by clipping.

    Science.gov (United States)

    Cantore, Giampaolo; Bistazzoni, Simona; Esposito, Vincenzo; Tola, Serena; Lenzi, Jacopo; Passacantilli, Emiliano; Innocenzi, Gualtiero

    2013-02-01

    This study reports the anatomopathological classification of Tarlov cysts and the various treatment techniques described in the literature. The authors present their patient series (19 cases) with a long follow-up (range 9 months to 25 years) treated by cyst remodeling around the root using titanium clips. The technique is effective in both avoiding cerebrospinal fluid leakage and resolving bladder dysfunction when urinary symptoms are incomplete and discontinuous. The clipping technique for Tarlov cysts is easy, valid, safe, rapid, and effective. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Algorithm combination of deblurring and denoising on video frames using the method search of local features on image

    Directory of Open Access Journals (Sweden)

    Semenishchev Evgeny

    2017-01-01

    Full Text Available In this paper, we propose an approach that reduces errors in the form of noise and blur on video frames. To improve processing speed and allow the process to be parallelized, we use an approach based on the search for local features in the image.

  12. Know your data: understanding implicit usage versus explicit action in video content classification

    Science.gov (United States)

    Yew, Jude; Shamma, David A.

    2011-02-01

    In this paper, we present a method for video category classification using only social metadata from websites like YouTube. In place of content analysis, we utilize communicative and social contexts surrounding videos as a means to determine a categorical genre, e.g. Comedy, Music. We hypothesize that video clips belonging to different genre categories would have distinct signatures and patterns that are reflected in their collected metadata. In particular, we define and describe social metadata as usage or action to aid in classification. We trained a Naive Bayes classifier to predict categories from a sample of 1,740 YouTube videos representing the top five genre categories. Using just a small number of the available metadata features, we compare the classifications produced by our Naive Bayes classifier with those provided by the uploader of that particular video. Compared to random predictions with the YouTube data (21% accurate), our classifier attained a mediocre 33% accuracy in predicting video genres. However, we found that the accuracy of our classifier significantly improves by nominal factoring of the explicit data features. By factoring the ratings of the videos in the dataset, the classifier was able to accurately predict the genres of 75% of the videos. We argue that the patterns of social activity found in the metadata are not just meaningful in their own right, but are indicative of the meaning of the shared video content. The results presented by this project represents a first step in investigating the potential meaning and significance of social metadata and its relation to the media experience.
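    The classification step can be sketched as follows: the numeric social metadata are binned into nominal categories (the "nominal factoring" mentioned above) and a Naive Bayes classifier is trained to predict the genre. The feature names, bin edges and use of scikit-learn are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: bin (factor) numeric social metadata into nominal categories, then train a
# categorical Naive Bayes classifier to predict the video genre.
import numpy as np
import pandas as pd
from sklearn.naive_bayes import CategoricalNB

def factor_metadata(df):
    """df columns assumed numeric: 'views', 'rating', 'n_comments'. Returns nominal codes."""
    binned = pd.DataFrame({
        "views": pd.qcut(df["views"], 4, labels=False, duplicates="drop"),
        "rating": pd.cut(df["rating"], bins=[0, 2, 3.5, 5], labels=False, include_lowest=True),
        "comments": pd.qcut(df["n_comments"], 4, labels=False, duplicates="drop"),
    })
    return binned.fillna(0).astype(int).to_numpy()

rng = np.random.default_rng(0)
df = pd.DataFrame({"views": rng.integers(10, 1_000_000, 500),
                   "rating": rng.uniform(0, 5, 500),
                   "n_comments": rng.integers(0, 5000, 500)})
genres = rng.choice(["Comedy", "Music", "News", "Sports", "Film"], 500)

X = factor_metadata(df)
clf = CategoricalNB().fit(X, genres)
print(clf.predict(X[:3]))
```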

  13. Content analysis of antismoking videos on YouTube: message sensation value, message appeals, and their relationships with viewer responses.

    Science.gov (United States)

    Paek, Hye-Jin; Kim, Kyongseok; Hove, Thomas

    2010-12-01

    Focusing on several message features that are prominent in antismoking campaign literature, this content-analytic study examines 934 antismoking video clips on YouTube for the following characteristics: message sensation value (MSV) and three types of message appeal (threat, social and humor). These four characteristics are then linked to YouTube's interactive audience response mechanisms (number of viewers, viewer ratings and number of comments) to capture message reach, viewer preference and viewer engagement. The findings suggest the following: (i) antismoking messages are prevalent on YouTube, (ii) MSV levels of online antismoking videos are relatively low compared with MSV levels of televised antismoking messages, (iii) threat appeals are the videos' predominant message strategy and (iv) message characteristics are related to viewer reach and viewer preference.

  14. Detectability of Hygroscopic Clips Used in Breast Cancer Surgery.

    Science.gov (United States)

    Carmon, Moshe; Olsha, Oded; Gekhtman, David; Nikitin, Irena; Cohen, Yamin; Messing, Michael; Lioubashevsky, Natali; Abu Dalo, Ribhi; Hadar, Tal; Golomb, Eliahu

    2017-02-01

    Sonographically detectable clips were introduced over the last decade. We retrospectively studied the rate and duration of sonographically detectable clip detectability in patients with breast cancer who had sonographically detectable clips inserted over a 2-year period. Nine of 26 patients had neoadjuvant chemotherapy, with all clips remaining detectable 140 to 187 days after insertion. Six of the 9 had intraoperative sonographic localization, with 1 reoperation (17%). Eleven additional patients with nonpalpable tumors and sonographically detectable clips had intraoperative sonographic localization with 1 reoperation (9%). In 1 patient, a sonographically detectable clip enabled intraoperative identification of a suspicious lymph node. There were no complications or clip migration. Sonographically detectable clips are helpful in breast cancer surgery with and without neoadjuvant chemotherapy, remaining detectable for many months and often averting preoperative localization and scheduling difficulties. © 2016 by the American Institute of Ultrasound in Medicine.

  15. VIDEO BLOGGING AND ENGLISH PRESENTATION PERFORMANCE: A PILOT STUDY.

    Science.gov (United States)

    Alan Hung, Shao-Ting; Danny Huang, Heng-Tsung

    2015-10-01

    This study investigated the utility of video blogs in improving EFL students' performance in giving oral presentations and, further, examined the students' perceptions toward video blogging. Thirty-six English-major juniors participated in a semester-long video blog project for which they uploaded their 3-min. virtual presentation clips over 18 weeks. Their virtual presentation clips were rated by three raters using a scale for speaking performance that contained 14 presentation skills. Data sources included presentation clips, reflections, and interviews. The results indicated that the students' overall presentation performance improved significantly. In particular, among the 14 presentation skills, projection, intonation, posture, introduction, conclusion, and purpose saw the most substantial improvement. Finally, the qualitative data revealed that learners perceived that the video blog project facilitated learning but increased anxiety.

  16. Photovoltaic module mounting clip with integral grounding

    Science.gov (United States)

    Lenox, Carl J.

    2010-08-24

    An electrically conductive mounting/grounding clip, usable with a photovoltaic (PV) assembly of the type having an electrically conductive frame, comprises an electrically conductive body. The body has a central portion and first and second spaced-apart arms extending from the central portion. Each arm has first and second outer portions with frame surface-disrupting element at the outer portions.

  17. Semantic Labeling of Nonspeech Audio Clips

    Directory of Open Access Journals (Sweden)

    Xiaojuan Ma

    2010-01-01

    Full Text Available Human communication about entities and events is primarily linguistic in nature. While visual representations of information are shown to be highly effective as well, relatively little is known about the communicative power of auditory nonlinguistic representations. We created a collection of short nonlinguistic auditory clips encoding familiar human activities, objects, animals, natural phenomena, machinery, and social scenes. We presented these sounds to a broad spectrum of anonymous human workers using Amazon Mechanical Turk and collected verbal sound labels. We analyzed the human labels in terms of their lexical and semantic properties to ascertain that the audio clips do evoke the information suggested by their pre-defined captions. We then measured the agreement with the semantically compatible labels for each sound clip. Finally, we examined which kinds of entities and events, when captured by nonlinguistic acoustic clips, appear to be well-suited to elicit information for communication, and which ones are less discriminable. Our work is set against the broader goal of creating resources that facilitate communication for people with some types of language loss. Furthermore, our data should prove useful for future research in machine analysis/synthesis of audio, such as computational auditory scene analysis, and annotating/querying large collections of sound effects.

  18. Pulse-train Stimulation of Primary Somatosensory Cortex Blocks Pain Perception in Tail Clip Test.

    Science.gov (United States)

    Lee, Soohyun; Hwang, Eunjin; Lee, Dongmyeong; Choi, Jee Hyun

    2017-04-01

    Human studies of brain stimulation have demonstrated modulatory effects on the perception of pain. However, whether the primary somatosensory cortical activity is associated with antinociceptive responses remains unknown. Therefore, we examined the antinociceptive effects of neuronal activity evoked by optogenetic stimulation of primary somatosensory cortex. Optogenetic transgenic mice were subjected to continuous or pulse-train optogenetic stimulation of the primary somatosensory cortex at frequencies of 15, 30, and 40 Hz, during a tail clip test. Reaction time was measured using a digital high-speed video camera. Pulse-train optogenetic stimulation of primary somatosensory cortex showed a delayed pain response with respect to a tail clip, whereas no significant change in reaction time was observed with continuous stimulation. In response to the pulse-train stimulation, video monitoring and local field potential recording revealed associated paw movement and sensorimotor rhythms, respectively. Our results show that optogenetic stimulation of primary somatosensory cortex at beta and gamma frequencies blocks transmission of pain signals in tail clip test.

  19. 21 CFR 886.3100 - Ophthalmic tantalum clip.

    Science.gov (United States)

    2010-04-01

    21 CFR 886.3100, Ophthalmic tantalum clip (Medical Devices; Ophthalmic Devices; Prosthetic Devices). (a) Identification. An ophthalmic tantalum clip is a malleable metallic device intended to be implanted permanently...

  20. 21 CFR 882.5225 - Implanted malleable clip.

    Science.gov (United States)

    2010-04-01

    21 CFR 882.5225, Implanted malleable clip (Medical Devices; Neurological Devices; Neurological Therapeutic Devices). (a) Identification. An implanted malleable clip is a bent wire or staple that is forcibly closed with...

  1. 21 CFR 886.1410 - Ophthalmic trial lens clip.

    Science.gov (United States)

    2010-04-01

    21 CFR 886.1410, Ophthalmic trial lens clip (Medical Devices; Ophthalmic Devices; Diagnostic Devices). (a) Identification. An ophthalmic trial lens clip is a device intended to hold prisms, spheres, cylinders, or...

  2. The Effects of Multimodality through Storytelling Using Various Movie Clips

    Science.gov (United States)

    Kim, SoHee

    2016-01-01

    This study examines the salient multimodal approaches for communicative competence and learners' reactions through storytelling tasks with three different modes: a silent movie clip, a movie clip with only sound effects, and a movie clip with sound effects and dialogue. In order to measure different multimodal effects and to define better delivery…

  3. Fullerene as alligator clips for electrical conduction through ...

    Indian Academy of Sciences (India)

    2017-04-20

    ... presented the suitability of fullerene anchoring in coupling the anthracene molecule with gold electrodes. AMJ with boron-20 (B-20) and C-20 alligator clips exhibited the strongest conduction, in contrast to nitrogen, oxygen, fluorine and neon alligator clips. Keywords: HOMO; LUMO; fullerenes; alligator clips; ...

  4. Nonpenetrating vascular clips for small-caliber anastomosis

    NARCIS (Netherlands)

    Zeebregts, CJ; Van den Dungen, JJ; Kalicharan, D; Cromheecke, M; Van der Want, J

    2000-01-01

    In the search for better anastomosing techniques, an improved vascular stapler device (VCS clip applier system(R)) has been introduced. The system uses nonpenetrating clips to approximate everted vessel walls. The objective of this study was to determine the effects of nonpenetrating vascular clips

  5. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  6. Using Progressive Video Prompting to Teach Students with Moderate Intellectual Disability to Shoot a Basketball

    Science.gov (United States)

    Lo, Ya-yu; Burk, Bradley; Anderson, Adrienne L.

    2014-01-01

    The current study examined the effects of a modified video prompting procedure, namely progressive video prompting, to increase technique accuracy of shooting a basketball in the school gymnasium of three 11th-grade students with moderate intellectual disability. The intervention involved participants viewing video clips of an adult model who…

  7. Person Identification from Video with Multiple Biometric Cues: Benchmarks for Human and Machine Performance

    National Research Council Canada - National Science Library

    O'Toole, Alice

    2003-01-01

    .... Experiments have been completed comparing the effects of several types of facial motion on face recognition, the effects of face familiarity on recognition from video clips taken at a distance...

  8. Humorous Videos and Idiom Achievement: Some Pedagogical Considerations for EFL Learners

    Directory of Open Access Journals (Sweden)

    Malihe Neissari

    2017-10-01

    Full Text Available Employing a quasi-experimental design, this study examined the efficacy of humorous idiom video clips on the achievement of Iranian undergraduate students studying English as a Foreign Language (EFL). Forty humorous video clips from the English Idiom Series called “The Teacher” from the BBC website were used to teach 120 idioms to 61 undergraduate students at the University of Bojnord (UB). A 40-item idiom pretest was given to the experimental group (EG) and the control group (CG) while an independent-samples t-test was used to compare the means of these two groups based on posttest scores. A 15-item attitudinal questionnaire captured participants’ attitudes toward learning English idioms through video clips. The results indicate that there was a significant difference between the EG and CG mean scores: humorous video clips do facilitate EFL learners’ idioms achievement and learners exhibit a positive attitude toward their application in the classroom.
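    The group comparison reported above corresponds to a standard independent-samples t-test on posttest scores; a minimal illustration with made-up score vectors is given below.

```python
# Minimal illustration of an independent-samples t-test comparing the experimental
# and control groups' posttest means (the scores below are made up for illustration).
from scipy import stats

experimental = [34, 31, 36, 29, 33, 35, 30, 32, 34, 31]
control      = [27, 25, 30, 26, 28, 24, 29, 27, 26, 25]

t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```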

  9. Videos & Tools: MedlinePlus

    Science.gov (United States)

    Videos & Tools (https://medlineplus.gov/videosandcooltools.html): Watch health videos on topics such as anatomy, body systems, and ...

  10. Health Videos: MedlinePlus

    Science.gov (United States)

    Health Videos (//medlineplus.gov/ency/anatomyvideos.html.htm): These animated videos show the anatomy of body parts and organ ...

  11. An unsupervised meta-graph clustering based prototype-specific feature quantification for human re-identification in video surveillance

    Directory of Open Access Journals (Sweden)

    Aparajita Nanda

    2017-06-01

    Full Text Available Human re-identification is an emerging research area in the field of visual surveillance. It refers to the task of associating the images of persons captured by one camera (the probe set) with the images captured by another camera (the gallery set) at different locations and time instances. The performance of these systems is often challenged by several factors: variation in articulated human pose and clothing, frequent occlusion by various objects, changes in light illumination, and cluttered backgrounds, to name a few. Besides, the ambiguity in recognition increases between individuals with similar appearance. In this paper, we present a novel framework for human re-identification that finds the corresponding image pair across non-overlapping camera views in the presence of the above challenging scenarios. The proposed framework handles the visual ambiguity of individuals with similar appearance by first segmenting the gallery instances into disjoint prototypes (groups), where each prototype represents the images with high commonality. Then, a weighting scheme is formulated that quantifies the selective and distinct information carried by each feature in terms of its level of contribution to each prototype. Finally, the prototype-specific weights are utilized in the similarity measure and fused with the existing generic weighting to facilitate improvement in re-identification. Exhaustive simulation on three benchmark datasets alongside the CMC (Cumulative Matching Characteristics) plot demonstrates the efficacy of our proposed framework over its counterparts.
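    The CMC evaluation mentioned above can be computed directly from a probe-to-gallery distance matrix; a minimal sketch, assuming a single gallery image per identity, is shown below.

```python
# Sketch: compute a CMC curve from a probe-to-gallery distance matrix.
# cmc[k-1] = fraction of probes whose true match appears within the top-k ranked gallery items.
import numpy as np

def cmc_curve(dist, probe_ids, gallery_ids):
    """dist: (n_probes, n_gallery) distances; *_ids: identity labels (one gallery match per probe assumed)."""
    n_probes, n_gallery = dist.shape
    hits = np.zeros(n_gallery)
    for p in range(n_probes):
        ranking = np.argsort(dist[p])                      # gallery sorted from closest to farthest
        rank = np.where(gallery_ids[ranking] == probe_ids[p])[0][0]
        hits[rank:] += 1                                   # counted as a hit for every k >= rank + 1
    return hits / n_probes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery_ids = np.arange(10)
    probe_ids = np.arange(10)
    dist = rng.uniform(size=(10, 10))
    dist[probe_ids, probe_ids] -= 0.5                      # make the true matches a bit closer
    print(cmc_curve(dist, probe_ids, gallery_ids)[:5])     # rank-1..rank-5 accuracy
```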

  12. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (DEC VAX VMS VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with

  13. Video Quality Prediction over Wireless 4G

    KAUST Repository

    Lau, Chun Pong

    2013-04-14

    In this paper, we study the problem of video quality prediction over wireless 4G networks. Video transmission data are collected from a real 4G SCM testbed to investigate the factors that affect video quality. After feature transformation and selection on the video and network parameters, video quality is predicted by solving a regression problem. Experimental results show that the dominant factor affecting video quality is channel attenuation, and that video quality can be estimated well by our models with small errors.
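    The modelling step described above, transforming and selecting video/network features and then fitting a regression model for video quality, can be sketched with scikit-learn. The feature names, the random forest regressor and the error metric are assumptions, not the paper's exact models.

```python
# Sketch: predict a video quality score from network/video parameters as a regression problem.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 500
# Assumed features: channel attenuation (dB), SNR (dB), bitrate (Mbps), frame rate, packet loss.
X = np.column_stack([rng.uniform(0, 30, n), rng.uniform(5, 40, n),
                     rng.uniform(0.5, 8, n), rng.choice([15, 25, 30], n),
                     rng.uniform(0, 0.1, n)])
quality = 5.0 - 0.12 * X[:, 0] - 8.0 * X[:, 4] + rng.normal(0, 0.2, n)  # toy ground-truth score

X_tr, X_te, y_tr, y_te = train_test_split(X, quality, random_state=0)
model = make_pipeline(SelectKBest(f_regression, k=3),       # simple feature selection
                      RandomForestRegressor(n_estimators=100, random_state=0))
model.fit(X_tr, y_tr)
print("MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 3))
```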

  14. Training model for cerebral aneurysm clipping

    Directory of Open Access Journals (Sweden)

    Hiroshi Tenjin, M.D., Ph.D.

    2017-12-01

    Full Text Available Clipping of cerebral aneurysms is still an important skill in neurosurgery. We have made a training model for the clipping of cerebral aneurysms. The concepts for the model were: (1) a training model for beginners; (2) three-dimensional manipulation using an operating microscope; (3) an aneurysm model perfused with simulated blood, so that premature rupture can occur. The correct spatial relationship between the tissues and the softness of the brain and vessels are characteristic features of the model. The skull, brain, arteries, and veins were made using a 3D printer from DICOM data. The brain and vessels were made from polyvinyl alcohol (PVA). One training course was held, and this model was useful for training young neurosurgeons in cerebral aneurysm surgery.

  15. Video diaries on social media: Creating online communities for geoscience research and education

    Science.gov (United States)

    Tong, V.

    2013-12-01

    Making video clips is an engaging way to learn and teach geoscience. As smartphones become increasingly common, it is relatively straightforward for students to produce 'video diaries' by recording their research and learning experience over the course of a science module. Instead of keeping the video diaries for themselves, students may use social media such as Facebook to share their experience and thoughts. There are some potential benefits to linking video diaries and social media in pedagogical contexts. For example, online comments on video clips offer useful feedback and learning materials to the students. Students also have the opportunity to engage in geoscience outreach by producing authentic scientific content at the same time. A video diary project was conducted to test the pedagogical potential of using video diaries on social media in the context of geoscience outreach, undergraduate research and teaching. This project formed part of a problem-based learning module in field geophysics at an archaeological site in the UK. The project involved i) the students posting video clips about their research and problem-based learning in the field on a daily basis; and ii) the lecturer building an online outreach community with partner institutions. In this contribution, I will discuss the implementation of the project and critically evaluate the pedagogical potential of video diaries on social media. My discussion will focus on the following: 1) Effectiveness of video diaries on social media; 2) Student-centered approach of producing geoscience video diaries as part of their research and problem-based learning; 3) Learning, teaching and assessment based on video clips and related commentaries posted on Facebook; and 4) Challenges in creating and promoting online communities for geoscience outreach through the use of video diaries. I will compare the outcomes from this study with those from other pedagogical projects with video clips on geoscience, and

  16. Illustrating the Epitranscriptome at Nucleotide Resolution Using Methylation-iCLIP (miCLIP).

    Science.gov (United States)

    George, Harry; Ule, Jernej; Hussain, Shobbir

    2017-01-01

    Next-generation sequencing technologies have enabled the transcriptome to be profiled at a previously unprecedented speed and depth. This yielded insights into fundamental transcriptomic processes such as gene transcription, RNA processing, and mRNA splicing. Immunoprecipitation-based transcriptomic methods such as individual nucleotide resolution crosslinking immunoprecipitation (iCLIP) have also allowed high-resolution analysis of the RNA interactions of a protein of interest, thus revealing new regulatory mechanisms. We and others have recently modified this method to profile RNA methylation, and we refer to this customized technique as methylation-iCLIP (miCLIP). Variants of miCLIP have been used to map the methyl-5-cytosine (m5C) or methyl-6-adenosine (m6A) modification at nucleotide resolution in the human transcriptome. Here we describe the m5C-miCLIP protocol, discuss how it yields the nucleotide-resolution RNA modification maps, and comment on how these have contributed to the new field of molecular genetics research coined "epitranscriptomics."

  17. Estimation of Web video multiplicity

    Science.gov (United States)

    Cheung, SenChing S.; Zakhor, Avideh

    1999-12-01

    With the ever-growing popularity of video web-publishing, many popular contents are being mirrored, reformatted, modified and republished, resulting in excessive content duplication. While such redundancy provides fault tolerance for continuous availability of information, it could potentially create problems for multimedia search engines in that the search results for a given query might become repetitious and cluttered with a large number of duplicates. As such, developing techniques for detecting similarity and duplication is important to multimedia search engines. In addition, content providers might be interested in identifying duplicates of their content for legal, contractual or other business related reasons. In this paper, we propose an efficient algorithm called video signature to detect similar video sequences for large databases such as the web. The idea is to first form a 'signature' for each video sequence by selecting a small number of its frames that are most similar to a number of randomly chosen seed images. Then the similarity between any two video sequences can be reliably estimated by comparing their respective signatures. Using this method, we achieve 85 percent recall and precision ratios on a test database of 377 video sequences. As a proof of concept, we have applied our proposed algorithm to a collection of 1800 hours of video corresponding to around 45000 clips from the web. Our results indicate that, on average, every video in our collection from the web has around five similar copies.
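
    A minimal sketch of the signature idea described above, not the authors' implementation: each video is summarized, for every randomly chosen seed image, by its most similar frame, and two videos are compared by checking how many of those selected frames agree. Frames and seeds are represented here as plain feature vectors; all names and thresholds are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)

        def video_signature(frames, seeds):
            """For each seed image, keep the feature vector of the most similar frame."""
            sig = []
            for seed in seeds:
                dists = [np.linalg.norm(frame - seed) for frame in frames]
                sig.append(frames[int(np.argmin(dists))])
            return np.array(sig)

        def signature_similarity(sig_a, sig_b, tol=0.5):
            """Fraction of seed slots whose selected frames are close in feature space."""
            matches = [np.linalg.norm(a - b) < tol for a, b in zip(sig_a, sig_b)]
            return float(np.mean(matches))

        seeds = rng.random((4, 16))                          # 4 random seed "images"
        video1 = rng.random((30, 16))                        # 30 frames, 16-dim features
        video2 = video1 + rng.normal(0, 0.01, video1.shape)  # near-duplicate copy
        video3 = rng.random((30, 16))                        # unrelated video

        s1, s2, s3 = (video_signature(v, seeds) for v in (video1, video2, video3))
        print(signature_similarity(s1, s2))   # high: near-duplicate detected
        print(signature_similarity(s1, s3))   # low: unrelated content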

  18. CLIPS, AppleEvents, and AppleScript: Integrating CLIPS with commercial software

    Science.gov (United States)

    Compton, Michael M.; Wolfe, Shawn R.

    1994-01-01

    Many of today's intelligent systems are comprised of several modules, perhaps written in different tools and languages, that together help solve the user's problem. These systems often employ a knowledge-based component that is not accessed directly by the user, but instead operates 'in the background' offering assistance to the user as necessary. In these types of modular systems, an efficient, flexible, and easy-to-use mechanism for sharing data between programs is crucial. To help permit transparent integration of CLIPS with other Macintosh applications, the AI Research Branch at NASA Ames Research Center has extended CLIPS to allow it to communicate transparently with other applications through two popular data-sharing mechanisms provided by the Macintosh operating system: Apple Events (a 'high-level' event mechanism for program-to-program communication), and AppleScript, a recently-released scripting language for the Macintosh. This capability permits other applications (running on either the same or a remote machine) to send a command to CLIPS, which then responds as if the command were typed into the CLIPS dialog window. Any result returned by the command is then automatically returned to the program that sent it. Likewise, CLIPS can send several types of Apple Events directly to other local or remote applications. This CLIPS system has been successfully integrated with a variety of commercial applications, including data collection programs, electronic forms packages, DBMSs, and email programs. These mechanisms can permit transparent user access to the knowledge base from within a commercial application, and allow a single copy of the knowledge base to service multiple users in a networked environment.

  19. An approach for line clipping against a convex polyhedron

    Directory of Open Access Journals (Sweden)

    K. R. Wijeweera

    2016-11-01

    Full Text Available Line clipping is a bottleneck in many computer graphics applications. There are situations when millions of line segments need to be clipped against convex polyhedrons with millions of facets. An algorithm to clip line segments against a convex polyhedron is proposed in this work. The salient feature of the proposed algorithm is that it minimizes the number of computations by ignoring unnecessary intersection calculations. The other advantage of the proposed algorithm is that it needs only minimal information about the convex polyhedron: the equations of the facets and the centroid. This improves the efficiency of the algorithm. The line segment may have zero length (a point) or positive length. When the line segment is just a point that is outside with respect to at least one facet, it is rejected because it lies outside the convex polyhedron. When the line segment is parallel to a facet and one of its end points is outside, that line segment is also completely outside and is likewise rejected. If the line segment belongs to neither of these two cases, it is pruned against each facet in a certain order. In this case, the intersection points with only some of the facets need to be computed and the other intersection calculations can be ignored. If the line segment is completely outside, the clipped segment degenerates to a single point; that is, the two end points coincide. Due to precision error they do not coincide exactly, so approximate equality is tested instead. Using this property, completely outside line segments can be identified. Having two end points outside does not necessarily mean the line segment is completely outside. The widely used Cyrus-Beck algorithm computes all the intersection points with each facet of the polyhedron, while the proposed algorithm successfully avoids some of the intersection point calculations. In the best case, it is capable of avoiding all the unnecessary intersection
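
    A minimal sketch, under the assumption that the polyhedron is given by facet plane equations with outward normals (inside defined by n·x <= d), of parametric segment clipping against a convex polyhedron. It follows the classical Cyrus-Beck style of reasoning rather than the exact pruning order proposed in the paper; all names are illustrative.

        import numpy as np

        def clip_segment(p0, p1, facets):
            """facets: list of (normal, d) with the inside defined by normal.x <= d."""
            p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
            direction = p1 - p0
            t_enter, t_exit = 0.0, 1.0          # parametric interval kept so far
            for normal, d in facets:
                denom = np.dot(normal, direction)
                num = d - np.dot(normal, p0)     # >= 0 means p0 is inside this half-space
                if abs(denom) < 1e-12:           # segment parallel to the facet plane
                    if num < 0:                  # parallel and outside: reject immediately
                        return None
                    continue                     # parallel and inside: facet imposes no bound
                t_hit = num / denom
                if denom > 0:                    # heading out of the half-space
                    t_exit = min(t_exit, t_hit)
                else:                            # heading into the half-space
                    t_enter = max(t_enter, t_hit)
                if t_enter > t_exit:             # interval empty: segment completely outside
                    return None
            return p0 + t_enter * direction, p0 + t_exit * direction

        # Example: clip a segment against the unit cube [0,1]^3.
        cube = [((1, 0, 0), 1), ((-1, 0, 0), 0), ((0, 1, 0), 1),
                ((0, -1, 0), 0), ((0, 0, 1), 1), ((0, 0, -1), 0)]
        print(clip_segment((-1, 0.5, 0.5), (2, 0.5, 0.5), cube))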

  20. Reviews in instructional video

    NARCIS (Netherlands)

    van der Meij, Hans

    2017-01-01

    This study investigates the effectiveness of a video tutorial for software training whose construction was based on a combination of insights from multimedia learning and Demonstration-Based Training. In the videos, a model of task performance was enhanced with instructional features that were

  1. Personal Digital Video Stories

    DEFF Research Database (Denmark)

    Ørngreen, Rikke; Henningsen, Birgitte Sølbeck; Louw, Arnt Vestergaard

    2016-01-01

    agenda focusing on video productions in combination with digital storytelling, followed by a presentation of the digital storytelling features. The paper concludes with a suggestion to initiate research in what is identified as Personal Digital Video (PDV) Stories within longitudinal settings, while...

  2. Bayesian Recovery of Clipped OFDM Signals: A Receiver-based Approach

    KAUST Repository

    Al-Rabah, Abdullatif R.

    2013-05-01

    Recently, orthogonal frequency-division multiplexing (OFDM) has been adopted for high-speed wireless communications due to its robustness against multipath fading. However, one of the main fundamental drawbacks of OFDM systems is the high peak-to-average-power ratio (PAPR). Several techniques have been proposed for PAPR reduction. Most of these techniques require transmitter-based (pre-compensated) processing. On the other hand, receiver-based alternatives would save power and reduce transmitter complexity. With this in mind, a possible approach is to limit the amplitude of the OFDM signal to a predetermined threshold, which is equivalent to adding a sparse clipping signal; this clipping signal is then estimated at the receiver to recover the original signal. In this work, we propose a Bayesian receiver-based low-complexity clipping signal recovery method for PAPR reduction. The method is able to i) effectively reduce the PAPR via a simple clipping scheme at the transmitter side, ii) use a Bayesian recovery algorithm to reconstruct the clipping signal at the receiver side by measuring part of the subcarriers, iii) perform well in the absence of statistical information about the signal (e.g. clipping level) and the noise (e.g. noise variance), and at the same time iv) remain energy efficient due to its low complexity. Specifically, the proposed recovery technique is implemented in a data-aided fashion. The data-aided method collects clipping information by measuring reliable data subcarriers, thus making full use of the spectrum for data transmission without the need for tone reservation. The study is extended further to discuss how to improve the recovery of the clipping signal by utilizing some features of practical OFDM systems, i.e., oversampling and the presence of multiple receivers. Simulation results demonstrate the superiority of the proposed technique over other recovery algorithms. The overall objective is to show that the receiver-based Bayesian technique is highly
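
    A minimal sketch, assuming a simple OFDM model, of the clipping step described above: the time-domain signal is limited to a threshold and the difference is the sparse "clipping signal" that the receiver would try to recover. Names and parameter values are illustrative, not taken from the thesis.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 256                                   # number of subcarriers
        symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)  # QPSK data
        x = np.fft.ifft(symbols) * np.sqrt(N)     # time-domain OFDM symbol

        papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

        threshold = 1.5 * np.sqrt(np.mean(np.abs(x) ** 2))   # clipping level
        clipped = np.where(np.abs(x) > threshold,
                           threshold * x / np.abs(x),        # keep phase, limit magnitude
                           x)
        clipping_signal = x - clipped             # sparse: nonzero only where |x| > threshold

        papr_clipped_db = 10 * np.log10(np.max(np.abs(clipped) ** 2) /
                                        np.mean(np.abs(clipped) ** 2))
        print(f"PAPR before: {papr_db:.2f} dB, after clipping: {papr_clipped_db:.2f} dB")
        print("nonzero clipping-signal samples:", np.count_nonzero(clipping_signal))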

  3. Part Two: Learning Science Through Digital Video: Student Views on Watching and Creating Videos

    Science.gov (United States)

    Wade, P.; Courtney, A. R.

    2014-12-01

    The use of digital video for science education has become common with the wide availability of video imagery. This study continues research into aspects of using digital video as a primary teaching tool to enhance student learning in undergraduate science courses. Two survey instruments were administered to undergraduate non-science majors. Survey One focused on: a) What science is being learned from watching science videos such as a "YouTube" clip of a volcanic eruption or an informational video on geologic time? and b) What are student preferences with regard to their learning (e.g. using video versus traditional modes of delivery)? Survey Two addressed students' perspectives on the storytelling aspect of the video with respect to: a) sustaining interest, b) providing science information, c) style of video and d) quality of the video. Undergraduate non-science majors were the primary focus group in this study. Students were asked to view video segments and respond to a survey focused on what they learned from the segments. The storytelling aspect of each video was also addressed by students. Students watched 15-20 shorter (3-15 minute) science videos created within the last four years. Initial results of this research indicate that shorter video segments were preferred and that the storytelling quality of each video was related to student learning.

  4. Use of an iPhone 4 with Video Features to Assist Location of Students with Moderate Intellectual Disability When Lost in Community Settings

    Science.gov (United States)

    Purrazzella, Kaitlin; Mechling, Linda C.

    2013-01-01

    This study evaluated the acquisition of use of an iPhone 4 by adults with moderate intellectual disability to take and send video captions of their location when lost in the community. A multiple probe across participants design was used to evaluate the effectiveness of the intervention which used video modeling, picture prompts, and instructor…

  5. Content-based TV sports video retrieval using multimodal analysis

    Science.gov (United States)

    Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru

    2003-09-01

    In this paper, we propose content-based video retrieval, which is retrieval based on semantic content. Because video data is composed of multimodal information streams such as visual, auditory and textual streams, we describe a strategy of using multimodal analysis for automatically parsing sports video. The paper first defines the basic structure of a sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. The experimental results for TV sports video of football games indicate that the multimodal analysis is effective for video retrieval by quickly browsing tree-like video clips or inputting keywords within a predefined domain.

  6. Finite Element and Experimental Analysis of Function of Plastic Clips

    OpenAIRE

    Honarpardaz, Mohammad Mahdi

    2011-01-01

    The aim of this work is to investigate the function of plastic clips which are used to join different parts, during mounting and dismounting processes. The clips are made of POM and will be mounted on steel plates. The study is undertaken using experimental and numerical methods. In experiments, the mounting and dismounting forces are measured with respect to vertical displacement of the clips related to the plate. The numerical method is performed using structural implicit non-linear quasi-s...

  7. A CLIPS expert system for clinical flow cytometry data analysis

    Science.gov (United States)

    Salzman, G. C.; Duque, R. E.; Braylan, R. C.; Stewart, C. C.

    1990-01-01

    An expert system is being developed using CLIPS to assist clinicians in the analysis of multivariate flow cytometry data from cancer patients. Cluster analysis is used to find subpopulations representing various cell types in multiple datasets each consisting of four to five measurements on each of 5000 cells. CLIPS facts are derived from results of the clustering. CLIPS rules are based on the expertise of Drs. Stewart, Duque, and Braylan. The rules incorporate certainty factors based on case histories.

  8. A neural network simulation package in CLIPS

    Science.gov (United States)

    Bhatnagar, Himanshu; Krolak, Patrick D.; Mcgee, Brenda J.; Coleman, John

    1990-01-01

    The intrinsic similarity between the firing of a rule and the firing of a neuron has been captured in this research to provide a neural network development system within an existing production system (CLIPS). A very important by-product of this research has been the emergence of an integrated technique of using rule-based systems in conjunction with neural networks to solve complex problems. The system provides a tool kit for an integrated use of the two techniques and is also extendible to accommodate other AI techniques such as semantic networks, connectionist networks, and even Petri nets. This integrated technique can be very useful in solving complex AI problems.

  9. Migration of Surgical Clips into the Common Bile Duct after Laparoscopic Cholecystectomy

    Directory of Open Access Journals (Sweden)

    Krishn Kant Rawal

    2017-01-01

    Full Text Available Laparoscopic cholecystectomy (LC) is currently the treatment of choice for symptomatic gallstones. Associated complications include bile duct injury, retained common bile duct (CBD) stones, and migration of surgical clips. Clip migration into the CBD can present with recurrent cholangitis over a period of time. Retained CBD stones can be another cause of recurrent cholangitis. A case of two surgical clips migrating into the common bile duct along with a few retained stones following LC is reported here. The patient had repeated episodes of fever, pain at the epigastrium, jaundice, and pruritus 3 months after LC. Liver function tests revealed features of obstructive jaundice. Ultrasonography of the abdomen showed a dilated CBD with a few stones. In view of acute cholangitis, an urgent endoscopic retrograde cholangiopancreatography was done, which demonstrated a few filling defects and 2 linear metallic densities in the CBD. A few retained stones along with the 2 surgical clips were removed successfully from the CBD by endoscopic retrograde cholangiopancreatography after papillotomy, using a Dormia basket. The patient improved dramatically following the procedure.

  10. The device ``Klyost'' for clipping vessels and soft-elastic tubular structures

    Science.gov (United States)

    Ryklina, E. P.; Khmelevskaya, I. Yu.; Prokoshkin, S. D.; Ipatkin, R. V.

    2003-10-01

    A retrospective of devices elaborated earlier and applied in clinical practice is presented, together with a newly invented device for clipping vessels and soft-elastic tubular structures. Traditionally in surgical practice, to stop blood flow, surgeons ligate vessels with a silk thread. In laparoscopic surgery this process is accompanied by great difficulties. An alternative method of clipping vessels and structures is also used. The drawback of this method is that the structure must first be mobilized. To date there has been no device that combines both manipulations in one method: suturing of tissue and applying a clip to a structure. The paper describes the new device, which combines these manipulations and consequently simplifies the work of the surgeon and shortens the operation time. The device functioning is based on one-way and two-way shape memory effects. The device was applied in the treatment of 5 patients and demonstrated good functional capabilities. The implanted clip can be easily removed without any trauma to the patient.

  11. Cerebral Aneurysm Clipping Surgery Simulation Using Patient-Specific 3D Printing and Silicone Casting.

    Science.gov (United States)

    Ryan, Justin R; Almefty, Kaith K; Nakaji, Peter; Frakes, David H

    2016-04-01

    Neurosurgery simulator development is growing as practitioners recognize the need for improved instructional and rehearsal platforms to improve procedural skills and patient care. In addition, changes in practice patterns have decreased the volume of specific cases, such as aneurysm clippings, which reduces the opportunity for operating room experience. The authors developed a hands-on, dimensionally accurate model for aneurysm clipping using patient-derived anatomic data and three-dimensional (3D) printing. Design of the model focused on reproducibility as well as adaptability to new patient geometry. A modular, reproducible, and patient-derived medical simulacrum was developed for medical learners to practice aneurysmal clipping procedures. Various forms of 3D printing were used to develop a geometrically accurate cranium and vascular tree featuring 9 patient-derived aneurysms. 3D printing in conjunction with elastomeric casting was leveraged to achieve a patient-derived brain model with tactile properties not yet available from commercial 3D printing technology. An educational pilot study was performed to gauge simulation efficacy. Through the novel manufacturing process, a patient-derived simulacrum was developed for neurovascular surgical simulation. A follow-up qualitative study suggests potential to enhance current educational programs; assessments support the efficacy of the simulacrum. The proposed aneurysm clipping simulator has the potential to improve learning experiences in the surgical environment. 3D printing and elastomeric casting can produce patient-derived models for a dynamic learning environment that add value to surgical training and preparation. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Dose perturbation behind tantalum clips in ocular proton therapy

    Energy Technology Data Exchange (ETDEWEB)

    Ptaszkiewicz, M., E-mail: marta.ptaszkiewicz@ifj.edu.p [Institute of Nuclear Physics Polish Academy of Science (IFJ PAN), Department of Radiation Physics and Dosimetry, ul. Radzikowskiego 152, PL 31-342 Krakow (Poland); Weber, A. [Charite-Universitatsmedizin Berlin, Berlin (Germany); Swakon, J.; Klosowski, M.; Olko, P.; Bilski, P.; Michalec, B.; Czopyk, L. [Institute of Nuclear Physics Polish Academy of Science (IFJ PAN), Department of Radiation Physics and Dosimetry, ul. Radzikowskiego 152, PL 31-342 Krakow (Poland)

    2010-03-15

    Proton therapy of eye tumors requires precise positioning in the sub-millimeter range. For this reason, small tantalum markers (clips), 2.5 mm in diameter and 0.2 mm thick, are sutured onto the sclera around the tumor base and used for radiological verification of the position and orientation of the eye at each treatment session. In some cases during irradiation, clips might be positioned between the tumor and the radiation source, which may lead to perturbation of the dose distribution and underdosing of some parts of the tumor. The aim of this work was to determine experimentally the dose distribution behind the tantalum clips after irradiation with a parallel proton beam. The dose measurements were performed using the two-dimensional thermoluminescent dosimetric system newly developed at the IFJ PAN in Poland, and a clip phantom consisting of a 100 mm x 100 mm x 5.3 mm PMMA plate with holes for placing clips at five irradiation angles (0, 30, 45, 60 and 90°). The clip phantom was irradiated with a 68 MeV proton beam. The measurements were performed at different depths ranging from 6.3 to 24.8 mm water-equivalent depth. The measurements of dose modification due to the tantalum clips showed underdosing ranging from 4% to 32%. For these reasons, ophthalmologists need to take this effect into account during clip surgery and medical physicists need to consider the position of the clips in treatment planning.

  13. Collaborative Video Sketching

    DEFF Research Database (Denmark)

    Henningsen, Birgitte; Gundersen, Peter Bukovica; Hautopp, Heidi

    2017-01-01

    This paper introduces to what we define as a collaborative video sketching process. This process links various sketching techniques with digital storytelling approaches and creative reflection processes in video productions. Traditionally, sketching has been used by designers across various...... forms and through empirical examples, we present and discuss the video recording of sketching sessions, as well as development of video sketches by rethinking, redoing and editing the recorded sessions. The empirical data is based on workshop sessions with researchers and students from universities...... and university colleges and primary and secondary school teachers. As researchers, we have had different roles in these action research case studies where various video sketching techniques were applied.The analysis illustrates that video sketching can take many forms, and two common features are important...

  14. Splitting a colon geometry with multiplanar clipping

    Science.gov (United States)

    Ahn, David K.; Vining, David J.; Ge, Yaorong; Stelts, David R.

    1998-06-01

    Virtual colonoscopy, a recent three-dimensional (3D) visualization technique, has provided radiologists with a unique diagnostic tool. Using this technique, a radiologist can examine the internal morphology of a patient's colon by navigating through a surface-rendered model that is constructed from helical computed tomography image data. Virtual colonoscopy can be used to detect early forms of colon cancer in a way that is less invasive and less expensive than conventional endoscopy. However, the common approach of 'flying' through the colon lumen to visually search for polyps is tedious and time-consuming, especially when a radiologist loses his or her orientation within the colon. Furthermore, a radiologist's field of view is often limited by the 3D camera position located inside the colon lumen. We have developed a new technique, called multi-planar geometry clipping, that addresses these problems. Our algorithm divides a complex colon anatomy into several smaller segments, and then splits each of these segments in half for display on a static medium. Multi-planar geometry clipping eliminates virtual colonoscopy's dependence upon expensive, real-time graphics workstations by enabling radiologists to globally inspect the entire internal surface of the colon from a single viewpoint.
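
    A minimal sketch, not the authors' implementation, of the core step in plane-based geometry splitting: classifying mesh triangles against a cutting plane so that a segment can be divided into two halves for display. Triangles that straddle the plane are simply assigned to both halves here; a full implementation would re-triangulate them along the cut. All names are illustrative.

        import numpy as np

        def split_by_plane(vertices, triangles, plane_point, plane_normal):
            """Return triangle index lists (front, back) relative to the cutting plane."""
            signed = (np.asarray(vertices, float) - plane_point) @ np.asarray(plane_normal, float)
            front, back = [], []
            for i, tri in enumerate(triangles):
                d = signed[list(tri)]
                if np.all(d >= 0):
                    front.append(i)
                elif np.all(d <= 0):
                    back.append(i)
                else:                      # straddles the plane: keep in both halves
                    front.append(i)
                    back.append(i)
            return front, back

        # Example: two separate triangles cut apart by the plane x = 1.5.
        verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (2, 0, 0), (3, 0, 0), (2, 1, 0)]
        tris = [(0, 1, 2), (3, 4, 5)]
        print(split_by_plane(verts, tris, plane_point=(1.5, 0, 0), plane_normal=(1, 0, 0)))
        # -> ([1], [0]): the second triangle lies in front, the first behind the plane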

  15. Clipping herbaceous vegetation improves early performance of planted seedlings of the Mediterranean shrub Quercus coccifera

    Directory of Open Access Journals (Sweden)

    A. Aubenau

    2007-12-01

    Full Text Available We tested how the conditions resulting from alternative management strategies aimed at mitigating abiotic and biotic limitations to plant establishment affect the performance of planted Quercus coccifera seedlings. This species is a xerophytic and heliophilous Mediterranean shrub, of interest for the restoration of abandoned farmland. We hypothesised that release from herb competition by clipping would allow Q. coccifera seedlings to cope more efficiently with water shortage by adjusting their mass allocation pattern. We established three environments of herb competition: absence of competition (AC), reduced competition by clipping (RC), and total competition (TC); and applied two irrigation treatments: low and high irrigation. We measured soil moisture at different depths and above- and below-ground herb biomass, and evaluated seedling responses such as mortality, growth, biomass allocation, and morphological and physiological features. The TC treatment reduced water availability more than the RC treatment, in agreement with the higher water stress of seedlings under TC conditions. Irrigation increased above- and below-ground herb biomass, whereas clipping reduced herb production. Release from herb competition by clipping increased seedling survivorship by one order of magnitude and resulted in a growth rate comparable to that in the absence of competition. This growth was mostly related to carbon gain allocated to roots. The competition intensity imposed by the treatments was related to a parallel reduction in total plant leaf area, biomass allocated to leaves and shoot:root ratio, and an increase in biomass allocated to roots and leaf mass area. The negative effects of herbs on Q. coccifera seedlings seem to result from competition for both water and light, in contrast with previous research on more mesic Quercus species, for which competition is primarily for water. Clipping of herbs is a feasible technique that greatly improved seedling performance, and

  16. Video Malware - Behavioral Analysis

    Directory of Open Access Journals (Sweden)

    Rajdeepsinh Dodia

    2015-04-01

    Full Text Available Abstract The number of malware attacks exploiting the internet is increasing day by day and has become a serious threat. The latest malware spreads through media players embedded in video clips of a humorous nature designed to lure end users. Once it is executed and installed, the behavior of the malware is in the malware author's hands. The malware spreads through the Internet, USB drives, and shared files and folders, which keeps its presence concealed. The analyzed video, named after a film celebrity, carried a malware variant collected from the laptop of a terror organization. It runs in the background and contains malicious code that steals sensitive user information such as banking credentials, usernames and passwords, and sends it to a remote host known as the command and control server. The stolen data is directed to an email address encapsulated in the malicious code. The malware can also spread through USB and other devices. In summary, the analysis reveals the presence of malicious code in an executable video file and describes its behavior.

  17. CLIP-170 facilitates the formation of kinetochore-microtubule attachments

    NARCIS (Netherlands)

    Tanenbaum, M.E.; Galjart, N.; Vugt, M.A.T.M. van; Medema, R.H.

    2006-01-01

    CLIP-170 is a microtubule 'plus end tracking' protein involved in several microtubule-dependent processes in interphase. At the onset of mitosis, CLIP-170 localizes to kinetochores, but at metaphase, it is no longer detectable at kinetochores. Although RNA interference (RNAi) experiments have

  18. 21 CFR 882.4190 - Clip forming/cutting instrument.

    Science.gov (United States)

    2010-04-01

    ... 21 CFR 882.4190, Food and Drugs, FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES. Clip forming/cutting instrument. (a) Identification. A clip forming/cutting instrument is a device used by the...

  19. Automatic Metadata Generation Through Analysis of Narration Within Instructional Videos.

    Science.gov (United States)

    Rafferty, Joseph; Nugent, Chris; Liu, Jun; Chen, Liming

    2015-09-01

    Current activity-recognition-based assistive living solutions have adopted relatively rigid models of inhabitant activities. These solutions have some deficiencies associated with the use of these models. To address this, a goal-oriented solution has been proposed. In a goal-oriented solution, goal models offer a method of flexibly modelling inhabitant activity. The flexibility of these goal models can dynamically produce a large number of varying action plans that may be used to guide inhabitants. In order to provide illustrative, video-based instruction for these numerous action plans, a number of video clips would need to be associated with each variation. To address this, rich metadata may be used to automatically match appropriate video clips from a video repository to each specific, dynamically generated activity plan. This study introduces a mechanism for automatically generating suitable rich metadata representing the actions depicted within video clips to facilitate such video matching. The performance of this mechanism was evaluated using eighteen video files; during this evaluation metadata was automatically generated with a high level of accuracy.
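
    A minimal sketch of the underlying idea, not the authors' system: action metadata is derived from a clip's narration transcript and then used to match clips to a dynamically generated plan step. The action lexicon and clip transcripts below are invented examples.

        # Map each abstract action to the narration keywords that indicate it.
        action_lexicon = {
            "fill": ["fill", "pour"],
            "boil": ["boil", "heat"],
            "stir": ["stir", "mix"],
        }

        def extract_actions(transcript):
            """Return the set of lexicon actions whose keywords appear in the narration."""
            words = transcript.lower().split()
            return {action for action, keys in action_lexicon.items()
                    if any(k in words for k in keys)}

        clip_metadata = {
            "clip_01.mp4": extract_actions("First fill the kettle with water"),
            "clip_02.mp4": extract_actions("Now boil the water and stir the tea"),
        }

        def clips_for_step(step_action):
            """Find clips whose generated metadata contains the requested plan action."""
            return [clip for clip, actions in clip_metadata.items() if step_action in actions]

        print(clips_for_step("boil"))   # -> ['clip_02.mp4']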

  20. The 3D Human Motion Control Through Refined Video Gesture Annotation

    Science.gov (United States)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces such as remote controllers with motion sensing technology on the Nintendo Wii [1]. In particular, video-based human-computer interaction (HCI) techniques have been applied to games, a representative example being 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from the intractable game controller. Moreover, video-based HCI is crucial for communication between humans and computers since it is intuitive, easy to obtain, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge; the level of accuracy is highly dependent on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions for specific performance (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which a column corresponds to a human sub-body part and a row represents a time frame of the data capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike the low-level feature values of video human motion, the 3D human motion-capture data matrix does not contain pixel values but is closer to a human level of semantics.
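
    A minimal sketch of the column-selection idea mentioned above: a motion-capture clip is stored as a frames-by-channels matrix, and a sub-body part's motion is obtained by slicing its columns. The channel layout here is invented, not the actual VICON export format.

        import numpy as np

        n_frames = 120
        channel_names = ["hip_x", "hip_y", "hip_z",
                         "r_wrist_x", "r_wrist_y", "r_wrist_z",
                         "l_ankle_x", "l_ankle_y", "l_ankle_z"]
        motion = np.random.rand(n_frames, len(channel_names))   # stand-in for capture data

        def sub_body_motion(matrix, names, part_prefix):
            """Select the columns whose channel name starts with the given body-part prefix."""
            cols = [i for i, name in enumerate(names) if name.startswith(part_prefix)]
            return matrix[:, cols]

        right_wrist = sub_body_motion(motion, channel_names, "r_wrist")
        print(right_wrist.shape)   # -> (120, 3): the right-wrist trajectory over all frames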

  1. Film clips and narrative text as subjective emotion elicitation techniques.

    Science.gov (United States)

    Zupan, Barbra; Babbage, Duncan R

    2017-01-01

    Film clips and narrative text are useful techniques for eliciting emotion in a laboratory setting but have not been examined side-by-side using the same methodology. This study examined the self-identification of emotions elicited by film clip and narrative text stimuli to confirm that selected stimuli appropriately target the intended emotions. Seventy participants viewed 30 film clips, and 40 additional participants read 30 narrative texts. Participants identified the emotion experienced (happy, sad, angry, fearful, neutral; six stimuli each). Eighty-five percent of participants self-identified the target emotion for at least two stimuli in all emotion categories of film clips except angry (only one), and in all categories of narrative text except fearful (only one). The most effective angry text was correctly identified 74% of the time. Film clips were more effective than narrative texts at eliciting the target emotions, whether in terms of identification of the correct emotion (angry), intensity ratings (happy, sad), or both (fearful).

  2. Assessing Computational Steps for CLIP-Seq Data Analysis

    Directory of Open Access Journals (Sweden)

    Qi Liu

    2015-01-01

    Full Text Available RNA-binding proteins (RBPs) are key players in regulating gene expression at the posttranscriptional level. CLIP-Seq, with the ability to provide a genome-wide map of protein-RNA interactions, has been increasingly used to decipher RBP-mediated posttranscriptional regulation. Generating highly reliable binding sites from CLIP-Seq requires not only stringent library preparation but also considerable computational effort. Here we present the first systematic evaluation of major computational steps for identifying RBP binding sites from CLIP-Seq data, including preprocessing, the choice of control samples, peak normalization, and motif discovery. We found that avoiding PCR amplification artifacts, normalizing to input RNA or mRNA-seq, and defining the background model from control samples can reduce the bias introduced by RNA abundance and improve the quality of detected binding sites. Our findings can serve as a general guideline for the design of CLIP experiments and the comprehensive analysis of CLIP-Seq data.
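
    A minimal sketch, not taken from the paper, of one of the evaluated steps: normalizing CLIP peak read counts to a matched input (control) library so that apparent enrichment is not driven simply by RNA abundance. The counts and library sizes are invented.

        def normalized_enrichment(clip_count, input_count, clip_lib_size, input_lib_size,
                                  pseudocount=1.0):
            """Library-size-normalized CLIP/input enrichment ratio for one candidate peak."""
            clip_rate = (clip_count + pseudocount) / clip_lib_size
            input_rate = (input_count + pseudocount) / input_lib_size
            return clip_rate / input_rate

        peaks = {"peak_A": (250, 20), "peak_B": (250, 240)}   # (CLIP reads, input reads)
        for name, (clip_c, input_c) in peaks.items():
            ratio = normalized_enrichment(clip_c, input_c,
                                          clip_lib_size=2e7, input_lib_size=3e7)
            print(name, round(ratio, 2))
        # peak_A is enriched over input; peak_B mostly reflects high RNA abundance.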

  3. In search of video event semantics

    NARCIS (Netherlands)

    Mazloom, M.

    2016-01-01

    In this thesis we aim to represent an event in a video using semantic features. We start from a bank of concept detectors for representing events in video. At first we considered the relevance of concepts to the event inside the video representation. We address the problem of video event

  4. Video-assisted segmentation of speech and audio track

    Science.gov (United States)

    Pandit, Medha; Yusoff, Yusseri; Kittler, Josef; Christmas, William J.; Chilton, E. H. S.

    1999-08-01

    Video database research is commonly concerned with the storage and retrieval of visual information involving sequence segmentation, shot representation and video clip retrieval. In multimedia applications, video sequences are usually accompanied by a sound track. The sound track contains potential cues to aid shot segmentation, such as different speakers, background music, singing and distinctive sounds. These different acoustic categories can be modeled to allow for effective database retrieval. In this paper, we address the problem of automatic segmentation of the audio track of multimedia material. This audio-based segmentation can be combined with video scene shot detection in order to achieve partitioning of the multimedia material into semantically significant segments.

  5. Using Television Commercials as Video Illustrations: Examples from a Money and Banking Economics Class

    Science.gov (United States)

    Bowes, David R.

    2014-01-01

    Video clips are an excellent way to enhance lecture material. Television commercials are a source of video examples that should not be overlooked and they are readily available on the internet. They are familiar, short, self-contained, constantly being created, and often funny. This paper describes several examples of television commercials that…

  6. YouTube Video Project: A "Cool" Way to Learn Communication Ethics

    Science.gov (United States)

    Lehman, Carol M.; DuFrene, Debbie D.; Lehman, Mark W.

    2010-01-01

    The millennial generation embraces new technologies as a natural way of accessing and exchanging information, staying connected, and having fun. YouTube, a video-sharing site that allows users to upload, view, and share video clips, is among the latest "cool" technologies for enjoying quick laughs, employing a wide variety of corporate activities,…

  7. YouTube as a Qualitative Research Asset: Reviewing User Generated Videos as Learning Resources

    Science.gov (United States)

    Chenail, Ronald J.

    2011-01-01

    YouTube, the video hosting service, offers students, teachers, and practitioners of qualitative researchers a unique reservoir of video clips introducing basic qualitative research concepts, sharing qualitative data from interviews and field observations, and presenting completed research studies. This web-based site also affords qualitative…

  8. Approaches to Interactive Video Anchors in Problem-Based Science Learning

    Science.gov (United States)

    Kumar, David Devraj

    2010-01-01

    This paper is an invited adaptation of the IEEE Education Society Distinguished Lecture Approaches to Interactive Video Anchors in Problem-Based Science Learning. Interactive video anchors have a cognitive theory base, and they help to enlarge the context of learning with information-rich real-world situations. Carefully selected movie clips and…

  9. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting...... of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast....

  10. Video Analytics Evaluation: Survey of Datasets, Performance Metrics and Approaches

    Science.gov (United States)

    2014-09-01

    events are birthday party, parade, dog show, etc. multimedia event recounting: given an event description and some examples, review a video clip containing...trajectories. 6.10 Tianjin University of Technology team Tianjin University of Technology has modeled human behaviour with the philosophy of bag of

  11. Current Events and Technology: Video and Audio on the Internet.

    Science.gov (United States)

    Laposata, Matthew M.; Howick, Tom; Dias, Michael J.

    2002-01-01

    Explains the effectiveness of visual aids compared to written materials in teaching and recommends using television segments for teaching purposes. Introduces digitized clips provided by major television news organizations through the Internet and describes the technology requirements for successful viewing of streaming videos and audios. (YDS)

  12. Video Dubbing Projects in the Foreign Language Curriculum

    Science.gov (United States)

    Burston, Jack

    2005-01-01

    The dubbing of muted video clips offers an excellent opportunity to develop the skills of foreign language learners at all linguistic levels. In addition to its motivational value, soundtrack dubbing provides a rich source of activities in all language skill areas: listening, reading, writing, speaking. With advanced students, it also lends itself…

  13. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  14. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    OpenAIRE

    Dat Tien Nguyen; Ki Wan Kim; Hyung Gil Hong; Ja Hyung Koo; Min Cheol Kim; Kang Ryoung Park

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has ...

  15. Stereotypies in autism: a video demonstration of their clinical variability

    Directory of Open Access Journals (Sweden)

    Sylvie eGoldman

    2013-01-01

    Full Text Available In autism, stereotypies are frequent and disabling, and whether they correspond to a hyperkinetic movement disorder, a homeostatic response aiming at sensory modulation, or a regulator of arousal remains to be established. So far, it has been challenging to distinguish among these different possibilities, not only because of lack of objective and quantitative means to assess stereotypies, but in our opinion also because of the underappreciated diversity of their clinical presentations. Herein, we illustrate the broad spectrum of stereotypies and demonstrate the usefulness of video-assisted clinical observations of children with autism. The clips presented were extracted from play sessions of 129 children with autism disorder. We conclude that compared to widely used questionnaires and interviews, systematic video observations provide a unique means to classify and score precisely the clinical features of stereotypies. We believe this approach will prove useful to both clinicians and researchers as it offers the level of detail from retrievable images necessary to begin to assess effects of age and treatments on stereotypies, and to embark on the type of investigations required to unravel the physiological basis of motor behaviors in autism.

  16. Southwest, Frontier planes clip wings in Phoenix

    National Research Council Canada - National Science Library

    Ben Mutzabaugh

    2017-01-01

    ... reports did not specify which one. Video from ABC 15 of Phoenix showed damage to the wing tip of the Southwest plane. A separate image tweeted by CBS 5 of Phoenix indicated that the wing of the Frontier aircraft also was damaged. The Frontier flight was bound for Denver, and the carrier put passengers on a replacement aircraft. Passengers on Southwest's ...

  17. Video Editing System

    Science.gov (United States)

    Schlecht, Leslie E.; Kutler, Paul (Technical Monitor)

    1998-01-01

    This is a proposal for a general-use system, based on the SGI IRIS workstation platform, for recording computer animation to videotape. In addition, this system would provide features for simple editing and enhancement. Described here are a list of requirements for the system and a proposed configuration including the SGI VideoLab Integrator, VideoMedia VLAN animation controller and the Pioneer rewritable laserdisc recorder.

  18. Digital Video Teach Yourself VISUALLY

    CERN Document Server

    Watson, Lonzell

    2010-01-01

    Tips and techniques for shooting and sharing superb digital videos. Never before has video been more popular-or more accessible to the home photographer. Now you can create YouTube-worthy, professional-looking video, with the help of this richly illustrated guide. In a straightforward, simple, highly visual format, Teach Yourself VISUALLY Digital Video demystifies the secrets of great video. With colorful screenshots and illustrations plus step-by-step instructions, the book explains the features of your camera and their capabilities, and shows you how to go beyond "auto" to manually

  19. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  20. Development of an artifact-free aneurysm clip

    Directory of Open Access Journals (Sweden)

    Brack Alexander

    2016-09-01

    Full Text Available For the treatment of intracranial aneurysms with aneurysm clips, a follow-up MRI inspection is usually required. To avoid any artifacts, which can make a proper diagnosis difficult, a new approach for manufacturing an aneurysm clip entirely made from fiber-reinforced plastics has been developed. In this paper, the design concept of the clip, the development of a new manufacturing technology for the fiber-reinforced components, and first results from the examination of the components in phantom MRI testing are presented.

  1. Observational learning in capuchin monkeys: a video deficit effect.

    Science.gov (United States)

    Anderson, James R; Kuroshima, Hika; Fujita, Kazuo

    2017-07-01

    Young human children have been shown to learn less effectively from video or televised images than from real-life demonstrations. Although nonhuman primates respond to and can learn from video images, there is a lack of direct comparisons of task acquisition from video and live demonstrations. To address this gap in knowledge, we presented capuchin monkeys with video clips of a human demonstrator explicitly hiding food under one of two containers. The clips were presented at normal, faster than normal, or slower than normal speed, and then the monkeys were allowed to choose between the real containers. Even after 55 sessions and hundreds of video demonstration trials the monkeys' performances indicated no mastery of the task, and there was no effect of video speed. When given live demonstrations of the hiding act, the monkeys' performances were vastly improved. Upon subsequent return to video demonstrations, performances declined to pre-live-demonstration levels, but this time with evidence for an advantage of fast video demonstrations. Demonstration action speed may be one aspect of images that influence nonhuman primates' ability to learn from video images, an ability that in monkeys, as in young children, appears limited compared to learning from live models.

  2. A model of face selection in viewing video stories

    Science.gov (United States)

    Suda, Yuki; Kitazawa, Shigeru

    2015-01-01

    When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We here show the following: 1) the “peak” face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths, 2) gaze behaviours remained unchanged whether the sound was provided or not, 3) the gaze behaviours were sensitive to time reversal, and 4) nearly 60% of the variance of gaze behaviours was explained by the face saliency that was defined as a function of its size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces, and directs our eyes to the most salient face at each moment. PMID:25597621
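
    A minimal sketch, assuming a simple linear form, of the kind of face-saliency score described above: each face in a frame receives a score from its size, novelty, head movement, and mouth movement, and gaze is predicted to land on the highest-scoring face. The weights and feature values are invented, not the fitted model from the study.

        def face_saliency(size, novelty, head_motion, mouth_motion,
                          w=(0.4, 0.2, 0.2, 0.2)):
            """Weighted combination of the four face features (all normalized to [0, 1])."""
            features = (size, novelty, head_motion, mouth_motion)
            return sum(wi * fi for wi, fi in zip(w, features))

        # Two faces present in one frame of a clip.
        faces = {
            "speaker":  face_saliency(size=0.6, novelty=0.1, head_motion=0.3, mouth_motion=0.9),
            "listener": face_saliency(size=0.5, novelty=0.0, head_motion=0.1, mouth_motion=0.0),
        }
        print(max(faces, key=faces.get))   # -> 'speaker': the predicted gaze target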

  3. Camcorder 101: Buying and Using Video Cameras.

    Science.gov (United States)

    Catron, Louis E.

    1991-01-01

    Lists nine practical applications of camcorders to theater companies and programs. Discusses the purchase of video gear, camcorder features, accessories, the use of the camcorder in the classroom, theater management, student uses, and video production. (PRA)

  4. Video Player Keyboard Shortcuts: MedlinePlus

    Science.gov (United States)

    ... of this page: https://medlineplus.gov/hotkeys.html Video Player Keyboard Shortcuts To use the sharing features ... of accessible keyboard shortcuts for our latest Health videos on the MedlinePlus site. These shortcuts allow you ...

  5. Children's Video Games as Interactive Racialization

    OpenAIRE

    Martin, Cathlena

    2008-01-01

    Cathlena Martin explores in her paper "Children's Video Games as Interactive Racialization" selected children's video games. Martin argues that children's video games often act as reinforcement for the games' television and film counterparts and their racializing characteristics and features. In Martin's analysis the video games discussed represent media through which to analyze racial identities and ideologies. In making the case for positive female minority leads in children's video games, ...

  6. Digital video transcoding for transmission and storage

    CERN Document Server

    Sun, Huifang; Chen, Xuemin

    2004-01-01

    Professionals in the video and multimedia industries need a book that explains industry standards for video coding and how to convert the compressed information between standards. Digital Video Transcoding for Transmission and Storage answers this demand while also supplying the theories and principles of video compression and transcoding technologies. Emphasizing digital video transcoding techniques, this book summarizes its content via examples of practical methods for transcoder implementation. It relates almost all of its featured transcoding technologies to practical applications. This vol

  7. [Absorbable synthetic clips and pulmonary excision. Our clinical experience].

    Science.gov (United States)

    Nguyen, H; Nguyen, H V; Barra, J A; Raut, Y; Castel-Dupont, S

    1987-02-01

    Analysis of the first 50 clinical cases confirmed conclusions of experimental studies in large animals with respect to the use of absorbable synthetic clips formed of lactomer and polydioxanone. The advantages of clips when compared with ligatures include rapidity and simplicity of use and ease of insertion, even in regions difficult to approach surgically with the fingers. The advantages of clips of the absorbable synthetic type over conventional metallic material are: safety and reliability due to their locking system and the conservation of sufficient residual resistance, improved behavior in biologic media, radio-transparency and total inertia in magnetic fields compatible with postoperative radiation and modern medical imaging procedures (CT scan, MRI), and finally progressive resorption by simple hydrolysis and then total disappearance after 6 to 7 months. With staplers of the TA and GIA type, these clips make sutures and ligatures during lung resection surgery entirely automatic.

  8. Using PVM to host CLIPS in distributed environments

    Science.gov (United States)

    Myers, Leonard; Pohl, Kym

    1994-01-01

    It is relatively easy to enhance CLIPS (C Language Integrated Production System) to support multiple expert systems running in a distributed environment with heterogeneous machines. The task is minimized by using the PVM (Parallel Virtual Machine) code from Oak Ridge Labs to provide the distributed utility. PVM is a library of C and FORTRAN subprograms that supports distributive computing on many different UNIX platforms. A PVM daemon is easily installed on each CPU that enters the virtual machine environment. Any user with rsh or rexec access to a machine can use the one PVM daemon to obtain a generous set of distributed facilities. The ready availability of both CLIPS and PVM makes the combination of software particularly attractive for budget-conscious experimentation of heterogeneous distributive computing with multiple CLIPS executables. This paper presents a design that is sufficient to provide essential message passing functions in CLIPS and enable the full range of PVM facilities.

  9. Metal artifact reduction for clipping and coiling in interventional C-arm CT.

    Science.gov (United States)

    Prell, D; Kyriakou, Y; Struffert, T; Dörfler, A; Kalender, W A

    2010-04-01

    Metallic implants induce massive artifacts in CT images which deteriorate image quality and often superimpose structures of interest. The purpose of this study was to apply and evaluate a dedicated metal artifact reduction (MAR) method for neuroradiologic intracranial clips and detachable platinum coiling events. We here report the first clinical results for MAR in flat-detector CT (FDCT). FDCT volume scans of several patients treated with endovascular coiling or intracranial clipping were corrected by using a dedicated FDCT MAR correction algorithm combined with an edge-preserving attenuation-normalization method in the projection space. Corrected and uncorrected images were compared by 2 experienced radiologists and evaluated for several image-quality features. After application of our algorithm, implant delineation and visibility were highly improved. CT values compared with values in areas unaffected by metal artifacts showed good agreement (average correction of 1300 HU). Image noise was reduced overall by 27%. Intracranial hemorrhage in the direct surroundings of the implanted coil or clip material was displayed without worrisome metal artifacts, and our algorithm even allowed diagnosis in areas where extensive information losses had been seen. The high spatial resolution provided by FDCT imaging was well preserved. Our MAR method provided metal artifact-reduced images in every studied case. It reduced image noise and corrected CT values to levels comparable with those of images measured without metallic implants. An overall improvement of brain tissue modeling and implant visibility was achieved. MAR in neuroradiologic FDCT imaging is a promising step forward for better image quality and diagnosis in the presence of metallic implants.
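
    A minimal sketch of the generic projection-space idea behind many MAR methods, shown here as simple linear interpolation across the metal trace rather than the edge-preserving attenuation-normalization algorithm evaluated above: detector channels hit by metal are identified and replaced by interpolating from neighboring, unaffected channels before reconstruction. The data are synthetic.

        import numpy as np

        def inpaint_metal_trace(projection, metal_mask):
            """Linearly interpolate one 1D projection across the masked (metal) channels."""
            channels = np.arange(projection.size)
            good = ~metal_mask
            corrected = projection.copy()
            corrected[metal_mask] = np.interp(channels[metal_mask],
                                              channels[good], projection[good])
            return corrected

        # Synthetic projection: smooth anatomy plus a sharp spike where metal was hit.
        proj = np.sin(np.linspace(0, np.pi, 64))
        proj[30:34] += 5.0                        # metal-induced spike
        mask = np.zeros(64, dtype=bool)
        mask[30:34] = True
        print(inpaint_metal_trace(proj, mask)[28:36].round(2))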

  10. Evolution of staples and clips for vascular anastomoses.

    Science.gov (United States)

    Zeebregts, Clark J; Kirsch, Wolff M; van den Dungen, Jan J; van Schilfgaarde, Reinout; Zhu, Yong H

    2004-01-01

    Because of the development of less invasive surgical techniques, there is an increasing demand for vascular anastomosing techniques that require less exposure of the operating field. This paper reviews the most important representatives of staples, clips, and other mechanical devices for vascular anastomosing described over the last five decades. This report is organized in three parts: (1) the history of clipping and stapling devices, (2) development of the Vessel Closure System (VCS) clips, and (3) current and potential status of mechanical vascular anastomotic devices. A Medline literature search was conducted and publications on the use of staples and/or clips for the creation of vascular anastomoses identified with extensive cross-referencing. The first literature description of a mechanical vascular stapling device was by Gudov in 1950. This and other reports from the Soviet Union stimulated brisk, competitive development of vascular anastomotic devices in Europe, North America, and Japan. Fasteners included staples, penetrating pin-rings, or toothed stainless steel clips, none of which gained acceptance because of their complexity and inability to facilitate end-to-side anastomoses. A more convenient and less traumatic anastomotic system (VCS Clip applier system) was introduced into clinical practice in 1995. This system differs from staples in that it is non-penetrating. A wide variety of reports have described the advantages, both technical and biological, that clips provide over conventional needle-and-suture, particularly for the construction of vascular access for hemodialysis. A steady evolution of mechanical vascular anastomotic devices has sought to eliminate the technical and biological disadvantages of conventional suturing. Although the conventional hand-sewn, overcast non-absorbable suture remains the "gold" standard, newer techniques such as the non-penetrating arcuate-legged VCS clips are gaining acceptance as a useful addition to the vascular

  11. Anesthesia management for MitraClip device implantation

    Directory of Open Access Journals (Sweden)

    Harikrishnan Kothandan

    2014-01-01

    Full Text Available Aims and Objectives: Percutaneous MitraClip implantation has been demonstrated as an alternative procedure in high-risk patients with symptomatic severe mitral regurgitation (MR) who are not suitable for (or are denied) mitral valve repair/replacement because of excessive comorbidity. MitraClip implantation was performed under general anesthesia with 3-dimensional transesophageal echocardiography (TEE) and fluoroscopic guidance. Materials and Methods: Peri-operative patient data were extracted from the electronic and paper medical records of 21 patients who underwent MitraClip implantation. Results: Four MitraClip implantations were performed in the catheterization laboratory; the remaining 17 were performed in the hybrid operating theatre. In 2 patients the procedure was aborted, in one because of migration of the Chiari network into the left atrium, and in the other because the leaflets and chords of the mitral valve were torn during clipping, leading to consideration of open surgery. In the remaining 19 patients, the MitraClip was implanted and the patients showed acute reduction of severe MR to mild-moderate MR. All patients had invasive blood pressure monitoring, and the initial six patients had central venous catheterization prior to the procedure. Intravenous heparin was administered after the guiding catheter was introduced through the inter-atrial septum, and the activated clotting time was maintained beyond 250 s throughout the procedure. Protamine was administered at the end of the procedure. All patients were monitored in the intensive care unit after the procedure. Conclusions: Percutaneous MitraClip implantation is a feasible alternative in high-risk patients with symptomatic severe MR. Anesthesia management requirements are similar to those for open surgical mitral valve repair or replacement. TEE plays a vital role during MitraClip implantation.

  12. Video micro analysis in music therapy research: a research workshop

    OpenAIRE

    Holck, Ulla; Oldfield, Amelia; Plahl, Christine

    2004-01-01

    Three music therapy researchers from three different countries who have recently completed their PhD theses will each briefly discuss the role of video analysis in their investigations. All three of these research projects have involved music therapy work with children, some of whom were on the autistic spectrum. Brief video clips will be shown and workshop participants will be invited to use different micro analysis approaches to record information from the video recordings. Through this pro...

  13. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    Science.gov (United States)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-01-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character…

  14. Digital video.

    Science.gov (United States)

    Johnson, Don; Johnson, Mike

    2004-04-01

    The process of digitally capturing, editing, and archiving video has become an important aspect of documenting arthroscopic surgery. Recording the arthroscopic findings before and after surgery is an essential part of the patient's medical record. The hardware and software have become more affordable, but the learning curve to master the software is steep. Digital video is captured at the time of arthroscopy to a hard disk and written to a CD at the end of the operative procedure. The process of obtaining video of open procedures is more complex. Outside video of the procedure is recorded on digital tape with a digital video camera. The camera must be plugged into a computer to capture the video on the hard disk. Adobe Premiere software is used to edit the video and render the finished video to the hard drive. This finished video is burned onto a CD. We outline the choice of computer hardware and software for the manipulation of digital video. The techniques for backing up and archiving the completed projects and files are also outlined. The uses of digital video for education and the formats that can be used in PowerPoint presentations are discussed.

  15. Single clips versus multi-firing clip device for closure of mucosal incisions after peroral endoscopic myotomy (POEM).

    Science.gov (United States)

    Verlaan, Tessa; Ponds, Fraukje A M; Bastiaansen, Barbara A J; Bredenoord, Albert J; Fockens, Paul

    2016-10-01

    Background and aims: After peroral endoscopic myotomy (POEM), the mucosal incision is closed with endoscopically applied clips. After each clip placement, a new clipping device has to be introduced through the working channel. With the Clipmaster3, three consecutive clips can be placed without reloading, which could reduce closure time. We performed a prospective study evaluating efficacy, safety, and ease of use; closure using the Clipmaster3 was compared with closure using standard clips. Methods: Patients undergoing closure with the Clipmaster3 were compared with patients who underwent POEM with standard clip closure. Results: In total, 12 consecutive POEM closures with the Clipmaster3 were compared with 24 standard POEM procedures. The Clipmaster3 and standard groups did not differ in sex distribution, age (42 years [29-49] vs 41 years [34-54], P = 0.379), achalasia subtype, disease duration, length of the mucosal incision (25.0 mm [20-30] vs 20.0 mm [20-30], P = 1.0), or closure time (622 seconds [438-909] vs 599 seconds [488-664], P = 0.72). Endoscopically successful closure was achieved in all patients. The proportion of all clips used that were either displaced or discarded was larger for the Clipmaster3 (8.8%) than for standard closure (2.0%, P = 0.00782). The ease-of-handling VAS (visual analogue scale) score for the Clipmaster3 did not differ between endoscopist and endoscopy nurse (7 out of 10). Conclusions: The Clipmaster3 is feasible and safe for closure of mucosal incisions after POEM. The Clipmaster3 was not associated with reduced closure time. Compared with standard closure, more Clipmaster3 clips were displaced or discarded to achieve successful closure. A training effect cannot be excluded as a cause of these results. NCT01405417.

  16. Elevated intracranial pressure and reversible eye-tracking changes detected while viewing a film clip.

    Science.gov (United States)

    Kolecki, Radek; Dammavalam, Vikalpa; Bin Zahid, Abdullah; Hubbard, Molly; Choudhry, Osamah; Reyes, Marleen; Han, ByoungJun; Wang, Tom; Papas, Paraskevi Vivian; Adem, Aylin; North, Emily; Gilbertson, David T; Kondziolka, Douglas; Huang, Jason H; Huang, Paul P; Samadani, Uzma

    2017-06-02

    OBJECTIVE The precise threshold differentiating normal and elevated intracranial pressure (ICP) is variable among individuals. In the context of several pathophysiological conditions, elevated ICP leads to abnormalities in global cerebral functioning and impacts the function of cranial nerves (CNs), either or both of which may contribute to ocular dysmotility. The purpose of this study was to assess the impact of elevated ICP on eye tracking performed while patients were watching a short film clip. METHODS Awake patients requiring placement of an ICP monitor for clinical purposes underwent eye tracking while watching a 220-second continuously playing video moving around the perimeter of a viewing monitor. Pupil position was recorded at 500 Hz, and metrics associated with each eye individually and both eyes together were calculated. Linear regression with generalized estimating equations was performed to test the association of eye-tracking metrics with changes in ICP. RESULTS Eye tracking was performed at ICP levels ranging from -3 to 30 mm Hg in 23 patients (12 women, 11 men, mean age 46.8 years) on 55 separate occasions. Eye-tracking measures correlating with CN function decreased linearly with increasing ICP while patients watched the film clip. These results suggest that eye tracking may be used as a noninvasive, automatable means to quantitate the physiological impact of elevated ICP, which has clinical application for assessment of shunt malfunction, pseudotumor cerebri, concussion, and prevention of second-impact syndrome.
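
    The statistical model reported here is a linear regression fitted with generalized estimating equations (GEE), which accounts for repeated eye-tracking sessions within the same patient. As a rough, hypothetical illustration (the column names and values below are invented, not taken from the study), such a model could be fitted with statsmodels:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical table: one row per eye-tracking session (all values are invented)
df = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3, 3],
    "icp_mmhg":   [4, 18, 7, 25, 12, 2],
    "cn_metric":  [0.92, 0.71, 0.88, 0.55, 0.80, 0.95],  # eye-tracking measure tied to CN function
})

# GEE handles the repeated sessions within each patient (exchangeable working correlation)
model = smf.gee("cn_metric ~ icp_mmhg", groups="patient_id", data=df,
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```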

  17. A new video programme

    CERN Multimedia

    CERN video productions

    2011-01-01

    "What's new @ CERN?", a new monthly video programme, will be broadcast on the Monday of every month on webcast.cern.ch. Aimed at the general public, the programme will cover the latest CERN news, with guests and explanatory features. Tune in on Monday 3 October at 4 pm (CET) to see the programme in English, and then at 4:20 pm (CET) for the French version.   var flash_video_player=get_video_player_path(); insert_player_for_external('Video/Public/Movies/2011/CERN-MOVIE-2011-129/CERN-MOVIE-2011-129-0753-kbps-640x360-25-fps-audio-64-kbps-44-kHz-stereo', 'mms://mediastream.cern.ch/MediaArchive/Video/Public/Movies/2011/CERN-MOVIE-2011-129/CERN-MOVIE-2011-129-Multirate-200-to-753-kbps-640x360-25-fps.wmv', 'false', 480, 360, 'https://mediastream.cern.ch/MediaArchive/Video/Public/Movies/2011/CERN-MOVIE-2011-129/CERN-MOVIE-2011-129-posterframe-640x360-at-10-percent.jpg', '1383406', true, 'Video/Public/Movies/2011/CERN-MOVIE-2011-129/CERN-MOVIE-2011-129-0600-kbps-maxH-360-25-fps-...

  18. Fast Aerial Video Stitching

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-10-01

    Full Text Available The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system with the unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as an FPGA or CUDA, our system runs on a normal personal computer. To achieve this, after careful comparison of existing invariant features, we choose the FAST corner and a binary descriptor for efficient feature extraction and representation, and present a spatially and temporally coherent filter that fuses the UAV motion information into the feature matching. The proposed filter can remove the majority of feature-correspondence outliers and increase the speed of robust feature matching by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key-frame-based stitching framework is used to reduce accumulation errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate accurate stitched images for aerial video stitching tasks.
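
    The authors' full pipeline (FAST corners with a binary descriptor, a motion-aware correspondence filter, key-frame-based stitching) is not reproduced here; the sketch below only illustrates the generic feature-matching-plus-homography step with OpenCV's ORB, which itself combines FAST keypoints with a binary descriptor, and omits the UAV-motion filter and key-frame logic.

```python
import cv2
import numpy as np

def stitch_pair(frame_a, frame_b):
    """Warp frame_b into frame_a's coordinates from ORB (FAST + binary descriptor) matches."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:500]

    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # outlier rejection via RANSAC

    h, w = frame_a.shape[:2]
    return cv2.warpPerspective(frame_b, H, (2 * w, h))     # canvas wide enough for the mosaic
```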

  19. Measuring classroom management expertise (CME) of teachers: A video-based assessment approach and statistical results

    Directory of Open Access Journals (Sweden)

    Johannes König

    2015-12-01

    Full Text Available The study aims at developing and exploring a novel video-based assessment that captures classroom management expertise (CME) of teachers and for which statistical results are provided. CME measurement is conceptualized by using four video clips that refer to typical classroom management situations in which teachers are heavily challenged (involving the challenges to manage transitions, instructional time, student behavior, and instructional feedback) and by applying three cognitive demands posed on respondents when responding to test items related to the video clips (accuracy of perception, holistic perception, and justification of action). Research questions are raised regarding reliability, testlet effects (related to the four video clips applied for measurement), intercorrelations of cognitive demands, and criterion-related validity of the instrument. Evidence is provided that (1) using a video-based assessment, CME can be measured in a reliable way, (2) the CME total score represents a general ability that is only slightly influenced by testlet effects related to the four video clips, (3) the three cognitive demands conceptualized for the measurement of CME are highly intercorrelated, and (4) the CME measure is positively correlated with declarative-conceptual general pedagogical knowledge (medium effect size), whereas it shows only small correlations with non-cognitive teacher variables.

  20. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and 'walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  1. Social trait judgment and affect recognition from static faces and video vignettes in schizophrenia.

    Science.gov (United States)

    McIntosh, Lindsey G; Park, Sohee

    2014-09-01

    Social impairment is a core feature of schizophrenia, present from the pre-morbid stage and predictive of outcome, but the etiology of this deficit remains poorly understood. Successful and adaptive social interactions depend on one's ability to make rapid and accurate judgments about others in real time. Our surprising ability to form accurate first impressions from brief exposures, known as "thin slices" of behavior, has been studied extensively in healthy participants. We sought to examine affect and social trait judgment from thin slices of static or video stimuli in order to investigate the ability of individuals with schizophrenia to form reliable social impressions of others. 21 individuals with schizophrenia (SZ) and 20 matched healthy participants (HC) were asked to identify emotions and social traits for actors in standardized face stimuli as well as brief video clips. Sound was removed from the videos to eliminate all verbal cues. Clinical symptoms in SZ and delusional ideation in both groups were measured. Results showed a general impairment in affect recognition for both types of stimuli in SZ. However, the two groups did not differ in the judgments of trustworthiness, approachability, attractiveness, and intelligence. Interestingly, in SZ, the severity of positive symptoms was correlated with higher ratings of attractiveness, trustworthiness, and approachability. Finally, increased delusional ideation in SZ was associated with a tendency to rate others as more trustworthy, while the opposite was true for HC. These findings suggest that complex social judgments in SZ are affected by symptomatology. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Energy saving approaches for video streaming on smartphone based on QoE modeling

    DEFF Research Database (Denmark)

    Ballesteros, Luis Guillermo Martinez; Ickin, Selim; Fiedler, Markus

    2016-01-01

    In this paper, we study the influence of video stalling on QoE. We provide QoE models that are obtained in realistic scenarios on the smartphone, and provide energy-saving approaches for the smartphone by leveraging the proposed QoE models in relation to energy. Results show that approximately 5 J is saved in a 3-minute video clip with an acceptable Mean Opinion Score (MOS) level when video frames are skipped. If the video frames are not skipped, then it is suggested to avoid freezes during a video stream, as freezes greatly increase the energy waste on smartphones.

  3. Attitudes of older adults toward shooter video games: An initial study to select an acceptable game for training visual processing.

    Science.gov (United States)

    McKay, Sandra M; Maki, Brian E

    2010-01-01

    A computer-based 'Useful Field of View' (UFOV) training program has been shown to be effective in improving visual processing in older adults. Studies of young adults have shown that playing video games can have similar benefits; however, these studies involved realistic and violent 'first-person shooter' (FPS) games. The willingness of older adults to play such games has not been established. OBJECTIVES: To determine the degree to which older adults would accept playing a realistic, violent FPS game, compared to video games not involving realistic depiction of violence. METHODS: Sixteen older adults (ages 64-77) viewed and rated video-clip demonstrations of the UFOV program and three video-game genres (realistic-FPS, cartoon-FPS, fixed-shooter), and were then given an opportunity to try them out (30 minutes per game) and rate various features. RESULTS: The results supported the hypothesis that participants would be less willing to play the realistic-FPS game in comparison to the less violent alternatives. After viewing the video-clip demonstrations, 10 of 16 participants indicated they would be unwilling to try out the realistic-FPS game. Of the six who were willing, three did not enjoy the experience and were not interested in playing again. In contrast, all 12 subjects who were willing to try the cartoon-FPS game reported that they enjoyed it and would be willing to play again. A high proportion also tried and enjoyed the UFOV training (15/16) and the fixed-shooter game (12/15). DISCUSSION: A realistic, violent FPS video game is unlikely to be an appropriate choice for older adults. Cartoon-FPS and fixed-shooter games are more viable options. Although most subjects also enjoyed UFOV training, a video-game approach has a number of potential advantages (for instance, 'addictive' properties, low cost, self-administration at home). We therefore conclude that non-violent cartoon-FPS and fixed-shooter video games warrant further investigation as an alternative to the UFOV program.

  4. Fluorescein Angiography in Intracranial Aneurysm Surgery: A Helpful Method to Evaluate the Security of Clipping and Observe Blood Flow.

    Science.gov (United States)

    Kakucs, Cristian; Florian, Ioan-Alexandru; Ungureanu, Gheorghe; Florian, Ioan-Stefan

    2017-09-01

    In cerebral aneurysm surgery, various tools are used to evaluate blood flow, including Doppler ultrasonography, conventional cerebral angiography, and electrophysiological monitoring. Fluorescein and indocyanine green are widely used in vascular and central nervous system tumor neurosurgery; however, their routine utilization in aneurysmal surgery is uncommon, despite the fact that they allow direct visualization of blood flow after aneurysmal sac occlusion, enabling the observation of vessel permeability and the effectiveness of aneurysmal obliteration. We report our initial experience using fluorescein video angiography as a control measure for proper clip placement and control of blood flow in aneurysm surgery, and review the relevant literature. This pilot study presents an initial experience, with enrollment of 10 patients harboring a total of 12 cerebral aneurysms who underwent surgery via clipping and subsequent fluorescence videoangiography control. The intravenous injection was performed to demonstrate the patency of the arteries adjacent to the aneurysm. Following intravenous injection, fluorescein sodium remains in the cerebral vasculature for approximately 3 minutes, providing ample time to evaluate vessel patency and determine whether clip repositioning is needed. None of the patients experienced complications during intravenous injection of fluorescein sodium, and the patency of surrounding vessels was demonstrated in all cases. Fluorescein injection in itself does not present a risk of complications, is simple to use, and offers a clear image of the cerebral vasculature. Thus, this technique is useful for determining vessel patency and the degree of aneurysmal occlusion. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. PROTOTYPE VIDEO EDITOR USING DIRECTX AND DIRECTSHOW

    Directory of Open Access Journals (Sweden)

    Djoni Haryadi Setiabudi

    2004-01-01

    Full Text Available Technology development has given people the chance to capture their memorable moments in video format. A high-quality digital video is the result of a good editing process, which in turn gives rise to the need for an editor application. To address this need, this paper describes the process of making a simple application for video editing. The application development uses programming techniques often applied in multimedia applications, especially video. The first part of the application deals with video file compression and decompression, followed by the editing of the digital video file. Furthermore, the application is also equipped with the facilities needed for the editing process. The application is made with Microsoft Visual C++ using DirectX technology, particularly DirectShow. It provides basic facilities that help the editing of a digital video file and produces a file in AVI format after the editing process is finished. Testing of this application showed its ability to 'cut' and 'insert' video files in AVI, MPEG, MPG, and DAT formats. The 'cut' and 'insert' processes can only be done in a static order. The application also provides an effects facility for the transition between clips. Lastly, the newly edited video file is saved from the application in AVI format. Abstract in Bahasa Indonesia (translated): Technological developments give people the opportunity to capture important moments on video. Producing good digital video requires a good editing process, and digital video editing requires an editor program. Based on the problem above, this study built a simple editor prototype for digital video. The application was built using multimedia programming techniques, particularly for video. Planning for the application begins with the creation of

  6. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. In the first place, it analyzes the current academic discussion on this subject and confronts different opinions of both supporters and objectors to the idea that video games can be a full-fledged art form. The second aim of this paper is to analyze the properties that are inherent to video games, in order to find the reason why the cultural elite considers video games as i...

  7. Voices and visions of Syrian video activists in Aleppo and Raqqa

    DEFF Research Database (Denmark)

    Wessels, Josepha Ivanka

    2015-01-01

    Thousands of Syrian protesters took up mobile phones to record the events taking place in front of their eyes; many recorded the bombings and other atrocities, as well as countless videos that were uploaded to YouTube by various groups in the conflict, to be exposed to a worldwide audience. This paper... and exhibits some sequentiality. With sequentiality comes a certain subjectivity, which allows the video maker to take a political space and position. Part of an ongoing postdoctoral research project, during which a general typology of YouTube clips from Syria is being developed, this paper focuses on young Syrian video activists and their grassroots video work in Aleppo and Raqqa province. The research methodology is based on online visual observation of YouTube clips, original semi-structured interviews with video activists, and field visits to Gaziantep in Turkey and Aleppo province in Syria, during...

  8. Unusual features of negative leaders' development in natural lightning, according to simultaneous records of current, electric field, luminosity, and high-speed video

    Science.gov (United States)

    Guimaraes, Miguel; Arcanjo, Marcelo; Murta Vale, Maria Helena; Visacro, Silverio

    2017-02-01

    The development of downward and upward leaders that formed two negative cloud-to-ground return strokes in natural lightning, spaced only about 200 µs apart and terminating on ground only a few hundred meters away, was monitored at Morro do Cachimbo Station, Brazil. The simultaneous records of current, close electric field, relative luminosity, and corresponding high-speed video frames (sampling rate of 20,000 frames per second) reveal that the initiation of the first return stroke interfered with the development of the second negative leader, leading it to an apparently continuous development before attachment, without stepping and at a regular two-dimensional speed. Based on the experimental data, the formation processes of the two return strokes are discussed, and plausible interpretations for their development are provided.

  9. CHARACTER RECOGNITION OF VIDEO SUBTITLES

    Directory of Open Access Journals (Sweden)

    Satish S Hiremath

    2016-11-01

    Full Text Available An important task in content-based video indexing is to extract text information from videos. The challenges involved in text extraction and recognition are the variation of illumination in each video frame containing text, text appearing on complex backgrounds, and differing font sizes of the text. Character recognition of video subtitles is implemented using various image processing algorithms such as morphological operations, blob detection, and histograms of oriented gradients. Segmentation, feature extraction, and classification are the major steps of character recognition. Several experimental results are shown to demonstrate the performance of the proposed algorithm.
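
    As a hedged sketch of the segmentation, feature extraction, and classification chain described above, the following combines a crude Otsu-threshold segmentation with histogram-of-oriented-gradients features and a linear SVM; the thresholds, glyph size, and classifier choice are illustrative assumptions, not the authors' settings, and train_glyphs/train_labels are hypothetical labelled crops.

```python
import cv2
from skimage.feature import hog
from sklearn.svm import LinearSVC

def character_features(glyph_img):
    """Resize a segmented character blob and describe it with HOG features."""
    glyph = cv2.resize(glyph_img, (32, 32))
    return hog(glyph, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def candidate_character_boxes(frame_gray):
    """Very rough segmentation: Otsu binarization plus connected-component (blob) boxes."""
    _, binary = cv2.threshold(frame_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 20]

# Classification stage (hypothetical labelled glyph crops):
# clf = LinearSVC().fit([character_features(g) for g in train_glyphs], train_labels)
# label = clf.predict([character_features(glyph)])
```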

  10. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. The article attempts to determine techniques of art analysis suited to the study of video games, including the aesthetics of their sounds, and offers a range of research methods that treat video game scoring as a contemporary creative practice.

  11. Surgical Treatment of Middle Cerebral Artery Aneurysms Without Using Indocyanine Green Videoangiography Assistance: Retrospective Monocentric Study of 263 Clipped Aneurysms.

    Science.gov (United States)

    Hallout, Sabrina

    2015-10-01

    Middle cerebral artery (MCA) aneurysms represent 20% of intracranial aneurysms. Most (80%) of them are located at the sylvian bifurcation, the seat of hemodynamic turbulent flow. Morbidity and mortality related to surgery of MCA aneurysms are not negligible, and the territories vascularized by the MCA include functionally eloquent brain tissue. Indocyanine green videoangiography assistance (ICG-VA) is an emerging tool for intraoperative assessment of aneurysm occlusion and for checking for a possible stenosing clip in the vascular area. The purposes of this study were to evaluate the safety of the clipping procedure in terms of morbidity, mortality, and efficiency of aneurysm occlusion without using ICG-VA, recurrence and bleeding/rebleeding at short and long term, and angiographic and clinical follow-up. This is a monocentric retrospective study performed at Pitié-Salpêtrière-Charles Foix Hospital Center, reporting clinical and angiographic follow-up of consecutive patients treated for MCA aneurysms (ruptured and unruptured) by clipping procedures. From 2002-2012, 251 consecutive patients were admitted at the author's institution for treatment of 263 MCA aneurysms (163 ruptured and 100 unruptured). Procedure-related death and complications were systematically assessed without videoangiography availability. The degree of aneurysm exclusion was evaluated according to the Raymond-Roy scale after the procedure and at long-term angiographic follow-up (mean delay = 36 months). The death rate related to the aneurysm exclusion procedure was 1.2%. The major complication rate related to surgery was 5.3%. Postprocedure, the rate of Raymond-Roy grade A or B aneurysm occlusion was 95.6%. Neither recanalization of clipped aneurysms nor aneurysmal re-rupture was observed at long-term clinical follow-up (mean time = 83.5 months). The institution's series of surgical outcomes reported 95.6% complete exclusion and 4.5% incomplete procedures without ICG-VA. A clip of

  12. Prediction of transmission distortion for wireless video communication: analysis.

    Science.gov (United States)

    Chen, Zhifeng; Wu, Dapeng

    2012-03-01

    Transmitting video over wireless is a challenging problem since video may be seriously distorted due to packet errors caused by wireless channels. The capability of predicting transmission distortion (i.e., video distortion caused by packet errors) can assist in designing video encoding and transmission schemes that achieve maximum video quality or minimum end-to-end video distortion. This paper is aimed at deriving formulas for predicting transmission distortion. The contribution of this paper is twofold. First, we identify the governing law that describes how the transmission distortion process evolves over time and analytically derive the transmission distortion formula as a closed-form function of video frame statistics, channel error statistics, and system parameters. Second, we identify, for the first time, two important properties of transmission distortion. The first property is that the clipping noise, which is produced by nonlinear clipping, causes decay of propagated error. The second property is that the correlation between motion-vector concealment error and propagated error is negative and has dominant impact on transmission distortion, compared with other correlations. Due to these two properties and elegant error/distortion decomposition, our formula provides not only more accurate prediction but also lower complexity than the existing methods.
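
    The clipping-noise property identified above (nonlinear pixel clipping in the decoder attenuates propagated error) can be illustrated numerically. The toy simulation below is not the paper's distortion formula; it simply shows that repeatedly clipping a reconstructed frame to [0, 255] shrinks the energy of a propagated error term, with a circular shift standing in, very loosely, for motion compensation moving the error across the frame.

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=10_000).astype(np.float64)   # stand-in for reconstructed pixels
err = rng.normal(0, 30, size=ref.shape)                      # initial propagated error

for frame in range(5):
    err = np.roll(err, 37)                      # crude stand-in for motion compensation
    err = np.clip(ref + err, 0, 255) - ref      # decoder clips pixel values to [0, 255]
    print(frame, round(float(np.mean(err ** 2)), 1))   # error energy shrinks frame by frame
```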

  13. Video Pulses: User-Based Modeling of Interesting Video Segments

    Directory of Open Access Journals (Sweden)

    Markos Avlonitis

    2014-01-01

    Full Text Available We present a user-based method that detects regions of interest within a video in order to provide video skims and video summaries. Previous research in video retrieval has focused on content-based techniques, such as pattern recognition algorithms that attempt to understand the low-level features of a video. We are proposing a pulse modeling method, which makes sense of a web video by analyzing users' Replay interactions with the video player. In particular, we have modeled the user information seeking behavior as a time series and the semantic regions as a discrete pulse of fixed width. Then, we have calculated the correlation coefficient between the dynamically detected pulses at the local maximums of the user activity signal and the pulse of reference. We have found that users' Replay activity significantly matches the important segments in information-rich and visually complex videos, such as lecture, how-to, and documentary. The proposed signal processing of user activity is complementary to previous work in content-based video retrieval and provides an additional user-based dimension for modeling the semantics of a social video on the web.
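
    A minimal NumPy rendering of the pulse idea described above, correlating the user-activity signal with a fixed-width rectangular reference pulse at local maxima; the pulse width, smoothing, and threshold are arbitrary choices for illustration, not the authors' parameters.

```python
import numpy as np

def interesting_segments(activity, width=20, threshold=0.5):
    """Score candidate segments by correlating the user Replay-activity signal with a
    fixed-width rectangular reference pulse centred at local maxima of the signal."""
    x = np.convolve(activity, np.ones(5) / 5, mode="same")       # light smoothing
    half, win = width // 2, (width * 3) // 2
    hits = []
    for t in range(win, len(x) - win):
        if x[t] > 0 and x[t] == x[t - win:t + win + 1].max():    # local maximum of activity
            window = x[t - win:t + win]
            pulse = np.zeros_like(window)
            pulse[win - half:win + half] = 1.0                   # reference pulse of fixed width
            r = np.corrcoef(window, pulse)[0, 1]
            if r > threshold:
                hits.append((t, r))
    return hits
```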

  14. Computational Thinking in Constructionist Video Games

    Science.gov (United States)

    Weintrop, David; Holbert, Nathan; Horn, Michael S.; Wilensky, Uri

    2016-01-01

    Video games offer an exciting opportunity for learners to engage in computational thinking in informal contexts. This paper describes a genre of learning environments called constructionist video games that are especially well suited for developing learners' computational thinking skills. These games blend features of conventional video games with…

  15. Robust video object cosegmentation.

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Li, Xuelong; Porikli, Fatih

    2015-10-01

    With ever-increasing volumes of video data, automatic extraction of salient object regions became even more significant for visual analytic solutions. This surge has also opened up opportunities for taking advantage of collective cues encapsulated in multiple videos in a cooperative manner. However, it also brings up major challenges, such as handling of drastic appearance, motion pattern, and pose variations, of foreground objects as well as indiscriminate backgrounds. Here, we present a cosegmentation framework to discover and segment out common object regions across multiple frames and multiple videos in a joint fashion. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity into an energy optimization framework that does not make restrictive assumptions on foreground appearance and motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor to integrate across-video correspondence from the conventional SIFT-flow into interframe motion flow from optical flow. This novel spatio-temporal SIFT flow generates reliable estimations of common foregrounds over the entire video data set. Experimental results show that our method outperforms the state-of-the-art on a new extensive data set (ViCoSeg).

  16. PAPR Reduction of FBMC by Clipping and Its Iterative Compensation

    Directory of Open Access Journals (Sweden)

    Zsolt Kollár

    2012-01-01

    Full Text Available Physical layers of communication systems using Filter Bank Multicarrier (FBMC) as a modulation scheme provide low out-of-band leakage but suffer from the large Peak-to-Average Power Ratio (PAPR) of the transmitted signal. Two special FBMC schemes are investigated in this paper: the Orthogonal Frequency Division Multiplexing (OFDM) and the Staggered Multitone (SMT). To reduce the PAPR of the signal, time domain clipping is applied in both schemes. If the clipping is not compensated, the system performance is severely affected. To avoid this degradation, an iterative noise cancelation technique, Bussgang Noise Cancelation (BNC), is applied in the receiver. It is shown that clipping can be a good means for reducing the PAPR, especially for the SMT scheme. A novel modified BNC receiver is presented for SMT. It is shown how this technique can be implemented in real-life applications where special requirements must be met regarding the spectral characteristics of the transmitted signal.
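
    The SMT-specific processing and the Bussgang noise cancelation receiver are well beyond a few lines, so the sketch below only shows the transmit-side starting point of the abstract: one multicarrier (OFDM-style) symbol, its PAPR, and amplitude clipping at a chosen clipping ratio.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

def clip_signal(x, clip_ratio_db=6.0):
    """Amplitude-clip the signal at clip_ratio_db above its RMS level, preserving phase."""
    a_max = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (clip_ratio_db / 20)
    mag = np.maximum(np.abs(x), 1e-12)
    return np.where(mag > a_max, a_max * x / mag, x)

rng = np.random.default_rng(0)
qpsk = (rng.choice([-1.0, 1.0], 256) + 1j * rng.choice([-1.0, 1.0], 256)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qpsk) * np.sqrt(256)     # one multicarrier symbol in the time domain
print(papr_db(ofdm_symbol), papr_db(clip_signal(ofdm_symbol, 4.0)))
```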

  17. Low Proteolytic Clipping of Histone H3 in Cervical Cancer

    Science.gov (United States)

    Sandoval-Basilio, Jorge; Serafín-Higuera, Nicolás; Reyes-Hernandez, Octavio D.; Serafín-Higuera, Idanya; Leija-Montoya, Gabriela; Blanco-Morales, Magali; Sierra-Martínez, Monica; Ramos-Mondragon, Roberto; García, Silvia; López-Hernández, Luz Berenice; Yocupicio-Monroy, Martha; Alcaraz-Estrada, Sofia L.

    2016-01-01

    Chromatin in cervical cancer (CC) undergoes chemical and structural changes that alter the expression pattern of genes. A recently described potential mechanism that regulates gene expression at the transcriptional level is the proteolytic clipping of histone H3; however, until now this process has not been reported in CC. Using HeLa cells as a model of CC and human samples from patients with CC, we identified that H3 cleavage was lower in CC compared with control tissue. Additionally, histone H3 clipping was performed by serine and aspartyl proteases in HeLa cells. These results suggest that histone H3 clipping operates as part of a post-translational modification system in CC. PMID:27698925

  18. Visual dictionaries as intermediate features in the human brain

    Directory of Open Access Journals (Sweden)

    Kandan eRamakrishnan

    2015-01-01

    Full Text Available The human visual system is assumed to transform low-level visual features into object and scene representations via features of intermediate complexity. How the brain computationally represents intermediate features is still unclear. To further elucidate this, we compared the biologically plausible HMAX model and the Bag of Words (BoW) model from computer vision. Both of these computational models use visual dictionaries, candidate features of intermediate complexity, to represent visual scenes, and the models have been proven effective in automatic object and scene recognition. These models, however, differ in the computation of visual dictionaries and in pooling techniques. We investigated where in the brain and to what extent human fMRI responses to a short video can be accounted for by multiple hierarchical levels of the HMAX and BoW models. Brain activity of 20 subjects obtained while viewing a short video clip was analyzed voxel-wise using a distance-based variation partitioning method. Results revealed that both HMAX and BoW explain a significant amount of brain activity in early visual regions V1, V2, and V3. However, BoW exhibits more consistency across subjects in accounting for brain activity compared with HMAX. Furthermore, visual dictionary representations by HMAX and BoW explain a significant amount of brain activity in higher areas which are believed to process intermediate features. Overall, our results indicate that, although both HMAX and BoW account for activity in the human visual system, BoW seems to more faithfully represent neural responses in low- and intermediate-level visual areas of the brain.

  19. Face Recognition and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Swapnil Vitthal Tathe

    2017-07-01

    Full Text Available Advancement in computer vision technology and the availability of video capturing devices such as surveillance cameras have given rise to new video processing applications. Research in video face recognition is mostly oriented towards law enforcement applications. Applications involve human recognition based on face and iris, human-computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework that is capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score, and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance under changes due to illumination, environmental factors, scale, pose, and orientation.
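
    A hedged sketch of the detection-plus-tracking portion of such a framework with OpenCV, feeding Haar cascade detections into a constant-velocity Kalman filter; the Gabor-based recognition stage is omitted, and the input file name and filter parameters are illustrative assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# Constant-velocity Kalman filter over the face centre: state (x, y, vx, vy), measurement (x, y)
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

cap = cv2.VideoCapture("input.mp4")      # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    predicted = kf.predict()             # usable face-centre estimate even when detection fails
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces):
        x, y, w, h = faces[0]
        kf.correct(np.array([[x + w / 2.0], [y + h / 2.0]], np.float32))
cap.release()
```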

  20. YUCSA: A CLIPS expert database system to monitor academic performance

    Science.gov (United States)

    Toptsis, Anestis A.; Ho, Frankie; Leindekar, Milton; Foon, Debra Low; Carbonaro, Mike

    1991-01-01

    The York University CLIPS Student Administrator (YUCSA), an expert database system implemented in the C Language Integrated Production System (CLIPS) for monitoring the academic performance of undergraduate students at York University, is discussed. The expert system component has already been implemented for two major departments, and it is under testing and enhancement for more departments. Also, more elaborate user interfaces are under development. We describe the design and implementation of the system, problems encountered, and immediate future plans. The system has excellent maintainability and is very efficient, taking less than one minute to complete an assessment of one student.

  1. Involvement of activated prorenin in the pathogenesis of slowly progressive nephropathy in the non-clipped kidney of two kidney, one-clip hypertension.

    Science.gov (United States)

    Ryuzaki, Masaki; Ichihara, Atsuhiro; Ohshima, Yoichi; Sakoda, Mariyo; Kurauchi-Mito, Asako; Narita, Tatsuya; Kinouchi, Kenichiro; Murohashi-Bokuda, Kanako; Nishiyama, Akira; Itoh, Hiroshi

    2011-03-01

    The handle region peptide (HRP), a (pro)renin receptor (P)RR blocker, did not prevent the acute nephropathy occurring 2 weeks after clipping in renovascular hypertensive rats. This study was performed to examine the effects of HRP, its scramble peptide, or a saline vehicle on slowly progressive nephropathy occurring in the kidneys of two-kidney, one-clip Goldblatt hypertensive rats. At 2 weeks after clipping, the renal morphology in the clipped and non-clipped kidneys was similar in the three groups of rats. At 12 weeks after clipping, however, the glomerulosclerosis index (GI) and the tubulointerstitial damage (TD) of the non-clipped kidneys of the HRP-treated rats were significantly lower than those of vehicle-treated rats, although the GI and the TD were similar in the rats treated with scramble peptide and vehicle. The GI and the TD of the clipped kidneys were similar in the three groups of rats at 12 weeks after clipping. In the non-clipped kidneys at 12 weeks after clipping, activated prorenin levels, angiotensin II levels and transforming growth factor (TGF)-β mRNA levels of HRP-treated rats were significantly lower than those of vehicle-treated rats, although they were similar in the non-clipped kidneys from the rats treated with scramble peptide and vehicle. In the clipped kidneys at 12 weeks after clipping, activated prorenin levels, angiotensin II levels and TGF-β mRNA levels were similar in the three groups of rats. These results suggest that the ((P)RR)-dependent activation of prorenin contributes to the pathogenesis of slowly progressive nephropathy in the intact kidney in a rat model of renovascular hypertension. © 2011 The Japanese Society of Hypertension All rights reserved

  2. Apples to Oranges: Comparing Streaming Video Platforms

    OpenAIRE

    Milewski, Steven; Threatt, Monique

    2017-01-01

    Librarians rely on an ever-increasing variety of platforms to deliver streaming video content to our patrons. These two presentations examine different aspects of video streaming platforms in order to draw guidance from comparing them. The first examines the accessibility compliance of the various video streaming platforms for users with disabilities by examining the accessibility features of the platforms. The second is a comparison of subject usage of two of the larger video s...

  3. Recognizing problem video game use.

    Science.gov (United States)

    Porter, Guy; Starcevic, Vladan; Berle, David; Fenech, Pauline

    2010-02-01

    It has been increasingly recognized that some people develop problem video game use, defined here as excessive use of video games resulting in various negative psychosocial and/or physical consequences. The main objectives of the present study were to identify individuals with problem video game use and compare them with those without problem video game use on several variables. An international, anonymous online survey was conducted, using a questionnaire with provisional criteria for problem video game use, which the authors have developed. These criteria reflect the crucial features of problem video game use: preoccupation with and loss of control over playing video games and multiple adverse consequences of this activity. A total of 1945 survey participants completed the survey. Respondents who were identified as problem video game users (n = 156, 8.0%) differed significantly from others (n = 1789) on variables that provided independent, preliminary validation of the provisional criteria for problem video game use. They played longer than planned and with greater frequency, and more often played even though they did not want to and despite believing that they should not do it. Problem video game users were more likely to play certain online role-playing games, found it easier to meet people online, had fewer friends in real life, and more often reported excessive caffeine consumption. People with problem video game use can be identified by means of a questionnaire and on the basis of the present provisional criteria, which require further validation. These findings have implications for recognition of problem video game users among individuals, especially adolescents, who present to mental health services. Mental health professionals need to acknowledge the public health significance of the multiple negative consequences of problem video game use.

  4. 76 FR 44575 - Paper Clips From the People's Republic of China: Continuation of the Antidumping Duty Order

    Science.gov (United States)

    2011-07-26

    ... order are plastic and vinyl covered paper clips, butterfly clips, binder clips, or other paper fasteners... antidumping duty cash deposits at the rates in effect at the time of entry for all imports of subject...

  5. 76 FR 26242 - Paper Clips From the People's Republic of China: Final Results of Expedited Sunset Review of...

    Science.gov (United States)

    2011-05-06

    ... from the scope of the order are plastic and vinyl covered paper clips, butterfly clips, binder clips... Central Records Unit, room 7046, of the main Commerce building. In addition, a complete public ] version...

  6. Video Podcasts

    DEFF Research Database (Denmark)

    Nortvig, Anne Mette; Sørensen, Birgitte Holm

    2016-01-01

    This project’s aim was to support and facilitate master’s students’ preparation and collaboration by making video podcasts of short lectures available on YouTube prior to students’ first face-to-face seminar. The empirical material stems from group interviews, from statistical data created through...... YouTube analytics and from surveys answered by students after the seminar. The project sought to explore how video podcasts support learning and reflection online and how students use and reflect on the integration of online activities in the videos. Findings showed that students engaged actively...

  7. Fullerene as alligator clips for electrical conduction through ...

    Indian Academy of Sciences (India)

    The conductance of a single molecule transport junction comprising anthracene molecular junction (AMJ) with fullerene as alligator clips was investigated using ab initio density functional theory (DFT) in the Landauer–Imry regime of coherent tunnelling transport. In our previous research, we have already calculated the ...

  8. MitraClip for the repair of severe mitral regurgitation

    Directory of Open Access Journals (Sweden)

    Celin Malkun

    2016-11-01

    Conclusions: We report the first case of MitraClip implantation for the management of severe MR in the city of Barranquilla, the second city in Colombia, after Cali, in which this type of device has been implanted for the repair of severe mitral regurgitation.

  9. Direct mounted photovoltaic device with improved front clip

    Science.gov (United States)

    Keenihan, James R; Boven, Michelle; Brown, Jr., Claude; Gaston, Ryan S; Hus, Michael; Langmaid, Joe A; Lesniak, Mike

    2013-11-05

    The present invention is premised upon a photovoltaic assembly system for securing and/or aligning at least a plurality of vertically adjacent (overlapping) photovoltaic device assemblies to one another. The securing function is accomplished by a clip member that may be a separate component or integral to one or more of the photovoltaic device assemblies.

  10. Direct mounted photovoltaic device with improved side clip

    Science.gov (United States)

    Keenihan, James R; Boven, Michelle L; Brown, Jr., Claude; Eurich, Gerald K; Gaston, Ryan S; Hus, Michael

    2013-11-19

    The present invention is premised upon a photovoltaic assembly system for securing and/or aligning at least a plurality of vertically adjacent photovoltaic device assemblies to one another. The securing function is accomplished by a clip member that may be a separate component or integral to one or more of the photovoltaic device assemblies.

  11. Improving NAVFAC's total quality management of construction drawings with CLIPS

    Science.gov (United States)

    Antelman, Albert

    1991-01-01

    A diagnostic expert system to improve the quality of Naval Facilities Engineering Command (NAVFAC) construction drawings and specification is described. C Language Integrated Production System (CLIPS) and computer aided design layering standards are used in an expert system to check and coordinate construction drawings and specifications to eliminate errors and omissions.

  12. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak-to-average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
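
    The published Equation-Method itself is not reproduced here; the toy below only illustrates the general idea that the FFT turns clipping correction into a set of simultaneous linear equations. It assumes the receiver knows which time-domain samples were clipped and which subcarriers are empty (a small guard band), and recovers the clipped-peak corrections by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
symbols = rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)   # QPSK data
nulls = np.arange(8)                     # guard band of empty subcarriers, known to the receiver
symbols[nulls] = 0
x = np.fft.ifft(symbols)

rms = np.sqrt(np.mean(np.abs(x) ** 2))
A = 1.7 * rms                            # clipping threshold
mag = np.abs(x)
x_clip = np.where(mag > A, A * x / np.maximum(mag, 1e-12), x)
clipped_idx = np.flatnonzero(mag > A)    # assumed known (or estimated) at the receiver

# On the null subcarriers, fft(x_clip + e) must be zero, which yields linear
# equations in the unknown corrections e located at the clipped samples.
Y = np.fft.fft(x_clip)
F = np.fft.fft(np.eye(N))                # DFT matrix
M = F[np.ix_(nulls, clipped_idx)]
e, *_ = np.linalg.lstsq(M, -Y[nulls], rcond=None)

x_rec = x_clip.copy()
x_rec[clipped_idx] += e
print("max reconstruction error:", np.max(np.abs(x_rec - x)))   # small when the system is determined
```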

  13. User Surveys in College Libraries. Clip Note #23.

    Science.gov (United States)

    Adams, Mignon S., Comp.; Beck, Jeffrey A., Comp.

    This CLIP (College Library Information Packet) offers a compilation of surveys and other methods of inquiry into user satisfaction submitted by college libraries around the United States developed to obtain feedback from the clientele they serve. In April 1995, surveys were mailed to 265 college and small university library directors, and the…

  14. Sociolinguistic import of name-clipping among Omambala cultural ...

    African Journals Online (AJOL)

    This study examines the perceived but obvious manifestation of name-clipping among Omambala cultural zone of Anambra State. This situation has given rise to distortion of names and most often, to either mis-interpretation or complete loss of the original and full meanings of the names. This situation of misinterpretation is ...

  15. Video Primal Sketch: A Unified Middle-Level Representation for Video

    OpenAIRE

    Han, Zhi; Xu, Zongben; Zhu, Song-Chun

    2015-01-01

    This paper presents a middle-level video representation named Video Primal Sketch (VPS), which integrates two regimes of models: i) a sparse coding model using static or moving primitives to explicitly represent moving corners, lines, feature points, etc.; ii) a FRAME/MRF model reproducing feature statistics extracted from the input video to implicitly represent textured motion, such as water and fire. The feature statistics include histograms of spatio-temporal filters and velocity distributions. T...

  16. Feature-aided multiple target tracking in the image plane

    Science.gov (United States)

    Brown, Andrew P.; Sullivan, Kevin J.; Miller, David J.

    2006-05-01

    Vast quantities of EO and IR data are collected on airborne platforms (manned and unmanned) and terrestrial platforms (including fixed installations, e.g., at street intersections), and can be exploited to aid in the global war on terrorism. However, intelligent preprocessing is required to enable operator efficiency and to provide commanders with actionable target information. To this end, we have developed an image plane tracker which automatically detects and tracks multiple targets in image sequences using both motion and feature information. The effects of platform and camera motion are compensated via image registration, and a novel change detection algorithm is applied for accurate moving target detection. The contiguous pixel blob on each moving target is segmented for use in target feature extraction and model learning. Feature-based target location measurements are used for tracking through move-stop-move maneuvers, close target spacing, and occlusion. Effective clutter suppression is achieved using joint probabilistic data association (JPDA), and confirmed target tracks are indicated for further processing or operator review. In this paper we describe the algorithms implemented in the image plane tracker and present performance results obtained with video clips from the DARPA VIVID program data collection and from a miniature unmanned aerial vehicle (UAV) flight.
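
    A minimal OpenCV sketch of the motion-compensated change-detection front end described above (registration of consecutive frames followed by differencing); it is a generic stand-in, not the VIVID tracker, and it omits the JPDA data association and the feature-based measurements.

```python
import cv2
import numpy as np

def moving_target_mask(prev_gray, curr_gray):
    """Register the previous frame to the current one (camera-motion compensation)
    and difference the aligned frames to expose independently moving pixels."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    _, warp = cv2.findTransformECC(curr_gray, prev_gray, warp, cv2.MOTION_AFFINE, criteria)
    aligned_prev = cv2.warpAffine(prev_gray, warp, curr_gray.shape[::-1],
                                  flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    diff = cv2.absdiff(curr_gray, aligned_prev)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```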

  17. Our experience with the titanium soft clip piston stapedotomy.

    Science.gov (United States)

    Singh, P P; Goyal, Arun

    2013-07-01

    The crimping of a stapes prosthesis to the long process of the incus has always been the bugbear of the otologist. Malcrimping can, on the one hand, lead to necrosis of the long process; on the other, it can lead to a residual air-bone gap or a postoperative reappearance of the conductive hearing loss. To solve these problems, different types of stapes prostheses with different techniques for achieving a secure attachment to the incus have been devised. Retrospective analysis of patient data. Tertiary care hospital. Case records of 20 patients with otosclerosis who had undergone stapedotomy using the titanium soft clip stapes piston (Kurz, Germany) were retrospectively analysed. This new type of stapes piston is a modification of the earlier àWengen clip piston (Kurz, Germany), which was designed to avoid crimping onto the incus in stapedotomy. Hearing results were analyzed using American Academy of Otolaryngology-Head and Neck Surgery guidelines, including a 4-frequency pure tone average. The mean postoperative air-bone gap was within 10 dB in 8 (40%) cases, up to 15 dB in another 8 (40%) cases, and within 20 dB in the remaining 4 (20%). No adverse reactions occurred during follow-up. The use of the titanium soft clip stapes piston gives good results in stapedotomy for otosclerosis. The soft clip design is a new development in the evolution of stapes piston prostheses. Surgical introduction, placement, and fixation are easier than with the earlier àWengen clip piston design.

  18. CAN INTERMITTENT VIDEO SAMPLING CAPTURE INDIVIDUAL DIFFERENCES IN NATURALISTIC DRIVING?

    Science.gov (United States)

    Aksan, Nazan; Schall, Mark; Anderson, Steven; Dawson, Jeffery; Tippin, Jon; Rizzo, Matthew

    2013-01-01

    We examined the utility and validity of intermittent video samples from black box devices for capturing individual difference variability in real-world driving performance in an ongoing study of obstructive sleep apnea (OSA) and community controls. Three types of video clips were coded for several dimensions of interest to driving research including safety, exposure, and driver state. The preliminary findings indicated that clip types successfully captured variability along targeted dimensions such as highway vs. city driving, driver state such as distraction and sleepiness, and safety. Sleepiness metrics were meaningfully associated with adherence to PAP (positive airway pressure) therapy. OSA patients who were PAP adherent showed less sleepiness and less non-driving related gaze movements than nonadherent patients. Simple differences in sleepiness did not readily translate to improvements in driver safety, consistent with epidemiologic evidence to date.

  19. Subtitled video tutorials, an accessible teaching material

    Directory of Open Access Journals (Sweden)

    Luis Bengochea

    2012-11-01

    Full Text Available The use of short audio-visual tutorials constitutes an educational resource that is very attractive for young students, who are widely familiar with this type of format from YouTube clips. Considered as "learning pills", these tutorials are intended to strengthen the understanding of complex concepts that, because of their dynamic nature, can't be represented through texts or diagrams. However, the inclusion of this type of content in eLearning platforms presents accessibility problems for students with visual or hearing disabilities. This paper describes this problem and shows the way in which a teacher could add captions and subtitles to their videos.
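
    As a small, hedged illustration of the captioning step the paper advocates (the helper function, file name, and cue times below are hypothetical), this Python snippet writes subtitle cues to a WebVTT file, a caption format commonly attached to web and eLearning video players.

        def write_webvtt(cues, path):
            # cues: list of (start, end, text) tuples with "HH:MM:SS.mmm" timestamps
            with open(path, "w", encoding="utf-8") as f:
                f.write("WEBVTT\n\n")
                for i, (start, end, text) in enumerate(cues, 1):
                    f.write(f"{i}\n{start} --> {end}\n{text}\n\n")

        write_webvtt(
            [("00:00:01.000", "00:00:04.000", "Welcome to this short tutorial."),
             ("00:00:04.500", "00:00:08.000", "Today we look at recursion.")],
            "tutorial_captions.vtt",
        )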

  20. Effect of apical meristem clipping on carbon allocation and morphological development of white oak seedlings

    Science.gov (United States)

    Paul P. Kormanik; Shi-Jean S. Sung; T.L. Kormanik; Stanley J. Zarnoch

    1994-01-01

    Seedlings from three open-pollinated half-sib white oak seedlots were clipped in mid-July and their development was compared to nonclipped controls after one growing season. In general, when data were analyzed by family, clipped seedlings were significantly less desirable in three to six of the eight variables tested. Numerically, in all family seedlots, the clipped...

  1. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.
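
    To make the frame-aggregation idea concrete, here is a minimal PyTorch sketch under assumed toy settings (five input frames, a small plain CNN) rather than the architecture evaluated in the paper: neighboring frames are stacked along the channel axis and the network predicts a residual correction for the blurry center frame.

        import torch
        import torch.nn as nn

        class MultiFrameDeblurNet(nn.Module):
            # Toy stand-in for a video deblurring CNN: 5 neighboring RGB frames
            # are concatenated channel-wise and mapped to a sharp center frame.
            def __init__(self, n_frames=5):
                super().__init__()
                self.n_frames = n_frames
                self.body = nn.Sequential(
                    nn.Conv2d(3 * n_frames, 64, 5, padding=2), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, 3, 3, padding=1),
                )

            def forward(self, frames):                 # frames: (B, n_frames, 3, H, W)
                b, n, c, h, w = frames.shape
                stacked = frames.reshape(b, n * c, h, w)
                center = frames[:, n // 2]             # blurry reference frame
                return center + self.body(stacked)     # predict a residual correction

        # Training would pair synthetically blurred frame stacks with the sharp
        # center frame and minimize, e.g., an L1 reconstruction loss.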

  2. Video Game Playing and Gambling in Adolescents: Common Risk Factors

    Science.gov (United States)

    Wood, Richard T. A.; Gupta, Rina; Griffiths, Mark

    2004-01-01

    Video games and gambling often contain very similar elements with both providing intermittent rewards and elements of randomness. Furthermore, at a psychological and behavioral level, slot machine gambling, video lottery terminal (VLT) gambling and video game playing share many of the same features. Despite the similarities between video game…

  3. Using MPEG DASH SRD for zoomable and navigable video

    NARCIS (Netherlands)

    D'Acunto, L.; Berg, J. van den; Thomas, E.; Niamut, O.A.

    2016-01-01

    This paper presents a video streaming client implementation that makes use of the Spatial Relationship Description (SRD) feature of the MPEG-DASH standard, to provide a zoomable and navigable video to an end user. SRD allows a video streaming client to request spatial subparts of a particular video
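
    As a hedged sketch of how such a client might use SRD, the Python snippet below parses the SRD value string ("source_id, x, y, w, h, total_w, total_h") carried by each tiled AdaptationSet and selects the tiles that intersect a requested viewport; the tile ids and the 2x2 tiling are hypothetical.

        def parse_srd(value):
            # Parse an SRD value string "source_id,x,y,w,h,W,H" from a
            # SupplementalProperty/EssentialProperty (illustrative, minimal).
            parts = [int(p) for p in value.split(",")]
            keys = ("source_id", "x", "y", "w", "h", "total_w", "total_h")
            return dict(zip(keys, parts))

        def tiles_for_viewport(srd_tiles, vx, vy, vw, vh):
            # Return the adaptation-set ids whose spatial region intersects the
            # requested viewport (all coordinates in the SRD reference space).
            selected = []
            for aset_id, srd in srd_tiles.items():
                if (srd["x"] < vx + vw and vx < srd["x"] + srd["w"] and
                        srd["y"] < vy + vh and vy < srd["y"] + srd["h"]):
                    selected.append(aset_id)
            return selected

        # Hypothetical 2x2 tiling of a 3840x2160 source; zoom into the top-left quadrant.
        tiles = {
            "tile0": parse_srd("0,0,0,1920,1080,3840,2160"),
            "tile1": parse_srd("0,1920,0,1920,1080,3840,2160"),
            "tile2": parse_srd("0,0,1080,1920,1080,3840,2160"),
            "tile3": parse_srd("0,1920,1080,1920,1080,3840,2160"),
        }
        print(tiles_for_viewport(tiles, 0, 0, 1920, 1080))   # -> ['tile0']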

  4. Satisfaction with Online Teaching Videos: A Quantitative Approach

    Science.gov (United States)

    Meseguer-Martinez, Angel; Ros-Galvez, Alejandro; Rosa-Garcia, Alfonso

    2017-01-01

    We analyse the factors that determine the number of clicks on the "Like" button in online teaching videos, with a sample of teaching videos in the area of Microeconomics across Spanish-speaking countries. The results show that users prefer short online teaching videos. Moreover, some features of the videos have a significant impact on…

  5. Video Game Structural Characteristics: A New Psychological Taxonomy

    Science.gov (United States)

    King, Daniel; Delfabbro, Paul; Griffiths, Mark

    2010-01-01

    Excessive video game playing behaviour may be influenced by a variety of factors including the structural characteristics of video games. Structural characteristics refer to those features inherent within the video game itself that may facilitate initiation, development and maintenance of video game playing over time. Numerous structural…

  6. Learning Science Through Digital Video: Views on Watching and Creating Videos

    Science.gov (United States)

    Wade, P.; Courtney, A. R.

    2013-12-01

    In science, the use of digital video to document phenomena, experiments and demonstrations has rapidly increased during the last decade. The use of digital video for science education also has become common with the wide availability of video over the internet. However, as with using any technology as a teaching tool, some questions should be asked: What science is being learned from watching a YouTube clip of a volcanic eruption or an informational video on hydroelectric power generation? What are student preferences (e.g. multimedia versus traditional mode of delivery) with regard to their learning? This study describes 1) the efficacy of watching digital video in the science classroom to enhance student learning, 2) student preferences of instruction with regard to multimedia versus traditional delivery modes, and 3) the use of creating digital video as a project-based educational strategy to enhance learning. Undergraduate non-science majors were the primary focus group in this study. Students were asked to view video segments and respond to a survey focused on what they learned from the segments. Additionally, they were asked about their preference for instruction (e.g. text only, lecture-PowerPoint style delivery, or multimedia-video). A majority of students indicated that well-made video, accompanied with scientific explanations or demonstration of the phenomena was most useful and preferred over text-only or lecture instruction for learning scientific information while video-only delivery with little or no explanation was deemed not very useful in learning science concepts. The use of student generated video projects as learning vehicles for the creators and other class members as viewers also will be discussed.

  7. Spatial-temporal forensic analysis of mass casualty incidents using video sequences.

    Science.gov (United States)

    Hao Dong; Juechen Yin; Schafer, James; Ganz, Aura

    2016-08-01

    In this paper we introduce DIORAMA based forensic analysis of mass casualty incidents (MCI) using video sequences. The video sequences captured on site are automatically annotated by metadata, which includes the capture time and the camera location and viewing direction. Using a visual interface the MCI investigators can easily understand the availability of video clips in specific areas of interest, and efficiently review them. The video-based forensic analysis system will enable the MCI investigators to better understand the rescue operations and subsequently improve training procedures.
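
    A minimal, assumed-schema sketch of the kind of query behind such a visual interface: each clip record carries the capture time, camera position, and heading mentioned above (field names here are illustrative), and an investigator asks for clips shot inside an area of interest during a time window.

        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class ClipMeta:
            # Illustrative metadata record mirroring the annotations the abstract
            # describes: capture time, camera location, and viewing direction.
            clip_id: str
            captured_at: datetime
            lat: float
            lon: float
            heading_deg: float

        def clips_in_area(clips, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
            # Return clips shot inside a rectangular area of interest during a time window.
            return [c for c in clips
                    if lat_min <= c.lat <= lat_max
                    and lon_min <= c.lon <= lon_max
                    and t_start <= c.captured_at <= t_end]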

  8. Color spaces in digital video

    Energy Technology Data Exchange (ETDEWEB)

    Gaunt, R.

    1997-05-01

    For example, humans 'see' more white-to-black (luminance) detail than red, green, or blue color detail. Also, the eye is most sensitive to green colors. Taking advantage of this, both composite and component video allocate more bandwidth for the luma (Y') signal than for the chroma signals. Y'601 is composed of 59% green', 30% red', and 11% blue' (the prime symbol denotes gamma-corrected colors). This luma signal also maintains compatibility with black and white television receivers. Component digital video converts R'G'B' signals (either from a camera or a computer) to a monochromatic brightness signal Y' (referred to here as luma to distinguish it from the CIE luminance linear-light quantity), and two color difference signals Cb and Cr. These last two are the blue and red signals with the luma component subtracted out. As you know, computer graphic images are composed of red, green, and blue elements defined in a linear color space. Color monitors do not display RGB linearly. A linear RGB color space image must be gamma corrected to be displayed properly on a CRT. Gamma correction, which is approximately a 0.45 power function, must also be employed before converting an RGB image to video color space. Gamma correction is defined for video in the international standard ITU-R Rec. BT.709-4. The gamma correction transform is the same for red, green, and blue. The color coding standard for component digital video and high definition video symbolizes gamma-corrected luma by Y', the blue difference signal by Cb (Cb = B' - Y'), and the red color difference signal by Cr (Cr = R' - Y'). Component analog HDTV uses Y'PbPr. To reduce conversion errors, clip in R'G'B', not in Y'CbCr space. View video on a video monitor; computer monitor phosphors are wrong. Use a large word size (double precision) to avoid wrap-around, then round the results to values between 0 and 255. And finally, recall that multiplying two 8-bit numbers results in a 16-bit number, so values need to be clipped to 8
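
    A toy NumPy sketch of the conversion chain the abstract walks through, gamma correction of linear RGB followed by Rec. 601 luma and unscaled colour-difference signals (the 0.45 exponent and the omission of the Cb/Cr scale factors are simplifying assumptions):

        import numpy as np

        def linear_rgb_to_ycbcr601(rgb):
            # rgb: array of shape (..., 3) with linear-light values.
            rgb = np.clip(rgb, 0.0, 1.0)            # clip in R'G'B' space, per the abstract
            rgb_prime = rgb ** 0.45                 # approximate gamma correction
            r, g, b = rgb_prime[..., 0], rgb_prime[..., 1], rgb_prime[..., 2]
            y = 0.299 * r + 0.587 * g + 0.114 * b   # ~30% red', 59% green', 11% blue'
            cb = b - y                              # blue difference (unscaled, illustrative)
            cr = r - y                              # red difference (unscaled, illustrative)
            return np.stack([y, cb, cr], axis=-1)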

  9. Exchanging digital video of laryngeal examinations.

    Science.gov (United States)

    Crump, John M; Deutsch, Thomas

    2004-03-01

    Laryngeal examinations, especially stroboscopic examinations, are increasingly recorded using digital video formats on computer media, rather than using analog formats on videotape. It would be useful to share these examinations with other medical professionals in formats that would facilitate reliable and high-quality playback on a personal computer by the recipients. Unfortunately, a personal computer is not well designed for reliable presentation of artifact-free video. It is particularly important that laryngeal video play without artifacts of motion or color because these are often the characteristics of greatest clinical interest. With proper tools and procedures, and with reasonable compromises in image resolution and the duration of the examination, digital video of laryngeal examinations can be reliably exchanged. However, the tools, procedures, and formats for recording, converting to another digital format ("transcoding"), communicating, copying, and playing digital video with a personal computer are not familiar to most medical professionals. Some understanding of digital video and the tools available is required of those wanting to exchange digital video. Best results are achieved by recording to a digital format best suited for recording (such as MJPEG or DV), judiciously selecting a segment of the recording for sharing, and converting to a format suited to distribution (such as MPEG1 or MPEG2) using a medium suited to the situation (such as e-mail attachment, CD-ROM, a "clip" within a Microsoft PowerPoint presentation, or DVD-Video). If digital video is sent to a colleague, some guidance on playing files and using a PC media player is helpful.

  10. Primary Motor Cortex Activation during Action Observation of Tasks at Different Video Speeds Is Dependent on Movement Task and Muscle Properties

    OpenAIRE

    Moriuchi, Takefumi; Matsuda, Daiki; Nakamura, Jirou; Matsuo, Takashi; Nakashima, Akira; Nishi, Keita; Fujiwara, Kengo; Iso, Naoki; Nakane, Hideyuki; Higashi, Toshio

    2017-01-01

    The aim of the present study was to investigate how the video speed of observed action affects the excitability of the primary motor cortex (M1), as assessed by the size of motor-evoked potentials (MEPs) induced by transcranial magnetic stimulation (TMS). Twelve healthy subjects observed a video clip of a person catching a ball (Experiment 1: rapid movement) and another 12 healthy subjects observed a video clip of a person reaching to lift a ball (Experiment 2: slow movement task). We played ...

  11. Role of intraoperative indocyanine green video-angiography to identify small, posterior fossa arteriovenous malformations mimicking cavernous angiomas. Technical report and review of the literature on common features of these cerebral vascular malformations.

    Science.gov (United States)

    Barbagallo, Giuseppe M V; Certo, Francesco; Caltabiano, Rosario; Chiaramonte, Ignazio; Albanese, Vincenzo; Visocchi, Massimiliano

    2015-11-01

    To illustrate the usefulness of intraoperative indocyanine green videoangiography (ICG-VA) to identify the nidus and feeders of a small cerebellar AVM resembling a cavernous hemangioma. To review the unique features regarding the overlap between these two vascular malformations and to highlight the importance of identifying with ICG-VA, and treating accordingly, the arterial and venous vessels of the AVM. A 36-year-old man presented with bilateral cerebellar hemorrhage. MRI was equivocal in showing an underlying vascular malformation, but angiography demonstrated a small, Spetzler-Martin grade I AVM. Surgical resection of the AVM with the aid of intraoperative ICG-VA was performed. After hematoma evacuation, pre-resection ICG-VA did not reveal the tortuous arterial and venous vessels in keeping with a typical AVM but rather an unusual blackberry-like image resembling a cavernous hemangioma, with tiny surrounding vessels. Such an intraoperative appearance, which could also be the consequence of a "leakage" of fluorescent dye from the nidal pathological vessels, with an absent blood-brain barrier, into the surrounding parenchymal pathological capillary network, is important to recognize as an unusual AVM appearance. Post-resection ICG-VA confirmed the AVM removal, as also shown by postoperative and 3-month follow-up DSAs. Despite technical limitations associated with ICG-VA in post-hemorrhage AVMs, this case, together with the intraoperative video, demonstrates the useful role of ICG-VA in identifying small AVMs with peculiar features. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. CLIpSAT for Interplanetary Missions: Common Low-cost Interplanetary Spacecraft with Autonomy Technologies

    Science.gov (United States)

    Grasso, C.

    2015-10-01

    Blue Sun Enterprises, Inc. is creating a common deep space bus capable of a wide variety of Mars, asteroid, and comet science missions, observational missions in and near GEO, and interplanetary delivery missions. The spacecraft are modular and highly autonomous, featuring a common core and optional expansion for variable-sized science or commercial payloads. Initial spacecraft designs are targeted for Mars atmospheric science, a Phobos sample return mission, geosynchronous reconnaissance, and en-masse delivery of payloads using packetized propulsion modules. By combining design, build, and operations processes for these missions, the cost and effort of creating the bus is shared across a variety of initial missions, reducing overall costs. A CLIpSAT can be delivered to different orbits and still be able to reach interplanetary targets like Mars due to up to 14.5 km/sec of delta-V provided by its high-ISP xenon ion thruster(s). A 6U version of the spacecraft fits PPOD-standard deployment systems, with up to 9 km/s of delta-V. A larger 12U version (with the addition of an expansion module) enables higher overall delta-V and has the ability to jettison the expansion module and return to the Earth-Moon system from Mars orbit with the main spacecraft. CLIpSAT utilizes radiation-hardened electronics and RF equipment, 140+ We of power at Earth (60 We at Mars), a compact navigation camera that doubles as a science imager, and communications of 2000 bps from Mars to the DSN via X-band. This bus could form the cornerstone of a large number of asteroid survey projects, comet intercept missions, and planetary observation missions. The TugBot architecture uses groups of CLIpSATs attached to payloads lacking innate high-delta-V propulsion. The TugBots use coordinated trajectory following by each individual spacecraft to move the payload to the desired orbit - for example, a defense asset might be moved from GEO to lunar transfer orbit in order to protect and hide it, then returned

  13. [Growth and resource allocation pattern of Artemisia frigida under different grazing and clipping intensities].

    Science.gov (United States)

    Li, Jinhua; Li, Zhenqing; Liu, Zhenguo

    2004-03-01

    In order to understand the degradation process and its mechanism in the typical steppe of Inner Mongolia, this paper studied the growth and resource allocation pattern of Artemisia frigida under different grazing and clipping intensities (no grazing, light grazing 1.33 sheep.hm-2, moderate grazing 4.00 sheep.hm-2, heavy grazing 6.67 sheep.hm-2, proportional clipping and stubble clipping), conducted at the Inner Mongolia Grassland Ecosystem Research Station of the Chinese Academy of Sciences (43 degrees 26'-44 degrees 08' N, 116 degrees 04'-117 degrees 05' E). The results showed that the regrowth ability of A. frigida under proportional clipping was superior to that under stubble clipping, and light clipping (1/4 proportional clipping or 10 cm stubble clipping) was superior to no clipping. In the early growth season, the net regrowth of A. frigida was higher under no clipping than under light clipping, but the reverse held in the late growth season (after mid-August). The biomass allocation pattern of A. frigida was roots > leaves > stems. Grazing or clipping affected biomass allocation significantly, especially the allocation to leaves and flowers. The biomass allocation to leaves was significantly higher under 3/4 proportional clipping or 4 cm stubble clipping than under other treatments, and the reverse trend was true for the biomass allocation to flowers. There were no significant differences in biomass allocation to roots and stems among treatments. Sexual reproductive allocation decreased with increasing grazing or clipping intensity, and the reproductive mode of A. frigida changed under heavy grazing. The change in priority of biomass allocation from sexual reproductive organs to clonal growth, in order to sustain and propagate the population, was an important ecological strategy of the species under heavy grazing.

  14. The Moving Image in Education Research: Reassembling the Body in Classroom Video Data

    Science.gov (United States)

    de Freitas, Elizabeth

    2016-01-01

    While audio recordings and observation might have dominated past decades of classroom research, video data is now the dominant form of data in the field. Ubiquitous videography is standard practice today in archiving the body of both the teacher and the student, and vast amounts of classroom and experiment clips are stored in online archives. Yet…

  15. Flipped!: Want to Get Teens Excited about Summer Reading? Just Add Video

    Science.gov (United States)

    Wooten, Jennifer

    2009-01-01

    Fully 57 percent of youth online watch videos, according to a Pew Internet & American Life study. And more and more are creating and sharing clips of their own making. With online engagement such an integral part of their world, Washington state's King County Library System (KCLS) decided to meet kids on their own turf by launching…

  16. Akademisk video

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2017-01-01

    This chapter focuses on methodological issues that arise when using (digital) video for research communication, not least online. Video has long been used in research for data collection and for research communication. With digitization and the internet, however, new opportunities and challenges have emerged for communicating and distributing research results to different target groups via video. At the same time, classic methodological problems, such as the researcher's positioning in relation to what is being studied, remain relevant. Both classic and new issues are discussed in the chapter, which frames the discussion in terms of different possible positionings: communicator, storyteller, or dialogist. These positions relate to genres within 'academic video'. Finally, a methodological toolbox is presented, with tools for planning...

  17. The impact of watching educational video clips on analogue patients' physiological arousal and information recall

    NARCIS (Netherlands)

    Bruinessen, I.R. van; Ende, I.T. van den; Visser, L.N.; Dulmen, S. van

    2016-01-01

    OBJECTIVE: Investigating the influence of watching three educational patient-provider interactions on analogue patients' emotional arousal and information recall. METHODS: In 75 analogue patients the emotional arousal was measured with physiological responses (electrodermal activity and heart rate)

  18. The impact of watching educational video clips on analogue patients' physiological arousal and information recall

    NARCIS (Netherlands)

    van Bruinessen, I. R.; van den Ende, I. T. A.; Visser, L. N. C.; van Dulmen, S.

    2016-01-01

    Investigating the influence of watching three educational patient-provider interactions on analogue patients' emotional arousal and information recall. In 75 analogue patients the emotional arousal was measured with physiological responses (electrodermal activity and heart rate) and self-reported

  19. The impact of watching educational video clips on analogue patients' physiological arousal and information recall.

    NARCIS (Netherlands)

    Bruinessen, I.R. van; Ende, I.T.A. van den; Visser, I.N.C.; Dulmen, S. van

    2016-01-01

    Objective: Investigating the influence of watching three educational patient–provider interactions on analogue patients’ emotional arousal and information recall. Methods: In 75 analogue patients the emotional arousal was measured with physiological responses (electrodermal activity and heart rate)

  20. Efficient computation of clipped Voronoi diagram for mesh generation

    KAUST Repository

    Yan, Dongming

    2013-04-01

    The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.
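
    A minimal 2-D illustration of the idea using Shapely (a brute-force sketch, not the efficient algorithm proposed in the paper): build the unbounded Voronoi diagram of the sites and intersect every cell with the compact domain, keeping only the parts inside it.

        from shapely.geometry import MultiPoint, Polygon
        from shapely.ops import voronoi_diagram

        def clipped_voronoi_2d(points, domain):
            # Unbounded Voronoi diagram of the sites, then clip each cell to the domain.
            cells = voronoi_diagram(MultiPoint(points), envelope=domain)
            return [cell.intersection(domain) for cell in cells.geoms
                    if cell.intersects(domain)]

        # Example: five sites clipped to the unit square.
        domain = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
        sites = [(0.2, 0.3), (0.8, 0.2), (0.5, 0.5), (0.1, 0.9), (0.9, 0.8)]
        for cell in clipped_voronoi_2d(sites, domain):
            print(round(cell.area, 3))

    The clipped cell areas sum to the area of the domain, which is a quick sanity check that no part of the domain is left uncovered.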

  1. QPA-CLIPS: A language and representation for process control

    Science.gov (United States)

    Freund, Thomas G.

    1994-01-01

    QPA-CLIPS is an extension of CLIPS oriented towards process control applications. Its constructs define a dependency network of process actions driven by sensor information. The language consists of three basic constructs: TASK, SENSOR, and FILTER. TASKs define the dependency network describing alternative state transitions for a process. SENSORs and FILTERs define the sensor information sources used to activate state transitions within the network. Deftemplates define these constructs, and their run-time environment is an interpreter knowledge base that performs pattern matching on sensor information and so activates TASKs in the dependency network. The pattern matching technique is based on the repeatable occurrence of a sensor data pattern. QPA-CLIPS has been successfully tested on a SPARCstation providing supervisory control to an Allen-Bradley PLC 5 controller driving molding equipment.

  2. Esophageal Perforation due to Transesophageal Echocardiogram: New Endoscopic Clip Treatment

    Directory of Open Access Journals (Sweden)

    John Robotis

    2014-07-01

    Full Text Available Esophageal perforation due to transesophageal echocardiogram (TEE) during cardiac surgery is rare. A 72-year-old female underwent TEE during an operation for aortic valve replacement. Subsequently, the patient presented with hematemesis. Gastroscopy revealed a bleeding esophageal ulcer. Endoscopic therapy was successful. Although a CT scan excluded perforation, the patient became febrile, and a second gastroscopy revealed a large perforation at the site of the ulcer. The patient's clinical condition required endoscopic intervention with a new OTSC® clip (Ovesco Endoscopy, Tübingen, Germany). The perforation was successfully sealed. The patient remained on intravenous antibiotics, proton pump inhibitors and parenteral nutrition for a few days, followed by enteral feeding. She was discharged fully recovered 3 months later. We clearly demonstrate an effective, less invasive treatment of an esophageal perforation with a new endoscopic clip.

  3. Elementary calculation of clip connections with incomplete sweep of shaft

    Directory of Open Access Journals (Sweden)

    Ivan P. Shatsky

    2015-06-01

    Full Text Available The article describes promising structures of clip (screw and friction) connections with incomplete sweep of the shaft, used in machines and mechanisms for the oil and gas industry. The contact problems of interaction between semi-hubs and shaft are formulated for symmetric and asymmetric connections. For structures that are asymmetric relative to the joint bolt, two types of interaction are investigated: with and without lateral displacement. Based on a priori assumptions about the distribution laws of contact pressure accepted in traditional courses of "Machine Details", an engineering method for calculating clip connections is developed. Here, different types of detail coupling (with a gap, matched, with tension) correspond to concentrated, cosine and sustainable (linear) distributions of contact stresses. Analytical dependences of the boundary points and the breakloose force on the spanning angles, the bolt tightening force and the tribological properties of the joined parts of the subassembly are determined.

  4. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real World Videos. The workshops were run on December 4, 2016, in Cancun in Mexico. The two workshops together received 13 papers. Each paper was then reviewed by at least two expert reviewers in the field. In all, 11 papers were accepted to be presented at the workshops. The topics covered in the papers

  5. Remission of migraine after clipping of saccular intracranial aneurysms

    DEFF Research Database (Denmark)

    Lebedeva, E R; Busygina, A V; Kolotvinov, V S

    2015-01-01

    , this was reduced by 74.5% (P P > 0.5). The decrease of migraine in SIA patients was significantly higher than in controls: 74.5% vs 12.8% (P ... of TTH was given by 33 patients with SIA during the year preceding rupture and by 44 during 1 year after clipping (P > 0.75). Forty-one control patients had TTH, 27 after 1 year of treatment, a reduction 34.1% (P

  6. Histone H3 tail clipping regulates gene expression

    OpenAIRE

    Santos-Rosa, Helena; Kirmizis, Antonis; Nelson, Christopher; Bartke, Till; Saksouk, Nehme; Cote, Jacques; Kouzarides, Tony

    2008-01-01

    Induction of gene expression in yeast and human cells involves changes in histone modifications associated with promoters. Here we identify a histone H3 endopeptidase activity in S. cerevisiae that may regulate these events. The endopeptidase cleaves H3 after alanine 21, generating a histone lacking the first 21 residues and displays a preference for H3 tails carrying repressive modifications. In vivo, the H3 N-terminus is clipped, specifically within the promoter of genes following the induc...

  7. MedlinePlus FAQ: Is audio description available for videos on MedlinePlus?

    Science.gov (United States)

    Question: Is audio description available for videos on MedlinePlus? Answer: Audio description of videos helps make the content of videos accessible to ...

  8. Closed reduction of zygomatic tripod fractures using a towel clip.

    Science.gov (United States)

    Cinpolat, Anı; Ozkan, Ozlenen; Bektas, Gamze; Ozkan, Omer

    2017-08-01

    The zygomatic bone constitutes the prominence of the cheek, and fractures of the zygomatic bone are the second most common facial fractures. Treatment of zygomatic bone fractures can be examined under two headings, open and closed reduction. This paper describes a new technique for the closed reduction of tripod fractures using a towel clip. Seventeen consecutive patients (three females, 14 males) with a mean age of 35.5 years (range = 18-66 years) with zygomatic tripod fracture were treated using the towel clip technique between December 2011 and February 2014. Patients were assessed at 1 and 6 months postoperatively by physical examination and computed tomography. Preoperatively, nine patients had paresthesia in the infraorbital nerve region; three of these cases regressed postoperatively. Persistent collapse of the zygomatic projection was present in one patient. Non-comminuted zygomatic tripod fractures can be easily treated percutaneously with the towel clip method in the absence of preoperative ocular problems such as diplopia, enophthalmos, or restricted eye movements. The technique is economical, fast, and safe. The possibility of persistent zygoma collapse after reduction should be kept in mind, and preoperatively the team should be warned of the possibility of progression to open reduction during surgery.

  9. The microtubule plus-end-tracking protein CLIP-170 associates with the spermatid manchette and is essential for spermatogenesis.

    NARCIS (Netherlands)

    A.S. Akhmanova (Anna); A.L. Mausset-Bonnefont (Anne-Laure); W.A. van Cappellen (Gert); N. Keijzer (Nanda); C.C. Hoogenraad (Casper); T. Stepanova (Tatiana); K. Drabek (Ksenija); J. van der Wees (Jacqueline); M. Mommaas (Mieke); J. Onderwater (Jos); H. van der Meulen (Hans); M.E. Tanenbaum (Marvin); R.H. Medema (Rene); J.W. Hoogerbrugge (Jos); J.T.M. Vreeburg (Jan); E.J. Uringa; J.A. Grootegoed (Anton); F.G. Grosveld (Frank); N.J. Galjart (Niels)

    2005-01-01

    CLIP-170 is a microtubule "plus-end-tracking protein" implicated in the control of microtubule dynamics, dynactin localization, and the linking of endosomes to microtubules. To investigate the function of mouse CLIP-170, we generated CLIP-170 knockout and GFP-CLIP-170 knock-in alleles.

  10. The microtubule plus-end-tracking protein CLIP-170 associates with the spermatid manchette and is essential for spermatogenesis

    NARCIS (Netherlands)

    Akhmanova, A.S.; Mausset-Bonnefont, A.-L.; Cappellen, W. van; Keijzer, N.; Hoogenraad, C.C.; Stepanova, T.; Drabek, K.; Wees, J. van der; Mommaas, M.; Onderwater, J.; Meulen, H. van der; Tanenbaum, M.E.; Medema, R.H.; Hoogerbrugge, J.; Vreeburg, J.; Uringa, E.-J.; Grootegoed, J.A.; Grosveld, F.; Galjart, N.

    2005-01-01

    CLIP-170 is a microtubule "plus-end-tracking protein" implicated in the control of microtubule dynamics, dynactin localization, and the linking of endosomes to microtubules. To investigate the function of mouse CLIP-170, we generated CLIP-170 knockout and GFP-CLIP-170 knock-in alleles. Residual

  11. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real W...

  12. Video consultation use by Australian general practitioners: video vignette study.

    Science.gov (United States)

    Jiwa, Moyez; Meng, Xingqiong

    2013-06-19

    There is unequal access to health care in Australia, particularly for the one-third of the population living in remote and rural areas. Video consultations delivered via the Internet present an opportunity to provide medical services to those who are underserviced, but this is not currently routine practice in Australia. There are advantages and shortcomings to using video consultations for diagnosis, and general practitioners (GPs) have varying opinions regarding their efficacy. The aim of this Internet-based study was to explore the attitudes of Australian GPs toward video consultation by using a range of patient scenarios presenting different clinical problems. Overall, 102 GPs were invited to view 6 video vignettes featuring patients presenting with acute and chronic illnesses. For each vignette, they were asked to offer a differential diagnosis and to complete a survey based on the theory of planned behavior documenting their views on the value of a video consultation. A total of 47 GPs participated in the study. The participants were younger than Australian GPs based on national data, and more likely to be working in a larger practice. Most participants (72%-100%) agreed on the differential diagnosis in all video scenarios. Approximately one-third of the study participants were positive about video consultations, one-third were ambivalent, and one-third were against them. In all, 91% opposed conducting a video consultation for the patient with symptoms of an acute myocardial infarction. Inability to examine the patient was most frequently cited as the reason for not conducting a video consultation. Australian GPs who were favorably inclined toward video consultations were more likely to work in larger practices, and were more established GPs, especially in rural areas. The survey results also suggest that the deployment of video technology will need to focus on follow-up consultations. Patients with minor self-limiting illnesses and those with medical

  13. Restoration of clipped seismic waveforms using projection onto convex sets method.

    Science.gov (United States)

    Zhang, Jinhai; Hao, Jinlai; Zhao, Xu; Wang, Shuqin; Zhao, Lianfeng; Wang, Weimin; Yao, Zhenxing

    2016-12-14

    Seismic waveforms are clipped when the amplitude exceeds the upper limit of the dynamic range of the seismometer. Clipped waveforms are typically assumed to be unusable and are seldom used in waveform-based research. Here, we assume the clipped components of the waveform share the same frequency content as the un-clipped components. We leverage this similarity to convert clipped waveforms to true waveforms by iteratively reconstructing the frequency spectrum using the projection onto convex sets method. Using artificially clipped data we find that statistically the restoration error is ~1% and ~5% when clipped at 70% and 40% of peak amplitude, respectively. We verify our method using real data recorded at co-located seismometers that have different gain controls, one set to record large amplitudes on scale and the other set to record low amplitudes on scale. Using our restoration method we recover 87 out of 93 clipped broadband records from the 2013 Mw6.6 Lushan earthquake. Estimating that we can recover 20 clipped waveforms for each M5.0+ earthquake, for the ~1,500 M5.0+ events that occur each year we could restore ~30,000 clipped waveforms annually, which would greatly enhance usable waveform data archives. These restored waveform data would also improve the azimuthal station coverage and spatial footprint.
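
    A rough NumPy sketch of a POCS-style declipping loop under simplifying assumptions (hard spectral thresholding and a known, symmetric clip level); it is meant only to convey the alternating-projections idea, not the authors' exact procedure.

        import numpy as np

        def pocs_declip(x_clipped, clip_level, n_iter=200, keep_frac=0.2):
            # x_clipped: 1-D clipped waveform; clip_level: recorder saturation amplitude.
            n = len(x_clipped)
            clipped_pos = x_clipped >= clip_level       # samples stuck at +clip_level
            clipped_neg = x_clipped <= -clip_level      # samples stuck at -clip_level
            known = ~(clipped_pos | clipped_neg)        # reliable (un-clipped) samples
            n_keep = max(1, min(int(keep_frac * n), n // 2))

            x = x_clipped.astype(float).copy()
            for _ in range(n_iter):
                # Projection 1: keep only the strongest spectral components, since the
                # clipped samples are assumed to share the un-clipped frequency content.
                X = np.fft.rfft(x)
                thresh = np.sort(np.abs(X))[-n_keep]
                X[np.abs(X) < thresh] = 0.0
                x = np.fft.irfft(X, n)

                # Projection 2: enforce the data constraints.
                x[known] = x_clipped[known]                               # reliable samples kept exactly
                x[clipped_pos] = np.maximum(x[clipped_pos], clip_level)   # true value >= +clip_level
                x[clipped_neg] = np.minimum(x[clipped_neg], -clip_level)  # true value <= -clip_level
            return x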

  14. Receiver-based recovery of clipped ofdm signals for papr reduction: A bayesian approach

    KAUST Repository

    Ali, Anum

    2014-01-01

    Clipping is one of the simplest peak-to-average power ratio reduction schemes for orthogonal frequency division multiplexing (OFDM). Deliberately clipping the transmission signal degrades system performance, and clipping mitigation is required at the receiver for information restoration. In this paper, we acknowledge the sparse nature of the clipping signal and propose a low-complexity Bayesian clipping estimation scheme. The proposed scheme utilizes a priori information about the sparsity rate and noise variance for enhanced recovery. At the same time, the proposed scheme is robust against inaccurate estimates of the clipping signal statistics. The undistorted phase property of the clipped signal, as well as the clipping likelihood, is utilized for enhanced reconstruction. Furthermore, motivated by the nature of modern OFDM-based communication systems, we extend our clipping reconstruction approach to multiple antenna receivers and multi-user OFDM. We also address the problem of channel estimation from pilots contaminated by the clipping distortion. Numerical findings are presented that depict favorable results for the proposed scheme compared to the established sparse reconstruction schemes.
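
    The sparsity the proposed receiver exploits is easy to see in a toy simulation; the sketch below (assumed QPSK symbols, 256 subcarriers, and a clipping threshold of 1.4 times the RMS amplitude) clips one OFDM symbol and counts how few samples the distortion actually touches.

        import numpy as np

        rng = np.random.default_rng(0)
        n_sub = 256                      # number of OFDM subcarriers (assumed)
        clip_ratio = 1.4                 # clip threshold relative to RMS amplitude (assumed)

        # Random QPSK symbols on the subcarriers -> time-domain OFDM symbol via IFFT.
        symbols = (rng.choice([-1, 1], n_sub) + 1j * rng.choice([-1, 1], n_sub)) / np.sqrt(2)
        x = np.fft.ifft(symbols) * np.sqrt(n_sub)

        # Deliberate amplitude clipping at the transmitter: the magnitude is limited,
        # the phase is kept (the "undistorted phase property" the abstract mentions).
        threshold = clip_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
        x_clipped = np.where(np.abs(x) > threshold, threshold * np.exp(1j * np.angle(x)), x)

        # The clipping distortion c = x_clipped - x is nonzero only where a peak
        # exceeded the threshold, i.e. it is sparse.
        c = x_clipped - x

        def papr_db(s):
            return 10 * np.log10(np.max(np.abs(s) ** 2) / np.mean(np.abs(s) ** 2))

        print("clipped samples:", int(np.count_nonzero(np.abs(c) > 1e-12)), "of", n_sub)
        print("PAPR before: %.2f dB, after: %.2f dB" % (papr_db(x), papr_db(x_clipped)))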

  15. Sex Differences in Emotional Evaluation of Film Clips: Interaction with Five High Arousal Emotional Categories.

    Science.gov (United States)

    Maffei, Antonio; Vencato, Valentina; Angrilli, Alessandro

    2015-01-01

    The present study aimed to investigate gender differences in the emotional evaluation of 18 film clips divided into six categories: Erotic, Scenery, Neutral, Sadness, Compassion, and Fear. 41 female and 40 male students rated all clips for valence-pleasantness, arousal, level of elicited distress, anxiety, jittery feelings, excitation, and embarrassment. Analysis of positive films revealed higher levels of arousal, pleasantness, and excitation to the Scenery clips in both genders, but lower pleasantness and greater embarrassment in women compared to men to Erotic clips. Concerning unpleasant stimuli, unlike men, women reported more unpleasantness to the Compassion, Sadness, and Fear compared to the Neutral clips and rated them also as more arousing than did men. They further differentiated the films by perceiving greater arousal to Fear than to Compassion clips. Women rated the Sadness and Fear clips with greater Distress and Jittery feelings than men did. Correlation analysis between arousal and the other emotional scales revealed that, although men looked less aroused than women to all unpleasant clips, they also showed a larger variance in their emotional responses as indicated by the high number of correlations and their relatively greater extent, an outcome pointing to a masked larger sensitivity of part of male sample to emotional clips. We propose a new perspective in which gender difference in emotional responses can be better evidenced by means of film clips selected and clustered in more homogeneous categories, controlled for arousal levels, as well as evaluated through a number of emotion focused adjectives.

  16. Sex Differences in Emotional Evaluation of Film Clips: Interaction with Five High Arousal Emotional Categories.

    Directory of Open Access Journals (Sweden)

    Antonio Maffei

    Full Text Available The present study aimed to investigate gender differences in the emotional evaluation of 18 film clips divided into six categories: Erotic, Scenery, Neutral, Sadness, Compassion, and Fear. 41 female and 40 male students rated all clips for valence-pleasantness, arousal, level of elicited distress, anxiety, jittery feelings, excitation, and embarrassment. Analysis of positive films revealed higher levels of arousal, pleasantness, and excitation to the Scenery clips in both genders, but lower pleasantness and greater embarrassment in women compared to men to Erotic clips. Concerning unpleasant stimuli, unlike men, women reported more unpleasantness to the Compassion, Sadness, and Fear compared to the Neutral clips and rated them also as more arousing than did men. They further differentiated the films by perceiving greater arousal to Fear than to Compassion clips. Women rated the Sadness and Fear clips with greater Distress and Jittery feelings than men did. Correlation analysis between arousal and the other emotional scales revealed that, although men looked less aroused than women to all unpleasant clips, they also showed a larger variance in their emotional responses as indicated by the high number of correlations and their relatively greater extent, an outcome pointing to a masked larger sensitivity of part of male sample to emotional clips. We propose a new perspective in which gender difference in emotional responses can be better evidenced by means of film clips selected and clustered in more homogeneous categories, controlled for arousal levels, as well as evaluated through a number of emotion focused adjectives.

  17. Radiologic advantages of potential use of polymer plastic clips in neurosurgery.

    Science.gov (United States)

    Delibegović, Samir

    2014-01-01

    Plastic clips are made of diamagnetic material and may result in fewer computed tomography (CT) and magnetic resonance artifacts than titanium clips. Considering that polymer plastic clips are increasingly being used in endoscopic surgery, our study examined the CT and magnetic resonance imaging (MRI) characteristics of plastic clips after application in the neurocranium and compared them with titanium clips. Craniotomy was performed on the heads of domestic pigs (Sus scrofa domestica), and, at an angle of 90°, a permanent Yasargil FT 746 T clip was placed in a frontobasal, interhemispheric position. A plastic polymer medium-large Hem-o-lok clip was placed in the same position into another animal. After this procedure, CT of the brain was performed using Siemens 16 slice, followed by an MRI scan, on Philips MRI, 1.5 Tesla. The CT and magnetic resonance scans were analyzed. On axial CT sections through the site of placement of titanium clips, dotted hyperdensity with a high value of Hounsfield units (HUI) of about 2800-3000 could be clearly seen. At the site where the plastic polymer clips were placed, discrete hyperdensity was observed, measuring 130-140 HUI. MRI of the brain in which titanium clips were used revealed a hypointensive T1W signal in the interhemispheric fissure, with a hypointensive T2W signal. On the other hand, upon examination of the MRI of the brain in which plastic clips were used, the T1W signal described above did not occur, and there was also no T2W signal, and no artifacts observed. The plastic clips are made of a diamagnetic, nonconductive material that results in fewer CT and MRI artifacts than titanium clips. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Mechanical evaluation of cerebral aneurysm clip scissoring phenomenon: comparison of titanium alloy and cobalt alloy.

    Science.gov (United States)

    Tsutsumi, Keiji; Horiuchi, Tetsuyoshi; Hongo, Kazuhiro

    2017-09-13

    The crossing of cerebral aneurysm clip blades during surgery is well known as scissoring. Scissoring may cause rupture of the aneurysm due to laceration of its neck. Although aneurysm clip scissoring is well known, there have been few reports describing the details of this phenomenon. A quasi-scissoring phenomenon was introduced mechanically by rotating the clip head attached to a silicone sheet. The anti-scissoring torque during the twist of the blades was measured while changing the depth and the opening width. The closing force was also evaluated. Sugita straight clips of titanium alloy and cobalt alloy were used in the present study. For both materials, the anti-scissoring torque and the closing force were greater at a thickness of 3 mm than at 1 mm. The initial closing forces and the anti-scissoring torque values at each rotation angle increased in proportion to depth. Closing forces of the titanium alloy clip were slightly higher than those of the cobalt alloy clip. By contrast, anti-scissoring torque values of the cobalt alloy clip were greater than those of the titanium alloy clip under all conditions. At 3 mm thickness and 3 mm depth, the anti-scissoring torque values of the titanium alloy clip decreased suddenly when the angle surpassed 70 degrees. The aneurysm clip scissoring phenomenon tends to occur when the aneurysm neck is clipped only with the blade tips. Based on the results of this experiment, the titanium alloy clip is more prone to scissoring than the cobalt alloy clip under conditions of wide blade separation distance and shallow blade length.

  19. Telemetry and Communication IP Video Player

    Science.gov (United States)

    OFarrell, Zachary L.

    2011-01-01

    Aegis Video Player is the name of the video over IP system for the Telemetry and Communications group of the Launch Services Program. Aegis' purpose is to display video streamed over a network connection to be viewed during launches. To accomplish this task, a VLC ActiveX plug-in was used in C# to provide the basic capabilities of video streaming. The program was then customized to be used during launches. The VLC plug-in can be configured programmatically to display a single stream, but for this project multiple streams needed to be accessed. To accomplish this, an easy to use, informative menu system was added to the program to enable users to quickly switch between videos. Other features were added to make the player more useful, such as watching multiple videos and watching a video in full screen.

  20. "Life in the Universe" Final Event Video Now Available

    Science.gov (United States)

    2002-02-01

    ESO Video Clip 01/02 is issued on the web in conjunction with the release of a 20-min documentary video from the Final Event of the "Life in the Universe" programme. This unique event took place in November 2001 at CERN in Geneva, as part of the 2001 European Science and Technology Week, an initiative by the European Commission to raise the public awareness of science in Europe. The "Life in the Universe" programme comprised competitions in 23 European countries to identify the best projects from school students. The projects could be scientific or a piece of art, a theatrical performance, poetry or even a musical performance. The only restriction was that the final work must be based on scientific evidence. Winning teams from each country were invited to a "Final Event" at CERN on 8-11 November, 2001 to present their projects to a panel of International Experts during a special three-day event devoted to understanding the possibility of other life forms existing in our Universe. This Final Event also included a spectacular 90-min webcast from CERN with the highlights of the programme. The video describes the Final Event and the enthusiastic atmosphere when more than 200 young students and teachers from all over Europe met with some of the world's leading scientific experts of the field. The present video clip, with excerpts from the film, is available in four versions: two MPEG files and two streamer-versions of different sizes; the latter require RealPlayer software. Video Clip 01/02 may be freely reproduced. The 20-min video is available on request from ESO, for viewing in VHS and, for broadcasters, in Betacam-SP format. Please contact the ESO EPR Department for more details. Life in the Universe was jointly organised by the European Organisation for Nuclear Research (CERN) , the European Space Agency (ESA) and the European Southern Observatory (ESO) , in co-operation with the European Association for Astronomy Education (EAAE). Other research organisations were

  1. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real World Videos. The topics covered in the papers include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition...

  2. Video Analytics

    DEFF Research Database (Denmark)

    This book collects the papers presented at two workshops during the 23rd International Conference on Pattern Recognition (ICPR): the Third Workshop on Video Analytics for Audience Measurement (VAAM) and the Second International Workshop on Face and Facial Expression Recognition (FFER) from Real World Videos. The topics covered in the papers include: re-identification, consumer behavior analysis, utilizing pupillary response for task difficulty measurement, logo detection, saliency prediction, classification of facial expressions, face recognition, face verification, age estimation, super-resolution, pose estimation, and pain recognition...

  3. Mining Videos for Features that Drive Attention

    Science.gov (United States)

    2015-04-01

    ... known as a saccade, to bring the area of interest into alignment with the fovea. Within the fovea too, attention can ... infer attentional allocation. The eye traces recorded during the viewing of the stimuli by the subjects were parsed into saccades based on a threshold ... of velocity as described before [1]. A total of 11,430 saccades were extracted and analyzed. Using the saliency model, we were able to extract

  4. Fear conditioning with film clips: a complex associative learning paradigm.

    Science.gov (United States)

    Kunze, Anna E; Arntz, Arnoud; Kindt, Merel

    2015-06-01

    We argue that the stimuli used in traditional fear conditioning paradigms are too simple to model the learning and unlearning of complex fear memories. We therefore developed and tested an adapted fear conditioning paradigm, specifically designed for the study of complex associative memories. Second, we explored whether manipulating the meaning and complexity of the CS-UCS association strengthened the learned fear association. In a two-day differential fear conditioning study, participants were randomly assigned to two experimental conditions. All participants were subjected to the same CSs (i.e., pictures) and UCS (i.e., 3 s film clip) during fear conditioning. However, in one of the conditions (negative-relevant context), the reinforced CS and UCS were meaningfully connected to each other by a 12 min aversive film clip presented prior to fear acquisition. Participants in the other condition (neutral context) were not able to make such meaningful connection between these stimuli, as they viewed a neutral film clip. Fear learning and unlearning were observed on fear-potentiated startle data and distress ratings within the adapted paradigm. Moreover, several group differences on these measures indicated increased UCS valence and enhanced associative memory strength in the negative-relevant context condition compared to the neutral context condition. Due to technical equipment failure, skin conductance data could not be interpreted. The fear conditioning paradigm as presented in the negative-relevant context condition holds considerable promise for the study of complex associative fear memories and therapeutic interventions for such memories. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Histological Analysis of Aneurysm Wall Occluded with Clip Blades. A Case Report.

    Science.gov (United States)

    Hasegawa, Takatoshi; Horiuchi, Tetsuyoshi; Hongo, Kazuhiro

    2015-08-01

    Reports on histological changes of the vascular wall following clipping surgery have been scarce. The authors experienced a case of unruptured cerebral aneurysm in which tissue that had been occluded by the clip blades for 6 years was obtained and histologically examined. The aneurysmal wall following clipping showed granulomatous inflammation with necrosis, and the occluded aneurysmal walls were found with collagenous fibrous tissue. Mild infiltration by lymphocytes and fibrous thickening of the intima had occurred. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Evaluation of Metallic Artifacts Caused by Nonpenetrating Titanium Clips in Postoperative Neuroimaging.

    Science.gov (United States)

    Ito, Kiyoshi; Seguchi, Tatsuya; Nakamura, Takuya; Chiba, Akihiro; Hasegawa, Takatoshi; Nagm, Alhusain; Horiuchi, Tetsuyoshi; Hongo, Kazuhiro

    2016-12-01

    Nonpenetrating titanium clips create no suture holes and thereby reduce cerebrospinal fluid leakage after dural closure. However, no data exist regarding metallic artifacts caused by these clips during postoperative neuroimaging. We aimed to evaluate clip-related artifacts on postoperative magnetic resonance (MR) images of 17 patients who underwent spinal surgery. A phantom study evaluated the size of metallic artifacts, and a clinical study evaluated the quality of postoperative spinal MR images. Both 1.5-T studies used T1-weighted and T2-weighted fast spin echo sequences. The phantom study compared clip and artifact size for 10 clips. Artifacts were defined as signal voids surrounded by high signal amplitude that followed the clip shape. In the clinical study, 2 neurosurgeons assessed 22 images from 17 patients of the spinal cord, cauda equina, and paravertebral muscles adjacent to the nonpenetrating titanium clips, using 5-point scales. Mean metallic artifact sizes were 4.82 ± 0.16 mm (T1) and 4.66 ± 0.25 mm (T2; P < 0.001 vs. control). The former and latter were respectively 207% and 200% larger than the clip size. Both readers graded spinal cord and paravertebral muscles images as 3 or 4, indicating very good image quality regardless of clip-related artifacts, with excellent interobserver agreement (κ = 0.99 and 0.98, respectively). Metallic artifacts caused by nonpenetrating titanium clips were 200% larger than the actual clip but did not affect spinal cord and extradural tissue visualization. The use of these clips for closing the spinal dura mater does not alter postoperative radiologic evaluation quality. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. User-oriented summary extraction for soccer video based on multimodal analysis

    Science.gov (United States)

    Liu, Huayong; Jiang, Shanshan; He, Tingting

    2011-11-01

    An advanced user-oriented summary extraction method for soccer video is proposed in this work. First, an algorithm for user-oriented summary extraction from soccer video is introduced. It takes a novel approach that integrates multimodal analysis, such as extraction and analysis of stadium features, moving object features, audio features and text features. From these features, the semantics of the soccer video and the highlight mode are obtained. The highlight positions can then be found and assembled according to their highlight degrees to obtain the video summary. The experimental results for sports video of World Cup soccer games indicate that multimodal analysis is effective for soccer video browsing and retrieval.
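
    A toy sketch of the final fusion-and-selection step, with made-up modality names, uniform weights, and per-segment scores standing in for the outputs of the analysers described above (none of these values come from the paper):

        import numpy as np

        def fuse_highlight_scores(scores, weights=None, top_k=5):
            # scores: dict mapping modality name -> per-segment score array.
            # weights (if given) follow the sorted modality names; default is uniform.
            names = sorted(scores)
            mat = np.vstack([scores[n] for n in names])       # (modalities, segments)
            w = np.ones(len(names)) / len(names) if weights is None else np.asarray(weights)
            highlight = w @ mat                               # fused highlight degree
            order = np.argsort(highlight)[::-1][:top_k]       # best segments first
            return sorted(order.tolist())                     # return in temporal order

        # Hypothetical per-segment scores from three analysers.
        scores = {
            "audio_excitement": np.array([0.1, 0.9, 0.3, 0.8, 0.2]),
            "motion_activity":  np.array([0.2, 0.7, 0.4, 0.9, 0.1]),
            "caption_keywords": np.array([0.0, 1.0, 0.0, 0.5, 0.0]),
        }
        print(fuse_highlight_scores(scores, top_k=2))   # -> [1, 3]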

  8. Sympathetic block by metal clips may be a reversible operation

    DEFF Research Database (Denmark)

    Thomsen, Lars L; Mikkelsen, Rasmus T; Derejko, Miroslawa

    2014-01-01

    OBJECTIVES: Thoracoscopic sympathectomy is now used routinely to treat patients with disabling primary hyperhidrosis or facial blushing. Published results are excellent, but side effects, such as compensatory sweating, are also very frequent. The surgical techniques used and the levels of targeting ... suggests in theory that application of metal clips to the sympathetic chain is a reversible procedure if only the observation period is prolonged. Further studies with longer periods between application and removal as well as investigations of nerve conduction should be encouraged, because we do not know

  9. Production of 360° video : Introduction to 360° video and production guidelines

    OpenAIRE

    Ghimire, Sujan

    2016-01-01

    The main goal of this thesis project is to introduce the latest media technology and provide a complete guideline. This project is based on the production of 360° video by using multiple GoPro cameras. This project was the first 360° video project at Helsinki Metropolia University of Applied Sciences. 360° video offers a totally different viewing experience and features not found in conventional video. 360° x 180° video coverage and active participation from viewers are the best parts of this vid...

  10. A Comparison of Comprehension Processes in Sign Language Interpreter Videos with or without Captions.

    Science.gov (United States)

    Debevc, Matjaž; Milošević, Danijela; Kožuh, Ines

    2015-01-01

    One important theme in captioning is whether the implementation of captions in individual sign language interpreter videos can positively affect viewers' comprehension when compared with sign language interpreter videos without captions. In our study, an experiment was conducted using four video clips with information about everyday events. Fifty-one deaf and hard of hearing sign language users alternately watched the sign language interpreter videos with, and without, captions. Afterwards, they answered ten questions. The results showed that the presence of captions positively affected their rates of comprehension, which increased by 24% among deaf viewers and 42% among hard of hearing viewers. The most obvious differences in comprehension between watching sign language interpreter videos with and without captions were found for the subjects of hiking and culture, where comprehension was higher when captions were used. The results led to suggestions for the consistent use of captions in sign language interpreter videos in various media.

  11. Quantifying turfgrass-available N from returned clippings using anion exchange membranes

    OpenAIRE

    Kopp, Kelly L.; Guillard, Karl

    2009-01-01

    Returning clippings can provide N to turf, but the amount of plant-available N derived from clippings is not easy to quantify. An accurate estimate of N released by clippings would be useful in guiding turf N fertilizer recommendations. The objective of this study was to determine if anion-exchange membranes (AEMs) could be used to quantify plant-available soil N when clippings are returned. A greenhouse and two field experiments were set out in randomized block designs using a factorial arra...

  12. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    Science.gov (United States)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-02-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback, providing instant access to the content of interest. This video framework has been used in more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS video framework and over 5 years of usage experience in several STEM courses.
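
    As a rough illustration of the frame-to-text step described above (not the ICS implementation itself), the following sketch samples frames from a lecture video, binarizes them, runs OCR, and builds a word-to-timestamp index; OpenCV (cv2) and pytesseract are assumed to be installed, and the sampling interval and thresholding choice are arbitrary.

```python
# Hedged sketch of the general idea: extract slide text from sampled frames and
# map each word to the timestamps where it appears, to support search.
import cv2
import pytesseract
from collections import defaultdict

def index_lecture(video_path, every_n_seconds=10):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = int(fps * every_n_seconds)
    index = defaultdict(set)          # word -> set of timestamps (seconds)
    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Simple global threshold; real systems apply stronger image transforms.
            _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            text = pytesseract.image_to_string(binary)
            t = frame_no / fps
            for word in text.lower().split():
                index[word].add(round(t))
        frame_no += 1
    cap.release()
    return index
```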

  13. To clip or not to clip the breast tumor bed? A retrospective look at the geographic miss index and normal tissue index of 110 patients with breast cancer.

    Science.gov (United States)

    Ebner, Florian; de Gregorio, Nikolaus; Rempen, Andreas; Mohr, Peter; de Gregorio, Amelie; Wöckel, Achim; Janni, Wolfgang; Witucki, Gerlo

    2017-06-01

    Planning of breast radiation for patients after breast-conserving surgery often relies on clinical markers such as scars. Lately, surgical clips have been used to identify the tumor location. The purpose of this study was to evaluate the geographic miss index (GMI) and the normal tissue index (NTI) for the electron boost in breast cancer treatment plans with and without surgical clips. We conducted a retrospective descriptive study of 110 consecutive post-surgical patients who underwent breast-conserving treatment for early breast cancer, in which the clinical treatment field was compared with the radiologic (clipped) field and the GMI and NTI for the electron boost were calculated. The average clinical field was 100 mm (range, 100-120 mm) and the clipped field was 90 mm (range, 80-100 mm). The average GMI was 11.3% (range, 0-44%), and the average NTI was 27.5% (range, 0-54%). The GMI and NTI were reduced through the use of intra-surgically placed clips. The impact of local tumor control on the survival of patients with breast cancer is also influenced by the precision of radiotherapy. Additionally, patients demand an appealing cosmetic result. This makes "clinical" markers such as scars unreliable for radiotherapy planning. A simple way of identifying the tissue at risk is by intra-surgical clipping of the tumor bed. Our results show that the use of surgical clips can reduce the diameter of the radiotherapy field and increase the accuracy of radiotherapy planning. With the placement of surgical clips, more tissue at risk is included in the radiotherapy field. Less normal tissue receives radiotherapy with the use of surgical clips.

  14. An Evaluation of Video-to-Video Face Verification

    NARCIS (Netherlands)

    Poh, N.; Chan, C.H.; Kittler, J.; Marcel, S.; Mc Cool, C.; Argones Rúa, E.; Alba Castro, J.L.; Villegas, M.; Paredes, R.; Štruc, V.; Pavešić, N.; Salah, A.A.; Fang, H.; Costen, N.

    2010-01-01

    Person recognition using facial features, e.g., mug-shot images, has long been used in identity documents. However, due to the widespread use of web-cams and mobile devices embedded with a camera, it is now possible to realize facial video recognition, rather than resorting to just still images. In

  15. Video Capture and Editing as a Tool for the Storage, Distribution, and Illustration of Morphological Characters of Nematodes

    OpenAIRE

    De Ley, Paul; Bert, Wim

    2002-01-01

    Morphological identification and detailed observation of nematodes usually require permanent slides, but these are never truly permanent and often prevent the same specimens from being used for other purposes. To efficiently record the morphology of nematodes in a format that allows easy archiving, editing, and distribution, we have assembled two micrographic video capture and editing (VCE) configurations. These assemblies allow production of short video clips that mimic multifocal observation of...

  16. Roadside video data analysis deep learning

    CERN Document Server

    Verma, Brijesh; Stockwell, David

    2017-01-01

    This book highlights the methods and applications for roadside video data analysis, with a particular focus on the use of deep learning to solve roadside video data segmentation and classification problems. It describes system architectures and methodologies that are specifically built upon learning concepts for roadside video data processing, and offers a detailed analysis of the segmentation, feature extraction and classification processes. Lastly, it demonstrates the applications of roadside video data analysis including scene labelling, roadside vegetation classification and vegetation biomass estimation in fire risk assessment.

  17. Automated UAV-based mapping for airborne reconnaissance and video exploitation

    Science.gov (United States)

    Se, Stephen; Firoozfam, Pezhman; Goldstein, Norman; Wu, Linda; Dutkiewicz, Melanie; Pace, Paul; Naud, J. L. Pierre

    2009-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for force protection, situational awareness, mission planning, damage assessment and others. UAVs gather huge amounts of video data, but it is extremely labour-intensive for operators to analyse hours and hours of received data. At MDA, we have developed a suite of tools for automated video exploitation, including calibration, visualization, change detection and 3D reconstruction. Ongoing work aims to improve the robustness of these tools and automate the process as much as possible. Our calibration tool extracts and matches tie-points in the video frames incrementally to recover the camera calibration and poses, which are then refined by bundle adjustment. Our visualization tool stabilizes the video, expands its field-of-view and creates a geo-referenced mosaic from the video frames. It is important to identify anomalies in a scene, which may include detecting any improvised explosive devices (IEDs). However, it is tedious and difficult to compare video clips to look for differences manually. Our change detection tool allows the user to load two video clips taken from two passes at different times and flags any changes between them. 3D models are useful for situational awareness, as it is easier to understand the scene by visualizing it in 3D. Our 3D reconstruction tool creates calibrated photo-realistic 3D models from video clips taken from different viewpoints, using both semi-automated and automated approaches. The resulting 3D models also allow distance measurements and line-of-sight analysis.
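
    The following sketch illustrates the kind of tie-point matching that such a calibration or mosaicking step relies on; it is an assumption-laden stand-in (ORB features, brute-force matching, RANSAC homography via OpenCV), not MDA's tool chain, and a real pipeline would refine all frames jointly with bundle adjustment.

```python
# Illustrative sketch: match tie-points between two consecutive aerial frames and
# estimate the inter-frame homography that stabilisation or mosaicking could use.
import cv2
import numpy as np

def frame_homography(frame_a, frame_b, max_matches=200):
    """Estimate the homography mapping points in frame_a onto frame_b from tie-points."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:max_matches]
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects outlier tie-points before the homography is estimated.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, int(inlier_mask.sum())
```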

  18. L'uso del doppiaggio e del sottotitolaggio nell'insegnamento della L2: Il caso della piattaforma ClipFlair

    Directory of Open Access Journals (Sweden)

    Lupe Romero

    2016-01-01

    Full Text Available Abstract – The purpose of this paper is to present the ClipFlair project, a web platform for foreign language learning (FLL) through revoicing and captioning of clips. Using audiovisual material in the language classroom is a common resource for teachers since it introduces variety, provides exposure to nonverbal cultural elements and, most importantly, presents linguistic and cultural aspects of communication in their context. However, teachers using this resource face the difficulty of finding active tasks that will engage learners and discourage passive viewing. ClipFlair proposes working with AV material productively while also motivating learners by getting them to revoice or caption a clip. Revoicing is a term used to refer to (re)recording voice onto a clip, as in dubbing, free commentary, audio description and karaoke singing. The term captioning refers to adding written text to a clip, such as standard subtitles, annotations and intertitles. Clips can be short video or audio files, including documentaries, film scenes, news pieces, animations and songs. ClipFlair develops materials that enable foreign language learners to practice all four standard CEFR skills: writing, speaking, listening and reading. Within the project’s scope, more than 350 ready-made activities, which involve captioning and/or revoicing of clips, have been created. These activities have been created for more than 16 languages including English, Spanish and Italian, but focus is placed on less widely taught languages, namely Estonian, Greek, Romanian and Polish, as well as minority languages, i.e. Basque, Catalan and Irish. Non-European languages, namely Arabic, Chinese, Japanese, Russian and Ukrainian, are also included. The platform has three different areas: the Gallery offers the materials and the activities; the Studio area offers captioning and revoicing tools to create activities or to practice and learn languages by using them; the Social Network area

  19. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    Science.gov (United States)

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach for foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can be easily trapped in local optima. In addition, they are usually time-consuming when analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and appearance model parameters simultaneously in one graph cut. The extensive experimental evaluations validate the superiority of the proposed approach over the state-of-the-art methods, in both efficiency and effectiveness.
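
    For readers unfamiliar with the underlying machinery, here is a toy graph-cut segmentation of a single frame given precomputed per-pixel foreground/background costs; it shows only the generic MRF formulation solved by one max-flow call, not the paper's joint appearance-and-segmentation model, and it assumes the PyMaxflow package.

```python
# Toy sketch of binary MRF segmentation via one graph cut (PyMaxflow assumed).
import numpy as np
import maxflow

def graph_cut_segment(cost_fg, cost_bg, smoothness=1.0):
    """cost_fg / cost_bg: H x W arrays with the cost of labelling each pixel
    foreground / background (e.g. negative log-likelihoods under colour models)."""
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(cost_fg.shape)
    g.add_grid_edges(nodes, smoothness)            # pairwise smoothness on the 4-grid
    # The source-side terminal edge carries the background cost and the sink-side
    # edge the foreground cost, so pixels left on the source side of the cut are
    # the ones labelled foreground.
    g.add_grid_tedges(nodes, cost_bg, cost_fg)
    g.maxflow()
    return ~g.get_grid_segments(nodes)             # True = foreground pixels
```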

  20. Video gallery of educational lectures integrated in faculty's portal

    Directory of Open Access Journals (Sweden)

    Jaroslav Majerník

    2013-05-01

    Full Text Available This paper presents a web-based exhibition of educational video clips created to share archived lectures with medical students, health care professionals, as well as the general public. The presentation of closely related topics was developed as a video gallery and is based solely on free or open-source tools so that it remains available for wide academic and/or non-commercial use. Although the educational video records can be embedded in any website, we preferred to use our faculty’s portal, which should be a central point offering various multimedia educational materials. The system was integrated and tested to offer open access to infectology lectures that were captured and archived from live-streamed sessions and videoconferences.

  1. Mengolah Data Video Analog menjadi Video Digital Sederhana

    Directory of Open Access Journals (Sweden)

    Nick Soedarso

    2010-10-01

    Full Text Available Nowadays, editing technology has entered the digital age. Processing analog data into digital form has become simpler as editing technology has been integrated into all aspects of society. Understanding the technique of converting analog data to digital data is important in producing a video. To make use of this technology, an introduction to the equipment is fundamental for understanding its features. The next phase is the capturing process, which supports the preparation of the scene-to-scene editing process, so that the result becomes a watchable video.

  2. Using Short Movie and Television Clips in the Economics Principles Class

    Science.gov (United States)

    Sexton, Robert L.

    2006-01-01

    The author describes a teaching method that uses powerful contemporary media, movie and television clips, to demonstrate the enormous breadth and depth of economic concepts. Many different movie and television clips can be used to show the power of economic analysis. The author describes the scenes and the economic concepts within those scenes for…

  3. Real-world experience of MitraClip for treatment of severe mitral regurgitation

    DEFF Research Database (Denmark)

    Chan, Pak Hei; She, Hoi Lam; Alegria-Barrero, Eduardo

    2012-01-01

     Percutaneous edge-to-edge mitral valve repair with the MitraClip(®) was shown to be a safe and feasible alternative compared to conventional surgical mitral valve repair. Herein is reported our experience on MitraClip(®) for high-risk surgical candidates with severe mitral regurgitation (MR)....

  4. Using Film Clips to Teach Teen Pregnancy Prevention: "The Gloucester 18" at a Teen Summit

    Science.gov (United States)

    Herrman, Judith W.; Moore, Christopher C.; Anthony, Becky

    2012-01-01

    Teaching pregnancy prevention to large groups offers many challenges. This article describes the use of film clips, with guided discussion, to teach pregnancy prevention. In order to analyze the costs associated with teen pregnancy, a film clip discussion session based with the film "The Gloucester 18" was the keynote of a youth summit. The lesson…

  5. [Clip Sheets from BOCES. Opportunities. Health. Careers. = Oportunidades. Salud. Una Camera En...

    Science.gov (United States)

    State Univ. of New York, Geneseo. Coll. at Geneseo. Migrant Center.

    This collection of 83 clip sheets, or classroom handouts, was created to help U.S. migrants learn more about health, careers, and general "opportunities" including education programs. They are written in both English and Spanish and are presented in an easily understandable format. Health clip-sheet topics include the following: Abuse; AIDS;…

  6. Endoluminal compression clip : full-thickness resection of the mesenteric bowel wall in a porcine model

    NARCIS (Netherlands)

    Kopelman, Yael; Siersema, Peter D.; Nir, Yael; Szold, Amir; Bapaye, Amol; Segol, Ori; Willenz, Ehud P.; Lelcuk, Shlomo; Geller, Alexander; Kopelman, Doron

    2009-01-01

    Background: Performing a full-thickness intestinal wall resection of a sessile polyp located on the mesenteric side with a compression clip may lead to compression of mesenteric vessels. The application of such a clip may therefore cause a compromised blood supply in the particular bowel segment,

  7. Gender Stereotyped Computer Clip-Art Images as an Implicit Influence in Instructional Message Design.

    Science.gov (United States)

    Binns, Jane C.; Branch, Robert C.

    The purpose of this paper is to sensitize instructional message designers to the stereotypical signals which may be inherent in computer clip-art selections. A rationale is presented for the power of message design, and the differences between gender equality and gender equity are discussed. Several computer clip-art libraries were analyzed, and…

  8. Antihypertensive therapy upregulates renin and (pro) renin receptor in the clipped kidney of Goldblatt hypertensive rats

    NARCIS (Netherlands)

    Krebs, C.; Hamming, I.; Sadaghiani, S.; Steinmetz, O. M.; Meyer-Schwesinger, C.; Fehr, S.; Stahl, R. A. K.; Garrelds, I. M.; Danser, A. H. J.; van Goor, H.; Contrepas, A.; Nguyen, G.; Wenzel, U.

    Recently, a (pro) renin receptor has been identified which mediates profibrotic effects independent of angiotensin II. Because antihypertensive therapy induces renal injury in the clipped kidney of two-kidney, one-clip hypertensive rats, we examined the regulation of renin and the (pro) renin receptor

  9. Capturing Students' Attention: Movie Clips Set the Stage for Learning in Abnormal Psychology.

    Science.gov (United States)

    Badura, Amy S.

    2002-01-01

    Presents results of a study that evaluated using popular movie clips, shown in the first class meeting of an abnormal psychology course, in relation to student enthusiasm. Compares two classes of female juniors, one using clips and one class not using them. States that the films portrayed psychological disorders. (CMK)

  10. No-Reference Video Quality Assessment by HEVC Codec Analysis

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2015-01-01

    This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion given by High Efficiency Video Coding (HEVC). The proposed assessment can be performed either as a Bitstream-Based (BB) method or as a Pixel-Based (PB) one. It extracts or estimates the transform coefficients, estimates the distortion, and assesses the video quality. The proposed scheme generates VQA features based on Intra coded frames, and then maps features using an Elastic Net to predict subjective video quality. A set of HEVC coded 4K UHD sequences are tested. Results show......
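
    The feature-to-score mapping stage can be sketched with an off-the-shelf Elastic Net as below; the feature matrix and opinion scores are random placeholders (the actual codec-analysis features from the bitstream or pixels are not reproduced here), and scikit-learn is assumed.

```python
# Small sketch of mapping per-clip VQA features to subjective scores with an Elastic Net.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12))            # 40 clips x 12 codec-analysis features (placeholder)
mos = rng.uniform(1, 5, size=40)         # subjective Mean Opinion Scores (placeholder)

model = make_pipeline(StandardScaler(), ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5))
model.fit(X, mos)
print(model.predict(X[:3]))              # predicted MOS for the first three clips
```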

  11. Validation of a pediatric vocal fold nodule rating scale based on digital video images.

    Science.gov (United States)

    Nuss, Roger C; Ward, Jessica; Recko, Thomas; Huang, Lin; Woodnorth, Geralyn Harvey

    2012-01-01

    We sought to create a validated scale of vocal fold nodules in children, based on digital video clips obtained during diagnostic fiberoptic laryngoscopy. We developed a 4-point grading scale of vocal fold nodules in children, based upon short digital video clips. A tutorial for use of the scale, including schematic drawings of nodules, static images, and 10-second video clips, was presented to 36 clinicians with various levels of experience. The clinicians then reviewed 40 short digital video samples from pediatric patients evaluated in a voice clinic and rated the nodule size. Statistical analysis of the ratings provided inter-rater reliability scores. Thirty-six clinicians with various levels of experience rated a total of 40 short video clips. The ratings of experienced raters (14 pediatric otolaryngology attending physicians and pediatric otolaryngology fellows) were compared with those of inexperienced raters (22 nurses, medical students, otolaryngology residents, physician assistants, and pediatric speech-language pathologists). The overall intraclass correlation coefficient for the ratings of nodule size was quite good (0.62; 95% confidence interval, 0.52 to 0.74). The p value for experienced raters versus inexperienced raters was 0.1345, indicating no statistically significant difference in the ratings by these two groups. The intraclass correlation coefficient for intra-rater reliability was very high (0.89). The use of a dynamic scale of pediatric vocal fold nodule size most realistically represents the clinical assessment of nodules during an office visit. The results of this study show a high level of agreement between experienced and inexperienced raters. This scale can be used with a high level of reliability by clinicians with various levels of experience. A validated grading scale will help to assess long-term outcomes of pediatric patients with vocal fold nodules.

  12. Classifying smoke in laparoscopic videos using SVM

    Directory of Open Access Journals (Sweden)

    Alshirbaji Tamer Abdulbaki

    2017-09-01

    Full Text Available Smoke in laparoscopic videos usually appears due to the use of electrocautery when cutting or coagulating tissues. Therefore, detecting smoke can be used for event-based annotation in laparoscopic surgeries by retrieving the events associated with the electrocauterization. Furthermore, smoke detection can also be used for automatic smoke removal. However, detecting smoke in laparoscopic video is a challenge because of the changeability of smoke patterns, the moving camera and the different lighting conditions. In this paper, we present a video-based smoke detection algorithm to detect smoke of different densities, such as fog, low and high density, in laparoscopic videos. The proposed method depends on extracting various visual features from the laparoscopic images and providing them to a support vector machine (SVM) classifier. Features are based on motion, colour and texture patterns of the smoke. We validated our algorithm using experimental evaluation on four laparoscopic cholecystectomy videos. These four videos were manually annotated by defining every frame as a smoke or non-smoke frame. The algorithm was applied to the videos by using different feature combinations for classification. Experimental results show that the combination of all proposed features gives the best classification performance. The overall accuracy (i.e. correctly classified frames) is around 84%, while the sensitivity (i.e. correctly detected smoke frames) and the specificity (i.e. correctly detected non-smoke frames) are 89% and 80%, respectively.
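
    A minimal sketch of the classification stage only (feature extraction is not shown) might look as follows; the descriptors and labels are random placeholders and scikit-learn is assumed, so this illustrates the SVM step rather than the authors' pipeline.

```python
# Per-frame motion/colour/texture descriptors go into an SVM that labels frames
# as smoke or non-smoke; accuracy is estimated with cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
features = rng.normal(size=(500, 20))       # placeholder descriptors, one row per frame
labels = rng.integers(0, 2, size=500)       # 1 = smoke frame, 0 = non-smoke frame

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, features, labels, cv=5).mean())   # frame-level accuracy
```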

  13. The role of structural characteristics in problem video game playing: a review

    OpenAIRE

    King, DL; Delfabbro, PH; Griffiths, M.

    2010-01-01

    The structural characteristics of video games may play an important role in explaining why some people play video games to excess. This paper provides a review of the literature on structural features of video games and the psychological experience of playing video games. The dominant view of the appeal of video games is based on operant conditioning theory and the notion that video games satisfy various needs for social interaction and belonging. However, there is a lack of experimental and ...

  14. Artificial Intelligence in Video Games: Towards a Unified Framework

    OpenAIRE

    Safadi, Firas; Fonteneau, Raphael; Ernst, Damien

    2015-01-01

    With modern video games frequently featuring sophisticated and realistic environments, the need for smart and comprehensive agents that understand the various aspects of complex environments is pressing. Since video game AI is often specifically designed for each game, video game AI tools currently focus on allowing video game developers to quickly and efficiently create specific AI. One issue with this approach is that it does not efficiently exploit the numerous similarities that exist betw...

  15. Multivariate Cryptography Based on Clipped Hopfield Neural Network.

    Science.gov (United States)

    Wang, Jia; Cheng, Lee-Ming; Su, Tong

    2018-02-01

    Designing secure and efficient multivariate public key cryptosystems [multivariate cryptography (MVC)] to strengthen the security of RSA and ECC in conventional and quantum computational environments has been a challenging research problem in recent years. In this paper, we describe multivariate public key cryptosystems based on an extended Clipped Hopfield Neural Network (CHNN) and implement them using the MVC (CHNN-MVC) framework operated in space. The Diffie-Hellman key exchange algorithm is extended into the matrix field, which illustrates the feasibility of its new applications in both classic and postquantum cryptography. The efficiency and security of our proposed CHNN-MVC public key cryptosystem are evaluated by simulation, and the underlying problem is shown to be NP-hard. The proposed algorithm will strengthen multivariate public key cryptosystems and allows practical hardware realization.
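
    As a toy illustration of the idea of moving Diffie-Hellman into a matrix setting, the snippet below has both parties exponentiate a shared public matrix modulo a small prime; it is not the CHNN-MVC construction and is not secure, and all numbers are arbitrary placeholders.

```python
# Toy matrix Diffie-Hellman: (G^a)^b = (G^b)^a gives the common secret.
import numpy as np

P = 1_000_003              # small public prime modulus (illustrative only)

def mat_pow_mod(G, e, p):
    """Square-and-multiply matrix exponentiation modulo p."""
    result = np.eye(G.shape[0], dtype=np.int64)
    base = G % p
    while e:
        if e & 1:
            result = (result @ base) % p
        base = (base @ base) % p
        e >>= 1
    return result

G = np.array([[2, 3], [1, 4]], dtype=np.int64)      # public base matrix
a, b = 928371, 517263                               # private exponents of Alice and Bob
A = mat_pow_mod(G, a, P)                            # Alice publishes G^a mod P
B = mat_pow_mod(G, b, P)                            # Bob publishes G^b mod P
assert (mat_pow_mod(B, a, P) == mat_pow_mod(A, b, P)).all()   # shared secret agrees
```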

  16. Web-Mediated Augmentation and Interactivity Enhancement of Omni-Directional Video in Both 2D and 3D

    OpenAIRE

    Wijnants, Maarten; Van Erum, Kris; QUAX, Peter; Lamotte, Wim

    2015-01-01

    Video consumption has since the emergence of the medium largely been a passive affair. This paper proposes augmented Omni-Directional Video (ODV) as a novel format to engage viewers and to open up new ways of interacting with video content. Augmented ODV blends two important contemporary technologies: Augmented Video Viewing and 360 degree video. The former allows for the addition of interactive features to Web-based video playback, while the latter unlocks spatial video navigation opportunit...

  17. Pull-off characteristics of double-shanked compared to single-shanked ligation clips: an animal study

    Directory of Open Access Journals (Sweden)

    Schenk Martin

    2016-09-01

    Full Text Available The use of surgical ligation clips is considered as the gold standard for the closure of vessels, particularly in laparoscopic surgery. The safety of clips is mainly achieved by the deep indentation of the metal bar with a high retention force. A novel double-shanked (DS titanium clip was compared to two single-shanked clips with respect to axial and radial pull-off forces.

  18. Multi-Modal Surrogates for Retrieving and Making Sense of Videos: Is Synchronization between the Multiple Modalities Optimal?

    Science.gov (United States)

    Song, Yaxiao

    2010-01-01

    Video surrogates can help people quickly make sense of the content of a video before downloading or seeking more detailed information. Visual and audio features of a video are primary information carriers and might become important components of video retrieval and video sense-making. In the past decades, most research and development efforts on…

  19. NEI You Tube Videos: Amblyopia

    Medline Plus

    Embedded video from the NEI YouTube channel: Amblyopia.

  20. NEI You Tube Videos: Amblyopia

    Medline Plus

    Embedded video from the NEI YouTube channel: Amblyopia.

  1. NEI You Tube Videos: Amblyopia

    Science.gov (United States)

    Embedded video from the NEI YouTube channel: Amblyopia.

  2. Saying What You're Looking For: Linguistics Meets Video Search.

    Science.gov (United States)

    Barrett, Daniel Paul; Barbu, Andrei; Siddharth, N; Siskind, Jeffrey Mark

    2016-10-01

    We present an approach to searching large video corpora for clips which depict a natural-language query in the form of a sentence. Compositional semantics is used to encode subtle meaning differences lost in other approaches, such as the difference between two sentences which have identical words but entirely different meaning: The person rode the horse versus The horse rode the person. Given a sentential query and a natural-language parser, we produce a score indicating how well a video clip depicts that sentence for each clip in a corpus and return a ranked list of clips. Two fundamental problems are addressed simultaneously: detecting and tracking objects, and recognizing whether those tracks depict the query. Because both tracking and object detection are unreliable, our approach uses the sentential query to focus the tracker on the relevant participants and ensures that the resulting tracks are described by the sentential query. While most earlier work was limited to single-word queries which correspond to either verbs or nouns, we search for complex queries which contain multiple phrases, such as prepositional phrases, and modifiers, such as adverbs. We demonstrate this approach by searching for 2,627 naturally elicited sentential queries in 10 Hollywood movies.

  3. Obesity in the new media: a content analysis of obesity videos on YouTube.

    Science.gov (United States)

    Yoo, Jina H; Kim, Junghyun

    2012-01-01

    This study examines (1) how the topics of obesity are framed and (2) how obese persons are portrayed on YouTube video clips. The analysis of 417 obesity videos revealed that a newer medium like YouTube, similar to traditional media, appeared to assign responsibility and solutions for obesity mainly to individuals and their behaviors, although there was a tendency that some video categories have started to show other causal claims or solutions. However, due to the prevailing emphasis on personal causes and solutions, numerous YouTube videos had a theme of weight-based teasing, or showed obese persons engaging in stereotypical eating behaviors. We discuss a potential impact of YouTube videos on shaping viewers' perceptions about obesity and further reinforcing stigmatization of obese persons.

  4. 76 FR 31360 - Paper Clips From China; Scheduling of an Expedited Five-Year Review Concerning the Antidumping...

    Science.gov (United States)

    2011-05-31

    ... COMMISSION Paper Clips From China; Scheduling of an Expedited Five-Year Review Concerning the Antidumping Duty Order on Paper Clips From China AGENCY: United States International Trade Commission. ACTION... revocation of the antidumping duty order on paper clips from China would be likely to lead to continuation or...

  5. Rheumatoid Arthritis Educational Video Series

    Medline Plus

    Full Text Available This series of five videos was designed ...

  6. Jailed - Video (https://jual.nipissingu.ca/wp-content/uploads/sites/25/2014/06/v61214.m4v)

    Directory of Open Access Journals (Sweden)

    Cameron CULBERT

    2012-07-01

    Full Text Available As the public education system in Northern Ontario continues to take a downward spiral, a plethora of secondary school students are being placed in an alternative educational environment. Juxtaposing the two educational settings reveals very similar methods and characteristics of educating our youth as opposed to using a truly alternative approach to education. This video reviews the relationship between public education and alternative education in a remote Northern Ontario setting. It is my belief that the traditional methods of teaching are not appropriate in educating at-risk students in alternative schools. Paper and pencil worksheets do not motivate these students to learn and succeed. Alternative education should emphasize experiential learning, a just-in-time curriculum based on every unique individual and the student's true passion for everyday life. Cameron Culbert was born on February 3rd, 1977 in North Bay, Ontario. His teenage years were split between attending public school and his willed curriculum on the ski hill. Culbert spent 10 years (1996-2002 & 2006-2010) competing for Canada as an alpine ski racer. His passion for teaching and coaching began as an athlete and has now transferred into the classroom and the community. As a graduate of Nipissing University (BA, BEd, MEd), Cameron's research interests are alternative education, physical education and technology in the classroom. Currently Cameron is an active educator and coach in Northern Ontario.

  7. Video Design Games

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte; Christensen, Kasper Skov; Iversen, Ole Sejer

    We introduce Video Design Games to train educators in teaching design. The Video Design Game is a workshop format consisting of three rounds in which participants observe, reflect and generalize based on video snippets from their own practice. The paper reports on a Video Design Game workshop...

  8. Simple device to determine the pressure applied by pressure clips for the treatment of earlobe keloids

    Directory of Open Access Journals (Sweden)

    Aashish Sasidharan

    2015-01-01

    Full Text Available Background: Keloids of the ear are common problems. Various treatment modalities are available for the treatment of ear keloids. Surgical excision with intralesional steroid injection along with compression therapy has the least recurrence rate. Various types of devices are available for pressure therapy. Pressure applied by these devices is uncontrolled and is associated with the risk of pressure necrosis. We describe here a simple and easy-to-use device to measure the pressure applied by these clips for a better outcome. Objectives: To devise a simple method to measure the pressure applied by various pressure clips used in ear keloid pressure therapy. Materials and Methods: By using a force-sensitive resistor (FSR), the applied pressure is converted into a voltage using electrical wires, resistors, capacitors, a converter, an amplifier, a diode and a nine-volt (9V) cadmium battery, and the voltage is measured using a multimeter. The measured voltage is then converted into pressure using a pressure-voltage graph that depicts the actual pressure applied by the pressure clip. Results: The pressure applied by different clips was variable. The spring clips were adjustable by slight variation in the design, whereas the pressure applied by binder clips and magnet discs was not adjustable. Conclusion: The uncontrolled/suboptimal pressure applied by certain pressure clips can be monitored to provide optimal pressure therapy in ear keloids for a better outcome.
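
    The read-out step can be sketched as a simple table lookup: a calibration curve of voltage against known pressures is recorded once, and later voltage readings are interpolated on it. The calibration points below are made-up placeholders, not the authors' data.

```python
# Convert an FSR voltage reading into an applied-pressure estimate by
# interpolating on a previously recorded voltage-pressure calibration curve.
import numpy as np

calib_v = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])      # measured voltage (V)
calib_p = np.array([5.0, 12.0, 21.0, 33.0, 48.0, 65.0])  # known pressure (mmHg)

def voltage_to_pressure(measured_v):
    """Linear interpolation on the calibration curve (clamped at its ends)."""
    return float(np.interp(measured_v, calib_v, calib_p))

print(voltage_to_pressure(1.8))   # pressure corresponding to a 1.8 V reading
```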

  9. Predicting personal preferences in subjective video quality assessment

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2017-01-01

    In this paper, we study the problem of predicting the visual quality of a specific test sample (e.g. a video clip) experienced by a specific user, based on the ratings by other users for the same sample and the same user for other samples. A simple linear model and algorithm is presented, where the characteristics of each test sample are represented by a set of parameters, and the individual preferences are represented by weights for the parameters. According to the validation experiment performed on public visual quality databases annotated with raw individual scores, the proposed model can predict......
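
    In the spirit of that linear model, the sketch below fits per-parameter weights for one user by least squares and uses them to predict the user's score for a new clip; the sample parameters and ratings are placeholders, and details of how the parameters are derived from other users' ratings are omitted.

```python
# Fit one user's preference weights over per-clip parameters, then predict a new score.
import numpy as np

# Each row describes one video clip by a few quality-related parameters.
sample_params = np.array([
    [0.9, 0.2, 0.1],
    [0.4, 0.7, 0.3],
    [0.2, 0.1, 0.8],
    [0.6, 0.5, 0.5],
])
user_ratings = np.array([4.5, 3.0, 2.0, 3.5])     # this user's scores for those clips

# Personal preference = per-parameter weights fitted by least squares.
weights, *_ = np.linalg.lstsq(sample_params, user_ratings, rcond=None)

new_clip = np.array([0.7, 0.3, 0.2])
print(new_clip @ weights)                         # predicted personal rating
```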

  10. Characterization of social video

    Science.gov (United States)

    Ostrowski, Jeffrey R.; Sarhan, Nabil J.

    2009-01-01

    The popularity of social media has grown dramatically over the World Wide Web. In this paper, we analyze the video popularity distribution of well-known social video websites (YouTube, Google Video, and the AOL Truveo Video Search engine) and characterize their workload. We identify trends in the categories, lengths, and formats of those videos, as well as characterize the evolution of those videos over time. We further provide an extensive analysis and comparison of video content amongst the main regions of the world.

  11. Online scene change detection of multicast (MBone) video

    Science.gov (United States)

    Zhou, Wensheng; Shen, Ye; Vellaikal, Asha; Kuo, C.-C. Jay

    1998-10-01

    Many multimedia applications, such as multimedia data management systems and communication systems, require efficient representation of multimedia content. Thus semantic interpretation of video content has been a popular research area. Currently, most content-based video representation involves the segmentation of video based on key frames, which are generated using scene change detection techniques as well as camera/object motion. Then, video features can be extracted from key frames. However, most such research performs off-line video processing, in which the whole video scope is known a priori, which allows multiple scans of the stored video files during video processing. In comparison, relatively little research has been done in the area of on-line video processing, which is crucial in video communication applications such as on-line collaboration, news broadcasts and so on. Our research investigates on-line real-time scene change detection of multicast video over the Internet. Our on-line processing system is designed to meet the requirements of real-time video multicasting over the Internet and to utilize the successful video parsing techniques available today. The proposed algorithms extract key frames from video bitstreams sent through the MBone network, and the extracted key frames are multicasted as annotations or metadata over a separate channel to assist in content filtering such as that anticipated to be in use by on-line filtering proxies on the Internet. The performance of the proposed algorithms is demonstrated and discussed in this paper.
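
    A minimal stand-in for the scene change detection step is sketched below using colour-histogram differencing between consecutive decoded frames; it ignores the MBone- and bitstream-specific aspects of the paper, assumes OpenCV, and uses an arbitrary Bhattacharyya-distance threshold.

```python
# Flag a frame as a new key frame whenever its colour histogram differs strongly
# from the previous frame's histogram.
import cv2

def key_frame_times(video_path, threshold=0.4):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    prev_hist, frame_no, cuts = None, 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Bhattacharyya distance close to 1 means the frames differ strongly.
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:
                cuts.append(frame_no / fps)       # timestamp of the detected cut
        prev_hist = hist
        frame_no += 1
    cap.release()
    return cuts
```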

  12. Virtual Cerebral Aneurysm Clipping with Real-Time Haptic Force Feedback in Neurosurgical Education.

    Science.gov (United States)

    Gmeiner, Matthias; Dirnberger, Johannes; Fenz, Wolfgang; Gollwitzer, Maria; Wurm, Gabriele; Trenkler, Johannes; Gruber, Andreas

    2018-01-11

    Realistic, safe, and efficient modalities for simulation-based training are highly warranted to enhance the quality of surgical education, and they should be incorporated in resident training. The aim of this study was to develop a patient-specific virtual cerebral aneurysm-clipping simulator with haptic force feedback and real-time deformation of the aneurysm and vessels. A prototype simulator was developed from 2012 to 2016. Evaluation of virtual clipping by blood flow simulation was integrated in this software, and the prototype was evaluated by 18 neurosurgeons. In 4 patients with different medial cerebral artery aneurysms, virtual clipping was performed after real-life surgery, and surgical results were compared regarding clip application, surgical trajectory, and blood flow. After head positioning and craniotomy, bimanual virtual aneurysm clipping with an original forceps was performed. Blood flow simulation demonstrated residual aneurysm filling or branch stenosis. The simulator improved anatomic understanding for 89% of neurosurgeons. Simulation of head positioning and craniotomy was considered realistic by 89% and 94% of users, respectively. Most participants agreed that this simulator should be integrated into neurosurgical education (94%). Our illustrative cases demonstrated that virtual aneurysm surgery was possible using the same trajectory as in real-life cases. Both virtual clipping and blood flow simulation were realistic in broad-based but not calcified aneurysms. Virtual clipping of a calcified aneurysm could be performed using the same surgical trajectory, but not the same clip type. We have successfully developed a virtual aneurysm-clipping simulator. Next, we will prospectively evaluate this device for surgical procedure planning and education. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Insights into the design and interpretation of iCLIP experiments.

    Science.gov (United States)

    Haberman, Nejc; Huppertz, Ina; Attig, Jan; König, Julian; Wang, Zhen; Hauer, Christian; Hentze, Matthias W; Kulozik, Andreas E; Le Hir, Hervé; Curk, Tomaž; Sibley, Christopher R; Zarnack, Kathi; Ule, Jernej

    2017-01-16

    Ultraviolet (UV) crosslinking and immunoprecipitation (CLIP) identifies the sites on RNAs that are in direct contact with RNA-binding proteins (RBPs). Several variants of CLIP exist, which require different computational approaches for analysis. This variety of approaches can create challenges for a novice user and can hamper insights from multi-study comparisons. Here, we produce data with multiple variants of CLIP and evaluate the data with various computational methods to better understand their suitability. We perform experiments for PTBP1 and eIF4A3 using individual-nucleotide resolution CLIP (iCLIP), employing either UV-C or photoactivatable 4-thiouridine (4SU) combined with UV-A crosslinking and compare the results with published data. As previously noted, the positions of complementary DNA (cDNA)-starts depend on cDNA length in several iCLIP experiments and we now find that this is caused by constrained cDNA-ends, which can result from the sequence and structure constraints of RNA fragmentation. These constraints are overcome when fragmentation by RNase I is efficient and when a broad cDNA size range is obtained. Our study also shows that if RNase does not efficiently cut within the binding sites, the original CLIP method is less capable of identifying the longer binding sites of RBPs. In contrast, we show that a broad size range of cDNAs in iCLIP allows the cDNA-starts to efficiently delineate the complete RNA-binding sites. We demonstrate the advantage of iCLIP and related methods that can amplify cDNAs that truncate at crosslink sites and we show that computational analyses based on cDNAs-starts are appropriate for such methods.

  14. What kind of erotic film clips should we use in female sex research? An exploratory study.

    Science.gov (United States)

    Woodard, Terri L; Collins, Karen; Perez, Mindy; Balon, Richard; Tancer, Manuel E; Kruger, Michael; Moffat, Scott; Diamond, Michael P

    2008-01-01

    Erotic film clips are used in sex research, including studies of female sexual dysfunction and arousal. However, little is known about which clips optimize female sexual response. Furthermore, their use is not well standardized. To identify the types of film clips that are most mentally appealing and physically arousing to women for use in future sexual function and dysfunction studies; to explore the relationship between mental appeal and reported physical arousal; to characterize the content of the films that were found to be the most and least appealing and arousing. Twenty-one women viewed 90 segments of erotic film clips. They rated how (i) mentally appealing and (ii) how physically aroused they were by each clip. The data were analyzed by descriptive statistics. The means of the mental and self-reported physical responses were calculated to determine the most and least appealing/arousing film clips. Pearson correlations were calculated to assess the relationship between mental appeal and reported physical arousal. Self-reported mental and physical arousal. Of 90 film clips, 18 were identified as the most mentally appealing and physically arousing while nine were identified as the least mentally appealing and physically arousing. The level of mental appeal positively correlated with the level of perceived physical arousal in both categories (r = 0.61, P fellatio, and anal intercourse. Erotic film clips reliably produced a state of self-reported arousal in women. The most appealing and arousing films tended to depict heterosexual vaginal intercourse. Film clips with these attributes should be used in future research of sexual function and response of women.

  15. Intramolecular circularization increases efficiency of RNA sequencing and enables CLIP-Seq of nuclear RNA from human cells

    Science.gov (United States)

    Chu, Yongjun; Wang, Tao; Dodd, David; Xie, Yang; Janowski, Bethany A.; Corey, David R.

    2015-01-01

    RNA sequencing (RNA-Seq) is a powerful tool for analyzing the identity of cellular RNAs but is often limited by the amount of material available for analysis. In spite of extensive efforts employing existing protocols, we observed that it was not possible to obtain useful sequencing libraries from nuclear RNA derived from cultured human cells after crosslinking and immunoprecipitation (CLIP). Here, we report a method for obtaining strand-specific small RNA libraries for RNA sequencing that requires picograms of RNA. We employ an intramolecular circularization step that increases the efficiency of library preparation and avoids the need for intermolecular ligations of adaptor sequences. Other key features include random priming for full-length cDNA synthesis and gel-free library purification. Using our method, we generated CLIP-Seq libraries from nuclear RNA that had been UV-crosslinked and immunoprecipitated with anti-Argonaute 2 (Ago2) antibody. Computational protocols were developed to enable analysis of raw sequencing data and we observe substantial differences between recognition by Ago2 of RNA species in the nucleus relative to the cytoplasm. This RNA self-circularization approach to RNA sequencing (RC-Seq) allows data to be obtained using small amounts of input RNA that cannot be sequenced by standard methods. PMID:25813040

  16. Video visual analytics

    OpenAIRE

    Höferlin, Markus Johannes

    2013-01-01

    The amount of video data recorded world-wide is tremendously growing and has already reached hardly manageable dimensions. It originates from a wide range of application areas, such as surveillance, sports analysis, scientific video analysis, surgery documentation, and entertainment, and its analysis represents one of the challenges in computer science. The vast amount of video data renders manual analysis by watching the video data impractical. However, automatic evaluation of video material...

  17. Video Game Genre Affordances for Physics Education

    Science.gov (United States)

    Anagnostou, Kostas; Pappa, Anastasia

    2011-01-01

    In this work, the authors analyze the video game genres' features and investigate potential mappings to specific didactic approaches in the context of Physics education. To guide the analysis, the authors briefly review the main didactic approaches for Physics and identify qualities that can be projected into game features. Based on the…

  18. A Novel Quantum Video Steganography Protocol with Large Payload Based on MCQI Quantum Video

    Science.gov (United States)

    Qu, Zhiguo; Chen, Siyi; Ji, Sai

    2017-11-01

    As one of the important multimedia forms in quantum networks, quantum video is attracting more and more attention from experts and scholars around the world. A secure quantum video steganography protocol with large payload, based on the video strip encoding method called MCQI (Multi-Channel Quantum Images), is proposed in this paper. The new protocol randomly embeds the secret information, in the form of quantum video, into the quantum carrier video on the basis of unique features of the video frames. It thus embeds quantum video as secret information for covert communication. As a result, its capacity is greatly expanded compared with previous quantum steganography achievements. Meanwhile, the new protocol also achieves good security and imperceptibility by virtue of the randomization of embedding positions and efficient use of redundant frames. Furthermore, the receiver is able to extract the secret information from the stego video without retaining the original carrier video, and to restore the original quantum video afterwards. The simulation and experiment results prove that the algorithm not only has good imperceptibility and high security, but also a large payload.

  19. Mobile-Based Video Learning Outcomes in Clinical Nursing Skill Education: A Randomized Controlled Trial.

    Science.gov (United States)

    Lee, Nam-Ju; Chae, Sun-Mi; Kim, Haejin; Lee, Ji-Hye; Min, Hyojin Jennifer; Park, Da-Eun

    2016-01-01

    Mobile devices are a regular part of daily life among the younger generations. Thus, now is the time to apply mobile device use to nursing education. The purpose of this study was to identify the effects of a mobile-based video clip on learning motivation, competence, and class satisfaction in nursing students using a randomized controlled trial with a pretest and posttest design. A total of 71 nursing students participated in this study: 36 in the intervention group and 35 in the control group. A video clip of how to perform a urinary catheterization was developed, and the intervention group was able to download it to their own mobile devices for unlimited viewing throughout 1 week. All of the students participated in a practice laboratory to learn urinary catheterization and were blindly tested for their performance skills after participation in the laboratory. The intervention group showed significantly higher levels of learning motivation and class satisfaction than did the control. Of the fundamental nursing competencies, the intervention group was more confident in practicing catheterization than their counterparts. Our findings suggest that video clips using mobile devices are useful tools that educate student nurses on relevant clinical skills and improve learning outcomes.

  20. Temporal Segmentation of MPEG Video Streams

    Directory of Open Access Journals (Sweden)

    Janko Calic

    2002-06-01

    Full Text Available Many algorithms for temporal video partitioning rely on the analysis of uncompressed video features. Since the information relevant to the partitioning process can be extracted directly from the MPEG compressed stream, higher efficiency can be achieved utilizing information from the MPEG compressed domain. This paper introduces a real-time algorithm for scene change detection that analyses the statistics of the macroblock features extracted directly from the MPEG stream. A method for extraction of the continuous frame difference that transforms the 3D video stream into a 1D curve is presented. This transform is then further employed to extract temporal units within the analysed video sequence. Results of computer simulations are reported.
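
    Once the continuous frame difference has been reduced to a 1D curve, shot boundaries can be flagged with an adaptive threshold, as in the small sketch below; the signal here is a synthetic placeholder rather than real macroblock statistics, and the window size and threshold factor are arbitrary choices.

```python
# Detect temporal units by flagging frames whose difference value exceeds the
# local mean plus a multiple of the local standard deviation.
import numpy as np

def detect_cuts(frame_diff, window=25, k=4.0):
    """Return frame indices whose difference exceeds mean + k*std of a local window."""
    cuts = []
    for i in range(len(frame_diff)):
        lo = max(0, i - window)
        local = np.concatenate([frame_diff[lo:i], frame_diff[i + 1:i + 1 + window]])
        if local.size and frame_diff[i] > local.mean() + k * local.std():
            cuts.append(i)
    return cuts

# Example: a smooth sequence with two abrupt shot changes injected at frames 120 and 300.
rng = np.random.default_rng(2)
signal = rng.normal(1.0, 0.1, size=400)
signal[120], signal[300] = 5.0, 4.2
print(detect_cuts(signal))        # flags the two injected shot changes
```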

  1. The effect of ether anesthesia on fin-clipping rate

    Science.gov (United States)

    Eschmeyer, Paul H.

    1953-01-01

    As part of an experimental program to learn the effects of stocking lake trout (Salvelinus namaycush) in Lake Superior, 141,392 fingerlings were marked at the Charlevoix (Michigan) Station of the U.S. Fish and Wildlife Service in October 1952. The adipose fin was removed from all fish, the right pelvic from the remainder. A random sample of 2,417 of the fish showed an average total length of 4.0 inches (range, 2.7 to 5.4). The mean weight of all fish marked was slightly less than one-third ounce (49 fish per pound). The local women, none of whom had previous experience in the work, were employed to mark the fish. Bone-cutting forceps were used for excision of the fins, and each worker wore a bobbinet glove to facilitate handling of the fish. On alternate days the fish were anesthetized with ether before marking, to determine the effect of its use on the fin-clipping rate.

  2. ICADS: A cooperative decision making model with CLIPS experts

    Science.gov (United States)

    Pohl, Jens; Myers, Leonard

    1991-01-01

    A cooperative decision making model is described which is comprised of six concurrently executing domain experts coordinated by a blackboard control expert. The focus application field is architectural design, and the domain experts represent consultants in the area of daylighting, noise control, structural support, cost estimating, space planning, and climate responsiveness. Both the domain experts and the blackboard were implemented as production systems, using an enhanced version of the basic CLIPS package. Acting in unison as an Expert Design Advisor, the domain and control experts react to the evolving design solution progressively developed by the user in a 2-D CAD drawing environment. A Geometry Interpreter maps each drawing action taken by the user to real world objects, such as spaces, walls, windows, and doors. These objects, endowed with geometric and nongeometric attributes, are stored as frames in a semantic network. Object descriptions are derived partly from the geometry of the drawing environment and partly from knowledge bases containing prototypical, generalized information about the building type and site conditions under consideration.

  3. Clipping polygon faces through a polyhedron of vision

    Science.gov (United States)

    Rohner, Michel A. (Inventor); Florence, Judit K. (Inventor)

    1980-01-01

    A flight simulator combines flight data and polygon face terrain data to provide a CRT display at each window of the simulated aircraft. The data base specifies the relative position of each vertex of each polygon face therein. Only those terrain faces currently appearing within the pyramid of vision defined by the pilot's eye and the edges of the pilot's window need be displayed at any given time. As the orientation of the pyramid of vision changes in response to flight data, the displayed faces are correspondingly displaced, eventually moving out of the pyramid of vision. Faces which are currently not visible (outside the pyramid of vision) are clipped from the data flow. In addition, faces which are only partially outside of the pyramid of vision are reconstructed to eliminate the outside portion. Window coordinates are generated defining the distance between each vertex and each of the boundary planes forming the pyramid of vision. The sign bit of each window coordinate indicates whether the vertex is on the pyramid of vision side of the associated boundary plane (positive), or on the other side thereof (negative). The set of sign bits accompanying each vertex constitutes the outcode of that vertex. The outcodes (O.C.) are systematically processed and examined to determine which faces are completely inside the pyramid of vision (Case A--all signs positive), which faces are completely outside (Case C--all signs negative) and which faces must be reconstructed (Case B--both positive and negative signs).
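
    The outcode idea can be sketched in a few lines: compute one sign bit per boundary plane for each vertex, then combine the per-vertex codes to decide whether a face is trivially kept, trivially discarded, or needs reconstruction. The plane set and the accept/reject tests below are an illustrative reading of the scheme, not the patented implementation.

```python
import numpy as np

# Boundary planes of an illustrative 45-degree pyramid of vision, written as
# (unit normal, offset); normals point into the pyramid, so a non-negative
# signed distance means the vertex is on the visible side of that plane.
PLANES = [
    (np.array([ 0.7071, 0.0, 0.7071]), 0.0),   # left
    (np.array([-0.7071, 0.0, 0.7071]), 0.0),   # right
    (np.array([0.0,  0.7071, 0.7071]), 0.0),   # bottom
    (np.array([0.0, -0.7071, 0.7071]), 0.0),   # top
]

def outcode(vertex):
    """One bit per boundary plane: 1 if the vertex is on the pyramid-of-vision side."""
    code = 0
    for i, (normal, offset) in enumerate(PLANES):
        if float(np.dot(normal, vertex)) + offset >= 0.0:
            code |= 1 << i
    return code

def classify_face(vertices):
    codes = [outcode(v) for v in vertices]
    all_inside = (1 << len(PLANES)) - 1
    if all(c == all_inside for c in codes):
        return "Case A: fully visible, keep unchanged"
    if np.bitwise_or.reduce(codes) != all_inside:
        return "Case C: all vertices lie beyond one and the same boundary plane, discard"
    return "Case B: face straddles a boundary, reconstruct the inside portion"

triangle = [np.array([0.0, 0.0, 5.0]), np.array([1.0, 0.5, 5.0]), np.array([-4.0, 0.0, 1.0])]
print(classify_face(triangle))   # the third vertex is outside the left plane, so Case B
```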

  4. Supplemental knowledge acquisition through external product interface for CLIPS

    Science.gov (United States)

    Saito, Tim; Ebaud, Stephen; Loftin, Bowen R.

    1990-01-01

    Traditionally, the acquisition of knowledge for expert systems consisted of the interview process with the domain or subject matter expert (SME), observation of domain environment, and information gathering and research which constituted a direct form of knowledge acquisition (KA). The knowledge engineer would be responsible for accumulating pertinent information and/or knowledge from the SME(s) for input into the appropriate expert system development tool. The direct KA process may (or may not) have included forms of data or documentation to incorporate from the SME's surroundings. The differentiation between direct KA and supplemental KA (indirect) would be the difference in the use of data. In acquiring supplemental knowledge, the knowledge engineer would access other types of evidence (manuals, documents, data files, spreadsheets, etc.) that would support the reasoning or premises of the SME. When an expert makes a decision in a particular task, one tool that may have been used to justify a recommendation, would have been a spreadsheet total or column figure. Locating specific decision points from that data within the SME's framework would constitute supplemental KA. Data used for a specific purpose in one system or environment would be used as supplemental knowledge for another, specifically a CLIPS project.

  5. Longline Observer (HI & Am. Samoa) Opah Fin Clip Collection for Lampris spp. Distribution Study

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Data set containing information collected from the 1000+ fin clips collected by Hawaii & American Samoa Longline Observers that will be used to analyze the...

  6. Acute Cholangitis following Intraductal Migration of Surgical Clips 10 Years after Laparoscopic Cholecystectomy

    Directory of Open Access Journals (Sweden)

    Natalie E. Cookson

    2015-01-01

    Full Text Available Background. Laparoscopic cholecystectomy represents the gold standard approach for treatment of symptomatic gallstones. Surgery-associated complications include bleeding, bile duct injury, and retained stones. Migration of surgical clips after cholecystectomy is a rare complication and may result in gallstone formation “clip cholelithiasis”. Case Report. We report a case of a 55-year-old female patient who presented with right upper quadrant pain and severe sepsis having undergone an uncomplicated laparoscopic cholecystectomy 10 years earlier. Computed tomography (CT imaging revealed hyperdense material in the common bile duct (CBD compatible with retained calculus. Endoscopic retrograde cholangiopancreatography (ERCP revealed appearances in keeping with a migrated surgical clip within the CBD. Balloon trawl successfully extracted this, alleviating the patient’s jaundice and sepsis. Conclusion. Intraductal clip migration is a rarely encountered complication after laparoscopic cholecystectomy which may lead to choledocholithiasis. Appropriate management requires timely identification and ERCP.

  7. Clips migration to duodenum as a rare complication of laparoscopic cholecystectomy

    Directory of Open Access Journals (Sweden)

    Muammer Bilici

    2016-03-01

    Full Text Available Endoclip migration into the duodenum is an extremely rare complication of laparoscopic cholecystectomy. The patients usually present with a bleeding ulcer. Here we report a 65-year-old female patient with a complaint of abdominal pain and dyspepsia due to clip migration into the duodenum after laparoscopic cholecystectomy secondary to symptomatic cholelithiasis 15 months previously. Ultrasonography and liver function tests were normal. Endoscopy showed metal clips in the second part of the duodenum. The clips were removed endoscopically. No active bleeding was noted. In this case report, we present the diagnosis and management of clip migration into the wall of the duodenum as a complication of laparoscopic cholecystectomy. [Cukurova Med J 2016; 41: 71-74]

  8. Clipped speckle autocorrelation metric for spot size characterization of focused beam on a diffuse target

    National Research Council Canada - National Science Library

    Li, Yuanyang; Guo, Jin; Liu, Lisheng; Wang, Tingfeng; Tang, Wei; Jiang, Zhenhua

    2015-01-01

    The clipped speckle autocorrelation (CSA) metric is proposed for estimating the laser beam energy concentration on a remote diffuse target in a laser beam projection system with feedback information...

  9. Optimizing assessment of sexual arousal in postmenopausal women using erotic film clips.

    Science.gov (United States)

    Ramos Alarcon, Lauren G; Dai, Jing; Collins, Karen; Perez, Mindy; Woodard, Terri; Diamond, Michael P

    2017-10-01

    This study sought to assess sexual arousal in a subgroup of women by identifying erotic film clips that would be most mentally appealing and physically arousing to postmenopausal women. By measuring levels of mental appeal and self-reported physical arousal using a bidirectional scale, we aimed to elucidate the clips that would best be utilized for sexual health research in the postmenopausal or over 50-year-old subpopulation. Our results showed that postmenopausal women did not rate clips with older versus younger actors differently (p>0.05). The mean mental and mean physical scores were significantly correlated for premenopausal subject ratings (r=0.69). Postmenopausal women do not show a preference for the age of actors used in erotic film clips; this knowledge is relevant for the design of future sexual function research. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Attaching solar collectors to a structural framework utilizing a flexible clip

    Science.gov (United States)

    Kruse, John S

    2014-03-25

    Methods and apparatuses described herein provide for the attachment of solar collectors to a structural framework in a solar array assembly. A flexible clip is attached to either end of each solar collector and utilized to attach the solar collector to the structural framework. The solar collectors are positioned to allow a member of the framework to engage a pair of flexible clips attached to adjacent solar collectors during assembly of the solar array. Each flexible clip may have multiple frame-engaging portions, each with a flange on one end to cause the flexible clip to deflect inward when engaged by the framework member during assembly and to guide each of the frame-engaging portions into contact with a surface of the framework member for attachment.

  11. Delayed recovery of adipsic diabetes insipidus (ADI) caused by elective clipping of anterior communicating artery and left middle cerebral artery aneurysms.

    Science.gov (United States)

    Tan, Jeffrey; Ndoro, Samuel; Okafo, Uchenna; Garrahy, Aoife; Agha, Amar; Rawluk, Danny

    2016-12-16

    Adipsic diabetes insipidus (ADI) is an extremely rare complication following microsurgical clipping of anterior communicating artery aneurysm (ACoA) and left middle cerebral artery (MCA) aneurysm. It poses a significant challenge to manage due to an absent thirst response and the co-existence of cognitive impairment in our patient. Recovery from adipsic DI has hitherto been reported only once. A 52-year-old man with a history of clipping of a left posterior communicating artery aneurysm 20 years earlier underwent microsurgical clipping of ACoA and left MCA aneurysms without any intraoperative complications. Shortly after surgery, he developed clear features of ADI, with adipsia, severe hypernatraemia, and hypotonic polyuria, associated with cognitive impairment; these findings were confirmed with biochemical investigations and cognitive assessments. He was treated with DDAVP along with a strict intake of oral fluids at scheduled times to maintain eunatremia. Repeat assessment at six months showed recovery of thirst and a normal water deprivation test. Management of ADI with cognitive impairment is complex and requires a multidisciplinary approach. Recovery from ADI is very rare, and this is only the second report of recovery in this particular clinical setting.

  12. Perceptual compressive sensing scalability in mobile video

    Science.gov (United States)

    Bivolarski, Lazar

    2011-09-01

    Scalability features embedded within video sequences allow for streaming over heterogeneous networks to a variety of end devices. Compressive sensing techniques that lower the complexity and increase the robustness of video scalability are reviewed. Human visual system models are often used in establishing perceptual metrics that evaluate video quality. The combination of the perceptual and compressive sensing approaches is outlined from recent investigations. The performance and the complexity of different scalability techniques are evaluated. The application of perceptual models to evaluating the quality of compressive sensing scalability is considered in the near perceptually lossless case, and its application to the appropriate coding schemes is reviewed.

  13. What do home videos tell us about early motor and socio-communicative behaviours in children with autistic features during the second year of life--An exploratory study.

    Science.gov (United States)

    Zappella, Michele; Einspieler, Christa; Bartl-Pokorny, Katrin D; Krieber, Magdalena; Coleman, Mary; Bölte, Sven; Marschik, Peter B

    2015-10-01

    Little is known about the first half year of life of individuals later diagnosed with autism spectrum disorders (ASD). There is even a complete lack of observations on the first 6 months of life of individuals with transient autistic behaviours who improved in their socio-communicative functions in the pre-school age. To compare early development of individuals with transient autistic behaviours and those later diagnosed with ASD. Exploratory study; retrospective home video analysis. 18 males, videoed between birth and the age of 6 months (ten individuals later diagnosed with ASD; eight individuals who lost their autistic behaviours after the age of 3 and achieved age-adequate communicative abilities, albeit often accompanied by tics and attention deficit). The detailed video analysis focused on general movements (GMs), the concurrent motor repertoire, eye contact, responsive smiling, and pre-speech vocalisations. Abnormal GMs were observed more frequently in infants later diagnosed with ASD, whereas all but one infant with transient autistic behaviours had normal GMs (p<0.05). Eye contact and responsive smiling were inconspicuous for all individuals. Cooing was not observable in six individuals across both groups. GMs might be one of the markers which could assist the earlier identification of ASD. We recommend implementing the GM assessment in prospective studies on ASD. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. No-Reference Video Quality Assessment using MPEG Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2013-01-01

    We present a method for No-Reference (NR) Video Quality Assessment (VQA) for decoded video without access to the bitstream. This is achieved by extracting and pooling features from a NR image quality assessment method used frame by frame. We also present methods to identify the video coding...... and estimate the video coding parameters for MPEG-2 and H.264/AVC which can be used to improve the VQA. The analysis differs from most other video coding analysis methods since it is without access to the bitstream. The results show that our proposed method is competitive with other recent NR VQA methods...
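
    As a rough illustration of the frame-by-frame pooling idea described above (not the paper's actual features or coding analysis), the toy sketch below scores each decoded frame with a crude blockiness proxy and pools the per-frame scores into a single video-level value; the feature, block size, and pooling rule are all assumptions.

      import numpy as np

      def frame_score(frame):
          """Crude blockiness proxy: gradient energy at 8-pixel column boundaries."""
          diff = np.abs(np.diff(frame.astype(float), axis=1))
          boundary = diff[:, 7::8].mean()       # differences across assumed block edges
          inside = diff.mean()                  # average difference everywhere
          return boundary / (inside + 1e-6)     # > 1 suggests visible blocking

      def pool(scores):
          """Simple temporal pooling: mean of the worst 10% of frames."""
          worst = np.sort(scores)[-max(1, len(scores) // 10):]
          return float(np.mean(worst))

      frames = [np.random.randint(0, 256, (144, 176), dtype=np.uint8) for _ in range(30)]
      print(pool([frame_score(f) for f in frames]))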

  15. Microsurgical Clip Obliteration of Middle Cerebral Aneurysm Using Intraoperative Flow Assessment

    OpenAIRE

    Carter, Bob S.; Farrell, Christopher; Owen, Christopher

    2009-01-01

    Cerebral aneurysms are abnormal widening or ballooning of a localized segment of an intracranial blood vessel. Surgical clipping is an important treatment for aneurysms which attempts to exclude blood from flowing into the aneurysmal segment of the vessel while preserving blood flow in a normal fashion. Improper clip placement may result in residual aneurysm with the potential for subsequent aneurysm rupture or partial or full occlusion of distal arteries resulting in cerebral infarction. ...

  16. [Closure of the left atrial appendage by means of the AtriClip System].

    Science.gov (United States)

    Mokráček, Aleš; Kurfirst, Vojtěch; Bulava, Alan; Haniš, Jiří

    Atrial fibrillation (AFib) is related to a high risk of stroke. The main role in etiopathogenesis is played by the left atrial appendage (LAA): as many as 95 % of thrombi in nonvalvular atrial fibrillation are located in the appendage. Prevention of stroke therefore relies on permanent anticoagulation, which, however, has its limits and risks. An alternative method is left atrial appendage occlusion. In this report, we present a new option for closure using the epicardial AtriClip system (AtriCure). Between July 2012 and September 2015 we performed LAA closure in 101 patients (mean age 65 ± 6 years; 47 women; mean CHA2DS2-VASc score 2.47, range 0-6). Total follow-up was 1,837 months (mean 18.5). A concomitant procedure was performed in 37 patients, endoscopic MAZE plus clip in 57 patients, and 7 patients underwent stand-alone implantation of the clip. The clip was implanted through full sternotomy, minithoracotomy, or thoracoscopy. Clip placement, residual neck, and endoleak were assessed through endoscopic ultrasound according to the Cleveland criteria. The perioperative success rate of placement reached 98 %. In 2 patients the clip left a residual neck greater than 1 cm. No migration of the clip occurred, no endoleak was detected, and no thrombus at the appendage base was detected. One case of periprocedural stroke was recorded. During follow-up, TIA occurred in 4 patients and no stroke was recorded. Epicardial LAA occlusion using the AtriClip system is a safe and reproducible method of LAA occlusion and an important alternative in the prevention of stroke. Key words: atrial fibrillation - occlusion of left atrial appendage - stroke.

  17. [Choledochal lithiasis and stenosis secondary to the migration of a surgical clip].

    Science.gov (United States)

    Baldomà España, M; Pernas Canadell, J C; González Ceballos, S

    2014-01-01

    The migration of a clip to the common bile duct after cholecystectomy is an uncommon, usually late, complication that can lead to diverse complications like stone formation, stenosis, and obstruction in the bile duct. We present the case of a patient who presented with signs and symptoms of cholangitis due to clip migration one year after laparoscopic cholecystectomy; endoscopic retrograde cholangiopancreatography and biliary tract stent placement resolved the problem. Copyright © 2011 SERAM. Published by Elsevier Espana. All rights reserved.

  18. Hydrostatic comparison of nonpenetrating titanium clips versus conventional suture for repair of spinal durotomies.

    Science.gov (United States)

    Faulkner, Nathan D; Finn, Michael A; Anderson, Paul A

    2012-04-20

    Biomechanics. To compare the hydrostatic strength of suture and nonpenetrating titanium clip repairs of standard spinal durotomies. Dural tears are a frequent complication of spine surgery and can be associated with significant morbidity. Primary repair of durotomies with suture typically is attempted, but a true watertight closure can be difficult to obtain because of leakage through suture tracts. Nonpenetrating titanium clips have been developed for vascular anastomoses and provide a close apposition of the tissues without the creation of a suture tract. Twenty-four calf spines were prepared with laminectomies and the spinal cord was evacuated leaving an intact dura. After Foley catheters were inserted from each end and inflated adjacent to a planned dural defect, the basal flow rate was measured and a 1-cm longitudinal durotomy was made with a scalpel. Eight repairs were performed for each material, which included monofilament suture, braided suture, and nonpenetrating titanium clips. The flow rate at 30, 60, and 90 cm of water and the time needed for each closure were measured. There was no statistically significant difference in the baseline leak rate for all 3 groups. There was no difference in the leakage rate of durotomies repaired with clips and intact specimens at any pressure. Monofilament and braided suture repairs allowed significantly more leakage than both intact and clip-repaired specimens at all pressures. The difference in leak rate increased as the pressure increased. Closing the durotomy with clips took less than half the time of closure with suture. Nonpenetrating titanium clips provide a durotomy closure with immediate hydrostatic strength similar to intact dura, whereas repair with either suture was significantly less robust. The use of titanium clips was more rapid than that of suture repair.

  19. Prenatal Ethanol Exposure and Whisker Clipping Disrupt Ultrasonic Vocalizations and Play Behavior in Adolescent Rats

    Science.gov (United States)

    Waddell, Jaylyn; Yang, Tianqi; Ho, Eric; Wellmann, Kristen A.; Mooney, Sandra M.

    2016-01-01

    Prenatal ethanol exposure can result in social deficits in humans and animals, including altered social interaction and poor communication. Rats exposed to ethanol prenatally show reduced play fighting, and a combination of prenatal ethanol exposure and neonatal whisker clipping further reduces play fighting compared with ethanol exposure alone. In this study, we explored whether expression of hedonic ultrasonic vocalizations (USVs) correlated with the number of playful attacks by ethanol-exposed rats, rats subjected to postnatal sensory deprivation by whisker clipping or both compared to control animals. In normally developing rats, hedonic USVs precede such interactions and correlate with the number of play interactions exhibited in dyads. Pregnant Long-Evans rats were fed an ethanol-containing liquid diet or a control diet. After birth, male and female pups from each litter were randomly assigned to the whisker-clipped or non-whisker-clipped condition. Animals underwent a social interaction test with a normally developing play partner during early or late-adolescence. USVs were recorded during play. Prenatal ethanol exposure reduced both play and hedonic USVs in early adolescence compared to control rats and persistently reduced social play. Interestingly, ethanol exposure, whisker clipping and the combination abolished the significant correlation between hedonic USVs and social play detected in control rats in early adolescence. This relationship remained disrupted in late adolescence only in rats subjected to both prenatal ethanol and whisker clipping. Thus, both insults more persistently disrupted the relationship between social communication and social play. PMID:27690116

  20. The Use of Film Clips in a Viewing Time Task of Sexual Interests.

    Science.gov (United States)

    Lalumière, Martin L; Babchishin, Kelly M; Ebsworth, Megan

    2017-12-04

    Viewing time tasks using still pictures to assess age and gender sexual interests have been well validated and are commonly used. The use of film clips in a viewing time task would open up interesting possibilities for the study of sexual interest toward sexual targets or activities that are not easily captured in still pictures. We examined the validity of a viewing time task using film clips to assess sexual interest toward male and female targets, in a sample of 52 young adults. Film clips produced longer viewing times than still pictures. For both men and women, the indices derived from the film viewing time task were able to distinguish individuals who identified as homosexual (14 men, 8 women) from those who identified as heterosexual (15 men, 15 women), and provided comparable group differentiation as indices derived from a viewing time task using still pictures. Men's viewing times were more gender-specific than those of women. Viewing times to film clips were correlated with participants' ratings of sexual appeal of the same clips, and with viewing times to pictures. The results support the feasibility of a viewing time measure of sexual interest that utilizes film clips and, thus, expand the types of sexual interests that could be investigated (e.g., sadism, biastophilia).

  1. Mechanics of mitral valve edge-to-edge-repair and MitraClip procedure.

    Science.gov (United States)

    Bhattacharya, Shamik; He, Zhaoming

    2015-01-01

    The edge-to-edge repair (ETER) technique has been used as a stand-alone procedure, or as a secondary procedure with ring annuloplasty, for degenerative or functional mitral regurgitation, or for mitral regurgitation of other valvular etiologies. The percutaneous MitraClip technique based on ETER has been used in patients who are inoperable or at high surgical risk. However, adverse events such as residual mitral regurgitation and clip detachment or fracture indicate that the mechanics underlying these procedures is not well understood. Therefore, current studies on mitral valve functionality and mechanics related to the ETER and MitraClip procedures are reviewed to improve the efficacy and safety of both procedures. Extensive in vivo, in vitro, and in silico studies related to ETER and MitraClip procedures along with MitraClip clinical trial results are presented and discussed herein. The ETER suture force and the mitral valve tissue mechanics and hemodynamics of each procedure are discussed. A quantitative understanding of the interplay of mitral valve components and of the biological response to the procedures remains challenging. Based on mitral valve mechanics, ETER or MitraClip therapy can be optimized to enhance repair efficacy and durability.

  2. Towards patient-specific finite-element simulation of MitralClip procedure.

    Science.gov (United States)

    Mansi, T; Voigt, I; Assoumou Mengue, E; Ionasec, R; Georgescu, B; Noack, T; Seeburger, J; Comaniciu, D

    2011-01-01

    MitralClip is a novel minimally invasive procedure to treat mitral valve (MV) regurgitation. It consists in clipping the mitral leaflets together to close the regurgitant hole. A careful preoperative planning is necessary to select respondent patients and to determine the clipping sites. Although preliminary indications criteria are established, they lack prediction power with respect to complications and effectiveness of the therapy in specific patients. We propose an integrated framework for personalized simulation of MV function and apply it to simulate MitralClip procedure. A patient-specific dynamic model of the MV apparatus is computed automatically from 4D TEE images. A biomechanical model of the MV, constrained by the observed motion of the mitral annulus and papillary muscles, is employed to simulate valve closure and MitralClip intervention. The proposed integrated framework enables, for the first time, to quantitatively evaluate an MV finite-element model in-vivo, on eleven patients, and to predict the outcome of MitralClip intervention in one of these patients. The simulations are compared to ground truth and to postoperative images, resulting in promising accuracy (average point-to-mesh distance: 1.47 +/- 0.24 mm). Our framework may constitute a tool for MV therapy planning and patient management.

  3. Prenatal Ethanol Exposure and Whisker Clipping Disrupt Ultrasonic Vocalizations and Play Behavior in Adolescent Rats

    Directory of Open Access Journals (Sweden)

    Jaylyn Waddell

    2016-09-01

    Full Text Available Prenatal ethanol exposure can result in social deficits in humans and animals, including altered social interaction and poor communication. Rats exposed to ethanol prenatally show reduced play fighting, and a combination of prenatal ethanol exposure and neonatal whisker clipping further reduces play fighting compared with ethanol exposure alone. In this study, we explored whether expression of hedonic ultrasonic vocalizations (USVs) correlated with the number of playful attacks by ethanol-exposed rats, rats subjected to postnatal sensory deprivation by whisker clipping or both compared to control animals. In normally developing rats, hedonic USVs precede such interactions and correlate with the number of play interactions exhibited in dyads. Pregnant Long-Evans rats were fed an ethanol-containing liquid diet or a control diet. After birth, male and female pups from each litter were randomly assigned to the whisker-clipped or non-whisker-clipped condition. Animals underwent a social interaction test with a normally developing play partner during early or late-adolescence. USVs were recorded during play. Prenatal ethanol exposure reduced both play and hedonic USVs in early adolescence compared to control rats and persistently reduced social play. Interestingly, ethanol exposure, whisker clipping and the combination abolished the significant correlation between hedonic USVs and social play detected in control rats in early adolescence. This relationship remained disrupted in late adolescence only in rats subjected to both prenatal ethanol and whisker clipping. Thus, both insults more persistently disrupted the relationship between social communication and social play.

  4. Eye movements while viewing narrated, captioned, and silent videos

    Science.gov (United States)

    Ross, Nicholas M.; Kowler, Eileen

    2013-01-01

    Videos are often accompanied by narration delivered either by an audio stream or by captions, yet little is known about saccadic patterns while viewing narrated video displays. Eye movements were recorded while viewing video clips with (a) audio narration, (b) captions, (c) no narration, or (d) concurrent captions and audio. A surprisingly large proportion of time (>40%) was spent reading captions even in the presence of a redundant audio stream. Redundant audio did not affect the saccadic reading patterns but did lead to skipping of some portions of the captions and to delays of saccades made into the caption region. In the absence of captions, fixations were drawn to regions with a high density of information, such as the central region of the display, and to regions with high levels of temporal change (actions and events), regardless of the presence of narration. The strong attraction to captions, with or without redundant audio, raises the question of what determines how time is apportioned between captions and video regions so as to minimize information loss. The strategies of apportioning time may be based on several factors, including the inherent attraction of the line of sight to any available text, the moment by moment impressions of the relative importance of the information in the caption and the video, and the drive to integrate visual text accompanied by audio into a single narrative stream. PMID:23457357

  5. Acoustic Neuroma Educational Video

    Medline Plus


  6. Video Games and Citizenship

    National Research Council Canada - National Science Library

    Bourgonjon, Jeroen; Soetaert, Ronald

    2013-01-01

    ... by exploring a particular aspect of digitization that affects young people, namely video games. They explore the new social spaces which emerge in video game culture and how these spaces relate to community building and citizenship...

  7. Videos, Podcasts and Livechats

    Medline Plus


  8. Videos, Podcasts and Livechats

    Medline Plus


  9. Digital Video in Research

    DEFF Research Database (Denmark)

    Frølunde, Lisbeth

    2012-01-01

    questions of our media literacy pertaining to authoring multimodal texts (visual, verbal, audial, etc.) in research practice and the status of multimodal texts in academia. The implications of academic video extend to wider issues of how researchers harness opportunities to author different types of texts......Is video becoming “the new black” in academia, if so, what are the challenges? The integration of video in research methodology (for collection, analysis) is well-known, but the use of “academic video” for dissemination is relatively new (Eriksson and Sørensen). The focus of this paper is academic...... video, or short video essays produced for the explicit purpose of communicating research processes, topics, and research-based knowledge (see the journal of academic videos: www.audiovisualthinking.org). Video is increasingly used in popular showcases for video online, such as YouTube and Vimeo, as well...

  10. Acoustic Neuroma Educational Video

    Medline Plus


  11. Acoustic Neuroma Educational Video

    Medline Plus


  12. Videos, Podcasts and Livechats

    Medline Plus


  13. Videos, Podcasts and Livechats

    Science.gov (United States)


  14. Videos, Podcasts and Livechats

    Medline Plus


  15. Acoustic Neuroma Educational Video

    Medline Plus


  16. Video Screen Capture Basics

    Science.gov (United States)

    Dunbar, Laura

    2014-01-01

    This article is an introduction to video screen capture. Basic information of two software programs, QuickTime for Mac and BlueBerry Flashback Express for PC, are also discussed. Practical applications for video screen capture are given.

  17. Videos, Podcasts and Livechats

    Medline Plus


  18. Intelligent Analysis for Georeferenced Video Using Context-Based Random Graphs

    OpenAIRE

    Jiangfan Feng; Hu Song

    2013-01-01

    Video sensor networks are formed by the joining of heterogeneous sensor nodes, which is frequently reported as video of communication functionally bound to geographical locations. Decomposition of georeferenced video stream presents the expression of video from spatial feature set. Although it has been studied extensively, spatial relations underlying the scenario are not well understood, which are important to understand the semantics of georeferenced video and behavior of elements. Here we ...

  19. Biochemical Analysis Reveals the Multifactorial Mechanism of Histone H3 Clipping by Chicken Liver Histone H3 Protease

    KAUST Repository

    Chauhan, Sakshi

    2016-09-02

    Proteolytic clipping of histone H3 has been identified in many organisms. Despite several studies, the mechanism of clipping, the substrate specificity, and the significance of this poorly understood epigenetic mechanism are not clear. We have previously reported histone H3 specific proteolytic clipping and a protein inhibitor in chicken liver. However, the sites of clipping are still not known very well. In this study, we attempt to identify clipping sites in histone H3 and to determine the mechanism of inhibition by stefin B protein, a cysteine protease inhibitor. By employing site-directed mutagenesis and in vitro biochemical assays, we have identified three distinct clipping sites in recombinant human histone H3 and its variants (H3.1, H3.3, and H3t). However, post-translationally modified histones isolated from chicken liver and Saccharomyces cerevisiae wild-type cells showed different clipping patterns. Clipping of histone H3 N-terminal tail at three sites occurs in a sequential manner. We have further observed that clipping sites are regulated by the structure of the N-terminal tail as well as the globular domain of histone H3. We also have identified the QVVAG region of stefin B protein to be very crucial for inhibition of the protease activity. Altogether, our comprehensive biochemical studies have revealed three distinct clipping sites in histone H3 and their regulation by the structure of histone H3, histone modifications marks, and stefin B.

  20. Transmission of compressed video

    Science.gov (United States)

    Pasch, H. L.

    1990-09-01

    An overview of video coding is presented. The aim is not to give a technical summary of possible coding techniques, but to address subjects related to video compression in general and to the transmission of compressed video in more detail. Bit rate reduction is in general possible by removing redundant information; removing information the eye does not use anyway; and reducing the quality of the video. The codecs which are used for reducing the bit rate can be divided into two groups: Constant Bit rate Codecs (CBC's), which keep the bit rate constant, but vary the video quality; and Variable Bit rate Codecs (VBC's), which keep the video quality constant by varying the bit rate. VBC's can in general reach a higher video quality than CBC's using less bandwidth, but need a transmission system that allows the bandwidth of a connection to fluctuate in time. The current and the next generation of the PSTN does not allow this; ATM might. There are several factors which influence the quality of video: the bit error rate of the transmission channel, slip rate, packet loss rate/packet insertion rate, end-to-end delay, phase shift between voice and video, and bit rate. Based on the bit rate of the coded video, the following classification of coded video can be made: High Definition Television (HDTV); Broadcast Quality Television (BQTV); video conferencing; and video telephony. The properties of these classes are given. The video conferencing and video telephony equipment available now and in the next few years can be divided into three categories: conforming to the 1984 CCITT standard for video conferencing; conforming to the 1988 CCITT standard; and conforming to no standard.

  1. Making good physics videos

    Science.gov (United States)

    Lincoln, James

    2017-05-01

    Online videos are an increasingly important way technology is contributing to the improvement of physics teaching. Students and teachers have begun to rely on online videos to provide them with content knowledge and instructional strategies. Online audiences are expecting greater production value, and departments are sometimes requesting educators to post video pre-labs or to flip our classrooms. In this article, I share my advice on creating engaging physics videos.

  2. Desktop video conferencing

    OpenAIRE

    Potter, Ray; Roberts, Deborah

    2007-01-01

    This guide aims to provide an introduction to Desktop Video Conferencing. You may be familiar with video conferencing, where participants typically book a designated conference room and communicate with another group in a similar room on another site via a large screen display. Desktop video conferencing (DVC), as the name suggests, allows users to video conference from the comfort of their own office, workplace or home via a desktop/laptop Personal Computer. DVC provides live audio and visua...

  3. 47 CFR 79.3 - Video description of video programming.

    Science.gov (United States)

    2010-10-01

    CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING § 79.3 Video description of video programming. (a) Definitions. For purposes of this section the following definitions shall apply: (1...

  4. Learnable pooling with Context Gating for video classification

    OpenAIRE

    Miech, Antoine; Laptev, Ivan; Sivic, Josef

    2017-01-01

    Common video representations often deploy an average or maximum pooling of pre-extracted frame features over time. Such an approach provides a simple means to encode feature distributions, but is likely to be suboptimal. As an alternative, we here explore combinations of learnable pooling techniques such as Soft Bag-of-words, Fisher Vectors , NetVLAD, GRU and LSTM to aggregate video features over time. We also introduce a learnable non-linear network unit, named Context Gating, aiming at mode...
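
    A small sketch of a Context Gating unit in the commonly cited form y = sigmoid(Wx + b) * x (element-wise) is shown below; the dimensions and random weights are illustrative assumptions, and a trained model would learn W and b.

      import numpy as np

      def context_gating(x, W, b):
          """Re-weight a pooled feature vector with an input-dependent gate."""
          gate = 1.0 / (1.0 + np.exp(-(W @ x + b)))   # sigmoid gate
          return gate * x                              # element-wise gating

      d = 8
      x = np.random.randn(d)             # pooled video-level feature
      W = 0.1 * np.random.randn(d, d)    # learnable parameters (random stand-ins here)
      b = np.zeros(d)
      print(context_gating(x, W, b))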

  5. Are YouTube videos accurate and reliable on basic life support and cardiopulmonary resuscitation?

    Science.gov (United States)

    Yaylaci, Serpil; Serinken, Mustafa; Eken, Cenker; Karcioglu, Ozgur; Yilmaz, Atakan; Elicabuk, Hayri; Dal, Onur

    2014-10-01

    The objective of this study is to investigate reliability and accuracy of the information on YouTube videos related to CPR and BLS in accord with 2010 CPR guidelines. YouTube was queried using four search terms 'CPR', 'cardiopulmonary resuscitation', 'BLS' and 'basic life support' between 2011 and 2013. Sources that uploaded the videos, the record time, the number of viewers in the study period, and inclusion of humans or manikins were recorded. The videos were rated on whether they displayed the correct order of resuscitative efforts in full accord with 2010 CPR guidelines. Two hundred and nine videos meeting the inclusion criteria after the search in YouTube with four search terms ('CPR', 'cardiopulmonary resuscitation', 'BLS' and 'basic life support') comprised the study sample subjected to the analysis. The median score of the videos was 5 (IQR: 3.5-6). Only 11.5% (n = 24) of the videos were found to be compatible with 2010 CPR guidelines with regard to sequence of interventions. Videos uploaded by 'Guideline bodies' had significantly higher rates of download when compared with the videos uploaded by other sources. Sources of the videos and date of upload (year) were not shown to have any significant effect on the scores received (P = 0.615 and 0.513, respectively). The videos' number of downloads did not differ according to whether the videos were compatible with the guidelines (P = 0.832). The videos downloaded more than 10,000 times had a higher score than the others (P = 0.001). The majority of YouTube video clips purporting to be about CPR are not relevant educational material. Of those that are focused on teaching CPR, only a small minority optimally meet the 2010 Resuscitation Guidelines. © 2014 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  6. Video Self-Modeling

    Science.gov (United States)

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  7. Tracing Sequential Video Production

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Khalid, Md. Saifuddin

    2015-01-01

    With an interest in learning that is set in collaborative situations, the data session presents excerpts from video data produced by two of fifteen students from a class of 5th semester techno-anthropology course. Students used video cameras to capture the time they spent working with a scientist...... video, nature of the interactional space, and material and spatial semiotics....

  8. Developing a Promotional Video

    Science.gov (United States)

    Epley, Hannah K.

    2014-01-01

    There is a need for Extension professionals to show clientele the benefits of their program. This article shares how promotional videos are one way of reaching audiences online. An example is given on how a promotional video has been used and developed using iMovie software. Tips are offered for how professionals can create a promotional video and…

  9. A Multi-view Approach for Detecting Non-Cooperative Users in Online Video Sharing Systems

    OpenAIRE

    Langbehn, Hendrickson Reiter; Ricci, Saulo M. R.; Gonçalves, Marcos A.; Almeida, Jussara Marques; Pappa, Gisele Lobo; Benevenuto, Fabrício

    2010-01-01

    Most online video sharing systems (OVSSs), such as YouTube and Yahoo! Video, have several mechanisms for supporting interactions among users. One such mechanism is the video response feature in YouTube, which allows a user to post a video in response to another video. While increasingly popular, the video response feature opens the opportunity for non-cooperative users to introduce "content pollution" into the system, thus causing loss of service effectiveness and credibility as w...

  10. Nonvariceal Upper Gastrointestinal Bleeding: the Usefulness of Rotational Angiography after Endoscopic Marking with a Metallic Clip

    Energy Technology Data Exchange (ETDEWEB)

    Song, Ji Soo; Kwak, Hyo Sung; Chung, Gyung Ho [Chonbuk National University Medical School, Chonju (Korea, Republic of)

    2011-08-15

    We wanted to assess the usefulness of rotational angiography after endoscopic marking with a metallic clip in upper gastrointestinal bleeding patients with no extravasation of contrast medium on conventional angiography. In 16 patients (mean age, 59.4 years) with acute bleeding ulcers (13 gastric ulcers, 2 duodenal ulcers, 1 malignant ulcer), a metallic clip was placed via gastroscopy and this had been preceded by routine endoscopic treatment. The metallic clip was placed in the fibrous edge of the ulcer adjacent to the bleeding point. All patients had negative results from their angiographic studies. To localize the bleeding focus, rotational angiography and high pressure angiography as close as possible to the clip were used. Of the 16 patients, seven (44%) had positive results after high pressure angiography as close as possible to the clip and they underwent transcatheter arterial embolization (TAE) with microcoils. Nine patients without extravasation of contrast medium underwent TAE with microcoils as close as possible to the clip. The bleeding was stopped initially in all patients after treatment of the feeding artery. Two patients experienced a repeat episode of bleeding two days later. Of the two patients, one had subtle oozing from the ulcer margin and that patient underwent endoscopic treatment. One patient with malignant ulcer died due to disseminated intravascular coagulation one month after embolization. Complete clinical success was achieved in 14 of 16 (88%) patients. Delayed bleeding or major/minor complications were not noted. Rotational angiography after marking with a metallic clip helps to localize accurately the bleeding focus and thus to embolize the vessel correctly.

  11. Intelligent video surveillance systems

    CERN Document Server

    Dufour, Jean-Yves

    2012-01-01

    Belonging to the wider academic field of computer vision, video analytics has aroused a phenomenal surge of interest since the current millennium. Video analytics is intended to solve the problem of the incapability of exploiting video streams in real time for the purpose of detection or anticipation. It involves analyzing the videos using algorithms that detect and track objects of interest over time and that indicate the presence of events or suspect behavior involving these objects.The aims of this book are to highlight the operational attempts of video analytics, to identify possi

  12. VBR video traffic models

    CERN Document Server

    Tanwir, Savera

    2014-01-01

    There has been a phenomenal growth in video applications over the past few years. An accurate traffic model of Variable Bit Rate (VBR) video is necessary for performance evaluation of a network design and for generating synthetic traffic that can be used for benchmarking a network. A large number of models for VBR video traffic have been proposed in the literature for different types of video in the past 20 years. Here, the authors have classified and surveyed these models and have also evaluated the models for H.264 AVC and MVC encoded video and discussed their findings.

  13. Video Texture Synthesis Based on Flow-Like Stylization Painting

    Directory of Open Access Journals (Sweden)

    Qian Wenhua

    2014-01-01

    Full Text Available The paper presents an NP-video rendering system based on natural phenomena. It provides a simple nonphotorealistic video synthesis system in which the user can obtain a flow-like stylization painting and an infinite video scene. Firstly, based on anisotropic Kuwahara filtering in conjunction with line integral convolution, the natural-phenomena video scene is rendered as a flow-like stylization painting. Secondly, frame division and patch synthesis are used to synthesize an infinitely playing video. Using selected examples of different natural video textures, our system can generate stylized, flow-like, infinite video scenes. Visual discontinuities between neighboring frames are decreased, and the features and details of frames are preserved. This rendering system is easy and simple to implement.

  14. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edi

  15. Mediating Tourist Experiences. Access to Places via Shared Videos

    DEFF Research Database (Denmark)

    Tussyadiah, Iis; Fesenmaier, D.R.

    2009-01-01

    The emergence of new media using multimedia features has generated a new set of mediators for tourists' experiences. This study examines two hypotheses regarding the roles that online travel videos play as mediators of tourist experiences. The results confirm that online shared videos can provide...... mental pleasure to viewers by stimulating fantasies and daydreams, as well as bringing back past travel memories. In addition, the videos act as a narrative transportation, providing access to foreign landscapes and socioscapes....

  16. Artificial Intelligence in Video Games: Towards a Unified Framework

    OpenAIRE

    Safadi, Firas

    2015-01-01

    The work presented in this dissertation revolves around the problem of designing artificial intelligence (AI) for video games. This problem becomes increasingly challenging as video games grow in complexity. With modern video games frequently featuring sophisticated and realistic environments, the need for smart and comprehensive agents that understand the various aspects of these environments is pressing. Although machine learning techniques are being successfully applied in a multitude of d...

  17. Use of streamed internet video for cytology training and education: www.PathLab.org.

    Science.gov (United States)

    Poller, David; Ljung, Britt-Marie; Gonda, Peter

    2009-05-01

    An Internet-based method is described for submission of video clips to a website editor to be reviewed, edited, and then uploaded onto a video server, with a hypertext link to a website. The information on the webpages is searchable via the website sitemap on Internet search engines. A survey of video users who accessed a single 59-minute FNA cytology training video via the website showed a mean score for usefulness of 3.75 for specialists/consultants (range 1-5, n = 16) and 4.4 for trainees (range 3-5, n = 12), with a mean score for visual and sound quality of 3.9 (range 2-5, n = 16). Fifteen out of 17 respondents thought that posting video training material on the Internet was a good idea, and 9 of 17 respondents would also consider submitting training videos to a similar website. This brief exercise has shown that there is value in posting educational or training video content on the Internet and that the use of streamed video accessed via the Internet will be of increasing importance. (c) 2009 Wiley-Liss, Inc.

  18. Primary Motor Cortex Activation during Action Observation of Tasks at Different Video Speeds Is Dependent on Movement Task and Muscle Properties

    Science.gov (United States)

    Moriuchi, Takefumi; Matsuda, Daiki; Nakamura, Jirou; Matsuo, Takashi; Nakashima, Akira; Nishi, Keita; Fujiwara, Kengo; Iso, Naoki; Nakane, Hideyuki; Higashi, Toshio

    2017-01-01

    The aim of the present study was to investigate how the video speed of observed action affects the excitability of the primary motor cortex (M1), as assessed by the size of motor-evoked potentials (MEPs) induced by transcranial magnetic stimulation (TMS). Twelve healthy subjects observed a video clip of a person catching a ball (Experiment 1: rapid movement) and another 12 healthy subjects observed a video clip of a person reaching to lift a ball (Experiment 2: slow movement task). We played each video at three different speeds (slow, normal and fast). The stimulus was given at two points of timing in each experiment. These stimulus points were locked to specific frames of the video rather than occurring at specific absolute times, for ease of comparison across different speeds. We recorded MEPs from the first dorsal interosseous muscle (FDI) and abductor digiti minimi muscle (ADM) of the right hand. MEPs were significantly different for different video speeds only in the rapid movement task. MEPs for the rapid movement task were higher when subjects observed an action played at slow speed than normal or fast speed condition. There was no significant change for the slow movement task. Video speed was effective only in the ADM. Moreover, MEPs in the ADM were significantly higher than in the FDI in a rapid movement task under the slow speed condition. Our findings suggest that the M1 becomes more excitable when subjects observe the video clip at the slow speed in a rapid movement, because they could recognize the elements of movement in others. Our results suggest the effects of manipulating the speed of the viewed task on the excitability of the M1 during passive observation differ depending on the type of movement task observed. It is likely that rehabilitation in the clinical setting will be more efficient if the video speed is changed to match the task’s characteristics. PMID:28163678

  19. Primary Motor Cortex Activation during Action Observation of Tasks at Different Video Speeds Is Dependent on Movement Task and Muscle Properties.

    Science.gov (United States)

    Moriuchi, Takefumi; Matsuda, Daiki; Nakamura, Jirou; Matsuo, Takashi; Nakashima, Akira; Nishi, Keita; Fujiwara, Kengo; Iso, Naoki; Nakane, Hideyuki; Higashi, Toshio

    2017-01-01

    The aim of the present study was to investigate how the video speed of observed action affects the excitability of the primary motor cortex (M1), as assessed by the size of motor-evoked potentials (MEPs) induced by transcranial magnetic stimulation (TMS). Twelve healthy subjects observed a video clip of a person catching a ball (Experiment 1: rapid movement) and another 12 healthy subjects observed a video clip of a person reaching to lift a ball (Experiment 2: slow movement task). We played each video at three different speeds (slow, normal and fast). The stimulus was given at two points of timing in each experiment. These stimulus points were locked to specific frames of the video rather than occurring at specific absolute times, for ease of comparison across different speeds. We recorded MEPs from the first dorsal interosseous muscle (FDI) and abductor digiti minimi muscle (ADM) of the right hand. MEPs were significantly different for different video speeds only in the rapid movement task. MEPs for the rapid movement task were higher when subjects observed an action played at slow speed than normal or fast speed condition. There was no significant change for the slow movement task. Video speed was effective only in the ADM. Moreover, MEPs in the ADM were significantly higher than in the FDI in a rapid movement task under the slow speed condition. Our findings suggest that the M1 becomes more excitable when subjects observe the video clip at the slow speed in a rapid movement, because they could recognize the elements of movement in others. Our results suggest the effects of manipulating the speed of the viewed task on the excitability of the M1 during passive observation differ depending on the type of movement task observed. It is likely that rehabilitation in the clinical setting will be more efficient if the video speed is changed to match the task's characteristics.

  20. Obtaining video descriptors for a content-based video information system

    Science.gov (United States)

    Bescos, Jesus; Martinez, Jose M.; Cabrera, Julian M.; Cisneros, Guillermo

    1998-09-01

    This paper describes the first stages of a research project that is currently being developed in the Image Processing Group of the UPM. The aim of this effort is to add video capabilities to the Storage and Retrieval Information System already working at our premises. Here we will focus on the early design steps of a Video Information System. For this purpose, we present a review of most of the reported techniques for video temporal segmentation and semantic segmentation, steps prior to the content extraction task, and we discuss them to select the most suitable ones. We then outline a block design of a temporal segmentation module and present guidelines for the design of the semantic segmentation module. All these operations tend to facilitate automation in the extraction of the low-level features and semantic features that will finally form part of the video descriptors.
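
    One common building block of such a temporal segmentation module is a shot-boundary detector driven by frame-to-frame histogram differences; the sketch below is a generic illustration under assumed bin counts and thresholds, not the module designed in the paper.

      import numpy as np

      def histogram(frame, bins=32):
          h, _ = np.histogram(frame, bins=bins, range=(0, 255))
          return h / h.sum()

      def shot_boundaries(frames, threshold=0.4):
          """Return indices where consecutive grey-level histograms differ strongly."""
          cuts = []
          prev = histogram(frames[0])
          for i, frame in enumerate(frames[1:], start=1):
              cur = histogram(frame)
              if 0.5 * np.abs(cur - prev).sum() > threshold:   # total-variation distance
                  cuts.append(i)
              prev = cur
          return cuts

      # two synthetic "shots": dark frames followed by bright frames
      frames = [np.full((48, 64), 40, np.uint8)] * 5 + [np.full((48, 64), 200, np.uint8)] * 5
      print(shot_boundaries(frames))   # expected: [5]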

  1. Defocus cue and saliency preserving video compression

    Science.gov (United States)

    Khanna, Meera Thapar; Chaudhury, Santanu; Lall, Brejesh

    2016-11-01

    There are monocular depth cues present in images or videos that aid in depth perception in two-dimensional images or videos. Our objective is to preserve the defocus depth cue present in the videos along with the salient regions during compression application. A method is provided for opportunistic bit allocation during the video compression using visual saliency information comprising both the image features, such as color and contrast, and the defocus-based depth cue. The method is divided into two steps: saliency computation followed by compression. A nonlinear method is used to combine pure and defocus saliency maps to form the final saliency map. Then quantization values are assigned on the basis of these saliency values over a frame. The experimental results show that the proposed scheme yields good results over standard H.264 compression as well as pure and defocus saliency methods.
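
    A small sketch of the bit-allocation idea described above is given below; the fusion rule, block size, and QP range are assumptions for illustration. The "pure" and defocus saliency maps are combined nonlinearly, and more salient blocks receive a lower quantization parameter (finer quantization).

      import numpy as np

      def combine_saliency(pure, defocus, gamma=2.0):
          """Nonlinear fusion of two saliency maps normalised to [0, 1]."""
          fused = 1.0 - (1.0 - pure) * (1.0 - defocus)   # soft max-like combination
          return fused ** (1.0 / gamma)                  # assumed nonlinearity

      def qp_map(saliency, qp_min=22, qp_max=38, block=16):
          """Assign one QP per block: salient blocks get qp_min, flat ones qp_max."""
          h, w = saliency.shape
          rows, cols = h // block, w // block
          qps = np.empty((rows, cols), dtype=int)
          for r in range(rows):
              for c in range(cols):
                  s = saliency[r*block:(r+1)*block, c*block:(c+1)*block].mean()
                  qps[r, c] = int(round(qp_max - s * (qp_max - qp_min)))
          return qps

      pure = np.random.rand(64, 64)      # stand-ins for real saliency maps
      defocus = np.random.rand(64, 64)
      print(qp_map(combine_saliency(pure, defocus)))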

  2. Robotic mitral valve annuloplasty with double-arm nitinol U-clips.

    Science.gov (United States)

    Reade, Clifton C; Bower, Curtis E; Bailey, B Marcus; Maziarz, David M; Masroor, Saqib; Kypson, Alan P; Nifong, L Wiley; Chitwood, W Randolph

    2005-04-01

    Robotic mitral valve repair increases precision; however, operative times are longer. Prior studies have indicated that robotic knot tying is time consuming and offers little room for improvement. We therefore investigated tissue approximation devices that may shorten operative times. A 67-year-old female was approached through a right mini-thoracotomy with the da Vinci Robotic Surgical System (Intuitive Surgical, Sunnyvale, CA). Using 12 nitinol U-clips (Coalescent Surgical, Sunnyvale, CA), an annuloplasty band was placed under robotic guidance. Clip placement and deployment times were recorded and statistically compared with prior suture annuloplasties. Clip placement time was 1.3 +/- 0.9 minutes (mean +/- standard deviation); comparison with the first, most recent, and all prior suture annuloplasties showed no significant difference. Clip deployment time was 0.5 +/- 0.2 minutes, whereas knot-tying times for the first, most recent, and all prior suture annuloplasties were 2.0 +/- 0.7 (p = 0.003), 1.2 +/- 0.4 (p = 0.0004), and 1.6 +/- 0.6 minutes, respectively. Echocardiography performed postoperatively, at 3 months, and at 9 months revealed valvular structural integrity with only minimal mitral regurgitation. U-clips considerably reduce time for annuloplasty over conventional suture and may help reduce operative times as well.

  3. Peak reduction and clipping mitigation in OFDM by augmented compressive sensing

    KAUST Repository

    Al-Safadi, Ebrahim B.

    2012-07-01

    This work establishes the design, analysis, and fine-tuning of a peak-to-average-power-ratio (PAPR) reducing system, based on compressed sensing (CS) at the receiver of a peak-reducing sparse clipper applied to an orthogonal frequency-division multiplexing (OFDM) signal at the transmitter. By exploiting the sparsity of clipping events in the time domain relative to a predefined clipping threshold, the method depends on partially observing the frequency content of the clipping distortion over reserved tones to estimate the remaining distortion. The approach has the advantage of eliminating the computational complexity at the transmitter and reducing the overall complexity of the system compared to previous methods which incorporate pilots to cancel nonlinear distortion. Data-based augmented CS methods are also proposed that draw upon available phase and support information from data tones for enhanced estimation and cancelation of clipping noise. This enables signal recovery under more severe clipping scenarios and hence lower PAPR can be achieved compared to conventional CS techniques. © 2012 IEEE.
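    The sketch below illustrates only the transmitter-side premise: an OFDM symbol is clipped against an amplitude threshold, the resulting distortion is sparse in time, and its spectrum can be observed on a set of reserved tones. The tone layout, clipping ratio, and QPSK mapping are arbitrary choices, and the compressed-sensing recovery performed at the receiver in the paper is not reproduced.

        # Toy OFDM clipping illustration; all parameters are placeholders.
        import numpy as np

        rng = np.random.default_rng(2)
        N = 256                                   # subcarriers
        reserved = np.arange(0, N, 16)            # assumed reserved (data-free) tones
        used = np.setdiff1d(np.arange(N), reserved)
        data = np.zeros(N, dtype=complex)
        data[used] = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=used.size)  # QPSK

        x = np.fft.ifft(data) * np.sqrt(N)        # time-domain OFDM symbol
        rms = np.sqrt(np.mean(np.abs(x) ** 2))
        papr = 20 * np.log10(np.abs(x).max() / rms)

        threshold = 1.5 * rms                     # clipping level (arbitrary)
        scale = np.minimum(1.0, threshold / np.maximum(np.abs(x), 1e-12))
        clipped = scale * x                       # amplitude clipping
        distortion = clipped - x                  # sparse in time by construction
        papr_clipped = 20 * np.log10(np.abs(clipped).max()
                                     / np.sqrt(np.mean(np.abs(clipped) ** 2)))

        D = np.fft.fft(distortion) / np.sqrt(N)   # distortion spectrum
        print(f"PAPR before/after clipping: {papr:.2f} dB / {papr_clipped:.2f} dB")
        print(f"non-zero clipping samples: {np.count_nonzero(distortion)} of {N}")
        print("distortion observed on reserved tones:", np.round(np.abs(D[reserved]), 3))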

  4. A psychophysiological investigation of laterality in human emotion elicited by pleasant and unpleasant film clips

    Directory of Open Access Journals (Sweden)

    Kumari Veena

    2010-11-01

    Background: Research on laterality in emotion suggests a dichotomy between the brain hemispheres. The present study aimed to investigate this further using a modulated startle reflex paradigm. Methods: We examined the effects of left- and right-ear stimulation on the modulated startle reflex (as indexed by eyeblink magnitude, measured from the right eye), employing short (2 min) film clips to elicit emotions in 16 right-handed healthy participants. The experiment consisted of two consecutive sessions on a single occasion. The acoustic startle probes were presented monaurally to one of the ears in each session, counterbalanced across order, during the viewing of the film clips. Results: Eyeblink amplitude in relation to acoustic startle probes varied linearly, as expected, from pleasant through neutral to unpleasant film clips, but there was no interaction between monaural probe side and foreground valence. Conclusions: Our data indicate the involvement of both hemispheres when affective states, and associated startle modulations, are produced using materials with both audio and visual properties. From a methodological viewpoint, the robustness of film clip material including audio properties might compensate for the insufficient information reaching the ipsilateral hemisphere when using static pictures. From a theoretical viewpoint, a right-ear advantage for verbal processing may account for the failure to detect the expected hemispheric difference. The verbal component of the clips would have activated the left hemisphere, possibly resulting in an increased role for the left hemisphere in both positive and negative affect generation.

  5. Primary closure of inadvertent durotomies utilizing the U-Clip in minimally invasive spinal surgery.

    Science.gov (United States)

    Song, Debbie; Park, Paul

    2011-12-15

    Retrospective clinical cohort study. To examine performance of the U-Clip for the closure of inadvertent durotomy occurring during minimally invasive spinal surgery. Primary closure of inadvertent durotomies that occur during minimally invasive spinal surgery can be technically difficult to accomplish when using standard knot-tying and suture management techniques, owing to the narrow and deep surgical corridor afforded by tubular retraction systems. The U-Clip is a novel device that can achieve tight tissue approximation without the need for knot-tying and excessive suture manipulation, making it ideally suited for use in minimally invasive spinal surgeries. We performed a retrospective review of patients who underwent minimally invasive decompressive procedures complicated by durotomy and repaired using U-Clips for the period January 2008 to January 2010. A total of seven patients were identified. Four of the seven patients were male. Six patients underwent lumbar laminectomy or discectomy. One patient underwent resection of a cervical synovial cyst. In each patient, the durotomy was repaired primarily using U-Clips. All six lumbar patients were discharged home on the same day, and the remaining patient was discharged the following morning. Mean follow-up was 6.3 months. No patient experienced symptoms related to persistent cerebrospinal fluid leakage. Primary closure of an inadvertent durotomy occurring during minimally invasive spinal surgery can be effectively achieved using the self-closing U-Clip device.

  6. Harmonic Scalpel versus electrocautery and surgical clips in head and neck free-flap harvesting.

    Science.gov (United States)

    Dean, Nichole R; Rosenthal, Eben L; Morgan, Bruce A; Magnuson, J Scott; Carroll, William R

    2014-06-01

    We sought to determine the safety and utility of Harmonic Scalpel-assisted free-flap harvesting as an alternative to a combined electrocautery and surgical clip technique. The medical records of 103 patients undergoing radial forearm free-flap reconstruction (105 free flaps) for head and neck surgical defects between 2006 and 2008 were reviewed. The use of bipolar electrocautery and surgical clips for division of small perforating vessels (n = 53) was compared to ultrasonic energy (Harmonic Scalpel; Ethicon Endo-Surgery, Inc., Cincinnati, Ohio) (n = 52) free-tissue harvesting techniques. Flap-harvesting time was reduced with the use of the Harmonic Scalpel when compared with electrocautery and surgical clip harvest (31.4 vs. 36.9 minutes, respectively; p = 0.06). Two patients who underwent flap harvest with electrocautery and surgical clips developed postoperative donor site hematomas, whereas no donor site complications were noted in the Harmonic Scalpel group. Recipient site complication rates for infection, fistula, and hematoma were similar for both harvesting techniques (p = 0.77). Two flap failures occurred in the clip-assisted radial forearm free-flap harvest group, and none in the Harmonic Scalpel group. Median length of hospitalization was significantly reduced for patients who underwent free-flap harvest with the Harmonic Scalpel when compared with the other technique (7 vs. 8 days; p = 0.01). The Harmonic Scalpel is safe, and its use is feasible for radial forearm free-flap harvest.

  7. Leveraging cross-link modification events in CLIP-seq for motif discovery.

    Science.gov (United States)

    Bahrami-Samani, Emad; Penalva, Luiz O F; Smith, Andrew D; Uren, Philip J

    2015-01-01

    High-throughput protein-RNA interaction data generated by CLIP-seq has provided an unprecedented depth of access to the activities of RNA-binding proteins (RBPs), the key players in co- and post-transcriptional regulation of gene expression. Motif discovery forms part of the necessary follow-up data analysis for CLIP-seq, both to refine the exact locations of RBP binding sites, and to characterize them. The specific properties of RBP binding sites, and the CLIP-seq methods, provide additional information not usually present in the classic motif discovery problem: the binding site structure, and cross-linking induced events in reads. We show that CLIP-seq data contains clear secondary structure signals, as well as technology- and RBP-specific cross-link signals. We introduce Zagros, a motif discovery algorithm specifically designed to leverage this information and explore its impact on the quality of recovered motifs. Our results indicate that using both secondary structure and cross-link modifications can greatly improve motif discovery on CLIP-seq data. Further, the motifs we recover provide insight into the balance between sequence- and structure-specificity struck by RBP binding. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Video game characteristics, happiness and flow as predictors of addiction among video game players: a pilot study

    OpenAIRE

    Hull, DC; Williams, GA; Griffiths, MD

    2013-01-01

    Aims: Video games provide opportunities for positive psychological experiences such as flow-like phenomena during play and general happiness that could be associated with gaming achievements. However, research has shown that specific features of game play may be associated with problematic behaviour associated with addiction-like experiences. The study was aimed at analysing whether certain structural characteristics of video games, flow, and global happiness could be predictive of video g...

  9. The Effect of Video Context on Foreign Language Learning.

    Science.gov (United States)

    Secules, Teresa; And Others

    1992-01-01

    Two experiments are reported that compare teacher-managed videotaped instructional materials featuring native speakers in everyday situations (using the "French in Action" video-based curriculum) to more traditional pedagogical methods involving a variety of classroom exercises and drills. The benefits of video use are discussed. (27…

  10. Sigur Rós and the "Valtari Mystery Film Experiment" project: for an enunciative reflection on music videos

    OpenAIRE

    Liene Nunes Saddi

    2015-01-01

    This paper seeks to propose an enunciative space to analyze music video clips as objects of visual culture incorporated into the universe of contemporary art, through the case study of the project “Valtari Mystery Film Experiment” (2012), created by the post-rock Icelandic band Sigur Rós. To achieve this, it brings together the fields of communication and the theoretical domains of Visual Arts, by tracking observation points that go from the subjectivities of musicians and directors to issues...

  11. An Overview of Structural Characteristics in Problematic Video Game Playing.

    Science.gov (United States)

    Griffiths, Mark D; Nuyens, Filip

    2017-01-01

    There are many different factors involved in how and why people develop problems with video game playing. One such set of factors concerns the structural characteristics of video games (i.e., the structure, elements, and components of the video games themselves). Much of the research examining the structural characteristics of video games was initially based on research and theorizing from the gambling studies field. The present review briefly overviews the key papers in the field to date. The paper examines a number of areas including (i) similarities in structural characteristics of gambling and video gaming, (ii) structural characteristics in video games, (iii) narrative and flow in video games, (iv) structural characteristic taxonomies for video games, and (v) video game structural characteristics and game design ethics. Many of the studies carried out to date are small-scale, and comprise self-selected convenience samples (typically using self-report surveys or non-ecologically valid laboratory experiments). Based on the small amount of empirical data, it appears that structural features that take a long time to achieve in-game are the ones most associated with problematic video game play (e.g., earning experience points, managing in-game resources, mastering the video game, getting 100% in-game). The study of video games from a structural characteristic perspective is of benefit to many different stakeholders including academic researchers, video game players, and video game designers, as well as those interested in prevention and policymaking by making the games more socially responsible. It is important that researchers understand and recognize the psycho-social effects and impacts that the structural characteristics of video games can have on players, both positive and negative.

  12. Depth-based human fall detection via shape features and improved extreme learning machine.

    Science.gov (United States)

    Ma, Xin; Wang, Haibo; Xue, Bingxia; Zhou, Mingang; Ji, Bing; Li, Yibin

    2014-11-01

    Falls are one of the major causes of injury in elderly people. Using wearable devices for fall detection has a high cost and may cause inconvenience to the daily lives of the elderly. In this paper, we present an automated fall detection approach that requires only a low-cost depth camera. Our approach combines two computer vision techniques: shape-based fall characterization and a learning-based classifier to distinguish falls from other daily actions. Given a fall video clip, we extract curvature scale space (CSS) features of human silhouettes at each frame and represent the action by a bag of CSS words (BoCSS). Then, we utilize the extreme learning machine (ELM) classifier to identify the BoCSS representation of a fall from those of other actions. In order to eliminate the sensitivity of ELM to its hyperparameters, we present a variable-length particle swarm optimization algorithm to optimize the number of hidden neurons and the corresponding input weights and biases of ELM. Using a low-cost Kinect depth camera, we built an action dataset that consists of six types of actions (falling, bending, sitting, squatting, walking, and lying) from ten subjects. Experiments on the dataset show that our approach can achieve up to 91.15% sensitivity, 77.14% specificity, and 86.83% accuracy. On a public dataset, our approach performs comparably to state-of-the-art fall detection methods that need multiple cameras.
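    A minimal extreme learning machine (ELM) classifier of the kind named above can be sketched as a random hidden layer followed by output weights solved in closed form by least squares. The CSS/bag-of-words feature extraction and the variable-length PSO hyperparameter search from the paper are omitted, and the synthetic two-class data merely stands in for fall versus non-fall feature vectors.

        # Minimal ELM sketch: random hidden weights, least-squares output weights.
        import numpy as np

        class ELM:
            def __init__(self, n_hidden=64, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def _hidden(self, X):
                return np.tanh(X @ self.W + self.b)          # random feature map

            def fit(self, X, y):
                n_classes = int(y.max()) + 1
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                T = np.eye(n_classes)[y]                      # one-hot targets
                H = self._hidden(X)
                self.beta, *_ = np.linalg.lstsq(H, T, rcond=None)
                return self

            def predict(self, X):
                return (self._hidden(X) @ self.beta).argmax(axis=1)

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            X = np.vstack([rng.normal(0, 1, (100, 20)), rng.normal(3, 1, (100, 20))])
            y = np.array([0] * 100 + [1] * 100)               # e.g. non-fall vs. fall
            model = ELM().fit(X, y)
            print("training accuracy:", (model.predict(X) == y).mean())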

  13. Highlight detection for video content analysis through double filters

    Science.gov (United States)

    Sun, Zhonghua; Chen, Hexin; Chen, Mianshu

    2005-07-01

    Highlight detection is a video summarization technique aiming to include the most expressive or attractive parts of a video. Most research on highlight selection has been performed on sports video, detecting specific objects or events such as goals in soccer, touchdowns in football, and others. In this paper, we present a highlight detection method for film video. A highlight section in a film is unlike that in sports video, which usually revolves around specific objects or events. Determining a highlight in a film involves three aspects: (a) locating an obvious audio event, (b) detecting expressive visual content around that audio location, and (c) selecting the preferred portion of the extracted audio-visual highlight segments. We define a double-filter model to detect potential highlights in video. First, an obvious audio location is determined by filtering the salient audio features; then potential visual salience is detected around the candidate audio highlight location. Finally, the output of the audio-visual double filters is compared with a preference threshold to determine the final highlights. The user study results indicate that the double-filter detection approach is an effective method for highlight detection in video content analysis.
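    A toy version of the double-filter idea is sketched below on synthetic per-frame audio-energy and visual-activity signals: frames with conspicuous audio energy pass the first filter, and only those whose surrounding visual activity is also high survive the second, thresholded step. The window size, weights, and thresholds are illustrative assumptions, not the authors' settings.

        # Double-filter highlight detection on synthetic signals (illustrative only).
        import numpy as np

        def double_filter(audio_energy, visual_activity, win=5,
                          audio_thresh=2.0, final_thresh=3.0):
            audio_z = (audio_energy - audio_energy.mean()) / audio_energy.std()
            candidates = np.flatnonzero(audio_z > audio_thresh)      # filter 1: audio
            highlights = []
            for t in candidates:
                lo, hi = max(0, t - win), min(len(visual_activity), t + win + 1)
                visual_score = visual_activity[lo:hi].mean()          # filter 2: visual
                if 0.5 * audio_z[t] + 0.5 * visual_score > final_thresh:
                    highlights.append(int(t))
            return highlights

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            audio = rng.normal(0, 1, 500)
            visual = rng.normal(0, 1, 500)
            audio[200:205] += 6      # loud event
            visual[195:210] += 4     # accompanying visual burst
            # expected: the frames around the injected event (roughly 200-204)
            print(double_filter(np.abs(audio), np.abs(visual)))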

  14. BICM-ID scheme for clipped DCO-OFDM in visible light communications.

    Science.gov (United States)

    Tan, Jiandong; Wang, Zhaocheng; Wang, Qi; Dai, Linglong

    2016-03-07

    Visible light communication (VLC) is recommended for indoor transmission in 5G networks, whereby DC-biased optical orthogonal frequency division multiplexing (DCO-OFDM) is adopted to eliminate inter-symbol interference (ISI) but suffers from considerable performance loss induced by clipping distortion. In this paper, a bit-interleaved coded modulation with iterative demapping and decoding (BICM-ID) scheme for clipped DCO-OFDM is investigated to enhance the performance of VLC systems. In order to further mitigate the clipping distortion, a novel soft demapping criterion is proposed, and a simplified demapping algorithm is developed to reduce the complexity of the proposed criterion. Simulation results illustrate that the enhanced demapping algorithm achieves a significant performance gain.
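    The sketch below shows where the clipping distortion in DCO-OFDM originates: Hermitian symmetry yields a real time-domain signal, a DC bias is added, and the remaining negative excursions are clipped because an LED cannot emit negative intensity. The bias level is deliberately low so that clipping is visible, and the iterative BICM-ID demapping studied in the paper is not reproduced; all parameters are placeholders.

        # DCO-OFDM clipping illustration (no coding or demapping).
        import numpy as np

        rng = np.random.default_rng(4)
        N = 64                                     # IFFT size
        qpsk = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N // 2 - 1)

        X = np.zeros(N, dtype=complex)
        X[1:N // 2] = qpsk
        X[N // 2 + 1:] = np.conj(qpsk[::-1])       # Hermitian symmetry -> real signal
        x = np.fft.ifft(X).real * np.sqrt(N)

        bias = x.std()                             # deliberately low DC bias
        x_dco = x + bias
        x_clipped = np.clip(x_dco, 0.0, None)      # LEDs cannot emit negative intensity

        distortion = x_clipped - x_dco
        print(f"samples clipped: {np.count_nonzero(distortion)} of {N}")
        print(f"clipping noise power: {np.mean(distortion ** 2):.4f}")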

  15. Body-related film clip triggers desire to binge in women with binge eating disorder.

    Science.gov (United States)

    Svaldi, Jennifer; Caffier, Detlef; Blechert, Jens; Tuschen-Caffier, Brunna

    2009-09-01

    Previous research suggests that excessive influence of shape or weight concern on self-evaluation is strongly associated with psychological functioning in women with binge eating disorder (BED). However, little is known so far about its direct influence on binge episodes. In an experimental study, 27 women with BED (DSM-IV) and 25 overweight healthy controls watched a body-related film clip. Ratings of the desire to binge and mood were assessed prior to and at the end of the film clip. Additionally, measures of heart rate, finger pulse and electrodermal activity were obtained. Main results revealed a significant increase in the desire to binge, sadness and anxiety, as well as a significant increase in non-specific skin conductance fluctuation on the body-related clip in the group of BED only. The results underline the importance of shape and weight concerns in BED.

  16. Transcriptome-wide identification of RNA binding sites by CLIP-seq.

    Science.gov (United States)

    Murigneux, Valentine; Saulière, Jérôme; Roest Crollius, Hugues; Le Hir, Hervé

    2013-09-01

    An emergent strategy for the transcriptome-wide study of protein-RNA interactions is CLIP-seq (crosslinking and immunoprecipitation followed by high-throughput sequencing). We combined CLIP-seq and mRNA-seq to identify direct RNA binding sites of eIF4AIII in human cells. This RNA helicase is a core constituent of the Exon Junction Complex (EJC), a multifunctional protein complex associated with spliced mRNAs in metazoans. Here, we describe the successive steps of the CLIP protocol and the computational tools and strategies we employed to map the physiological targets of eIF4AIII on human RNAs. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. A comparative study of female sterilization via modified Uchida and silver clip techniques in rural China.

    Science.gov (United States)

    Qiu, Hongyan; Li, Li; Wu, Shangchun; Liang, Hong; Yuan, Wei; He, Yingqin

    2011-03-01

    To compare the specific effects of 2 female sterilization methods: the modified Uchida technique and the application of silver clips. A total of 2198 women living in rural areas who were still of reproductive age but opting for sterilization were enrolled. The participants were randomly divided into 2 groups and underwent sterilization by either the modified Uchida technique or silver clips. Information on acceptability, operation conditions, effectiveness, adverse effects, and complaints was collected 3, 6, and 12 months after the procedure. No significant difference in effectiveness, adverse effects, or chief complaints between the 2 procedures was found. Differences in operative outcome, bleeding volume during the procedure, and operation time were found. A shorter operation time and less bleeding for the silver clip method indicated that female sterilization by this technique was as safe as that by the modified Uchida technique. Copyright © 2010 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.

  18. Understanding Video Games

    DEFF Research Database (Denmark)

    Heide Smith, Jonas; Tosca, Susana Pajares; Egenfeldt-Nielsen, Simon

    From Pong to PlayStation 3 and beyond, Understanding Video Games is the first general introduction to the exciting new field of video game studies. This textbook traces the history of video games, introduces the major theories used to analyze games such as ludology and narratology, reviews the economics of the game industry, examines the aesthetics of game design, surveys the broad range of game genres, explores player culture, and addresses the major debates surrounding the medium, from educational benefits to the effects of violence. Throughout the book, the authors ask readers to consider larger questions about the medium: * What defines a video game? * Who plays games? * Why do we play games? * How do games affect the player? Extensively illustrated, Understanding Video Games is an indispensable and comprehensive resource for those interested in the ways video games are reshaping...

  19. Reflections on academic video

    Directory of Open Access Journals (Sweden)

    Thommy Eriksson

    2012-11-01

    As academics we study, research and teach audiovisual media, yet rarely disseminate and mediate through it. Today, developments in production technologies have enabled academic researchers to create videos and mediate audiovisually. In academia it is taken for granted that everyone can write a text. Is it now time to assume that everyone can make a video essay? Using the online journal of academic videos Audiovisual Thinking and the videos published in it as a case study, this article seeks to reflect on the emergence and legacy of academic audiovisual dissemination. Anchoring academic video and audiovisual dissemination of knowledge in two critical traditions, documentary theory and semiotics, we will argue that academic video is in fact already present in a variety of academic disciplines, and that academic audiovisual essays are bringing trends and developments that have long been part of academic discourse to their logical conclusion.

  20. Examining in vivo tympanic membrane mobility using smart phone video-otoscopy and phase-based Eulerian video magnification

    Science.gov (United States)

    Janatka, Mirek; Ramdoo, Krishan S.; Tatla, Taran; Pachtrachai, Krittin; Elson, Daniel S.; Stoyanov, Danail

    2017-03-01

    The tympanic membrane (TM) is the bridging element between the pressure waves of sound in air and the ossicular chain. It allows sound to be conducted into the inner ear, achieving the human sense of hearing. Otitis media with effusion (OME, commonly referred to as 'glue ear') is a typical condition in infants that prevents the vibration of the TM and causes conductive hearing loss, which can stunt early-stage development if undiagnosed. Furthermore, OME is hard to identify in this age group, as infants cannot respond to typical audiometry tests. Tympanometry allows the mobility of the TM to be examined without patient response, but requires expensive apparatus and specialist training. By combining a smartphone equipped with a 240 frames per second video recording capability with an otoscopic clip-on accessory, this paper presents a novel application of Eulerian Video Magnification (EVM) to video-otology that could assist in diagnosing OME. We present preliminary results showing a spatio-temporal slice taken from an exaggerated video visualization of the TM being excited in vivo in a healthy ear. Our preliminary results demonstrate the potential of such an approach for diagnosing OME under visual inspection as an alternative to tympanometry, which could be used remotely and hence help diagnosis in a wider population pool.
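    A much-simplified, intensity-based (not phase-based) Eulerian magnification sketch is shown below: each pixel's time series is temporally band-pass filtered around the expected vibration frequency and the filtered component is amplified and added back. The filter band, amplification factor, and synthetic "video" are illustrative assumptions, not the authors' processing chain.

        # Simplified Eulerian magnification of a tiny synthetic video.
        import numpy as np

        def bandpass_amplify(video, fps, lo, hi, alpha=20.0):
            """video: (T, H, W) float array; returns a motion-amplified copy."""
            T = video.shape[0]
            freqs = np.fft.rfftfreq(T, d=1.0 / fps)
            mask = (freqs >= lo) & (freqs <= hi)
            spectrum = np.fft.rfft(video, axis=0)
            spectrum[~mask] = 0                       # ideal temporal band-pass per pixel
            filtered = np.fft.irfft(spectrum, n=T, axis=0)
            return video + alpha * filtered

        if __name__ == "__main__":
            fps, T = 240, 240
            t = np.arange(T) / fps
            frame = np.ones((8, 8))
            vibration = 0.01 * np.sin(2 * np.pi * 30 * t)       # tiny 30 Hz oscillation
            video = frame[None, :, :] + vibration[:, None, None]
            out = bandpass_amplify(video, fps, lo=25, hi=35)
            print("original peak-to-peak:", np.ptp(video[:, 4, 4]).round(4))
            print("amplified peak-to-peak:", np.ptp(out[:, 4, 4]).round(4))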

  1. Efficacy of clip-wrapping in treatment of complex pediatric aneurysms.

    Science.gov (United States)

    Bowers, Christian; Riva-Cambrin, Jay; Couldwell, William T

    2012-12-01

    Pediatric aneurysms (PAs) are distinct from their adult counterparts with respect to typical location, aneurysm type, and known predisposing risk factors. Many strategies have been employed to treat PAs; however, although clip wrapping has been used frequently in adults, it has been reported only once in pediatric patients. We present a series of pediatric patients who underwent clip wrapping and discuss this strategy as an effective means of treating unclippable PAs. Pediatric patients with clip-wrapped aneurysms over a 5-year period were retrospectively identified. Clinical presentation, surgical management, and clinical and radiological outcome of the patients were evaluated. Five pediatric patients with aneurysms were treated with clip wrapping during the specified period. Three had traumatic pseudoaneurysms, with two subarachnoid hemorrhages from aneurysm rupture. One patient presented with mycotic pseudoaneurysm rupture causing a large intraparenchymal and subarachnoid hemorrhage. Another patient had a dissecting complex saccular lenticulostriate aneurysm with four perforating vessels arising from the dome. Four patients had good clinical results, with Glasgow Outcome Scale (GOS) scores of 5 after at least 1 year of follow-up (mean 24.2); one patient had a GOS score of 5 at discharge but no additional follow-up. Postoperative neuroimaging demonstrated vessel patency after clip wrapping with no recurrent hemorrhages or increase in aneurysm size; however, one patient had delayed progressive occlusion of the artery and a small clinical ischemic event, from which she fully recovered. Clip wrapping appears to be an effective, underutilized technique for the treatment of complex pediatric aneurysms that cannot be treated with conventional methods.

  2. Keyhole Approach for Clipping Intracranial Aneurysm: Comparison of Supraorbital and Pterional Keyhole Approach.

    Science.gov (United States)

    Lan, Qing; Zhang, Hengzhu; Zhu, Qing; Chen, Ailin; Chen, Yanming; Xu, Liang; Wang, Zhongyong; Yuan, Liqun; Liu, Shihai

    2017-06-01

    The aim of this research was to compare the functional outcome and safety of the supraorbital keyhole approach (SKA) and the pterional keyhole approach (PKA) for clipping intracranial aneurysms. This is a retrospective study involving 318 patients with a total of 365 aneurysms who underwent keyhole surgery, comprising 195 cases in the SKA group and 123 cases in the PKA group. The outcome measures include Glasgow Outcome Scale, complete clipping rate, adverse event incidence, operative view angle, working distance, and surgical incision condition. A total of 356 aneurysms were clipped and 9 trapped; no significant difference was observed in Glasgow Outcome Scale score, adverse event incidence, or complete clipping rate between the SKA and PKA groups. The distance from the skin incision to the anterior clinoid process was 5.87 ± 0.24 cm in SKA and 5.12 ± 0.27 cm in PKA. The operative view angle (from the midline to the operating channel in the sagittal plane) was 30°-40° in the SKA group and 60°-68° in the PKA group. Our research demonstrates that both SKA and PKA are safe and effective for most anterior circulation aneurysms and some posterior circulation aneurysms. The SKA exposes the aneurysm better in the deep and sagittal directions and is more suitable for clipping aneurysms via the contralateral approach because of the shorter distance. The PKA provides good exposure of the aneurysm neck when the parent artery is directed dorsally and can be used to evacuate hematoma in the temporal lobe when clipping the aneurysm. Integrating multimodal 3-dimensional images could help the neurosurgeon select an appropriate and effective approach. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. New concept of 3D printed bone clip (polylactic acid/hydroxyapatite/silk composite) for internal fixation of bone fractures.

    Science.gov (United States)

    Yeon, Yeung Kyu; Park, Hae Sang; Lee, Jung Min; Lee, Ji Seung; Lee, Young Jin; Sultan, Md Tipu; Seo, Ye Bin; Lee, Ok Joo; Kim, Soon Hee; Park, Chan Hum

    2017-10-03

    Open reduction with internal fixation is commonly used for the treatment of bone fractures. However, postoperative infection associated with internal fixation devices (intramedullary nails, plates, and screws) remains a significant complication, and it is technically difficult to fix multiple fragmented bony fractures using internal fixation devices. In addition, drilling in the bone to install devices can lead to secondary fracture and bone necrosis associated with postoperative infection. In this study, we developed a bone clip type internal fixation device using three-dimensional (3D) printing technology. A standard 3D model of the bone clip was generated based on a computed tomography (CT) scan of the femur in the rat. Polylactic acid (PLA), hydroxyapatite (HA), and silk were used as bone clip materials. The purpose of this study was to characterize 3D printed PLA, PLA/HA, and PLA/HA/Silk composite bone clips and evaluate their feasibility as internal fixation devices. Based on the results, the PLA/HA/Silk composite bone clip showed similar mechanical properties and superior biocompatibility compared to the other types of bone clip. The PLA/HA/Silk composite bone clip demonstrated excellent alignment of the bony segments across the femur fracture site, with a well-positioned bone clip, in an animal study. Our 3D printed bone clips have several advantages: (1) they are relatively noninvasive (drilling in the bone is not necessary), (2) they have a patient-specific design, (3) they are mechanically stable, and (4) they provide high biocompatibility. Therefore, we suggest that our 3D printed PLA/HA/Silk composite bone clip is a possible internal fixation device.

  4. Video game use and cognitive performance: does it vary with the presence of problematic video game use?

    Science.gov (United States)

    Collins, Emily; Freeman, Jonathan

    2014-03-01

    Action video game players have been found to outperform nonplayers on a variety of cognitive tasks. However, several failures to replicate these video game player advantages have indicated that this relationship may not be straightforward. Moreover, despite the discovery that problematic video game players do not appear to demonstrate the same superior performance as nonproblematic video game players in relation to multiple object tracking paradigms, this has not been investigated for other tasks. Consequently, this study compared gamers and nongamers in task switching ability, visual short-term memory, mental rotation, enumeration, and flanker interference, as well as investigated the influence of self-reported problematic video game use. A total of 66 participants completed the experiment, 26 of whom played action video games, including 20 problematic players. The results revealed no significant effect of playing action video games, nor any influence of problematic video game play. This indicates that the previously reported cognitive advantages in video game players may be restricted to specific task features or samples. Furthermore, problematic video game play may not have a detrimental effect on cognitive performance, although this is difficult to ascertain considering the lack of video game player advantage. More research is therefore sorely needed.

  5. Using a CLIPS expert system to automatically manage TCP/IP networks and their components

    Science.gov (United States)

    Faul, Ben M.

    1991-01-01

    An expert system that can directly manage network components on a Transmission Control Protocol/Internet Protocol (TCP/IP) network is described. Previous expert systems for managing networks have focused on managing network faults after they occur. However, this proactive expert system can monitor and control network components in near real time. The ability to directly manage network elements from the C Language Integrated Production System (CLIPS) is accomplished by integrating the Simple Network Management Protocol (SNMP) and an Abstract Syntax Notation (ASN) parser into the CLIPS artificial intelligence language.
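    A plain-Python stand-in for the CLIPS-style production rules described above is sketched here: asserted facts about network interfaces are matched against simple condition/action rules in a forward-chaining pass. The fact fields, thresholds, and rule names are invented for illustration; in the actual system the facts would be asserted from SNMP variables and the rules would be written in CLIPS itself.

        # Toy forward-chaining rule pass over asserted network-interface facts.
        RULES = [
            {"name": "link-down",
             "if": lambda f: f["oper_status"] != "up",
             "then": lambda f: f"alert: interface {f['ifindex']} is down"},
            {"name": "high-utilisation",
             "if": lambda f: f["utilisation"] > 0.9,
             "then": lambda f: f"alert: interface {f['ifindex']} above 90% utilisation"},
        ]

        def run_rules(facts):
            """Apply every rule to every asserted fact and collect the actions."""
            actions = []
            for fact in facts:
                for rule in RULES:
                    if rule["if"](fact):
                        actions.append(rule["then"](fact))
            return actions

        if __name__ == "__main__":
            facts = [
                {"ifindex": 1, "oper_status": "up", "utilisation": 0.35},
                {"ifindex": 2, "oper_status": "down", "utilisation": 0.0},
                {"ifindex": 3, "oper_status": "up", "utilisation": 0.97},
            ]
            for action in run_rules(facts):
                print(action)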

  6. The use of shape memory compression anastomosis clips in cholecystojejunostomy in pigs – a preliminary study

    Directory of Open Access Journals (Sweden)

    Piotr Holak

    2015-01-01

    This paper reports on the use of compression anastomosis clips (CAC) in cholecystoenterostomy in an animal model. Cholecystojejunostomy was performed in 6 pigs using implants made of nickel-titanium alloy in the form of elliptical springs with two-way shape memory. The applied procedure led to the achievement of tight anastomosis with a minimal number of complications and positive results of histopathological evaluations of the anastomotic site. The results of the study indicate that shape memory NiTi clips are a promising surgical tool for cholecystoenterostomy in cats and dogs.

  7. A CLIPS-based tool for aircraft pilot-vehicle interface design

    Science.gov (United States)

    Fowler, Thomas D.; Rogers, Steven P.

    1991-01-01

    The Pilot-Vehicle Interface of modern aircraft is the cognitive, sensory, and psychomotor link between the pilot, the avionics modules, and all other systems on board the aircraft. To assist pilot-vehicle interface designers, a C Language Integrated Production System (CLIPS)-based tool was developed that allows design information to be stored in a table that can be modified by rules representing design knowledge. Developed for the Apple Macintosh, the tool allows users without any CLIPS programming experience to form simple rules using a point-and-click interface.

  8. A case of Sengstaken-Blakemore tube-induced esophageal rupture repaired by endoscopic clipping.

    Science.gov (United States)

    Jung, Jin Hwan; Kim, Jin Il; Song, Jun Ho; Kim, Jeong Ho; Lee, Sang Hun; Cheung, Dae Young; Park, Soo Heon; Kim, Jae Kwang

    2011-01-01

    A 57-year-old man was admitted to another hospital for hematemesis due to heavy drinking. A Sengstaken-Blakemore tube was inserted and the patient was transferred to our hospital. The patient's ensuing movements inadvertently caused an esophageal rupture 2.5 cm in size. Since the patient's condition was stable, treatment via endoscopic repair using metallic clips was chosen over emergency surgery. Two hemoclips were fixed at the ends of the ruptured area; by employing an endoscopic detachable snare, the ruptured area was carefully repaired with 10 metallic clips. As a result, the esophageal rupture could be successfully repaired by endoscopic procedure rather than performing surgery.

  9. Fundamental Frequency Extraction Method using Central Clipping and its Importance for the Classification of Emotional State

    Directory of Open Access Journals (Sweden)

    Pavol Partila

    2012-01-01

    The paper deals with the classification of emotional state. We implemented a method for extracting the fundamental speech signal frequency by means of central clipping and examined the correlation between emotional state and fundamental speech frequency. For this purpose, we applied an exploratory data analysis approach. The ANOVA (analysis of variance) test confirmed that a change in the speaker's emotional state changes the fundamental frequency of the human vocal tract. The main contribution of the paper lies in the investigation of the central clipping method by means of ANOVA.
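    A short sketch of F0 extraction by central clipping followed by autocorrelation is given below, evaluated on a synthetic 150 Hz harmonic signal. The 30% clipping level and the 60-400 Hz search range are conventional choices rather than values taken from the paper.

        # Central clipping + autocorrelation pitch estimator (illustrative values).
        import numpy as np

        def f0_central_clipping(signal, fs, clip_level=0.3, f_min=60, f_max=400):
            c = clip_level * np.abs(signal).max()
            clipped = np.where(signal > c, signal - c,
                      np.where(signal < -c, signal + c, 0.0))   # central clipping
            ac = np.correlate(clipped, clipped, mode="full")[len(clipped) - 1:]
            lag_min, lag_max = int(fs / f_max), int(fs / f_min)
            lag = lag_min + np.argmax(ac[lag_min:lag_max])
            return fs / lag

        if __name__ == "__main__":
            fs = 16000
            t = np.arange(0, 0.05, 1 / fs)
            speech_like = (np.sin(2 * np.pi * 150 * t)
                           + 0.5 * np.sin(2 * np.pi * 300 * t)
                           + 0.1 * np.random.default_rng(5).normal(size=t.size))
            print(f"estimated F0: {f0_central_clipping(speech_like, fs):.1f} Hz")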

  10. Using Behavioral Parent Training to Treat Disruptive Behavior Disorders in Young Children: A How-to Approach Using Video Clips

    Science.gov (United States)

    Borrego, Joaquin, Jr.; Burrell, T. Lindsey

    2010-01-01

    This article describes the application of a behavioral parent training program, Parent-Child Interaction Therapy (PCIT), in the treatment of behavior disorders in young children. PCIT is unique in that it works with both the child and parent in treatment and it focuses on improving the parent-child relationship as a means to improving parent and…

  11. Underwater Video Clip and Still Imagery for Ground Validation (GV) and Accuracy Assessment (AA) of the Florida Keys

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This project is a cooperative effort between the National Ocean Service, Office of National Marine Sanctuaries, the National Centers for Coastal Ocean Science,...

  12. Video Shot Boundary Detection based on Multifractal Analysis

    Directory of Open Access Journals (Sweden)

    B. D. Reljin

    2011-11-01

    Extracting video shots is an essential preprocessing step for almost all video analysis, indexing, and other content-based operations. This process is equivalent to detecting the shot boundaries in a video. In this paper we present video Shot Boundary Detection (SBD) based on Multifractal Analysis (MA). Low-level features (color and texture) are extracted from each frame in the video sequence. The features are concatenated into feature vectors (FVs) and stored in a feature matrix. Matrix rows correspond to the FVs of frames from the video sequence, while columns are time series of a particular FV component. Multifractal analysis is applied to the FV component time series, and shot boundaries are detected as high singularities of the time series above a predefined threshold. The proposed SBD method is tested on a real video sequence with 64 shots and manually labeled shot boundaries. Detection accuracy depends on the number of FV components used. For only one FV component, detection accuracy lies in the range 76-92% (depending on the selected threshold), while by combining two FV components all shots are detected completely (100% accuracy).
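    The sketch below reproduces the surrounding pipeline, building the feature matrix (rows = frames, columns = FV components) and treating each column as a time series, but replaces the multifractal singularity measure with a crude stand-in: a boundary is flagged where a component's local increment greatly exceeds the median increment in its neighbourhood. The descriptors, window, and factor are illustrative assumptions only.

        # Feature-matrix construction plus a simplified boundary test.
        import numpy as np

        def feature_matrix(frames):
            """Concatenate a grey-level mean and a coarse histogram per frame."""
            fvs = []
            for f in frames:
                grey = f.mean(axis=2)
                hist, _ = np.histogram(grey, bins=8, range=(0, 256), density=True)
                fvs.append(np.concatenate(([grey.mean() / 255.0], hist)))
            return np.asarray(fvs)                  # shape (n_frames, n_components)

        def detect_boundaries(F, column=0, half_window=10, factor=10.0):
            series = F[:, column]
            inc = np.abs(np.diff(series))
            boundaries = []
            for i, v in enumerate(inc):
                lo, hi = max(0, i - half_window), min(len(inc), i + half_window + 1)
                if v > factor * (np.median(inc[lo:hi]) + 1e-9):
                    boundaries.append(i + 1)
            return boundaries

        if __name__ == "__main__":
            rng = np.random.default_rng(6)
            shot_a = [rng.integers(0, 80, (36, 48, 3), dtype=np.uint8) for _ in range(30)]
            shot_b = [rng.integers(150, 256, (36, 48, 3), dtype=np.uint8) for _ in range(30)]
            F = feature_matrix(shot_a + shot_b)
            print(detect_boundaries(F))             # expected: boundary near frame 30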

  13. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional quality sound on a limited budget! Harness all-new, Hollywood-style audio techniques to bring your independent film and video productions to the next level. In Sound for Digital Video, Second Edition, industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chock-full of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  14. Green Power Partnership Videos

    Science.gov (United States)

    The Green Power Partnership regularly develops videos that explore a variety of topics, including the Green Power Partnership, green power purchasing, and renewable energy certificates, among others.

  15. Automatic topics segmentation for TV news video

    Science.gov (United States)

    Hmayda, Mounira; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    Automatic identification of television programs in a TV stream is an important task for operating archives. This article proposes a new spatio-temporal approach to identify the programs in a TV stream in two main steps. First, a reference catalogue of the visual jingles that characterize each program is built from video features: these features capture the visual invariants of instances of the same program type, using appropriate automatic descriptors for each television program. Second, programs in the television stream are identified by examining the similarity of the video signal to the visual jingles in the catalogue. The main idea of the identification process is thus to compare the visual features of the television stream to the catalogue. After presenting the proposed approach, the paper reports encouraging experimental results on several streams extracted from different channels and composed of several programs.
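    A minimal sketch of the catalogue-matching step follows: each programme's jingle is represented by a reference descriptor, and an observed frame descriptor is assigned to the most similar catalogue entry when the cosine similarity exceeds a threshold. The random descriptors and the 0.9 threshold are placeholders, not the descriptors or decision rule used in the paper.

        # Catalogue matching by cosine similarity (illustrative placeholders).
        import numpy as np

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def identify(frame_descriptor, catalogue, threshold=0.9):
            """catalogue: dict mapping programme name -> reference jingle descriptor."""
            best_name, best_sim = None, -1.0
            for name, ref in catalogue.items():
                sim = cosine(frame_descriptor, ref)
                if sim > best_sim:
                    best_name, best_sim = name, sim
            return (best_name, best_sim) if best_sim >= threshold else (None, best_sim)

        if __name__ == "__main__":
            rng = np.random.default_rng(7)
            catalogue = {"news": rng.random(64), "weather": rng.random(64)}
            observed = catalogue["news"] + 0.02 * rng.random(64)   # noisy news jingle
            print(identify(observed, catalogue))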

  16. The Effects of Reviews in Video Tutorials

    Science.gov (United States)

    van der Meij, H.; van der Meij, J.

    2016-01-01

    This study investigates how well a video tutorial for software training that is based on Demonstration-Based Teaching supports user motivation and performance. In addition, it is studied whether reviews significantly contribute to these measures. The Control condition employs a tutorial with instructional features added to a dynamic task…

  17. Heartbeat Rate Measurement from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    by combining a ‘Good feature to track’ and a ‘Supervised descent method’ in order to overcome the limitations of currently available facial video based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face...

  18. Assessment of Caudal Fin Clips as a Non-lethal Technique for Predicting Muscle Tissue Mercury Concentrations in Largemouth Bass

    Science.gov (United States)

    The statistical relationship between total mercury (Hg) concentration in clips from the caudal fin and muscle tissue of largemouth bass (Micropterus salmoides) from 26 freshwater sites in Rhode Island, USA was developed and evaluated to determine the utility of fin clip analysis ...

  19. Copper analysis of nail clippings an attempt to differentiate between normal children and patients suffering from cystic fibrosis

    NARCIS (Netherlands)

    Stekelenburg, G.J. van; Laar, A.J.B. van de; Laag, J. van der

    The copper content of fingernail and toenail clippings was determined for 39 normal children and 36 patients suffering from cystic fibrosis (C/F). From this study it can be concluded that, although the patients with cystic fibrosis (C/F) have a higher copper content, the

  20. Videos Bridging Asia and Africa: Overcoming Cultural and Institutional Barriers in Technology-Mediated Rural Learning

    Science.gov (United States)

    Van Mele, Paul; Wanvoeke, Jonas; Akakpo, Cyriaque; Dacko, Rosaline Maiga; Ceesay, Mustapha; Beavogui, Louis; Soumah, Malick; Anyang, Robert

    2010-01-01

    Will African farmers watch and learn from videos featuring farmers in Bangladesh? Learning videos on rice seed management were made with rural women in Bangladesh. By using a new approach, called zooming-in, zooming-out, the videos were made regionally relevant and locally appropriate. When the Africa Rice Center (AfricaRice) introduced them to…