WorldWideScience

Sample records for video clips featuring

  1. Content-based video retrieval by example video clip

    Science.gov (United States)

    Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed

    1997-01-01

    This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information (`DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.
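The signature-and-matching idea can be sketched in a few lines. The following is a hypothetical Python illustration, not the authors' exact `DC+M` method: each representative frame is reduced to a grid of block-average (DC-like) values, and two clips are compared by how closely each query frame matches its best counterpart in the candidate clip.

```python
def dc_signature(frame, block=4):
    """Reduce a 2-D luminance frame to its per-block averages (DC-like values)."""
    h, w = len(frame), len(frame[0])
    sig = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = [frame[y + dy][x + dx] for dy in range(block) for dx in range(block)]
            sig.append(sum(tile) / (block * block))
    return sig

def frame_distance(a, b):
    """Mean absolute difference between two frame signatures."""
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

def clip_similarity(sigs_a, sigs_b):
    """Average, over clip A's frame signatures, the distance to the
    closest frame signature of clip B (0 = identical content)."""
    return sum(min(frame_distance(fa, fb) for fb in sigs_b) for fa in sigs_a) / len(sigs_a)
```

A clip compared against itself scores 0; larger values indicate less similar content, so the database clips can simply be ranked by this score.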

  2. Authoring Data-Driven Videos with DataClips.

    Science.gov (United States)

    Amini, Fereshteh; Riche, Nathalie Henry; Lee, Bongshin; Monroy-Hernandez, Andres; Irani, Pourang

    2017-01-01

    Data videos, or short data-driven motion graphics, are an increasingly popular medium for storytelling. However, creating data videos is difficult as it involves pulling together a unique combination of skills. We introduce DataClips, an authoring tool aimed at lowering the barriers to crafting data videos. DataClips allows non-experts to assemble data-driven "clips" together to form longer sequences. We constructed the library of data clips by analyzing the composition of over 70 data videos produced by reputable sources such as The New York Times and The Guardian. We demonstrate that DataClips can reproduce over 90% of our data videos corpus. We also report on a qualitative study comparing the authoring process and outcome achieved by (1) non-experts using DataClips, and (2) experts using Adobe Illustrator and After Effects to create data-driven clips. Results indicated that non-experts are able to learn and use DataClips with a short training period. In the span of one hour, they were able to produce more videos than experts using a professional editing tool, and their clips were rated similarly by an independent audience.

  3. Chinese Language Video Clips. [CD-ROM].

    Science.gov (United States)

    Fleming, Stephen; Hipley, David; Ning, Cynthia

This compact disc includes video clips covering six topics for the learner of Chinese: personal information, commercial transactions, travel and leisure, health and sports, food, and school. Filmed on location in Beijing, these naturalistic video clips consist mainly of unrehearsed interviews of ordinary people. The learner is led through a series…

  4. Digital video clips for improved pedagogy and illustration of scientific research — with illustrative video clips on atomic spectrometry

    Science.gov (United States)

    Michel, Robert G.; Cavallari, Jennifer M.; Znamenskaia, Elena; Yang, Karl X.; Sun, Tao; Bent, Gary

    1999-12-01

    This article is an electronic publication in Spectrochimica Acta Electronica (SAE), a section of Spectrochimica Acta Part B (SAB). The hardcopy text is accompanied by an electronic archive, stored on the CD-ROM accompanying this issue. The archive contains video clips. The main article discusses the scientific aspects of the subject and explains the purpose of the video files. Short, 15-30 s, digital video clips are easily controllable at the computer keyboard, which gives a speaker the ability to show fine details through the use of slow motion. Also, they are easily accessed from the computer hard drive for rapid extemporaneous presentation. In addition, they are easily transferred to the Internet for dissemination. From a pedagogical point of view, the act of making a video clip by a student allows for development of powers of observation, while the availability of the technology to make digital video clips gives a teacher the flexibility to demonstrate scientific concepts that would otherwise have to be done as 'live' demonstrations, with all the likely attendant misadventures. Our experience with digital video clips has been through their use in computer-based presentations by undergraduate and graduate students in analytical chemistry classes, and by high school and middle school teachers and their students in a variety of science and non-science classes. In physics teaching laboratories, we have used the hardware to capture digital video clips of dynamic processes, such as projectiles and pendulums, for later mathematical analysis.

  5. Speed Biases With Real-Life Video Clips

    Directory of Open Access Journals (Sweden)

    Federica Rossi

    2018-03-01

We live almost literally immersed in an artificial visual world, especially motion pictures. In this exploratory study, we asked whether the best speed for reproducing a video is its original, shooting speed. By using adjustment and double staircase methods, we examined speed biases in viewing real-life video clips in three experiments, and assessed their robustness by manipulating visual and auditory factors. With the tested stimuli (short clips of human motion, mixed human-physical motion, physical motion, and ego-motion), speed underestimation was the rule rather than the exception, although it depended largely on clip content, ranging on average from 2% (ego-motion) to 32% (physical motion). Manipulating display size or adding arbitrary soundtracks did not modify these speed biases. Estimated speed was not correlated with estimated duration of these same video clips. These results indicate that the sense of speed for real-life video clips can be systematically biased, independently of the impression of elapsed time. Measuring subjective visual tempo may complement traditional methods that assess time perception: speed biases may be exploited to develop a simple, objective test of reality flow, to be used for example in clinical and developmental contexts. From the perspective of video media, measuring speed biases may help to optimize video reproduction speed and validate “natural” video compression techniques based on sub-threshold temporal squeezing.
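As a rough illustration of how a staircase procedure converges on a point of subjective equality (PSE) for playback speed, here is a simulated sketch. The observer model, single (rather than double) staircase, step size, and stopping rule are all assumptions for demonstration, not the study's protocol.

```python
import random

def staircase_pse(perceived_bias, start=1.3, step=0.05, reversals=8, seed=0):
    """1-up/1-down staircase: nudge playback speed up or down depending on
    whether a simulated observer judges the clip 'too fast', and estimate
    the PSE as the mean of the speeds at which the direction reversed."""
    rng = random.Random(seed)
    speed, direction = start, -1
    turns = []
    while len(turns) < reversals:
        # Simulated observer: the clip looks "too fast" when the playback
        # speed exceeds the observer's subjectively natural speed
        # (1 + perceived_bias), perturbed by a little judgment noise.
        too_fast = speed + rng.gauss(0, 0.01) > 1 + perceived_bias
        new_dir = -1 if too_fast else +1
        if new_dir != direction:     # direction change = one reversal
            turns.append(speed)
            direction = new_dir
        speed += direction * step
    return sum(turns) / len(turns)
```

An observer who underestimates speed by 20% yields a PSE near 1.2x playback, i.e. the clip must be sped up to look natural, which is the kind of bias the study measures.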

  6. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Kavallakis, George; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2017-01-01

This paper presents a scheme for creating an emotion index of cover song music video clips by recognizing and classifying facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms employed for expression recognition, along with a neural network system using the features extracted by the SIFT algorithm. We also support the need for this fusion of different expression recognition algorithms because of the way that emotions are linked to facial expressions in music video clips.

  7. Multimodal Feature Learning for Video Captioning

    Directory of Open Access Journals (Sweden)

    Sujin Lee

    2018-01-01

Video captioning refers to the task of generating a natural language sentence that explains the content of the input video clips. This study proposes a deep neural network model for effective video captioning. Apart from visual features, the proposed model additionally learns semantic features that describe the video content effectively. In our model, visual features of the input video are extracted using convolutional neural networks such as C3D and ResNet, while semantic features are obtained using recurrent neural networks such as LSTM. In addition, our model includes an attention-based caption generation network to generate correct natural language captions based on the multimodal video feature sequences. Various experiments, conducted with two large benchmark datasets, Microsoft Video Description (MSVD) and Microsoft Research Video-to-Text (MSR-VTT), demonstrate the performance of the proposed model.

  8. Electroencephalography Amplitude Modulation Analysis for Automated Affective Tagging of Music Video Clips

    Directory of Open Access Journals (Sweden)

    Andrea Clerico

    2018-01-01

The quantity of music content is rapidly increasing, and automated affective tagging of music video clips can enable the development of intelligent retrieval, music recommendation, automatic playlist generators, and music browsing interfaces tuned to the users' current desires, preferences, or affective states. To achieve this goal, the field of affective computing has emerged, in particular the development of so-called affective brain-computer interfaces, which measure the user's affective state directly from brain waves using non-invasive tools such as electroencephalography (EEG). Typically, conventional features extracted from the EEG signal have been used, such as frequency subband powers and/or inter-hemispheric power asymmetry indices. More recently, the coupling between EEG and peripheral physiological signals, such as the galvanic skin response (GSR), has also been proposed. Here, we show the importance of EEG amplitude modulations and propose several new features that measure the amplitude-amplitude cross-frequency coupling per EEG electrode, as well as linear and non-linear connections between multiple electrode pairs. When tested on a publicly available dataset of music video clips tagged with subjective affective ratings, support vector classifiers trained on the proposed features were shown to outperform those trained on conventional benchmark EEG features by as much as 6, 20, 8, and 7% for arousal, valence, dominance, and liking, respectively. Moreover, fusion of the proposed features with EEG-GSR coupling features proved particularly useful for arousal (feature-level fusion) and liking (decision-level fusion) prediction. Together, these findings show the importance of the proposed features for characterizing human affective states during music clip watching.
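To make the amplitude-amplitude cross-frequency coupling idea concrete, here is a small NumPy sketch of one such per-electrode feature: the correlation between the amplitude envelopes of two frequency bands of the same signal. The band limits and the FFT-based filtering/envelope are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np

def bandpass(x, fs, lo, hi):
    """Crude band-pass filter: zero out FFT bins outside [lo, hi] Hz."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, len(x))

def envelope(x):
    """Amplitude envelope as the magnitude of the analytic signal
    (FFT-based Hilbert transform)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1
    h[1:(N + 1) // 2] = 2
    if N % 2 == 0:
        h[N // 2] = 1
    return np.abs(np.fft.ifft(X * h))

def aac(x, fs, band1=(4, 8), band2=(8, 13)):
    """Amplitude-amplitude coupling: correlation between the theta- and
    alpha-band amplitude envelopes of one electrode's signal."""
    e1 = envelope(bandpass(x, fs, *band1))
    e2 = envelope(bandpass(x, fs, *band2))
    return float(np.corrcoef(e1, e2)[0, 1])
```

A signal whose theta and alpha rhythms wax and wane together scores near 1; independent amplitude modulations score near 0.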

  9. Fast and efficient search for MPEG-4 video using adjacent pixel intensity difference quantization histogram feature

    Science.gov (United States)

    Lee, Feifei; Kotani, Koji; Chen, Qiu; Ohmi, Tadahiro

    2010-02-01

In this paper, a fast search algorithm for MPEG-4 video clips in a video database is proposed. An adjacent pixel intensity difference quantization (APIDQ) histogram, which had previously been applied reliably to human face recognition, is utilized as the feature vector of the VOP (video object plane). Instead of the fully decompressed video sequence, partially decoded data, namely the DC sequence of the video object, are extracted from the video sequence. Combined with active search, a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on a total of 15 hours of video containing TV programs such as drama, talk shows, and news, searching for 200 given MPEG-4 video clips, each 15 seconds long. Experimental results show that the proposed algorithm can detect a similar video clip in merely 80 ms, and an Equal Error Rate (EER) of 2% is achieved in the drama and news categories, which is more accurate and robust than the conventional fast video search algorithm.
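The APIDQ feature can be approximated as follows. This is a schematic reading of the idea, quantizing signed adjacent-pixel intensity differences into a coarse histogram, with illustrative threshold values rather than the paper's exact quantization table.

```python
def apidq_histogram(frame, thresholds=(4, 8, 16, 32)):
    """Quantize each horizontal adjacent-pixel intensity difference by
    sign and magnitude level, and return the normalized bin counts.
    Bin layout: [large negative ... small ... large positive]."""
    nbins = 2 * len(thresholds) + 1
    hist = [0] * nbins
    for row in frame:
        for a, b in zip(row, row[1:]):
            d = b - a
            # magnitude level: how many thresholds |d| meets or exceeds
            level = sum(abs(d) >= t for t in thresholds)
            idx = len(thresholds) + (level if d >= 0 else -level)
            hist[idx] += 1
    total = sum(hist)
    return [h / total for h in hist]
```

A flat frame puts all mass in the centre bin; edges and texture spread mass outward, so the histogram acts as a compact, decoding-light signature for matching VOPs.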

  10. Identifying sports videos using replay, text, and camera motion features

    Science.gov (United States)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with minimal decoding, leading to substantial gains in processing speed. Full decoding of selective frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.

  11. Do medical students watch video clips in eLearning and do these facilitate learning?

    Science.gov (United States)

    Romanov, Kalle; Nevgi, Anne

    2007-06-01

There is conflicting evidence on the impact of individual learning style on students' performance in computer-aided learning. We assessed the association of the use of multimedia materials, such as video clips, and collaborative communication tools with learning outcomes among medical students. One hundred and twenty-one third-year medical students attended a course in medical informatics (0.7 credits) consisting of lectures, small group sessions and eLearning material. The eLearning material contained six learning modules with integrated video clips and collaborative learning tools in WebCT. Learning outcome was measured with a course exam. Approximately two-thirds of the students (68.6%) viewed two or more videos. Female students were significantly more active video-watchers. No significant associations were found between video-watching and self-test scores or the time spent in eLearning. Video-watchers were more active in WebCT; they loaded more pages and participated more actively in discussion forums. Video-watching was associated with a better course grade. Students who watched video clips were more active in using collaborative eLearning tools and achieved higher course grades.

  12. News video story segmentation method using fusion of audio-visual features

    Science.gov (United States)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Different from prior works, which are based on visual features alone, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio candidate points, and selects shot boundaries and anchor shots as two kinds of visual candidate points. Then, using the audio candidates as cues, it develops a fusion method that effectively uses the diverse types of visual candidates to refine the audio candidates and obtain story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.
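The audio baseline step can be sketched minimally, assuming short-time energy has already been computed per audio frame; the threshold and minimum run length here are illustrative values, not the paper's parameters.

```python
def silence_candidates(energy, thresh=0.1, min_len=3):
    """Return the centre index of each run of at least `min_len`
    consecutive low-energy frames; each centre is a candidate story
    boundary to be refined later by visual cues."""
    cands, run = [], []
    for i, e in enumerate(energy + [float('inf')]):  # sentinel flushes last run
        if e < thresh:
            run.append(i)
        else:
            if len(run) >= min_len:
                cands.append(run[len(run) // 2])
            run = []
    return cands
```

Runs shorter than `min_len` are treated as incidental pauses and discarded; the surviving candidates are then cross-checked against shot boundaries and anchor shots.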

  13. First- and third-party ground truth for key frame extraction from consumer video clips

    Science.gov (United States)

    Costello, Kathleen; Luo, Jiebo

    2007-02-01

Extracting key frames (KF) from video is of great interest in many applications, such as video summary, video organization, video compression, and prints from video. KF extraction is not a new problem. However, the current literature has focused mainly on sports or news video. In the consumer video space, the biggest challenges for key frame selection are the unconstrained content and the lack of any pre-imposed structure. In this study, we conduct ground truth collection of key frames from video clips taken by digital cameras (as opposed to camcorders) using both first- and third-party judges. The goals of this study are: (1) to create a reference database of video clips reasonably representative of the consumer video space; (2) to identify associated key frames by which automated algorithms can be compared and judged for effectiveness; and (3) to uncover the criteria used by both first- and third-party human judges so these criteria can influence algorithm design. The findings from these ground truths will be discussed.

  14. 78 FR 78319 - Media Bureau Seeks Comment on Application of the IP Closed Captioning Rules to Video Clips

    Science.gov (United States)

    2013-12-26

    ... Video Programming: Implementation of the Twenty-First Century Communications and Video Accessibility Act..., pursuant to the Twenty-First Century Communications and Video Accessibility Act of 2010 (``CVAA''),\\2\\ the... clips, especially news clips.\\8\\ The Commission stated that if it finds that consumers who are deaf or...

  15. Microanalysis on selected video clips with focus on communicative response in music therapy

    DEFF Research Database (Denmark)

    Ridder, Hanne Mette Ochsner

    2007-01-01

This chapter describes a five-step procedure for video analysis where the topic of investigation is the communicative response of clients in music therapy. In this microanalysis procedure only very short video clips are used, and in order to select these clips an overview of each music therapy session is obtained with the help of a session-graph, a systematic way of collecting video observations from one music therapy session and combining the data in one figure. The systematic procedures do not demand sophisticated computer equipment; only standard programmes such as Excel and a media player. They are based on individual music therapy work with a population who are difficult to engage in joint activities and who show little response (e.g. persons suffering from severe dementia). The video analysis tools might be relevant to other groups of clients where it is important to form a clear…

  16. The effect of music video clips on adolescent boys' body image, mood, and schema activation.

    Science.gov (United States)

    Mulgrew, Kate E; Volcevski-Kostas, Diana; Rendell, Peter G

    2014-01-01

    There is limited research that has examined experimentally the effects of muscular images on adolescent boys' body image, with no research specifically examining the effects of music television. The aim of the current study was to examine the effects of viewing muscular and attractive singers in music video clips on early, mid, and late adolescent boys' body image, mood, and schema activation. Participants were 180 boys in grade 7 (mean age = 12.73 years), grade 9 (mean age = 14.40 years) or grade 11 (mean age = 16.15 years) who completed pre- and post-test measures of mood and body satisfaction after viewing music videos containing male singers of muscular or average appearance. They also completed measures of schema activation and social comparison after viewing the clips. The results showed that the boys who viewed the muscular clips reported poorer upper body satisfaction, lower appearance satisfaction, lower happiness, and more depressive feelings compared to boys who viewed the clips depicting singers of average appearance. There was no evidence of increased appearance schema activation but the boys who viewed the muscular clips did report higher levels of social comparison to the singers. The results suggest that music video clips are a powerful form of media in conveying information about the male ideal body shape and that negative effects are found in boys as young as 12 years.

  17. Changes in salivary testosterone concentrations and subsequent voluntary squat performance following the presentation of short video clips.

    Science.gov (United States)

    Cook, Christian J; Crewther, Blair T

    2012-01-01

Previous studies have shown that visual images can produce rapid changes in testosterone concentrations. We explored the acute effects of video clips on salivary testosterone and cortisol concentrations and subsequent voluntary squat performance in highly trained male athletes (n=12). Saliva samples were collected on 6 occasions immediately before and 15 min after watching a brief video clip (approximately 4 min in duration) on a computer screen. The watching of a sad, erotic, aggressive, training motivational, humorous or a neutral control clip was randomised. Subjects then performed a squat workout aimed at producing a 3 repetition maximum (3RM) lift. Significant changes in hormone concentrations and in 3RM squat performance were observed across the video sessions, and the relative changes in free hormone concentrations, in particular testosterone, closely mapped 3RM squat performance in this group of highly trained males. Thus, speculatively, using short video presentations in the pre-workout environment offers an opportunity for understanding the outcomes of hormonal change, athlete behaviour and subsequent voluntary performance. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. A Sieving ANN for Emotion-Based Movie Clip Classification

    Science.gov (United States)

    Watanapa, Saowaluk C.; Thipakorn, Bundit; Charoenkitkarn, Nipon

Effective classification and analysis of semantic content are very important for content-based indexing and retrieval in video databases. Our research attempts to classify movie clips into three groups of commonly elicited emotions, namely excitement, joy and sadness, based on a set of abstract-level semantic features extracted from the film sequence. In particular, these features consist of six visual and audio measures grounded in artistic film theory. A unique sieving-structured neural network is proposed as the classifying model due to its robustness. The performance of the proposed model is tested on 101 movie clips excerpted from 24 award-winning and well-known Hollywood feature films. The experimental result of a 97.8% correct classification rate, measured against collected human judgments, indicates the great potential of using abstract-level semantic features as an engineered tool for video-content retrieval/indexing.

  19. Exploring the Use of Video-Clips for Motivation Building in a Secondary School EFL Setting

    Science.gov (United States)

    Park, Yujong; Jung, Eunsu

    2016-01-01

    By employing an action research framework, this study evaluated the effectiveness of a video-based curriculum in motivating EFL learners to learn English. Fifteen Korean EFL students at the secondary school context participated in an 8-week English program, which employed video clips including TED talk replays, sitcoms, TV news reports and movies…

  20. Surgical gesture classification from video and kinematic data.

    Science.gov (United States)

    Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René

    2013-10-01

    Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.
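The bag-of-features step can be sketched as follows, assuming a visual vocabulary (cluster centres of spatio-temporal descriptors) has already been learned; the nearest-mean classifier here is a simplified stand-in for the classifiers the paper combines via MKL, and the gesture names are taken from the abstract's examples.

```python
def quantize(features, vocab):
    """Assign each local spatio-temporal descriptor to its nearest visual
    word and return the normalized word histogram for the clip."""
    def nearest(f):
        return min(range(len(vocab)),
                   key=lambda k: sum((a - b) ** 2 for a, b in zip(f, vocab[k])))
    hist = [0] * len(vocab)
    for f in features:
        hist[nearest(f)] += 1
    n = sum(hist)
    return [h / n for h in hist]

def classify(clip_features, vocab, class_histograms):
    """Label a clip by the gesture class whose mean histogram is closest
    (squared Euclidean distance) to the clip's BoF histogram."""
    h = quantize(clip_features, vocab)
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(class_histograms, key=lambda c: dist(h, class_histograms[c]))
```

In practice the class histograms would be averages over labelled training clips, and the distance would feed a kernel classifier rather than a direct nearest-mean rule.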

  1. Intelligent keyframe extraction for video printing

    Science.gov (United States)

    Zhang, Tong

    2004-10-01

    Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
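As a toy contrast to even temporal sampling, here is a content-adaptive candidate selection sketch driven by colour-histogram differences; it is a deliberately simplified stand-in for the paper's multi-feature candidate generation and clustering.

```python
def candidate_keyframes(hists, thresh=0.3):
    """Greedy candidate selection: keep frame 0, then keep any frame whose
    colour-histogram L1 distance from the last kept frame exceeds `thresh`.
    `hists` is one normalized colour histogram per frame."""
    def l1(p, q):
        return sum(abs(a - b) for a, b in zip(p, q))
    keep = [0]
    for i in range(1, len(hists)):
        if l1(hists[i], hists[keep[-1]]) > thresh:
            keep.append(i)
    return keep
```

Unlike even sampling, this concentrates candidates where the content actually changes; a second pass (clustering, quality scoring) would then prune the candidates to the final printed set.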

  2. Real-time video streaming of sonographic clips using domestic internet networks and free videoconferencing software.

    Science.gov (United States)

    Liteplo, Andrew S; Noble, Vicki E; Attwood, Ben H C

    2011-11-01

As the use of point-of-care sonography spreads, so too does the need for remote expert over-reading via telesonography. We sought to assess the feasibility of using familiar, widespread, and cost-effective existing technology to allow remote over-reading of sonograms in real time, and to compare 4 different methods of transmission and communication for both feasibility of transmission and image quality. Sonographic video clips were transmitted using 2 different connections (WiFi and 3G) and via 2 different videoconferencing modalities (iChat [Apple Inc, Cupertino, CA] and Skype [Skype Software Sàrl, Luxembourg]), for a total of 4 different permutations. The clips were received at a remote location, recorded, and then scored by expert reviewers for image quality, resolution, and detail. Wireless transmission of sonographic clips was feasible in all cases when WiFi was used and when Skype was used over a 3G connection. Images transmitted via a WiFi connection were statistically superior to those transmitted via 3G in all parameters of quality (average P = .031), and those sent by iChat were superior to those sent by Skype, but not statistically so (average P = .057). Wireless transmission of sonographic video clips using inexpensive hardware, free videoconferencing software, and domestic Internet networks is feasible with retention of image quality sufficient for interpretation. WiFi transmission results in greater image quality than transmission by a 3G network.

  3. Judgments of Nonverbal Behaviour by Children with High-Functioning Autism Spectrum Disorder: Can They Detect Signs of Winning and Losing from Brief Video Clips?

    Science.gov (United States)

    Ryan, Christian; Furley, Philip; Mulhall, Kathleen

    2016-01-01

Typically developing children are able to judge who is winning or losing from very short clips of video footage of behaviour between periods of active match play across a number of sports. Inferences from "thin slices" (short video clips) allow participants to make complex judgments about the meaning of posture, gesture and body language. This…

  4. Automated Music Video Generation Using Multi-level Feature-based Segmentation

    Science.gov (United States)

    Yoon, Jong-Chul; Lee, In-Kwon; Byun, Siwoo

    The expansion of the home video market has created a requirement for video editing tools to allow ordinary people to assemble videos from short clips. However, professional skills are still necessary to create a music video, which requires a stream to be synchronized with pre-composed music. Because the music and the video are pre-generated in separate environments, even a professional producer usually requires a number of trials to obtain a satisfactory synchronization, which is something that most amateurs are unable to achieve.

  5. Video genre classification using multimodal features

    Science.gov (United States)

    Jin, Sung Ho; Bae, Tae Meon; Choo, Jin Ho; Ro, Yong Man

    2003-12-01

    We propose a video genre classification method using multimodal features. The proposed method is applied for the preprocessing of automatic video summarization or the retrieval and classification of broadcasting video contents. Through a statistical analysis of low-level and middle-level audio-visual features in video, the proposed method can achieve good performance in classifying several broadcasting genres such as cartoon, drama, music video, news, and sports. In this paper, we adopt MPEG-7 audio-visual descriptors as multimodal features of video contents and evaluate the performance of the classification by feeding the features into a decision tree-based classifier which is trained by CART. The experimental results show that the proposed method can recognize several broadcasting video genres with a high accuracy and the classification performance with multimodal features is superior to the one with unimodal features in the genre classification.

  6. COMPOSITIONAL AND CONTENT-RELATED PARTICULARITIES OF POLITICAL MEDIA TEXTS (THROUGH THE EXAMPLE OF THE TEXTS OF POLITICAL VIDEO CLIPS ISSUED BY THE CANDIDATES FOR PRESIDENCY IN FRANCE IN 2017)

    Directory of Open Access Journals (Sweden)

    Dmitrieva, A.V.

    2017-09-01

The article examines the texts of political advertising video clips issued by the candidates for presidency in France during the campaign before the first round of elections in 2017. The mentioned examples of media texts are analysed from the compositional point of view as well as from that of the content particularities directly connected to the text structure. In general, the majority of the studied clips have a similar structure and consist of three parts: introduction, main part and conclusion. However, as a result of the research, a range of advantages marking well-structured videos was revealed. These include: addressing the voters and stating the speech topic clearly at the beginning of the clip, a relevant attention-grabbing opening phrase, consistency and clarity of the information presentation, appropriate use of additional video plots, and a conclusion at the end of the clip.

  7. SnapVideo: Personalized Video Generation for a Sightseeing Trip.

    Science.gov (United States)

    Zhang, Luming; Jing, Peiguang; Su, Yuting; Zhang, Chao; Shaoz, Ling

    2017-11-01

Leisure tourism is an indispensable activity in urban people's lives. Due to the popularity of intelligent mobile devices, a large number of photos and videos are recorded during a trip. Therefore, the ability to vividly and interestingly display these media data is a useful technique. In this paper, we propose SnapVideo, a new method that intelligently converts a personal album describing a trip into a comprehensive, aesthetically pleasing, and coherent video clip. The proposed framework contains three main components. The scenic spot identification model first personalizes the video clips based on multiple prespecified audience classes. We then search for auxiliary related videos from YouTube (https://www.youtube.com/) according to the selected photos. To comprehensively describe a scenery, the view generation module clusters the crawled video frames into a number of views. Finally, a probabilistic model is developed to fit the frames from multiple views into an aesthetically pleasing and coherent video clip, which optimally captures the semantics of a sightseeing trip. Extensive user studies demonstrated the competitiveness of our method from an aesthetic point of view. Moreover, quantitative analysis reflects that semantically important spots are well preserved in the final video clip.

  8. Learning Computational Models of Video Memorability from fMRI Brain Imaging.

    Science.gov (United States)

    Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming

    2015-08-01

    Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability, and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained while they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, due to the fact that fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.
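The joint subspace learning objective, maximizing the correlation between audiovisual features and fMRI-derived features, is the kind of problem canonical correlation analysis (CCA) solves. The abstract does not state that the paper uses textbook CCA, so the sketch below is a standard whitening-plus-SVD CCA standing in for the paper's learning step, with an illustrative ridge term `reg` for numerical stability:

```python
import numpy as np

def cca(X, Y, reg=1e-6):
    """First canonical directions maximizing corr(X @ a, Y @ b).

    X: (n, p) low-level audiovisual features.
    Y: (n, q) fMRI-derived features for the same n clips.
    Returns projection vectors (a, b).
    """
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    n = len(X)
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    # Whiten both blocks; the top singular pair of the whitened
    # cross-covariance gives the most correlated pair of directions.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy.T)
    a = Wx.T @ U[:, 0]
    b = Wy.T @ Vt[0]
    return a, b
```

At test time only `X @ a` is needed, which matches the paper's point that memorability can then be predicted without new fMRI scans.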

  9. Fault Diagnosis of Motor Bearing by Analyzing a Video Clip

    Directory of Open Access Journals (Sweden)

    Siliang Lu

    2016-01-01

    Full Text Available Conventional bearing fault diagnosis methods require specialized instruments to acquire signals that can reflect the health condition of the bearing. For instance, an accelerometer is used to acquire vibration signals, whereas an encoder is used to measure motor shaft speed. This study proposes a new method for simplifying the instruments for motor bearing fault diagnosis. Specifically, a video clip recording of a running bearing system is captured using a cellphone that is equipped with a camera and a microphone. The recorded video is subsequently analyzed to obtain the instantaneous frequency of rotation (IFR. The instantaneous fault characteristic frequency (IFCF of the defective bearing is obtained by analyzing the sound signal that is recorded by the microphone. The fault characteristic order is calculated by dividing IFCF by IFR to identify the fault type of the bearing. The effectiveness and robustness of the proposed method are verified by a series of experiments. This study provides a simple, flexible, and effective solution for motor bearing fault diagnosis. Given that the signals are gathered using an affordable and accessible cellphone, the proposed method is proven suitable for diagnosing the health conditions of bearing systems that are located in remote areas where specialized instruments are unavailable or limited.
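The fault-type decision described above reduces to one division and a lookup: the fault characteristic order is IFCF divided by IFR, and it is matched against the bearing's known characteristic orders. A minimal sketch follows; the numerical orders in the usage example are illustrative values for a hypothetical bearing, not figures from the paper:

```python
def fault_order(ifcf_hz, ifr_hz):
    """Fault characteristic order = fault frequency / rotation frequency."""
    return ifcf_hz / ifr_hz

def classify_fault(order, char_orders, tol=0.05):
    """Match a measured order against known characteristic orders.

    char_orders: dict like {"BPFO": 3.58, "BPFI": 5.42}, giving each
    fault type's order (fault events per shaft revolution), which is
    fixed by the bearing geometry.
    Returns the best-matching fault label, or "unknown" if the nearest
    order is off by more than the relative tolerance `tol`.
    """
    label, ref = min(char_orders.items(), key=lambda kv: abs(kv[1] - order))
    return label if abs(ref - order) / ref <= tol else "unknown"
```

For example, with a hypothetical outer-race order of 3.58, an IFCF of 107.4 Hz measured at a 30 Hz rotation frequency gives order 3.58 and is classified as an outer-race (BPFO) fault.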

  10. Validation of the fifth edition BI-RADS ultrasound lexicon with comparison of fourth and fifth edition diagnostic performance using video clips

    Directory of Open Access Journals (Sweden)

    Jung Hyun Yoon

    2016-10-01

    Full Text Available Purpose The aim of this study was to evaluate the positive predictive value (PPV) and the diagnostic performance of the ultrasonographic descriptors in the fifth edition of BI-RADS, in comparison with the fourth edition, using video clips. Methods From September 2013 to July 2014, 80 breast masses in 74 women (mean age, 47.5±10.7 years) from five institutions of the Korean Society of Breast Imaging were included. Two radiologists individually reviewed the static and video images and analyzed the images according to the fourth and fifth editions of BI-RADS. The PPV of each descriptor was calculated and the diagnostic performances of the fourth and fifth editions were compared. Results Of the 80 breast masses, 51 (63.8%) were benign and 29 (36.2%) were malignant. Suspicious ultrasonographic features such as irregular shape, non-parallel orientation, angular or spiculated margins, and combined posterior features showed higher PPV in both editions (all P<0.05). No significant differences were found in the diagnostic performances between the two editions (all P>0.05). The area under the receiver operating characteristics curve was higher in the fourth edition (0.708 vs. 0.690), without significance (P=0.416). Conclusion The fifth edition of the BI-RADS ultrasound lexicon showed performance comparable to the fourth edition and can be useful in the differential diagnosis of breast masses using ultrasonography.

  11. Habitat diversity in the Northeastern Gulf of Mexico: Selected video clips from the Gulfstream Natural Gas Pipeline digital archive

    Science.gov (United States)

    Raabe, Ellen A.; D'Anjou, Robert; Pope, Domonique K.; Robbins, Lisa L.

    2011-01-01

    This project combines underwater video with maps and descriptions to illustrate diverse seafloor habitats from Tampa Bay, Florida, to Mobile Bay, Alabama. A swath of seafloor was surveyed with underwater video to 100 meters (m) water depth in 1999 and 2000 as part of the Gulfstream Natural Gas System Survey. The U.S. Geological Survey (USGS) in St. Petersburg, Florida, in cooperation with Eckerd College and the Florida Department of Environmental Protection (FDEP), produced an archive of analog-to-digital underwater movies. Representative clips of seafloor habitats were selected from hundreds of hours of underwater footage. The locations of video clips were mapped to show the distribution of habitat and habitat transitions. The numerous benthic habitats in the northeastern Gulf of Mexico play a vital role in the region's economy, providing essential resources for tourism, natural gas, recreational water sports (fishing, boating, scuba diving), materials, fresh food, energy, a source of sand for beach renourishment, and more. These submerged natural resources are important to the economy but are often invisible to the general public. This product provides a glimpse of the seafloor with sample underwater video, maps, and habitat descriptions. It was developed to depict the range and location of seafloor habitats in the region but is limited by depth and by the survey track. It should not be viewed as comprehensive, but rather as a point of departure for inquiries and appreciation of marine resources and seafloor habitats. Further information is provided in the Resources section.

  12. Features for detecting smoke in laparoscopic videos

    Directory of Open Access Journals (Sweden)

    Jalal Nour Aldeen

    2017-09-01

    Full Text Available Video-based smoke detection in laparoscopic surgery has several potential applications, such as automatic identification of surgical events associated with the electrocauterization task and the development of automatic smoke removal. In the literature, video-based smoke detection has been studied widely for fire surveillance systems. Nevertheless, the proposed methods are insufficient for smoke detection in laparoscopic videos because they often depend on assumptions that rarely hold in laparoscopic surgery, such as a static camera. In this paper, ten visual features based on the motion, texture and colour of smoke are proposed and evaluated for smoke detection in laparoscopic videos. These features are the RGB channels, an energy-based feature, texture features based on the gray-level co-occurrence matrix (GLCM), an HSV colour-space feature, and features based on the detection of moving regions using optical flow and on the smoke colour in HSV colour space. These features were tested on four laparoscopic cholecystectomy videos. Experimental observations show that each feature can provide valuable information for the smoke detection task. However, each feature alone fails to detect the presence of smoke in some cases. By combining all the proposed features, smoke of high and even low density can be identified robustly, and the classification accuracy increases significantly.
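Among the ten features are GLCM texture statistics. As a minimal illustration of that family of features, the sketch below builds a horizontal-offset co-occurrence matrix and derives two common Haralick statistics; the paper's actual offsets, quantization, and chosen statistics are not specified in the abstract, so these are assumptions:

```python
import numpy as np

def glcm_features(gray, levels=8):
    """Contrast and energy from a horizontal-offset (dx=1) GLCM.

    gray: 2D array of intensities in [0, 255].
    Quantizes to `levels` gray levels, counts co-occurrences of
    horizontally adjacent pixels, then derives two Haralick statistics.
    """
    q = (np.asarray(gray) * levels // 256).astype(int)
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (left, right), 1)
    p = glcm / glcm.sum()                 # normalize counts to probabilities
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)   # high for sharp local texture
    energy = np.sum(p ** 2)               # high for uniform regions
    return contrast, energy
```

Smoke tends to blur local texture, so frames with smoke would be expected to show lower GLCM contrast than clear frames of the same scene.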

  13. Validation of the fifth edition BI-RADS ultrasound lexicon with comparison of fourth and fifth edition diagnostic performance using video clips

    International Nuclear Information System (INIS)

    Yoon, Jung Hyun; Kim, Min Jung; Lee, Hye Sun; Kim, Sung Hun; Youk, Ji Hyun; Jeong, Sun Hye; Kim, You Me

    2016-01-01

    The aim of this study was to evaluate the positive predictive value (PPV) and the diagnostic performance of the ultrasonographic descriptors in the fifth edition of BI-RADS, in comparison with the fourth edition, using video clips. From September 2013 to July 2014, 80 breast masses in 74 women (mean age, 47.5±10.7 years) from five institutions of the Korean Society of Breast Imaging were included. Two radiologists individually reviewed the static and video images and analyzed the images according to the fourth and fifth editions of BI-RADS. The PPV of each descriptor was calculated and the diagnostic performances of the fourth and fifth editions were compared. Of the 80 breast masses, 51 (63.8%) were benign and 29 (36.2%) were malignant. Suspicious ultrasonographic features such as irregular shape, non-parallel orientation, angular or spiculated margins, and combined posterior features showed higher PPV in both editions (all P<0.05). No significant differences were found in the diagnostic performances between the two editions (all P>0.05). The area under the receiver operating characteristics curve was higher in the fourth edition (0.708 vs. 0.690), without significance (P=0.416). The fifth edition of the BI-RADS ultrasound lexicon showed performance comparable to the fourth edition and can be useful in the differential diagnosis of breast masses using ultrasonography.

  14. Fuzzy expert systems using CLIPS

    Science.gov (United States)

    Le, Thach C.

    1994-01-01

    This paper describes a CLIPS-based fuzzy expert system development environment called FCLIPS and illustrates its application to the simulated cart-pole balancing problem. FCLIPS is a straightforward extension of CLIPS without any alteration to the CLIPS internal structures. It makes use of the object-oriented and module features in CLIPS version 6.0 for the implementation of fuzzy logic concepts. Systems of varying degrees of mixed Boolean and fuzzy rules can be implemented in CLIPS. Design and implementation issues of FCLIPS will also be discussed.

  15. Background estimation and player detection in badminton video clips using histogram of pixel values along temporal dimension

    Science.gov (United States)

    Peng, Yahui; Ma, Xiao; Gao, Xinyu; Zhou, Fangxu

    2015-12-01

    Computer vision is an important tool for sports video processing. However, its application in badminton match analysis is very limited. In this study, we proposed straightforward but robust histogram-based background estimation and player detection methods for badminton video clips, and compared the results with the naive averaging method and the mixture of Gaussians method, respectively. The proposed method yielded better background estimation results than the naive averaging method and more accurate player detection results than the mixture of Gaussians player detection method. The preliminary results indicated that the proposed histogram-based method could estimate the background and extract the players accurately. We conclude that the proposed method can be used for badminton player tracking, and further studies are warranted for automated match analysis.
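The core idea, taking each pixel's most frequent value over time as the background, can be sketched as follows. The abstract does not give the histogram binning or the player-extraction rule, so the bin count and the simple thresholded foreground mask below are illustrative:

```python
import numpy as np

def background_from_histogram(frames, bins=32):
    """Per-pixel temporal mode as the background estimate.

    frames: (t, h, w) stack of grayscale frames with values in [0, 255].
    For each pixel, builds a histogram of its values across time and
    takes the fullest bin's center; moving players occupy any pixel
    only briefly, so the mode ignores them (unlike a plain average).
    """
    f = np.asarray(frames)
    t, h, w = f.shape
    edges = np.linspace(0, 256, bins + 1)
    idx = np.clip(np.digitize(f, edges) - 1, 0, bins - 1)  # (t,h,w) bin ids
    counts = np.zeros((bins, h, w), dtype=int)
    for b in range(bins):
        counts[b] = (idx == b).sum(axis=0)
    mode_bin = counts.argmax(axis=0)
    centers = (edges[:-1] + edges[1:]) / 2
    return centers[mode_bin]

def detect_players(frame, background, thresh=30):
    """Foreground mask: pixels far from the estimated background."""
    return np.abs(frame.astype(float) - background) > thresh
```

Because the mode is insensitive to brief occlusions, a player who crosses a pixel in a few frames does not bias its background estimate the way a temporal average would.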

  16. Feature Quantization and Pooling for Videos

    Science.gov (United States)

    2014-05-01

    less vertical motion. The exceptions are videos from the classes of biking (mainly due to the camera tracking fast bikers), jumping on a trampoline … tracking the bikers; the jumping videos, featuring people on trampolines, the swing videos, which are usually recorded in profile view, and the walking

  17. Knowledge-based approach to video content classification

    Science.gov (United States)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
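MYCIN's rule for combining two pieces of supporting evidence is CF = CF1 + CF2·(1 − CF1). A minimal sketch of pooling per-rule certainty factors into a class decision is shown below; the full MYCIN calculus also handles negative factors, and only the non-negative case used for accumulating supporting evidence appears here, with illustrative class names and values:

```python
def combine_cf(cf1, cf2):
    """MYCIN combination for two non-negative certainty factors."""
    return cf1 + cf2 * (1 - cf1)

def classify(evidence_cfs):
    """Pool per-rule certainty factors for each candidate class.

    evidence_cfs: dict mapping class name -> list of CFs in [0, 1]
    contributed by the rules that fired for that class.
    Returns (best_class, pooled_cf).
    """
    pooled = {}
    for cls, cfs in evidence_cfs.items():
        total = 0.0
        for cf in cfs:
            total = combine_cf(total, cf)
        pooled[cls] = total
    best = max(pooled, key=pooled.get)
    return best, pooled[best]
```

For example, two rules supporting "news" with CFs 0.6 and 0.5 pool to 0.6 + 0.5·0.4 = 0.8, which outranks a single "commercial" rule at 0.7.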

  18. Video Scene Parsing with Predictive Feature Learning

    OpenAIRE

    Jin, Xiaojie; Li, Xin; Xiao, Huaxin; Shen, Xiaohui; Lin, Zhe; Yang, Jimei; Chen, Yunpeng; Dong, Jian; Liu, Luoqi; Jie, Zequn; Feng, Jiashi; Yan, Shuicheng

    2016-01-01

    In this work, we address the challenging video scene parsing problem by developing effective representation learning methods given limited parsing annotations. In particular, we contribute two novel methods that constitute a unified parsing framework. (1) Predictive feature learning from nearly unlimited unlabeled video data. Different from existing methods learning features from single frame parsing, we learn spatiotemporal discriminative features by enforcing a parsing network to ...

  19. Unsupervised Learning of Spatiotemporal Features by Video Completion

    OpenAIRE

    Nallabolu, Adithya Reddy

    2017-01-01

    In this work, we present an unsupervised representation learning approach for learning rich spatiotemporal features from videos without the supervision from semantic labels. We propose to learn the spatiotemporal features by training a 3D convolutional neural network (CNN) using video completion as a surrogate task. Using a large collection of unlabeled videos, we train the CNN to predict the missing pixels of a spatiotemporal hole given the remaining parts of the video through minimizing per...

  20. Legal drug content in music video programs shown on Australian television on Saturday mornings.

    Science.gov (United States)

    Johnson, Rebecca; Croager, Emma; Pratt, Iain S; Khoo, Natalie

    2013-01-01

    To examine the extent to which legal drug references (alcohol and tobacco) are present in the music video clips shown on two music video programs broadcast in Australia on Saturday mornings. Further, to examine the music genres in which the references appeared and the dominant messages associated with the references. Music video clips shown on the music video programs 'Rage' (ABC TV) and [V] 'Music Video Chart' (Channel [V]) were viewed over 8 weeks from August 2011 to October 2011 and the number of clips containing verbal and/or visual drug references in each program was counted. The songs were classified by genre and the dominant messages associated with drug references were also classified and analysed. A considerable proportion of music videos (approximately one-third) contained drug references. Alcohol featured in 95% of the music videos that contained drug references. References to alcohol generally associated it with fun and humour, and alcohol and tobacco were both overwhelmingly presented in contexts that encouraged, rather than discouraged, their use. In Australia, Saturday morning is generally considered a children's television viewing timeslot, and several broadcaster Codes of Practice dictate that programs shown on Saturday mornings must be appropriate for viewing by audiences of all ages. Despite this, our findings show that music video programs aired on Saturday mornings contain a considerable level of drug-related content.

  1. Clinical features and nail clippings in 52 children with psoriasis.

    Science.gov (United States)

    Uber, Marjorie; Carvalho, Vânia O; Abagge, Kerstin T; Robl Imoto, Renata; Werner, Betina

    2018-03-01

    Nail clipping, the act of cutting the distal portion of a nail for microscopic analysis, can complement the diagnosis of skin diseases with nail involvement, such as psoriasis. This study aimed to describe histopathologic findings on 81 nails from 52 children and adolescents with skin psoriasis and to determine whether these changes correlated with the severity of skin and nail involvement. Children with psoriasis were enrolled in this cross-sectional study to obtain Psoriasis Area and Severity Index (PASI) and Nail Psoriasis Severity Index (NAPSI) scores. The most altered nails were processed using periodic acid-Schiff with diastase staining. Fifty-two patients with a median age of 10.5 years were included. The median Nail Psoriasis Severity Index score of the 20 nails from these patients was 17 (range 3-80). The most common findings were pitting (94.2%), leukonychia (73.0%), and longitudinal ridges (63.5%). Eighty-one nail fragments were collected by clipping. Neutrophils were found in 6 samples (7.6%) and serous lakes in 15 (19%). Median nail plate thickness was 0.3 mm (range 0.1-0.63 mm). Patients whose nails had neutrophils had a higher median PASI score (6.1 vs 2.0, P = .03). Patients whose nails had serous lakes had higher median PASI (5.3 vs 1.9, P = .008) and NAPSI (median 45.0 vs 18.0, P = .006) scores. There seems to be a correlation between some microscopic nail features in children with psoriasis and their PASI and NAPSI scores, so nail clippings from children with suspected psoriasis may help with diagnosis, especially in the presence of neutrophils, and in excluding onychomycosis. © 2018 Wiley Periodicals, Inc.

  2. Alleviating travel anxiety through virtual reality and narrated video technology.

    Science.gov (United States)

    Ahn, J C; Lee, O

    2013-01-01

    This study presents empirical evidence of the benefit of narrated video clips embedded in hotels' virtual reality websites for relieving travel anxiety. Even though virtual reality functions have been shown to provide some relief from travel anxiety, a stronger virtual reality website can be built when it includes video clips with narration about important aspects of the hotel. We posit that these important aspects are (1) the escape route and (2) information about the surrounding neighborhood, both derived from existing research on anxiety disorders as well as travel anxiety. Thus we created one video clip that showed and narrated the escape route from the hotel room, and another that showed and narrated the surrounding neighborhood. We then conducted experiments with this enhanced virtual reality website of a hotel by having human subjects use the website and fill out a questionnaire. The result confirms our hypothesis that there is a statistically significant relationship between the degree of travel anxiety and the psychological relief caused by the use of embedded virtual reality functions with narrated video clips on a hotel website (Tab. 2, Fig. 3, Ref. 26).

  3. Slow motion in films and video clips: Music influences perceived duration and emotion, autonomic physiological activation and pupillary responses.

    Science.gov (United States)

    Wöllner, Clemens; Hammerschmidt, David; Albrecht, Henning

    2018-01-01

    Slow motion scenes are ubiquitous in screen-based audiovisual media and are typically accompanied by emotional music. The strong effects of slow motion on observers are hypothetically related to heightened emotional states in which time seems to pass more slowly. These states are simulated in films and video clips, and seem to resemble such experiences in daily life. The current study investigated time perception and emotional responses to media clips containing decelerated human motion, with or without music, using psychometric and psychophysiological testing methods. Participants were presented with slow-motion scenes taken from commercial films, ballet and sports footage, as well as the same scenes converted to real time. Results reveal that slow-motion scenes, compared to adapted real-time scenes, led to systematic underestimations of duration, lower perceived arousal but higher valence, lower respiration rates and smaller pupillary diameters. The presence of music, compared to visual-only presentation, strongly affected results in terms of higher accuracy in duration estimates, higher perceived arousal and valence, higher physiological activation and larger pupillary diameters, indicating higher arousal. Video genre additionally affected responses. These findings suggest that perceiving slow motion is not related to states of high arousal, but rather affects cognitive dimensions of perceived time and valence. Music influences these experiences profoundly, thus strengthening the impact of stretched time in audiovisual media.

  4. The Narrative Analysis of the Discourse on Homosexual BDSM Pornographic Video Clips of The Manhunt Variety

    Directory of Open Access Journals (Sweden)

    Milica Vasić

    2016-02-01

    Full Text Available In this paper we analyze the ideal-type model of the story that forms the basic framework of action in Manhunt-category pornographic internet video clips, using Claude Bremond's methods of narrative analysis. The results show that it is possible to apply the theoretical model to elements of visual and mass culture, with certain modifications and taking into account the wider context of the narrative itself. The narrative analysis indicated the significance of researching categories of pornography on the internet, because it allows a deeper analysis of the distribution of power in the relations between the categories of heterosexual and homosexual within a virtual environment.

  5. A Generalized Pyramid Matching Kernel for Human Action Recognition in Realistic Videos

    Directory of Open Access Journals (Sweden)

    Wenjun Zhang

    2013-10-01

    Full Text Available Human action recognition is an increasingly important research topic in the fields of video sensing, analysis and understanding. Owing to unconstrained sensing conditions, there are large intra-class variations and inter-class ambiguities in realistic videos, which hinder the improvement of recognition performance for recent vision-based action recognition systems. In this paper, we propose a generalized pyramid matching kernel (GPMK) for recognizing human actions in realistic videos, based on a multi-channel "bag of words" representation constructed from local spatial-temporal features of video clips. As an extension of the spatial-temporal pyramid matching (STPM) kernel, the GPMK leverages heterogeneous visual cues across multiple feature descriptor types and spatial-temporal grid granularity levels to build a valid similarity metric between two video clips for kernel-based classification. Instead of the predefined, fixed weights used in STPM, we present a simple yet effective method to compute adaptive channel weights for the GPMK based on kernel target alignment on the training data. It incorporates prior knowledge and the data-driven information of different channels in a principled way. Experimental results on three challenging video datasets (i.e., Hollywood2, YouTube and HMDB51) validate the superiority of our GPMK w.r.t. the traditional STPM kernel for realistic human action recognition and outperform the state-of-the-art results in the literature.
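The adaptive-weighting idea can be sketched compactly: build one kernel per channel (a histogram-intersection kernel over that channel's bag-of-words histograms is a common choice), score each kernel by its alignment with the ideal label kernel yy^T, and combine. This is a simplified stand-in; the paper's exact kernel form, alignment normalization, and weight computation may differ:

```python
import numpy as np

def intersection_kernel(H):
    """Histogram-intersection Gram matrix for the rows of H."""
    return np.minimum(H[:, None, :], H[None, :, :]).sum(axis=2)

def alignment(K, y):
    """Kernel-target alignment <K, yy^T> / (||K|| ||yy^T||)."""
    Ky = np.outer(y, y)
    return (K * Ky).sum() / (np.linalg.norm(K) * np.linalg.norm(Ky))

def combined_kernel(channels, y):
    """Weight each channel kernel by its alignment with the labels.

    channels: list of (n, d_c) histogram matrices, one per channel.
    y: labels in {-1, +1} for the n training clips.
    Returns (K_combined, weights).
    """
    Ks = [intersection_kernel(H) for H in channels]
    w = np.array([max(alignment(K, y), 0.0) for K in Ks])
    w = w / w.sum()
    return sum(wi * Ki for wi, Ki in zip(w, Ks)), w
```

The effect is that channels whose similarity structure matches the class labels contribute more to the combined kernel than uninformative ones, without hand-tuned weights.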

  6. Obscene Video Recognition Using Fuzzy SVM and New Sets of Features

    Directory of Open Access Journals (Sweden)

    Alireza Behrad

    2013-02-01

    Full Text Available In this paper, a novel approach for identifying normal and obscene videos is proposed. In order to classify different episodes of a video independently and discard the need to process all frames, key frames are first extracted and skin regions are detected for groups of video frames starting with key frames. In the second step, three different kinds of features are extracted for each episode of video: 1- structural features based on single-frame information, 2- features based on the spatiotemporal volume, and 3- motion-based features. The PCA-LDA method is then applied to reduce the size of the structural features and select the more distinctive ones. For the final step, we use a fuzzy or weighted support vector machine (WSVM) classifier to identify video episodes. We also employ a multilayer Kohonen network as an initial clustering algorithm to improve the discrimination of the extracted features into the two classes of videos. Features based on motion and periodicity characteristics increase the efficiency of the proposed algorithm on videos with bad illumination and skin colour variation. The proposed method is evaluated using 1100 videos under different environmental and illumination conditions. The experimental results show a correct recognition rate of 94.2% for the proposed algorithm.

  7. Evaluation of electrosurgery and titanium clips for ovarian pedicle haemostasis in video-assisted ovariohysterectomy with two portals in bitches

    Directory of Open Access Journals (Sweden)

    Rogério Luizari Guedes

    Full Text Available ABSTRACT: This study evaluated the use of bipolar electrosurgery and laparoscopic clipping, and their effects on blood loss and the inflammatory response, during a two-portal video-assisted ovariohysterectomy technique (two groups of 10 animals each). Surgical time and blood loss volume were significantly lower in the electrosurgery group. There were no significant changes in haematocrit between groups; however, haematocrit did differ between the evaluated times, decreasing 10% from the initial measurement to four hours after the procedure. The inflammatory response was significantly higher throughout the post-surgical period, but without differences in clinical signs between the two groups. Both techniques were well suited to the two-portal video-assisted procedure; however, bipolar electrosurgery allowed shorter surgical times, reduced blood loss and a minimal learning curve for the surgeon.

  8. Music Video: An Analysis at Three Levels.

    Science.gov (United States)

    Burns, Gary

    This paper is an analysis of the different aspects of the music video. Music video is defined as having three meanings: an individual clip, a format, or the "aesthetic" that describes what the clips and format look like. The paper examines interruptions, the dialectical tension and the organization of the work of art, shot-scene…

  9. The Kinetics Human Action Video Dataset

    OpenAIRE

    Kay, Will; Carreira, Joao; Simonyan, Karen; Zhang, Brian; Hillier, Chloe; Vijayanarasimhan, Sudheendra; Viola, Fabio; Green, Tim; Back, Trevor; Natsev, Paul; Suleyman, Mustafa; Zisserman, Andrew

    2017-01-01

    We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some ...

  10. Mograph Cinema 4d untuk Menunjang Efek Visual Video Klip

    Directory of Open Access Journals (Sweden)

    Ardiyan Ardiyan

    2010-10-01

    Full Text Available This research discusses the advantages of MoGraph, one of the reliable features of the 3D modeling application Cinema 4D, with its implementation in the Cinta Laura video clip as an example. The advantage of MoGraph is the ability to create effects in which multiple objects move in an ordered and/or random manner easily and efficiently, supported by Cinema 4D's render quality, which is clean and relatively fast. These advantages make MoGraph in Cinema 4D well suited to enriching the visual effects of a motion graphics work, and its quality is expected to encourage more creative use of MoGraph. Given that today's visual variety is shaped by the development of digital technology, the implementation of MoGraph in Cinema 4D is expected to optimally support creativity in making video clips with motion graphics content.

  11. Video clip transfer of radiological images using a mobile telephone in emergency neurosurgical consultations (3G Multi-Media Messaging Service).

    Science.gov (United States)

    Waran, Vicknes; Bahuri, Nor Faizal Ahmad; Narayanan, Vairavan; Ganesan, Dharmendra; Kadir, Khairul Azmi Abdul

    2012-04-01

    The purpose of this study was to validate and assess the accuracy and usefulness of sending short video clips in 3gp file format of an entire scan series of patients, using mobile telephones running on 3G-MMS technology, to enable consultation between junior doctors in a neurosurgical unit and the consultants on call after office hours. A total of 56 consecutive patients with acute neurosurgical problems requiring urgent after-hours consultation during a 6-month period prospectively had their images recorded and transmitted using the above method. The responses to the diagnosis and the management plan by two neurosurgeons (who were not on site), based on the images viewed on a mobile telephone, were reviewed by an independent observer and scored. In addition, a radiologist reviewed the original images directly on the hospital's Picture Archiving and Communication System (PACS), and this was compared with the neurosurgeons' responses. Both neurosurgeons involved in this study were in complete agreement in their diagnoses. The radiologist disagreed with the diagnosis in only one patient, giving a kappa coefficient of 0.88, indicating almost perfect agreement. The use of mobile telephones to transmit MPEG video clips of radiological images is very advantageous for carrying out emergency consultations in neurosurgery. The images accurately reflect the pathology in question, thereby reducing the incidence of medical errors from incorrect diagnosis, which otherwise may depend on a verbal description alone.

  12. An Aerial Video Stabilization Method Based on SURF Feature

    Directory of Open Access Journals (Sweden)

    Wu Hao

    2016-01-01

    Full Text Available The video captured by a Micro Aerial Vehicle is often degraded by unexpected random trembling and jitter caused by wind and the shake of the aerial platform. An approach for stabilizing aerial video based on SURF features and Kalman filtering is proposed. SURF feature points are extracted in each frame, and the feature points between adjacent frames are matched using the Fast Library for Approximate Nearest Neighbors (FLANN) search method. Then the Random Sample Consensus (RANSAC) matching algorithm and the least squares method are used to remove mismatched point pairs and estimate the transformation between adjacent images. Finally, a Kalman filter is applied to smooth the motion parameters and separate intentional motion from unwanted motion to stabilize the aerial video. Experimental results show that the approach can stabilize aerial video efficiently with high accuracy, and that it is robust to translation, rotation and zooming motion of the camera.
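The separation of intentional from unwanted motion hinges on smoothing the estimated motion parameters; the difference between the raw and smoothed trajectories is the per-frame jitter to compensate. A minimal constant-velocity Kalman filter over a single motion parameter (e.g., accumulated horizontal translation) shows the idea. The state model and the noise settings `q` and `r` are illustrative choices, not the paper's tuning:

```python
import numpy as np

def kalman_smooth(trajectory, q=1e-3, r=4.0):
    """Filter a 1D motion-parameter trajectory with a constant-velocity
    Kalman filter. q: process noise, r: measurement noise (illustrative).

    Returns the smoothed trajectory; (raw - smoothed) is the unwanted
    jitter to compensate in each frame.
    """
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state: [position, velocity]
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([trajectory[0], 0.0])
    P = np.eye(2)
    out = []
    for z in trajectory:
        # Predict one step ahead with the constant-velocity model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Correct with the measured accumulated motion.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

In a full stabilizer the same filter would run on each motion parameter (translation x/y, rotation, scale), and each frame would be warped by the inverse of its estimated jitter.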

  13. Depth estimation of features in video frames with improved feature matching technique using Kinect sensor

    Science.gov (United States)

    Sharma, Kajal; Moon, Inkyu; Kim, Sung Gaun

    2012-10-01

    Estimating depth has long been a major issue in the fields of computer vision and robotics. The Kinect sensor's active sensing strategy provides high-frame-rate depth maps and can recognize user gestures and human pose. This paper presents a technique to estimate the depth of features extracted from video frames, along with an improved feature-matching method. We used the Kinect camera developed by Microsoft, which captures color and depth images for further processing. Feature detection and selection are important tasks for robot navigation. Many feature-matching techniques have been proposed previously; this paper proposes an improved feature matching between successive video frames that uses a neural-network methodology to reduce the computation time of feature matching. The extracted features are invariant to image scale and rotation, and different experiments were conducted to evaluate the performance of feature matching between successive video frames. The extracted features are assigned depth values based on the Kinect technology, which the robot can use to determine its navigation path, along with obstacle-detection applications.

  14. Modeling Timbre Similarity of Short Music Clips.

    Science.gov (United States)

    Siedenburg, Kai; Müllensiefen, Daniel

    2017-01-01

    There is evidence from a number of recent studies that most listeners are able to extract information related to song identity, emotion, or genre from music excerpts with durations in the range of tenths of seconds. Because of these very short durations, timbre, as a multifaceted auditory attribute, appears to be a plausible candidate for the type of feature that listeners make use of when processing short music excerpts. However, the importance of timbre in listening tasks that involve short excerpts has not yet been demonstrated empirically. Hence, the goal of this study was to develop a method for exploring to what degree similarity judgments of short music clips can be modeled with low-level acoustic features related to timbre. We utilized similarity data from two large samples of participants: Sample I was obtained via an online survey, used 16 clips of 400 ms length, and contained responses of 137,339 participants. Sample II was collected in a lab environment, used 16 clips of 800 ms length, and contained responses from 648 participants. Our model used two sets of audio features, which included commonly used timbre descriptors and the well-known Mel-frequency cepstral coefficients, as well as their temporal derivatives. In order to predict pairwise similarities, the resulting distances between clips in terms of their audio features were used as predictor variables in partial least-squares regression. We found that a sparse selection of three to seven features from both descriptor sets, mainly encoding the coarse shape of the spectrum as well as spectrotemporal variability, best predicted similarities across the two sets of sounds. Notably, the inclusion of non-acoustic predictors of musical genre and record release date allowed much better generalization performance and explained up to 50% of shared variance (R²) between observations and model predictions.
Overall, the results of this study empirically demonstrate that both acoustic features related

  15. Evaluation of stereoscopic medical video content on an autostereoscopic display for undergraduate medical education

    Science.gov (United States)

    Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin

    2006-02-01

    Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally invasive fashion. However, the performance of surgery, with its possibilities and limitations, has become difficult to teach. Stereoscopic video has evolved from a complex production process requiring expensive hardware towards rapid editing of video streams in standard and HDTV resolution that can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses, and laser chordectomy for carcinoma of the larynx). This material was supplemented by 23 clips of a cochlear implantation, which were specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of the left and right images was performed at the University Hospital Aachen. The footage was edited stereoscopically at Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. The material was then converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips in a file type that does not depend on a television signal such as PAL or NTSC. 25 4th-year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth cues within the six video clips plus the cochlear implantation clips. Another 25 4th-year students, who were shown the material monoscopically on a conventional laptop, served as a control group. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course. The monoscopic

  16. Coding visual features extracted from video sequences.

    Science.gov (United States)

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
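The rate-distortion mode decision mentioned above can be sketched as minimizing a Lagrangian cost J = D + λR over the candidate coding modes for each feature. The distortion/rate numbers and the value of λ below are toy assumptions, not measurements from the paper:

```python
LAMBDA = 0.1  # rate-distortion trade-off parameter (assumed value)

def choose_mode(candidates, lam=LAMBDA):
    """candidates: dict mode -> (distortion, rate_bits). Returns the mode
    minimizing the Lagrangian cost J = D + lam * R."""
    return min(candidates, key=lambda m: candidates[m][0] + lam * candidates[m][1])

# Inter coding of a descriptor costs few bits when a similar feature
# existed in the previous frame, at a small distortion penalty.
modes = {"intra": (0.0, 512), "inter": (2.0, 96)}
best = choose_mode(modes)
```

With these toy numbers, inter coding wins (J = 2.0 + 9.6 versus J = 0.0 + 51.2 for intra); when no good temporal predictor exists, inter distortion rises and the decision flips back to intra.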

  17. A Snapshot of the Depiction of Electronic Cigarettes in YouTube Videos.

    Science.gov (United States)

    Romito, Laura M; Hurwich, Risa A; Eckert, George J

    2015-11-01

    To assess the depiction of e-cigarettes in YouTube videos. The sample (N = 63) was selected from the top 20 search results for "electronic cigarette," and "e-cig" with each term searched twice by the filters "Relevance" and "View Count." Data collected included title, length, number of views, "likes," "dislikes," comments, and inferred demographics of individuals appearing in the videos. Seventy-six percent of videos included at least one man, 62% included a Caucasian, and 50% included at least one young individual. Video content connotation was coded as positive (76%), neutral (18%), or negative (6%). Videos were categorized as advertisement (33%), instructional (17%), news clip (19%), product review (13%), entertainment (11%), public health (3%), and personal testimonial (3%). Most e-cigarette YouTube videos are non-traditional or covert advertisements featuring young Caucasian men.

  18. CERN Video News on line

    CERN Multimedia

    2003-01-01

    The latest CERN video news is on line. In this issue : an interview with the Director General and reports on the new home for the DELPHI barrel and the CERN firemen's spectacular training programme. There's also a vintage video news clip from 1954. See: www.cern.ch/video or Bulletin web page

  19. Hierarchical vs non-hierarchical audio indexation and classification for video genres

    Science.gov (United States)

    Dammak, Nouha; BenAyed, Yassine

    2018-04-01

    In this paper, Support Vector Machines (SVMs) are used for segmenting and indexing video genres based only on audio features extracted at the block level, which has the notable advantage of capturing local temporal information. The main contribution of our study is to show the strong effect on classification accuracy of using a hierarchical categorization structure based on the Mel-Frequency Cepstral Coefficients (MFCC) audio descriptor. The classification covers three common video genres: sports videos, music clips, and news scenes. The sub-classification may divide each genre into several multi-speaker and multi-dialect sub-genres. The validation of this approach was carried out on over 360 minutes of video, yielding a classification accuracy of over 99%.
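The hierarchical categorization structure described above can be sketched as a two-stage dispatch: a top-level classifier assigns the genre, then a per-genre classifier assigns the sub-genre. The nearest-centroid stand-ins below are illustrative assumptions; the study itself trains SVMs on MFCC features:

```python
def classify_hierarchical(feature_block, genre_clf, subgenre_clfs):
    """Two-level dispatch: genre first, then the matching sub-genre model."""
    genre = genre_clf(feature_block)
    sub = subgenre_clfs[genre](feature_block)
    return genre, sub

def make_centroid_clf(centroids):
    """Toy stand-in classifier: nearest centroid on the block's mean value."""
    def clf(block):
        mean = sum(block) / len(block)
        return min(centroids, key=lambda lab: abs(centroids[lab] - mean))
    return clf

genre_clf = make_centroid_clf({"sports": 0.2, "music": 0.8})
subgenre_clfs = {
    "sports": make_centroid_clf({"indoor": 0.1, "outdoor": 0.3}),
    "music": make_centroid_clf({"clip": 0.7, "concert": 0.9}),
}
label = classify_hierarchical([0.7, 0.75, 0.7], genre_clf, subgenre_clfs)
```

The benefit of the hierarchy is that each sub-genre model only has to separate classes within one genre, a much easier problem than a single flat multi-class decision.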

  20. Video segmentation and camera motion characterization using compressed data

    Science.gov (United States)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments and the characterization of each shot by camera motion parameters. For the first task, we use a Bayesian classification approach to detect scene cuts by analyzing motion vectors. For the second task, a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. To guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
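The compressed-domain cut detection can be sketched by comparing per-frame motion-vector statistics between adjacent frames. The simple magnitude-difference test below is a stand-in for the paper's Bayesian classifier, and the threshold is an assumed value:

```python
def mean_magnitude(vectors):
    """Average motion-vector magnitude for one frame."""
    return sum((dx * dx + dy * dy) ** 0.5 for dx, dy in vectors) / len(vectors)

def detect_cuts(frames_mv, threshold=4.0):
    """frames_mv: list of per-frame motion-vector lists (from the MPEG-1
    bitstream, no decoding of pixels required). Returns candidate cut indices
    where the vector field statistics change abruptly."""
    cuts = []
    for i in range(1, len(frames_mv)):
        if abs(mean_magnitude(frames_mv[i]) - mean_magnitude(frames_mv[i - 1])) > threshold:
            cuts.append(i)
    return cuts

steady = [[(1, 0)] * 4] * 5     # coherent vectors during a smooth pan
burst = [[(9, 9)] * 4]          # incoherent, large vectors at a cut
cuts = detect_cuts(steady + burst + steady)
```

A real detector would also use the fraction of intra-coded macroblocks and a probabilistic decision rule rather than a fixed threshold.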

  1. Visual Analytics and Storytelling through Video

    Energy Technology Data Exchange (ETDEWEB)

    Wong, Pak C.; Perrine, Kenneth A.; Mackey, Patrick S.; Foote, Harlan P.; Thomas, Jim

    2005-10-31

    This paper supplements a video clip submitted to the Video Track of IEEE Symposium on Information Visualization 2005. The original video submission applies a two-way storytelling approach to demonstrate the visual analytics capabilities of a new visualization technique. The paper presents our video production philosophy, describes the plot of the video, explains the rationale behind the plot, and finally, shares our production experiences with our readers.

  2. Video diaries on social media: Creating online communities for geoscience research and education

    Science.gov (United States)

    Tong, V.

    2013-12-01

    Making video clips is an engaging way to learn and teach geoscience. As smartphones become increasingly common, it is relatively straightforward for students to produce 'video diaries' by recording their research and learning experience over the course of a science module. Instead of keeping the video diaries to themselves, students may use social media such as Facebook to share their experience and thoughts. There are potential benefits to linking video diaries and social media in pedagogical contexts. For example, online comments on video clips offer useful feedback and learning materials to the students. Students also have the opportunity to engage in geoscience outreach by producing authentic scientific content at the same time. A video diary project was conducted to test the pedagogical potential of using video diaries on social media in the context of geoscience outreach, undergraduate research and teaching. This project formed part of a problem-based learning module in field geophysics at an archaeological site in the UK. The project involved i) the students posting video clips about their research and problem-based learning in the field on a daily basis; and ii) the lecturer building an online outreach community with partner institutions. In this contribution, I will discuss the implementation of the project and critically evaluate the pedagogical potential of video diaries on social media. My discussion will focus on the following: 1) Effectiveness of video diaries on social media; 2) Student-centered approach of producing geoscience video diaries as part of their research and problem-based learning; 3) Learning, teaching and assessment based on video clips and related commentaries posted on Facebook; and 4) Challenges in creating and promoting online communities for geoscience outreach through the use of video diaries. I will compare the outcomes from this study with those from other pedagogical projects with video clips on geoscience, and

  3. Cerebral activation associated with sexual arousal in response to a pornographic clip: A 15O-H2O PET study in heterosexual men.

    Science.gov (United States)

    Bocher, M; Chisin, R; Parag, Y; Freedman, N; Meir Weil, Y; Lester, H; Mishani, E; Bonne, O

    2001-07-01

    This study attempted to use PET and 15O-H2O to measure changes in regional cerebral blood flow (rCBF) during sexual arousal evoked in 10 young heterosexual males while they watched a pornographic video clip, featuring heterosexual intercourse. This condition was compared with other mental setups evoked by noisy, nature, and talkshow audiovisual clips. Immediately after each clip, the participants answered three questions pertaining to what extent they thought about sex, felt aroused, and sensed an erection. They scored their answers using a 1 to 10 scale. SPM was used for data analysis. Sexual arousal was mainly associated with activation of bilateral, predominantly right, inferoposterior extrastriate cortices, of the right inferolateral prefrontal cortex and of the midbrain. The significance of those findings is discussed in the light of current theories concerning selective attention, "mind reading" and mirroring, reinforcement of pleasurable stimuli, and penile erection.

  4. Humorous Videos and Idiom Achievement: Some Pedagogical Considerations for EFL Learners

    Science.gov (United States)

    Neissari, Malihe; Ashraf, Hamid; Ghorbani, Mohammad Reza

    2017-01-01

    Employing a quasi-experimental design, this study examined the efficacy of humorous idiom video clips on the achievement of Iranian undergraduate students studying English as a Foreign Language (EFL). Forty humorous video clips from the English Idiom Series called "The Teacher" from the BBC website were used to teach 120 idioms to 61…

  5. Digital video steganalysis using motion vector recovery-based features.

    Science.gov (United States)

    Deng, Yu; Wu, Yunjie; Zhou, Linna

    2012-07-10

    As a novel form of digital video steganography, the motion vector (MV)-based steganographic algorithm leverages the MVs as information carriers to hide secret messages. Existing steganalyzers based on the statistical characteristics of the spatial/frequency coefficients of video frames cannot attack MV-based steganography. In order to detect the presence of information hidden in the MVs of video streams, we design a novel MV recovery algorithm and propose calibration-distance-histogram-based statistical features for steganalysis. A support vector machine (SVM) is trained with the proposed features and used as the steganalyzer. Experimental results demonstrate that the proposed steganalyzer can effectively detect the presence of hidden messages and outperforms others, with significant improvements in detection accuracy even at low embedding rates.

  6. Enhance Video Film using Retinex method

    Science.gov (United States)

    Awad, Rasha; Al-Zuky, Ali A.; Al-Saleh, Anwar H.; Mohamad, Haidar J.

    2018-05-01

    An enhancement technique is used to improve the quality of the studied video. The mean and standard deviation are used as criteria in this paper, applied to each video clip, which is divided into 80 images. The studied filming environment has different light intensities (315, 566, and 644 Lux); these varied environments approximate real outdoor filming conditions. The outputs of the suggested algorithm are compared with the results before applying it. The method is applied in two ways: first, to the full video clip, to obtain the enhanced film directly; second, to every individual image, after which the enhanced images are compiled to obtain the enhanced film. This paper shows that the enhancement technique yields good-quality video film based on a statistical method, and its use is recommended in different applications.
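The mean/standard-deviation criterion above can be sketched as a per-image linear remapping toward target statistics: each image's pixels are shifted and scaled so that its mean and contrast match chosen values. The target values and pixel data below are illustrative assumptions:

```python
def stats(pixels):
    """Mean and standard deviation of a flat list of pixel values."""
    m = sum(pixels) / len(pixels)
    var = sum((p - m) ** 2 for p in pixels) / len(pixels)
    return m, var ** 0.5

def enhance(pixels, target_mean=128.0, target_std=48.0):
    """Linearly remap pixels to the target mean and standard deviation,
    clamping to the valid 8-bit range."""
    m, s = stats(pixels)
    scale = target_std / s if s else 1.0
    return [min(255.0, max(0.0, (p - m) * scale + target_mean)) for p in pixels]

dark = [20, 24, 28, 32, 36]   # toy low-light frame (e.g., the 315 Lux setting)
out = enhance(dark)
```

Applying this per image and recompiling the frames corresponds to the second of the two application modes described in the abstract.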

  7. Geographic Video 3d Data Model And Retrieval

    Science.gov (United States)

    Han, Z.; Cui, C.; Kong, Y.; Wu, H.

    2014-04-01

    Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly, and this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction and are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory, and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with video contents. The raw spatial information is synthesized into point, line, polygon, and solid geometries according to camcorder parameters such as focal length and angle of view. For video segments and video frames, we defined three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can be queried by computing the spatial relations between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView, and VFFovCone. We designed the query methods in detail using the structured query language (SQL). The experiments indicate that the model is a multi-objective, integrated, loosely coupled, flexible, and extensible data model for the management of geographic stereo video.

  8. Post-Vacuum-Assisted Stereotactic Core Biopsy Clip Displacement: A Comparison Between Commercially Available Clips and Surgical Clip.

    Science.gov (United States)

    Yen, Peggy; Dumas, Sandra; Albert, Arianne; Gordon, Paula

    2018-02-01

    The placement of localization clips following percutaneous biopsy is a standard practice for a variety of situations. Subsequent clip displacement creates challenges for imaging surveillance and surgical planning, and may cause confusion amongst radiologists and between surgeons and radiologists. Many causes have been attributed to this phenomenon, including the commonly accepted "accordion effect." Herein, we investigate the performance of a low-cost surgical clip system against 4 commercially available clips. We retrospectively reviewed 2112 patients who underwent stereotactic vacuum-assisted core biopsy followed by clip placement between January 2013 and June 2016. The primary performance parameter compared was displacement >10 mm following vacuum-assisted stereotactic core biopsy. Within the group of clips that had displaced, the magnitude of displacement was compared. There was a significant difference in displacement among the clip types (P < .0001), with significant pairwise comparisons between pediatric surgical clips and SecureMark (38% vs 28%; P = .001) and SenoMark (38% vs 27%; P = .0001) in the proportion displaced. The surgical clips also showed a significantly greater magnitude of displacement, with an average displaced distance approximately 25% larger. As a whole, the commercial clips performed better than the surgical clip after stereotactic vacuum-assisted core biopsy, suggesting that the surrounding outer component acts to anchor the central clip and minimizes clip displacement. The same should apply to tomosynthesis-guided biopsy. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.

  9. A modular CUDA-based framework for scale-space feature detection in video streams

    International Nuclear Information System (INIS)

    Kinsner, M; Capson, D; Spence, A

    2010-01-01

    Multi-scale image processing techniques enable extraction of features where the size of a feature is either unknown or changing, but the requirement to process image data at multiple scale levels imposes a substantial computational load. This paper describes the architecture and emerging results from the implementation of a GPGPU-accelerated scale-space feature detection framework for video processing. A discrete scale-space representation is generated for image frames within a video stream, and multi-scale feature detection metrics are applied to detect ridges and Gaussian blobs at video frame rates. A modular structure is adopted, in which common feature extraction tasks such as non-maximum suppression and local extrema search may be reused across a variety of feature detectors. Extraction of ridge and blob features is achieved at faster than 15 frames per second on video sequences from a machine vision system, utilizing an NVIDIA GTX 480 graphics card. By design, the framework is easily extended to additional feature classes through the inclusion of feature metrics to be applied to the scale-space representation, and using common post-processing modules to reduce the required CPU workload. The framework is scalable across multiple and more capable GPUs, and enables previously intractable image processing at video frame rates using commodity computational hardware.
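One of the reusable post-processing modules named above, non-maximum suppression, can be sketched in one dimension; the framework applies the same idea over a scale-space representation on the GPU, while this pure-Python version only shows the logic:

```python
def nms_1d(response):
    """Indices of strict local maxima in a 1-D feature-response row:
    a point survives only if it exceeds both of its neighbors."""
    return [i for i in range(1, len(response) - 1)
            if response[i] > response[i - 1] and response[i] > response[i + 1]]

# Toy detector response along one image row; plateaus are suppressed.
row = [0, 3, 1, 4, 4, 2, 5, 0]
peaks = nms_1d(row)
```

In the full framework the comparison runs over a 3x3x3 neighborhood in space and scale, so the same maxima test also localizes the scale at which a ridge or blob is strongest.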

  10. Microsurgery Simulator of Cerebral Aneurysm Clipping with Interactive Cerebral Deformation Featuring a Virtual Arachnoid.

    Science.gov (United States)

    Shono, Naoyuki; Kin, Taichi; Nomura, Seiji; Miyawaki, Satoru; Saito, Toki; Imai, Hideaki; Nakatomi, Hirofumi; Oyama, Hiroshi; Saito, Nobuhito

    2018-05-01

    A virtual reality simulator for aneurysmal clipping surgery is an attractive research target for neurosurgeons. Brain deformation is one of the most important functionalities necessary for an accurate clipping simulator and is vastly affected by the status of the supporting tissue, such as the arachnoid membrane. However, no virtual reality simulator implementing the supporting tissue of the brain has yet been developed. To develop a virtual reality clipping simulator possessing interactive brain deforming capability closely dependent on arachnoid dissection and apply it to clinical cases. Three-dimensional computer graphics models of cerebral tissue and surrounding structures were extracted from medical images. We developed a new method for modifiable cerebral tissue complex deformation by incorporating a nonmedical image-derived virtual arachnoid/trabecula in a process called multitissue integrated interactive deformation (MTIID). MTIID made it possible for cerebral tissue complexes to selectively deform at the site of dissection. Simulations for 8 cases of actual clipping surgery were performed before surgery and evaluated for their usefulness in surgical approach planning. Preoperatively, each operative field was precisely reproduced and visualized with the virtual brain retraction defined by users. The clear visualization of the optimal approach to treating the aneurysm via an appropriate arachnoid incision was possible with MTIID. A virtual clipping simulator mainly focusing on supporting tissues and less on physical properties seemed to be useful in the surgical simulation of cerebral aneurysm clipping. To our knowledge, this article is the first to report brain deformation based on supporting tissues.

  11. Semantic-based surveillance video retrieval.

    Science.gov (United States)

    Hu, Weiming; Xie, Dan; Fu, Zhouyu; Zeng, Wenrong; Maybank, Steve

    2007-04-01

    Visual surveillance produces large amounts of video data. Effective indexing and retrieval from surveillance video databases are very important. Although there are many ways to represent the content of video clips in current video retrieval algorithms, there still exists a semantic gap between users and retrieval systems. Visual surveillance systems supply a platform for investigating semantic-based video retrieval. In this paper, a semantic-based video retrieval framework for visual surveillance is proposed. A cluster-based tracking algorithm is developed to acquire motion trajectories. The trajectories are then clustered hierarchically using the spatial and temporal information, to learn activity models. A hierarchical structure of semantic indexing and retrieval of object activities, where each individual activity automatically inherits all the semantic descriptions of the activity model to which it belongs, is proposed for accessing video clips and individual objects at the semantic level. The proposed retrieval framework supports various queries including queries by keywords, multiple object queries, and queries by sketch. For multiple object queries, succession and simultaneity restrictions, together with depth and breadth first orders, are considered. For sketch-based queries, a method for matching trajectories drawn by users to spatial trajectories is proposed. The effectiveness and efficiency of our framework are tested in a crowded traffic scene.
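The sketch-based query step, matching a user-drawn trajectory to stored motion trajectories, can be outlined by resampling both trajectories to a fixed number of points and averaging point-wise distances. This is a simplified stand-in for the paper's matching method, using toy trajectories:

```python
def resample(traj, n=8):
    """Pick n points at evenly spaced indices along the trajectory."""
    step = (len(traj) - 1) / (n - 1)
    return [traj[round(i * step)] for i in range(n)]

def traj_distance(a, b, n=8):
    """Mean point-wise Euclidean distance between two resampled trajectories."""
    ra, rb = resample(a, n), resample(b, n)
    return sum(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(ra, rb)) / n

# Toy stored trajectories from tracked objects (hypothetical labels).
stored = {
    "car_left_to_right": [(x, 50) for x in range(0, 100, 5)],
    "pedestrian_diagonal": [(x, x) for x in range(0, 100, 5)],
}
sketch = [(x, 52) for x in range(0, 100, 10)]   # user draws a horizontal line
best = min(stored, key=lambda k: traj_distance(sketch, stored[k]))
```

Resampling makes the comparison invariant to how many points each trajectory was recorded with, which is why a 10-point sketch can be matched against 20-point tracks.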

  12. Storage, access, and retrieval of endoscopic and laparoscopic video

    Science.gov (United States)

    Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.

    1999-05-01

    The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video into DICOM 3.0. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery (laparoscopy, microsurgery, surgical microscopy, second opinion, virtual reality). Therefore DSVS are also integrated into the DICOM video concept. We present an implementation for a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital (stereoscopic) video sequences relevant for surgery should be examined regarding the clip length necessary for diagnosis and documentation and the clip size manageable with today's hardware. Methods for DSVS compression are described, implemented, and tested. Image sources relevant for this paper include, among others, a stereoscopic laparoscope and a monoscopic endoscope. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video-cutting.

  13. The NASA eClips 4D Program: Impacts from the First Year Quasi-Experimental Study on Video Development and Viewing on Students.

    Science.gov (United States)

    Davey, B.; Davis, H. B.; Harper-Neely, J.; Bowers, S.

    2017-12-01

    NASA eClips™ is a multimedia educational program providing resources relevant to the formal K-12 classroom. Science content for the NASA eClips™ 4D elements is drawn from all four divisions of the Science Mission Directorate (SMD) as well as cross-divisional topics. The suite of elements fulfills the following SMD education objectives: Enable STEM education, Improve U.S. scientific literacy, Advance national education goals (CoSTEM), and Leverage efforts through partnerships. A component of eClips™ was the development of NASA Spotlite videos (student-developed videos designed to increase student literacy and address misconceptions of other students) by digital media students. While developing the Spotlite videos, the students gained skills in teamwork, working in groups to accomplish a task, and conveying specific concepts in a video. The teachers felt the video project was a good fit for their courses and enhanced what the students were already learning. Teachers also reported that the students learned knowledge and skills that would help them in future careers, including how to gain a better understanding of a project and the importance of being knowledgeable about the topic. The student-developed eClips videos were then used as part of interactive lessons to help other students learn key science concepts. As part of our research, we established a quasi-experimental design where one group of students received the intervention including the Spotlite videos (intervention group) and one group did not receive the intervention (comparison group). An overall comparison of post scores between intervention-group and comparison-group students showed that intervention groups had significantly higher scores in three of the four content areas: Ozone, Clouds, and Phase Change.

  14. Evaluation of the effectiveness of color attributes for video indexing

    Science.gov (United States)

    Chupeau, Bertrand; Forest, Ronan

    2001-10-01

    Color features are reviewed and their effectiveness assessed in the application framework of key-frame clustering for abstracting unconstrained video. Existing color spaces and associated quantization schemes are first studied. Description of global color distribution by means of histograms is then detailed. In our work, 12 combinations of color space and quantization were selected, together with 12 histogram metrics. Their respective effectiveness with respect to picture similarity measurement was evaluated through a query-by-example scenario. For that purpose, a set of still-picture databases was built by extracting key frames from several video clips, including news, documentaries, sports and cartoons. Classical retrieval performance evaluation criteria were adapted to the specificity of our testing methodology.
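One commonly used histogram metric of the kind evaluated above is normalized histogram intersection, which scores the overlap of two global color distributions. The 4-bin histograms below are toy examples, not data from the study:

```python
def normalize(h):
    """Normalize a histogram so its bins sum to 1."""
    total = sum(h)
    return [v / total for v in h]

def intersection(h1, h2):
    """Histogram intersection: similarity in [0, 1] of two normalized
    color histograms (1 means identical distributions)."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# 4-bin toy histograms for a query key frame and two candidate key frames.
query = normalize([40, 30, 20, 10])
news_frame = normalize([38, 32, 18, 12])    # similar color palette
cartoon_frame = normalize([5, 5, 10, 80])   # very different palette
scores = {"news": intersection(query, news_frame),
          "cartoon": intersection(query, cartoon_frame)}
```

In a query-by-example evaluation like the one described, the candidate with the highest intersection score would be returned first.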

  15. ONLINE LEARNING: CAN VIDEOS ENHANCE LEARNING?

    OpenAIRE

    HAJHASHEMI, Karim; ANDERSON, Neil; JACKSON, Cliff; CALTABIANO, Nerina

    2015-01-01

    Higher education lecturers integrate different media into their courses. Internet-based educational video clips have gained prominence, as this medium is perceived to promote deeper thought processes, communication and interaction among users, and make classroom content more diverse. This paper provides a literature overview of the increasing importance of online videos across all modes of instruction. It discusses a quantitative and qualitative research design that was used to assess on-line video pe...

  16. A novel vascular clip design for the reliable induction of 2-kidney, 1-clip hypertension in the rat.

    Science.gov (United States)

    Chelko, Stephen P; Schmiedt, Chad W; Lewis, Tristan H; Lewis, Stephen J; Robertson, Tom P

    2012-02-01

    The 2-kidney, 1-clip (2K1C) model has provided many insights into the pathogenesis of renovascular hypertension. However, studies using the 2K1C model often report low success rates of hypertension, with typical success rates of just 40-60%. We hypothesized that these low success rates are due to fundamental design flaws in the clips traditionally used in 2K1C models. Specifically, the gap widths of traditional silver clips may not be maintained during investigator handling and these clips may also be easily dislodged from the renal artery following placement. Therefore, we designed and tested a novel vascular clip possessing design features to maintain both gap width and position around the renal artery. In this initial study, application of these new clips to the left renal artery produced reliable and consistent levels of hypertension in rats. Nine-day application of clips with gap widths of 0.27, 0.25, and 0.23 mm elicited higher mean arterial blood pressures of 112 ± 4, 121 ± 6, and 135 ± 7 mmHg, respectively (n = 8 for each group), than those of sham-operated controls (95 ± 2 mmHg, n = 8). Moreover, 8 out of 8 rats in each of the 0.23 and 0.25 mm 2K1C groups were hypertensive, whereas 7 out of 8 rats in the 0.27 mm 2K1C group were hypertensive. Plasma renin concentrations were also increased in all 2K1C groups compared with sham-operated controls. In summary, this novel clip design may help eliminate the large degree of unreliability commonly encountered with the 2K1C model.

  17. Is perception of quality more important than technical quality in patient video cases?

    Science.gov (United States)

    Roland, Damian; Matheson, David; Taub, Nick; Coats, Tim; Lakhanpaul, Monica

    2015-08-13

    The use of video cases to demonstrate key signs and symptoms in patients (patient video cases, or PVCs) is a rapidly expanding field. The aims of this study were to evaluate whether the technical quality of a video clip, or the judgement of its quality, influences a paediatrician's judgement of the acuity of the case, and to assess the relationship between perceived quality and the technical quality of a selection of video clips. Participants (12 senior consultant paediatricians attending an examination workshop) individually categorised 28 PVCs into one of three possible acuities and then described the quality of the image seen. The PVCs had been converted into four different technical qualities (differing bit rates ranging from excellent to low quality). Participants' assessment of quality and the actual industry standard of the PVC were independent (333 distinct observations, Spearman's rho = 0.0410, p = 0.4564). Agreement between actual acuity and participants' judgement was generally good at higher acuities but moderate at medium/low acuities of illness (overall correlation 0.664). Perception of the quality of the clip was related to correct assignment of acuity regardless of the technical quality of the clip (number of obs = 330, z = 2.07, p = 0.038). It is important to benchmark PVCs prior to use in learning resources, as experts may not agree on the information within, or the quality of, the clip. It appears that, although PVCs may be beneficial in a pedagogical context, the perceived quality of a clip may be an important determinant of an expert's decision making.

  18. MicroRNA transfection and AGO-bound CLIP-seq data sets reveal distinct determinants of miRNA action

    DEFF Research Database (Denmark)

    Wen, Jiayu; Parker, Brian J; Jacobsen, Anders

    2011-01-01

    the predictive effect of target flanking features. We observe distinct target determinants between expression-based and CLIP-based data. Target flanking features such as flanking region conservation are an important AGO-binding determinant-we hypothesize that CLIP experiments have a preference for strongly bound......Microarray expression analyses following miRNA transfection/inhibition and, more recently, Argonaute cross-linked immunoprecipitation (CLIP)-seq assays have been used to detect miRNA target sites. CLIP and expression approaches measure differing stages of miRNA functioning-initial binding of the mi...... miRNP-target interactions involving adjacent RNA-binding proteins that increase the strength of cross-linking. In contrast, seed-related features are major determinants in expression-based studies, but less so for CLIP-seq studies, and increased miRNA concentrations typical of transfection studies...

  19. Microsurgical Clipping of an Anterior Communicating Artery Aneurysm Using a Novel Robotic Visualization Tool in Lieu of the Binocular Operating Microscope: Operative Video.

    Science.gov (United States)

    Klinger, Daniel R; Reinard, Kevin A; Ajayi, Olaide O; Delashaw, Johnny B

    2018-01-01

    The binocular operating microscope has been the visualization instrument of choice for microsurgical clipping of intracranial aneurysms for many decades. To discuss recent technological advances that have provided novel visualization tools, which may prove to be superior to the binocular operating microscope in many regards. We present an operative video and our operative experience with the BrightMatter™ Servo System (Synaptive Medical, Toronto, Ontario, Canada) during the microsurgical clipping of an anterior communicating artery aneurysm. To the best of our knowledge, the use of this device for the microsurgical clipping of an intracranial aneurysm has never been described in the literature. The BrightMatter™ Servo System (Synaptive Medical) is a surgical exoscope which avoids many of the ergonomic constraints of the binocular operating microscope, but is associated with a steep learning curve. The BrightMatter™ Servo System (Synaptive Medical) is a maneuverable surgical exoscope that is positioned with a directional aiming device and a surgeon-controlled foot pedal. While utilizing this device comes with a steep learning curve typical of any new technology, the BrightMatter™ Servo System (Synaptive Medical) has several advantages over the conventional surgical microscope, which include a relatively unobstructed surgical field, provision of high-definition images, and visualization of difficult angles/trajectories. This device can easily be utilized as a visualization tool for a variety of cranial and spinal procedures in lieu of the binocular operating microscope. We anticipate that this technology will soon become an integral part of the neurosurgeon's armamentarium. Copyright © 2017 by the Congress of Neurological Surgeons

  20. Improving Video Generation for Multi-functional Applications

    OpenAIRE

    Kratzwald, Bernhard; Huang, Zhiwu; Paudel, Danda Pani; Dinesh, Acharya; Van Gool, Luc

    2017-01-01

    In this paper, we aim to improve the state-of-the-art video generative adversarial networks (GANs) with a view towards multi-functional applications. Our improved video GAN model does not separate foreground from background nor dynamic from static patterns, but learns to generate the entire video clip conjointly. Our model can thus be trained to generate - and learn from - a broad set of videos with no restriction. This is achieved by designing a robust one-stream video generation architectur...

  1. Cell Phone Video Recording Feature as a Language Learning Tool: A Case Study

    Science.gov (United States)

    Gromik, Nicolas A.

    2012-01-01

    This paper reports on a case study conducted at a Japanese national university. Nine participants used the video recording feature on their cell phones to produce weekly video productions. The task required that participants produce one 30-second video on a teacher-selected topic. Observations revealed the process of video creation with a cell…

  2. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    Directory of Open Access Journals (Sweden)

    Li Yao

    2016-01-01

    Full Text Available Both static features and motion features have shown promising performance in human activity recognition tasks. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting relational information between static features and motion features for human activity recognition. The videos are represented by a classical Bag-of-Words (BoW) model, which has proved useful in many works. To get a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between the words of the different feature sets. Then we use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on several datasets and obtain very promising results.
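    The first stage of the pipeline above (quantize each local descriptor against a codebook, then histogram the assignments) can be sketched in a few lines. The toy codebook and the `assign_word` helper below are illustrative stand-ins, not the authors' implementation, which additionally rebuilds the codebook with KL-divergence-based divisive clustering and a bipartite-graph k-way partition.

    ```python
    import math

    def assign_word(descriptor, codebook):
        """Return the index of the nearest codeword (Euclidean distance)."""
        best, best_d = 0, float("inf")
        for i, word in enumerate(codebook):
            d = math.dist(descriptor, word)
            if d < best_d:
                best, best_d = i, d
        return best

    def bow_histogram(descriptors, codebook):
        """Represent a video (a bag of local features) as a normalized BoW vector."""
        hist = [0.0] * len(codebook)
        for d in descriptors:
            hist[assign_word(d, codebook)] += 1.0
        total = sum(hist) or 1.0
        return [h / total for h in hist]
    ```

    With a two-word codebook, three descriptors split 1:2 between the words yield the BoW vector [1/3, 2/3].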

  3. Automated UAV-based mapping for airborne reconnaissance and video exploitation

    Science.gov (United States)

    Se, Stephen; Firoozfam, Pezhman; Goldstein, Norman; Wu, Linda; Dutkiewicz, Melanie; Pace, Paul; Naud, J. L. Pierre

    2009-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for force protection, situational awareness, mission planning, damage assessment and other tasks. UAVs gather huge amounts of video data, but it is extremely labour-intensive for operators to analyse hours and hours of received footage. At MDA, we have developed a suite of tools for automated video exploitation, including calibration, visualization, change detection and 3D reconstruction. Ongoing work aims to improve the robustness of these tools and to automate the process as much as possible. Our calibration tool extracts and matches tie-points in the video frames incrementally to recover the camera calibration and poses, which are then refined by bundle adjustment. Our visualization tool stabilizes the video, expands its field of view and creates a geo-referenced mosaic from the video frames. It is important to identify anomalies in a scene, which may include detecting improvised explosive devices (IEDs); however, it is tedious and difficult to compare video clips for differences manually. Our change detection tool allows the user to load two video clips taken from two passes at different times and flags any changes between them. 3D models are useful for situational awareness, as it is easier to understand a scene by visualizing it in 3D. Our 3D reconstruction tool creates calibrated photo-realistic 3D models from video clips taken from different viewpoints, using both semi-automated and automated approaches. The resulting 3D models also allow distance measurements and line-of-sight analysis.

  4. Infrared video based gas leak detection method using modified FAST features

    Science.gov (United States)

    Wang, Min; Hong, Hanyu; Huang, Likun

    2018-03-01

    Invisible leaking gas is dangerous and can easily lead to fire or explosion, so detecting it in time matters; many new technologies have arisen in recent years, among which infrared-video-based gas leak detection is widely recognized as a viable tool. However, existing infrared-video-based methods flag all the moving regions of a video frame as leaking gas, without discriminating the properties of each detected region; for example, a walking person may also be detected as gas. To solve this problem, we propose a novel infrared-video-based gas leak detection method that is able to effectively suppress strong motion disturbances. Firstly, a Gaussian mixture model (GMM) is used to establish the background model. Then, based on the observation that the shapes of gas regions differ from those of most rigid moving objects, we modify the Features from Accelerated Segment Test (FAST) algorithm and use the modified FAST (mFAST) features to describe each connected component. Because the statistical properties of the mFAST features extracted from gas regions differ from those of other motion regions, we propose a Pixel-Per-Points (PPP) condition to further select candidate connected components. Experimental results show that the algorithm is able to effectively suppress most strong motion disturbances and achieve real-time leaking gas detection.
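    The key observation above (gas plumes are smooth and blob-like, while rigid movers such as people are rich in corner features) can be illustrated with a toy version of the Pixel-Per-Points test. The function names and the threshold value are assumptions for illustration; the abstract does not give the exact mFAST/PPP formulation.

    ```python
    def points_per_pixel(num_keypoints, region_area):
        """Density of corner-like keypoints inside a foreground region."""
        return num_keypoints / max(region_area, 1)

    def looks_like_gas(num_keypoints, region_area, threshold=0.02):
        """Smooth gas plumes yield far fewer FAST-style corners per pixel
        than rigid moving objects, so a low keypoint density suggests gas."""
        return points_per_pixel(num_keypoints, region_area) < threshold
    ```

    A 1000-pixel region with 3 corners would pass as gas under this sketch, while the same region with 80 corners (typical of a textured rigid object) would not.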

  5. Comparison of Image Transform-Based Features for Visual Speech Recognition in Clean and Corrupted Videos

    Directory of Open Access Journals (Sweden)

    Seymour Rowan

    2008-01-01

    Full Text Available Abstract We present results of a study into the performance of a variety of different image transform-based feature types for speaker-independent visual speech recognition of isolated digits. This includes the first reported use of features extracted using a discrete curvelet transform. The study will show a comparison of some methods for selecting features of each feature type and show the relative benefits of both static and dynamic visual features. The performance of the features will be tested on both clean video data and also video data corrupted in a variety of ways to assess each feature type's robustness to potential real-world conditions. One of the test conditions involves a novel form of video corruption we call jitter which simulates camera and/or head movement during recording.

  6. Comparison of Image Transform-Based Features for Visual Speech Recognition in Clean and Corrupted Videos

    Directory of Open Access Journals (Sweden)

    Ji Ming

    2008-03-01

    Full Text Available We present results of a study into the performance of a variety of different image transform-based feature types for speaker-independent visual speech recognition of isolated digits. This includes the first reported use of features extracted using a discrete curvelet transform. The study will show a comparison of some methods for selecting features of each feature type and show the relative benefits of both static and dynamic visual features. The performance of the features will be tested on both clean video data and also video data corrupted in a variety of ways to assess each feature type's robustness to potential real-world conditions. One of the test conditions involves a novel form of video corruption we call jitter which simulates camera and/or head movement during recording.
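    The "jitter" corruption described in the two records above might be simulated as a random spatial shift of each frame. This is a sketch under assumptions (grayscale frames stored as nested lists, exposed pixels padded with a constant), since the papers' exact corruption procedure is not given in the abstracts.

    ```python
    import random

    def jitter(frame, max_shift=2, pad=0, rng=None):
        """Simulate camera/head movement by shifting a frame (a list of
        rows) by a random offset, padding the exposed pixels."""
        rng = rng or random.Random(0)
        dy = rng.randint(-max_shift, max_shift)
        dx = rng.randint(-max_shift, max_shift)
        h, w = len(frame), len(frame[0])
        out = [[pad] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                sy, sx = y - dy, x - dx
                if 0 <= sy < h and 0 <= sx < w:
                    out[y][x] = frame[sy][sx]
        return out
    ```

    Applying this per frame with independent offsets produces the frame-to-frame shake that a feature extractor must be robust to.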

  7. Prediction of transmission distortion for wireless video communication: analysis.

    Science.gov (United States)

    Chen, Zhifeng; Wu, Dapeng

    2012-03-01

    Transmitting video over wireless is a challenging problem since video may be seriously distorted due to packet errors caused by wireless channels. The capability of predicting transmission distortion (i.e., video distortion caused by packet errors) can assist in designing video encoding and transmission schemes that achieve maximum video quality or minimum end-to-end video distortion. This paper is aimed at deriving formulas for predicting transmission distortion. The contribution of this paper is twofold. First, we identify the governing law that describes how the transmission distortion process evolves over time and analytically derive the transmission distortion formula as a closed-form function of video frame statistics, channel error statistics, and system parameters. Second, we identify, for the first time, two important properties of transmission distortion. The first property is that the clipping noise, which is produced by nonlinear clipping, causes decay of propagated error. The second property is that the correlation between motion-vector concealment error and propagated error is negative and has dominant impact on transmission distortion, compared with other correlations. Due to these two properties and elegant error/distortion decomposition, our formula provides not only more accurate prediction but also lower complexity than the existing methods.
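    The first property identified above, decay of propagated error caused by nonlinear clipping, is easy to demonstrate with a toy predictive decoder. This is a hypothetical one-pixel sketch, not the paper's analytical model: the decoder reconstructs each frame from its own previous reconstruction, and every reconstructed value is clipped to the valid pixel range.

    ```python
    def clip(v, lo=0, hi=255):
        """Nonlinear clipping applied to every reconstructed pixel value."""
        return max(lo, min(hi, v))

    def propagate_error(true_pixels, initial_error):
        """Track how a concealment error propagates through predictive
        decoding when reconstruction is clipped to [0, 255]."""
        recon = clip(true_pixels[0] + initial_error)
        errors = [recon - true_pixels[0]]
        for prev, cur in zip(true_pixels, true_pixels[1:]):
            # The decoder predicts from its own reconstruction, so the
            # residual (cur - prev) is added on top of the carried error.
            recon = clip(recon + (cur - prev))
            errors.append(recon - cur)
        return errors
    ```

    For pixel values near the saturation boundary, each clip operation absorbs part of the propagated error, so the error sequence shrinks over time instead of persisting.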

  8. The distinguishing motor features of cataplexy: a study from video-recorded attacks.

    Science.gov (United States)

    Pizza, Fabio; Antelmi, Elena; Vandi, Stefano; Meletti, Stefano; Erro, Roberto; Baumann, Christian R; Bhatia, Kailash P; Dauvilliers, Yves; Edwards, Mark J; Iranzo, Alex; Overeem, Sebastiaan; Tinazzi, Michele; Liguori, Rocco; Plazzi, Giuseppe

    2018-05-01

    To describe the motor pattern of cataplexy and to determine its phenomenological differences from pseudocataplexy in the differential diagnosis of episodic falls. We selected 30 video-recorded cataplexy and 21 pseudocataplexy attacks in 17 and 10 patients, respectively, evaluated for suspected narcolepsy and with final diagnoses of narcolepsy type 1 and conversion disorder, together with self-reported attack features, and asked expert neurologists to blindly evaluate the motor features of the attacks. Video-documented and self-reported attack features of cataplexy and pseudocataplexy were contrasted. Video-recorded cataplexy can be positively differentiated from pseudocataplexy by the occurrence of facial hypotonia (ptosis, mouth opening, tongue protrusion) intermingled with jerks and grimaces abruptly interrupting laughter behavior (i.e., smile, facial expression) and postural control (head drops, trunk fall) under a clear emotional trigger. Facial involvement is present in both partial and generalized cataplexy. Conversely, generalized pseudocataplexy is associated with persistence of deep tendon reflexes during the attack. Self-reported features confirmed the important role of positive emotions (laughter, telling a joke) in triggering the attacks, as well as the more frequent occurrence of partial body involvement in cataplexy compared with pseudocataplexy. Cataplexy is characterized by abrupt facial involvement during laughter behavior. Video recording of suspected cataplexy attacks allows the identification of positive clinical signs useful for diagnosis and, possibly in the future, for severity assessment.

  9. Scientists feature their work in Arctic-focused short videos by FrontierScientists

    Science.gov (United States)

    Nielsen, L.; O'Connell, E.

    2013-12-01

    Whether they're guiding an unmanned aerial vehicle into a volcanic plume to sample aerosols, or documenting core drilling at a frozen lake in Siberia formed 3.6 million years ago by a massive meteorite impact, Arctic scientists are using video to enhance and expand their science and science outreach. FrontierScientists (FS), a forum for showcasing scientific work, produces and promotes radically different video blogs featuring Arctic scientists. Three- to seven- minute multimedia vlogs help deconstruct researcher's efforts and disseminate stories, communicating scientific discoveries to our increasingly connected world. The videos cover a wide range of current field work being performed in the Arctic. All videos are freely available to view or download from the FrontierScientists.com website, accessible via any internet browser or via the FrontierScientists app. FS' filming process fosters a close collaboration between the scientist and the media maker. Film creation helps scientists reach out to the public, communicate the relevance of their scientific findings, and craft a discussion. Videos keep audience tuned in; combining field footage, pictures, audio, and graphics with a verbal explanation helps illustrate ideas, allowing one video to reach people with different learning strategies. The scientists' stories are highlighted through social media platforms online. Vlogs grant scientists a voice, letting them illustrate their own work while ensuring accuracy. Each scientific topic on FS has its own project page where easy-to-navigate videos are featured prominently. Video sets focus on different aspects of a researcher's work or follow one of their projects into the field. We help the scientist slip the answers to their five most-asked questions into the casual script in layman's terms in order to free the viewers' minds to focus on new concepts. 
Videos are accompanied by written blogs intended to systematically demystify related facts so the scientists can focus

  10. Deep Spatial-Temporal Joint Feature Representation for Video Object Detection.

    Science.gov (United States)

    Zhao, Baojun; Zhao, Boya; Tang, Linbo; Han, Yuqi; Wang, Wenzheng

    2018-03-04

    With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, and the object detection frameworks are mostly established on still images and only use the spatial information, which means that the feature consistency cannot be ensured because the training procedure loses temporal information. To address these problems, we propose a single, fully-convolutional neural network-based object detection framework that involves temporal information by using Siamese networks. In the training procedure, first, the prediction network combines the multiscale feature map to handle objects of various sizes. Second, we introduce a correlation loss by using the Siamese network, which provides neighboring frame features. This correlation loss represents object co-occurrences across time to aid the consistent feature generation. Since the correlation loss should use the information of the track ID and detection label, our video object detection network has been evaluated on the large-scale ImageNet VID dataset where it achieves a 69.5% mean average precision (mAP).
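    The correlation-loss idea described above (pull together the features of the same object across neighboring frames, push apart features from different tracks) might be sketched as follows. The cosine-similarity form is an assumption for illustration; the abstract does not give the exact loss used in the paper.

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two feature vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    def correlation_loss(feat_t, feat_t1, same_track):
        """Penalize feature drift for the same track ID across neighboring
        frames; penalize similarity for different track IDs."""
        sim = cosine(feat_t, feat_t1)
        return 1.0 - sim if same_track else max(0.0, sim)
    ```

    Identical features of the same track incur zero loss, while orthogonal features of the same track incur the maximum loss of 1.0, which is the "object co-occurrence across time" signal the Siamese branch supplies during training.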

  11. Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems

    Science.gov (United States)

    Chan, Yi-Tung; Wang, Shuenn-Jyi; Tsai, Chung-Hsien

    2017-09-01

    Public safety is a matter of national security and people's livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method's performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.

  12. CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (MACINTOSH VERSION)

    Science.gov (United States)

    Culbert, C.

    1994-01-01

    The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches the number of fields. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. 
The PC version and the Macintosh
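    CLIPS itself is a rule language, but the forward-chaining principle it is built on can be sketched in a few lines of Python: a naive fixpoint loop that fires any rule whose conditions are all satisfied until no new facts appear. This is only an illustration of the inference style; CLIPS uses the far more efficient Rete network for pattern matching.

    ```python
    def forward_chain(facts, rules):
        """Naive forward chaining over propositional facts.
        Each rule is a (conditions, conclusion) pair; fire rules until
        the fact set stops growing (a fixpoint)."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conclusion not in facts and all(c in facts for c in conditions):
                    facts.add(conclusion)
                    changed = True
        return facts
    ```

    Chained rules fire transitively: asserting the two "duck" observations below derives `is-duck`, which in turn derives `can-swim`.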

  13. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2008-02-01

    Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.
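    The first CGSIFT step (replace each color by the index of its nearest palette color, producing a compact index image for SIFT to run on) can be sketched as nearest-palette quantization. Constructing the actual Fibonacci-lattice palette is not described in the abstract, so the palette below is a hypothetical input.

    ```python
    import math

    def quantize_to_indices(pixels, palette):
        """Map each RGB pixel to the index of its nearest palette color,
        turning a color image into a small-alphabet index image."""
        def nearest(px):
            return min(range(len(palette)), key=lambda i: math.dist(px, palette[i]))
        return [nearest(p) for p in pixels]
    ```

    With a 3-color palette, a near-black pixel and a reddish pixel quantize to indices 0 and 1 respectively, and SIFT would then operate on that index image rather than on the grayscale original.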

  14. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Hong Yi

    2008-01-01

    Full Text Available Abstract Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.

  15. Deep Spatial-Temporal Joint Feature Representation for Video Object Detection

    Directory of Open Access Journals (Sweden)

    Baojun Zhao

    2018-03-01

    Full Text Available With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, and the object detection frameworks are mostly established on still images and only use the spatial information, which means that the feature consistency cannot be ensured because the training procedure loses temporal information. To address these problems, we propose a single, fully-convolutional neural network-based object detection framework that involves temporal information by using Siamese networks. In the training procedure, first, the prediction network combines the multiscale feature map to handle objects of various sizes. Second, we introduce a correlation loss by using the Siamese network, which provides neighboring frame features. This correlation loss represents object co-occurrences across time to aid the consistent feature generation. Since the correlation loss should use the information of the track ID and detection label, our video object detection network has been evaluated on the large-scale ImageNet VID dataset where it achieves a 69.5% mean average precision (mAP).

  16. Reviews Website: Online Graphing Calculator Video Clip: Learning From the News Phone App: Graphing Calculator Book: Challenge and Change: A History of the Nuffield A-Level Physics Project Book: SEP Sound Book: Reinventing Schools, Reforming Teaching Book: Physics and Technology for Future Presidents iPhone App: iSeismometer Web Watch

    Science.gov (United States)

    2011-01-01

    WE RECOMMEND Online Graphing Calculator Calculator plots online graphs Challenge and Change: A History of the Nuffield A-Level Physics Project Book delves deep into the history of Nuffield physics SEP Sound Booklet has ideas for teaching sound but lacks some basics Reinventing Schools, Reforming Teaching Fascinating book shows how politics impacts on the classroom Physics and Technology for Future Presidents A great book for teaching physics for the modern world iSeismometer iPhone app teaches students about seismic waves WORTH A LOOK Teachers TV Video Clip Lesson plan uses video clip to explore new galaxies Graphing Calculator App A phone app that handles formulae and graphs WEB WATCH Physics.org competition finds the best websites

  17. Know your data: understanding implicit usage versus explicit action in video content classification

    Science.gov (United States)

    Yew, Jude; Shamma, David A.

    2011-02-01

    In this paper, we present a method for video category classification using only social metadata from websites like YouTube. In place of content analysis, we utilize the communicative and social contexts surrounding videos as a means to determine a categorical genre, e.g., Comedy or Music. We hypothesize that video clips belonging to different genre categories have distinct signatures and patterns that are reflected in their collected metadata. In particular, we define and describe social metadata as usage or action to aid in classification. We trained a Naive Bayes classifier to predict categories from a sample of 1,740 YouTube videos representing the top five genre categories. Using just a small number of the available metadata features, we compared the classifications produced by our Naive Bayes classifier with those provided by the uploader of each video. Compared to random predictions on the YouTube data (21% accurate), our classifier attained a mediocre 33% accuracy in predicting video genres. However, we found that the accuracy of our classifier improves significantly with nominal factoring of the explicit data features. By factoring the ratings of the videos in the dataset, the classifier was able to accurately predict the genres of 75% of the videos. We argue that the patterns of social activity found in the metadata are not just meaningful in their own right, but are indicative of the meaning of the shared video content. The results presented by this project represent a first step in investigating the potential meaning and significance of social metadata and its relation to the media experience.
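    A minimal categorical Naive Bayes classifier of the kind described (trained on nominal, factored metadata features) might look like the sketch below. The feature values and genre labels are invented for illustration; the paper's actual feature set and factoring scheme are not detailed in the abstract.

    ```python
    import math
    from collections import Counter, defaultdict

    class NaiveBayes:
        """Naive Bayes over nominal (categorical) features with Laplace smoothing."""

        def fit(self, rows, labels):
            self.n = len(labels)
            self.priors = Counter(labels)
            # (feature index, label) -> Counter of observed feature values
            self.counts = defaultdict(Counter)
            for row, y in zip(rows, labels):
                for i, v in enumerate(row):
                    self.counts[(i, y)][v] += 1
            return self

        def predict(self, row):
            best, best_lp = None, -math.inf
            for y, c in self.priors.items():
                lp = math.log(c / self.n)
                for i, v in enumerate(row):
                    cnt = self.counts[(i, y)]
                    # Laplace smoothing, with one extra bucket for unseen values
                    lp += math.log((cnt[v] + 1) / (sum(cnt.values()) + len(cnt) + 1))
                if lp > best_lp:
                    best, best_lp = y, lp
            return best
    ```

    Trained on a handful of (rating bucket, duration bucket) rows, the classifier picks the genre whose conditional value counts best match the query row.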

  18. CERN Video News

    CERN Document Server

    2003-01-01

    From Monday you can see on the web the new edition of CERN's Video News. Thanks to a collaboration between the audiovisual teams at CERN and Fermilab, you can see a report made by the American laboratory. The clip concerns the LHC magnets that are being constructed at Fermilab. Also in the programme: the spectacular rotation of one of the ATLAS coils, the arrival at CERN of the first American magnet made at Brookhaven, the story of the discovery 20 years ago of the W and Z bosons at CERN. http://www.cern.ch/video or Bulletin web page.

  19. Psychogenic Tremor: A Video Guide to Its Distinguishing Features

    Directory of Open Access Journals (Sweden)

    Joseph Jankovic

    2014-08-01

    Full Text Available Background: Psychogenic tremor is the most common psychogenic movement disorder. It has characteristic clinical features that can help distinguish it from other tremor disorders. There is no diagnostic gold standard and the diagnosis is based primarily on clinical history and examination. Despite proposed diagnostic criteria, the diagnosis of psychogenic tremor can be challenging. While there are numerous studies evaluating psychogenic tremor in the literature, there are no publications that provide a video/visual guide that demonstrate the clinical characteristics of psychogenic tremor. Educating clinicians about psychogenic tremor will hopefully lead to earlier diagnosis and treatment. Methods: We selected videos from the database at the Parkinson's Disease Center and Movement Disorders Clinic at Baylor College of Medicine that illustrate classic findings supporting the diagnosis of psychogenic tremor. Results: We include 10 clinical vignettes with accompanying videos that highlight characteristic clinical signs of psychogenic tremor including distractibility, variability, entrainability, suggestibility, and coherence. Discussion: Psychogenic tremor should be considered in the differential diagnosis of patients presenting with tremor, particularly if it is of abrupt onset, intermittent, variable and not congruous with organic tremor. The diagnosis of psychogenic tremor, however, should not be simply based on exclusion of organic tremor, such as essential, parkinsonian, or cerebellar tremor, but on positive criteria demonstrating characteristic features. Early recognition and management are critical for good long-term outcome.

  20. EEG-based recognition of video-induced emotions: selecting subject-independent feature set.

    Science.gov (United States)

    Kortelainen, Jukka; Seppänen, Tapio

    2013-01-01

    Emotions are fundamental to everyday life, affecting our communication, learning, perception, and decision making. Incorporating emotions into human-computer interaction (HCI) could be seen as a significant step forward, offering great potential for developing advanced future technologies. Since the electrical activity of the brain is affected by emotions, the electroencephalogram (EEG) offers an interesting channel for improving HCI. In this paper, the selection of a subject-independent feature set for EEG-based emotion recognition is studied. We investigate the effect of different feature sets in classifying a person's arousal and valence while watching videos with emotional content. The classification performance is optimized by applying a sequential forward floating search algorithm for feature selection. The best classification rate (65.1% for arousal and 63.0% for valence) is obtained with a feature set containing power spectral features from the frequency band of 1-32 Hz. The proposed approach substantially improves the classification rate reported in the literature. In the future, further analysis of the video-induced EEG changes, including the topographical differences in the spectral features, is needed.
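    A minimal sketch of the kind of pipeline described above, assuming FFT-based band powers over the 1-32 Hz range and a greedy forward-selection step. The band edges, the synthetic signal, and the selection criterion are illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch (not the authors' pipeline): power spectral features
# per frequency band, plus the greedy "add the best feature" step that
# sequential forward floating search builds on.
import numpy as np

def band_power(signal, fs, band):
    """Mean power of `signal` in the half-open frequency band [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def forward_select(X, y, score, k):
    """Greedy forward step of SFFS: repeatedly add the feature that
    maximizes `score` on the selected subset (floating removals omitted)."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        best_f = max(remaining, key=lambda f: score(X[:, selected + [f]], y))
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Toy usage: 1-32 Hz split into classic bands for a synthetic 10 Hz signal
fs = 128
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 10 * t)  # alpha-range oscillation
bands = [(1, 4), (4, 8), (8, 13), (13, 32)]
powers = [band_power(sig, fs, b) for b in bands]
print(bands[int(np.argmax(powers))])  # the 8-13 Hz band dominates
```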

  1. Teaching Surgical Procedures with Movies: Tips for High-quality Video Clips

    OpenAIRE

    Jacquemart, Mathieu; Bouletreau, Pierre; Breton, Pierre; Mojallal, Ali; Sigaux, Nicolas

    2016-01-01

    Summary: Video must now be considered as a precious tool for learning surgery. However, the medium does present production challenges, and currently, quality movies are not always accessible. We developed a series of 7 surgical videos and made them available on a publicly accessible internet website. Our videos have been viewed by thousands of people worldwide. High-quality educational movies must respect strategic and technical points to be reliable.

  2. Teaching Surgical Procedures with Movies: Tips for High-quality Video Clips.

    Science.gov (United States)

    Jacquemart, Mathieu; Bouletreau, Pierre; Breton, Pierre; Mojallal, Ali; Sigaux, Nicolas

    2016-09-01

    Video must now be considered as a precious tool for learning surgery. However, the medium does present production challenges, and currently, quality movies are not always accessible. We developed a series of 7 surgical videos and made them available on a publicly accessible internet website. Our videos have been viewed by thousands of people worldwide. High-quality educational movies must respect strategic and technical points to be reliable.

  3. Physiological remodeling of bifurcation aneurysms: preclinical results of the eCLIPs device.

    Science.gov (United States)

    Marotta, Thomas R; Riina, Howard A; McDougall, Ian; Ricci, Donald R; Killer-Oberpfalzer, Monika

    2018-02-01

    OBJECTIVE Intracranial bifurcation aneurysms are complex lesions for which current therapy, including simple coiling, balloon- or stent-assisted coiling, coil retention, or intrasaccular devices, is inadequate. Thromboembolic complications due to a large burden of intraluminal metal, impedance of access to side branches, and a high recurrence rate, due largely to the unmitigated high-pressure flow into the aneurysm (water hammer effect), are among the limitations imposed by current therapy. The authors describe herein a novel device, eCLIPs, and its use in a preclinical laboratory study that suggests the device's design and functional features may overcome many of these limitations. METHODS A preclinical model of wide-necked bifurcation aneurysms in rabbits was used to assess functional features and efficacy of aneurysm occlusion by the eCLIPs device. RESULTS The eCLIPs device, in bridging the aneurysm neck, allows coil retention, disrupts flow away from the aneurysm, leaves the main vessel and side branches unencumbered by intraluminal metal, and serves as a platform for endothelial growth across the neck, excluding the aneurysm from the circulation. CONCLUSIONS The eCLIPs device permits physiological remodeling of the bifurcation.

  4. Content-based TV sports video retrieval using multimodal analysis

    Science.gov (United States)

    Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru

    2003-09-01

    In this paper, we propose content-based video retrieval, i.e., retrieval based on semantic content. Because video data is composed of multimodal information streams such as visual, auditory, and textual streams, we describe a strategy that uses multimodal analysis for automatically parsing sports video. The paper first defines the basic structure of a sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing, and text extraction to realize video retrieval. The experimental results for TV sports video of football games indicate that multimodal analysis is effective for video retrieval, whether by quickly browsing tree-like video clips or by inputting keywords within a predefined domain.

  5. The 3D Human Motion Control Through Refined Video Gesture Annotation

    Science.gov (United States)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the early days of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces such as remote controllers with motion-sensing technology on the Nintendo Wii [1]. In particular, video-based human-computer interaction (HCI) techniques have been applied to games; a representative example is 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from intractable game controllers. Moreover, video-based HCI is crucial for communication between humans and computers, since it is intuitive, easy to use, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge: the level of accuracy depends strongly on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and for analyzing motions of specific performances (e.g., a 'golf swing' or 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which a column corresponds to a sub-body part and a row represents a time frame of the data capture. Thus, we can extract a sub-body part's motion simply by selecting the corresponding columns. Unlike the low-level feature values of video human motion, the entries of a 3D motion-capture data matrix are not pixel values, but are closer to the human level of semantics.
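    The column-selection idea above can be shown in a few lines; the channel-to-body-part mapping below is hypothetical, not VICON's actual channel layout.

```python
# Sketch: a motion-capture clip as a (frames x channels) matrix; a sub-body
# part's motion is extracted by selecting its columns.
import numpy as np

frames, channels = 120, 6
clip = np.arange(frames * channels, dtype=float).reshape(frames, channels)

# Hypothetical mapping from body part to channel indices
columns = {"left_arm": [0, 1, 2], "right_arm": [3, 4, 5]}

left_arm = clip[:, columns["left_arm"]]  # all frames, left-arm channels only
print(left_arm.shape)  # (120, 3)
```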

  6. CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION)

    Science.gov (United States)

    Riley, G.

    1994-01-01

    The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer based artificial intelligence tools. CLIPS is a forward chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches them against the rule network. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. 
The PC version and the Macintosh
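    The forward-chaining inference that CLIPS performs can be illustrated with a naive fixpoint loop; unlike the Rete algorithm, which incrementally tracks partial matches, this sketch re-checks every rule on every cycle, and the facts and rules are toy examples.

```python
# Toy sketch (not CLIPS itself): forward chaining over asserted facts, the
# inference pattern CLIPS implements efficiently via the Rete algorithm.
rules = [
    # (conditions, fact to assert) -- all conditions must hold to fire
    ({("animal", "duck")}, ("sound", "quack")),
    ({("sound", "quack")}, ("category", "bird")),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:  # fire rules until no new facts appear (fixpoint)
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = forward_chain({("animal", "duck")}, rules)
print(("category", "bird") in result)  # True: derived via chained rules
```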

  7. Saying What You're Looking For: Linguistics Meets Video Search.

    Science.gov (United States)

    Barrett, Daniel Paul; Barbu, Andrei; Siddharth, N; Siskind, Jeffrey Mark

    2016-10-01

    We present an approach to searching large video corpora for clips which depict a natural-language query in the form of a sentence. Compositional semantics is used to encode subtle meaning differences lost in other approaches, such as the difference between two sentences which have identical words but entirely different meaning: The person rode the horse versus The horse rode the person. Given a sentential query and a natural-language parser, we produce, for each clip in a corpus, a score indicating how well the clip depicts that sentence, and return a ranked list of clips. Two fundamental problems are addressed simultaneously: detecting and tracking objects, and recognizing whether those tracks depict the query. Because both tracking and object detection are unreliable, our approach uses the sentential query to focus the tracker on the relevant participants and ensures that the resulting tracks are described by the sentential query. While most earlier work was limited to single-word queries which correspond to either verbs or nouns, we search for complex queries which contain multiple phrases, such as prepositional phrases, and modifiers, such as adverbs. We demonstrate this approach by searching for 2,627 naturally elicited sentential queries in 10 Hollywood movies.

  8. Seismic signals hard clipping overcoming

    Science.gov (United States)

    Olszowa, Paula; Sokolowski, Jakub

    2018-01-01

    In signal processing, clipping is understood as the phenomenon of limiting a signal beyond a certain threshold. It is often related to overloading of a sensor. Two particular types of clipping are recognized: soft and hard. Beyond the limiting value, soft clipping reduces the signal's real gain, while hard clipping stiffly fixes the signal values at the limit. In both cases a certain amount of signal information is lost. Obviously, if one possesses a model which describes the considered signal and the threshold value (which might be slightly more difficult to obtain in the soft clipping case), an attempt can be made to restore the signal. Commonly it is assumed that seismic signals take the form of an impulse response of some specific system. This suggests that a sine wave may be the most appropriate function to fit over the clipped interval. However, this should be tested. In this paper, the possibility of overcoming hard clipping in seismic signals originating from a geoseismic station belonging to an underground mine is considered. A set of raw signals will be hard-clipped manually, and then several different functions will be fitted and compared in terms of least squares. The results will then be analysed.
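    A least-squares sine fit over a hard-clipped interval, as proposed above, can be sketched as follows; the synthetic signal, known frequency, and threshold are illustrative assumptions, not the mine's geoseismic data.

```python
# Sketch: hard-clip a synthetic signal, then fill the clipped interval with
# a sine fitted by least squares to the surviving (unclipped) samples.
import numpy as np

fs = 1000.0
t = np.arange(0.0, 0.2, 1.0 / fs)            # 200 samples
true = np.sin(2 * np.pi * 40.0 * t)          # candidate model: pure sine
thr = 0.7
clipped = np.clip(true, -thr, thr)           # hard clipping at +/- thr
mask = np.abs(clipped) >= thr                # samples flattened by the clip

# Fit a*sin(2*pi*f*t) + b*cos(2*pi*f*t), frequency assumed known,
# on the unclipped samples only.
f = 40.0
A = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
coef, *_ = np.linalg.lstsq(A[~mask], clipped[~mask], rcond=None)
recon = clipped.copy()
recon[mask] = A[mask] @ coef                 # replace flat tops with the fit

err_clipped = float(np.sum((clipped[mask] - true[mask]) ** 2))
err_recon = float(np.sum((recon[mask] - true[mask]) ** 2))
print(err_recon < err_clipped)  # True: the sine fit restores the lost peaks
```

    Comparing the squared errors of several candidate functions fitted this way is the least-squares comparison the abstract describes.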

  9. Concept of the central clip

    DEFF Research Database (Denmark)

    Alegria-Barrero, Eduardo; Chan, Pak Hei; Foin, Nicolas

    2014-01-01

    AIMS: Percutaneous edge-to-edge mitral valve repair with the MitraClip(®) was shown to be a safe and feasible alternative compared to conventional surgical mitral valve repair. We analyse the concept of the central clip and the predictors for the need of more than one MitraClip(®) in our high.......8±10.7 years (30 males, 13 females; mean logistic EuroSCORE 24.1±11, mean LVEF 47.5±18.5%; mean±SD) were treated. Median follow-up was 385 days (104-630; Q1-Q3). Device implantation success was 93%. All patients were treated following the central clip concept: 52.5% of MR was degenerative in aetiology and 47....... The presence of a restricted posterior mitral valve leaflet (PML) was inversely correlated with the need for more than one clip (p=0.02). A cut-off value of ≥7.5 mm for vena contracta predicted the need for a second clip (sensitivity 83%, specificity 90%, p=0.01). CONCLUSIONS: The central MitraClip(®) concept...

  10. High-quality and small-capacity e-learning video featuring lecturer-superimposing PC screen images

    Science.gov (United States)

    Nomura, Yoshihiko; Murakami, Michinobu; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    2006-10-01

    Information processing and communication technology is progressing quickly and spreading throughout various technological fields. Its development should therefore respond to the need for improved quality in e-learning education systems. The authors propose a new video-image compression processing system that ingeniously exploits the features of the lecturing scene. While the dynamic lecturing scene is shot with a digital video camera, screen images are stored electronically by PC screen-capture software at relatively long intervals during an actual class. Then, the lecturer and the lecture stick are extracted from the digital video images by pattern recognition techniques, and the extracted images are superimposed on the appropriate PC screen images by off-line processing. Thus, we have succeeded in creating a high-quality and small-capacity (HQ/SC) video-on-demand educational content with the following advantages: high image sharpness, small electronic file capacity, and realistic lecturer motion.

  11. Using Progressive Video Prompting to Teach Students with Moderate Intellectual Disability to Shoot a Basketball

    Science.gov (United States)

    Lo, Ya-yu; Burk, Bradley; Anderson, Adrienne L.

    2014-01-01

    The current study examined the effects of a modified video prompting procedure, namely progressive video prompting, to increase technique accuracy of shooting a basketball in the school gymnasium of three 11th-grade students with moderate intellectual disability. The intervention involved participants viewing video clips of an adult model who…

  12. Object oriented development of engineering software using CLIPS

    Science.gov (United States)

    Yoon, C. John

    1991-01-01

    Engineering applications involve numeric complexity and the manipulation of large amounts of data. Traditionally, numeric computation has been the main concern in developing engineering software. As engineering application software has become larger and more complex, management of resources such as data, rather than numeric complexity, has become the major software design problem. Object-oriented design and implementation methodologies can improve the reliability, flexibility, and maintainability of the resulting software; however, some tasks are better solved with the traditional procedural paradigm. The C Language Integrated Production System (CLIPS), with its deffunction and defgeneric constructs, supports the procedural paradigm. The natural blending of object-oriented and procedural paradigms has been cited as the reason for the popularity of the C++ language. The CLIPS Object Oriented Language's (COOL) object-oriented features are more versatile than those of C++. A software design methodology based on object-oriented and procedural approaches, appropriate for engineering software and to be implemented in CLIPS, is outlined. A method for sensor placement for Space Station Freedom is being implemented in COOL as a sample problem.

  13. Teasing Apart Complex Motions using VideoPoint

    Science.gov (United States)

    Fischer, Mark

    2002-10-01

    Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the 2-dimensional motion of an object as filmed by a camera that is moving and rotating in the same plane will be discussed. Methods for extracting the desired object motion will be given as well as suggestions for shooting more easily analyzable video clips.

  14. YouTube Video Project: A "Cool" Way to Learn Communication Ethics

    Science.gov (United States)

    Lehman, Carol M.; DuFrene, Debbie D.; Lehman, Mark W.

    2010-01-01

    The millennial generation embraces new technologies as a natural way of accessing and exchanging information, staying connected, and having fun. YouTube, a video-sharing site that allows users to upload, view, and share video clips, is among the latest "cool" technologies for enjoying quick laughs, employing a wide variety of corporate activities,…

  15. Treatment of Complex Fistula-in-Ano With a Nitinol Proctology Clip

    DEFF Research Database (Denmark)

    Nordholm-Carstensen, Andreas; Krarup, Peter-Martin; Hagen, Kikke

    2017-01-01

    BACKGROUND: The treatment of complex anocutaneous fistulas remains a major therapeutic challenge balancing the risk of incontinence against the chance of permanent closure. OBJECTIVE: The purpose of this study was to investigate the efficacy of a nitinol proctology clip for closure of complex ano...... with those of other noninvasive, sphincter-sparing techniques for high-complex anocutaneous fistulas, with no risk of incontinence. Predictive parameters for fistula healing using this technique remain uncertain. See Video Abstract at http://links.lww.com/DCR/A347....

  16. Feature Extraction in Sequential Multimedia Images: with Applications in Satellite Images and On-line Videos

    Science.gov (United States)

    Liang, Yu-Li

    Multimedia data is increasingly important in scientific discovery and in people's daily lives. The content of massive multimedia collections is often diverse and noisy, and motion between frames is sometimes crucial in analyzing those data. Among all formats, still images and videos are the most commonly used. Images are compact in size but do not contain motion information. Videos record motion but are sometimes too big to be analyzed. Sequential images, which are sets of continuous images with a low frame rate, stand out because they are smaller than videos and still maintain motion information. This thesis investigates features in different types of noisy sequential images, and proposes solutions that intelligently combine multiple features to successfully retrieve visual information from on-line videos and cloudy satellite images. The first task is detecting supraglacial lakes on the ice sheet in sequential satellite images. The dynamics of supraglacial lakes on the Greenland ice sheet deeply affect glacier movement, which is directly related to sea-level rise and global environmental change. Detecting lakes on ice suffers from diverse image qualities and unexpected clouds. A new method is proposed to efficiently extract prominent lake candidates with irregular shapes and heterogeneous backgrounds, even in cloudy images. The proposed system fully automates the procedure and tracks lakes with high accuracy. We further cooperated with geoscientists to examine the tracked lakes, which led to new scientific findings. The second task is detecting obscene content in on-line video chat services, such as Chatroulette, that randomly match pairs of users in video chat sessions. A big problem encountered in such systems is the presence of flashers and obscene content. Because of the variety of obscene content and the unstable quality of videos captured by home web cameras, detecting misbehaving users is a highly challenging task. We propose SafeVchat, which is the first solution that achieves satisfactory

  17. Deconstructing "Good Practice" Teaching Videos: An Analysis of Pre-Service Teachers' Reflections

    Science.gov (United States)

    Ineson, Gwen; Voutsina, Chronoula; Fielding, Helen; Barber, Patti; Rowland, Tim

    2015-01-01

    Video clips of mathematics lessons are used extensively in pre-service teacher education and continuing professional development activities. Given course time constraints, an opportunity to critique these videos is not always possible. Because of this, and because pre-service teachers make extensive use of material found during internet searches,…

  18. Detection of Double-Compressed H.264/AVC Video Incorporating the Features of the String of Data Bits and Skip Macroblocks

    Directory of Open Access Journals (Sweden)

    Heng Yao

    2017-12-01

    Full Text Available Today's H.264/AVC coded videos have high quality and a high data-compression ratio. They also have strong fault tolerance and better network adaptability, and they have been widely applied on the Internet. With the popularity of powerful and easy-to-use video editing software, digital videos can be tampered with in various ways. Therefore, detecting double compression in H.264/AVC video can serve as a first step in the study of video-tampering forensics. This paper proposes a simple but effective double-compression detection method that analyzes the periodic features of the string of data bits (SODBs) and the skip macroblocks (S-MBs) for all I-frames and P-frames in a double-compressed H.264/AVC video. For a given suspicious video, the SODBs and S-MBs are extracted for each frame. Both features are then incorporated to generate one enhanced feature that represents the periodic artifact of the double-compressed video. Finally, a time-domain analysis is conducted to detect the periodicity of the features. The primary Group of Pictures (GOP) size is estimated based on an exhaustive strategy. The experimental results demonstrate the efficacy of the proposed method.
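    The exhaustive period search for the primary GOP size can be sketched on a synthetic per-frame feature sequence; the spike model and scoring rule below are assumptions for illustration, not the paper's SODB/S-MB feature.

```python
# Illustrative sketch: given a per-frame feature with a periodic artifact,
# exhaustively score candidate periods and keep the best, in the spirit of
# the paper's exhaustive GOP-size estimation.
import numpy as np

def estimate_period(feature, candidates):
    """Return the candidate period whose comb of frames stands out most."""
    best_p, best_score = None, -np.inf
    for p in candidates:
        # mean feature value at every p-th frame, relative to the baseline
        score = feature[::p].mean() - feature.mean()
        if score > best_score:
            best_p, best_score = p, score
    return best_p

# Synthetic feature: low-level noise plus a spike every 12 frames (GOP = 12)
rng = np.random.default_rng(0)
feat = rng.random(240) * 0.1
feat[::12] += 1.0
print(estimate_period(feat, range(2, 24)))  # -> 12
```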

  19. YouTube as a Qualitative Research Asset: Reviewing User Generated Videos as Learning Resources

    Science.gov (United States)

    Chenail, Ronald J.

    2011-01-01

    YouTube, the video hosting service, offers students, teachers, and practitioners of qualitative researchers a unique reservoir of video clips introducing basic qualitative research concepts, sharing qualitative data from interviews and field observations, and presenting completed research studies. This web-based site also affords qualitative…

  20. Selection of Film Clips and Development of a Video for the Investigation of Sexual Decision Making among Men Who Have Sex with Men

    Science.gov (United States)

    Woolf-King, Sarah E.; Maisto, Stephen; Carey, Michael; Vanable, Peter

    2013-01-01

    Experimental research on sexual decision making is limited, despite the public health importance of such work. We describe formative work conducted in advance of an experimental study designed to evaluate the effects of alcohol intoxication and sexual arousal on risky sexual decision making among men who have sex with men. In Study 1, we describe the procedures for selecting and validating erotic film clips (to be used for the experimental manipulation of arousal). In Study 2, we describe the tailoring of two interactive role-play videos to be used to measure risk perception and communication skills in an analog risky sex situation. Together, these studies illustrate a method for creating experimental stimuli to investigate sexual decision making in a laboratory setting. Research using this approach will support experimental research that affords a stronger basis for drawing causal inferences regarding sexual decision making. PMID:19760530

  1. Bayesian Recovery of Clipped OFDM Signals: A Receiver-based Approach

    KAUST Repository

    Al-Rabah, Abdullatif R.

    2013-05-01

    Recently, orthogonal frequency-division multiplexing (OFDM) has been adopted for high-speed wireless communications due to its robustness against multipath fading. However, one of the main fundamental drawbacks of OFDM systems is the high peak-to-average-power ratio (PAPR). Several techniques have been proposed for PAPR reduction. Most of these techniques require transmitter-based (pre-compensated) processing. On the other hand, receiver-based alternatives would save power and reduce transmitter complexity. With this in mind, a possible approach is to limit the amplitude of the OFDM signal to a predetermined threshold, which is equivalent to adding a sparse clipping signal; this clipping signal is then estimated at the receiver to recover the original signal. In this work, we propose a Bayesian receiver-based low-complexity clipping-signal recovery method for PAPR reduction. The method is able to i) effectively reduce the PAPR via a simple clipping scheme at the transmitter side, ii) use a Bayesian recovery algorithm to reconstruct the clipping signal at the receiver side by measuring part of the subcarriers, iii) perform well in the absence of statistical information about the signal (e.g., clipping level) and the noise (e.g., noise variance), and at the same time iv) remain energy efficient due to its low complexity. Specifically, the proposed recovery technique is implemented in a data-aided fashion. The data-aided method collects clipping information by measuring reliable data subcarriers, thus making full use of the spectrum for data transmission without the need for tone reservation. The study is extended further to discuss how to improve the recovery of the clipping signal by utilizing features of practical OFDM systems, i.e., oversampling and the presence of multiple receivers. Simulation results demonstrate the superiority of the proposed technique over other recovery algorithms. The overall objective is to show that the receiver-based Bayesian technique is highly
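    The transmitter-side clipping step and the sparse clipping signal it produces can be illustrated as follows; the QPSK setup, block size, and clipping level are assumptions for the sketch, not the paper's exact system, and the Bayesian receiver itself is omitted.

```python
# Hedged sketch: amplitude-clip an OFDM-like time-domain signal and show the
# PAPR reduction; the sparse difference c = x_clip - x is the clipping signal
# a Bayesian receiver would try to recover.
import numpy as np

def papr_db(x):
    """Peak-to-average-power ratio of a complex baseband signal, in dB."""
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

rng = np.random.default_rng(1)
N = 256
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)  # QPSK subcarriers
x = np.fft.ifft(symbols) * np.sqrt(N)                        # OFDM time signal

thr = 1.5 * np.sqrt(np.mean(np.abs(x) ** 2))                 # clipping level
mag = np.abs(x)
x_clip = np.where(mag > thr, thr * x / mag, x)               # amplitude clip
c = x_clip - x                                               # sparse clipping signal

print(round(papr_db(x), 1), round(papr_db(x_clip), 1))       # PAPR drops
```

    Only the samples that exceeded the threshold are nonzero in `c`, which is the sparsity the recovery algorithm exploits.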

  2. Portable inference engine: An extended CLIPS for real-time production systems

    Science.gov (United States)

    Le, Thach; Homeier, Peter

    1988-01-01

    The present C-Language Integrated Production System (CLIPS) architecture has not been optimized to deal with the constraints of real-time production systems. Matching in CLIPS is based on the Rete Net algorithm, whose assumption of working-memory stability might fail to be satisfied in a system subject to real-time dataflow. Further, the CLIPS forward-chaining control mechanism, with a predefined conflict resolution strategy, may not effectively focus the system's attention on situation-dependent current priorities, or appropriately address the different kinds of knowledge which might appear in a given application. The Portable Inference Engine (PIE) is a production system architecture based on CLIPS which attempts to create a more general tool while addressing the problems of real-time expert systems. Features of the PIE design include a modular knowledge base, a modified Rete Net algorithm, a bi-directional control strategy, and multiple user-defined conflict resolution strategies. Problems associated with real-time applications are analyzed, and an explanation is given of how the PIE architecture addresses these problems.

  3. Part Two: Learning Science Through Digital Video: Student Views on Watching and Creating Videos

    Science.gov (United States)

    Wade, P.; Courtney, A. R.

    2014-12-01

    The use of digital video for science education has become common with the wide availability of video imagery. This study continues research into aspects of using digital video as a primary teaching tool to enhance student learning in undergraduate science courses. Two survey instruments were administered to undergraduate non-science majors. Survey One focused on: a) What science is being learned from watching science videos, such as a "YouTube" clip of a volcanic eruption or an informational video on geologic time? and b) What are student preferences with regard to their learning (e.g., using video versus traditional modes of delivery)? Survey Two addressed students' perspectives on the storytelling aspect of the video with respect to: a) sustaining interest, b) providing science information, c) style of video, and d) quality of the video. Undergraduate non-science majors were the primary focus group in this study. Students were asked to view video segments and respond to a survey focused on what they learned from the segments. The storytelling aspect of each video was also addressed by students. Students watched 15-20 shorter (3-15 minute) science videos created within the last four years. Initial results of this research support the conclusion that shorter video segments were preferred and that the storytelling quality of each video was related to student learning.

  4. Public online information about tinnitus: A cross-sectional study of YouTube videos.

    Science.gov (United States)

    Basch, Corey H; Yin, Jingjing; Kollia, Betty; Adedokun, Adeyemi; Trusty, Stephanie; Yeboah, Felicia; Fung, Isaac Chun-Hai

    2018-01-01

    To examine the information about tinnitus contained in different video sources on YouTube. The 100 most widely viewed tinnitus videos were manually coded. Firstly, we identified the sources of upload: consumer, professional, television-based clip, and internet-based clip. Secondly, the videos were analyzed to ascertain what pertinent information they contained from a current National Institute on Deafness and Other Communication Disorders fact sheet. Of the videos, 42 were consumer-generated, 33 from media, and 25 from professionals. Collectively, the 100 videos were viewed almost 9 million times. The odds of mentioning "objective tinnitus" in professional videos were 9.58 times those from media sources [odds ratio (OR) = 9.58; 95% confidence interval (CI): 1.94, 47.42; P = 0.01], whereas these odds in consumer videos were 51% of media-generated videos (OR = 0.51; 95% CI: 0.20, 1.29; P = 0.16). The odds that the purpose of a video was to sell a product or service were nearly the same for both consumer and professional videos. Consumer videos were found to be 4.33 times as likely to carry a theme about an individual's own experience with tinnitus (OR = 4.33; 95% CI: 1.62, 11.63; P = 0.004) as media videos. Of the top 100 viewed videos on tinnitus, most were uploaded by consumers, sharing individuals' experiences. Actions are needed to make scientific medical information more prominently available and accessible on YouTube and other social media.
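The odds ratios and confidence intervals reported above follow from a standard 2x2 table analysis. A minimal sketch of that computation, using hypothetical cell counts (the study's underlying table is not given in the abstract):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table:
         a = exposed cases,    b = exposed non-cases
         c = unexposed cases,  d = unexposed non-cases
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts for illustration only
print(odds_ratio_ci(10, 5, 4, 19))
```

The 95% CI uses the usual normal approximation on the log odds ratio; a P-value can be derived from the same standard error.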

  5. Mounting clips for panel installation

    Science.gov (United States)

    Cavieres, Andres; Al-Haddad, Tristan; Goodman, Joseph; Valdes, Francisco

    2017-02-14

An exemplary mounting clip for removably attaching panels to a supporting structure comprises a base, spring locking clips, a lateral flange, a lever flange, and a spring bonding flange. The spring locking clips extend upwardly from the base. The lateral flange extends upwardly from a first side of the base. The lateral flange comprises a slot having an opening configured to receive at least a portion of one of the one or more panels. The lever flange extends outwardly from the lateral flange. The spring bonding flange extends downwardly from the lever flange. At least a portion of the spring bonding flange comprises a serrated edge for gouging at least a portion of the one or more panels when the one or more panels are attached to the mounting clip to electrically and mechanically couple the one or more panels to the mounting clip.

  6. CLIPS - C LANGUAGE INTEGRATED PRODUCTION SYSTEM (IBM PC VERSION WITH CLIPSITS)

    Science.gov (United States)

Riley, G.

    1994-01-01

The C Language Integrated Production System, CLIPS, is a shell for developing expert systems. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. The primary design goals for CLIPS are portability, efficiency, and functionality. For these reasons, the program is written in C. CLIPS meets or outperforms most micro- and minicomputer-based artificial intelligence tools. CLIPS is a forward-chaining rule-based language. The program contains an inference engine and a language syntax that provide a framework for the construction of an expert system. It also includes tools for debugging an application. CLIPS is based on the Rete algorithm, which enables very efficient pattern matching. The collection of conditions and actions to be taken if the conditions are met is constructed into a rule network. As facts are asserted either prior to or during a session, CLIPS pattern-matches them against the rule network. Wildcards and variables are supported for both single and multiple fields. CLIPS syntax allows the inclusion of externally defined functions (outside functions which are written in a language other than CLIPS). CLIPS itself can be embedded in a program such that the expert system is available as a simple subroutine call. Advanced features found in CLIPS version 4.3 include an integrated microEMACS editor, the ability to generate C source code from a CLIPS rule base to produce a dedicated executable, binary load and save capabilities for CLIPS rule bases, and the utility program CRSV (Cross-Reference, Style, and Verification) designed to facilitate the development and maintenance of large rule bases. Five machine versions are available. Each machine version includes the source and the executable for that machine. The UNIX version includes the source and binaries for IBM RS/6000, Sun3 series, and Sun4 series computers. The UNIX, DEC VAX, and DEC RISC Workstation versions are line oriented. 
The PC version and the Macintosh
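As a rough illustration of the forward-chaining cycle described above, here is a minimal sketch in Python. It naively re-scans every rule on each pass; the Rete network in CLIPS exists precisely to avoid this repeated re-matching, so this shows only the semantics, not the algorithm's efficiency:

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions all hold until no new facts appear.

    facts: set of hashable facts
    rules: list of (conditions, conclusion) pairs, where conditions is a set of facts
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # "assert" the newly derived fact
                changed = True
    return facts

# toy rule base for illustration
rules = [
    ({"duck"}, "quacks"),
    ({"quacks", "feathers"}, "bird"),
]
print(forward_chain({"duck", "feathers"}, rules))
```

The second rule fires only after the first has asserted "quacks", which is the chaining behavior the abstract describes.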

  7. Adventure Racing and Organizational Behavior: Using Eco Challenge Video Clips to Stimulate Learning

    Science.gov (United States)

    Kenworthy-U'Ren, Amy; Erickson, Anthony

    2009-01-01

    In this article, the Eco Challenge race video is presented as a teaching tool for facilitating theory-based discussion and application in organizational behavior (OB) courses. Before discussing the intricacies of the video series itself, the authors present a pedagogically based rationale for using reality TV-based video segments in a classroom…

  8. A defect in the CLIP1 gene (CLIP-170) can cause autosomal recessive intellectual disability.

    Science.gov (United States)

    Larti, Farzaneh; Kahrizi, Kimia; Musante, Luciana; Hu, Hao; Papari, Elahe; Fattahi, Zohreh; Bazazzadegan, Niloofar; Liu, Zhe; Banan, Mehdi; Garshasbi, Masoud; Wienker, Thomas F; Ropers, H Hilger; Galjart, Niels; Najmabadi, Hossein

    2015-03-01

    In the context of a comprehensive research project, investigating novel autosomal recessive intellectual disability (ARID) genes, linkage analysis based on autozygosity mapping helped identify an intellectual disability locus on Chr.12q24, in an Iranian family (LOD score = 3.7). Next-generation sequencing (NGS) following exon enrichment in this novel interval, detected a nonsense mutation (p.Q1010*) in the CLIP1 gene. CLIP1 encodes a member of microtubule (MT) plus-end tracking proteins, which specifically associates with the ends of growing MTs. These proteins regulate MT dynamic behavior and are important for MT-mediated transport over the length of axons and dendrites. As such, CLIP1 may have a role in neuronal development. We studied lymphoblastoid and skin fibroblast cell lines established from healthy and affected patients. RT-PCR and western blot analyses showed the absence of CLIP1 transcript and protein in lymphoblastoid cells derived from affected patients. Furthermore, immunofluorescence analyses showed MT plus-end staining only in fibroblasts containing the wild-type (and not the mutant) CLIP1 protein. Collectively, our data suggest that defects in CLIP1 may lead to ARID.

  9. Real-time UAV trajectory generation using feature points matching between video image sequences

    Science.gov (United States)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectory using a video image matching system based on SURF (Speeded-Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
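The inlier/outlier split described above can be illustrated with a stripped-down RANSAC loop. This sketch uses the simplest possible motion model (a pure 2D translation) rather than the epipolar geometry the paper estimates, and the matched point sets are synthetic:

```python
import numpy as np

def ransac_translation(pts_a, pts_b, iters=200, thresh=2.0, seed=0):
    """RANSAC-style inlier selection for matched 2D points, assuming the
    motion between frames is a pure translation (an illustrative
    simplification of the paper's relative-pose estimation)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts_a), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts_a))           # minimal sample: one match
        t = pts_b[i] - pts_a[i]                # hypothesised translation
        err = np.linalg.norm(pts_a + t - pts_b, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine the translation from the best consensus set
    t = (pts_b[best_inliers] - pts_a[best_inliers]).mean(axis=0)
    return t, best_inliers

# synthetic matches: true shift (5, -3) plus one gross mismatch
a = np.array([[0, 0], [1, 2], [3, 1], [4, 4], [2, 5]], float)
b = a + np.array([5.0, -3.0])
b[4] = [50.0, 50.0]                            # outlier (bad feature match)
t, inl = ransac_translation(a, b)
```

The mismatched pair is rejected as an outlier, and the translation is refined from the consensus set only, which is the role Preemptive RANSAC plays before pose estimation in the paper.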

  10. Televisión, estética y video clip: la música popular hecha imagen

    Directory of Open Access Journals (Sweden)

    Mauricio Vera-Sánchez

    2009-01-01

Full Text Available For a long time, says Arlindo Machado, communication theorists accustomed us to viewing television as a popular, mass medium (in the worst sense of the term), drawing attention away from televisual manifestations that are interesting, singular and significant for defining the status of this medium in the contemporary cultural landscape. And it is precisely in the privileged site of the contemporary expression of light, that is, the television screen, where the objects of reflection of the present research are located: video clips of popular music. The study examines their aesthetic dimension through the relationship of their constitutive elements: the singer, with all his gestural paraphernalia, his costumes and the settings in which he moves within the narrated story, presenting a specific way of occupying space, or deco-gramma; the relationship of the technological resources that establish a particular mode of recording, that is, a techno-gramma; and an image produced through the improvisation of a genre that, in the Eje Cafetero (Coffee Axis) region of Colombia, although produced and distributed in an almost industrial manner, is generating a dynamic of marketing and exposure for singers never seen before. It is, then, an essay on the popular music that circulates in the most popular medium of all: television.

  11. Preserving Sharp Edges with Volume Clipping

    NARCIS (Netherlands)

    Termeer, M.A.; Oliván Bescós, J.; Telea, A.C.

    2006-01-01

    Volume clipping is a useful aid for exploring volumetric datasets. To maximize the effectiveness of this technique, the clipping geometry should be flexibly specified and the resulting images should not contain artifacts due to the clipping techniques. We present an improvement to an existing

  12. L'uso del doppiaggio e del sottotitolaggio nell'insegnamento della L2: Il caso della piattaforma ClipFlair

    Directory of Open Access Journals (Sweden)

    Lupe Romero

    2016-01-01

Full Text Available Abstract – The purpose of this paper is to present the ClipFlair project, a web platform for foreign language learning (FLL) through revoicing and captioning of clips. Using audiovisual material in the language classroom is a common resource for teachers, since it introduces variety, provides exposure to nonverbal cultural elements and, most importantly, presents linguistic and cultural aspects of communication in their context. However, teachers using this resource face the difficulty of finding active tasks that will engage learners and discourage passive viewing. ClipFlair proposes working with AV material productively while also motivating learners by getting them to revoice or caption a clip. Revoicing is a term used to refer to (re)recording voice onto a clip, as in dubbing, free commentary, audio description and karaoke singing. The term captioning refers to adding written text to a clip, such as standard subtitles, annotations and intertitles. Clips can be short video or audio files, including documentaries, film scenes, news pieces, animations and songs. ClipFlair develops materials that enable foreign language learners to practice all four standard CEFR skills: writing, speaking, listening and reading. Within the project's scope, more than 350 ready-made activities involving captioning and/or revoicing of clips have been created. These activities cover more than 16 languages, including English, Spanish and Italian, but focus is placed on less widely taught languages, namely Estonian, Greek, Romanian and Polish, as well as minority languages, i.e. Basque, Catalan and Irish. Non-European languages, namely Arabic, Chinese, Japanese, Russian and Ukrainian, are also included. The platform has three different areas: the Gallery offers the materials and the activities; the Studio offers captioning and revoicing tools for creating activities or for practicing and learning languages with existing ones; the Social Network area

  13. Application of discriminative models for interactive query refinement in video retrieval

    Science.gov (United States)

    Srivastava, Amit; Khanwalkar, Saurabh; Kumar, Anoop

    2013-12-01

The ability to quickly search large volumes of video for specific actions or events can provide a dramatic new capability to intelligence agencies. Example-based queries from video are a form of content-based information retrieval (CBIR), where the objective is to retrieve clips from a video corpus, or stream, using a representative query sample to find "more like this." Often, the accuracy of video retrieval is largely limited by the gap between the available video descriptors and the underlying query concept, and such exemplar queries return many irrelevant results along with relevant ones. In this paper, we present an Interactive Query Refinement (IQR) system which acts as a powerful tool to leverage human feedback and allows intelligence analysts to iteratively refine search queries for improved precision in the retrieved results. In our approach to IQR, we leverage discriminative models that operate on high-dimensional features derived from low-level video descriptors in an iterative framework. Our IQR model solicits relevance feedback on examples selected from the region of uncertainty and updates the discriminating boundary to produce a relevance-ranked results list. We achieved a 358% relative improvement in Mean Average Precision (MAP) over the initial retrieval list at a rank cutoff of 100 over 4 iterations. We compare our discriminative IQR model approach to a naïve IQR and show our model-based approach yields a 49% relative improvement over the no-model naïve system.
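One round of the relevance-feedback loop described above can be sketched as follows. The discriminative model here is a tiny logistic regression, and the 2D "descriptors" are made up; both stand in for the paper's unspecified model and high-dimensional video features:

```python
import numpy as np

def train_logreg(X, y, lr=0.5, epochs=200):
    """Minimal logistic-regression trainer (gradient ascent, no bias term),
    standing in for the paper's discriminative model."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w

def iqr_round(X, labeled_idx, labels):
    """One IQR iteration: fit on the feedback so far, pick the most uncertain
    clip to solicit feedback on next, and rank all clips by relevance score."""
    w = train_logreg(X[labeled_idx], np.array(labels, float))
    scores = 1 / (1 + np.exp(-X @ w))
    unlabeled = [i for i in range(len(X)) if i not in labeled_idx]
    # "region of uncertainty": score closest to the decision boundary
    query = min(unlabeled, key=lambda i: abs(scores[i] - 0.5))
    ranking = np.argsort(-scores)              # relevance-ranked results list
    return ranking, query

# toy 2-D "video descriptors": relevant clips cluster near (1, 1)
X = np.array([[1.0, 1.0], [0.9, 1.1], [0.0, 0.1], [0.1, 0.0], [0.5, 0.6]])
ranking, ask_next = iqr_round(X, [0, 2], [1, 0])   # feedback: clip 0 relevant, clip 2 not
```

Each analyst answer about `ask_next` would extend `labeled_idx`/`labels` for the next iteration, tightening the decision boundary round by round.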

  14. Standardized access, display, and retrieval of medical video

    Science.gov (United States)

    Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.

    1999-05-01

The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation of a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for a second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open-surgery camera, synthetic video, and monoscopic endoscopes. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video-cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery. Therefore, DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.

  15. Using Television Commercials as Video Illustrations: Examples from a Money and Banking Economics Class

    Science.gov (United States)

    Bowes, David R.

    2014-01-01

    Video clips are an excellent way to enhance lecture material. Television commercials are a source of video examples that should not be overlooked and they are readily available on the internet. They are familiar, short, self-contained, constantly being created, and often funny. This paper describes several examples of television commercials that…

  16. Objective video quality measure for application to tele-echocardiography.

    Science.gov (United States)

    Moore, Peter Thomas; O'Hare, Neil; Walsh, Kevin P; Ward, Neil; Conlon, Niamh

    2008-08-01

Real-time tele-echocardiography is widely used to remotely diagnose or exclude congenital heart defects. Cost-effective technical implementation is realised using low-bandwidth transmission systems and lossy compression (videoconferencing) schemes. In our study, DICOM video sequences were converted to common multimedia formats, which were then compressed using three lossy compression algorithms. We then applied a digital (multimedia) video quality metric (VQM) to objectively determine a value for the degradation due to compression. Three levels of compression were simulated by varying system bandwidth and compared to a subjective assessment of video clip quality by three paediatric cardiologists, each with more than 5 years of experience.
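For readers unfamiliar with objective full-reference quality metrics, the simplest example is PSNR. This is not the VQM used in the study (NTIA's VQM models perceptual degradation far more closely); it only sketches the reference-versus-degraded comparison such metrics perform:

```python
import numpy as np

def psnr(ref, deg, peak=255.0):
    """Peak signal-to-noise ratio (dB) of a degraded frame against its
    reference; higher means less degradation. Illustrative only, not the
    perceptual VQM applied in the study."""
    mse = np.mean((ref.astype(float) - deg.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# synthetic 8-bit frame and a noisy (simulated lossy) version of it
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (64, 64)).astype(np.uint8)
noisy = np.clip(frame + rng.normal(0, 5, frame.shape), 0, 255).astype(np.uint8)
```

A study like the one above would compute such a score per compression level and correlate it with the cardiologists' subjective ratings.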

  17. Mobile-Based Video Learning Outcomes in Clinical Nursing Skill Education: A Randomized Controlled Trial.

    Science.gov (United States)

    Lee, Nam-Ju; Chae, Sun-Mi; Kim, Haejin; Lee, Ji-Hye; Min, Hyojin Jennifer; Park, Da-Eun

    2016-01-01

    Mobile devices are a regular part of daily life among the younger generations. Thus, now is the time to apply mobile device use to nursing education. The purpose of this study was to identify the effects of a mobile-based video clip on learning motivation, competence, and class satisfaction in nursing students using a randomized controlled trial with a pretest and posttest design. A total of 71 nursing students participated in this study: 36 in the intervention group and 35 in the control group. A video clip of how to perform a urinary catheterization was developed, and the intervention group was able to download it to their own mobile devices for unlimited viewing throughout 1 week. All of the students participated in a practice laboratory to learn urinary catheterization and were blindly tested for their performance skills after participation in the laboratory. The intervention group showed significantly higher levels of learning motivation and class satisfaction than did the control. Of the fundamental nursing competencies, the intervention group was more confident in practicing catheterization than their counterparts. Our findings suggest that video clips using mobile devices are useful tools that educate student nurses on relevant clinical skills and improve learning outcomes.

  18. 21 CFR 882.4215 - Clip rack.

    Science.gov (United States)

    2010-04-01

21 CFR § 882.4215 (2010-04-01), Food and Drug Administration, Department of Health and Human Services: Medical Devices, Neurological Devices, Neurological Surgical Devices. (a) Identification. A clip rack is a...

  19. How Is Marijuana Vaping Portrayed on YouTube? Content, Features, Popularity and Retransmission of Vaping Marijuana YouTube Videos.

    Science.gov (United States)

    Yang, Qinghua; Sangalang, Angeline; Rooney, Molly; Maloney, Erin; Emery, Sherry; Cappella, Joseph N

    2018-01-01

    The purpose of the study is to investigate how vaping marijuana, a novel but emerging risky health behavior, is portrayed on YouTube, and how the content and features of these YouTube videos influence their popularity and retransmission. A content analysis of vaping marijuana YouTube videos published between July 2014 to June 2015 (n = 214) was conducted. Video genre, valence, promotional and warning arguments, emotional appeals, message sensation value, presence of misinformation and misleading information, and user-generated statistics, including number of views, comments, shares, likes and dislikes, were coded. The results showed that these videos were predominantly pro-marijuana-vaping, with the most frequent videos being user-sharing. The genre and message features influenced the popularity, evaluations, and retransmission of vaping marijuana YouTube videos. Theoretical and practical implications are discussed.

  20. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (UNIX VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. 
CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with

  1. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (MACINTOSH VERSION)

    Science.gov (United States)

    Riley, G.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. 
CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with

  2. Intelligent tutoring using HyperCLIPS

    Science.gov (United States)

    Hill, Randall W., Jr.; Pickering, Brad

    1990-01-01

    HyperCard is a popular hypertext-like system used for building user interfaces to databases and other applications, and CLIPS is a highly portable government-owned expert system shell. We developed HyperCLIPS in order to fill a gap in the U.S. Army's computer-based instruction tool set; it was conceived as a development environment for building adaptive practical exercises for subject-matter problem-solving, though it is not limited to this approach to tutoring. Once HyperCLIPS was developed, we set out to implement a practical exercise prototype using HyperCLIPS in order to demonstrate the following concepts: learning can be facilitated by doing; student performance evaluation can be done in real-time; and the problems in a practical exercise can be adapted to the individual student's knowledge.

  3. Learning Science Through Digital Video: Views on Watching and Creating Videos

    Science.gov (United States)

    Wade, P.; Courtney, A. R.

    2013-12-01

    In science, the use of digital video to document phenomena, experiments and demonstrations has rapidly increased during the last decade. The use of digital video for science education also has become common with the wide availability of video over the internet. However, as with using any technology as a teaching tool, some questions should be asked: What science is being learned from watching a YouTube clip of a volcanic eruption or an informational video on hydroelectric power generation? What are student preferences (e.g. multimedia versus traditional mode of delivery) with regard to their learning? This study describes 1) the efficacy of watching digital video in the science classroom to enhance student learning, 2) student preferences of instruction with regard to multimedia versus traditional delivery modes, and 3) the use of creating digital video as a project-based educational strategy to enhance learning. Undergraduate non-science majors were the primary focus group in this study. Students were asked to view video segments and respond to a survey focused on what they learned from the segments. Additionally, they were asked about their preference for instruction (e.g. text only, lecture-PowerPoint style delivery, or multimedia-video). A majority of students indicated that well-made video, accompanied with scientific explanations or demonstration of the phenomena was most useful and preferred over text-only or lecture instruction for learning scientific information while video-only delivery with little or no explanation was deemed not very useful in learning science concepts. The use of student generated video projects as learning vehicles for the creators and other class members as viewers also will be discussed.

  4. Energy saving approaches for video streaming on smartphone based on QoE modeling

    DEFF Research Database (Denmark)

    Ballesteros, Luis Guillermo Martinez; Ickin, Selim; Fiedler, Markus

    2016-01-01

In this paper, we study the influence of video stalling on QoE. We provide QoE models that are obtained in realistic scenarios on the smartphone, and provide energy-saving approaches for the smartphone by leveraging the proposed QoE models in relation to energy. Results show that approximately 5 J is saved in a 3-minute video clip with an acceptable Mean Opinion Score (MOS) level when video frames are skipped. If the video frames are not skipped, then it is suggested to avoid freezes during a video stream, as freezes greatly increase the energy waste on smartphones.

  5. Viewer Discussion is Advised. Video Clubs Focus Teacher Discussion on Student Learning

    Directory of Open Access Journals (Sweden)

    Elizabeth A. van Es

    2014-06-01

Full Text Available Video is being used widely in professional development. Yet, little is known about how to design video-based learning environments that are productive for teacher learning. One promising model is a video club (Sherin, 2000). Video clubs bring teachers together to view and analyze video segments from one another's classrooms. The idea is that by watching and discussing video segments focused on student thinking, teachers will learn practices for identifying and analyzing noteworthy student thinking during instruction and can use what they learn to inform their instructional decisions. This paper addresses issues to consider when setting up a video club for teacher education, such as defining goals for using video, establishing norms for viewing and discussing one another's teaching, selecting clips for analysis, and facilitating teacher discussions.

  6. Video Dubbing Projects in the Foreign Language Curriculum

    Science.gov (United States)

    Burston, Jack

    2005-01-01

    The dubbing of muted video clips offers an excellent opportunity to develop the skills of foreign language learners at all linguistic levels. In addition to its motivational value, soundtrack dubbing provides a rich source of activities in all language skill areas: listening, reading, writing, speaking. With advanced students, it also lends itself…

  7. Person Identification from Video with Multiple Biometric Cues: Benchmarks for Human and Machine Performance

    National Research Council Canada - National Science Library

    O'Toole, Alice

    2003-01-01

    .... Experiments have been completed comparing the effects of several types of facial motion on face recognition, the effects of face familiarity on recognition from video clips taken at a distance...

  8. Use of a surgical rehearsal platform and improvement in aneurysm clipping measures: results of a prospective, randomized trial.

    Science.gov (United States)

    Chugh, A Jessey; Pace, Jonathan R; Singer, Justin; Tatsuoka, Curtis; Hoffer, Alan; Selman, Warren R; Bambakidis, Nicholas C

    2017-03-01

    OBJECTIVE The field of neurosurgery is constantly undergoing improvements and advances, both in technique and technology. Cerebrovascular neurosurgery is no exception, with endovascular treatments changing the treatment paradigm. Clipping of aneurysms is still necessary, however, and advances are still being made to improve patient outcomes within the microsurgical treatment of aneurysms. Surgical rehearsal platforms are surgical simulators that offer the opportunity to rehearse a procedure prior to entering the operative suite. This study is designed to determine whether use of a surgical rehearsal platform in aneurysm surgery is helpful in decreasing aneurysm dissection time and clip manipulation of the aneurysm. METHODS The authors conducted a blinded, prospective, randomized study comparing key effort and time variables in aneurysm clip ligation surgery with and without preoperative use of the SuRgical Planner (SRP) surgical rehearsal platform. Initially, 40 patients were randomly assigned to either of two groups: one in which surgery was performed after use of the SRP (SRP group) and one in which surgery was performed without use of the SRP (control group). All operations were videotaped. After exclusion of 6 patients from the SRP group and 9 from the control group, a total of 25 surgical cases were analyzed by a reviewer blinded to group assignment. The videos were analyzed for total microsurgical time, number of clips used, and number of clip placement attempts. Means and standard deviations (SDs) were calculated and compared between groups. RESULTS The mean (± SD) amount of operative time per clip used was 920 ± 770 seconds in the SRP group and 1294 ± 678 seconds in the control group (p = 0.05). In addition, the mean values for the number of clip attempts, total operative time, ratio of clip attempts to clips used, and time per clip attempt were all lower in the SRP group, although the between-group differences were not statistically significant

  9. Cortical Response Similarities Predict which Audiovisual Clips Individuals Viewed, but Are Unrelated to Clip Preference.

    Directory of Open Access Journals (Sweden)

    David A Bridwell

    Full Text Available Cortical responses to complex natural stimuli can be isolated by examining the relationship between neural measures obtained while multiple individuals view the same stimuli. These inter-subject correlations (ISCs) emerge from similarities in individuals' cortical responses to the shared audiovisual inputs, which may be related to their emergent cognitive and perceptual experience. Within the present study, our goal is to examine the utility of using ISCs for predicting which audiovisual clips individuals viewed, and to examine the relationship between neural responses to natural stimuli and subjective reports. The ability to predict which clips individuals viewed depends on the relationship of the EEG response across subjects and the manner in which this information is aggregated. We conceived of three approaches for aggregating responses, i.e. three assignment algorithms, which we evaluated in Experiment 1A. The aggregate correlations algorithm generated the highest assignment accuracy (70.83%; chance = 33.33%) and was selected as the assignment algorithm for the larger sample of individuals and clips within Experiment 1B. The overall assignment accuracy was 33.46% within Experiment 1B (chance = 6.25%), with accuracies ranging from 52.9% (Silver Linings Playbook) to 11.75% (Seinfeld) within individual clips. ISCs were significantly greater than zero for 15 out of 16 clips, and fluctuations within the delta frequency band (i.e. 0-4 Hz) primarily contributed to response similarities across subjects. Interestingly, there was insufficient evidence to indicate that individuals with greater similarities in clip preference demonstrate greater similarities in cortical responses, suggesting a lack of association between ISC and clip preference. Overall these results demonstrate the utility of using ISCs for prediction, and further characterize the relationship between ISC magnitudes and subjective reports.
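    The leave-one-out, correlation-based assignment described above can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' exact algorithm: the shared-signal model, array shapes, and variable names are all assumptions.

```python
import numpy as np

def assign_clip(test_resp, templates):
    """Pick the clip whose group-average response ('template') the
    held-out response correlates with most strongly."""
    corrs = [np.corrcoef(test_resp, t)[0, 1] for t in templates]
    return int(np.argmax(corrs))

rng = np.random.default_rng(0)
n_subj, n_clips, n_t = 6, 4, 500
shared = rng.standard_normal((n_clips, n_t))            # stimulus-driven signal per clip
noise = rng.standard_normal((n_subj, n_clips, n_t))     # idiosyncratic activity
resp = shared[np.newaxis] + noise                       # subject x clip x time

correct = 0
for s in range(n_subj):                                 # leave one subject out
    templates = np.delete(resp, s, axis=0).mean(axis=0) # clip x time
    for c in range(n_clips):
        correct += assign_clip(resp[s, c], templates) == c
accuracy = correct / (n_subj * n_clips)
```

    Because every subject's response contains the same stimulus-driven component, the matched clip's template correlates far above the unmatched ones, so assignment accuracy here is well above chance (1/4).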

  10. Cortical Response Similarities Predict which Audiovisual Clips Individuals Viewed, but Are Unrelated to Clip Preference.

    Science.gov (United States)

    Bridwell, David A; Roth, Cullen; Gupta, Cota Navin; Calhoun, Vince D

    2015-01-01

    Cortical responses to complex natural stimuli can be isolated by examining the relationship between neural measures obtained while multiple individuals view the same stimuli. These inter-subject correlations (ISCs) emerge from similarities in individuals' cortical responses to the shared audiovisual inputs, which may be related to their emergent cognitive and perceptual experience. Within the present study, our goal is to examine the utility of using ISCs for predicting which audiovisual clips individuals viewed, and to examine the relationship between neural responses to natural stimuli and subjective reports. The ability to predict which clips individuals viewed depends on the relationship of the EEG response across subjects and the manner in which this information is aggregated. We conceived of three approaches for aggregating responses, i.e. three assignment algorithms, which we evaluated in Experiment 1A. The aggregate correlations algorithm generated the highest assignment accuracy (70.83%; chance = 33.33%) and was selected as the assignment algorithm for the larger sample of individuals and clips within Experiment 1B. The overall assignment accuracy was 33.46% within Experiment 1B (chance = 6.25%), with accuracies ranging from 52.9% (Silver Linings Playbook) to 11.75% (Seinfeld) within individual clips. ISCs were significantly greater than zero for 15 out of 16 clips, and fluctuations within the delta frequency band (i.e. 0-4 Hz) primarily contributed to response similarities across subjects. Interestingly, there was insufficient evidence to indicate that individuals with greater similarities in clip preference demonstrate greater similarities in cortical responses, suggesting a lack of association between ISC and clip preference. Overall these results demonstrate the utility of using ISCs for prediction, and further characterize the relationship between ISC magnitudes and subjective reports.

  11. A novel vascular clip design for the reliable induction of 2-kidney, 1-clip hypertension in the rat

    OpenAIRE

    Chelko, Stephen P.; Schmiedt, Chad W.; Lewis, Tristan H.; Lewis, Stephen J.; Robertson, Tom P.

    2011-01-01

    The 2-kidney, 1-clip (2K1C) model has provided many insights into the pathogenesis of renovascular hypertension. However, studies using the 2K1C model often report low success rates of hypertension, with typical success rates of just 40–60%. We hypothesized that these low success rates are due to fundamental design flaws in the clips traditionally used in 2K1C models. Specifically, the gap widths of traditional silver clips may not be maintained during investigator handling and these clips ma...

  12. CLIP Tool Kit (CTK): a flexible and robust pipeline to analyze CLIP sequencing data.

    Science.gov (United States)

    Shah, Ankeeta; Qian, Yingzhi; Weyn-Vanhentenryck, Sebastien M; Zhang, Chaolin

    2017-02-15

    UV cross-linking and immunoprecipitation (CLIP), followed by high-throughput sequencing, is a powerful biochemical assay that maps in vivo protein-RNA interactions on a genome-wide scale. The CLIP Tool Kit (CTK) aims at providing a set of tools for flexible, streamlined and comprehensive CLIP data analysis. This software package extends the scope of our original CIMS package. The software is implemented in Perl. The source code and detailed documentation are available at http://zhanglab.c2b2.columbia.edu/index.php/CTK . cz2294@columbia.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  13. Attitudes of older adults toward shooter video games: An initial study to select an acceptable game for training visual processing.

    Science.gov (United States)

    McKay, Sandra M; Maki, Brian E

    2010-01-01

    A computer-based 'Useful Field of View' (UFOV) training program has been shown to be effective in improving visual processing in older adults. Studies of young adults have shown that playing video games can have similar benefits; however, these studies involved realistic and violent 'first-person shooter' (FPS) games. The willingness of older adults to play such games has not been established. OBJECTIVES: To determine the degree to which older adults would accept playing a realistic, violent FPS game, compared to video games not involving realistic depiction of violence. METHODS: Sixteen older adults (ages 64-77) viewed and rated video-clip demonstrations of the UFOV program and three video-game genres (realistic-FPS, cartoon-FPS, fixed-shooter), and were then given an opportunity to try them out (30 minutes per game) and rate various features. RESULTS: The results supported the hypothesis that the participants would be less willing to play the realistic-FPS game than the less violent alternatives. Based on the video-clip demonstrations, 10 of 16 participants indicated they would be unwilling to try out the realistic-FPS game. Of the six who were willing, three did not enjoy the experience and were not interested in playing again. In contrast, all 12 subjects who were willing to try the cartoon-FPS game reported that they enjoyed it and would be willing to play again. A high proportion also tried and enjoyed the UFOV training (15/16) and the fixed-shooter game (12/15). DISCUSSION: A realistic, violent FPS video game is unlikely to be an appropriate choice for older adults. Cartoon-FPS and fixed-shooter games are more viable options. Although most subjects also enjoyed UFOV training, a video-game approach has a number of potential advantages (for instance, 'addictive' properties, low cost, self-administration at home). We therefore conclude that non-violent cartoon-FPS and fixed-shooter video games warrant further investigation as an alternative to the UFOV program.

  14. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (IBM PC VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. 
CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with
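    The rule-based paradigm described above — heuristics that fire when their conditions match the current facts and may assert new facts — can be illustrated with a minimal forward-chaining sketch. Plain Python stands in here for CLIPS syntax, and the fact and rule names are invented for illustration.

```python
def run_rules(facts, rules):
    """Fire every rule whose condition holds over the fact set, asserting
    its conclusion, until no new facts appear (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, new_fact in rules:
            if condition(facts) and new_fact not in facts:
                facts.add(new_fact)   # analogous to (assert ...) in CLIPS
                changed = True
    return facts

# Each rule: (condition over facts, fact to assert) -- a "rule of thumb"
rules = [
    (lambda f: "engine-wont-start" in f and "battery-dead" in f, "replace-battery"),
    (lambda f: "replace-battery" in f, "repair-scheduled"),
]
result = run_rules({"engine-wont-start", "battery-dead"}, rules)
```

    A real CLIPS program would express the same knowledge as `defrule` constructs, with the inference engine handling matching and conflict resolution.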

  15. CLIPS based decision support system for water distribution networks

    Directory of Open Access Journals (Sweden)

    K. Sandeep

    2011-10-01

    Full Text Available The difficulty in knowledge representation of a water distribution network (WDN) problem has contributed to the limited use of artificial intelligence (AI) based expert systems (ES) in the management of these networks. This paper presents a design of a Decision Support System (DSS) that facilitates "on-demand" knowledge generation by utilizing results of simulation runs of a suitably calibrated and validated hydraulic model of an existing aged WDN corresponding to emergent or even hypothetical but likely scenarios. The DSS augments the capability of a conventional expert system by integrating the hydraulic modelling features with the heuristics-based knowledge of experts under a common, rules-based expert shell named CLIPS (C Language Integrated Production System). In contrast to previous ES, the knowledge base of the DSS has been designed to be dynamic by superimposing CLIPS on Structured Query Language (SQL). The proposed ES has an inbuilt calibration module that enables calibration of an existing (aged) WDN for the unknown, and unobservable, Hazen-Williams C-values. In addition, the daily run and simulation modules of the proposed ES further enable the CLIPS inference engine to evaluate the network performance for any emergent or suggested test scenarios. An additional feature of the proposed design is that the DSS integrates computational platforms such as MATLAB, an open source Geographical Information System (GIS), and a relational database management system (RDBMS) working under the umbrella of a Microsoft Visual Studio based common user interface. The paper also discusses implementation of the proposed framework on a case study and clearly demonstrates the utility of the application as an able aide for effective management of the study network.

  16. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
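    A sketch of the clipped-LMS idea, assuming the common form in which the input vector in the weight update is replaced by its three-level quantized version (the exact update and threshold handling in the paper may differ; the system, step size, and threshold below are illustrative):

```python
import numpy as np

def quantize3(x, t):
    """Three-level quantizer: -1, 0, or +1, with a dead zone |x| <= t."""
    return np.sign(x) * (np.abs(x) > t)

def mclms(x, d, n_taps, mu, t):
    """Clipped-LMS system identification: replace the input vector in the
    weight update by its three-level quantized version."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ...]
        e = d[n] - w @ u                    # a-priori output error
        w += mu * e * quantize3(u, t)       # clipped weight update
    return w

rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.2])              # unknown FIR system
x = rng.standard_normal(5000)               # white Gaussian input
d = np.convolve(x, h)[:len(x)]              # noiseless desired signal
w = mclms(x, d, len(h), mu=0.01, t=0.5)     # w should approach h
```

    Because the quantizer replaces multiplications by the full-precision input with sign operations, each update is cheaper than standard LMS, at the cost of the convergence-speed trade-off the abstract notes.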

  17. The accuracy and reproducibility of video assessment in the pitch-side management of concussion in elite rugby.

    Science.gov (United States)

    Fuller, G W; Kemp, S P T; Raftery, M

    2017-03-01

    To investigate the accuracy and reliability of side-line video review of head impact events to aid identification of concussion in elite sport. Diagnostic accuracy and inter-rater agreement study. Immediate care, match day and team doctors involved in the 2015 Rugby Union World Cup viewed 20 video clips showing broadcasters' footage of head impact events occurring during elite Rugby matches. Subjects subsequently recorded whether any criteria warranting permanent removal from play or medical room head injury assessment were present. The accuracy of these ratings was compared to consensus expert opinion by calculating mean sensitivity and specificity across raters. The reproducibility of doctors' decisions was additionally assessed using raw agreement and Gwet's AC1 chance-corrected agreement coefficient. Forty rugby medicine doctors were included in the study. Compared to the expert reference standard, the overall sensitivity and specificity of doctors' decisions were 77.5% (95% CI 73.1-81.5%) and 53.3% (95% CI 48.2-58.2%) respectively. Overall there was raw agreement of 67.8% (95% CI 57.9-77.7%) between doctors across all video clips. The chance-corrected Gwet's AC1 agreement coefficient was 0.39 (95% CI 0.17-0.62), indicating fair agreement. Rugby World Cup doctors demonstrated moderate accuracy and fair reproducibility in head injury event decision making when assessing video clips of head impact events. The use of real-time video may improve the identification, decision making and management of concussion in elite sports. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
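    The accuracy and agreement statistics reported above can be computed as in the following sketch. It is deliberately minimal: two raters and binary decisions, whereas the study pools forty raters over twenty clips, and the example ratings are invented.

```python
def sensitivity_specificity(pred, truth):
    """Sensitivity and specificity of binary decisions vs. a reference standard."""
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum(not p and not t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(not p and t for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp)

def gwet_ac1(r1, r2):
    """Gwet's AC1 chance-corrected agreement for two binary raters."""
    pa = sum(a == b for a, b in zip(r1, r2)) / len(r1)   # raw agreement
    q = (sum(r1) + sum(r2)) / (2 * len(r1))              # mean 'positive' rate
    pe = 2 * q * (1 - q)                                 # chance agreement
    return (pa - pe) / (1 - pe)

# Invented example: decisions on five head-impact clips (1 = remove from play)
doctor = [1, 1, 0, 0, 1]
expert = [1, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(doctor, expert)
ac1 = gwet_ac1(doctor, expert)
```

    Unlike Cohen's kappa, AC1's chance term depends on the mean positive rate rather than each rater's marginals, which makes it more stable when one category dominates.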

  18. Visual dictionaries as intermediate features in the human brain

    Directory of Open Access Journals (Sweden)

    Kandan Ramakrishnan

    2015-01-01

    Full Text Available The human visual system is assumed to transform low-level visual features to object and scene representations via features of intermediate complexity. How the brain computationally represents intermediate features is still unclear. To further elucidate this, we compared the biologically plausible HMAX model and the Bag of Words (BoW) model from computer vision. Both these computational models use visual dictionaries, candidate features of intermediate complexity, to represent visual scenes, and the models have been proven effective in automatic object and scene recognition. These models however differ in the computation of visual dictionaries and pooling techniques. We investigated where in the brain, and to what extent, human fMRI responses to a short video can be accounted for by multiple hierarchical levels of the HMAX and BoW models. Brain activity of 20 subjects obtained while viewing a short video clip was analyzed voxel-wise using a distance-based variation partitioning method. Results revealed that both HMAX and BoW explain a significant amount of brain activity in early visual regions V1, V2 and V3. However, BoW exhibits more consistency across subjects in accounting for brain activity compared to HMAX. Furthermore, visual dictionary representations by HMAX and BoW explain significant brain activity in higher areas which are believed to process intermediate features. Overall our results indicate that, although both HMAX and BoW account for activity in the human visual system, BoW seems to more faithfully represent neural responses in low and intermediate level visual areas of the brain.

  19. Sepsis from dropped clips at laparoscopic cholecystectomy

    International Nuclear Information System (INIS)

    Hussain, Sarwat

    2001-01-01

    We report seven patients in whom five dropped surgical clips and two gallstones were visualized in the peritoneal cavity on radiological studies. In two patients, subphrenic abscesses and empyemas developed as a result of clips dropped into the peritoneal cavity during or following laparoscopic cholecystectomy. In one of these two, a clip was removed surgically from the site of an abscess. In two other patients dropped gallstones, and in three, dropped clips, led to no complications; these were seen incidentally on studies done for other indications. Abdominal abscess secondary to dropped gallstones is a well-recognized complication of laparoscopic cholecystectomy (LC). We conclude that even though dropped surgical clips usually do not cause problems, they should be considered a risk additional to other well-known causes of post-LC abdominal sepsis

  20. CLIPS 6.0 - C LANGUAGE INTEGRATED PRODUCTION SYSTEM, VERSION 6.0 (DEC VAX VMS VERSION)

    Science.gov (United States)

    Donnell, B.

    1994-01-01

    CLIPS, the C Language Integrated Production System, is a complete environment for developing expert systems -- programs which are specifically intended to model human expertise or knowledge. It is designed to allow artificial intelligence research, development, and delivery on conventional computers. CLIPS 6.0 provides a cohesive tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming allows knowledge to be represented as heuristics, or "rules-of-thumb" which specify a set of actions to be performed for a given situation. Object-oriented programming allows complex systems to be modeled as modular components (which can be easily reused to model other systems or create new components). The procedural programming capabilities provided by CLIPS 6.0 allow CLIPS to represent knowledge in ways similar to those allowed in languages such as C, Pascal, Ada, and LISP. Using CLIPS 6.0, one can develop expert system software using only rule-based programming, only object-oriented programming, only procedural programming, or combinations of the three. CLIPS provides extensive features to support the rule-based programming paradigm including seven conflict resolution strategies, dynamic rule priorities, and truth maintenance. CLIPS 6.0 supports more complex nesting of conditional elements in the if portion of a rule ("and", "or", and "not" conditional elements can be placed within a "not" conditional element). In addition, there is no longer a limitation on the number of multifield slots that a deftemplate can contain. The CLIPS Object-Oriented Language (COOL) provides object-oriented programming capabilities. Features supported by COOL include classes with multiple inheritance, abstraction, encapsulation, polymorphism, dynamic binding, and message passing with message-handlers. 
CLIPS 6.0 supports tight integration of the rule-based programming features of CLIPS with

  1. Anesthesia management for MitraClip device implantation

    Directory of Open Access Journals (Sweden)

    Harikrishnan Kothandan

    2014-01-01

    Full Text Available Aims and Objectives: Percutaneous MitraClip implantation has been demonstrated as an alternative procedure in high-risk patients with symptomatic severe mitral regurgitation (MR) who are not suitable for (or have been denied) mitral valve repair/replacement due to excessive comorbidity. The MitraClip implantation was performed under general anesthesia and with 3-dimensional transesophageal echocardiography (TEE) and fluoroscopic guidance. Materials and Methods: Peri-operative patient data were extracted from the electronic and paper medical records of 21 patients who underwent MitraClip implantation. Results: Four MitraClip implantations were performed in the catheterization laboratory; the remaining 17 were performed in the hybrid operating theatre. In 2 patients the procedure was aborted: in one due to migration of the Chiari network into the left atrium, and in the second because the leaflets and chords of the mitral valve were torn during clipping, resulting in consideration of open surgery. In the remaining 19 patients, the MitraClip was implanted and the patients showed acute reduction of severe MR to mild-moderate MR. All the patients had invasive blood pressure monitoring, and the initial six patients had central venous catheterization prior to the procedure. Intravenous heparin was administered after the guiding catheter was introduced through the inter-atrial septum, and the activated clotting time was maintained beyond 250 s throughout the procedure. Protamine was administered at the end of the procedure. All the patients were monitored in the intensive care unit after the procedure. Conclusions: Percutaneous MitraClip implantation is a feasible alternative in high-risk patients with symptomatic severe MR. Anesthesia management requirements are similar to those for open surgical mitral valve repair or replacement. TEE plays a vital role during MitraClip implantation.

  2. Voices and visions of Syrian video activists in Aleppo and Raqqa

    DEFF Research Database (Denmark)

    Wessels, Josepha Ivanka

    2015-01-01

    and exhibits some sequentiality. With sequentiality comes a certain subjectivity which allows the video maker to take a political space and position. Part of an ongoing postdoctoral research project, during which a general typology of YouTube clips from Syria is developed, this paper provides a focus on young...

  3. Teaching Psychology to Student Nurses: The Use of "Talking Head" Videos

    Science.gov (United States)

    Snelgrove, Sherrill; Tait, Desiree J. R.; Tait, Michael

    2016-01-01

    Psychology is a central part of undergraduate nursing curricula in the UK. However, student nurses report difficulties recognising the relevance and value of psychology. We sought to strengthen first-year student nurses' application of psychology by developing a set of digital stories based around "Talking Head" video clips where…

  4. Which technology to investigate visual perception in sport: video vs. virtual reality.

    Science.gov (United States)

    Vignais, Nicolas; Kulpa, Richard; Brault, Sébastien; Presse, Damien; Bideau, Benoit

    2015-02-01

    Visual information uptake is a fundamental element of sports involving interceptive tasks. Several methodologies, like video and methods based on virtual environments, are currently employed to analyze visual perception during sport situations. Both techniques have advantages and drawbacks. The goal of this study is to determine which of these technologies may be preferentially used to analyze visual information uptake during a sport situation. To this aim, we compared a handball goalkeeper's performance using two standardized methodologies: video clip and virtual environment. We examined this performance for two response tasks: an uncoupled task (goalkeepers show where the ball ends) and a coupled task (goalkeepers try to intercept the virtual ball). Variables investigated in this study were percentage of correct zones, percentage of correct responses, radial error and response time. The results showed that handball goalkeepers were more effective, more accurate and started to intercept earlier when facing a virtual handball thrower than when facing the video clip. These findings suggested that the analysis of visual information uptake for handball goalkeepers was better performed by using a 'virtual reality'-based methodology. Technical and methodological aspects of these findings are discussed further. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Reidentification of Persons Using Clothing Features in Real-Life Video

    Directory of Open Access Journals (Sweden)

    Guodong Zhang

    2017-01-01

    Full Text Available Person reidentification, which aims to track people across nonoverlapping cameras, is a fundamental task in automated video processing. Moving people often appear differently when viewed from different nonoverlapping cameras because of differences in illumination, pose, and camera properties. The color histogram is a global feature of an object that can be used for identification; it describes the distribution of all colors on the object. However, the use of color histograms has two disadvantages. First, colors change differently under different lighting and at different angles. Second, traditional color histograms lack spatial information. We used a perception-based color space to solve the illumination problem of traditional histograms. We also used the spatial pyramid matching (SPM) model to add image spatial information to color histograms. Finally, we used the Gaussian mixture model (GMM) to represent features for person reidentification, because the main color feature of the GMM is more adaptable to scene changes, and it improves the stability of the retrieved results for different color spaces in various scenes. Through a series of experiments, we found how different features impact person reidentification.
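    The global color-histogram matching described above can be sketched with synthetic "images", using histogram intersection as the similarity measure. The bin count and pixel values are illustrative assumptions; the paper's perception-based color space and its SPM/GMM stages are not reproduced here.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Global color histogram: joint 3-channel binning, L1-normalized.
    img: H x W x 3 array with values in [0, 1]."""
    hist, _ = np.histogramdd(img.reshape(-1, 3),
                             bins=(bins,) * 3, range=[(0, 1)] * 3)
    return hist.ravel() / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return np.minimum(h1, h2).sum()

rng = np.random.default_rng(2)
person_a = rng.uniform(0.0, 0.4, size=(32, 16, 3))   # darker clothing colors
person_b = rng.uniform(0.6, 1.0, size=(32, 16, 3))   # lighter clothing colors
a_again  = rng.uniform(0.0, 0.4, size=(32, 16, 3))   # same person, another camera

sim_same = histogram_intersection(color_histogram(person_a),
                                  color_histogram(a_again))
sim_diff = histogram_intersection(color_histogram(person_a),
                                  color_histogram(person_b))
```

    A global histogram like this discards pixel positions entirely, which is exactly the limitation the SPM pyramid is meant to address.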

  6. PROTOTIPE VIDEO EDITOR DENGAN MENGGUNAKAN DIRECT X DAN DIRECT SHOW

    Directory of Open Access Journals (Sweden)

    Djoni Haryadi Setiabudi

    2004-01-01

    Full Text Available Technology development has given people the chance to capture their memorable moments in video format. A high-quality digital video is the result of a good editing process, which in turn gives rise to the need for an editor application. Accordingly, this paper describes the process of making a simple application for video editing needs. The application development uses programming techniques often applied in multimedia applications, especially video. The first part of the application begins with video file compression and decompression, followed by the editing of the digital video file. Furthermore, the application is also equipped with the facilities needed for the editing processes. The application is made with Microsoft Visual C++ with DirectX technology, particularly DirectShow, and provides basic facilities that help the editing of a digital video file. The application produces an AVI format file after the editing process is finished. Testing of this application shows its ability to 'cut' and 'insert' video files in AVI, MPEG, MPG and DAT formats. The 'cut' and 'insert' process can only be done in static order. Further, the application also provides an effects facility for the transition process in each clip. Lastly, the application saves the new edited video file in AVI format. Abstract in Bahasa Indonesia (translated): Technological developments have given society the opportunity to capture important moments using video. Producing a good digital video also requires a good editing process. Editing digital video requires an editor program. Based on the problems above, this research built a simple editor prototype for digital video. The application was built using programming techniques in the multimedia field, particularly video. Planning of the application began with the formation of

  7. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.
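    The mean average precision (MAP) metric used in the evaluation above can be sketched as follows. The relevance lists are hypothetical; the MediaEval Affect Task defines the exact segment-level evaluation protocol.

```python
def average_precision(ranked_relevance):
    """AP of one ranked list: mean of precision at each relevant hit.
    ranked_relevance: 0/1 relevance flags in retrieval order."""
    hits, precisions = 0, []
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)   # precision at this hit
    return sum(precisions) / len(precisions) if precisions else 0.0

def mean_average_precision(relevance_lists):
    """MAP: average of per-query (here, per-movie) AP values."""
    return sum(average_precision(r) for r in relevance_lists) / len(relevance_lists)

ap = average_precision([1, 0, 1, 0])                     # (1/1 + 2/3) / 2
map_score = mean_average_precision([[1, 0, 1, 0], [0, 1]])
```

    MAP rewards systems that rank truly violent segments near the top of each movie's list, which is why it suits this retrieval-style formulation of violence detection.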

  8. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Directory of Open Access Journals (Sweden)

    Florian Eyben

    Full Text Available Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role for tomorrow's intelligent systems, enabling search for movies with a particular mood, computer aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing is exclusively focusing on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lend to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research, by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to .398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  9. Recurrence of ICA-PCoA aneurysms after neck clipping.

    Science.gov (United States)

    Sakaki, T; Takeshima, T; Tominaga, M; Hashimoto, H; Kawaguchi, S

    1994-01-01

    Between 1975 and 1992, 2211 patients underwent aneurysmal neck clipping at the Nara Medical University clinic and associated hospitals. The aneurysm in 931 of these patients was situated at the junction of the internal carotid artery (ICA) and posterior communicating artery (PCoA). Seven patients were readmitted 4 to 17 years after the first surgery because of regrowth and rupture of an ICA-PCoA aneurysmal sac that had arisen from the residual neck. On angiograms obtained following aneurysmal neck clipping, a large primitive type of PCoA was demonstrated in six patients and a small PCoA in one. A small residual aneurysm was confirmed in only two patients and angiographically complete neck clipping in five. Recurrent ICA-PCoA aneurysms were separated into two types based on the position of the old clip in relation to the new growth. Type 1 aneurysms regrow from the entire neck and balloon eccentrically. In this type, it is possible to apply the clip at the neck as in conventional clipping for a ruptured aneurysm. Type 2 includes aneurysms in which the proximal portion of a previous clip is situated at the corner of the ICA and aneurysmal neck and the distal portion on the enlarged dome of the aneurysm, because the sac is regrowing from a portion of the residual neck. In this type of aneurysm, a Sugita fenestrated clip can occlude the residual neck, overriding the old clip. Classifying these aneurysms into two groups is very useful from a surgical point of view because it is possible to apply a new clip without removing the old clip, which was found to be adherent to surrounding tissue.

  10. Journal of Clipped Words in Reader's Digest Magazine

    OpenAIRE

    Simanjuntak, Lestari

    2012-01-01

    This study deals with clipped words in the “Laughter, the Best Medicine” section of Reader's Digest. The objectives of the study are to find out the types of clipped words used in “Laughter, the Best Medicine” of Reader's Digest, to find out the type dominantly used in the whole story, and to explain the dominant clipped word use in the text. The study used a descriptive qualitative method. The data were collected from seventeen selected Reader's Digest issues which contain the clipped word by applie...

  11. Human features detection in video surveillance

    OpenAIRE

    Barbosa, Patrícia Margarida Silva de Castro Neves

    2016-01-01

    Integrated master's dissertation in Industrial Electronics and Computer Engineering. Human activity recognition algorithms have been studied actively for decades using sequences of 2D and 3D images from video surveillance. These new surveillance solutions and the areas of image processing and analysis have been receiving special attention and interest from the scientific community. Thus, it became possible to witness the appearance of new video compression techniques, the tr...

  12. 21 CFR 868.6225 - Nose clip.

    Science.gov (United States)

    2010-04-01

    ... ANESTHESIOLOGY DEVICES Miscellaneous § 868.6225 Nose clip. (a) Identification. A nose clip is a device intended to close a patient's external nares (nostrils) during diagnostic or therapeutic procedures. (b... from the current good manufacturing practice requirements of the quality system regulation in part 820...

  13. Using Video Modeling as an Anti-bullying Intervention for Children with Autism Spectrum Disorder.

    Science.gov (United States)

    Rex, Catherine; Charlop, Marjorie H; Spector, Vicki

    2018-03-07

    In the present study, we used a multiple baseline design across participants to assess the efficacy of a video modeling intervention to teach six children with autism spectrum disorder (ASD) to assertively respond to bullying. During baseline, the children made few appropriate responses upon viewing video clips of bullying scenarios. During the video modeling intervention, participants viewed videos of models assertively responding to three types of bullying: physical, verbal bullying, and social exclusion. Results indicated that all six children learned through video modeling to make appropriate assertive responses to bullying scenarios. Four of the six children demonstrated learning in the in situ bullying probes. The results are discussed in terms of an intervention for victims of bullying with ASD.

  14. 21 CFR 886.1410 - Ophthalmic trial lens clip.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ophthalmic trial lens clip. 886.1410 Section 886...) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1410 Ophthalmic trial lens clip. (a) Identification. An ophthalmic trial lens clip is a device intended to hold prisms, spheres, cylinders, or...

  15. Video gallery of educational lectures integrated in faculty's portal

    Directory of Open Access Journals (Sweden)

    Jaroslav Majerník

    2013-05-01

    Full Text Available This paper presents a web-based educational video-clip exhibition created to share various archived lectures for medical students, health care professionals as well as for the general public. The presentation of closely related topics was developed as a video gallery and is based solely on free or open source tools so as to be available for wide academic and/or non-commercial use. Even though the educational video records can be embedded in any website, we preferred to use our faculty's portal, which should be a central point offering various multimedia educational materials. The system was integrated and tested to offer open access to infectology lectures that were captured and archived from live-streamed sessions and from videoconferences.

  16. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    Science.gov (United States)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.
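    The idea of detecting caption text on reduced images reconstructed from DC coefficients can be sketched as follows. The gradient-energy heuristic, the threshold value, and the function name are illustrative assumptions, not the authors' actual detector:

    ```python
    import numpy as np

    def caption_rows(dc_thumb, energy_thresh=40.0):
        """Flag rows of a DC-coefficient thumbnail likely to hold caption text.

        Caption text produces dense horizontal intensity transitions, so each
        row is scored by its mean absolute horizontal gradient; rows whose
        energy exceeds the (illustrative) threshold are returned."""
        grad = np.abs(np.diff(dc_thumb.astype(float), axis=1))
        row_energy = grad.mean(axis=1)
        return np.where(row_energy > energy_thresh)[0]
    ```

    Because the thumbnail comes straight from the DC coefficients of the compressed stream, a detector of this kind avoids full-frame decompression, matching the efficiency argument made in the abstract.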

  17. Psychiatric disorder associated with vacuum-assisted breast biopsy clip placement: a case report

    Directory of Open Access Journals (Sweden)

    Zografos George C

    2008-10-01

    Full Text Available Abstract Introduction Vacuum-assisted breast biopsy is a minimally invasive technique that has been used increasingly in the treatment of mammographically detected, non-palpable breast lesions. Clip placement at the biopsy site is standard practice after vacuum-assisted breast biopsy. Case presentation We present the case of a 62-year-old woman with suspicious microcalcifications in her left breast. The patient was informed about vacuum-assisted breast biopsy, including clip placement. During the course of taking the patient's history, she communicated excellently, her demeanor was normal, she disclosed no intake of psychiatric medication and had not been diagnosed with any psychiatric disorders. Subsequently, the patient underwent vacuum-assisted breast biopsy (11 G) under local anesthesia. A clip was placed at the biopsy site. The pathological diagnosis was of sclerosing adenosis. At the 6-month mammographic follow-up, the radiologist mentioned the existence of the metallic clip in her breast. Subsequently, the woman presented complaining about "being spied [upon] by an implanted clip in [her] breast" and repeatedly requested the removal of the clip. The patient was referred to the specialized psychiatrist of our breast unit for evaluation. The Mental State Examination found that systematized paranoid ideas of persecutory type dominated her daily routines. At the time, she believed that the implanted clip was one of several pieces of equipment being used to keep her under surveillance, the other equipment being her telephone, cameras and television. Quite surprisingly, she had never had a consultation with a mental health professional. The patient appeared depressed and her insight into her condition was impaired. The prevalent diagnosis was schizotypal disorder, whereas the differential diagnosis comprised delusional disorder of persecutory type, affective disorder with psychotic features or comorbid delusional disorder with major depression.

  18. Biocompatibility of Plastic Clip in Neurocranium - Experimental Study on Dogs.

    Science.gov (United States)

    Delibegovic, Samir; Dizdarevic, Kemal; Cickusic, Elmir; Katica, Muhamed; Obhodjas, Muamer; Ocus, Muhamed

    2016-01-01

    A potential advantage of the use of plastic clips in neurosurgery is their property of causing fewer artifacts than titanium clips as assessed by computed tomography and magnetic resonance scans. The biocompatibility of plastic clips was demonstrated in the peritoneal cavity, but their behavior in the neurocranium is not known. Twelve aggressive stray dogs designated for euthanasia were taken for this experimental study. The animals were divided into two groups. In all cases, after anesthesia, a craniotomy was performed, and after opening the dura, a titanium clip (a permanent Yasargil FT 746 T clip at a 90° angle) was placed on the proximal part of the isolated superficial Sylvian vein, while a plastic Hem-o-lok ML clip was placed on another part of the vein. The first group of animals was sacrificed on the 7th postoperative day and the second group on the 60th postoperative day. Samples of tissue around the clips were taken for histopathological evaluation. The plastic clip caused a more intensive tissue reaction than the titanium clip on the 7th postoperative day, but there was no statistical difference. Even on the 60th postoperative day there was no significant difference in tissue reaction between the titanium and plastic clips. These preliminary results confirm the possibility of using plastic clips in neurosurgery. Before their use in human neurosurgery, further studies are needed to investigate the long-term effects of the presence of plastic clips in the neurocranium, as well as studies in an aneurysmal model.

  19. Mounting clips for panel installation

    Science.gov (United States)

    Cavieres, Andres; Al-Haddad, Tristan; Goodman, Joseph

    2017-07-11

    A photovoltaic panel mounting clip comprising a base, central indexing tabs, flanges, lateral indexing tabs, and vertical indexing tabs. The mounting clip removably attaches one or more panels to a beam or the like structure, both mechanically and electrically. It provides secure locking of the panels in all directions, while providing guidance in all directions for accurate installation of the panels to the beam or the like structure.

  20. Video Comparator

    International Nuclear Information System (INIS)

    Rose, R.P.

    1978-01-01

    The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system is manufactured by ITP of Chatsworth, CA and is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckleler micrometer equipped with a digital readout, and a second feature aligned with the reference line and the distance moved obtained from the digital display

  1. Application of indocyanine green video angiography in surgical treatment of intracranial aneurysms

    Directory of Open Access Journals (Sweden)

    Felix Hendrik Pahl

    2015-07-01

    Full Text Available Indocyanine green (ICG) video angiography has been used for several medical indications in the last decades. It allows a real-time evaluation of vascular structures during surgery. This study describes the surgical results of a senior vascular neurosurgeon. We retrospectively searched our database for all aneurysm cases treated with the aid of intraoperative ICG from 2009 to 2014. A total of 61 aneurysms in 56 patients were surgically clipped using intraoperative ICG. Clip repositioning after ICG happened in 2 patients (3.2%). Generally, highly variable clip adjustment rates of 2%–38% following ICG have been reported since the introduction of this imaging technique. The application of ICG in vascular neurosurgery is still an emerging challenge. It is an adjunctive strategy which facilitates aneurysmal evaluation and treatment in experienced hands. Nevertheless, a qualified vascular neurosurgeon is still the most important component of high quality work.

  2. Development of an artifact-free aneurysm clip

    Directory of Open Access Journals (Sweden)

    Brack Alexander

    2016-09-01

    Full Text Available For the treatment of intracranial aneurysms with aneurysm clips, a follow-up inspection in MRI is usually required. To avoid any artifacts, which can make a proper diagnosis difficult, a new approach for the manufacturing of an aneurysm clip made entirely from fiber-reinforced plastics has been developed. In this paper the concept for the design of the clip, the development of a new manufacturing technology for the fiber-reinforced components, as well as first results from the examination of the components in phantom MRI testing are shown.

  3. Ranking Highlights in Personal Videos by Analyzing Edited Videos.

    Science.gov (United States)

    Sun, Min; Farhadi, Ali; Chen, Tseng-Hung; Seitz, Steve

    2016-11-01

    We present a fully automatic system for ranking domain-specific highlights in unconstrained personal videos by analyzing online edited videos. A novel latent linear ranking model is proposed to handle noisy training data harvested online. Specifically, given a targeted domain such as "surfing," our system mines the YouTube database to find pairs of raw and their corresponding edited videos. Leveraging the assumption that an edited video is more likely to contain highlights than the trimmed parts of the raw video, we obtain pair-wise ranking constraints to train our model. The learning task is challenging due to the amount of noise and variation in the mined data. Hence, a latent loss function is incorporated to mitigate the issues caused by the noise. We efficiently learn the latent model on a large number of videos (about 870 min in total) using a novel EM-like procedure. Our latent ranking model outperforms its classification counterpart and is fairly competitive compared with a fully supervised ranking system that requires labels from Amazon Mechanical Turk. We further show that a state-of-the-art audio feature, mel-frequency cepstral coefficients, is inferior to a state-of-the-art visual feature. By combining both audio-visual features, we obtain the best performance in dog activity, surfing, skating, and viral video domains. Finally, we show that impressive highlights can be detected without additional human supervision for seven domains (i.e., skating, surfing, skiing, gymnastics, parkour, dog activity, and viral video) in unconstrained personal videos.
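    The pair-wise ranking constraints described above can be illustrated with a plain linear ranking update: a segment from an edited video should outscore a trimmed segment from the corresponding raw footage. This is a generic hinge-loss sketch with hypothetical names, not the paper's latent model:

    ```python
    import numpy as np

    def hinge_rank_updates(w, pairs, lr=0.1, margin=1.0):
        """One pass of pairwise hinge updates for a linear ranking model.

        pairs: iterable of (x_pos, x_neg) feature vectors, where x_pos comes
        from an edited (highlight-rich) clip and x_neg from trimmed raw
        footage. Plain ranking-SVM-style step; the latent variant in the
        paper additionally infers which pairs to trust."""
        w = w.copy()
        for x_pos, x_neg in pairs:
            if w @ x_pos - w @ x_neg < margin:   # ranking constraint violated
                w += lr * (x_pos - x_neg)        # push the scores apart
        return w
    ```

    Repeating such passes over many mined pairs drives the weight vector toward scoring highlight-like segments above trimmed ones.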

  4. A defect in the CLIP1 gene (CLIP-170) can cause autosomal recessive intellectual disability

    OpenAIRE

    Larti, Farzaneh; Kahrizi, Kimia; Musante, Luciana; Hu, Hao; Papari, Elahe; Fattahi, Zohreh; Bazazzadegan, Niloofar; Liu, Zhe; Banan, Mehdi; Garshasbi, Masoud; Wienker, Thomas F; Hilger Ropers, H; Galjart, Niels; Najmabadi, Hossein

    2015-01-01

    In the context of a comprehensive research project, investigating novel autosomal recessive intellectual disability (ARID) genes, linkage analysis based on autozygosity mapping helped identify an intellectual disability locus on Chr.12q24, in an Iranian family (LOD score=3.7). Next-generation sequencing (NGS) following exon enrichment in this novel interval, detected a nonsense mutation (p.Q1010*) in the CLIP1 gene. CLIP1 encodes a member of microtubule (MT) plus-end tracking proteins, which ...

  5. A model-based approach to identify binding sites in CLIP-Seq data.

    Directory of Open Access Journals (Sweden)

    Tao Wang

    Full Text Available Cross-linking immunoprecipitation coupled with high-throughput sequencing (CLIP-Seq) has made it possible to identify the targeting sites of RNA-binding proteins in various cell culture systems and tissue types on a genome-wide scale. Here we present a novel model-based approach (MiClip) to identify high-confidence protein-RNA binding sites from CLIP-seq datasets. This approach assigns a probability score to each potential binding site to help prioritize subsequent validation experiments. The MiClip algorithm has been tested in both HITS-CLIP and PAR-CLIP datasets. In the HITS-CLIP dataset, the signal/noise ratios of miRNA seed motif enrichment produced by the MiClip approach are between 17% and 301% higher than those of the ad hoc method for the top 10 most enriched miRNAs. In the PAR-CLIP dataset, the MiClip approach can identify ∼50% more validated binding targets than the original ad hoc method and two recently published methods. To facilitate the application of the algorithm, we have released an R package, MiClip (http://cran.r-project.org/web/packages/MiClip/index.html), and a public web-based graphical user interface software (http://galaxy.qbrc.org/tool_runner?tool_id=mi_clip) for customized analysis.
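    As a toy stand-in for scoring candidate binding sites: under a null where reads land uniformly across candidate sites, a site attracting an unusual share of reads gets a small binomial upper-tail p-value. This is only a hedged illustration of the general idea of probabilistic site scoring; MiClip's actual hidden-Markov-style model is more sophisticated:

    ```python
    from math import comb

    def site_pvalue(reads_at_site, total_reads, n_sites):
        """Binomial upper-tail p-value for one candidate binding site.

        Null model: each of total_reads lands on any of n_sites with equal
        probability 1/n_sites. Small p-values flag sites with read pileups
        unlikely under the null. Illustrative only, not MiClip's algorithm."""
        p = 1.0 / n_sites
        return sum(comb(total_reads, k) * p**k * (1 - p)**(total_reads - k)
                   for k in range(reads_at_site, total_reads + 1))
    ```

    Ranking sites by ascending p-value gives the kind of prioritized list the abstract describes for guiding validation experiments.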

  6. Integrating CLIPS applications into heterogeneous distributed systems

    Science.gov (United States)

    Adler, Richard M.

    1991-01-01

    SOCIAL is an advanced, object-oriented development tool for integrating intelligent and conventional applications across heterogeneous hardware and software platforms. SOCIAL defines a family of 'wrapper' objects called agents, which incorporate predefined capabilities for distributed communication and control. Developers embed applications within agents and establish interactions between distributed agents via non-intrusive message-based interfaces. This paper describes a predefined SOCIAL agent that is specialized for integrating C Language Integrated Production System (CLIPS)-based applications. The agent's high-level Application Programming Interface supports bidirectional flow of data, knowledge, and commands to other agents, enabling CLIPS applications to initiate interactions autonomously, and respond to requests and results from heterogeneous remote systems. The design and operation of CLIPS agents are illustrated with two distributed applications that integrate CLIPS-based expert systems with other intelligent systems for isolating and mapping problems in the Space Shuttle Launch Processing System at the NASA Kennedy Space Center.

  7. 21 CFR 886.3100 - Ophthalmic tantalum clip.

    Science.gov (United States)

    2010-04-01

    ... blood vessels in the eye. (b) Classification. Class II (special controls). The device is exempt from the...) MEDICAL DEVICES OPHTHALMIC DEVICES Prosthetic Devices § 886.3100 Ophthalmic tantalum clip. (a) Identification. An ophthalmic tantalum clip is a malleable metallic device intended to be implanted permanently...

  8. Percutaneous interventional mitral regurgitation treatment using the Mitra-Clip system

    DEFF Research Database (Denmark)

    Boekstegers, P; Hausleiter, J; Baldus, S

    2014-01-01

    The interventional treatment of mitral valve regurgitation by the MitraClip procedure has grown rapidly in Germany and Europe during the past years. The MitraClip procedure has the potential to treat high-risk patients with secondary mitral valve regurgitation and poor left ventricular function. Furthermore, patients with primary mitral valve regurgitation may be treated successfully by the MitraClip procedure in case of high surgical risk or in very old patients. At the same time it has been emphasised that the MitraClip interventional treatment is still at an early stage of clinical development. The largest clinical experience with the MitraClip procedure so far is probably present in some German cardiovascular centers, which here summarise their recommendations on the current indications and procedural steps of the MitraClip treatment. These recommendations of the AGIK and ALKK may present a basis

  9. Clip reconstruction of a large right MCA bifurcation aneurysm. Case report

    Directory of Open Access Journals (Sweden)

    Giovani A.

    2014-06-01

    Full Text Available We report a case of a complex large middle cerebral artery (MCA) bifurcation aneurysm that ruptured during dissection from the very adherent MCA branches but was successfully clipped and the MCA bifurcation reconstructed using 4 Yasargil clips. Through a right pterional craniotomy the sylvian fissure was opened widely so as to allow enough workspace for clipping the aneurysm and placing a temporary clip on M1. The patient recovered very well after surgery and was discharged after 1 week with no neurological deficit. Complex MCA bifurcation aneurysms can be safely reconstructed using regular clips, without the need for fenestrated clips or complex bypass procedures.

  10. Manipulations of the features of standard video lottery terminal (VLT) games: effects in pathological and non-pathological gamblers.

    Science.gov (United States)

    Loba, P; Stewart, S H; Klein, R M; Blackburn, J R

    2001-01-01

    The present study was conducted to identify game parameters that would reduce the risk of abuse of video lottery terminals (VLTs) by pathological gamblers, while exerting minimal effects on the behavior of non-pathological gamblers. Three manipulations of standard VLT game features were explored. Participants were exposed to: a counter which displayed a running total of money spent; a VLT spinning reels game where participants could no longer "stop" the reels by touching the screen; and sensory feature manipulations. In control conditions, participants were exposed to standard settings for either a spinning reels or a video poker game. Dependent variables were self-ratings of reactions to each set of parameters. A set of 2(3) x 2 x 2 (game manipulation [experimental condition(s) vs. control condition] x game [spinning reels vs. video poker] x gambler status [pathological vs. non-pathological]) repeated measures ANOVAs were conducted on all dependent variables. The findings suggest that the sensory manipulations (i.e., fast speed/sound or slow speed/no sound manipulations) produced the most robust reaction differences. Before advocating harm reduction policies such as lowering sensory features of VLT games to reduce potential harm to pathological gamblers, it is important to replicate findings in a more naturalistic setting, such as a real bar.

  11. Hardware Implementation Of Line Clipping Algorithm By Using FPGA

    Directory of Open Access Journals (Sweden)

    Amar Dawod

    2013-04-01

    Full Text Available The computer graphics system performance is increasing faster than any other computing application. Algorithms for line clipping against convex polygons and lines have been studied for a long time and many research papers have been published so far. In spite of the latest graphical hardware development and significant increase in performance, clipping is still a bottleneck of any graphical system, so its implementation in hardware is essential for real-time applications. In this paper the clipping operation is discussed and a hardware implementation of the line clipping algorithm is presented, formulated and tested using Field Programmable Gate Arrays (FPGA). The designed hardware unit consists of two parts: the first is the positional code generator unit and the second is the clipping unit. Finally, it is worth mentioning that the designed unit is capable of clipping 232,524 line segments per second.
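    The two hardware stages described (a positional code generator followed by a clipping unit) mirror the classic Cohen–Sutherland algorithm, which can be sketched in software as follows (an illustrative sketch, not the paper's FPGA design):

    ```python
    # Region outcodes produced by the positional-code stage
    INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

    def outcode(x, y, xmin, ymin, xmax, ymax):
        """4-bit positional code locating (x, y) relative to the window."""
        code = INSIDE
        if x < xmin: code |= LEFT
        elif x > xmax: code |= RIGHT
        if y < ymin: code |= BOTTOM
        elif y > ymax: code |= TOP
        return code

    def clip_line(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
        """Cohen-Sutherland clipping; returns the clipped segment or None."""
        c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
        c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
        while True:
            if not (c0 | c1):          # both endpoints inside: accept
                return (x0, y0, x1, y1)
            if c0 & c1:                # both share an outside zone: reject
                return None
            c = c0 or c1               # pick an endpoint outside the window
            if c & TOP:
                x, y = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0), ymax
            elif c & BOTTOM:
                x, y = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0), ymin
            elif c & RIGHT:
                x, y = xmax, y0 + (y1 - y0) * (xmax - x0) / (x1 - x0)
            else:                      # LEFT
                x, y = xmin, y0 + (y1 - y0) * (xmin - x0) / (x1 - x0)
            if c == c0:
                x0, y0, c0 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)
            else:
                x1, y1, c1 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)
    ```

    The outcode tests map naturally to comparator logic in hardware, which is why this split into a code-generator stage and a clipping stage suits an FPGA implementation.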

  12. Do You See What I See? How We Use Video as an Adjunct to General Surgery Resident Education.

    Science.gov (United States)

    Abdelsattar, Jad M; Pandian, T K; Finnesgard, Eric J; El Khatib, Moustafa M; Rowse, Phillip G; Buckarma, EeeL N H; Gas, Becca L; Heller, Stephanie F; Farley, David R

    2015-01-01

    Preparation of learners for surgical operations varies by institution, surgeon staff, and the trainees themselves. Often the operative environment is overwhelming for surgical trainees and the educational experience is substandard due to inadequate preparation. We sought to develop a simple, quick, and interactive tool that might assess each individual trainee's knowledge baseline before participating in minimally invasive surgery (MIS). A 4-minute video with 5 separate muted clips from laparoscopic procedures (splenectomy, gastric band removal, cholecystectomy, adrenalectomy, and inguinal hernia repair) was created and shown to medical students (MS), general surgery residents, and staff surgeons. Participants were asked to watch the video and commentate (provide facts) on the operation, body region, instruments, anatomy, pathology, and surgical technique. Comments were scored using a 100-point grading scale (100 facts agreed upon by 8 surgical staff and trainees) with points deducted for incorrect answers. All participants were video recorded. Performance was scored by 2 separate raters. The setting was an academic medical center. Participants were MS (n = 10), interns (n = 8), postgraduate year 2 residents (PGY2s, n = 11), PGY3s (n = 10), PGY4s (n = 9), PGY5s (n = 7), and general surgery staff surgeons (n = 5). Scores ranged from -5 to 76 total facts offered during the 4-minute video examination. MS scored the lowest (mean, range; 5, -5 to 8); interns were better (17, 4-29), followed by PGY2s (31, 21-34), PGY3s (33, 10-44), PGY4s (44, 19-47), PGY5s (48, 28-49), and staff (48, 17-76). Experienced surgeons and senior trainees relayed significantly more facts on the muted video clip of 5 MIS operations than inexperienced trainees. However, even tenured staff surgeons relayed very few facts on procedures they were unfamiliar with. The potential differentiating capabilities of such a quick and inexpensive effort have pushed us to generate better online learning tools (operative modules) and hands-on simulation resources for our learners. We aim to

  13. Augmented video viewing: transforming video consumption into an active experience

    OpenAIRE

    WIJNANTS, Maarten; Leën, Jeroen; QUAX, Peter; LAMOTTE, Wim

    2014-01-01

    Traditional video productions fail to cater to the interactivity standards that the current generation of digitally native customers have become accustomed to. This paper therefore advertises the "activation" of the video consumption process. In particular, it proposes to enhance HTML5 video playback with interactive features in order to transform video viewing into a dynamic pastime. The objective is to enable the authoring of more captivating and rewarding video experiences for end-users. T...

  14. A method of handing down surgical clipping technique for cerebral aneurysm

    International Nuclear Information System (INIS)

    Idei, Masaru; Yamane, Kanji; Okita, Shinji; Kumano, Kiyoshi; Nakae, Ryuta

    2009-01-01

    Meticulous clipping techniques are essential to obtain good results. Recently, the introduction of intravascular surgery for cerebral aneurysms has decreased the number of direct clipping surgeries, and the increasing number of lawsuits against doctors further discourages young surgeons from attempting clipping. As a result, young neurosurgeons have less experience performing clipping. Therefore, we must learn clipping techniques from expert neurosurgeons under the limitation of having fewer opportunities to perform clipping surgery. In this paper, I present my experiences and discuss ways to acquire techniques for clipping surgery. I performed surgical clipping in 19 cases, 12 unruptured and 7 ruptured aneurysms, in 7 males and 12 females aged from 36 to 79 years old (mean 61.9 years). Postoperatively, there were no symptomatic complications, but there were 2 asymptomatic infarctions revealed on CT scan. Intraoperative premature rupture occurred in 1 patient with a ruptured aneurysm. Techniques of manipulation with micro-forceps, suction and spatula are required for successful clipping. Off-the-job training of dissecting chicken wing arteries and rat abdominal aortas and venae cavae is useful. Moreover, actual experience of surgical operations is essential. Surgical experience raises the motivation of young neurosurgeons and encourages them to train more. We believe that this benign cycle contributes to meticulous surgical skills. (author)

  15. Semantic Labeling of Nonspeech Audio Clips

    Directory of Open Access Journals (Sweden)

    Xiaojuan Ma

    2010-01-01

    Full Text Available Human communication about entities and events is primarily linguistic in nature. While visual representations of information are shown to be highly effective as well, relatively little is known about the communicative power of auditory nonlinguistic representations. We created a collection of short nonlinguistic auditory clips encoding familiar human activities, objects, animals, natural phenomena, machinery, and social scenes. We presented these sounds to a broad spectrum of anonymous human workers using Amazon Mechanical Turk and collected verbal sound labels. We analyzed the human labels in terms of their lexical and semantic properties to ascertain that the audio clips do evoke the information suggested by their pre-defined captions. We then measured the agreement with the semantically compatible labels for each sound clip. Finally, we examined which kinds of entities and events, when captured by nonlinguistic acoustic clips, appear to be well-suited to elicit information for communication, and which ones are less discriminable. Our work is set against the broader goal of creating resources that facilitate communication for people with some types of language loss. Furthermore, our data should prove useful for future research in machine analysis/synthesis of audio, such as computational auditory scene analysis, and annotating/querying large collections of sound effects.
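    The agreement measurement described above, between crowd-collected labels and a clip's set of semantically compatible labels, can be illustrated with a small hypothetical helper (the function and the exact measure are assumptions for illustration, not the study's analysis code):

    ```python
    def label_agreement(worker_labels, compatible):
        """Share of crowd labels falling in a clip's compatible-label set.

        worker_labels: verbal labels collected for one audio clip;
        compatible: labels judged semantically compatible with the clip's
        predefined caption. Case-insensitive, illustrative measure only."""
        compatible = {c.lower() for c in compatible}
        hits = sum(1 for lab in worker_labels if lab.lower() in compatible)
        return hits / len(worker_labels)
    ```

    Clips with high agreement under such a measure would count as well-suited to eliciting the intended information, while low-agreement clips correspond to the less discriminable sounds the abstract mentions.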

  16. Clip, connect, clone

    DEFF Research Database (Denmark)

    Fujima, Jun; Lunzer, Aran; Hornbæk, Kasper

    2010-01-01

    using three mechanisms: clipping of input and result elements from existing applications to form cells on a spreadsheet; connecting these cells using formulas, thus enabling result transfer between applications; and cloning cells so that multiple requests can be handled side by side. We demonstrate...

  17. Current MitraClip experience, safety and feasibility in the Netherlands

    NARCIS (Netherlands)

    Rahhab, Z.; Kortlandt, F. A.; Velu, J. F.; Schurer, R. A. J.; Delgado, V.; Tonino, P.; Boven, A. J.; van den Branden, B. J. L.; Kraaijeveld, A. O.; Voskuil, M.; Hoorntje, J.; van Wely, M.; van Houwelingen, K.; Bleeker, G. B.; Rensing, B.; Kardys, I.; Baan, J.; van der Heyden, J. A. S.; van Mieghem, N. M.

    2017-01-01

Purpose: Data on MitraClip procedural safety and efficacy in the Netherlands are scarce. We aim to provide an overview of the Dutch MitraClip experience. Methods: We pooled anonymised demographic and procedural data of 1151 consecutive MitraClip patients from 13 Dutch hospitals. Data was collected by product specialists in collaboration with local operators.

  18. Role of pre-operative multimedia video information in allaying anxiety related to spinal anaesthesia: A randomised controlled trial

    Science.gov (United States)

    Dias, Raylene; Baliarsing, Lipika; Barnwal, Neeraj Kumar; Mogal, Shweta; Gujjar, Pinakin

    2016-01-01

Background and Aims: A high incidence of anxiety has been reported in patients in the operation theatre set up. We developed a short visual clip of 206 s duration depicting the procedure of spinal anaesthesia (SAB) and aimed to compare the effect of this video on perioperative anxiety in patients undergoing procedures under SAB. Methods: A prospective randomised study of 200 patients undergoing surgery under SAB was conducted. Patients were allotted to either the nonvideo group (Group NV - those who were not shown the video) or the video group (Group V - those who were shown the video). Anxiety was assessed using the Spielberger State-Trait Anxiety Inventory during the pre-anaesthetic check-up and before surgery. Haemodynamic parameters such as heart rate (HR) and mean arterial pressure (MAP) were also noted. Student's t-test was used for normally distributed and Mann–Whitney U-test for nonnormally distributed quantitative data. Chi-square test was used for categorical data. Results: Both groups were comparable with respect to baseline anxiety scores and haemodynamic parameters. The nonvideo group showed a significant increase in state anxiety scores before administration of SAB (P < 0.001). Patients in the video group had significantly lower HR and MAP preoperatively (P < 0.001). The prevalence of 'high anxiety' for SAB was 81% in our study, which decreased to 66% in the video group before surgery. Conclusion: Multimedia information in the form of a short audiovisual clip is an effective and feasible method to reduce perioperative anxiety related to SAB. PMID:27942059

  19. Diagnostic applications of nail clippings.

    Science.gov (United States)

    Stephen, Sasha; Tosti, Antonella; Rubin, Adam I

    2015-04-01

Nail clipping is a simple technique for diagnosis of several nail unit dermatoses. This article summarizes the practical approach, utility, and histologic findings of a nail clipping in evaluation of onychomycosis, nail unit psoriasis, onychomatricoma, subungual hematoma, melanonychia, and nail cosmetics, and the forensic applications of this easily obtained specimen. It reviews important considerations in optimizing specimen collection, processing methods, and efficacy of special stains in several clinical contexts. Readers will develop a greater understanding and ease of application of this indispensable procedure in assessing nail unit dermatoses. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Identifying Key Features of Student Performance in Educational Video Games and Simulations through Cluster Analysis

    Science.gov (United States)

    Kerr, Deirdre; Chung, Gregory K. W. K.

    2012-01-01

    The assessment cycle of "evidence-centered design" (ECD) provides a framework for treating an educational video game or simulation as an assessment. One of the main steps in the assessment cycle of ECD is the identification of the key features of student performance. While this process is relatively simple for multiple choice tests, when…

  1. The Building of Pre-Service Primary Teachers' Knowledge of Mathematics Teaching: Interaction and Online Video Case Studies

    Science.gov (United States)

    Llinares, Salvador; Valls, Julia

    2009-01-01

    This study explores how preservice primary teachers became engaged in meaning-making mathematics teaching when participating in online discussions within learning environments integrating video-clips of mathematics teaching. We identified different modes of participation in the online discussions and different levels of knowledge-building. The…

  2. Cerebral Aneurysm Clipping Surgery Simulation Using Patient-Specific 3D Printing and Silicone Casting.

    Science.gov (United States)

    Ryan, Justin R; Almefty, Kaith K; Nakaji, Peter; Frakes, David H

    2016-04-01

    Neurosurgery simulator development is growing as practitioners recognize the need for improved instructional and rehearsal platforms to improve procedural skills and patient care. In addition, changes in practice patterns have decreased the volume of specific cases, such as aneurysm clippings, which reduces the opportunity for operating room experience. The authors developed a hands-on, dimensionally accurate model for aneurysm clipping using patient-derived anatomic data and three-dimensional (3D) printing. Design of the model focused on reproducibility as well as adaptability to new patient geometry. A modular, reproducible, and patient-derived medical simulacrum was developed for medical learners to practice aneurysmal clipping procedures. Various forms of 3D printing were used to develop a geometrically accurate cranium and vascular tree featuring 9 patient-derived aneurysms. 3D printing in conjunction with elastomeric casting was leveraged to achieve a patient-derived brain model with tactile properties not yet available from commercial 3D printing technology. An educational pilot study was performed to gauge simulation efficacy. Through the novel manufacturing process, a patient-derived simulacrum was developed for neurovascular surgical simulation. A follow-up qualitative study suggests potential to enhance current educational programs; assessments support the efficacy of the simulacrum. The proposed aneurysm clipping simulator has the potential to improve learning experiences in surgical environment. 3D printing and elastomeric casting can produce patient-derived models for a dynamic learning environment that add value to surgical training and preparation. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Receiver-based recovery of clipped ofdm signals for papr reduction: A bayesian approach

    KAUST Repository

    Ali, Anum

    2014-01-01

Clipping is one of the simplest peak-to-average power ratio reduction schemes for orthogonal frequency division multiplexing (OFDM). Deliberately clipping the transmission signal degrades system performance, and clipping mitigation is required at the receiver for information restoration. In this paper, we acknowledge the sparse nature of the clipping signal and propose a low-complexity Bayesian clipping estimation scheme. The proposed scheme utilizes a priori information about the sparsity rate and noise variance for enhanced recovery. At the same time, the proposed scheme is robust against inaccurate estimates of the clipping signal statistics. The undistorted phase property of the clipped signal, as well as the clipping likelihood, is utilized for enhanced reconstruction. Furthermore, motivated by the nature of modern OFDM-based communication systems, we extend our clipping reconstruction approach to multiple antenna receivers and multi-user OFDM. We also address the problem of channel estimation from pilots contaminated by the clipping distortion. Numerical findings are presented that depict favorable results for the proposed scheme compared to the established sparse reconstruction schemes.
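The sparsity and phase-preservation properties exploited above can be illustrated with a toy multicarrier signal: only the few samples exceeding the threshold are altered, and each clipped sample keeps its phase. This is a sketch under assumed parameters (64 QPSK subcarriers, threshold 0.25), not the paper's Bayesian estimator:

```python
import cmath
import random

def idft(symbols):
    """Naive inverse DFT producing the time-domain OFDM signal."""
    n = len(symbols)
    return [sum(symbols[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]

def clip(signal, threshold):
    """Amplitude-clip each sample; return the clipped signal and the
    (sparse) clipping distortion a receiver would try to estimate."""
    clipped, distortion = [], []
    for s in signal:
        mag = abs(s)
        # Magnitude is limited but the phase of the sample is preserved
        c = s * threshold / mag if mag > threshold else s
        clipped.append(c)
        distortion.append(s - c)
    return clipped, distortion

random.seed(0)
# Random QPSK symbols on 64 subcarriers
qpsk = [random.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) for _ in range(64)]
x, d = clip(idft(qpsk), threshold=0.25)

# Only the few high-peak samples are clipped, so the distortion is sparse
support = [t for t, v in enumerate(d) if abs(v) > 1e-12]
print(len(support), "of", len(d), "samples clipped")
```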

  4. A Public Database of Immersive VR Videos with Corresponding Ratings of Arousal, Valence, and Correlations between Head Movements and Self Report Measures

    Directory of Open Access Journals (Sweden)

    Benjamin J. Li

    2017-12-01

Full Text Available Virtual reality (VR) has been proposed as a methodological tool to study the basic science of psychology and other fields. One key advantage of VR is that sharing of virtual content can lead to more robust replication and representative sampling. A database of standardized content will help fulfill this vision. There are two objectives to this study. First, we seek to establish and allow public access to a database of immersive VR video clips that can act as a potential resource for studies on emotion induction using virtual reality. Second, given the large sample size of participants needed to obtain reliable valence and arousal ratings for our videos, we were able to explore possible links between the head movements of the observer and the emotions he or she feels while viewing immersive VR. To accomplish our goals, we sourced and tested 73 immersive VR clips, which participants rated on valence and arousal dimensions using self-assessment manikins. We also tracked participants' rotational head movements as they watched the clips, allowing us to correlate head movements and affect. Based on past research, we predicted relationships between the standard deviation of head yaw and valence and arousal ratings. Results showed that the stimuli varied reasonably well along the dimensions of valence and arousal, with a slight underrepresentation of clips that are of negative valence and highly arousing. The standard deviation of yaw positively correlated with valence, while a significant positive relationship was found between head pitch and arousal. The immersive VR clips tested are available online as supplemental material.
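The reported head-movement/affect relationships are plain correlations between per-clip summary statistics. A minimal Pearson-correlation sketch with hypothetical per-clip values (not the study's data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-clip values: SD of head yaw vs. mean valence rating
yaw_sd = [12.0, 25.0, 8.0, 30.0, 18.0]
valence = [3.1, 4.2, 2.8, 4.6, 3.7]
print(round(pearson(yaw_sd, valence), 3))
```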

  5. CLIPS/Ada: An Ada-based tool for building expert systems

    Science.gov (United States)

    White, W. A.

    1990-01-01

CLIPS/Ada is a production system language and a development environment. It is functionally equivalent to the CLIPS tool. CLIPS/Ada was developed in order to provide a means of incorporating expert system technology into projects where the use of the Ada language had been mandated. A secondary purpose was to glean information about the Ada language and its compilers: specifically, whether or not the language and compilers were mature enough to support AI applications. The CLIPS/Ada tool is coded entirely in Ada and is designed to be used by Ada systems that require expert reasoning.

  6. A Comparison of Comprehension Processes in Sign Language Interpreter Videos with or without Captions.

    Science.gov (United States)

    Debevc, Matjaž; Milošević, Danijela; Kožuh, Ines

    2015-01-01

    One important theme in captioning is whether the implementation of captions in individual sign language interpreter videos can positively affect viewers' comprehension when compared with sign language interpreter videos without captions. In our study, an experiment was conducted using four video clips with information about everyday events. Fifty-one deaf and hard of hearing sign language users alternately watched the sign language interpreter videos with, and without, captions. Afterwards, they answered ten questions. The results showed that the presence of captions positively affected their rates of comprehension, which increased by 24% among deaf viewers and 42% among hard of hearing viewers. The most obvious differences in comprehension between watching sign language interpreter videos with and without captions were found for the subjects of hiking and culture, where comprehension was higher when captions were used. The results led to suggestions for the consistent use of captions in sign language interpreter videos in various media.

  7. A scheme for racquet sports video analysis with the combination of audio-visual information

    Science.gov (United States)

    Xing, Liyuan; Ye, Qixiang; Zhang, Weigang; Huang, Qingming; Yu, Hua

    2005-07-01

As a very important category of sports video, racquet sports video, e.g. table tennis, tennis and badminton, has received little attention in past years. Considering the characteristics of this kind of sports video, we propose a new scheme for structure indexing and highlight generation based on the combination of audio and visual information. First, a supervised classification method is employed to detect important audio symbols including impact (ball hit), audience cheers, commentator speech, etc., while an unsupervised algorithm groups video shots into various clusters. Second, by taking advantage of the temporal relationship between audio and visual signals, we label the scene clusters semantically as rally scenes and break scenes. Third, a refinement procedure reduces false rally scenes through further audio analysis. Finally, an excitement model is proposed to rank the detected rally scenes, from which many exciting video clips such as game (match) points can be correctly retrieved. Experiments on two types of representative racquet sports video, table tennis video and tennis video, demonstrate encouraging results.

  8. Role of pre-operative multimedia video information in allaying anxiety related to spinal anaesthesia: A randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Raylene Dias

    2016-01-01

Full Text Available Background and Aims: A high incidence of anxiety has been reported in patients in the operation theatre set up. We developed a short visual clip of 206 s duration depicting the procedure of spinal anaesthesia (SAB) and aimed to compare the effect of this video on perioperative anxiety in patients undergoing procedures under SAB. Methods: A prospective randomised study of 200 patients undergoing surgery under SAB was conducted. Patients were allotted to either the nonvideo group (Group NV - those who were not shown the video) or the video group (Group V - those who were shown the video). Anxiety was assessed using the Spielberger State-Trait Anxiety Inventory during the pre-anaesthetic check-up and before surgery. Haemodynamic parameters such as heart rate (HR) and mean arterial pressure (MAP) were also noted. Student's t-test was used for normally distributed and Mann-Whitney U-test for nonnormally distributed quantitative data. Chi-square test was used for categorical data. Results: Both groups were comparable with respect to baseline anxiety scores and haemodynamic parameters. The nonvideo group showed a significant increase in state anxiety scores before administration of SAB (P < 0.001). Patients in the video group had significantly lower HR and MAP preoperatively (P < 0.001). The prevalence of 'high anxiety' for SAB was 81% in our study, which decreased to 66% in the video group before surgery. Conclusion: Multimedia information in the form of a short audiovisual clip is an effective and feasible method to reduce perioperative anxiety related to SAB.

  9. Teen videos on YouTube: Features and digital vulnerabilities

    OpenAIRE

    Montes-Vozmediano, Manuel; García-Jiménez, Antonio; Menor-Sendra, Juan

    2018-01-01

    As a mechanism for social participation and integration and for the purpose of building their identity, teens make and share videos on platforms such as YouTube of which they are also content consumers. The vulnerability conditions that occur and the risks to which adolescents are exposed, both as creators and consumers of videos, are the focus of this study. The methodology used is content analysis, applied to 400 videos. This research has worked with manifest variables (such as the scene) a...

  10. 21 CFR 882.4190 - Clip forming/cutting instrument.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Clip forming/cutting instrument. 882.4190 Section 882.4190 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES.../cutting instrument. (a) Identification. A clip forming/cutting instrument is a device used by the...

  11. Clinical experience of titanium cerebral aneurysm clips. Evaluation of artifact of CT and MRI

    International Nuclear Information System (INIS)

    Ninomiya, Takashi; Kato, Yoko; Sano, Hirotoshi

    1996-01-01

The titanium aneurysm clips manufactured by the AESCULAP Company are expected to be useful not only for clinical applications but also for reducing artifacts in post-operative CT and MRI. We investigated the behavior of the new Yasargil titanium clips in a 1.5 T MR imager. The new titanium clips produced considerably smaller clip-induced MR and CT artifacts than Phynox and Elgiloy clips. No movement of the titanium clips was seen when they were introduced into the MR imager. Subsequent to these experimental studies, we applied titanium clips to 25 cerebral aneurysms. Post-operative CT, especially helical scanning CT, and MR showed minimal artifacts, leading to the conclusion that the titanium clips are better than the other types of clips for the evaluation of post-operative neuroradiological images. (author)

  12. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    Science.gov (United States)

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

Automatic segmentation of the primary object in a video clip is a challenging problem, as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach to foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation, and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can easily be trapped in local optima. In addition, they are usually time-consuming when analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and appearance model parameters simultaneously in one graph cut. Extensive experimental evaluations validate the superiority of the proposed approach over state-of-the-art methods, in both efficiency and effectiveness.
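The iterative fix-model/fix-segmentation baseline that the abstract criticizes can be sketched in one dimension. This two-means toy on pixel intensities (a hypothetical illustration, not the authors' MRF method) shows the alternation between the two steps:

```python
def iterative_appearance_segmentation(pixels, iters=10):
    """Alternate between (1) fixing the fg/bg appearance models and
    labelling pixels, and (2) fixing the labels and refitting the models.
    A deliberately simplified 1-D stand-in for the iterative scheme."""
    fg, bg = max(pixels), min(pixels)  # crude initialisation
    labels = []
    for _ in range(iters):
        # Step 1: fix the models, optimise the segmentation
        labels = [abs(p - fg) < abs(p - bg) for p in pixels]
        # Step 2: fix the segmentation, refit the models
        fg_px = [p for p, l in zip(pixels, labels) if l]
        bg_px = [p for p, l in zip(pixels, labels) if not l]
        if fg_px:
            fg = sum(fg_px) / len(fg_px)
        if bg_px:
            bg = sum(bg_px) / len(bg_px)
    return labels

# Bright "object" pixels among a dark background
frame = [0.1, 0.15, 0.9, 0.85, 0.12, 0.95, 0.08]
print(iterative_appearance_segmentation(frame))
```

A bad initialisation can leave this loop in a poor local optimum, which is the limitation the one-shot graph-cut formulation is designed to avoid.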

  13. Remission of migraine after clipping of saccular intracranial aneurysms

    DEFF Research Database (Denmark)

    Lebedeva, E R; Busygina, A V; Kolotvinov, V S

    2015-01-01

BACKGROUND: Unruptured saccular intracranial aneurysm (SIA) is associated with an increased prevalence of migraine, but it is unclear whether this is altered by clipping of the aneurysm. The aim of our study was to determine whether the remission rate of migraine and other recurrent headaches was greater in patients with SIA after clipping than in controls. METHODS: We prospectively studied 87 SIA patients with migraine or other recurrent headaches. They were interviewed about headaches in the preceding year before and 1 year after clipping using a validated semi-structured neurologist-conducted interview. The remission rates of migraine and tension-type headache (TTH) in these patients were compared to 92 patients from a headache center. Diagnoses were made according to the ICHD-2. RESULTS: During the year preceding rupture, 51 patients with SIA had migraine. During the year after clipping...

  14. Film clips and narrative text as subjective emotion elicitation techniques.

    Science.gov (United States)

    Zupan, Barbra; Babbage, Duncan R

    2017-01-01

Film clips and narrative texts are useful techniques for eliciting emotion in a laboratory setting but have not been examined side by side using the same methodology. This study examined the self-identification of emotions elicited by film clip and narrative text stimuli to confirm that selected stimuli appropriately target the intended emotions. Seventy participants viewed 30 film clips, and 40 additional participants read 30 narrative texts. Participants identified the emotion experienced (happy, sad, angry, fearful, neutral; six stimuli each). Eighty-five percent of participants self-identified the target emotion for at least two stimuli in all emotion categories of film clips, except angry (only one), and in all categories of narrative text, except fearful (only one). The most effective angry text was correctly identified 74% of the time. Overall, film clips were more effective than narrative texts at eliciting the target emotions, whether in terms of correct emotion identification (angry), intensity ratings (happy, sad), or both (fearful).

  15. Current MitraClip experience, safety and feasibility in the Netherlands

    OpenAIRE

    Rahhab, Z.; Kortlandt, F.A.; Velu, J.F.; Schurer, R.A.J.; Delgado, V.; Tonino, P.; Boven, A.J. van; Branden, B.J.L. Van den; Kraaijeveld, A.O.; Voskuil, M.; Hoorntje, J.; Wely, M.H. van; Houwelingen, K. van; Bleeker, G.B.; Rensing, B.

    2017-01-01

    PURPOSE: Data on MitraClip procedural safety and efficacy in the Netherlands are scarce. We aim to provide an overview of the Dutch MitraClip experience. METHODS: We pooled anonymised demographic and procedural data of 1151 consecutive MitraClip patients, from 13 Dutch hospitals. Data was collected by product specialists in collaboration with local operators. Effect on mitral regurgitation was intra-procedurally assessed by transoesophageal echocardiography. Technical success and device succe...

  16. Using Video Stimuli to Examine Judgments of Nonoffending and Offending Pedophiles: A Brief Communication.

    Science.gov (United States)

    Boardman, Katie A; Bartels, Ross M

    2018-05-19

    In this experimental study, 89 participants were allocated to an offending pedophile, nonoffending pedophile, or control video condition. They then watched two short help-seeking video clips of an older male and a younger male (counterbalanced). Judgments about each male were assessed, as were general attitudes toward pedophiles and sexual offenders. Offending pedophiles were judged as more deserving of punishment than the nonoffending pedophiles and controls. Age of the male was found to have an effect on judgments of dangerousness. Existing attitudes toward pedophiles and sexual offenders did not statistically differ. Limitations and future research ideas are discussed.

  17. Deception Detection in Videos

    OpenAIRE

    Wu, Zhe; Singh, Bharat; Davis, Larry S.; Subrahmanian, V. S.

    2017-01-01

    We present a system for covert automated deception detection in real-life courtroom trial videos. We study the importance of different modalities like vision, audio and text for this task. On the vision side, our system uses classifiers trained on low level video features which predict human micro-expressions. We show that predictions of high-level micro-expressions can be used as features for deception prediction. Surprisingly, IDT (Improved Dense Trajectory) features which have been widely ...

  18. Comparison of Sonography versus Digital Breast Tomosynthesis to Locate Intramammary Marker Clips.

    Science.gov (United States)

    Schulz-Wendtland, R; Dankerl, P; Dilbat, G; Bani, M; Fasching, P A; Heusinger, K; Lux, M P; Loehberg, C R; Jud, S M; Rauh, C; Bayer, C M; Beckmann, M W; Wachter, D L; Uder, M; Meier-Meitinger, M; Brehm, B

    2015-01-01

Introduction: This study aimed to compare the accuracy of sonography versus digital breast tomosynthesis in locating intramammary marker clips placed under ultrasound guidance. Patients and Methods: Fifty patients with suspicion of breast cancer (lesion diameter less than 2 cm [cT1]) had ultrasound-guided core needle biopsy with placement of a marker clip in the center of the tumor. Intramammary marker clips were subsequently located with both sonography and digital breast tomosynthesis. Results: Sonography detected no dislocation of intramammary marker clips in 42 of 50 patients (84 %); dislocation was reported in 8 patients (16 %), with a maximum dislocation of 7 mm along the x-, y- or z-axis. Digital breast tomosynthesis showed accurate placement without dislocation of the intramammary marker clip in 48 patients (96 %); 2 patients (4 %) had a maximum clip dislocation of 3 mm along the x-, y- or z-axis. Conclusion: Digital breast tomosynthesis could improve the accuracy when locating intramammary marker clips compared to sonography and could, in future, be used to complement or even completely replace sonography.

  19. Detection of goal events in soccer videos

    Science.gov (United States)

    Kim, Hyoung-Gook; Roeber, Steffen; Samour, Amjad; Sikora, Thomas

    2005-01-01

In this paper, we present automatic extraction of goal events in soccer videos using audio track features alone, without relying on expensive-to-compute video track features. The extracted goal events can be used for high-level indexing and selective browsing of soccer videos. The detection of soccer video highlights using audio content comprises three steps: 1) extraction of audio features from a video sequence, 2) detection of candidate highlight events based on the extracted features and a Hidden Markov Model (HMM), and 3) goal event selection to finally determine the video intervals to be included in the summary. For this purpose we compared the performance of the well-known Mel-scale Frequency Cepstral Coefficients (MFCC) feature extraction method with the MPEG-7 Audio Spectrum Projection (ASP) feature extraction method based on three different decomposition methods, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-Negative Matrix Factorization (NMF). To evaluate our system we collected five soccer game videos from various sources: in total seven hours of soccer games comprising eight gigabytes of data. One of the five soccer games is used as the training data (e.g., announcers' excited speech, audience ambient speech noise, audience clapping, environmental sounds). Our goal event detection results are encouraging.
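A greatly simplified stand-in for step 2 is to flag audio frames whose short-time energy stands out from the background, the way crowd cheering does after a goal. This toy uses plain energy instead of the MFCC/ASP features and HMM described above, on synthetic audio:

```python
import math

def frame_energies(samples, frame_len):
    """Short-time energy per non-overlapping frame."""
    return [sum(s * s for s in samples[i:i + frame_len]) / frame_len
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

def candidate_events(energies, factor=3.0):
    """Flag frames whose energy exceeds `factor` times the mean --
    a crude stand-in for excited-commentary / cheering detection."""
    mean = sum(energies) / len(energies)
    return [i for i, e in enumerate(energies) if e > factor * mean]

# Synthetic track: quiet ambience with one loud burst (a 'goal' cheer)
audio = [0.05 * math.sin(0.1 * t) for t in range(4000)]
for t in range(2000, 2400):
    audio[t] += 0.8 * math.sin(0.5 * t)

frames = frame_energies(audio, frame_len=400)
print(candidate_events(frames))  # the burst frame stands out
```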

  20. Examining in vivo tympanic membrane mobility using smart phone video-otoscopy and phase-based Eulerian video magnification

    Science.gov (United States)

    Janatka, Mirek; Ramdoo, Krishan S.; Tatla, Taran; Pachtrachai, Krittin; Elson, Daniel S.; Stoyanov, Danail

    2017-03-01

The tympanic membrane (TM) is the bridging element between the pressure waves of sound in air and the ossicular chain. It allows sound to be conducted into the inner ear, achieving the human sense of hearing. Otitis media with effusion (OME, commonly referred to as `glue ear') is a common condition in infants that prevents the vibration of the TM and causes conductive hearing loss, which can stunt early-stage development if undiagnosed. Furthermore, OME is hard to identify in this age group, as infants cannot respond to typical audiometry tests. Tympanometry allows the mobility of the TM to be examined without patient response, but requires expensive apparatus and specialist training. By combining a smartphone capable of 240 frames per second video recording with an otoscopic clip-on accessory, this paper presents a novel application of Eulerian Video Magnification (EVM) to video-otology that could assist in diagnosing OME. We present preliminary results showing a spatio-temporal slice taken from an exaggerated video visualization of the TM being excited in vivo in a healthy ear. Our preliminary results demonstrate the potential of such an approach for diagnosing OME under visual inspection as an alternative to tympanometry, which could be used remotely and hence help diagnosis in a wider population pool.
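The core EVM idea, band-pass each pixel's intensity over time, amplify the band, and add it back, can be sketched on a single pixel's time series. This toy uses boxcar (moving-average) filters and assumed parameters, not the phase-based pyramid method the paper applies:

```python
import math

def moving_average(x, w):
    """Simple boxcar low-pass over a 1-D time series (edges truncated)."""
    return [sum(x[max(0, i - w + 1):i + 1]) / (i - max(0, i - w + 1) + 1)
            for i in range(len(x))]

def magnify(signal, alpha, slow=20, fast=3):
    """1-D sketch of Eulerian magnification: band-pass the temporal
    signal (difference of two low-pass filters), scale by alpha, add back."""
    band = [f - s for f, s in zip(moving_average(signal, fast),
                                  moving_average(signal, slow))]
    return [x + alpha * b for x, b in zip(signal, band)]

# A pixel on the TM: static brightness plus a tiny 30 Hz vibration,
# sampled at the phone's 240 fps
pixel = [100 + 0.2 * math.sin(2 * math.pi * 30 * t / 240) for t in range(240)]
out = magnify(pixel, alpha=20)

swing_in = max(pixel) - min(pixel)
swing_out = max(out) - min(out)
print(round(swing_in, 3), round(swing_out, 3))
```

The amplified series swings far more than the original, which is what makes a sub-visible TM vibration visible in a spatio-temporal slice.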

  1. 76 Computer Assisted Language Learning (CALL) Software ...

    African Journals Online (AJOL)

    Ike Odimegwu

    combination with other factors which may enhance or ameliorate the ... form of computer-based learning which carries two important features: .... To take some commonplace examples, a ... photographs, and even full-motion video clips.

  2. Adding run history to CLIPS

    Science.gov (United States)

    Tuttle, Sharon M.; Eick, Christoph F.

    1991-01-01

    To debug a C Language Integrated Production System (CLIPS) program, certain 'historical' information about a run is needed. It would be convenient for system builders to have the capability to request such information. We will discuss how historical Rete networks can be used for answering questions that help a system builder detect the cause of an error in a CLIPS program. Moreover, the cost of maintaining a historical Rete network is compared with that for a classical Rete network. We will demonstrate that the cost for assertions is only slightly higher for a historical Rete network. The cost for handling retraction could be significantly higher; however, we will show that by using special data structures that rely on hashing, it is also possible to implement retractions efficiently.
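The hash-based bookkeeping described can be sketched as a fact store that timestamps every assertion and retraction, so "was this fact true at time t?" queries become cheap. This is a hypothetical structure, not the paper's historical Rete implementation:

```python
class HistoricalFactStore:
    """Sketch of run-history bookkeeping: each assert/retract is
    timestamped, and a hash map from fact to its intervals makes
    retraction and history queries cheap."""
    def __init__(self):
        self.clock = 0
        self.history = {}   # fact -> list of [assert_time, retract_time|None]

    def assert_fact(self, fact):
        self.clock += 1
        self.history.setdefault(fact, []).append([self.clock, None])

    def retract_fact(self, fact):
        self.clock += 1
        # hashing gives O(1) access to the fact's interval list
        for iv in reversed(self.history.get(fact, [])):
            if iv[1] is None:
                iv[1] = self.clock
                return
        raise KeyError(f"{fact} not currently asserted")

    def was_true_at(self, fact, t):
        return any(start <= t and (end is None or t < end)
                   for start, end in self.history.get(fact, []))

store = HistoricalFactStore()
store.assert_fact(("temperature", "high"))   # time 1
store.retract_fact(("temperature", "high"))  # time 2
store.assert_fact(("valve", "open"))         # time 3
print(store.was_true_at(("temperature", "high"), 1))  # True
print(store.was_true_at(("temperature", "high"), 2))  # False
```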

  3. Subtitled video tutorials, an accessible teaching material

    Directory of Open Access Journals (Sweden)

    Luis Bengochea

    2012-11-01

Full Text Available The use of short audio-visual tutorials constitutes an educational resource that is very attractive for young students, who are widely familiar with this type of format from YouTube clips. Considered as "learning pills", these tutorials are intended to strengthen the understanding of complex concepts that, because of their dynamic nature, cannot be represented through texts or diagrams. However, the inclusion of this type of content in eLearning platforms presents accessibility problems for students with visual or hearing disabilities. This paper describes this problem and shows the way in which a teacher can add captions and subtitles to their videos.
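Adding subtitles programmatically can be as simple as emitting a standard SRT file alongside the video. A minimal sketch with hypothetical cue data:

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(cues):
    """cues: list of (start_s, end_s, text) tuples -> SRT document."""
    blocks = []
    for i, (start, end, text) in enumerate(cues, 1):
        blocks.append(
            f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(blocks)

captions = [
    (0.0, 2.5, "Welcome to this tutorial."),
    (2.5, 6.0, "First, open the project settings."),
]
print(to_srt(captions))
```

Most eLearning platforms and players (and YouTube itself) accept SRT or the closely related WebVTT format for caption tracks.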

  4. A CLIPS expert system for clinical flow cytometry data analysis

    Science.gov (United States)

    Salzman, G. C.; Duque, R. E.; Braylan, R. C.; Stewart, C. C.

    1990-01-01

    An expert system is being developed using CLIPS to assist clinicians in the analysis of multivariate flow cytometry data from cancer patients. Cluster analysis is used to find subpopulations representing various cell types in multiple datasets each consisting of four to five measurements on each of 5000 cells. CLIPS facts are derived from results of the clustering. CLIPS rules are based on the expertise of Drs. Stewart, Duque, and Braylan. The rules incorporate certainty factors based on case histories.
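Certainty factors in CLIPS-style rule systems are commonly combined with the MYCIN rule: evidence from independent rules supporting the same hypothesis is merged incrementally. This is a sketch of that standard combination scheme, not the flow cytometry system's actual rules:

```python
def combine_cf(cf1, cf2):
    """MYCIN-style combination of two certainty factors for the same
    hypothesis; both values are assumed to lie in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two rules each lend moderate support to the same cell-type hypothesis:
# combined confidence grows but stays below 1
print(combine_cf(0.6, 0.5))
```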

  5. Extravasal occlusion of large vessels with titanic clips: efficiency, indications, and contraindications.

    Science.gov (United States)

    Vasilenko, Yu V; Kim, A I; Kotov, S A

    2002-11-01

The mechanism of extravasal occlusion of blood vessels with titanium clips "Atrauclip" and "Ligaclip extra" was studied in order to establish indications and contraindications for their use. Occlusion with clips of both types was ineffective in vessels with a diameter of >7.0 mm. Arteritis or the presence of an intravascular occlusion device in the vessel was also a contraindication for clip occlusion. In all other cases the procedure of occlusion with titanium clips was efficient and atraumatic.

  6. Using web-based video to enhance physical examination skills in medical students.

    Science.gov (United States)

    Orientale, Eugene; Kosowicz, Lynn; Alerte, Anton; Pfeiffer, Carol; Harrington, Karen; Palley, Jane; Brown, Stacey; Sapieha-Yanchak, Teresa

    2008-01-01

Physical examination (PE) skills among U.S. medical students have been shown to be deficient. This study examines the effect of a Web-based physical examination curriculum on first-year medical student PE skills. Web-based video clips, consisting of instruction in 77 elements of the physical examination, were created using Microsoft Windows Movie Maker software. Medical students' PE skills were evaluated by standardized patients before and after implementation of the Internet-based video. Following implementation of this curriculum, there was a higher level of competency (from 87% in 2002-2003 to 91% in 2004-2005), and poor performances on standardized patient PE exams substantially diminished (from a 14%-22% failure rate in 2002-2003 to 4% in 2004-2005). A significant improvement in first-year medical student performance on the adult PE occurred after implementing the Web-based instructional video.

  7. Molecular clips based on propanediurea : synthesis and physical properties

    NARCIS (Netherlands)

    Jansen, Robertus Johannes

    2002-01-01

    This thesis describes the synthesis and physical properties of a series of molecular clips derived from the concave molecule propanediurea. These molecular clips are cavity-containing receptors that can bind a variety of aromatic guests. This binding is a result of hydrogen bonding and pi-pi interactions.

  8. Sex differences in visual attention to sexually explicit videos: a preliminary study.

    Science.gov (United States)

    Tsujimura, Akira; Miyagawa, Yasushi; Takada, Shingo; Matsuoka, Yasuhiro; Takao, Tetsuya; Hirai, Toshiaki; Matsushita, Masateru; Nonomura, Norio; Okuyama, Akihiko

    2009-04-01

    Although men appear to be more interested in sexual stimuli than women, this difference is not completely understood. Eye-tracking technology has been used to investigate visual attention to still sexual images; however, it has not been applied to moving sexual images. To investigate whether sex difference exists in visual attention to sexual videos. Eleven male and 11 female healthy volunteers were studied by our new methodology. The subjects viewed two sexual videos (one depicting sexual intercourse and one not) in which several regions were designated for eye-gaze analysis in each frame. Visual attention was measured across each designated region according to gaze duration. Sex differences, the region attracting the most attention, and visually favored sex were evaluated. In the nonintercourse clip, gaze time for the face and body of the actress was significantly shorter among women than among men. Gaze time for the face and body of the actor and nonhuman regions was significantly longer for women than men. The region attracting the most attention was the face of the actress for both men and women. Men viewed the opposite sex for a significantly longer period than did women, and women viewed their own sex for a significantly longer period than did men. However, gaze times for the clip showing intercourse were not significantly different between sexes. A sex difference existed in visual attention to a sexual video without heterosexual intercourse; men viewed the opposite sex for longer periods than did women, and women viewed the same sex for longer periods than did men. There was no statistically significant sex difference in viewing patterns in a sexual video showing heterosexual intercourse, and we speculate that men and women may have similar visual attention patterns if the sexual stimuli are sufficiently explicit.

  9. Videos and images from 25 years of teaching compressible flow

    Science.gov (United States)

    Settles, Gary

    2008-11-01

    Compressible flow is a very visual topic due to refractive optical flow visualization and the public fascination with high-speed flight. Films, video clips, and many images are available to convey this in the classroom. An overview of this material is given and selected examples are shown, drawn from educational films, the movies, television, etc., and accumulated over 25 years of teaching basic and advanced compressible-flow courses. The impact of copyright protection and the doctrine of fair use is also discussed.

  10. Understanding and Applying Psychology through Use of News Clippings.

    Science.gov (United States)

    Rider, Elizabeth A.

    1992-01-01

    Discusses a student project for psychology courses in which students collect newspaper clippings that illustrate psychological concepts. Explains that the students record the source and write a brief description of the clipping, explaining how it relates to a psychological concept or theory. Includes results of student evaluations of the…

  11. Sympathetic block by metal clips may be a reversible operation

    DEFF Research Database (Denmark)

    Thomsen, Lars L; Mikkelsen, Rasmus T; Derejko, Miroslawa

    2014-01-01

    …the sympathetic chain vary tremendously. Most surgeons transect or resect the sympathetic chain, but application of a metal clip that blocks transmission of nerve impulses in the sympathetic chain is used increasingly worldwide. This approach offers potential reversibility if patients regret surgery, but the question of reversibility remains controversial. Two recent experimental studies found severe histological signs of nerve damage 4-6 weeks after clip removal, but they only used conventional histopathological staining methods. METHODS: Thoracoscopic clipping of the sympathetic trunk was performed in adult… …suggests in theory that application of metal clips to the sympathetic chain is a reversible procedure if only the observation period is prolonged. Further studies with longer periods between application and removal as well as investigations of nerve conduction should be encouraged, because we do not know…

  12. Teen Videos on YouTube: Features and Digital Vulnerabilities

    Science.gov (United States)

    Montes-Vozmediano, Manuel; García-Jiménez, Antonio; Menor-Sendra, Juan

    2018-01-01

    As a mechanism for social participation and integration and for the purpose of building their identity, teens make and share videos on platforms such as YouTube of which they are also content consumers. The vulnerability conditions that occur and the risks to which adolescents are exposed, both as creators and consumers of videos, are the focus of…

  13. Face Recognition and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Swapnil Vitthal Tathe

    2017-07-01

    Full Text Available Advancements in computer vision technology and the availability of video capturing devices such as surveillance cameras have evoked new video processing applications. Research in video face recognition is mostly biased towards law enforcement applications. Applications involve human recognition based on face and iris, human-computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework that is capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score, and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance to changes due to illumination, environmental factors, scale, pose and orientation.
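The pipeline in this record ends with Kalman-filter tracking of the detected face. A self-contained Python sketch of a constant-velocity Kalman tracker for a 2D face centre (the matrices and noise levels are illustrative assumptions, not the paper's values):

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)   # constant-velocity state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)   # the detector observes position only
Q = 0.01 * np.eye(4)                  # process noise (assumed)
R = 4.0 * np.eye(2)                   # measurement noise, pixels^2 (assumed)

x = np.zeros(4)                       # initial state [x, y, vx, vy]
P = 100.0 * np.eye(4)                 # initial uncertainty

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the detector's face-centre measurement z
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(4) - K @ H) @ P

# Track a face moving 2 px/frame to the right with noisy detections.
rng = np.random.default_rng(0)
for t in range(1, 50):
    z = np.array([2.0 * t, 10.0]) + rng.normal(0, 2, 2)
    x, P = kalman_step(x, P, z)
print(x[:2])  # estimated position, close to the true (98, 10)
```

The filter smooths jittery per-frame detections and keeps a velocity estimate, which is what lets a tracker coast through brief detection failures.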

  14. The influence of clipping frequency on reserve carbohydrates and ...

    African Journals Online (AJOL)

    Increasing the frequency of clipping over a 12 week period resulted in decreasing root and stubble weight at the end of the 12 week period. During the regrowth period, following the clipping treatment, the aerial dry matter production was found to increase with the more lenient treatments. Keywords: weights|dry ...

  15. Suture retraction technique to prevent parent vessel obstruction following aneurysm tandem clipping.

    Science.gov (United States)

    Rayan, Tarek; Amin-Hanjani, Sepideh

    2015-08-01

    With large or giant aneurysms, the use of multiple tandem clips can be essential for complete obliteration of the aneurysm. One potential disadvantage, however, is the considerable cumulative weight of these clips, which may lead to kinking of the underlying parent vessels and obstruction of flow. The authors describe a simple technique to address this problem, guided by intraoperative blood flow measurements, in a patient with a ruptured near-giant 2.2 × 1.7-cm middle cerebral artery bifurcation aneurysm that was treated with the tandem clipping technique. A total of 11 clips were applied in a vertical stacked fashion. The cumulative weight of the clips caused kinking of the temporal M2 branch of the bifurcation with reduction of flow. A 4-0 Nurolon suture tie was applied to the hub of one of the clips and was tethered to the dura of the sphenoid ridge by a small mini-clip and reinforced by application of tissue sealant. The patient underwent intraoperative indocyanine green videoangiography as well as catheter angiography, which demonstrated complete aneurysmal obliteration and preservation of vessel branches. Postoperative angiography confirmed patency of the bifurcation vessels with mild vasospasm. The patient had a full recovery with no postoperative complications and was neurologically intact at her 6-month follow-up. The suture retraction technique allows a simple solution to parent vessel obstruction following aneurysm tandem clipping, in conjunction with the essential guidance provided by intraoperative flow measurements.

  16. A systematic review of discomfort due to toe or ear clipping in laboratory rodents

    Science.gov (United States)

    Geessink, Florentine J.; Brouwer, Michelle A. E.; Tillema, Alice; Ritskes-Hoitinga, Merel

    2017-01-01

    Toe clipping and ear clipping (also ear notching or ear punching) are frequently used methods for individual identification of laboratory rodents. These procedures potentially cause severe discomfort, which can reduce animal welfare and distort experimental results. However, no systematic summary of the evidence on this topic currently exists. We conducted a systematic review of the evidence for discomfort due to toe or ear clipping in rodents. The review methodology was pre-specified in a registered review protocol. The population, intervention, control, outcome (PICO) question was: In rodents, what is the effect of toe clipping or ear clipping, compared with no clipping or sham clipping, on welfare-related outcomes? Through a systematic search in PubMed, Embase, Web of Science and grey literature, we identified seven studies on the effect of ear clipping on animal welfare, and five such studies on toe clipping. Studies were included in the review if they contained original data from an in vivo experiment in rodents, assessing the effect of toe clipping or ear clipping on a welfare-related outcome. Case studies and studies applying unsuitable co-interventions were excluded. Study quality was appraised using an extended version of SYstematic Review Centre for Laboratory animal Experimentation (SYRCLE)’s risk of bias tool for animal studies. Study characteristics and outcome measures were highly heterogeneous, and there was an unclear or high risk of bias in all studies. We therefore present a narrative synthesis of the evidence identified. None of the studies reported a sample size calculation. Out of over 60 different outcomes, we found evidence of discomfort due to ear clipping in the form of increased respiratory volume, vocalization and blood pressure. For toe clipping, increased vocalization and decreased motor activity in pups were found, as well as long-term effects in the form of reduced grip strength and swimming ability in adults. In conclusion, there

  17. Utility of Indocyanine Green Video Angiography for Sylvian Fissure Dissection in Subarachnoid Hemorrhage Patients - Sylvian ICG Technique.

    Science.gov (United States)

    Toi, Hiroyuki; Matsushita, Nobuhisa; Ogawa, Yukari; Kinoshita, Keita; Satoh, Kohei; Takai, Hiroki; Hirai, Satoshi; Hara, Keijiro; Matsubara, Shunji; Uno, Masaaki

    2018-02-15

    Indocyanine green (ICG) emits fluorescence in the far-red domain under light excitation. ICG video angiography (ICG-VA) has been established as a useful method to evaluate blood flow in the operative field. We report the usefulness of ICG-VA for Sylvian fissure dissection in patients with subarachnoid hemorrhage (SAH). Subjects comprised 7 patients who underwent ICG-VA before opening the Sylvian fissure during neck clipping for ruptured cerebral aneurysm. We observed contrasted Sylvian veins before opening the Sylvian fissure using surgical microscopes. This procedure was termed "Sylvian ICG". We observed ICG fluorescence quickly in all cases. Sylvian veins that appeared unclear in the standard microscopic operative field covered with subarachnoid hemorrhage were extremely clearly depicted. These Sylvian ICG findings were helpful in identifying entry points and the dissecting course of the Sylvian fissure. At the time of clipping, no residual fluorescence from Sylvian ICG was present, and aneurysm clipping was not impeded. Sylvian ICG for SAH patients is a novel technique to facilitate dissection of the Sylvian fissure. We believe that this technique will contribute to improved safety of clipping surgery for ruptured aneurysms.

  18. Peak reduction and clipping mitigation in OFDM by augmented compressive sensing

    KAUST Repository

    Al-Safadi, Ebrahim B.

    2012-07-01

    This work establishes the design, analysis, and fine-tuning of a peak-to-average-power-ratio (PAPR) reducing system, based on compressed sensing (CS) at the receiver of a peak-reducing sparse clipper applied to an orthogonal frequency-division multiplexing (OFDM) signal at the transmitter. By exploiting the sparsity of clipping events in the time domain relative to a predefined clipping threshold, the method depends on partially observing the frequency content of the clipping distortion over reserved tones to estimate the remaining distortion. The approach has the advantage of eliminating the computational complexity at the transmitter and reducing the overall complexity of the system compared to previous methods which incorporate pilots to cancel nonlinear distortion. Data-based augmented CS methods are also proposed that draw upon available phase and support information from data tones for enhanced estimation and cancelation of clipping noise. This enables signal recovery under more severe clipping scenarios and hence lower PAPR can be achieved compared to conventional CS techniques. © 2012 IEEE.
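The key property exploited above, that clipping distortion is sparse in the time domain relative to the clipping threshold, is easy to demonstrate. A short Python sketch (the symbol size, constellation and threshold are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=N)  # QPSK tones
x = np.fft.ifft(X) * np.sqrt(N)      # time-domain OFDM symbol

thr = 2.0                            # amplitude clipping threshold (assumed)
mag = np.abs(x)
xc = np.where(mag > thr, thr * x / mag, x)   # soft amplitude clipping
d = xc - x                                   # clipping distortion

sparsity = np.count_nonzero(np.abs(d) > 1e-12) / N
print(f"clipped samples: {sparsity:.1%} of the symbol")
```

Only the small fraction of samples exceeding the threshold carry any distortion, so estimating the distortion from a few reserved tones is an underdetermined-but-sparse problem, exactly the setting compressed sensing addresses.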

  19. Peak reduction and clipping mitigation in OFDM by augmented compressive sensing

    KAUST Repository

    Al-Safadi, Ebrahim B.; Al-Naffouri, Tareq Y.

    2012-01-01

    This work establishes the design, analysis, and fine-tuning of a peak-to-average-power-ratio (PAPR) reducing system, based on compressed sensing (CS) at the receiver of a peak-reducing sparse clipper applied to an orthogonal frequency-division multiplexing (OFDM) signal at the transmitter. By exploiting the sparsity of clipping events in the time domain relative to a predefined clipping threshold, the method depends on partially observing the frequency content of the clipping distortion over reserved tones to estimate the remaining distortion. The approach has the advantage of eliminating the computational complexity at the transmitter and reducing the overall complexity of the system compared to previous methods which incorporate pilots to cancel nonlinear distortion. Data-based augmented CS methods are also proposed that draw upon available phase and support information from data tones for enhanced estimation and cancelation of clipping noise. This enables signal recovery under more severe clipping scenarios and hence lower PAPR can be achieved compared to conventional CS techniques. © 2012 IEEE.

  20. Catheter Entrapment During Posterior Mitral Leaflet Pushing Maneuver for MitraClip Implantation.

    Science.gov (United States)

    Castrodeza, Javier; Amat-Santos, Ignacio J; Tobar, Javier; Varela-Falcón, Luis H

    2016-06-01

    MitraClip (Abbott Vascular) therapy has been reported to be an effective procedure for mitral regurgitation, especially in high-risk patients. Recently, the novel pushing maneuver technique has been described for approaching restricted and short posterior leaflets with a pigtail catheter in order to facilitate grasping of the clip. However, complications or unexpected situations may occur. We report the case of an 84-year-old patient who underwent MitraClip implantation wherein the pushing maneuver was complicated by the clip accidentally gripping the pigtail catheter along with the two leaflets.

  1. Preoperative simulation for the planning of microsurgical clipping of intracranial aneurysms.

    Science.gov (United States)

    Marinho, Paulo; Vermandel, Maximilien; Bourgeois, Philippe; Lejeune, Jean-Paul; Mordon, Serge; Thines, Laurent

    2014-12-01

    The safety and success of intracranial aneurysm (IA) surgery could be improved through the dedicated application of simulation covering the procedure from the 3-dimensional (3D) description of the surgical scene to the visual representation of the clip application. We aimed in this study to validate the technical feasibility and clinical relevance of such a protocol. All patients preoperatively underwent 3D magnetic resonance imaging and 3D computed tomography angiography to build 3D reconstructions of the brain, cerebral arteries, and surrounding cranial bone. These 3D models were segmented and merged using Osirix, a DICOM image processing application. This provided the surgical scene that was subsequently imported into Blender, a modeling platform for 3D animation. Digitized clips and appliers could then be manipulated in the virtual operative environment, allowing the visual simulation of clipping. This simulation protocol was assessed in a series of 10 IAs by 2 neurosurgeons. The protocol was feasible in all patients. The visual similarity between the surgical scene and the operative view was excellent in 100% of the cases, and the identification of the vascular structures was accurate in 90% of the cases. The neurosurgeons found the simulation helpful for planning the surgical approach (ie, the bone flap, cisternal opening, and arterial tree exposure) in 100% of the cases. The correct number of final clip(s) needed was predicted from the simulation in 90% of the cases. The preoperatively expected characteristics of the optimal clip(s) (ie, their number, shape, size, and orientation) were validated during surgery in 80% of the cases. This study confirmed that visual simulation of IA clipping based on the processing of high-resolution 3D imaging can be effective. This is a new and important step toward the development of a more sophisticated integrated simulation platform dedicated to cerebrovascular surgery.

  2. Fast Aerial Video Stitching

    Directory of Open Access Journals (Sweden)

    Jing Li

    2014-10-01

    Full Text Available The highly efficient and robust stitching of aerial video captured by unmanned aerial vehicles (UAVs) is a challenging problem in the field of robot vision. Existing commercial image stitching systems have seen success with offline stitching tasks, but they cannot guarantee high-speed performance when dealing with online aerial video sequences. In this paper, we present a novel system with the unique ability to stitch high-frame-rate aerial video at a speed of 150 frames per second (FPS). In addition, rather than using a high-speed vision platform such as an FPGA or CUDA, our system runs on a normal personal computer. To achieve this, after careful comparison of the existing invariant features, we choose the FAST corner and a binary descriptor for efficient feature extraction and representation, and present a spatially and temporally coherent filter to fuse the UAV motion information into the feature matching. The proposed filter can remove the majority of feature correspondence outliers and significantly increase the speed of robust feature matching, by up to 20 times. To achieve a balance between robustness and efficiency, a dynamic key-frame-based stitching framework is used to reduce the accumulation of errors. Extensive experiments on challenging UAV datasets demonstrate that our approach can break through the speed limitation and generate an accurate stitching image for aerial video stitching tasks.
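The coherence filter described above can be sketched in a simplified, spatial-only form: under smooth UAV motion, inlier correspondences share a similar displacement, so matches far from a robust (median) displacement estimate are rejected. A hypothetical Python stand-in for that idea, not the authors' implementation:

```python
import numpy as np

def coherence_filter(pts_a, pts_b, tol=5.0):
    """Keep matches whose displacement agrees with the dominant motion."""
    disp = pts_b - pts_a                      # per-match displacement
    med = np.median(disp, axis=0)             # robust global motion estimate
    dev = np.linalg.norm(disp - med, axis=1)  # deviation from that motion
    return dev < tol                          # boolean inlier mask

# Synthetic frame pair: a pure translation plus 10 gross mismatches.
rng = np.random.default_rng(2)
true_shift = np.array([12.0, -3.0])
pts_a = rng.uniform(0, 640, size=(100, 2))
pts_b = pts_a + true_shift + rng.normal(0, 0.5, size=(100, 2))
pts_b[:10] = rng.uniform(0, 640, size=(10, 2))  # outliers

mask = coherence_filter(pts_a, pts_b)
print(mask[:10].sum(), mask[10:].sum())  # outliers rejected, inliers kept
```

Because the filter is a vectorized median-and-threshold pass rather than an iterative RANSAC loop, it is cheap enough to run before robust matching, which is where the reported speedup comes from.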

  3. Bayesian Recovery of Clipped OFDM Signals: A Receiver-based Approach

    KAUST Repository

    Al-Rabah, Abdullatif R.

    2013-01-01

    …recovery algorithm to reconstruct the clipping signal at the receiver side by measuring part of the subcarriers, iii) perform well in the absence of statistical information about the signal (e.g. clipping level) and the noise (e.g. noise variance)…

  4. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak-to-average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and that new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
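The general flavour of recovering clipped samples by solving FFT-derived linear equations can be sketched as follows. This illustrative Python example assumes reserved (data-free) tones and noiseless detection of clip positions, so it is a simplified stand-in, not the authors' exact Equation-Method:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 128
reserved = np.sort(rng.choice(N, size=32, replace=False))  # data-free tones
data = np.setdiff1d(np.arange(N), reserved)

X = np.zeros(N, complex)
X[data] = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=data.size)
x = np.fft.ifft(X)

thr = 1.7 * np.sqrt(np.mean(np.abs(x) ** 2))   # clip at 1.7x the rms level
mag = np.abs(x)
xc = np.where(mag > thr, thr * x / mag, x)     # transmitted, clipped signal

# Receiver (noiseless): clipped samples sit exactly at the threshold ...
S = np.flatnonzero(np.isclose(np.abs(xc), thr))
# ... and reserved tones observe only the clipping distortion.
D_obs = np.fft.fft(xc)[reserved]

# One linear equation per reserved tone, one unknown per clipped sample.
F = np.exp(-2j * np.pi * np.outer(reserved, S) / N)
d_hat, *_ = np.linalg.lstsq(F, D_obs, rcond=None)

x_rec = xc.copy()
x_rec[S] -= d_hat                              # undo the clipping
print(np.max(np.abs(x_rec - x)))               # reconstruction error, near 0
```

As long as there are at least as many observation tones as clipping events, the system is overdetermined and the clipped peak amplitudes are recovered exactly in the noiseless case.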

  5. Physics and Video Analysis

    Science.gov (United States)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.
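The core of quantitative video analysis is fitting tracked positions against time to a kinematic model. In this short Python sketch, synthetic frame-by-frame positions of a falling ball stand in for data tracked from a video, and a parabolic fit recovers g (all numbers are made up for illustration):

```python
import numpy as np

fps = 30.0
t = np.arange(0, 1.0, 1 / fps)                  # timestamp of each frame
y_true = 2.0 + 0.5 * t - 0.5 * 9.81 * t ** 2    # y0 + v0*t - (g/2)*t^2, metres
rng = np.random.default_rng(4)
y_meas = y_true + rng.normal(0, 0.005, t.size)  # ~5 mm tracking noise

a, b, c = np.polyfit(t, y_meas, 2)              # fit y = a*t^2 + b*t + c
g_est = -2 * a                                  # so g = -2a
print(f"g ≈ {g_est:.2f} m/s^2")
```

The same fit applied to positions clicked out of a movie clip is a quick plausibility check: a stunt whose apparent g is far from 9.8 m/s² was probably faked or wire-assisted.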

  6. Effects of Japanese beetle (Coleoptera: Scarabaeidae) and silk clipping in field corn.

    Science.gov (United States)

    Steckel, Sandy; Stewart, S D; Tindall, K V

    2013-10-01

    Japanese beetle (Popillia japonica Newman) is an emerging silk-feeding insect found in fields in the lower Corn Belt and Midsouthern United States. Studies were conducted in 2010 and 2011 to evaluate how silk clipping in corn affects pollination and yield parameters. Manually clipping silks once daily had modest effects on yield parameters. Sustained clipping by either manually clipping silks three times per day or by caging Japanese beetles onto ears affected total kernel weight if it occurred during early silking (R1 growth stage). Manually clipping silks three times per day for the first 5 d of silking affected the number of kernels per ear, total kernel weight, and the weight of individual kernels. Caged beetles fed on silks and, depending on the number of beetles caged per ear, reduced the number of kernels per ear. Caging eight beetles per ear significantly reduced total kernel weight compared with noninfested ears. Drought stress before anthesis appeared to magnify the impact of silk clipping by Japanese beetles. There was evidence of some compensation for reduced pollination by increasing the size of pollinated kernels within the ear. Our results showed that it requires sustained silk clipping during the first week of silking to have substantial impacts on pollination and yield parameters, at least under good growing conditions. Some states recommend treating for Japanese beetle when three Japanese beetles per ear are found, silks are clipped to < 13 mm, and pollination is < 50% complete, and that recommendation appears to be adequate.

  7. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AVS2…

  8. Predicting personal preferences in subjective video quality assessment

    DEFF Research Database (Denmark)

    Korhonen, Jari

    2017-01-01

    In this paper, we study the problem of predicting the visual quality of a specific test sample (e.g. a video clip) experienced by a specific user, based on the ratings by other users for the same sample and by the same user for other samples. A simple linear model and algorithm is presented, where the characteristics of each test sample are represented by a set of parameters, and the individual preferences are represented by weights for the parameters. According to the validation experiment performed on public visual quality databases annotated with raw individual scores, the proposed model can predict…
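The linear model described above can be sketched with synthetic data: each clip gets a parameter vector, each user a weight vector, and a predicted rating is their inner product. The dimensions, noise level and train/test split below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(5)
n_clips, n_feats = 40, 3
theta = rng.uniform(0, 1, (n_clips, n_feats))   # per-clip quality parameters
w_true = np.array([3.0, 1.0, 0.5])              # one user's hidden weights
ratings = theta @ w_true + rng.normal(0, 0.1, n_clips)  # noisy opinions

# Fit the user's weights from 30 rated clips, predict the other 10.
train, test = slice(0, 30), slice(30, None)
w_hat, *_ = np.linalg.lstsq(theta[train], ratings[train], rcond=None)
pred = theta[test] @ w_hat
err = np.mean(np.abs(pred - ratings[test]))
print(f"mean prediction error: {err:.3f}")
```

Because the model is linear in the weights, a handful of ratings per user suffices to personalize predictions, which is the practical appeal of this formulation over fitting a full per-user model.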

  9. Hemodynamic response during aneurysm clipping surgery among experienced neurosurgeons.

    Science.gov (United States)

    Bunevicius, Adomas; Bilskiene, Diana; Macas, Andrius; Tamasauskas, Arimantas

    2016-02-01

    Neurosurgery is a challenging field associated with high levels of mental stress. The goal of this study was to investigate the hemodynamic response of experienced neurosurgeons during aneurysm clipping surgery and to evaluate whether neurosurgeons' hemodynamic responses are associated with patients' clinical statuses. Four vascular neurosurgeons (all male; mean age 51 ± 10 years; post-residency experience ≥7 years) were studied during 42 aneurysm clipping procedures. Blood pressure (BP) and heart rate (HR) were assessed at rest and during seven phases of surgery: before the skin incision, after craniotomy, after dural opening, after aneurysm neck dissection, after aneurysm clipping, after dural closure and after skin closure. HR and BP were significantly greater during surgery relative to the rest situation (p ≤ 0.03). There was a statistically significant increase in neurosurgeons' HR across the phases of surgery (F[6, 41] = 10.88, p < …). Regardless of neurosurgeon experience, the difference in BP as a function of aneurysm rupture was not significant (p > 0.08). Aneurysm location, intraoperative aneurysm rupture, admission WFNS score, admission Glasgow Coma Scale scores and Fisher grade were not associated with neurosurgeons' intraoperative HR and BP (all p > 0.07). Aneurysm clipping surgery is associated with significant hemodynamic system activation among experienced neurosurgeons. The greatest HR and BP were after aneurysm neck dissection and clipping. Aneurysm location and patient clinical status were not associated with intraoperative changes of neurosurgeons' HR and BP.

  10. An anthropomorphic phantom study of visualisation of surgical clips for partial breast irradiation (PBI) setup verification

    International Nuclear Information System (INIS)

    Thomas, Carys W.; Nichol, Alan M.; Park, Julie E.; Hui, Jason F.; Giddings, Alison A.; Grahame, Sheri; Otto, Karl

    2009-01-01

    Surgical clips were investigated for partial breast image-guided radiotherapy (IGRT). Small titanium clips were insufficiently well visualised. Medium tantalum clips were best for megavoltage IGRT and small tantalum clips were best for floor-mounted kilovoltage IGRT (ExacTrac™). Both small tantalum and medium titanium clips were suitable for isocentric kilovoltage IGRT.

  11. An anthropomorphic phantom study of visualisation of surgical clips for partial breast irradiation (PBI) setup verification.

    Science.gov (United States)

    Thomas, Carys W; Nichol, Alan M; Park, Julie E; Hui, Jason F; Giddings, Alison A; Grahame, Sheri; Otto, Karl

    2009-01-01

    Surgical clips were investigated for partial breast image-guided radiotherapy (IGRT). Small titanium clips were insufficiently well visualised. Medium tantalum clips were best for megavoltage IGRT and small tantalum clips were best for floor mounted kilovoltage IGRT (ExacTrac). Both small tantalum and medium titanium clips were suitable for isocentric kilovoltage IGRT.

  12. Como o clipping pode auxiliar o dermatologista [How nail clipping can help the dermatologist]

    Directory of Open Access Journals (Sweden)

    José Fillus Neto

    2009-04-01

    Full Text Available Nail disorders are very frequent complaints in dermatologic practice. Onychomycoses account for about 50% of nail diseases, hence the importance of establishing the correct diagnosis before starting treatment. In this article we describe the usefulness of an exam that is easy for the clinician to perform, painless, inexpensive and sensitive: the histopathological analysis of the distal nail keratin, now widely known by the term clipping.

  13. Action recognition in depth video from RGB perspective: A knowledge transfer manner

    Science.gov (United States)

    Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen

    2018-03-01

    Using different video modalities for human action recognition has become a highly promising trend in video analysis. In this paper, we propose a method for human action recognition that transfers from RGB video to depth video using domain adaptation, where features learned from RGB videos are used for action recognition in depth videos. More specifically, we take three steps to solve this problem. First, unlike an image, a video is more complex because it carries both spatial and temporal information; to better encode this information, the dynamic image method is used to represent each RGB or depth video as a single image, after which most image feature extraction methods can be applied to video. Second, since a video can be represented as an image, a standard CNN model can be used for training and testing on videos; the CNN model can also be used for feature extraction, given its powerful representational ability. Third, as RGB videos and depth videos belong to two different domains, domain adaptation is applied to make the two feature domains more similar, so that features learned from the RGB video model can be used directly for depth video classification. We evaluate the proposed method on a complex RGB-D action dataset (NTU RGB-D), and our method achieves an accuracy improvement of more than 2% using domain adaptation from RGB to depth action recognition.
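The "dynamic image" step described above can be sketched with approximate rank pooling, which weights frame t of T (1-based) by alpha_t = 2t - T - 1, so late frames count positively and early frames negatively, encoding motion direction in one image a 2D CNN can consume. A toy Python example (the 8-frame moving square is made up for illustration; this is the standard approximation, not the paper's code):

```python
import numpy as np

def dynamic_image(frames: np.ndarray) -> np.ndarray:
    """Collapse (T, H, W[, C]) frames into one image via approximate rank pooling."""
    T = frames.shape[0]
    alpha = 2.0 * np.arange(1, T + 1) - T - 1   # alpha_t = 2t - T - 1
    return np.tensordot(alpha, frames.astype(float), axes=(0, 0))

# A toy 8-frame "video" of a bright square moving right:
video = np.zeros((8, 16, 16))
for t in range(8):
    video[t, 6:10, t:t + 4] = 1.0

di = dynamic_image(video)
print(di.shape)  # (16, 16); motion direction is encoded in the sign pattern
```

Pixels the square visits early come out negative and pixels it visits late come out positive, so the single image retains the temporal ordering that a plain frame average would destroy.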

  14. Definition of postlumpectomy tumor bed for radiotherapy boost field planning: CT versus surgical clips

    International Nuclear Information System (INIS)

    Goldberg, Hadassah; Prosnitz, Robert G.; Olson, John A.; Marks, Lawrence B.

    2005-01-01

    Purpose: To compare the location and extent of the tumor bed as defined by surgical clips and computed tomography (CT) scans, after lumpectomy, for electron boost planning as part of breast radiotherapy. Methods and Materials: Planning CT images of 31 operated breasts in 30 patients who underwent lumpectomy were reviewed. One or more clips were placed in the lumpectomy cavity. Serial CT images were used to measure the depth and the transverse and longitudinal dimensions. The area and geometric center of the tumor bed were defined by the clips and by CT. Results: The CT and clip measurements were identical for the maximal tumor depth in 27 of 30 patients. The CT bed extended beyond the clips by 0-7 mm medially in the transverse/longitudinal extent (multiclip patients). The median distance between the geometric centers of the tumor bed in the coronal plane was larger for patients with a single clip than for those with multiple clips (p < …). The CT bed was more readily visible in patients with a shorter interval between surgery and radiotherapy. Conclusion: The maximal depth of the tumor bed was similar using the two methods. The extent and centers of the clip- and CT-determined beds differed significantly. This may indicate an underestimation of the tumor bed as defined by clips only and justifies integration of CT information in boost field planning.

  15. The Effect of Typographical Features of Subtitles on Nonnative English Viewers’ Retention and Recall of Lyrics in English Music Videos

    Directory of Open Access Journals (Sweden)

    Farshid Tayari Ashtiani

    2017-10-01

    Full Text Available The goal of this study was to test the effect of typographical features of subtitles, including size, color, and position, on nonnative English viewers’ retention and recall of lyrics in music videos. To do so, the researcher played a music video with simple subtitles for the participants at the beginning of their classes and administered a 31-blank cloze test on the lyrics at the end of the classes. In the second test, the control group went through the same procedure, but the experimental group watched a customized subtitled version of the music video. The results demonstrated no significant difference between the two groups in the first test, but in the second the scores of the experimental group increased remarkably, demonstrating better retention and recall. This study has implications for English language teachers and material developers: customized bimodal subtitles can serve as a mnemonic tool for better comprehension, retention, and recall of aural content in videos within a Computer Assisted Language Teaching approach.

  16. Social trait judgment and affect recognition from static faces and video vignettes in schizophrenia.

    Science.gov (United States)

    McIntosh, Lindsey G; Park, Sohee

    2014-09-01

    Social impairment is a core feature of schizophrenia, present from the pre-morbid stage and predictive of outcome, but the etiology of this deficit remains poorly understood. Successful and adaptive social interactions depend on one's ability to make rapid and accurate judgments about others in real time. Our surprising ability to form accurate first impressions from brief exposures, known as "thin slices" of behavior, has been studied extensively in healthy participants. We sought to examine affect and social trait judgment from thin slices of static or video stimuli in order to investigate the ability of individuals with schizophrenia to form reliable social impressions of others. Twenty-one individuals with schizophrenia (SZ) and 20 matched healthy participants (HC) were asked to identify emotions and social traits for actors in standardized face stimuli as well as brief video clips. Sound was removed from the videos to eliminate verbal cues. Clinical symptoms in SZ and delusional ideation in both groups were measured. Results showed a general impairment in affect recognition for both types of stimuli in SZ. However, the two groups did not differ in judgments of trustworthiness, approachability, attractiveness, and intelligence. Interestingly, in SZ, the severity of positive symptoms was correlated with higher ratings of attractiveness, trustworthiness, and approachability. Finally, increased delusional ideation in SZ was associated with a tendency to rate others as more trustworthy, while the opposite was true for HC. These findings suggest that complex social judgments in SZ are affected by symptomatology. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Virtual Cerebral Aneurysm Clipping with Real-Time Haptic Force Feedback in Neurosurgical Education.

    Science.gov (United States)

    Gmeiner, Matthias; Dirnberger, Johannes; Fenz, Wolfgang; Gollwitzer, Maria; Wurm, Gabriele; Trenkler, Johannes; Gruber, Andreas

    2018-04-01

    Realistic, safe, and efficient modalities for simulation-based training are highly warranted to enhance the quality of surgical education, and they should be incorporated in resident training. The aim of this study was to develop a patient-specific virtual cerebral aneurysm-clipping simulator with haptic force feedback and real-time deformation of the aneurysm and vessels. A prototype simulator was developed from 2012 to 2016. Evaluation of virtual clipping by blood flow simulation was integrated in this software, and the prototype was evaluated by 18 neurosurgeons. In 4 patients with different medial cerebral artery aneurysms, virtual clipping was performed after real-life surgery, and surgical results were compared regarding clip application, surgical trajectory, and blood flow. After head positioning and craniotomy, bimanual virtual aneurysm clipping with an original forceps was performed. Blood flow simulation demonstrated residual aneurysm filling or branch stenosis. The simulator improved anatomic understanding for 89% of neurosurgeons. Simulation of head positioning and craniotomy was considered realistic by 89% and 94% of users, respectively. Most participants agreed that this simulator should be integrated into neurosurgical education (94%). Our illustrative cases demonstrated that virtual aneurysm surgery was possible using the same trajectory as in real-life cases. Both virtual clipping and blood flow simulation were realistic in broad-based but not calcified aneurysms. Virtual clipping of a calcified aneurysm could be performed using the same surgical trajectory, but not the same clip type. We have successfully developed a virtual aneurysm-clipping simulator. Next, we will prospectively evaluate this device for surgical procedure planning and education. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Training model for cerebral aneurysm clipping

    Directory of Open Access Journals (Sweden)

    Hiroshi Tenjin, M.D., Ph.D.

    2017-12-01

    Full Text Available Clipping of cerebral aneurysms is still an important skill in neurosurgery. We have made a training model for the clipping of cerebral aneurysms. The concepts behind the model were: (1) a training model for beginners; (2) three-dimensional manipulation under an operating microscope; and (3) an aneurysm model perfused with simulated blood, so that premature rupture can occur. The correct spatial relationship between the tissues and the softness of the brain and vessels are characteristic of the model. The skull, brain, arteries, and veins were made using a 3D printer with data from DICOM images. The brain and vessels were made from polyvinyl alcohol (PVA). One training course has been held, and the model was useful for training young neurosurgeons in cerebral aneurysm surgery.

  19. Successful Removal of Football Helmet Face-Mask Clips After 1 Season of Use

    Science.gov (United States)

    Scibek, Jason S.; Gatti, Joseph M.; McKenzie, Jennifer I.

    2012-01-01

    Context Whereas many researchers have assessed the ability to remove loop straps in traditional face-mask attachment systems after at least 1 season of use, research in which the effectiveness of the Riddell Quick Release (QR) Face Guard Attachment System clip after 1 season has been assessed is limited. Objective To examine the success rate of removing the QR clips after 1 season of use at the Football Championship Subdivision level. We hypothesized that 1 season of use would negatively affect the removal rate of the QR clip but repeated clip-removal trials would improve the removal rate. Design Retrospective, quasi-experimental design. Setting Controlled laboratory study. Patients or Other Participants Sixty-three football helmets from a National Collegiate Athletic Association Division I university located in western Pennsylvania used during the 2008 season were tested. Intervention(s) Three certified athletic trainers (2 men, 1 woman; age = 31.3 ± 3.06 years, time certified = 9.42 ± 2.65 years) attempted to remove the QR clips from each helmet with the tool provided by the manufacturer. Helmets then were reassembled to allow each athletic trainer to attempt clip removal. Main Outcome Measure(s) The dependent variables were total left clips removed (TCR-L), total right clips removed (TCR-R), and total clips removed (TCR). Success rate of clip removal (SRCR) also was assessed. Results Percentages for TCR-L, TCR-R, and TCR were 100% (189 of 189), 96.30% (182 of 189), and 98.15% (371 of 378), respectively. A paired-samples t test revealed a difference between TCR-R and TCR-L (t188 = −2.689, P = .008, μd = 0.037, 95% confidence interval [CI] = −0.064, −0.010). The percentage for SRCR was 96.30% (n = 182), whereas SRCR percentages for trials 1, 2, and 3 were 95.24% (n = 60), 98.41% (n = 62), and 95.24% (n = 60), respectively, and did not represent a difference (F2,186 = 0.588, P = .56, 95% CI = 0.94, 0.99). Conclusions Our results indicated favorable and

  20. A Depth Video-based Human Detection and Activity Recognition using Multi-features and Embedded Hidden Markov Models for Health Care Monitoring Systems

    Directory of Open Access Journals (Sweden)

    Ahmad Jalal

    2017-08-01

    Full Text Available The increasing number of elderly people living independently calls for special care in the form of healthcare monitoring systems. Recent advancements in depth video technology have made human activity recognition (HAR) realizable for elderly healthcare applications. In this paper, a depth video-based method for HAR is presented using robust multi-features and embedded Hidden Markov Models (HMMs) to recognize the daily life activities of elderly people living alone in indoor environments such as smart homes. In the proposed HAR framework, depth maps are first analyzed by a temporal motion identification method to segment human silhouettes from the noisy background, and the depth silhouette area is computed for each activity to track human movements in a scene. Several representative features, including invariant features, multi-view differentiation features, and spatiotemporal body-joint features, are fused together to capture gradient orientation change, intensity differentiation, temporal variation, and local motion of specific body parts. These features are then processed by the dynamics of their respective class and learned, modeled, trained, and recognized with a class-specific embedded HMM with active feature values. Furthermore, we construct a new online human activity dataset with a depth sensor to evaluate the proposed features. Our experiments on three depth datasets demonstrate that the proposed multi-features are efficient and robust compared with state-of-the-art features for human action and activity recognition.
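
    The per-class HMM recognition scheme described above can be sketched as one model per activity, with a quantized feature sequence assigned to the class whose HMM yields the highest likelihood. Below is a minimal sketch using the standard forward algorithm over discrete observations; the toy parameters in the test are hypothetical illustrations, not the paper's trained models:

```python
# One HMM per activity class; a quantized feature sequence is assigned to
# the class whose model gives the highest sequence likelihood.

def forward_likelihood(obs, pi, A, B):
    """obs: list of symbol indices; pi: initial state probabilities;
    A: state transition matrix; B: emission matrix (states x symbols)."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]  # initialization
    for o in obs[1:]:                                 # induction step
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)                                 # P(obs | model)

def classify(obs, models):
    """models: dict activity_name -> (pi, A, B); returns best class."""
    return max(models, key=lambda k: forward_likelihood(obs, *models[k]))
```

    In the full framework each class model would be trained (e.g., via Baum-Welch) on that activity's fused feature sequences; here the scoring step alone is shown.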

  1. Moisture-Induced Delamination Video of an Oxidized Thermal Barrier Coating

    Science.gov (United States)

    Smialek, James L.; Zhu, Dongming; Cuy, Michael D.

    2008-01-01

    PVD TBC coatings were thermally cycled to near-failure at 1150 °C. Normal failure occurred after 200 to 300 1-hr cycles with only moderate weight gains (0.5 mg/sq cm). Delamination and buckling were often delayed until well after cooldown (desktop spallation), but could be instantly induced by the application of water drops, as shown in a video clip that can be viewed by clicking on figure 2 of this report. Moisture therefore plays a primary role in delayed desktop TBC failure. Hydrogen embrittlement is proposed as the underlying mechanism.

  2. No-reference pixel based video quality assessment for HEVC decoded video

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2017-01-01

    the quantization step used in the Intra coding is estimated. We map the obtained HEVC features using an Elastic Net to predict subjective video quality scores, Mean Opinion Scores (MOS). The performance is verified on a dataset consisting of HEVC coded 4 K UHD (resolution equal to 3840 x 2160) video sequences...
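
    The Elastic Net mapping from codec features to MOS can be sketched with plain coordinate descent, assuming the standard penalty alpha * (rho * ||w||_1 + (1 - rho)/2 * ||w||^2); the hyperparameter values and toy data below are illustrative assumptions, not the paper's fitted model:

```python
def soft_threshold(a, lam):
    """S(a, lam) = sign(a) * max(|a| - lam, 0)."""
    if a > lam:
        return a - lam
    if a < -lam:
        return a + lam
    return 0.0

def elastic_net(X, y, alpha=0.1, rho=0.5, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xw||^2 + alpha*rho*||w||_1
    + alpha*(1-rho)/2*||w||^2. X: list of rows, y: targets."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(n_iter):
        for j in range(d):
            # partial residual: prediction with feature j removed
            pr = [y[i] - sum(w[k] * X[i][k] for k in range(d) if k != j)
                  for i in range(n)]
            rho_j = sum(X[i][j] * pr[i] for i in range(n)) / n
            z_j = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho_j, alpha * rho) / (z_j + alpha * (1 - rho))
    return w
```

    In the paper's setting X would hold the extracted HEVC features per sequence and y the subjective MOS values; the L1 term selects a sparse subset of informative features.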

  3. Reviews in instructional video

    NARCIS (Netherlands)

    van der Meij, Hans

    2017-01-01

    This study investigates the effectiveness of a video tutorial for software training whose construction was based on a combination of insights from multimedia learning and Demonstration-Based Training. In the videos, a model of task performance was enhanced with instructional features that were

  4. Multichannel infinite clipping as a form of sampling of speech signals

    International Nuclear Information System (INIS)

    Guidarelli, G.

    1985-01-01

    A remarkable improvement in both the intelligibility and the naturalness of infinitely clipped speech can be achieved by means of a multichannel system in which the speech signal is split into several band-pass channels before clipping and subsequently reconstructed by summing the clipped outputs of the channels. A possible explanation of this improvement is given, based on the so-called zero-based representation of band-limited signals, in which the zero-crossing sequence is considered a set of samples of the signal.
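
    The multichannel scheme above is straightforward to sketch: infinite clipping keeps only the sign of the signal (so its zero-crossings survive), and the system clips each band-passed channel separately before summing. A minimal sketch, with the band-pass filtering assumed to have been done upstream:

```python
def infinite_clip(signal):
    """Infinite (hard) clipping keeps only the sign of each sample, so
    the zero-crossing sequence of the signal is preserved."""
    return [1.0 if s > 0 else -1.0 if s < 0 else 0.0 for s in signal]

def multichannel_clip(channels):
    """channels: list of band-passed versions of the same signal.
    Each channel is clipped independently, then the outputs are summed."""
    clipped = [infinite_clip(ch) for ch in channels]
    return [sum(vals) for vals in zip(*clipped)]
```

    Because each narrow band contributes its own zero-crossing sequence, the summed output carries far more of the original waveform's sample information than single-channel clipping, which is the sampling interpretation the abstract proposes.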

  5. Real-time skin feature identification in a time-sequential video stream

    Science.gov (United States)

    Kramberger, Iztok

    2005-04-01

    Skin color can be an important feature when tracking skin-colored objects. This is particularly the case for computer-vision-based human-computer interfaces (HCI). Humans have a highly developed feeling for space, and it is therefore reasonable to support this within intelligent HCI, where the importance of augmented reality can be foreseen. Incorporating human-like interaction techniques into multimodal HCI could become a feature of modern mobile telecommunication devices. On the other hand, real-time processing plays an important role in achieving more natural and physically intuitive ways of human-machine interaction. The main scope of this work is the development of a stereoscopic computer-vision hardware-accelerated framework for real-time skin feature identification in the sense of a single-pass image segmentation process. The hardware-accelerated preprocessing stage is presented with the purpose of color and spatial filtering, where the skin color model within the hue-saturation-value (HSV) color space is given as a polyhedron of threshold values representing the basis of the filter model. An adaptive filter management unit is suggested to achieve better segmentation results. This enables the adaptation of filter parameters to the current scene conditions. Implementation of the suggested hardware structure is given at the level of field-programmable system-level integrated circuit (FPSLIC) devices, using an embedded microcontroller as their main feature. A stereoscopic cue is obtained using a time-sequential video stream, but this makes no difference to the real-time processing requirements in terms of hardware complexity. The experimental results for the hardware-accelerated preprocessing stage are given by estimating the efficiency of the presented hardware structure with a simple motion-detection algorithm based on a binary function.
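
    The HSV threshold polyhedron can be illustrated in its simplest special case, an axis-aligned box of thresholds applied per pixel. The numeric bounds below are illustrative assumptions, not the paper's calibrated filter model:

```python
import colorsys  # stdlib RGB <-> HSV conversion

# Axis-aligned HSV bounds: a simple special case of the threshold
# polyhedron. These ranges are hypothetical, chosen for illustration.
H_MAX, S_MIN, S_MAX, V_MIN = 0.14, 0.15, 0.75, 0.35

def is_skin(r, g, b):
    """r, g, b in [0, 1]; True if the pixel falls inside the HSV box."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return h <= H_MAX and S_MIN <= s <= S_MAX and v >= V_MIN

def segment(image):
    """image: 2-D grid of (r, g, b) tuples -> binary skin mask,
    i.e., a single-pass per-pixel segmentation."""
    return [[1 if is_skin(*px) else 0 for px in row] for row in image]
```

    In the hardware version these comparisons become parallel threshold comparators, and the adaptive filter management unit would tune the bounds to the current scene.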

  6. Elevated intracranial pressure and reversible eye-tracking changes detected while viewing a film clip.

    Science.gov (United States)

    Kolecki, Radek; Dammavalam, Vikalpa; Bin Zahid, Abdullah; Hubbard, Molly; Choudhry, Osamah; Reyes, Marleen; Han, ByoungJun; Wang, Tom; Papas, Paraskevi Vivian; Adem, Aylin; North, Emily; Gilbertson, David T; Kondziolka, Douglas; Huang, Jason H; Huang, Paul P; Samadani, Uzma

    2018-03-01

    OBJECTIVE The precise threshold differentiating normal and elevated intracranial pressure (ICP) varies among individuals. In the context of several pathophysiological conditions, elevated ICP leads to abnormalities in global cerebral functioning and impacts the function of cranial nerves (CNs), either or both of which may contribute to ocular dysmotility. The purpose of this study was to assess the impact of elevated ICP on eye tracking performed while patients watched a short film clip. METHODS Awake patients requiring placement of an ICP monitor for clinical purposes underwent eye tracking while watching a 220-second continuously playing video moving around the perimeter of a viewing monitor. Pupil position was recorded at 500 Hz, and metrics associated with each eye individually and both eyes together were calculated. Linear regression with generalized estimating equations was performed to test the association of eye-tracking metrics with changes in ICP. RESULTS Eye tracking was performed at ICP levels ranging from -3 to 30 mm Hg in 23 patients (12 women, 11 men, mean age 46.8 years) on 55 separate occasions. Eye-tracking measures correlating with CN function decreased linearly with increasing ICP while patients viewed the short film clip. These results suggest that eye tracking may be used as a noninvasive, automatable means to quantitate the physiological impact of elevated ICP, which has clinical application for assessment of shunt malfunction, pseudotumor cerebri, concussion, and prevention of second-impact syndrome.

  7. Delay line clipping in a scintillation camera system

    International Nuclear Information System (INIS)

    Hatch, K.F.

    1979-01-01

    The present invention provides a novel baseline restoring circuit and a novel delay line clipping circuit in a scintillation camera system. Single and double delay-line-clipped signal waveforms are generated, increasing the operational frequency and the fidelity of data detection of the camera system by reducing baseline distortion such as undershoot, overshoot, and capacitive build-up. The camera system includes a set of photomultiplier tubes and associated amplifiers which generate sequences of pulses. These pulses are pulse-height analyzed to detect a scintillation having an energy level which falls within a predetermined energy range. Data pulses are combined to provide the coordinates and energy of photopeak events. The amplifiers are biased out of saturation over all ranges of pulse energy level and count rate. Single delay line clipping circuitry is provided for narrowing the pulse width of the decaying electrical data pulses, which increases operating speed without the occurrence of data loss. (JTA)

  8. The dark side of online activism: Swedish right-wing extremist video activism on YouTube

    Directory of Open Access Journals (Sweden)

    Mattias Ekman

    2014-06-01

    Full Text Available In recent years, an emerging body of work centred on the specific communicative forms used in facilitating collective and connective action has contributed to a greater understanding of how digital communication relates to social mobilisation. Many of these studies highlight the progressive potential of digital communication. However, undemocratic actors also utilise the rapid advancement of digital technology. This article explores the online video activism of extreme right-wing groups in Sweden. It analyses more than 200 clips on YouTube, produced by five right-wing extremist organisations. The study shows that the extreme right deploys video activism as a strategy of visibility to mobilise and strengthen activists. Moreover, the groups attempt to alter the perception of the (historically rooted) socio-political identities of the extreme right. Furthermore, YouTube becomes a political arena in which action repertoires and street politics are adapted to the specific characteristics of online video activism. Finally, video activism can be understood as an aestheticisation of politics.

  10. Segmentation Based Video Steganalysis to Detect Motion Vector Modification

    Directory of Open Access Journals (Sweden)

    Peipei Wang

    2017-01-01

    Full Text Available This paper presents a steganalytic approach against video steganography that modifies motion vectors (MVs) in a content-adaptive manner. Current video steganalytic schemes extract features from fixed-length frames of the whole video and do not take advantage of content diversity. Consequently, the effectiveness of the steganalytic features is influenced by video content, and the problem of cover-source mismatch also affects steganalytic performance. The goal of this paper is to propose a steganalytic method that can suppress the differences in statistical characteristics caused by video content. The given video is segmented into subsequences according to block motion in every frame. The steganalytic features extracted from each category of subsequences with similar motion intensity are used to build one classifier. The final steganalytic result is obtained by fusing the results of the weighted classifiers. The experimental results demonstrate that our method can effectively improve the performance of video steganalysis, especially for videos with low bitrate and low embedding ratio.
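
    The segment-then-fuse pipeline described above reduces to two small steps: bin subsequences by motion intensity, then combine the per-category classifier decisions with weights. A minimal sketch follows; the category thresholds and the weighting scheme are assumptions for illustration, not the paper's values:

```python
def categorize(motion_intensities, low=1.0, high=4.0):
    """Assign each subsequence to a motion category by its mean |MV|.
    The two thresholds are hypothetical illustration values."""
    return ["low" if m < low else "mid" if m < high else "high"
            for m in motion_intensities]

def fuse(decisions, weights):
    """decisions: per-category classifier scores in [-1, 1]
    (negative = cover, positive = stego); weights: non-negative,
    e.g., proportional to the number of subsequences per category."""
    total = sum(weights)
    score = sum(d * w for d, w in zip(decisions, weights)) / total
    return "stego" if score > 0 else "cover"
```

    Training one classifier per motion category keeps each model's feature statistics homogeneous, which is the mechanism the paper uses to suppress content-induced variation.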

  11. Automated Indexing and Search of Video Data in Large Collections with inVideo

    Directory of Open Access Journals (Sweden)

    Shuangbao Paul Wang

    2017-08-01

    Full Text Available In this paper, we present a novel system, inVideo, for automatically indexing and searching videos based on the keywords spoken in the audio track and the visual content of the video frames. Using the highly efficient video indexing engine we developed, inVideo is able to analyze videos using machine learning and pattern recognition without the need for initial viewing by a human. The time-stamped commenting and tagging features refine the accuracy of search results. The cloud-based implementation makes it possible to conduct elastic search, augmented search, and data analytics. Our research shows that inVideo is an efficient tool for processing and analyzing videos and for increasing interactions in video-based online learning environments. Data from a cybersecurity program with more than 500 students show that applying inVideo to current video material significantly increased student-student and student-faculty interactions across 24 sections program-wide.
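
    The keyword indexing and time-stamped search described above can be sketched as an inverted index mapping each keyword to (video, timestamp) hits, so a search returns jump-to points inside the videos. This toy structure is an illustration of the idea, not inVideo's actual engine:

```python
from collections import defaultdict

class VideoIndex:
    """Toy inverted index: each keyword (spoken in the audio track or
    detected in a frame) maps to a list of (video_id, timestamp) hits."""

    def __init__(self):
        self.index = defaultdict(list)

    def add(self, video_id, timestamp, keywords):
        """Register keywords detected at a given second of a video."""
        for kw in keywords:
            self.index[kw.lower()].append((video_id, timestamp))

    def search(self, keyword):
        """Case-insensitive lookup; returns jump-to points."""
        return self.index.get(keyword.lower(), [])
```

    Time-stamped tags from students and instructors would simply be further `add` calls, which is how commenting can refine search accuracy.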

  12. Children's Video Games as Interactive Racialization

    OpenAIRE

    Martin, Cathlena

    2008-01-01

    Cathlena Martin explores in her paper "Children's Video Games as Interactive Racialization" selected children's video games. Martin argues that children's video games often act as reinforcement for the games' television and film counterparts and their racializing characteristics and features. In Martin's analysis the video games discussed represent media through which to analyze racial identities and ideologies. In making the case for positive female minority leads in children's video games, ...

  13. Video Vectorization via Tetrahedral Remeshing.

    Science.gov (United States)

    Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping

    2017-02-09

    We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh to achieve high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.
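
    The core of reconstruction from a tetrahedral control mesh is interpolating the vertex colors at any point of the spatio-temporal (x, y, t) volume via barycentric coordinates. Below is a minimal sketch for a single tetrahedron; locating the containing tetrahedron within the full mesh is assumed to have been done already:

```python
def _det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def barycentric(p, verts):
    """Barycentric coordinates of point p in the tetrahedron verts
    (4 points in (x, y, t)); solves M b = p - v0 by Cramer's rule."""
    v0 = verts[0]
    cols = [[verts[i + 1][d] - v0[d] for i in range(3)] for d in range(3)]
    rhs = [p[d] - v0[d] for d in range(3)]
    det = _det3(cols)
    b = []
    for j in range(3):
        mj = [row[:] for row in cols]
        for d in range(3):
            mj[d][j] = rhs[d]
        b.append(_det3(mj) / det)
    return [1.0 - sum(b)] + b  # weight of v0 first

def interp_color(p, verts, colors):
    """Reconstruct the color at p from the 4 vertex RGB colors."""
    w = barycentric(p, verts)
    return tuple(sum(w[i] * colors[i][c] for i in range(4)) for c in range(3))
```

    Because color varies linearly inside each tetrahedron, a coarser mesh gives higher compression while the simplification step keeps reconstruction error low near features.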

  14. Video enhancement : content classification and model selection

    NARCIS (Netherlands)

    Hu, H.

    2010-01-01

    The purpose of video enhancement is to improve the subjective picture quality. The field of video enhancement includes a broad category of research topics, such as removing noise in the video, highlighting some specified features and improving the appearance or visibility of the video content. The

  15. Assessing arsenic and selenium in a single nail clipping using portable X-ray fluorescence

    International Nuclear Information System (INIS)

    Fleming, David E.B.; Nader, Michel N.; Foran, Kelly A.; Groskopf, Craig; Reno, Michael C.; Ware, Chris S.; Tehrani, Mina; Guimarães, Diana; Parsons, Patrick J.

    2017-01-01

    The feasibility of measuring arsenic and selenium contents in a single nail clipping was investigated using a small-focus portable X-ray fluorescence (XRF) instrument with monochromatic excitation beams. Nail clipping phantoms supplemented with arsenic and selenium to produce materials with 0, 5, 10, 15, and 20 µg/g were used for calibration purposes. In total, 10 different clippings were analyzed at two different measurement positions. Energy spectra were fit with detection peaks for arsenic K_α, selenium K_α, arsenic K_β, selenium K_β, and bromine K_α characteristic X-rays. Data analysis was performed under two distinct conditions of fitting constraint. Calibration lines were established from the amplitude of each of the arsenic and selenium peaks as a function of the elemental contents in the clippings. The slopes of the four calibration lines were consistent between the two conditions of analysis. The calculated minimum detection limit (MDL) of the method, when considering the K_α peak only, ranged from 0.210±0.002 µg/g selenium under one condition of analysis to 0.777±0.009 µg/g selenium under another. Compared with previous portable XRF nail clipping studies, MDLs were substantially improved for both arsenic and selenium. The new measurement technique had the additional benefits of being short in duration (~3 min) and requiring only a single nail clipping. The mass of the individual clipping used did not appear to play a major role in signal strength, but positioning of the clipping is important. - Highlights: • Portable X-ray fluorescence was used to assess As and Se in nail clipping phantoms. • Calibration lines were consistent between two different conditions of data analysis. • This new XRF approach was sensitive and required only a single nail clipping.
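
    The calibration-line step described above is an ordinary least-squares fit of characteristic X-ray peak amplitude against known elemental content; a detection limit can then be derived from the noise of a blank measurement. The sketch below uses the common 3-sigma/slope convention for the MDL, which is an assumption since the abstract does not state the exact formula used:

```python
def fit_line(x, y):
    """Ordinary least-squares fit; returns (slope, intercept).
    x: known contents (ug/g), y: fitted peak amplitudes."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def mdl(sigma_blank, slope):
    """3-sigma convention: smallest content whose signal is
    distinguishable from the blank's peak-amplitude noise."""
    return 3.0 * sigma_blank / slope
```

    A steeper calibration slope (stronger signal per ug/g) or a quieter blank both lower the MDL, which is why the monochromatic-excitation setup improves on earlier portable XRF nail studies.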

  16. Temporal Segmentation of MPEG Video Streams

    Directory of Open Access Journals (Sweden)

    Janko Calic

    2002-06-01

    Full Text Available Many algorithms for temporal video partitioning rely on the analysis of uncompressed video features. Since the information relevant to the partitioning process can be extracted directly from the MPEG compressed stream, higher efficiency can be achieved utilizing information from the MPEG compressed domain. This paper introduces a real-time algorithm for scene change detection that analyses the statistics of the macroblock features extracted directly from the MPEG stream. A method for extraction of the continuous frame difference that transforms the 3D video stream into a 1D curve is presented. This transform is then further employed to extract temporal units within the analysed video sequence. Results of computer simulations are reported.
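
    The 3D-to-1D transform described above can be sketched as a per-frame feature difference collapsed into a single curve, with scene changes read off as spikes above a threshold. In this sketch the per-frame feature vectors (e.g., macroblock statistics) and the threshold are assumptions for illustration:

```python
def frame_difference_curve(frames):
    """frames: list of equal-length per-frame feature vectors.
    Collapses the video into a 1-D curve of successive differences."""
    return [sum(abs(a - b) for a, b in zip(f1, f2))
            for f1, f2 in zip(frames, frames[1:])]

def scene_changes(curve, threshold):
    """Frame indices where the difference curve spikes above threshold."""
    return [i + 1 for i, d in enumerate(curve) if d > threshold]
```

    Because the features can be read directly from the MPEG compressed stream, this curve is cheap enough to compute in real time, which is the efficiency argument the abstract makes.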

  17. Low Proteolytic Clipping of Histone H3 in Cervical Cancer

    Science.gov (United States)

    Sandoval-Basilio, Jorge; Serafín-Higuera, Nicolás; Reyes-Hernandez, Octavio D.; Serafín-Higuera, Idanya; Leija-Montoya, Gabriela; Blanco-Morales, Magali; Sierra-Martínez, Monica; Ramos-Mondragon, Roberto; García, Silvia; López-Hernández, Luz Berenice; Yocupicio-Monroy, Martha; Alcaraz-Estrada, Sofia L.

    2016-01-01

    Chromatin in cervical cancer (CC) undergoes chemical and structural changes that alter the expression pattern of genes. A recently proposed mechanism that regulates gene expression at the transcriptional level is the proteolytic clipping of histone H3. However, until now this process has not been reported in CC. Using HeLa cells as a model of CC and human samples from patients with CC, we found that H3 cleavage was lower in CC than in control tissue. Additionally, histone H3 clipping was carried out by serine and aspartyl proteases in HeLa cells. These results suggest that histone H3 clipping operates as part of the post-translational modification system in CC. PMID:27698925

  18. The contribution of CLIP2 haploinsufficiency to the clinical manifestations of the Williams-Beuren syndrome.

    Science.gov (United States)

    Vandeweyer, Geert; Van der Aa, Nathalie; Reyniers, Edwin; Kooy, R Frank

    2012-06-08

    Williams-Beuren syndrome is a rare contiguous gene syndrome, characterized by intellectual disability, facial dysmorphisms, connective-tissue abnormalities, cardiac defects, structural brain abnormalities, and transient infantile hypercalcemia. Genes lying telomeric to RFC2, including CLIP2, GTF2I and GTF2IRD1, are currently thought to be the most likely major contributors to the typical Williams syndrome cognitive profile, characterized by a better-than-expected auditory rote-memory ability, a relative sparing of language capabilities, and a severe visual-spatial constructive impairment. Atypical deletions in the region have helped to establish genotype-phenotype correlations. So far, however, hardly any deletions affecting only a single gene in the disease region have been described. We present here two healthy siblings with a pure, hemizygous deletion of CLIP2. A putative role in the cognitive and behavioral abnormalities seen in Williams-Beuren patients has been suggested for this gene on the basis of observations in a knock-out mouse model. The presented siblings did not show any of the clinical features associated with the syndrome. Cognitive testing showed an average IQ for both and no indication of the Williams syndrome cognitive profile. This shows that CLIP2 haploinsufficiency by itself does not lead to the physical or cognitive characteristics of the Williams-Beuren syndrome, nor does it lead to the Williams syndrome cognitive profile. Although contribution of CLIP2 to the phenotype cannot be excluded when it is deleted in combination with other genes, our results support the hypothesis that GTF2IRD1 and GTF2I are the main genes causing the cognitive defects associated with Williams-Beuren syndrome. Copyright © 2012 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  19. Automatic topics segmentation for TV news video

    Science.gov (United States)

    Hmayda, Mounira; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    Automatic identification of television programs in the TV stream is an important task for operating archives. This article proposes a new spatio-temporal approach to identifying programs in a TV stream in two main steps. First, a reference catalogue of visual jingles is built from video features. We use the features that characterize instances of the same program type to identify the different types of programs in the television stream. The role of the video features is to represent the visual invariants of each jingle, using automatic descriptors appropriate to each television program. Second, programs in the television stream are identified by examining the similarity of the video signal to the visual jingles in the catalogue. The main idea of the identification process is to compare the visual similarity of the video signal features in the television stream to the catalogue. After presenting the proposed approach, the paper reports encouraging experimental results on several streams extracted from different channels and composed of several programs.
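
    The catalogue-matching step can be sketched as a nearest-neighbour search over jingle feature vectors with a similarity cutoff. Cosine similarity and the threshold value below are illustrative assumptions, not the descriptors the paper uses:

```python
def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den if den else 0.0

def identify(signature, catalogue, threshold=0.9):
    """catalogue: dict program_name -> jingle feature vector.
    Returns the best-matching program, or None if nothing is
    similar enough to count as that program's jingle."""
    best = max(catalogue, key=lambda k: cosine(signature, catalogue[k]))
    return best if cosine(signature, catalogue[best]) >= threshold else None
```

    Sliding this matcher along the stream yields program boundaries wherever a jingle signature exceeds the similarity cutoff.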

  20. Structural changes in the human vas deferens after tantalum clip occlusion and conventional vasectomy.

    Science.gov (United States)

    Kothari, L K; Gupta, A S

    1978-02-01

    In 15 human subjects, the vasa deferentia were occluded by applying two tantalum clips on one side and by conventional vasectomy with silk ligatures on the other. After 2 weeks, the occluded segments were recovered for histopathologic examination of serial sections. Obstructing the seminal tract did not, as such, produce any significant change in the vas: the distal and proximal segments appeared to be essentially similar and normal. At the actual site of occlusion, however, tantalum clips produced marked flattening of the tube, complete loss of lining epithelium, distortion of the muscular lamellae, and areas of hemorrhage. The lumen was converted into a narrow slit. Under the ligatures, the damage was largely confined to denudation of the mucosal epithelium. The mucosa of the intersegment left unexcised between two clips showed hyalinization, invasion by macrophages, and degeneration of the epithelium. The changes under the clips suggest that, although clip occlusion may offer several advantages, sterility cannot be reversed merely by removing the clips. The mechanisms of these changes, different in the case of clips and ligatures, are discussed and some possible long-term consequences are considered.

  1. Sex Differences in Emotional Evaluation of Film Clips: Interaction with Five High Arousal Emotional Categories

    Science.gov (United States)

    Maffei, Antonio; Vencato, Valentina; Angrilli, Alessandro

    2015-01-01

    The present study aimed to investigate gender differences in the emotional evaluation of 18 film clips divided into six categories: Erotic, Scenery, Neutral, Sadness, Compassion, and Fear. 41 female and 40 male students rated all clips for valence-pleasantness, arousal, level of elicited distress, anxiety, jittery feelings, excitation, and embarrassment. Analysis of positive films revealed higher levels of arousal, pleasantness, and excitation to the Scenery clips in both genders, but lower pleasantness and greater embarrassment in women compared to men to Erotic clips. Concerning unpleasant stimuli, unlike men, women reported more unpleasantness to the Compassion, Sadness, and Fear clips compared to the Neutral clips and also rated them as more arousing than men did. They further differentiated the films by perceiving greater arousal to Fear than to Compassion clips. Women rated the Sadness and Fear clips with greater distress and jittery feelings than men did. Correlation analysis between arousal and the other emotional scales revealed that, although men appeared less aroused than women by all unpleasant clips, they also showed a larger variance in their emotional responses, as indicated by the high number of correlations and their relatively greater extent, an outcome pointing to a masked larger sensitivity of part of the male sample to emotional clips. We propose a new perspective in which gender differences in emotional responses can be better evidenced by means of film clips selected and clustered into more homogeneous categories, controlled for arousal levels, and evaluated through a number of emotion-focused adjectives. PMID:26717488

  2. Sex Differences in Emotional Evaluation of Film Clips: Interaction with Five High Arousal Emotional Categories.

    Directory of Open Access Journals (Sweden)

    Antonio Maffei

    Full Text Available The present study aimed to investigate gender differences in the emotional evaluation of 18 film clips divided into six categories: Erotic, Scenery, Neutral, Sadness, Compassion, and Fear. 41 female and 40 male students rated all clips for valence-pleasantness, arousal, level of elicited distress, anxiety, jittery feelings, excitation, and embarrassment. Analysis of positive films revealed higher levels of arousal, pleasantness, and excitation to the Scenery clips in both genders, but lower pleasantness and greater embarrassment in women compared to men to Erotic clips. Concerning unpleasant stimuli, unlike men, women reported more unpleasantness to the Compassion, Sadness, and Fear clips compared to the Neutral clips and also rated them as more arousing than men did. They further differentiated the films by perceiving greater arousal to Fear than to Compassion clips. Women rated the Sadness and Fear clips with greater distress and jittery feelings than men did. Correlation analysis between arousal and the other emotional scales revealed that, although men appeared less aroused than women by all unpleasant clips, they also showed a larger variance in their emotional responses, as indicated by the high number of correlations and their relatively greater extent, an outcome pointing to a masked larger sensitivity of part of the male sample to emotional clips. We propose a new perspective in which gender differences in emotional responses can be better evidenced by means of film clips selected and clustered into more homogeneous categories, controlled for arousal levels, and evaluated through a number of emotion-focused adjectives.

  3. Pull-off characteristics of double-shanked compared to single-shanked ligation clips: an animal study

    Directory of Open Access Journals (Sweden)

    Schenk Martin

    2016-09-01

    Full Text Available The use of surgical ligation clips is considered the gold standard for the closure of vessels, particularly in laparoscopic surgery. The safety of clips is mainly achieved by the deep indentation of the metal bar with a high retention force. A novel double-shanked (DS) titanium clip was compared to two single-shanked clips with respect to axial and radial pull-off forces.

  4. It is time for a better clip applier - 3 mm, percutaneous, non-crushing and locking.

    Science.gov (United States)

    Yuval, Jonathan B; Weiss, Daniel J; Paz, Adrian; Bachar, Yehuda; Brodie, Ronit; Shapira, Yinon; Mintz, Yoav

    2017-10-06

    Since the advent of laparoscopy there have been attempts to minimize abdominal wall incisions, and smaller instruments have been produced for this purpose. Our aim was to develop the first 3 mm percutaneous clip applier and to make it better than the standard clips of today. The ClipTip clip is made of Nitinol and has crocodile-shaped jaws which, when apposed, effectively seal vessels. The shaft operates as a retractable needle, permitting percutaneous insertion. Closing, reopening and reclosing are possible. The physical properties of the device were compared to three commercially available clip appliers. Surgeries were performed on pigs by experienced surgeons. In comparison to available clips, the superiority of the ClipTip lies in its combination of a wide effective length and the ability to withstand strong forces. In live animal studies the ClipTip was inserted into the peritoneal cavity without any injuries. Vessels were ligated successfully and no clip dislodgement or leakage occurred. We developed a next-generation clip applier with better properties: advantages include its length, the needleoscopic caliber, non-crushing effect, locking mechanism and wide aperture. The device has performed safely and effectively in pre-clinical tests. Further studies are planned in humans.

  5. “Clip, move, adjust”: Video editing as reflexive rhythmanalysis in networked publics

    DEFF Research Database (Denmark)

    Rehder, Mads Middelboe; Pereira, Gabriel; Markham, Annette

    2017-01-01

    as part of a six-year (and ongoing) study of how youth experience social media (authors). In this larger study, youth produced, among other things, videologs of their experiences, after being trained in auto-elicitation and ethnographic methods (authors). As a further step in reflexive autoethnographic...... analysis, the method we outline consists of asking participants to engage in a phenomenologically grounded analytical editing process of these videologs. In this study, we find that as participants edit their own videos, they add information and depth at a level beyond what we researchers can bring in our...... own analyses. This tool is a productive way to get closer to the granularity of participants’ lived experiences....

  7. Inducing negative affect using film clips with general and eating disorder-related content.

    Science.gov (United States)

    Koushiou, Maria; Nicolaou, Kalia; Karekla, Maria

    2018-02-09

    The aim of the present study was to select appropriate film clips with general vs. eating disorder (ED)-related content to induce negative affect. More specifically, the study examined the subjective emotional experience (valence, arousal, anxiety, induction of somatic symptoms, and ability to control reactions during film clips) of Greek-Cypriot university students (N = 79) in response to three types of film clips: general unpleasant, ED-specific unpleasant, and emotionally neutral. In addition, the study aimed to compare the emotional reactions to the aforementioned clips between two groups of participants differing in their risk for ED (high vs. low). Preliminary results identify the clip with general content ("The Champ") and the clip with ED-specific content ("Binge eating") that are most effective in inducing negative affect and in differentiating between risk groups. These clips provide an effective method of emotion induction that can be used for assessing the emotional experience of individuals with ED symptoms, since their emotional experience is significantly implicated in the development and maintenance of their symptoms (Merwin, Clin Psychol Sci Pract 18(3):208-214, 2011). Level of evidence: No level of evidence, experimental study.

  8. SOCIOLINGUISTIC IMPORT OF NAME-CLIPPING AMONG ...

    African Journals Online (AJOL)

    NGOZI

    2013-02-27

    Feb 27, 2013 ... experiences which, most of the times, encompass cultural and philosophical ... The art of name clipping goes way back in language history ... describes Akan names as “iconic representation of complete social variables that ...

  9. Computational Thinking in Constructionist Video Games

    Science.gov (United States)

    Weintrop, David; Holbert, Nathan; Horn, Michael S.; Wilensky, Uri

    2016-01-01

    Video games offer an exciting opportunity for learners to engage in computational thinking in informal contexts. This paper describes a genre of learning environments called constructionist video games that are especially well suited for developing learners' computational thinking skills. These games blend features of conventional video games with…

  10. Video Retrieval Based on Text and Image (Video Retrieval Berdasarkan Teks dan Gambar)

    Directory of Open Access Journals (Sweden)

    Rahmi Hidayati

    2013-01-01

    Abstract: Video retrieval searches for a video based on a query entered by the user, either text or an image. Such a system can improve search capability during video browsing and is expected to reduce video retrieval time. The purpose of this research was to design and build a software application for video retrieval based on the text and images in the video. The indexing process for text consists of tokenizing and filtering (stopword removal and stemming); the stemming results are saved in a text index table. The indexing process for images creates a color histogram for each image and computes the mean and standard deviation of each primary color (red, green and blue; RGB); the extracted features are stored in an image table. Video retrieval can use a text query, an image query, or both. For a text query, the system searches the text index table; if the query matches an entry, the system displays the corresponding video information. For an image query, the system extracts the six feature values (the mean and standard deviation of red, green and blue) and, if they match an entry in the image index table, displays the corresponding video information. For a combined text-and-image query, the system displays the video information only if the text query and the image query are related, i.e., refer to the same film title. Keywords: video, index, retrieval, text, image
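The image-indexing scheme described above (per-channel RGB mean and standard deviation, six values per frame) can be sketched as follows; the index layout and exact-match lookup are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rgb_features(image):
    """Six-element feature vector: the mean and standard deviation of
    the red, green and blue channels of an H x W x 3 image."""
    pixels = image.reshape(-1, 3)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

def match(query_image, index, tolerance=1e-6):
    """Return titles whose stored features equal the query features
    (within a tolerance), mirroring the lookup described above."""
    q = rgb_features(query_image)
    return [title for title, feats in index.items()
            if np.allclose(feats, q, atol=tolerance)]

rng = np.random.default_rng(0)
frame = rng.random((4, 4, 3))                 # toy RGB frame
index = {"film A": rgb_features(frame),
         "film B": rgb_features(rng.random((4, 4, 3)))}
print(match(frame, index))  # ['film A']
```

A production system would compare features by distance and rank results rather than require a near-exact match, but the six-value descriptor is the same.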

  11. Optimizing assessment of sexual arousal in postmenopausal women using erotic film clips.

    Science.gov (United States)

    Ramos Alarcon, Lauren G; Dai, Jing; Collins, Karen; Perez, Mindy; Woodard, Terri; Diamond, Michael P

    2017-10-01

    This study sought to assess sexual arousal in a subgroup of women by identifying erotic film clips that would be most mentally appealing and physically arousing to postmenopausal women. By measuring levels of mental appeal and self-reported physical arousal using a bidirectional scale, we aimed to elucidate the clips that would best be utilized for sexual health research in the postmenopausal or over-50-year-old subpopulation. Our results showed that postmenopausal women did not rate clips with older versus younger actors differently (p>0.05). The mean mental and mean physical scores were significantly correlated for both premenopausal subject ratings (r=0.69, p < …) […] erotic film clips; this knowledge is relevant for the design of future sexual function research. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Identification of CELF1 RNA targets by CLIP-seq in human HeLa cells

    Directory of Open Access Journals (Sweden)

    Olivier Le Tonquèze

    2016-06-01

    Full Text Available The specific interactions between RNA-binding proteins and their target RNAs are an essential level of gene-expression control. By combining ultraviolet cross-linking and immunoprecipitation (CLIP) with massive SOLiD sequencing, we identified the RNAs bound by the RNA-binding protein CELF1 in human HeLa cells. The CELF1 binding sites deduced from the sequence data allow specific features of the CELF1-RNA association to be characterized. We therefore present the first map of CELF1 binding sites in human cells.

  13. Extraction Of Audio Features For Emotion Recognition System Based On Music

    Directory of Open Access Journals (Sweden)

    Kee Moe Han

    2015-08-01

    Full Text Available Music is a combination of melody, linguistic information, and the vocalist's emotion. Since music is a work of art, analyzing the emotion in music by computer is a difficult task. Many approaches have been developed to detect the emotions contained in music, but the results are not satisfactory because emotion is very complex. In this paper, evaluations of audio features extracted from music files are presented. The extracted features are used to classify the different emotion classes of the vocalists. Musical feature extraction is done using the Music Information Retrieval (MIR) Toolbox. A database of 100 music clips is used to classify the emotions perceived in the clips. Music may convey many emotions according to the vocalist's mood, such as happy, sad, nervous, bored, or peaceful. In this paper, the audio features related to the vocalist's emotions are extracted for use in a music-based emotion recognition system.
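The paper uses the MIR Toolbox for feature extraction; as a library-free illustration of the kind of low-level descriptors involved, here is a sketch computing RMS energy, zero-crossing rate, and spectral centroid with NumPy. These three features are common in music-emotion work but are my choice for illustration, not necessarily the paper's feature set:

```python
import numpy as np

def audio_features(signal, sr):
    """Three simple descriptors often used in music emotion recognition:
    RMS energy, zero-crossing rate, and spectral centroid (Hz)."""
    rms = np.sqrt(np.mean(signal ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(signal))) > 0)   # fraction of sign changes
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    return rms, zcr, centroid

sr = 8000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)     # 1 s, 440 Hz sine
rms, zcr, centroid = audio_features(tone, sr)
print(round(rms, 3), round(centroid))        # RMS ≈ 0.354, centroid ≈ 440 Hz
```

A classifier for the emotion labels would then be trained on a matrix of such per-clip feature vectors.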

  14. Establishment of an easy Ic measurement method of HTS superconducting tapes using clipped voltage taps

    International Nuclear Information System (INIS)

    Shin, Hyung Seop; Nisay, Arman; Dedicatoria, Marlon; Sim, Ki Deok

    2014-01-01

    The critical current, Ic, of HTS superconducting tapes can be measured by transport or contactless methods. In practice, the transport (four-probe) method is the most common. In this study, a simple test procedure that clips on the voltage lead taps has been introduced in place of soldering, which reduces time and effort and thereby allows a much faster measurement of Ic. When using a pair of iron clips, the Ic value decreased compared with the value measured by the standard method using soldered voltage taps, and varied with the width of the clipped part of the specimen. However, when using pure Cu clips, clipping and soldering the voltage taps gave comparable results, and the measured Ic was close to the sample's specification. As a result, the material to be used as a voltage clip should be chosen so that it does not influence the potential between the leads during the Ic measurement. Furthermore, a simulation of the magnetic flux during the Ic measurement showed that the decrease of Ic observed in the experiment is due to the magnetic flux density By produced at the clipped part of the sample by the operating current when iron clips are attached.
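Transport Ic measurements like these are typically reduced to a critical current by fitting the V-I curve to the power law V = Vc·(I/Ic)^n at a voltage criterion (commonly the 1 µV/cm electric-field criterion). The abstract does not state which criterion was used, so the following NumPy sketch only illustrates the standard convention on synthetic data:

```python
import numpy as np

def fit_ic(currents, voltages, vc=1e-6):
    """Fit the power law V = Vc * (I / Ic)**n to transport V-I data and
    return (Ic, n). Vc is the voltage criterion, e.g. 1 uV for a 1 cm
    tap separation at the 1 uV/cm electric-field criterion."""
    # The power law is linear in log-log coordinates: log V = n*log I + c
    n, intercept = np.polyfit(np.log(currents), np.log(voltages), 1)
    ic = np.exp((np.log(vc) - intercept) / n)
    return ic, n

# Synthetic tape data generated with Ic = 100 A and n = 25
i = np.linspace(80, 110, 50)
v = 1e-6 * (i / 100.0) ** 25
ic, n = fit_ic(i, v)
print(round(ic, 1), round(n, 1))  # 100.0 25.0
```

The n-value recovered alongside Ic is itself a useful quality indicator for the tape.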

  15. Video Self-Modeling: A Promising Strategy for Noncompliant Children.

    Science.gov (United States)

    Axelrod, Michael I; Bellini, Scott; Markoff, Kimberly

    2014-07-01

    The current study investigated the effects of a Video Self-Modeling (VSM) intervention on the compliance and aggressive behavior of three children placed in a psychiatric hospital. Each participant viewed brief video clips of himself following simple adult instructions just prior to the school's morning session and the unit's afternoon free period. A multiple baseline design across settings was used to evaluate the effects of the VSM intervention on compliance with staff instructions and aggressive behavior on the hospital unit and in the hospital-based classroom. All three participants exhibited higher levels of compliance and fewer aggressive episodes during the intervention condition, and the effects were generally maintained when the intervention was withdrawn. Hospital staff reported at the conclusion of the study that the VSM intervention was easy to implement and beneficial for all participants. Taken together, the results suggest VSM is a promising, socially acceptable, and proactive intervention approach for improving the behavior of noncompliant children. © The Author(s) 2014.

  16. Combining high-speed SVM learning with CNN feature encoding for real-time target recognition in high-definition video for ISR missions

    Science.gov (United States)

    Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus

    2017-05-01

    For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data which has to be exploited with respect to relevant ground targets in real time by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in Deep Convolutional Neural Networks (CNN) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system, with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General-Purpose Computation on a Graphics Processing Unit (GPGPU) and also efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, yields a highly invariant feature extraction. This allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using the established precision-recall diagrams, average precision and runtime figures on representative test data. A comparison to legacy target recognition approaches shows the impressive performance increase achieved by the proposed CNN+SVM machine-learning approach and the capability of real-time high…
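The proprietary frequency-domain SVM is not described in the abstract; as a generic stand-in, the CNN-features-plus-SVM pipeline can be sketched with a plain sub-gradient hinge-loss trainer on synthetic feature vectors. Everything below, the features included, is illustrative, not the Airbus method:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimal linear SVM trained by sub-gradient descent on the
    regularized hinge loss. X holds fixed feature vectors (standing in
    for descriptors from a pre-trained CNN); labels y are in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                      # margin violators
        if mask.any():
            w -= lr * (lam * w - (y[mask, None] * X[mask]).mean(axis=0))
            b -= lr * (-y[mask].mean())
        else:
            w -= lr * lam * w                   # only the regularizer remains
    return w, b

# Synthetic, linearly separable "CNN features" for two target classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 8)), rng.normal(2.0, 0.5, (50, 8))])
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
acc = float(np.mean(np.sign(X @ w + b) == y))
print(acc)
```

The appeal of the pipeline in the paper is that only this cheap linear stage is retrained per target class; the CNN feature extractor stays fixed.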

  17. Video Quality Prediction over Wireless 4G

    KAUST Repository

    Lau, Chun Pong

    2013-04-14

    In this paper, we study the problem of video quality prediction over the wireless 4G network. Video transmission data were collected from a real 4G SCM testbed to investigate the factors that affect video quality. After feature transformation and selection on video and network parameters, video quality is predicted by solving a regression problem. Experimental results show that the dominant factor in video quality is the channel attenuation, and that video quality can be well estimated by our models with small errors.

  18. Video Quality Prediction over Wireless 4G

    KAUST Repository

    Lau, Chun Pong; Zhang, Xiangliang; Shihada, Basem

    2013-01-01

    In this paper, we study the problem of video quality prediction over the wireless 4G network. Video transmission data were collected from a real 4G SCM testbed to investigate the factors that affect video quality. After feature transformation and selection on video and network parameters, video quality is predicted by solving a regression problem. Experimental results show that the dominant factor in video quality is the channel attenuation, and that video quality can be well estimated by our models with small errors.
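The records above frame quality prediction as regression on video and network parameters, with channel attenuation dominating. A sketch with ordinary least squares on synthetic data; the feature names and coefficients are invented only to mirror that qualitative finding:

```python
import numpy as np

# Toy training data: columns = [channel attenuation (dB), bitrate (Mbps), frame rate (fps)]
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0, 30, 200),
                     rng.uniform(1, 10, 200),
                     rng.uniform(15, 60, 200)])
# Synthetic quality score dominated by attenuation, plus small noise
y = 5.0 - 0.12 * X[:, 0] + 0.05 * X[:, 1] + 0.002 * X[:, 2] + rng.normal(0, 0.05, 200)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(rmse, np.argmax(np.abs(coef[1:])))  # small error; feature 0 (attenuation) dominates
```

Inspecting the fitted coefficient magnitudes is the simplest way to see which parameter dominates the prediction, as the papers report for attenuation.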

  19. Portable x-ray fluorescence for the analysis of chromium in nail and nail clippings

    International Nuclear Information System (INIS)

    Fleming, David E.B.; Ware, Chris S.

    2017-01-01

    Assessment of chromium content in human nail or nail clippings could serve as an effective biomarker of chromium status. The feasibility of a new portable x-ray fluorescence (XRF) approach to chromium measurement was investigated through the analysis of nail and nail clipping phantoms. Five measurements of 180 s (real time) duration were first performed on six whole-nail phantoms having chromium concentrations of 0, 2, 5, 10, 15, and 20 µg/g. Using nail clippers, these phantoms were then converted to nail clippings and assembled into mass groups of 20, 40, 60, 80, and 100 mg for additional measurements. The amplitude of the chromium Kα characteristic x-ray energy peak was examined as a function of phantom concentration for all measurement conditions to create a series of calibration lines. The minimum detection limit (MDL) for chromium was also calculated for each case. The chromium MDL determined from the intact whole-nail phantoms was 0.88±0.03 µg/g. For the clipping phantoms, the MDL ranged from 1.2 to 3.3 µg/g, depending on the mass group analyzed. For the 40 mg clipping group, the MDL was 1.2±0.1 µg/g, and higher-mass collections did not improve upon this result. This MDL is comparable to chromium concentration levels seen in various studies involving human nail clippings. Further improvements to the portable XRF technique would be required to detect chromium levels expected at the lower end of a typical population. Highlights: • Portable x-ray fluorescence (XRF) was explored as a technique to assess levels of chromium in human nails or nail clippings. • Results were found to depend on the mass of the clipping sample provided. • Minimum detection limits for chromium were similar to concentration levels found in previous studies of human nail clippings.
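Minimum detection limits like those above are commonly derived from the calibration line. Assuming the usual 3-sigma convention (the abstract does not state which convention was used, and the numbers below are toy values, not the study's data), a sketch:

```python
import numpy as np

def detection_limit(concentrations, peak_amplitudes, blank_sd):
    """Minimum detection limit from a linear calibration, using the
    common 3-sigma convention: MDL = 3 * (blank standard deviation) / slope."""
    slope, _ = np.polyfit(concentrations, peak_amplitudes, 1)
    return 3.0 * blank_sd / slope

# Hypothetical calibration over the phantom concentrations used above
conc = np.array([0, 2, 5, 10, 15, 20])   # ug/g
amps = 50.0 * conc + 12.0                # counts (toy linear detector response)
print(round(detection_limit(conc, amps, blank_sd=15.0), 2))  # 0.9 ug/g
```

With fixed blank noise, a shallower calibration slope (e.g. from a smaller clipping mass) directly raises the MDL, which matches the mass dependence reported above.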

  20. Hierarchical video summarization

    Science.gov (United States)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem encountered in home videos. We propose a hierarchical key-frame summarization algorithm in which a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing: the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest-level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal-consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream, and propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
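The recursive clustering with a temporal-consecutiveness constraint can be illustrated with a simplified stand-in: instead of constrained K-means, the sketch below cuts the ordered key-frame sequence at its largest feature-space jumps and keeps one representative per contiguous segment. This is not the paper's algorithm, only a minimal demonstration of temporally constrained coarsening:

```python
import numpy as np

def coarsen_keyframes(features, k):
    """Group a temporally ordered sequence of key-frame feature vectors
    into k contiguous clusters by cutting at the k-1 largest jumps in
    feature space, then return one representative index per cluster."""
    diffs = np.linalg.norm(np.diff(features, axis=0), axis=1)
    cuts = np.sort(np.argsort(diffs)[-(k - 1):]) + 1 if k > 1 else np.array([], dtype=int)
    segments = np.split(np.arange(len(features)), cuts)
    reps = []
    for seg in segments:
        center = features[seg].mean(axis=0)
        # Representative = frame closest to the segment's mean feature vector
        reps.append(int(seg[np.argmin(np.linalg.norm(features[seg] - center, axis=1))]))
    return reps

# Toy color features: three visually distinct shots of two frames each
feats = np.array([[0.0, 0.0], [0.1, 0.0],
                  [5.0, 5.0], [5.1, 5.0],
                  [9.0, 0.0], [9.2, 0.1]])
print(coarsen_keyframes(feats, 3))  # [0, 2, 4]: one representative per shot
```

Applying the same coarsening recursively to the representatives yields the coarse-to-fine hierarchy used for multi-level browsing.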

  1. The Use of Film Clips in a Viewing Time Task of Sexual Interests.

    Science.gov (United States)

    Lalumière, Martin L; Babchishin, Kelly M; Ebsworth, Megan

    2018-04-01

    Viewing time tasks using still pictures to assess age and gender sexual interests have been well validated and are commonly used. The use of film clips in a viewing time task would open up interesting possibilities for the study of sexual interest toward sexual targets or activities that are not easily captured in still pictures. We examined the validity of a viewing time task using film clips to assess sexual interest toward male and female targets, in a sample of 52 young adults. Film clips produced longer viewing times than still pictures. For both men and women, the indices derived from the film viewing time task were able to distinguish individuals who identified as homosexual (14 men, 8 women) from those who identified as heterosexual (15 men, 15 women), and provided group differentiation comparable to that of indices derived from a viewing time task using still pictures. Men's viewing times were more gender-specific than those of women. Viewing times to film clips were correlated with participants' ratings of sexual appeal of the same clips, and with viewing times to pictures. The results support the feasibility of a viewing time measure of sexual interest that utilizes film clips and, thus, expand the types of sexual interests that could be investigated (e.g., sadism, biastophilia).

  2. The OTSC®-clip in revisional endoscopy against weight gain after bariatric gastric bypass surgery.

    Science.gov (United States)

    Heylen, Alex Marie Florent; Jacobs, Anja; Lybeer, Monika; Prosst, Ruediger L

    2011-10-01

    The maintenance of the restrictive component of the Fobi pouch gastric bypass is essential for permanent weight control. Dilatation of the pouch-outlet and of the pouch itself is responsible for substantial weight gain through an increased volume per meal and binge-eating due to rapid emptying. An endoscopic over-the-scope clip (OTSC®; Ovesco AG, Tübingen, Germany) was applied to narrow the pouch-outlet in 94 patients with unintended weight gain after gastric bypass due to a dilated gastro-jejunostomy. The OTSC®-clip application was safe and efficient in reducing the pouch-outlet in all cases. The best clinical results were obtained by narrowing the gastro-jejunostomy with two clips placed at opposite sites, reducing the outlet by more than 80%. Preferably, the clip approximated the whole thickness of the wall to avoid further dilatation of the anastomosis. Between surgery and OTSC®-clip application the mean BMI dropped from 45.8 (±3.6) to 32.8 (±1.9). At the first follow-up about 3 months (mean 118 days, ±46 days) after OTSC®-clip application the mean BMI was 29.7 (±1.8). At the second follow-up about 1 year (mean 352 days, ±66 days) after OTSC®-clip application the mean BMI was 27.4 (±3.8). The OTSC®-clip for revisional endoscopy after gastric bypass is reliable and effective in treating weight gain due to a dilated pouch-outlet, with favorable short- and midterm results.

  3. Video Game Structural Characteristics: A New Psychological Taxonomy

    Science.gov (United States)

    King, Daniel; Delfabbro, Paul; Griffiths, Mark

    2010-01-01

    Excessive video game playing behaviour may be influenced by a variety of factors including the structural characteristics of video games. Structural characteristics refer to those features inherent within the video game itself that may facilitate initiation, development and maintenance of video game playing over time. Numerous structural…

  4. Fabrication and Characterization of ZnO Nano-Clips by the Polyol-Mediated Process

    Science.gov (United States)

    Wang, Mei; Li, Ai-Dong; Kong, Ji-Zhou; Gong, You-Pin; Zhao, Chao; Tang, Yue-Feng; Wu, Di

    2018-02-01

    ZnO nano-clips with good monodispersity were successfully prepared using zinc acetate hydrate (Zn(OAc)2·nH2O) as the Zn source and ethylene glycol (EG) as the solvent via a simple solution-based route, the polyol process. The effect of solution concentration on the formation of ZnO nano-clips was investigated in depth. We first show that 0.01 M Zn(OAc)2·nH2O can react with EG without added water or alkaline, producing ZnO nano-clips with a polycrystalline wurtzite structure at 170 °C. The as-synthesized ZnO nano-clips contain many aggregated nanocrystals (~5 to 15 nm) with a high specific surface area of 88 m2/g. The shapes of the ZnO nano-clips remain essentially constant, with improved crystallinity, after annealing at 400-600 °C. The low solution concentration and a slight amount of H2O play a decisive role in ZnO nano-clip formation. When the solution concentration is ≤ 0.0125 M, the complexing and polymerization reactions between Zn(OAc)2·nH2O and EG predominate, mainly yielding ZnO nano-clips. When the solution concentration is ≥ 0.015 M, the alcoholysis and polycondensation reactions of Zn(OAc)2·nH2O and EG become dominant, leading to the formation of ZnO particles with spherical and elliptical shapes. A possible growth mechanism based on competition between the complexing and alcoholysis of Zn(OAc)2·nH2O and EG has been proposed.

  5. Primitive experience of three dimensional multi-slice spiral CT angiography for the follow-up of intracranial aneurysm clipping

    International Nuclear Information System (INIS)

    Yang Yunjun; Chen Weijian; Hu Zhangyong; Wu Enfu; Wang Meihao; Zhuge Qichuan; Zhongming; Cheng Jingliang; Ren Cuiping; Zhang Yong

    2008-01-01

    Objective: To evaluate multi-slice three-dimensional CT angiography (MS 3D-CTA) for the follow-up of intracranial aneurysm clipping. Methods: MS 3D-CTA studies of 16 patients with intracranial aneurysm clipping were retrospectively analyzed. The patients were scanned on a 16-slice spiral CT (GE LightSpeed Pro). Volume rendering (VR), thin maximum intensity projection (thin MIP) and multi-planar reconstruction (MPR) were employed for image postprocessing in all cases. Results: There were 17 clips in the 16 patients. Six clips were located at the posterior communicating artery, 5 at the anterior communicating artery, 4 at the middle cerebral artery, and the remaining 2 clips were located at the pericallosal artery in 1 patient. No abnormalities were found in the aneurysm clipping region in 7 cases by MS 3D-CTA. There were residual aneurysms in 2 cases, parent artery stenosis in 4 cases, and artery spasm in 3 cases. There was no parent artery occlusion or clip displacement in any case. VR showed excellent 3D spatial relations between the clip and parent artery in 12 cases, and good relations in 3 cases. The 1 case with 2 clips in the pericallosal artery showed heavy beam-hardening artifacts. The size and shape of the aneurysm clips were clearly depicted by MPR and thin MIP, while the 3D spatial relation of the aneurysm clip and parent artery was poorly shown. Conclusion: MS 3D-CTA is a safe and efficient method for the follow-up of intracranial aneurysm clipping. Combining VR with MPR or thin MIP can clearly reveal postoperative changes after aneurysm clipping. (authors)

  6. 76 FR 44575 - Paper Clips From the People's Republic of China: Continuation of the Antidumping Duty Order

    Science.gov (United States)

    2011-07-26

    ..., butterfly clips, binder clips, or other paper fasteners that are not made wholly of wire of base metal and... DEPARTMENT OF COMMERCE International Trade Administration [A-570-826] Paper Clips From the People... the antidumping duty order on paper clips from the People's Republic of China (``PRC'') would likely...

  7. Using Short Movie and Television Clips in the Economics Principles Class

    Science.gov (United States)

    Sexton, Robert L.

    2006-01-01

    The author describes a teaching method that uses powerful contemporary media, movie and television clips, to demonstrate the enormous breadth and depth of economic concepts. Many different movie and television clips can be used to show the power of economic analysis. The author describes the scenes and the economic concepts within those scenes for…

  8. Automatic assessment of mitral regurgitation severity based on extensive textural features on 2D echocardiography videos.

    Science.gov (United States)

    Moghaddasi, Hanie; Nourian, Saeed

    2016-06-01

    Heart disease is the major cause of death, as well as a leading cause of disability, in developed countries. Mitral Regurgitation (MR) is a common heart disease which does not cause symptoms until its end stage; early diagnosis is therefore of crucial importance in the treatment process. Echocardiography is a common method for assessing the severity of MR. Hence, a method based on echocardiography videos, image processing techniques and artificial intelligence could be helpful for clinicians, especially in borderline cases. In this paper, we introduce novel features to detect micro-patterns in echocardiography images in order to determine the severity of MR. Extensive Local Binary Pattern (ELBP) and Extensive Volume Local Binary Pattern (EVLBP) are presented as image descriptors which include details from different viewpoints of the heart in the feature vectors. Support Vector Machine (SVM), Linear Discriminant Analysis (LDA) and Template Matching techniques are used as classifiers to determine the severity of MR based on the textural descriptors. The SVM classifier with Extensive Uniform Local Binary Pattern (ELBPU) and Extensive Volume Local Binary Pattern (EVLBP) achieves the best accuracy: 99.52%, 99.38%, 99.31% and 99.59%, respectively, for the detection of Normal, Mild MR, Moderate MR and Severe MR subjects in echocardiography videos. The proposed method achieves 99.38% sensitivity and 99.63% specificity for the detection of the severity of MR and normal subjects. Copyright © 2016 Elsevier Ltd. All rights reserved.
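
The ELBP/EVLBP descriptors extend the classic Local Binary Pattern. As a rough, hedged sketch of the underlying texture encoding (plain 8-neighbour LBP, not the authors' extended variants), each pixel is coded by thresholding its neighbours against it, and the codes are pooled into a histogram that would then feed a classifier such as an SVM:

```python
def lbp_code(img, y, x):
    """8-neighbour Local Binary Pattern code for the pixel at (y, x):
    one bit per neighbour, set when the neighbour >= the centre value."""
    c = img[y][x]
    # clockwise neighbour offsets starting at the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offs):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

# a flat 5x5 patch: every interior pixel yields code 255 (all neighbours >= centre)
flat = [[5] * 5 for _ in range(5)]
hist = lbp_histogram(flat)
```

In a texture pipeline, one such histogram per image region (or per frame, for the volume variants) is concatenated into the feature vector.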

  9. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    Science.gov (United States)

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications such as face verification, face recognition, and image search. Examples of facial attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability exhibited by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on the large, real-world video database McGillFaces [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches by up to 16.96% and 10.13% for the facial attributes of gender and facial hair, respectively.

  10. Recommendations for recognizing video events by concept vocabularies

    Science.gov (United States)

    2014-06-01

    represents a video in terms of low-level audiovisual features [16,38,50,35,15,19,37]. In general, these methods first extract from the video various types of...interpretable, but is also reported to outperform the state-of-the-art low-level audiovisual features in recognizing events [31,33]. Rather than training...concept detector accuracy. As a consequence, the vocabulary concepts do not necessarily have a semantic interpretation needed to explain the video content

  11. Heart rate measurement based on face video sequence

    Science.gov (United States)

    Xu, Fang; Zhou, Qin-Wu; Wu, Peng; Chen, Xing; Yang, Xiaofeng; Yan, Hong-jian

    2015-03-01

    This paper proposes a new non-contact heart rate measurement method based on photoplethysmography (PPG) theory. With this method, heart rate can be measured remotely with a camera and ambient light. We collected video sequences of subjects and detected remote PPG signals from them. The remote PPG signals were analyzed with two methods: Blind Source Separation Technology (BSST) and Cross Spectral Power Technology (CSPT). BSST is a commonly used method, while CSPT is applied to the study of remote PPG signals for the first time in this paper. Both methods can recover heart rate, but compared with BSST, CSPT has a clearer physical meaning and lower computational complexity. Our work shows that heart rates detected by the CSPT method agree well with heart rates measured by a finger-clip oximeter. With its good accuracy and low computational complexity, the CSPT method has good prospects for application in home medical devices and mobile health devices.
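
The abstract does not give CSPT's formulas; the sketch below only illustrates the general principle shared by such spectral methods, i.e. locating the dominant spectral peak of the recovered pulse trace inside the physiologically plausible heart-rate band. The frame rate, band limits and synthetic trace are illustrative assumptions, not values from the paper:

```python
import math

def heart_rate_bpm(signal, fs, lo=0.7, hi=4.0, step=0.01):
    """Estimate heart rate by scanning the 0.7-4.0 Hz band (42-240 bpm)
    for the frequency with the largest spectral power in the PPG trace."""
    best_f, best_p = lo, -1.0
    for k in range(int(round((hi - lo) / step)) + 1):
        f = lo + k * step
        re = sum(s * math.cos(2 * math.pi * f * n / fs) for n, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * f * n / fs) for n, s in enumerate(signal))
        p = re * re + im * im
        if p > best_p:
            best_f, best_p = f, p
    return 60.0 * best_f

# synthetic 10 s pulse trace sampled at a 30 fps camera frame rate,
# oscillating at 1.2 Hz, i.e. 72 beats per minute
fs = 30.0
trace = [math.sin(2 * math.pi * 1.2 * n / fs) for n in range(300)]
bpm = heart_rate_bpm(trace, fs)
```

On this clean synthetic trace the estimate lands on the 1.2 Hz peak; a real camera trace would first need detrending and band-pass filtering.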

  12. The Development and Validation of a Human Systems Integration (HSI) Program for the Canadian Department of National Defence (DND)

    Science.gov (United States)

    2008-09-01

    can be animated. The Animation Module can animate a mannequin to visualise (through the mannequin’s eyes) the execution of a task or an operation...soldier commentary and audio clips, observe information presented in tables, graphs and schematics and view videos of soldiers in action. Key features

  13. Magnetic resonance angiography with ultrashort echo times reduces the artefact of aneurysm clips

    International Nuclear Information System (INIS)

    Goenner, F.; Heid, O.; Remonda, L.; Schroth, G.; Loevblad, K.O.; Guzman, R.; Barth, A.

    2002-01-01

    We evaluated the ability of an ultrashort echo time (TE) three-dimensional (3D) time-of-flight (TOF) magnetic resonance angiography (MRA) sequence to reduce the metal artefact of intracranial aneurysm clips and to display adjacent cerebral arteries. In five patients (aged 8-72 years) treated with Elgiloy or Phynox aneurysm clips we prospectively performed a conventional (TE 6.0 ms) and a new ultrashort TE (TE 2.4 ms) 3D TOF MRA. We compared the diameter of the clip-induced susceptibility artefact and the detectability of flow in adjacent vessels. The mean artefact diameter was 22.3±6.4 mm (range 14-38 mm) with the ultrashort TE and 27.7±6.4 mm (range 19-45 mm) with the conventional MRA (P<0.0001). This corresponded to a diameter reduction of 19.5±9.2%. More parts of adjacent vessels were detected, but with less intense flow signal. The aneurysm dome and neck remained within the area of signal loss and were therefore not displayed. Ultrashort TE MRA is a noninvasive and fast method for improving detection of vessels adjacent to clipped intracranial aneurysms, by reducing clip-induced susceptibility artefact. The method cannot, however, be used to show remnants of the aneurysm neck or sac as a result of imperfect clipping. (orig.)

  14. Fabrication and Characterization of ZnO Nano-Clips by the Polyol-Mediated Process.

    Science.gov (United States)

    Wang, Mei; Li, Ai-Dong; Kong, Ji-Zhou; Gong, You-Pin; Zhao, Chao; Tang, Yue-Feng; Wu, Di

    2018-02-09

    ZnO nano-clips with good monodispersion were successfully prepared using zinc acetate hydrate (Zn(OAc)2·nH2O) as the Zn source and ethylene glycol (EG) as the solvent via a simple solution-based polyol process. The effect of solution concentration on the formation of ZnO nano-clips was investigated in depth. We show for the first time that 0.01 M Zn(OAc)2·nH2O can react with EG without added water or alkali, producing ZnO nano-clips with a polycrystalline wurtzite structure at 170 °C. The as-synthesized ZnO nano-clips consist of aggregated nanocrystals (~5 to 15 nm) with a high specific surface area of 88 m2/g. The shapes of the ZnO nano-clips remain essentially unchanged, with improved crystallinity, after annealing at 400-600 °C. Low solution concentration and a slight amount of H2O play a decisive role in ZnO nano-clip formation. When the solution concentration is ≤ 0.0125 M, the complexing and polymerization reactions between Zn(OAc)2·nH2O and EG predominate, mainly yielding ZnO nano-clips. When the solution concentration is ≥ 0.015 M, the alcoholysis and polycondensation reactions of Zn(OAc)2·nH2O and EG become dominant, leading to the formation of spherical and elliptical ZnO particles. A possible growth mechanism based on competition between the complexing and alcoholysis of Zn(OAc)2·nH2O and EG is proposed.

  15. A Novel Quantum Video Steganography Protocol with Large Payload Based on MCQI Quantum Video

    Science.gov (United States)

    Qu, Zhiguo; Chen, Siyi; Ji, Sai

    2017-11-01

    As one of the important multimedia forms in quantum networks, quantum video is attracting increasing attention from experts and scholars worldwide. A secure quantum video steganography protocol with large payload, based on the video strip encoding method called MCQI (Multi-Channel Quantum Images), is proposed in this paper. The new protocol randomly embeds the secret information, itself in the form of quantum video, into the quantum carrier video on the basis of unique features of the video frames. Because it embeds an entire quantum video as the secret information for covert communication, its capacity is greatly expanded compared with previous quantum steganography schemes. Meanwhile, the new protocol also achieves good security and imperceptibility by virtue of the randomization of the embedding positions and the efficient use of redundant frames. Furthermore, the receiver can extract the secret information from the stego video without retaining the original carrier video, and can afterwards restore the original quantum video. Simulation and experiment results prove that the algorithm not only has good imperceptibility and high security, but also a large payload.

  16. Children aged 6-24 months like to watch YouTube videos but could not learn anything from them.

    Science.gov (United States)

    Yadav, Savita; Chakraborty, Pinaki; Mittal, Prabhat; Arora, Udit

    2018-03-20

    Parents sometimes show young children YouTube videos on their smartphones. We studied the interactions of 55 Indian children, born between December 2014 and May 2015, who watched YouTube videos when they were 6-24 months old. The children were recruited by the researchers through professional and personal contacts and were visited by the same two observers at four ages, for at least 10 minutes each time. The observers recorded the children's ability to interact with touch screens and to identify people in videos, and noted which videos attracted them the most. The children were attracted to music at six months of age and were interested in watching the videos at 12 months. They could identify their parents in videos at 12 months and themselves by 24 months. They started touching the screen at 18 months and could press the buttons that appeared on the screen, but did not understand their use. The children preferred watching dance performances by multiple artists with melodic music, advertisements for products they used, and videos showing toys and balloons. Children up to two years of age could be entertained and kept busy by showing them YouTube clips on smartphones, but did not learn anything from the videos. ©2018 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.

  17. An Intervention Based on Video Feedback and Questioning to Improve Tactical Knowledge in Expert Female Volleyball Players.

    Science.gov (United States)

    Moreno, M Perla; Moreno, Alberto; García-González, Luis; Ureña, Aurelio; Hernández, César; Del Villar, Fernando

    2016-06-01

    This study applied an intervention program, based on video feedback and questioning, to expert female volleyball players to improve their tactical knowledge. The sample consisted of eight female attackers (26 ± 2.6 years old) from the Spanish National Volleyball Team, who were divided into an experimental group (n = 4) and a control group (n = 4). The video feedback and questioning program applied in the study was developed over eight reflective sessions and consisted of three phases: viewing of the selected actions, self-analysis and reflection by the attacker, and joint player-coach analysis. The attackers were videotaped in an actual game and four clips (situations) of each of the attackers were chosen for each reflective session. Two of the clips showed a correct action by the attacker, and two showed an incorrect decision. Tactical knowledge was measured by problem representation with a verbal protocol. The members of the experimental group showed adaptations in long-term memory, significantly improving their tactical knowledge. With respect to conceptual content, there was an increase in the total number of conditions verbalized by the players; with respect to conceptual sophistication, there was an increase in the indication of appropriate conditions with two or more details; and finally, with respect to conceptual structure, there was an increase in the use of double or triple conceptual structures. The intervention program, based on video feedback and questioning, in addition to on-court training sessions of expert volleyball players, appears to improve the athletes' tactical knowledge. © The Author(s) 2016.

  18. AN ADAPTIVE ORGANIZATION METHOD OF GEOVIDEO DATA FOR SPATIO-TEMPORAL ASSOCIATION ANALYSIS

    Directory of Open Access Journals (Sweden)

    C. Wu

    2015-07-01

    Public security incidents have become increasingly challenging to address, given their new features: large-scale mobility, multi-stage dynamic evolution, and spatio-temporal concurrency and uncertainty in the complex urban environment, which require spatio-temporal association analysis across multiple regional video sources for global cognition. However, existing video data organization methods, which treat video as a property of a spatial object or of a position in space, sever the spatio-temporal relationships among scattered video shots captured from multiple channels, limit query functions to interactive retrieval between a camera and its video clips, and hinder the comprehensive management of event-related scattered video shots. GeoVideo, which maps video frames onto geographic space, is a new approach to representing the geographic world; it promotes security monitoring from a spatial perspective and offers a highly feasible solution to this problem. This paper analyzes large-scale personnel mobility in public safety events and proposes a multi-level, event-related organization method for massive GeoVideo data based on spatio-temporal trajectories. It designs a unified object identifier (ID) structure to implicitly store the spatio-temporal relationships of scattered video clips and to support distributed storage management of massive cases. Finally, the validity and feasibility of the method are demonstrated through suspect-tracking experiments.
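
The abstract does not give the layout of its unified object ID. The sketch below is a hypothetical composite identifier showing how event, camera and time-window fields could implicitly encode the spatio-temporal relationship between scattered clips; all field names and widths are illustrative assumptions, not the paper's design:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ClipID:
    """Hypothetical composite ID for one GeoVideo clip."""
    event: int     # public-safety event the clip is associated with
    camera: int    # capturing camera / video channel
    start: int     # clip start time (seconds, common clock)
    duration: int  # clip length in seconds

    def encode(self):
        """Pack the fields into a fixed-width hex string key."""
        return f"{self.event:08x}-{self.camera:06x}-{self.start:010x}-{self.duration:06x}"

    @staticmethod
    def decode(s):
        e, c, t, d = (int(p, 16) for p in s.split("-"))
        return ClipID(e, c, t, d)

    def overlaps(self, other):
        """Same event and overlapping time windows: candidate pair for
        cross-camera trajectory association."""
        return (self.event == other.event
                and self.start < other.start + other.duration
                and other.start < self.start + self.duration)

a = ClipID(event=1, camera=2, start=1000, duration=60)
b = ClipID(event=1, camera=3, start=1030, duration=60)
```

Because the event and time window live inside the key itself, all clips of one event cluster together under a plain key-ordered distributed store, without a join against per-camera tables.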

  19. Endoscopic clipping for gastrointestinal tumors. A method to define the target volume more precisely

    International Nuclear Information System (INIS)

    Riepl, M.; Klautke, G.; Fehr, R.; Fietkau, R.; Pietsch, A.

    2000-01-01

    Background: In many cases it is not possible to exactly define the extension of carcinomas of the gastrointestinal tract on the computed tomography scans made for 3-D radiation treatment planning. Consequently, external beam radiotherapy planning is made more difficult for the gross tumor volume as well as, in some cases, for the clinical target volume. Patients and Methods: Eleven patients with macroscopic tumors (rectal cancer n = 5, cardiac cancer n = 6) were included. Just before 3-D planning, the oral and aboral borders of the tumor were marked endoscopically with hemoclips. Subsequently, CT scans for radiotherapy planning were made and the clinical target volume was defined. Five to 6 weeks thereafter, new CT scans were done to define the gross tumor volume for boost planning. Two investigators independently assessed the influence of the hemoclips on the different planning volumes and whether the number of clips was sufficient to define the gross tumor volume. Results: In all patients, implantation of the clips was accomplished without complications. The start of radiotherapy was not delayed. With the help of the clips it was possible to exactly define the position and extension of the primary tumor. The clinical target volume was modified according to the position of the clips in 5/11 patients; the gross tumor volume was modified in 7/11 patients. The use of the clips made documentation and verification of the treatment portals on the simulator easier. Moreover, the clips helped the surgeon to define the primary tumor region after marked regression following neoadjuvant therapy in 3 patients. Conclusions: Endoscopic clipping of gastrointestinal tumors helps to define the tumor volumes more precisely in radiation therapy. The clips are easily recognized on the portal films and thus contribute to quality control. (orig.)

  20. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering the underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  1. Satisfaction with Online Teaching Videos: A Quantitative Approach

    Science.gov (United States)

    Meseguer-Martinez, Angel; Ros-Galvez, Alejandro; Rosa-Garcia, Alfonso

    2017-01-01

    We analyse the factors that determine the number of clicks on the "Like" button in online teaching videos, with a sample of teaching videos in the area of Microeconomics across Spanish-speaking countries. The results show that users prefer short online teaching videos. Moreover, some features of the videos have a significant impact on…

  2. Real-world experience of MitraClip for treatment of severe mitral regurgitation

    DEFF Research Database (Denmark)

    Chan, Pak Hei; She, Hoi Lam; Alegria-Barrero, Eduardo

    2012-01-01

    Percutaneous edge-to-edge mitral valve repair with the MitraClip(®) has been shown to be a safe and feasible alternative to conventional surgical mitral valve repair. Herein we report our experience with the MitraClip(®) in high-risk surgical candidates with severe mitral regurgitation (MR)....

  3. Iterative Signal Processing for Mitigation of Analog-to-Digital Converter Clipping Distortion in Multiband OFDMA Receivers

    Directory of Open Access Journals (Sweden)

    Markus Allén

    2012-01-01

    In modern wideband communication receivers, the large dynamic range of the input signal is a fundamental problem. Unintentional signal clipping occurs if the receiver front-end, with its analog-to-digital interface, cannot respond to rapidly varying conditions. This paper discusses digital post-processing compensation of such unintentional clipping in multiband OFDMA receivers. The proposed method iteratively mitigates the clipping distortion by exploiting the symbol decisions. The performance of the proposed method is illustrated with various computer simulations and also verified by laboratory measurements with commercially available analog-to-digital hardware. It is shown that the clipping compensation algorithm, implemented in a turbo-decoding OFDM receiver, is able to remove almost all of the clipping distortion even under significant clipping in fading channel conditions. That is, it is possible to nearly recover the receiver performance to the level that would be achieved in the equivalent unclipped situation.
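
A minimal sketch of the decision-directed idea described above, under simplifying assumptions not in the paper (a single OFDM symbol, QPSK subcarriers, no channel, no turbo decoding, and clipped-sample positions assumed detectable from the ADC's saturated output codes): slice the demodulated subcarriers, regenerate the clean waveform from the decisions, and overwrite only the saturated samples.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                              # number of subcarriers
bits_i = 2 * rng.integers(0, 2, N) - 1
bits_q = 2 * rng.integers(0, 2, N) - 1
symbols = (bits_i + 1j * bits_q) / np.sqrt(2)       # unit-energy QPSK
x = np.fft.ifft(symbols)                            # clean OFDM time signal

# ADC clips the real and imaginary rails independently at full scale +/- A;
# saturated positions are assumed known from the ADC's full-scale codes
A = 0.6 * np.abs(x.real).max()
y = np.clip(x.real, -A, A) + 1j * np.clip(x.imag, -A, A)
clipped = (np.abs(x.real) > A) | (np.abs(x.imag) > A)

def slice_qpsk(Y):
    """Hard QPSK decisions on the demodulated subcarriers."""
    return (np.sign(Y.real) + 1j * np.sign(Y.imag)) / np.sqrt(2)

est = y.copy()
mse_before = np.mean(np.abs(est - x) ** 2)
for _ in range(5):                                  # iterative mitigation
    decisions = slice_qpsk(np.fft.fft(est))         # demodulate and slice
    regen = np.fft.ifft(decisions)                  # rebuild clean waveform
    est[clipped] = regen[clipped]                   # replace only saturated samples
mse_after = np.mean(np.abs(est - x) ** 2)
```

With mild clipping the decisions are mostly correct from the first pass, so each iteration moves the saturated samples toward their unclipped values.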

  4. What kind of erotic film clips should we use in female sex research? An exploratory study.

    Science.gov (United States)

    Woodard, Terri L; Collins, Karen; Perez, Mindy; Balon, Richard; Tancer, Manuel E; Kruger, Michael; Moffat, Scott; Diamond, Michael P

    2008-01-01

    Erotic film clips are used in sex research, including studies of female sexual dysfunction and arousal. However, little is known about which clips optimize female sexual response, and their use is not well standardized. The aims were: to identify the types of film clips that are most mentally appealing and physically arousing to women, for use in future studies of sexual function and dysfunction; to explore the relationship between mental appeal and reported physical arousal; and to characterize the content of the films found to be the most and least appealing and arousing. Twenty-one women viewed 90 segments of erotic film clips and rated (i) how mentally appealing and (ii) how physically aroused they were by each clip. The data were analyzed with descriptive statistics. The means of the mental and self-reported physical responses were calculated to determine the most and least appealing/arousing film clips, and Pearson correlations were calculated to assess the relationship between mental appeal and reported physical arousal. The main outcome measures were self-reported mental and physical arousal. Of the 90 film clips, 18 were identified as the most mentally appealing and physically arousing, while nine were identified as the least mentally appealing and physically arousing. The level of mental appeal was positively correlated with the level of perceived physical arousal in both categories (r = 0.61). Erotic film clips reliably produced a state of self-reported arousal in women. The most appealing and arousing films tended to depict heterosexual vaginal intercourse. Film clips with these attributes should be used in future research on the sexual function and response of women.

  5. Effect of apical meristem clipping on carbon allocation and morphological development of white oak seedlings

    Science.gov (United States)

    Paul P. Kormanik; Shi-Jean S. Sung; T.L. Kormanik; Stanley J. Zarnoch

    1994-01-01

    Seedlings from three open-pollinated half-sib white oak seedlots were clipped in mid-July and their development was compared to nonclipped controls after one growing season. In general, when data were analyzed by family, clipped seedlings were significantly less desirable in three to six of the eight variables tested. Numerically, in all family seedlots, the clipped...

  6. Impact of Clipping versus Coiling on Postoperative Hemodynamics and Pulmonary Edema after Subarachnoid Hemorrhage

    Directory of Open Access Journals (Sweden)

    Nobutaka Horie

    2014-01-01

    Volume management is critical for the assessment of cerebral vasospasm after aneurysmal subarachnoid hemorrhage (SAH). This multicenter prospective cohort study compared the impact of surgical clipping versus endovascular coiling on postoperative hemodynamics and pulmonary edema in patients with SAH. Hemodynamic parameters were measured for 14 days using a transpulmonary thermodilution system. The study included 202 patients: 160 who underwent clipping and 42 who underwent coiling. There were no differences in global ejection fraction (GEF), cardiac index, systemic vascular resistance index, or global end-diastolic volume index between the clipping and coiling groups in the early period. However, the extravascular lung water index (EVLWI) and pulmonary vascular permeability index (PVPI) were significantly higher in the clipping group in the vasospasm period. Postoperative C-reactive protein (CRP) level was higher in the clipping group and was significantly correlated with postoperative brain natriuretic peptide level. Multivariate analysis found that PVPI and GEF were independently associated with high EVLWI in the early period, suggesting cardiogenic edema, and that CRP and PVPI, but not GEF, were independently associated with high EVLWI in the vasospasm period, suggesting noncardiogenic edema. In conclusion, clipping affects postoperative CRP level and may thereby increase noncardiogenic pulmonary edema in the vasospasm period. This trial is registered with the University Hospital Medical Information Network (UMIN000003794).

  7. Are YouTube videos accurate and reliable on basic life support and cardiopulmonary resuscitation?

    Science.gov (United States)

    Yaylaci, Serpil; Serinken, Mustafa; Eken, Cenker; Karcioglu, Ozgur; Yilmaz, Atakan; Elicabuk, Hayri; Dal, Onur

    2014-10-01

    The objective of this study is to investigate the reliability and accuracy of the information in YouTube videos related to CPR and BLS against the 2010 CPR guidelines. YouTube was queried using four search terms, 'CPR', 'cardiopulmonary resuscitation', 'BLS' and 'basic life support', between 2011 and 2013. The sources that uploaded the videos, the recording time, the number of viewers in the study period, and the inclusion of humans or manikins were recorded. The videos were rated according to whether they displayed the correct order of resuscitative efforts in full accordance with the 2010 CPR guidelines. Two hundred and nine videos meeting the inclusion criteria comprised the study sample subjected to analysis. The median score of the videos was 5 (IQR: 3.5-6). Only 11.5% (n = 24) of the videos were found to be compatible with the 2010 CPR guidelines with regard to the sequence of interventions. Videos uploaded by guideline bodies had significantly higher download rates than videos uploaded by other sources. The source of a video and its date of upload (year) had no significant effect on the score received (P = 0.615 and 0.513, respectively). The number of downloads did not differ between videos compatible and incompatible with the guidelines (P = 0.832). Videos downloaded more than 10,000 times had a higher score than the others (P = 0.001). The majority of YouTube video clips purporting to be about CPR are not relevant educational material. Of those that are focused on teaching CPR, only a small minority optimally meet the 2010 Resuscitation Guidelines. © 2014 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  8. User-assisted video segmentation system for visual communication

    Science.gov (United States)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic tracking of the feature points, and contour formation. This splitting relieves the computer of ill-posed fully automatic segmentation problems and allows a higher level of flexibility. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, followed by a point insertion process that provides the feature points for the next frame's tracking.
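
The abstract does not detail its eigenvalue-based adjustment. A common concrete form of such a criterion is the Shi-Tomasi minimum-eigenvalue test on the 2x2 gradient structure tensor, sketched here as an illustration (the function names and the quality threshold are assumptions, not the paper's method):

```python
import math

def min_eigenvalue(ixx, ixy, iyy):
    """Smaller eigenvalue of the 2x2 gradient structure tensor
    [[ixx, ixy], [ixy, iyy]]; large values mark corner-like points
    that are reliable to track."""
    half_trace = (ixx + iyy) / 2.0
    disc = math.sqrt(((ixx - iyy) / 2.0) ** 2 + ixy * ixy)
    return half_trace - disc

def good_features(tensors, quality=0.1):
    """Keep candidate indices whose min-eigenvalue score is within
    `quality` of the best candidate (Shi-Tomasi style thresholding)."""
    scores = [(min_eigenvalue(*t), i) for i, t in enumerate(tensors)]
    best = max(s for s, _ in scores)
    return [i for s, i in scores if s >= quality * best]
```

A corner has two large eigenvalues, an edge only one; thresholding the smaller eigenvalue keeps only points that constrain motion in both directions.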

  9. Digital video transcoding for transmission and storage

    CERN Document Server

    Sun, Huifang; Chen, Xuemin

    2004-01-01

    Professionals in the video and multimedia industries need a book that explains industry standards for video coding and how to convert the compressed information between standards. Digital Video Transcoding for Transmission and Storage answers this demand while also supplying the theories and principles of video compression and transcoding technologies. Emphasizing digital video transcoding techniques, this book summarizes its content via examples of practical methods for transcoder implementation. It relates almost all of its featured transcoding technologies to practical applications. This vol

  10. Receiver-based recovery of clipped ofdm signals for papr reduction: A bayesian approach

    KAUST Repository

    Ali, Anum; Al-Rabah, Abdullatif R.; Masood, Mudassir; Al-Naffouri, Tareq Y.

    2014-01-01

    at the receiver for information restoration. In this paper, we acknowledge the sparse nature of the clipping signal and propose a low-complexity Bayesian clipping estimation scheme. The proposed scheme utilizes a priori information about the sparsity rate

  11. No-Reference Video Quality Assessment by HEVC Codec Analysis

    DEFF Research Database (Denmark)

    Huang, Xin; Søgaard, Jacob; Forchhammer, Søren

    2015-01-01

    This paper proposes a No-Reference (NR) Video Quality Assessment (VQA) method for videos subject to the distortion introduced by High Efficiency Video Coding (HEVC). The proposed assessment can be performed either as a Bitstream-Based (BB) method or as a Pixel-Based (PB) method. It extracts or estimates the transform coefficients, estimates the distortion, and assesses the video quality. The proposed scheme generates VQA features based on Intra-coded frames, and then maps the features using an Elastic Net to predict subjective video quality. A set of HEVC-coded 4K UHD sequences is tested. Results show that the quality scores computed by the proposed method are highly correlated with the subjective assessment.

  12. Video2vec Embeddings Recognize Events When Examples Are Scarce.

    Science.gov (United States)

    Habibian, Amirhossein; Mensink, Thomas; Snoek, Cees G M

    2017-10-01

    This paper aims for event recognition when video examples are scarce or even completely absent. The key in such a challenging setting is a semantic video representation. Rather than building the representation from individual attribute detectors and their annotations, we propose to learn the entire representation from freely available web videos and their descriptions using an embedding between video features and term vectors. In our proposed embedding, which we call Video2vec, the correlations between the words are utilized to learn a more effective representation by optimizing a joint objective balancing descriptiveness and predictability. We show how learning the Video2vec embedding using a multimodal predictability loss, including appearance, motion and audio features, results in a better predictable representation. We also propose an event specific variant of Video2vec to learn a more accurate representation for the words, which are indicative of the event, by introducing a term sensitive descriptiveness loss. Our experiments on three challenging collections of web videos from the NIST TRECVID Multimedia Event Detection and Columbia Consumer Videos datasets demonstrate: i) the advantages of Video2vec over representations using attributes or alternative embeddings, ii) the benefit of fusing video modalities by an embedding over common strategies, iii) the complementarity of term sensitive descriptiveness and multimodal predictability for event recognition. By its ability to improve predictability of present day audio-visual video features, while at the same time maximizing their semantic descriptiveness, Video2vec leads to state-of-the-art accuracy for both few- and zero-example recognition of events in video.
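
    The joint-embedding idea behind Video2vec can be sketched in a few lines: video features and event term vectors are projected into a shared space, and events are ranked by similarity there. In the sketch below the projection matrices are random stand-ins for the learned ones (in Video2vec they are trained to balance descriptiveness and predictability), and all dimensions, feature values, and event names are hypothetical.

```python
import math
import random

random.seed(0)

def matvec(W, x):
    # Multiply matrix W (list of rows) by vector x.
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def cosine(a, b):
    na = math.sqrt(sum(v * v for v in a))
    nb = math.sqrt(sum(v * v for v in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

# Hypothetical dimensions: 8-d video features, 5-d term vectors, 4-d shared space.
D_VID, D_TXT, D_EMB = 8, 5, 4

# Random stand-ins for the learned projections.
W_vid = [[random.gauss(0, 1) for _ in range(D_VID)] for _ in range(D_EMB)]
W_txt = [[random.gauss(0, 1) for _ in range(D_TXT)] for _ in range(D_EMB)]

def score(video_feat, term_vec):
    """Similarity of a video and an event description in the shared space."""
    return cosine(matvec(W_vid, video_feat), matvec(W_txt, term_vec))

video = [random.gauss(0, 1) for _ in range(D_VID)]
events = {"birthday_party": [1, 0, 1, 0, 0], "dog_show": [0, 1, 0, 1, 1]}
ranked = sorted(events, key=lambda e: score(video, events[e]), reverse=True)
print(ranked[0])  # best-matching event label under the toy projections
```

    Zero-example recognition then amounts to ranking candidate event descriptions by their similarity to the video in the shared space, with no labeled video examples needed.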

  13. Indexed Captioned Searchable Videos: A Learning Companion for STEM Coursework

    Science.gov (United States)

    Tuna, Tayfun; Subhlok, Jaspal; Barker, Lecia; Shah, Shishir; Johnson, Olin; Hovey, Christopher

    2017-02-01

    Videos of classroom lectures have proven to be a popular and versatile learning resource. A key shortcoming of the lecture video format is accessing the content of interest hidden in a video. This work meets this challenge with an advanced video framework featuring topical indexing, search, and captioning (ICS videos). Standard optical character recognition (OCR) technology was enhanced with image transformations for extraction of text from video frames to support indexing and search. The images and text on video frames are analyzed to divide lecture videos into topical segments. The ICS video player integrates indexing, search, and captioning in video playback, providing instant access to the content of interest. This video framework has been used by more than 70 courses in a variety of STEM disciplines and assessed by more than 4000 students. Results presented from the surveys demonstrate the value of the videos as a learning resource and the role played by videos in a student's learning process. Survey results also establish the value of indexing and search features in a video platform for education. This paper reports on the development and evaluation of the ICS video framework and over 5 years of usage experience in several STEM courses.

  14. Visual hashing of digital video : applications and techniques

    NARCIS (Netherlands)

    Oostveen, J.; Kalker, A.A.C.M.; Haitsma, J.A.; Tescher, A.G.

    2001-01-01

    This paper presents the concept of robust video hashing as a tool for video identification. We present considerations and a technique for (i) extracting essential perceptual features from a moving image sequence and (ii) identifying any sufficiently long unknown video segment by efficiently

  15. [Clip Sheets from BOCES. Opportunities. Health. Careers. = Oportunidades. Salud. Una Camera En...

    Science.gov (United States)

    State Univ. of New York, Geneseo. Coll. at Geneseo. Migrant Center.

    This collection of 83 clip sheets, or classroom handouts, was created to help U.S. migrants learn more about health, careers, and general "opportunities" including education programs. They are written in both English and Spanish and are presented in an easily understandable format. Health clip-sheet topics include the following: Abuse; AIDS;…

  16. Delayed recovery of adipsic diabetes insipidus (ADI) caused by elective clipping of anterior communicating artery and left middle cerebral artery aneurysms.

    Science.gov (United States)

    Tan, Jeffrey; Ndoro, Samuel; Okafo, Uchenna; Garrahy, Aoife; Agha, Amar; Rawluk, Danny

    2016-12-16

    Adipsic diabetes insipidus (ADI) is an extremely rare complication following microsurgical clipping of anterior communicating artery (ACoA) and left middle cerebral artery (MCA) aneurysms. It poses a significant management challenge due to the absent thirst response and, in our patient, co-existing cognitive impairment. Recovery from adipsic DI has hitherto been reported only once. A 52-year-old man, who had undergone clipping of a left posterior communicating artery aneurysm 20 years previously, underwent microsurgical clipping of ACoA and left MCA aneurysms without any intraoperative complications. Shortly after surgery, he developed clear features of ADI, with severe adipsic hypernatraemia and hypotonic polyuria, associated with cognitive impairment; the diagnosis was confirmed with biochemical investigations and cognitive assessments. He was treated with DDAVP along with a strict intake of oral fluids at scheduled times to maintain eunatremia. Repeat assessment at six months showed recovery of thirst and a normal water deprivation test. Management of ADI with cognitive impairment is complex and requires a multidisciplinary approach. Recovery from ADI is very rare, and this is only the second report of recovery in this particular clinical setting.

  17. Roadside video data analysis deep learning

    CERN Document Server

    Verma, Brijesh; Stockwell, David

    2017-01-01

    This book highlights the methods and applications for roadside video data analysis, with a particular focus on the use of deep learning to solve roadside video data segmentation and classification problems. It describes system architectures and methodologies that are specifically built upon learning concepts for roadside video data processing, and offers a detailed analysis of the segmentation, feature extraction and classification processes. Lastly, it demonstrates the applications of roadside video data analysis including scene labelling, roadside vegetation classification and vegetation biomass estimation in fire risk assessment.

  18. From benchmarking HITS-CLIP peak detection programs to a new method for identification of miRNA-binding sites from Ago2-CLIP data.

    Science.gov (United States)

    Bottini, Silvia; Hamouda-Tekaya, Nedra; Tanasa, Bogdan; Zaragosi, Laure-Emmanuelle; Grandjean, Valerie; Repetto, Emanuela; Trabucchi, Michele

    2017-05-19

    Experimental evidence indicates that about 60% of miRNA-binding activity does not follow the canonical rule of seed matching between miRNA and target mRNAs, but rather reflects non-canonical miRNA targeting activity outside the seed or with seed-like motifs. Here, we propose a new unbiased method to identify canonical and non-canonical miRNA-binding sites from peaks identified by Ago2 Cross-Linked ImmunoPrecipitation associated with high-throughput sequencing (CLIP-seq). Since the quality of peaks is of pivotal importance for the final output of the proposed method, we provide a comprehensive benchmarking of four peak detection programs, namely CIMS, PIPE-CLIP, Piranha and Pyicoclip, on four publicly available Ago2-HITS-CLIP datasets and one unpublished in-house Ago2 dataset in stem cells. We measured the sensitivity, specificity and positional accuracy of miRNA-binding-site identification, and the agreement with TargetScan. Second, we developed a new pipeline, called miRBShunter, to identify canonical and non-canonical miRNA-binding sites based on de novo motif identification from Ago2 peaks and prediction of miRNA::RNA heteroduplexes. miRBShunter was tested and experimentally validated on the in-house Ago2 dataset and on an Ago2-PAR-CLIP dataset in human stem cells. Overall, we provide guidelines for choosing a suitable peak detection program and a new method for miRNA-target identification. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
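
    The "canonical rule" the abstract refers to can be made concrete: a canonical target site on the mRNA is the reverse complement of the miRNA seed (nucleotides 2-8). A minimal sketch, with a let-7-like miRNA and a made-up 3'UTR chosen purely for illustration:

```python
# Minimal sketch of canonical miRNA seed matching: the target site on the
# mRNA is the reverse complement of miRNA nucleotides 2-8 (a 7mer match).
COMP = {"A": "U", "U": "A", "G": "C", "C": "G"}

def revcomp(rna):
    return "".join(COMP[b] for b in reversed(rna))

def seed_site(mirna):
    """Reverse complement of miRNA positions 2-8 (1-based)."""
    return revcomp(mirna[1:8])

def find_canonical_sites(mirna, utr):
    """Return 0-based positions of canonical 7mer seed matches in a 3'UTR."""
    site = seed_site(mirna)
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "UGAGGUAGUAGGUUGUAUAGUU"   # let-7-like sequence
utr = "AAACUACCUCAAA"              # hypothetical 3'UTR fragment
print(find_canonical_sites(mirna, utr))  # [3]
```

    Non-canonical sites, by contrast, are exactly those Ago2 peaks where no such seed match is found, which is why the pipeline falls back on de novo motif identification and heteroduplex prediction.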

  19. An Innovative SIFT-Based Method for Rigid Video Object Recognition

    Directory of Open Access Journals (Sweden)

    Jie Yu

    2014-01-01

    Full Text Available This paper presents an innovative SIFT-based method for rigid video object recognition (hereafter called RVO-SIFT). Just as in the human visual system, this method unifies object recognition and feature updating into one organic process, using both trajectory and feature matching, and can thereby learn new features not only in the training stage but also in the recognition stage. This greatly improves the completeness of the video object's features automatically and, in turn, drastically increases the rate of correct recognition. The experimental results on real video sequences demonstrate its robustness and efficiency.

  20. Future of clip-on weapon sights: pros and cons from an applications perspective

    Science.gov (United States)

    Knight, C. Reed; Greenslade, Ken; Francisco, Glen

    2015-05-01

    US Domestic, International, and allied Foreign National Warfighters and Para-Military First Responders (Police, SWAT, Special Operations, Law Enforcement, Government, Security and more) are put in harm's way all the time. To complete their missions successfully and return home safely are the primary goals of these professionals. Tactical product improvements that affect mission effectiveness and soldier survivability are pivotal to understanding the past, present and future of Clip-On in-line weapon sights. Clip-On Weapon Sight (WS) technology was deemed an interim solution by the US Government for use until integrated and fused (day/night multi-sensor) Weapon Sights (WSs) were developed and fielded. Clip-On has since become the solution of choice for Users, Warriors, Soldiers and the US Government. SWaP-C (size, weight, power and cost) has been improved through progressive advances in Clip-On Image Intensified (I2), passive thermal, LL-CMOS and fused technology. Clip-On Weapon Sights are no longer mounting-position sensitive: they maintain aim-point boresight, so they can be used at longer ranges with increased capabilities while utilizing the existing zeroed weapon and day-sight optic. Active illuminated low-light-level (both analog I2 and digital LL-CMOS) imaging is a proven real-world technology that delivers daytime and low-light identification confidence. Passive thermal imaging is likewise a proven real-world technology that delivers daytime, nighttime and all-weather (including dirty battlefield) target detection confidence. Image processing detection algorithms with intelligent analytics promise to reduce Users', Warriors' and Soldiers' workloads and improve overall system engagement outcomes.
In order to understand the future of Clip-On in-line weapon sights, addressing pros and cons, this paper starts with an overview of historical weapon sight applications, technologies and stakeholder decisions.

  1. PAPR Reduction of FBMC by Clipping and Its Iterative Compensation

    Directory of Open Access Journals (Sweden)

    Zsolt Kollár

    2012-01-01

    Full Text Available Physical layers of communication systems using Filter Bank Multicarrier (FBMC as a modulation scheme provide low out-of-band leakage but suffer from the large Peak-to-Average Power Ratio (PAPR of the transmitted signal. Two special FBMC schemes are investigated in this paper: the Orthogonal Frequency Division Multiplexing (OFDM and the Staggered Multitone (SMT. To reduce the PAPR of the signal, time domain clipping is applied in both schemes. If the clipping is not compensated, the system performance is severely affected. To avoid this degradation, an iterative noise cancelation technique, Bussgang Noise Cancelation (BNC, is applied in the receiver. It is shown that clipping can be a good means for reducing the PAPR, especially for the SMT scheme. A novel modified BNC receiver is presented for SMT. It is shown how this technique can be implemented in real-life applications where special requirements must be met regarding the spectral characteristics of the transmitted signal.
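
    The core clipping step shared by these schemes is easy to sketch. Below, a complex Gaussian signal stands in for a multicarrier (OFDM/SMT) waveform, and the 1.5x-RMS clipping threshold is an illustrative choice rather than a value from the paper:

```python
import math
import random

random.seed(1)

def papr_db(signal):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    powers = [abs(s) ** 2 for s in signal]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

def clip(signal, a_max):
    """Time-domain amplitude clipping: limit |s| to a_max, keep the phase."""
    return [s if abs(s) <= a_max else s * (a_max / abs(s)) for s in signal]

# A sum of many subcarriers approaches a Gaussian distribution in the time
# domain, hence the large PAPR; we model that directly.
x = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(4096)]
rms = math.sqrt(sum(abs(s) ** 2 for s in x) / len(x))
threshold = 1.5 * rms  # illustrative clipping level
y = clip(x, threshold)

print(round(papr_db(x), 2), round(papr_db(y), 2))  # PAPR drops after clipping
```

    The clipped signal trades a lower PAPR for in-band distortion, which is exactly what the iterative Bussgang Noise Cancelation receiver then removes.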

  2. Acute Cholangitis following Intraductal Migration of Surgical Clips 10 Years after Laparoscopic Cholecystectomy

    Directory of Open Access Journals (Sweden)

    Natalie E. Cookson

    2015-01-01

    Full Text Available Background. Laparoscopic cholecystectomy represents the gold standard approach for treatment of symptomatic gallstones. Surgery-associated complications include bleeding, bile duct injury, and retained stones. Migration of surgical clips after cholecystectomy is a rare complication and may result in gallstone formation (“clip cholelithiasis”). Case Report. We report a case of a 55-year-old female patient who presented with right upper quadrant pain and severe sepsis, having undergone an uncomplicated laparoscopic cholecystectomy 10 years earlier. Computed tomography (CT) imaging revealed hyperdense material in the common bile duct (CBD) compatible with a retained calculus. Endoscopic retrograde cholangiopancreatography (ERCP) revealed appearances in keeping with a migrated surgical clip within the CBD. Balloon trawl successfully extracted the clip, alleviating the patient’s jaundice and sepsis. Conclusion. Intraductal clip migration is a rarely encountered complication after laparoscopic cholecystectomy which may lead to choledocholithiasis. Appropriate management requires timely identification and ERCP.

  3. Surgical clipping is still a good choice for the treatment of paraclinoid aneurysms

    Directory of Open Access Journals (Sweden)

    Felix Hendrik Pahl

    2016-04-01

    Full Text Available ABSTRACT Paraclinoid aneurysms are lesions located adjacent to the clinoid and ophthalmic segments of the internal carotid artery. In recent years, flow diverter stents have been introduced as a better endovascular technique for treatment of these aneurysms. Method: From 2009 to 2014, a total of 43 paraclinoid aneurysms in 43 patients were surgically clipped. We retrospectively reviewed the records of these patients to analyze clinical outcomes. Results: Twenty-six aneurysms (60.5%) were ophthalmic artery aneurysms, while 17 (39.5%) were superior hypophyseal artery aneurysms. The extradural approach to the clinoid process was used to clip these aneurysms. All aneurysms were clipped (complete exclusion in 100% on follow-up angiography). The length of follow-up ranged from 1 to 60 months (mean, 29.82 months). Conclusion: Surgical clipping continues to be a good option for the treatment of paraclinoid aneurysms.

  4. Biochemical Analysis Reveals the Multifactorial Mechanism of Histone H3 Clipping by Chicken Liver Histone H3 Protease

    KAUST Repository

    Chauhan, Sakshi

    2016-09-02

    Proteolytic clipping of histone H3 has been identified in many organisms. Despite several studies, the mechanism of clipping, the substrate specificity, and the significance of this poorly understood epigenetic mechanism are not clear. We have previously reported histone H3 specific proteolytic clipping and a protein inhibitor in chicken liver. However, the sites of clipping are still not well known. In this study, we attempt to identify the clipping sites in histone H3 and to determine the mechanism of inhibition by stefin B protein, a cysteine protease inhibitor. By employing site-directed mutagenesis and in vitro biochemical assays, we have identified three distinct clipping sites in recombinant human histone H3 and its variants (H3.1, H3.3, and H3t). However, post-translationally modified histones isolated from chicken liver and Saccharomyces cerevisiae wild-type cells showed different clipping patterns. Clipping of the histone H3 N-terminal tail at three sites occurs in a sequential manner. We have further observed that the clipping sites are regulated by the structure of the N-terminal tail as well as the globular domain of histone H3. We also have identified the QVVAG region of stefin B protein to be crucial for inhibition of the protease activity. Altogether, our comprehensive biochemical studies have revealed three distinct clipping sites in histone H3 and their regulation by the structure of histone H3, histone modification marks, and stefin B.

  5. Crossmodal integration enhances neural representation of task-relevant features in audiovisual face perception.

    Science.gov (United States)

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei

    2015-02-01

    Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. Using video analysis for concussion surveillance in Australian football.

    Science.gov (United States)

    Makdissi, Michael; Davis, Gavin

    2016-12-01

    The objectives of the study were to assess the relationship between various player and game factors and risk of concussion, and to assess the reliability of video analysis for mechanistic assessment of concussion in Australian football. Prospective cohort study. All impacts and collisions resulting in concussion were identified during the 2011 Australian Football League season. An extensive list of factors for assessment was created based upon previous analysis of concussion in the Australian Football League and expert opinions. The authors independently reviewed the video clips, and the correlation for each factor was examined. A total of 82 concussions were reported in 194 games (rate: 8.7 concussions per 1000 match hours; 95% confidence interval: 6.9-10.5). Player demographics and game variables such as venue, timing of the game (day, night or twilight), quarter, travel status (home or interstate) or score margin did not demonstrate a significant relationship with risk of concussion, although a higher percentage of concussions occurred in the first 5 min of game time of the quarter (36.6%) compared with the last 5 min (20.7%). Variables with good inter-rater agreement included position on the ground, circumstances of the injury and cause of the impact. The remaining variables had fair-to-poor inter-rater agreement. Common problems included insufficient or poor-quality video and interpretation issues related to the definitions used. Clear definitions and good-quality video from multiple camera angles are required to improve the utility of video analysis for concussion surveillance in Australian football. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
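
    The abstract does not name its agreement statistic; Cohen's kappa is a common choice for two raters coding the same clips on a categorical variable, and a minimal sketch (with entirely hypothetical codings) looks like this:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Expected agreement if both raters coded independently at their own rates.
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codings of 10 concussion clips for one variable
# (e.g. "cause of the impact").
a = ["head", "body", "head", "ground", "body",
     "head", "head", "body", "ground", "head"]
b = ["head", "body", "head", "ground", "head",
     "head", "head", "body", "ground", "body"]
print(round(cohens_kappa(a, b), 2))  # 0.68: "good" agreement by most rubrics
```

    Values near 1 correspond to the "good agreement" variables in the study, while values near 0 indicate agreement no better than chance.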

  7. Breaking the news on mobile TV: user requirements of a popular mobile content

    Science.gov (United States)

    Knoche, Hendrik O.; Sasse, M. Angela

    2006-02-01

    This paper presents the results from three lab-based studies that investigated different ways of delivering mobile TV news by measuring user responses to different encoding bitrates, image resolutions and text quality. All studies were carried out with participants watching news content on mobile devices, with a total of 216 participants rating the acceptability of the viewing experience. Study 1 compared the acceptability of a 15-second video clip at different video and audio encoding bit rates on a 3G phone at a resolution of 176x144 and on an iPAQ PDA (240x180). Study 2 measured the acceptability of the video quality of full-length news clips of 2.5 minutes which were recorded from broadcast TV, encoded at resolutions ranging from 120x90 to 240x180, and combined with different encoding bit rates and audio qualities presented on an iPAQ. Study 3 improved the legibility of the text included in the video, simulating a separate text delivery. The acceptability of the news video quality was greatly reduced at a resolution of 120x90. The legibility of text was a decisive factor in the participants' assessment of the video quality. Resolutions of 168x126 and higher were substantially more acceptable when they were accompanied by optimized high-quality text compared to proportionally scaled inline text. When accompanied by high-quality text, TV news clips were acceptable to the vast majority of participants at resolutions as small as 168x126 for video encoding bitrates of 160kbps and higher. Service designers and operators can apply this knowledge to design a cost-effective mobile TV experience.

  8. Video game characteristics, happiness and flow as predictors of addiction among video game players: a pilot study

    OpenAIRE

    Hull, DC; Williams, GA; Griffiths, MD

    2013-01-01

    Aims:\\ud Video games provide opportunities for positive psychological experiences such as flow-like phenomena during play and general happiness that could be associated with gaming achievements. However, research has shown that specific features of game play may be associated with problematic behaviour associated with addiction-like experiences. The study was aimed at analysing whether certain structural characteristics of video games, flow, and global happiness could be predictive of video g...

  9. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

    Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. The article attempts to determine techniques of art analysis suited to the study of video games, including the aesthetics of their sounds, and offers a range of research methods that consider video game scoring as a contemporary creative practice.

  10. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result, the best-performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high-level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high-frame-rate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.

  11. Feigning Amnesia Moderately Impairs Memory for a Mock Crime Video.

    Science.gov (United States)

    Mangiulli, Ivan; van Oorsouw, Kim; Curci, Antonietta; Merckelbach, Harald; Jelicic, Marko

    2018-01-01

    Previous studies showed that feigning amnesia for a crime impairs actual memory for the target event. Lack of rehearsal has been proposed as an explanation for this memory-undermining effect of feigning. The aim of the present study was to replicate and extend previous research adopting a mock crime video instead of a narrative story. We showed participants a video of a violent crime. Next, they were requested to imagine that they had committed this offense and to either feign amnesia or confess the crime. A third condition was included: Participants in the delayed test-only control condition did not receive any instruction. On subsequent recall tests, participants in all three conditions were instructed to report as much information as possible about the offense. On the free recall test, feigning amnesia impaired memory for the video clip, but participants who were asked to feign crime-related amnesia outperformed controls. However, no differences between simulators and confessors were found on both correct cued recollection or on distortion and commission rates. We also explored whether inner speech might modulate memory for the crime. Inner speech traits were not found to be related to the simulating amnesia effect. Theoretical and practical implications of our results are discussed.

  12. Feigning Amnesia Moderately Impairs Memory for a Mock Crime Video

    Directory of Open Access Journals (Sweden)

    Ivan Mangiulli

    2018-04-01

    Full Text Available Previous studies showed that feigning amnesia for a crime impairs actual memory for the target event. Lack of rehearsal has been proposed as an explanation for this memory-undermining effect of feigning. The aim of the present study was to replicate and extend previous research adopting a mock crime video instead of a narrative story. We showed participants a video of a violent crime. Next, they were requested to imagine that they had committed this offense and to either feign amnesia or confess the crime. A third condition was included: Participants in the delayed test-only control condition did not receive any instruction. On subsequent recall tests, participants in all three conditions were instructed to report as much information as possible about the offense. On the free recall test, feigning amnesia impaired memory for the video clip, but participants who were asked to feign crime-related amnesia outperformed controls. However, no differences between simulators and confessors were found on both correct cued recollection or on distortion and commission rates. We also explored whether inner speech might modulate memory for the crime. Inner speech traits were not found to be related to the simulating amnesia effect. Theoretical and practical implications of our results are discussed.

  13. Crew Resource Management (CRM) video storytelling project: a team-based learning activity

    Directory of Open Access Journals (Sweden)

    Ma, Maggie Jiao

    2011-01-01

    Full Text Available This Crew Resource Management (CRM) video storytelling project asks students to work in a team (4-5 people per team) to create (write and produce) a video story. The story should demonstrate lacking and ill practices of CRM knowledge and skills, or positive skills used to create a successful scenario in aviation (e.g., flight training, commercial aviation, airport management). The activity is composed of two parts: (1) creating a video story of CRM in aviation, and (2) delivering a group presentation. Each team creates a 5-8 minute long video clip of its story. The story must be originally created by the team to educate pilot and/or aviation management students on good practices of CRM in aviation. Accidents and incidents can be used as a reference to inspire ideas; however, this project is not to re-create any previous CRM accidents/incidents. The video story needs to be self-contained and address two or more aspects of CRM specified in the Federal Aviation Administration’s Advisory Circular 120-51. The presentation must include the use of PowerPoint or similar software and additional multimedia visual aids. The presentation itself will last no more than 17 minutes in length, including the actual video story (each group has an additional 3 minutes to set up prior to the presentation). During the presentation following the video, each team will discuss the CRM problems (or invite the audience to identify CRM problems) and explain what CRM practices were performed, and should have been performed. This presentation also should describe how each team worked together in order to complete this project (i.e., good and bad CRM practiced

  14. Adaptive quantization-parameter clip scheme for smooth quality in H.264/AVC.

    Science.gov (United States)

    Hu, Sudeng; Wang, Hanli; Kwong, Sam

    2012-04-01

    In this paper, we investigate the issues of quality smoothness and bit-rate smoothness during rate control (RC) in H.264/AVC. An adaptive quantization-parameter (Q(p)) clip scheme is proposed to optimize quality smoothness while keeping the bit-rate fluctuation at an acceptable level. First, the frame complexity variation is studied by defining a complexity ratio between two nearby frames. Second, the range of the generated bits is analyzed to prevent the encoder buffer from overflow and underflow. Third, based on the safe range of the generated bits, an optimal Q(p) clip range is developed to reduce the quality fluctuation. Experimental results demonstrate that the proposed Q(p) clip scheme can achieve excellent performance in quality smoothness and buffer regulation.
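
    The basic mechanism of a Qp clip can be sketched as follows: the rate controller's target Qp is clamped to a window around the previous frame's Qp, and the window widens when the complexity ratio indicates a scene change. The widening rule and all numbers below are illustrative, not the paper's exact formulas.

```python
def clip_qp(qp_target, qp_prev, complexity_ratio, base_range=3):
    """Clamp the rate controller's target Qp to an adaptive window
    around the previous frame's Qp.

    complexity_ratio ~ 1.0 means the frame is as complex as the previous
    one, so Qp is held close for smooth quality; large deviations widen
    the window so the buffer is not pushed toward overflow/underflow.
    """
    widen = max(complexity_ratio, 1 / complexity_ratio)  # always >= 1
    delta = round(base_range * widen)
    lo, hi = qp_prev - delta, qp_prev + delta
    return min(max(qp_target, lo), hi)

print(clip_qp(40, 30, 1.0))  # similar complexity: clipped down to 33
print(clip_qp(40, 30, 2.0))  # complexity doubled: wider window, 36
print(clip_qp(25, 30, 1.0))  # clipped up to 27 to avoid a quality jump
```

    The trade-off in the abstract is visible here: a narrow window smooths quality but lets the generated bits drift from the target, while a wider window tracks the rate target at the cost of visible quality fluctuation.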

  15. New concept of 3D printed bone clip (polylactic acid/hydroxyapatite/silk composite) for internal fixation of bone fractures.

    Science.gov (United States)

    Yeon, Yeung Kyu; Park, Hae Sang; Lee, Jung Min; Lee, Ji Seung; Lee, Young Jin; Sultan, Md Tipu; Seo, Ye Bin; Lee, Ok Joo; Kim, Soon Hee; Park, Chan Hum

    Open reduction with internal fixation is commonly used for the treatment of bone fractures. However, postoperative infection associated with internal fixation devices (intramedullary nails, plates, and screws) remains a significant complication, and it is technically difficult to fix multiple fragmented bony fractures using internal fixation devices. In addition, drilling into the bone to install devices can lead to secondary fracture and to bone necrosis associated with postoperative infection. In this study, we developed a bone-clip-type internal fixation device using three-dimensional (3D) printing technology. A standard 3D model of the bone clip was generated based on a computed tomography (CT) scan of the rat femur. Polylactic acid (PLA), hydroxyapatite (HA), and silk were used as bone clip materials. The purpose of this study was to characterize 3D printed PLA, PLA/HA, and PLA/HA/Silk composite bone clips and to evaluate their feasibility as internal fixation devices. Based on the results, the PLA/HA/Silk composite bone clip showed similar mechanical properties and superior biocompatibility compared to the other bone clip types. In an animal study, the PLA/HA/Silk composite bone clip demonstrated excellent alignment of the bony segments across the femur fracture site, with a well-positioned bone clip. Our 3D printed bone clips have several advantages: (1) they are relatively noninvasive (drilling into the bone is not necessary), (2) they allow a patient-specific design, (3) they are mechanically stable, and (4) they provide high biocompatibility. Therefore, we suggest that our 3D printed PLA/HA/Silk composite bone clip is a feasible internal fixation device.

  16. Simple device to determine the pressure applied by pressure clips for the treatment of earlobe keloids

    Directory of Open Access Journals (Sweden)

    Aashish Sasidharan

    2015-01-01

    Background: Keloids of the ear are a common problem. Various treatment modalities are available for ear keloids; surgical excision with intralesional steroid injection along with compression therapy has the lowest recurrence rate. Various types of devices are available for pressure therapy, but the pressure applied by these devices is uncontrolled and carries a risk of pressure necrosis. We describe here a simple, easy-to-use device to measure the pressure applied by these clips for a better outcome. Objectives: To devise a simple method to measure the pressure applied by the various pressure clips used in ear keloid pressure therapy. Materials and Methods: Using a force-sensitive resistor (FSR), the applied pressure is converted into a voltage by a circuit built from electrical wires, resistors, capacitors, a converter, an amplifier, a diode, and a nine-volt (9 V) cadmium battery, and the voltage is measured with a multimeter. The measured voltage is then converted into pressure using a pressure-voltage graph, which gives the actual pressure applied by the pressure clip. Results: The pressure applied by different clips was variable. The spring clips were adjustable by a slight variation in design, whereas the pressure applied by binder clips and magnetic discs was not adjustable. Conclusion: The uncontrolled or suboptimal pressure applied by certain pressure clips can be monitored to provide optimal pressure therapy for ear keloids and a better outcome.
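    The final conversion step, reading pressure off a pressure-voltage graph, amounts to interpolating on a calibration table. A minimal sketch, with made-up calibration points rather than the paper's measured data:

```python
# Convert an FSR output voltage to pressure via linear interpolation on a
# pressure-voltage calibration curve. The calibration points below are
# invented illustrative values, not measurements from the study.

from bisect import bisect_left

# (voltage in V, pressure in mmHg) pairs from a hypothetical calibration
# run, sorted by voltage.
CALIBRATION = [(0.5, 5.0), (1.0, 12.0), (2.0, 24.0), (3.0, 33.0), (4.0, 40.0)]

def voltage_to_pressure(v):
    """Linearly interpolate pressure from the calibration curve,
    clamping to the first/last calibration point outside its range."""
    volts = [p[0] for p in CALIBRATION]
    if v <= volts[0]:
        return CALIBRATION[0][1]
    if v >= volts[-1]:
        return CALIBRATION[-1][1]
    i = bisect_left(volts, v)
    (v0, p0), (v1, p1) = CALIBRATION[i - 1], CALIBRATION[i]
    return p0 + (p1 - p0) * (v - v0) / (v1 - v0)
```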

  17. 76 FR 26242 - Paper Clips From the People's Republic of China: Final Results of Expedited Sunset Review of...

    Science.gov (United States)

    2011-05-06

    ... order are plastic and vinyl covered paper clips, butterfly clips, binder clips, or other paper fasteners... order is revoked. Parties can obtain a public copy of the I&D Memo from the Central Records Unit, room...

  18. A new video programme

    CERN Multimedia

    CERN video productions

    2011-01-01

    "What's new @ CERN?", a new monthly video programme, will be broadcast on the Monday of every month on webcast.cern.ch. Aimed at the general public, the programme will cover the latest CERN news, with guests and explanatory features. Tune in on Monday 3 October at 4 pm (CET) to see the programme in English, and then at 4:20 pm (CET) for the French version.

  19. Advantages in imaging results with titanium aneurysm clips (TiAl6V4)

    International Nuclear Information System (INIS)

    Piepgras, A.; Gueckel, F.; Weik, T.; Schmiedek, P.

    1995-01-01

    Aneurysm clips made of a titanium alloy (TiAl6V4) were used in clinical practice for the first time. The design of the clips is identical to that of the routinely used Yasargil series. In 30 patients, 38 symptomatic and asymptomatic aneurysms were fixed with 45 clips. The metallurgical advantages of the new alloy are better biocompatibility, lower magnetic susceptibility, and lower X-ray density. The postoperative imaging results are superior to those obtained with conventionally used alloys with respect to artifact reduction in computed tomography, angiography, and magnetic resonance imaging. With a follow-up period of 7 months, a statement on biocompatibility cannot yet be given. (orig.) [de

  20. Telemetry and Communication IP Video Player

    Science.gov (United States)

    OFarrell, Zachary L.

    2011-01-01

    Aegis Video Player is the video-over-IP system for the Telemetry and Communications group of the Launch Services Program. Its purpose is to display video streamed over a network connection for viewing during launches. To accomplish this, a VLC ActiveX plug-in was used in C# to provide the basic video streaming capabilities, and the program was then customized for use during launches. The VLC plug-in can be configured programmatically to display a single stream, but this project needed access to multiple streams. To accomplish this, an easy-to-use, informative menu system was added to the program to enable users to quickly switch between videos. Other features were added to make the player more useful, such as watching multiple videos and watching a video in full screen.

  1. Ingested bread clip as an unexpected diagnostic tool.

    Science.gov (United States)

    Jay, Sharon M; Russell, Michael J; Lau, Yee C; Dunn, Joel W; Roberts, Ross

    2018-03-23

    We describe a case in which a bread clip became lodged adjacent to a portion of small bowel affected by a deposit of previously undiagnosed metastatic serous carcinoma of likely ovarian origin.

  2. Capturing Students' Attention: Movie Clips Set the Stage for Learning in Abnormal Psychology.

    Science.gov (United States)

    Badura, Amy S.

    2002-01-01

    Presents results of a study that evaluated using popular movie clips, shown in the first class meeting of an abnormal psychology course, in relation to student enthusiasm. Compares two classes of female juniors, one using clips and one class not using them. States that the films portrayed psychological disorders. (CMK)

  3. 76 FR 31360 - Paper Clips From China; Scheduling of an Expedited Five-Year Review Concerning the Antidumping...

    Science.gov (United States)

    2011-05-31

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 731-TA-663 Third Review] Paper Clips From China; Scheduling of an Expedited Five-Year Review Concerning the Antidumping Duty Order on Paper Clips From China... paper clips from China would be likely to lead to continuation or recurrence of material injury within a...

  4. Heartbeat Rate Measurement from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    Heartbeat Rate (HR) reveals a person’s health condition. This paper presents an effective system for measuring HR from facial videos acquired in a more realistic environment than the testing environment of current systems. The proposed method utilizes a facial feature point tracking method...... by combining a ‘Good feature to track’ and a ‘Supervised descent method’ in order to overcome the limitations of currently available facial video based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face...
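    The abstract above is truncated and does not give the full pipeline, but the last stage of facial-video HR estimation is typically spectral: given the trajectory of a tracked feature point, find the dominant frequency in the plausible heart-rate band. A self-contained sketch of that step only (the tracking itself is outside this example; the 30 fps sampling rate and 0.7-4.0 Hz band are common assumptions, not values from the paper):

```python
# Estimate heart rate from a 1-D tracked-feature trajectory by locating
# the strongest DFT component inside an assumed heart-rate band.

import math

def estimate_hr(signal, fps, band=(0.7, 4.0)):
    """Return the dominant frequency in `band` (Hz) as beats per minute."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    best_freq, best_power = 0.0, -1.0
    # Naive DFT, evaluated only at frequency bins inside the band.
    for k in range(1, n // 2):
        freq = k * fps / n
        if not (band[0] <= freq <= band[1]):
            continue
        re = sum(c * math.cos(2 * math.pi * k * t / n) for t, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * k * t / n) for t, c in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_freq, best_power = freq, power
    return best_freq * 60.0
```

    For example, a pure 1.2 Hz oscillation sampled at 30 fps yields an estimate of 72 bpm. Real trajectories would first need detrending and band-pass filtering to suppress head motion and lighting drift.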

  5. Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.

    Science.gov (United States)

    Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin

    2016-10-10

    We address the problem of face video retrieval in TV-series, which searches video clips based on the presence of a specific character, given a single face track of him/her. This is tremendously challenging because, on one hand, faces in TV-series are captured in largely uncontrolled conditions with complex appearance variations, and on the other hand the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max-margin framework, which aims to strike a balance between the discriminability and stability of the code. Besides, we extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated on the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance with an extremely compact code of only 128 bits.
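    The covariance-then-binarize pipeline can be sketched in a toy form: model a track by its frame covariance matrix, project the vectorized covariance onto a set of hyperplanes, and keep only the signs as the binary code, comparing codes by Hamming distance. Note the hyperplanes below are random, standing in for the max-margin learned bits of the actual CVC; dimensions and code length are arbitrary.

```python
# Toy sketch of a covariance-signature binary video code. Random
# projections substitute for the paper's supervised max-margin bits.

import random

def covariance(frames):
    """Vectorized sample covariance of a track (each frame: feature list)."""
    n, d = len(frames), len(frames[0])
    mean = [sum(f[j] for f in frames) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for f in frames:
        for i in range(d):
            for j in range(d):
                cov[i][j] += (f[i] - mean[i]) * (f[j] - mean[j]) / (n - 1)
    return [cov[i][j] for i in range(d) for j in range(d)]

def binary_code(vec, hyperplanes):
    """One bit per hyperplane: the sign of the projection."""
    return [1 if sum(w * x for w, x in zip(h, vec)) > 0 else 0 for h in hyperplanes]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

random.seed(0)
d, bits = 4, 16
planes = [[random.gauss(0, 1) for _ in range(d * d)] for _ in range(bits)]
track = [[random.gauss(0, 1) for _ in range(d)] for _ in range(20)]
code = binary_code(covariance(track), planes)
```

    Retrieval then reduces to ranking database tracks by `hamming` distance to the query code, which is what makes such compact codes attractive at scale.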

  6. A systematic review of discomfort due to toe or ear clipping in laboratory rodents

    NARCIS (Netherlands)

    Wever, K.E.; Geessink, F.J.; Brouwer, M.A.E.; Tillema, A.; Ritskes-Hoitinga, M.

    2017-01-01

    Toe clipping and ear clipping (also ear notching or ear punching) are frequently used methods for individual identification of laboratory rodents. These procedures potentially cause severe discomfort, which can reduce animal welfare and distort experimental results. However, no systematic summary of

  7. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  8. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  9. Context-specific effects of musical expertise on audiovisual integration

    Science.gov (United States)

    Bishop, Laura; Goebl, Werner

    2014-01-01

    Ensemble musicians exchange auditory and visual signals that can facilitate interpersonal synchronization. Musical expertise improves how precisely auditory and visual signals are perceptually integrated and increases sensitivity to asynchrony between them. Whether expertise improves sensitivity to audiovisual asynchrony in all instrumental contexts or only in those using sound-producing gestures that are within an observer's own motor repertoire is unclear. This study tested the hypothesis that musicians are more sensitive to audiovisual asynchrony in performances featuring their own instrument than in performances featuring other instruments. Short clips were extracted from audio-video recordings of clarinet, piano, and violin performances and presented to highly-skilled clarinetists, pianists, and violinists. Clips either maintained the audiovisual synchrony present in the original recording or were modified so that the video led or lagged behind the audio. Participants indicated whether the audio and video channels in each clip were synchronized. The range of asynchronies most often endorsed as synchronized was assessed as a measure of participants' sensitivities to audiovisual asynchrony. A positive relationship was observed between musical training and sensitivity, with data pooled across stimuli. While participants across expertise groups detected asynchronies most readily in piano stimuli and least readily in violin stimuli, pianists showed significantly better performance for piano stimuli than for either clarinet or violin. These findings suggest that, to an extent, the effects of expertise on audiovisual integration can be instrument-specific; however, the nature of the sound-producing gestures that are observed has a substantial effect on how readily asynchrony is detected as well. PMID:25324819

  10. Current opinion on clip placement after breast biopsy: A survey of practising radiologists in France and Quebec

    International Nuclear Information System (INIS)

    Thomassin-Naggara, I.; Jalaguier-Coudray, A.; Chopier, J.; Tardivon, A.; Trop, I.

    2013-01-01

    Aim: To investigate current practice regarding clip placement after breast biopsy. Materials and methods: In June 2011, an online survey instrument was designed using an Internet-based survey site (www.surveymonkey.com) to assess the practices and opinions of breast radiologists regarding clip placement after breast biopsy. Radiologists were asked to give personal practice data, describe their current practice regarding clip deployment under stereotactic, ultrasonographic, and magnetic resonance imaging (MRI) guidance, and describe what steps are taken to ensure quality control of clip deployment. Results: The response rate was 29.9% in France (131 respondents) and 46.7% in Quebec (50 respondents). The great majority of respondents used breast markers in their practice (92.1% in France and 96% in Quebec). In both countries, most reported deploying a clip after percutaneous biopsy under stereotactic or MRI guidance. Regarding clip deployment under ultrasonography, 38% of Quebec radiologists systematically placed a marker after each biopsy, whereas 30% of French radiologists never placed a marker in this situation, mainly due to its cost. Finally, 56.4% of radiologists in France and 54% in Quebec considered that their practice regarding clip deployment after percutaneous breast biopsy had changed in the last 5 years. Conclusion: There continue to be variations in the use of biopsy clips after imaging-guided biopsies, particularly with regard to sonographic techniques. These variations are likely to decrease over time with the standardization of relatively new investigation protocols

  11. The role of surgical clips in the evaluation of interfractional uncertainty for treatment of hepatobiliary and pancreatic cancer with postoperative radiotherapy

    International Nuclear Information System (INIS)

    Bae, Jin Suk; Kim, Dong Hyun; Kim, Won Taek; Kim, Yong Ho; Park, Dahl; Ki, Yong Kan

    2017-01-01

    To evaluate the utility of implanted surgical clips for detecting interfractional errors in the treatment of hepatobiliary and pancreatic cancer with postoperative radiotherapy (PORT). Twenty patients had been treated with PORT for locally advanced hepatobiliary or pancreatic cancer from November 2014 to April 2016. Patients underwent computed tomography simulation and were treated in the expiratory breathing phase. During treatment, orthogonal kilovoltage (kV) imaging was taken twice a week, and isocenter shifts were made to match bony anatomy. The difference in the position of the clips between kV images and digitally reconstructed radiographs was determined. The clips consisted of 3 proximal clips (clip_p, ≤2 cm) and 3 distal clips (clip_d, >2 cm), classified according to distance from the treatment center. The interfractional displacements of the clips were measured in the superior-inferior (SI), anterior-posterior (AP), and right-left (RL) directions. The translocation of the clips correlated well with diaphragm movement in 90.4% (190/210) of all images. Clip position errors greater than 5 mm were observed in 26.0% of images in the SI, 1.8% in the AP, and 5.4% in the RL direction. Moreover, clip position errors greater than 10 mm were observed in 1.9% in the SI, 0.2% in the AP, and 0.2% in the RL direction, despite respiratory control. Quantitative analysis of surgical clip displacement reflects respiratory motion, setup errors, and postoperative changes in intraabdominal organ position. Furthermore, the position of the clips is easily distinguished in verification images. Identification of the surgical clip position may lead to a significant improvement in the accuracy of upper abdominal radiation therapy
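    The per-axis error bookkeeping described above is a simple tally: for each verification image, measure the clip displacement along each axis and report the fraction exceeding a tolerance. A small sketch with invented sample values (the 5 mm tolerance mirrors the abstract; the displacements are not the study's data):

```python
# Fraction of clip displacements exceeding a per-axis tolerance,
# mirroring the SI/AP/RL error analysis. Sample values are invented.

def exceedance_rates(displacements, tolerance_mm=5.0):
    """displacements: list of (si, ap, rl) tuples in mm; returns the
    fraction of measurements whose magnitude exceeds the tolerance,
    keyed by axis name."""
    n = len(displacements)
    rates = {}
    for axis, name in enumerate(("SI", "AP", "RL")):
        over = sum(1 for d in displacements if abs(d[axis]) > tolerance_mm)
        rates[name] = over / n
    return rates

sample = [(6.1, 0.8, 1.2), (2.3, 1.1, 0.4), (7.5, 0.2, 5.6), (1.0, 0.9, 2.2)]
rates = exceedance_rates(sample)
```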

  12. The role of surgical clips in the evaluation of interfractional uncertainty for treatment of hepatobiliary and pancreatic cancer with postoperative radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Jin Suk; Kim, Dong Hyun; Kim, Won Taek; Kim, Yong Ho; Park, Dahl; Ki, Yong Kan [Pusan National University Hospital, Pusan National University School of Medicine, Busan (Korea, Republic of)

    2017-03-15

    To evaluate the utility of implanted surgical clips for detecting interfractional errors in the treatment of hepatobiliary and pancreatic cancer with postoperative radiotherapy (PORT). Twenty patients had been treated with PORT for locally advanced hepatobiliary or pancreatic cancer from November 2014 to April 2016. Patients underwent computed tomography simulation and were treated in the expiratory breathing phase. During treatment, orthogonal kilovoltage (kV) imaging was taken twice a week, and isocenter shifts were made to match bony anatomy. The difference in the position of the clips between kV images and digitally reconstructed radiographs was determined. The clips consisted of 3 proximal clips (clip_p, ≤2 cm) and 3 distal clips (clip_d, >2 cm), classified according to distance from the treatment center. The interfractional displacements of the clips were measured in the superior-inferior (SI), anterior-posterior (AP), and right-left (RL) directions. The translocation of the clips correlated well with diaphragm movement in 90.4% (190/210) of all images. Clip position errors greater than 5 mm were observed in 26.0% of images in the SI, 1.8% in the AP, and 5.4% in the RL direction. Moreover, clip position errors greater than 10 mm were observed in 1.9% in the SI, 0.2% in the AP, and 0.2% in the RL direction, despite respiratory control. Quantitative analysis of surgical clip displacement reflects respiratory motion, setup errors, and postoperative changes in intraabdominal organ position. Furthermore, the position of the clips is easily distinguished in verification images. Identification of the surgical clip position may lead to a significant improvement in the accuracy of upper abdominal radiation therapy.

  13. Functional outcome of microsurgical clipping compared to endovascular coiling.

    Science.gov (United States)

    Premananda, R M; Ramesh, N; Hillol, K P

    2012-12-01

    Endovascular coiling has been used increasingly as an alternative to neurosurgical clipping for treating subarachnoid hemorrhage secondary to aneurysm rupture. In a retrospective cohort review of the treatment of ruptured aneurysms in Hospital Kuala Lumpur over a period of five years (2005-2009), a total of 268 patients were treated. These patients were broadly categorized into two groups based on their treatment mode for ruptured aneurysms. Statistical associations were assessed using Chi-square tests. In our study, 67.5% of patients presented with a good World Federation of Neurosurgical Societies (WFNS) grade (WFNS 1-2), while 32.5% presented with a poor WFNS grade prior to intervention. Regarding outcome, 60.4% of patients had a good functional outcome (modified Rankin Scale, mRS, grade 0-2) compared to 39.6% with a poor mRS outcome (mRS 3-6). In the good WFNS group, 76% of patients in the clipping group had a good mRS outcome, while 86.5% of patients in the coiling group had a good mRS outcome (p=0.114). Among patients with a poor WFNS presentation, 77.3% in the clipping group had a poor mRS outcome; similarly, 83.3% in the coiling group had a poor outcome (p=1.00). Hence, when controlling for WFNS grade, there was no significant association between treatment group (clipping or coiling) and mRS outcome at 6 months. Patient outcome is determined by the initial clinical presentation (WFNS grade) and influenced by the requirement for an extraventricular drain (EVD) in the presence of hydrocephalus, CSF infection, and pneumonia. Therefore, the decision regarding treatment options needs to be individualized based on the presentation of the patient.

  14. Wavelet based mobile video watermarking: spread spectrum vs. informed embedding

    Science.gov (United States)

    Mitrea, M.; Prêteux, F.; Duţă, S.; Petrescu, M.

    2005-11-01

    The expansion of cell phones provides an additional channel for digital video content distribution: music clips, news, and sport events are increasingly transmitted to mobile users. Consequently, from the watermarking point of view, a new challenge arises: very low bitrate content (e.g., as low as 64 kbit/s) now has to be protected. Within this framework, the paper approaches for the first time the mathematical models of two random processes, namely the original video to be protected and a very harmful attack that any watermarking method should face, the StirMark attack. By applying an advanced statistical investigation (combining the Chi-square, Rho, Fisher, and Student tests) in the discrete wavelet domain, it is established that the popular Gaussian assumption can be used only very restrictively when describing the former process and does not apply at all to the latter. As these results can a priori determine the performance of several watermarking methods, of both the spread-spectrum and informed-embedding types, they should be considered at the design stage.
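    One of the checks named above, the chi-square test, can be illustrated as a goodness-of-fit statistic against a fitted Gaussian. This is a hedged, generic sketch of the test idea, not the paper's battery: the bin layout, sample size, and the use of the sample's own mean and variance are assumptions of the example.

```python
# Chi-square goodness-of-fit statistic of a sample against a Gaussian
# fitted to that sample. Bin layout and sizes are illustrative only.

import math
import random

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def chi_square_gof(sample, n_bins=8):
    """Return the chi-square statistic of `sample` vs a fitted Gaussian."""
    n = len(sample)
    mu = sum(sample) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in sample) / (n - 1))
    # Equal-width bins spanning the sample range.
    lo, hi = min(sample), max(sample)
    edges = [lo + (hi - lo) * i / n_bins for i in range(n_bins + 1)]
    stat = 0.0
    for a, b in zip(edges, edges[1:]):
        expected = n * (normal_cdf(b, mu, sigma) - normal_cdf(a, mu, sigma))
        observed = sum(1 for x in sample if a <= x < b)
        if expected > 0:
            stat += (observed - expected) ** 2 / expected
    return stat

random.seed(1)
gaussian_sample = [random.gauss(0, 1) for _ in range(500)]
stat = chi_square_gof(gaussian_sample)
```

    In practice the statistic would be compared against a chi-square quantile for the bin count (minus fitted parameters) to accept or reject Gaussianity, which is how one would conclude that wavelet coefficients of the attacked video are not Gaussian.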

  15. Video game characteristics, happiness and flow as predictors of addiction among video game players: A pilot study.

    Science.gov (United States)

    Hull, Damien C; Williams, Glenn A; Griffiths, Mark D

    2013-09-01

    Video games provide opportunities for positive psychological experiences such as flow-like phenomena during play and general happiness that could be associated with gaming achievements. However, research has shown that specific features of game play may be associated with problematic behaviour associated with addiction-like experiences. The study was aimed at analysing whether certain structural characteristics of video games, flow, and global happiness could be predictive of video game addiction. A total of 110 video game players were surveyed about a game they had recently played by using a 24-item checklist of structural characteristics, an adapted Flow State Scale, the Oxford Happiness Questionnaire, and the Game Addiction Scale. The study revealed decreases in general happiness had the strongest role in predicting increases in gaming addiction. One of the nine factors of the flow experience was a significant predictor of gaming addiction - perceptions of time being altered during play. The structural characteristic that significantly predicted addiction was its social element with increased sociability being associated with higher levels of addictive-like experiences. Overall, the structural characteristics of video games, elements of the flow experience, and general happiness accounted for 49.2% of the total variance in Game Addiction Scale levels. Implications for interventions are discussed, particularly with regard to making players more aware of time passing and in capitalising on benefits of social features of video game play to guard against addictive-like tendencies among video game players.

  16. Video game characteristics, happiness and flow as predictors of addiction among video game players: A pilot study

    Science.gov (United States)

    Hull, Damien C.; Williams, Glenn A.; Griffiths, Mark D.

    2013-01-01

    Aims: Video games provide opportunities for positive psychological experiences such as flow-like phenomena during play and general happiness that could be associated with gaming achievements. However, research has shown that specific features of game play may be associated with problematic behaviour associated with addiction-like experiences. The study was aimed at analysing whether certain structural characteristics of video games, flow, and global happiness could be predictive of video game addiction. Method: A total of 110 video game players were surveyed about a game they had recently played by using a 24-item checklist of structural characteristics, an adapted Flow State Scale, the Oxford Happiness Questionnaire, and the Game Addiction Scale. Results: The study revealed decreases in general happiness had the strongest role in predicting increases in gaming addiction. One of the nine factors of the flow experience was a significant predictor of gaming addiction – perceptions of time being altered during play. The structural characteristic that significantly predicted addiction was its social element with increased sociability being associated with higher levels of addictive-like experiences. Overall, the structural characteristics of video games, elements of the flow experience, and general happiness accounted for 49.2% of the total variance in Game Addiction Scale levels. Conclusions: Implications for interventions are discussed, particularly with regard to making players more aware of time passing and in capitalising on benefits of social features of video game play to guard against addictive-like tendencies among video game players. PMID:25215196

  17. MANHATTAN: The View From Los Alamos of History's Most Secret Project

    International Nuclear Information System (INIS)

    Carr, Alan Brady

    2016-01-01

    This presentation covers the political and scientific events leading up to the creation of the Manhattan Project. The creation of the Manhattan Project's three most significant sites--Los Alamos, Oak Ridge, and Hanford--is also discussed. The lecture concludes by exploring the use of the atomic bombs at the end of World War II. The presentation slides include three videos. The first is a short clip of the 100-Ton Test, history's largest measured blast at that point in time and a pre-test for Trinity, the world's first nuclear detonation. The second clip features views of Trinity followed by a short statement by the Laboratory's first director, J. Robert Oppenheimer. The final clip shows Norris Bradbury talking about arms control.

  18. VideoStory Embeddings Recognize Events when Examples are Scarce

    OpenAIRE

    Habibian, Amirhossein; Mensink, Thomas; Snoek, Cees G. M.

    2015-01-01

    This paper aims for event recognition when video examples are scarce or even completely absent. The key in such a challenging setting is a semantic video representation. Rather than building the representation from individual attribute detectors and their annotations, we propose to learn the entire representation from freely available web videos and their descriptions using an embedding between video features and term vectors. In our proposed embedding, which we call VideoStory, the correlati...

  19. Development of a new biodegradable operative clip made of a magnesium alloy: Evaluation of its safety and tolerability for canine cholecystectomy.

    Science.gov (United States)

    Yoshida, Toshihiko; Fukumoto, Takumi; Urade, Takeshi; Kido, Masahiro; Toyama, Hirochika; Asari, Sadaki; Ajiki, Tetsuo; Ikeo, Naoko; Mukai, Toshiji; Ku, Yonson

    2017-06-01

    Operative clips used to ligate vessels in abdominal operations are usually made of titanium. They remain in the body permanently and form metallic artifacts in computed tomography (CT) images, which impair accurate diagnosis. Although biodegradable magnesium instruments have been developed in other fields, the physical properties required of operative clips differ from those of other instruments. We developed a biodegradable magnesium-zinc-calcium alloy clip with good biologic compatibility and sufficient clamping capability for use as an operative clip. In this study, we verified the safety and tolerability of this clip for use in canine cholecystectomy. Nine female beagles were used. We performed cholecystectomy and ligated the cystic duct with magnesium alloy or titanium clips. The chronologic change of the clips and artifact formation were compared at 1, 4, 12, 18, and 24 weeks postoperatively by CT. The animals were killed at the end of the observation period, and the clips were removed to evaluate their biodegradability. We also evaluated their effect on the living body using blood biochemistry data. The magnesium alloy clip formed far fewer artifacts than the titanium clip, and it was almost completely absorbed at 6 months postoperatively. There were no postoperative complications and no elevation of constituent elements such as magnesium, calcium, and zinc during the observation period in either group. The novel magnesium alloy clip demonstrated sufficient sealing capability for the cystic duct and proper biodegradability in a canine model, and produced far fewer metallic artifacts in CT than the conventional titanium clip.

  20. Heavy metal phytoextraction by Sedum alfredii is affected by continual clipping and phosphorus fertilization amendment.

    Science.gov (United States)

    Huang, Huagang; Li, Tingqiang; Gupta, D K; He, Zhenli; Yang, Xiao-E; Ni, Bingnan; Li, Mao

    2012-01-01

    Improving the efficacy of phytoextraction is critical for its successful application to metal-contaminated soils. Mineral nutrition affects plant growth and metal absorption, and consequently the accumulation of heavy metals by hyperaccumulator plants. This study assessed the effects of dihydrogen phosphates (KH2PO4, Ca(H2PO4)2, NaH2PO4 and NH4H2PO4) applied at three levels (22, 88 and 352 mg P/kg soil) on Sedum alfredii growth and metal uptake over three consecutive harvests on an aged, Zn/Cd co-contaminated paddy soil. The addition of phosphates (P) significantly increased the amount of Zn taken up by S. alfredii due to increased shoot Zn concentration and dry matter yield (DMY). The greatest phytoextraction of Zn and Cd was observed in the KH2PO4 and NH4H2PO4 treatments at 352 mg P/kg soil. The amount of Zn removed by phytoextraction increased in the order 1st clipping < 2nd clipping < 3rd clipping, and for Cd extraction the order was 2nd clipping < 1st clipping < 3rd clipping. These results indicate that the application of P fertilizers coupled with multiple cuttings can enhance the removal of Zn and Cd from contaminated soils by S. alfredii, thus shortening the time needed to accomplish remediation goals.

  1. Anterior petroclinoid fold fenestration: an adjunct to clipping of postero-laterally projecting posterior communicating aneurysms.

    Science.gov (United States)

    Nossek, Erez; Setton, Avi; Dehdashti, Amir R; Chalif, David J

    2014-10-01

    Proximally located posterior communicating artery (PCoA) aneurysms, projecting postero-laterally in proximity to the tentorium, may pose a technical challenge for microsurgical clipping due to obscuration of the proximal aneurysmal neck by the anterior petroclinoid fold. We describe an efficacious technique utilizing fenestration of the anterior petroclinoid fold to facilitate visualization and clipping of PCoA aneurysms abutting this aspect of the tentorium. Of 86 cases of PCoA aneurysms treated between 2003 and 2013, the technique was used in nine (10.5 %) patients to allow for adequate clipping. A 3 mm fenestration in the anterior petroclinoid ligament is created adjacent and lateral to the anterior clinoid process. This fenestration is then widened into a small wedge corridor by bipolar coagulation. In all cases, the proximal aneurysm neck was visualized after the wedge fenestration. Additionally, an adequate corridor for placement of the proximal clip blade was uniformly established. All cases were adequately clipped, with complete occlusion of the aneurysm neck and fundus with preservation of the PCoA. There were two intraoperative ruptures not related to creation of the wedge fenestration. One patient experienced post-operative partial third nerve palsy, which resolved during follow-up. We describe a technique of fenestration of the anterior petroclinoid fold to establish a critical and safe corridor for both visualization and clipping of PCoA aneurysms.

  2. Anticipation of high arousal aversive and positive movie clips engages common and distinct neural substrates.

    Science.gov (United States)

    Greenberg, Tsafrir; Carlson, Joshua M; Rubin, Denis; Cha, Jiook; Mujica-Parodi, Lilianne

    2015-04-01

    The neural correlates of anxious anticipation have been primarily studied with aversive and neutral stimuli. In this study, we examined the effect of valence on anticipation by using high arousal aversive and positive stimuli and a condition of uncertainty (i.e. either positive or aversive). The task consisted of predetermined cues warning participants of upcoming aversive, positive, 'uncertain' (either aversive or positive) and neutral movie clips. Anticipation of all affective clips engaged common regions including the anterior insula, dorsal anterior cingulate cortex, thalamus, caudate, inferior parietal and prefrontal cortex that are associated with emotional experience, sustained attention and appraisal. In contrast, the nucleus accumbens and medial prefrontal cortex, regions implicated in reward processing, were selectively engaged during anticipation of positive clips (depicting sexually explicit content) and the mid-insula, which has been linked to processing aversive stimuli, was selectively engaged during anticipation of aversive clips (depicting graphic medical procedures); these three areas were also activated during anticipation of 'uncertain' clips reflecting a broad preparatory response for both aversive and positive stimuli. These results suggest that a common circuitry is recruited in anticipation of affective clips regardless of valence, with additional areas preferentially engaged depending on whether expected stimuli are negative or positive. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  3. YUCSA: A CLIPS expert database system to monitor academic performance

    Science.gov (United States)

    Toptsis, Anestis A.; Ho, Frankie; Leindekar, Milton; Foon, Debra Low; Carbonaro, Mike

    1991-01-01

    The York University CLIPS Student Administrator (YUCSA), an expert database system implemented in C Language Integrated Processing System (CLIPS), for monitoring the academic performance of undergraduate students at York University, is discussed. The expert system component in the system has already been implemented for two major departments, and it is under testing and enhancement for more departments. Also, more elaborate user interfaces are under development. We describe the design and implementation of the system, problems encountered, and immediate future plans. The system has excellent maintainability and it is very efficient, taking less than one minute to complete an assessment of one student.

  4. Image-based querying of urban knowledge databases

    Science.gov (United States)

    Cho, Peter; Bae, Soonmin; Durand, Fredo

    2009-05-01

    We extend recent automated computer vision algorithms to reconstruct the global three-dimensional structures for photos and videos shot at fixed points in outdoor city environments. Mosaics of digital stills and embedded videos are georegistered by matching a few of their 2D features with 3D counterparts in aerial ladar imagery. Once image planes are aligned with world maps, abstract urban knowledge can propagate from the latter into the former. We project geotagged annotations from a 3D map into a 2D video stream and demonstrate their tracking buildings and streets in a clip with significant panning motion. We also present an interactive tool which enables users to select city features of interest in video frames and retrieve their geocoordinates and ranges. Implications of this work for future augmented reality systems based upon mobile smart phones are discussed.

  5. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    Science.gov (United States)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, and perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing consumes a great deal of CPU resources, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, one from an Infrared (IR) camera and one from an Electro-Optical (EO) camera. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
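
    The registration step described above (estimate a homography between consecutive frames, then warp) can be sketched without a full SIFT pipeline. The sketch below assumes four exact point correspondences are already available and solves the direct linear transform (DLT) as an 8x8 linear system with the bottom-right homography entry fixed to 1; real pipelines match SIFT features and reject outliers with RANSAC:

```python
# Minimal homography-estimation sketch for frame-to-frame registration.
# Assumes four exact point correspondences (no feature matching, no RANSAC).

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(m[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (m[r][n] - s) / m[r][r]
    return x

def homography(src, dst):
    """Fit H mapping each (x, y) in src to (u, v) in dst (4 correspondences)."""
    a, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        a.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        a.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(a, b)
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def warp(h, x, y):
    """Apply the projective transform H to a point."""
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return ((h[0][0] * x + h[0][1] * y + h[0][2]) / w,
            (h[1][0] * x + h[1][1] * y + h[1][2]) / w)
```

    With H estimated, every pixel of one frame can be warped into the other frame's coordinates before blending; the per-pixel warp is exactly the data-parallel workload that maps well onto a GPU.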

  6. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    Science.gov (United States)

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate visual attention modeling for stereoscopic video from two aspects. First, we build a large-scale eye tracking database as a benchmark for visual attention modeling of stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated from the motion contrast of the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated from the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms state-of-the-art stereoscopic video saliency detection models on our large-scale eye tracking database and on one other database (DML-ITRACK-3D).
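
    The fusion step in models of this kind can be illustrated independently of the feature extraction: each saliency cue is weighted by the inverse of its uncertainty before combination, so the less uncertain cue dominates. A toy sketch (the maps, uncertainties, and normalization here are illustrative, not the paper's actual scheme):

```python
def fuse(spatial, temporal, u_spatial, u_temporal):
    """Uncertainty-weighted fusion of two saliency maps (equal-length lists).
    Each map's weight is the inverse of its uncertainty; the weights are
    normalized to sum to 1 before the per-location weighted average."""
    ws, wt = 1.0 / u_spatial, 1.0 / u_temporal
    total = ws + wt
    ws, wt = ws / total, wt / total
    return [ws * s + wt * t for s, t in zip(spatial, temporal)]
```

    For example, with half the uncertainty on the spatial cue, the fused map leans twice as heavily on the spatial saliency as on the temporal saliency.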

  7. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), local binary patterns (LBP), the histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification has been used in various computer vision applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems, based on feature extraction from visible-light and thermal camera videos through a CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  8. Phosphorus runoff from turfgrass as affected by phosphorus fertilization and clipping management.

    Science.gov (United States)

    Bierman, Peter M; Horgan, Brian P; Rosen, Carl J; Hollman, Andrew B; Pagliari, Paulo H

    2010-01-01

    Phosphorus enrichment of surface water is a concern in many urban watersheds. A 3-yr study on a silt loam soil with 5% slope and high soil test P (27 mg kg(-1) Bray P1) was conducted to evaluate P fertilization and clipping management effects on P runoff from turfgrass (Poa pratensis L.) under frozen and nonfrozen conditions. Four fertilizer treatments were compared: (i) no fertilizer, (ii) nitrogen (N)+potassium (K)+0xP, (iii) N+K+1xP, and (iv) N+K+3xP. Phosphorus rates were 21.3 and 63.9 kg ha(-1) yr(-1) the first year and 7.1 and 21.3 kg ha(-1) yr(-1) the following 2 yr. Each fertilizer treatment was evaluated with clippings removed or clippings recycled back to the turf. In the first year, P runoff increased with increasing P rate and P losses were greater in runoff from frozen than nonfrozen soil. In year 2, total P runoff from the no fertilizer treatment was greater than from treatments receiving fertilizer. This was because reduced turf quality resulted in greater runoff depth from the no fertilizer treatment. In year 3, total P runoff from frozen soil and cumulative total P runoff increased with increasing P rate. Clipping management was not an important factor in any year, indicating that returning clippings does not significantly increase P runoff from turf. In the presence of N and K, P fertilization did not improve turf growth or quality in any year. Phosphorus runoff can be reduced by not applying P to high testing soils and avoiding fall applications when P is needed.

  9. Use of online clinical videos for clinical skills training for medical students: benefits and challenges.

    Science.gov (United States)

    Jang, Hye Won; Kim, Kyong-Jee

    2014-03-21

    Multimedia learning has been shown to be effective in clinical skills training. Yet, use of technology presents both opportunities and challenges to learners. The present study investigated student use and perceptions of online clinical videos for learning clinical skills and in preparing for the OSCE (Objective Structured Clinical Examination). This study aims to inform us how to make more effective use of these resources. A mixed-methods design was used. A 30-item questionnaire was administered to investigate student use and perceptions of OSCE videos. Year 3 and 4 students from 34 Korean medical schools who had access to OSCE videos participated in the online survey. Additionally, a semi-structured interview of a group of Year 3 medical students was conducted for an in-depth understanding of student experience with OSCE videos. 411 students from 31 medical schools returned the questionnaires; a majority of them found OSCE videos effective for their learning of clinical skills and in preparing for the OSCE. The number of OSCE videos that the students viewed was moderately associated with their self-efficacy and preparedness for the OSCE (p mobile devices; they agreed more with the statement that it was convenient to access the video clips than their peers who accessed the videos using computers (p students reported lack of integration into the curriculum and lack of interaction as barriers to more effective use of OSCE videos. The present study confirms the overall positive impact of OSCE videos on student learning of clinical skills. Having faculty integrate these learning resources into their teaching, integrating interactive tools into this e-learning environment to foster interactions, and using mobile devices for convenient access are recommended to help students make more effective use of these resources.

  10. 76 FR 42730 - Paper Clips From China

    Science.gov (United States)

    2011-07-19

    ... China Determination On the basis of the record \\1\\ developed in the subject five-year review, the United... China would be likely to lead to continuation or recurrence of material injury to an industry in the...), entitled Paper Clips from China: Investigation No. 731-TA-663 (Third Review). By order of the Commission...

  11. Nonvariceal Upper Gastrointestinal Bleeding: the Usefulness of Rotational Angiography after Endoscopic Marking with a Metallic Clip

    Energy Technology Data Exchange (ETDEWEB)

    Song, Ji Soo; Kwak, Hyo Sung; Chung, Gyung Ho [Chonbuk National University Medical School, Chonju (Korea, Republic of)

    2011-08-15

    We wanted to assess the usefulness of rotational angiography after endoscopic marking with a metallic clip in upper gastrointestinal bleeding patients with no extravasation of contrast medium on conventional angiography. In 16 patients (mean age, 59.4 years) with acute bleeding ulcers (13 gastric ulcers, 2 duodenal ulcers, 1 malignant ulcer), a metallic clip was placed via gastroscopy and this had been preceded by routine endoscopic treatment. The metallic clip was placed in the fibrous edge of the ulcer adjacent to the bleeding point. All patients had negative results from their angiographic studies. To localize the bleeding focus, rotational angiography and high pressure angiography as close as possible to the clip were used. Of the 16 patients, seven (44%) had positive results after high pressure angiography as close as possible to the clip and they underwent transcatheter arterial embolization (TAE) with microcoils. Nine patients without extravasation of contrast medium underwent TAE with microcoils as close as possible to the clip. The bleeding was stopped initially in all patients after treatment of the feeding artery. Two patients experienced a repeat episode of bleeding two days later. Of the two patients, one had subtle oozing from the ulcer margin and that patient underwent endoscopic treatment. One patient with malignant ulcer died due to disseminated intravascular coagulation one month after embolization. Complete clinical success was achieved in 14 of 16 (88%) patients. Delayed bleeding or major/minor complications were not noted. Rotational angiography after marking with a metallic clip helps to localize accurately the bleeding focus and thus to embolize the vessel correctly.

  12. Nonvariceal Upper Gastrointestinal Bleeding: the Usefulness of Rotational Angiography after Endoscopic Marking with a Metallic Clip

    International Nuclear Information System (INIS)

    Song, Ji Soo; Kwak, Hyo Sung; Chung, Gyung Ho

    2011-01-01

    We wanted to assess the usefulness of rotational angiography after endoscopic marking with a metallic clip in upper gastrointestinal bleeding patients with no extravasation of contrast medium on conventional angiography. In 16 patients (mean age, 59.4 years) with acute bleeding ulcers (13 gastric ulcers, 2 duodenal ulcers, 1 malignant ulcer), a metallic clip was placed via gastroscopy and this had been preceded by routine endoscopic treatment. The metallic clip was placed in the fibrous edge of the ulcer adjacent to the bleeding point. All patients had negative results from their angiographic studies. To localize the bleeding focus, rotational angiography and high pressure angiography as close as possible to the clip were used. Of the 16 patients, seven (44%) had positive results after high pressure angiography as close as possible to the clip and they underwent transcatheter arterial embolization (TAE) with microcoils. Nine patients without extravasation of contrast medium underwent TAE with microcoils as close as possible to the clip. The bleeding was stopped initially in all patients after treatment of the feeding artery. Two patients experienced a repeat episode of bleeding two days later. Of the two patients, one had subtle oozing from the ulcer margin and that patient underwent endoscopic treatment. One patient with malignant ulcer died due to disseminated intravascular coagulation one month after embolization. Complete clinical success was achieved in 14 of 16 (88%) patients. Delayed bleeding or major/minor complications were not noted. Rotational angiography after marking with a metallic clip helps to localize accurately the bleeding focus and thus to embolize the vessel correctly.

  13. Learning from Multiple Sources for Video Summarisation

    OpenAIRE

    Zhu, Xiatian; Loy, Chen Change; Gong, Shaogang

    2015-01-01

    Many visual surveillance tasks, e.g. video summarisation, are conventionally accomplished through analysing imagery-based features. Relying solely on visual cues for public surveillance video understanding is unreliable, since visual observations obtained from public space CCTV video data are often not sufficiently trustworthy and events of interest can be subtle. On the other hand, non-visual data sources such as weather reports and traffic sensory signals are readily accessible but are not exp...

  14. Virtual reality cerebral aneurysm clipping simulation with real-time haptic feedback.

    Science.gov (United States)

    Alaraj, Ali; Luciano, Cristian J; Bailey, Daniel P; Elsenousi, Abdussalam; Roitberg, Ben Z; Bernardo, Antonio; Banerjee, P Pat; Charbel, Fady T

    2015-03-01

    With the decrease in the number of cerebral aneurysms treated surgically and the increase of complexity of those treated surgically, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. To develop and evaluate the usefulness of a new haptic-based virtual reality simulator in the training of neurosurgical residents. A real-time sensory haptic feedback virtual reality aneurysm clipping simulator was developed using the ImmersiveTouch platform. A prototype middle cerebral artery aneurysm simulation was created from a computed tomographic angiogram. Aneurysm and vessel volume deformation and haptic feedback are provided in a 3-dimensional immersive virtual reality environment. Intraoperative aneurysm rupture was also simulated. Seventeen neurosurgery residents from 3 residency programs tested the simulator and provided feedback on its usefulness and resemblance to real aneurysm clipping surgery. Residents thought that the simulation would be useful in preparing for real-life surgery. About two-thirds of the residents thought that the 3-dimensional immersive anatomic details provided a close resemblance to real operative anatomy and accurate guidance for deciding surgical approaches. They thought the simulation was useful for preoperative surgical rehearsal and neurosurgical training. A third of the residents thought that the technology in its current form provided realistic haptic feedback for aneurysm surgery. Neurosurgical residents thought that the novel immersive VR simulator is helpful in their training, especially because they do not get a chance to perform aneurysm clippings until late in their residency programs.

  15. PVR system design of advanced video navigation reinforced with audible sound

    NARCIS (Netherlands)

    Eerenberg, O.; Aarts, R.; De With, P.N.

    2014-01-01

    This paper presents an advanced video navigation concept for Personal Video Recording (PVR), based on jointly using the primary image and a Picture-in-Picture (PiP) image, featuring combined rendering of normal-play video fragments with audio and fast-search video. The hindering loss of audio during

  16. Developing a Multimedia Instrument for Technical Vocabulary Learning: A Case of EFL Undergraduate Physics Education

    Science.gov (United States)

    Rusanganwa, Joseph Appolinary

    2015-01-01

    The aim of the present study is to investigate the process of constructing a Multimedia Assisted Vocabulary Learning (MAVL) instrument at a university in Rwanda in 2009. The instrument is used in a one-computer classroom where students were taught in a foreign language and had little access to books. It consists of video clips featuring images,…

  17. The "Isms" of Art. Introduction to the 2001-2002 Clip and Save Art Prints.

    Science.gov (United States)

    Hubbard, Guy

    2001-01-01

    Provides an introduction to the 2001-2002 Clip and Save Art Prints that will focus on ten art movements from the past 150 years. Includes information on three art movements, or "isms": Classicism, Romanticism, and Realism. Discusses the Clip and Save Art Print format and provides information on three artists. (CMK)

  18. Hierarchical Context Modeling for Video Event Recognition.

    Science.gov (United States)

    Wang, Xiaoyang; Ji, Qiang

    2016-10-11

    Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts at three levels: the image level, the semantic level, and the prior level. At the image level, we introduce two types of contextual features, appearance context features and interaction context features, to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on the deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts: scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at different levels. Through the hierarchical context model, contexts at different levels jointly contribute to event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts at each level improves event recognition performance, and jointly integrating the three levels of contexts through our hierarchical model achieves the best performance.
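
    As a rough illustration of multi-level context integration (not the paper's learned model, which uses a deep Boltzmann machine and learned interactions), per-level evidence for each candidate event can be combined log-linearly and renormalized into a distribution over events:

```python
import math

def combine_contexts(level_scores):
    """Naive log-linear combination: each context level supplies a list of
    per-event likelihood scores; sum the logs across levels (i.e. multiply
    the likelihoods) and renormalize so the result is a distribution."""
    n = len(level_scores[0])
    logs = [sum(math.log(level[i]) for level in level_scores) for i in range(n)]
    mx = max(logs)                      # subtract max for numerical stability
    exps = [math.exp(l - mx) for l in logs]
    z = sum(exps)
    return [e / z for e in exps]
```

    Here each inner list plays the role of one level (image, semantic, or prior); a level that is indifferent (uniform scores) leaves the decision to the other levels, which is the intuition behind joint integration.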

  19. Our History Clips: Collaborating for the Common Good

    Science.gov (United States)

    Bailey, Beatrice N.

    2017-01-01

    This case study reveals how middle school social studies teachers within a professional development program are encouraging their students to use multiple disciplinary literacies to create Our History Clips as they also work toward developing a classroom community of engaged student citizens.

  20. ASTP 15th Anniversary Clip-Media Release

    Science.gov (United States)

    1990-01-01

    This release is comprised of 5 separate clips, including the following: CL 762 Astronauts/Cosmonauts Visit to KSC and Walt Disney World; CL 739 ASTP Joint Crew Activities; CL 748 ASTP Astronauts/Cosmonauts Horlock Ranch Visit; CL 758 T-21 ASTP Training - US/USSR; and CL 743 ASTP Joint Crew Training in the Soviet Union.

  1. Clips supporting and spacing flanged sheets of reflective insulation

    International Nuclear Information System (INIS)

    Carr, R.W.

    1980-01-01

    This invention relates to clips, spacing and supporting flanged sheets of reflective insulation used to encase the main body and associated piping of nuclear reactors to minimize heat and radiation losses. (UK)

  2. A memory efficient user interface for CLIPS micro-computer applications

    Science.gov (United States)

    Sterle, Mark E.; Mayer, Richard J.; Jordan, Janice A.; Brodale, Howard N.; Lin, Min-Jin

    1990-01-01

    The goal of the Integrated Southern Pine Beetle Expert System (ISPBEX) is to provide expert level knowledge concerning treatment advice that is convenient and easy to use for Forest Service personnel. ISPBEX was developed in CLIPS and delivered on an IBM PC AT class micro-computer, operating with an MS/DOS operating system. This restricted the size of the run time system to 640K. In order to provide a robust expert system, with on-line explanation, help, and alternative actions menus, as well as features that allow the user to back up or execute 'what if' scenarios, a memory efficient menuing system was developed to interface with the CLIPS programs. By robust, we mean an expert system that (1) is user friendly, (2) provides reasonable solutions for a wide variety of domain specific problems, (3) explains why some solutions were suggested but others were not, and (4) provides technical information relating to the problem solution. Several advantages were gained by using this type of user interface (UI). First, by storing the menus on the hard disk (instead of main memory) during program execution, a more robust system could be implemented. Second, since the menus were built rapidly, development time was reduced. Third, the user may try a new scenario by backing up to any of the input screens and revising segments of the original input without having to retype all the information. And fourth, asserting facts from the menus provided for a dynamic and flexible fact base. This UI technology has been applied successfully in expert systems applications in forest management, agriculture, and manufacturing. This paper discusses the architecture of the UI system, human factors considerations, and the menu syntax design.

  3. Endoscopic removal of over-the-scope clips using a novel cutting device: a retrospective case series.

    Science.gov (United States)

    Schmidt, Arthur; Riecken, Bettina; Damm, Michael; Cahyadi, Oscar; Bauder, Markus; Caca, Karel

    2014-09-01

    Over-the-scope clips (OTSCs; Ovesco Endoscopy, Tübingen, Germany) are extensively used for treatment of gastrointestinal perforations, leakages, fistulas, and bleeding. In this report, a new method of removing OTSCs using a prototype bipolar cutting device is described. A total of 11 patients underwent endoscopic removal of an OTSC. The OTSC was cut at two opposing sites by a prototype device (DC ClipCutter; Ovesco Endoscopy). The remaining clip fragments were extracted using a standard forceps. Mean procedure time was 47 minutes (range 35 - 75 minutes). Cutting of the OTSC at two opposing sites was successful in all cases (100 %). Complete retrieval of all clip fragments was possible in 10 patients (91 %). The overall success rate for cutting and complete removal of the clip was 91 %. No major complications were observed. Removal of OTSCs with the prototype device was feasible and effective. The device may be valuable for OTSC removal in emergency as well as elective indications. © Georg Thieme Verlag KG Stuttgart · New York.

  4. Mining Videos for Features that Drive Attention

    Science.gov (United States)

    2015-04-01

    ...that can be added or removed from the final saliency computation. Examples of these features include intensity contrast, motion energy, color opponent... corresponding to the image. Each pixel in the feature map indicates the energy that the feature in question contributes at that location. In the standard... eye and head animation using a neurobiological model of visual attention. In: Bosacchi B, Fogel DB, Bezdek JC (eds) Proceedings of SPIE 48th annual...

  5. No-Reference Video Quality Assessment using MPEG Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2013-01-01

    We present a method for No-Reference (NR) Video Quality Assessment (VQA) for decoded video without access to the bitstream. This is achieved by extracting and pooling features from a NR image quality assessment method used frame by frame. We also present methods to identify the video coding and estimate the video coding parameters for MPEG-2 and H.264/AVC, which can be used to improve the VQA. The analysis differs from most other video coding analysis methods since it is performed without access to the bitstream. The results show that our proposed method is competitive with other recent NR VQA methods...
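
    The frame-by-frame idea can be sketched simply: score each decoded frame with an NR image-quality metric, then pool the per-frame scores into one video score. The pooling rule below (mean blended with a worst-frames term, assuming higher scores mean better quality) is illustrative, not the paper's actual pooling:

```python
import statistics

def pool_frame_scores(scores, worst_fraction=0.1):
    """Pool per-frame NR image-quality scores into one video-level score.
    Blends the overall mean with the mean of the worst fraction of frames,
    since brief quality drops weigh heavily in perceived video quality."""
    ordered = sorted(scores)  # ascending: worst frames first
    k = max(1, int(len(ordered) * worst_fraction))
    worst = statistics.mean(ordered[:k])
    return 0.5 * statistics.mean(scores) + 0.5 * worst
```

    A single badly coded frame thus drags the video score down far more than it moves the plain mean, which matches the motivation for pooling rather than simple averaging.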

  6. Blind prediction of natural video quality.

    Science.gov (United States)

    Saad, Michele A; Bovik, Alan C; Charrier, Christophe

    2014-03-01

    We propose a blind (no-reference or NR) video quality evaluation model that is not distortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called Video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top-performing reduced- and full-reference VQA algorithms.
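
    One simple stand-in for a motion coherency measure (the paper's motion model is more elaborate) is the ratio of the magnitude of the mean motion vector to the mean magnitude of the individual vectors: it is 1.0 when all motion points the same way and near 0 when motion is directionless.

```python
import math

def motion_coherency(vectors):
    """Coherency of a set of 2D motion vectors: |mean vector| / mean |vector|.
    Returns 1.0 for perfectly coherent motion, 0.0 for fully cancelling
    motion, and 0.0 by convention when there is no motion at all."""
    n = len(vectors)
    mx = sum(v[0] for v in vectors) / n
    my = sum(v[1] for v in vectors) / n
    mean_mag = sum(math.hypot(*v) for v in vectors) / n
    if mean_mag == 0:
        return 0.0
    return math.hypot(mx, my) / mean_mag
```

    A panning shot scores near 1.0, while turbulent or noisy motion fields score near 0; a feature like this separates natural camera motion from distortion-induced jitter.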

  7. Parkinson's Disease Videos

    Medline Plus

    Full Text Available ... CareMAP video titles include "When Is It Time to Get Help?", "Unconditional Love", "Rest and Sleep", and "Mealtime and Swallowing: Part 1" ... of books, fact sheets, videos, podcasts, and more. To get started, use the search feature or check ...

  8. Video2vec Embeddings Recognize Events when Examples are Scarce

    OpenAIRE

    Habibian, A.; Mensink, T.; Snoek, C.G.M.

    2017-01-01

    This paper aims for event recognition when video examples are scarce or even completely absent. The key in such a challenging setting is a semantic video representation. Rather than building the representation from individual attribute detectors and their annotations, we propose to learn the entire representation from freely available web videos and their descriptions using an embedding between video features and term vectors. In our proposed embedding, which we call Video2vec, the correlatio...
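    The core of such an embedding can be sketched with a toy linear model (an illustrative stand-in, not the paper's actual Video2vec architecture): learn a map from video features to term-vector space on annotated training videos, then score a new video against an event described only by its term vector, enabling recognition with no visual examples.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data: 200 training videos, 50-d visual features, 20-d term vectors
X = rng.normal(size=(200, 50))                       # video features
W_true = rng.normal(size=(50, 20))                   # hidden ground-truth map
T = X @ W_true + 0.01 * rng.normal(size=(200, 20))   # term-vector annotations

# ridge-regression embedding from video-feature space to term-vector space
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(50), X.T @ T)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# zero-shot scoring: compare an unseen video's embedded features with an
# event that is specified purely by a term vector (no example videos)
x_new = rng.normal(size=50)
event_terms = x_new @ W_true      # the event's textual description vector
score = cosine(x_new @ W, event_terms)
print(round(score, 3))
```

    The learned map here is plain ridge regression; the paper learns the embedding jointly from web videos and descriptions, but the zero-shot scoring step works the same way.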

  9. Unattended digital video surveillance: A system prototype for EURATOM safeguards

    International Nuclear Information System (INIS)

    Chare, P.; Goerten, J.; Wagner, H.; Rodriguez, C.; Brown, J.E.

    1994-01-01

    Ever increasing capabilities in video and computer technology have changed the face of video surveillance. From yesterday's film and analog video tape-based systems, we now emerge into the digital era with surveillance systems capable of digital image processing, image analysis, decision control logic, and random data access features -- all of which provide greater versatility with the potential for increased effectiveness in video surveillance. Digital systems also offer other advantages, such as the ability to ''compress'' data, providing increased storage capacities and the potential for allowing longer surveillance periods. Remote surveillance and system-to-system communications are also benefits that can be derived from digital surveillance systems. All of these features are extremely important in today's climate of increasing safeguards activity and decreasing budgets. Los Alamos National Laboratory's Safeguards Systems Group and the EURATOM Safeguards Directorate have teamed to design and implement a prototype surveillance system that will take advantage of the versatility of digital video for facility surveillance and data review. In this paper we familiarize you with system components and features and report on progress in developmental areas such as image compression and region-of-interest processing

  10. Neural Basis of Video Gaming: A Systematic Review

    Science.gov (United States)

    Palaus, Marc; Marron, Elena M.; Viejo-Sobera, Raquel; Redolar-Ripoll, Diego

    2017-01-01

    Background: Video gaming is an increasingly popular activity in contemporary society, especially among young people, and video games are increasing in popularity not only as a research tool but also as a field of study. Many studies have focused on the neural and behavioral effects of video games, providing a great deal of video game derived brain correlates in recent decades. There is a great amount of information, obtained through a myriad of methods, providing neural correlates of video games. Objectives: We aim to understand the relationship between the use of video games and their neural correlates, taking into account the whole variety of cognitive factors that they encompass. Methods: A systematic review was conducted using standardized search operators that included the presence of video games and neuro-imaging techniques or references to structural or functional brain changes. Separate categories were made for studies featuring Internet Gaming Disorder and studies focused on the violent content of video games. Results: A total of 116 articles were considered for the final selection. One hundred provided functional data and 22 measured structural brain changes. One-third of the studies covered video game addiction, and 14% focused on video game related violence. Conclusions: Despite the innate heterogeneity of the field of study, it has been possible to establish a series of links between the neural and cognitive aspects, particularly regarding attention, cognitive control, visuospatial skills, cognitive workload, and reward processing. However, many aspects could be improved. The lack of standardization in the different aspects of video game related research, such as the participants' characteristics, the features of each video game genre and the diverse study goals could contribute to discrepancies in many related studies. PMID:28588464

  11. Neural Basis of Video Gaming: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Marc Palaus

    2017-05-01

    Full Text Available Background: Video gaming is an increasingly popular activity in contemporary society, especially among young people, and video games are increasing in popularity not only as a research tool but also as a field of study. Many studies have focused on the neural and behavioral effects of video games, providing a great deal of video game derived brain correlates in recent decades. There is a great amount of information, obtained through a myriad of methods, providing neural correlates of video games. Objectives: We aim to understand the relationship between the use of video games and their neural correlates, taking into account the whole variety of cognitive factors that they encompass. Methods: A systematic review was conducted using standardized search operators that included the presence of video games and neuro-imaging techniques or references to structural or functional brain changes. Separate categories were made for studies featuring Internet Gaming Disorder and studies focused on the violent content of video games. Results: A total of 116 articles were considered for the final selection. One hundred provided functional data and 22 measured structural brain changes. One-third of the studies covered video game addiction, and 14% focused on video game related violence. Conclusions: Despite the innate heterogeneity of the field of study, it has been possible to establish a series of links between the neural and cognitive aspects, particularly regarding attention, cognitive control, visuospatial skills, cognitive workload, and reward processing. However, many aspects could be improved. The lack of standardization in the different aspects of video game related research, such as the participants' characteristics, the features of each video game genre and the diverse study goals could contribute to discrepancies in many related studies.

  12. Neural Basis of Video Gaming: A Systematic Review.

    Science.gov (United States)

    Palaus, Marc; Marron, Elena M; Viejo-Sobera, Raquel; Redolar-Ripoll, Diego

    2017-01-01

    Background: Video gaming is an increasingly popular activity in contemporary society, especially among young people, and video games are increasing in popularity not only as a research tool but also as a field of study. Many studies have focused on the neural and behavioral effects of video games, providing a great deal of video game derived brain correlates in recent decades. There is a great amount of information, obtained through a myriad of methods, providing neural correlates of video games. Objectives: We aim to understand the relationship between the use of video games and their neural correlates, taking into account the whole variety of cognitive factors that they encompass. Methods: A systematic review was conducted using standardized search operators that included the presence of video games and neuro-imaging techniques or references to structural or functional brain changes. Separate categories were made for studies featuring Internet Gaming Disorder and studies focused on the violent content of video games. Results: A total of 116 articles were considered for the final selection. One hundred provided functional data and 22 measured structural brain changes. One-third of the studies covered video game addiction, and 14% focused on video game related violence. Conclusions: Despite the innate heterogeneity of the field of study, it has been possible to establish a series of links between the neural and cognitive aspects, particularly regarding attention, cognitive control, visuospatial skills, cognitive workload, and reward processing. However, many aspects could be improved. The lack of standardization in the different aspects of video game related research, such as the participants' characteristics, the features of each video game genre and the diverse study goals could contribute to discrepancies in many related studies.

  13. Video game use and cognitive performance: does it vary with the presence of problematic video game use?

    Science.gov (United States)

    Collins, Emily; Freeman, Jonathan

    2014-03-01

    Action video game players have been found to outperform nonplayers on a variety of cognitive tasks. However, several failures to replicate these video game player advantages have indicated that this relationship may not be straightforward. Moreover, despite the discovery that problematic video game players do not appear to demonstrate the same superior performance as nonproblematic video game players in relation to multiple object tracking paradigms, this has not been investigated for other tasks. Consequently, this study compared gamers and nongamers in task switching ability, visual short-term memory, mental rotation, enumeration, and flanker interference, as well as investigated the influence of self-reported problematic video game use. A total of 66 participants completed the experiment, 26 of whom played action video games, including 20 problematic players. The results revealed no significant effect of playing action video games, nor any influence of problematic video game play. This indicates that the previously reported cognitive advantages in video game players may be restricted to specific task features or samples. Furthermore, problematic video game play may not have a detrimental effect on cognitive performance, although this is difficult to ascertain considering the lack of video game player advantage. More research is therefore sorely needed.

  14. Turbo Decision Aided Receivers for Clipping Noise Mitigation in Coded OFDM

    Directory of Open Access Journals (Sweden)

    Declercq David

    2008-01-01

    Full Text Available Abstract Orthogonal frequency division multiplexing (OFDM) is the modulation technique used in most of the high-rate communication standards. However, OFDM signals exhibit a high peak-to-average power ratio (PAPR) that makes them particularly sensitive to nonlinear distortions caused by high-power amplifiers. Hence, the amplifier needs to operate at large output backoff, thereby decreasing the average efficiency of the transmitter. One way to reduce PAPR consists in clipping the amplitude of the OFDM signal, introducing an additional noise that degrades the overall system performance. In that case, the receiver needs to set up an algorithm that compensates this clipping noise. In this paper, we propose three new iterative receivers with growing complexity and performance that operate at severe clipping: the first and simplest receiver uses a Viterbi algorithm as channel decoder, whereas the other two implement a soft-input soft-output (SISO) decoder. Each soft receiver is analyzed through EXIT charts for different mappings. Finally, the performances of the receivers are simulated on both a short time-varying channel and an AWGN channel.

  15. Turbo Decision Aided Receivers for Clipping Noise Mitigation in Coded OFDM

    Directory of Open Access Journals (Sweden)

    Maxime Colas

    2008-02-01

    Full Text Available Orthogonal frequency division multiplexing (OFDM) is the modulation technique used in most of the high-rate communication standards. However, OFDM signals exhibit a high peak-to-average power ratio (PAPR) that makes them particularly sensitive to nonlinear distortions caused by high-power amplifiers. Hence, the amplifier needs to operate at large output backoff, thereby decreasing the average efficiency of the transmitter. One way to reduce PAPR consists in clipping the amplitude of the OFDM signal, introducing an additional noise that degrades the overall system performance. In that case, the receiver needs to set up an algorithm that compensates this clipping noise. In this paper, we propose three new iterative receivers with growing complexity and performance that operate at severe clipping: the first and simplest receiver uses a Viterbi algorithm as channel decoder, whereas the other two implement a soft-input soft-output (SISO) decoder. Each soft receiver is analyzed through EXIT charts for different mappings. Finally, the performances of the receivers are simulated on both a short time-varying channel and an AWGN channel.
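    The clipping operation the two records above describe can be sketched in a few lines (an illustrative simulation, not the paper's receivers): generate a QPSK-modulated OFDM symbol via an IFFT, clip its amplitude at a threshold tied to the RMS level, and observe that the PAPR drops while a clipping-noise term appears that the iterative receivers must then compensate. The 1.4× RMS clipping ratio is an arbitrary choice for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1024  # number of subcarriers

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

# random QPSK symbols -> time-domain OFDM signal via IFFT
sym = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(sym) * np.sqrt(N)

# amplitude clipping at ~1.4x the RMS amplitude (phase is preserved)
rms = np.sqrt(np.mean(np.abs(x) ** 2))
A = 1.4 * rms
mag = np.abs(x)
clipped = np.where(mag > A, A * x / np.maximum(mag, 1e-12), x)

clip_noise = clipped - x  # the distortion a compensating receiver must undo
print(round(papr_db(x), 2), round(papr_db(clipped), 2))
```

    For large N the unclipped OFDM envelope is approximately complex Gaussian, which is why the peaks clipped here are so pronounced.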

  16. A calibration method for proposed XRF measurements of arsenic and selenium in nail clippings

    International Nuclear Information System (INIS)

    Gherase, Mihai R; Fleming, David E B

    2011-01-01

    A calibration method for proposed x-ray fluorescence (XRF) measurements of arsenic and selenium in nail clippings is demonstrated. Phantom nail clippings were produced from a whole nail phantom (0.7 mm thickness, 25 × 25 mm² area) and contained equal concentrations of arsenic and selenium ranging from 0 to 20 μg g⁻¹ in increments of 5 μg g⁻¹. The phantom nail clippings were then grouped in samples of five different masses: 20, 40, 60, 80 and 100 mg for each concentration. Experimental x-ray spectra were acquired for each of the sample masses using a portable x-ray tube and a detector unit. Calibration lines (XRF signal in number of counts versus stoichiometric elemental concentration) were produced for each of the two elements. A semi-empirical relationship between the mass of the nail phantoms (m) and the slope of the calibration line (s) was determined separately for arsenic and selenium. Using this calibration method, one can estimate elemental concentrations and their uncertainties from the XRF spectra of human nail clippings. (note)
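    The calibration logic can be sketched numerically (with simulated counts; the saturating form of the mass-slope relationship below is a hypothetical stand-in for the note's semi-empirical fit): fit a calibration slope s for each phantom mass, then invert signal = s(m) · concentration to estimate an unknown sample's concentration.

```python
import numpy as np

rng = np.random.default_rng(3)

concs = np.array([0.0, 5.0, 10.0, 15.0, 20.0])   # ug/g, as in the phantoms
masses = np.array([20, 40, 60, 80, 100])          # mg sample masses

def true_slope(m, a=12.0, b=40.0):
    """Hypothetical saturating mass dependence of the calibration slope."""
    return a * m / (m + b)   # counts per (ug/g)

slopes = []
for m in masses:
    # simulated XRF counts for this mass, with Gaussian counting noise
    counts = true_slope(m) * concs + rng.normal(0, 2.0, size=concs.size)
    # least-squares slope of the calibration line through the origin
    slopes.append(np.sum(counts * concs) / np.sum(concs ** 2))
slopes = np.array(slopes)

# estimate an unknown sample: a 60 mg batch of clippings giving 80 counts
s60 = slopes[list(masses).index(60)]
c_est = 80.0 / s60
print(round(c_est, 1))  # estimated concentration in ug/g
```

    In practice the fitted s(m) curve, rather than a single tabulated slope, would be evaluated at the measured clipping mass, and the fit residuals propagate into the concentration uncertainty.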

  17. Virus-Clip: a fast and memory-efficient viral integration site detection tool at single-base resolution with annotation capability.

    Science.gov (United States)

    Ho, Daniel W H; Sze, Karen M F; Ng, Irene O L

    2015-08-28

    Viral integration into the human genome upon infection is an important risk factor for various human malignancies. We developed a viral integration site detection tool called Virus-Clip, which makes use of information extracted from soft-clipped sequencing reads to identify exact positions of human and virus breakpoints of integration events. With initial read alignment to the virus reference genome and streamlined procedures, Virus-Clip delivers a simple, fast and memory-efficient solution to viral integration site detection. Moreover, it can also automatically annotate the integration events with the corresponding affected human genes. Virus-Clip has been verified using whole-transcriptome sequencing data and its detection was validated to have satisfactory sensitivity and specificity. Marked advancement in performance was detected, compared to existing tools. It is applicable to versatile types of data including whole-genome sequencing, whole-transcriptome sequencing, and targeted sequencing. Virus-Clip is available at http://web.hku.hk/~dwhho/Virus-Clip.zip.
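    The soft-clip idea can be sketched as follows (a minimal illustration of the general principle, not Virus-Clip's implementation): in SAM/BAM alignments, a soft-clip operation (`S`) at either end of a read's CIGAR string marks bases that did not align, and the reference coordinate where the clip begins is a candidate integration breakpoint at single-base resolution.

```python
import re

def softclip_breakpoint(pos, cigar):
    """Return (breakpoint, side) for a read whose CIGAR carries a terminal
    soft clip (S); pos is the 1-based leftmost aligned reference position."""
    ops = re.findall(r"(\d+)([MIDNSHP=X])", cigar)
    if not ops:
        return None
    if ops[0][1] == "S":
        # clip on the left: the breakpoint is the alignment start itself
        return pos, "left"
    if ops[-1][1] == "S":
        # clip on the right: advance by the reference-consuming operations
        ref_span = sum(int(n) for n, op in ops if op in "MDN=X")
        return pos + ref_span - 1, "right"
    return None  # fully aligned read: no breakpoint evidence

print(softclip_breakpoint(1000, "30S70M"))   # (1000, 'left')
print(softclip_breakpoint(1000, "70M30S"))   # (1069, 'right')
```

    The clipped bases themselves would then be re-aligned to the other genome (human or virus) to pin down the partner breakpoint of the integration event.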

  18. MANHATTAN: The View From Los Alamos of History's Most Secret Project

    Energy Technology Data Exchange (ETDEWEB)

    Carr, Alan Brady [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-12-22

    This presentation covers the political and scientific events leading up to the creation of the Manhattan Project. The creation of the Manhattan Project’s three most significant sites--Los Alamos, Oak Ridge, and Hanford--is also discussed. The lecture concludes by exploring the use of the atomic bombs at the end of World War II. The presentation slides include three videos. The first is a short clip of the 100-Ton Test. The 100-Ton Test was history’s largest measured blast at that point in time; it was a pre-test for Trinity, the world’s first nuclear detonation. The second clip features views of Trinity followed by a short statement by the Laboratory’s first director, J. Robert Oppenheimer. The final clip shows Norris Bradbury talking about arms control.

  19. Virtual Reality Cerebral Aneurysm Clipping Simulation With Real-time Haptic Feedback

    Science.gov (United States)

    Alaraj, Ali; Luciano, Cristian J.; Bailey, Daniel P.; Elsenousi, Abdussalam; Roitberg, Ben Z.; Bernardo, Antonio; Banerjee, P. Pat; Charbel, Fady T.

    2014-01-01

    Background With the decrease in the number of cerebral aneurysms treated surgically and the increase of complexity of those treated surgically, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. Objective To develop and evaluate the usefulness of a new haptic-based virtual reality (VR) simulator in the training of neurosurgical residents. Methods A real-time sensory haptic feedback virtual reality aneurysm clipping simulator was developed using the Immersive Touch platform. A prototype middle cerebral artery aneurysm simulation was created from a computed tomography angiogram. Aneurysm and vessel volume deformation and haptic feedback are provided in a 3-D immersive VR environment. Intraoperative aneurysm rupture was also simulated. Seventeen neurosurgery residents from three residency programs tested the simulator and provided feedback on its usefulness and resemblance to real aneurysm clipping surgery. Results Residents felt that the simulation would be useful in preparing for real-life surgery. About two thirds of the residents felt that the 3-D immersive anatomical details provided a very close resemblance to real operative anatomy and accurate guidance for deciding surgical approaches. They believed the simulation is useful for preoperative surgical rehearsal and neurosurgical training. One third of the residents felt that the technology in its current form provided very realistic haptic feedback for aneurysm surgery. Conclusion Neurosurgical residents felt that the novel immersive VR simulator is helpful in their training especially since they do not get a chance to perform aneurysm clippings until very late in their residency programs. PMID:25599200

  20. [Rapid 3-Dimensional Models of Cerebral Aneurysm for Emergency Surgical Clipping].

    Science.gov (United States)

    Konno, Takehiko; Mashiko, Toshihiro; Oguma, Hirofumi; Kaneko, Naoki; Otani, Keisuke; Watanabe, Eiju

    2016-08-01

    We developed a method for manufacturing solid models of cerebral aneurysms, with a shorter printing time than that involved in conventional methods, using a compact 3D printer with acrylonitrile-butadiene-styrene (ABS) resin. We further investigated the application and utility of this printing system in emergency clipping surgery. A total of 16 patients diagnosed with acute subarachnoid hemorrhage resulting from cerebral aneurysm rupture were enrolled in the present study. Emergency clipping was performed on the day of hospitalization. Digital Imaging and Communication in Medicine (DICOM) data obtained from computed tomography angiography (CTA) scans were edited and converted to stereolithography (STL) file formats, followed by the production of 3D models of the cerebral aneurysm by using the 3D printer. The mean time from hospitalization to the commencement of surgery was 242 min, whereas the mean time required for manufacturing the 3D model was 67 min. The average cost of each 3D model was 194 Japanese Yen. The time required for manufacturing the 3D models was shortened to approximately 1 hour with increasing experience of producing 3D models. Favorable impressions for the use of the 3D models in clipping were reported by almost all neurosurgeons included in this study. Although 3D printing is often considered to involve huge costs and long manufacturing time, the method used in the present study requires shorter time and lower costs than conventional methods for manufacturing 3D cerebral aneurysm models, thus making it suitable for use in emergency clipping.

  1. [Choledochal lithiasis and stenosis secondary to the migration of a surgical clip].

    Science.gov (United States)

    Baldomà España, M; Pernas Canadell, J C; González Ceballos, S

    2014-01-01

    The migration of a clip to the common bile duct after cholecystectomy is an uncommon, usually late, complication that can lead to stone formation, stenosis, and obstruction in the bile duct. We present the case of a patient who presented with signs and symptoms of cholangitis due to clip migration one year after laparoscopic cholecystectomy; endoscopic retrograde cholangiopancreatography and biliary tract stent placement resolved the problem. Copyright © 2011 SERAM. Published by Elsevier España. All rights reserved.

  2. Origin of fin-clipped salmonids collected at two thermal discharges on Lake Michigan

    International Nuclear Information System (INIS)

    Romberg, G.P.; Thommes, M.M.; Spigarelli, S.A.

    1974-01-01

    Fin clips observed on fish collected during tagging studies at the Point Beach and Waukegan thermal discharges were recorded and the data were tabulated by species. Using fin clip and fish size, attempts were made to identify probable stocking locations and dates from agency records. Data are presented for lake trout, rainbow trout, brown trout, and Coho salmon. Tables are presented to show probable stocking locations and dates

  3. A method for creating teaching movie clips using screen recording software: usefulness of teaching movies as self-learning tools for medical students

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Seong Su [The Catholic University of Korea, Suwon (Korea, Republic of)

    2007-04-15

    I wanted to describe a method to create teaching movies using screen recordings, and to see whether self-learning movies are useful for medical students. Teaching movies were created by directly recording the screen activity and voice narration during the interpretation of educational cases; we used a PACS system and screen recording software (CamStudio, Rendersoft, U.S.A.). The usefulness of teaching movies for self-learning of abdominal CT anatomy was evaluated by the medical students. Creating teaching movie clips using screen recording software was simple and easy. Survey responses were collected from 43 medical students. The content of the teaching movies was adequately understandable (52%) and useful for learning (47%). Only 23% of the students agreed that these movies helped motivate them to learn. Teaching movies were more useful than still photographs of the teaching image files. The students wanted teaching movies on the cross-sectional CT anatomy of different body regions (82%) and for understanding the radiological interpretation of various diseases (42%). Creating a teaching movie by directly recording a radiologist's interpretation process on screen is easy and simple. The teaching video clips reveal a radiologist's interpretation process or the explanation of teaching cases with his/her own voice narration, and they are an effective self-learning tool for medical students and residents.

  4. A method for creating teaching movie clips using screen recording software: usefulness of teaching movies as self-learning tools for medical students

    International Nuclear Information System (INIS)

    Hwang, Seong Su

    2007-01-01

    I wanted to describe a method to create teaching movies using screen recordings, and to see whether self-learning movies are useful for medical students. Teaching movies were created by directly recording the screen activity and voice narration during the interpretation of educational cases; we used a PACS system and screen recording software (CamStudio, Rendersoft, U.S.A.). The usefulness of teaching movies for self-learning of abdominal CT anatomy was evaluated by the medical students. Creating teaching movie clips using screen recording software was simple and easy. Survey responses were collected from 43 medical students. The content of the teaching movies was adequately understandable (52%) and useful for learning (47%). Only 23% of the students agreed that these movies helped motivate them to learn. Teaching movies were more useful than still photographs of the teaching image files. The students wanted teaching movies on the cross-sectional CT anatomy of different body regions (82%) and for understanding the radiological interpretation of various diseases (42%). Creating a teaching movie by directly recording a radiologist's interpretation process on screen is easy and simple. The teaching video clips reveal a radiologist's interpretation process or the explanation of teaching cases with his/her own voice narration, and they are an effective self-learning tool for medical students and residents.

  5. Robust video object cosegmentation.

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Li, Xuelong; Porikli, Fatih

    2015-10-01

    With ever-increasing volumes of video data, automatic extraction of salient object regions became even more significant for visual analytic solutions. This surge has also opened up opportunities for taking advantage of collective cues encapsulated in multiple videos in a cooperative manner. However, it also brings up major challenges, such as handling of drastic appearance, motion pattern, and pose variations, of foreground objects as well as indiscriminate backgrounds. Here, we present a cosegmentation framework to discover and segment out common object regions across multiple frames and multiple videos in a joint fashion. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity into an energy optimization framework that does not make restrictive assumptions on foreground appearance and motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor to integrate across-video correspondence from the conventional SIFT-flow into interframe motion flow from optical flow. This novel spatio-temporal SIFT flow generates reliable estimations of common foregrounds over the entire video data set. Experimental results show that our method outperforms the state-of-the-art on a new extensive data set (ViCoSeg).
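    The three cues named above can be combined into a per-pixel foreground score, sketched below as a simple weighted average (an illustrative toy, not the paper's energy optimization; the weights and the 0.5 threshold are arbitrary assumptions). Each input is a map in [0, 1]: intraframe saliency, an interframe-consistency map (e.g. the previous frame's mask warped by motion flow), and an across-video similarity map.

```python
import numpy as np

def cosegment_scores(saliency, warped_prev, cross_video, w=(1.0, 0.5, 0.5)):
    """Fuse intraframe saliency, interframe consistency, and across-video
    similarity maps (all in [0, 1]) into a normalized foreground score."""
    a, b, c = w
    s = a * saliency + b * warped_prev + c * cross_video
    return s / (a + b + c)   # stays in [0, 1]

rng = np.random.default_rng(4)
sal = rng.random((4, 4))
prev = rng.random((4, 4))
cross = rng.random((4, 4))
mask = cosegment_scores(sal, prev, cross) > 0.5   # binary foreground mask
```

    The actual method minimizes a joint energy over all frames and videos rather than thresholding independently per pixel; the sketch only shows how the three cues enter with relative weights.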

  6. BioClips of symmetric and asymmetric cell division.

    Science.gov (United States)

    Lu, Fong-Mei; Eliceiri, Kevin W; White, John G

    2007-05-01

    Animations have long been used as tools to illustrate complex processes in such diverse fields as mechanical engineering, astronomy, bacteriology and physics. Animations in biology hold particular educational promise for depicting complex dynamic processes, such as photosynthesis, motility, viral replication and cellular respiration, which cannot be easily explained using static two-dimensional images. However, these animations have often been restrictive in scope, having been created for a specific classroom or research audience. In recent years, a new type of animation has emerged called the BioClip (http://www.bioclips.com) that strives to present science in an interactive multimedia format, which is, at once, informative and entertaining, by combining animations, text descriptions and music in one portable cross-platform document. In the present article, we illustrate the educational value of this new electronic resource by reviewing in depth two BioClips our group has created which describe the processes of symmetric and asymmetric cell division (http://www.wormclassroom.org/cb/bioclip).

  7. Effects of Tail Clipping on Larval Performance and Tail Regeneration Rates in the Near Eastern Fire Salamander, Salamandra infraimmaculata.

    Directory of Open Access Journals (Sweden)

    Ori Segev

    Full Text Available Tail-tip clipping is a common technique for collecting tissue samples from amphibian larvae and adults. Surprisingly, studies of this invasive sampling procedure or of natural tail clipping--i.e., bites inflicted by predators including conspecifics--on the performance and fitness of aquatic larval stages of urodeles are scarce. We conducted two studies in which we assessed the effects of posterior tail clipping (~30 percent of the tail) on Near Eastern fire salamander (Salamandra infraimmaculata) larvae. In a laboratory study, we checked regeneration rates after posterior tail-tip clipping at different ages. Regeneration rates were hump-shaped, peaking at the age of ~30 days and then decreasing. This variation in tail regeneration rates suggests tradeoffs in resource allocation between regeneration and somatic growth during early and advanced development. In an outdoor artificial pond experiment, under constant larval densities, we assessed how tail clipping of newborn larvae affects survival to, time to, and size at metamorphosis. Repeated measures ANOVA on mean larval survival per pond revealed no effect of tail clipping. Tail clipping correspondingly had no effect on larval growth and development expressed in size (mass and snout-vent length) at, and time to, metamorphosis. We conclude that despite the given variation in tail regeneration rates throughout larval ontogeny, clipping of 30 percent of the posterior tail area seems to have no adverse effects on larval fitness and survival. We suggest that future use of this imperative tool for the study of amphibians should take into account the larval developmental stage at the time of application and not just the relative size of the clipped tail sample.

  8. Harmonic Scalpel versus electrocautery and surgical clips in head and neck free-flap harvesting.

    Science.gov (United States)

    Dean, Nichole R; Rosenthal, Eben L; Morgan, Bruce A; Magnuson, J Scott; Carroll, William R

    2014-06-01

    We sought to determine the safety and utility of Harmonic Scalpel-assisted free-flap harvesting as an alternative to a combined electrocautery and surgical clip technique. The medical records of 103 patients undergoing radial forearm free-flap reconstruction (105 free flaps) for head and neck surgical defects between 2006 and 2008 were reviewed. The use of bipolar electrocautery and surgical clips for division of small perforating vessels (n = 53) was compared to ultrasonic energy (Harmonic Scalpel; Ethicon Endo-Surgery, Inc., Cincinnati, Ohio) (n = 52) free-tissue harvesting techniques. Flap-harvesting time was reduced with the use of the Harmonic Scalpel when compared with electrocautery and surgical clip harvest (31.4 vs. 36.9 minutes, respectively; p = 0.06). Two patients who underwent flap harvest with electrocautery and surgical clips developed postoperative donor site hematomas, whereas no donor site complications were noted in the Harmonic Scalpel group. Recipient site complication rates for infection, fistula, and hematoma were similar for both harvesting techniques (p = 0.77). Two flap failures occurred in the clip-assisted radial forearm free-flap harvest group, and none in the Harmonic Scalpel group. Median length of hospitalization was significantly reduced for patients who underwent free-flap harvest with the Harmonic Scalpel when compared with the other technique (7 vs. 8 days; p = 0.01). The Harmonic Scalpel is safe, and its use is feasible for radial forearm free-flap harvest.

  9. Psychophysiological correlates of sexually and non-sexually motivated attention to film clips in a workload task.

    Science.gov (United States)

    Carvalho, Sandra; Leite, Jorge; Galdo-Álvarez, Santiago; Gonçalves, Oscar F

    2011-01-01

    Some authors have speculated that the cognitive component (P3) of the Event-Related Potential (ERP) can function as a psychophysiological measure of sexual interest. The aim of this study was to determine if the P3 ERP component in a workload task can be used as a specific and objective measure of sexual motivation by comparing the neurophysiologic response to stimuli of motivational relevance with different levels of valence and arousal. A total of 30 healthy volunteers watched different film clips with erotic, horror, social-positive and social-negative content, while answering an auditory oddball paradigm. Erotic film clips resulted in larger interference when compared to both the social-positive and auditory alone conditions. Horror film clips resulted in the highest levels of interference, with smaller P3 amplitudes than the erotic as well as the social-positive, social-negative and auditory alone conditions. No gender differences were found. Both horror and erotic film clips significantly decreased heart rate (HR) when compared to both social-positive and social-negative films. The erotic film clips significantly increased the skin conductance level (SCL) compared to the social-negative films. The horror film clips significantly increased the SCL compared to both social-positive and social-negative films. Both the highly arousing erotic and non-erotic (horror) movies produced the largest decrease in the P3 amplitude, a decrease in the HR and an increase in the SCL. These data support the notion that this workload task is very sensitive to the attentional resources allocated to the film clip, although they do not act as a specific index of sexual interest. Therefore, the use of this methodology seems to be of questionable utility as a specific measure of sexual interest or as an objective measure of the severity of Hypoactive Sexual Desire Disorder. © 2011 Carvalho et al.

  10. Psychophysiological correlates of sexually and non-sexually motivated attention to film clips in a workload task.

    Directory of Open Access Journals (Sweden)

    Sandra Carvalho

    Full Text Available Some authors have speculated that the cognitive component (P3) of the Event-Related Potential (ERP) can function as a psychophysiological measure of sexual interest. The aim of this study was to determine whether the P3 ERP component in a workload task can be used as a specific and objective measure of sexual motivation by comparing the neurophysiologic response to stimuli of motivational relevance with different levels of valence and arousal. A total of 30 healthy volunteers watched film clips with erotic, horror, social-positive and social-negative content while answering an auditory oddball paradigm. Erotic film clips resulted in larger interference when compared to both the social-positive and auditory-alone conditions. Horror film clips resulted in the highest levels of interference, with smaller P3 amplitudes than the erotic, social-positive, social-negative and auditory-alone conditions. No gender differences were found. Both horror and erotic film clips significantly decreased heart rate (HR) when compared to both social-positive and social-negative films. The erotic film clips significantly increased the skin conductance level (SCL) compared to the social-negative films. The horror film clips significantly increased the SCL compared to both social-positive and social-negative films. Both the highly arousing erotic and non-erotic (horror) movies produced the largest decrease in P3 amplitude, a decrease in HR and an increase in SCL. These data support the notion that this workload task is very sensitive to the attentional resources allocated to the film clip, although it does not act as a specific index of sexual interest. Therefore, the use of this methodology seems to be of questionable utility as a specific measure of sexual interest or as an objective measure of the severity of Hypoactive Sexual Desire Disorder.

  11. Detection of LSB+/-1 steganography based on co-occurrence matrix and bit plane clipping

    Science.gov (United States)

    Abolghasemi, Mojtaba; Aghaeinia, Hassan; Faez, Karim; Mehrabi, Mohammad Ali

    2010-01-01

    Spatial LSB+/-1 steganography changes the smoothness characteristics between adjoining pixels of the raw image. We present a novel steganalysis method for LSB+/-1 steganography based on feature vectors derived from the co-occurrence matrix in the spatial domain. We investigate how LSB+/-1 steganography affects the bit planes of an image and show that it mainly changes the least significant bit (LSB) planes. The co-occurrence matrix is derived from an image in which some of the most significant bit planes are clipped. Besides reducing the dimensionality of the feature vector, this preprocessing also preserves the effects of embedding. We compute the co-occurrence matrix in different directions and with different dependencies and use the elements of the resulting matrices as features. This method is sensitive to the data-embedding process. We use a Fisher linear discriminant (FLD) classifier and test our algorithm on different databases and embedding rates. We compare our scheme with current LSB+/-1 steganalysis methods. It is shown that the proposed scheme outperforms the state-of-the-art methods in detecting LSB+/-1 steganography in grayscale images.
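    The preprocessing and feature-extraction steps above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' exact configuration: the function names, the choice of keeping three low bit planes, and the single horizontal neighbor offset are assumptions; the paper combines several directions and dependencies before feeding the features to the FLD classifier.

```python
import numpy as np

def clip_msb_planes(img, keep_bits=3):
    # Discard the most significant bit planes, keeping only the low
    # planes where LSB+/-1 embedding concentrates its changes.
    return img & ((1 << keep_bits) - 1)

def cooccurrence_features(img, dx=1, dy=0, keep_bits=3):
    """Normalized co-occurrence matrix of the bit-plane-clipped image
    for one neighbor offset (dx, dy), flattened as a feature vector."""
    g = clip_msb_planes(np.asarray(img, dtype=np.uint8), keep_bits)
    levels = 1 << keep_bits
    h, w = g.shape
    # Pair each pixel with its (dx, dy) neighbor and histogram the pairs.
    a = g[:h - dy, :w - dx].ravel().astype(np.int64)
    b = g[dy:, dx:].ravel().astype(np.int64)
    counts = np.bincount(a * levels + b, minlength=levels * levels)
    return counts / counts.sum()

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
f = cooccurrence_features(img, dx=1, dy=0, keep_bits=3)
print(f.shape)  # (64,): an 8x8 co-occurrence matrix, flattened
```

Concatenating such vectors over several (dx, dy) offsets would approximate the paper's multi-direction feature set.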

  12. Impact of Aneurysm Projection on Intraoperative Complications During Surgical Clipping of Ruptured Posterior Communicating Artery Aneurysms.

    Science.gov (United States)

    Fukuda, Hitoshi; Hayashi, Kosuke; Yoshino, Kumiko; Koyama, Takashi; Lo, Benjamin; Kurosaki, Yoshitaka; Yamagata, Sen

    2016-03-01

    Surgical clipping of ruptured posterior communicating artery (PCoA) aneurysms is a well-established procedure. However, preoperative factors associated with procedure-related risk require further elucidation. To investigate the impact of the direction of aneurysm projection on the incidence of procedure-related complications during surgical clipping of ruptured PCoA aneurysms. A total of 65 patients with ruptured PCoA aneurysms who underwent surgical clipping were retrospectively analyzed from a single-center, prospective, observational cohort database in this study. The aneurysms were categorized into lateral and posterior projection groups, depending on the direction of the dome. Characteristics and operative findings of each projection group were identified. We also evaluated any correlation of aneurysm projection with the incidence of procedure-related complications. Patients with ruptured PCoA aneurysms with posterior projection were more likely to present with good-admission-grade subarachnoid hemorrhage (P = .01, χ² test) and were less likely to also have intracerebral hematoma (P = .01). These aneurysms were found to be associated with a higher incidence of intraoperative rupture (P = .02), complex clipping with fenestrated clips (P = .02), and dense adherence to the PCoA or its perforators (P = .04) by univariate analysis. Aneurysms with posterior projection were also correlated with procedure-related complications, including postoperative cerebral infarction or hematoma formation (odds ratio, 5.87; 95% confidence interval, 1.11-31.1; P = .04), by multivariable analysis. Ruptured PCoA aneurysms with posterior projection carried a higher risk of procedure-related complications of surgical clipping than those with lateral projection.

  13. Photovoltaic module mounting clip with integral grounding

    Science.gov (United States)

    Lenox, Carl J.

    2010-08-24

    An electrically conductive mounting/grounding clip, usable with a photovoltaic (PV) assembly of the type having an electrically conductive frame, comprises an electrically conductive body. The body has a central portion and first and second spaced-apart arms extending from the central portion. Each arm has first and second outer portions with a frame surface-disrupting element at each outer portion.

  14. Video and accelerometer-based motion analysis for automated surgical skills assessment.

    Science.gov (United States)

    Zia, Aneeq; Sharma, Yachna; Bettadapura, Vinay; Sarin, Eric L; Essa, Irfan

    2018-03-01

    Basic surgical skills of suturing and knot tying are an essential part of medical training. Having an automated system for surgical skills assessment could save experts time and improve training efficiency. There have been some recent attempts at automated surgical skills assessment using either video analysis or acceleration data. In this paper, we present a novel approach for automated assessment of OSATS-like surgical skills and provide an analysis of different features on multi-modal data (video and accelerometer data). We conduct a large study of basic surgical skill assessment on a dataset containing video and accelerometer data for suturing and knot-tying tasks. We introduce "entropy-based" features: approximate entropy and cross-approximate entropy, which quantify the amount of predictability and regularity of fluctuations in time-series data. The proposed features are compared to the existing methods of Sequential Motion Texture, Discrete Cosine Transform and Discrete Fourier Transform for surgical skills assessment. We report the average performance of different features across all applicable OSATS-like criteria for suturing and knot-tying tasks. Our analysis shows that the proposed entropy-based features outperform previous state-of-the-art methods using video data, achieving average classification accuracies of 95.1% and 92.2% for suturing and knot tying, respectively. For accelerometer data, our method performs better for suturing, achieving 86.8% average accuracy. We also show that fusion of video and acceleration features can improve overall performance for skill assessment. Automated surgical skills assessment can thus be achieved with high accuracy using the proposed entropy features. Such a system could significantly improve the efficiency of surgical training in medical schools and teaching hospitals.
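    Approximate entropy, the core of the proposed "entropy-based" features, can be sketched as follows. This is the generic textbook definition (embedding dimension m = 2, tolerance r = 0.2 times the standard deviation), not the authors' exact pipeline, which also uses cross-approximate entropy across the video and accelerometer channels.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D time series:
    lower values indicate more regular, predictable fluctuations."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def phi(m):
        n = len(x) - m + 1
        # Embed the series into overlapping windows of length m.
        emb = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of windows.
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        # Fraction of windows within tolerance r of each window
        # (self-matches included, so the log is always defined).
        c = (d <= r).sum(axis=1) / n
        return np.log(c).mean()
    return phi(m) - phi(m + 1)

# A periodic signal is highly regular, so its ApEn is low;
# white noise is irregular, so its ApEn is larger.
t = np.arange(300)
regular = np.sin(2 * np.pi * t / 25)
noise = np.random.default_rng(1).standard_normal(300)
print(approximate_entropy(regular) < approximate_entropy(noise))  # True
```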

  15. Enterocutaneous fistula: a novel video-assisted approach.

    Science.gov (United States)

    Rios, Hugo Palma; Goulart, André; Rolanda, Carla; Leão, Pedro

    2017-09-01

    Video-assisted anal fistula treatment (VAAFT) is a novel minimally invasive and sphincter-saving technique to treat complex anal fistulas, described by Meinero in 2006. An enterocutaneous fistula is an abnormal communication between the bowel and the skin. Most cases are secondary to surgical complications, and managing this condition is a true challenge for surgeons. Postoperative fistulas account for 75-85% of all enterocutaneous fistulas. The aim of this paper was to devise a minimally invasive technique to treat enterocutaneous fistulas. We used the same principles of VAAFT applied to other conditions, combining endoluminal vision of the tract with colonoscopy to identify the internal opening. We present the case of a 78-year-old woman who underwent total colectomy for synchronous adenocarcinomas of the cecum and sigmoid. The postoperative course was complicated by an enterocutaneous fistula, treated with conservative measures, which recurred during follow-up. We performed video-assisted fistula treatment using a fistuloscope combined with a colonoscope. Once we identified the fistula tract, we cleansed and destroyed the tract, applied synthetic cyanoacrylate and sealed the internal opening with clips through an endoluminal approach. The patient was discharged 5 days later without complications. Two months later the wound was completely healed without evidence of recurrence. This procedure represents an alternative treatment for enterocutaneous fistula using a minimally invasive technique, especially in selected patients unable to undergo major surgery.

  16. Video-Aided GPS/INS Positioning and Attitude Determination

    National Research Council Canada - National Science Library

    Brown, Alison; Silva, Randy

    2006-01-01

    ... precise positioning and attitude information to be maintained, even during periods of extended GPS dropouts. This relies on information extracted from the video images of reference points and features to continue to update the inertial navigation solution. In this paper, the principles of the video-update method are described.

  17. An Echocardiography Training Program for Improving the Left Ventricular Function Interpretation in Emergency Department; a Brief Report

    Directory of Open Access Journals (Sweden)

    Mary S. Jacob

    2017-06-01

    Full Text Available Introduction: Focused training in transthoracic echocardiography enables emergency physicians (EPs) to accurately estimate left ventricular function. This study aimed to evaluate the efficacy of a brief training program utilizing standardized echocardiography video clips in this regard. Methods: A before-and-after design was used to determine the efficacy of a 1-hour echocardiography training program using a PowerPoint presentation and standardized echocardiography video clips illustrating normal and abnormal left ventricular ejection fraction (LVEF), as well as video clips emphasizing the measurement of mitral valve E-point septal separation (EPSS). Pre- and post-test evaluations used unique video clips and asked trainees to estimate LVEF and EPSS based on the viewed clips. Results: 21 EPs with no prior experience with these echocardiographic techniques completed the study. The EPs had very limited prior echocardiographic training. The mean score on the categorization of LVEF estimation improved from 4.9 (95% CI: 4.1-5.6) to 7.6 (95% CI: 7-8.3) out of a possible score of 10 (p<0.0001). Categorization of EPSS improved from 4.1 (95% CI: 3.1-5.1) to 8.1 (95% CI: 7.6-8.7) after education (p<0.0001). Conclusions: The results of this study demonstrate a statistically significant improvement in EPs’ ability to categorize left ventricular function as normal or depressed after a short lecture utilizing a commercially available DVD of standardized echocardiography clips.

  18. Preoperative Computed Tomography-Guided Percutaneous Hookwire Localization of Metallic Marker Clips in the Breast with a Radial Approach: Initial Experience

    Energy Technology Data Exchange (ETDEWEB)

    Uematsu, T.; Kasami, M.; Uchida, Y.; Sanuki, J.; Kimura, K.; Tanaka, K.; Takahashi, K. [Dept. of Diagnostic Radiology, Dept. of Pathology, and Dept. of Breast Surgery, Shizuoka Cancer Center Hospital, Naga-izumi, Shizuoka (Japan)

    2007-07-15

    Background: Hookwire localization is the current standard technique for radiological marking of nonpalpable breast lesions. Stereotactic directional vacuum-assisted breast biopsy (SVAB) is of sufficient sensitivity and specificity to replace surgical biopsy. Wire localization of metallic marker clips placed after SVAB is needed. Purpose: To describe a method for performing computed tomography (CT)-guided hookwire localization with a radial approach for metallic marker clips placed percutaneously after SVAB. Material and Methods: Nineteen women scheduled for SVAB with marker-clip placement, CT-guided wire localization of the marker clips, and eventual surgical excision were prospectively entered into the study. CT-guided wire localization was performed with a radial approach, followed by surgical excision of the wire-localized marker clip. Feasibility and reliability of the procedure and the incidence of complications were examined. Results: CT-guided wire localization and surgical excision were successfully performed in all 19 women without any complications. The mean total procedure time was 15 min. The median distance on CT images from marker clip to hookwire was 2 mm (range 0-3 mm). Conclusion: CT-guided preoperative hookwire localization with a radial approach for marker clips after SVAB is technically feasible.

  19. Preoperative computed tomography-guided percutaneous hookwire localization of metallic marker clips in the breast with a radial approach: initial experience.

    Science.gov (United States)

    Uematsu, T; Kasami, M; Uchida, Y; Sanuki, J; Kimura, K; Tanaka, K; Takahashi, K

    2007-06-01

    Hookwire localization is the current standard technique for radiological marking of nonpalpable breast lesions. Stereotactic directional vacuum-assisted breast biopsy (SVAB) is of sufficient sensitivity and specificity to replace surgical biopsy. Wire localization of metallic marker clips placed after SVAB is needed. To describe a method for performing computed tomography (CT)-guided hookwire localization with a radial approach for metallic marker clips placed percutaneously after SVAB. Nineteen women scheduled for SVAB with marker-clip placement, CT-guided wire localization of the marker clips, and eventual surgical excision were prospectively entered into the study. CT-guided wire localization was performed with a radial approach, followed by surgical excision of the wire-localized marker clip. Feasibility and reliability of the procedure and the incidence of complications were examined. CT-guided wire localization and surgical excision were successfully performed in all 19 women without any complications. The mean total procedure time was 15 min. The median distance on CT images from marker clip to hookwire was 2 mm (range 0-3 mm). CT-guided preoperative hookwire localization with a radial approach for marker clips after SVAB is technically feasible.

  20. Recent advances in intelligent image search and video retrieval

    CERN Document Server

    2017-01-01

    This book initially reviews the major feature representation and extraction methods and effective learning and recognition approaches, which have broad applications in the context of intelligent image search and video retrieval. It subsequently presents novel methods, such as improved soft assignment coding, Inheritable Color Space (InCS) and the Generalized InCS framework, the sparse kernel manifold learner method, the efficient Support Vector Machine (eSVM), and the Scale-Invariant Feature Transform (SIFT) features in multiple color spaces. Lastly, the book presents clothing analysis for subject identification and retrieval, and performance evaluation methods of video analytics for traffic monitoring. Digital images and videos are proliferating at an amazing speed in the fields of science, engineering and technology, media and entertainment. With the huge accumulation of such data, keyword searches and manual annotation schemes may no longer be able to meet the practical demand for retrieving relevant conte...

  1. Electromyographically Assessed Empathic Concern and Empathic Happiness Predict Increased Prosocial Behavior in Adults

    Science.gov (United States)

    Light, Sharee N.; Moran, Zachary D.; Swander, Lena; Le, Van; Cage, Brandi; Burghy, Cory; Westbrook, Cecilia; Greishar, Larry; Davidson, Richard J.

    2016-01-01

    The relation between empathy subtypes and prosocial behavior was investigated in a sample of healthy adults. "Empathic concern" and "empathic happiness," defined as negative and positive vicarious emotion (respectively) combined with an other-oriented feeling of “goodwill” (i.e. a thought to do good to others/see others happy), were elicited in 68 adult participants who watched video clips extracted from the television show Extreme Makeover: Home Edition. Prosocial behavior was quantified via performance on a non-monetary altruistic decision-making task involving book selection and donation. Empathic concern and empathic happiness were measured via self-report (immediately following each video clip) and via facial electromyography recorded from corrugator (active during frowning) and zygomatic (active during smiling) facial regions. Facial electromyographic signs of (a) empathic concern (i.e. frowning) during sad video clips, and (b) empathic happiness (i.e. smiling) during happy video clips, predicted increased prosocial behavior in the form of increased goodwill-themed book selection/donation. PMID:25486408

  2. Reading your own lips: common-coding theory and visual speech perception.

    Science.gov (United States)

    Tye-Murray, Nancy; Spehar, Brent P; Myerson, Joel; Hale, Sandra; Sommers, Mitchell S

    2013-02-01

    Common-coding theory posits that (1) perceiving an action activates the same representations of motor plans that are activated by actually performing that action, and (2) because of individual differences in the ways that actions are performed, observing recordings of one's own previous behavior activates motor plans to an even greater degree than does observing someone else's behavior. We hypothesized that if observing oneself activates motor plans to a greater degree than does observing others, and if these activated plans contribute to perception, then people should be able to lipread silent video clips of their own previous utterances more accurately than they can lipread video clips of other talkers. As predicted, two groups of participants were able to lipread video clips of themselves, recorded more than two weeks earlier, significantly more accurately than video clips of others. These results suggest that visual input activates speech motor activity that links to word representations in the mental lexicon.

  3. Multi-modal highlight generation for sports videos using an information-theoretic excitability measure

    Science.gov (United States)

    Hasan, Taufiq; Bořil, Hynek; Sangwan, Abhijeet; Hansen, John H. L.

    2013-12-01

    The ability to detect and organize 'hot spots' representing areas of excitement within video streams is a challenging research problem when techniques rely exclusively on video content. A generic method for sports video highlight selection is presented in this study which leverages both video/image structure as well as audio/speech properties. Processing begins where the video is partitioned into small segments and several multi-modal features are extracted from each segment. Excitability is computed based on the likelihood of the segmental features residing in certain regions of their joint probability density function space which are considered both exciting and rare. The proposed measure is used to rank order the partitioned segments to compress the overall video sequence and produce a contiguous set of highlights. Experiments are performed on baseball videos based on signal processing advancements for excitement assessment in the commentators' speech, audio energy, slow motion replay, scene cut density, and motion activity as features. Detailed analysis of the correlation between user excitability and various speech production parameters is conducted, and an effective scheme is designed to estimate the excitement level of the commentators' speech from the sports videos. Subjective evaluation of excitability and ranking of video segments demonstrate a higher correlation with the proposed measure compared to well-established techniques, indicating the effectiveness of the overall approach.
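    The ranking idea, segments scored as both rare under the fitted feature density and excited in a known direction, can be illustrated with a simplified Gaussian stand-in for the paper's joint-pdf likelihood measure. All names and the toy features here are assumptions, not the paper's actual multi-modal features.

```python
import numpy as np

def excitability_scores(feats, exciting_direction):
    """Score each segment as both rare (far from the bulk of the fitted
    Gaussian density over all segments) and 'exciting' (projecting onto
    the positive side of a direction such as high audio energy)."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    inv = np.linalg.inv(cov)
    diff = feats - mu
    # Squared Mahalanobis distance: large means rare under the density.
    rarity = np.einsum('ij,jk,ik->i', diff, inv, diff)
    # Signed projection: only the 'exciting' side of the density counts.
    direction = diff @ exciting_direction
    return np.where(direction > 0, rarity, 0.0)

rng = np.random.default_rng(3)
feats = rng.normal(size=(50, 3))           # 50 segments x 3 toy features
feats[7] = [4.0, 4.0, 4.0]                 # one clearly rare, exciting segment
scores = excitability_scores(feats, np.array([1.0, 1.0, 1.0]))
highlights = np.argsort(scores)[::-1][:5]  # top-5 segments become highlights
print(7 in highlights)  # True
```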

  4. Video Texture Synthesis Based on Flow-Like Stylization Painting

    Directory of Open Access Journals (Sweden)

    Qian Wenhua

    2014-01-01

    Full Text Available The paper presents a non-photorealistic video rendering system based on natural phenomena. It provides a simple non-photorealistic video synthesis system in which the user can obtain a flow-like stylization painting and an infinite video scene. First, based on anisotropic Kuwahara filtering in conjunction with line integral convolution, the natural-phenomena video scene is rendered as a flow-like stylized painting. Second, frame division and patch synthesis are used to synthesize an infinitely playing video. Using examples selected from different natural video textures, our system can generate stylized flow-like and infinite video scenes. The visual discontinuities between neighboring frames are decreased, and the features and details of the frames are preserved. This rendering system is easy and simple to implement.

  5. Prediction of visual saliency in video with deep CNNs

    Science.gov (United States)

    Chaabouni, Souad; Benois-Pineau, Jenny; Hadar, Ofer

    2016-09-01

    Prediction of visual saliency in images and video is a highly researched topic. Target applications include quality assessment of multimedia services in a mobile context, video compression techniques, recognition of objects in video streams, etc. In the framework of mobile and egocentric perspectives, visual saliency models cannot be founded only on bottom-up features, as suggested by feature integration theory; nor is the central-bias hypothesis respected. In this case, the top-down component of human visual attention becomes prevalent. Visual saliency can be predicted on the basis of seen data. Deep Convolutional Neural Networks (CNNs) have proven to be a powerful tool for prediction of salient areas in still images. In our work we also focus on the sensitivity of the human visual system to residual motion in a video. A deep CNN architecture is designed in which we incorporate as input primary maps the color values of pixels and the magnitude of local residual motion. Complementary contrast maps allow for a slight increase in accuracy compared to the use of color and residual motion only. The experiments show that the choice of input features for the deep CNN depends on the visual task: for interest in dynamic content, the 4K model with residual motion is more efficient, and for object recognition in egocentric video the purely spatial input is more appropriate.
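    The input layer described above, color planes stacked with a residual-motion magnitude plane, can be assembled roughly as follows. The global-motion-compensated frame difference is a simplified stand-in for a true local residual-motion estimate, and the function name is hypothetical.

```python
import numpy as np

def cnn_input_maps(frame_rgb, prev_gray, cur_gray, global_motion=None):
    """Stack the per-pixel input planes for the saliency CNN: three color
    channels plus a residual-motion magnitude plane (here approximated by
    a global-motion-compensated frame difference)."""
    if global_motion is None:
        global_motion = np.zeros_like(cur_gray)
    residual = np.abs(cur_gray - prev_gray - global_motion)
    residual = residual / (residual.max() + 1e-8)  # normalize to [0, 1]
    # Result: H x W x 4 tensor, ready to feed to the first conv layer.
    return np.dstack([frame_rgb, residual[..., None]])

h, w = 32, 32
rng = np.random.default_rng(0)
frame = rng.random((h, w, 3))
prev, cur = rng.random((h, w)), rng.random((h, w))
x = cnn_input_maps(frame, prev, cur)
print(x.shape)  # (32, 32, 4)
```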

  6. Patterns and Meanings of English Words through Word Formation Processes of Acronyms, Clipping, Compound and Blending Found in Internet-Based Media

    Directory of Open Access Journals (Sweden)

    Rio Rini Diah Moehkardi

    2017-02-01

    Full Text Available This research aims to explore the word-formation processes of new English words found in internet-based media through acronymy, compounding, clipping and blending, and their meanings. The study applies Plag’s (2002) framework for acronyms and compounds, Jamet’s (2009) framework for clipping, and Algeo’s (1977) framework, as cited in Hosseinzadeh (2014), for blending. Despite the formula established in each respective framework, there can be occurrences of novelty and modification in how words are formed and how meaning develops in the newly formed words. The research shows that well-accepted acronyms can become real words by taking lower case and affixation. Some acronyms initialize non-lexical words, use non-initial letters, or use letters and numbers that are pronounced the same as the words they represent. Compounding also includes numbers as element members of compounds. The nominal compounds are likely to have metaphorical and idiomatic meanings, and some compounds evolve toward new and more specific meanings. The study also finds that back-clipping is the most dominant type of clipping. In the clipping sub-category of blending, the study finds that when clipping takes place, the non-head element is back-clipped and the head is fore-clipped.

  7. Video Spectroscopy with the RSpec Explorer

    Science.gov (United States)

    Lincoln, James

    2018-01-01

    The January 2018 issue of "The Physics Teacher" saw two articles that featured the RSpec Explorer as a supplementary lab apparatus. The RSpec Explorer provides live video spectrum analysis with which teachers can demonstrate how to investigate features of a diffracted light source. In this article I provide an introduction to the device…

  8. Sociolinguistic import of name-clipping among Omambala cultural ...

    African Journals Online (AJOL)

    This study examines the perceived but obvious manifestation of name-clipping among Omambala cultural zone of Anambra State. This situation has given rise to distortion of names and most often, to either mis-interpretation or complete loss of the original and full meanings of the names. This situation of misinterpretation is ...

  9. Using Film Clips to Teach Teen Pregnancy Prevention: "The Gloucester 18" at a Teen Summit

    Science.gov (United States)

    Herrman, Judith W.; Moore, Christopher C.; Anthony, Becky

    2012-01-01

    Teaching pregnancy prevention to large groups offers many challenges. This article describes the use of film clips, with guided discussion, to teach pregnancy prevention. In order to analyze the costs associated with teen pregnancy, a film clip discussion session based on the film "The Gloucester 18" was the keynote of a youth summit. The lesson…

  10. Can interface features affect aggression resulting from violent video game play? An examination of realistic controller and large screen size.

    Science.gov (United States)

    Kim, Ki Joon; Sundar, S Shyam

    2013-05-01

    Aggressiveness attributed to violent video game play is typically studied as a function of the content features of the game. However, can interface features of the game also affect aggression? Guided by the General Aggression Model (GAM), we examine the controller type (gun replica vs. mouse) and screen size (large vs. small) as key technological aspects that may affect the state aggression of gamers, with spatial presence and arousal as potential mediators. Results from a between-subjects experiment showed that a realistic controller and a large screen display induced greater aggression, presence, and arousal than a conventional mouse and a small screen display, respectively, and confirmed that trait aggression was a significant predictor of gamers' state aggression. Contrary to GAM, however, arousal showed no effects on aggression; instead, presence emerged as a significant mediator.

  11. O monitoramento de notícias como ferramenta para a inteligência competitiva (News Monitoring as a Tool for Competitive Intelligence)

    Directory of Open Access Journals (Sweden)

    Ariane Barbosa Lemos

    2011-07-01

    Full Text Available The article presents a diagnostic study of a news-monitoring (clipping) service. The study investigated how the service is produced by a specialized firm and how it is used by client organizations in the executive-education, legal and entertainment sectors. It concludes that both the clipping firm and its clients consider clipping useful for decision making, and that the service is seen as complementary to the organizations' overall competitive-intelligence activity. Keywords: competitive intelligence, environmental scanning, information sources, news monitoring.

  12. Heliostat blocking and shadowing efficiency in the video-game era

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, Alberto [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Ramos, Francisco [Nevada Software Informatica S.L., Madrid (Spain)

    2014-02-15

    Blocking and shadowing is one of the key effects in designing and evaluating a thermal central receiver solar tower plant. It is therefore convenient to develop efficient algorithms to compute the area of a heliostat blocked or shadowed by the rest of the field. In this paper we explore the possibility of using very efficient clipping algorithms developed for the video-game and imaging industries to compute the blocking and shadowing efficiency of a solar thermal plant layout. We propose an algorithm valid for arbitrary position, orientation and size of the heliostats. This algorithm turns out to be very accurate, free of assumptions and fast. We show the feasibility of using this algorithm for the optimization of a solar plant by studying a couple of examples in detail.

  13. Heliostat blocking and shadowing efficiency in the video-game era

    International Nuclear Information System (INIS)

    Ramos, Alberto

    2014-02-01

    Blocking and shadowing is one of the key effects in designing and evaluating a thermal central receiver solar tower plant. It is therefore convenient to develop efficient algorithms to compute the area of a heliostat blocked or shadowed by the rest of the field. In this paper we explore the possibility of using very efficient clipping algorithms developed for the video-game and imaging industries to compute the blocking and shadowing efficiency of a solar thermal plant layout. We propose an algorithm valid for arbitrary position, orientation and size of the heliostats. This algorithm turns out to be very accurate, free of assumptions and fast. We show the feasibility of using this algorithm for the optimization of a solar plant by studying a couple of examples in detail.
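    The kind of polygon clipping the authors borrow from the video-game world can be illustrated with the classic Sutherland-Hodgman algorithm plus the shoelace formula: clip the heliostat outline against a neighbor's projected shadow polygon and measure the overlapping area. This is a minimal sketch with invented geometry, not the optimized library routine evaluated in the paper; it assumes convex polygons in counter-clockwise order and non-parallel intersecting edges.

```python
def clip_polygon(subject, clip):
    """Sutherland-Hodgman: clip `subject` against the convex polygon
    `clip` (both lists of (x, y) vertices in counter-clockwise order)."""
    def inside(p, a, b):
        # p lies to the left of (or on) the directed edge a -> b.
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersect(p, q, a, b):
        x1, y1, x2, y2 = *p, *q
        x3, y3, x4, y4 = *a, *b
        den = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)  # assumed non-zero
        t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / den
        return (x1 + t*(x2-x1), y1 + t*(y2-y1))
    out = list(subject)
    for a, b in zip(clip, clip[1:] + clip[:1]):
        src, out = out, []
        if not src:
            break
        for p, q in zip(src, src[1:] + src[:1]):
            if inside(q, a, b):
                if not inside(p, a, b):
                    out.append(intersect(p, q, a, b))
                out.append(q)
            elif inside(p, a, b):
                out.append(intersect(p, q, a, b))
    return out

def area(poly):
    # Shoelace formula for the area of a simple polygon.
    return 0.5 * abs(sum(p[0]*q[1] - q[0]*p[1]
                         for p, q in zip(poly, poly[1:] + poly[:1])))

# Unit-square heliostat, half covered by a neighbor's projected shadow:
heliostat = [(0, 0), (1, 0), (1, 1), (0, 1)]
shadow    = [(0.5, -1), (2, -1), (2, 2), (0.5, 2)]
blocked = area(clip_polygon(heliostat, shadow))
efficiency = 1 - blocked / area(heliostat)
print(blocked, efficiency)  # 0.5 0.5
```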

  14. Reflection Paper on a Ubiquitous English Vocabulary Learning System: Evidence of Active/Passive Attitude vs. Usefulness/Ease-of-Use

    Science.gov (United States)

    Lim, Jeff

    2013-01-01

    "A ubiquitous English vocabulary learning system: evidence of active/passive attitudes vs. usefulness/ease-of-use" introduces and develops "Ubiquitous English Vocabulary Learning" (UEFL) system. It introduces to the memorization using the video clips. According to their paper the video clip gives a better chance for students to…

  15. Effect of hair coat clipping on some physiological changes of dairy bulls

    Directory of Open Access Journals (Sweden)

    Prasanpanich, S.

    2006-03-01

    Full Text Available Some physiological responses of 6 Friesian crossbred (87.5%) bulls, 2.5 years old and averaging 235 kg bodyweight, were investigated under hot, humid conditions. All animals were raised in a house (4 × 15 × 5 m; w × l × h) with a concrete floor and were assigned, in a Pair Comparison Design according to their weight and age, to 2 groups. Animals in group 1 were maintained with their natural hair coat, while their counterparts in group 2 were coat-clipped fortnightly through a 70-day experimental period. The results indicated that the clipped animals had a significantly (P<0.05) lower sweating rate than the unclipped ones (48.3±15.48 vs. 102.7±15.48 g/m²/hour, respectively). However, there were no significant differences in rectal temperature, skin temperature or respiratory rate between the two groups of animals. Further study should be done to clarify the consequences of the lower sweating rate in clipped animals under hot, humid conditions.

  16. Mediating Tourist Experiences. Access to Places via Shared Videos

    DEFF Research Database (Denmark)

    Tussyadiah, Iis; Fesenmaier, D.R.

    2009-01-01

    The emergence of new media using multimedia features has generated a new set of mediators for tourists' experiences. This study examines two hypotheses regarding the roles that online travel videos play as mediators of tourist experiences. The results confirm that online shared videos can provide...

  17. Efficient computation of clipped Voronoi diagram for mesh generation

    KAUST Repository

    Yan, Dongming

    2013-04-01

    The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.

  18. Efficient computation of clipped Voronoi diagram for mesh generation

    KAUST Repository

    Yan, Dongming; Wang, Wen Ping; Lévy, Bruno; Liu, Yang

    2013-01-01

    The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation. Crown Copyright © 2011 Published by Elsevier Ltd. All rights reserved.
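The construction described above can be approximated naively for 2D point sites: the clipped Voronoi cell of a site is the convex domain polygon successively clipped by the bisector half-plane toward every other site. This brute-force sketch is quadratic in the number of sites, unlike the paper's efficient method, but it illustrates the geometry:

```python
def clip_halfplane(poly, a, b, c):
    """Keep the part of polygon `poly` where a*x + b*y <= c."""
    out = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        d1, d2 = a*x1 + b*y1 - c, a*x2 + b*y2 - c
        if d1 <= 0:
            out.append((x1, y1))
        if d1 * d2 < 0:  # edge crosses the boundary line
            t = d1 / (d1 - d2)
            out.append((x1 + t*(x2 - x1), y1 + t*(y2 - y1)))
    return out

def clipped_voronoi_cell(sites, i, domain):
    """Voronoi cell of sites[i] clipped to a convex CCW `domain` polygon:
    intersect the domain with the bisector half-plane toward every other site."""
    px, py = sites[i]
    cell = list(domain)
    for j, (qx, qy) in enumerate(sites):
        if j == i or not cell:
            continue
        # |x - p|^2 <= |x - q|^2  <=>  2(q - p) . x <= |q|^2 - |p|^2
        a, b = 2*(qx - px), 2*(qy - py)
        c = qx*qx + qy*qy - px*px - py*py
        cell = clip_halfplane(cell, a, b, c)
    return cell
```

Cells of interior sites come out bounded automatically, while cells that would be infinite are truncated by the domain boundary, which is exactly the behavior needed for centroidal Voronoi tessellation updates.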

  19. Videos for Science Communication and Nature Interpretation: The TIB|AV-Portal as Resource.

    Science.gov (United States)

    Marín Arraiza, Paloma; Plank, Margret; Löwe, Peter

    2016-04-01

    Scientific audiovisual media such as videos of research, interactive displays or computer animations have become an important part of scientific communication and education. Dynamic phenomena can be described better by audiovisual media than by words and pictures; for this reason, scientific videos help us to understand and discuss environmental phenomena more efficiently. Moreover, the creation of scientific videos is easier than ever, thanks to mobile devices and open-source editing software. Video clips, webinars or even the interactive part of a PICO are formats of scientific audiovisual media used in the geosciences. This type of media translates location-referenced science communication, such as environmental interpretation, into computer-based science communication. A new form of science communication is video abstracting: a video abstract is a three- to five-minute video statement that provides background information about a research paper and gives authors the opportunity to present their research activities to a wider audience. Since this kind of media has become an important part of scientific communication, there is a need for reliable infrastructures capable of managing the digital assets researchers generate. Using the use case of video abstracts, this paper gives an overview of the activities of the German National Library of Science and Technology (TIB) regarding publishing and linking audiovisual media in a scientifically sound way. The TIB, in cooperation with the Hasso Plattner Institute (HPI), developed a web-based portal (av.tib.eu) that optimises access to scientific videos in the fields of science and technology. Videos from the realms of science and technology can easily be uploaded onto the TIB|AV-Portal. Within a short period of time the videos are assigned a digital object identifier (DOI). This enables them to be referenced, cited, and linked (e.g. to the…

  20. Artificial Intelligence in Video Games: Towards a Unified Framework

    OpenAIRE

    Safadi, Firas

    2015-01-01

    The work presented in this dissertation revolves around the problem of designing artificial intelligence (AI) for video games. This problem becomes increasingly challenging as video games grow in complexity. With modern video games frequently featuring sophisticated and realistic environments, the need for smart and comprehensive agents that understand the various aspects of these environments is pressing. Although machine learning techniques are being successfully applied in a multitude of d...

  1. 21 CFR 884.2685 - Fetal scalp clip electrode and applicator.

    Science.gov (United States)

    2010-04-01

    ... and an external monitoring device by means of pinching skin tissue with a nonreusable clip. This... shall have an approved PMA or a declared completed PDP in effect before being placed in commercial...

  2. Direct mounted photovoltaic device with improved front clip

    Science.gov (United States)

    Keenihan, James R; Boven, Michelle; Brown, Jr., Claude; Gaston, Ryan S; Hus, Michael; Langmaid, Joe A; Lesniak, Mike

    2013-11-05

    The present invention is premised upon a photovoltaic assembly system for securing and/or aligning at least a plurality of vertically adjacent (overlapping) photovoltaic device assemblies to one another. The securing function is accomplished by a clip member that may be a separate component or integral to one or more of the photovoltaic device assemblies.

  3. Direct mounted photovoltaic device with improved side clip

    Science.gov (United States)

    Keenihan, James R; Boven, Michelle L; Brown, Jr., Claude; Eurich, Gerald K; Gaston, Ryan S; Hus, Michael

    2013-11-19

    The present invention is premised upon a photovoltaic assembly system for securing and/or aligning at least a plurality of vertically adjacent photovoltaic device assemblies to one another. The securing function is accomplished by a clip member that may be a separate component or integral to one or more of the photovoltaic device assemblies.

  4. Development of a new detection device using a glass clip emitting infrared fluorescence for laparoscopic surgery of gastric cancer

    International Nuclear Information System (INIS)

    Inada, Shunko Albano; Mori, Kensaku; Fuchi, Shingo; Hasegawa, Junichi; Misawa, Kazunari; Nakanishi, Hayao

    2015-01-01

    In the conventional method, to identify the location of the tumor intraperitoneally for extirpation of a gastric cancer, charcoal ink is injected around the primary tumor. However, at the time of laparoscopic operation it is difficult to estimate the specific site of the primary tumor. In this study we developed a glass phosphor realized with Yb³⁺ and Nd³⁺ doped into Bi₂O₃–B₂O₃-based glasses, which has a central emission wavelength of 1020 nm and an FWHM of 100 nm. Using this glass phosphor, we developed a fluorescent clip and a laparoscopic detection system for clip-derived near-infrared light. To evaluate the clinical performance of the fluorescent clip and the laparoscopic detection system, we used stomachs resected from patients. The fluorescent clip was fixed on the gastric mucosa, and an excitation light (wavelength: 808 nm) was irradiated from outside the stomach to detect the fluorescence through the stomach wall. As a result, fluorescent emission from the clip was successfully detected. These results indicate that the glass fluorescent clip in combination with the laparoscopic detection system is a very useful method to identify the exact location of a primary gastric cancer. (paper)

  5. Hierarchical structure for audio-video based semantic classification of sports video sequences

    Science.gov (United States)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to the event classifications in other games, those of cricket are very challenging and yet unexplored. We have successfully solved cricket video classification problem using a six level hierarchical structure. The first level performs event detection based on audio energy and Zero Crossing Rate (ZCR) of short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sports. Our results are very promising and we have moved a step forward towards addressing semantic classification problems in general.
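The first-level audio features mentioned above, short-time energy and zero-crossing rate, can be computed in a few lines; the frame and hop lengths here are illustrative assumptions, not values from the paper:

```python
def short_time_features(samples, frame_len=400, hop=200):
    """Per-frame energy and zero-crossing rate (ZCR) of a mono audio signal.
    Returns a list of (energy, zcr) tuples, one per analysis frame."""
    feats = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        # mean-square energy of the frame
        energy = sum(s * s for s in frame) / frame_len
        # ZCR: fraction of adjacent sample pairs whose sign changes
        zcr = sum(
            1 for s0, s1 in zip(frame, frame[1:])
            if (s0 >= 0) != (s1 >= 0)
        ) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats
```

Frames combining high energy with a high ZCR (e.g. crowd noise plus excited commentary) can then be flagged as candidate event boundaries for the higher, video-based levels of the hierarchy.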

  6. Effect of a reminder video using a mobile phone on the retention of CPR and AED skills in lay responders.

    Science.gov (United States)

    Ahn, Ji Yun; Cho, Gyu Chong; Shon, You Dong; Park, Seung Min; Kang, Ku Hyun

    2011-12-01

    Skills related to cardiopulmonary resuscitation (CPR) and automated external defibrillator (AED) use by lay responders decay rapidly after training, and efforts are required to maintain competence among trainees. We examined whether repeated viewing of a reminder video on a mobile phone would be an effective means of maintaining CPR and AED skills in lay responders. In a single-blind case-control study, 75 male students received training in CPR and AED use. They were allocated either to the control group or to the video-reminded group, who received a memory card containing a video clip about CPR and AED use for their mobile phone, which they were repeatedly encouraged to watch by SMS text message. CPR and AED skills were assessed in scenario format by examiners immediately and 3 months after initial training. Three months after initial training, the video-reminded group showed more accurate airway opening (P<0.05) and earlier resumption of CPR after defibrillation (P<0.05); they also reported higher CPR confidence scores and increased willingness to perform bystander CPR in cardiac arrest than the controls (P<0.05). Repeated viewing of a reminder video on a mobile phone thus appears to help maintain CPR and AED skills in lay responders. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  7. Real-time surgery simulation of intracranial aneurysm clipping with patient-specific geometries and haptic feedback

    Science.gov (United States)

    Fenz, Wolfgang; Dirnberger, Johannes

    2015-03-01

    Providing suitable training for aspiring neurosurgeons is becoming more and more problematic. The increasing popularity of the endovascular treatment of intracranial aneurysms leads to a lack of simple surgical situations for clipping operations, leaving mainly the complex cases, which present even experienced surgeons with a challenge. To alleviate this situation, we have developed a training simulator with haptic interaction allowing trainees to practice virtual clipping surgeries on real patient-specific vessel geometries. By using specialized finite element (FEM) algorithms (fast finite element method, matrix condensation) combined with GPU acceleration, we can achieve the necessary frame rate for smooth real-time interaction with the detailed models needed for a realistic simulation of the vessel wall deformation caused by the clamping with surgical clips. Vessel wall geometries for typical training scenarios were obtained from 3D-reconstructed medical image data, while for the instruments (clipping forceps, various types of clips, suction tubes) we use models provided by manufacturer Aesculap AG. Collisions between vessel and instruments have to be continuously detected and transformed into corresponding boundary conditions and feedback forces, calculated using a contact plane method. After a training, the achieved result can be assessed based on various criteria, including a simulation of the residual blood flow into the aneurysm. Rigid models of the surgical access and surrounding brain tissue, plus coupling a real forceps to the haptic input device further increase the realism of the simulation.
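The "matrix condensation" cited above is, in its textbook form, static condensation: interior degrees of freedom (DOFs) are eliminated offline so that only the boundary, contact-relevant DOFs are solved per frame. The following NumPy sketch illustrates that idea generically; it is not the simulator's actual implementation:

```python
import numpy as np

def condense(K, f, boundary, interior):
    """Static condensation: eliminate interior DOFs from K u = f so that
    only the boundary DOFs remain. Returns (K_c, f_c) with K_c u_b = f_c
    yielding the same boundary displacements as the full system."""
    Kbb = K[np.ix_(boundary, boundary)]
    Kbi = K[np.ix_(boundary, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Kii = K[np.ix_(interior, interior)]
    Kii_inv = np.linalg.inv(Kii)          # in practice: factorized once offline
    K_c = Kbb - Kbi @ Kii_inv @ Kib       # Schur complement of Kii
    f_c = f[boundary] - Kbi @ Kii_inv @ f[interior]
    return K_c, f_c
```

Because the interior block depends only on the mesh, its factorization can be precomputed; the per-frame system is then much smaller, which is part of what makes haptic update rates attainable.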

  8. Prophylactic clipping for the prevention of bleeding following wide-field endoscopic mucosal resection of laterally spreading colorectal lesions: an economic modeling study.

    Science.gov (United States)

    Bahin, Farzan F; Rasouli, Khalid N; Williams, Stephen J; Lee, Eric Y T; Bourke, Michael J

    2016-08-01

    Clinically significant post-endoscopic mucosal resection bleeding (CSPEB) is the most common adverse event following endoscopic mucosal resection (EMR) of large sessile and laterally spreading colorectal lesions (LSLs), and is associated with morbidity and resource utilization. CSPEB occurs more frequently with proximal LSLs. Prophylactic clipping of the post-EMR defect may be beneficial in CSPEB prevention. The aim of this study was to determine the cost-effectiveness of a prophylactic clipping strategy. We hypothesized that prophylactic clipping in the proximal colon was cost-effective. An economic model was applied to outcomes from the Australian Colonic Endoscopic Mucosal Resection (ACE) Study. Clip distances of 3, 5, 8, and 10 mm were analyzed. The cost of treating CSPEB was determined from an independent costing agency. The funds needed to spend (FNS) was the cost incurred in order to prevent one episode of CSPEB. A break-even analysis was performed to determine cost equivalence of the costs of clipping and CSPEB. Outcomes of 1717 LSLs (mean size 35.8 mm; 52.6 % proximal colon) that underwent EMR were analyzed. The overall rate of CSPEB was 6.4 % (proximal 8.9 %; distal 3.7 %). Endoscopic management was required in 45 % of CSPEB episodes. With a clip distance of 3 mm, the expected cost of prophylactic clipping was € 1106 per lesion compared with € 157 per lesion for the expected cost of CSPEB without clipping. At 100 % clipping efficacy, the FNS was € 14 826 (proximal and distal lesions € 9309 and € 29 540, respectively). A clip price of € 10.35 was required for the cost of clipping to offset the cost of CSPEB. A prophylactic clipping strategy is not cost-effective and at present cannot be justified for all lesions or selectively for lesions in the proximal colon. ClinicalTrials.gov (NCT01368289). © Georg Thieme Verlag KG Stuttgart · New York.
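The headline funds-needed-to-spend figure can be approximately reproduced with a simple expected-cost model. The function below is an illustrative reconstruction, not the authors' economic model, fed with the abstract's numbers:

```python
def funds_needed_to_spend(clip_cost, expected_bleed_cost, bleed_rate, efficacy=1.0):
    """Net cost incurred per bleeding episode prevented.
    clip_cost            : expected clipping cost per lesion
    expected_bleed_cost  : expected cost of bleeding per lesion without clipping
    bleed_rate           : probability of a bleeding episode per lesion
    efficacy             : fraction of episodes that clipping prevents"""
    # extra spend per lesion = clipping cost minus the avoided bleeding cost
    extra_per_lesion = clip_cost - efficacy * expected_bleed_cost
    # episodes prevented per lesion
    prevented_per_lesion = efficacy * bleed_rate
    return extra_per_lesion / prevented_per_lesion
```

With a clipping cost of €1106 per lesion, an expected bleeding cost of €157 per lesion, a 6.4 % bleeding rate and 100 % efficacy, this yields about €14 828 per episode prevented, matching the reported € 14 826 up to rounding of the inputs.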

  9. "Frenemies, Fraitors, and Mean-em-aitors": Priming Effects of Viewing Physical and Relational Aggression in the Media on Women.

    Science.gov (United States)

    Coyne, Sarah M; Linder, Jennifer Ruh; Nelson, David A; Gentile, Douglas A

    2012-01-01

    Past research has shown activation of aggressive cognitions in memory after media violence exposure, but has not examined priming effects of viewing relational aggression in the media. In the current study, 250 women viewed a video clip depicting physical aggression, relational aggression, or no aggression. Subsequent activation of physical and relational aggression cognitions was measured using an emotional Stroop task. Results indicated priming of relational aggression cognitions after viewing the relationally aggressive video clip, and activation of both physical and relational aggression cognitions after viewing the physically aggressive video clip. Results are discussed within the framework of the General Aggression Model. © 2012 Wiley Periodicals, Inc.

  10. A new video studio for CERN

    CERN Multimedia

    Anaïs Vernede

    2011-01-01

    On Monday, 14 February 2011 CERN's new video studio was inaugurated with a recording of "Spotlight on CERN", featuring an interview with the DG, Rolf Heuer.   CERN's new video studio. Almost all international organisations have a studio for their audiovisual communications, and now it's CERN’s turn to acquire such a facility. “In the past, we've made videos using the Globe audiovisual facilities and sometimes using the small photographic studio, which is equipped with simple temporary sets that aren’t really suitable for video,” explains Jacques Fichet, head of CERN‘s audiovisual service. Once the decision had been taken to create the new 100 square-metre video studio, the work took only five months to complete. The studio, located in Building 510, is equipped with a cyclorama (a continuous smooth white wall used as a background) measuring 3 m in height and 16 m in length, as well as a teleprompter, a rail-mounted camera dolly fo...

  11. Robust video watermarking via optimization algorithm for quantization of pseudo-random semi-global statistics

    Science.gov (United States)

    Kucukgoz, Mehmet; Harmanci, Oztan; Mihcak, Mehmet K.; Venkatesan, Ramarathnam

    2005-03-01

    In this paper, we propose a novel semi-blind video watermarking scheme, where we use pseudo-random robust semi-global features of video in the three dimensional wavelet transform domain. We design the watermark sequence via solving an optimization problem, such that the features of the mark-embedded video are the quantized versions of the features of the original video. The exact realizations of the algorithmic parameters are chosen pseudo-randomly via a secure pseudo-random number generator, whose seed is the secret key, that is known (resp. unknown) by the embedder and the receiver (resp. by the public). We experimentally show the robustness of our algorithm against several attacks, such as conventional signal processing modifications and adversarial estimation attacks.
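A standard building block behind such quantization-based embedding is scalar quantization index modulation (QIM): each selected feature is rounded onto one of two interleaved lattices depending on the watermark bit. This minimal sketch shows the principle only; the paper's scheme instead embeds via an optimization over pseudo-random semi-global 3D-wavelet features:

```python
def qim_embed(feature, bit, step):
    """Quantize a real-valued feature onto the lattice for `bit`:
    multiples of `step` encode 0; multiples shifted by step/2 encode 1."""
    offset = (step / 2) * bit
    return round((feature - offset) / step) * step + offset

def qim_detect(feature, step):
    """Decode by choosing whichever shifted lattice is nearer."""
    d0 = abs(feature - qim_embed(feature, 0, step))
    d1 = abs(feature - qim_embed(feature, 1, step))
    return 0 if d0 <= d1 else 1
```

Detection decodes correctly as long as the perturbation of a feature stays below a quarter of the quantization step; in the paper, which features carry the mark is itself drawn from a secure pseudo-random generator seeded by the secret key.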

  12. Practical Use of the Extended No Action Level (eNAL) Correction Protocol for Breast Cancer Patients With Implanted Surgical Clips

    International Nuclear Information System (INIS)

    Penninkhof, Joan; Quint, Sandra; Baaijens, Margreet; Heijmen, Ben; Dirkx, Maarten

    2012-01-01

    Purpose: To describe the practical use of the extended No Action Level (eNAL) setup correction protocol for breast cancer patients with surgical clips and evaluate its impact on the setup accuracy of both tumor bed and whole breast during simultaneously integrated boost treatments. Methods and Materials: For 80 patients, two orthogonal planar kilovoltage images and one megavoltage image (for the mediolateral beam) were acquired per fraction throughout the radiotherapy course. For setup correction, the eNAL protocol was applied, based on registration of surgical clips in the lumpectomy cavity. Differences with respect to application of a No Action Level (NAL) protocol or no protocol were quantified for tumor bed and whole breast. The correlation between clip migration during the fractionated treatment and either the method of surgery or the time elapsed from last surgery was investigated. Results: The distance of the clips to their center of mass (COM), averaged over all clips and patients, was reduced by 0.9 ± 1.2 mm (mean ± 1 SD). Clip migration was similar between the group of patients starting treatment within 100 days after surgery (median, 53 days) and the group starting afterward (median, 163 days) (p = 0.20). Clip migration after conventional breast surgery (closing the breast superficially) or after lumpectomy with partial breast reconstructive techniques (sutured cavity). was not significantly different either (p = 0.22). Application of eNAL on clips resulted in residual systematic errors for the clips’ COM of less than 1 mm in each direction, whereas the setup of the breast was within about 2 mm of accuracy. Conclusions: Surgical clips can be safely used for high-accuracy position verification and correction. Given compensation for time trends in the clips’ COM throughout the treatment course, eNAL resulted in better setup accuracies for both tumor bed and whole breast than NAL.
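The correction logic of NAL and its eNAL extension can be sketched in one dimension: after a few initial fractions the mean setup error is corrected (NAL); once further measurements accumulate, a linear time trend is fitted and extrapolated to the next fraction (eNAL). This is a simplified illustration under assumed conventions, not the clinical protocol's full per-axis implementation:

```python
def enal_correction(errors, n_initial=3):
    """Couch correction (same units as the errors, opposite sign) to apply
    at the next fraction. `errors` maps fraction number -> measured 1D
    setup error of the clips' center of mass."""
    if len(errors) < n_initial:
        return 0.0                          # no correction during initial fractions
    xs, ys = list(errors.keys()), list(errors.values())
    if len(xs) == n_initial:
        return -sum(ys) / len(ys)           # NAL: correct by the running mean
    # eNAL: least-squares line y = a + b * fraction, extrapolated one fraction ahead
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return -(a + b * (max(xs) + 1))
```

Fitting a trend rather than a plain mean is what lets the protocol compensate for gradual time trends such as the clip-COM drift reported above.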

  13. Does the placement of surgical clips within the excision cavity influence local control for patients treated with breast conserving surgery and irradiation?

    Energy Technology Data Exchange (ETDEWEB)

    Fein, Douglas A; Fowble, Barbara L; Hanlon, Alexandra L; Hoffman, John P; Sigurdson, Elin R; Eisenberg, Burton L

    1995-07-01

    PURPOSE: A number of authors have demonstrated the importance of using surgical clips to define the tumor bed in the treatment planning of early stage breast cancer. The clips have been useful in delineating the borders of the tangential fields especially for very medial and very lateral lesions as well as the boost volume. If surgical clips better define the tumor bed then a reduction in true or marginal recurrences should be appreciated. We sought to compare the incidence of breast recurrence in women with and without surgical clips controlling for other recognized prognostic factors. METHODS AND MATERIALS: Between 1980 and 1992, 1364 women with clinical Stage I or II invasive breast cancer underwent excisional biopsy, axillary dissection, and definitive irradiation. Median follow-up was 60 months. Median age was 55 years. Seventy-one percent of patients were path N0, 22% had 1-3 nodes and 7% had ≥ 4 nodes. Sixty-one percent were ER positive and 49% PR positive. Margin status was negative in 62%, positive in 10%, close in 9%, and unknown in 19%. Fifty-seven percent of women underwent a reexcision. Adjuvant chemotherapy ± tamoxifen was administered in 29%, and tamoxifen alone in 17%. Surgical clips were placed in the excision cavity in 556 patients while the other 808 did not have clips placed. All patients had a boost to the tumor bed. Patients had their boost planned with CT scanning or stereo shift radiographs. No significant differences between the 2 groups were noted for median age, T stage, nodal status, race, ER/PR receptor status, region irradiated, or tumor location. Patients without clips had negative margins less often, a higher rate of unknown or positive margins and more often received no adjuvant therapy compared to patients with surgical clips. RESULTS: Twenty-three and 27 patients with and without surgical clips, respectively developed a true or marginal recurrence in the treated breast. 
The actuarial probability of a breast recurrence was 2

  14. Esophageal Perforation due to Transesophageal Echocardiogram: New Endoscopic Clip Treatment

    Directory of Open Access Journals (Sweden)

    John Robotis

    2014-07-01

    Full Text Available Esophageal perforation due to transesophageal echocardiogram (TEE) during cardiac surgery is rare. A 72-year-old female underwent TEE during an operation for aortic valve replacement. Afterward, the patient presented with hematemesis. Gastroscopy revealed a bleeding esophageal ulcer, and endoscopic therapy was successful. Although a CT scan excluded perforation, the patient became febrile, and a second gastroscopy revealed a large perforation at the site of the ulcer. The patient's clinical condition required endoscopic intervention with a new OTSC® clip (Ovesco Endoscopy, Tübingen, Germany). The perforation was successfully sealed. The patient remained on intravenous antibiotics, proton pump inhibitors and parenteral nutrition for a few days, followed by enteral feeding. She was discharged fully recovered 3 months later. We clearly demonstrate an effective, less invasive treatment of an esophageal perforation with a new endoscopic clip.

  15. Does the placement of surgical clips within the excision cavity influence local control for patients treated with breast conserving surgery and irradiation?

    International Nuclear Information System (INIS)

    Fein, Douglas A.; Fowble, Barbara L.; Hanlon, Alexandra L.; Hoffman, John P.; Sigurdson, Elin R.; Eisenberg, Burton L.

    1995-01-01

    PURPOSE: A number of authors have demonstrated the importance of using surgical clips to define the tumor bed in the treatment planning of early stage breast cancer. The clips have been useful in delineating the borders of the tangential fields especially for very medial and very lateral lesions as well as the boost volume. If surgical clips better define the tumor bed then a reduction in true or marginal recurrences should be appreciated. We sought to compare the incidence of breast recurrence in women with and without surgical clips controlling for other recognized prognostic factors. METHODS AND MATERIALS: Between 1980 and 1992, 1364 women with clinical Stage I or II invasive breast cancer underwent excisional biopsy, axillary dissection, and definitive irradiation. Median follow-up was 60 months. Median age was 55 years. Seventy-one percent of patients were path N0, 22% had 1-3 nodes and 7% had ≥ 4 nodes. Sixty-one percent were ER positive and 49% PR positive. Margin status was negative in 62%, positive in 10%, close in 9%, and unknown in 19%. Fifty-seven percent of women underwent a reexcision. Adjuvant chemotherapy ± tamoxifen was administered in 29%, and tamoxifen alone in 17%. Surgical clips were placed in the excision cavity in 556 patients while the other 808 did not have clips placed. All patients had a boost to the tumor bed. Patients had their boost planned with CT scanning or stereo shift radiographs. No significant differences between the 2 groups were noted for median age, T stage, nodal status, race, ER/PR receptor status, region irradiated, or tumor location. Patients without clips had negative margins less often, a higher rate of unknown or positive margins and more often received no adjuvant therapy compared to patients with surgical clips. RESULTS: Twenty-three and 27 patients with and without surgical clips, respectively developed a true or marginal recurrence in the treated breast. The actuarial probability of a breast recurrence was 2% at

  16. Does the placement of surgical clips within the excision cavity influence local control for patients treated with breast-conserving surgery and irradiation?

    International Nuclear Information System (INIS)

    Fein, Douglas A.; Fowble, Barbara L.; Hanlon, Alexandra L.; Hoffman, John P.; Sigurdson, Elin R.; Eisenberg, Burton L.

    1996-01-01

    Purpose: A number of authors have demonstrated the importance of using surgical clips to define the tumor bed in the treatment planning of early-stage breast cancer. The clips have been useful in delineating the borders of the tangential fields, especially for very medial and very lateral lesions as well as the boost volume. If surgical clips better define the tumor bed, then a reduction in true or marginal recurrences should be appreciated. We sought to compare the incidence of breast recurrence in women with and without surgical clips, controlling for other recognized prognostic factors. Methods and Materials: Between 1980 and 1992, 1364 women with clinical Stage I or II invasive breast cancer underwent excisional biopsy, axillary dissection, and definitive irradiation. Median follow-up was 60 months. Median age was 55 years. Seventy-one percent of patients were path N0, 22% had one to three nodes, and 7% had > four nodes. Sixty-one percent were ER positive and 49% PR positive. Margin status was negative in 62%, positive in 10%, close in 9%, and unknown in 19%. Fifty-seven percent of women underwent a reexcision. Adjuvant chemotherapy + tamoxifen was administered in 29%, and tamoxifen alone in 17%. Surgical clips were placed in the excision cavity in 556 patients, while the other 808 did not have clips placed. All patients had a boost to the tumor bed. Patients had their boost planned with CT scanning or stereo shift radiographs. No significant differences between the two groups were noted for median age, T stage, nodal status, race, ER/PR receptor status, region irradiated, or tumor location. Patients without clips had negative margins less often, a higher rate of unknown or positive margins and more often received no adjuvant therapy compared to patients with surgical clips. Results: Twenty-five and 27 patients with and without surgical clips, respectively, developed a true or marginal recurrence in the treated breast. The actuarial probability of a breast

  17. Efficacy of electrocoagulation in sealing the cystic artery and cystic duct occluded with only one absorbable clip during laparoscopic cholecystectomy.

    Science.gov (United States)

    Yang, Chang-Ping; Cao, Jin-Lin; Yang, Ren-Rong; Guo, Hong-Rong; Li, Zhao-Hui; Guo, Hai-Ying; Shao, Yin-Can; Liu, Gui-Bao

    2014-02-01

    Even though laparoscopic cholecystectomy (LC) emerged over 20 years ago, controversies persist with regard to the best method to ligate the cystic duct and artery. We proposed to assess the effectiveness and safety of electrocoagulation to seal the cystic artery and cystic duct after their occlusion with only one absorbable clip. We retrospectively compared the clinical data for 635 patients undergoing LC using electrocoagulation to seal the cystic artery and cystic duct that were occluded with only one absorbable clip (Group 1) and 728 patients undergoing LC using titanium clips (Group 2). In parallel, 30 rabbits randomized into six groups underwent cholecystectomy. After cystic duct ligation with absorbable or titanium clips, the animals were sacrificed 1, 3, or 6 months later, and intraabdominal adhesions were assessed after celiotomy. The mean operative time was significantly shorter in Group 1 (41.6 versus 58.9 minutes, P<0.05). Electrocoagulation of the cystic artery and cystic duct that were occluded with only one absorbable clip is safe and effective during LC. This approach is associated with shortened operative times and reduced leakage, compared with the standard method using metal clips.

  18. The role of structural characteristics in problem video game playing: a review

    OpenAIRE

    King, DL; Delfabbro, PH; Griffiths, MD

    2010-01-01

    The structural characteristics of video games may play an important role in explaining why some people play video games to excess. This paper provides a review of the literature on structural features of video games and the psychological experience of playing video games. The dominant view of the appeal of video games is based on operant conditioning theory and the notion that video games satisfy various needs for social interaction and belonging. However, there is a lack of experimental and ...

  19. Effects of Knee Alignments and Toe Clip on Frontal Plane Knee Biomechanics in Cycling.

    Science.gov (United States)

    Shen, Guangping; Zhang, Songning; Bennett, Hunter J; Martin, James C; Crouter, Scott E; Fitzhugh, Eugene C

    2018-06-01

    Effects of knee alignment on the internal knee abduction moment (KAM) in walking have been widely studied. The KAM is closely associated with the development of medial knee osteoarthritis. Despite the importance of knee alignment, no studies have explored its effects on knee frontal plane biomechanics during stationary cycling. The purpose of this study was to examine the effects of knee alignment and use of a toe clip on the knee frontal plane biomechanics during stationary cycling. A total of 32 participants (11 varus, 11 neutral, and 10 valgus alignment) performed five trials in each of six cycling conditions: pedaling at 80 rpm and 0.5 kg (40 Watts), 1.0 kg (78 Watts), and 1.5 kg (117 Watts) with and without a toe clip. A motion analysis system and a customized instrumented pedal were used to collect 3D kinematic and kinetic data. A 3 × 2 × 3 (group × toe clip × workload) mixed design ANOVA was used for statistical analysis (p < 0.05). There were two different knee frontal plane loading patterns, internal abduction and adduction moment, which were affected by knee alignment type. The knee adduction angle was 12.2° greater in the varus group compared to the valgus group (p = 0.001), yet no difference was found for KAM among groups. Wearing a toe clip increased the knee adduction angle by 0.95° (p = 0.005). The findings of this study indicate that stationary cycling may be a safe exercise prescription for people with knee malalignments. In addition, using a toe clip may not have any negative effects on knee joints during stationary cycling.

  20. The effect of video feedback on the social behavior of an adolescent with ADHD.

    Science.gov (United States)

    Sibley, Margaret H; Pelham, William E; Mazur, Amy; Gnagy, Elizabeth M; Ross, J Megan; Kuriyan, Aparajita B

    2012-10-01

    The social functioning of adolescents with ADHD is characteristically impaired, yet almost no interventions effectively address the peer relationships of these youth. This study evaluates the preliminary effects of a video-feedback intervention on the social behavior of a 16-year-old male with ADHD-combined type in the context of a summer treatment program for youth with ADHD. The intervention was administered in a teen-run business meeting designed to mimic the context of group-based activities such as student government, service clubs, and group projects. During each video-feedback session, the adolescent viewed a 5-min clip of his behavior in the previous business meeting, rated the appropriateness of his own social behavior in each 30-s interval, and discussed behavior with a summer program counselor. Results indicated that while the video-feedback intervention was in place, the adolescent displayed improvements in social behavior from baseline. Results also indicated that the adolescent exhibited relatively accurate self-perceptions during the intervention period. The authors present preliminary evidence for cross-contextual and cross-temporal generalization. The results of this study and future directions for intervention development are discussed in the context of the broader conversation about how to treat social impairment in adolescents with ADHD.

  1. A Method for Estimating Surveillance Video Georeferences

    Directory of Open Access Journals (Sweden)

    Aleksandar Milosavljević

    2017-07-01

    Full Text Available The integration of a surveillance camera video with a three-dimensional (3D) geographic information system (GIS) requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM) of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., position and orientation of the camera.
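    The pose estimation described above can be sketched as a small Levenberg–Marquardt loop over matched 2D–3D point correspondences. This is a minimal illustration with synthetic data, an assumed pinhole model with known focal length, and a hand-rolled LM solver; it is not the authors' implementation.

```python
import numpy as np

def rotation(yaw, pitch, roll):
    """Camera-to-world rotation built from Z-Y-X Euler angles (assumed parametrization)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def project(params, pts3d, f=1000.0):
    """Pinhole projection of world points for pose params (x, y, z, yaw, pitch, roll)."""
    C, angles = params[:3], params[3:]
    Pc = (rotation(*angles).T @ (pts3d - C).T).T  # world -> camera frame
    return f * Pc[:, :2] / Pc[:, 2:3]             # perspective divide

def residuals(params, pts3d, obs2d):
    """Stacked reprojection errors (frame-coordinate units)."""
    return (project(params, pts3d) - obs2d).ravel()

def levenberg_marquardt(params, pts3d, obs2d, iters=50, lam=1e-3):
    """Minimal LM loop with a forward-difference Jacobian."""
    params = params.astype(float)
    for _ in range(iters):
        r = residuals(params, pts3d, obs2d)
        J = np.empty((r.size, params.size))
        for j in range(params.size):
            dp = np.zeros_like(params)
            dp[j] = 1e-6
            J[:, j] = (residuals(params + dp, pts3d, obs2d) - r) / 1e-6
        step = np.linalg.solve(J.T @ J + lam * np.eye(params.size), -J.T @ r)
        trial = params + step
        if np.sum(residuals(trial, pts3d, obs2d) ** 2) < np.sum(r ** 2):
            params, lam = trial, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                      # reject step, damp harder
    return params

# Synthetic scene: 8 ground points (e.g. from orthophoto + DEM) matched to frame coordinates
rng = np.random.default_rng(0)
pts3d = rng.uniform([-20.0, -20.0, 40.0], [20.0, 20.0, 80.0], size=(8, 3))
true = np.array([1.0, -2.0, 0.5, 0.05, -0.03, 0.02])  # position + orientation
obs = project(true, pts3d)
est = levenberg_marquardt(true + 0.1, pts3d, obs)     # start from a rough guess
```

    With exact correspondences and a reasonable starting guess, the loop recovers the camera position and orientation; real data would add noise and outlier handling.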

  2. MRI-validation of SEP monitoring for ischemic events during microsurgical clipping of intracranial aneurysms.

    Science.gov (United States)

    Krayenbühl, Niklaus; Sarnthein, Johannes; Oinas, Minna; Erdem, Eren; Krisht, Ali F

    2011-09-01

    During surgical clipping of intracranial aneurysms, reduction in SEP amplitude is thought to indicate cortical ischemia and subsequent neurological deficits. Since the sensitivity of SEP is questioned, we investigated SEP with respect to post-operative ischemia. In 36 patients with 51 intracranial aneurysms, clinical evaluation and diffusion-weighted MRI (DWI) were performed before and within 24 h after surgery. During surgery, the time of temporary occlusion was recorded. MRI images were reviewed for signs of ischemia. For 43 clip applications (84%), we observed neither pathologic SEP events nor ischemia in MRI. In two cases where the reduction lasted >10 min after clip release, SEP events correlated with ischemia in the MRI. Only one of the ischemic patients was symptomatic and developed a transient hemiparesis. While pathologic SEP events correlated with visible ischemia in MRI only in two cases with late SEP recovery, ischemia in MRI may have been transient or may not have reached the detection threshold in the other cases, in agreement with the absence of permanent neurological deficits. In complex aneurysm cases, where prolonged temporary occlusion is expected, SEP should be used to detect ischemia at a reversible stage to improve the safety of aneurysm clipping. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  3. Custom-made different designs of pressure clips for the management of ear lobe keloids

    Directory of Open Access Journals (Sweden)

    Anshul Chugh

    2013-01-01

    Full Text Available Introduction: Keloids are a frequent finding after physical trauma, and ear lobe keloids are a common complication of ear piercing, although their incidence remains unknown. Treatment of pinna keloids combines intrakeloid resection with a pressure device; the recommendation is to maintain constant pressure, with a pressure-therapy duration of about 25 weeks. Clinical innovation: This article presents inexpensive custom-made pressure clips of various designs. The polymethylmethacrylate (PMMA) plates of the ear lobe clips presented here are not esthetically ideal, but colored PMMA was used to make them decorative and acceptable to most patients. Using the different designs has been an encouraging experience. Discussion: An ear clip prosthesis has been developed for maintaining pressure on ear lobe keloids before and after surgical removal. The prosthesis consists of an ear clip to which heat-polymerized acrylic resin is attached, covering the keloid area. Pressure therapy is widely used to promote early maturation of scar tissue and to prevent the recurrence of keloids. A preliminary report by Brent showed that constant light pressure, applied with a decorative spring-pressure earring, was an effective means of preventing post-excision recurrence of ear lobe keloids.

  4. Evaluation of stiffness and plastic deformation of active ceramic self-ligating bracket clips after repetitive opening and closure movements.

    Science.gov (United States)

    Carneiro, Grace Kelly Martins; Roque, Juliano Alves; Segundo, Aguinaldo Silva Garcez; Suzuki, Hideo

    2015-01-01

    The aim of this study was to assess whether repetitive opening and closure of self-ligating bracket clips can cause plastic deformation of the clip. Three types of active/interactive ceramic self-ligating brackets (n = 20) were tested: In-Ovation C, Quicklear and WOW. A standardized controlled device performed 500 cycles of opening and closure movements of the bracket clip with proper instruments and techniques adapted as recommended by the manufacturer of each bracket type. Two tensile tests, one before and one after the repetitive cycles, were performed to assess the stiffness of the clips. To this end, a custom-made stainless steel 0.40 x 0.40 mm wire was inserted into the bracket slot and adapted to the universal testing machine (EMIC DL2000), after which measurements were recorded. On the loading portion of the loading-unloading curve of clips, the slope was fitted with a first-degree equation to determine the stiffness/deflection rate of the clip. The results of plastic deformation showed no significant difference among bracket types before and after the 500 cycles of opening and closure (p = 0.811). There were significant differences in stiffness among the three types of brackets (p = 0.005). The WOW bracket had higher mean values, whereas the Quicklear bracket had lower values, regardless of the opening/closure cycle. Repetitive controlled opening and closure movements of the clip did not alter stiffness or cause plastic deformation.

  5. Evaluation of stiffness and plastic deformation of active ceramic self-ligating bracket clips after repetitive opening and closure movements

    Directory of Open Access Journals (Sweden)

    Grace Kelly Martins Carneiro

    2015-08-01

    Full Text Available OBJECTIVE: The aim of this study was to assess whether repetitive opening and closure of self-ligating bracket clips can cause plastic deformation of the clip. METHODS: Three types of active/interactive ceramic self-ligating brackets (n = 20) were tested: In-Ovation C, Quicklear and WOW. A standardized controlled device performed 500 cycles of opening and closure movements of the bracket clip with proper instruments and techniques adapted as recommended by the manufacturer of each bracket type. Two tensile tests, one before and one after the repetitive cycles, were performed to assess the stiffness of the clips. To this end, a custom-made stainless steel 0.40 x 0.40 mm wire was inserted into the bracket slot and adapted to the universal testing machine (EMIC DL2000), after which measurements were recorded. On the loading portion of the loading-unloading curve of clips, the slope was fitted with a first-degree equation to determine the stiffness/deflection rate of the clip. RESULTS: The results of plastic deformation showed no significant difference among bracket types before and after the 500 cycles of opening and closure (p = 0.811). There were significant differences in stiffness among the three types of brackets (p = 0.005). The WOW bracket had higher mean values, whereas the Quicklear bracket had lower values, regardless of the opening/closure cycle. CONCLUSION: Repetitive controlled opening and closure movements of the clip did not alter stiffness or cause plastic deformation.
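    The stiffness/deflection rate described in the methods is the slope of a first-degree (linear) fit to the loading portion of the curve. A minimal sketch on synthetic force-deflection data; the numbers below are assumed for illustration, not measured values:

```python
import numpy as np

# Hypothetical loading-curve samples: deflection (mm) vs. force (N)
true_stiffness = 12.5                          # N/mm, assumed for illustration
deflection = np.linspace(0.0, 0.4, 9)
force = true_stiffness * deflection + 0.05     # small assumed preload offset

# First-degree (linear) fit; the slope is the stiffness/deflection rate
slope, intercept = np.polyfit(deflection, force, 1)
```

    On real test-machine data, the fit would be restricted to the loading segment of the loading-unloading cycle before extracting the slope.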

  6. Cortical fMRI activation to opponents' body kinematics in sport-related anticipation: expert-novice differences with normal and point-light video.

    Science.gov (United States)

    Wright, M J; Bishop, D T; Jackson, R C; Abernethy, B

    2011-08-18

    Badminton players of varying skill levels viewed normal and point-light video clips of opponents striking the shuttle towards the viewer; their task was to predict in which quadrant of the court the shuttle would land. In a whole-brain fMRI analysis we identified bilateral cortical networks sensitive to the anticipation task relative to control stimuli. This network is more extensive and localised than previously reported. Voxel clusters responding more strongly in experts than novices were associated with all task-sensitive areas, whereas voxels responding more strongly in novices were found outside these areas. Task-sensitive areas for normal and point-light video were very similar, whereas early visual areas responded differentially, indicating the primacy of kinematic information for sport-related anticipation. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  7. Heartbeat Signal from Facial Video for Biometric Recognition

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    Different biometric traits such as face appearance and heartbeat signals from Electrocardiogram (ECG)/Phonocardiogram (PCG) are widely used in human identity recognition. Recent advances in facial-video-based measurement of cardio-physiological parameters such as heartbeat rate, respiratory rate......, and blood volume pressure provide the possibility of extracting the heartbeat signal from facial video instead of using obtrusive ECG or PCG sensors on the body. This paper proposes the Heartbeat Signal from Facial Video (HSFV) as a new biometric trait for human identity recognition, for the first time...... to the best of our knowledge. Feature extraction from the HSFV is accomplished by employing a Radon transform on a waterfall model of the replicated HSFV. The pairwise Minkowski distances are obtained from the Radon image as the features. The authentication is accomplished by a decision tree based supervised
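    The pairwise Minkowski distance features mentioned above can be sketched as follows; the order p and the toy vectors are assumptions for illustration, and in the paper's setting the vectors would come from the Radon image:

```python
import numpy as np

def minkowski(a, b, p):
    """Order-p Minkowski distance between two feature vectors."""
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

def pairwise_minkowski(X, p=3):
    """Symmetric matrix of pairwise order-p distances between the rows of X."""
    n = X.shape[0]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = minkowski(X[i], X[j], p)
    return D
```

    For p = 2 this reduces to the Euclidean distance, e.g. `pairwise_minkowski(np.array([[0., 0.], [3., 4.]]), p=2)[0, 1]` gives 5.0.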

  8. Clinical outcome of critically ill, not fully recompensated, patients undergoing MitraClip therapy

    DEFF Research Database (Denmark)

    Rudolph, Volker; Huntgeburth, Michael; von Bardeleben, Ralph Stephan

    2014-01-01

    AIMS: As periprocedural risk is low, MitraClip implantation is often performed in critically ill, not fully recompensated patients, who are in NYHA functional class IV at the time of the procedure, to accelerate convalescence. We herein sought to evaluate the procedural and 30-day outcome.......3%. CONCLUSION: MitraClip therapy is feasible and safe even in critically ill, not fully recompensated patients and leads to symptomatic improvement in over two-thirds of these patients; however, it is associated with an elevated 30-day mortality....

  9. Electronic evaluation for video commercials by impression index.

    Science.gov (United States)

    Kong, Wanzeng; Zhao, Xinxin; Hu, Sanqing; Vecchiato, Giovanni; Babiloni, Fabio

    2013-12-01

    How to evaluate the effect of commercials is significantly important in neuromarketing. In this paper, we propose an electronic way to evaluate the influence of video commercials on consumers using an impression index. The impression index combines memorization and attention indices, derived by tracking EEG activity while consumers observe video commercials. It extracts features from scalp EEG to evaluate the effectiveness of video commercials in the time-frequency-space domain, and the global field power was used as an impression index for evaluating video commercial scenes as time series. Experimental results demonstrate that the proposed approach is able to track variations of the cerebral activity related to cognitive tasks such as observing video commercials, and helps to judge from EEG signals whether a scene in a video commercial is impressive or not.
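    Global field power is, in its standard definition, the spatial standard deviation across electrodes at each time sample. A minimal sketch on synthetic data; the flagging rule at the end is an assumption for illustration, not the paper's criterion:

```python
import numpy as np

def global_field_power(eeg):
    """Global field power: spatial standard deviation across channels
    at each time sample; eeg has shape (n_channels, n_samples)."""
    return eeg.std(axis=0)

# Toy example: 4 channels, 1000 samples of synthetic EEG-like noise
rng = np.random.default_rng(1)
eeg = rng.standard_normal((4, 1000))
gfp = global_field_power(eeg)
impressive = gfp > gfp.mean() + gfp.std()  # assumed rule to flag high-GFP moments
```

    The resulting GFP series can then be aligned with scene boundaries of the commercial to score individual scenes over time.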

  10. Top-down and Middle-down Protein Analysis Reveals that Intact and Clipped Human Histones Differ in Post-translational Modification Patterns

    DEFF Research Database (Denmark)

    Tvardovskiy, Andrey; Wrzesinski, Krzysztof; Sidoli, Simone

    2015-01-01

    Post-translational modifications (PTMs) of histone proteins play a fundamental role in regulation of DNA-templated processes. There is also growing evidence that proteolytic cleavage of histone N-terminal tails, known as histone clipping, influences nucleosome dynamics and functional properties...... hepatocytes and the hepatocellular carcinoma cell line HepG2/C3A when grown in spheroid (3D) culture, but not in a flat (2D) culture. Using tandem mass spectrometry we localized four different clipping sites in H3 and one clipping site in H2B. We show that in spheroid culture clipped H3 proteoforms are mainly...

  11. "Lost" on the Web: Does Web Distribution Stimulate or Depress Television Viewing?

    OpenAIRE

    Joel Waldfogel

    2007-01-01

    In the past few years, YouTube and other sites for sharing video files over the Internet have vaulted from obscurity to places of centrality in the media landscape. The files available at YouTube include a mix of user-generated video and clips from network television shows. Networks fear that availability of their clips on YouTube will depress television viewing. But unauthorized clips are also free advertising for television shows. As YouTube has grown quickly, major networks have responded ...

  12. Using Interactive Video Instruction To Enhance Public Speaking Instruction.

    Science.gov (United States)

    Cronin, Michael W.; Kennan, William R.

    Noting that interactive video instruction (IVI) should not and cannot replace classroom instruction, this paper offers an introduction to interactive video instruction as an innovative technology that can be used to expand pedagogical opportunities in public speaking instruction. The paper: (1) defines the distinctive features of IVI; (2) assesses…

  13. [MitraClip® for treatment of tricuspid valve insufficiency].

    Science.gov (United States)

    Pfister, R; Baldus, S

    2017-11-01

    Tricuspid valve regurgitation is frequently found as a result of right ventricular remodeling due to advanced left heart disease. Drug treatment is limited to diuretics and management of cardiac or pulmonary comorbidities. Due to the high operative risk, only a small percentage of patients are amenable to surgical treatment of tricuspid regurgitation, mostly those who undergo left-sided surgery for other reasons. Catheter-based procedures are an attractive treatment alternative, particularly since the strong prognostic impact of tricuspid regurgitation suggests an unmet need for treatment, independent of the underlying heart disease. A vast amount of clinical experience exists for the MitraClip system in the treatment of mitral regurgitation. A first case series shows that its application for the treatment of tricuspid regurgitation is technically feasible and appears to be safe, and that the degree of valve regurgitation can be reduced. In this review, the background of tricuspid regurgitation treatment is summarized and first experiences and perspectives with the MitraClip system are assessed.

  14. Surgical clips for position verification and correction of non-rigid breast tissue in simultaneously integrated boost (SIB) treatments

    International Nuclear Information System (INIS)

    Penninkhof, Joan; Quint, Sandra; Boer, Hans de; Mens, Jan Willem; Heijmen, Ben; Dirkx, Maarten

    2009-01-01

    Background and purpose: The aim of this study is to investigate whether surgical clips in the lumpectomy cavity are representative for position verification of both the tumour bed and the whole breast in simultaneously integrated boost (SIB) treatments. Materials and methods: For a group of 30 patients treated with a SIB technique, kV and MV planar images were acquired throughout the course of the fractionated treatment. The 3D set-up error for the tumour bed was derived by matching the surgical clips (3-8 per patient) in two almost orthogonal planar kV images. By projecting the 3D set-up error derived from the planar kV images to the (u, v)-plane of the tangential beams, the correlation with the 2D set-up error for the whole breast, derived from the MV EPID images, was determined. The stability of relative clip positions during the fractionated treatment was investigated. In addition, for a subgroup of 15 patients, the impact of breathing was determined from fluoroscopic movies acquired at the linac. Results: The clip configurations were stable over the course of radiotherapy, showing an inter-fraction variation (1 SD) of 0.5 mm on average. Between the start and the end of the treatment, the mean distance between the clips and their center of mass was reduced by 0.9 mm. A decrease larger than 2 mm was observed in eight patients (17 clips). The peak-to-peak excursion of the clips due to breathing was generally less than 2.5 mm in all directions. The population averages of the difference (±1 SD) between kV and MV matches in the (u, v)-plane were 0.2 ± 1.8 mm and 0.9 ± 1.5 mm, respectively. In 30% of the patients, time trends larger than 3 mm were present over the course of the treatment in either or both kV and MV match results. Application of the NAL protocol based on the clips reduced the population mean systematic error to less than 2 mm in all directions, both for the tumour bed and the whole breast. Due to the observed time trends, these systematic errors can
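    Projecting the 3D set-up error onto the (u, v)-plane of a tangential beam amounts to taking dot products with two orthonormal in-plane axes. A minimal geometric sketch; the beam axes and error vector below are assumed values for illustration only:

```python
import numpy as np

def project_to_beam_plane(error_3d, u_axis, v_axis):
    """Project a 3D set-up error onto the (u, v)-plane spanned by two
    orthonormal in-plane axes of a beam."""
    return np.array([np.dot(error_3d, u_axis), np.dot(error_3d, v_axis)])

# Assumed example: a tangential beam whose u-axis is rotated 45 degrees about z
u = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0])
v = np.array([0.0, 0.0, 1.0])
uv_error = project_to_beam_plane(np.array([2.0, 1.0, -0.5]), u, v)  # mm
```

    The resulting (u, v) components are what get compared against the 2D set-up error measured in the MV EPID images.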

  15. Aneurysmal subarachnoid hemorrhage: outcome of aneurysm clipping versus coiling in anterior circulation aneurysm

    International Nuclear Information System (INIS)

    Wadd, I.H.; Haroon, A.; Ansari, S.

    2015-01-01

    To compare the neurological outcome of microsurgical clipping versus coiling in patients with anterior circulation aneurysm. Study Design: Comparative study. Place and Duration of Study: Department of Neurosurgery, Lahore General Hospital, Lahore, from January 2010 to December 2013. Methodology: Patients aged 14 - 60 years, with ruptured cerebral aneurysm of anterior circulation and World Federation of Neurosurgical Society (WFNS) grades 1, 2 and 3 were included. Patients older than 60 years, medically unfit patients, posterior circulation aneurysms, and WFNS grades 4 and 5 were excluded. Aneurysm sac obliteration was done in a randomized manner with microsurgical clipping or coiling. Postoperatively, the patients were assessed and followed up for up to one year for outcome parameters on the basis of WFNS grade and the Modified Rankin Scale (mRS), as favourable (mRS ≤ 2) and unfavourable (mRS > 2). Results: Among 140 subjects selected for the study, 70 were included in group A, i.e. coiling, and the other 70 in group B, i.e. clipping. The median age of patients in group A was 52.5 ± 10 years and in group B was 51.00 ± years. Overall, 56 (40%) males and 84 (60%) females were included, with 28 males and 42 females in each group. The male to female ratio in this study was 1:1.5. In group A, i.e. coiling, 27 (38.6%) patients had no disability (grades 1 and 2), 25 (35.7%) were slightly disabled (grade 3) and 18 (25.7%) had moderate disability (grade 4); whereas in group B, i.e. the clipping group, 23 (32.9%) patients had no disability (grades 1 and 2), 23 (32.9%) were slightly disabled (grade 3) and 24 (34.3%) had moderate disability (grade 4). At one year follow-up, in group A, a favourable outcome was achieved in 56 (80%) of patients compared to 48 (68.6%) in group B; whilst 14 (20%) patients in group A and 22 (33.1%) in group B showed an unfavourable outcome. 
    Mortality was higher with clipping (n=3, 4.3%) than with coiling (n=1, 1.4%), but the difference was not statistically significant.

  16. Combining monoenergetic extrapolations from dual-energy CT with iterative reconstructions. Reduction of coil and clip artifacts from intracranial aneurysm therapy

    Energy Technology Data Exchange (ETDEWEB)

    Winklhofer, Sebastian; Baltsavias, Gerasimos; Michels, Lars; Valavanis, Antonios [University of Zurich, Department of Neuroradiology, University Hospital Zurich, Zurich (Switzerland); Hinzpeter, Ricarda; Stocker, Daniel; Alkadhi, Hatem [University of Zurich, Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich (Switzerland); Burkhardt, Jan-Karl; Regli, Luca [University of Zurich, Department of Neurosurgery, University Hospital Zurich, Zurich (Switzerland)

    2018-03-15

    To compare and to combine iterative metal artifact reduction (MAR) and virtual monoenergetic extrapolations (VMEs) from dual-energy computed tomography (DECT) for reducing metal artifacts from intracranial clips and coils. Fourteen clips and six coils were scanned in a phantom model with DECT at 100 and Sn 150 kVp. Four datasets were reconstructed: non-corrected images (filtered-back projection), iterative MAR, VME from DECT at 120 keV, and combined iterative MAR + VME images. Artifact severity scores and visibility of simulated, contrast-filled, adjacent vessels were assessed qualitatively and quantitatively by two independent, blinded readers. Iterative MAR, VME, and combined iterative MAR + VME resulted in a significant reduction of qualitative (p < 0.001) and quantitative clip artifacts (p < 0.005) and improved the visibility of adjacent vessels (p < 0.05) compared to non-corrected images, with the lowest artifact scores found in combined iterative MAR + VME images. Titanium clips demonstrated fewer artifacts than Phynox clips (p < 0.05), and artifact scores increased with clip size. Coil artifacts increased with coil size but were reducible when applying iterative MAR + VME compared to non-corrected images. However, no technique improved the severe artifacts from large, densely packed coils. Combining iterative MAR with VME allows for improved metal artifact reduction from clips and smaller, loosely packed coils. Limited value was found for large and densely packed coils.

  17. Multiple Feature Fusion Based on Co-Training Approach and Time Regularization for Place Classification in Wearable Video

    Directory of Open Access Journals (Sweden)

    Vladislavs Dovgalecs

    2013-01-01

    Full Text Available The analysis of video acquired with a wearable camera is a challenge that the multimedia community is facing with the proliferation of such sensors in various applications. In this paper, we focus on the problem of automatic visual place recognition in a weakly constrained environment, targeting the indexing of video streams by topological place recognition. We propose to combine several machine learning approaches in a time-regularized framework for image-based place recognition indoors. The framework combines the power of multiple visual cues and integrates the temporal continuity information of video. We extend it with a computationally efficient semi-supervised method leveraging unlabeled video sequences for improved indexing performance. The proposed approach was applied to challenging video corpora. Experiments on a public and a real-world video sequence database show the gain brought by the different stages of the method.
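    Time regularization of per-frame place scores can be sketched, in its simplest form, as smoothing class scores over a sliding window before labeling each frame. This toy moving-average version is an assumption standing in for the paper's framework:

```python
import numpy as np

def temporally_regularize(scores, window=5):
    """Smooth per-frame class scores (n_frames, n_classes) with a centered
    moving average, then label each frame by the highest smoothed score."""
    kernel = np.ones(window) / window
    smoothed = np.column_stack(
        [np.convolve(scores[:, c], kernel, mode="same")
         for c in range(scores.shape[1])]
    )
    return smoothed.argmax(axis=1)

# Toy example: one noisy frame briefly flips class; smoothing removes it
scores = np.array([[0.9, 0.1]] * 5 + [[0.2, 0.8]] + [[0.9, 0.1]] * 5)
labels = temporally_regularize(scores)
```

    Exploiting temporal continuity this way suppresses isolated per-frame misclassifications at the cost of delaying genuine place transitions by a few frames.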

  18. Determination of moulting events in rock lobsters from pleopod clipping.

    Science.gov (United States)

    Gardner, Caleb; Mills, David J

    2013-01-01

    Rock lobster growth is routinely measured for research to optimise management measures such as size limits and quotas. The process of estimating growth is complicated in crustaceans as growth only occurs when the animal moults. As data are typically collected by tag-recapture methods, the timing of moulting events can bias results. For example, if annual moulting events take place within a very short time-at-large after tagging, or if time-at-large is long and no moulting occurs. Classifying data into cases where moulting has / has not occurred during time-at-large can be required and can generally be determined by change in size between release and recapture. However, in old or slow growth individuals the moult increment can be too small to provide surety that moulting has occurred. A method that has been used since the 1970's to determine moulting in rock lobsters involves clipping the distal portion of a pleopod so that any regeneration observed at recapture can be used as evidence of a moult. We examined the use of this method in both tank and long-duration field trials within a marine protected area, which provided access to large animals with smaller growth increments. Our results emphasised that determination of moulting by change in size was unreliable with larger lobsters and that pleopod clipping can assist in identifying moulting events. However, regeneration was an unreliable measure of moulting if clipping occurred less than three months before the moult.

  19. Determination of moulting events in rock lobsters from pleopod clipping.

    Directory of Open Access Journals (Sweden)

    Caleb Gardner

    Full Text Available Rock lobster growth is routinely measured for research to optimise management measures such as size limits and quotas. The process of estimating growth is complicated in crustaceans as growth only occurs when the animal moults. As data are typically collected by tag-recapture methods, the timing of moulting events can bias results. For example, if annual moulting events take place within a very short time-at-large after tagging, or if time-at-large is long and no moulting occurs. Classifying data into cases where moulting has / has not occurred during time-at-large can be required and can generally be determined by change in size between release and recapture. However, in old or slow growth individuals the moult increment can be too small to provide surety that moulting has occurred. A method that has been used since the 1970's to determine moulting in rock lobsters involves clipping the distal portion of a pleopod so that any regeneration observed at recapture can be used as evidence of a moult. We examined the use of this method in both tank and long-duration field trials within a marine protected area, which provided access to large animals with smaller growth increments. Our results emphasised that determination of moulting by change in size was unreliable with larger lobsters and that pleopod clipping can assist in identifying moulting events. However, regeneration was an unreliable measure of moulting if clipping occurred less than three months before the moult.

  20. Effects of Knee Alignments and Toe Clip on Frontal Plane Knee Biomechanics in Cycling

    Science.gov (United States)

    Shen, Guangping; Zhang, Songning; Bennett, Hunter J.; Martin, James C.; Crouter, Scott E.; Fitzhugh, Eugene C.

    2018-01-01

    Effects of knee alignment on the internal knee abduction moment (KAM) in walking have been widely studied. The KAM is closely associated with the development of medial knee osteoarthritis. Despite the importance of knee alignment, no studies have explored its effects on knee frontal plane biomechanics during stationary cycling. The purpose of this study was to examine the effects of knee alignment and use of a toe clip on the knee frontal plane biomechanics during stationary cycling. A total of 32 participants (11 varus, 11 neutral, and 10 valgus alignment) performed five trials in each of six cycling conditions: pedaling at 80 rpm and 0.5 kg (40 Watts), 1.0 kg (78 Watts), and 1.5 kg (117 Watts) with and without a toe clip. A motion analysis system and a customized instrumented pedal were used to collect 3D kinematic and kinetic data. A 3 × 2 × 3 (group × toe clip × workload) mixed design ANOVA was used for statistical analysis (p < 0.05). There were two different knee frontal plane loading patterns, internal abduction and adduction moment, which were affected by knee alignment type. The knee adduction angle was 12.2° greater in the varus group compared to the valgus group (p = 0.001), yet no difference was found for KAM among groups. Wearing a toe clip increased the knee adduction angle by 0.95º (p = 0.005). The findings of this study indicate that stationary cycling may be a safe exercise prescription for people with knee malalignments. In addition, using a toe clip may not have any negative effects on knee joints during stationary cycling. Key points Varus or valgus alignment did not cause increased frontal-plane knee joint loading, suggesting stationary cycling is a safe exercise. This study supports that using a toe clip did not lead to abnormal frontal-plane knee loading during stationary cycling. Two different knee frontal plane loading patterns, knee abduction and adduction moment, were observed during stationary cycling, which are likely affected by