WorldWideScience

Sample records for video modeling prompting

  1. Teaching Daily Living Skills to Seven Individuals with Severe Intellectual Disabilities: A Comparison of Video Prompting to Video Modeling

    Science.gov (United States)

    Cannella-Malone, Helen I.; Fleming, Courtney; Chung, Yi-Cheih; Wheeler, Geoffrey M.; Basbagill, Abby R.; Singh, Angella H.

    2011-01-01

    We conducted a systematic replication of Cannella-Malone et al. by comparing the effects of video prompting to video modeling for teaching seven students with severe disabilities to do laundry and wash dishes. The video prompting and video modeling procedures were counterbalanced across tasks and participants and compared in an alternating…

  2. Video Modeling and Prompting in Practice: Teaching Cooking Skills

    Science.gov (United States)

    Kellems, Ryan O.; Mourra, Kjerstin; Morgan, Robert L.; Riesen, Tim; Glasgow, Malinda; Huddleston, Robin

    2016-01-01

    This article discusses the creation of video modeling (VM) and video prompting (VP) interventions for teaching novel multi-step tasks to individuals with disabilities. This article reviews factors to consider when selecting skills to teach, and students for whom VM/VP may be successful, as well as the difference between VM and VP and circumstances…

  3. Comparison of the Effects of Continuous Video Modeling, Video Prompting, and Video Modeling on Task Completion by Young Adults with Moderate Intellectual Disability

    Science.gov (United States)

    Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.

    2014-01-01

    This study compared the effects of three procedures (video prompting: VP, video modeling: VM, and continuous video modeling: CVM) on task completion by three high school students with moderate intellectual disability. The comparison was made across three sets of fundamentally different tasks (putting away household items in clusters of two items;…

  4. Using Video Modeling and Video Prompting to Teach Core Academic Content to Students with Learning Disabilities

    Science.gov (United States)

    Kellems, Ryan O.; Edwards, Sean

    2016-01-01

    Practitioners are constantly searching for evidence-based practices that are effective in teaching academic skills to students with learning disabilities (LD). Video modeling (VM) and video prompting have become popular instructional interventions for many students across a wide range of different disability classifications, including those with…

  5. Using Progressive Video Prompting to Teach Students with Moderate Intellectual Disability to Shoot a Basketball

    Science.gov (United States)

    Lo, Ya-yu; Burk, Bradley; Anderson, Adrienne L.

    2014-01-01

    The current study examined the effects of a modified video prompting procedure, namely progressive video prompting, to increase technique accuracy of shooting a basketball in the school gymnasium of three 11th-grade students with moderate intellectual disability. The intervention involved participants viewing video clips of an adult model who…

  6. Video Modeling and Prompting: A Comparison of Two Strategies for Teaching Cooking Skills to Students with Mild Intellectual Disabilities

    Science.gov (United States)

    Taber-Doughty, Teresa; Bouck, Emily C.; Tom, Kinsey; Jasper, Andrea D.; Flanagan, Sara M.; Bassette, Laura

    2011-01-01

    Self-operated video prompting and video modeling were compared when used by three secondary students with mild intellectual disabilities as they completed novel recipes during cooking activities. Alternating between video systems, students completed twelve recipes within their classroom kitchen. An alternating treatment design with a follow-up and…

  7. Use of static picture prompts versus video modeling during simulation instruction.

    Science.gov (United States)

    Alberto, Paul A; Cihak, David F; Gama, Robert I

    2005-01-01

    The purpose of this study was to compare the effectiveness and efficiency of static picture prompts and video modeling as classroom simulation strategies in combination with in vivo community instruction. Students with moderate intellectual disabilities were instructed in the tasks of withdrawing money from an ATM and purchasing items using a debit card. Both simulation strategies were effective and efficient at teaching the skills. The two simulation strategies were not functionally different in terms of number of trials to acquisition, number of errors, and number of instructional sessions to criterion.

  8. Use of Video Modeling and Video Prompting Interventions for Teaching Daily Living Skills to Individuals with Autism Spectrum Disorders: A Review

    Science.gov (United States)

    Gardner, Stephanie; Wolfe, Pamela

    2013-01-01

    Identifying methods to increase the independent functioning of individuals with autism spectrum disorders (ASD) is vital in enhancing their quality of life; teaching students with ASD daily living skills can foster independent functioning. This review examines interventions that implement video modeling and/or prompting to teach individuals with…

  9. Introducing an Information-Seeking Skill in a School Library to Students with Autism Spectrum Disorder: Using Video Modeling and Least-to-Most Prompts

    Science.gov (United States)

    Markey, Patricia T.

    2015-01-01

    This study investigated the effectiveness of a video peer modeling and least-to-most prompting intervention in the school library setting, targeting the instructional delivery of an information-literacy skill to students with Autism Spectrum Disorder (ASD). Research studies have evaluated the effectiveness of video-modeling procedures in the…

  10. Effects of Video Modeling on the Instructional Efficiency of Simultaneous Prompting among Preschoolers with Autism Spectrum Disorder

    Science.gov (United States)

    Genc-Tosun, Derya; Kurt, Onur

    2017-01-01

    The purpose of the present study was to compare the effectiveness and efficiency of simultaneous prompting with and without video modeling in teaching food preparation skills to four participants with autism spectrum disorder, aged 5 to 6 years. An adapted alternating treatment single-case experimental design was used to…

  11. A Comparison of Least-to-Most Prompting and Video Modeling for Teaching Pretend Play Skills to Children with Autism Spectrum Disorder

    Science.gov (United States)

    Ulke-Kurkcuoglu, Burcu

    2015-01-01

    The aim of this study is to compare the effectiveness and efficiency of least-to-most prompting and video modeling for teaching pretend play skills to children with autism spectrum disorder. The adapted alternating treatment model, a single-subject design, was used in the study. Three students, one girl and two boys, between the ages of 5 and 6…

  12. Teaching Leisure Skills to an Adult with Developmental Disabilities Using a Video Prompting Intervention Package

    Science.gov (United States)

    Chan, Jeffrey Michael; Lambdin, Lindsay; Van Laarhoven, Toni; Johnson, Jesse W.

    2013-01-01

    The current study used a video prompting plus least-to-most prompting treatment package to teach a 35-year-old Caucasian man with Down Syndrome three leisure skills. Each leisure skill was task analyzed and the researchers created brief videos depicting the completion of individual steps. Using a multiple probe across behaviors design, the video…

  13. Comparison of Self-Prompting of Cooking Skills via Picture-Based Cookbooks and Video Recipes

    Science.gov (United States)

    Mechling, Linda C.; Stephens, Erin

    2009-01-01

    This investigation compared the use of static picture prompting, in a cookbook format, and video prompting to self-prompt four students with moderate intellectual disabilities to independently complete multi-step cooking tasks. An adapted alternating treatments design (AATD) with baseline, alternating treatments, and final treatment condition, was…

  14. Using Video-Based Modeling to Promote Acquisition of Fundamental Motor Skills

    Science.gov (United States)

    Obrusnikova, Iva; Rattigan, Peter J.

    2016-01-01

    Video-based modeling is becoming increasingly popular for teaching fundamental motor skills to children in physical education. Two frequently used video-based instructional strategies that incorporate modeling are video prompting (VP) and video modeling (VM). Both strategies have been used across multiple disciplines and populations to teach a…

  15. Examining the Effects of Video Modeling and Prompts to Teach Activities of Daily Living Skills.

    Science.gov (United States)

    Aldi, Catarina; Crigler, Alexandra; Kates-McElrath, Kelly; Long, Brian; Smith, Hillary; Rehak, Kim; Wilkinson, Lisa

    2016-12-01

    Video modeling has been shown to be effective in teaching a number of skills to learners diagnosed with autism spectrum disorders (ASD). In this study, we taught two young men diagnosed with ASD three different activities of daily living skills (ADLS) using point-of-view video modeling. Results indicated that both participants met criterion for all ADLS. Participants did not maintain mastery criterion at a 1-month follow-up, but did score above baseline at maintenance with and without video modeling.
    • Point-of-view video models may be an effective intervention to teach daily living skills.
    • Video modeling with handheld portable devices (Apple iPod or iPad) can be just as effective as video modeling with stationary viewing devices (television or computer).
    • The use of handheld portable devices (Apple iPod and iPad) makes video modeling accessible and possible in a wide variety of environments.

  16. Continuous Video Modeling to Prompt Completion of Multi-Component Tasks by Adults with Moderate Intellectual Disability

    Science.gov (United States)

    Mechling, Linda C.; Ayres, Kevin M.; Purrazzella, Kaitlin; Purrazzella, Kimberly

    2014-01-01

    This investigation examined the ability of four adults with moderate intellectual disability to complete multi-component tasks using continuous video modeling. Continuous video modeling, which is a newly researched application of video modeling, presents video in a "looping" format which automatically repeats playing of the video while…

  17. An evaluation of the production effects of video self-modeling.

    Science.gov (United States)

    O'Handley, Roderick D; Allen, Keith D

    2017-12-01

    A multiple baseline across tasks design was used to evaluate the production effects of video self-modeling on three activities of daily living tasks of an adult male with Autism Spectrum Disorder and Intellectual Disability. Results indicated large increases in task accuracy after the production of a self-modeling video for each task, but before the video was viewed by the participant. Results also indicated small increases when the participant was directed to view the same video self-models before being prompted to complete each task.

  18. Teaching physical activities to students with significant disabilities using video modeling.

    Science.gov (United States)

    Cannella-Malone, Helen I; Mizrachi, Sharona V; Sabielny, Linsey M; Jimenez, Eliseo D

    2013-06-01

    The objective of this study was to examine the effectiveness of video modeling on teaching physical activities to three adolescents with significant disabilities. The study implemented a multiple baseline across six physical activities (three per student): jumping rope, scooter board with cones, ladder drill (i.e., feet going in and out), ladder design (i.e., multiple steps), shuttle run, and disc ride. Additional prompt procedures (i.e., verbal, gestural, visual cues, and modeling) were implemented within the study. After the students mastered the physical activities, we tested to see if they would link the skills together (i.e., complete an obstacle course). All three students made progress learning the physical activities, but only one learned them with video modeling alone (i.e., without error correction). Video modeling can be an effective tool for teaching students with significant disabilities various physical activities, though additional prompting procedures may be needed.

  19. Use of a Proximity Sensor Switch for "Hands Free" Operation of Computer-Based Video Prompting by Young Adults with Moderate Intellectual Disability

    Science.gov (United States)

    Ivey, Alexandria N.; Mechling, Linda C.; Spencer, Galen P.

    2015-01-01

    In this study, the effectiveness of a "hands free" approach for operating video prompts to complete multi-step tasks was measured. Students advanced the video prompts by using a motion (hand wave) over a proximity sensor switch. Three young adult females with a diagnosis of moderate intellectual disability participated in the study.…

  20. Modifying the affective behavior of preschoolers with autism using in-vivo or video modeling and reinforcement contingencies.

    Science.gov (United States)

    Gena, Angeliki; Couloura, Sophia; Kymissis, Effie

    2005-10-01

    The purpose of this study was to modify the affective behavior of three preschoolers with autism in home settings and in the context of play activities, and to compare the effects of video modeling to the effects of in-vivo modeling in teaching these children contextually appropriate affective responses. A multiple-baseline design across subjects, with a return to baseline condition, was used to assess the effects of treatment that consisted of reinforcement, video modeling, in-vivo modeling, and prompting. During training trials, reinforcement in the form of verbal praise and tokens was delivered contingent upon appropriate affective responding. Error correction procedures differed for each treatment condition. In the in-vivo modeling condition, the therapist used modeling and verbal prompting. In the video modeling condition, video segments of a peer modeling the correct response and verbal prompting by the therapist were used as corrective procedures. Participants received treatment in three categories of affective behavior--sympathy, appreciation, and disapproval--and were presented with a total of 140 different scenarios. The study demonstrated that both treatments--video modeling and in-vivo modeling--systematically increased appropriate affective responding in all response categories for the three participants. Additionally, treatment effects generalized across responses to untrained scenarios, the child's mother, new therapists, and time.

  1. Continuous Video Modeling to Assist with Completion of Multi-Step Home Living Tasks by Young Adults with Moderate Intellectual Disability

    Science.gov (United States)

    Mechling, Linda C.; Ayres, Kevin M.; Bryant, Kathryn J.; Foster, Ashley L.

    2014-01-01

    The current study evaluated a relatively new video-based procedure, continuous video modeling (CVM), to teach multi-step cleaning tasks to high school students with moderate intellectual disability. CVM in contrast to video modeling and video prompting allows repetition of the video model (looping) as many times as needed while the user completes…

  2. Use of Video Modeling to Teach Adolescents with an Intellectual Disability to Film Their Own Video Prompts

    Science.gov (United States)

    Shepley, Sally B.; Smith, Katie A.; Ayres, Kevin M.; Alexander, Jennifer L.

    2017-01-01

    Self-instruction for individuals with an intellectual disability can be viewed as a pivotal skill in that once learned this skill has collateral effects on future behaviors in various environments. This study used a multiple probe across participants design to evaluate video modeling to teach high school students with an intellectual disability to…

  3. Comparison of Static Picture and Video Prompting on the Performance of Cooking-Related Tasks by Students with Autism

    Science.gov (United States)

    Mechling, Linda C.; Gustafson, Melissa R.

    2009-01-01

    This study compared the effectiveness of static photographs and video prompts on the independent task performance of six young men with a diagnosis of autism. An adapted alternating-treatment design with baseline, comparison, withdrawal, and final treatment conditions was used to measure the percentage of cooking-related tasks completed…

  4. Observing Observers: Using Video to Prompt and Record Reflections on Teachers' Pedagogies in Four Regions of Canada

    Science.gov (United States)

    Reid, David A; Simmt, Elaine; Savard, Annie; Suurtamm, Christine; Manuel, Dominic; Lin, Terry Wan Jung; Quigley, Brenna; Knipping, Christine

    2015-01-01

    Regional differences in performance in mathematics across Canada prompted us to conduct a comparative study of middle-school mathematics pedagogy in four regions. We built on the work of Tobin, using a theoretical framework derived from the work of Maturana. In this paper, we describe the use of video as part of the methodology used. We used…

  5. Effects of video modeling with video feedback on vocational skills of adults with autism spectrum disorder.

    Science.gov (United States)

    English, Derek L; Gounden, Sadhana; Dagher, Richard E; Chan, Shu Fen; Furlonger, Brett E; Anderson, Angelika; Moore, Dennis W

    2017-11-01

    To examine the effectiveness of a video modeling (VM) with video feedback (VFB) intervention to teach vocational gardening skills to three adults with autism spectrum disorder (ASD). A multiple probe design across skills was used to assess the effects of the intervention on the three participants' ability to perform skills accurately. The use of VM with VFB led to improvements across skills for two of the participants. The third participant required video prompting (VP) for successful skill acquisition. Skill performance generalized across personnel and settings for two of the participants, but it was not assessed for the third. Skill performance maintained at follow-up for all three participants. Social validity data gathered from participants, parents, and co-workers were positive. These findings suggest that VM with VFB and VP with VFB were effective and socially acceptable interventions for teaching vocational gardening skills to young adults with ASD.

  6. Comparing the Effects of Commercially Available and Custom-Made Video Prompting for Teaching Cooking Skills to High School Students with Autism

    Science.gov (United States)

    Mechling, Linda C.; Ayres, Kevin M.; Foster, Ashley L.; Bryant, Kathryn J.

    2013-01-01

    The study compared the effects of using commercially available and custom-made video prompts on the completion of cooking recipes by four high school age males with a diagnosis of autism. An adapted alternating treatments design with continuous baseline, comparison, final treatment, and best treatment condition was used to compare the two…

  7. VBR video traffic models

    CERN Document Server

    Tanwir, Savera

    2014-01-01

    There has been a phenomenal growth in video applications over the past few years. An accurate traffic model of Variable Bit Rate (VBR) video is necessary for performance evaluation of a network design and for generating synthetic traffic that can be used for benchmarking a network. A large number of models for VBR video traffic have been proposed in the literature for different types of video in the past 20 years. Here, the authors have classified and surveyed these models and have also evaluated the models for H.264 AVC and MVC encoded video and discussed their findings.
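    As a toy illustration of the class of models such a survey covers, one of the earliest approaches treats the frame-size sequence of VBR video as a first-order autoregressive (AR(1)) process. The sketch below is a minimal version of that idea; the mean, coefficient, and noise level are illustrative assumptions, not parameters taken from the book.

```python
import random

def ar1_frame_sizes(n_frames, mean=20000.0, a=0.9, sigma=2000.0, seed=42):
    """Generate a synthetic VBR frame-size trace (in bytes) from an AR(1) model:
    X[t] = mean + a * (X[t-1] - mean) + Gaussian noise."""
    rng = random.Random(seed)
    sizes = [mean]
    for _ in range(n_frames - 1):
        nxt = mean + a * (sizes[-1] - mean) + rng.gauss(0.0, sigma)
        sizes.append(max(nxt, 0.0))  # a frame size cannot be negative
    return sizes

# A 1000-frame synthetic trace, usable as benchmarking input.
trace = ar1_frame_sizes(1000)
avg = sum(trace) / len(trace)
```

    A trace like this can feed a network simulator as synthetic traffic; real VBR video also shows scene-change and long-range-dependence effects that this simple AR(1) form does not capture, which is precisely why the literature surveyed here contains so many competing models.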

  8. Video Self-Modeling

    Science.gov (United States)

    Buggey, Tom; Ogle, Lindsey

    2012-01-01

    Video self-modeling (VSM) first appeared on the psychology and education stage in the early 1970s. The practical applications of VSM were limited by lack of access to tools for editing video, which is necessary for almost all self-modeling videos. Thus, VSM remained in the research domain until the advent of camcorders and VCR/DVD players and,…

  9. A Framework for Video Modeling

    NARCIS (Netherlands)

    Petkovic, M.; Jonker, Willem

    In recent years, research in video databases has increased greatly, but relatively little work has been done in the area of semantic content-based retrieval. In this paper, we present a framework for video modelling with emphasis on semantic content of video data. The video data model presented…

  10. No Reference Video-Quality-Assessment Model for Monitoring Video Quality of IPTV Services

    Science.gov (United States)

    Yamagishi, Kazuhisa; Okamoto, Jun; Hayashi, Takanori; Takahashi, Akira

    Service providers should monitor the quality of experience of a communication service in real time to confirm its status. To do this, we previously proposed a packet-layer model that can be used for monitoring the average video quality of typical Internet protocol television content using parameters derived from transmitted packet headers. However, it is difficult to monitor the video quality per user using the average video quality because video quality depends on the video content. To accurately monitor the video quality per user, a model that can be used for estimating the video quality per video content rather than the average video quality should be developed. Therefore, to take into account the impact of video content on video quality, we propose a model that calculates the difference in video quality between the video quality of the estimation-target video and the average video quality estimated using a packet-layer model. We first conducted extensive subjective quality assessments for different codecs and video sequences. We then modeled their characteristics based on parameters related to compression and packet loss. Finally, we verified the performance of the proposed model by applying it to unknown data sets different from the training data sets used for developing the model.

  11. Video Quality Prediction Models Based on Video Content Dynamics for H.264 Video over UMTS Networks

    Directory of Open Access Journals (Sweden)

    Asiya Khan

    2010-01-01

    The aim of this paper is to present video quality prediction models for objective, non-intrusive prediction of H.264 encoded video for all content types, combining parameters in both the physical and application layers over Universal Mobile Telecommunication System (UMTS) networks. In order to characterize the Quality of Service (QoS) level, a learning model based on the Adaptive Neural Fuzzy Inference System (ANFIS) and a second model based on non-linear regression analysis are proposed to predict the video quality in terms of the Mean Opinion Score (MOS). The objective of the paper is two-fold: first, to find the impact of QoS parameters on end-to-end video quality for H.264 encoded video; second, to develop learning models based on ANFIS and non-linear regression analysis to predict video quality over UMTS networks by considering the impact of radio link loss models. The loss models considered are 2-state Markov models. Both models are trained with a combination of physical and application layer parameters and validated with an unseen dataset. Preliminary results show that good prediction accuracy was obtained from both models. The work should help in the development of a reference-free video prediction model and QoS control methods for video over UMTS networks.
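    To illustrate what a regression model of MOS against a QoS parameter might look like, the sketch below fits a log-bitrate form by ordinary least squares on synthetic data. The functional form, the sample values, and the resulting coefficients are all assumptions for demonstration, not the paper's ANFIS or regression model or its data.

```python
import math

# Illustrative (bitrate_kbps, MOS) pairs -- synthetic, not from the paper.
samples = [(64, 2.1), (128, 2.8), (256, 3.4), (512, 3.9), (1024, 4.3)]

# Fit MOS = a + b * ln(bitrate) by ordinary least squares (closed form).
xs = [math.log(bitrate) for bitrate, _ in samples]
ys = [mos for _, mos in samples]
n = len(samples)
mx = sum(xs) / n
my = sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def predict_mos(bitrate_kbps):
    """Predict MOS from bitrate, clamped to the 1-5 MOS scale."""
    return min(5.0, max(1.0, a + b * math.log(bitrate_kbps)))
```

    A production model would add further predictors (packet loss rate, frame rate, content class) and a non-linear fitting procedure; the closed-form single-predictor fit above is only meant to make the "map QoS parameters to MOS" idea concrete.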

  12. Comprehensive overview of the Point-by-Point model of prompt emission in fission

    Energy Technology Data Exchange (ETDEWEB)

    Tudora, A. [University of Bucharest, Faculty of Physics, Bucharest Magurele (Romania); Hambsch, F.J. [European Commission, Joint Research Centre, Directorate G - Nuclear Safety and Security, Unit G2, Geel (Belgium)

    2017-08-15

    The investigation of prompt emission in fission is very important for understanding the fission process and for improving the quality of evaluated nuclear data required for new applications. In the last decade remarkable efforts were devoted both to the development of prompt emission models and to the experimental investigation of the properties of fission fragments and of prompt neutron and γ-ray emission. The accurate experimental data concerning prompt neutron multiplicity as a function of fragment mass and total kinetic energy for ²⁵²Cf(SF) and ²³⁵U(n,f), recently measured at JRC-Geel (as well as various other prompt emission data), allow a consistent and very detailed validation of the Point-by-Point (PbP) deterministic model of prompt emission. The PbP model results describe very well a large variety of experimental data, starting from the multi-parametric matrices of prompt neutron multiplicity ν(A,TKE) and γ-ray energy E_γ(A,TKE), which validate the model itself, passing through different average prompt emission quantities as a function of A (e.g., ν(A), E_γ(A), ⟨ε⟩(A)), and as a function of TKE (e.g., ν(TKE), E_γ(TKE)), up to the prompt neutron distribution P(ν) and the total average prompt neutron spectrum. The PbP model does not use free or adjustable parameters. To calculate the multi-parametric matrices it needs only data included in the Reference Input Parameter Library (RIPL) of the IAEA. To provide average prompt emission quantities as a function of A, as a function of TKE, and as total average quantities, the multi-parametric matrices are averaged over reliable experimental fragment distributions. The PbP results are also in agreement with the results of the Monte Carlo prompt emission codes FIFRELIN, CGMF and FREYA. The good description of a large variety of experimental data proves the capability of the PbP model to be used in nuclear data evaluations and its reliability in predicting prompt emission data for fissioning…

  13. Effectiveness of Instruction and Video Feedback on Staff's Use of Prompts and Children's Adaptive Responses during One-to-One Training in Children with Severe to Profound Intellectual Disability

    Science.gov (United States)

    van Vonderen, Annemarie; de Swart, Charlotte; Didden, Robert

    2010-01-01

    Although relatively many studies have addressed staff training and its effect on trainer behavior, the effects of staff training on trainees' adaptive behaviors have seldom been examined. We therefore assessed the effectiveness of staff training, consisting of instruction and video feedback, on (a) staff's response prompting, and (b) staff's trainer…

  14. Pragmatic comprehension of apology, request and refusal: An investigation on the effect of consciousness-raising video-driven prompts

    Directory of Open Access Journals (Sweden)

    Ali Derakhshan

    2014-01-01

    Recent research in interlanguage pragmatics (ILP) has substantiated that some aspects of pragmatics are amenable to instruction in the second or foreign language classroom. However, there are still controversies over the most conducive teaching approaches and the required materials. Therefore, this study investigated the relative effectiveness of consciousness-raising video-driven prompts on the comprehension of the three speech acts of apology, request, and refusal among seventy-eight (36 male and 42 female) upper-intermediate Persian learners of English, who were randomly assigned to four groups (metapragmatic, form-search, role play, and control). The four groups were exposed to 45 video vignettes (15 for each speech act) extracted from different episodes of the TV series Flash Forward and Stargate and the film Annie Hall over nine 60-minute sessions of instruction, held twice a week. Results of the multiple-choice discourse completion test (MDCT) indicated that learners' awareness of apologies, requests, and refusals benefited from all three types of instruction, but Tukey's HSD post hoc test showed that the metapragmatic group outperformed the other treatment groups and that the form-search group performed better than the role-play and control groups.

  15. Parent-implemented picture exchange communication system (PECS) training: an analysis of YouTube videos.

    Science.gov (United States)

    Jurgens, Anneke; Anderson, Angelika; Moore, Dennis W

    2012-01-01

    To investigate the integrity with which parents and carers implement PECS in naturalistic settings, utilizing a sample of videos obtained from YouTube. Twenty-one YouTube videos meeting selection criteria were identified. The videos were reviewed for instances of seven implementer errors and, where appropriate, presence of a physical prompter. Forty-three per cent of videos and 61% of PECS exchanges contained errors in parent implementation of specific teaching strategies of the PECS training protocol. Vocal prompts, incorrect error correction and the absence of timely reinforcement occurred most frequently, while gestural prompts, insistence on speech, incorrect use of the open hand prompt and not waiting for the learner to initiate occurred less frequently. Results suggest that parents engage in vocal prompting and incorrect use of the 4-step error correction strategy when using PECS with their children, errors likely to result in prompt dependence.

  16. Designing MOOCs videos - A prompt for teachers and advisers' pedagogical development?

    OpenAIRE

    Van de Poël, Jean-François; Martin, Pierre; Harcq, Samuel; Crepin, Thibault; Verpoorten, Dominique

    2017-01-01

    The use of online video is not new in education but it undergoes a new momentum. What is new is not the medium but its scale of deployment and availability (Van Gog, 2013). MOOCs have played their part in this rise since these new modes of online learning deliver learning through a series of (short) videos. Both for teachers and Teaching & Learning Centers, like IFRES at the University of Liège, designing, shooting, editing, ornamenting, and locating such videos in the instructional design of...

  17. Using Video Modeling with Voiceover Instruction Plus Feedback to Train Staff to Implement Direct Teaching Procedures.

    Science.gov (United States)

    Giannakakos, Antonia R; Vladescu, Jason C; Kisamore, April N; Reeve, Sharon A

    2016-06-01

    Direct teaching procedures are often an important part of early intensive behavioral intervention for consumers with autism spectrum disorder. In the present study, a video model with voiceover (VMVO) instruction plus feedback was evaluated to train three staff trainees to implement a most-to-least (MTL) direct teaching procedure. Probes for generalization were conducted with untrained direct teaching procedures (i.e., least-to-most, prompt delay) and with an actual consumer. The results indicated that VMVO plus feedback was effective in training the staff trainees to implement the MTL procedure. Although additional feedback was required for the staff trainees to show mastery of the untrained direct teaching procedures (i.e., least-to-most and prompt delay) and with an actual consumer, moderate to high levels of generalization were observed.

  18. Attention modeling for video quality assessment

    DEFF Research Database (Denmark)

    You, Junyong; Korhonen, Jari; Perkis, Andrew

    2010-01-01

    The local quality of a video frame is derived from visual attention modeling and quality variations over frames. Saliency, motion, and contrast information are taken into account in modeling visual attention, which is then integrated into image quality metrics (IQMs) to calculate the local quality of a video frame; the overall quality is an average between the global quality and the local quality. Experimental results demonstrate that the combination of the global quality and the local quality outperforms both the global quality and the local quality alone, as well as other quality models, in video quality assessment. In addition, the proposed video quality modeling algorithm can improve the performance of image quality metrics on video quality assessment compared to the normal averaged spatiotemporal pooling scheme.

  19. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    Science.gov (United States)

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

Since digital video is in widespread use nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra and inter prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective mapping of these artifacts to subjective quality estimation is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur, and jerkiness), in contrast to the bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metrics is compared against popular full-reference objective methods as defined by VQEG.

  20. KEEFEKTIFAN PENERAPAN MODEL PEMBELAJARAN KOOPERATIF TIPE PROBING-PROMPTING DENGAN PENILAIAN PRODUK

    Directory of Open Access Journals (Sweden)

    Himmatul Ulya

    2012-06-01

Full Text Available The purpose of this study was to determine whether cooperative learning of the probing-prompting type with product assessment, and probing-prompting cooperative learning on the topic of circle circumference and area, could reach learning mastery and outperform expository teaching for grade VIII students. The population was the grade VIII students of MTs. Nurussalam Gebog Kudus in the 2011/2012 school year. Classes VIIIA and VIIIC were randomly selected as experiment classes 1 and 2, and class VIIIB as the control class. The results showed mean learning outcomes of 79.91 for experiment class 1, 73.21 for experiment class 2, and 66.10 for the control class. The mastery test showed that the students in the experiment classes reached learning mastery (both individually and classically). The ANOVA test gave Sig. = 0.000 < 0.05, indicating a difference in means, and a follow-up Scheffé test showed significant differences between the means of each class. The conclusions are that probing-prompting cooperative learning accompanied by product assessment and probing-prompting cooperative learning on circle circumference and area can both reach learning mastery, that the probing-prompting cooperative model with product assessment is better than probing-prompting alone, and that probing-prompting cooperative learning is better than expository teaching.

  1. Does the Model Matter? Comparing Video Self-Modeling and Video Adult Modeling for Task Acquisition and Maintenance by Adolescents with Autism Spectrum Disorders

    Science.gov (United States)

    Cihak, David F.; Schrader, Linda

    2009-01-01

    The purpose of this study was to compare the effectiveness and efficiency of learning and maintaining vocational chain tasks using video self-modeling and video adult modeling instruction. Four adolescents with autism spectrum disorders were taught vocational and prevocational skills. Although both video modeling conditions were effective for…

  2. Procedures and compliance of a video modeling applied behavior analysis intervention for Brazilian parents of children with autism spectrum disorders.

    Science.gov (United States)

    Bagaiolo, Leila F; Mari, Jair de J; Bordini, Daniela; Ribeiro, Tatiane C; Martone, Maria Carolina C; Caetano, Sheila C; Brunoni, Decio; Brentani, Helena; Paula, Cristiane S

    2017-07-01

Video modeling using applied behavior analysis techniques is one of the most promising and cost-effective ways to improve the social skills of children with autism spectrum disorder through training their parents. The main objectives were: (1) to elaborate/describe videos to improve eye contact and joint attention, and to decrease disruptive behaviors of children with autism spectrum disorder, (2) to describe a low-cost parental training intervention, and (3) to assess participants' compliance. This is a descriptive study of a clinical trial for children with autism spectrum disorder. The parental training intervention was delivered over 22 weeks based on video modeling. Parents with at least 8 years of schooling who had a child with autism spectrum disorder aged 3 to 6 years with an IQ lower than 70 were invited to participate. A total of 67 parents fulfilled the study criteria and were randomized into two groups: 34 in the intervention group and 33 as controls. In all, 14 videos were recorded covering management of disruptive behaviors, prompting hierarchy, preference assessment, and acquisition of better eye contact and joint attention. Compliance varied as follows: good 32.4%, reasonable 38.2%, low 5.9%, and no compliance 23.5%. Video modeling parental training seems a promising, feasible, and low-cost way to deliver care for children with autism spectrum disorder, particularly for populations with scarce treatment resources.

  3. Video modeling by experts with video feedback to enhance gymnastics skills.

    Science.gov (United States)

    Boyer, Eva; Miltenberger, Raymond G; Batsche, Catherine; Fogel, Victoria

    2009-01-01

    The effects of combining video modeling by experts with video feedback were analyzed with 4 female competitive gymnasts (7 to 10 years old) in a multiple baseline design across behaviors. During the intervention, after the gymnast performed a specific gymnastics skill, she viewed a video segment showing an expert gymnast performing the same skill and then viewed a video replay of her own performance of the skill. The results showed that all gymnasts demonstrated improved performance across three gymnastics skills following exposure to the intervention.

  4. Guerrilla Video: Adjudicating the Credible and the Cool

    Science.gov (United States)

    Sullivan, Patricia; Fadde, Peter Jae

    2010-01-01

    Because video on the web has spread almost virally, video crafted out of an amateur aesthetic has contributed to a disruption of professional communication economies as it prompts us to ask: Can we use digital video to make work-related communication cool? Professional writing pedagogies are beginning to respond to new student expectations about…

  5. Geographic Video 3d Data Model And Retrieval

    Science.gov (United States)

    Han, Z.; Cui, C.; Kong, Y.; Wu, H.

    2014-04-01

Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly, and this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction and are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory, and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with the video contents. The raw spatial information is synthesized into point, line, polygon, and solid geometries according to camcorder parameters such as focal length and angle of view. For video segments and video frames, we defined three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can be queried by computing the spatial relations between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView, and VFFovCone. We designed the query methods in detail using the structured query language (SQL). The experiments indicate that the model is a multi-objective, integrated, loosely coupled, flexible, and extensible data model for the management of geographic stereo video.

  6. Selecting salient frames for spatiotemporal video modeling and segmentation.

    Science.gov (United States)

    Song, Xiaomu; Fan, Guoliang

    2007-12-01

    We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach that involves a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevancy of a video frame to the GMM-based spatiotemporal video modeling. This helps us use a small set of salient frames to facilitate the model training by reducing data redundancy and irrelevance. A modified expectation maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimation for video segmentation. Moreover, it is interesting to find that frame saliency can imply some object behaviors. This makes the proposed method also applicable to other frame-related video analysis tasks, such as key-frame extraction, video skimming, etc. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.
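The modeling loop described above can be sketched in miniature. The toy below fits a two-component, one-dimensional Gaussian mixture with plain EM and scores each frame by the mean log-likelihood of its samples, a simple stand-in for the paper's frame-saliency estimate; the actual method works in a six-dimensional spatiotemporal feature space and estimates saliency jointly inside a modified EM.

```python
import math

def em_gmm_1d(data, iters=60):
    """Toy EM for a two-component 1-D Gaussian mixture."""
    mus = [min(data), max(data)]          # deterministic initialization
    sigmas = [1.0, 1.0]
    weights = [0.5, 0.5]
    for _ in range(iters):
        # E-step: component responsibilities for every sample.
        resp = []
        for x in data:
            p = [w * math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
                 for w, m, s in zip(weights, mus, sigmas)]
            z = sum(p) or 1e-300
            resp.append([pi / z for pi in p])
        # M-step: re-estimate weights, means, and standard deviations.
        for j in range(2):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(data)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-3)
    return weights, mus, sigmas

def frame_saliency(frame, weights, mus, sigmas):
    """Mean log-likelihood of a frame's samples under the mixture -- a
    stand-in for the paper's saliency estimate, not its exact estimator."""
    ll = 0.0
    for x in frame:
        p = sum(w * math.exp(-(x - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
                for w, m, s in zip(weights, mus, sigmas))
        ll += math.log(p + 1e-300)
    return ll / len(frame)
```

Frames with the highest saliency scores would then be kept to refine the mixture, mirroring the salient-frame selection step in the record.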

  7. 4K Video Traffic Prediction using Seasonal Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    D. R. Marković

    2017-06-01

Full Text Available From the perspective of the average viewer, high-definition video streams such as HD (High Definition) and UHD (Ultra HD) are increasing their internet presence year over year. This is not surprising given the expansion of HD streaming services such as YouTube, Netflix, etc. Therefore, high-definition video streams are starting to challenge network resource allocation with their bandwidth requirements and statistical characteristics. The analysis and modeling of this demanding video traffic is of essential importance for better quality-of-service and quality-of-experience support. In this paper we use an easy-to-apply statistical model for prediction of 4K video traffic: seasonal autoregressive modeling is applied to the prediction of 4K video traffic encoded with HEVC (High Efficiency Video Coding). Analysis and modeling were performed within the R programming environment using over 17,000 high-definition video frames. It is shown that the proposed methodology provides good accuracy in high-definition video traffic modeling.
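The seasonal autoregressive idea can be sketched in pure Python by fitting x[t] = a·x[t-1] + b·x[t-season] + c with ordinary least squares and predicting one step ahead. A single non-seasonal and a single seasonal lag are an illustrative simplification of the full seasonal ARIMA models used in the paper.

```python
def fit_seasonal_ar(series, season):
    """Least-squares fit of x[t] = a*x[t-1] + b*x[t-season] + c."""
    rows, ys = [], []
    for t in range(season, len(series)):
        rows.append([series[t - 1], series[t - season], 1.0])
        ys.append(series[t])
    # Normal equations (X^T X) beta = X^T y, solved by Gaussian elimination.
    n = 3
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(n)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for col in range(i, n):
                A[r][col] -= f * A[i][col]
            b[r] -= f * b[i]
    beta = [0.0] * n
    for i in reversed(range(n)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]
    return beta

def predict_next(series, season, beta):
    a, b, c = beta
    return a * series[-1] + b * series[-season] + c

# Synthetic noise-free series following the assumed recurrence.
series = [1.0, 2.0, 3.0, 4.0]
for _ in range(26):
    series.append(0.5 * series[-1] + 0.4 * series[-4] + 1.0)
a, b, c = fit_seasonal_ar(series, season=4)  # recovers 0.5, 0.4, 1.0
```

On noise-free data generated by the recurrence itself, the least-squares fit recovers the coefficients up to numerical precision; real traffic traces would of course carry noise, which is why the paper works with the full SARIMA machinery.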

  8. PENGARUH MODEL PEMBELAJARAN PROBING-PROMPTING BERBANTUAN LEMBAR KERJA BERSTRUKTUR TERHADAP HASIL BELAJAR

    Directory of Open Access Journals (Sweden)

    Ajeng Diasputri

    2016-07-01

Full Text Available This study aimed to determine the influence of the Probing-Prompting learning model assisted by structured worksheets on students' learning outcomes at a high school, on the topic of hydrocarbons and petroleum. The sample was selected using a purposive sampling technique. The experiment class was taught with the probing-prompting model assisted by structured worksheets, while the control class used conventional methods. After the two groups received their different treatments and took a post-test, the learning outcomes of the experiment class were better than those of the control class, with mean scores of 77 and 70, respectively. Based on the test of the difference in mean learning outcomes, t_count (4.074) > t_table (1.669), so the mean learning outcome of the experiment group was better than that of the control group. In the mastery test, the mastery percentage reached 91.18% in the experiment class and 59.38% in the control class. The correlation test yielded a biserial coefficient of 0.5638. From these results, it can be concluded that the probing-prompting learning model assisted by structured worksheets influenced students' learning outcomes on the topic of hydrocarbons and petroleum, contributing 32%.

  9. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    Science.gov (United States)

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

In this paper, we investigate visual attention modeling for stereoscopic video from the following two aspects. First, we build a large-scale eye-tracking database as a benchmark for visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated from the motion contrast of the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated using the laws of proximity, continuity, and common fate from Gestalt theory. Experimental results show that the proposed method outperforms state-of-the-art stereoscopic video saliency detection models on the large-scale eye-tracking database we built and on one other database (DML-ITRACK-3D).

  10. Comparison of the Effects of Video Modeling with Narration vs. Video Modeling on the Functional Skill Acquisition of Adolescents with Autism

    Science.gov (United States)

    Smith, Molly; Ayres, Kevin; Mechling, Linda; Smith, Katie

    2013-01-01

    The purpose of this study was to compare the effects of two forms of video modeling: video modeling that includes narration (VMN) and video models without narration (VM) on skill acquisition of four adolescent boys with a primary diagnosis of autism enrolled in an Extended School Year (ESY) summer program. An adapted alternating treatment design…

  11. The Supercritical Pile Model: Prompt Emission Across the Electromagnetic Spectrum

    Science.gov (United States)

    Kazanas, Demos; Mastichiadis, A.

    2008-01-01

The "Supercritical Pile" GRB model is an economical model that provides the dissipation necessary to explosively convert the energy stored in relativistic protons in the blast wave of a GRB into radiation; at the same time it produces spectra whose luminosity peaks at 1 MeV in the lab frame, a result of the kinematics of the proton-photon pair-production reaction that effects the conversion of proton energy to radiation. We outline the fundamental notions behind the "Supercritical Pile" model, discuss the resulting spectra of the prompt emission from optical to gamma-ray energies of order Γ² m_e c² (Γ is the Lorentz factor of the blast wave), present even in the absence of an accelerated particle distribution, and compare our results to bursts that cover this entire energy range. Particular emphasis is given to the emission in the GLAST energy range, in both the prompt and the afterglow stages of the burst.

  12. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

Full Text Available Wireless and Internet video applications are inherently subject to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  13. Learning from video modeling examples: Does gender matter?

    NARCIS (Netherlands)

    Hoogerheide, V.; Loyens, S.M.M.; van Gog, T.

    2016-01-01

    Online learning from video modeling examples, in which a human model demonstrates and explains how to perform a learning task, is an effective instructional method that is increasingly used nowadays. However, model characteristics such as gender tend to differ across videos, and the model-observer

  14. A Bulk Comptonization Model for the Prompt GRB Emission

    Science.gov (United States)

    Kazanas, Demos; Mastichiadis, A.

    2010-01-01

The "Supercritical Pile" is a very economical GRB model that provides for the efficient conversion of the energy stored in the protons of a Relativistic Blast Wave (RBW) into radiation, and at the same time produces, in the prompt GRB phase, even in the absence of any particle acceleration, a spectral peak at energy approximately 1 MeV. We extend this model to include the evolution of the RBW Lorentz factor Γ and thus follow its spectral and temporal features into the early GRB afterglow stage. One of the novel features of the present treatment is the inclusion of the feedback of the GRB-produced radiation on the evolution of Γ with radius. This feedback and the presence of kinematic and dynamic thresholds in the model are sources of potentially very rich time evolution, which we have begun to explore. In particular, one can in this way obtain afterglow light curves with steep decays followed by the more conventional flatter afterglow slopes, while at the same time preserving the desirable features of the model, i.e. the well-defined relativistic electron source and radiative processes that produce the proper peak in the νF_ν spectra. In this note we present the results for a specific set of parameters of this model, with emphasis on the multiwavelength prompt emission and the transition to the early afterglow.

  15. Film grain noise modeling in advanced video coding

    Science.gov (United States)

    Oh, Byung Tae; Kuo, C.-C. Jay; Sun, Shijun; Lei, Shawmin

    2007-01-01

    A new technique for film grain noise extraction, modeling and synthesis is proposed and applied to the coding of high definition video in this work. The film grain noise is viewed as a part of artistic presentation by people in the movie industry. On one hand, since the film grain noise can boost the natural appearance of pictures in high definition video, it should be preserved in high-fidelity video processing systems. On the other hand, video coding with film grain noise is expensive. It is desirable to extract film grain noise from the input video as a pre-processing step at the encoder and re-synthesize the film grain noise and add it back to the decoded video as a post-processing step at the decoder. Under this framework, the coding gain of the denoised video is higher while the quality of the final reconstructed video can still be well preserved. Following this idea, we present a method to remove film grain noise from image/video without distorting its original content. Besides, we describe a parametric model containing a small set of parameters to represent the extracted film grain noise. The proposed model generates the film grain noise that is close to the real one in terms of power spectral density and cross-channel spectral correlation. Experimental results are shown to demonstrate the efficiency of the proposed scheme.

  16. Intelligent Model for Video Surveillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

Full Text Available A video surveillance system senses and tracks threatening events in the real-time environment. It guards against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance systems have become key to addressing problems in public security. They are mostly deployed on IP-based networks, so all the security threats that exist for IP-based applications may also be threats to video surveillance applications. As a result, cybercrime, illegal video access, mishandling of videos, and so on may increase. Hence, in this paper an intelligent model is proposed to secure the video surveillance system, ensuring safety and providing secured access to video.

  17. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    Science.gov (United States)

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the surgical workflow model (SWM) to support intuitive cooperation with surgeons. Generating a robust and reliable SWM requires a large amount of training data. However, training data collected by physically recording surgical operations is often limited, and data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost and labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy. The generated workflow was evaluated on 4 web-retrieved videos and 4 operating-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. The satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Investigation of some approximations used in promptly emitted particle models

    International Nuclear Information System (INIS)

    Leray, S.; La Rana, G.; Lucas, R.; Ngo, C.; Barranco, M.; Pi, M.; Vinas, X.

    1984-01-01

We investigate three effects that can be taken into account in a model for promptly emitted particles: the Pauli blocking, the velocity of the window separating the two ions with respect to each of the fragments, and the spatial extension of the window.

  19. Learning from Video Modeling Examples: Does Gender Matter?

    Science.gov (United States)

    Hoogerheide, Vincent; Loyens, Sofie M. M.; van Gog, Tamara

    2016-01-01

    Online learning from video modeling examples, in which a human model demonstrates and explains how to perform a learning task, is an effective instructional method that is increasingly used nowadays. However, model characteristics such as gender tend to differ across videos, and the model-observer similarity hypothesis suggests that such…

  20. Learning from video modeling examples: does gender matter?

    NARCIS (Netherlands)

    V. Hoogerheide (Vincent); S.M.M. Loyens (Sofie); T.A.J.M. van Gog (Tamara)

    2016-01-01

Online learning from video modeling examples, in which a human model demonstrates and explains how to perform a learning task, is an effective instructional method that is increasingly used nowadays. However, model characteristics such as gender tend to differ across videos, and the

  1. Learning to Swim Using Video Modelling and Video Feedback within a Self-Management Program

    Science.gov (United States)

    Lao, So-An; Furlonger, Brett E.; Moore, Dennis W.; Busacca, Margherita

    2016-01-01

Although many adults who cannot swim are primarily interested in learning through direct coaching, there are options that focus on self-directed learning. As an alternative, a self-management program combined with video modelling, video feedback, and high-quality, affordable video technology was used to assess its effectiveness in assisting an…

  2. Common and Innovative Visuals: A sparsity modeling framework for video.

    Science.gov (United States)

    Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder

    2014-05-02

Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depicts the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework as CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
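The common/innovative decomposition can be illustrated with a toy sketch in which frames are flat lists of pixel values, the common frame is the per-pixel median of the segment, and the innovative frames are the residuals. The median-based split is an illustrative stand-in; the paper estimates both components jointly with compressed-sensing techniques.

```python
import statistics

def decompose(frames):
    """Split a segment into a common frame (per-pixel median here) and
    per-frame innovative residuals."""
    common = [statistics.median(pixels) for pixels in zip(*frames)]
    innovations = [[p - c for p, c in zip(frame, common)] for frame in frames]
    return common, innovations

def innovation_energy(innovation):
    # Large energy suggests dynamic content or a scene-change boundary.
    return sum(v * v for v in innovation)

# Three 4-pixel frames sharing a static background in the first two pixels.
frames = [[5, 5, 0, 0], [5, 5, 1, 0], [5, 5, 0, 1]]
common, innovations = decompose(frames)
print(common)                                       # -> [5, 5, 0, 0]
print([innovation_energy(i) for i in innovations])  # -> [0, 1, 1]
```

A spike in innovation energy across consecutive frames can then be thresholded to mark a scene-change boundary, mirroring the use of CIV for scene change detection described above.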

  3. Learning a Continuous-Time Streaming Video QoE Model.

    Science.gov (United States)

    Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C

    2018-05-01

    Over-the-top adaptive video streaming services are frequently impacted by fluctuating network conditions that can lead to rebuffering events (stalling events) and sudden bitrate changes. These events visually impact video consumers' quality of experience (QoE) and can lead to consumer churn. The development of models that can accurately predict viewers' instantaneous subjective QoE under such volatile network conditions could potentially enable the more efficient design of quality-control protocols for media-driven services, such as YouTube, Amazon, Netflix, and so on. However, most existing models only predict a single overall QoE score on a given video and are based on simple global video features, without accounting for relevant aspects of human perception and behavior. We have created a QoE evaluator, called the time-varying QoE Indexer, that accounts for interactions between stalling events, analyzes the spatial and temporal content of a video, predicts the perceptual video quality, models the state of the client-side data buffer, and consequently predicts continuous-time quality scores that agree quite well with human opinion scores. The new QoE predictor also embeds the impact of relevant human cognitive factors, such as memory and recency, and their complex interactions with the video content being viewed. We evaluated the proposed model on three different video databases and attained standout QoE prediction performance.

  4. Refinements in the Los Alamos model of the prompt fission neutron spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Madland, D.G., E-mail: dgm@lanl.gov; Kahler, A.C.

    2017-01-15

This paper presents a number of refinements to the original Los Alamos model of the prompt fission neutron spectrum and average prompt neutron multiplicity as derived in 1982. The four refinements are due to new measurements of the spectrum and related fission observables, many of which were not available in 1982. They are also due to a number of detailed studies and comparisons of the model with previous and present experimental results, including not only the differential spectrum but also integral cross sections measured in the field of the differential spectrum. The four refinements are (a) separate neutron contributions in binary fission, (b) departure from statistical equilibrium at scission, (c) fission-fragment nuclear level-density models, and (d) center-of-mass anisotropy. With these refinements, for the first time, good agreement has been obtained for both differential and integral measurements using the same Los Alamos model spectrum.

  5. The Technological Barriers of Using Video Modeling in the Classroom

    Science.gov (United States)

    Marino, Desha; Myck-Wayne, Janice

    2015-01-01

    The purpose of this investigation is to identify the technological barriers teachers encounter when attempting to implement video modeling in the classroom. Video modeling is an emerging evidence-based intervention method used with individuals with autism. Research has shown the positive effects video modeling can have on its recipients. Educators…

  6. Dynamic Textures Modeling via Joint Video Dictionary Learning.

    Science.gov (United States)

    Wei, Xian; Li, Yuanxiang; Shen, Hao; Chen, Fang; Kleinsteuber, Martin; Wang, Zhongfeng

    2017-04-06

Video representation is an important and challenging task in the computer vision community. In this paper, we consider the problem of modeling and classifying video sequences of dynamic scenes, which can be modeled in a dynamic textures (DT) framework. First, we assume that the image frames of a moving scene can be modeled as a Markov random process. We propose a sparse coding framework, named joint video dictionary learning (JVDL), to model a video adaptively. By treating the sparse coefficients of image frames over a learned dictionary as the underlying "states", we learn an efficient and robust linear transition matrix between two adjacent frames of sparse events in the time series. Hence, a dynamic scene sequence is represented by an appropriate transition matrix associated with a dictionary. In order to ensure the stability of JVDL, we impose several constraints on the transition matrix and dictionary. The developed framework is able to capture the dynamics of a moving scene by exploring both the sparse properties and the temporal correlations of consecutive video frames. Moreover, the learned JVDL parameters can be used for various DT applications, such as DT synthesis and recognition. Experimental results demonstrate the strong competitiveness of the proposed JVDL approach in comparison with state-of-the-art video representation methods. In particular, it performs significantly better in DT synthesis and recognition on heavily corrupted data.
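The state-transition estimate at the core of JVDL can be sketched for two-dimensional states: given a sequence of state vectors, solve the least-squares problem s[t+1] ≈ A·s[t] in closed form via A = (Σ y xᵀ)(Σ x xᵀ)⁻¹. In JVDL the states are sparse codes over a learned dictionary and the fit carries stability constraints; plain vectors and an unconstrained fit keep this sketch self-contained.

```python
def fit_transition(states):
    """Least-squares linear transition A with s[t+1] ~= A s[t], 2-D states."""
    X, Y = states[:-1], states[1:]
    # Accumulate Y X^T and X X^T (both 2x2).
    yxt = [[sum(y[i] * x[j] for x, y in zip(X, Y)) for j in range(2)] for i in range(2)]
    xxt = [[sum(x[i] * x[j] for x in X) for j in range(2)] for i in range(2)]
    # Invert the 2x2 matrix X X^T explicitly.
    det = xxt[0][0] * xxt[1][1] - xxt[0][1] * xxt[1][0]
    inv = [[xxt[1][1] / det, -xxt[0][1] / det],
           [-xxt[1][0] / det, xxt[0][0] / det]]
    # A = (Y X^T)(X X^T)^{-1}
    return [[sum(yxt[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Noise-free trajectory generated by a known transition matrix.
A_true = [[0.9, -0.2], [0.1, 0.8]]
state = [1.0, 0.5]
states = [state]
for _ in range(12):
    state = [A_true[0][0] * state[0] + A_true[0][1] * state[1],
             A_true[1][0] * state[0] + A_true[1][1] * state[1]]
    states.append(state)
A_hat = fit_transition(states)  # recovers A_true up to numerical precision
```

Once fitted, the transition matrix can be iterated forward to synthesize new states, which is the mechanism behind the DT synthesis application mentioned in the abstract.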

  7. Hierarchical Context Modeling for Video Event Recognition.

    Science.gov (United States)

    Wang, Xiaoyang; Ji, Qiang

    2016-10-11

    Current video event recognition research remains largely target-centered. For real-world surveillance videos, target-centered event recognition faces great challenges due to large intra-class target variation, limited image resolution, and poor detection and tracking results. To mitigate these challenges, we introduce a context-augmented video event recognition approach. Specifically, we explicitly capture different types of contexts at three levels: the image level, the semantic level, and the prior level. At the image level, we introduce two types of contextual features, appearance context features and interaction context features, to capture the appearance of context objects and their interactions with the target objects. At the semantic level, we propose a deep model based on the deep Boltzmann machine to learn event object representations and their interactions. At the prior level, we utilize two types of prior-level contexts: scene priming and dynamic cueing. Finally, we introduce a hierarchical context model that systematically integrates the contextual information at the different levels. Through the hierarchical context model, contexts at different levels jointly contribute to event recognition. We evaluate the hierarchical context model for event recognition on benchmark surveillance video datasets. Results show that incorporating contexts at each level improves event recognition performance, and that jointly integrating all three levels of context through our hierarchical model achieves the best performance.

  8. Comparing Video Modeling and Graduated Guidance Together and Video Modeling Alone for Teaching Role Playing Skills to Children with Autism

    Science.gov (United States)

    Akmanoglu, Nurgul; Yanardag, Mehmet; Batu, E. Sema

    2014-01-01

    Teaching play skills is important for children with autism. The purpose of the present study was to compare effectiveness and efficiency of providing video modeling and graduated guidance together and video modeling alone for teaching role playing skills to children with autism. The study was conducted with four students. The study was conducted…

  9. User interface using a 3D model for video surveillance

    Science.gov (United States)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    In industrial surveillance and monitoring applications such as plant control or building security, fewer people are now required to carry out tasks quickly and precisely. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for such applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to provide such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for both live and playback video. We have also implemented and evaluated the display function, which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language for multi-purpose and intranet use of the 3D model.
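The synchronization idea above — recording camera field data like a parallel stream and reading it back in step with video frames — can be sketched as a timestamped lookup. The class and field names below are hypothetical, not the paper's actual data format.

```python
import bisect

class CameraFieldTrack:
    """Stores timestamped camera-field records (pan, tilt, zoom) and returns
    the record in effect at any video frame timestamp (illustrative sketch)."""
    def __init__(self):
        self._times = []
        self._fields = []

    def record(self, t, pan, tilt, zoom):
        # Records are appended in time order, like a stream alongside the video.
        self._times.append(t)
        self._fields.append((pan, tilt, zoom))

    def field_at(self, t):
        # Most recent record at or before time t, mimicking synchronous
        # playback of the camera-field stream with the video stream.
        i = bisect.bisect_right(self._times, t) - 1
        return self._fields[max(i, 0)]

track = CameraFieldTrack()
track.record(0.0, 10, 0, 1.0)
track.record(2.0, 45, 5, 2.0)
print(track.field_at(1.5))  # (10, 0, 1.0)
```

This lookup is O(log n) per frame, so it works for both live display and scrubbing through recorded video.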

  10. A comparison of peer video modeling and self video modeling to teach textual responses in children with autism.

    Science.gov (United States)

    Marcus, Alonna; Wilder, David A

    2009-01-01

    Peer video modeling was compared to self video modeling to teach 3 children with autism to respond appropriately to (i.e., identify or label) novel letters. A combination multiple baseline and multielement design was used to compare the two procedures. Results showed that all 3 participants met the mastery criterion in the self-modeling condition, whereas only 1 of the participants met the mastery criterion in the peer-modeling condition. In addition, the participant who met the mastery criterion in both conditions reached the criterion more quickly in the self-modeling condition. Results are discussed in terms of their implications for teaching new skills to children with autism.

  11. Learning from video modeling examples : Effects of seeing the human model's face

    NARCIS (Netherlands)

    Van Gog, Tamara; Verveer, Ilse; Verveer, Lise

    2014-01-01

    Video modeling examples in which a human(-like) model shows learners how to perform a task are increasingly used in education, as they have become very easy to create and distribute in e-learning environments. However, little is known about design guidelines to optimize learning from video modeling

  12. Integrating Usability Evaluation into Model-Driven Video Game Development

    OpenAIRE

    Fernandez , Adrian; Insfran , Emilio; Abrahão , Silvia; Carsí , José ,; Montero , Emanuel

    2012-01-01

    Part 3: Short Papers; International audience; The increasing complexity of video game development highlights the need for design and evaluation methods that enhance quality and reduce time and cost. In this context, Model-Driven Development approaches seem very promising, since a video game can be obtained by transforming platform-independent models into platform-specific models that can in turn be transformed into code. Although this approach is starting to be used for video game de...

  13. The Supercritical Pile GRB Model: The Prompt to Afterglow Evolution

    Science.gov (United States)

    Mastichiadis, A.; Kazanas, D.

    2009-01-01

    The "Supercritical Pile" is a very economical GRB model that provides for the efficient conversion of the energy stored in the protons of a relativistic blast wave (RBW) into radiation and, at the same time, produces in the prompt GRB phase, even in the absence of any particle acceleration, a spectral peak at an energy of approximately 1 MeV. We extend this model to include the evolution of the RBW Lorentz factor Γ and thus follow its spectral and temporal features into the early GRB afterglow stage. One of the novel features of the present treatment is the inclusion of the feedback of the GRB-produced radiation on the evolution of Γ with radius. This feedback and the presence of kinematic and dynamic thresholds in the model can be the sources of a rich time evolution which we have begun to explore. In particular, one may obtain afterglow light curves with steep decays followed by the more conventional flatter afterglow slopes, while at the same time preserving the desirable features of the model, i.e., the well-defined relativistic electron source and radiative processes that produce the proper peak in the νF_ν spectra. In this note we present the results for a specific set of model parameters, with emphasis on the multiwavelength prompt emission and the transition to the early afterglow.

  14. Teaching autistic children conversational speech using video modeling.

    Science.gov (United States)

    Charlop, M H; Milstein, J P

    1989-01-01

    We assessed the effects of video modeling on acquisition and generalization of conversational skills among autistic children. Three autistic boys observed videotaped conversations consisting of two people discussing specific toys. When criterion for learning was met, generalization of conversational skills was assessed with untrained topics of conversation; new stimuli (toys); unfamiliar persons, siblings, and autistic peers; and other settings. The results indicated that the children learned through video modeling, generalized their conversational skills, and maintained conversational speech over a 15-month period. Video modeling shows much promise as a rapid and effective procedure for teaching complex verbal skills such as conversational speech. PMID:2793634

  15. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual video frames from the particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device is discussed. The number of panoramas is much higher than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurement. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record an interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft PhotoScan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.
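Thinning a dense, short-baseline video into a minimal set of sharp frames, as discussed in the abstract above, can be sketched with a sharpness filter plus subsampling. The variance-of-Laplacian score and the fixed-step selection below are common stand-ins, not necessarily the metric or selection rule the paper actually develops.

```python
import numpy as np

def blur_score(gray):
    """Variance of a discrete Laplacian: low values suggest a blurred frame
    (a common sharpness proxy; the paper's actual metric may differ)."""
    lap = (-4 * gray[1:-1, 1:-1] + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return lap.var()

def select_keyframes(frames, step, min_sharpness):
    """Keep roughly every `step`-th frame, skipping frames below the
    sharpness threshold, to thin short-baseline video for 3D modelling."""
    keep = []
    for i in range(0, len(frames), step):
        if blur_score(frames[i]) >= min_sharpness:
            keep.append(i)
    return keep

rng = np.random.default_rng(1)
sharp = rng.normal(size=(32, 32))        # texture-rich frame
blurred = np.full((32, 32), 0.5)         # featureless (blur-like) frame
frames = [sharp, blurred, sharp, blurred, sharp]
print(select_keyframes(frames, step=1, min_sharpness=0.1))
```

Reducing the frame count this way cuts structure-from-motion processing time while keeping enough overlap for reliable matching.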

  16. Constructing Self-Modeling Videos: Procedures and Technology

    Science.gov (United States)

    Collier-Meek, Melissa A.; Fallon, Lindsay M.; Johnson, Austin H.; Sanetti, Lisa M. H.; Delcampo, Marisa A.

    2012-01-01

    Although widely recommended, evidence-based interventions are not regularly utilized by school practitioners. Video self-modeling is an effective and efficient evidence-based intervention for a variety of student problem behaviors. However, like many other evidence-based interventions, it is not frequently used in schools. As video creation…

  17. Video Modeling and Word Identification in Adolescents with Autism Spectrum Disorder

    Science.gov (United States)

    Morlock, Larissa; Reynolds, Jennifer L.; Fisher, Sycarah; Comer, Ronald J.

    2015-01-01

    Video modeling involves the learner viewing videos of a model demonstrating a target skill. According to the National Professional Development Center on Autism Spectrum Disorders (2011), video modeling is an evidenced-based intervention for individuals with Autism Spectrum Disorder (ASD) in elementary through middle school. Little research exists…

  18. Video-modelling to improve task completion in a child with autism.

    Science.gov (United States)

    Rayner, Christopher Stephen

    2010-01-01

    To evaluate the use of video modelling as an intervention for increasing task completion by individuals with autism who have high support needs, a 12-year-old boy with autism received a video modelling intervention on two routines (unpacking his bag and brushing his teeth). Use of the video modelling intervention led to rapid increases in the percentage of steps performed in the unpacking-his-bag sequence, and these gains generalized to packing his bag prior to departure from school. There was limited success in using the video modelling intervention to teach the participant to brush his teeth. Video modelling can be successfully applied to enhance daily functioning in a classroom environment for students with autism and high support needs.

  19. PENGARUH MODEL PEMBELAJARAN PROBING-PROMPTING BERBANTUAN LEMBAR KERJA BERSTRUKTUR TERHADAP HASIL BELAJAR

    Directory of Open Access Journals (Sweden)

    Ajeng Diasputri

    2015-11-01

    Full Text Available This study aimed to determine the influence of the Probing-prompting learning model assisted by structured worksheets on learning outcomes in a high school, specifically on the topic of Hydrocarbons and Petroleum. Sampling was done using a purposive sampling technique. In the experimental class, learning used Probing-prompting assisted by structured worksheets, while the control class used conventional methods. After the different treatments and the post-test, the students' learning outcomes in the experimental class were better than in the control class, with averages of 77 and 70 respectively. Based on the analysis of the difference in average learning outcomes, t_count (4.074) > t_table (1.669), so it can be concluded that the average learning outcome of the experimental class is better than that of the control class. In the mastery learning test, the mastery percentage of the experimental class reached 91.18%, while the control class reached 59.38%. The correlation test yielded a biserial coefficient of 0.5638. It can be concluded that the Probing-prompting learning model assisted by structured worksheets has a significant effect on student learning outcomes on the Hydrocarbons and Petroleum subject, with a contribution of 32%.
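The group comparison in the abstract above is a standard two-sample t test. The sketch below shows the pooled-variance form of that statistic; the class sizes 34 and 32 are consistent with the reported mastery percentages (31/34 = 91.18%, 19/32 = 59.38%), but the variances are hypothetical, so the resulting statistic is illustrative rather than a reproduction of the study's t = 4.074.

```python
import math

def pooled_t(mean1, mean2, var1, var2, n1, n2):
    """Two-sample pooled-variance t statistic for comparing class means."""
    sp2 = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Means 77 vs 70 and class sizes 34 and 32 follow the abstract;
# the sample variances (64, 81) are hypothetical.
t = pooled_t(77, 70, 64, 81, 34, 32)
print(t > 1.669)   # compare against the reported critical value
```

With the hypothetical variances the statistic still clears the reported critical value of 1.669, matching the direction of the study's conclusion.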

  20. Moderating Factors of Video-Modeling with Other as Model: A Meta-Analysis of Single-Case Studies

    Science.gov (United States)

    Mason, Rose A.; Ganz, Jennifer B.; Parker, Richard I.; Burke, Mack D.; Camargo, Siglia P.

    2012-01-01

    Video modeling with other as model (VMO) is a more practical method for implementing video-based modeling techniques, such as video self-modeling, which requires significantly more editing. Despite this, identification of contextual factors such as participant characteristics and targeted outcomes that moderate the effectiveness of VMO has not…

  1. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    Science.gov (United States)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is a rapid development in the techniques of the automated image based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and camera technology. One possibility is to use video imaging to create 3D reality based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply when compared to still image shooting in IBM techniques because the latter needs a thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short baseline video images and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments for modelling a building and a monument are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of details. Related to the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 - 5 cm when using video imaging, which is suitable for visualization, virtual museums and low detailed documentation.

  2. EFFICIENT USE OF VIDEO FOR 3D MODELLING OF CULTURAL HERITAGE OBJECTS

    Directory of Open Access Journals (Sweden)

    B. Alsadik

    2015-03-01

    Full Text Available Currently, there is a rapid development in the techniques of the automated image based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and camera technology. One possibility is to use video imaging to create 3D reality based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply when compared to still image shooting in IBM techniques because the latter needs a thorough planning and proficiency. However, one is faced with mainly three problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short baseline video images and blur effects due to camera shake on a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments for modelling a building and a monument are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to find out the final predicted accuracy and the model level of details. Related to the object complexity and video imaging resolution, the tests show an achievable average accuracy between 1 – 5 cm when using video imaging, which is suitable for visualization, virtual museums and low detailed documentation.

  3. Occupational Therapy and Video Modeling for Children with Autism

    Science.gov (United States)

    Becker, Emily Ann; Watry-Christian, Meghan; Simmons, Amanda; Van Eperen, Ashleigh

    2016-01-01

    This review explores the evidence in support of using video modeling for teaching children with autism. The process of implementing video modeling, the use of various perspectives, and a wide range of target skills are addressed. Additionally, several helpful clinician resources including handheld device applications, books, and websites are…

  4. A link between prompt optical and prompt gamma-ray emission in gamma-ray bursts.

    Science.gov (United States)

    Vestrand, W T; Wozniak, P R; Wren, J A; Fenimore, E E; Sakamoto, T; White, R R; Casperson, D; Davis, H; Evans, S; Galassi, M; McGowan, K E; Schier, J A; Asa, J W; Barthelmy, S D; Cummings, J R; Gehrels, N; Hullinger, D; Krimm, H A; Markwardt, C B; McLean, K; Palmer, D; Parsons, A; Tueller, J

    2005-05-12

    The prompt optical emission that arrives with the gamma-rays from a cosmic gamma-ray burst (GRB) is a signature of the engine powering the burst, the properties of the ultra-relativistic ejecta of the explosion, and the ejecta's interactions with the surroundings. Until now, only GRB 990123 had been detected at optical wavelengths during the burst phase. Its prompt optical emission was variable and uncorrelated with the prompt gamma-ray emission, suggesting that the optical emission was generated by a reverse shock arising from the ejecta's collision with surrounding material. Here we report prompt optical emission from GRB 041219a. It is variable and correlated with the prompt gamma-rays, indicating a common origin for the optical light and the gamma-rays. Within the context of the standard fireball model of GRBs, we attribute this new optical component to internal shocks driven into the burst ejecta by variations of the inner engine. The correlated optical emission is a direct probe of the jet isolated from the medium. The timing of the uncorrelated optical emission is strongly dependent on the nature of the medium.

  5. Prompt atmospheric neutrino fluxes: perturbative QCD models and nuclear effects

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Atri [Department of Physics, University of Arizona,1118 E. 4th St. Tucson, AZ 85704 (United States); Space sciences, Technologies and Astrophysics Research (STAR) Institute,Université de Liège,Bât. B5a, 4000 Liège (Belgium); Enberg, Rikard [Department of Physics and Astronomy, Uppsala University,Box 516, SE-75120 Uppsala (Sweden); Jeong, Yu Seon [Department of Physics and IPAP, Yonsei University,50 Yonsei-ro Seodaemun-gu, Seoul 03722 (Korea, Republic of); National Institute of Supercomputing and Networking, KISTI,245 Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Kim, C.S. [Department of Physics and IPAP, Yonsei University,50 Yonsei-ro Seodaemun-gu, Seoul 03722 (Korea, Republic of); Reno, Mary Hall [Department of Physics and Astronomy, University of Iowa,Iowa City, Iowa 52242 (United States); Sarcevic, Ina [Department of Physics, University of Arizona,1118 E. 4th St. Tucson, AZ 85704 (United States); Department of Astronomy, University of Arizona,933 N. Cherry Ave., Tucson, AZ 85721 (United States); Stasto, Anna [Department of Physics, 104 Davey Lab, The Pennsylvania State University,University Park, PA 16802 (United States)

    2016-11-28

    We evaluate the prompt atmospheric neutrino flux at high energies using three different frameworks for calculating the heavy quark production cross section in QCD: NLO perturbative QCD, k_T factorization including low-x resummation, and the dipole model including parton saturation. We use QCD parameters, the value for the charm quark mass and the range for the factorization and renormalization scales that provide the best description of the total charm cross section measured at fixed target experiments, at RHIC and at LHC. Using these parameters we calculate differential cross sections for charm and bottom production and compare with the latest data on forward charm meson production from LHCb at 7 TeV and at 13 TeV, finding good agreement with the data. In addition, we investigate the role of nuclear shadowing by including nuclear parton distribution functions (PDF) for the target air nucleus using two different nuclear PDF schemes. Depending on the scheme used, we find the reduction of the flux due to nuclear effects varies from 10% to 50% at the highest energies. Finally, we compare our results with the IceCube limit on the prompt neutrino flux, which is already providing valuable information about some of the QCD models.

  6. Parental modelling and prompting effects on acceptance of a novel fruit in 2-4-year-old children are dependent on children's food responsiveness.

    Science.gov (United States)

    Blissett, Jackie; Bennett, Carmel; Fogel, Anna; Harris, Gillian; Higgs, Suzanne

    2016-02-14

    Few children consume the recommended portions of fruit or vegetables. This study examined the effects of parental physical prompting and parental modelling in children's acceptance of a novel fruit (NF) and examined the role of children's food-approach and food-avoidance traits on NF engagement and consumption. A total of 120 caregiver-child dyads (fifty-four girls, sixty-six boys) participated in this study. Dyads were allocated to one of the following three conditions: physical prompting but no modelling, physical prompting and modelling or a modelling only control condition. Dyads ate a standardised meal containing a portion of a fruit new to the child. Parents completed measures of children's food approach and avoidance. Willingness to try the NF was observed, and the amount of the NF consumed was measured. Physical prompting but no modelling resulted in greater physical refusal of the NF. There were main effects of enjoyment of food and food fussiness on acceptance. Food responsiveness interacted with condition such that children who were more food responsive had greater NF acceptance in the prompting and modelling conditions in comparison with the modelling only condition. In contrast, children with low food responsiveness had greater acceptance in the modelling control condition than in the prompting but no modelling condition. Physical prompting in the absence of modelling is likely to be detrimental to NF acceptance. Parental use of physical prompting strategies, in combination with modelling of NF intake, may facilitate acceptance of NF, but only in food-responsive children. Modelling consumption best promotes acceptance in children with low food responsiveness.

  7. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    Science.gov (United States)

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with a slightly additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
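The reference-selection logic described in the abstract above can be illustrated with a greatly simplified sketch: maintain a background model, then code each block against whichever reference (modeled background or previous frame) leaves the smaller residual. The running-average background and the energy comparison below are illustrative stand-ins for the paper's actual background modeling and mode decision.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model (a stand-in for the paper's
    background modeled from the original input frames)."""
    return (1 - alpha) * bg + alpha * frame

def best_reference(block, bg_block, prev_block):
    """Pick the reference with the smaller residual energy: the modeled
    background (BRP-like) or the previous frame (conventional inter)."""
    r_bg = np.sum((block - bg_block) ** 2)
    r_prev = np.sum((block - prev_block) ** 2)
    return ("background", r_bg) if r_bg <= r_prev else ("previous", r_prev)

bg = np.zeros((8, 8))                          # modeled background
bg = update_background(bg, np.full((8, 8), 0.01))
prev = np.full((8, 8), 0.3)                    # e.g., a passing object last frame
cur = np.full((8, 8), 0.01)                    # back to (nearly) background
ref, _ = best_reference(cur, bg, prev)
print(ref)
```

For static background blocks the background reference wins, which is exactly why a long-term background reference improves compression on surveillance footage.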

  8. Non-intrusive Packet-Layer Model for Monitoring Video Quality of IPTV Services

    Science.gov (United States)

    Yamagishi, Kazuhisa; Hayashi, Takanori

    Developing a non-intrusive packet-layer model is required to passively monitor the quality of experience (QoE) during service. We propose a packet-layer model that can be used to estimate the video quality of IPTV using quality parameters derived from transmitted packet headers. The computational load of the model is lighter than that of the model that takes video signals and/or video-related bitstream information such as motion vectors as input. This model is applicable even if the transmitted bitstream information is encrypted because it uses transmitted packet headers rather than bitstream information. For developing the model, we conducted three extensive subjective quality assessments for different encoders and decoders (codecs), and video content. Then, we modeled the subjective video quality assessment characteristics based on objective features affected by coding and packet loss. Finally, we verified the model's validity by applying our model to unknown data sets different from training data sets used above.
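A packet-layer model of the kind sketched in the abstract above maps header-derived parameters (e.g., bitrate and packet-loss rate) to an estimated quality score without decoding the bitstream. The functional form and coefficients below are purely illustrative assumptions, not the fitted model from the paper.

```python
import math

def estimate_mos(bitrate_kbps, loss_rate, c1=3.8, c2=900.0, c3=25.0):
    """Toy packet-layer quality model: coding quality saturates with
    bitrate, and packet loss degrades it exponentially. The coefficients
    c1..c3 are illustrative, not values fitted to subjective data."""
    coding_quality = 1.0 + c1 * (1.0 - math.exp(-bitrate_kbps / c2))
    return 1.0 + (coding_quality - 1.0) * math.exp(-c3 * loss_rate)

print(round(estimate_mos(4000, 0.0), 2))   # high bitrate, no loss
print(round(estimate_mos(4000, 0.05), 2))  # same bitrate, 5% packet loss
```

Because the inputs come only from packet headers, such a model stays applicable even when the payload is encrypted, which is the key property the abstract highlights.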

  9. Effects of Video Modeling on Treatment Integrity of Behavioral Interventions

    Science.gov (United States)

    DiGennaro-Reed, Florence D.; Codding, Robin; Catania, Cynthia N.; Maguire, Helena

    2010-01-01

    We examined the effects of individualized video modeling on the accurate implementation of behavioral interventions using a multiple baseline design across 3 teachers. During video modeling, treatment integrity improved above baseline levels; however, teacher performance remained variable. The addition of verbal performance feedback increased…

  10. Prompt Radiation Protection Factors

    Science.gov (United States)

    2018-02-01

    Calculations use the following building design assumptions (Dillon and Kane 2017): all buildings are square, and interior mass is modeled as distributed foam columns. The report includes the methodology for calculating the prompt protection factors (PFs), with a description of the buildings' design and computational model, and results.

  11. Adherent Raindrop Modeling, Detection and Removal in Video.

    Science.gov (United States)

    You, Shaodi; Tan, Robby T; Kawakami, Rei; Mukaigawa, Yasuhiro; Ikeuchi, Katsushi

    2016-09-01

    Raindrops adhering to a windscreen or window glass can significantly degrade the visibility of a scene. Modeling, detecting, and removing raindrops will therefore benefit many computer vision applications, particularly outdoor surveillance systems and intelligent vehicle systems. In this paper, a method that automatically detects and removes adherent raindrops is introduced. The core idea is to exploit the local spatio-temporal derivatives of raindrops. To accomplish this, we first model adherent raindrops using the laws of physics, and detect raindrops based on these models in combination with the motion and intensity temporal derivatives of the input video. Having detected the raindrops, we remove them and restore the images based on the observation that some areas of raindrops completely occlude the scene, while other areas occlude it only partially. For partially occluding areas, we restore them by retrieving as much information of the scene as possible, namely by solving a blending function on the detected partially occluding areas using the temporal intensity derivative. For completely occluding areas, we recover them using a video completion technique. Experimental results on various real videos show the effectiveness of our method.
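One of the temporal cues mentioned above — adherent raindrops stay nearly static while the scene behind them moves — can be sketched as a threshold on the mean absolute temporal intensity derivative. This is a simplified stand-in for the paper's full spatio-temporal detection; the threshold value is an assumption.

```python
import numpy as np

def static_region_mask(frames, thresh=0.02):
    """Flag pixels whose mean absolute temporal intensity derivative is
    small: candidate adherent-raindrop regions in a moving scene
    (a simplified stand-in for the paper's spatio-temporal cues)."""
    stack = np.stack(frames)                       # (T, H, W)
    dt = np.abs(np.diff(stack, axis=0)).mean(axis=0)
    return dt < thresh

rng = np.random.default_rng(2)
T, H, W = 10, 16, 16
frames = [rng.uniform(size=(H, W)) for _ in range(T)]   # moving scene
for f in frames:
    f[4:8, 4:8] = 0.6                                   # static "raindrop" patch
mask = static_region_mask(frames)
print(mask[5, 5], mask[0, 0])
```

In practice this cue would be combined with appearance models and motion analysis, since genuinely static scene regions also have low temporal derivatives.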

  12. Prompt fission neutron spectra and average prompt neutron multiplicities

    International Nuclear Information System (INIS)

    Madland, D.G.; Nix, J.R.

    1983-01-01

    We present a new method for calculating the prompt fission neutron spectrum N(E) and the average prompt neutron multiplicity ν̄_p as functions of the fissioning nucleus and its excitation energy. The method is based on standard nuclear evaporation theory and takes into account (1) the motion of the fission fragments, (2) the distribution of fission-fragment residual nuclear temperature, (3) the energy dependence of the cross section σ_c for the inverse process of compound-nucleus formation, and (4) the possibility of multiple-chance fission. We use a triangular distribution in residual nuclear temperature based on the Fermi-gas model. This leads to closed expressions for N(E) and ν̄_p when σ_c is assumed constant, and to readily computed quadratures when the energy dependence of σ_c is determined from an optical model. Neutron spectra and average multiplicities calculated with an energy-dependent cross section agree well with experimental data for the neutron-induced fission of ²³⁵U and the spontaneous fission of ²⁵²Cf. For the latter case, there are some significant inconsistencies between the experimental spectra that need to be resolved. 29 references

  13. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

    In this paper, a method for testing rate-control algorithms by using a video source model is suggested. The proposed method makes it possible to significantly improve algorithm testing over a large test set.

  14. Fast Appearance Modeling for Automatic Primary Video Object Segmentation.

    Science.gov (United States)

    Yang, Jiong; Price, Brian; Shen, Xiaohui; Lin, Zhe; Yuan, Junsong

    2016-02-01

    Automatic segmentation of the primary object in a video clip is a challenging problem as there is no prior knowledge of the primary object. Most existing techniques thus adopt an iterative approach for foreground and background appearance modeling, i.e., fix the appearance model while optimizing the segmentation and fix the segmentation while optimizing the appearance model. However, these approaches may rely on good initialization and can easily be trapped in local optima. In addition, they are usually time-consuming when analyzing videos. To address these limitations, we propose a novel and efficient appearance modeling technique for automatic primary video object segmentation in the Markov random field (MRF) framework. It embeds the appearance constraint as auxiliary nodes and edges in the MRF structure, and can optimize both the segmentation and appearance model parameters simultaneously in one graph cut. The extensive experimental evaluations validate the superiority of the proposed approach over the state-of-the-art methods, in both efficiency and effectiveness.

  15. Video modeling to train staff to implement discrete-trial instruction.

    Science.gov (United States)

    Catania, Cynthia N; Almeida, Daniel; Liu-Constant, Brian; DiGennaro Reed, Florence D

    2009-01-01

    Three new direct-service staff participated in a program that used a video model to train target skills needed to conduct a discrete-trial session. Percentage accuracy in completing a discrete-trial teaching session was evaluated using a multiple baseline design across participants. During baseline, performances ranged from a mean of 12% to 63% accuracy. During video modeling, there was an immediate increase in accuracy to a mean of 98%, 85%, and 94% for each participant. Performance during maintenance and generalization probes remained at high levels. Results suggest that video modeling can be an effective technique to train staff to conduct discrete-trial sessions.

  16. The Global Classroom Video Conferencing Model and First Evaluations

    DEFF Research Database (Denmark)

    Weitze, Charlotte Lærke; Ørngreen, Rikke; Levinsen, Karin

    2013-01-01

    This paper presents and discusses findings about how students, teachers, and the organization experience a start-up project applying video conferences between campus and home, which is new territory for adult learning centers. The research is based on the Global Classroom Model as it is implemented and used at an adult learning center in Denmark, VUC Storstrøm. VUC Storstrøm's (VUC) Global Classroom Model is an approach to video conferencing and e-learning that combines campus-based teaching with laptop solutions for students at home. After a couple of years of campus-to-campus video streaming, VUC started a full-time day program in 2011 supported by a hybrid campus and videoconference model, in which the teachers and some of the students are on campus while other students participate from home. The paper discusses the transition to this e-learning form and evaluates the model's pedagogical innovativeness, including collaborative and technological issues.

  17. Sub-component modeling for face image reconstruction in video communications

    Science.gov (United States)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or Ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component active appearance model (AAM) that models the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using weighted and non-weighted versions of the sub-component AAM.

  18. Modeling the time-varying subjective quality of HTTP video streams with rate adaptations.

    Science.gov (United States)

    Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C

    2014-05-01

    Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
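
    The Hammerstein-Wiener structure referred to above is a static input nonlinearity followed by a linear dynamic block and a static output nonlinearity. The sketch below is a minimal illustration of that cascade, not the paper's fitted model: the filter coefficients, the two nonlinearities, and the toy bitrate trace are all invented for the example.

```python
import numpy as np

def hammerstein_wiener(u, b, a, f_in, f_out):
    """Hammerstein-Wiener cascade: static input nonlinearity f_in,
    linear IIR dynamic block (b, a), static output nonlinearity f_out."""
    x = f_in(u)                      # input nonlinearity
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):          # direct-form IIR filtering
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return f_out(y)

# Hypothetical toy input (not from the paper): a bitrate trace goes in,
# a predicted time-varying subjective quality (TVSQ) trace comes out.
bitrate = np.array([1.0, 4.0, 4.0, 0.5, 0.5, 4.0])
tvsq = hammerstein_wiener(
    bitrate,
    b=[0.3, 0.7], a=[1.0, -0.2],                 # linear dynamic block
    f_in=np.log1p,                               # compressive input mapping
    f_out=lambda v: 100 / (1 + np.exp(-v)))      # saturating 0-100 output
print(tvsq.round(1))
```

    The linear block supplies the memory (hysteresis-like smoothing of past quality), while the two static maps capture the nonlinear human response, which is what keeps the model simple enough for online prediction.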

  19. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    Science.gov (United States)

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative, perceptual video coding (PVC) attempts to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JND models were built by adding white Gaussian noise or specific signal patterns to the original images, which is not appropriate for finding JND thresholds for distortions that reduce signal energy. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models automatically adjust JND levels based on given quantization step sizes. The first, called LR-JNQD, is based on linear regression and determines the JNQD model parameters from extracted handcrafted features. The second is based on a convolutional neural network (CNN) and is called CNN-JNQD. To the best of our knowledge, this is the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing.
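
    The LR-JNQD idea, a linear mapping from handcrafted block features plus the quantization step size to a JND level, can be illustrated with ordinary least squares. Everything below (the feature choice, the coefficients, the data) is synthetic and only shows the regression form, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical handcrafted features per DCT block (e.g. local variance,
# edge strength) plus the quantization step size; targets are JND levels.
n = 500
feats = rng.uniform(0, 1, size=(n, 2))       # variance, edge strength
qstep = rng.uniform(8, 64, size=(n, 1))      # quantization step size
X = np.hstack([feats, qstep, np.ones((n, 1))])

# Made-up ground-truth linear relation: JND grows with variance and the
# quantization step, shrinks with edge strength (masking is weaker on edges).
jnd = 0.5 * feats[:, 0] - 0.3 * feats[:, 1] + 0.05 * qstep[:, 0] + 1.0

# Fit the linear-regression JNQD mapping by least squares.
w, *_ = np.linalg.lstsq(X, jnd, rcond=None)
print(np.round(w, 3))
```

    Because the synthetic targets are exactly linear in the chosen features, the fit recovers the generating coefficients; on real data the same mapping would only approximate the JNQD surface, which is the gap the CNN variant addresses.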

  20. Gamma-Ray Burst Prompt Correlations

    Directory of Open Access Journals (Sweden)

    M. G. Dainotti

    2018-01-01

    The mechanism responsible for the prompt emission of gamma-ray bursts (GRBs) is still a debated issue. The prompt phase-related GRB correlations can allow discriminating among the most plausible theoretical models explaining this emission. We present an overview of the observational two-parameter correlations, their physical interpretations, and their use as redshift estimators and possibly as cosmological tools. The challenge nowadays is to make GRBs, the farthest stellar-scale objects observed (up to redshift z = 9.4), standard candles through well-established and robust correlations. However, GRBs, spanning several orders of magnitude in their energetics, are far from being standard candles. We describe the advances in prompt correlation research in the past decades, with particular focus on the discoveries of the last 20 years.

  1. Recurrent and Dynamic Models for Predicting Streaming Video Quality of Experience.

    Science.gov (United States)

    Bampis, Christos G; Li, Zhi; Katsavounidis, Ioannis; Bovik, Alan C

    2018-07-01

    Streaming video services represent a very large fraction of global bandwidth consumption. Due to the exploding demands of mobile video streaming services, coupled with limited bandwidth availability, video streams are often transmitted through unreliable, low-bandwidth networks. This unavoidably leads to two types of major streaming-related impairments: compression artifacts and/or rebuffering events. In streaming video applications, the end-user is a human observer; hence being able to predict the subjective Quality of Experience (QoE) associated with streamed videos could lead to the creation of perceptually optimized resource allocation strategies driving higher quality video streaming services. We propose a variety of recurrent dynamic neural networks that conduct continuous-time subjective QoE prediction. By formulating the problem as one of time-series forecasting, we train a variety of recurrent neural networks and non-linear autoregressive models to predict QoE using several recently developed subjective QoE databases. These models combine multiple, diverse neural network inputs, such as predicted video quality scores, rebuffering measurements, and data related to memory and its effects on human behavioral responses, using them to predict QoE on video streams impaired by both compression artifacts and rebuffering events. Instead of finding a single time-series prediction model, we propose and evaluate ways of aggregating different models into a forecasting ensemble that delivers improved results with reduced forecasting variance. We also deploy appropriate new evaluation metrics for comparing time-series predictions in streaming applications. Our experimental results demonstrate improved prediction performance that approaches human performance. An implementation of this work can be found at https://github.com/christosbampis/NARX_QoE_release.
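
    The nonlinear autoregressive-with-exogenous-inputs (NARX) idea behind this kind of continuous QoE forecasting can be sketched with its simplest linear instance: predict the next QoE sample from its own past plus exogenous inputs such as a quality score and a rebuffering indicator. All signals and coefficients below are synthetic inventions; because the toy trace is exactly linear in the chosen regressors, least squares recovers the generator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic continuous QoE trace (not from the LIVE databases): driven by
# a per-frame quality score q and a rebuffering indicator r.
T = 200
q = rng.uniform(40, 90, T)                      # predicted video quality
r = (rng.uniform(size=T) < 0.1).astype(float)   # rebuffering events
qoe = np.zeros(T)
for t in range(1, T):
    qoe[t] = 0.8 * qoe[t - 1] + 0.2 * q[t] - 15 * r[t]

# NARX-style regressor: past QoE + exogenous inputs + bias term.
X = np.column_stack([qoe[:-1], q[1:], r[1:], np.ones(T - 1)])
w, *_ = np.linalg.lstsq(X, qoe[1:], rcond=None)
pred = X @ w
rmse = np.sqrt(np.mean((pred - qoe[1:]) ** 2))
print(np.round(w, 3), rmse)
```

    The recurrent networks in the paper replace this linear map with learned nonlinear ones, and an ensemble of such forecasters reduces prediction variance.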

  2. Effects of recent modeling developments in prompt burst hypothetical core disruptive accident calculations

    International Nuclear Information System (INIS)

    Sienicki, J.J.; Abramson, P.B.

    1978-01-01

    The main objective of the development of multifield, multicomponent thermohydrodynamic computer codes is the detailed study of hypothetical core disruptive accidents (HCDAs) in liquid-metal fast breeder reactors. The main contributions such codes are expected to make are the inclusion of detailed modeling of the relative motion of liquid and vapor (slip), the inclusion of modeling of nonequilibrium/nonsaturation thermodynamics, and the use of more detailed neutronics methods. Scoping studies of the importance of including these phenomena performed with the parametric two-field, two-component coupled neutronic/thermodynamic/hydrodynamic code FX2-TWOPOOL indicate for the prompt burst portion of an HCDA that: (1) Vapor-liquid slip plays a relatively insignificant role in establishing energetics, implying that analyses that do not model vapor-liquid slip may be adequate. Furthermore, if conditions of saturation are assumed to be maintained, calculations that do not permit vapor-liquid slip appear to be conservative. (2) The modeling of conduction-limited fuel vaporization and condensation causes the energetics to be highly sensitive to variations in the droplet size (i.e., in the parametric values) for the sizes of interest in HCDA analysis. Care must therefore be exercised in the inclusion of this phenomenon in energetics calculations. (3) Insignificant differences are observed between the use of space-time kinetics (quasi-static diffusion theory) and point kinetics, indicating again that point kinetics is normally adequate for analysis of the prompt burst portion of an HCDA. (4) No significant differences were found to result from assuming that delayed neutron precursors remain stationary where they are created rather than assuming that they move together with fuel. (5) There is no need for implicit coupling between the neutronics and the hydrodynamics/thermodynamics routines, even outside the prompt burst portion of an HCDA.

  3. Creating engagement with old research videos

    DEFF Research Database (Denmark)

    Caglio, Agnese; Buur, Jacob

    User-centred design projects that utilize ethnographic research tend to produce hours and hours of contextual video footage that seldom gets used again once the project is complete. The richness of such research video could, however, make it attractive to other project teams or researchers as a source of inspiration or of knowledge about a particular context or user group, if it were practically feasible to engage with the material later on. In this paper we explore the potential of using old research footage to stimulate reflection, conversations and creativity by presenting it on pervasive screens to colleague designers and researchers. The setup we designed included large and small screens placed in a social space of a research environment, the communal kitchen. Through screenings of ten different 'old' research videos accompanied by various prompt questions and activities we built…

  4. Cross-band noise model refinement for transform domain Wyner–Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2012-01-01

    TDWZ video coding trails conventional video coding solutions, mainly due to the quality of the side information, inaccurate noise modeling, and loss in the final coding step. The major goal of this paper is to enhance the accuracy of the noise modeling, which is one of the most important aspects influencing the coding performance of distributed video coding (DVC). A TDWZ video decoder with a novel cross-band based adaptive noise model is proposed, and a noise residue refinement scheme is introduced to successively update the estimated noise residue for noise modeling after each bit-plane. Experimental results show that the proposed noise model and noise residue refinement scheme can significantly improve the rate-distortion (RD) performance of TDWZ video coding. The quality of the side information modeling is also evaluated by a measure of the ideal code length.

  5. Application of discriminative models for interactive query refinement in video retrieval

    Science.gov (United States)

    Srivastava, Amit; Khanwalkar, Saurabh; Kumar, Anoop

    2013-12-01

    The ability to quickly search large volumes of video for specific actions or events can provide a dramatic new capability to intelligence agencies. Example-based queries over video are a form of content-based information retrieval (CBIR) in which the objective is to retrieve clips from a video corpus, or stream, using a representative query sample to "find more like this." Often, the accuracy of video retrieval is largely limited by the gap between the available video descriptors and the underlying query concept, and such exemplar queries return many irrelevant results alongside relevant ones. In this paper, we present an Interactive Query Refinement (IQR) system that acts as a powerful tool to leverage human feedback and allows intelligence analysts to iteratively refine search queries for improved precision in the retrieved results. In our approach to IQR, we leverage discriminative models that operate on high-dimensional features derived from low-level video descriptors in an iterative framework. Our IQR model solicits relevance feedback on examples selected from the region of uncertainty and updates the discriminating boundary to produce a relevance-ranked results list. We achieved a 358% relative improvement in Mean Average Precision (MAP) over the initial retrieval list at a rank cutoff of 100 over 4 iterations. We compare our discriminative IQR model to a naïve IQR and show that our model-based approach yields a 49% relative improvement over the naïve, model-free system.
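
    The feedback loop described above, score the corpus, solicit labels on the most uncertain examples, update the discriminating boundary, re-rank, can be sketched in a few lines. The corpus, the "analyst" oracle, and the perceptron-style update below are all hypothetical stand-ins for the paper's discriminative model, chosen only to make the loop concrete.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical corpus: 200 clips with 5-D descriptors; relevance is defined
# by a hidden direction that the analyst (oracle) knows but the system does not.
X = rng.normal(size=(200, 5))
true_w = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
labels = (X @ true_w > 0).astype(float)         # analyst's ground truth

w0 = np.zeros(5)
w0[0] = 1.0                                     # seed query direction
w = w0.copy()
for it in range(4):                             # 4 feedback iterations
    scores = X @ w                              # scores are fixed per round
    uncertain = np.argsort(np.abs(scores))[:10] # region of uncertainty
    for i in uncertain:                         # perceptron-style update
        y = 2 * labels[i] - 1                   # analyst feedback (+1/-1)
        if y * scores[i] <= 0:                  # misclassified -> move boundary
            w += y * X[i]

def prec_at_20(wv):
    """Fraction of relevant clips in the top 20 of the ranked list."""
    top = np.argsort(-(X @ wv))[:20]
    return labels[top].mean()

print(prec_at_20(w0), prec_at_20(w))
```

    In the paper the boundary update comes from a discriminative model trained on all feedback so far rather than this single-pass rule, but the select-label-update-rerank cycle is the same.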

  6. Digital Video as a Personalized Learning Assignment: A Qualitative Study of Student Authored Video Using the ICSDR Model

    Science.gov (United States)

    Campbell, Laurie O.; Cox, Thomas D.

    2018-01-01

    Students within this study followed the ICSDR (Identify, Conceptualize/Connect, Storyboard, Develop, Review/Reflect/Revise) development model to create digital video, as a personalized and active learning assignment. The participants, graduate students in education, indicated that following the ICSDR framework for student-authored video guided…

  7. Learning Computational Models of Video Memorability from fMRI Brain Imaging.

    Science.gov (United States)

    Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming

    2015-08-01

    Generally, various visual media are unequally memorable by the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained when they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, due to the fact that fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.

  8. Effects of creating video-based modeling examples on learning and transfer

    NARCIS (Netherlands)

    Hoogerheide, Vincent; Loyens, Sofie M M; van Gog, Tamara

    2014-01-01

    Two experiments investigated whether acting as a peer model for a video-based modeling example, which entails studying a text with the intention to explain it to others and then actually explaining it on video, would foster learning and transfer. In both experiments, novices were instructed to study

  9. Video Modeling and Observational Learning to Teach Gaming Access to Students with ASD

    Science.gov (United States)

    Spriggs, Amy D.; Gast, David L.; Knight, Victoria F.

    2016-01-01

    The purpose of this study was to evaluate both video modeling and observational learning to teach age-appropriate recreation and leisure skills (i.e., accessing video games) to students with autism spectrum disorder. Effects of video modeling were evaluated via a multiple probe design across participants and criteria for mastery were based on…

  10. Using Video Modeling as an Anti-bullying Intervention for Children with Autism Spectrum Disorder.

    Science.gov (United States)

    Rex, Catherine; Charlop, Marjorie H; Spector, Vicki

    2018-03-07

    In the present study, we used a multiple baseline design across participants to assess the efficacy of a video modeling intervention to teach six children with autism spectrum disorder (ASD) to assertively respond to bullying. During baseline, the children made few appropriate responses upon viewing video clips of bullying scenarios. During the video modeling intervention, participants viewed videos of models assertively responding to three types of bullying: physical bullying, verbal bullying, and social exclusion. Results indicated that all six children learned through video modeling to make appropriate assertive responses to bullying scenarios. Four of the six children demonstrated learning in the in situ bullying probes. The results are discussed in terms of an intervention for victims of bullying with ASD.

  11. Energy saving approaches for video streaming on smartphone based on QoE modeling

    DEFF Research Database (Denmark)

    Ballesteros, Luis Guillermo Martinez; Ickin, Selim; Fiedler, Markus

    2016-01-01

    In this paper, we study the influence of video stalling on QoE. We provide QoE models that are obtained in realistic scenarios on the smartphone, and propose energy-saving approaches for smartphones by leveraging the proposed QoE models in relation to energy. Results show that approximately 5 J is saved in a 3-minute video clip at an acceptable Mean Opinion Score (MOS) level when video frames are skipped. If video frames are not skipped, we suggest avoiding freezes during a video stream, as freezes greatly increase the energy waste on smartphones.

  12. Player behavioural modelling for video games

    NARCIS (Netherlands)

    van Lankveld, G.; Spronck, P.H.M.; Bakkes, S.C.J.

    2012-01-01

    Player behavioural modelling has grown from a means to improve the playing strength of computer programs that play classic games (e.g., chess), to a means for impacting the player experience and satisfaction in video games, as well as in cross-domain applications such as interactive storytelling. In

  13. Bayesian Modeling of Temporal Coherence in Videos for Entity Discovery and Summarization.

    Science.gov (United States)

    Mitra, Adway; Biswas, Soma; Bhattacharyya, Chiranjib

    2017-03-01

    A video is understood by users in terms of entities present in it. Entity Discovery is the task of building appearance model for each entity (e.g., a person), and finding all its occurrences in the video. We represent a video as a sequence of tracklets, each spanning 10-20 frames, and associated with one entity. We pose Entity Discovery as tracklet clustering, and approach it by leveraging Temporal Coherence (TC): the property that temporally neighboring tracklets are likely to be associated with the same entity. Our major contributions are the first Bayesian nonparametric models for TC at tracklet-level. We extend Chinese Restaurant Process (CRP) to TC-CRP, and further to Temporally Coherent Chinese Restaurant Franchise (TC-CRF) to jointly model entities and temporal segments using mixture components and sparse distributions. For discovering persons in TV serial videos without meta-data like scripts, these methods show considerable improvement over state-of-the-art approaches to tracklet clustering in terms of clustering accuracy, cluster purity and entity coverage. The proposed methods can perform online tracklet clustering on streaming videos unlike existing approaches, and can automatically reject false tracklets. Finally we discuss entity-driven video summarization, where temporal segments of the video are selected based on the discovered entities, to create a semantically meaningful summary.
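
    The Chinese Restaurant Process that TC-CRP and TC-CRF extend can be sketched directly: each tracklet joins an existing entity (table) with probability proportional to that entity's current size, or opens a new entity with probability proportional to a concentration parameter alpha. The sketch below is the plain CRP prior only, with no temporal-coherence bias and no appearance likelihoods; alpha and the tracklet count are arbitrary.

```python
import numpy as np

def crp_assignments(n, alpha, rng):
    """Sequentially seat n tracklets via the Chinese Restaurant Process."""
    counts = []                          # tracklets per entity
    z = []                               # entity assignment per tracklet
    for _ in range(n):
        # Existing entities weighted by size; one extra slot for a new entity.
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)             # open a new entity
        else:
            counts[k] += 1
        z.append(k)
    return z, counts

rng = np.random.default_rng(42)
z, counts = crp_assignments(100, alpha=2.0, rng=rng)
print(len(counts), counts)
```

    The temporally coherent variants replace the size-proportional term with one that also favors the entity of the temporally preceding tracklet, which is what ties neighboring tracklets to the same person.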

  14. Reviewing Instructional Studies Conducted Using Video Modeling to Children with Autism

    Science.gov (United States)

    Acar, Cimen; Diken, Ibrahim H.

    2012-01-01

    This study reviewed 31 instructional research articles, published in peer-reviewed journals, on the use of video modeling with children with autism. The studies were located by searching EBSCO, Academic Search Complete, ERIC and other Anadolu University online search engines using keywords such as "autism, video modeling,…

  15. Exotic Prompt and Non-Prompt Leptonic Decays as a Window to the Dark Sector with ATLAS

    CERN Document Server

    Diamond, Miriam; The ATLAS collaboration

    2016-01-01

    Results of searches for both prompt and non-prompt leptonic decays of new dark sector particles in proton-proton collisions with the ATLAS detector are presented. Searches that encompass a wide range of new-particle masses, lifetimes, and degrees of collimation of the leptonic decay products are discussed. The results are interpreted in the context of models containing new gauge bosons (dark photons or dark Z bosons) that give rise to lepton-jets or to more general displaced leptonic signatures, in dark sectors that could provide a viable dark matter candidate.

  16. No-Reference Video Quality Assessment Model for Distortion Caused by Packet Loss in the Real-Time Mobile Video Services

    Directory of Open Access Journals (Sweden)

    Jiarun Song

    2014-01-01

    Packet loss causes severe errors due to the corruption of related video data. Because most video streams employ predictive coding structures, a transmission error in one frame will not only cause decoding failure of that frame at the receiver side, but will also propagate to its subsequent frames along the motion prediction path, bringing significant degradation of end-to-end video quality. To quantify the effects of packet loss on video quality, a no-reference objective quality assessment model is presented in this paper. Considering the fact that the degradation of video quality significantly depends on the video content, the temporal complexity is estimated to reflect the varying characteristics of the video content, using the macroblocks with different motion activities in each frame. Then, the quality of each frame affected by reference frame loss, by error propagation, or by both is evaluated. Utilizing a two-level temporal pooling scheme, the overall video quality is finally obtained. Extensive experimental results show that the video quality estimated by the proposed method matches well with subjective quality.
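
    A two-level temporal pooling scheme of the general kind mentioned above can be sketched as worst-case pooling within short windows followed by averaging across windows, reflecting that short severe drops dominate perceived quality. This is a generic illustration, not the paper's exact formulation; the frame-quality trace and window size are made up.

```python
import numpy as np

# Hypothetical frame-level quality trace (0-100 scale), 12 frames.
frame_q = np.array([80, 82, 79, 30, 35, 81, 83, 80, 78, 82, 40, 45],
                   dtype=float)

win = 4
windows = frame_q.reshape(-1, win)    # level 1: split into short windows
worst = windows.min(axis=1)           # level 1: worst-case pooling per window
video_q = worst.mean()                # level 2: average across windows
print(worst, video_q)                 # -> [30. 35. 40.] 35.0
```

    Min-then-mean pooling penalizes the error-propagation bursts that a plain frame average would wash out.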

  17. When Video Games Tell Stories: A Model of Video Game Narrative Architectures

    Directory of Open Access Journals (Sweden)

    Marcello Arnaldo Picucci

    2014-11-01

    In the present study a model is proposed offering a comprehensive categorization of video game narrative structures, intended as the methods and techniques used by game designers, and allowed by the medium, to deliver the story content throughout the gameplay in collaboration with the players. A case is first made for the presence of narrative in video games and its growing importance as a central component of game design. An in-depth analysis ensues, focusing on how games tell stories, guided by the criteria of linearity/nonlinearity, interactivity and randomness. Light is shed upon the fundamental architectures through which stories are told, as well as the essential boundaries posed by the close link between narrative and game AI.

  18. Video processing for human perceptual visual quality-oriented video coding.

    Science.gov (United States)

    Oh, Hyungsuk; Kim, Wonha

    2013-04-01

    We have developed a video processing method that achieves human perceptual visual quality-oriented video coding. The patterns of moving objects are modeled by considering the limited human capacity for spatial-temporal resolution and the visual sensory memory together, and an online moving pattern classifier is devised by using the Hedge algorithm. The moving pattern classifier is embedded in the existing visual saliency with the purpose of providing a human perceptual video quality saliency model. In order to apply the developed saliency model to video coding, the conventional foveation filtering method is extended. The proposed foveation filter can smooth and enhance the video signals locally, in conformance with the developed saliency model, without causing any artifacts. The performance evaluation results confirm that the proposed video processing method shows reliable improvements in the perceptual quality for various sequences and at various bandwidths, compared to existing saliency-based video coding methods.

  19. Prompts to eat novel and familiar fruits and vegetables in families with 1-3 year-old children: Relationships with food acceptance and intake.

    Science.gov (United States)

    Edelson, Lisa R; Mokdad, Cassandra; Martin, Nathalie

    2016-04-01

    Toddlers often go through a picky eating phase, which can make it difficult to introduce new foods into the diet. A better understanding of how parents' prompts to eat fruits and vegetables are related to children's intake of these foods will help promote healthy eating habits. Sixty families recorded all toddler meals over one day, plus a meal in which parents introduced a novel fruit/vegetable to the child. Videos were coded for parent and child behaviors. Parents completed a feeding style questionnaire and three 24-h dietary recalls about their children's intake. Parents made, on average, 48 prompts for their children to eat more during the main meals in a typical day, mostly of the neutral type. Authoritarian parents made the most prompts, and used pressure the most often. In the novel food situation, it took an average of 2.5 prompts before the child tasted the new food. The most immediately successful prompt for regular meals across food types was modeling. There was a trend for using another food as a reward to work less well than a neutral prompt for encouraging children to try a novel fruit or vegetable. More frequent prompts to eat fruits and vegetables during typical meals were associated with higher overall intake of these food groups. More prompts for children to try a novel vegetable were associated with higher overall vegetable intake, but this pattern was not seen for fruits, suggesting that vegetable variety may be more strongly associated with intake. Children who ate the most vegetables had parents who used more "reasoning" prompts, which may have become an internalized motivation to eat these foods, but this needs to be tested explicitly using longer-term longitudinal studies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Strategies for Teaching Children with Autism to Imitate Response Chains Using Video Modeling

    Science.gov (United States)

    Tereshko, Lisa; MacDonald, Rebecca; Ahearn, William H.

    2010-01-01

    Video modeling has been found to be an effective procedure for teaching a variety of skills to persons with autism, however, some individuals do not learn through video instruction. The purpose of the current investigation was to teach children with autism, who initially did not imitate a video model, to construct three toy structures through the…

  1. A time-varying subjective quality model for mobile streaming videos with stalling events

    Science.gov (United States)

    Ghadiyaram, Deepti; Pan, Janice; Bovik, Alan C.

    2015-09-01

    Over-the-top mobile video streaming is invariably influenced by volatile network conditions which cause playback interruptions (stalling events), thereby impairing users' quality of experience (QoE). Developing models that can accurately predict users' QoE could enable the more efficient design of quality-control protocols for video streaming networks that reduce network operational costs while still delivering high-quality video content to the customers. Existing objective models that predict QoE are based on global video features, such as the number of stall events and their lengths, and are trained and validated on a small pool of ad hoc video datasets, most of which are not publicly available. The model we propose in this work goes beyond previous models as it also accounts for the fundamental effect that a viewer's recent level of satisfaction or dissatisfaction has on their overall viewing experience. In other words, the proposed model accounts for and adapts to the recency, or hysteresis effect caused by a stall event in addition to accounting for the lengths, frequency of occurrence, and the positions of stall events - factors that interact in a complex way to affect a user's QoE. On the recently introduced LIVE-Avvasi Mobile Video Database, which consists of 180 distorted videos of varied content that are afflicted solely with over 25 unique realistic stalling events, we trained and validated our model to accurately predict the QoE, attaining standout QoE prediction performance.
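The recency ("hysteresis") idea described above can be illustrated with a toy calculation. This is a hedged sketch, not the authors' model: the penalty sizes and decay rate are invented, and the only point is that an identical stall hurts predicted QoE more when it occurs closer to the end of the session.

```python
import math

# Toy QoE predictor with a hysteresis (recency) effect: every stall event
# contributes a penalty that decays exponentially with its age, so recent
# stalls weigh more than old ones. All constants are illustrative.
def predict_qoe(duration_s, stalls, base_score=100.0,
                stall_cost=8.0, length_cost=2.0, decay_rate=0.05):
    """stalls: list of (position_s, length_s) stall events."""
    score = base_score
    for position, length in stalls:
        penalty = stall_cost + length_cost * length  # stall plus length cost
        age = duration_s - position                  # seconds since the stall
        score -= penalty * math.exp(-decay_rate * age)
    return max(score, 0.0)

early = predict_qoe(120, [(10, 4)])    # stall near the start of the session
late = predict_qoe(120, [(110, 4)])    # identical stall near the end
```

Under these assumptions `late` comes out lower than `early`, mimicking the recency effect that the proposed model accounts for on top of stall counts and lengths.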

  2. Traffic characterization and modeling of wavelet-based VBR encoded video

    Energy Technology Data Exchange (ETDEWEB)

    Yu Kuo; Jabbari, B. [George Mason Univ., Fairfax, VA (United States); Zafar, S. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.

    1997-07-01

Wavelet-based video codecs provide a hierarchical structure for the encoded data, which can cater to a wide variety of applications such as multimedia systems. The characteristics of such an encoder and its output, however, have not been well examined. In this paper, the authors investigate the output characteristics of a wavelet-based video codec and develop a composite model to capture the traffic behavior of its output video data. Wavelet decomposition transforms the input video into a hierarchical structure with a number of subimages at different resolutions and scales. The top-level wavelet in this structure contains most of the signal energy. The authors first describe the characteristics of traffic generated by each subimage and the effect of dropping various subimages at the encoder on the signal-to-noise ratio at the receiver. They then develop an N-state Markov model to describe the traffic behavior of the top wavelet. The behavior of the remaining wavelets is then obtained through estimation, based on the correlations between subimages at the same level of resolution and the wavelets located at the immediately higher level. In this paper, a three-state Markov model is developed. The resulting traffic behavior, described by statistical properties such as moments and correlations, is then utilized to validate their model.
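As an illustration of the kind of N-state Markov traffic source the abstract describes, here is a minimal three-state sketch. The bitrate levels and transition matrix are invented for illustration; they are not the parameters fitted in the paper.

```python
import random

RATES = [0.5, 2.0, 6.0]          # Mbit/s for states low / medium / high (toy values)
P = [[0.80, 0.15, 0.05],         # row i: transition probabilities out of state i
     [0.10, 0.80, 0.10],
     [0.05, 0.15, 0.80]]

def generate_traffic(n_frames, state=0, seed=42):
    """Emit one bitrate sample per frame by walking the Markov chain."""
    rng = random.Random(seed)
    trace = []
    for _ in range(n_frames):
        trace.append(RATES[state])
        r, cum = rng.random(), 0.0
        for j, p in enumerate(P[state]):   # sample the next state
            cum += p
            if r < cum:
                state = j
                break
    return trace

trace = generate_traffic(1000)
```

A trace generated this way reproduces the bursty, state-dependent rate behavior that such models are fitted to capture.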

  3. Topical video object discovery from key frames by modeling word co-occurrence prior.

    Science.gov (United States)

    Zhao, Gangqiang; Yuan, Junsong; Hua, Gang; Yang, Jiong

    2015-12-01

A topical video object refers to an object that is frequently highlighted in a video. It could be, e.g., the product logo or the leading actor/actress in a TV commercial. We propose a topic model that incorporates a word co-occurrence prior for efficient discovery of topical video objects from a set of key frames. Previous work using topic models, such as latent Dirichlet allocation (LDA), for video object discovery often takes a bag-of-visual-words representation, which ignores important co-occurrence information among the local features. We show that such data-driven, bottom-up co-occurrence information can conveniently be incorporated in LDA with a Gaussian Markov prior, which combines top-down probabilistic topic modeling with bottom-up priors in a unified model. Our experiments on challenging videos demonstrate that the proposed approach can discover different types of topical objects despite variations in scale, viewpoint, color and lighting, or even partial occlusions. The efficacy of the co-occurrence prior is clearly demonstrated when compared with topic models without such priors.

  4. A Collaborative Video Sketching Model in the Making

    DEFF Research Database (Denmark)

    Gundersen, Peter Bukovica; Ørngreen, Rikke; Hautopp, Heidi

The literature on design research emphasizes working in iterative cycles that investigate and explore many ideas and alternative designs. However, these cycles are seldom applied or documented in educational research papers. In this paper, we illustrate the development process of a video sketching model, where we explore the relation between the educational research design team and their sketching and video sketching activities. The results show how sketching can be done in different modes and how it supports thinking, communication, reflection and distributed cognition in design teams.

  5. A collaborative video sketching model in the making

    DEFF Research Database (Denmark)

    Gundersen, Peter; Ørngreen, Rikke; Henningsen, Birgitte

    2018-01-01

The literature on design research emphasizes working in iterative cycles that investigate and explore many ideas and alternative designs. However, these cycles are seldom applied or documented in educational research papers. In this paper, we illustrate the development process of a video sketching model, where we explore the relation between the educational research design team and their sketching and video sketching activities. The results show how sketching can be done in different modes and how it supports thinking, communication, reflection and distributed cognition in design teams.

  6. Kinetic analysis of sub-prompt-critical reactor assemblies

    International Nuclear Information System (INIS)

    Das, S.

    1992-01-01

    Neutronic analysis of safety-related kinetics problems in experimental neutron multiplying assemblies has been carried out using a sub-prompt-critical reactor model. The model is based on the concept of a sub-prompt-critical nuclear reactor and the concept of instantaneous neutron multiplication in a reactor system. Computations of reactor power, period and reactivity using the model show excellent agreement with results obtained from exact kinetics method. Analytic expressions for the energy released in a controlled nuclear power excursion are derived. Application of the model to a Pulsed Fast Reactor gives its sensitivity between 4 and 5. (author). 6 refs., 4 figs., 1 tab

  7. Operation quality assessment model for video conference system

    Science.gov (United States)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business, and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers fast convergence and high prediction accuracy in contrast with a plain regularized BP neural network, and its generalization ability is superior to LM-BP and Bayesian BP neural networks.

  8. Water surface modeling from a single viewpoint video.

    Science.gov (United States)

    Li, Chuan; Pickup, David; Saunders, Thomas; Cosker, Darren; Marshall, David; Hall, Peter; Willis, Philip

    2013-07-01

We introduce a video-based approach for producing water surface models. Recent advances in this field output high-quality results but require dedicated capturing devices and only work in limited conditions. In contrast, our method achieves a good tradeoff between visual quality and production cost: it automatically produces a visually plausible animation using a single viewpoint video as the input. Our approach is based on two discoveries: first, shape from shading (SFS) is adequate to capture the appearance and dynamic behavior of the example water; second, a shallow water model can be used to estimate a velocity field that produces complex surface dynamics. We provide a qualitative evaluation of our method and demonstrate its good performance across a wide range of scenes.
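A height-field update in the spirit of a shallow water solver can be sketched in a few lines. This is not the authors' method, only a 1-D toy with invented constants: each column accelerates toward the mean of its neighbours, propagating damped waves across the surface.

```python
# 1-D height-field "water" toy: h holds column heights, v their velocities.
def step(h, v, c=0.2, damping=0.99):
    n = len(h)
    for i in range(n):
        left = h[i - 1] if i > 0 else h[i]       # reflecting boundaries
        right = h[i + 1] if i < n - 1 else h[i]
        # Accelerate each column toward the average of its neighbours,
        # then damp the velocity so the waves die out over time.
        v[i] = (v[i] + c * ((left + right) / 2.0 - h[i])) * damping
    for i in range(n):
        h[i] += v[i]
    return h, v

h = [0.0] * 20
h[10] = 1.0            # initial disturbance in the middle of the surface
v = [0.0] * 20
for _ in range(50):
    h, v = step(h, v)
```

After a few dozen steps the initial spike has split into outgoing, decaying waves, which is the qualitative surface dynamics a velocity-field estimate drives in the full 2-D setting.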

  9. Using Video Modeling to Teach Young Children with Autism Developmentally Appropriate Play and Connected Speech

    Science.gov (United States)

    Scheflen, Sarah Clifford; Freeman, Stephanny F. N.; Paparella, Tanya

    2012-01-01

    Four children with autism were taught play skills through the use of video modeling. Video instruction was used to model play and appropriate language through a developmental sequence of play levels integrated with language techniques. Results showed that children with autism could successfully use video modeling to learn how to play appropriately…

  10. The Effects of Video Self-Modeling on High School Students with Emotional and Behavioral Disturbances

    Science.gov (United States)

    Chu, Szu-Yin; Baker, Sonia

    2015-01-01

    Video self-modeling has been proven to be effective with other populations with challenging behaviors, but only a few studies of video self-modeling have been conducted with high school students with emotional and behavioral disorders. This study aimed to focus on analyzing the effects of video self-modeling on four high school students with…

  11. Improved virtual channel noise model for transform domain Wyner-Ziv video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2009-01-01

Distributed video coding (DVC) has been proposed as a new video coding paradigm to deal with lossy source coding using side information, exploiting the statistics at the decoder to reduce computational demands at the encoder. A virtual channel noise model is utilized at the decoder to estimate the noise distribution between the side information frame and the original frame. This is one of the most important aspects influencing the coding performance of DVC. Noise models with different granularity have been proposed. In this paper, an improved noise model for transform domain Wyner-Ziv video coding is proposed, which utilizes cross-band correlation to estimate the Laplacian parameters more accurately. Experimental results show that the proposed noise model can improve the rate-distortion (RD) performance.

  12. Video Self-Modelling: An Intervention for Children with Behavioural Difficulties

    Science.gov (United States)

    Regan, Helen; Howe, Julia

    2017-01-01

    There has recently been a growth in interest in the use of video technology in the practice of educational psychologists. This research explores the effects of a video self-modelling (VSM) intervention on the behaviours of a child in mainstream education using a single case study design set within a behaviourist paradigm. VSM is a behavioural…

  13. Ranking Highlights in Personal Videos by Analyzing Edited Videos.

    Science.gov (United States)

    Sun, Min; Farhadi, Ali; Chen, Tseng-Hung; Seitz, Steve

    2016-11-01

We present a fully automatic system for ranking domain-specific highlights in unconstrained personal videos by analyzing online edited videos. A novel latent linear ranking model is proposed to handle noisy training data harvested online. Specifically, given a targeted domain such as "surfing," our system mines the YouTube database to find pairs of raw videos and their corresponding edited videos. Leveraging the assumption that an edited video is more likely to contain highlights than the trimmed parts of the raw video, we obtain pair-wise ranking constraints to train our model. The learning task is challenging due to the amount of noise and variation in the mined data. Hence, a latent loss function is incorporated to mitigate the issues caused by the noise. We efficiently learn the latent model on a large number of videos (about 870 min in total) using a novel EM-like procedure. Our latent ranking model outperforms its classification counterpart and is fairly competitive with a fully supervised ranking system that requires labels from Amazon Mechanical Turk. We further show that a state-of-the-art audio feature (mel-frequency cepstral coefficients) is inferior to a state-of-the-art visual feature. By combining audio and visual features, we obtain the best performance in the dog activity, surfing, skating, and viral video domains. Finally, we show that impressive highlights can be detected without additional human supervision for seven domains (i.e., skating, surfing, skiing, gymnastics, parkour, dog activity, and viral video) in unconstrained personal videos.
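The pair-wise ranking constraint ("an edited segment should outrank a trimmed-away one") can be sketched with a plain linear model and a hinge loss. This is a hedged toy, not the paper's model: the features and learning rate are invented, and the latent-variable and noise handling are omitted.

```python
import random

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def train_ranker(pairs, dim, lr=0.1, margin=1.0, epochs=50, seed=0):
    """Perceptron-style updates on hinge-violating pairs: for each
    (hi, lo) pair, push w so that dot(w, hi) >= dot(w, lo) + margin."""
    rng = random.Random(seed)
    w = [0.0] * dim
    for _ in range(epochs):
        rng.shuffle(pairs)
        for hi, lo in pairs:                       # hi should outrank lo
            if dot(w, hi) - dot(w, lo) < margin:   # hinge constraint violated
                for i in range(dim):
                    w[i] += lr * (hi[i] - lo[i])
    return w

# Toy data: "highlight" segments carry more motion (feature 0).
pairs = [([0.9, 0.2], [0.3, 0.2]), ([0.8, 0.5], [0.2, 0.6])]
w = train_ranker(pairs, dim=2)
```

After training, the learned weights score every edited-side example above its raw-side counterpart, which is exactly the ordering the mined pairs impose.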

  14. Effects of communication skills training and a Question Prompt Sheet to improve communication with older cancer patients: a randomized controlled trial

    NARCIS (Netherlands)

    van Weert, J.C.M.; Jansen, J.; Spreeuwenberg, P.M.M.; van Dulmen, S.; Bensing, J.M.

    2011-01-01

A randomized pre- and post-test control group design was conducted in 12 oncology wards to investigate the effectiveness of an intervention, consisting of communication skills training with web-enabled video feedback and a Question Prompt Sheet (QPS), which aimed to improve patient education to

  15. Effects of communication skills training and a Question Prompt Sheet to improve communication with older cancer patients: a randomized controlled trial.

    NARCIS (Netherlands)

    Weert, J.C.M. van; Jansen, J.; Spreeuwenberg, P.M.M.; Dulmen, S. van; Bensing, J.M.

    2011-01-01

A randomized pre- and post-test control group design was conducted in 12 oncology wards to investigate the effectiveness of an intervention, consisting of communication skills training with web-enabled video feedback and a Question Prompt Sheet (QPS), which aimed to improve patient education to

  16. Adaptive Noise Model for Transform Domain Wyner-Ziv Video using Clustering of DCT Blocks

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Huang, Xin; Forchhammer, Søren

    2011-01-01

The noise model is one of the most important aspects influencing the coding performance of Distributed Video Coding. This paper proposes a novel noise model for Transform Domain Wyner-Ziv (TDWZ) video coding by using clustering of DCT blocks. The clustering algorithm takes advantage of the residual information of all frequency bands, iteratively classifies blocks into different categories, and estimates the noise parameter in each category. The experimental results show that the coding performance of the proposed cluster level noise model is competitive with state-of-the-art coefficient level noise modelling. Furthermore, the proposed cluster level noise model is adaptively combined with a coefficient level noise model in this paper to robustly improve coding performance of the TDWZ video codec by up to 1.24 dB (by Bjøntegaard metric) compared to the DISCOVER TDWZ video codec.

  17. Search for prompt neutrinos with AMANDA-II

    International Nuclear Information System (INIS)

    Gozzini, Sara Rebecca

    2008-01-01

The investigation performed in this work aims to identify and disentangle the signal of prompt neutrinos from the inclusive atmospheric spectrum. We have analysed data recorded in the years 2000-2003 by the AMANDA-II detector at the geographical South Pole. After a tight event selection, our sample is composed of about 4 × 10^3 atmospheric neutrinos. Prompt neutrinos are decay products of heavy quark hadrons, which are produced in the collision of a cosmic ray particle with a nucleon in the atmosphere. The technique used to recognise prompt neutrinos is based on simulated information of their energy spectrum, which appears harder than that of the conventional component from light quarks. Models accounting for different hadron production and decay schemes have been included in a Monte Carlo simulation and convoluted with the detector response, in order to reproduce the different spectra. The background of conventional events has been described with the Bartol 2006 tables. The energy spectrum of our data has been reconstructed through a numerical unfolding algorithm. The reconstruction is based on a Monte Carlo simulation and uses as an input three parameters of the neutrino track which are correlated with the energy of the event. Numerical regularisation is introduced to achieve a result free of unphysical oscillations, a typical unfortunate feature of unfolding. The reconstructed data spectrum has been compared with different predictions using the model rejection factor technique. The prompt neutrino models differ in the choice of the hadron interaction model, the set of parton distribution functions and the numerical parameterisation of the fragmentation functions describing the transition from quark to hadrons. Here we considered mainly three classes of models, known in the literature as the Recombination Quark Parton Model, the Quark Gluon String Model and the Perturbative QCD model. Upper limits have been set on the expected flux predictions, based on our

  18. Impairment-Factor-Based Audiovisual Quality Model for IPTV: Influence of Video Resolution, Degradation Type, and Content Type

    Directory of Open Access Journals (Sweden)

    Garcia MN

    2011-01-01

This paper presents an audiovisual quality model for IPTV services. The model estimates the audiovisual quality of standard and high definition video as perceived by the user. The model is developed for applications such as network planning and packet-layer quality monitoring. It mainly covers audio and video compression artifacts and impairments due to packet loss. The quality tests conducted for model development demonstrate a mutual influence of the perceived audio and video quality, and the predominance of the video quality for the overall audiovisual quality. The balance between audio quality and video quality, however, depends on the content, the video format, and the audio degradation type. The proposed model is based on impairment factors which quantify the quality impact of the different degradations. The impairment factors are computed from parameters extracted from the bitstream or packet headers. For high definition video, the model predictions show a 95% correlation with unknown subjective ratings. For comparison, we have developed a more classical audiovisual quality model which is based on the audio and video qualities and their interaction. Both quality- and impairment-factor-based models are further refined by taking the content type into account. Finally, the different model variants are compared with modeling approaches described in the literature.

  19. Stochastic modeling of soundtrack for efficient segmentation and indexing of video

    Science.gov (United States)

    Naphade, Milind R.; Huang, Thomas S.

    1999-12-01

    Tools for efficient and intelligent management of digital content are essential for digital video data management. An extremely challenging research area in this context is that of multimedia analysis and understanding. The capabilities of audio analysis in particular for video data management are yet to be fully exploited. We present a novel scheme for indexing and segmentation of video by analyzing the audio track. This analysis is then applied to the segmentation and indexing of movies. We build models for some interesting events in the motion picture soundtrack. The models built include music, human speech and silence. We propose the use of hidden Markov models to model the dynamics of the soundtrack and detect audio-events. Using these models we segment and index the soundtrack. A practical problem in motion picture soundtracks is that the audio in the track is of a composite nature. This corresponds to the mixing of sounds from different sources. Speech in foreground and music in background are common examples. The coexistence of multiple individual audio sources forces us to model such events explicitly. Experiments reveal that explicit modeling gives better result than modeling individual audio events separately.
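The HMM decoding step at the heart of such a segmentation scheme can be sketched with a small hand-rolled Viterbi decoder. The states, observations, and probabilities below are toy values, not parameters trained on any soundtrack.

```python
# Minimal Viterbi decoder: find the most likely state sequence for a
# discrete observation sequence under a toy audio-event HMM.
def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s at time t.
            prob, prev = max((V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                             for p in states)
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ("speech", "music", "silence")
start_p = {"speech": 0.4, "music": 0.4, "silence": 0.2}
trans_p = {s: {t: (0.8 if s == t else 0.1) for t in states} for s in states}
emit_p = {"speech":  {"loud": 0.6, "tonal": 0.1, "quiet": 0.3},
          "music":   {"loud": 0.3, "tonal": 0.6, "quiet": 0.1},
          "silence": {"loud": 0.05, "tonal": 0.05, "quiet": 0.9}}

labels = viterbi(["loud", "loud", "tonal", "quiet", "quiet"],
                 states, start_p, trans_p, emit_p)
```

Each decoded label marks one frame of the "soundtrack", and runs of identical labels form the segments that are then indexed.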

  20. Examining human behavior in video games: The development of a computational model to measure aggression.

    Science.gov (United States)

    Lamb, Richard; Annetta, Leonard; Hoston, Douglas; Shapiro, Marina; Matthews, Benjamin

    2018-06-01

Video games with violent content have raised considerable concern in popular media and within academia. Recently, there has been considerable attention regarding the claimed relationship between aggression and video game play. The authors of this study propose the use of a new class of tools, developed via computational models, to examine the question of whether there is a relationship between violent video games and aggression. The purpose of this study is to computationally model and compare the General Aggression Model with the Diathesis Model of Aggression related to the play of violent content in video games. A secondary purpose is to provide a method of measuring and examining individual aggression arising from video game play. A total of N = 1065 participants were examined in this study. The study occurs in three phases. Phase 1 is the development and quantification of the profile combination of traits via latent class profile analysis. Phase 2 is the training of the artificial neural network. Phase 3 is the comparison of each model as a computational model with and without the presence of video game violence. Results suggest that a combination of environmental factors and genetic predispositions triggers aggression related to video games.

  1. Video Self-Modeling as an Intervention Strategy for Individuals with Autism Spectrum Disorders

    Science.gov (United States)

    Gelbar, Nicholas W.; Anderson, Candace; McCarthy, Scott; Buggey, Tom

    2012-01-01

    Video self-modeling demonstrates promise as an intervention strategy to improve outcomes in individuals with autism spectrum disorders. This article summarizes the empirical evidence supporting the use of video self-modeling with individuals with autism spectrum disorders to increase language and communication, increase social skills, modify…

  2. Video enhancement : content classification and model selection

    NARCIS (Netherlands)

    Hu, H.

    2010-01-01

    The purpose of video enhancement is to improve the subjective picture quality. The field of video enhancement includes a broad category of research topics, such as removing noise in the video, highlighting some specified features and improving the appearance or visibility of the video content. The

  3. Developing model-making and model-breaking skills using direct measurement video-based activities

    Science.gov (United States)

    Vonk, Matthew; Bohacek, Peter; Militello, Cheryl; Iverson, Ellen

    2017-12-01

    This study focuses on student development of two important laboratory skills in the context of introductory college-level physics. The first skill, which we call model making, is the ability to analyze a phenomenon in a way that produces a quantitative multimodal model. The second skill, which we call model breaking, is the ability to critically evaluate if the behavior of a system is consistent with a given model. This study involved 116 introductory physics students in four different sections, each taught by a different instructor. All of the students within a given class section participated in the same instruction (including labs) with the exception of five activities performed throughout the semester. For those five activities, each class section was split into two groups; one group was scaffolded to focus on model-making skills and the other was scaffolded to focus on model-breaking skills. Both conditions involved direct measurement videos. In some cases, students could vary important experimental parameters within the video like mass, frequency, and tension. Data collected at the end of the semester indicate that students in the model-making treatment group significantly outperformed the other group on the model-making skill despite the fact that both groups shared a common physical lab experience. Likewise, the model-breaking treatment group significantly outperformed the other group on the model-breaking skill. This is important because it shows that direct measurement video-based instruction can help students acquire science-process skills, which are critical for scientists, and which are a key part of current science education approaches such as the Next Generation Science Standards and the Advanced Placement Physics 1 course.
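The two skills can be illustrated with synthetic data: fit a line to measurements (model making), then test whether a candidate model's residuals stay within the measurement uncertainty (model breaking). The numbers and tolerance below are invented for illustration; this is not the study's instrument.

```python
# Model making: ordinary least-squares fit of y = m*x + b.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# Model breaking: reject a model if any residual exceeds tol * sigma.
def consistent(xs, ys, model, sigma, tol=2.0):
    return all(abs(y - model(x)) <= tol * sigma for x, y in zip(xs, ys))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 2.05, 3.9, 6.1]            # roughly y = 2x with small noise
m, b = fit_line(xs, ys)
ok = consistent(xs, ys, lambda x: m * x + b, sigma=0.1)   # fitted model holds
bad = consistent(xs, ys, lambda x: 3 * x, sigma=0.1)      # wrong slope fails
```

The same measurement data thus supports one model and rules out another, which is the evaluative judgment the model-breaking activities scaffold.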

  4. Robust Adaptable Video Copy Detection

    DEFF Research Database (Denmark)

    Assent, Ira; Kremer, Hardy

    2009-01-01

Video copy detection should be capable of identifying video copies subject to alterations, e.g. in video contrast or frame rates. We propose a video copy detection scheme that allows for adaptable detection of videos that are altered temporally (e.g. frame rate change) and/or visually (e.g. change in contrast). Our query processing combines filtering and indexing structures for efficient multistep computation of video copies under this model. We show that our model successfully identifies altered video copies and does so more reliably than existing models.

  5. Effectiveness of Video Modeling Provided by Mothers in Teaching Play Skills to Children with Autism

    Science.gov (United States)

    Besler, Fatma; Kurt, Onur

    2016-01-01

    Video modeling is an evidence-based practice that can be used to provide instruction to individuals with autism. Studies show that this instructional practice is effective in teaching many types of skills such as self-help skills, social skills, and academic skills. However, in previous studies, videos used in the video modeling process were…

  6. The Prompt and High Energy Emission of Gamma Ray Bursts

    International Nuclear Information System (INIS)

    Meszaros, P.

    2009-01-01

I discuss some recent developments concerning the prompt emission of gamma-ray bursts, in particular the jet properties and radiation mechanisms, as exemplified by the naked-eye burst GRB 080319b, and the prompt X-ray emission of XRB080109/SN2008d, where the progenitor has, for the first time, been shown to contribute to the prompt emission. I then discuss some recent theoretical calculations of the GeV/TeV spectrum of GRBs in the context of both leptonic SSC models and hadronic models. The recent observations by the Fermi satellite of GRB 080916C are then reviewed, and their implications for such models are discussed, together with its interesting determination of a bulk Lorentz factor and the highest lower limit on the quantum gravity energy scale so far.

  7. Playing with Process: Video Game Choice as a Model of Behavior

    Science.gov (United States)

    Waelchli, Paul

    2010-01-01

    Popular culture experience in video games creates avenues to practice information literacy skills and model research in a real-world setting. Video games create a unique popular culture experience where players can invest dozens of hours on one game, create characters to identify with, organize skill sets and plot points, collaborate with people…

  8. Search for prompt neutrinos with AMANDA-II

    Energy Technology Data Exchange (ETDEWEB)

    Gozzini, Sara Rebecca

    2008-09-11

The investigation performed in this work aims to identify and disentangle the signal of prompt neutrinos from the inclusive atmospheric spectrum. We have analysed data recorded in the years 2000-2003 by the AMANDA-II detector at the geographical South Pole. After a tight event selection, our sample is composed of about 4 × 10^3 atmospheric neutrinos. Prompt neutrinos are decay products of heavy quark hadrons, which are produced in the collision of a cosmic ray particle with a nucleon in the atmosphere. The technique used to recognise prompt neutrinos is based on simulated information of their energy spectrum, which appears harder than that of the conventional component from light quarks. Models accounting for different hadron production and decay schemes have been included in a Monte Carlo simulation and convoluted with the detector response, in order to reproduce the different spectra. The background of conventional events has been described with the Bartol 2006 tables. The energy spectrum of our data has been reconstructed through a numerical unfolding algorithm. The reconstruction is based on a Monte Carlo simulation and uses as an input three parameters of the neutrino track which are correlated with the energy of the event. Numerical regularisation is introduced to achieve a result free of unphysical oscillations, a typical unfortunate feature of unfolding. The reconstructed data spectrum has been compared with different predictions using the model rejection factor technique. The prompt neutrino models differ in the choice of the hadron interaction model, the set of parton distribution functions and the numerical parameterisation of the fragmentation functions describing the transition from quark to hadrons. Here we considered mainly three classes of models, known in the literature as the Recombination Quark Parton Model, the Quark Gluon String Model and the Perturbative QCD model. Upper limits have been set on the expected flux predictions, based on our

  9. SnapVideo: Personalized Video Generation for a Sightseeing Trip.

    Science.gov (United States)

Zhang, Luming; Jing, Peiguang; Su, Yuting; Zhang, Chao; Shao, Ling

    2017-11-01

Leisure tourism is an indispensable activity in urban people's lives. Due to the popularity of intelligent mobile devices, a large number of photos and videos are recorded during a trip. Therefore, the ability to vividly and interestingly display these media data is a useful technique. In this paper, we propose SnapVideo, a new method that intelligently converts a personal album describing a trip into a comprehensive, aesthetically pleasing, and coherent video clip. The proposed framework contains three main components. The scenic spot identification model first personalizes the video clips based on multiple prespecified audience classes. We then search for auxiliary related videos from YouTube (https://www.youtube.com/) according to the selected photos. To comprehensively describe a scenery, the view generation module clusters the crawled video frames into a number of views. Finally, a probabilistic model is developed to fit the frames from multiple views into an aesthetically pleasing and coherent video clip, which optimally captures the semantics of a sightseeing trip. Extensive user studies demonstrated the competitiveness of our method from an aesthetic point of view. Moreover, quantitative analysis reflects that semantically important spots are well preserved in the final video clip.

  10. Video Modeling Training Effects on Types of Attention Delivered by Educational Care-Providers.

    Science.gov (United States)

    Taber, Traci A; Lambright, Nathan; Luiselli, James K

    2017-06-01

    We evaluated the effects of abbreviated (i.e., one-session) video modeling on delivery of student-preferred attention by educational care-providers. The video depicted a novel care-provider interacting with and delivering attention to the student. Within a concurrent multiple baseline design, video modeling increased delivery of the targeted attention for all participants as well as their delivery of another type of attention that was not trained although these effects were variable within and between care-providers. We discuss the clinical and training implications from these findings.

  11. A model for measurement of noise in CCD digital-video cameras

    International Nuclear Information System (INIS)

    Irie, K; Woodhead, I M; McKinnon, A E; Unsworth, K

    2008-01-01

    This study presents a comprehensive measurement of CCD digital-video camera noise. Knowledge of noise detail within images or video streams allows for the development of more sophisticated algorithms for separating true image content from the noise generated in an image sensor. The robustness and performance of an image-processing algorithm is fundamentally limited by sensor noise. The individual noise sources present in CCD sensors are well understood, but there has been little literature on the development of a complete noise model for CCD digital-video cameras, incorporating the effects of quantization and demosaicing.
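
The noise sources the abstract names (shot noise, read noise, quantization) compose into a simple signal-chain simulation. Below is a toy sketch of such a model; all parameter values (read noise, gain, bit depth, full well) are illustrative, not taken from the paper, and demosaicing is omitted.

```python
import numpy as np

def ccd_noise_model(photons, read_noise_e=5.0, gain_e_per_dn=2.0,
                    full_well=20000, bits=12, rng=None):
    """Toy CCD signal chain: photon shot noise (Poisson), additive
    Gaussian read noise, saturation, then gain conversion and
    quantization to digital numbers (DN)."""
    rng = np.random.default_rng(rng)
    electrons = rng.poisson(photons).astype(float)             # shot noise
    electrons += rng.normal(0.0, read_noise_e, photons.shape)  # read noise
    electrons = np.clip(electrons, 0, full_well)               # saturation
    dn = np.floor(electrons / gain_e_per_dn)                   # quantization
    return np.clip(dn, 0, 2 ** bits - 1)

# Key property of a shot-noise-limited sensor: noise variance grows
# roughly linearly with signal level, which any complete model must capture.
flat_dark = ccd_noise_model(np.full((200, 200), 100.0), rng=0)
flat_bright = ccd_noise_model(np.full((200, 200), 2000.0), rng=0)
print(flat_dark.var(), flat_bright.var())
```

Comparing the variance of the two flat fields shows the signal-dependent component dominating the fixed read-noise floor at higher exposure.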

  12. Stability region for a prompt power variation of a coupled-core system with positive prompt feedback

    International Nuclear Information System (INIS)

    Watanabe, S.; Nishina, K.

    1984-01-01

    A stability analysis using a one-group model is presented for a coupled-core system. Positive prompt feedback of the form γp_j is assumed, where p_j is the fractional power variation of core j. Prompt power variations over a range of a few milliseconds after a disturbance are analyzed. The analysis combines Lyapunov's method, the prompt jump approximation, and the eigenfunction expansion of the coupling region response flux. The last is treated as a pseudo-delayed neutron precursor. An asymptotic stability region is found for p_j. For an asymmetric flux variation over a system of two coupled cores, either p_I or p_II can slightly exceed, by virtue of the coupling effect, the critical value (β/γ - 1) of the single-core case. Such a stability region is increased by additional inclusion of the coupling region fundamental mode in the treatment. The coupling region contributes to stability through its delayed response and coupling. An optimum core separation distance for stability is found.

  13. A Practical Strategy for Teaching a Child with Autism to Attend to and Imitate a Portable Video Model

    Science.gov (United States)

    Plavnick, Joshua B.

    2012-01-01

    Video modeling is an effective and efficient methodology for teaching new skills to individuals with autism. New technology may enhance video modeling as smartphones or tablet computers allow for portable video displays. However, the reduced screen size may decrease the likelihood of attending to the video model for some children. The present…

  14. Distributed PROMPT-LTL Synthesis

    Directory of Open Access Journals (Sweden)

    Swen Jacobs

    2016-09-01

    Full Text Available We consider the synthesis of distributed implementations for specifications in Prompt Linear Temporal Logic (PROMPT-LTL), which extends LTL by temporal operators equipped with parameters that bound their scope. For single process synthesis it is well-established that such parametric extensions do not increase worst-case complexities. For synchronous systems, we show that, despite being more powerful, the distributed realizability problem for PROMPT-LTL is not harder than its LTL counterpart. For asynchronous systems we have to consider an assume-guarantee synthesis problem, as we have to express scheduling assumptions. As asynchronous distributed synthesis is already undecidable for LTL, we give a semi-decision procedure for the PROMPT-LTL assume-guarantee synthesis problem based on bounded synthesis.

  15. Watch This! A Guide to Implementing Video Modeling in the Classroom

    Science.gov (United States)

    Wynkoop, Kaylee Stahr

    2016-01-01

    The video modeling (VM) teaching strategy is one in which a student watches a video of someone performing a specific behavior, skill, or task and is then expected to complete the behavior, skill, or task. This column discusses the variety of ways in which VM has been documented within the literature and supports teacher interest in the strategy by…

  16. Prompt form of relativistic equations of motion in a model of singular lagrangian formalism

    International Nuclear Information System (INIS)

    Gajda, R.P.; Duviryak, A.A.; Klyuchkovskij, Yu.B.

    1983-01-01

    The purpose of the paper is to develop the way of transition from equations of motion in the singular lagrangian formalism to three-dimensional equations of Newton type in the prompt form of dynamics, in the framework of a c⁻² parameter expansion (so-called quasirelativistic approximations), as well as to find the corresponding integrals of motion. The first quasirelativistic approximation for the Dominici, Gomis, Longhi model was obtained and investigated.

  17. The impact of thin models in music videos on adolescent girls' body dissatisfaction.

    Science.gov (United States)

    Bell, Beth T; Lawton, Rebecca; Dittmar, Helga

    2007-06-01

    Music videos are a particularly influential, new form of mass media for adolescents, which include the depiction of scantily clad female models whose bodies epitomise the ultra-thin sociocultural ideal for young women. The present study is the first exposure experiment that examines the impact of thin models in music videos on the body dissatisfaction of 16-19-year-old adolescent girls (n=87). First, participants completed measures of positive and negative affect, body image, and self-esteem. Under the guise of a memory experiment, they then either watched three music videos, listened to three songs (from the videos), or learned a list of words. Affect and body image were assessed afterwards. In contrast to the music listening and word-learning conditions, girls who watched the music videos reported significantly elevated scores on an adaptation of the Body Image States Scale after exposure, indicating increased body dissatisfaction. Self-esteem was not found to be a significant moderator of this relationship. Implications and future research are discussed.

  18. Effects of video modeling on communicative social skills of college students with Asperger syndrome.

    Science.gov (United States)

    Mason, Rose A; Rispoli, Mandy; Ganz, Jennifer B; Boles, Margot B; Orr, Kristie

    2012-01-01

    Empirical support regarding effective interventions for individuals with autism spectrum disorder (ASD) within a postsecondary community is limited. Video modeling, an empirically supported intervention for children and adolescents with ASD, may prove effective in addressing the needs of individuals with ASD in higher education. This study evaluated the effects of video modeling without additional treatment components to improve social-communicative skills, specifically, eye contact, facial expression, and conversational turn-taking in college students with ASD. This study utilized a multiple baseline single-case design across behaviors for two post-secondary students with ASD to evaluate the effects of the video modeling intervention. Large effect sizes and statistically significant change across all targeted skills for one participant and eye contact and turn-taking for the other participant were obtained. The use of video modeling without additional intervention may increase the social skills of post-secondary students with ASD. Implications for future research are discussed.

  19. Marginally fast cooling synchrotron models for prompt GRBs

    Science.gov (United States)

    Beniamini, Paz; Barniol Duran, Rodolfo; Giannios, Dimitrios

    2018-05-01

    Previous studies have considered synchrotron as the emission mechanism for prompt gamma-ray bursts (GRBs). These works have shown that the electrons must cool on a time-scale comparable to the dynamic time at the source in order to satisfy spectral constraints while maintaining high radiative efficiency. We focus on conditions where synchrotron cooling is balanced by a continuous source of heating, and in which these constraints are naturally satisfied. Assuming that a majority of the electrons in the emitting region are contributing to the observed peak, we find that the energy per electron has to be E ≳ 20 GeV and that the Lorentz factor of the emitting material has to be very large, 10³ ≲ Γ_em ≲ 10⁴, well in excess of the bulk Lorentz factor of the jet inferred from GRB afterglows. A number of independent constraints then indicate that the emitters must be moving relativistically, with Γ′ ≈ 10, relative to the bulk frame of the jet and that the jet must be highly magnetized upstream of the emission region, σ_up ≳ 30. The emission radius is also strongly constrained in this model to R ≳ 10¹⁶ cm. These values are consistent with magnetic jet models where the dissipation is driven by magnetic reconnection that takes place far away from the base of the jet.

  20. Statistical Analysis of Video Frame Size Distribution Originating from Scalable Video Codec (SVC)

    Directory of Open Access Journals (Sweden)

    Sima Ahmadpour

    2017-01-01

    Full Text Available Designing an effective and high performance network requires an accurate characterization and modeling of network traffic. The modeling of video frame sizes is normally applied in simulation studies and mathematical analysis and in generating streams for testing and compliance purposes. Besides, video traffic is assumed to be a major source of multimedia traffic in future heterogeneous networks. Therefore, the statistical distribution of video data can be used as the input for performance modeling of networks. The finding of this paper comprises the theoretical definition of the distribution which seems to be relevant to the video trace in terms of its statistical properties, and finds the best distribution using both the graphical method and the hypothesis test. The data set used in this article consists of layered video traces generated from the Scalable Video Codec (SVC) video compression technique applied to three different movies.
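
The paper selects a best-fitting frame-size distribution via graphical comparison and a hypothesis test. A minimal stdlib-plus-numpy sketch of that workflow follows, using synthetic lognormal "frame sizes" as a stand-in for a real SVC trace and the Kolmogorov-Smirnov statistic to rank two candidate distributions (the actual study compares more candidates on real traces).

```python
import math
import numpy as np

def ks_statistic(data, cdf):
    """Kolmogorov-Smirnov statistic: largest gap between the empirical
    CDF of the data and a candidate fitted CDF."""
    x = np.sort(data)
    n = len(x)
    f = np.array([cdf(v) for v in x])
    return max(np.max(np.arange(1, n + 1) / n - f),
               np.max(f - np.arange(0, n) / n))

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Synthetic "frame sizes": heavy-tailed, roughly lognormal, as is
# common for video traces (a stand-in for a real SVC trace file).
rng = np.random.default_rng(42)
sizes = rng.lognormal(mean=9.0, sigma=0.5, size=4000)

# Fit both candidates by maximum likelihood (moments of the data and
# of its logarithm), then compare KS statistics: smaller is better.
mu_n, sd_n = sizes.mean(), sizes.std()
logs = np.log(sizes)
mu_l, sd_l = logs.mean(), logs.std()
fits = {
    "normal": lambda v: normal_cdf(v, mu_n, sd_n),
    "lognormal": lambda v: normal_cdf(math.log(v), mu_l, sd_l),
}
scores = {name: ks_statistic(sizes, cdf) for name, cdf in fits.items()}
best = min(scores, key=scores.get)
print(best, scores)
```

In a real study the KS statistic would be converted to a p-value (or compared against a critical value) rather than used only for ranking.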

  1. An evaluation of preference for video and in vivo modeling.

    Science.gov (United States)

    Geiger, Kaneen B; Leblanc, Linda A; Dillon, Courtney M; Bates, Stephanie L

    2010-01-01

    We assessed preference for video or in vivo modeling using a concurrent-chains arrangement with 3 children with autism. The two modeling conditions produced similar acquisition rates and no differential selection (i.e., preference) for all 3 participants.

  2. Multi-Task Video Captioning with Video and Entailment Generation

    OpenAIRE

    Pasunuru, Ramakanth; Bansal, Mohit

    2017-01-01

    Video captioning, the task of describing the content of a video, has seen some promising improvements in recent years with sequence-to-sequence models, but accurately learning the temporal and logical dynamics involved in the task still remains a challenge, especially given the lack of sufficient annotated data. We improve video captioning by sharing knowledge with two related directed-generation tasks: a temporally-directed unsupervised video prediction task to learn richer context-aware vid...

  3. A theory-based video messaging mobile phone intervention for smoking cessation: randomized controlled trial.

    Science.gov (United States)

    Whittaker, Robyn; Dorey, Enid; Bramley, Dale; Bullen, Chris; Denny, Simon; Elley, C Raina; Maddison, Ralph; McRobbie, Hayden; Parag, Varsha; Rodgers, Anthony; Salmon, Penny

    2011-01-21

    Advances in technology allowed the development of a novel smoking cessation program delivered by video messages sent to mobile phones. This social cognitive theory-based intervention (called "STUB IT") used observational learning via short video diary messages from role models going through the quitting process to teach behavioral change techniques. The objective of our study was to assess the effectiveness of a multimedia mobile phone intervention for smoking cessation. A randomized controlled trial was conducted with 6-month follow-up. Participants had to be 16 years of age or over, be current daily smokers, be ready to quit, and have a video message-capable phone. Recruitment targeted younger adults predominantly through radio and online advertising. Registration and data collection were completed online, prompted by text messages. The intervention group received an automated package of video and text messages over 6 months that was tailored to self-selected quit date, role model, and timing of messages. Extra messages were available on demand to beat cravings and address lapses. The control group also set a quit date and received a general health video message sent to their phone every 2 weeks. The target sample size was not achieved due to difficulty recruiting young adult quitters. Of the 226 randomized participants, 47% (107/226) were female and 24% (54/226) were Maori (indigenous population of New Zealand). Their mean age was 27 years (SD 8.7), and there was a high level of nicotine addiction. Continuous abstinence at 6 months was 26.4% (29/110) in the intervention group and 27.6% (32/116) in the control group (P = .8). Feedback from participants indicated that the support provided by the video role models was important and appreciated. This study was not able to demonstrate a statistically significant effect of the complex video messaging mobile phone intervention compared with simple general health video messages via mobile phone. However, there was

  4. Modeling the video distribution link in the Next Generation Optical Access Networks

    International Nuclear Information System (INIS)

    Amaya, F; Cardenas, A; Tafur, I

    2011-01-01

    In this work we present a model for the design and optimization of the video distribution link in the next generation optical access network. We analyze the video distribution performance in a SCM-WDM link, including the noise, the distortion and the fiber optic nonlinearities. Additionally, we consider in the model the effect of distributed Raman amplification, used to extend the capacity and the reach of the optical link. In the model, we use the nonlinear Schroedinger equation with the purpose to obtain capacity limitations and design constraints of the next generation optical access networks.

  5. Blind prediction of natural video quality.

    Science.gov (United States)

    Saad, Michele A; Bovik, Alan C; Charrier, Christophe

    2014-03-01

    We propose a blind (no reference or NR) video quality evaluation model that is nondistortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top performing reduced and full reference VQA algorithms.
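
The abstract's spatio-temporal NSS idea rests on DCT-domain statistics of frame differences. The sketch below computes one such toy feature, the kurtosis of block-DCT AC coefficients of a frame difference; it illustrates the spirit of the approach only and is not the actual Video BLIINDS feature set.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] /= np.sqrt(2)
    return D * np.sqrt(2.0 / n)

def frame_diff_dct_kurtosis(prev, curr, block=8):
    """Kurtosis of block-DCT AC coefficients of the frame difference:
    a simple spatio-temporal statistic in the spirit of NSS-based
    NR-VQA models (a toy feature, not the published BLIINDS set)."""
    D = dct_matrix(block)
    diff = curr.astype(float) - prev.astype(float)
    h, w = diff.shape
    coeffs = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            c = D @ diff[i:i + block, j:j + block] @ D.T  # 2-D DCT
            coeffs.append(c.ravel()[1:])                  # drop DC term
    coeffs = np.concatenate(coeffs)
    m2 = np.mean(coeffs ** 2)
    return np.mean(coeffs ** 4) / (m2 ** 2 + 1e-12)

rng = np.random.default_rng(1)
# Pure-noise "video" (Gaussian difference) vs. a sparse-motion difference:
# Gaussian coefficients give kurtosis near 3; localized motion is peaky.
noise_kurt = frame_diff_dct_kurtosis(rng.normal(size=(64, 64)),
                                     rng.normal(size=(64, 64)))
frame = np.zeros((64, 64))
moved = frame.copy()
moved[20:28, 20:28] = 50.0
motion_kurt = frame_diff_dct_kurtosis(frame, moved)
print(noise_kurt, motion_kurt)
```

The contrast between the two values is the kind of distributional regularity that a learned quality model can exploit without a pristine reference.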

  6. Variability in the Effectiveness of a Video Modeling Intervention Package for Children with Autism

    Science.gov (United States)

    Plavnick, Joshua B.; MacFarland, Mari C.; Ferreri, Summer J.

    2015-01-01

    Video modeling is an evidence-based instructional strategy for teaching a variety of skills to individuals with autism. Despite the effectiveness of this strategy, there is some uncertainty regarding the conditions under which video modeling is likely to be effective. The present investigation examined the differential effectiveness of video…

  7. Recorded peer video chat as a research and development tool

    DEFF Research Database (Denmark)

    Otrel-Cass, Kathrin; Cowie, Bronwen

    2016-01-01

    When practising teachers take time to exchange their experiences and reflect on their teaching realities as critical friends, they add meaning and depth to educational research. When peer talk is facilitated through video chat platforms, teachers can meet (virtually) face to face even when … recordings were transcribed and used to prompt further discussion. The recording of the video chat meetings provided an opportunity for researchers to listen in and follow up on points they felt needed further unpacking or clarification. The recorded peer video chat conversations provided an additional opportunity to stimulate and support teacher participants in a process of critical analysis and reflection on practice. The discussions themselves were empowering because in the absence of the researcher, the teachers, in negotiation with peers, choose what is important enough to them to take time to discuss.

  8. Immersive video

    Science.gov (United States)

    Moezzi, Saied; Katkere, Arun L.; Jain, Ramesh C.

    1996-03-01

    Interactive video and television viewers should have the power to control their viewing position. To make this a reality, we introduce the concept of Immersive Video, which employs computer vision and computer graphics technologies to provide remote users a sense of complete immersion when viewing an event. Immersive Video uses multiple videos of an event, captured from different perspectives, to generate a full 3D digital video of that event. That is accomplished by assimilating important information from each video stream into a comprehensive, dynamic, 3D model of the environment. Using this 3D digital video, interactive viewers can then move around the remote environment and observe the events taking place from any desired perspective. Our Immersive Video System currently provides interactive viewing and `walkthrus' of staged karate demonstrations, basketball games, dance performances, and typical campus scenes. In its full realization, Immersive Video will be a paradigm shift in visual communication which will revolutionize television and video media, and become an integral part of future telepresence and virtual reality systems.

  9. So you want to conduct a randomised trial? Learnings from a 'failed' feasibility study of a Crisis Resource Management prompt during simulated paediatric resuscitation.

    Science.gov (United States)

    Teis, Rachel; Allen, Jyai; Lee, Nigel; Kildea, Sue

    2017-02-01

    No study has tested a Crisis Resource Management prompt on resuscitation performance. We conducted a feasibility, unblinded, parallel-group, randomised controlled trial at one Australian paediatric hospital (June-September 2014). Eligible participants were any doctor, nurse, or nurse manager who would normally be involved in a Medical Emergency Team simulation. The unit of block randomisation was one of six scenarios (3 control:3 intervention) with or without a verbal prompt. The primary outcomes tested the feasibility and utility of the intervention and data collection tools. The secondary outcomes measured resuscitation quality and team performance. Data were analysed from six resuscitation scenarios (n=49 participants); three control groups (n=25) and three intervention groups (n=24). The ability to measure all data items on the data collection tools was hindered by problems with the recording devices both in the mannequins and the video camera. For a pilot study, greater training for the prompt role and pre-briefing participants about assessment of their cardio-pulmonary resuscitation quality should be undertaken. Data could be analysed in real time with independent video analysis to validate findings. Two cameras would strengthen reliability of the methods. Copyright © 2016 College of Emergency Nursing Australasia. Published by Elsevier Ltd. All rights reserved.

  10. Multimodal Feature Learning for Video Captioning

    Directory of Open Access Journals (Sweden)

    Sujin Lee

    2018-01-01

    Full Text Available Video captioning refers to the task of generating a natural language sentence that explains the content of the input video clips. This study proposes a deep neural network model for effective video captioning. Apart from visual features, the proposed model additionally learns semantic features that describe the video content effectively. In our model, visual features of the input video are extracted using convolutional neural networks such as C3D and ResNet, while semantic features are obtained using recurrent neural networks such as LSTM. In addition, our model includes an attention-based caption generation network to generate the correct natural language captions based on the multimodal video feature sequences. Various experiments, conducted with the two large benchmark datasets, Microsoft Video Description (MSVD) and Microsoft Research Video-to-Text (MSR-VTT), demonstrate the performance of the proposed model.
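
Attention-based caption generation, as mentioned in the abstract, scores each per-frame feature against the decoder state and feeds a weighted context vector into word generation. The following is a generic additive (Bahdanau-style) attention sketch with random toy weights, not the paper's exact network or dimensions.

```python
import numpy as np

def soft_attention(decoder_state, features, Wh, Wf, v):
    """Additive attention over per-frame features: score each frame
    against the decoder state, softmax the scores into weights, and
    return the weighted context vector used to emit the next word."""
    # features: (T, d_f); decoder_state: (d_h,)
    scores = np.tanh(features @ Wf.T + decoder_state @ Wh.T) @ v  # (T,)
    weights = np.exp(scores - scores.max())                       # stable softmax
    weights /= weights.sum()
    context = weights @ features                                  # (d_f,)
    return context, weights

# Toy dimensions and random parameters, for illustration only.
rng = np.random.default_rng(0)
T, d_f, d_h, d_a = 6, 16, 8, 12
features = rng.normal(size=(T, d_f))   # e.g. per-frame C3D/ResNet features
state = rng.normal(size=d_h)           # decoder hidden state
Wf = rng.normal(size=(d_a, d_f)) * 0.1
Wh = rng.normal(size=(d_a, d_h)) * 0.1
v = rng.normal(size=d_a)
context, weights = soft_attention(state, features, Wh, Wf, v)
print(weights.round(3), weights.sum())
```

At each decoding step the weights shift over the frame sequence, which is how the caption generator attends to different parts of the video for different words.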

  11. Differential and integral characteristics of prompt fission neutrons in the statistical theory

    International Nuclear Information System (INIS)

    Gerasimenko, B.F.; Rubchenya, V.A.

    1989-01-01

    Hauser-Feshbach statistical theory is the most consistent approach to the calculation of both the spectra and other characteristics of prompt fission neutrons. On the basis of this approach, a statistical model for the calculation of differential characteristics of prompt fission neutrons in low-energy fission has been proposed and improved in order to take into account the anisotropy effects arising from prompt fission neutron emission from fragments. 37 refs, 6 figs

  12. Smoking, ADHD, and Problematic Video Game Use: A Structural Modeling Approach.

    Science.gov (United States)

    Lee, Hyo Jin; Tran, Denise D; Morrell, Holly E R

    2018-05-01

    Problematic video game use (PVGU), or addiction-like use of video games, is associated with physical and mental health problems and problems in social and occupational functioning. Possible correlates of PVGU include frequency of play, cigarette smoking, and attention deficit hyperactivity disorder (ADHD). The aim of the current study was to explore simultaneously the relationships among these variables, as well as to test whether two separate measures of PVGU measure the same construct, using a structural modeling approach. Secondary data analysis was conducted on 2,801 video game users (M_age = 22.43 years, SD_age = 4.7; 93 percent male) who completed an online survey. The full model fit the data well: χ²(2) = 2.017, p > 0.05; root mean square error of approximation (RMSEA) = 0.002 (90% CI [0.000-0.038]); comparative fit index (CFI) = 1.000; standardized root mean square residual (SRMR) = 0.004; and all standardized residuals … video game use explained 41.8 percent of variance in PVGU. Tracking these variables may be useful for PVGU prevention and assessment. Young's Internet Addiction Scale, adapted for video game use, and the Problem Videogame Playing Scale both loaded strongly onto a PVGU factor, suggesting that they measure the same construct, that studies using either measure may be compared to each other, and that both measures may be used as a screener of PVGU.

  13. Modelling audiovisual integration of affect from videos and music.

    Science.gov (United States)

    Gao, Chuanji; Wedell, Douglas H; Kim, Jongwan; Weber, Christine E; Shinkareva, Svetlana V

    2018-05-01

    Two experiments examined how affective values from visual and auditory modalities are integrated. Experiment 1 paired music and videos drawn from three levels of valence while holding arousal constant. Experiment 2 included a parallel combination of three levels of arousal while holding valence constant. In each experiment, participants rated their affective states after unimodal and multimodal presentations. Experiment 1 revealed a congruency effect in which stimulus combinations of the same extreme valence resulted in more extreme state ratings than component stimuli presented in isolation. An interaction between music and video valence reflected the greater influence of negative affect. Video valence was found to have a significantly greater effect on combined ratings than music valence. The pattern of data was explained by a five parameter differential weight averaging model that attributed greater weight to the visual modality and increased weight with decreasing values of valence. Experiment 2 revealed a congruency effect only for high arousal combinations and no interaction effects. This pattern was explained by a three parameter constant weight averaging model with greater weight for the auditory modality and a very low arousal value for the initial state. These results demonstrate key differences in audiovisual integration between valence and arousal.
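
The "differential weight averaging model" described above predicts a combined rating as a weighted average of an initial state and each modality's value, with larger visual weight and weight increasing for more negative valence. Here is a small sketch of that functional form; the parameter values and the linear negative-valence weight boost are hypothetical stand-ins for the five parameters the authors estimated from data.

```python
def averaging_model(video_val=None, music_val=None,
                    w0=1.0, s0=0.0, w_video=2.0, w_music=1.0,
                    neg_boost=0.5):
    """Information-integration (weighted averaging) prediction of a
    combined affect rating. w0/s0 are the initial-state weight and
    value; each presented modality contributes weight * value, with
    the weight growing for more negative valence (neg_boost)."""
    num, den = w0 * s0, w0
    for value, w in ((video_val, w_video), (music_val, w_music)):
        if value is not None:
            weight = w + neg_boost * max(0.0, -value)
            num += weight * value
            den += weight
    return num / den

# Congruency effect: two same-valence sources rate more extremely than
# either alone, because the neutral initial state dilutes a single
# source more; negative combinations come out more extreme still.
alone = averaging_model(video_val=2.0)
combined = averaging_model(video_val=2.0, music_val=2.0)
neg = averaging_model(video_val=-2.0, music_val=-2.0)
print(alone, combined, neg)
```

Averaging (rather than adding) is what lets the model reproduce both the congruency effect and the stronger pull of negative material with a handful of parameters.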

  14. Incremental principal component pursuit for video background modeling

    Science.gov (United States)

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in the background, with a computational complexity that allows for real-time processing, a low memory footprint, and robustness to translational and rotational jitter.
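
The abstract's key properties are one-frame-at-a-time processing and a low memory footprint. The sketch below shows those properties in a heavily simplified form: the low-rank background is an exponentially weighted running estimate rather than an incrementally tracked subspace, and the sparse foreground is a thresholded difference. It illustrates the streaming background/foreground split, not the PCP algorithm itself.

```python
import numpy as np

def incremental_background(frames, alpha=0.3, thresh=25.0):
    """Streaming background subtraction in the spirit of incremental
    PCP, but simplified: low-rank part = exponentially weighted
    background, sparse part = thresholded difference. Memory is
    O(pixels); each frame is processed as it arrives."""
    bg = None
    for f in frames:
        f = f.astype(float)
        if bg is None:
            bg = f.copy()
            mask = np.zeros(f.shape, dtype=bool)
        else:
            mask = np.abs(f - bg) > thresh      # sparse foreground
            bg = (1 - alpha) * bg + alpha * f   # adapt to background change
        yield bg.copy(), mask

# Synthetic clip: a static noisy scene with a bright square moving
# one column per frame.
rng = np.random.default_rng(3)
scene = rng.uniform(0, 50, size=(32, 32))
frames = []
for t in range(10):
    f = scene + rng.normal(0, 1, scene.shape)
    f[5:10, t:t + 5] += 200.0                  # moving object
    frames.append(f)
last_bg, last_mask = list(incremental_background(frames))[-1]
print(last_mask.sum())
```

A real incremental PCP update would replace the exponentially weighted mean with a rank-r subspace update, which is what buys robustness to jitter.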

  15. Key Issues in Modeling of Complex 3D Structures from Video Sequences

    Directory of Open Access Journals (Sweden)

    Shengyong Chen

    2012-01-01

    Full Text Available Construction of three-dimensional structures from video sequences has wide applications for intelligent video analysis. This paper summarizes the key issues of the theory and surveys the recent advances in the state of the art. Reconstruction of a scene object from video sequences often takes the basic principle of structure from motion with an uncalibrated camera. This paper lists the typical strategies and summarizes the typical solutions or algorithms for modeling of complex three-dimensional structures. Open difficult problems are also suggested for further study.

  16. Modeling the video distribution link in the Next Generation Optical Access Networks

    DEFF Research Database (Denmark)

    Amaya, F.; Cárdenas, A.; Tafur Monroy, Idelfonso

    2011-01-01

    In this work we present a model for the design and optimization of the video distribution link in the next generation optical access network. We analyze the video distribution performance in a SCM-WDM link, including the noise, the distortion and the fiber optic nonlinearities. Additionally, we consider in the model the effect of distributed Raman amplification, used to extend the capacity and the reach of the optical link. In the model, we use the nonlinear Schrödinger equation with the purpose to obtain capacity limitations and design constraints of the next generation optical access networks.

  17. Monte Carlo simulations of prompt-gamma emission during carbon ion irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Le Foulher, F.; Bajard, M.; Chevallier, M.; Dauvergne, D.; Henriquet, P.; Ray, C.; Testa, E.; Testa, M. [Universite de Lyon 1, F-69003 Lyon (France); IN2P3/CNRS, UMR 5822, Institut de Physique Nucleaire de Lyon, F-69622 Villeurbanne (France); Freud, N.; Letang, J. M. [Laboratoire de Controles Non Destructifs Par Rayonnements Ionisants, INSA-Lyon, F-69621 Villeurbanne cedex (France); Karkar, S. [CPPM, Aix-Marseille Universite, CNRS/IN2P3, Marseille (France); Plescak, R.; Schardt, D. [Gesellschaft fur Schwerionenforschung (GSI), D-64291 Darmstadt (Germany)

    2009-07-01

    Monte Carlo simulations based on the Geant4 tool-kit (version 9.1) were performed to study the emission of secondary prompt gamma-rays produced by nuclear reactions during carbon ion-beam therapy. These simulations were performed along with an experimental program and instrumentation developments which aim at designing a prompt gamma-ray device for real-time control of hadron therapy. The objective of the present study is twofold: first, to present the features of the prompt gamma radiation in the case of carbon ion irradiation; secondly, to simulate the experimental setup and to compare measured and simulated counting rates corresponding to various experiments. For each experiment, we found that simulations overestimate prompt gamma-ray detection yields by a factor of 12. Uncertainties in fragmentation cross sections and binary cascade model cannot explain such discrepancies. The so-called 'photon evaporation' model is therefore questionable and its modification is currently in progress. (authors)

  18. Incorporating Video Modeling into a School-Based Intervention for Students with Autism Spectrum Disorders

    Science.gov (United States)

    Wilson, Kaitlyn P.

    2013-01-01

    Purpose: Video modeling is an intervention strategy that has been shown to be effective in improving the social and communication skills of students with autism spectrum disorders, or ASDs. The purpose of this tutorial is to outline empirically supported, step-by-step instructions for the use of video modeling by school-based speech-language…

  19. Comparison of Video and Live Modeling in Teaching Response Chains to Children with Autism

    Science.gov (United States)

    Ergenekon, Yasemin; Tekin-Iftar, Elif; Kapan, Alper; Akmanoglu, Nurgul

    2014-01-01

    Research has shown that video and live modeling are both effective in teaching new skills to children with autism. An adapted alternating treatments design was used to compare the effectiveness and efficiency of video and live modeling in teaching response chains to three children with autism. Each child was taught two chained skills; one skill…

  20. Prerequisite Skills That Support Learning through Video Modeling

    Science.gov (United States)

    MacDonald, Rebecca P. F.; Dickson, Chata A.; Martineau, Meaghan; Ahearn, William H.

    2015-01-01

    The purpose of this study was to evaluate the relationship between tasks that require delayed discriminations such as delayed imitation and delayed matching to sample on acquisition of skills using video modeling. Twenty-nine participants with an ASD diagnosis were assessed on a battery of tasks including both immediate and delayed imitation and…

  1. Thermal and prompt photons at RHIC and the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Paquet, Jean-François [Department of Physics & Astronomy, Stony Brook University, Stony Brook, NY 11794 (United States); Department of Physics, McGill University, 3600 University Street, Montreal, Quebec, H3A2T8 (Canada); Shen, Chun [Department of Physics, McGill University, 3600 University Street, Montreal, Quebec, H3A2T8 (Canada); Denicol, Gabriel [Department of Physics, McGill University, 3600 University Street, Montreal, Quebec, H3A2T8 (Canada); Physics Department, Brookhaven National Laboratory, Upton, NY 11973 (United States); Luzum, Matthew [Universidade de Santiago de Compostela, E-15706 Santiago de Compostela, Galicia-Spain (Spain); Universidade de São Paulo, Rua do Matão Travessa R, no. 187, 05508-090, Cidade Universitária, São Paulo (Brazil); Schenke, Björn [Physics Department, Brookhaven National Laboratory, Upton, NY 11973 (United States); Jeon, Sangyong; Gale, Charles [Department of Physics, McGill University, 3600 University Street, Montreal, Quebec, H3A2T8 (Canada)

    2016-12-15

    Thermal and prompt photon production in heavy ion collisions is evaluated and compared with measurements from both RHIC and the LHC. An event-by-event hydrodynamical model of heavy ion collisions that includes shear and bulk viscosities is used, along with up-to-date photon emission rates. Larger tension with measurements is observed at RHIC than at the LHC. The center-of-mass energy and centrality dependence of thermal and prompt photons is investigated.

  2. Using Video Modeling with Substitutable Loops to Teach Varied Play to Children with Autism

    Science.gov (United States)

    Dupere, Sally; MacDonald, Rebecca P. F.; Ahearn, William H.

    2013-01-01

    Children with autism often engage in repetitive play with little variation in the actions performed or items used. This study examined the use of video modeling with scripted substitutable loops on children's pretend play with trained and untrained characters. Three young children with autism were shown a video model of scripted toy play that…

  3. A Simple FSPN Model of P2P Live Video Streaming System

    OpenAIRE

    Kotevski, Zoran; Mitrevski, Pece

    2011-01-01

    Peer to Peer (P2P) live streaming is a relatively new paradigm that aims at streaming live video to a large number of clients at low cost. Many such applications already exist in the market, but, prior to creating such a system, it is necessary to analyze its performance via a representative model that can provide good insight into the system’s behavior. Modeling and performance analysis of P2P live video streaming systems is a challenging task which requires addressing many properties and issues of P2P s...

  4. Periodic email prompts to re-use an internet-delivered computer-tailored lifestyle program: influence of prompt content and timing.

    Science.gov (United States)

    Schneider, Francine; de Vries, Hein; Candel, Math; van de Kar, Angelique; van Osch, Liesbeth

    2013-01-31

    Adherence to Internet-delivered lifestyle interventions using multiple tailoring is suboptimal. Therefore, it is essential to invest in proactive strategies, such as periodic email prompts, to boost re-use of the intervention. This study investigated the influence of content and timing of a single email prompt on re-use of an Internet-delivered computer-tailored (CT) lifestyle program. A sample of municipality employees was invited to participate in the program. All participants who decided to use the program received an email prompting them to revisit the program. A 2×3 (content × timing) design was used to test manipulations of prompt content and timing. Depending on the study group participants were randomly assigned to, they received either a prompt containing standard content (an invitation to revisit the program), or standard content plus a preview of new content placed on the program website. Participants received this prompt after 2, 4, or 6 weeks. In addition to these 6 experimental conditions, a control condition was included consisting of participants who did not receive an additional email prompt. Clicks on the uniform resource locator (URL) provided in the prompt and log-ins to the CT program were objectively monitored. Logistic regression analyses were conducted to determine whether prompt content and/or prompt timing predicted clicking on the URL and logging in to the CT program. Of all program users (N=240), 206 participants received a subsequent email prompting them to revisit the program. A total of 53 participants (25.7%) who received a prompt reacted to this prompt by clicking on the URL, and 25 participants (12.1%) actually logged in to the program. 
There was a main effect of prompt timing; participants receiving an email prompt 2 weeks after their first visit clicked on the URL significantly more often than participants who received the prompt after 4 weeks (odds ratio [OR] 3.069, 95% CI 1.392-6.765, P=.005) or after 6 weeks (OR 4
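
Odds ratios of the kind reported above can be computed from a 2×2 table of click counts. The sketch below uses the standard Wald confidence interval and purely hypothetical counts, not the study's raw data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Wald odds ratio with 95% CI for a 2x2 table:
    a/b = clicks/non-clicks in group 1, c/d = clicks/non-clicks in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, not the study's data: group 1 clicked 30/70, group 2 clicked 10/51
print(odds_ratio_ci(30, 40, 10, 41))
```

An OR above 1 with a CI excluding 1 corresponds to the significant timing effect described in the abstract.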

  5. Using video self- and peer modeling to facilitate reading fluency in children with learning disabilities.

    Science.gov (United States)

    Decker, Martha M; Buggey, Tom

    2014-01-01

    The authors compared the effects of video self-modeling and video peer modeling on the oral reading fluency of elementary students with learning disabilities. A control group was also included to gauge general improvement due to reading instruction and familiarity with the researchers. Both experimental groups improved their reading fluency, and two students in the self-modeling group made substantial and immediate gains beyond any of the other students. The discussion focuses on the importance that positive imagery can have on student performance and the possible applications of both forms of video modeling with students who have had negative experiences in reading.

  6. Prompt loss of beam ions in KSTAR plasmas

    Directory of Open Access Journals (Sweden)

    Jun Young Kim

    2016-10-01

    Full Text Available For a toroidal plasma facility to realize fusion energy, research on the transport of fast ions is important not only because of its close relation to the heating and current drive efficiencies, but also to determine the heat load on the plasma-facing components. We present a theoretical analysis and orbit simulation of the origin of fast ions lost during neutral beam injection (NBI) heating in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. We adopted a two-dimensional phase diagram of the toroidal momentum and magnetic moment and describe the momenta detectable at the fast-ion loss detector (FILD) position as a quadratic line. This simple method was used to model birth ions deposited by NBI, drawn as points in the momentum phase space. A Lorentz orbit code was used to calculate the fast-ion orbits and present the prompt loss characteristics of the KSTAR NBI. The scrape-off layer deposition of fast ions produces a significant prompt loss, and the model and experimental results closely agreed on the pitch-angle range of the NBI prompt loss. Our approach can provide wall load information from the fast-ion loss.

  7. Using video modeling for generalizing toy play in children with autism.

    Science.gov (United States)

    Paterson, Claire R; Arco, Lucius

    2007-09-01

    The present study examined effects of video modeling on generalized independent toy play of two boys with autism. Appropriate and repetitive verbal and motor play were measured, and intermeasure relationships were examined. Two single-participant experiments with multiple baselines and withdrawals across toy play were used. One boy was presented with three physically unrelated toys, whereas the other was presented with three related toys. Video modeling produced increases in appropriate play and decreases in repetitive play, but generalized play was observed only with the related toys. Generalization may have resulted from variables including the toys' common physical characteristics and natural reinforcing properties and the increased correspondence between verbal and motor play.

  8. A Meta-Analysis of Video Modeling Interventions for Children and Adolescents with Emotional/Behavioral Disorders

    Science.gov (United States)

    Clinton, Elias

    2016-01-01

    Video modeling is a non-punitive, evidence-based intervention that has been proven effective for teaching functional life skills and social skills to individuals with autism and developmental disabilities. Compared to the literature base on using video modeling for students with autism and developmental disabilities, fewer studies have examined…

  9. Evaluation of Online Video Usage and Learning Satisfaction: An Extension of the Technology Acceptance Model

    Science.gov (United States)

    Nagy, Judit T.

    2018-01-01

    The aim of the study was to examine the determining factors of students' video usage and their learning satisfaction relating to the supplementary application of educational videos, accessible in a Moodle environment in a Business Mathematics Course. The research model is based on the extension of "Technology Acceptance Model" (TAM), in…

  10. The effect of longitudinal conductance variations on the ionospheric prompt penetration electric fields

    Science.gov (United States)

    Sazykin, S.; Wolf, R.; Spiro, R.; Fejer, B.

    Ionospheric prompt penetration electric fields of magnetospheric origin, together with the atmospheric disturbance dynamo, represent the most important parameters controlling the storm-time dynamics of the low and mid-latitude ionosphere. These prompt penetration fields result from the disruption of region-2 field-aligned shielding currents during geomagnetically disturbed conditions. Penetration electric fields control, to a large extent, the generation and development of equatorial spread-F plasma instabilities as well as other dynamic space weather phenomena in the ionosphere equatorward of the auroral zone. While modeling studies typically agree with average patterns of prompt penetration fields, experimental results suggest that longitudinal variations of the ionospheric conductivities play a non-negligible role in controlling spread-F phenomena, an effect that has not previously been modeled. We present first results of modeling prompt penetration electric fields using a version of the Rice Convection Model (RCM) that allows for longitudinal variations in the ionospheric conductance tensor. The RCM is a first-principles numerical ionosphere-magnetosphere coupling model that solves for the electric fields, field-aligned currents, and particle distributions in the ionosphere and inner/middle magnetosphere. We compare these new theoretical results with electric field observations.

  11. Automatic video segmentation employing object/camera modeling techniques

    NARCIS (Netherlands)

    Farin, D.S.

    2005-01-01

    Practically established video compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not

  12. Prompt-period measurement of the Annular Core Research Reactor prompt neutron generation time

    International Nuclear Information System (INIS)

    Coats, R.L.; Talley, D.G.; Trowbridge, F.R.

    1994-07-01

    The prompt neutron generation time for the Annular Core Research Reactor was experimentally determined using a prompt-period technique. The resultant value of 25.5 μs agreed well with the analytically determined value of 24 μs. The three different methods of reactivity insertion determination yielded ±5% agreement in the experimental values of the prompt neutron generation time. Discrepancies observed in reactivity insertion values determined by the three methods used (transient rod position, relative delayed critical control rod positions, and relative transient rod and control rod positions) were investigated to a limited extent. Rod-shadowing and low power fuel/coolant heat-up were addressed as possible causes of the discrepancies
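
The prompt-period technique rests on the point-kinetics result that, above prompt critical, the asymptotic inverse period is α = (ρ − β)/Λ, so Λ can be recovered from a measured period and a known reactivity insertion. A minimal sketch with hypothetical values (the report's actual ρ and β are not given here):

```python
def generation_time(rho_dollars, beta, inv_period):
    """Point kinetics above prompt critical: asymptotic inverse period
    alpha = (rho - beta) / Lambda, so Lambda = beta * (rho$ - 1) / alpha,
    with reactivity expressed in dollars (rho$ = rho / beta)."""
    return beta * (rho_dollars - 1.0) / inv_period

# Hypothetical values, not the report's data: a $1.20 insertion,
# beta = 0.0073, measured inverse period 57.25 s^-1
print(generation_time(1.20, 0.0073, 57.25) * 1e6)  # ~25.5 (microseconds)
```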

  13. Multi-Model Estimation Based Moving Object Detection for Aerial Video

    Directory of Open Access Journals (Sweden)

    Yanning Zhang

    2015-04-01

    Full Text Available With the rapid development of UAV (Unmanned Aerial Vehicle) technology, moving-target detection in aerial video has become a popular research topic in computer vision. Most existing methods follow the registration-detection framework and can only deal with simple background scenes; they tend to fail in complex multi-background scenarios containing viaducts, buildings, and trees. In this paper, we move beyond the single-background constraint and perceive complex scenes accurately by automatically estimating multiple background models. First, we segment the scene into several color blocks and estimate the dense optical flow. Then, we calculate an affine transformation model for each large-area block and merge consistent models. Finally, for every pixel of the remaining small-area blocks, we calculate its degree of membership in each of the multiple background models. Moving objects are segmented by an energy-optimization method solved via graph cuts. Extensive experimental results on public aerial videos show that, owing to multi-background model estimation and the energy-minimization analysis of each pixel's relationship to the models, our method can effectively remove buildings, trees, and other false alarms and detect moving objects correctly.
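
The per-block affine model estimation step can be sketched as an ordinary least-squares fit over matched points (e.g. the endpoints of the dense optical flow). The code below is a generic illustration of that fit, not the authors' implementation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst.
    src, dst: (N, 2) arrays of matched points (e.g. from optical flow).
    Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # (N, 3) homogeneous coords
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solves X @ A = dst
    return A.T                                   # (2, 3)

# Synthetic check: points moved by a known affine map are recovered
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (50, 2))
true_A = np.array([[1.02, 0.01, 3.0], [-0.01, 0.98, -2.0]])
dst = np.hstack([src, np.ones((50, 1))]) @ true_A.T
est = fit_affine(src, dst)
print(np.allclose(est, true_A))  # True
```

In the paper's pipeline, blocks whose fitted models agree would then be merged into a single background model.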

  14. Designing Effective Video-Based Modeling Examples Using Gaze and Gesture Cues

    NARCIS (Netherlands)

    Ouwehand, Kim; van Gog, Tamara|info:eu-repo/dai/nl/294304975; Paas, Fred

    2015-01-01

    Research suggests that learners will likely spend a substantial amount of time looking at the model's face when it is visible in a video-based modeling example. Consequently, in this study we hypothesized that learners might not attend timely to the task areas the model is referring to, unless their

  15. Virtually numbed: immersive video gaming alters real-life experience.

    Science.gov (United States)

    Weger, Ulrich W; Loughnan, Stephen

    2014-04-01

    As actors in a highly mechanized environment, we are citizens of a world populated not only by fellow humans, but also by virtual characters (avatars). Does immersive video gaming, during which the player takes on the mantle of an avatar, prompt people to adopt the coldness and rigidity associated with robotic behavior and desensitize them to real-life experience? In one study, we correlated participants' reported video-gaming behavior with their emotional rigidity (as indicated by the number of paperclips that they removed from ice-cold water). In a second experiment, we manipulated immersive and nonimmersive gaming behavior and then likewise measured the extent of the participants' emotional rigidity. Both studies yielded reliable impacts, and thus suggest that immersion into a robotic viewpoint desensitizes people to real-life experiences in oneself and others.

  16. High Definition Video Streaming Using H.264 Video Compression

    OpenAIRE

    Bechqito, Yassine

    2009-01-01

    This thesis presents high definition video streaming using H.264 codec implementation. The experiment carried out in this study was done for an offline streaming video but a model for live high definition streaming is introduced, as well. Prior to the actual experiment, this study describes digital media streaming. Also, the different technologies involved in video streaming are covered. These include streaming architecture and a brief overview on H.264 codec as well as high definition t...

  17. Handbook of video databases design and applications

    CERN Document Server

    Furht, Borko

    2003-01-01

    INTRODUCTION: Introduction to Video Databases (Oge Marques and Borko Furht). VIDEO MODELING AND REPRESENTATION: Modeling Video Using Input/Output Markov Models with Application to Multi-Modal Event Detection (Ashutosh Garg, Milind R. Naphade, and Thomas S. Huang); Statistical Models of Video Structure and Semantics (Nuno Vasconcelos); Flavor: A Language for Media Representation (Alexandros Eleftheriadis and Danny Hong); Integrating Domain Knowledge and Visual Evidence to Support Highlight Detection in Sports Videos (Juergen Assfalg, Marco Bertini, Carlo Colombo, and Alberto Del Bimbo); A Generic Event Model and Sports Vid

  18. Assessing Efficiency of Prompts Based on Learner Characteristics

    Directory of Open Access Journals (Sweden)

    Joy Backhaus

    2017-02-01

    Full Text Available Personalized prompting research has shown the significant learning benefit of prompting. The current paper outlines and examines a personalized prompting approach aimed at eliminating performance differences on the basis of a number of learner characteristics (capturing learning strategies and traits). The learner characteristics of interest were the need for cognition, work effort, computer self-efficacy, the use of surface learning, and the learner’s confidence in their learning. The approach was tested in two e-modules, using similar assessment forms (experimental n = 413; control group n = 243). Several prompts corresponding to the learner characteristics were implemented, including an explanation prompt, a motivation prompt, a strategy prompt, and an assessment prompt. All learner characteristics were significant correlates of at least one of the outcome measures (test performance, errors, and omissions). However, only the assessment prompt increased test performance. On this basis, and drawing upon the testing effect, this prompt may be a particularly promising option for increasing performance in e-learning and similar personalized systems.

  19. Real-time video quality monitoring

    Science.gov (United States)

    Liu, Tao; Narvekar, Niranjan; Wang, Beibei; Ding, Ran; Zou, Dekun; Cash, Glenn; Bhagavathy, Sitaram; Bloom, Jeffrey

    2011-12-01

    The ITU-T Recommendation G.1070 is a standardized opinion model for video telephony applications that uses video bitrate, frame rate, and packet-loss rate to measure video quality. However, this model was originally designed as an offline quality-planning tool. It cannot be directly used for quality monitoring, since the above three input parameters are not readily available within a network or at the decoder, and there is considerable room for improving the metric's performance. In this article, we present a real-time video quality monitoring solution based on this Recommendation. We first propose a scheme to efficiently estimate the three parameters from video bitstreams, so that the model can be used as a real-time video quality monitoring tool. Furthermore, an enhanced algorithm based on the G.1070 model that provides more accurate quality prediction is proposed. Finally, to use this metric in real-world applications, we present an emerging application of real-time quality measurement to the management of transmitted videos, especially those delivered to mobile devices.
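
The first step, estimating the three G.1070 inputs from the bitstream, can be sketched generically. The packet/sequence-number bookkeeping below is an illustration under simple assumptions (per-packet sizes and sequence numbers are observable), not the authors' scheme; the G.1070 quality mapping itself is omitted.

```python
def stream_parameters(packets, frame_count, duration_s):
    """Estimate the three G.1070 inputs from bitstream observations.
    packets: (sequence_number, size_in_bytes) tuples in arrival order."""
    bits = sum(size * 8 for _, size in packets)
    seqs = [s for s, _ in packets]
    expected = max(seqs) - min(seqs) + 1          # packets that should have arrived
    return {
        "bitrate_kbps": bits / duration_s / 1000.0,
        "frame_rate": frame_count / duration_s,
        "packet_loss_rate": 1.0 - len(set(seqs)) / expected,
    }

# One second of a hypothetical stream: sequence number 5 was lost
pkts = [(s, 1250) for s in [0, 1, 2, 3, 4, 6, 7, 8, 9]]
print(stream_parameters(pkts, frame_count=30, duration_s=1.0))
```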

  20. Impulsivity, self-regulation, and pathological video gaming among youth: testing a mediation model.

    Science.gov (United States)

    Liau, Albert K; Neo, Eng Chuan; Gentile, Douglas A; Choo, Hyekyung; Sim, Timothy; Li, Dongdong; Khoo, Angeline

    2015-03-01

    Given the potential negative mental health consequences of pathological video gaming, understanding its etiology may lead to useful treatment developments. The purpose of the study was to examine the influence of impulsive and regulatory processes on pathological video gaming. Study 1 involved 2154 students from 6 primary and 4 secondary schools in Singapore. Study 2 involved 191 students from 2 secondary schools. The results of study 1 and study 2 supported the hypothesis that self-regulation is a mediator between impulsivity and pathological video gaming. Specifically, higher levels of impulsivity were related to lower levels of self-regulation, which in turn were related to higher levels of pathological video gaming. The use of impulsivity and self-regulation in predicting pathological video gaming supports the dual-system model, which incorporates both impulsive and reflective systems in the prediction of self-control outcomes. The study highlights the development of self-regulatory resources as a possible avenue for future prevention and treatment research. © 2011 APJPH.
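
The mediation hypothesis can be illustrated with a product-of-coefficients sketch (a simplification of the analyses such studies typically run), on synthetic data with the reported sign pattern built in. Variable names and coefficients here are illustrative, not the study's estimates.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Product-of-coefficients mediation: a = slope of mediator M on X;
    b = slope of Y on M, controlling for X; indirect effect = a * b."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

# Synthetic data with the reported sign pattern built in:
# higher impulsivity -> lower self-regulation -> more pathological gaming
rng = np.random.default_rng(1)
x = rng.normal(size=2000)                      # impulsivity
m = -0.6 * x + 0.1 * rng.normal(size=2000)     # self-regulation
y = -0.8 * m + 0.1 * rng.normal(size=2000)     # pathological gaming
print(round(indirect_effect(x, m, y), 2))
```

Two negative paths multiply to a positive indirect effect, matching the direction the abstract describes.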

  1. Two Variations of Video Modeling Interventions for Teaching Play Skills to Children with Autism

    Science.gov (United States)

    Sancho, Kimberly; Sidener, Tina M.; Reeve, Sharon A.; Sidener, David W.

    2010-01-01

    The current study employed an adapted alternating treatments design with reversal and multiple probe across participants components to compare the effects of traditional video priming and simultaneous video modeling on the acquisition of play skills in two children diagnosed with autism. Generalization was programmed across play sets, instructors,…

  2. Sensitivity of prompt searches to long-lived particles

    CERN Document Server

    Montejo Berlingen, Javier; The ATLAS collaboration

    2018-01-01

    The sensitivity of "prompt" searches to long-lived particles is evaluated, in the context of SUSY models with variable RPV couplings. The experimental aspects and the information required for the correct treatment in public recast tools are discussed in detail.

  3. Gamma-ray Burst Prompt Correlations: Selection and Instrumental Effects

    Science.gov (United States)

    Dainotti, M. G.; Amati, L.

    2018-05-01

    The prompt emission mechanism of gamma-ray bursts (GRBs) remains a mystery even after several decades. However, correlations between observable GRB properties, given their huge luminosities/radiated energies and a redshift distribution extending up to at least z ≈ 9, are believed to be promising cosmological tools. They may also help to discriminate among the most plausible theoretical models. The objective nowadays is to make GRBs standard candles, similar to Type Ia supernovae (SNe Ia), through well-established and robust correlations. However, unlike SNe Ia, GRBs span several orders of magnitude in their energetics, hence they cannot yet be considered standard candles. Additionally, being observed at very large distances, their measured properties are affected by selection biases, the so-called Malmquist bias or Eddington effect. We describe the state of the art on how GRB prompt correlations are corrected for these selection biases so that they can be employed as redshift estimators and cosmological tools. We stress that only after an appropriate evaluation of and correction for these effects can GRB correlations be used to discriminate among theoretical models of the prompt emission, to estimate cosmological parameters, and to serve as distance indicators via redshift estimation.

  4. Effects of Violent-Video-Game Exposure on Aggressive Behavior, Aggressive-Thought Accessibility, and Aggressive Affect Among Adults With and Without Autism Spectrum Disorder.

    Science.gov (United States)

    Engelhardt, Christopher R; Mazurek, Micah O; Hilgard, Joseph; Rouder, Jeffrey N; Bartholow, Bruce D

    2015-08-01

    Recent mass shootings have prompted the idea among some members of the public that exposure to violent video games can have a pronounced effect on individuals with autism spectrum disorder (ASD). Empirical evidence for or against this claim has been missing, however. To address this issue, we assigned adults with and without ASD to play a violent or nonviolent version of a customized first-person shooter video game. After they played the game, we assessed three aggression-related outcome variables (aggressive behavior, aggressive-thought accessibility, and aggressive affect). Results showed strong evidence that adults with ASD, compared with typically developing adults, are not differentially affected by acute exposure to violent video games. Moreover, model comparisons provided modest evidence against any effect of violent game content whatsoever. Findings from this experiment suggest that societal concerns that exposure to violent games may have a unique effect on adults with autism are not supported by evidence. © The Author(s) 2015.

  5. Assessment of Geant4 Prompt-Gamma Emission Yields in the Context of Proton Therapy Monitoring

    Science.gov (United States)

    Pinto, Marco; Dauvergne, Denis; Freud, Nicolas; Krimmer, Jochen; Létang, Jean M.; Testa, Etienne

    2016-01-01

    Monte Carlo tools have long been used to assist the research and development of solutions for proton therapy monitoring. The present work focuses on prompt-gamma emission yields by comparing experimental data with the outcomes of the current version of Geant4 using all applicable proton inelastic models. For the case in study, and using the binary cascade model, it was found that Geant4 overestimates the prompt-gamma emission yields by 40.2 ± 0.3%, even though it accurately predicts the length of the experimental prompt-gamma profile. In addition, the default implementations of all proton inelastic models overestimate the number of prompt gammas emitted. Finally, a set of built-in options and physically sound Geant4 source-code changes were tested in order to try to reduce the observed discrepancy. A satisfactory agreement was found when using the QMD model with a wave packet width equal to 1.3 fm². PMID:26858937

  6. Gas leak detection in infrared video with background modeling

    Science.gov (United States)

    Zeng, Xiaoxia; Huang, Likun

    2018-03-01

    Background modeling plays an important role in gas detection based on infrared video. The ViBe algorithm has been a widely used background modeling algorithm in recent years. However, its processing speed sometimes cannot meet the requirements of real-time detection applications. Therefore, based on the traditional ViBe algorithm, we propose a fast foreground model and refine the results by combining a connected-component algorithm and the nine-spaces algorithm in the subsequent processing steps. Experiments show the effectiveness of the proposed method.
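
For reference, a ViBe-style background test keeps a small bank of past samples per pixel and classifies a pixel as background when enough stored samples lie within a radius of its current value, with a randomized in-place update. The sketch below is a minimal illustration of that core idea, not the paper's optimized variant.

```python
import numpy as np

def vibe_classify(frame, samples, radius=20, min_matches=2):
    """A pixel is background if at least `min_matches` of its stored
    samples lie within `radius` of the current value.
    frame: (H, W); samples: (N, H, W) sample bank."""
    close = np.abs(samples.astype(int) - frame.astype(int)) < radius
    return close.sum(axis=0) >= min_matches      # True = background

def vibe_update(frame, samples, bg_mask, rng):
    """Simplified random update: each background pixel overwrites one
    randomly chosen stored sample with its current value."""
    idx = rng.integers(0, samples.shape[0], size=frame.shape)
    ys, xs = np.nonzero(bg_mask)
    samples[idx[ys, xs], ys, xs] = frame[ys, xs]

rng = np.random.default_rng(0)
samples = np.full((8, 4, 4), 100, dtype=np.uint8)   # background history ~100
frame = np.full((4, 4), 200, dtype=np.uint8)        # a bright intruding plume
print(vibe_classify(frame, samples).any())  # False: every pixel is foreground
```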

  7. The effects of video self-modeling on the decoding skills of children at risk for reading disabilities

    OpenAIRE

    Ayala, SM; O'Connor, R

    2013-01-01

    Ten first grade students who had responded poorly to a Tier 2 reading intervention in a response to intervention (RTI) model received an intervention of video self-modeling to improve decoding skills and sight word recognition. Students were video recorded blending and segmenting decodable words and reading sight words. Videos were edited and viewed a minimum of four times per week. Data were collected twice per week using curriculum-based measures. A single subject multiple baseline across p...

  8. The effects of different types of video modelling on undergraduate students’ motivation and learning in an academic writing course

    Directory of Open Access Journals (Sweden)

    Mariet Raedts

    2017-02-01

    Full Text Available This study extends previous research on observational learning in writing. It was our objective to enhance students’ motivation and learning in an academic writing course on research synthesis writing. Participants were 162 first-year college students who had no experience with the writing task. Based on Bandura’s Social Cognitive Theory we developed two videos. In the first video a manager (prestige model elaborated on how synthesizing information is important in professional life. In the second video a peer model demonstrated a five-step writing strategy for writing up a research synthesis. We compared two versions of this video. In the explicit-strategy-instruction-video we added visual cues to channel learners’ attention to critical features of the demonstrated task using an acronym in which each letter represented a step of the model’s strategy. In the implicit-strategy-instruction-video these cues were absent. The effects of the videos were tested using a 2x2 factorial between-subjects design with video of the prestige model (yes/no and type of instructional video (implicit versus explicit strategy instruction as factors. Four post-test measures were obtained: task value, self-efficacy beliefs, task knowledge and writing performances. Path analyses revealed that the prestige model did not affect students’ task value. Peer-mediated explicit strategy instruction had no effect on self-efficacy, but a strong effect on task knowledge. Task knowledge – in turn – was found to be predictive of writing performance.

  9. Perceptual tools for quality-aware video networks

    Science.gov (United States)

    Bovik, A. C.

    2014-01-01

    Monitoring and controlling the quality of the viewing experience of videos transmitted over increasingly congested networks (especially wireless networks) is a pressing problem owing to rapid advances in video-centric mobile communication and display devices that are straining the capacity of the network infrastructure. New developments in automatic perceptual video quality models offer tools that have the potential to be used to perceptually optimize wireless video, leading to more efficient video data delivery and better received quality. In this talk I will review key perceptual principles that are, or could be used to create effective video quality prediction models, and leading quality prediction models that utilize these principles. The goal is to be able to monitor and perceptually optimize video networks by making them "quality-aware."

  10. Online Media Business Models: Lessons from the Video Game Sector

    OpenAIRE

    Komorowski, Marlen; Delaere, Simon

    2016-01-01

    Today’s media industry is characterized by disruptive change, and business models have been acknowledged as a driving force for success. Current business-model research captures only static descriptions, while in reality media managers struggle with the dynamics of the industry. This article aims to close this gap by investigating a new paradigm of online media business models. Based on three video game case studies of the massively multiplayer online role-playing game genre, thi...

  11. Noise Residual Learning for Noise Modeling in Distributed Video Coding

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Forchhammer, Søren

    2012-01-01

    Distributed video coding (DVC) is a coding paradigm which exploits the source statistics at the decoder side to reduce the complexity at the encoder. The noise model is one of the inherently difficult challenges in DVC. This paper considers Transform Domain Wyner-Ziv (TDWZ) coding and proposes...

  12. Prompt and Non-prompt $J/\\psi$ Elliptic Flow in Pb+Pb Collisions at 5.02 TeV with the ATLAS Detector

    CERN Document Server

    Lopez, Jorge; The ATLAS collaboration

    2018-01-01

    The elliptic flow of prompt and non-prompt $J/\psi$ was measured in Pb+Pb collisions at $\sqrt{s_\text{NN}}=5.02$ TeV with an integrated luminosity of $0.42~\mathrm{nb}^{-1}$ with ATLAS at the LHC. The prompt and non-prompt signals are separated using a two-dimensional simultaneous fit of the invariant mass and pseudo-proper time in the dimuon decay channel. The measurement is performed for $J/\psi$ transverse momenta above $9$ GeV. Both prompt and non-prompt $J/\psi$ mesons have non-zero elliptic flow. Prompt $J/\psi$ $v_2$ decreases as a function of $p_\mathrm{T}$, while non-prompt $J/\psi$ $v_2$ is flat over the studied kinematic region. No dependence on rapidity or centrality is observed.
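
The elliptic flow coefficient $v_2$ is the second Fourier harmonic of the azimuthal angle distribution relative to the event plane. A minimal event-plane-style estimator (illustrative only; the actual analysis involves signal extraction and resolution corrections):

```python
import numpy as np

def v2(phi, psi=0.0):
    """Elliptic flow: second Fourier coefficient of the azimuthal
    distribution of angles phi relative to the event-plane angle psi."""
    return float(np.mean(np.cos(2.0 * (np.asarray(phi) - psi))))

phi_iso = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]   # isotropic four-point set
phi_inplane = [0.0, np.pi]                          # fully aligned with the plane
print(v2(phi_iso), v2(phi_inplane))
```

An isotropic sample gives $v_2 \approx 0$, a fully in-plane sample gives $v_2 = 1$.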

  13. PUCK: An Automated Prompting System for Smart Environments: Towards achieving automated prompting; Challenges involved.

    Science.gov (United States)

    Das, Barnan; Cook, Diane J; Schmitter-Edgecombe, Maureen; Seelye, Adriana M

    2012-10-01

    The growth in popularity of smart environments has been quite steep in the last decade, and so has the demand for smart health assistance systems. A smart home-based prompting system can enhance these technologies by delivering in-home interventions to users: timely reminders or brief instructions describing the way a task should be done for successful completion. This technology is in high demand given the desire of people with physical or cognitive limitations to live independently in their homes. In this paper, with the introduction of the "PUCK" prompting system, we take an approach to automating prompting-based interventions without any predefined rule sets or user feedback. Unlike other approaches, we use simple off-the-shelf sensors and learn the timing for prompts from real data collected with volunteer participants in our smart home test bed. The data mining approaches taken to solve this problem come with the challenge of an imbalanced class distribution that occurs naturally in the data. We propose a variant of an existing sampling technique, SMOTE, to deal with the class imbalance problem. To validate the approach, a comparative analysis with Cost Sensitive Learning is performed.
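
Classic SMOTE, of which the paper proposes a variant, synthesizes minority-class samples by interpolating between each sample and one of its nearest minority neighbours. A minimal sketch (data and parameters are illustrative, not the paper's variant):

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE: synthesize n_new samples by interpolating between
    a random minority sample and one of its k nearest minority neighbours.
    X_min: (M, d) array of minority-class samples."""
    rng = rng or np.random.default_rng(0)
    d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                 # exclude self-distances
    nn = np.argsort(d2, axis=1)[:, :k]           # k nearest neighbours
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(k)]
        gap = rng.random()                       # interpolation fraction
        new.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(new)

rng = np.random.default_rng(2)
X_min = rng.uniform(0.0, 1.0, (12, 2))   # a small minority class
synth = smote(X_min, n_new=10, k=3, rng=rng)
print(synth.shape)  # (10, 2)
```

Every synthetic point lies on a segment between two real minority samples, which is what makes the technique safer than naive duplication.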

  14. Accuracy of complete-arch model using an intraoral video scanner: An in vitro study.

    Science.gov (United States)

    Jeong, Il-Do; Lee, Jae-Jun; Jeon, Jin-Hun; Kim, Ji-Hwan; Kim, Hae-Young; Kim, Woong-Chul

    2016-06-01

    Information on the accuracy of intraoral video scanners for long-span areas is limited. The purpose of this in vitro study was to evaluate and compare the trueness and precision of an intraoral video scanner, an intraoral still image scanner, and a blue-light scanner for the production of digital impressions. Reference scan data were obtained by scanning a complete-arch model. An identical model was scanned 8 times using an intraoral video scanner (CEREC Omnicam; Sirona) and an intraoral still image scanner (CEREC Bluecam; Sirona), and stone casts made from conventional impressions of the same model were scanned 8 times with a blue-light scanner as a control (Identica Blue; Medit). Accuracy consists of trueness (the extent to which the scan data differ from the reference scan) and precision (the similarity of the data from multiple scans). To evaluate precision, 8 scans were superimposed using 3-dimensional analysis software; the reference scan data were then superimposed to determine the trueness. Differences were analyzed using 1-way ANOVA and post hoc Tukey HSD tests (α=.05). Trueness in the video scanner group was not significantly different from that in the control group. However, the video scanner group showed significantly lower values than those of the still image scanner group for all variables (P<.05), except in tolerance range. The root mean square, standard deviations, and mean negative precision values for the video scanner group were significantly higher than those for the other groups (P<.05). Digital impressions obtained by the intraoral video scanner showed better accuracy for long-span areas than those captured by the still image scanner. However, the video scanner was less accurate than the laboratory scanner. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
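
    Trueness and precision in such studies are typically reported as root-mean-square (RMS) deviations between superimposed scan meshes. A toy sketch of the RMS step only (real analyses first perform best-fit 3-D registration; the point correspondences and millimeter values here are assumptions):

```python
import math

def rms_deviation(scan, reference):
    """RMS of point-to-point distances between two corresponding point sets."""
    assert len(scan) == len(reference)
    sq = [math.dist(p, q) ** 2 for p, q in zip(scan, reference)]
    return math.sqrt(sum(sq) / len(sq))

# three corresponding points (mm), each displaced by 0.1 mm along z
reference = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
scan = [(0.0, 0.0, 0.1), (1.0, 0.0, -0.1), (0.0, 1.0, 0.1)]
rms = rms_deviation(scan, reference)  # 0.1 mm
```

    Trueness compares each test scan against the reference scan this way; precision applies the same measure pairwise among repeated scans of the same arch.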

  15. Video self-modeling as a post-treatment fluency recovery strategy for adults.

    Science.gov (United States)

    Harasym, Jessica; Langevin, Marilyn; Kully, Deborah

    2015-06-01

    This multiple-baseline across subjects study investigated the effectiveness of video self-modeling (VSM) in reducing stuttering and bringing about improvements in associated self-report measures. Participants' viewing practices and perceptions of the utility of VSM also were explored. Three adult males who had previously completed speech restructuring treatment viewed VSM recordings twice per week for 6 weeks. Weekly speech data, treatment viewing logs, and pre- and post-treatment self-report measures were obtained. An exit interview also was conducted. Two participants showed a decreasing trend in stuttering frequency. All participants appeared to engage in fewer avoidance behaviors and had lower expectations of stuttering. All participants perceived that, in different ways, the VSM treatment had benefited them, and all participants had unique viewing practices. Given the increasing availability and ease of use of portable audio-visual technology, VSM appears to offer an economical and clinically useful tool for clients who are motivated to use the technology to recover fluency. Readers will be able to describe: (a) the tenets of video self-modeling; (b) the main components of video self-modeling as a fluency recovery treatment as used in this study; and (c) speech and self-report outcomes. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. When Babies Watch Television: Attention-Getting, Attention-Holding, and the Implications for Learning from Video Material

    Science.gov (United States)

    Courage, Mary L.; Setliff, Alissa E.

    2010-01-01

    The recent increase in the availability of infant-directed video material (e.g., "Baby Einstein") and the corresponding increase in the amount of time that infants and toddlers spend viewing them have prompted concern among parents and professionals that these media might impede aspects of cognitive and social development. In contrast, supporters…

  17. Visual Prompts or Volunteer Models: An Experiment in Recycling

    Directory of Open Access Journals (Sweden)

    Zi Yin Lin

    2016-05-01

    Successful long-term programs for urban residential food waste sorting are very rare, despite the established urgent need for them in cities for waste reduction, pollution reduction and circular resource economy reasons. This study meets recent calls to bridge policy makers and academics, and calls for more thorough analysis of operational work in terms of behavioral determinants, to move the fields on. It takes a key operational element of a recently reported successful food waste sorting program—manning of the new bins by volunteers—and considers the behavioral determinants involved in order to design a more scalable and cheaper alternative—the use of brightly colored covers with flower designs on three sides of the bin. The two interventions were tested in a medium-scale, real-life experimental set-up that showed that they had statistically similar results: high effective capture rates of 32%–34%, with low contamination rates. The success, low cost and simple implementation of the latter suggests it should be considered for large-scale use. Candidate behavioral determinants are prompts, emotion and knowledge for the yellow bin intervention, and for the volunteer intervention they are additionally social influence, modeling, role clarification, and moderators of messenger type and interpersonal or tailored messaging.

  18. Video Comparator

    International Nuclear Information System (INIS)

    Rose, R.P.

    1978-01-01

    The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system is manufactured by ITP of Chatsworth, CA and is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckeler micrometer equipped with a digital readout; a second feature is then aligned with the reference line and the distance moved obtained from the digital display.

  19. Prompt neutron emission

    Energy Technology Data Exchange (ETDEWEB)

    Sher, R [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires

    1959-07-01

    It is shown that Ramanna and Rao's tentative conclusion that prompt fission neutrons are emitted (in the fragment system) preferentially in the direction of fragment motion is not necessitated by their angular distribution measurements, which are well explained by the usual assumptions of isotropic emission with a Maxwell (or Maxwell-like) emission spectrum. The energy distribution (Watt spectrum) and the angular distribution, both including the effects of anisotropic emission, are given. (author)
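
    The Watt spectrum referred to above has the form f(E) ∝ exp(−E/a)·sinh(√(bE)). A small sketch that samples it by rejection (the parameter values a = 0.988 MeV and b = 2.249 MeV⁻¹ are the commonly tabulated ones for thermal-neutron fission of 235U and are assumed here for illustration):

```python
import math
import random

A, B = 0.988, 2.249  # Watt parameters (MeV, 1/MeV); assumed 235U thermal values

def watt(E):
    """Unnormalized Watt fission spectrum exp(-E/a) * sinh(sqrt(b*E))."""
    return math.exp(-E / A) * math.sinh(math.sqrt(B * E))

def sample_watt(n, e_max=12.0, seed=0):
    """Rejection-sample n prompt-neutron energies (MeV) from the Watt shape."""
    rng = random.Random(seed)
    # envelope height: scan a grid for the maximum, with a small safety margin
    f_max = 1.01 * max(watt(0.01 * i) for i in range(1, int(e_max / 0.01) + 1))
    out = []
    while len(out) < n:
        e = rng.uniform(0.0, e_max)
        if rng.uniform(0.0, f_max) < watt(e):
            out.append(e)
    return out

energies = sample_watt(50000)
mean_E = sum(energies) / len(energies)  # ~2 MeV for these parameters
```

    The analytic mean of the Watt shape is 3a/2 + a²b/4 ≈ 2.03 MeV for these parameters, which the sampled mean reproduces up to the high-energy cutoff.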

  20. A Novel Video Data-Source Authentication Model Based on Digital Watermarking and MAC in Multicast

    Institute of Scientific and Technical Information of China (English)

    ZHAO Anjun; LU Xiangli; GUO Lei

    2006-01-01

    A novel video data authentication model based on digital video watermarking and MAC (message authentication code) in a multicast protocol is proposed in this paper. The digital watermark, which is composed of the MAC of the significant video content, the key, and instant authentication data, is embedded into the insignificant video component by the MLUT (modified look-up table) video watermarking technique. We explain a method that does not require storing each data packet for a period of time, making the receiver not vulnerable to DoS (denial-of-service) attacks, so the video packets can be authenticated instantly without a large buffer at the receivers. TESLA (timed efficient stream loss-tolerant authentication) does not explain how to select a suitable value for d, an important parameter in multicast source authentication, so we give a method to calculate the key disclosure delay (number of intervals). Simulation results show that the proposed algorithms improve the performance of data-source authentication in multicast.
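
    The MAC building block of such a scheme can be illustrated with a standard keyed hash over the perceptually significant content. A minimal sketch (HMAC-SHA-256 is an illustrative stand-in, not the MAC construction specified in the paper, and the key handling is simplified):

```python
import hmac
import hashlib

def tag_packet(key: bytes, significant_content: bytes) -> bytes:
    """Compute a MAC over the perceptually significant video data."""
    return hmac.new(key, significant_content, hashlib.sha256).digest()

def verify_packet(key: bytes, significant_content: bytes, tag: bytes) -> bool:
    """Constant-time check of the received MAC (instant authentication,
    no per-packet buffering needed at the receiver)."""
    return hmac.compare_digest(tag_packet(key, significant_content), tag)

key = b"group-session-key"                          # hypothetical shared key
frame = b"\x00\x01\x02 significant DCT coefficients"  # hypothetical payload
tag = tag_packet(key, frame)
assert verify_packet(key, frame, tag)
assert not verify_packet(key, frame + b"tampered", tag)
```

    In the paper's design the tag is not sent alongside the packet but embedded as a watermark in the insignificant video component, so authentication survives format-preserving processing of the stream.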

  1. Consultants' meeting on prompt fission neutron spectra of major actinides. Summary report

    International Nuclear Information System (INIS)

    Capote Noy, R.; Maslov, V.; Bauge, E.; Ohsawa, T.; Vorobyev, A.; Chadwick, M.B.; Oberstedt, S.

    2009-01-01

    A Consultants' Meeting on 'Prompt Fission Neutron Spectra of Major Actinides' was held at IAEA Headquarters, Vienna, Austria, to discuss the adequacy and quality of the recommended prompt fission neutron spectra to be found in existing nuclear data applications libraries. These prompt fission neutron spectra were judged to be inadequate, and this problem has proved difficult to resolve by means of theoretical modelling. Major adjustments may be required to ensure the validity of such important data. There is a strong requirement for an international effort to explore and resolve these difficulties and recommend prompt fission neutron spectra and uncertainty covariance matrices for the actinides over the neutron energy range from thermal to 20 MeV. Participants also stressed that there would be a strong need for validation of the resulting data against integral critical assembly and dosimetry data. (author)

  2. THE ELECTROMAGNETIC MODEL OF SHORT GRBs, THE NATURE OF PROMPT TAILS, SUPERNOVA-LESS LONG GRBs, AND HIGHLY EFFICIENT EPISODIC ACCRETION

    Energy Technology Data Exchange (ETDEWEB)

    Lyutikov, Maxim [Department of Physics, Purdue University, 525 Northwestern Avenue, West Lafayette, IN 47907-2036 (United States)

    2013-05-01

    Many short gamma-ray bursts (GRBs) show prompt tails lasting up to hundreds of seconds that can be energetically dominant over the initial sub-second spike. In this paper we develop an electromagnetic model of short GRBs that explains the two stages of the energy release, the prompt spike and the prompt tail. The key ingredient of the model is the recent discovery that an isolated black hole can keep its open magnetic flux for times much longer than the collapse time and thus can spin down electromagnetically, driving the relativistic wind. First, the merger is preceded by an electromagnetic precursor wind with total power L_p ≈ (GM_NS)³B_NS²/(c⁵R) ∝ (−t)^(−1/4), reaching 3 × 10⁴⁴ erg s⁻¹ for typical neutron star masses of 1.4 M_☉ and magnetic fields B ≈ 10¹² G. If a fraction of this power is converted into pulsar-like coherent radio emission, this may produce an observable radio burst of a few milliseconds (like the Lorimer burst). At the active stage of the merger, the two neutron stars produce a black hole surrounded by an accretion torus in which the magnetic field is amplified to ~10¹⁵ G. This magnetic field extracts the rotational energy of the black hole and drives an axially collimated electromagnetic wind that may carry of the order of 10⁵⁰ erg, limited by the accretion time of the torus, a few hundred milliseconds. For observers nearly aligned with the orbital normal this is seen as a classical short GRB. After the accretion of the torus, the isolated black hole keeps the open magnetic flux and drives the equatorially (not axially) collimated outflow, which is seen by an observer at intermediate polar angles as a prompt tail. The tail carries more energy than the prompt spike, but its emission is de-boosted for observers along the orbital normal. Observers in the equatorial plane miss the prompt spike.
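
    The precursor power quoted above, L_p ≈ (GM_NS)³B_NS²/(c⁵R), can be sanity-checked in CGS units. A rough order-of-magnitude sketch (the fiducial neutron-star radius R = 10⁶ cm is an assumption):

```python
# Order-of-magnitude check of L_p ~ (G*M_NS)^3 * B_NS^2 / (c^5 * R), CGS units
G = 6.674e-8       # gravitational constant (cm^3 g^-1 s^-2)
c = 2.998e10       # speed of light (cm/s)
M_sun = 1.989e33   # solar mass (g)

M_ns = 1.4 * M_sun  # neutron-star mass (g)
B_ns = 1e12         # surface magnetic field (G)
R = 1e6             # assumed neutron-star radius (cm)

L_p = (G * M_ns) ** 3 * B_ns ** 2 / (c ** 5 * R)  # erg/s
# L_p comes out at a few times 10^44 erg/s, consistent with the quoted value
```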

  3. Improving Video Generation for Multi-functional Applications

    OpenAIRE

    Kratzwald, Bernhard; Huang, Zhiwu; Paudel, Danda Pani; Dinesh, Acharya; Van Gool, Luc

    2017-01-01

    In this paper, we aim to improve the state-of-the-art video generative adversarial networks (GANs) with a view towards multi-functional applications. Our improved video GAN model does not separate foreground from background nor dynamic from static patterns, but learns to generate the entire video clip conjointly. Our model can thus be trained to generate - and learn from - a broad set of videos with no restriction. This is achieved by designing a robust one-stream video generation architectur...

  4. Does a video displaying a stair climbing model increase stair use in a worksite setting?

    Science.gov (United States)

    Van Calster, L; Van Hoecke, A-S; Octaef, A; Boen, F

    2017-08-01

    This study evaluated the effects of improving the visibility of the stairwell and of displaying a video with a stair climbing model on climbing and descending stair use in a worksite setting. Intervention study. Three consecutive one-week intervention phases were implemented: (1) the visibility of the stairs was improved by the attachment of pictograms that indicated the stairwell; (2) a video showing a stair climbing model was sent to the employees by email; and (3) the same video was displayed on a television screen at the point-of-choice (POC) between the stairs and the elevator. The interventions took place in two buildings. The implementation of the interventions varied between these buildings and the sequence was reversed. Improving the visibility of the stairs increased both stair climbing (+6%) and descending stair use (+7%) compared with baseline. Sending the video by email yielded no additional effect on stair use. By contrast, displaying the video at the POC increased stair climbing in both buildings by 12.5% on average. One week after the intervention, the positive effects on stair climbing remained in one of the buildings, but not in the other. These findings suggest that improving the visibility of the stairwell and displaying a stair climbing model on a screen at the POC can result in a short-term increase in both climbing and descending stair use. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  5. Handheld Devices and Video Modeling to Enhance the Learning of Self-Help Skills in Adolescents With Autism Spectrum Disorder.

    Science.gov (United States)

    Campbell, Joseph E; Morgan, Michele; Barnett, Veronica; Spreat, Scott

    2015-04-01

    The viewing of videos is a much-studied intervention to teach self-help, social, and vocational skills. Many of the studies to date looked at video modeling using televisions, computers, and other large screens. This study looked at the use of video modeling on portable handheld devices to teach hand washing to three adolescent students with an autism spectrum disorder. Three students participated in this 4-week study conducted by occupational therapists. Baseline data were obtained for the first student for 1 week, the second for 2 weeks, and the third for 3 weeks; videos were introduced when the participants each finished the baseline phase. Given the cognitive and motor needs of the participants, the occupational therapist set the player so that the participants only had to press the play button to start the video playing. The participants were able to hold the players and view at distances that were most appropriate for their individual needs and preferences. The results suggest that video modeling on a handheld device improves the acquisition of self-help skills.

  6. PROMPT: articulation therapy based on tactile-kinesthetic input

    NARCIS (Netherlands)

    Drs M.F. Raaijmakers; Drs Sj. van der Meulen

    2005-01-01

    PROMPT is a tactile-kinesthetic approach to the assessment and treatment of speech production disorders. PROMPT uses tactile-kinesthetic cues to facilitate motor speech behaviors. Therapy is structured from basic motor speech patterns with much tactile-kinesthetic cueing, towards complex motor speech

  7. Evaluation of the 235U prompt fission neutron spectrum including a detailed analysis of experimental data and improved model information

    Science.gov (United States)

    Neudecker, Denise; Talou, Patrick; Kahler, Albert C.; White, Morgan C.; Kawano, Toshihiko

    2017-09-01

    We present an evaluation of the 235U prompt fission neutron spectrum (PFNS) induced by thermal to 20-MeV neutrons. Experimental data and associated covariances were analyzed in detail. The incident energy dependence of the PFNS was modeled with an extended Los Alamos model combined with the Hauser-Feshbach and exciton models. These models describe prompt fission, pre-fission compound nucleus, and pre-equilibrium neutron emissions. The evaluated PFNS agree well with the experimental data included in this evaluation, preliminary data of the LANL and LLNL Chi-Nu measurement, and recent evaluations by Capote et al. and Rising et al. However, they are softer than the ENDF/B-VII.1 (VII.1) and JENDL-4.0 PFNS for incident neutron energies up to 2 MeV. Simulated effective multiplication factors k_eff of the Godiva and Flattop-25 critical assemblies are further from the measured values if the current data are used within VII.1 than if only VII.1 data are used. However, if this work is used with ENDF/B-VIII.0β2 data, simulated values of k_eff agree well with the measured ones.

  8. Using video modeling to teach reciprocal pretend play to children with autism.

    Science.gov (United States)

    MacDonald, Rebecca; Sacramone, Shelly; Mansfield, Renee; Wiltz, Kristine; Ahearn, William H

    2009-01-01

    The purpose of the present study was to use video modeling to teach children with autism to engage in reciprocal pretend play with typically developing peers. Scripted play scenarios involving various verbalizations and play actions with adults as models were videotaped. Two children with autism were each paired with a typically developing child, and a multiple-probe design across three play sets was used to evaluate the effects of the video modeling procedure. Results indicated that both children with autism and the typically developing peers acquired the sequences of scripted verbalizations and play actions quickly and maintained this performance during follow-up probes. In addition, probes indicated an increase in the mean number of unscripted verbalizations as well as reciprocal verbal interactions and cooperative play. These findings are discussed as they relate to the development of reciprocal pretend-play repertoires in young children with autism.

  9. It's All a Matter of Perspective: Viewing First-Person Video Modeling Examples Promotes Learning of an Assembly Task

    Science.gov (United States)

    Fiorella, Logan; van Gog, Tamara; Hoogerheide, Vincent; Mayer, Richard E.

    2017-01-01

    The present study tests whether presenting video modeling examples from the learner's (first-person) perspective promotes learning of an assembly task, compared to presenting video examples from a third-person perspective. Across 2 experiments conducted in different labs, university students viewed a video showing how to assemble an 8-component…

  10. Probabilistic Approaches to Video Retrieval

    NARCIS (Netherlands)

    Ianeva, Tzvetanka; Boldareva, L.; Westerveld, T.H.W.; Cornacchia, Roberto; Hiemstra, Djoerd; de Vries, A.P.

    Our experiments for TRECVID 2004 further investigate the applicability of the so-called "Generative Probabilistic Models to video retrieval". TRECVID 2003 results demonstrated that mixture models computed from video shot sequences improve the precision of "query by examples" results when

  11. Using Portable Video Modeling Technology to Increase the Compliment Behaviors of Children with Autism During Athletic Group Play.

    Science.gov (United States)

    Macpherson, Kevin; Charlop, Marjorie H; Miltenberger, Catherine A

    2015-12-01

    A multiple baseline design across participants was used to examine the effects of a portable video modeling intervention delivered in the natural environment on the verbal compliments and compliment gestures demonstrated by five children with autism. Participants were observed playing kickball with peers and adults. In baseline, participants demonstrated few compliment behaviors. During intervention, an iPad(®) was used to implement the video modeling treatment during the course of the athletic game. Viewing the video rapidly increased the verbal compliments participants gave to peers. Participants also demonstrated more response variation after watching the videos. Some generalization to an untrained activity occurred and compliment gestures also occurred. Results are discussed in terms of contributions to the literature.

  12. Video-Quality Estimation Based on Reduced-Reference Model Employing Activity-Difference

    Science.gov (United States)

    Yamada, Toru; Miyamoto, Yoshihiro; Senda, Yuzo; Serizawa, Masahiro

    This paper presents a Reduced-reference based video-quality estimation method suitable for individual end-user quality monitoring of IPTV services. With the proposed method, the activity values for individual given-size pixel blocks of an original video are transmitted to end-user terminals. At the end-user terminals, the video quality of a received video is estimated on the basis of the activity-difference between the original video and the received video. Psychovisual weightings and video-quality score adjustments for fatal degradations are applied to improve estimation accuracy. In addition, low-bit-rate transmission is achieved by using temporal sub-sampling and by transmitting only the lower six bits of each activity value. The proposed method achieves accurate video quality estimation using only low-bit-rate original video information (15kbps for SDTV). The correlation coefficient between actual subjective video quality and estimated quality is 0.901 with 15kbps side information. The proposed method does not need computationally demanding spatial and gain-and-offset registrations. Therefore, it is suitable for real-time video-quality monitoring in IPTV services.
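
    The reduced-reference idea rests on a per-block "activity" value computed from the original video at the sender and compared against the same value recomputed from the received video. A simplified sketch of that comparison (block variance as the activity measure is an assumption, and the psychovisual weighting, score adjustment, and temporal sub-sampling of the actual method are omitted):

```python
def block_activity(frame, bs=8):
    """Per-block pixel variance of a 2-D luminance frame (list of rows)."""
    h, w = len(frame), len(frame[0])
    acts = []
    for by in range(0, h - bs + 1, bs):
        for bx in range(0, w - bs + 1, bs):
            px = [frame[y][x] for y in range(by, by + bs) for x in range(bx, bx + bs)]
            m = sum(px) / len(px)
            acts.append(sum((p - m) ** 2 for p in px) / len(px))
    return acts

def activity_difference(original, received, bs=8):
    """Mean absolute difference of block activities (the reduced-reference
    side information is just the original's activity list)."""
    a, b = block_activity(original, bs), block_activity(received, bs)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# identical frames give zero activity difference; a flattened (detail-free)
# received frame gives a positive one
frame = [[(x * y) % 17 for x in range(16)] for y in range(16)]
flat = [[8] * 16 for _ in range(16)]
```

    Only the short activity list travels to the end-user terminal, which is what keeps the side-information rate as low as the 15 kbps reported for SDTV.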

  13. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    Science.gov (United States)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.

  14. Toy Trucks in Video Analysis

    DEFF Research Database (Denmark)

    Buur, Jacob; Nakamura, Nanami; Larsen, Rainer Rye

    2015-01-01

    Video fieldstudies of people who could be potential users is widespread in design projects. How to analyse such video is, however, often challenging, as it is time consuming and requires a trained eye to unlock experiential knowledge in people’s practices. In our work with industrialists, we have discovered that using scale-models like toy trucks has a strongly encouraging effect on developers/designers to collaboratively make sense of field videos. In our analysis of such scale-model sessions, we found some quite fundamental patterns of how participants utilise objects; the participants build shared narratives by moving the objects around, they name them to handle the complexity, they experience what happens in the video through their hands, and they use the video together with objects to create alternative narratives, and thus alternative solutions to the problems they observe. In this paper we claim...

  15. Fast detection and modeling of human-body parts from monocular video

    NARCIS (Netherlands)

    Lao, W.; Han, Jungong; With, de P.H.N.; Perales, F.J.; Fisher, R.B.

    2009-01-01

    This paper presents a novel and fast scheme to detect different body parts in human motion. Using monocular video sequences, trajectory estimation and body modeling of moving humans are combined in a co-operating processing architecture. More specifically, for every individual person, features of

  16. Bone marrow equivalent prompt dose from two common fallout scenarios

    International Nuclear Information System (INIS)

    Morris, M.D.; Jones, T.D.; Young, R.W.

    1994-01-01

    A cell-kinetics model for radiation-induced myelopoiesis has been derived for mice, rats, dogs, sheep, swine, and burros. The model was extended to humans after extensive comparisons with molecular and cellular data from biological experiments and an assortment of predictive/validation tests on animal mortality, cell survival, and cellular repopulation following irradiations. One advantage of the model is that any complex pattern of protracted irradiation can be equated to its equivalent prompt dose. Severity of biological response depends upon target-organ dose, dose rate, and dose fractionation. Epidemiological and animal data are best suited for exposures given in brief periods of time. To use those data to assess risk from protracted human exposures, it is obligatory to model molecular repair and compensatory proliferation in terms of prompt dose. Although the model is somewhat complex both mathematically and biologically, this note describes simple numerical approximations for two common exposure scenarios. Both approximations are easily evaluated on a simple pocket calculator by a health physicist or emergency management officer. 12 refs., 5 figs

  17. Probabilistic recognition of human faces from video

    DEFF Research Database (Denmark)

    Zhou, Saohua; Krüger, Volker; Chellappa, Rama

    2003-01-01

    Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal information in a probe video, which simultaneously characterizes the kinematics and identity using a motion vector and an identity variable, respectively. The joint posterior distribution of the motion vector and the identity variable is estimated at each time instant and then propagated to the next time instant; the marginal distribution of the identity variable produces the recognition result. The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using videos collected by NIST/USF and CMU illustrate the effectiveness of this approach for both still-to-video and video-to-video recognition.
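
    In a simplified setting that ignores the motion vector, the identity part of that propagation reduces to a recursive Bayes update of the identity posterior across frames. A toy sketch (the gallery names and per-frame likelihoods are made-up numbers):

```python
def propagate_identity_posterior(prior, frame_likelihoods):
    """Recursive Bayes update of P(identity | frames 1..t); the identity
    with the highest final marginal is the recognition result."""
    posterior = dict(prior)
    for lik in frame_likelihoods:  # one likelihood dict per probe-video frame
        unnorm = {name: posterior[name] * lik[name] for name in posterior}
        z = sum(unnorm.values())
        posterior = {name: p / z for name, p in unnorm.items()}
    return posterior

# hypothetical gallery of three identities with per-frame match likelihoods
prior = {"alice": 1 / 3, "bob": 1 / 3, "carol": 1 / 3}
frames = [
    {"alice": 0.6, "bob": 0.3, "carol": 0.1},
    {"alice": 0.5, "bob": 0.4, "carol": 0.1},
    {"alice": 0.7, "bob": 0.2, "carol": 0.1},
]
posterior = propagate_identity_posterior(prior, frames)
best = max(posterior, key=posterior.get)  # "alice"
```

    The accumulation over frames is what lets weak single-frame evidence harden into a confident identity estimate, the central benefit of video over still probes.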

  18. Hybrid Modeling of Intra-DCT Coefficients for Real-Time Video Encoding

    Directory of Open Access Journals (Sweden)

    Li Jin

    2008-01-01

    The two-dimensional discrete cosine transform (2-D DCT) and its subsequent quantization are widely used in standard video encoders. However, since most DCT coefficients become zeros after quantization, a number of redundant computations are performed. This paper proposes a hybrid statistical model used to predict the zero-quantized DCT (ZQDCT) coefficients for the intra-transform and to achieve better real-time performance. First, each pixel block at the input of the DCT is decomposed into a series of mean values and a residual block. Subsequently, a statistical model based on the Gaussian distribution is used to predict the ZQDCT coefficients of the residual block. Then, a sufficient condition under which each quantized coefficient becomes zero is derived from the mean values. Finally, a hybrid model to speed up the DCT and quantization calculations is proposed. Experimental results show that the proposed model can reduce more redundant computations and achieve better real-time performance than the reference in the literature at the cost of negligible video quality degradation. Experiments also show that the proposed model significantly reduces multiplications for the DCT and quantization. This is particularly suitable for processors in portable devices, where multiplications consume more power than additions. Computational reduction implies longer battery lifetime and energy economy.
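
    The deterministic half of such a scheme can be illustrated with the standard sufficient condition for zero-quantized DCT coefficients: the magnitude of an orthonormal 8×8 DCT coefficient is bounded by (C(u)C(v)/4)·Σ|x|, so a block whose bound falls below half the quantization step can skip the transform entirely. A sketch of that test (the paper's Gaussian residual model is statistical and not reproduced here; the rounding threshold q/2 is an assumption):

```python
import math

def zero_quantized_map(residual, q_step):
    """Sufficient-condition test: coefficient (u, v) of the orthonormal 8x8
    DCT is bounded by (C(u)*C(v)/4) * sum|residual|, with C(0) = 1/sqrt(2)
    and C(k) = 1 otherwise; if that bound is below half the quantization
    step, the quantized coefficient is guaranteed to be zero."""
    sad = sum(abs(v) for row in residual for v in row)

    def c(k):
        return 1.0 / math.sqrt(2.0) if k == 0 else 1.0

    return [[(c(u) * c(v) / 4.0) * sad < q_step / 2.0 for v in range(8)]
            for u in range(8)]

# a low-amplitude residual block: every coefficient is provably zero after
# quantization, so the DCT and quantization for this block can be skipped
residual = [[1 if (x + y) % 2 else -1 for x in range(8)] for y in range(8)]
skip_map = zero_quantized_map(residual, q_step=40)
```

    In the hybrid model, blocks that fail this cheap deterministic test fall back to the statistical prediction before any full DCT is computed.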

  19. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  20. Video Self-Modeling: A Promising Strategy for Noncompliant Children.

    Science.gov (United States)

    Axelrod, Michael I; Bellini, Scott; Markoff, Kimberly

    2014-07-01

    The current study investigated the effects of a Video Self-Modeling (VSM) intervention on the compliance and aggressive behavior of three children placed in a psychiatric hospital. Each participant viewed brief video clips of himself following simple adult instructions just prior to the school's morning session and the unit's afternoon free period. A multiple baseline design across settings was used to evaluate the effects of the VSM intervention on compliance with staff instructions and aggressive behavior on the hospital unit and in the hospital-based classroom. All three participants exhibited higher levels of compliance and fewer aggressive episodes during the intervention condition, and the effects were generally maintained when the intervention was withdrawn. Hospital staff reported at the conclusion of the study that the VSM intervention was easy to implement and beneficial for all participants. Taken together, the results suggest VSM is a promising, socially acceptable, and proactive intervention approach for improving the behavior of noncompliant children. © The Author(s) 2014.

  1. PROMPT DOSE ANALYSIS FOR THE NATIONAL IGNITION FACILITY

    International Nuclear Information System (INIS)

    Khater, H.; Dauffy, L.; Sitaraman, S.; Brereton, S.

    2008-01-01

    Detailed 3-D modeling of the NIF facility is developed to accurately understand the prompt radiation environment within NIF. Prompt dose values are calculated for different phases of NIF operation. Results of the analysis were used to determine the final thicknesses of the Target Bay (TB) and secondary doors as well as the required shield thicknesses for all unused penetrations. Integrated dose values at different locations within the facility are needed to formulate the personnel access requirements within different parts of the facility. The conclusions of this presentation are: (1) The current NIF facility model includes all important features of the Target Chamber, shielding system, and building configuration; (2) All shielding requirements for Phase I operation are met; (3) Negligible dose values (a fraction of a mrem) are expected in normally occupied areas during Phase I; (4) In preparation for the Ignition Campaign and Phase IV of operation, all primary and secondary shield doors will be installed; (5) Unused utility penetrations in the Target Bay and Switchyard walls (∼50%) will be shielded by 1 foot thick concrete to reduce prompt dose inside and outside the NIF facility; (6) During Phase IV, a 20 MJ shot will produce acceptable dose levels in the occupied areas as well as at the nearest site boundary; (7) A comprehensive radiation monitoring plan will be put in place to monitor dose values at a large number of locations; and (8) Results of the dose monitoring will be used to modify personnel access requirements if needed.

  2. The Ethics of Video-Game Monetization Modelling: A Discussion on Ethics and Legitimacy of Micro-transactions

    OpenAIRE

    Khan, Arun Rashid

    2017-01-01

    The purpose of the paper is to present an insight into and elaboration of the growing gaming industry. As the video-game industry has enjoyed immense growth, both in the number of consumers who partake of / consume video-games and in profits, current research discourse has been focussed on the emergence of micro-transactions and their role in the business model development of the firm. Furthermore, research has also been focussed on the sociological aspects of video-games in terms of ...

  3. Teaching Social-Communication Skills to Preschoolers with Autism: Efficacy of Video versus in Vivo Modeling in the Classroom

    Science.gov (United States)

    Wilson, Kaitlyn P.

    2013-01-01

    Video modeling is a time- and cost-efficient intervention that has been proven effective for children with autism spectrum disorder (ASD); however, the comparative efficacy of this intervention has not been examined in the classroom setting. The present study examines the relative efficacy of video modeling as compared to the more widely-used…

  4. Summary Report of Second Research Coordination Meeting on Prompt Fission Neutron Spectra of Major Actinides

    International Nuclear Information System (INIS)

    Capote Noy, R.

    2013-09-01

    A summary is given of the Second Research Coordination Meeting on Prompt Fission Neutron Spectra of Actinides. Experimental data and modelling methods for prompt fission neutron spectra were reviewed. Extensive technical discussions were held on theoretical methods to calculate prompt fission spectra. Detailed coordinated research proposals have been agreed upon. Summary reports of selected technical presentations at the meeting are given. The resulting work plan of the Coordinated Research Programme is summarized, along with actions and deadlines. (author)

  5. The Effects of Video Self-Modeling on the Decoding Skills of Children At Risk for Reading Disabilities

    OpenAIRE

    Ayala, Sandra M

    2010-01-01

    Ten first grade students, participating in a Tier II response to intervention (RTI) reading program received an intervention of video self modeling to improve decoding skills and sight word recognition. The students were video recorded blending and segmenting decodable words, and reading sight words taken directly from their curriculum instruction. Individual videos were recorded and edited to show students successfully and accurately decoding words and practicing sight word recognition. Each...

  6. Reviews in instructional video

    NARCIS (Netherlands)

    van der Meij, Hans

    2017-01-01

    This study investigates the effectiveness of a video tutorial for software training whose construction was based on a combination of insights from multimedia learning and Demonstration-Based Training. In the videos, a model of task performance was enhanced with instructional features that were

  7. Searches for Prompt R-Parity-Violating Supersymmetry at the LHC

    International Nuclear Information System (INIS)

    Redelbach, Andreas

    2015-01-01

    Searches for supersymmetry (SUSY) at the LHC frequently assume the conservation of R-parity in their design, optimization, and interpretation. In the case that R-parity is not conserved, constraints on SUSY particle masses tend to be weakened with respect to R-parity-conserving models. We review the current status of searches for R-parity-violating (RPV) supersymmetry models at the ATLAS and CMS experiments, limited to 8 TeV search results published or submitted for publication as of the end of March 2015. All forms of renormalisable RPV terms leading to prompt signatures have been considered in the set of analyses under review. The results of these searches summarize the main constraints on various RPV models from LHC Run I and also define the basis for promising signal regions to be optimized for Run II. In addition to identifying regions highly constrained by existing searches, gaps in the coverage of the parameter space of RPV SUSY are outlined.

  8. Exploring inter-frame correlation analysis and wavelet-domain modeling for real-time caption detection in streaming video

    Science.gov (United States)

    Li, Jia; Tian, Yonghong; Gao, Wen

    2008-01-01

    In recent years, the amount of streaming video on the Web has grown rapidly. Retrieving these streaming videos poses the challenge of indexing and analyzing the media in real time, because the streams must be treated as effectively infinite in length, thus precluding offline processing. Generally speaking, captions are important semantic clues for video indexing and retrieval. However, existing caption detection methods often have difficulty performing real-time detection for streaming video, and few of them address the differentiation of captions from scene texts and scrolling texts. In general, these texts play different roles in streaming video retrieval. To overcome these difficulties, this paper proposes a novel approach that explores inter-frame correlation analysis and wavelet-domain modeling for real-time caption detection in streaming video. In our approach, the inter-frame correlation information is used to distinguish caption texts from scene texts and scrolling texts. Moreover, wavelet-domain Generalized Gaussian Models (GGMs) are utilized to automatically remove non-text regions from each frame and keep only caption regions for further processing. Experimental results show that our approach is able to offer real-time caption detection with high recall and a low false alarm rate, and can also effectively discern caption texts from the other texts even at low resolutions.
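The wavelet-domain GGM step rests on fitting a generalized Gaussian density to wavelet coefficients. A common moment-matching estimator (a sketch of the general technique, not necessarily the authors' exact procedure) inverts the ratio E[|x|]² / E[x²], which uniquely identifies the shape parameter:

```python
import math
import random

def ggd_ratio(beta):
    """Theoretical ratio E[|x|]^2 / E[x^2] for a zero-mean generalized
    Gaussian density with shape parameter beta (beta=2: Gaussian,
    beta=1: Laplacian). The ratio increases monotonically with beta."""
    return math.gamma(2.0 / beta) ** 2 / (
        math.gamma(1.0 / beta) * math.gamma(3.0 / beta))

def shape_from_ratio(target):
    """Invert ggd_ratio by bisection (valid since it is monotone)."""
    lo, hi = 0.1, 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def estimate_shape(samples):
    """Moment-matching shape estimate from data, e.g. the wavelet
    coefficients of one subband."""
    n = len(samples)
    m1 = sum(abs(x) for x in samples) / n
    m2 = sum(x * x for x in samples) / n
    return shape_from_ratio(m1 * m1 / m2)

# Sanity check on synthetic data: Gaussian samples should recover a
# shape parameter near 2.
random.seed(7)
gauss = [random.gauss(0.0, 1.0) for _ in range(20000)]
print(round(estimate_shape(gauss), 2))  # close to 2.0
```

Text regions tend to produce heavier-tailed (smaller-shape) coefficient distributions than smooth background, which is what makes the fitted parameters usable for discarding non-text regions.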

  9. Correlated prompt fission data in transport simulations

    Science.gov (United States)

    Talou, P.; Vogt, R.; Randrup, J.; Rising, M. E.; Pozzi, S. A.; Verbeke, J.; Andrews, M. T.; Clarke, S. D.; Jaffke, P.; Jandel, M.; Kawano, T.; Marcath, M. J.; Meierbachtol, K.; Nakae, L.; Rusev, G.; Sood, A.; Stetcu, I.; Walker, C.

    2018-01-01

    Detailed information on the fission process can be inferred from the observation, modeling and theoretical understanding of prompt fission neutron and γ-ray observables. Beyond simple average quantities, the study of distributions and correlations in prompt data, e.g., multiplicity-dependent neutron and γ-ray spectra, angular distributions of the emitted particles, n - n, n - γ, and γ - γ correlations, can place stringent constraints on fission models and parameters that would otherwise be free to be tuned separately to represent individual fission observables. The FREYA and CGMF codes have been developed to follow the sequential emissions of prompt neutrons and γ rays from the initial excited fission fragments produced right after scission. Both codes implement Monte Carlo techniques to sample initial fission fragment configurations in mass, charge and kinetic energy and sample probabilities of neutron and γ emission at each stage of the decay. This approach naturally leads to using simple but powerful statistical techniques to infer distributions and correlations among many observables and model parameters. The comparison of model calculations with experimental data provides a rich arena for testing various nuclear physics models such as those related to the nuclear structure and level densities of neutron-rich nuclei, the γ-ray strength functions of dipole and quadrupole transitions, the mechanism for dividing the excitation energy between the two nascent fragments near scission, and the mechanisms behind the production of angular momentum in the fragments, etc. Beyond the obvious interest from a fundamental physics point of view, such studies are also important for addressing data needs in various nuclear applications. The inclusion of the FREYA and CGMF codes into the MCNP6.2 and MCNPX - PoliMi transport codes, for instance, provides a new and powerful tool to simulate correlated fission events in neutron transport calculations important in

  10. Correlated prompt fission data in transport simulations

    Energy Technology Data Exchange (ETDEWEB)

    Talou, P.; Jaffke, P.; Kawano, T.; Stetcu, I. [Los Alamos National Laboratory, Nuclear Physics Group, Theoretical Division, Los Alamos, NM (United States); Vogt, R. [Lawrence Livermore National Laboratory, Nuclear and Chemical Sciences Division, Livermore, CA (United States); University of California, Physics Department, Davis, CA (United States); Randrup, J. [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Rising, M.E.; Andrews, M.T.; Sood, A. [Los Alamos National Laboratory, Monte Carlo Methods, Codes, and Applications Group, Los Alamos, NM (United States); Pozzi, S.A.; Clarke, S.D.; Marcath, M.J. [University of Michigan, Department of Nuclear Engineering and Radiological Sciences, Ann Arbor, MI (United States); Verbeke, J.; Nakae, L. [Lawrence Livermore National Laboratory, Nuclear and Chemical Sciences Division, Livermore, CA (United States); Jandel, M. [Los Alamos National Laboratory, Nuclear and Radiochemistry Group, Los Alamos, NM (United States); University of Massachusetts, Department of Physics and Applied Physics, Lowell, MA (United States); Meierbachtol, K. [Los Alamos National Laboratory, Nuclear Engineering and Nonproliferation, Los Alamos, NM (United States); Rusev, G.; Walker, C. [Los Alamos National Laboratory, Nuclear and Radiochemistry Group, Los Alamos, NM (United States)

    2018-01-15

    Detailed information on the fission process can be inferred from the observation, modeling and theoretical understanding of prompt fission neutron and γ-ray observables. Beyond simple average quantities, the study of distributions and correlations in prompt data, e.g., multiplicity-dependent neutron and γ-ray spectra, angular distributions of the emitted particles, n-n, n-γ, and γ-γ correlations, can place stringent constraints on fission models and parameters that would otherwise be free to be tuned separately to represent individual fission observables. The FREYA and CGMF codes have been developed to follow the sequential emissions of prompt neutrons and γ rays from the initial excited fission fragments produced right after scission. Both codes implement Monte Carlo techniques to sample initial fission fragment configurations in mass, charge and kinetic energy and sample probabilities of neutron and γ emission at each stage of the decay. This approach naturally leads to using simple but powerful statistical techniques to infer distributions and correlations among many observables and model parameters. The comparison of model calculations with experimental data provides a rich arena for testing various nuclear physics models such as those related to the nuclear structure and level densities of neutron-rich nuclei, the γ-ray strength functions of dipole and quadrupole transitions, the mechanism for dividing the excitation energy between the two nascent fragments near scission, and the mechanisms behind the production of angular momentum in the fragments, etc. Beyond the obvious interest from a fundamental physics point of view, such studies are also important for addressing data needs in various nuclear applications. The inclusion of the FREYA and CGMF codes into the MCNP6.2 and MCNPX-PoliMi transport codes, for instance, provides a new and powerful tool to simulate correlated fission events in neutron transport calculations important in nonproliferation

  11. Video media-induced aggressiveness in children.

    Science.gov (United States)

    Cardwell, Michael Steven

    2013-09-01

    Transmission of aggressive behaviors to children through modeling by adults has long been a commonly held psychological concept; however, with the advent of technological innovations during the last 30 years, video media (television, movies, video games, and the Internet) has become the primary model for transmitting aggressiveness to children. This review explores the acquisition of aggressive behaviors by children through modeling of behaviors in violent video media. The impact of aggressive behaviors on the child, the family, and society is addressed. Suggested action plans to curb this societal ill are presented.

  12. Video Quality Prediction over Wireless 4G

    KAUST Repository

    Lau, Chun Pong

    2013-04-14

    In this paper, we study the problem of video quality prediction over the wireless 4G network. Video transmission data are collected from a real 4G SCM testbed to investigate factors that affect video quality. After feature transformation and selection on video and network parameters, video quality is predicted by solving a regression problem. Experimental results show that the dominant factor in video quality is the channel attenuation, and that video quality can be well estimated by our models with small errors.
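The regression step can be illustrated with a minimal ordinary-least-squares sketch. The feature and all numbers below are invented stand-ins; the paper's actual features and models come from its 4G SCM testbed measurements:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

# Invented stand-in data: a MOS-like quality score falling linearly
# with channel attenuation (dB), the dominant factor reported above.
attenuation = [5.0, 10.0, 15.0, 20.0, 25.0]
quality = [4.5, 4.0, 3.5, 3.0, 2.5]

slope, intercept = fit_line(attenuation, quality)

def predict(x):
    return slope * x + intercept

print(round(predict(12.0), 2))  # → 3.8
```

Real quality models use several features and nonlinear regressors, but the fit/predict structure is the same.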

  13. Video Quality Prediction over Wireless 4G

    KAUST Repository

    Lau, Chun Pong; Zhang, Xiangliang; Shihada, Basem

    2013-01-01

    In this paper, we study the problem of video quality prediction over the wireless 4G network. Video transmission data are collected from a real 4G SCM testbed to investigate factors that affect video quality. After feature transformation and selection on video and network parameters, video quality is predicted by solving a regression problem. Experimental results show that the dominant factor in video quality is the channel attenuation, and that video quality can be well estimated by our models with small errors.

  14. The Effect of Video-Assisted Inquiry Modified Learning Model on Student’s Achievement on 1st Fundamental Physics Practice

    Directory of Open Access Journals (Sweden)

    T W Maduretno

    2017-12-01

    The purposes of this research are: (1) to determine the effect of the video-assisted inquiry modified learning model on students' achievement; (2) to improve students' achievement in the 1st Fundamental Physics Practice through the video-assisted inquiry modified learning model. Student achievement, the dependent variable, includes the aspects of knowledge, skill, and attitude. The sampling was not random: the Mathematics Education class served as the control group and the Science Education class as the experimental group. The experimental group used the video-assisted inquiry modified learning model and the control group used the inquiry learning model. Data were collected through observation, questionnaires, and tests. An independent t-test was used to compare the average achievement of the control and experimental groups. The results were: (1) there was an effect of the video-assisted inquiry modified learning model on the knowledge and skill aspects but not on the attitude aspect; (2) the average learning outcome of the experimental group was higher than that of the control group; (3) the video-assisted inquiry modified learning model helped students become more skilled and better trained in discovery, inquiry into scientific principles, experimentation and observation, and explaining experimental and observational results, so that they were able to understand the materials of the 1st Fundamental Physics Practice.

  15. SIRSALE: integrated video database management tools

    Science.gov (United States)

    Brunie, Lionel; Favory, Loic; Gelas, J. P.; Lefevre, Laurent; Mostefaoui, Ahmed; Nait-Abdesselam, F.

    2002-07-01

    Video databases became an active field of research during the last decade. The main objective of such systems is to provide users with capabilities to search, access, and play back distributed stored video data in the same friendly way as they do for traditional distributed databases. Hence, such systems need to deal with hard issues: (a) video documents generate huge volumes of data and are time sensitive (streams must be delivered at a specific bitrate), and (b) the content of video data is very hard to extract automatically and needs to be annotated by humans. To cope with these issues, many approaches have been proposed in the literature, including data models, query languages, and video indexing. In this paper, we present SIRSALE: a set of video database management tools that allow users to manipulate video documents and streams stored in large distributed repositories. All the proposed tools are based on generic models that can be customized for specific applications using ad hoc adaptation modules. More precisely, SIRSALE allows users to: (a) browse video documents by structure (sequences, scenes, shots) and (b) query the video database content by using a graphical tool adapted to the nature of the target video documents. This paper also presents an annotation interface that allows archivists to describe the content of video documents. All these tools are coupled to a video player integrating remote VCR functionalities and are based on active network technology, and we present how dedicated active services allow optimized transport of video streams (with Tamanoir active nodes). We then describe experiments using SIRSALE on an archive of news videos and soccer matches. The system has been demonstrated to professionals with positive feedback. Finally, we discuss open issues and present some perspectives.

  16. Enhancing conflict negotiation strategies of adolescents with autism spectrum disorder using video modeling.

    Science.gov (United States)

    Hochhauser, M; Weiss, P L; Gal, E

    2018-01-01

    Adolescents with autism spectrum disorder (ASD) have particular difficulty in negotiating conflict. A randomized controlled trial (RCT) was carried out to determine whether the negotiation strategies of adolescents with ASD would be enhanced via a 6-week intervention based on a video modeling application. Adolescents with ASD, aged 12-18 years, were randomly divided into an intervention group (n = 36) and a non-treatment control group (n = 25). Participants' negotiating strategies prior to and following the intervention were measured using the Five Factor Negotiation Scale (FFNS; Nakkula & Nikitopoulos, 1999) and the ConflicTalk questionnaire (Kimsey & Fuller, 2003). The results suggest that video modeling is an effective intervention for improving and maintaining conflict negotiation strategies of adolescents with ASD.

  17. Semantic-based surveillance video retrieval.

    Science.gov (United States)

    Hu, Weiming; Xie, Dan; Fu, Zhouyu; Zeng, Wenrong; Maybank, Steve

    2007-04-01

    Visual surveillance produces large amounts of video data. Effective indexing and retrieval from surveillance video databases are very important. Although there are many ways to represent the content of video clips in current video retrieval algorithms, there still exists a semantic gap between users and retrieval systems. Visual surveillance systems supply a platform for investigating semantic-based video retrieval. In this paper, a semantic-based video retrieval framework for visual surveillance is proposed. A cluster-based tracking algorithm is developed to acquire motion trajectories. The trajectories are then clustered hierarchically using the spatial and temporal information, to learn activity models. A hierarchical structure of semantic indexing and retrieval of object activities, where each individual activity automatically inherits all the semantic descriptions of the activity model to which it belongs, is proposed for accessing video clips and individual objects at the semantic level. The proposed retrieval framework supports various queries including queries by keywords, multiple object queries, and queries by sketch. For multiple object queries, succession and simultaneity restrictions, together with depth and breadth first orders, are considered. For sketch-based queries, a method for matching trajectories drawn by users to spatial trajectories is proposed. The effectiveness and efficiency of our framework are tested in a crowded traffic scene.
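A toy version of the trajectory-clustering idea above (single-linkage agglomerative clustering on a naive mean pointwise distance; the paper's actual algorithm also uses temporal information and is not reproduced here):

```python
import math

def traj_dist(a, b):
    """Mean Euclidean distance between two equal-length trajectories."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def cluster(trajs, threshold):
    """Single-linkage agglomerative clustering: repeatedly merge the
    two closest clusters until no pair is closer than the threshold."""
    clusters = [[t] for t in trajs]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(traj_dist(a, b)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best[0] > threshold:
            break
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# Four toy trajectories: two hug the lane y = 0, two hug y = 5.
lanes = [
    [(0, 0.0), (1, 0.0), (2, 0.0)],
    [(0, 0.1), (1, 0.1), (2, 0.1)],
    [(0, 5.0), (1, 5.0), (2, 5.0)],
    [(0, 5.2), (1, 5.2), (2, 5.2)],
]
groups = cluster(lanes, threshold=1.0)
print(len(groups))  # → 2
```

Each resulting cluster plays the role of an activity model; a new trajectory can then be indexed under the semantic description of the cluster it falls into.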

  18. Evaluation of Multiple-Alternative Prompts during Tact Training

    Science.gov (United States)

    Leaf, Justin B.; Townley-Cochran, Donna; Mitchell, Erin; Milne, Christine; Alcalay, Aditt; Leaf, Jeremy; Leaf, Ron; Taubman, Mitch; McEachin, John; Oppenheim-Leaf, Misty L.

    2016-01-01

    This study compared 2 methods of fading prompts while teaching tacts to 3 individuals who had been diagnosed with autism spectrum disorder (ASD). The 1st method involved use of an echoic prompt and prompt fading. The 2nd method involved providing multiple-alternative answers and fading by increasing the difficulty of the discrimination. An adapted…

  19. Viewers' perceptions of a YouTube music therapy session video.

    Science.gov (United States)

    Gregory, Dianne; Gooding, Lori G

    2013-01-01

    Recent research revealed diverse content and varying levels of quality in YouTube music therapy videos and prompted questions about viewers' discrimination abilities. This study compares ratings of a YouTube music therapy session video by viewers with different levels of music therapy expertise to determine video elements related to perceptions of representational quality. Eighty-one participants included 25 novices (freshmen and sophomores in an introductory music therapy course), 25 pre-interns (seniors and equivalency students who had completed all core Music Therapy courses), 26 professionals (MT-BC or MT-BC eligibility) with a mean of 1.75 years of experience, and an expert panel of 5 MT-BC professionals with a mean of 11 years of experience in special education. After viewing a music therapy special education video that in previous research met basic competency criteria and professional standards of the American Music Therapy Association, participants completed a 16-item questionnaire. Novices' ratings were more positive (less discriminating) compared to experienced viewers' neutral or negative ratings. Statistical analysis (ANOVA) of novice, pre-intern, and professional ratings of all items revealed significant differences (p < .05) for specific therapy content and for a global rating of representational quality. Experienced viewers' ratings were similar to the expert panel's ratings. Content analysis of viewers' reasons for their representational quality ratings corroborated the ratings of therapy-specific content. A video that combines and clearly depicts therapy objectives, client improvement, and the effectiveness of music within a therapeutic intervention best represents the music therapy profession on a public social platform like YouTube.

  20. Using of Video Modeling in Teaching a Simple Meal Preparation Skill for Pupils of Down Syndrome

    Science.gov (United States)

    AL-Salahat, Mohammad Mousa

    2016-01-01

    The current study aimed to identify the impact of video modeling upon teaching three pupils with Down syndrome the skill of preparing a simple meal (sandwich), where the training was conducted in a separate classroom in schools of normal students. The training consisted of (i) watching the video of an intellectually disabled pupil, who is…

  1. The allure of the forbidden: breaking taboos, frustration, and attraction to violent video games.

    Science.gov (United States)

    Whitaker, Jodi L; Melzer, André; Steffgen, Georges; Bushman, Brad J

    2013-04-01

    Although people typically avoid engaging in antisocial or taboo behaviors, such as cheating and stealing, they may succumb in order to maximize their personal benefit. Moreover, they may be frustrated when the chance to commit a taboo behavior is withdrawn. The present study tested whether the desire to commit a taboo behavior, and the frustration from being denied such an opportunity, increases attraction to violent video games. Playing violent games allegedly offers an outlet for aggression prompted by frustration. In two experiments, some participants had no chance to commit a taboo behavior (cheating in Experiment 1, stealing in Experiment 2), others had a chance to commit a taboo behavior, and others had a withdrawn chance to commit a taboo behavior. Those in the latter group were most attracted to violent video games. Withdrawing the chance for participants to commit a taboo behavior increased their frustration, which in turn increased their attraction to violent video games.

  2. Research on quality metrics of wireless adaptive video streaming

    Science.gov (United States)

    Li, Xuefei

    2018-04-01

    With the development of wireless networks and intelligent terminals, video traffic has increased dramatically. Adaptive video streaming has become one of the most promising video transmission technologies. For this type of service, a good QoS (Quality of Service) in the wireless network does not always guarantee that all customers have a good experience. Thus, new quality metrics have been widely studied recently. Taking this into account, the objective of this paper is to investigate quality metrics for wireless adaptive video streaming. In this paper, a wireless video streaming simulation platform with a DASH mechanism and a multi-rate video generator is established. Based on this platform, a PSNR model, an SSIM model, and a Quality Level model are implemented. The Quality Level model considers QoE (Quality of Experience) factors such as image quality, stalling, and switching frequency, while the PSNR and SSIM models mainly consider the quality of the video. To evaluate the performance of these QoE models, three performance metrics (SROCC, PLCC, and RMSE), which compare subjective and predicted MOS (Mean Opinion Score) values, are calculated. From these performance metrics, the monotonicity, linearity, and accuracy of the quality metrics can be observed.
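The three performance metrics can be computed with a few lines of standard-library Python; the MOS vectors below are invented for illustration:

```python
import math

def pearson(xs, ys):
    """PLCC: Pearson linear correlation coefficient (linearity)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman(xs, ys):
    """SROCC: Pearson correlation of the ranks (monotonicity).
    Tie handling is omitted for brevity."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(xs), ranks(ys))

def rmse(xs, ys):
    """Root-mean-square error (accuracy)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

subjective = [1.5, 2.0, 3.0, 4.2, 4.8]  # hypothetical subjective MOS
predicted = [1.8, 2.1, 2.9, 4.0, 4.6]   # hypothetical model output
print(round(pearson(subjective, predicted), 3),
      round(spearman(subjective, predicted), 3),
      round(rmse(subjective, predicted), 3))  # → 0.998 1.0 0.195
```

In evaluation practice a nonlinear mapping is often fitted before computing PLCC and RMSE so that models are not penalized for a monotone but nonlinear relation to subjective scores.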

  3. Content-Aware Video Adaptation under Low-Bitrate Constraint

    Directory of Open Access Journals (Sweden)

    Hsiao Ming-Ho

    2007-01-01

    With the development of wireless networks and the improvement of mobile device capabilities, video streaming is more and more widespread in such environments. Under conditions of limited resources and inherent constraints, appropriate video adaptation has become one of the most important and challenging issues in wireless multimedia applications. In this paper, we propose a novel content-aware video adaptation in order to effectively utilize resources and improve visual perceptual quality. First, the attention model is derived by analyzing the characteristics of brightness, location, motion vector, and energy features in the compressed domain to reduce computational complexity. Then, through the integration of the attention model, the capability of the client device, and a correlational statistical model, attractive regions of video scenes are derived. The information-object-weighted (IOB-weighted) rate distortion model is used for adjusting the bit allocation. Finally, the video adaptation scheme dynamically adjusts the video bitstream at the frame level and the object level. Experimental results validate that the proposed scheme achieves better visual quality effectively and efficiently.
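The core of weighted bit allocation can be sketched in a few lines (an illustration of the general idea only; the paper's IOB-weighted rate-distortion model is more elaborate): a frame's bit budget is split across regions in proportion to their attention weights.

```python
def allocate_bits(budget, weights):
    """Integer bit shares proportional to attention weights.
    Rounding leftovers are handed to the first (most salient) region."""
    total = sum(weights)
    shares = [int(budget * w / total) for w in weights]
    shares[0] += budget - sum(shares)
    return shares

# Three hypothetical regions: a salient face, a moving object, and the
# background, with attention weights 5, 3, and 2.
print(allocate_bits(1000, [5.0, 3.0, 2.0]))  # → [500, 300, 200]
```

A rate-distortion formulation would replace the linear split with weights applied to per-region distortion terms, but the effect is the same: attractive regions keep more bits when the budget shrinks.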

  4. Measurement and calculation of characteristic prompt gamma ray spectra emitted during proton irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Polf, J C; Peterson, S; Beddar, S [M D Anderson Cancer Center, Univeristy of Texas, Houston, TX 77030 (United States); McCleskey, M; Roeder, B T; Spiridon, A; Trache, L [Cyclotron Institute, Texas A and M University, College Station, TX 77843 (United States)], E-mail: jcpolf@mdanderson.org

    2009-11-21

    In this paper, we present results of initial measurements and calculations of prompt gamma ray spectra (produced by proton-nucleus interactions) emitted from tissue equivalent phantoms during irradiations with proton beams. Measurements of prompt gamma ray spectra were made using a high-purity germanium detector shielded either with lead (passive shielding), or a Compton suppression system (active shielding). Calculations of the spectra were performed using a model of both the passive and active shielding experimental setups developed using the Geant4 Monte Carlo toolkit. From the measured spectra it was shown that it is possible to distinguish the characteristic emission lines from the major elemental constituent atoms (C, O, Ca) in the irradiated phantoms during delivery of proton doses similar to those delivered during patient treatment. Also, the Monte Carlo spectra were found to be in very good agreement with the measured spectra providing an initial validation of our model for use in further studies of prompt gamma ray emission during proton therapy. (note)

  5. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest in 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  6. PROMPT: Panchromatic Robotic Optical Monitoring and Polarimetry Telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Reichart, D.; Nysewander, M.; Moran, J. [North Carolina Univ., Chapel Hill (United States). Department of Physics and Astronomy] (and others)

    2005-07-15

    Funded by $1.2M in grants and donations, we are now building PROMPT at CTIO. When completed in late 2005, PROMPT will consist of six 0.41-meter-diameter Ritchey-Chretien telescopes on rapidly slewing mounts that respond to GRB alerts within seconds, when the afterglow is potentially extremely bright. Each mirror and camera coating is being optimized for a different wavelength range and function, including a NIR imager, two red-optimized imagers, a blue-optimized imager, a UV-optimized imager, and an optical polarimeter. PROMPT will be able to identify high-redshift events by dropout and distinguish these events from the similar signatures of extinction. In this way, PROMPT will act as a distance-finding scope for spectroscopic follow-up on the larger 4.1-meter-diameter SOAR telescope, which is also located at CTIO. When not chasing GRBs, PROMPT serves broader educational objectives across the state of North Carolina. Enclosure construction and the first two telescopes are now complete and functioning: PROMPT observed Swift's first GRB in December 2004. We upgrade from two to four telescopes in February 2005 and from four to six telescopes in mid-2005.

  7. Use of Video Modeling to Teach Extinguishing of Cooking Related Fires to Individuals with Moderate Intellectual Disabilities

    Science.gov (United States)

    Mechling, Linda C.; Gast, David L.; Gustafson, Melissa R.

    2009-01-01

    This study evaluated the effectiveness of video modeling to teach fire extinguishing behaviors to three young adults with moderate intellectual disabilities. A multiple probe design across three fire extinguishing behaviors and replicated across three students was used to evaluate the effectiveness of the video-based program. Results indicate that…

  8. Determination of the distal dose edge in a human phantom by measuring the prompt gamma distribution: a Monte Carlo study

    Energy Technology Data Exchange (ETDEWEB)

    Min, Chul Hee; Lee, Han Rim; Yeom, Yeon Su; Cho, Sung Koo; Kim, Chan Hyeong [Hanyang University, Seoul (Korea, Republic of)]

    2010-06-15

    The close relationship between the proton dose distribution and the distribution of prompt gammas generated by proton-induced nuclear interactions along the path of protons in a water phantom has been demonstrated by means of both Monte Carlo simulations and limited experiments. In order to test the clinical applicability of this method for determining the distal dose edge in a human body, a human voxel model, constructed based on a body-composition-approximated physical phantom, was used. The MCNPX code was then used to analyze the energy spectra and the prompt gamma yields from the major elements composing the human voxel model. Finally, the prompt gamma distribution generated from the voxel model, as measured by an array-type prompt gamma detection system, was calculated and compared with the proton dose distribution. According to the results, effective prompt gammas were produced mainly by oxygen, and the specific energy of the prompt gammas allowing for selective measurement was found to be 4.44 MeV. The results also show that the distal dose edge in the human phantom, despite its heterogeneous composition and complicated shape, can be determined by measuring the prompt gamma distribution with an array-type detection system.
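
The distal-edge determination discussed above can be sketched as a simple profile analysis: given prompt-gamma counts measured along depth by an array detector, locate the depth on the far side of the peak where the profile falls to a fixed fraction of its maximum. The toy profile and the 50% criterion are illustrative assumptions, not the paper's method.

```python
# Sketch: distal edge as the 50%-of-maximum falloff depth of a
# prompt-gamma depth profile, found by linear interpolation between
# the two samples that bracket the threshold.

def distal_edge(depths, counts, level=0.5):
    peak = max(range(len(counts)), key=lambda i: counts[i])
    threshold = level * counts[peak]
    for i in range(peak, len(counts) - 1):
        if counts[i] >= threshold > counts[i + 1]:
            # linear interpolation between the two bracketing samples
            frac = (counts[i] - threshold) / (counts[i] - counts[i + 1])
            return depths[i] + frac * (depths[i + 1] - depths[i])
    return None

# Toy profile: rises to a peak at depth 3, then falls off distally.
depths = [0, 1, 2, 3, 4, 5, 6]
counts = [10, 30, 60, 100, 80, 20, 5]
edge = distal_edge(depths, counts)
```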

  9. Video-documentation: 'The Pannonic ozon project'

    International Nuclear Information System (INIS)

    Loibl, W.; Cabela, E.; Mayer, H. F.; Schmidt, M.

    1998-07-01

    The goal of the project was the production of a video film as documentation of the Pannonian Ozone Project (POP). The main part of the video describes the POP model, consisting of the modules meteorology, emissions and chemistry, developed during the POP project. The model considers the European emission patterns of ozone precursors and the actual wind fields. It calculates ozone build-up and depletion within air parcels due to emissions and the weather situation along trajectory routes. Actual ozone concentrations are calculated during model runs simulating the photochemical processes within air parcels moving along 4-day trajectories before reaching the Vienna region. The model computations were validated through extensive ground- and aircraft-based measurements of ozone precursors and ozone concentration within the POP study area. Scenario computations were used to determine how much ozone can be reduced in north-eastern Austria by emission control measures. The video lasts 12:20 minutes and consists of computer animations and live video scenes, presenting the ozone problem in general, the POP model and the model results. The video was produced in co-operation between the Austrian Research Center Seibersdorf - Department of Environmental Planning (ARCS) and Joanneum Research - Institute of Information Systems (JR). ARCS was responsible for the idea, concept, storyboard and text, while JR was responsible for computer animation and general video production. The speaker text was written with scientific advice from the POP project partners: Institute of Meteorology and Physics, University of Agricultural Sciences - Vienna; Environment Agency Austria - Air Quality Department; Austrian Research Center Seibersdorf - Environmental Planning Department/System Research Division. The film was produced in German and English versions. (author)
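
The trajectory calculation described above, ozone build-up and depletion within an air parcel moving along a 4-day route, can be caricatured as a Lagrangian box model with hourly steps. The rate constants and the emission index are invented for illustration; the POP model itself couples full meteorology, emission and chemistry modules.

```python
# Sketch: a toy Lagrangian box model. Each hourly step adds ozone
# production driven by precursor emissions along the route and removes
# ozone with a first-order loss term. All parameter values are
# illustrative assumptions.

def ozone_along_trajectory(emissions, o3_init=30.0, k_prod=0.8, k_loss=0.02):
    """emissions: precursor emission index per hourly step along the route."""
    o3 = o3_init
    for e in emissions:
        o3 += k_prod * e          # photochemical build-up from precursors
        o3 -= k_loss * o3         # deposition / titration losses
    return o3

# 96 hourly steps (4-day trajectory), stronger emissions near the end.
route = [0.2] * 72 + [1.5] * 24
final_o3 = ozone_along_trajectory(route)
```

Comparing routes with different emission loadings reproduces the qualitative point of the scenario runs: parcels passing over stronger precursor sources arrive with more ozone.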

  10. An iPad™-based picture and video activity schedule increases community shopping skills of a young adult with autism spectrum disorder and intellectual disability.

    Science.gov (United States)

    Burckley, Elizabeth; Tincani, Matt; Guld Fisher, Amanda

    2015-04-01

    This study evaluated the iPad 2™ with Book Creator™ software for providing visual cues and video prompting to teach community shopping skills to a young adult with an autism spectrum disorder and intellectual disability. A multiple probe across settings design was used to assess the effects of the intervention on the participant's independence in following a shopping list in a grocery store across three community locations. Visual cues and video prompting substantially increased the participant's shopping skills in two of the three community locations, skill increases were maintained after the intervention was withdrawn, and shopping skills generalized to two untaught shopping items. Social validity surveys suggested that the participant's parent and staff viewed the goals, procedures, and outcomes of the intervention favorably. The iPad 2™ with Book Creator™ software may be an effective way to teach independent shopping skills in the community; additional replications are needed.

  11. Deep hierarchical attention network for video description

    Science.gov (United States)

    Li, Shuohao; Tang, Min; Zhang, Jun

    2018-03-01

    Pairing video with natural language descriptions remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model to reduce a visual scene to a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to the encoder-decoder models used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on the standard datasets show that our model outperforms state-of-the-art techniques.
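
The attention mechanism the model above stacks hierarchically can be reduced to a single dot-product attention step over per-frame feature vectors: score each frame against a decoder query, softmax the scores into weights, and form a weighted context vector. The 2-D toy features and plain dot-product scoring are simplifying assumptions; real systems use high-dimensional CNN features and learned projections.

```python
# Sketch: one softmax attention step over frame features.
import math

def softmax(xs):
    m = max(xs)                                  # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, frame_features):
    """Return attention weights over frames and the weighted context vector."""
    scores = [sum(q * f for q, f in zip(query, feat)) for feat in frame_features]
    weights = softmax(scores)
    dim = len(frame_features[0])
    context = [sum(w * feat[d] for w, feat in zip(weights, frame_features))
               for d in range(dim)]
    return weights, context

# Example: the query aligns with frames 0 and 2, so they get more weight.
features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weights, context = attend([1.0, 0.0], features)
```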

  12. Essay Prompts and Topics: Minimizing the Effect of Mean Differences.

    Science.gov (United States)

    Brown, James Dean; And Others

    1991-01-01

    Investigates whether prompts and topic types affect writing performance of college freshmen taking the Manoa Writing Placement Examination (MWPE). Finds that the MWPE is reliable but that responses to prompts and prompt sets differ. Shows that differences arising in performance on prompts or topics can be minimized by examining mean scores and…
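
One standard way to "minimize the effect of mean differences" across prompts, as the abstract above suggests by examining mean scores, is to standardize scores within each prompt before comparing examinees. This z-score adjustment is an illustrative assumption, not necessarily the procedure used in the MWPE study.

```python
# Sketch: z-standardizing essay scores within each prompt so that
# prompt-to-prompt mean differences no longer advantage or penalize
# examinees who happened to draw an easier or harder prompt.
import statistics

def standardize_by_prompt(scores_by_prompt):
    """scores_by_prompt: {prompt_id: [raw scores]} -> z-scores per prompt."""
    out = {}
    for prompt, scores in scores_by_prompt.items():
        mu = statistics.mean(scores)
        sd = statistics.pstdev(scores)
        out[prompt] = [(s - mu) / sd if sd else 0.0 for s in scores]
    return out

# Prompt B ran 20 points "harder" than prompt A, but after
# standardization, examinees at the same rank get the same z-score.
z = standardize_by_prompt({"A": [70, 80, 90], "B": [50, 60, 70]})
```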

  13. Analysis of prompt supercritical process with heat transfer and temperature feedback

    Institute of Scientific and Technical Information of China (English)

    ZHU Bo; ZHU Qian; CHEN Zhiyun

    2009-01-01

    The prompt supercritical process of a nuclear reactor with temperature feedback, initial power, and heat transfer under a large step reactivity insertion (ρ0 > β) is analyzed in this paper. Considering the effect of heat transfer on the temperature of the reactor, a new model is set up. For any initial power, the variations of output power and reactivity with time are obtained by numerical methods. The effects of the large inserted step reactivity and the initial power on the prompt supercritical process are analyzed and discussed. It was found that the effect of heat transfer on the output power and reactivity can be neglected for any initial power, that the output power obtained with the adiabatic model is basically in accordance with that of the present model, and that the analytical solution can be adopted. The results provide a theoretical basis for the safety analysis and operation management of a power reactor.
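
The adiabatic limit discussed above can be sketched with prompt-jump point kinetics plus energy-deposition (temperature) feedback, in the Nordheim-Fuchs spirit: power grows while the net reactivity exceeds β, then the accumulated energy drives reactivity down and the excursion self-terminates. All parameter values are illustrative, not the paper's; delayed neutrons and the heat-transfer term are omitted for brevity.

```python
# Sketch: forward-Euler integration of the prompt excursion
#   dP/dt = (rho - beta) / Lam * P,   rho(t) = rho0 - alpha * E(t),
# where E is the deposited energy (adiabatic temperature feedback).
# rho0 > beta, so the reactor starts prompt supercritical.

def prompt_excursion(rho0=0.009, beta=0.0065, Lam=1e-4,
                     alpha=1e-5, p0=1.0, dt=1e-4, steps=20000):
    p, energy = p0, 0.0
    peak = p0
    for _ in range(steps):
        rho = rho0 - alpha * energy      # feedback lowers reactivity as energy builds
        p += dt * (rho - beta) / Lam * p
        energy += dt * p
        peak = max(peak, p)
    return p, peak

p_final, p_peak = prompt_excursion()
```

The run shows the characteristic burst: power rises steeply, peaks when feedback has eaten the prompt reactivity margin, then collapses well below the peak.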

  14. A model linking video gaming, sleep quality, sweet drinks consumption and obesity among children and youth.

    Science.gov (United States)

    Turel, O; Romashkin, A; Morrison, K M

    2017-08-01

    There is a growing need to curb paediatric obesity. The aim of this study is to untangle associations between video-game-use attributes and obesity as a first step towards identifying and examining possible interventions. A cross-sectional, time-lagged cohort study was employed, using parent-child surveys (t1) and objective physical activity and physiological measures (t2) from 125 children/adolescents (mean age = 13.06, 9-17 years old) who play video games, recruited from two clinics at a Canadian academic children's hospital. Structural equation modelling and analysis of covariance were employed for inference. The results of the study are as follows: (i) self-reported video-game play duration in the 4-h window before bedtime is related to greater abdominal adiposity (waist-to-height ratio), and this association may be mediated through reduced sleep quality (measured with the Pittsburgh Sleep Quality Index); and (ii) self-reported average video-game session duration is associated with greater abdominal adiposity, and this association may be mediated through higher self-reported sweet-drink consumption while playing video games and reduced sleep quality. Video-game play duration in the 4-h window before bedtime, typical video-game session duration, sweet-drink consumption while playing video games and poor sleep quality have adverse associations with abdominal adiposity. Paediatricians and researchers should further explore how these factors can be altered through behavioural or pharmacological interventions as a means to reduce paediatric obesity. © 2017 World Obesity Federation.
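
The mediation logic behind findings (i) and (ii) above, gaming affecting adiposity through sleep quality, can be illustrated with the classic product-of-coefficients calculation on toy data. The data, the one-predictor OLS slopes, and the variable names are all invented for illustration; the study itself used structural equation modelling with covariates.

```python
# Sketch: indirect (mediated) effect as the product of path coefficients
#   a: predictor -> mediator, b: mediator -> outcome.
# Toy data: gaming hours -> sleep problems -> adiposity ratio.

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

gaming = [0, 1, 2, 3, 4, 5]
sleep_problems = [1.0, 1.4, 2.1, 2.4, 3.2, 3.3]   # rises with gaming
adiposity = [0.45, 0.46, 0.49, 0.50, 0.53, 0.54]  # rises with sleep problems

a = slope(gaming, sleep_problems)      # path a: gaming -> mediator
b = slope(sleep_problems, adiposity)   # path b: mediator -> outcome
indirect_effect = a * b
```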

  15. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    Science.gov (United States)

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

    Super-resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often shows high computational cost. Considering that recurrent neural networks (RNNs) can model the long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. It differs from vanilla RNNs in two ways: 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduces the number of network parameters and models the temporal dependency at a finer level, i.e., patch-based rather than frame-based; and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With this powerful temporal dependency modeling, our model can super-resolve videos with complex motions and achieve good performance.
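
The bidirectional recurrence idea above can be shown on 1-D toy "frames": a forward pass and a backward pass each carry temporal context with shared weights, and their states are fused per frame so every output sees both past and future frames. The scalar weights and the averaging fusion are illustrative assumptions; the paper uses convolutional connections over 2-D frames.

```python
# Sketch: bidirectional recurrent pass over a sequence of scalar frames.
# w_in scales the current input, w_rec carries the recurrent state;
# the two directions share the same weights.

def bidirectional_pass(frames, w_in=0.6, w_rec=0.4):
    fwd, state = [], 0.0
    for f in frames:                                 # forward in time
        state = w_in * f + w_rec * state
        fwd.append(state)
    bwd, state = [0.0] * len(frames), 0.0
    for i in range(len(frames) - 1, -1, -1):         # backward in time
        state = w_in * frames[i] + w_rec * state
        bwd[i] = state
    return [(a + b) / 2 for a, b in zip(fwd, bwd)]   # fuse both directions

# Palindromic toy sequence: the middle frame accumulates context
# from both sides and ends up with the largest fused response.
fused = bidirectional_pass([1.0, 2.0, 3.0, 2.0, 1.0])
```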

  16. Efficient Delivery of Scalable Video Using a Streaming Class Model

    Directory of Open Access Journals (Sweden)

    Jason J. Quinlan

    2018-03-01

    Full Text Available When we couple the rise in video streaming with the growing number of portable devices (smart phones, tablets, laptops, we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide graceful changes in video quality, all while respecting viewing satisfaction. In this context, the use of well-known scalable/layered media streaming techniques, commonly known as scalable video coding (SVC, is an attractive solution. SVC encodes a number of video quality levels within a single media stream. This has been shown to be an especially effective and efficient solution, but it fares badly in the presence of datagram losses. While multiple description coding (MDC can reduce the effects of packet loss on scalable video delivery, the increased delivery cost is counterproductive for constrained networks. This situation is accentuated in cases where only the lower quality level is required. In this paper, we assess these issues and propose a new approach called Streaming Classes (SC through which we can define a key set of quality levels, each of which can be delivered in a self-contained manner. This facilitates efficient delivery, yielding reduced transmission byte-cost for devices requiring lower quality, relative to MDC and Adaptive Layer Distribution (ALD (42% and 76% respective reduction for layer 2, while also maintaining high levels of consistent quality. We also illustrate how a selective packetisation technique can further reduce the effects of packet loss on viewable quality by
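
The client-side half of the scheme above, choosing one self-contained quality level to match current conditions, can be sketched as a simple lookup: pick the highest streaming class whose bandwidth budget fits. The class table and the kbit/s figures are invented for illustration; the paper's contribution is how each class is encoded and packetised, not this lookup.

```python
# Sketch: select the best self-contained Streaming Class that the
# currently available bandwidth can sustain. Budgets are illustrative.

STREAMING_CLASSES = [            # (name, required kbit/s), ascending quality
    ("base", 500),
    ("standard", 1500),
    ("high", 4000),
]

def pick_class(bandwidth_kbps):
    """Return the highest-quality class whose budget fits the bandwidth;
    fall back to the base class when nothing fits."""
    chosen = STREAMING_CLASSES[0][0]
    for name, need in STREAMING_CLASSES:
        if bandwidth_kbps >= need:
            chosen = name
    return chosen

quality = pick_class(2000)
```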

  17. Prompt and non-prompt $J/\\psi$ elliptic flow in Pb+Pb collisions at $\\sqrt{s_\\text{NN}}=5.02$ TeV with the ATLAS detector

    CERN Document Server

    The ATLAS collaboration

    2018-01-01

    The elliptic flow of prompt and non-prompt $J/\\psi$ was measured in the dimuon decay channel in Pb+Pb collisions at $\\sqrt{s_\\text{NN}}=5.02$ TeV with an integrated luminosity of 0.42 $\\mathrm{nb}^{-1}$ with ATLAS at the LHC. The prompt and non-prompt signals are separated using a two-dimensional simultaneous fit of the invariant mass and pseudo-proper time of the dimuon system from the $J/\\psi$ decay. The measurement is performed in the kinematic range $9 < p_\\mathrm{T} < 30$ GeV. Both prompt and non-prompt $J/\\psi$ mesons have non-zero elliptic flow. Prompt $J/\\psi$ $v_2$ decreases as a function of $p_\\mathrm{T}$, while non-prompt $J/\\psi$ $v_2$ is flat over the studied kinematic region. There is no observed dependence on rapidity or centrality.
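
The observable quoted above, elliptic flow $v_2$, is in its simplest (event-plane) form just the mean second azimuthal cosine, $v_2 = \langle\cos 2(\phi - \Psi)\rangle$. The sketch below evaluates it on toy azimuthal angles sampled with a built-in $\cos 2\phi$ modulation; the ATLAS analysis extracts $v_2$ with far more machinery (mass/lifetime fits, non-flow corrections).

```python
# Sketch: event-plane elliptic flow estimate v2 = <cos 2(phi - Psi)>
# on toy angles drawn from dN/dphi ∝ 1 + 2*v2_true*cos(2*phi)
# via accept-reject sampling (event plane fixed at Psi = 0).
import math
import random

def v2_event_plane(phis, psi=0.0):
    return sum(math.cos(2 * (phi - psi)) for phi in phis) / len(phis)

random.seed(1)
v2_true = 0.1
phis = []
while len(phis) < 20000:
    phi = random.uniform(-math.pi, math.pi)
    # accept with probability f(phi)/max(f), f(phi) = 1 + 2*v2_true*cos(2*phi)
    if random.uniform(0, 1 + 2 * v2_true) <= 1 + 2 * v2_true * math.cos(2 * phi):
        phis.append(phi)

v2_est = v2_event_plane(phis)
```

With 20,000 samples the statistical uncertainty on the estimate is roughly 0.005, so the recovered value sits close to the injected $v_2 = 0.1$.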

  18. A model of R-D performance evaluation for Rate-Distortion-Complexity evaluation of H.264 video coding

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren

    2007-01-01

    This paper considers a method for evaluating the Rate-Distortion-Complexity (R-D-C) performance of video coding. A statistical model of the transformed coefficients is used to estimate the Rate-Distortion (R-D) performance. A model framework for the rate, distortion, and slope of the R-D curve for inter- and intra-frames is presented. Assumptions are given for analyzing an R-D model for fast R-D-C evaluation. The theoretical expressions are combined with H.264 video coding and confirmed by experimental results. The complexity framework is applied to integer motion estimation.
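
The role of an R-D model and its slope, as used above to steer evaluation and bit allocation, can be illustrated with the textbook rate-distortion form for transform coefficients, $D(R) = \sigma^2\, 2^{-2R}$. This closed form stands in for the paper's statistical coefficient model; the variance value is illustrative.

```python
# Sketch: the classic exponential R-D model and its slope.
#   D(R) = sigma^2 * 2^(-2R)
#   dD/dR = -2 ln(2) * D(R)   (used as the "slope of the R-D curve")
import math

def distortion(rate_bits, sigma2=100.0):
    return sigma2 * 2.0 ** (-2.0 * rate_bits)

def rd_slope(rate_bits, sigma2=100.0):
    # derivative of sigma2 * 2^(-2R) with respect to R
    return -2.0 * math.log(2.0) * distortion(rate_bits, sigma2)

d1 = distortion(1.0)   # each extra bit of rate quarters the distortion
```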

  19. Prompt and non-prompt J/$\\psi$ production in pp collisions at $\\sqrt{s}$ = 7 TeV

    CERN Document Server

    Khachatryan, Vardan; Tumasyan, Armen; Adam, Wolfgang; Bergauer, Thomas; Dragicevic, Marko; Erö, Janos; Fabjan, Christian; Friedl, Markus; Fruehwirth, Rudolf; Ghete, Vasile Mihai; Hammer, Josef; Haensel, Stephan; Hartl, Christian; Hoch, Michael; Hörmann, Natascha; Hrubec, Josef; Jeitler, Manfred; Kasieczka, Gregor; Kiesenhofer, Wolfgang; Krammer, Manfred; Liko, Dietrich; Mikulec, Ivan; Pernicka, Manfred; Rohringer, Herbert; Schöfbeck, Robert; Strauss, Josef; Taurok, Anton; Teischinger, Florian; Waltenberger, Wolfgang; Walzel, Gerhard; Widl, Edmund; Wulz, Claudia-Elisabeth; Mossolov, Vladimir; Shumeiko, Nikolai; Suarez Gonzalez, Juan; Benucci, Leonardo; Ceard, Ludivine; De Wolf, Eddi A; Janssen, Xavier; Maes, Thomas; Mucibello, Luca; Ochesanu, Silvia; Roland, Benoit; Rougny, Romain; Selvaggi, Michele; Van Haevermaet, Hans; Van Mechelen, Pierre; Van Remortel, Nick; Adler, Volker; Beauceron, Stephanie; Blekman, Freya; Blyweert, Stijn; D'Hondt, Jorgen; Devroede, Olivier; Kalogeropoulos, Alexis; Maes, Joris; Maes, Michael; Tavernier, Stefaan; Van Doninck, Walter; Van Mulders, Petra; Van Onsem, Gerrit Patrick; Villella, Ilaria; Charaf, Otman; Clerbaux, Barbara; De Lentdecker, Gilles; Dero, Vincent; Gay, Arnaud; Hammad, Gregory Habib; Hreus, Tomas; Marage, Pierre Edouard; Thomas, Laurent; Vander Velde, Catherine; Vanlaer, Pascal; Wickens, John; Costantini, Silvia; Grunewald, Martin; Klein, Benjamin; Marinov, Andrey; Ryckbosch, Dirk; Thyssen, Filip; Tytgat, Michael; Vanelderen, Lukas; Verwilligen, Piet; Walsh, Sinead; Zaganidis, Nicolas; Basegmez, Suzan; Bruno, Giacomo; Caudron, Julien; De Favereau De Jeneret, Jerome; Delaere, Christophe; Demin, Pavel; Favart, Denis; Giammanco, Andrea; Grégoire, Ghislain; Hollar, Jonathan; Lemaitre, Vincent; Liao, Junhui; Militaru, Otilia; Ovyn, Severine; Pagano, Davide; Pin, Arnaud; Piotrzkowski, Krzysztof; Quertenmont, Loic; Schul, Nicolas; Beliy, Nikita; Caebergs, Thierry; Daubie, Evelyne; Alves, Gilvan; De Jesus Damiao, Dilson; Pol, 
Maria Elena; Henrique Gomes E Souza, Moacyr; Carvalho, Wagner; Melo Da Costa, Eliza; De Oliveira Martins, Carley; Fonseca De Souza, Sandro; Mundim, Luiz; Nogima, Helio; Oguri, Vitor; Prado Da Silva, Wanda Lucia; Santoro, Alberto; Silva Do Amaral, Sheila Mara; Sznajder, Andre; Torres Da Silva De Araujo, Felipe; De Almeida Dias, Flavia; Ferreira Dias, Marco Andre; Tomei, Thiago; De Moraes Gregores, Eduardo; Da Cunha Marinho, Franciole; Novaes, Sergio F; Padula, Sandra; Darmenov, Nikolay; Dimitrov, Lubomir; Genchev, Vladimir; Iaydjiev, Plamen; Piperov, Stefan; Rodozov, Mircho; Stoykova, Stefka; Sultanov, Georgi; Tcholakov, Vanio; Trayanov, Rumen; Vankov, Ivan; Dyulendarova, Milena; Hadjiiska, Roumyana; Kozhuharov, Venelin; Litov, Leander; Marinova, Evelina; Mateev, Matey; Pavlov, Borislav; Petkov, Peicho; Bian, Jian-Guo; Chen, Guo-Ming; Chen, He-Sheng; Jiang, Chun-Hua; Liang, Dong; Liang, Song; Wang, Jian; Wang, Jian; Wang, Xianyou; Wang, Zheng; Yang, Min; Zang, Jingjing; Zhang, Zhen; Ban, Yong; Guo, Shuang; Li, Wenbo; Mao, Yajun; Qian, Si-Jin; Teng, Haiyun; Zhu, Bo; Cabrera, Andrés; Gomez Moreno, Bernardo; Ocampo Rios, Alberto Andres; Osorio Oliveros, Andres Felipe; Sanabria, Juan Carlos; Godinovic, Nikola; Lelas, Damir; Lelas, Karlo; Plestina, Roko; Polic, Dunja; Puljak, Ivica; Antunovic, Zeljko; Dzelalija, Mile; Brigljevic, Vuko; Duric, Senka; Kadija, Kreso; Morovic, Srecko; Attikis, Alexandros; Fereos, Reginos; Galanti, Mario; Mousa, Jehad; Nicolaou, Charalambos; Ptochos, Fotios; Razis, Panos A; Rykaczewski, Hans; Assran, Yasser; Mahmoud, Mohammed; Hektor, Andi; Kadastik, Mario; Kannike, Kristjan; Müntel, Mait; Raidal, Martti; Rebane, Liis; Azzolini, Virginia; Eerola, Paula; Czellar, Sandor; Härkönen, Jaakko; Heikkinen, Mika Aatos; Karimäki, Veikko; Kinnunen, Ritva; Klem, Jukka; Kortelainen, Matti J; Lampén, Tapio; Lassila-Perini, Kati; Lehti, Sami; Lindén, Tomas; Luukka, Panja-Riina; Mäenpää, Teppo; Tuominen, Eija; Tuominiemi, Jorma; Tuovinen, Esa; Ungaro, 
Donatella; Wendland, Lauri; Banzuzi, Kukka; Korpela, Arja; Tuuva, Tuure; Sillou, Daniel; Besancon, Marc; Dejardin, Marc; Denegri, Daniel; Fabbro, Bernard; Faure, Jean-Louis; Ferri, Federico; Ganjour, Serguei; Gentit, François-Xavier; Givernaud, Alain; Gras, Philippe; Hamel de Monchenault, Gautier; Jarry, Patrick; Locci, Elizabeth; Malcles, Julie; Marionneau, Matthieu; Millischer, Laurent; Rander, John; Rosowsky, André; Titov, Maksym; Verrecchia, Patrice; Baffioni, Stephanie; Beaudette, Florian; Bianchini, Lorenzo; Bluj, Michal; Broutin, Clementine; Busson, Philippe; Charlot, Claude; Dobrzynski, Ludwik; Granier de Cassagnac, Raphael; Haguenauer, Maurice; Miné, Philippe; Mironov, Camelia; Ochando, Christophe; Paganini, Pascal; Porteboeuf, Sarah; Sabes, David; Salerno, Roberto; Sirois, Yves; Thiebaux, Christophe; Wyslouch, Bolek; Zabi, Alexandre; Agram, Jean-Laurent; Andrea, Jeremy; Besson, Auguste; Bloch, Daniel; Bodin, David; Brom, Jean-Marie; Cardaci, Marco; Chabert, Eric Christian; Collard, Caroline; Conte, Eric; Drouhin, Frédéric; Ferro, Cristina; Fontaine, Jean-Charles; Gelé, Denis; Goerlach, Ulrich; Greder, Sebastien; Juillot, Pierre; Karim, Mehdi; Le Bihan, Anne-Catherine; Mikami, Yoshinari; Van Hove, Pierre; Fassi, Farida; Mercier, Damien; Baty, Clement; Beaupere, Nicolas; Bedjidian, Marc; Bondu, Olivier; Boudoul, Gaelle; Boumediene, Djamel; Brun, Hugues; Chanon, Nicolas; Chierici, Roberto; Contardo, Didier; Depasse, Pierre; El Mamouni, Houmani; Falkiewicz, Anna; Fay, Jean; Gascon, Susan; Ille, Bernard; Kurca, Tibor; Le Grand, Thomas; Lethuillier, Morgan; Mirabito, Laurent; Perries, Stephane; Sordini, Viola; Tosi, Silvano; Tschudi, Yohann; Verdier, Patrice; Xiao, Hong; Roinishvili, Vladimir; Anagnostou, Georgios; Edelhoff, Matthias; Feld, Lutz; Heracleous, Natalie; Hindrichs, Otto; Jussen, Ruediger; Klein, Katja; Merz, Jennifer; Mohr, Niklas; Ostapchuk, Andrey; Perieanu, Adrian; Raupach, Frank; Sammet, Jan; Schael, Stefan; Sprenger, Daniel; Weber, Hendrik; 
Weber, Martin; Wittmer, Bruno; Ata, Metin; Bender, Walter; Erdmann, Martin; Frangenheim, Jens; Hebbeker, Thomas; Hinzmann, Andreas; Hoepfner, Kerstin; Hof, Carsten; Klimkovich, Tatsiana; Klingebiel, Dennis; Kreuzer, Peter; Lanske, Dankfried; Magass, Carsten; Masetti, Gianni; Merschmeyer, Markus; Meyer, Arnd; Papacz, Paul; Pieta, Holger; Reithler, Hans; Schmitz, Stefan Antonius; Sonnenschein, Lars; Steggemann, Jan; Teyssier, Daniel; Bontenackels, Michael; Davids, Martina; Duda, Markus; Flügge, Günter; Geenen, Heiko; Giffels, Manuel; Haj Ahmad, Wael; Heydhausen, Dirk; Kress, Thomas; Kuessel, Yvonne; Linn, Alexander; Nowack, Andreas; Perchalla, Lars; Pooth, Oliver; Rennefeld, Jörg; Sauerland, Philip; Stahl, Achim; Thomas, Maarten; Tornier, Daiske; Zoeller, Marc Henning; Aldaya Martin, Maria; Behrenhoff, Wolf; Behrens, Ulf; Bergholz, Matthias; Borras, Kerstin; Cakir, Altan; Campbell, Alan; Castro, Elena; Dammann, Dirk; Eckerlin, Guenter; Eckstein, Doris; Flossdorf, Alexander; Flucke, Gero; Geiser, Achim; Glushkov, Ivan; Hauk, Johannes; Jung, Hannes; Kasemann, Matthias; Katkov, Igor; Katsas, Panagiotis; Kleinwort, Claus; Kluge, Hannelies; Knutsson, Albert; Krücker, Dirk; Kuznetsova, Ekaterina; Lange, Wolfgang; Lohmann, Wolfgang; Mankel, Rainer; Marienfeld, Markus; Melzer-Pellmann, Isabell-Alissandra; Meyer, Andreas Bernhard; Mnich, Joachim; Mussgiller, Andreas; Olzem, Jan; Parenti, Andrea; Raspereza, Alexei; Raval, Amita; Schmidt, Ringo; Schoerner-Sadenius, Thomas; Sen, Niladri; Stein, Matthias; Tomaszewska, Justyna; Volyanskyy, Dmytro; Walsh, Roberval; Wissing, Christoph; Autermann, Christian; Bobrovskyi, Sergei; Draeger, Jula; Enderle, Holger; Gebbert, Ulla; Kaschube, Kolja; Kaussen, Gordon; Klanner, Robert; Mura, Benedikt; Naumann-Emme, Sebastian; Nowak, Friederike; Pietsch, Niklas; Sander, Christian; Schettler, Hannes; Schleper, Peter; Schröder, Matthias; Schum, Torben; Schwandt, Joern; Srivastava, Ajay Kumar; Stadie, Hartmut; Steinbrück, Georg; Thomsen, Jan; Wolf, 
Roger; Bauer, Julia; Buege, Volker; Chwalek, Thorsten; Daeuwel, Daniel; De Boer, Wim; Dierlamm, Alexander; Dirkes, Guido; Feindt, Michael; Gruschke, Jasmin; Hackstein, Christoph; Hartmann, Frank; Heindl, Stefan Michael; Heinrich, Michael; Held, Hauke; Hoffmann, Karl-Heinz; Honc, Simon; Kuhr, Thomas; Martschei, Daniel; Mueller, Steffen; Müller, Thomas; Neuland, Maike Brigitte; Niegel, Martin; Oberst, Oliver; Oehler, Andreas; Ott, Jochen; Peiffer, Thomas; Piparo, Danilo; Quast, Gunter; Rabbertz, Klaus; Ratnikov, Fedor; Renz, Manuel; Sabellek, Andreas; Saout, Christophe; Scheurer, Armin; Schieferdecker, Philipp; Schilling, Frank-Peter; Schott, Gregory; Simonis, Hans-Jürgen; Stober, Fred-Markus Helmut; Troendle, Daniel; Wagner-Kuhr, Jeannine; Zeise, Manuel; Zhukov, Valery; Ziebarth, Eva Barbara; Daskalakis, Georgios; Geralis, Theodoros; Kesisoglou, Stilianos; Kyriakis, Aristotelis; Loukas, Demetrios; Manolakos, Ioannis; Markou, Athanasios; Markou, Christos; Mavrommatis, Charalampos; Petrakou, Eleni; Gouskos, Loukas; Mertzimekis, Theodoros; Panagiotou, Apostolos; Evangelou, Ioannis; Foudas, Costas; Kokkas, Panagiotis; Manthos, Nikolaos; Papadopoulos, Ioannis; Patras, Vaios; Triantis, Frixos A; Aranyi, Attila; Bencze, Gyorgy; Boldizsar, Laszlo; Debreczeni, Gergely; Hajdu, Csaba; Horvath, Dezso; Kapusi, Anita; Krajczar, Krisztian; Laszlo, Andras; Sikler, Ferenc; Vesztergombi, Gyorgy; Beni, Noemi; Molnar, Jozsef; Palinkas, Jozsef; Szillasi, Zoltan; Veszpremi, Viktor; Raics, Peter; Trocsanyi, Zoltan Laszlo; Ujvari, Balazs; Bansal, Sunil; Beri, Suman Bala; Bhatnagar, Vipin; Dhingra, Nitish; Jindal, Monika; Kaur, Manjit; Kohli, Jatinder Mohan; Mehta, Manuk Zubin; Nishu, Nishu; Saini, Lovedeep Kaur; Sharma, Archana; Singh, Anil; Singh, Jas Bir; Singh, Supreet Pal; Ahuja, Sudha; Bhattacharya, Satyaki; Choudhary, Brajesh C; Gupta, Pooja; Jain, Sandhya; Jain, Shilpi; Kumar, Ashok; Shivpuri, Ram Krishen; Choudhury, Rajani Kant; Dutta, Dipanwita; Kailas, Swaminathan; Kataria, 
Sushil Kumar; Mohanty, Ajit Kumar; Pant, Lalit Mohan; Shukla, Prashant; Suggisetti, Praveenkumar; Aziz, Tariq; Guchait, Monoranjan; Gurtu, Atul; Maity, Manas; Majumder, Devdatta; Majumder, Gobinda; Mazumdar, Kajari; Mohanty, Gagan Bihari; Saha, Anirban; Sudhakar, Katta; Wickramage, Nadeesha; Banerjee, Sudeshna; Dugad, Shashikant; Mondal, Naba Kumar; Arfaei, Hessamaddin; Bakhshiansohi, Hamed; Etesami, Seyed Mohsen; Fahim, Ali; Hashemi, Majid; Jafari, Abideh; Khakzad, Mohsen; Mohammadi, Abdollah; Mohammadi Najafabadi, Mojtaba; Paktinat Mehdiabadi, Saeid; Safarzadeh, Batool; Zeinali, Maryam; Abbrescia, Marcello; Barbone, Lucia; Calabria, Cesare; Colaleo, Anna; Creanza, Donato; De Filippis, Nicola; De Palma, Mauro; Dimitrov, Anton; Fedele, Francesca; Fiore, Luigi; Iaselli, Giuseppe; Lusito, Letizia; Maggi, Giorgio; Maggi, Marcello; Manna, Norman; Marangelli, Bartolomeo; My, Salvatore; Nuzzo, Salvatore; Pacifico, Nicola; Pierro, Giuseppe Antonio; Pompili, Alexis; Pugliese, Gabriella; Romano, Francesco; Roselli, Giuseppe; Selvaggi, Giovanna; Silvestris, Lucia; Trentadue, Raffaello; Tupputi, Salvatore; Zito, Giuseppe; Abbiendi, Giovanni; Benvenuti, Alberto; Bonacorsi, Daniele; Braibant-Giacomelli, Sylvie; Capiluppi, Paolo; Castro, Andrea; Cavallo, Francesca Romana; Cuffiani, Marco; Dallavalle, Gaetano-Marco; Fabbri, Fabrizio; Fanfani, Alessandra; Fasanella, Daniele; Giacomelli, Paolo; Giunta, Marina; Grandi, Claudio; Marcellini, Stefano; Meneghelli, Marco; Montanari, Alessandro; Navarria, Francesco; Odorici, Fabrizio; Perrotta, Andrea; Rossi, Antonio; Rovelli, Tiziano; Siroli, Gianni; Travaglini, Riccardo; Albergo, Sebastiano; Cappello, Gigi; Chiorboli, Massimiliano; Costa, Salvatore; Tricomi, Alessia; Tuve, Cristina; Barbagli, Giuseppe; Ciulli, Vitaliano; Civinini, Carlo; D'Alessandro, Raffaello; Focardi, Ettore; Frosali, Simone; Gallo, Elisabetta; Genta, Chiara; Lenzi, Piergiulio; Meschini, Marco; Paoletti, Simone; Sguazzoni, Giacomo; Tropiano, Antonio; Benussi, Luigi; 
Bianco, Stefano; Colafranceschi, Stefano; Fabbri, Franco; Piccolo, Davide; Fabbricatore, Pasquale; Musenich, Riccardo; Benaglia, Andrea; Cerati, Giuseppe Benedetto; De Guio, Federico; Di Matteo, Leonardo; Ghezzi, Alessio; Malberti, Martina; Malvezzi, Sandra; Martelli, Arabella; Massironi, Andrea; Menasce, Dario; Moroni, Luigi; Paganoni, Marco; Pedrini, Daniele; Ragazzi, Stefano; Redaelli, Nicola; Sala, Silvano; Tabarelli de Fatis, Tommaso; Tancini, Valentina; Buontempo, Salvatore; Carrillo Montoya, Camilo Andres; Cimmino, Anna; De Cosa, Annapaola; De Gruttola, Michele; Fabozzi, Francesco; Iorio, Alberto Orso Maria; Lista, Luca; Merola, Mario; Noli, Pasquale; Paolucci, Pierluigi; Azzi, Patrizia; Bacchetta, Nicola; Bellan, Paolo; Bisello, Dario; Branca, Antonio; Checchia, Paolo; De Mattia, Marco; Dorigo, Tommaso; Dosselli, Umberto; Fanzago, Federica; Gasparini, Fabrizio; Gasparini, Ugo; Giubilato, Piero; Gresele, Ambra; Lacaprara, Stefano; Lazzizzera, Ignazio; Margoni, Martino; Mazzucato, Mirco; Meneguzzo, Anna Teresa; Nespolo, Massimo; Perrozzi, Luca; Pozzobon, Nicola; Ronchese, Paolo; Simonetto, Franco; Torassa, Ezio; Tosi, Mia; Triossi, Andrea; Vanini, Sara; Zotto, Pierluigi; Zumerle, Gianni; Baesso, Paolo; Berzano, Umberto; Riccardi, Cristina; Torre, Paola; Vitulo, Paolo; Viviani, Claudio; Biasini, Maurizio; Bilei, Gian Mario; Caponeri, Benedetta; Fanò, Livio; Lariccia, Paolo; Lucaroni, Andrea; Mantovani, Giancarlo; Menichelli, Mauro; Nappi, Aniello; Santocchia, Attilio; Servoli, Leonello; Taroni, Silvia; Valdata, Marisa; Volpe, Roberta; Azzurri, Paolo; Bagliesi, Giuseppe; Bernardini, Jacopo; Boccali, Tommaso; Broccolo, Giuseppe; Castaldi, Rino; D'Agnolo, Raffaele Tito; Dell'Orso, Roberto; Fiori, Francesco; Foà, Lorenzo; Giassi, Alessandro; Kraan, Aafke; Ligabue, Franco; Lomtadze, Teimuraz; Martini, Luca; Messineo, Alberto; Palla, Fabrizio; Palmonari, Francesco; Sarkar, Subir; Segneri, Gabriele; Serban, Alin Titus; Spagnolo, Paolo; Tenchini, Roberto; Tonelli, 
Guido; Venturi, Andrea; Verdini, Piero Giorgio; Barone, Luciano; Cavallari, Francesca; Del Re, Daniele; Di Marco, Emanuele; Diemoz, Marcella; Franci, Daniele; Grassi, Marco; Longo, Egidio; Organtini, Giovanni; Palma, Alessandro; Pandolfi, Francesco; Paramatti, Riccardo; Rahatlou, Shahram; Amapane, Nicola; Arcidiacono, Roberta; Argiro, Stefano; Arneodo, Michele; Biino, Cristina; Botta, Cristina; Cartiglia, Nicolo; Castello, Roberto; Costa, Marco; Demaria, Natale; Graziano, Alberto; Mariotti, Chiara; Marone, Matteo; Maselli, Silvia; Migliore, Ernesto; Mila, Giorgia; Monaco, Vincenzo; Musich, Marco; Obertino, Maria Margherita; Pastrone, Nadia; Pelliccioni, Mario; Romero, Alessandra; Ruspa, Marta; Sacchi, Roberto; Sola, Valentina; Solano, Ada; Staiano, Amedeo; Trocino, Daniele; Vilela Pereira, Antonio; Ambroglini, Filippo; Belforte, Stefano; Cossutti, Fabio; Della Ricca, Giuseppe; Gobbo, Benigno; Montanino, Damiana; Penzo, Aldo; Heo, Seong Gu; Chang, Sunghyun; Chung, Jin Hyuk; Kim, Dong Hee; Kim, Gui Nyun; Kim, Ji Eun; Kong, Dae Jung; Park, Hyangkyu; Son, Dohhee; Son, Dong-Chul; Kim, Jaeho; Kim, Jae Yool; Song, Sanghyeon; Choi, Suyong; Hong, Byung-Sik; Jo, Mihee; Kim, Hyunchul; Kim, Ji Hyun; Kim, Tae Jeong; Lee, Kyong Sei; Moon, Dong Ho; Park, Sung Keun; Rhee, Han-Bum; Seo, Eunsung; Shin, Seungsu; Sim, Kwang Souk; Choi, Minkyoo; Kang, Seokon; Kim, Hyunyong; Park, Chawon; Park, Inkyu; Park, Sangnam; Ryu, Geonmo; Choi, Young-Il; Choi, Young Kyu; Goh, Junghwan; Lee, Jongseok; Lee, Sungeun; Seo, Hyunkwan; Yu, Intae; Bilinskas, Mykolas Jurgis; Grigelionis, Ignas; Janulis, Mindaugas; Martisiute, Dalia; Petrov, Pavel; Sabonis, Tomas; Castilla Valdez, Heriberto; De La Cruz Burelo, Eduard; Lopez-Fernandez, Ricardo; Sánchez Hernández, Alberto; Villasenor-Cendejas, Luis Manuel; Carrillo Moreno, Salvador; Vazquez Valencia, Fabiola; Salazar Ibarguen, Humberto Antonio; Casimiro Linares, Edgar; Morelos Pineda, Antonio; Reyes-Santos, Marco A.; Allfrey, Philip; Krofcheck, David; Tam, 
Jason; Butler, Philip H.; Doesburg, Robert; Silverwood, Hamish; Ahmad, Muhammad; Ahmed, Ijaz; Asghar, Muhammad Irfan; Hoorani, Hafeez R.; Khan, Wajid Ali; Khurshid, Taimoor; Qazi, Shamona; Cwiok, Mikolaj; Dominik, Wojciech; Doroba, Krzysztof; Kalinowski, Artur; Konecki, Marcin; Krolikowski, Jan; Frueboes, Tomasz; Gokieli, Ryszard; Górski, Maciej; Kazana, Malgorzata; Nawrocki, Krzysztof; Szleper, Michal; Wrochna, Grzegorz; Zalewski, Piotr; Almeida, Nuno; David Tinoco Mendes, Andre; Faccioli, Pietro; Ferreira Parracho, Pedro Guilherme; Gallinaro, Michele; Sá Martins, Pedro; Musella, Pasquale; Nayak, Aruna; Ribeiro, Pedro Quinaz; Seixas, Joao; Silva, Pedro; Varela, Joao; Wöhri, Hermine Katharina; Belotelov, Ivan; Bunin, Pavel; Finger, Miroslav; Finger Jr., Michael; Golutvin, Igor; Kamenev, Alexey; Karjavin, Vladimir; Kozlov, Guennady; Lanev, Alexander; Moisenz, Petr; Palichik, Vladimir; Perelygin, Victor; Shmatov, Sergey; Smirnov, Vitaly; Volodko, Anton; Zarubin, Anatoli; Bondar, Nikolai; Golovtsov, Victor; Ivanov, Yury; Kim, Victor; Levchenko, Petr; Murzin, Victor; Oreshkin, Vadim; Smirnov, Igor; Sulimov, Valentin; Uvarov, Lev; Vavilov, Sergey; Vorobyev, Alexey; Andreev, Yuri; Gninenko, Sergei; Golubev, Nikolai; Kirsanov, Mikhail; Krasnikov, Nikolai; Matveev, Viktor; Pashenkov, Anatoli; Toropin, Alexander; Troitsky, Sergey; Epshteyn, Vladimir; Gavrilov, Vladimir; Kaftanov, Vitali; Kossov, Mikhail; Krokhotin, Andrey; Lychkovskaya, Natalia; Safronov, Grigory; Semenov, Sergey; Shreyber, Irina; Stolin, Viatcheslav; Vlasov, Evgueni; Zhokin, Alexander; Boos, Edouard; Dubinin, Mikhail; Dudko, Lev; Ershov, Alexander; Gribushin, Andrey; Kodolova, Olga; Lokhtin, Igor; Obraztsov, Stepan; Petrushanko, Sergey; Sarycheva, Ludmila; Savrin, Viktor; Snigirev, Alexander; Andreev, Vladimir; Azarkin, Maksim; Dremin, Igor; Kirakosyan, Martin; Rusakov, Sergey V.; Vinogradov, Alexey; Azhgirey, Igor; Bitioukov, Sergei; Grishin, Viatcheslav; Kachanov, Vassili; Konstantinov, Dmitri; Korablev, 
Andrey; Krychkine, Victor; Petrov, Vladimir; Ryutin, Roman; Slabospitsky, Sergey; Sobol, Andrei; Tourtchanovitch, Leonid; Troshin, Sergey; Tyurin, Nikolay; Uzunian, Andrey; Volkov, Alexey; Adzic, Petar; Djordjevic, Milos; Krpic, Dragomir; Milosevic, Jovan; Aguilar-Benitez, Manuel; Alcaraz Maestre, Juan; Arce, Pedro; Battilana, Carlo; Calvo, Enrique; Cepeda, Maria; Cerrada, Marcos; Colino, Nicanor; De La Cruz, Begona; Diez Pardos, Carmen; Fernandez Bedoya, Cristina; Fernández Ramos, Juan Pablo; Ferrando, Antonio; Flix, Jose; Fouz, Maria Cruz; Garcia-Abia, Pablo; Gonzalez Lopez, Oscar; Goy Lopez, Silvia; Hernandez, Jose M.; Josa, Maria Isabel; Merino, Gonzalo; Puerta Pelayo, Jesus; Redondo, Ignacio; Romero, Luciano; Santaolalla, Javier; Willmott, Carlos; Albajar, Carmen; Codispoti, Giuseppe; de Trocóniz, Jorge F; Cuevas, Javier; Fernandez Menendez, Javier; Folgueras, Santiago; Gonzalez Caballero, Isidro; Lloret Iglesias, Lara; Vizan Garcia, Jesus Manuel; Brochero Cifuentes, Javier Andres; Cabrillo, Iban Jose; Calderon, Alicia; Chamizo Llatas, Maria; Chuang, Shan-Huei; Duarte Campderros, Jordi; Felcini, Marta; Fernandez, Marcos; Gomez, Gervasio; Gonzalez Sanchez, Javier; Gonzalez Suarez, Rebeca; Jorda, Clara; Lobelle Pardo, Patricia; Lopez Virto, Amparo; Marco, Jesus; Marco, Rafael; Martinez Rivero, Celso; Matorras, Francisco; Piedra Gomez, Jonatan; Rodrigo, Teresa; Ruiz Jimeno, Alberto; Scodellaro, Luca; Sobron Sanudo, Mar; Vila, Ivan; Vilar Cortabitarte, Rocio; Abbaneo, Duccio; Auffray, Etiennette; Auzinger, Georg; Baillon, Paul; Ball, Austin; Barney, David; Bell, Alan James; Benedetti, Daniele; Bernet, Colin; Bialas, Wojciech; Bloch, Philippe; Bocci, Andrea; Bolognesi, Sara; Breuker, Horst; Brona, Grzegorz; Bunkowski, Karol; Camporesi, Tiziano; Cano, Eric; Cerminara, Gianluca; Christiansen, Tim; Coarasa Perez, Jose Antonio; Covarelli, Roberto; Curé, Benoît; D'Enterria, David; Dahms, Torsten; De Roeck, Albert; Duarte Ramos, Fernando; Elliott-Peisert, Anna; Funk, 
Wolfgang; Gaddi, Andrea; Gennai, Simone; Georgiou, Georgios; Gerwig, Hubert; Gigi, Dominique; Gill, Karl; Giordano, Domenico; Glege, Frank; Gomez-Reino Garrido, Robert; Gouzevitch, Maxime; Govoni, Pietro; Gowdy, Stephen; Guiducci, Luigi; Hansen, Magnus; Harvey, John; Hegeman, Jeroen; Hegner, Benedikt; Henderson, Conor; Hoffmann, Hans Falk; Honma, Alan; Innocente, Vincenzo; Janot, Patrick; Karavakis, Edward; Lecoq, Paul; Leonidopoulos, Christos; Lourenco, Carlos; Macpherson, Alick; Maki, Tuula; Malgeri, Luca; Mannelli, Marcello; Masetti, Lorenzo; Meijers, Frans; Mersi, Stefano; Meschi, Emilio; Moser, Roland; Mozer, Matthias Ulrich; Mulders, Martijn; Nesvold, Erik; Nguyen, Matthew; Orimoto, Toyoko; Orsini, Luciano; Perez, Emmanuelle; Petrilli, Achille; Pfeiffer, Andreas; Pierini, Maurizio; Pimiä, Martti; Polese, Giovanni; Racz, Attila; Rolandi, Gigi; Rommerskirchen, Tanja; Rovelli, Chiara; Rovere, Marco; Sakulin, Hannes; Schäfer, Christoph; Schwick, Christoph; Segoni, Ilaria; Sharma, Archana; Siegrist, Patrice; Simon, Michal; Sphicas, Paraskevas; Spiga, Daniele; Spiropulu, Maria; Stöckli, Fabian; Stoye, Markus; Tropea, Paola; Tsirou, Andromachi; Tsyganov, Andrey; Veres, Gabor Istvan; Vichoudis, Paschalis; Voutilainen, Mikko; Zeuner, Wolfram Dietrich; Bertl, Willi; Deiters, Konrad; Erdmann, Wolfram; Gabathuler, Kurt; Horisberger, Roland; Ingram, Quentin; Kaestli, Hans-Christian; König, Stefan; Kotlinski, Danek; Langenegger, Urs; Meier, Frank; Renker, Dieter; Rohe, Tilman; Sibille, Jennifer; Starodumov, Andrei; Bortignon, Pierluigi; Caminada, Lea; Chen, Zhiling; Cittolin, Sergio; Dissertori, Günther; Dittmar, Michael; Eugster, Jürg; Freudenreich, Klaus; Grab, Christoph; Hervé, Alain; Hintz, Wieland; Lecomte, Pierre; Lustermann, Werner; Marchica, Carmelo; Martinez Ruiz del Arbol, Pablo; Meridiani, Paolo; Milenovic, Predrag; Moortgat, Filip; Nef, Pascal; Nessi-Tedaldi, Francesca; Pape, Luc; Pauss, Felicitas; Punz, Thomas; Rizzi, Andrea; Ronga, Frederic Jean; Sala, 
Leonardo; Sanchez, Ann - Karin; Sawley, Marie-Christine; Stieger, Benjamin; Tauscher, Ludwig; Thea, Alessandro; Theofilatos, Konstantinos; Treille, Daniel; Urscheler, Christina; Wallny, Rainer; Weber, Matthias; Wehrli, Lukas; Weng, Joanna; Aguiló, Ernest; Amsler, Claude; Chiochia, Vincenzo; De Visscher, Simon; Favaro, Carlotta; Ivova Rikova, Mirena; Millan Mejias, Barbara; Regenfus, Christian; Robmann, Peter; Schmidt, Alexander; Snoek, Hella; Wilke, Lotte; Chang, Yuan-Hann; Chen, Kuan-Hsin; Chen, Wan-Ting; Dutta, Suchandra; Go, Apollo; Kuo, Chia-Ming; Li, Syue-Wei; Lin, Willis; Liu, Ming-Hsiung; Liu, Zong-kai; Lu, Yun-Ju; Wu, Jing-Han; Yu, Shin-Shan; Bartalini, Paolo; Chang, Paoti; Chang, You-Hao; Chang, Yu-Wei; Chao, Yuan; Chen, Kai-Feng; Hou, George Wei-Shu; Hsiung, Yee; Kao, Kai-Yi; Lei, Yeong-Jyi; Lu, Rong-Shyang; Shiu, Jing-Ge; Tzeng, Yeng-Ming; Wang, Minzu; Adiguzel, Aytul; Bakirci, Mustafa Numan; Cerci, Salim; Dozen, Candan; Dumanoglu, Isa; Eskut, Eda; Girgis, Semiray; Gökbulut, Gül; Güler, Yalcin; Gurpinar, Emine; Hos, Ilknur; Kangal, Evrim Ersin; Karaman, Turker; Kayis Topaksu, Aysel; Nart, Alisah; Önengüt, Gülsen; Ozdemir, Kadri; Ozturk, Sertac; Polatöz, Ayse; Sogut, Kenan; Tali, Bayram; Topakli, Huseyin; Uzun, Dilber; Vergili, Latife Nukhet; Vergili, Mehmet; Zorbilmez, Caglar; Akin, Ilina Vasileva; Aliev, Takhmasib; Bilmis, Selcuk; Deniz, Muhammed; Gamsizkan, Halil; Guler, Ali Murat; Ocalan, Kadir; Ozpineci, Altug; Serin, Meltem; Sever, Ramazan; Surat, Ugur Emrah; Yildirim, Eda; Zeyrek, Mehmet; Deliomeroglu, Mehmet; Demir, Durmus; Gülmez, Erhan; Halu, Arda; Isildak, Bora; Kaya, Mithat; Kaya, Ozlem; Özbek, Melih; Ozkorucuklu, Suat; Sonmez, Nasuf; Levchuk, Leonid; Bell, Peter; Bostock, Francis; Brooke, James John; Cheng, Teh Lee; Clement, Emyr; Cussans, David; Frazier, Robert; Goldstein, Joel; Grimes, Mark; Hansen, Maria; Hartley, Dominic; Heath, Greg P.; Heath, Helen F.; Huckvale, Benedickt; Jackson, James; Kreczko, Lukasz; Metson, Simon; Newbold, Dave 
M.; Nirunpong, Kachanon; Poll, Anthony; Senkin, Sergey; Smith, Vincent J.; Ward, Simon; Basso, Lorenzo; Bell, Ken W.; Belyaev, Alexander; Brew, Christopher; Brown, Robert M.; Camanzi, Barbara; Cockerill, David J A; Coughlan, John A.; Harder, Kristian; Harper, Sam; Kennedy, Bruce W.; Olaiya, Emmanuel; Petyt, David; Radburn-Smith, Benjamin Charles; Shepherd-Themistocleous, Claire; Tomalin, Ian R.; Womersley, William John; Worm, Steven; Bainbridge, Robert; Ball, Gordon; Ballin, Jamie; Beuselinck, Raymond; Buchmuller, Oliver; Colling, David; Cripps, Nicholas; Cutajar, Michael; Davies, Gavin; Della Negra, Michel; Fulcher, Jonathan; Futyan, David; Guneratne Bryer, Arlo; Hall, Geoffrey; Hatherell, Zoe; Hays, Jonathan; Iles, Gregory; Karapostoli, Georgia; Lyons, Louis; Magnan, Anne-Marie; Marrouche, Jad; Nandi, Robin; Nash, Jordan; Nikitenko, Alexander; Papageorgiou, Anastasios; Pesaresi, Mark; Petridis, Konstantinos; Pioppi, Michele; Raymond, David Mark; Rompotis, Nikolaos; Rose, Andrew; Ryan, Matthew John; Seez, Christopher; Sharp, Peter; Sparrow, Alex; Tapper, Alexander; Tourneur, Stephane; Vazquez Acosta, Monica; Virdee, Tejinder; Wakefield, Stuart; Wardrope, David; Whyntie, Tom; Barrett, Matthew; Chadwick, Matthew; Cole, Joanne; Hobson, Peter R.; Khan, Akram; Kyberd, Paul; Leslie, Dawn; Martin, William; Reid, Ivan; Teodorescu, Liliana; Hatakeyama, Kenichi; Bose, Tulika; Carrera Jarrin, Edgar; Clough, Andrew; Fantasia, Cory; Heister, Arno; St. 
John, Jason; Lawson, Philip; Lazic, Dragoslav; Rohlf, James; Sperka, David; Sulak, Lawrence; Avetisyan, Aram; Bhattacharya, Saptaparna; Chou, John Paul; Cutts, David; Esen, Selda; Ferapontov, Alexey; Heintz, Ulrich; Jabeen, Shabnam; Kukartsev, Gennadiy; Landsberg, Greg; Narain, Meenakshi; Nguyen, Duong; Segala, Michael; Speer, Thomas; Tsang, Ka Vang; Borgia, Maria Assunta; Breedon, Richard; Calderon De La Barca Sanchez, Manuel; Cebra, Daniel; Chauhan, Sushil; Chertok, Maxwell; Conway, John; Cox, Peter Timothy; Dolen, James; Erbacher, Robin; Friis, Evan; Ko, Winston; Kopecky, Alexandra; Lander, Richard; Liu, Haidong; Maruyama, Sho; Miceli, Tia; Nikolic, Milan; Pellett, Dave; Robles, Jorge; Schwarz, Thomas; Searle, Matthew; Smith, John; Squires, Michael; Tripathi, Mani; Vasquez Sierra, Ricardo; Veelken, Christian; Andreev, Valeri; Arisaka, Katsushi; Cline, David; Cousins, Robert; Deisher, Amanda; Duris, Joseph; Erhan, Samim; Farrell, Chris; Hauser, Jay; Ignatenko, Mikhail; Jarvis, Chad; Plager, Charles; Rakness, Gregory; Schlein, Peter; Tucker, Jordan; Valuev, Vyacheslav; Babb, John; Clare, Robert; Ellison, John Anthony; Gary, J William; Giordano, Ferdinando; Hanson, Gail; Jeng, Geng-Yuan; Kao, Shih-Chuan; Liu, Feng; Liu, Hongliang; Luthra, Arun; Nguyen, Harold; Pasztor, Gabriella; Satpathy, Asish; Shen, Benjamin C.; Stringer, Robert; Sturdy, Jared; Sumowidagdo, Suharyo; Wilken, Rachel; Wimpenny, Stephen; Andrews, Warren; Branson, James G.; Dusinberre, Elizabeth; Evans, David; Golf, Frank; Holzner, André; Kelley, Ryan; Lebourgeois, Matthew; Letts, James; Mangano, Boris; Muelmenstaedt, Johannes; Padhi, Sanjay; Palmer, Christopher; Petrucciani, Giovanni; Pi, Haifeng; Pieri, Marco; Ranieri, Riccardo; Sani, Matteo; Sharma, Vivek; Simon, Sean; Tu, Yanjun; Vartak, Adish; Würthwein, Frank; Yagil, Avraham; Barge, Derek; Bellan, Riccardo; Campagnari, Claudio; D'Alfonso, Mariarosaria; Danielson, Thomas; Geffert, Paul; Incandela, Joe; Justus, Christopher; Kalavase, Puneeth; 
Koay, Sue Ann; Kovalskyi, Dmytro; Krutelyov, Vyacheslav; Lowette, Steven; Mccoll, Nickolas; Pavlunin, Viktor; Rebassoo, Finn; Ribnik, Jacob; Richman, Jeffrey; Rossin, Roberto; Stuart, David; To, Wing; Vlimant, Jean-Roch; Apresyan, Artur; Bornheim, Adolf; Bunn, Julian; Chen, Yi; Gataullin, Marat; Kcira, Dorian; Litvine, Vladimir; Ma, Yousi; Mott, Alexander; Newman, Harvey B.; Rogan, Christopher; Timciuc, Vladlen; Traczyk, Piotr; Veverka, Jan; Wilkinson, Richard; Yang, Yong; Zhu, Ren-Yuan; Akgun, Bora; Carroll, Ryan; Ferguson, Thomas; Iiyama, Yutaro; Jang, Dong Wook; Jun, Soon Yung; Liu, Yueh-Feng; Paulini, Manfred; Russ, James; Terentyev, Nikolay; Vogel, Helmut; Vorobiev, Igor; Cumalat, John Perry; Dinardo, Mauro Emanuele; Drell, Brian Robert; Edelmaier, Christopher; Ford, William T.; Heyburn, Bernadette; Luiggi Lopez, Eduardo; Nauenberg, Uriel; Smith, James; Stenson, Kevin; Ulmer, Keith; Wagner, Stephen Robert; Zang, Shi-Lei; Agostino, Lorenzo; Alexander, James; Chatterjee, Avishek; Das, Souvik; Eggert, Nicholas; Fields, Laura Johanna; Gibbons, Lawrence Kent; Heltsley, Brian; Hopkins, Walter; Khukhunaishvili, Aleko; Kreis, Benjamin; Kuznetsov, Valentin; Nicolas Kaufman, Gala; Patterson, Juliet Ritchie; Puigh, Darren; Riley, Daniel; Ryd, Anders; Shi, Xin; Sun, Werner; Teo, Wee Don; Thom, Julia; Thompson, Joshua; Vaughan, Jennifer; Weng, Yao; Winstrom, Lucas; Wittich, Peter; Biselli, Angela; Cirino, Guy; Winn, Dave; Abdullin, Salavat; Albrow, Michael; Anderson, Jacob; Apollinari, Giorgio; Atac, Muzaffer; Bakken, Jon Alan; Banerjee, Sunanda; Bauerdick, Lothar A T; Beretvas, Andrew; Berryhill, Jeffrey; Bhat, Pushpalatha C.; Bloch, Ingo; Borcherding, Frederick; Burkett, Kevin; Butler, Joel Nathan; Chetluru, Vasundhara; Cheung, Harry; Chlebana, Frank; Cihangir, Selcuk; Demarteau, Marcel; Eartly, David P.; Elvira, Victor Daniel; Fisk, Ian; Freeman, Jim; Gao, Yanyan; Gottschalk, Erik; Green, Dan; Gunthoti, Kranti; Gutsche, Oliver; Hahn, Alan; Hanlon, Jim; Harris, Robert 
M.; Hirschauer, James; Hooberman, Benjamin; James, Eric; Jensen, Hans; Johnson, Marvin; Joshi, Umesh; Khatiwada, Rakshya; Kilminster, Benjamin; Klima, Boaz; Kousouris, Konstantinos; Kunori, Shuichi; Kwan, Simon; Limon, Peter; Lipton, Ron; Lykken, Joseph; Maeshima, Kaori; Marraffino, John Michael; Mason, David; McBride, Patricia; McCauley, Thomas; Miao, Ting; Mishra, Kalanand; Mrenna, Stephen; Musienko, Yuri; Newman-Holmes, Catherine; O'Dell, Vivian; Popescu, Sorina; Pordes, Ruth; Prokofyev, Oleg; Saoulidou, Niki; Sexton-Kennedy, Elizabeth; Sharma, Seema; Soha, Aron; Spalding, William J.; Spiegel, Leonard; Tan, Ping; Taylor, Lucas; Tkaczyk, Slawek; Uplegger, Lorenzo; Vaandering, Eric Wayne; Vidal, Richard; Whitmore, Juliana; Wu, Weimin; Yang, Fan; Yumiceva, Francisco; Yun, Jae Chul; Acosta, Darin; Avery, Paul; Bourilkov, Dimitri; Chen, Mingshui; Di Giovanni, Gian Piero; Dobur, Didar; Drozdetskiy, Alexey; Field, Richard D.; Fisher, Matthew; Fu, Yu; Furic, Ivan-Kresimir; Gartner, Joseph; Goldberg, Sean; Kim, Bockjoo; Klimenko, Sergey; Konigsberg, Jacobo; Korytov, Andrey; Kropivnitskaya, Anna; Kypreos, Theodore; Matchev, Konstantin; Mitselmakher, Guenakh; Muniz, Lana; Pakhotin, Yuriy; Prescott, Craig; Remington, Ronald; Schmitt, Michael Houston; Scurlock, Bobby; Sellers, Paul; Skhirtladze, Nikoloz; Wang, Dayong; Yelton, John; Zakaria, Mohammed; Ceron, Cristobal; Gaultney, Vanessa; Kramer, Laird; Lebolo, Luis Miguel; Linn, Stephan; Markowitz, Pete; Martinez, German; Rodriguez, Jorge Luis; Adams, Todd; Askew, Andrew; Bandurin, Dmitry; Bochenek, Joseph; Chen, Jie; Diamond, Brendan; Gleyzer, Sergei V; Haas, Jeff; Hagopian, Sharon; Hagopian, Vasken; Jenkins, Merrill; Johnson, Kurtis F.; Prosper, Harrison; Sekmen, Sezen; Veeraraghavan, Venkatesh; Baarmand, Marc M.; Dorney, Brian; Guragain, Samir; Hohlmann, Marcus; Kalakhety, Himali; Ralich, Robert; Vodopiyanov, Igor; Adams, Mark Raymond; Anghel, Ioana Maria; Apanasevich, Leonard; Bai, Yuting; Bazterra, Victor Eduardo; Betts, 
Russell Richard; Callner, Jeremy; Cavanaugh, Richard; Dragoiu, Cosmin; Garcia-Solis, Edmundo Javier; Gerber, Cecilia Elena; Hofman, David Jonathan; Khalatyan, Samvel; Lacroix, Florent; O'Brien, Christine; Silvestre, Catherine; Smoron, Agata; Strom, Derek; Varelas, Nikos; Akgun, Ugur; Albayrak, Elif Asli; Bilki, Burak; Cankocak, Kerem; Clarida, Warren; Duru, Firdevs; Lae, Chung Khim; McCliment, Edward; Merlo, Jean-Pierre; Mermerkaya, Hamit; Mestvirishvili, Alexi; Moeller, Anthony; Nachtman, Jane; Newsom, Charles Ray; Norbeck, Edwin; Olson, Jonathan; Onel, Yasar; Ozok, Ferhat; Sen, Sercan; Wetzel, James; Yetkin, Taylan; Yi, Kai; Barnett, Bruce Arnold; Blumenfeld, Barry; Bonato, Alessio; Eskew, Christopher; Fehling, David; Giurgiu, Gavril; Gritsan, Andrei; Guo, Zijin; Hu, Guofan; Maksimovic, Petar; Rappoccio, Salvatore; Swartz, Morris; Tran, Nhan Viet; Whitbeck, Andrew; Baringer, Philip; Bean, Alice; Benelli, Gabriele; Grachov, Oleg; Murray, Michael; Noonan, Daniel; Radicci, Valeria; Sanders, Stephen; Wood, Jeffrey Scott; Zhukova, Victoria; Bolton, Tim; Chakaberia, Irakli; Ivanov, Andrew; Makouski, Mikhail; Maravin, Yurii; Shrestha, Shruti; Svintradze, Irakli; Wan, Zongru; Gronberg, Jeffrey; Lange, David; Wright, Douglas; Baden, Drew; Boutemeur, Madjid; Eno, Sarah Catherine; Ferencek, Dinko; Gomez, Jaime; Hadley, Nicholas John; Kellogg, Richard G.; Kirn, Malina; Lu, Ying; Mignerey, Alice; Rossato, Kenneth; Rumerio, Paolo; Santanastasio, Francesco; Skuja, Andris; Temple, Jeffrey; Tonjes, Marguerite; Tonwar, Suresh C.; Twedt, Elizabeth; Alver, Burak; Bauer, Gerry; Bendavid, Joshua; Busza, Wit; Butz, Erik; Cali, Ivan Amos; Chan, Matthew; Dutta, Valentina; Everaerts, Pieter; Gomez Ceballos, Guillelmo; Goncharov, Maxim; Hahn, Kristan Allan; Harris, Philip; Kim, Yongsun; Klute, Markus; Lee, Yen-Jie; Li, Wei; Loizides, Constantinos; Luckey, Paul David; Ma, Teng; Nahn, Steve; Paus, Christoph; Roland, Christof; Roland, Gunther; Rudolph, Matthew; Stephans, George; Sumorok, 
Konstanty; Sung, Kevin; Wenger, Edward Allen; Xie, Si; Yang, Mingming; Yilmaz, Yetkin; Yoon, Sungho; Zanetti, Marco; Cole, Perrie; Cooper, Seth; Cushman, Priscilla; Dahmes, Bryan; De Benedetti, Abraham; Dudero, Phillip Russell; Franzoni, Giovanni; Haupt, Jason; Klapoetke, Kevin; Kubota, Yuichi; Mans, Jeremy; Rekovic, Vladimir; Rusack, Roger; Sasseville, Michael; Singovsky, Alexander; Cremaldi, Lucien Marcus; Godang, Romulus; Kroeger, Rob; Perera, Lalith; Rahmat, Rahmat; Sanders, David A; Summers, Don; Bloom, Kenneth; Bose, Suvadeep; Butt, Jamila; Claes, Daniel R.; Dominguez, Aaron; Eads, Michael; Keller, Jason; Kelly, Tony; Kravchenko, Ilya; Lazo-Flores, Jose; Lundstedt, Carl; Malbouisson, Helena; Malik, Sudhir; Snow, Gregory R.; Baur, Ulrich; Godshalk, Andrew; Iashvili, Ia; Kharchilava, Avto; Kumar, Ashish; Smith, Kenneth; Alverson, George; Barberis, Emanuela; Baumgartel, Darin; Boeriu, Oana; Chasco, Matthew; Kaadze, Ketino; Reucroft, Steve; Swain, John; Wood, Darien; Zhang, Jinzhong; Anastassov, Anton; Kubik, Andrew; Odell, Nathaniel; Ofierzynski, Radoslaw Adrian; Pollack, Brian; Pozdnyakov, Andrey; Schmitt, Michael Henry; Stoynev, Stoyan; Velasco, Mayda; Won, Steven; Antonelli, Louis; Berry, Douglas; Hildreth, Michael; Jessop, Colin; Karmgard, Daniel John; Kolb, Jeff; Kolberg, Ted; Lannon, Kevin; Luo, Wuming; Lynch, Sean; Marinelli, Nancy; Morse, David Michael; Pearson, Tessa; Ruchti, Randy; Slaunwhite, Jason; Valls, Nil; Warchol, Jadwiga; Wayne, Mitchell; Ziegler, Jill; Bylsma, Ben; Durkin, Lloyd Stanley; Gu, Jianhui; Hill, Christopher; Killewald, Phillip; Kotov, Khristian; Ling, Ta-Yung; Rodenburg, Marissa; Williams, Grayson; Adam, Nadia; Berry, Edmund; Elmer, Peter; Gerbaudo, Davide; Halyo, Valerie; Hebda, Philip; Hunt, Adam; Jones, John; Laird, Edward; Lopes Pegna, David; Marlow, Daniel; Medvedeva, Tatiana; Mooney, Michael; Olsen, James; Piroué, Pierre; Quan, Xiaohang; Saka, Halil; Stickland, David; Tully, Christopher; Werner, Jeremy Scott; Zuranski, 
Andrzej; Acosta, Jhon Gabriel; Huang, Xing Tao; Lopez, Angel; Mendez, Hector; Oliveros, Sandra; Ramirez Vargas, Juan Eduardo; Zatserklyaniy, Andriy; Alagoz, Enver; Barnes, Virgil E.; Bolla, Gino; Borrello, Laura; Bortoletto, Daniela; Everett, Adam; Garfinkel, Arthur F.; Gecse, Zoltan; Gutay, Laszlo; Jones, Matthew; Koybasi, Ozhan; Laasanen, Alvin T.; Leonardo, Nuno; Liu, Chang; Maroussov, Vassili; Merkel, Petra; Miller, David Harry; Neumeister, Norbert; Potamianos, Karolos; Shipsey, Ian; Silvers, David; Svyatkovskiy, Alexey; Yoo, Hwi Dong; Zablocki, Jakub; Zheng, Yu; Jindal, Pratima; Parashar, Neeti; Boulahouache, Chaouki; Cuplov, Vesna; Ecklund, Karl Matthew; Geurts, Frank J M; Liu, Jinghua H.; Morales, Jafet; Padley, Brian Paul; Redjimi, Radia; Roberts, Jay; Zabel, James; Betchart, Burton; Bodek, Arie; Chung, Yeon Sei; de Barbaro, Pawel; Demina, Regina; Eshaq, Yossof; Flacher, Henning; Garcia-Bellido, Aran; Goldenzweig, Pablo; Gotra, Yury; Han, Jiyeon; Harel, Amnon; Miner, Daniel Carl; Orbaker, Douglas; Petrillo, Gianluca; Vishnevskiy, Dmitry; Zielinski, Marek; Bhatti, Anwar; Demortier, Luc; Goulianos, Konstantin; Lungu, Gheorghe; Mesropian, Christina; Yan, Ming; Atramentov, Oleksiy; Barker, Anthony; Duggan, Daniel; Gershtein, Yuri; Gray, Richard; Halkiadakis, Eva; Hidas, Dean; Hits, Dmitry; Lath, Amitabh; Panwalkar, Shruti; Patel, Rishi; Richards, Alan; Rose, Keith; Schnetzer, Steve; Somalwar, Sunil; Stone, Robert; Thomas, Scott; Cerizza, Giordano; Hollingsworth, Matthew; Spanier, Stefan; Yang, Zong-Chang; York, Andrew; Asaadi, Jonathan; Eusebi, Ricardo; Gilmore, Jason; Gurrola, Alfredo; Kamon, Teruki; Khotilovich, Vadim; Montalvo, Roy; Nguyen, Chi Nhan; Pivarski, James; Safonov, Alexei; Sengupta, Sinjini; Tatarinov, Aysen; Toback, David; Weinberger, Michael; Akchurin, Nural; Bardak, Cemile; Damgov, Jordan; Jeong, Chiyoung; Kovitanggoon, Kittikul; Lee, Sung Won; Mane, Poonam; Roh, Youn; Sill, Alan; Volobouev, Igor; Wigmans, Richard; Yazgan, Efe; Appelt, Eric; 
Brownson, Eric; Engh, Daniel; Florez, Carlos; Gabella, William; Johns, Willard; Kurt, Pelin; Maguire, Charles; Melo, Andrew; Sheldon, Paul; Velkovska, Julia; Arenton, Michael Wayne; Balazs, Michael; Boutle, Sarah; Buehler, Marc; Conetti, Sergio; Cox, Bradley; Francis, Brian; Hirosky, Robert; Ledovskoy, Alexander; Lin, Chuanzhe; Neu, Christopher; Yohay, Rachel; Gollapinni, Sowjanya; Harr, Robert; Karchin, Paul Edmund; Mattson, Mark; Milstène, Caroline; Sakharov, Alexandre; Anderson, Michael; Bachtis, Michail; Bellinger, James Nugent; Carlsmith, Duncan; Dasu, Sridhara; Efron, Jonathan; Gray, Lindsey; Grogg, Kira Suzanne; Grothe, Monika; Hall-Wilton, Richard; Herndon, Matthew; Klabbers, Pamela; Klukas, Jeffrey; Lanaro, Armando; Lazaridis, Christos; Leonard, Jessica; Lomidze, David; Loveless, Richard; Mohapatra, Ajit; Parker, William; Reeder, Don; Ross, Ian; Savin, Alexander; Smith, Wesley H.; Swanson, Joshua; Weinberg, Marc

    2011-01-01

    The production of J/psi mesons is studied in pp collisions at √s = 7 TeV with the CMS experiment at the LHC. The measurement is based on a dimuon sample corresponding to an integrated luminosity of 314 inverse nanobarns. The J/psi differential cross section is determined, as a function of the J/psi transverse momentum, in three rapidity ranges. A fit to the decay length distribution is used to separate the prompt from the non-prompt (b hadron to J/psi) component. Integrated over J/psi transverse momentum from 6.5 to 30 GeV/c and over rapidity in the range |y| < 2.4, the measured cross sections, times the dimuon decay branching fraction, are 70.9 ± 2.1 (stat.) ± 3.0 (syst.) ± 7.8 (luminosity) nb for prompt J/psi mesons assuming unpolarized production and 26.0 ± 1.4 (stat.) ± 1.6 (syst.) ± 2.9 (luminosity) nb for J/psi mesons from b-hadron decays.

  20. Prompt gamma-ray imaging for small animals

    Science.gov (United States)

    Xu, Libai

    Small animal imaging is recognized as a powerful discovery tool for small animal modeling of human diseases, providing important clues toward a complete understanding of disease mechanisms and helping researchers develop and test new treatments. Current small animal imaging techniques include positron emission tomography (PET), single photon emission computed tomography (SPECT), computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound (US). A new imaging modality called prompt gamma-ray imaging (PGI) has been identified and investigated, primarily by Monte Carlo simulation; currently it is suggested for use on small animals. This new technique could greatly enhance and extend the present capabilities of PET and SPECT imaging from ingested radioisotopes to the imaging of selected non-radioactive elements, such as Gd, Cd, Hg, and B, and has great potential for use in neutron cancer therapy to monitor the neutron distribution and the neutron-capture agent distribution. The approach consists of irradiating small animals in the thermal neutron beam of a nuclear reactor to produce prompt gamma rays from the elements in the sample via the radiative capture (n, gamma) reaction. These prompt gamma rays are emitted at energies characteristic of each element, and they are also produced in characteristic coincident chains. After these prompt gamma rays are measured by a surrounding spectrometry array, the distribution of each element of interest in the sample is reconstructed from the mapping of each detected signature gamma ray by either electronic or mechanical collimation. In addition, the transmitted neutrons from the beam can be used simultaneously for very sensitive anatomical imaging, which provides the registration for the elemental distributions obtained from PGI. 
The primary approach is to use Monte Carlo simulation methods either with the specific purpose code CEARCPG, developed at NC State University or with the general purpose

  1. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low-complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block sparsity, and block-level RDO is performed by modeling the block reconstruction peak signal-to-noise ratio (PSNR) as a quadratic function of the quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data and then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget-constrained CS video transmission system.
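The block-level step described in this abstract lends itself to a short sketch. The following is an illustrative reconstruction under assumptions, not the paper's code: PSNR is fit as a quadratic in quantization bit-depth from a handful of training points (the data below are invented), and the stationary point of the fitted curve gives the candidate optimal bit-depth.

```python
import numpy as np

# Model block PSNR as a quadratic in bit-depth b,
#   psnr(b) = a*b**2 + b_coef*b + c   (a < 0 for a concave fit),
# then take the stationary point d(psnr)/db = 0 => b* = -b_coef / (2*a).

def fit_quadratic_psnr(bit_depths, psnrs):
    """Least-squares fit of PSNR as a quadratic function of bit-depth."""
    a, b_coef, c = np.polyfit(bit_depths, psnrs, deg=2)
    return a, b_coef, c

def optimal_bit_depth(a, b_coef, lo=1, hi=12):
    """Stationary point of the quadratic model, clipped to a valid range."""
    b_star = -b_coef / (2.0 * a)
    return int(round(min(max(b_star, lo), hi)))

# Invented training data: diminishing returns in reconstruction PSNR
# as bit-depth grows, peaking near 8 bits.
depths = np.array([2, 4, 6, 8, 10])
psnrs = np.array([22.0, 30.0, 34.5, 36.0, 35.5])

a, b_coef, c = fit_quadratic_psnr(depths, psnrs)
print(optimal_bit_depth(a, b_coef))  # 8 for this toy data
```

In the paper's scheme this per-block optimum would then be reconciled with the frame-level bit-budget allocation; here the fit alone is shown.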

  2. A Bilingual Child Learns Social Communication Skills through Video Modeling: A Single Case Study in a Norwegian School Setting

    Directory of Open Access Journals (Sweden)

    Meral Özerk

    2015-09-01

    Full Text Available Video modeling is one of the recognized methods used in the training and teaching of children with Autism Spectrum Disorders (ASD). The model's theoretical base stems from Albert Bandura's (1977; 1986) social learning theory, in which he asserts that children can learn many skills and behaviors observationally through modeling. One can assume that by observing others, a child with ASD can construct an idea of how new behaviors are performed, and on later occasions this mentally and visually constructed information will serve as a guide for his/her way of behaving. There are two types of methods for model learning: 1) In Vivo Modeling and 2) Video Modeling. These can be used a) to teach children with ASD skills that are not yet in their behavioral repertoire and/or b) to improve the children's emerging behaviors or skills. In the case of linguistic minority children at any stage of their bilingual development, it has been presumed that some of their behaviors can be interpreted as attitude- or culture-related actions. This approach, however, can sometimes delay referral, diagnosis, and intervention. In our project, we used Video Modeling and achieved positive results with regard to teaching social communication skills and target behavior to an eleven-year-old bilingual boy with ASD. Our study also reveals that through Video Modeling, children with ASD can learn desirable behavioral skills as by-products. Video Modeling can also contribute positively to the social inclusion of bilingual children with ASD in school settings. In other words, bilingual children with ASD can transfer the social communication skills and targeted behaviors they learn through their second language at school to a first-language milieu.

  3. Video Modeling: A Visually Based Intervention for Children with Autism Spectrum Disorder

    Science.gov (United States)

    Ganz, Jennifer B.; Earles-Vollrath, Theresa L.; Cook, Katherine E.

    2011-01-01

    Visually based interventions such as video modeling have been demonstrated to be effective with students with autism spectrum disorder (ASD). This approach has wide utility, is appropriate for use with students of a range of ages and abilities, promotes independent functioning, and can be used to address numerous learner objectives, including…

  4. Using Video Models to Teach Students with Disabilities to Play the Wii

    Science.gov (United States)

    Sherrow, Lauren A.; Spriggs, Amy D.; Knight, Victoria F.

    2016-01-01

    This study investigated effects of video modeling (VM) when teaching recreation and leisure skills to three high school students with moderate intellectual disabilities and autism spectrum disorder. Results, evaluated via a multiple probe across participants design, indicated that VM was effective for teaching all students to play the Wii.…

  5. The effects of video modeling in teaching functional living skills to persons with ASD: A meta-analysis of single-case studies.

    Science.gov (United States)

    Hong, Ee Rea; Ganz, Jennifer B; Mason, Rose; Morin, Kristi; Davis, John L; Ninci, Jennifer; Neely, Leslie C; Boles, Margot B; Gilliland, Whitney D

    2016-10-01

    Many individuals with autism spectrum disorders (ASD) show deficits in functional living skills, leading to low independence, limited community involvement, and poor quality of life. With development of mobile devices, utilizing video modeling has become more feasible for educators to promote functional living skills of individuals with ASD. This article aims to review the single-case experimental literature and aggregate results across studies involving the use of video modeling to improve functional living skills of individuals with ASD. The authors extracted data from single-case experimental studies and evaluated them using the Tau-U effect size measure. Effects were also differentiated by categories of potential moderators and other variables, including age of participants, concomitant diagnoses, types of video modeling, and outcome measures. Results indicate that video modeling interventions are overall moderately effective with this population and dependent measures. While significant differences were not found between categories of moderators and other variables, effects were found to be at least moderate for most of them. It is apparent that more single-case experiments are needed in this area, particularly with preschool and secondary-school aged participants, participants with ASD-only and those with high-functioning ASD, and for video modeling interventions addressing community access skills. Copyright © 2016 Elsevier Ltd. All rights reserved.
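The Tau-U family used in this meta-analysis is built from simple pairwise phase comparisons. As a hedged illustration (not the authors' analysis code), the basic Tau between a baseline phase and a treatment phase can be computed as follows; the single-case data are invented:

```python
from itertools import product

# Basic Tau (the non-overlap core that Tau-U extends): for every
# baseline/treatment pair, score +1 if the treatment point improves on
# the baseline point, -1 if it is worse, 0 for a tie, then average
# over all n_A * n_B pairs. Tau-U proper additionally corrects for
# baseline trend, which is omitted here.

def tau_ab(baseline, treatment):
    pairs = list(product(baseline, treatment))
    score = sum((b > a) - (b < a) for a, b in pairs)
    return score / len(pairs)

# Invented data: percent of task steps completed independently.
baseline = [10, 20, 15]          # phase A
treatment = [40, 55, 60, 70]     # phase B, after a video-modeling package

print(tau_ab(baseline, treatment))  # 1.0 -> complete non-overlap
```

A value of 1.0 means every treatment observation exceeds every baseline observation; values near 0 indicate heavy overlap between phases.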

  6. VideoSET: Video Summary Evaluation through Text

    OpenAIRE

    Yeung, Serena; Fathi, Alireza; Fei-Fei, Li

    2014-01-01

    In this paper we present VideoSET, a method for Video Summary Evaluation through Text that can evaluate how well a video summary is able to retain the semantic information contained in its original video. We observe that semantics is most easily expressed in words, and develop a text-based approach for the evaluation. Given a video summary, a text representation of the video summary is first generated, and an NLP-based metric is then used to measure its semantic distance to ground-truth text ...
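To make the text-based evaluation idea concrete, here is a deliberately simplified stand-in for the pipeline the abstract describes. The paper uses an NLP-based semantic distance; this sketch substitutes a plain unigram-overlap F1 between a summary's text representation and the ground-truth text, and all strings below are invented:

```python
from collections import Counter

# Toy text-based summary score: bag-of-words overlap between the text
# description of a video summary and the ground-truth text. This is an
# assumption-laden simplification, not VideoSET's actual metric.

def unigram_f1(summary_text, reference_text):
    s = Counter(summary_text.lower().split())
    r = Counter(reference_text.lower().split())
    overlap = sum((s & r).values())  # multiset intersection of tokens
    if overlap == 0:
        return 0.0
    precision = overlap / sum(s.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

ref = "a person opens the fridge takes milk and pours a glass"
good = "person takes milk and pours a glass"
bad = "a dog runs in the park"
print(unigram_f1(good, ref) > unigram_f1(bad, ref))  # True
```

The point of the design is that semantic fidelity is judged in the text domain, so any text-similarity metric can be slotted into the comparison step.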

  7. Prompt single muon production by protons on iron

    International Nuclear Information System (INIS)

    Bodek, A.; Breedon, R.; Coleman, R.N.

    1981-01-01

    A new experiment has been performed at Fermilab to measure the hadronic production of prompt single muons. A preliminary analysis of a sample of the data indicates approximately equal production of prompt single μ⁺'s and μ⁻'s in 350 GeV p-Fe interactions. The observed momentum distributions of prompt single μ⁺'s and μ⁻'s can satisfactorily be fit by the hypothesis of central production of D mesons with a cross section of 16 ± 4 μb/nucleon.

  8. Training Self-Regulated Learning Skills with Video Modeling Examples: Do Task-Selection Skills Transfer?

    Science.gov (United States)

    Raaijmakers, Steven F.; Baars, Martine; Schaap, Lydia; Paas, Fred; van Merriënboer, Jeroen; van Gog, Tamara

    2018-01-01

    Self-assessment and task-selection skills are crucial in self-regulated learning situations in which students can choose their own tasks. Prior research suggested that training with video modeling examples, in which another person (the model) demonstrates and explains the cyclical process of problem-solving task performance, self-assessment, and…

  9. Video Waterscrambling: Towards a Video Protection Scheme Based on the Disturbance of Motion Vectors

    Science.gov (United States)

    Bodo, Yann; Laurent, Nathalie; Laurent, Christophe; Dugelay, Jean-Luc

    2004-12-01

    With the popularity of high-bandwidth modems and peer-to-peer networks, the contents of videos must be highly protected from piracy. Traditionally, the models utilized to protect this kind of content are scrambling and watermarking. While the former protects the content against eavesdropping (a priori protection), the latter aims at providing a protection against illegal mass distribution (a posteriori protection). Today, researchers agree that both models must be used conjointly to reach a sufficient level of security. However, scrambling works generally by encryption resulting in an unintelligible content for the end-user. At the moment, some applications (such as e-commerce) may require a slight degradation of content so that the user has an idea of the content before buying it. In this paper, we propose a new video protection model, called waterscrambling, whose aim is to give such a quality degradation-based security model. This model works in the compressed domain and disturbs the motion vectors, degrading the video quality. It also allows embedding of a classical invisible watermark enabling protection against mass distribution. In fact, our model can be seen as an intermediary solution to scrambling and watermarking.
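    The waterscrambling idea of key-driven disturbance of motion vectors can be sketched as a toy reversible perturbation: a key-seeded PRNG generates offsets that are added to each (dx, dy) motion vector, and the same key regenerates those offsets so an authorized decoder can subtract them. This is an illustration of the principle only, not the authors' actual compressed-domain scheme:

    ```python
    import random

    def _offsets(n, key, max_shift):
        """Regenerate the same pseudorandom offset list from the key."""
        rng = random.Random(key)
        return [(rng.randint(-max_shift, max_shift),
                 rng.randint(-max_shift, max_shift)) for _ in range(n)]

    def scramble(vectors, key, max_shift=3):
        """Degrade quality by shifting each (dx, dy) motion vector."""
        offs = _offsets(len(vectors), key, max_shift)
        return [(dx + ox, dy + oy) for (dx, dy), (ox, oy) in zip(vectors, offs)]

    def unscramble(vectors, key, max_shift=3):
        """With the key, remove the offsets and restore the vectors."""
        offs = _offsets(len(vectors), key, max_shift)
        return [(dx - ox, dy - oy) for (dx, dy), (ox, oy) in zip(vectors, offs)]

    mv = [(1, 2), (-3, 0), (5, 5)]            # hypothetical motion vectors
    print(unscramble(scramble(mv, "key"), "key") == mv)  # → True
    ```

    Bounding the offsets keeps the video intelligible but degraded, matching the paper's goal of a preview-quality protection rather than full encryption; a real implementation would also have to keep the perturbed bitstream standard-compliant.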

  10. Video Waterscrambling: Towards a Video Protection Scheme Based on the Disturbance of Motion Vectors

    Directory of Open Access Journals (Sweden)

    Yann Bodo

    2004-10-01

    Full Text Available With the popularity of high-bandwidth modems and peer-to-peer networks, the contents of videos must be highly protected from piracy. Traditionally, the models utilized to protect this kind of content are scrambling and watermarking. While the former protects the content against eavesdropping (a priori protection), the latter aims at providing a protection against illegal mass distribution (a posteriori protection). Today, researchers agree that both models must be used conjointly to reach a sufficient level of security. However, scrambling works generally by encryption resulting in an unintelligible content for the end-user. At the moment, some applications (such as e-commerce) may require a slight degradation of content so that the user has an idea of the content before buying it. In this paper, we propose a new video protection model, called waterscrambling, whose aim is to give such a quality degradation-based security model. This model works in the compressed domain and disturbs the motion vectors, degrading the video quality. It also allows embedding of a classical invisible watermark enabling protection against mass distribution. In fact, our model can be seen as an intermediary solution to scrambling and watermarking.

  11. Measurement of the differential cross-sections of inclusive, prompt and non-prompt $J/\\psi$ production in proton-proton collisions at $\\sqrt{s}$ = 7 TeV

    CERN Document Server

    Aad, Georges; Abdallah, Jalal; Abdelalim, Ahmed Ali; Abdesselam, Abdelouahab; Abdinov, Ovsat; et al. (The ATLAS Collaboration)
Meade, Andrew; Mechnich, Joerg; Mechtel, Markus; Medinnis, Mike; Meera-Lebbai, Razzak; Meguro, Tatsuma; Mehdiyev, Rashid; Mehlhase, Sascha; Mehta, Andrew; Meier, Karlheinz; Meinhardt, Jens; Meirose, Bernhard; Melachrinos, Constantinos; Mellado Garcia, Bruce Rafael; Mendoza Navas, Luis; Meng, Zhaoxia; Mengarelli, Alberto; Menke, Sven; Menot, Claude; Meoni, Evelin; Mercurio, Kevin Michael; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meuser, Stefan; Meyer, Carsten; Meyer, Jean-Pierre; Meyer, Jochen; Meyer, Joerg; Meyer, Thomas Christian; Meyer, W.Thomas; Miao, Jiayuan; Michal, Sebastien; Micu, Liliana; Middleton, Robin; Miele, Paola; Migas, Sylwia; Mijovic, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikulec, Bettina; Mikuz, Marko; Miller, David; Miller, Robert; Mills, Bill; Mills, Corrinne; Milov, Alexander; Milstead, David; Milstein, Dmitry; Minaenko, Andrey; Minano, Mercedes; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, Mikhail; Ming, Yao; Mir, Lluisa-Maria; Mirabelli, Giovanni; Miralles Verge, Lluis; Misiejuk, Andrzej; Mitrevski, Jovan; Mitrofanov, Gennady; Mitsou, Vasiliki A.; Mitsui, Shingo; Miyagawa, Paul; Miyazaki, Kazuki; Mjornmark, Jan-Ulf; Moa, Torbjoern; Mockett, Paul; Moed, Shulamit; Moeller, Victoria; Monig, Klaus; Moser, Nicolas; Mohapatra, Soumya; Mohn, Bjarte; Mohr, Wolfgang; Mohrdieck-Mock, Susanne; Moisseev, Artemy; Moles-Valls, Regina; Molina-Perez, Jorge; Moneta, Lorenzo; Monk, James; Monnier, Emmanuel; Montesano, Simone; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Moorhead, Gareth; Mora Herrera, Clemencia; Moraes, Arthur; Morais, Antonio; Morange, Nicolas; Morello, Gianfranco; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morii, Masahiro; Morin, Jerome; Morita, Youhei; Morley, Anthony Keith; Mornacchi, Giuseppe; Morone, Maria-Christina; Morozov, Sergey; Morris, John; Moser, Hans-Guenther; Mosidze, Maia; Moss, Josh; Mount, 
Richard; Mountricha, Eleni; Mouraviev, Sergei; Moyse, Edward; Mudrinic, Mihajlo; Mueller, Felix; Mueller, James; Mueller, Klemens; Muller, Thomas; Muenstermann, Daniel; Muijs, Sandra; Muir, Alex; Munwes, Yonathan; Murakami, Koichi; Murray, Bill; Mussche, Ido; Musto, Elisa; Myagkov, Alexey; Myska, Miroslav; Nadal, Jordi; Nagai, Koichi; Nagano, Kunihiro; Nagasaka, Yasushi; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakano, Itsuo; Nanava, Gizo; Napier, Austin; Nash, Michael; Nation, Nigel; Nattermann, Till; Naumann, Thomas; Navarro, Gabriela; Neal, Homer; Nebot, Eduardo; Nechaeva, Polina; Negri, Andrea; Negri, Guido; Nektarijevic, Snezana; Nelson, Andrew; Nelson, Silke; Nelson, Timothy Knight; Nemecek, Stanislav; Nemethy, Peter; Nepomuceno, Andre Asevedo; Nessi, Marzio; Nesterov, Stanislav; Neubauer, Mark; Neusiedl, Andrea; Neves, Ricardo; Nevski, Pavel; Newman, Paul; Nickerson, Richard; Nicolaidou, Rosy; Nicolas, Ludovic; Nicquevert, Bertrand; Niedercorn, Francois; Nielsen, Jason; Niinikoski, Tapio; Nikiforov, Andriy; Nikolaenko, Vladimir; Nikolaev, Kirill; Nikolic-Audit, Irena; Nikolopoulos, Konstantinos; Nilsen, Henrik; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nishiyama, Tomonori; Nisius, Richard; Nodulman, Lawrence; Nomachi, Masaharu; Nomidis, Ioannis; Nomoto, Hiroshi; Nordberg, Markus; Nordkvist, Bjoern; Norton, Peter; Novakova, Jana; Nozaki, Mitsuaki; Nozicka, Miroslav; Nozka, Libor; Nugent, Ian Michael; Nuncio-Quiroz, Adriana-Elizabeth; Nunes Hanninger, Guilherme; Nunnemann, Thomas; Nurse, Emily; Nyman, Tommi; O'Brien, Brendan Joseph; O'Neale, Steve; O'Neil, Dugan; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Ocariz, Jose; Ochi, Atsuhiko; Oda, Susumu; Odaka, Shigeru; Odier, Jerome; Ogren, Harold; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohshima, Takayoshi; Ohshita, Hidetoshi; Ohska, Tokio Kenneth; Ohsugi, Takashi; Okada, Shogo; Okawa, Hideki; Okumura, Yasuyuki; Okuyama, Toyonobu; Olcese, Marco; Olchevski, Alexander; Oliveira, Miguel Alfonso; 
Oliveira Damazio, Denis; Oliver Garcia, Elena; Olivito, Dominick; Olszewski, Andrzej; Olszowska, Jolanta; Omachi, Chihiro; Onofre, Antonio; Onyisi, Peter; Oram, Christopher; Ordonez, Gustavo; Oreglia, Mark; Orellana, Frederik; Oren, Yona; Orestano, Domizia; Orlov, Iliya; Oropeza Barrera, Cristina; Orr, Robert; Ortega, Eduardo; Osculati, Bianca; Ospanov, Rustem; Osuna, Carlos; Otero y Garzon, Gustavo; Ottersbach, John; Ouchrif, Mohamed; Ould-Saada, Farid; Ouraou, Ahmimed; Ouyang, Qun; Owen, Mark; Owen, Simon; Oyarzun, Alejandro; Oye, Ola; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pacheco Pages, Andres; Padilla Aranda, Cristobal; Paganis, Efstathios; Paige, Frank; Pajchel, Katarina; Palestini, Sandro; Pallin, Dominique; Palma, Alberto; Palmer, Jody; Pan, Yibin; Panagiotopoulou, Evgenia; Panes, Boris; Panikashvili, Natalia; Panitkin, Sergey; Pantea, Dan; Panuskova, Monika; Paolone, Vittorio; Paoloni, Alessandro; Papadelis, Aras; Papadopoulou, Theodora; Paramonov, Alexander; Park, Woochun; Parker, Andy; Parodi, Fabrizio; Parsons, John; Parzefall, Ulrich; Pasqualucci, Enrico; Passeri, Antonio; Pastore, Fernanda; Pastore, Francesca; Pasztor, Gabriella; Pataraia, Sophio; Patel, Nikhul; Pater, Joleen; Patricelli, Sergio; Pauly, Thilo; Pecsy, Martin; Pedraza Morales, Maria Isabel; Peleganchuk, Sergey; Peng, Haiping; Pengo, Ruggero; Penson, Alexander; Penwell, John; Perantoni, Marcelo; Perez, Kerstin; Cavalcanti, Tiago Perez; Perez Codina, Estel; Perez Garcia-Estan, Maria Teresa; Perez Reale, Valeria; Peric, Ivan; Perini, Laura; Pernegger, Heinz; Perrino, Roberto; Perrodo, Pascal; Persembe, Seda; Peshekhonov, Vladimir; Peters, Onne; Petersen, Brian; Petersen, Jorgen; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petrolo, Emilio; Petrucci, Fabrizio; Petschull, Dennis; Petteni, Michele; Pezoa, Raquel; Phan, Anna; Phillips, Alan; Phillips, Peter William; Piacquadio, Giacinto; Piccaro, Elisa; Piccinini, Maurizio; Pickford, Andrew; Piec, Sebastian Marcin; 
Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pina, Joao Antonio; Pinamonti, Michele; Pinder, Alex; Pinfold, James; Ping, Jialun; Pinto, Belmiro; Pirotte, Olivier; Pizio, Caterina; Placakyte, Ringaile; Plamondon, Mathieu; Plano, Will; Pleier, Marc-Andre; Pleskach, Anatoly; Poblaguev, Andrei; Poddar, Sahill; Podlyski, Fabrice; Poggioli, Luc; Poghosyan, Tatevik; Pohl, Martin; Polci, Francesco; Polesello, Giacomo; Policicchio, Antonio; Polini, Alessandro; Poll, James; Polychronakos, Venetios; Pomarede, Daniel Marc; Pomeroy, Daniel; Pommes, Kathy; Pontecorvo, Ludovico; Pope, Bernard; Popeneciu, Gabriel Alexandru; Popovic, Dragan; Poppleton, Alan; Bueso, Xavier Portell; Porter, Robert; Posch, Christoph; Pospelov, Guennady; Pospisil, Stanislav; Potrap, Igor; Potter, Christina; Potter, Christopher; Poulard, Gilbert; Poveda, Joaquin; Prabhu, Robindra; Pralavorio, Pascal; Prasad, Srivas; Pravahan, Rishiraj; Prell, Soeren; Pretzl, Klaus Peter; Pribyl, Lukas; Price, Darren; Price, Lawrence; Price, Michael John; Prichard, Paul; Prieur, Damien; Primavera, Margherita; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Prudent, Xavier; Przysiezniak, Helenka; Psoroulas, Serena; Ptacek, Elizabeth; Purdham, John; Purohit, Milind; Puzo, Patrick; Pylypchenko, Yuriy; Qian, Jianming; Qian, Zuxuan; Qin, Zhonghua; Quadt, Arnulf; Quarrie, David; Quayle, William; Quinonez, Fernando; Raas, Marcel; Radescu, Voica; Radics, Balint; Rador, Tonguc; Ragusa, Francesco; Rahal, Ghita; Rahimi, Amir; Rahm, David; Rajagopalan, Srinivasan; Rajek, Silke; Rammensee, Michael; Rammes, Marcus; Ramstedt, Magnus; Randrianarivony, Koloina; Ratoff, Peter; Rauscher, Felix; Rauter, Emanuel; Raymond, Michel; Read, Alexander Lincoln; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reeves, Kendall; Reichold, Armin; Reinherz-Aronis, Erez; Reinsch, Andreas; Reisinger, Ingo; Reljic, Dusan; Rembser, Christoph; Ren, Zhongliang; Renaud, Adrien; Renkel, Peter; Rensch, 
Bertram; Rescigno, Marco; Resconi, Silvia; Resende, Bernardo; Reznicek, Pavel; Rezvani, Reyhaneh; Richter, Robert; Richter-Was, Elzbieta; Ridel, Melissa; Rieke, Stefan; Rijpstra, Manouk; Rijssenbeek, Michael; Rimoldi, Adele; Rinaldi, Lorenzo; Rios, Ryan Randy; Riu, Imma; Rivoltella, Giancesare; Rizatdinova, Flera; Rizvi, Eram; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robinson, Mary; Robson, Aidan; Rocha de Lima, Jose Guilherme; Roda, Chiara; Roda Dos Santos, Denis; Rodier, Stephane; Rodriguez, Diego; Rodriguez Garcia, Yohany; Roe, Adam; Roe, Shaun; Rohne, Ole; Rojo, Victoria; Rolli, Simona; Romaniouk, Anatoli; Romanov, Victor; Romeo, Gaston; Romero Maltrana, Diego; Roos, Lydia; Ros, Eduardo; Rosati, Stefano; Rose, Matthew; Rosenbaum, Gabriel; Rosenberg, Eli; Rosendahl, Peter Lundgaard; Rosselet, Laurent; Rossetti, Valerio; Rossi, Elvira; Rossi, Leonardo Paolo; Rossi, Lucio; Rotaru, Marina; Roth, Itamar; Rothberg, Joseph; Rottlander, Iris; Rousseau, David; Royon, Christophe; Rozanov, Alexander; Rozen, Yoram; Ruan, Xifeng; Rubinskiy, Igor; Ruckert, Benjamin; Ruckstuhl, Nicole; Rud, Viacheslav; Rudolph, Gerald; Ruhr, Frederik; Ruggieri, Federico; Ruiz-Martinez, Aranzazu; Rulikowska-Zarebska, Elzbieta; Rumiantsev, Viktor; Rumyantsev, Leonid; Runge, Kay; Runolfsson, Ogmundur; Rurikova, Zuzana; Rusakovich, Nikolai; Rust, Dave; Rutherfoord, John; Ruwiedel, Christoph; Ruzicka, Pavel; Ryabov, Yury; Ryadovikov, Vasily; Ryan, Patrick; Rybar, Martin; Rybkin, Grigori; Ryder, Nick; Rzaeva, Sevda; Saavedra, Aldo; Sadeh, Iftach; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Sakamoto, Hiroshi; Salamanna, Giuseppe; Salamon, Andrea; Saleem, Muhammad; Salihagic, Denis; Salnikov, Andrei; Salt, Jose; Salvachua Ferrando, Belen; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sampsonidis, Dimitrios; Samset, Bjorn Hallvard; Sandaker, Heidi; Sander, Heinz Georg; Sanders, Michiel; Sandhoff, 
Marisa; Sandhu, Pawan; Sandoval, Tanya; Sandstroem, Rikard; Sandvoss, Stephan; Sankey, Dave; Sansoni, Andrea; Santamarina Rios, Cibran; Santoni, Claudio; Santonico, Rinaldo; Santos, Helena; Saraiva, Joao; Sarangi, Tapas; Sarkisyan-Grinbaum, Edward; Sarri, Francesca; Sartisohn, Georg; Sasaki, Osamu; Sasaki, Takashi; Sasao, Noboru; Satsounkevitch, Igor; Sauvage, Gilles; Sauvan, Jean-Baptiste; Savard, Pierre; Savinov, Vladimir; Savu, Dan Octavian; Savva, Panagiota; Sawyer, Lee; Saxon, David; Says, Louis-Pierre; Sbarra, Carla; Sbrizzi, Antonio; Scallon, Olivia; Scannicchio, Diana; Schaarschmidt, Jana; Schacht, Peter; Schafer, Uli; Schaepe, Steffen; Schaetzel, Sebastian; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R. Dean; Schamov, Andrey; Scharf, Veit; Schegelsky, Valery; Scheirich, Daniel; Scherzer, Max; Schiavi, Carlo; Schieck, Jochen; Schioppa, Marco; Schlenker, Stefan; Schlereth, James; Schmidt, Evelyn; Schmidt, Michael; Schmieden, Kristof; Schmitt, Christian; Schmitz, Martin; Schoning, Andre; Schott, Matthias; Schouten, Doug; Schovancova, Jaroslava; Schram, Malachi; Schroeder, Christian; Schroer, Nicolai; Schuh, Silvia; Schuler, Georges; Schultes, Joachim; Schultz-Coulon, Hans-Christian; Schulz, Holger; Schumacher, Jan; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwanenberger, Christian; Schwartzman, Ariel; Schwemling, Philippe; Schwienhorst, Reinhard; Schwierz, Rainer; Schwindling, Jerome; Scott, Bill; Searcy, Jacob; Sedykh, Evgeny; Segura, Ester; Seidel, Sally; Seiden, Abraham; Seifert, Frank; Seixas, Jose; Sekhniaidze, Givi; Seliverstov, Dmitry; Sellden, Bjoern; Sellers, Graham; Seman, Michal; Semprini-Cesari, Nicola; Serfon, Cedric; Serin, Laurent; Seuster, Rolf; Severini, Horst; Sevior, Martin; Sfyrla, Anna; Shabalina, Elizaveta; Shamim, Mansoora; Shan, Lianyou; Shank, James; Shao, Qi Tao; Shapiro, Marjorie; Shatalov, Pavel; Shaver, Leif; Shaw, Christian; Shaw, Kate; Sherman, Daniel; Sherwood, Peter; Shibata, Akira; Shimizu, Shima; Shimojima, 
Makoto; Shin, Taeksu; Shmeleva, Alevtina; Shochet, Mel; Short, Daniel; Shupe, Michael; Sicho, Petr; Sidoti, Antonio; Siebel, Anca-Mirela; Siegert, Frank; Siegrist, James; Sijacki, Djordje; Silbert, Ohad; Silva, Jose; Silver, Yiftah; Silverstein, Daniel; Silverstein, Samuel; Simak, Vladislav; Simard, Olivier; Simic, Ljiljana; Simion, Stefan; Simmons, Brinick; Simonyan, Margar; Sinervo, Pekka; Sinev, Nikolai; Sipica, Valentin; Siragusa, Giovanni; Sisakyan, Alexei; Sivoklokov, Serguei; Sjolin, Jorgen; Sjursen, Therese; Skinnari, Louise Anastasia; Skovpen, Kirill; Skubic, Patrick; Skvorodnev, Nikolai; Slater, Mark; Slavicek, Tomas; Sliwa, Krzysztof; Sloan, Terrence; Sloper, John erik; Smakhtin, Vladimir; Smirnov, Sergei; Smirnova, Lidia; Smirnova, Oxana; Smith, Ben Campbell; Smith, Douglas; Smith, Kenway; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snow, Steve; Snow, Joel; Snuverink, Jochem; Snyder, Scott; Soares, Mara; Sobie, Randall; Sodomka, Jaromir; Soffer, Abner; Solans, Carlos; Solar, Michael; Solc, Jaroslav; Soldatov, Evgeny; Soldevila, Urmila; Solfaroli Camillocci, Elena; Solodkov, Alexander; Solovyanov, Oleg; Sondericker, John; Soni, Nitesh; Sopko, Vit; Sopko, Bruno; Sorbi, Massimo; Sosebee, Mark; Soukharev, Andrey; Spagnolo, Stefania; Spano, Francesco; Spighi, Roberto; Spigo, Giancarlo; Spila, Federico; Spiriti, Eleuterio; Spiwoks, Ralf; Spousta, Martin; Spreitzer, Teresa; Spurlock, Barry; St. 
Denis, Richard Dante; Stahl, Thorsten; Stahlman, Jonathan; Stamen, Rainer; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stapnes, Steinar; Starchenko, Evgeny; Stark, Jan; Staroba, Pavel; Starovoitov, Pavel; Staude, Arnold; Stavina, Pavel; Stavropoulos, Georgios; Steele, Genevieve; Steinbach, Peter; Steinberg, Peter; Stekl, Ivan; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stevenson, Kyle; Stewart, Graeme; Stillings, Jan Andre; Stockmanns, Tobias; Stockton, Mark; Stoerig, Kathrin; Stoicea, Gabriel; Stonjek, Stefan; Strachota, Pavel; Stradling, Alden; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strandlie, Are; Strang, Michael; Strauss, Emanuel; Strauss, Michael; Strizenec, Pavol; Strohmer, Raimund; Strom, David; Strong, John; Stroynowski, Ryszard; Strube, Jan; Stugu, Bjarne; Stumer, Iuliu; Stupak, John; Sturm, Philipp; Soh, Dart-yin; Su, Dong; Subramania, Halasya Siva; Sugaya, Yorihito; Sugimoto, Takuya; Suhr, Chad; Suita, Koichi; Suk, Michal; Sulin, Vladimir; Sultansoy, Saleh; Sumida, Toshi; Sun, Xiaohu; Sundermann, Jan Erik; Suruliz, Kerim; Sushkov, Serge; Susinno, Giancarlo; Sutton, Mark; Suzuki, Yu; Sviridov, Yuri; Swedish, Stephen; Sykora, Ivan; Sykora, Tomas; Szeless, Balazs; Sanchez, Javier; Ta, Duc; Tackmann, Kerstin; Taffard, Anyes; Tafirout, Reda; Taga, Adrian; Taiblum, Nimrod; Takahashi, Yuta; Takai, Helio; Takashima, Ryuichi; Takeda, Hiroshi; Takeshita, Tohru; Talby, Mossadek; Talyshev, Alexey; Tamsett, Matthew; Tanaka, Junichi; Tanaka, Reisaburo; Tanaka, Satoshi; Tanaka, Shuji; Tanaka, Yoshito; Tani, Kazutoshi; Tannoury, Nancy; Tappern, Geoffrey; Tapprogge, Stefan; Tardif, Dominique; Tarem, Shlomit; Tarrade, Fabien; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tassi, Enrico; Tatarkhanov, Mous; Taylor, Christopher; Taylor, Frank; Taylor, Geoffrey; Taylor, Wendy; Teixeira Dias Castanheira, Matilde; Teixeira-Dias, Pedro; Temming, Kim Katrin; Ten Kate, Herman; Teng, Ping-Kun; Terada, Susumu; 
Terashi, Koji; Terron, Juan; Terwort, Mark; Testa, Marianna; Teuscher, Richard; Tevlin, Christopher; Thadome, Jocelyn; Therhaag, Jan; Theveneaux-Pelzer, Timothee; Thioye, Moustapha; Thoma, Sascha; Thomas, Juergen; Thompson, Emily; Thompson, Paul; Thompson, Peter; Thompson, Stan; Thomson, Evelyn; Thomson, Mark; Thun, Rudolf; Tic, Tomas; Tikhomirov, Vladimir; Tikhonov, Yury; Timmermans, Charles; Tipton, Paul; Viegas, Florbela De Jes Tique Aires; Tisserant, Sylvain; Tobias, Jurgen; Toczek, Barbara; Todorov, Theodore; Todorova-Nova, Sharka; Toggerson, Brokk; Tojo, Junji; Tokar, Stanislav; Tokunaga, Kaoru; Tokushuku, Katsuo; Tollefson, Kirsten; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tonazzo, Alessandra; Tong, Guoliang; Tonoyan, Arshak; Topfel, Cyril; Topilin, Nikolai; Torchiani, Ingo; Torrence, Eric; Torro Pastor, Emma; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Traynor, Daniel; Trefzger, Thomas; Treis, Johannes; Tremblet, Louis; Tricoli, Alesandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Trinh, Thi Nguyet; Tripiana, Martin; Triplett, Nathan; Trischuk, William; Trivedi, Arjun; Trocme, Benjamin; Troncon, Clara; Trottier-McDonald, Michel; Trzupek, Adam; Tsarouchas, Charilaos; Tseng, Jeffrey; Tsiakiris, Menelaos; Tsiareshka, Pavel; Tsionou, Dimitra; Tsipolitis, Georgios; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsung, Jieh-Wen; Tsuno, Soshi; Tsybychev, Dmitri; Tua, Alan; Tuggle, Joseph; Turala, Michal; Turecek, Daniel; Turk Cakir, Ilkay; Turlay, Emmanuel; Turra, Ruggero; Tuts, Michael; Tykhonov, Andrii; Tylmad, Maja; Tyndel, Mike; Typaldos, Dimitrios; Tyrvainen, Harri; Tzanakos, George; Uchida, Kirika; Ueda, Ikuo; Ueno, Ryuichi; Ugland, Maren; Uhlenbrock, Mathias; Uhrmacher, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Underwood, David; Undrus, Alexander; Unel, Gokhan; Unno, Yoshinobu; Urbaniec, Dustin; Urkovsky, Evgeny; Urrejola, Pedro; Usai, Giulio; Uslenghi, Massimiliano; Vacavant, Laurent; Vacek, Vaclav; 
Vachon, Brigitte; Vahsen, Sven; Valderanis, Chrysostomos; Valenta, Jan; Valente, Paolo; Valentinetti, Sara; Valkar, Stefan; Valladolid Gallego, Eva; Vallecorsa, Sofia; Ferrer, Juan Antonio Valls; Van der Graaf, Harry; van der Kraaij, Erik; van der Leeuw, Robin; van der Poel, Egge; van der Ster, Daniel; Van Eijk, Bob; van Eldik, Niels; Van Gemmeren, Peter; van Kesteren, Zdenko; Van Vulpen, Ivo; Vandelli, Wainer; Vandoni, Giovanna; Vaniachine, Alexandre; Vankov, Peter; Vannucci, Francois; Varela Rodriguez, Fernando; Vari, Riccardo; Varnes, Erich; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vassilakopoulos, Vassilios; Vazeille, Francois; Vegni, Guido; Veillet, Jean-Jacques; Vellidis, Constantine; Veloso, Filipe; Veness, Raymond; Veneziano, Stefano; Ventura, Andrea; Ventura, Daniel; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Jos; Vest, Anja; Vetterli, Michel; Vichou, Irene; Vickey, Trevor; Viehhauser, Georg; Viel, Simon; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinek, Elisabeth; Vinogradov, Vladimir; Virchaux, Marc; Viret, Sebastien; Virzi, Joseph; Vitale, Antonio; Vitells, Ofer; Viti, Michele; Vivarelli, Iacopo; Vives Vaque, Francesc; Vlachos, Sotirios; Vlasak, Michal; Vlasov, Nikolai; Vogel, Adrian; Vokac, Petr; Volpi, Guido; Volpi, Matteo; Volpini, Giovanni; von der Schmitt, Hans; von Loeben, Joerg; von Radziewski, Holger; von Toerne, Eckhard; Vorobel, Vit; Vorobiev, Alexander; Vorwerk, Volker; Vos, Marcel; Voss, Rudiger; Voss, Thorsten Tobias; Vossebeld, Joost; Vovenko, Anatoly; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Anh, Tuan Vu; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Wolfgang; Wagner, Peter; Wahlen, Helmut; Wakabayashi, Jun; Walbersloh, Jorg; Walch, Shannon; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wall, Richard; Waller, Peter; Wang, Chiho; Wang, Haichen; Wang, Jike; Wang, Jin; Wang, Joshua C.; 
Wang, Rui; Wang, Song-Ming; Warburton, Andreas; Ward, Patricia; Warsinsky, Markus; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Anthony; Waugh, Ben; Weber, Jens; Weber, Marc; Weber, Michele; Weber, Pavel; Weidberg, Anthony; Weigell, Philipp; Weingarten, Jens; Weiser, Christian; Wellenstein, Hermann; Wells, Phillippa; Wen, Mei; Wenaus, Torre; Wendler, Shanti; Weng, Zhili; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Matthias; Werner, Per; Werth, Michael; Wessels, Martin; Whalen, Kathleen; Wheeler-Ellis, Sarah Jane; Whitaker, Scott; White, Andrew; White, Martin; White, Sebastian; Whitehead, Samuel Robert; Whiteson, Daniel; Whittington, Denver; Wicek, Francois; Wicke, Daniel; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wienemann, Peter; Wiglesworth, Craig; Wiik, Liv Antje Mari; Wijeratne, Peter Alexander; Wildauer, Andreas; Wildt, Martin Andre; Wilhelm, Ivan; Wilkens, Henric George; Will, Jonas Zacharias; Williams, Eric; Williams, Hugh; Willis, William; Willocq, Stephane; Wilson, John; Wilson, Michael Galante; Wilson, Alan; Wingerter-Seez, Isabelle; Winkelmann, Stefan; Winklmeier, Frank; Wittgen, Matthias; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wooden, Gemma; Wosiek, Barbara; Wotschack, Jorg; Woudstra, Martin; Wraight, Kenneth; Wright, Catherine; Wrona, Bozydar; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wulf, Evan; Wunstorf, Renate; Wynne, Benjamin; Xaplanteris, Leonidas; Xella, Stefania; Xie, Song; Xie, Yigang; Xu, Chao; Xu, Da; Xu, Guofa; Yabsley, Bruce; Yamada, Miho; Yamamoto, Akira; Yamamoto, Kyoko; Yamamoto, Shimpei; Yamamura, Taiki; Yamaoka, Jared; Yamazaki, Takayuki; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Un-Ki; Yang, Yi; Yang, Yi; Yang, Zhaoyu; Yanush, Serguei; Yao, Weiming; Yao, Yushu; Yasu, Yoshiji; Ybeles Smit, Gabriel Valentijn; Ye, Jingbo; Ye, Shuwei; Yilmaz, Metin; Yoosoofmiya, Reza; Yorita, Kohei; Yoshida, Riktura; Young, Charles; Youssef, Saul; Yu, Dantong; Yu, Jaehoon; Yu, Jie; Yuan, 
Li; Yurkewicz, Adam; Zaets, Vassilli; Zaidan, Remi; Zaitsev, Alexander; Zajacova, Zuzana; Zalite, Youris; Zanello, Lucia; Zarzhitsky, Pavel; Zaytsev, Alexander; Zeitnitz, Christian; Zeller, Michael; Zema, Pasquale Federico; Zemla, Andrzej; Zendler, Carolin; Zenin, Anton; Zenin, Oleg; Zenis, Tibor; Zenonos, Zenonas; Zenz, Seth; Zerwas, Dirk; Zevi Della Porta, Giovanni; Zhan, Zhichao; Zhang, Dongliang; Zhang, Huaqiao; Zhang, Jinlong; Zhang, Xueyao; Zhang, Zhiqing; Zhao, Long; Zhao, Tianchi; Zhao, Zhengguo; Zhemchugov, Alexey; Zheng, Shuchen; Zhong, Jiahang; Zhou, Bing; Zhou, Ning; Zhou, Yue; Zhu, Cheng Guang; Zhu, Hongbo; Zhu, Yingchun; Zhuang, Xuai; Zhuravlov, Vadym; Zieminska, Daria; Zilka, Branislav; Zimmermann, Robert; Zimmermann, Simone; Zimmermann, Stephanie; Ziolkowski, Michael; Zitoun, Robert; Zivkovic, Lidija; Zmouchko, Viatcheslav; Zobernig, Georg; Zoccoli, Antonio; Zolnierowski, Yves; Zsenei, Andras; zur Nedden, Martin; Zutshi, Vishnu; Zwalinski, Lukasz

    2011-01-01

    The inclusive $J/\\psi$ production cross-section and fraction of $J/\\psi$ mesons produced in B-hadron decays are measured in proton-proton collisions at $\\sqrt{s}$ = 7 TeV with the ATLAS detector at the LHC, as a function of the transverse momentum and rapidity of the $J/\\psi$, using 2.3 pb$^{-1}$ of integrated luminosity. The cross-section is measured from a minimum $p_T$ of 1 GeV to a maximum of 70 GeV and for rapidities within |y| < 2.4, giving the widest reach of any measurement of $J/\\psi$ production to date. The differential production cross-sections of prompt and non-prompt $J/\\psi$ are separately determined and are compared to Colour Singlet NNLO, Colour Evaporation Model, and FONLL predictions.
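    The prompt/non-prompt separation reported above can be summarized by two conventional definitions (a sketch in standard notation, not the paper's exact formulas; the symbol names are assumptions):

```latex
% Double-differential cross-section times dimuon branching fraction,
% extracted from the corrected J/psi yield N in a (pT, y) bin for
% integrated luminosity L:
\frac{\mathrm{d}^2\sigma(J/\psi)}{\mathrm{d}p_T\,\mathrm{d}y}\,
  \mathcal{B}(J/\psi \to \mu^+\mu^-)
  = \frac{N_{J/\psi}^{\mathrm{corr}}}{\mathcal{L}\,\Delta p_T\,\Delta y}

% Non-prompt fraction: the share of J/psi mesons coming from
% B-hadron decays, which fixes the prompt component:
f_B = \frac{\sigma(pp \to B + X \to J/\psi\,X')}
           {\sigma(pp \to J/\psi\,X)_{\mathrm{inclusive}}},
\qquad
\sigma_{\mathrm{prompt}} = (1 - f_B)\,\sigma_{\mathrm{inclusive}}
```

    Measuring $f_B$ (typically from the displacement of the dimuon vertex) in each $(p_T, y)$ bin is what allows the inclusive cross-section to be split into the prompt and non-prompt components compared against the theory predictions.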

  12. Teaching Generalized Imitation Skills to a Preschooler with Autism Using Video Modeling

    Science.gov (United States)

    Kleeberger, Vickie; Mirenda, Pat

    2010-01-01

    This study examined the effectiveness of video modeling to teach a preschooler with autism to imitate previously mastered and not mastered actions during song and toy play activities. A general case approach was used to examine the instructional universe of preschool songs and select exemplars that were most likely to facilitate generalization.…

  13. Anticipating students' reasoning and planning prompts in structured problem-solving lessons

    Science.gov (United States)

    Vale, Colleen; Widjaja, Wanty; Doig, Brian; Groves, Susie

    2018-02-01

    Structured problem-solving lessons are used to explore mathematical concepts such as pattern and relationships in early algebra, and are regularly used in Japanese Lesson Study research lessons. However, enactment of structured problem-solving lessons, which involves detailed planning, anticipation of student solutions and orchestration of whole-class discussion of solutions, is an ongoing challenge for many teachers. Moreover, primary teachers have limited experience in teaching early algebra or mathematical reasoning actions such as generalising. In this study, the critical factors of enacting the structured problem-solving lessons used in Japanese Lesson Study to elicit and develop primary students' capacity to generalise are explored. Teachers from three primary schools participated in two Japanese Lesson Study teams for this study. The lesson plans and video recordings of teaching and post-lesson discussion of the two research lessons, along with students' responses and learning, are compared to identify critical factors. The anticipation of students' reasoning, together with the preparation of supporting and challenging prompts, was critical for scaffolding students' capacity to grasp and communicate generality.

  14. Consideration of neutral beam prompt loss in the design of a tokamak helicon antenna

    International Nuclear Information System (INIS)

    Pace, D.C.; Van Zeeland, M.A.; Fishler, B.; Murphy, C.

    2016-01-01

    Highlights: • Neutral beam prompt losses place appreciable power on an in-vessel tokamak antenna. • Simulations predict prompt loss power and inform protective tile design. • Experiments confirm the validity of the prompt loss simulations. - Abstract: Neutral beam prompt losses (injected neutrals that ionize such that their first poloidal transit intersects with the wall) can put appreciable power on the outer wall of tokamaks, and this power may damage the wall or other internal components. These prompt losses are simulated for a configuration including a protruding helicon antenna installed in the DIII-D tokamak, and it is determined that 160 kW of power will impact the antenna during the injection of a particular neutral beam. Protective graphite tiles are designed in response to this modeling, and the wall shape of the installed antenna is precisely measured to improve the accuracy of these calculations. Initial experiments confirm that the antenna component temperature increases according to the amount of neutral beam energy injected into the plasma. In this case, only injection of beams aimed counter to the plasma current produces an appreciable power load on the outer wall, suggesting that the effect is of little concern for tokamaks featuring only co-current neutral beam injection. Incorporating neutral beam prompt loss considerations into the design of this in-vessel component serves to ensure that adequate protection or cooling is provided.
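    The reported scaling of antenna temperature with injected beam energy can be sanity-checked with a zeroth-order energy balance (a back-of-envelope sketch, not the paper's simulation; the tile mass, pulse length, and graphite heat capacity below are illustrative assumptions):

```python
# Adiabatic temperature-rise estimate for a protective graphite tile under a
# neutral-beam prompt-loss heat load. All losses (conduction, radiation) are
# neglected, so this is an upper bound for short pulses. Tile mass and pulse
# length are hypothetical values, not taken from the paper.

def adiabatic_temp_rise(power_w, pulse_s, mass_kg, c_p=710.0):
    """Temperature rise (K) if all deposited energy stays in the tile.

    c_p ~ 710 J/(kg*K) is a typical room-temperature specific heat
    for graphite; it grows substantially at higher temperatures.
    """
    energy_j = power_w * pulse_s          # total deposited energy
    return energy_j / (mass_kg * c_p)     # dT = E / (m * c_p)

# 160 kW prompt-loss power (the simulated value) spread over tiles
# totalling an assumed 2 kg, for an assumed 1 s beam pulse:
dT = adiabatic_temp_rise(160e3, 1.0, 2.0)
print(f"estimated adiabatic rise: {dT:.0f} K")   # ~113 K for these assumptions
```

    The linear dependence of the rise on `power_w * pulse_s` is exactly the "temperature increases according to the amount of neutral beam energy injected" behaviour the initial experiments observed.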

  15. Consideration of neutral beam prompt loss in the design of a tokamak helicon antenna

    Energy Technology Data Exchange (ETDEWEB)

    Pace, D.C., E-mail: pacedc@fusion.gat.com; Van Zeeland, M.A.; Fishler, B.; Murphy, C.

    2016-11-15

    Highlights: • Neutral beam prompt losses place appreciable power on an in-vessel tokamak antenna. • Simulations predict prompt loss power and inform protective tile design. • Experiments confirm the validity of the prompt loss simulations. - Abstract: Neutral beam prompt losses (injected neutrals that ionize such that their first poloidal transit intersects with the wall) can put appreciable power on the outer wall of tokamaks, and this power may damage the wall or other internal components. These prompt losses are simulated for a configuration including a protruding helicon antenna installed in the DIII-D tokamak, and it is determined that 160 kW of power will impact the antenna during the injection of a particular neutral beam. Protective graphite tiles are designed in response to this modeling, and the wall shape of the installed antenna is precisely measured to improve the accuracy of these calculations. Initial experiments confirm that the antenna component temperature increases according to the amount of neutral beam energy injected into the plasma. In this case, only injection of beams aimed counter to the plasma current produces an appreciable power load on the outer wall, suggesting that the effect is of little concern for tokamaks featuring only co-current neutral beam injection. Incorporating neutral beam prompt loss considerations into the design of this in-vessel component serves to ensure that adequate protection or cooling is provided.

  16. Deriving video content type from HEVC bitstream semantics

    Science.gov (United States)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio R.

    2014-05-01

    As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models consider only metrics derived from the network; QoE models, however, also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, as the basis for methods of classifying video content either on a continuous scale or into a set of discrete classes. QoE models can be divided into three broad categories: full-reference, reduced-reference, and no-reference models. Because the original video must be available at the client for comparison, full-reference metrics are of limited practical value in adaptive real-time video applications. Reduced-reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC-encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine the partitioning of coding units and of temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and of the prediction mode decisions made by the encoder to estimate spatial and temporal characteristics, respectively.
Since the video content type of a sequence is determined by using high level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can
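The weighted-average mapping described in this record can be sketched as follows. This is an illustrative reconstruction, not the authors' function: the data layout ((depth, area) pairs for coding units, (split-flag, area) pairs for prediction units) and the class thresholds are assumptions.

```python
def spatial_score(cu_blocks):
    """Area-weighted mean quadtree split depth (0..3); deeper splits
    indicate more spatial detail in the frame."""
    total_area = sum(area for _, area in cu_blocks)
    return sum(depth * area for depth, area in cu_blocks) / total_area

def temporal_score(pu_blocks):
    """Area-weighted fraction of prediction-unit area the encoder chose
    to split into smaller partitions -- a proxy for motion complexity."""
    total_area = sum(area for _, area in pu_blocks)
    return sum(split * area for split, area in pu_blocks) / total_area

def classify(cu_blocks, pu_blocks, s_thresh=1.5, t_thresh=0.5):
    """Map the two scores to one of four coarse content classes
    (hypothetical thresholds, for illustration only)."""
    s, t = spatial_score(cu_blocks), temporal_score(pu_blocks)
    if s >= s_thresh and t >= t_thresh:
        return "high spatial detail, high motion"
    if s >= s_thresh:
        return "high spatial detail, low motion"
    if t >= t_thresh:
        return "low spatial detail, high motion"
    return "low spatial detail, low motion"
```

Because both scores are parsed from bitstream syntax elements, such a classifier needs no pixel decoding, which is the point of the paper's no-reference approach.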

  17. Attention to the Model's Face When Learning from Video Modeling Examples in Adolescents with and without Autism Spectrum Disorder

    Science.gov (United States)

    van Wermeskerken, Margot; Grimmius, Bianca; van Gog, Tamara

    2018-01-01

    We investigated the effects of seeing the instructor's (i.e., the model's) face in video modeling examples on students' attention and their learning outcomes. Research with university students suggested that the model's face attracts students' attention away from what the model is doing, but this did not hamper learning. We aimed to investigate…

  18. It’s all a matter of perspective : Viewing first-person video modeling examples promotes learning of an assembly task

    NARCIS (Netherlands)

    Fiorella, Logan; van Gog, T.; Hoogerheide, V.; Mayer, Richard

    2017-01-01

    The present study tests whether presenting video modeling examples from the learner’s (first-person) perspective promotes learning of an assembly task, compared to presenting video examples from a third-person perspective. Across 2 experiments conducted in different labs, university students viewed

  19. Sending Safety Video over WiMAX in Vehicle Communications

    Directory of Open Access Journals (Sweden)

    Jun Steed Huang

    2013-10-01

    Full Text Available This paper reports on the design of an OPNET simulation platform to test the performance of sending real-time safety video over VANET (Vehicular Adhoc NETwork using the WiMAX technology. To provide a more realistic environment for streaming real-time video, a video model was created based on the study of video traffic traces captured from a realistic vehicular camera, and different design considerations were taken into account. A practical controller over real-time streaming protocol is implemented to control data traffic congestion for future road safety development. Our driving video model was then integrated with the WiMAX OPNET model along with a mobility model based on real road maps. Using this simulation platform, different mobility cases have been studied and the performance evaluated in terms of end-to-end delay, jitter and visual experience.

  20. Video Modeling of SBIRT for Alcohol Use Disorders Increases Student Empathy in Standardized Patient Encounters.

    Science.gov (United States)

    Crisafio, Anthony; Anderson, Victoria; Frank, Julia

    2018-04-01

    The purpose of this study was to assess the usefulness of adding video models of brief alcohol assessment and counseling to a standardized patient (SP) curriculum that covers and tests acquisition of this skill. The authors conducted a single-center, retrospective cohort study of third- and fourth-year medical students between 2013 and 2015. All students completed an SP encounter illustrating the diagnosis of alcohol use disorder, followed by an SP exam on the same topic. Beginning in August 2014, the authors supplemented the existing formative SP exercise on problem drinking with one of two 5-min videos demonstrating screening, brief intervention, and referral for treatment (SBIRT). P values and Z tests were performed to evaluate differences between students who did and did not see the video in knowledge and skills related to alcohol use disorders. One hundred ninety-four students were included in this analysis. Compared to controls, subjects did not differ in their ability to uncover and accurately characterize an alcohol problem during a standardized encounter (mean exam score 41.29 vs 40.93, subject vs control, p = 0.539). However, the SPs' rating of students' expressions of empathy was significantly higher for the group who saw the video (81.63 vs 69.79%). The authors had hypothesized that the videos would improve students' recognition and knowledge of alcohol-related conditions. However, feedback from the SPs produced the serendipitous finding that the communication skills demonstrated in the videos had a sustained effect in enhancing students' professional behavior.
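The Z tests on proportions reported in this record can be reproduced in outline with a standard pooled two-proportion z-test. The group sizes in the usage line are assumed (an even split of the 194 students), not taken from the paper.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z-test; returns (z statistic, two-sided p-value).
    The p-value uses the normal CDF expressed via math.erf."""
    x1, x2 = p1 * n1, p2 * n2            # successes in each group
    p_pool = (x1 + x2) / (n1 + n2)       # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustration with assumed equal groups of 97 students each:
z, p = two_proportion_z(0.8163, 97, 0.6979, 97)
```

With these assumed group sizes the empathy-rating difference lands near the conventional significance boundary, which is consistent with the study's report of a significant difference.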

  1. A method of intentional movement estimation of oblique small-UAV videos stabilized based on homography model

    Science.gov (United States)

    Guo, Shiyi; Mai, Ying; Zhao, Hongying; Gao, Pengqi

    2013-05-01

    The airborne video streams of small UAVs are commonly plagued by distracting jitter and shaking, disorienting rotations, noisy and distorted images, and other unwanted movements. These problems collectively make it very difficult for observers to obtain useful information from the video. Because of the small payload of small UAVs, improving image quality by means of electronic image stabilization is a priority. But when a small UAV makes a turn, its flight characteristics cause the video to become oblique, which poses considerable difficulties for electronic image stabilization. The homography model performs well for oblique image motion estimation but makes intentional motion estimation much harder. In this paper, we therefore focus on stabilizing video captured while small UAVs bank and turn. We assume the small UAV flies along an arc of fixed turning radius. Accordingly, after a series of experimental analyses of the flight characteristics and turning paths of small UAVs, we present a new method for estimating the intentional motion, in which the path of the frame centers is used to fit the video's moving track. Meanwhile, dynamic mosaicking of the image sequences is performed to compensate for the limited field of view. Finally, the proposed algorithm was implemented and validated on actual airborne videos. The results show that the proposed method effectively stabilizes oblique video from small UAVs.
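A common way to realize the fixed-radius arc fitting described in this record is an algebraic least-squares circle fit over the frame-center coordinates; the fitted arc then serves as the smooth, intentional camera path. The sketch below is a generic Kasa-style implementation of that idea, not the authors' code.

```python
import math

def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa method): returns (cx, cy, r).
    Fits  2*cx*x + 2*cy*y + d = x^2 + y^2  with d = r^2 - cx^2 - cy^2."""
    # Accumulate the 3x3 normal equations M @ [a, b, d] = v, a=2cx, b=2cy.
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        rhs = x * x + y * y
        for i in range(3):
            v[i] += row[i] * rhs
            for j in range(3):
                M[i][j] += row[i] * row[j]
    a, b, d = _solve3(M, v)
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, math.sqrt(d + cx * cx + cy * cy)

def _solve3(M, v):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (A[i][3] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x
```

Subtracting the fitted arc motion from the estimated homography motion would then leave the unwanted jitter to be compensated.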

  2. A No-Reference Modular Video Quality Prediction Model for H.265/HEVC and VP9 Codecs on a Mobile Device

    Directory of Open Access Journals (Sweden)

    Debajyoti Pal

    2017-01-01

    Full Text Available We propose a modular no-reference video quality prediction model for videos that are encoded with H.265/HEVC and VP9 codecs and viewed on mobile devices. The impairments which can affect video transmission are classified into two broad types depending upon which layer of the TCP/IP model they originated from. Impairments from the network layer are called the network QoS factors, while those from the application layer are called the application/payload QoS factors. Initially we treat the network and application QoS factors separately and find out the 1 : 1 relationship between the respective QoS factors and the corresponding perceived video quality or QoE. The mapping from the QoS to the QoE domain is based upon a decision variable that gives an optimal performance. Next, across each group we choose multiple QoS factors and find out the QoE for such multifactor impaired videos by using an additive, multiplicative, and regressive approach. We refer to these as the integrated network and application QoE, respectively. At the end, we use a multiple regression approach to combine the network and application QoE for building the final model. We also use an Artificial Neural Network approach for building the model and compare its performance with the regressive approach.
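The additive, multiplicative, and regressive integration steps described above can be sketched as follows. The function names, the 1-5 MOS scale, and all coefficients are illustrative assumptions, not the paper's fitted values.

```python
def additive_qoe(factor_scores, weights):
    """Weighted-sum integration of single-factor QoE scores (MOS scale)."""
    return sum(w * s for w, s in zip(weights, factor_scores)) / sum(weights)

def multiplicative_qoe(factor_scores, max_mos=5.0):
    """Multiplicative integration: each impairment scales down the
    quality remaining after the previous ones."""
    q = max_mos
    for s in factor_scores:
        q *= s / max_mos
    return q

def combined_qoe(network_qoe, application_qoe, b0=0.0, b1=0.5, b2=0.5):
    """Final regression step combining the integrated network-layer and
    application-layer QoE scores (hypothetical coefficients)."""
    return b0 + b1 * network_qoe + b2 * application_qoe
```

In the paper, the coefficients of the final step are obtained by multiple regression (or an artificial neural network) against subjective ratings; here they are fixed only to make the sketch runnable.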

  3. Segmentation of object-based video of gaze communication

    DEFF Research Database (Denmark)

    Aghito, Shankar Manuel; Stegmann, Mikkel Bille; Forchhammer, Søren

    2005-01-01

    Aspects of video communication based on gaze interaction are considered. The overall idea is to use gaze interaction to control video, e.g. for video conferencing. Towards this goal, animation of a facial mask is demonstrated. The animation is based on images using Active Appearance Models (AAM). Good-quality reproduction of (low-resolution) coded video of an animated facial mask at rates as low as 10-20 kbit/s using MPEG-4 object-based video is demonstrated.

  4. Reproducibility of prompts in computer-aided detection (CAD) of breast cancer

    International Nuclear Information System (INIS)

    Taylor, C.G.; Champness, J.; Reddy, M.; Taylor, P.; Potts, H.W.W.; Given-Wilson, R.

    2003-01-01

    AIM: We evaluated the reproducibility of prompts using the R2 ImageChecker M2000 computer-aided detection (CAD) system. MATERIALS AND METHODS: Forty selected two-view mammograms of women with breast cancer were digitized and analysed using the ImageChecker on 10 separate occasions. The mammograms were chosen to provide both straightforward and subtle signs of malignancy. Data analysed included mammographic abnormality, pathology, and whether the cancer was prompted or given an emphasized prompt. RESULTS: Correct prompts were generated on 86 of 100 occasions for screen-detected cancers. Reproducibility was lower in the other categories of more subtle cancers: 21% for cancers previously missed by CAD, a group that contained more grade 1 and small (<10 mm) tumours. Prompts for calcifications were more reproducible than those for masses (76% versus 53%), and these cancers were more likely to have an emphasized prompt. CONCLUSIONS: Probably the most important cause of variability in prompts is shifts in film position between sequential digitizations. Consequently, subtle lesions that are only just above the threshold for display may not be prompted on repeat scanning. However, users of CAD should be aware that even emphasized prompts are not consistently reproducible

  5. Categorizing Video Game Audio

    DEFF Research Database (Denmark)

    Westerberg, Andreas Rytter; Schoenau-Fog, Henrik

    2015-01-01

    This paper dives into the subject of video game audio and how it can be categorized in order to deliver a message to a player in the most precise way. A new categorization, with a new take on the diegetic spaces, can be used as a tool of inspiration for sound- and game-designers to rethink how they can use audio in video games. The conclusion of this study is that the current models' view of the diegetic spaces, used to categorize video game audio, is not fit to categorize all sounds. This can, however, possibly be changed through a rethinking of how the player interprets audio.

  6. An evaluation of in vivo desensitization and video modeling to increase compliance with dental procedures in persons with mental retardation.

    Science.gov (United States)

    Conyers, Carole; Miltenberger, Raymond G; Peterson, Blake; Gubin, Amber; Jurgens, Mandy; Selders, Andrew; Dickinson, Jessica; Barenz, Rebecca

    2004-01-01

    Fear of dental procedures deters many individuals with mental retardation from accepting dental treatment. This study was conducted to assess the effectiveness of two procedures, in vivo desensitization and video modeling, for increasing compliance with dental procedures in participants with severe or profound mental retardation. Desensitization increased compliance for all 5 participants, whereas video modeling increased compliance for only 1 of 3 participants.

  7. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first objective is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and in extracting a diverse set of representatives. The second is to use a capped l21-norm to model sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding not only characterizes the structure but also indicates the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
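The capped l21-norm term in the objective can be illustrated in isolation: each row's l2 norm is clipped at a cap, so an outlier row contributes at most the cap to the objective rather than dominating it. A minimal sketch (the cap value is an arbitrary assumption):

```python
import math

def capped_l21(rows, cap):
    """Capped l2,1 norm of a matrix given as a list of rows: the sum of
    row-wise l2 norms, each clipped at `cap`. Clipping bounds the
    influence of outlier rows on the selection objective."""
    return sum(min(math.sqrt(sum(v * v for v in row)), cap) for row in rows)
```

With cap set to infinity this reduces to the ordinary l2,1 norm, which is the standard row-sparsity regularizer in representative selection.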

  8. The Kinematic Learning Model using Video and Interfaces Analysis

    Science.gov (United States)

    Firdaus, T.; Setiawan, W.; Hamidah, I.

    2017-09-01

    Educators are now expected to keep their teaching connected to developments in technology. They often experience difficulties when explaining kinematics material, because kinematics is one of the topics that most often relates concepts to real life. Kinematics is a part of physics that explains the causes of motion of an object; therefore, thinking skills and analytical skills are needed to understand these phenomena. Technology can bridge the gap between the concepts and real life. A framework for a technology-based learning model has been developed using video and interface analysis of kinematics concepts. By using this learning model, learners will be better able to understand the concepts taught by the teacher. This learning model can improve creative thinking ability, analytical skills, and problem-solving skills on the concept of kinematics.

  9. Video-based rendering

    CERN Document Server

    Magnor, Marcus A

    2005-01-01

    Driven by consumer-market applications that enjoy steadily increasing economic importance, graphics hardware and rendering algorithms are a central focus of computer graphics research. Video-based rendering is an approach that aims to overcome the current bottleneck in the time-consuming modeling process and has applications in areas such as computer games, special effects, and interactive TV. This book offers an in-depth introduction to video-based rendering, a rapidly developing new interdisciplinary topic employing techniques from computer graphics, computer vision, and telecommunications engineering.

  10. Contribution to the study of prompt gamma-rays from fission

    International Nuclear Information System (INIS)

    Regnier, D.

    2013-01-01

    This PhD thesis was essentially motivated by the problem of nuclear heating in reactors. The main goal of this work was the development of methods capable of simulating the prompt gamma emission from fission. First of all, several algorithms for the treatment of nucleus deexcitation were implemented. They have been successfully tested through various calculations (isomeric branching ratios, total radiative widths, etc.). These methods were then incorporated into the fission code FIFRELIN. The tool resulting from this work enables the determination of numerous fission observables within a single consistent model. A sensitivity study of the results to several numerical and nuclear models was carried out. Finally, calculations were performed for the spontaneous fission of 252Cf and the thermal-neutron-induced fission of 235U and 239Pu. The prompt gamma spectra obtained for these three fissioning systems were determined. The results are in good agreement with available experimental data, including recent measurements published in 2012 and 2013. (author)

  11. Longer you play, the more hostile you feel: examination of first person shooter video games and aggression during video game play.

    Science.gov (United States)

    Barlett, Christopher P; Harris, Richard J; Baldassaro, Ross

    2007-01-01

    This study investigated the effects of video game play on aggression. Using the General Aggression Model, as applied to video games by Anderson and Bushman [2002], this study measured physiological arousal, state hostility, and how aggressively participants would respond to three hypothetical scenarios. In addition, this study measured each of these variables multiple times to gauge how aggression would change with increased video game play. Results showed a significant increase from baseline in hostility and aggression (based on two of the three story stems), which is consistent with the General Aggression Model. This study adds to the existing literature on video games and aggression by showing that increased play of a violent first-person shooter video game can significantly increase aggression from baseline. © 2007 Wiley-Liss, Inc.

  12. Measurement of the differential cross-sections of inclusive, prompt and non-prompt J/ψ production in proton-proton collisions at √s = 7 TeV

    Energy Technology Data Exchange (ETDEWEB)

    Aad, G [Fakultaet fuer Mathematik und Physik, Albert-Ludwigs-Universitaet, Freiburg i.Br. (Germany); Abbott, B [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, Norman OK (United States); Abdallah, J [Institut de Fisica d' Altes Energies and Universitat Autonoma de Barcelona and ICREA, Barcelona (Spain); Abdelalim, A A [Section de Physique, Universite de Geneve, Geneva (Switzerland); Abdesselam, A [Department of Physics, Oxford University, Oxford (United Kingdom); Abdinov, O [Institute of Physics, Azerbaijan Academy of Sciences, Baku (Azerbaijan); Abi, B [Department of Physics, Oklahoma State University, Stillwater OK (United States); Abolins, M [Department of Physics and Astronomy, Michigan State University, East Lansing MI (United States); Abramowicz, H [Raymond and Beverly Sackler School of Physics and Astronomy, Tel Aviv University, Tel Aviv (Israel); Abreu, H [LAL, Univ. Paris-Sud and CNRS/IN2P3, Orsay (France); Acerbi, E [INFN Sezione di Milano (Italy); Dipartimento di Fisica, Universita di Milano, Milano (Italy); Acharya, B S [INFN Gruppo Collegato di Udine (Italy); ICTP, Trieste [Italy; Adams, D L [Physics Department, Brookhaven National Laboratory, Upton NY (United States); Addy, T N [Department of Physics, Hampton University, Hampton VA (United States); Adelman, J [Department of Physics, Yale University, New Haven CT (United States); Aderholz, M [Max-Planck-Institut fuer Physik (Werner-Heisenberg-Institut), Muenchen (Germany); Adomeit, S [Fakultaet fuer Physik, Ludwig-Maximilians-Universitaet Muenchen, Muenchen (Germany); Adragna, P [Department of Physics, Queen Mary University of London, London (United Kingdom); Adye, T [Particle Physics Department, Rutherford Appleton Laboratory, Didcot (United Kingdom); Aefsky, S [Department of Physics, Brandeis University, Waltham MA (United States)

    2011-09-21

    The inclusive J/ψ production cross-section and fraction of J/ψ mesons produced in B-hadron decays are measured in proton-proton collisions at √s = 7 TeV with the ATLAS detector at the LHC, as a function of the transverse momentum and rapidity of the J/ψ, using 2.3 pb⁻¹ of integrated luminosity. The cross-section is measured from a minimum pT of 1 GeV to a maximum of 70 GeV and for rapidities within |y| < 2.4, giving the widest reach of any measurement of J/ψ production to date. The differential production cross-sections of prompt and non-prompt J/ψ are separately determined and are compared to Colour Singlet NNLO*, Colour Evaporation Model, and FONLL predictions.

  13. Robust video object cosegmentation.

    Science.gov (United States)

    Wang, Wenguan; Shen, Jianbing; Li, Xuelong; Porikli, Fatih

    2015-10-01

    With ever-increasing volumes of video data, automatic extraction of salient object regions has become even more significant for visual analytics solutions. This surge has also opened up opportunities for taking advantage of collective cues encapsulated in multiple videos in a cooperative manner. However, it also brings up major challenges, such as handling drastic appearance, motion pattern, and pose variations of foreground objects as well as indiscriminate backgrounds. Here, we present a cosegmentation framework to discover and segment out common object regions across multiple frames and multiple videos in a joint fashion. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity, into an energy optimization framework that does not make restrictive assumptions on foreground appearance and motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor to integrate across-video correspondence from the conventional SIFT flow with interframe motion flow from optical flow. This novel spatio-temporal SIFT flow generates reliable estimations of common foregrounds over the entire video data set. Experimental results show that our method outperforms the state of the art on a new extensive data set (ViCoSeg).

  14. Video Modeling to Teach Social Safety Skills to Young Adults with Intellectual Disability

    Science.gov (United States)

    Spivey, Corrine E.; Mechling, Linda C.

    2016-01-01

    This study evaluated the effectiveness of video modeling with a constant time delay procedure to teach social safety skills to three young women with intellectual disability. A multiple probe design across three social safety skills (responding to strangers who: requested personal information; requested money; and entered the participant's…

  15. Distributed Video Coding: Iterative Improvements

    DEFF Research Database (Denmark)

    Luong, Huynh Van

    Nowadays, emerging applications such as wireless visual sensor networks and wireless video surveillance are requiring lightweight video encoding with high coding efficiency and error-resilience. Distributed Video Coding (DVC) is a new coding paradigm which exploits the source statistics at the decoder side. To improve noise modeling and also learn from previously decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate for the weaknesses of block-based SI generation and also utilizes clustering of DCT blocks to capture cross-band correlation and increase local adaptivity in noise modeling. During decoding, the updated information is used to iteratively re-estimate the motion and reconstruction in the proposed motion and reconstruction reestimation (MORE) scheme. The MORE scheme not only re-estimates the motion vectors…

  16. Prompting a consumer behavior for pollution control.

    Science.gov (United States)

    Geller, E S; Farris, J C; Post, D S

    1973-01-01

    A field application of behavior modification studied the relative effectiveness of different prompting procedures for increasing the probability that customers entering a grocery store would select their soft drinks in returnable rather than nonreturnable containers. Six different 2-hr experimental conditions during which bottle purchases were recorded were (1) No Prompt (i.e., control), (2) one student gave incoming customers a handbill urging the purchase of soft drinks in returnable bottles, (3) distribution of the handbill by one student and public charting of each customer's bottle purchases by another student, (4) handbill distribution and charting by a five-member group, (5) handbills distributed and purchases charted by three females. The variant prompting techniques were equally effective, and in general increased the percentage of returnable-bottle customers by an average of 25%.

  17. A Coupled Hidden Markov Random Field Model for Simultaneous Face Clustering and Tracking in Videos

    KAUST Repository

    Wu, Baoyuan

    2016-10-25

    Face clustering and face tracking are two areas of active research in automatic facial video processing. They, however, have long been studied separately, despite the inherent link between them. In this paper, we propose to perform simultaneous face clustering and face tracking from real world videos. The motivation for the proposed research is that face clustering and face tracking can provide useful information and constraints to each other, thus can bootstrap and improve the performances of each other. To this end, we introduce a Coupled Hidden Markov Random Field (CHMRF) to simultaneously model face clustering, face tracking, and their interactions. We provide an effective algorithm based on constrained clustering and optimal tracking for the joint optimization of cluster labels and face tracking. We demonstrate significant improvements over state-of-the-art results in face clustering and tracking on several videos.

  18. Negotiation for Strategic Video Games

    OpenAIRE

    Afiouni, Einar Nour; Øvrelid, Leif Julian

    2013-01-01

    This project aims to examine the possibilities of using game theoretic concepts and multi-agent systems in modern video games with real time demands. We have implemented a multi-issue negotiation system for the strategic video game Civilization IV, evaluating different negotiation techniques with a focus on the use of opponent modeling to improve negotiation results.

  19. Earthquake casualty models within the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system

    Science.gov (United States)

    Jaiswal, Kishor; Wald, David J.; Earle, Paul S.; Porter, Keith A.; Hearne, Mike

    2011-01-01

    Since the launch of the USGS’s Prompt Assessment of Global Earthquakes for Response (PAGER) system in fall of 2007, the time needed for the U.S. Geological Survey (USGS) to determine and comprehend the scope of any major earthquake disaster anywhere in the world has been dramatically reduced to less than 30 min. PAGER alerts consist of estimated shaking hazard from the ShakeMap system, estimates of population exposure at various shaking intensities, and a list of the most severely shaken cities in the epicentral area. These estimates help government, scientific, and relief agencies to guide their responses in the immediate aftermath of a significant earthquake. To account for wide variability and uncertainty associated with inventory, structural vulnerability and casualty data, PAGER employs three different global earthquake fatality/loss computation models. This article describes the development of the models and demonstrates the loss estimation capability for earthquakes that have occurred since 2007. The empirical model relies on country-specific earthquake loss data from past earthquakes and makes use of calibrated casualty rates for future prediction. The semi-empirical and analytical models are engineering-based and rely on complex datasets including building inventories, time-dependent population distributions within different occupancies, the vulnerability of regional building stocks, and casualty rates given structural collapse.
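The empirical model's loss computation reduces, in outline, to multiplying the population exposed at each shaking-intensity level by a calibrated, country-specific fatality rate for that level. The sketch below illustrates that structure only; the rates used are made-up numbers, not PAGER's calibrated values.

```python
def expected_fatalities(exposure_by_mmi, fatality_rate_by_mmi):
    """Sum over intensity levels of (people exposed) x (fatality rate).
    exposure_by_mmi: {MMI level: people exposed at that level}
    fatality_rate_by_mmi: {MMI level: deaths per exposed person}"""
    return sum(pop * fatality_rate_by_mmi.get(mmi, 0.0)
               for mmi, pop in exposure_by_mmi.items())

# Hypothetical exposure and rates (illustrative only):
exposure = {7: 100_000, 8: 10_000, 9: 1_000}
rates = {7: 1e-5, 8: 1e-4, 9: 1e-3}
losses = expected_fatalities(exposure, rates)  # -> 3.0 expected fatalities
```

In the real system the rates are fit per country from historical earthquake loss data, and the alert level is derived from the resulting loss estimate and its uncertainty.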

  20. Adaptive modeling of sky for video processing and coding applications

    NARCIS (Netherlands)

    Zafarifar, B.; With, de P.H.N.; Lagendijk, R.L.; Weber, Jos H.; Berg, van den A.F.M.

    2006-01-01

    Video content analysis for still and moving images can be used for various applications, such as high-level semantic-driven operations or pixel-level content-dependent image manipulation. Within video content analysis, sky regions of an image form visually important objects, for which interesting

  1. PANCHROMATIC OBSERVATIONS OF THE TEXTBOOK GRB 110205A: CONSTRAINING PHYSICAL MECHANISMS OF PROMPT EMISSION AND AFTERGLOW

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, W. [Department of Physics, University of Michigan, 450 Church Street, Ann Arbor, MI 48109 (United States); Shen, R. F. [Department of Astronomy and Astrophysics, University of Toronto, Toronto, Ontario M5S 3H4 (Canada); Sakamoto, T. [Center for Research and Exploration in Space Science and Technology (CRESST), NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Beardmore, A. P. [Department of Physics and Astronomy, University of Leicester, Leicester LE1 7RH (United Kingdom); De Pasquale, M. [Mullard Space Science Laboratory, University College London, Holmbury Road, Holmbury St. Mary, Dorking RH5 6NT (United Kingdom); Wu, X. F.; Zhang, B. [Department of Physics and Astronomy, University of Nevada Las Vegas, Las Vegas, NV 89154 (United States); Gorosabel, J. [Instituto de Astrofisica de Andalucia (IAA-CSIC), 18008 Granada (Spain); Urata, Y. [Institute of Astronomy, National Central University, Chung-Li 32054, Taiwan (China); Sugita, S. [EcoTopia Science Institute, Nagoya University, Furo-cho, chikusa, Nagoya 464-8603 (Japan); Pozanenko, A. [Space Research Institute (IKI), 84/32 Profsoyuznaya St., Moscow 117997 (Russian Federation); Nissinen, M. [Taurus Hill Observatory, Haerkaemaeentie 88, 79480 Kangaslampi (Finland); Sahu, D. K. [CREST, Indian Institute of Astrophysics, Koramangala, Bangalore 560034 (India); Im, M. [Center for the Exploration of the Origin of the Universe, Department of Physics and Astronomy, FPRD, Seoul National University, Shillim-dong, San 56-1, Kwanak-gu, Seoul (Korea, Republic of); Ukwatta, T. N. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Andreev, M. 
[Terskol Branch of Institute of Astronomy of RAS, Kabardino-Balkaria Republic 361605 (Russian Federation); Klunko, E., E-mail: zwk@umich.edu, E-mail: rfshen@astro.utoronto.ca, E-mail: zhang@physics.unlv.edu [Institute of Solar-Terrestrial Physics, Lermontov St., 126a, Irkutsk 664033 (Russian Federation); and others

    2012-06-01

    We present a comprehensive analysis of the bright, long-duration (T90 ≈ 257 s) GRB 110205A at redshift z = 2.22. The optical prompt emission was detected by Swift/UVOT and by the ROTSE-IIIb and BOOTES telescopes while the gamma-ray burst (GRB) was still radiating in the γ-ray band, with the optical light curve showing correlation with the γ-ray data. Nearly 200 s of simultaneous observations were obtained from optical through X-ray to γ-ray energies (1 eV to 5 MeV), which makes it one of the exceptional cases for studying the broadband spectral energy distribution during the prompt emission phase. In particular, we clearly identify, for the first time, an interesting two-break energy spectrum, roughly consistent with the standard synchrotron emission model in the fast-cooling regime. Shortly after the prompt emission (~1100 s), a bright (R = 14.0) optical emission hump with a very steep rise (α ≈ 5.5) was observed, which we interpret as reverse shock (RS) emission. It is the first time that the rising phase of an RS component has been closely observed. The full optical and X-ray afterglow light curves can be interpreted within the standard RS + forward shock (FS) model. In general, the high-quality prompt and afterglow data allow us to apply the standard fireball model to extract valuable information, including the radiation mechanism (synchrotron), the radius of the prompt emission (R_GRB ≈ 3 × 10^13 cm), the initial Lorentz factor of the outflow (Γ0 ≈ 250), the composition of the ejecta (mildly magnetized), the collimation angle, and the total energy budget.

  2. Video Modeling for Children and Adolescents with Autism Spectrum Disorder: A Meta-Analysis

    Science.gov (United States)

    Thompson, Teresa Lynn

    2014-01-01

    The objective of this research was to conduct a meta-analysis to examine existing research studies on video modeling as an effective teaching tool for children and adolescents diagnosed with Autism Spectrum Disorder (ASD). Study eligibility criteria included (a) single case research design using multiple baselines, alternating treatment designs,…

  3. Effectiveness of Teaching Naming Facial Expression to Children with Autism via Video Modeling

    Science.gov (United States)

    Akmanoglu, Nurgul

    2015-01-01

    This study aims to examine the effectiveness of teaching naming emotional facial expression via video modeling to children with autism. Teaching the naming of emotions (happy, sad, scared, disgusted, surprised, feeling physical pain, and bored) was made by creating situations that lead to the emergence of facial expressions to children…

  4. How College Students' Conceptions of Newton's Second and Third Laws Change through Watching Interactive Video Vignettes: A Mixed Methods Study

    Science.gov (United States)

    Engelman, Jonathan

    2016-01-01

    Changing student conceptions in physics is a difficult process and has been a topic of research for many years. The purpose of this study was to understand what prompted students to change or not change their incorrect conceptions of Newtons Second or Third Laws in response to an intervention, Interactive Video Vignettes (IVVs), designed to…

  5. An assessment of prompt neutron reproduction time in a reflector dominated fast critical system: ELECTRA

    International Nuclear Information System (INIS)

    Suvdantsetseg, E.; Wallenius, J.

    2014-01-01

    Highlights: • Prompt neutron reproduction time of ELECTRA is evaluated. • Static and dynamic reproduction times are distinguished for ELECTRA. • Avery-Cohn’s two-region prompt neutron theory is applied. - Abstract: In this paper, an accurate method to evaluate the prompt neutron reproduction time for a reflector-dominated fast critical reactor, ELECTRA, is discussed. To adequately handle the problem, explicit time-dependent Monte Carlo calculations with MCNP, applying a repeated time cut-off technique, are used and compared against the σ ~ 1/v time-dependent absorber method, which applies artificial cross-section data in the Monte Carlo code SERPENT. The results show that when a reflector plays a major role in the criticality of a fast neutron reactor, the two methods predict different physical parameters (Λ = 69 ± 2 ns and Λ = 83 ± 1 ns for the time cut-off and 1/v methods, respectively). The reason is explained by applying the Avery-Cohn two-region prompt neutron model

  6. A method for prediction of prompt fission neutron spectra

    International Nuclear Information System (INIS)

    Grashin, A.F.; Lepeshkin, M.V.

    1988-01-01

    A three-parameter formula for the integral spectrum of prompt fission neutrons is derived from a thermodynamical model. Two parameters, the scission-neutron weight p = 11% and the anisotropy factor for accelerated fragments b = 10%, are determined from experimental data, with the same values assumed for any type of fission. The thermodynamical theory provides the value of the third parameter, the temperature τ, thus predicting the neutron spectrum and average energy with an error of about 1%. (author)

  7. Effects of video modeling on social initiations by children with autism.

    Science.gov (United States)

    Nikopoulos, Christos K; Keenan, Michael

    2004-01-01

    We examined the effects of a video modeling intervention on social initiation and play behaviors in 3 children with autism using a multiple baseline across subjects design. Each child watched a videotape showing a typically developing peer and the experimenter engaging in a simple social interactive play with one toy. For all children, social initiation and reciprocal play skills were enhanced, and these effects were maintained at 1- and 3-month follow-up periods.

  8. The flipped classroom: A learning model to increase student engagement not academic achievement

    OpenAIRE

    Masha Smallhorn

    2017-01-01

    A decrease in student attendance at lectures, both nationally and internationally, has prompted educators to re-evaluate their teaching methods and investigate strategies which promote student engagement. The flipped classroom model, grounded in active learning pedagogy, transforms the face-to-face classroom. Students prepare for the flipped classroom in their own time by watching short online videos and completing readings. Face-to-face time is used to apply learning through problem-solving w...

  9. Estimation of neutron energy distributions from prompt gamma emissions

    Science.gov (United States)

    Panikkath, Priyada; Udupi, Ashwini; Sarkar, P. K.

    2017-11-01

    A technique for estimating the incident neutron energy distribution from the prompt gamma intensities emitted by a system exposed to neutrons is presented. The emitted prompt gamma intensities, or the measured photopeaks in a gamma detector, are related to the incident neutron energy distribution through a convolution with the response of the system generating the prompt gammas to mono-energetic neutrons. The system studied here is a cylinder of high-density polyethylene (HDPE) placed inside another cylinder of borated HDPE (BHDPE) with an outer Pb cover, exposed to neutrons. The five prompt gamma peaks emitted from hydrogen, boron, carbon and lead can be utilized to unfold the incident neutron energy distribution as an under-determined deconvolution problem. This under-determined set of equations is solved using the genetic-algorithm-based Monte Carlo deconvolution code GAMCD. The feasibility of the proposed technique is demonstrated theoretically using the Monte Carlo calculated response matrix and the intensities of prompt gammas emitted from the Pb-covered BHDPE-HDPE system for several incident neutron spectra spanning different energy ranges.
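The unfolding step described above can be illustrated with a toy example: given a response matrix R relating neutron energy bins to gamma-peak intensities, search for a non-negative spectrum phi whose folded intensities R·phi match the measurement y. In the spirit of the genetic-algorithm code GAMCD, this sketch uses a tiny elitist GA; the 3×4 matrix and all GA settings are made-up illustrative values, not data from the paper.

```python
# Toy unfolding: find non-negative phi with R @ phi ~= measured intensities y.
# R, true_phi, and the GA hyperparameters are illustrative assumptions.
import random

R = [  # 3 gamma peaks responding to 4 neutron energy bins
    [1.0, 0.5, 0.2, 0.1],
    [0.2, 1.0, 0.5, 0.2],
    [0.1, 0.2, 0.8, 1.0],
]
true_phi = [2.0, 1.0, 0.5, 1.5]
y = [sum(r * p for r, p in zip(row, true_phi)) for row in R]  # "measured" peaks

def residual(phi):
    """Sum of squared differences between folded and measured intensities."""
    return sum((sum(r * p for r, p in zip(row, phi)) - yi) ** 2
               for row, yi in zip(R, y))

def evolve(pop_size=60, generations=300, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 3.0) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=residual)                 # elitist: keep the best half
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)      # average crossover + mutation,
            children.append([max(0.0, (x + z) / 2 + rng.gauss(0, 0.05))
                             for x, z in zip(a, b)])   # clipped non-negative
        pop = parents + children
    return min(pop, key=residual)

phi = evolve()
print(residual(phi))  # residual shrinks toward zero over the generations
```

Because the problem is under-determined (3 equations, 4 unknowns), many spectra reproduce the peaks exactly; real codes add physical constraints or, as in GAMCD, Monte Carlo sampling over the solution set.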

  10. Interactive Videos Enhance Learning about Socio-Ecological Systems

    Science.gov (United States)

    Smithwick, Erica; Baxter, Emily; Kim, Kyung; Edel-Malizia, Stephanie; Rocco, Stevie; Blackstock, Dean

    2018-01-01

    Two forms of interactive video were assessed in an online course focused on conservation. The hypothesis was that interactive video enhances student perceptions about learning and improves mental models of social-ecological systems. Results showed that students reported greater learning and attitudes toward the subject following interactive video.…

  11. 77 FR 76624 - Prompt Payment Interest Rate; Contract Disputes Act

    Science.gov (United States)

    2012-12-28

    ... DEPARTMENT OF THE TREASURY Fiscal Service Prompt Payment Interest Rate; Contract Disputes Act... beginning January 1, 2013, and ending on June 30, 2013, the prompt payment interest rate is 1-3/8 per centum... Prompt Payment Act, 31 U.S.C. 3902(a), provide for the calculation of interest due on claims at the rate...

  12. Using Video in the English Language Classroom

    Directory of Open Access Journals (Sweden)

    Amado Vicente

    2002-08-01

    Full Text Available Video is a popular and motivating medium in schools. Using video in the language classroom helps language teachers in many different ways. Video, for instance, brings the outside world into the language classroom, providing the class with many different topics and reasons to talk about. It can provide comprehensible input to learners through contextualised models of language use. It also offers good opportunities to introduce native English speech into the language classroom. In this article I try to show what the benefits of using video are and, at the end, I present an instrument to select and classify video materials.

  13. Modeling the Quality of Videos Displayed With Local Dimming Backlight at Different Peak White and Ambient Light Levels

    DEFF Research Database (Denmark)

    Mantel, Claire; Søgaard, Jacob; Bech, Søren

    2016-01-01

    This paper investigates the impact of ambient light and peak white (maximum brightness of a display) on the perceived quality of videos displayed using local backlight dimming. Two subjective tests providing quality evaluations are presented and analyzed. The analyses of variance show significant… The rendering of each video is computed using a model of the display. Widely used objective quality metrics are applied based on the rendering models of the videos to predict the subjective evaluations. As these predictions are not satisfying, three machine learning methods are applied: partial least square regression, elastic net…

  14. Maximizing Reading Narrative Text Ability by Probing Prompting Learning Technique

    Directory of Open Access Journals (Sweden)

    Wiwied Pratiwi

    2017-12-01

    Full Text Available The objective of this research was to determine whether the Probing Prompting Learning Technique can be used to obtain the maximum effect on students’ ability to read narrative text in the teaching and learning process. This collaborative action research was carried out in two cycles. The subjects were 23 students in the tenth grade of SMA Kartikatama Metro. The results showed that the Probing Prompting Learning Technique is useful and effective in helping students get the maximum effect from their reading. The questionnaire results showed an average percentage of 95%, indicating that the application of the Probing Prompting Learning Technique in teaching reading was appropriate and that students’ responses toward it were positive. In conclusion, the Probing Prompting Learning Technique can maximize students’ reading ability. In relation to these results, it is suggested that English teachers use the Probing Prompting Learning Technique in teaching reading to obtain the maximum effect on students’ reading ability.

  15. Content-based video retrieval by example video clip

    Science.gov (United States)

    Dimitrova, Nevenka; Abdel-Mottaleb, Mohamed

    1997-01-01

    This paper presents a novel approach for video retrieval from a large archive of MPEG or Motion JPEG compressed video clips. We introduce a retrieval algorithm that takes a video clip as a query and searches the database for clips with similar contents. Video clips are characterized by a sequence of representative frame signatures, which are constructed from DC coefficients and motion information (`DC+M' signatures). The similarity between two video clips is determined by using their respective signatures. This method facilitates retrieval of clips for the purpose of video editing, broadcast news retrieval, or copyright violation detection.
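The signature-based matching idea can be sketched as follows. The block-average "signature" and the L1 clip distance below are illustrative stand-ins of my own: the paper's actual 'DC+M' signatures combine DC coefficients with motion information, which this sketch omits.

```python
# Sketch: characterize each clip by a sequence of per-frame signatures and
# compare clips by the distance between aligned signatures. Frames are
# modeled as flat pixel lists; block averages stand in for DC coefficients.

def frame_signature(frame, block=2):
    """Average each run of `block` pixels (a crude stand-in for DC terms)."""
    return [sum(frame[i:i + block]) / block for i in range(0, len(frame), block)]

def clip_distance(sigs_a, sigs_b):
    """Mean L1 distance between aligned frame signatures."""
    n = min(len(sigs_a), len(sigs_b))
    total = sum(
        sum(abs(x - y) for x, y in zip(sa, sb))
        for sa, sb in zip(sigs_a[:n], sigs_b[:n])
    )
    return total / n

query = [frame_signature(f) for f in [[1, 1, 4, 4], [2, 2, 8, 8]]]
same  = [frame_signature(f) for f in [[1, 1, 4, 4], [2, 2, 8, 8]]]
other = [frame_signature(f) for f in [[9, 9, 0, 0], [9, 9, 0, 0]]]
print(clip_distance(query, same))                                # 0.0
print(clip_distance(query, same) < clip_distance(query, other))  # True
```

A retrieval system would rank database clips by this distance against the query clip's signature sequence.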

  16. Turning Video Resource Management into Cloud Computing

    Directory of Open Access Journals (Sweden)

    Weili Kou

    2016-07-01

    Full Text Available Big data makes cloud computing more and more popular in various fields. Video resources are very useful and important to education, security monitoring, and so on. However, their huge volumes, complex data types, inefficient processing performance, weak security, and long loading times pose challenges for video resource management. The Hadoop Distributed File System (HDFS) is an open-source framework which can provide cloud-based platforms and presents an opportunity for solving these problems. This paper presents a video resource management architecture based on HDFS that provides a uniform framework and a five-layer model for standardizing current algorithms and applications. The architecture, basic model, and key algorithms are designed for moving video resources into a cloud computing environment. The design was tested by establishing a simulation system prototype.

  17. Research on Construction of Road Network Database Based on Video Retrieval Technology

    Directory of Open Access Journals (Sweden)

    Wang Fengling

    2017-01-01

    Full Text Available Based on the characteristics of video databases, their basic structure, and several typical video data models, a segmentation-based multi-level data model is used to describe the landscape information video database, the road network database model, and the road network management database system, together with the detailed design and implementation of the landscape information management system.

  18. Processing Decoded Video for Backlight Dimming

    DEFF Research Database (Denmark)

    Burini, Nino; Korhonen, Jari

    Quality of digital image and video signals on TV screens is affected by many factors, including the display technology and compression standards. An accurate knowledge of the characteristics of the display and of the video signals can be used to develop advanced algorithms that improve the visual rendition of the signals, particularly in the case of LCDs with dynamic local backlight. This thesis shows that it is possible to model LCDs with dynamic backlight to design algorithms that improve the visual quality of 2D and 3D content, and that digital video coding artifacts like blocking or ringing can be reduced with post-processing. LCD screens with dynamic local backlight are modeled in their main aspects, like pixel luminance, light diffusion and light perception. Following the model, novel algorithms based on optimization are presented and extended, then reduced in complexity, to produce backlights…

  19. Prompt photon measurements with PHENIX's MPC-EX detector

    Science.gov (United States)

    Campbell, Sarah; PHENIX Collaboration

    2013-08-01

    The MPC-EX detector is a Si-W preshower extension to the existing Muon Piston Calorimeter (MPC). The MPC-EX consists of eight layers of alternating W absorber and Si mini-pad sensors. Located at forward rapidity, 3.1 80 GeV, a factor of four improvement over current capabilities. Not only will the MPC-EX strengthen PHENIX's existing forward π0 and jet measurements, it will provide sufficient prompt photon and π0 separation to make a prompt photon measurement possible. Prompt photon yields at high pT, pT > 3 GeV/c, can be statistically extracted using the double ratio method. In transversely polarized p+p collisions, the measurement of the prompt photon single spin asymmetry, AN, will resolve the sign discrepancy between the Sivers and twist-3 extractions of AN. In p+Au collisions, the prompt photon R_pAu will quantify the level of gluon saturation in the Au nucleus at low x, x ~ 10^-3, with a projected systematic error band a factor of four smaller than EPS09's current allowable range. The MPC-EX detector will expand our understanding of the gluon nuclear parton distribution functions, providing important information about the initial state of heavy ion collisions, and clarify how the valence parton's transverse momentum and spin correlate with the proton spin.

  20. Prompt neutron emission

    International Nuclear Information System (INIS)

    Sher, R.

    1959-01-01

    It is shown that Ramanna and Rao's tentative conclusion that prompt fission neutrons are emitted (in the fragment system) preferentially in the direction of fragment motion is not necessitated by their angular distribution measurements, which are well explained by the usual assumptions of isotropic emission with a Maxwell (or Maxwell-like) emission spectrum. The energy distribution (Watt spectrum) and the angular distribution, both including the effects of anisotropic emission, are given. (author) [fr
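For reference, the Maxwell-like emission spectrum and the Watt form it leads to can be written explicitly. This is the standard textbook form (not taken from this record): isotropic Maxwellian emission at temperature $T$ in the frame of a fragment with kinetic energy per nucleon $E_f$ gives

```latex
N(E) \;\propto\; e^{-(E+E_f)/T}\,
\sinh\!\left(\frac{2\sqrt{E\,E_f}}{T}\right)
\;=\; C\, e^{-E/a}\,\sinh\!\left(\sqrt{b\,E}\right),
\qquad a = T,\quad b = \frac{4E_f}{T^{2}},
```

where the constant factor $e^{-E_f/T}$ has been absorbed into the normalization $C$.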

  1. Maintaining Vocational Skills of Individuals with Autism and Developmental Disabilities through Video Modeling

    Science.gov (United States)

    Van Laarhoven, Toni; Winiarski, Lauren; Blood, Erika; Chan, Jeffrey M.

    2012-01-01

    A modified pre/posttest control group design was used to measure the effectiveness of video modeling on the maintenance of vocational tasks for six students with autism spectrum disorder and/or developmental disabilities. Each student was assigned two vocational tasks at their employment settings and their independence with each task was measured…

  2. The Impact of Video Modeling on Improving Social Skills in Children with Autism

    Science.gov (United States)

    Alzyoudi, Mohammed; Sartawi, AbedAlziz; Almuhiri, Osha

    2014-01-01

    Children with autism often show a lack of the interactive social skills that would allow them to engage with others successfully. They therefore frequently need training to aid them in successful social interaction. Video modeling is a widely used instructional technique that has been applied to teach children with developmental disabilities such…

  3. Modelling of prompt losses of high energy charged particles in Tokamaks

    International Nuclear Information System (INIS)

    Dillner, Oe.; Anderson, D.; Hamnen, H.; Lisak, M.

    1990-01-01

    A simple analytical expression for the total prompt loss fraction of high energy charged particles in an axisymmetric Tokamak is derived. The results are compared with predictions obtained from numerical simulations and show good agreement. An application is made to sawtooth induced changes in the losses of fusion generated high energy charged particles. Particular emphasis is given to the importance of sawtooth induced profile changes of the background ion densities and temperature as well as to redistribution of particles which have accumulated during the sawtooth rise but are being lost by redistribution at the sawtooth crash. (au)

  4. Verbal Prompting, Hand-over-Hand Instruction, and Passive Observation in Teaching Children with Developmental Disabilities.

    Science.gov (United States)

    Biederman, G. B.; Fairhall, J. L.; Raven, K. A.; Davey, V. A.

    1998-01-01

    A study involving six children (ages 5-13) with mental retardation found that overall passive modeling was significantly more effective than hand-over-hand modeling in teaching skills, and that passive modeling was significantly more effective than hand-over-hand modeling with response-contingent verbal prompting. (Author/CR)

  5. Video demystified

    CERN Document Server

    Jack, Keith

    2004-01-01

    This international bestseller and essential reference is the "bible" for digital video engineers and programmers worldwide. It is by far the most informative analog and digital video reference available, and includes the hottest new trends and cutting-edge developments in the field. Video Demystified, Fourth Edition is a "one-stop" reference guide for the various digital video technologies. The fourth edition is completely updated with all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video (Video over DSL, Ethernet, etc.), as well as discussions of the latest standards throughout. The accompanying CD-ROM is updated to include a unique set of video test files in the newest formats. *This essential reference is the "bible" for digital video engineers and programmers worldwide *Contains all new chapters on MPEG-4, H.264, SDTV/HDTV, ATSC/DVB, and Streaming Video *Completely revised with all the latest and most up-to-date industry standards.

  6. Efficient Temporal Action Localization in Videos

    KAUST Repository

    Alwassel, Humam

    2018-01-01

    as an application of the action spotting problem. Experiments on the THUMOS14 dataset reveal that our model is not only able to explore the video efficiently (observing on average 17.3% of the video) but it also accurately finds human activities with 30.8% mAP (0

  7. Video pedagogy

    OpenAIRE

    Länsitie, Janne; Stevenson, Blair; Männistö, Riku; Karjalainen, Tommi; Karjalainen, Asko

    2016-01-01

    The short film is an introduction to the concept of video pedagogy. The five categories of video pedagogy further elaborate how videos can be used as a part of instruction and learning process. Most pedagogical videos represent more than one category. A video itself doesn’t necessarily define the category – the ways in which the video is used as a part of pedagogical script are more defining factors. What five categories did you find? Did you agree with the categories, or are more...

  8. Anatomical knowledge gain through a clay-modeling exercise compared to live and video observations.

    Science.gov (United States)

    Kooloos, Jan G M; Schepens-Franke, Annelieke N; Bergman, Esther M; Donders, Rogier A R T; Vorstenbosch, Marc A T M

    2014-01-01

    Clay modeling is increasingly used as a teaching method other than dissection. The haptic experience during clay modeling is supposed to correspond to the learning effect of manipulations during exercises in the dissection room involving tissues and organs. We questioned this assumption in two pretest-post-test experiments. In these experiments, the learning effects of clay modeling were compared to either live observations (Experiment I) or video observations (Experiment II) of the clay-modeling exercise. The effects of learning were measured with multiple choice questions, extended matching questions, and recognition of structures on illustrations of cross-sections. Analysis of covariance with pretest scores as the covariate was used to elaborate the results. Experiment I showed a significantly higher post-test score for the observers, whereas Experiment II showed a significantly higher post-test score for the clay modelers. This study shows that (1) students who perform clay-modeling exercises show less gain in anatomical knowledge than students who attentively observe the same exercise being carried out and (2) performing a clay-modeling exercise is better in anatomical knowledge gain compared to the study of a video of the recorded exercise. The most important learning effect seems to be the engagement in the exercise, focusing attention and stimulating time on task. © 2014 American Association of Anatomists.

  9. Video Modeling and Children with Autism Spectrum Disorder: A Survey of Caregiver Perspectives

    Science.gov (United States)

    Cardon, Teresa A.; Guimond, Amy; Smith-Treadwell, Amanda M.

    2015-01-01

    Video modeling (VM) has shown promise as an effective intervention for individuals with autism spectrum disorder (ASD); however, little is known about what may promote or prevent caregivers' use of this intervention. While VM is an effective tool to support skill development among a wide range of children in research and clinical settings, VM is…

  10. Action recognition in depth video from RGB perspective: A knowledge transfer manner

    Science.gov (United States)

    Chen, Jun; Xiao, Yang; Cao, Zhiguo; Fang, Zhiwen

    2018-03-01

    Using different video modalities for human action recognition has become a highly promising trend in video analysis. In this paper, we propose a method for human action recognition that transfers knowledge from RGB video to depth video using domain adaptation, where features learned from RGB videos are used to perform action recognition on depth videos. More specifically, we take three steps to solve this problem. First, unlike a single image, a video is more complex because it carries both spatial and temporal information; to better encode this information, the dynamic image method is used to represent each RGB or depth video as a single image, after which most image feature-extraction methods can be applied. Second, since a video can be represented as an image, a standard CNN model can be used for training and testing on videos, and also for feature extraction given its powerful representational ability. Third, because RGB videos and depth videos belong to two different domains, domain adaptation is first used to make the two feature domains more similar, so that features learned from the RGB video model can be used directly for depth video classification. We evaluate the proposed method on a complex RGB-D action dataset (NTU RGB-D), and it achieves more than 2% accuracy improvement using domain adaptation from RGB to depth action recognition.
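The "dynamic image" representation in the first step can be sketched as a linear temporal weighting of frames. A minimal sketch, assuming a simple mean-free ramp of weights α_t = 2t − T − 1 of my own choosing (not necessarily the exact coefficients of the published dynamic image method):

```python
# Collapse a video (list of equally sized frames) into one "dynamic image"
# by weighting frames linearly in time. Static content cancels (the weights
# sum to zero); temporal change survives.

def dynamic_image(frames):
    """frames: list of frames, each a flat list of pixel values."""
    T = len(frames)
    weights = [2 * t - T - 1 for t in range(1, T + 1)]  # e.g. T=3 -> [-2, 0, 2]
    n = len(frames[0])
    return [sum(w * frame[i] for w, frame in zip(weights, frames))
            for i in range(n)]

static = [[5.0, 5.0]] * 3
print(dynamic_image(static))                               # [0.0, 0.0]
ramp = [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]]
print(dynamic_image(ramp))                                 # [4.0, 4.0]
```

The resulting single image can then be fed to any standard image CNN, which is what makes the second step of the pipeline possible.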

  11. Effects of cognitive stimulation with a self-modeling video on time to exhaustion while running at maximal aerobic velocity: a pilot study.

    Science.gov (United States)

    Hagin, Vincent; Gonzales, Benoît R; Groslambert, Alain

    2015-04-01

    This study assessed whether video self-modeling improves running performance and influences the rate of perceived exertion and heart rate response. Twelve men (M age=26.8 yr., SD=6; M body mass index=22.1 kg.m(-2), SD=1) performed a time to exhaustion running test at 100 percent maximal aerobic velocity while focusing on a video self-modeling loop to synchronize their stride. Compared to the control condition, there was a significant increase of time to exhaustion. Perceived exertion was lower also, but there was no significant change in mean heart rate. In conclusion, the video self-modeling used as a pacer apparently increased endurance by decreasing perceived exertion without affecting the heart rate.

  12. A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    KAUST Repository

    Dutta, Aritra

    2017-07-02

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their higher computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible in processing high definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.
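The low-rank idea behind these background estimation methods can be sketched with a plain (unweighted, non-incremental) rank-1 approximation; the weighting and batch-incremental updates that are the paper's actual contribution are omitted here, and all data below are synthetic.

```python
# Sketch: stack frames as columns of a pixels-by-frames matrix, take the
# rank-1 SVD reconstruction as the static background, and treat the
# residual as the sparse foreground.
import numpy as np

rng = np.random.default_rng(0)
background = rng.uniform(0.0, 1.0, size=20)      # one static "image" (20 pixels)
frames = np.tile(background, (10, 1)).T          # 20 pixels x 10 frames
frames[3, 4] += 5.0                              # sparse "object" in frame 4
frames[7, 8] += 5.0                              # sparse "object" in frame 8

U, s, Vt = np.linalg.svd(frames, full_matrices=False)
rank1 = s[0] * np.outer(U[:, 0], Vt[0, :])       # estimated background
foreground = frames - rank1                      # residual: the sparse objects

# The largest residuals sit where the objects were inserted.
print(np.unravel_index(np.abs(foreground).argmax(), foreground.shape))
```

Robust variants (RPCA, and the weighted approximation above) replace the plain L2-optimal SVD so that large sparse foreground pixels do not bias the background estimate.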

  14. Effective Quality-of-Service Renegotiating Schemes for Streaming Video

    Directory of Open Access Journals (Sweden)

    Song Hwangjun

    2004-01-01

    Full Text Available This paper presents effective quality-of-service renegotiating schemes for streaming video. A conventional network supporting quality of service generally allows a negotiation only at call setup. However, this is not efficient for video applications, since compressed video traffic is statistically nonstationary. Thus, we consider a network supporting quality-of-service renegotiation during data transmission and study effective renegotiating schemes for streaming video. The token bucket model, whose parameters are the token filling rate and the token bucket size, is adopted as the video traffic model. The renegotiating time instants and the parameters are determined by analyzing the statistical information of the compressed video traffic. Two renegotiating approaches, the fixed renegotiating interval case and the variable renegotiating interval case, are examined. Finally, experimental results are provided to show the performance of the proposed schemes.
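The token bucket traffic model named in the abstract can be sketched in a few lines. The class and parameter names below are illustrative; the paper's contribution (choosing the renegotiation instants and the bucket parameters from traffic statistics) is not reproduced here.

```python
# Token bucket: tokens accumulate at `rate` per second up to `bucket_size`;
# a packet conforms if enough tokens are available when it arrives.

class TokenBucket:
    def __init__(self, rate: float, bucket_size: float):
        self.rate = rate              # token filling rate (tokens/s)
        self.capacity = bucket_size   # token bucket size (max burst)
        self.tokens = bucket_size     # start with a full bucket
        self.last_time = 0.0

    def conforms(self, now: float, packet_size: float) -> bool:
        """Refill tokens for the elapsed time, then try to admit the packet."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_time) * self.rate)
        self.last_time = now
        if packet_size <= self.tokens:
            self.tokens -= packet_size
            return True
        return False

bucket = TokenBucket(rate=100.0, bucket_size=50.0)  # 100 tokens/s, burst of 50
print(bucket.conforms(0.0, 40.0))   # within the initial burst -> True
print(bucket.conforms(0.1, 40.0))   # only 10 + 10 refilled = 20 tokens -> False
```

Renegotiation then amounts to changing `rate` and `capacity` at the chosen time instants as the traffic statistics evolve.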

  15. Contagious Content: Viral Video Ads Identification of Content Characteristics that Help Online Video Advertisements Go Viral

    Directory of Open Access Journals (Sweden)

    Yentl Knossenburg

    2016-12-01

    Full Text Available Why do some online video advertisements go viral while others remain unnoticed? What kind of video content keeps the viewer interested and motivated to share? Many companies have realized the need to innovate their marketing strategies and have embraced the newest ways of using technology, such as the Internet, to their advantage, as in the example of virality. Yet few marketers actually understand how, and the academic literature on this topic is still in development. This study investigated which content characteristics distinguish successful from non-successful online viral video advertisements by analyzing 641 cases using Structural Equation Modeling. Results show that Engagement and Surprise are two main content characteristics that significantly increase the chance of an online video advertisement going viral.

  16. Indication for double parton scatterings in W+ prompt J/ψ production at the LHC

    Science.gov (United States)

    Lansberg, Jean-Philippe; Shao, Hua-Sheng; Yamanaka, Nodoka

    2018-06-01

    We re-analyse the associated production of a prompt J/ψ and a W boson in pp collisions at the LHC following the results of the ATLAS Collaboration. We perform the first study of the Single-Parton-Scattering (SPS) contributions at Next-to-Leading Order (NLO) in α_s in the Colour-Evaporation Model (CEM), an approach based on quark-hadron duality. Our study provides clear indications for Double-Parton-Scattering (DPS) contributions, in particular at low transverse momenta, since our SPS CEM evaluation, which can be viewed as a conservative upper limit on the SPS yields, falls short of the ATLAS experimental data by 3.1 standard deviations. We also determine a finite allowed region for σ_eff, inversely proportional to the size of the DPS yields, corresponding to two otherwise opposed hypotheses, namely our NLO CEM evaluation and the LO direct Colour-Singlet (CS) Model contribution. In both cases, the resulting DPS yields are significantly larger than those initially assumed by ATLAS based on jet-related analyses, but are consistent with their observed raw-yield azimuthal distribution and with their prompt J/ψ + J/ψ and Z + prompt J/ψ data.
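The inverse proportionality between σ_eff and the DPS yields referred to above is the standard DPS "pocket formula", quoted here for reference with symbols as commonly defined in the DPS literature (not extracted from this record):

```latex
\sigma^{\mathrm{DPS}}_{J/\psi + W}
\;=\;
\frac{\sigma_{J/\psi}\,\sigma_{W}}{\sigma_{\mathrm{eff}}},
```

so a smaller fitted σ_eff corresponds to a larger DPS contribution to the associated-production yield.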

  17. Prompt and non-prompt $J/\\psi$ and $\\psi(2\\mathrm{S})$ suppression at high transverse momentum in 5.02 TeV Pb+Pb collisions with the ATLAS experiment

    CERN Document Server

    Aaboud, Morad; ATLAS Collaboration; et al.
Parsons, John; Parzefall, Ulrich; Pascuzzi, Vincent; Pasner, Jacob Martin; Pasqualucci, Enrico; Passaggio, Stefano; Pastore, Francesca; Pasuwan, Patrawan; Pataraia, Sophio; Pater, Joleen; Pathak, Atanu; Pauly, Thilo; Pearson, Benjamin; Pedersen, Maiken; Pedraza Lopez, Sebastian; Pedro, Rute; Peleganchuk, Sergey; Penc, Ondrej; Peng, Cong; Peng, Haiping; Penwell, John; Peralva, Bernardo; Perego, Marta Maria; Pereira Peixoto, Ana Paula; Perepelitsa, Dennis; Peri, Francesco; Perini, Laura; Pernegger, Heinz; Perrella, Sabrina; Peshekhonov, Vladimir; Peters, Krisztian; Peters, Yvonne; Petersen, Brian; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petroff, Pierre; Petrolo, Emilio; Petrov, Mariyan; Petrucci, Fabrizio; Pettersson, Nora Emilia; Peyaud, Alan; Pezoa, Raquel; Pham, Thu; Phillips, Forrest Hays; Phillips, Peter William; Piacquadio, Giacinto; Pianori, Elisabetta; Picazio, Attilio; Pickering, Mark Andrew; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pinamonti, Michele; Pinfold, James; Pitt, Michael; Pleier, Marc-Andre; Pleskot, Vojtech; Plotnikova, Elena; Pluth, Daniel; Podberezko, Pavel; Poettgen, Ruth; Poggi, Riccardo; Poggioli, Luc; Pogrebnyak, Ivan; Pohl, David-leon; Pokharel, Ishan; Polesello, Giacomo; Poley, Anne-luise; Policicchio, Antonio; Polifka, Richard; Polini, Alessandro; Pollard, Christopher Samuel; Polychronakos, Venetios; Ponomarenko, Daniil; Pontecorvo, Ludovico; Popeneciu, Gabriel Alexandru; Portillo Quintero, Dilia María; Pospisil, Stanislav; Potamianos, Karolos; Potrap, Igor; Potter, Christina; Potti, Harish; Poulsen, Trine; Poveda, Joaquin; Powell, Thomas Dennis; Pozo Astigarraga, Mikel Eukeni; Pralavorio, Pascal; Prell, Soeren; Price, Darren; Primavera, Margherita; Prince, Sebastien; Proklova, Nadezda; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Przybycien, Mariusz; Puri, Akshat; Puzo, Patrick; Qian, Jianming; Qin, Yang; Quadt, Arnulf; Queitsch-Maitland, Michaela; Qureshi, 
Anum; Radhakrishnan, Sooraj Krishnan; Rados, Pere; Ragusa, Francesco; Rahal, Ghita; Raine, John Andrew; Rajagopalan, Srinivasan; Rashid, Tasneem; Raspopov, Sergii; Ratti, Maria Giulia; Rauch, Daniel; Rauscher, Felix; Rave, Stefan; Ravina, Baptiste; Ravinovich, Ilia; Rawling, Jacob Henry; Raymond, Michel; Read, Alexander Lincoln; Readioff, Nathan Peter; Reale, Marilea; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reed, Robert; Reeves, Kendall; Rehnisch, Laura; Reichert, Joseph; Reiss, Andreas; Rembser, Christoph; Ren, Huan; Rescigno, Marco; Resconi, Silvia; Resseguie, Elodie Deborah; Rettie, Sebastien; Reynolds, Elliot; Rezanova, Olga; Reznicek, Pavel; Richter, Robert; Richter, Stefan; Richter-Was, Elzbieta; Ricken, Oliver; Ridel, Melissa; Rieck, Patrick; Riegel, Christian Johann; Rifki, Othmane; Rijssenbeek, Michael; Rimoldi, Adele; Rimoldi, Marco; Rinaldi, Lorenzo; Ripellino, Giulia; Ristić, Branislav; Ritsch, Elmar; Riu, Imma; Rivera Vergara, Juan Cristobal; Rizatdinova, Flera; Rizvi, Eram; Rizzi, Chiara; Roberts, Rhys Thomas; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robson, Aidan; Rocco, Elena; Roda, Chiara; Rodina, Yulia; Rodriguez Bosca, Sergi; Rodriguez Perez, Andrea; Rodriguez Rodriguez, Daniel; Rodríguez Vera, Ana María; Roe, Shaun; Rogan, Christopher Sean; Røhne, Ole; Röhrig, Rainer; Roland, Christophe Pol A; Roloff, Jennifer; Romaniouk, Anatoli; Romano, Marino; Romero Adam, Elena; Rompotis, Nikolaos; Ronzani, Manfredi; Roos, Lydia; Rosati, Stefano; Rosbach, Kilian; Rose, Peyton; Rosien, Nils-Arne; Rossi, Elvira; Rossi, Leonardo Paolo; Rossini, Lorenzo; Rosten, Jonatan; Rosten, Rachel; Rotaru, Marina; Rothberg, Joseph; Rousseau, David; Roy, Debarati; Rozanov, Alexandre; Rozen, Yoram; Ruan, Xifeng; Rubbo, Francesco; Rühr, Frederik; Ruiz-Martinez, Aranzazu; Rurikova, Zuzana; Rusakovich, Nikolai; Russell, Heather; Rutherfoord, John; Ruthmann, Nils; Rüttinger, Elias Michael; Ryabov, Yury; Rybar, 
Martin; Rybkin, Grigori; Ryu, Soo; Ryzhov, Andrey; Rzehorz, Gerhard Ferdinand; Sabatini, Paolo; Sabato, Gabriele; Sacerdoti, Sabrina; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Saha, Puja; Sahinsoy, Merve; Saimpert, Matthias; Saito, Masahiko; Saito, Tomoyuki; Sakamoto, Hiroshi; Sakharov, Alexander; Salamani, Dalila; Salamanna, Giuseppe; Salazar Loyola, Javier Esteban; Salek, David; Sales De Bruin, Pedro Henrique; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sammel, Dirk; Sampsonidis, Dimitrios; Sampsonidou, Despoina; Sánchez, Javier; Sanchez Pineda, Arturo Rodolfo; Sandaker, Heidi; Sander, Christian Oliver; Sandhoff, Marisa; Sandoval, Carlos; Sankey, Dave; Sannino, Mario; Sano, Yuta; Sansoni, Andrea; Santoni, Claudio; Santos, Helena; Santoyo Castillo, Itzebelt; Sapronov, Andrey; Saraiva, João; Sasaki, Osamu; Sato, Koji; Sauvan, Emmanuel; Savard, Pierre; Savic, Natascha; Sawada, Ryu; Sawyer, Craig; Sawyer, Lee; Sbarra, Carla; Sbrizzi, Antonio; Scanlon, Tim; Scannicchio, Diana; Schaarschmidt, Jana; Schacht, Peter; Schachtner, Balthasar Maria; Schaefer, Douglas; Schaefer, Leigh; Schaeffer, Jan; Schaepe, Steffen; Schäfer, Uli; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R Dean; Scharmberg, Nicolas; Schegelsky, Valery; Scheirich, Daniel; Schenck, Ferdinand; Schernau, Michael; Schiavi, Carlo; Schier, Sheena; Schildgen, Lara Katharina; Schillaci, Zachary Michael; Schioppa, Enrico Junior; Schioppa, Marco; Schleicher, Katharina; Schlenker, Stefan; Schmidt-Sommerfeld, Korbinian Ralf; Schmieden, Kristof; Schmitt, Christian; Schmitt, Stefan; Schmitz, Simon; Schnoor, Ulrike; Schoeffel, Laurent; Schoening, Andre; Schopf, Elisabeth; Schott, Matthias; Schouwenberg, Jeroen; Schovancova, Jaroslava; Schramm, Steven; Schuh, Natascha; Schulte, Alexandra; Schultz-Coulon, Hans-Christian; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwartzman, Ariel; Schwarz, 
Thomas Andrew; Schweiger, Hansdieter; Schwemling, Philippe; Schwienhorst, Reinhard; Sciandra, Andrea; Sciolla, Gabriella; Scornajenghi, Matteo; Scuri, Fabrizio; Scutti, Federico; Scyboz, Ludovic Michel; Searcy, Jacob; Sebastiani, Cristiano David; Seema, Pienpen; Seidel, Sally; Seiden, Abraham; Seixas, José; Sekhniaidze, Givi; Sekhon, Karishma; Sekula, Stephen; Semprini-Cesari, Nicola; Senkin, Sergey; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Sessa, Marco; Severini, Horst; Šfiligoj, Tina; Sforza, Federico; Sfyrla, Anna; Shabalina, Elizaveta; Shahinian, Jeffrey David; Shaikh, Nabila Wahab; Shan, Lianyou; Shang, Ruo-yu; Shank, James; Shapiro, Marjorie; Sharma, Abhishek; Sharma, Abhishek; Shatalov, Pavel; Shaw, Kate; Shaw, Savanna Marie; Shcherbakova, Anna; Shehu, Ciwake Yusufu; Shen, Yu-Ting; Sherafati, Nima; Sherman, Alexander David; Sherwood, Peter; Shi, Liaoshan; Shimizu, Shima; Shimmin, Chase Owen; Shimojima, Makoto; Shipsey, Ian Peter Joseph; Shirabe, Shohei; Shiyakova, Mariya; Shlomi, Jonathan; Shmeleva, Alevtina; Shoaleh Saadi, Diane; Shochet, Mel; Shojaii, Seyed Ruhollah; Shope, David Richard; Shrestha, Suyog; Shulga, Evgeny; Sicho, Petr; Sickles, Anne Marie; Sidebo, Per Edvin; Sideras Haddad, Elias; Sidiropoulou, Ourania; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silva, José; Silva Jr, Manuel; Silverstein, Samuel; Simic, Ljiljana; Simion, Stefan; Simioni, Eduard; Simmons, Brinick; Simon, Manuel; Sinervo, Pekka; Sinev, Nikolai; Sioli, Maximiliano; Siragusa, Giovanni; Siral, Ismet; Sivoklokov, Serguei; Sjölin, Jörgen; Skinner, Malcolm Bruce; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Slawinska, Magdalena; Sliwa, Krzysztof; Slovak, Radim; Smakhtin, Vladimir; Smart, Ben; Smiesko, Juraj; Smirnov, Nikita; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smith, Joshua Wyatt; Smith, Matthew; Smith, Russell; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snyder, Ian Michael; Snyder, Scott; Sobie, Randall; Socher, Felix; Soffa, 
Aaron Michael; Soffer, Abner; Søgaard, Andreas; Soh, Dart-yin; Sokhrannyi, Grygorii; Solans Sanchez, Carlos; Solar, Michael; Soldatov, Evgeny; Soldevila, Urmila; Solodkov, Alexander; Soloshenko, Alexei; Solovyanov, Oleg; Solovyev, Victor; Sommer, Philip; Son, Hyungsuk; Song, Weimin; Sopczak, Andre; Sopkova, Filomena; Sosa, David; Sotiropoulou, Calliope Louisa; Sottocornola, Simone; Soualah, Rachik; Soukharev, Andrey; South, David; Sowden, Benjamin; Spagnolo, Stefania; Spalla, Margherita; Spangenberg, Martin; Spanò, Francesco; Sperlich, Dennis; Spettel, Fabian; Spieker, Thomas Malte; Spighi, Roberto; Spigo, Giancarlo; Spiller, Laurence Anthony; Spousta, Martin; Stabile, Alberto; Stamen, Rainer; Stamm, Soren; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stanitzki, Marcel Michael; Stapf, Birgit Sylvia; Stapnes, Steinar; Starchenko, Evgeny; Stark, Giordon; Stark, Jan; Stark, Simon Holm; Staroba, Pavel; Starovoitov, Pavel; Stärz, Steffen; Staszewski, Rafal; Stegler, Martin; Steinberg, Peter; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stevenson, Thomas James; Stewart, Graeme; Stockton, Mark; Stoicea, Gabriel; Stolte, Philipp; Stonjek, Stefan; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Strubig, Antonia; Stucci, Stefania Antonia; Stugu, Bjarne; Stupak, John; Styles, Nicholas Adam; Su, Dong; Su, Jun; Suchek, Stanislav; Sugaya, Yorihito; Suk, Michal; Sulin, Vladimir; Sultan, D M S; Sultansoy, Saleh; Sumida, Toshi; Sun, Siyuan; Sun, Xiaohu; Suruliz, Kerim; Suster, Carl; Sutton, Mark; Suzuki, Shota; Svatos, Michal; Swiatlowski, Maximilian; Swift, Stewart Patrick; Sydorenko, Alexander; Sykora, Ivan; Sykora, Tomas; Ta, Duc; Tackmann, Kerstin; Taenzer, Joe; Taffard, Anyes; Tafirout, Reda; Tahirovic, Elvedin; Taiblum, Nimrod; Takai, Helio; Takashima, Ryuichi; Takasugi, Eric Hayato; Takeda, Kosuke; Takeshita, Tohru; Takubo, Yosuke; Talby, 
Mossadek; Talyshev, Alexey; Tanaka, Junichi; Tanaka, Masahiro; Tanaka, Reisaburo; Tanioka, Ryo; Tannenwald, Benjamin Bordy; Tapia Araya, Sebastian; Tapprogge, Stefan; Tarek Abouelfadl Mohamed, Ahmed; Tarem, Shlomit; Tarna, Grigore; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tashiro, Takuya; Tassi, Enrico; Tavares Delgado, Ademar; Tayalati, Yahya; Taylor, Aaron; Taylor, Alan James; Taylor, Geoffrey; Taylor, Pierre Thor Elliot; Taylor, Wendy; Tee, Amy Selvi; Teixeira-Dias, Pedro; Temple, Darren; Ten Kate, Herman; Teng, Ping-Kun; Teoh, Jia Jian; Tepel, Fabian-Phillipp; Terada, Susumu; Terashi, Koji; Terron, Juan; Terzo, Stefano; Testa, Marianna; Teuscher, Richard; Thais, Savannah Jennifer; Theveneaux-Pelzer, Timothée; Thiele, Fabian; Thomas, Juergen; Thompson, Paul; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Tian, Yun; Ticse Torres, Royer Edson; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; Tipton, Paul; Tisserant, Sylvain; Todome, Kazuki; Todorova-Nova, Sharka; Todt, Stefanie; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tolley, Emma; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Baojia(Tony); Tornambe, Peter; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Tosciri, Cecilia; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Treado, Colleen Jennifer; Trefzger, Thomas; Tresoldi, Fabio; Tricoli, Alessandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Tripiana, Martin; Trischuk, William; Trocmé, Benjamin; Trofymov, Artur; Troncon, Clara; Trovatelli, Monica; Trovato, Fabrizio; Truong, Loan; Trzebinski, Maciej; Trzupek, Adam; Tsai, Fang-ying; Tsang, Ka Wa; Tseng, Jeffrey; Tsiareshka, Pavel; Tsirintanis, Nikolaos; Tsiskaridze, Shota; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsuno, Soshi; Tsybychev, Dmitri; Tu, Yanjun; Tudorache, Alexandra; Tudorache, Valentina; Tulbure, Traian Tiberiu; Tuna, Alexander Naip; Turchikhin, Semen; Turgeman, Daniel; Turk Cakir, Ilkay; 
Turra, Ruggero; Tuts, Michael; Tzovara, Eftychia; Ucchielli, Giulia; Ueda, Ikuo; Ughetto, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Ungaro, Francesca; Unno, Yoshinobu; Uno, Kenta; Urban, Jozef; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Usui, Junya; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Vadla, Knut Oddvar Hoie; Vaidya, Amal; Valderanis, Chrysostomos; Valdes Santurio, Eduardo; Valente, Marco; Valentinetti, Sara; Valero, Alberto; Valéry, Loïc; Vallance, Robert Adam; Vallier, Alexis; Valls Ferrer, Juan Antonio; Van Daalen, Tal Roelof; Van Den Wollenberg, Wouter; van der Graaf, Harry; van Gemmeren, Peter; Van Nieuwkoop, Jacobus; van Vulpen, Ivo; van Woerden, Marius Cornelis; Vanadia, Marco; Vandelli, Wainer; Vaniachine, Alexandre; Vankov, Peter; Vari, Riccardo; Varnes, Erich; Varni, Carlo; Varol, Tulin; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vasquez, Jared Gregory; Vasquez, Gerardo; Vazeille, Francois; Vazquez Furelos, David; Vazquez Schroeder, Tamara; Veatch, Jason; Vecchio, Valentina; Veloce, Laurelle Maria; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Ambrosius Thomas; Vermeulen, Jos; Vetterli, Michel; Viaux Maira, Nicolas; Viazlo, Oleksandr; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Vigani, Luigi; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinogradov, Vladimir; Vishwakarma, Akanksha; Vittori, Camilla; Vivarelli, Iacopo; Vlachos, Sotirios; Vogel, Marcelo; Vokac, Petr; Volpi, Guido; von Buddenbrock, Stefan; von Toerne, Eckhard; Vorobel, Vit; Vorobev, Konstantin; Vos, Marcel; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Peter; Wagner, Wolfgang; Wagner-Kuhr, Jeannine; Wahlberg, Hernan; Wahrmund, Sebastian; 
Wakamiya, Kotaro; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wallangen, Veronica; Wang, Ann Miao; Wang, Chao; Wang, Fuquan; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Peilong; Wang, Qing; Wang, Renjie; Wang, Rongkun; Wang, Rui; Wang, Song-Ming; Wang, Tingting; Wang, Wei; Wang, Wenxiao; Wang, Yufeng; Wang, Zirui; Wanotayaroj, Chaowaroj; Warburton, Andreas; Ward, Patricia; Wardrope, David Robert; Washbrook, Andrew; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Ben; Webb, Aaron Foley; Webb, Samuel; Weber, Christian; Weber, Michele; Weber, Sebastian Mario; Weber, Stephen; Webster, Jordan S; Weidberg, Anthony; Weinert, Benjamin; Weingarten, Jens; Weirich, Marcel; Weiser, Christian; Wells, Phillippa; Wenaus, Torre; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Michael David; Werner, Per; Wessels, Martin; Weston, Thomas; Whalen, Kathleen; Whallon, Nikola Lazar; Wharton, Andrew Mark; White, Aaron; White, Andrew; White, Martin; White, Ryan; Whiteson, Daniel; Whitmore, Ben William; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wiglesworth, Craig; Wiik-Fuchs, Liv Antje Mari; Wildauer, Andreas; Wilk, Fabian; Wilkens, Henric George; Williams, Hugh; Williams, Sarah; Willis, Christopher; Willocq, Stephane; Wilson, John; Wingerter-Seez, Isabelle; Winkels, Emma; Winklmeier, Frank; Winston, Oliver James; Winter, Benedict Tobias; Wittgen, Matthias; Wobisch, Markus; Wolf, Anton; Wolf, Tim Michael Heinz; Wolff, Robert; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wong, Vincent Wai Sum; Woods, Natasha Lee; Worm, Steven; Wosiek, Barbara; Woźniak, Krzysztof; Wraight, Kenneth; Wu, Miles; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wyatt, Terry Richard; Wynne, Benjamin; Xella, Stefania; Xi, Zhaoxu; Xia, Ligang; Xu, Da; Xu, Hanlin; Xu, Lailin; Xu, Tairan; Xu, Wenhao; Yabsley, Bruce; Yacoob, Sahal; Yajima, Kazuki; Yallup, David; Yamaguchi, Daiki; Yamaguchi, Yohei; Yamamoto, Akira; Yamanaka, Takashi; Yamane, Fumiya; 
Yamatani, Masahiro; Yamazaki, Tomohiro; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Hongtao; Yang, Siqi; Yang, Yi; Yang, Yi-lin; Yang, Zongchang; Yao, Weiming; Yap, Yee Chinn; Yasu, Yoshiji; Yatsenko, Elena; Yau Wong, Kaven Henry; Ye, Jingbo; Ye, Shuwei; Yeletskikh, Ivan; Yigitbasi, Efe; Yildirim, Eda; Yorita, Kohei; Yoshihara, Keisuke; Young, Charles; Young, Christopher John; Yu, Jaehoon; Yu, Jie; Yue, Xiaoguang; Yuen, Stephanie P; Yusuff, Imran; Zabinski, Bartlomiej; Zacharis, Georgios; Zaidan, Remi; Zaitsev, Alexander; Zakharchuk, Nataliia; Zalieckas, Justas; Zambito, Stefano; Zanzi, Daniele; Zeitnitz, Christian; Zemaityte, Gabija; Zeng, Jian Cong; Zeng, Qi; Zenin, Oleg; Ženiš, Tibor; Zerwas, Dirk; Zgubič, Miha; Zhang, Dengfeng; Zhang, Dongliang; Zhang, Fangzhou; Zhang, Guangyi; Zhang, Huijun; Zhang, Jinlong; Zhang, Lei; Zhang, Liqing; Zhang, Matt; Zhang, Peng; Zhang, Rui; Zhang, Ruiqi; Zhang, Xueyao; Zhang, Yu; Zhang, Zhiqing; Zhao, Xiandong; Zhao, Yongke; Zhao, Zhengguo; Zhemchugov, Alexey; Zhou, Bing; Zhou, Chen; Zhou, Li; Zhou, Maosen; Zhou, Mingliang; Zhou, Ning; Zhou, You; Zhu, Cheng Guang; Zhu, Heling; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhukov, Konstantin; Zhulanov, Vladimir; Zibell, Andre; Zieminska, Daria; Zimine, Nikolai; Zimmermann, Stephanie; Zinonos, Zinonas; Zinser, Markus; Ziolkowski, Michael; Živković, Lidija; Zobernig, Georg; Zoccoli, Antonio; Zoch, Knut; Zorbas, Theodore Georgio; Zou, Rui; zur Nedden, Martin; Zwalinski, Lukasz

    2018-01-01

    A measurement of $J/\\psi$ and $\\psi(2\\mathrm{S})$ production is presented. It is based on a data sample from Pb+Pb collisions at $\\sqrt{s_{\\mathrm{NN}}}$ = 5.02 TeV and $pp$ collisions at $\\sqrt{s}$ = 5.02 TeV recorded by the ATLAS detector at the LHC in 2015, corresponding to an integrated luminosity of 0.42 nb$^{-1}$ and 25 pb$^{-1}$ in Pb+Pb and $pp$, respectively. The measurements of per-event yields, nuclear modification factors, and non-prompt fractions are performed in the dimuon decay channel for $9 < p_{T}^{\\mu\\mu} < 40$ GeV in dimuon transverse momentum, and $-2.0 < y_{\\mu\\mu} < 2.0$ in rapidity. Strong suppression is found in Pb+Pb collisions for both prompt and non-prompt $J/\\psi$, as well as for prompt and non-prompt $\\psi(2\\mathrm{S})$, increasing with event centrality. The suppression of prompt $\\psi(2\\mathrm{S})$ is observed to be stronger than that of $J/\\psi$, while the suppression of non-prompt $\\psi(2\\mathrm{S})$ is equal to that of the non-prompt $J/\\psi$ within uncertainties,...

  18. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon, and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, the broadcasting of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast.

  19. A Study of Online Review Promptness in a B2C System

    Directory of Open Access Journals (Sweden)

    Junqiang Zhang

    2016-01-01

    Web 2.0 technologies have attracted an increasing number of active online writers and viewers. A deeper understanding of when customers will review and what motivates them to write online reviews is of both theoretical and practical significance. In this paper, we present a novel methodological framework, combining theoretical modeling and text-mining technologies, to study the relationships among customers’ review promptness, their review opinions, and their review motivations. We first study customers’ online “purchase-review” behavior dynamics; then, we introduce the LDA method to mine customers’ opinions from their review texts; finally, we propose a theoretical model to explore the motivations of people who publish reviews online. The analytical and experimental results with real data from a Chinese B2C website demonstrate that the dynamics of customers’ online review behavior are influenced by multidimensional motivations, some of which can be observed from their review behaviors, such as review promptness.

  20. Change in the game : business model innovation in the video game industry across time

    OpenAIRE

    Locke, Austin; Uhrínová, Bianka

    2017-01-01

    Technological innovation has changed business models across multiple industries – retail (Amazon), taxi (Uber), hotel (Airbnb). Through exploratory research using secondary data, this thesis describes the changes connected to technological innovation that have occurred in the video gaming industry from its creation to the current, modern era. Based on current research on business models, the authors created a “Value Creation-Revenue Stream Framework” that they use to anal...

  1. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    real world); proprioceptive and exteroceptive sensors allowing the recreating of the 3D geometric database of an environment (virtual world). The virtual world is projected onto a video display terminal (VDT). Computer-generated and video ...

  2. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
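One building block this abstract describes, mapping a PTZ camera's pointing angles into a spherical panoramic viewspace, can be sketched as below; the equirectangular resolution, angle ranges, and axis conventions are assumptions for illustration, not the authors' calibration model.

```python
# Hypothetical sketch: PTZ (pan, tilt) angles -> equirectangular panorama
# pixel, and -> pointing direction on the unit sphere.
import math

def ptz_to_panorama(pan_deg, tilt_deg, width=3600, height=1800):
    """Map pan in [-180, 180) and tilt in [-90, 90] degrees to a pixel (x, y)
    in an equirectangular panorama (assumed 0.1 deg/pixel resolution)."""
    x = int((pan_deg + 180.0) / 360.0 * width) % width
    y = int((90.0 - tilt_deg) / 180.0 * (height - 1))  # tilt +90 at top row
    return x, y

def ptz_to_unit_vector(pan_deg, tilt_deg):
    """Pointing direction on the unit sphere (x east, y north, z up)."""
    p, t = math.radians(pan_deg), math.radians(tilt_deg)
    return (math.cos(t) * math.sin(p),
            math.cos(t) * math.cos(p),
            math.sin(t))
```

Registering each camera's panorama against the aerial orthophotograph would then amount to fitting a mapping from these panorama coordinates to map coordinates, which is the geo-referencing step the abstract describes.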

  3. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    Science.gov (United States)

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
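For reference, the classical 1-D Viterbi algorithm that the paper extends to multidimensional causal HMMs can be sketched as follows; the two-state toy parameters are made up for illustration and are not from the paper.

```python
# Classical 1-D Viterbi decoding (the base case the paper generalizes).
def viterbi(obs, pi, A, B):
    """Most likely state path for observation sequence `obs`.
    pi[s]: initial prob, A[i][j]: transition prob, B[s][o]: emission prob."""
    n_states = len(pi)
    delta = [pi[s] * B[s][obs[0]] for s in range(n_states)]  # path scores
    psi = []                                                 # backpointers
    for o in obs[1:]:
        step, new_delta = [], []
        for j in range(n_states):
            best = max(range(n_states), key=lambda i: delta[i] * A[i][j])
            step.append(best)
            new_delta.append(delta[best] * A[best][j] * B[j][o])
        psi.append(step)
        delta = new_delta
    # Backtrack from the best final state.
    path = [max(range(n_states), key=lambda s: delta[s])]
    for step in reversed(psi):
        path.append(step[path[-1]])
    return path[::-1]

# Two-state toy model: state 0 tends to emit 0, state 1 tends to emit 1.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.3, 0.7]]
B = [[0.9, 0.1], [0.2, 0.8]]
print(viterbi([0, 0, 1, 1, 1], pi, A, B))  # → [0, 0, 1, 1, 1]
```

The distributed/GFB machinery in the paper addresses the harder problem of running such recursions over multiple dimensions with all-causal-neighbor dependencies.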

  4. Mapping (and modeling) physiological movements during EEG-fMRI recordings: the added value of the video acquired simultaneously.

    Science.gov (United States)

    Ruggieri, Andrea; Vaudano, Anna Elisabetta; Benuzzi, Francesca; Serafini, Marco; Gessaroli, Giuliana; Farinelli, Valentina; Nichelli, Paolo Frigio; Meletti, Stefano

    2015-01-15

    During resting-state EEG-fMRI studies in epilepsy, patients' spontaneous head and face movements occur frequently. We tested the usefulness of synchronous video recording to identify and model the fMRI changes associated with non-epileptic movements, in order to improve the sensitivity and specificity of fMRI maps related to interictal epileptiform discharges (IED). Categorization of different facial/cranial movements during EEG-fMRI was obtained for 38 patients [benign epilepsy with centro-temporal spikes (BECTS, n=16); idiopathic generalized epilepsy (IGE, n=17); focal symptomatic/cryptogenic epilepsy (n=5)]. We compared, at the single-subject and group levels, the IED-related fMRI maps obtained with and without additional regressors related to spontaneous movements. As a secondary aim, we considered facial movements as events of interest to test the usefulness of the video information for obtaining fMRI maps of the following face movements: swallowing, mouth-tongue movements, and blinking. Video information substantially improved the identification and classification of the artifacts with respect to EEG observation alone (a mean gain of 28 events per exam). Including physiological activities as additional regressors in the GLM model yielded an increased Z-score and number of voxels of the global maxima and/or new BOLD clusters in around three quarters of the patients. Video-related fMRI maps for swallowing, mouth-tongue movements, and blinking were comparable to those obtained in previous task-based fMRI studies. Video acquisition during EEG-fMRI is a useful source of information. Modeling physiological movements in EEG-fMRI studies of epilepsy will lead to more informative IED-related fMRI maps in different epileptic conditions.
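The modeling idea above, adding video-derived movement regressors alongside the IED regressor in a GLM, can be illustrated with synthetic data; the signals, effect sizes, and the absence of HRF convolution are simplifications for illustration, not the study's actual pipeline.

```python
# Toy GLM: estimating an IED effect with and without a video-derived
# movement confound modeled as an extra regressor. All data synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 200
ied = (rng.random(n) < 0.10).astype(float)      # events of interest
swallow = (rng.random(n) < 0.15).astype(float)  # video-derived confound
# Synthetic voxel time series: both effects present plus noise.
y = 2.0 * ied + 1.5 * swallow + rng.normal(0.0, 0.5, n)

def glm_beta(y, regressors):
    """OLS betas for a design matrix [regressors..., intercept]."""
    X = np.column_stack(regressors + [np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_without = glm_beta(y, [ied])        # confound left unmodeled
b_with = glm_beta(y, [ied, swallow])  # confound as additional regressor

print(b_without[0], b_with[0])  # IED effect estimate under each model
```

In the full analysis each movement category gets its own (HRF-convolved) regressor, so that variance from swallowing or blinking is no longer free to inflate or mask the IED-related maps.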

  5. The use of scientific direct instruction model with video learning of ethnoscience to improve students’ critical thinking skills

    Science.gov (United States)

    Sudarmin, S.; Mursiti, S.; Asih, A. G.

    2018-04-01

    In this era of disruption, students are encouraged to develop critical thinking skills and cultural-conservation character. Students' thinking skills in chemistry learning have not been well developed because chemistry teaching in schools is still teacher-centered and lecture-based, is less engaging, and does not utilize local culture as a learning resource. The purpose of this research is to determine the influence of applying the direct instruction (DI) model with an ethnoscience learning video on the improvement of students' critical thinking skills. This study was experimental research. The population was the students of class XI MIPA at MA Negeri Gombong, with the sample chosen by purposive random sampling. The local wisdom studied as ethnoscience, the focus of the research, was the production of genting, dawet, and lanting and the Sempor reservoir, integrated with colloid chemistry content. The ethnoscience learning video was validated by experts before being applied. Students' critical thinking skills were assessed through concept-understanding test instruments. The data analysis techniques used were the test of proportion and the Kolmogorov-Smirnov test. The results of this study suggest that the experimental class, taught with the scientific direct instruction model and the ethnoscience learning video, showed better cognitive learning and critical thinking than the control class. The students also indicated interest in the application of the scientific direct instruction model with the ethnoscience learning video.

  6. Procedures and Compliance of a Video Modeling Applied Behavior Analysis Intervention for Brazilian Parents of Children with Autism Spectrum Disorders

    Science.gov (United States)

    Bagaiolo, Leila F.; Mari, Jair de J.; Bordini, Daniela; Ribeiro, Tatiane C.; Martone, Maria Carolina C.; Caetano, Sheila C.; Brunoni, Decio; Brentani, Helena; Paula, Cristiane S.

    2017-01-01

    Video modeling using applied behavior analysis techniques is one of the most promising and cost-effective ways to improve social skills for parents of children with autism spectrum disorder. The main objectives were: (1) to elaborate/describe videos to improve eye contact and joint attention, and to decrease disruptive behaviors of autism spectrum…

  7. Video transmission on ATM networks. Ph.D. Thesis

    Science.gov (United States)

    Chen, Yun-Chung

    1993-01-01

    The broadband integrated services digital network (B-ISDN) is expected to provide high-speed and flexible multimedia applications. Multimedia includes data, graphics, image, voice, and video. Asynchronous transfer mode (ATM) is the adopted transport technique for B-ISDN and has the potential to provide a more efficient and integrated environment for multimedia. It is believed that most broadband applications will make heavy use of visual information. The prospect of widespread use of image and video communication has led to interest in coding algorithms for reducing bandwidth requirements and improving image quality. The major results of a study on bridging network transmission performance and video coding are: Using two representative video sequences, several video source models are developed. The fitness of these models is validated through the use of statistical tests and network queuing performance. A dual leaky bucket algorithm is proposed as an effective network policing function. The concept of the dual leaky bucket algorithm can be applied to a prioritized coding approach to achieve transmission efficiency. A mapping of the performance/control parameters at the network level into equivalent parameters at the video coding level is developed. Based on that, a complete set of principles for the design of video codecs for network transmission is proposed.
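    The leaky bucket policer mentioned above admits a compact sketch. This is a generic counter-style formulation rather than the thesis's exact parameterization: each arriving cell deposits one unit into a counter that drains at the policed rate, and a cell conforms only if the counter would not overflow. A dual configuration pairs a shallow, fast-draining bucket (peak rate) with a deeper, slow-draining one (sustained rate):

    ```python
    class LeakyBucket:
        """Counter-style leaky bucket: the counter drains at `rate` units per
        unit time, each arriving cell adds one unit, and a cell conforms only
        if the counter would not exceed `depth`."""
        def __init__(self, rate, depth):
            self.rate, self.depth = rate, depth
            self.level, self.last = 0.0, 0.0

        def conforms(self, t):
            self.level = max(0.0, self.level - (t - self.last) * self.rate)
            self.last = t
            if self.level + 1.0 > self.depth:
                return False            # violating cell: tag or drop, no deposit
            self.level += 1.0
            return True

    class DualLeakyBucket:
        """Two buckets in series: a shallow fast-draining bucket polices the
        peak rate, a deeper slow-draining one the sustained rate. (The
        short-circuit below leaves the sustained bucket uncharged on a peak
        violation; real policers differ in this detail.)"""
        def __init__(self, peak_rate, peak_depth, sust_rate, sust_depth):
            self.peak = LeakyBucket(peak_rate, peak_depth)
            self.sust = LeakyBucket(sust_rate, sust_depth)

        def conforms(self, t):
            return self.peak.conforms(t) and self.sust.conforms(t)

    policer = DualLeakyBucket(peak_rate=10.0, peak_depth=1.5,
                              sust_rate=1.0, sust_depth=5.0)
    # back-to-back cells at t=0.0 and t=0.01 violate the peak-rate bucket
    print([policer.conforms(t) for t in (0.0, 0.01, 0.2)])  # → [True, False, True]
    ```

    All rates and depths here are arbitrary illustration values; in practice they are negotiated per-connection traffic parameters.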

  8. Video-assisted palatopharyngeal surgery: a model for improved education and training.

    Science.gov (United States)

    Allori, Alexander C; Marcus, Jeffrey R; Daluvoy, Sanjay; Bond, Jennifer

    2014-09-01

    Objective: The learning process for intraoral procedures is arguably more difficult than for other surgical procedures because of the assistant's severely limited visibility. Consequently, trainees may not be able to adequately see and follow all steps of the procedure, and attending surgeons may be less willing to entrust trainees with critical portions of the procedure. In this report, we propose a video-assisted approach to intraoral procedures that improves lighting, visibility, and the potential for effective education and training. Design: Technical report (idea/innovation). Setting: Tertiary referral hospital. Patients: Children with cleft palate and velopharyngeal insufficiency requiring surgery. Interventions: Video-assisted palatoplasty, sphincteroplasty, and pharyngoplasty. Main Outcome Measures: Qualitative and semiquantitative educational outcomes, including learner perception regarding "real-time" (video-assisted surgery) and "non-real-time" (video-library-based) surgical education. Results: Trainees were strongly in favor of the video-assisted modality in "real-time" surgical training. Senior trainees identified more opportunities in which they had been safely entrusted to perform critical portions of the procedure, corresponding with higher satisfaction scores for the learning process, and they showed greater comfort/confidence scores for performing the procedure under supervision and alone. Conclusions: Adoption of the video-assisted approach can be expected to markedly improve the learning curve for surgeons in training. This is now standard practice at our institution. We are presently conducting a full educational technology assessment to better characterize the effect on knowledge acquisition and technical improvement.

  9. A simplified 2D to 3D video conversion technology——taking virtual campus video production as an example

    Directory of Open Access Journals (Sweden)

    ZHUANG Huiyang

    2012-10-01

    Full Text Available This paper describes a simplified 2D to 3D video conversion technology, taking virtual campus 3D video production as an example. First, it clarifies the meaning of 2D to 3D video conversion technology and points out the disadvantages of traditional methods. Second, it presents an innovative and convenient method, with a flow diagram and the software and hardware configurations. Finally, detailed descriptions of the conversion steps and precautions are given for the three processes: preparing materials, modeling objects and baking landscapes, recording the screen, and converting the video.

  10. To act or not to act: responses to electronic health record prompts by family medicine clinicians.

    Science.gov (United States)

    Zazove, Philip; McKee, Michael; Schleicher, Lauren; Green, Lee; Kileny, Paul; Rapai, Mary; Mulhem, Elie

    2017-03-01

    A major focus of health care today is a strong emphasis on improving the health and quality of care for entire patient populations. One common approach utilizes electronic clinical alerts to prompt clinicians when certain interventions are due for individual patients being seen. However, these alerts have not been consistently effective, particularly for less visible (though important) conditions such as hearing loss (HL) screening. We conducted hour-long cognitive task analysis interviews to explore how family medicine clinicians view, perceive, and use electronic clinical alerts, and to utilize this information to design a more effective alert using HL identification and referral as a model diagnosis. Four key direct barriers were identified that impeded alert use: poor standardization and formatting, time pressures in primary care, clinic workflow variations, and mental models of the condition being prompted (in this case, HL). One indirect barrier was identified: electronic health record and institution/government regulations. We identified that clinicians' mental model of the condition being prompted was probably the major barrier, though this was often expressed as time pressure. We discuss solutions to each of the 5 identified barriers, such as addressing physicians' mental models, by focusing on physicians' expertise rather than knowledge to improve their comfort when caring for patients with the conditions being prompted. To unleash the potential of electronic clinical alerts, electronic health record and health care institutions need to address some key barriers. We outline these barriers and propose solutions. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  11. Online Video Business Models: YouTube vs. Hulu

    Directory of Open Access Journals (Sweden)

    Juan P. Artero

    2010-01-01

    Full Text Available This paper examines the origins and development of two of the most successful online video services in the United States: YouTube and Hulu. Looking at both business histories, this case study analyzes the different commercial models applied, the results in terms of web traffic and revenue, and the strategic outlook for each. YouTube has developed a model that offers free videos on a global scale, but with local peculiarities in its most important markets. It hosts a huge number of videos, which are, however, generally short and of low quality, in most cases submitted and produced by the users themselves. This has the potential to create technological problems (video streaming capacity must be high-performing), legal difficulties (possible infringements involving protected or inappropriate content), and commercial problems (advertisers' reluctance to insert advertising into low-quality videos). Hulu concentrates on offering professional content free of charge, and only nationally within the United States. Its catalogue is smaller, and the videos are generally longer and of higher quality, made available by the channels and production companies that own the rights. Consequently, Hulu faces fewer technological, legal, and commercial problems, but its brand is not as well known, nor does it have YouTube's drawing power.

  12. Using Video Modeling to Teach Children with PDD-NOS to Respond to Facial Expressions

    Science.gov (United States)

    Axe, Judah B.; Evans, Christine J.

    2012-01-01

    Children with autism spectrum disorders often exhibit delays in responding to facial expressions, and few studies have examined teaching responding to subtle facial expressions to this population. We used video modeling to train 3 participants with PDD-NOS (age 5) to respond to eight facial expressions: approval, bored, calming, disapproval,…

  13. Measurement of the Shape of the Optical-IR Spectrum of Prompt Emission from Gamma-Ray Bursts

    Science.gov (United States)

    Grossan, Bruce; Kistaubayev, M.; Smoot, G.; Scherr, L.

    2017-06-01

    While the afterglow phase of gamma-ray bursts (GRBs) has been extensively measured, detections of prompt emission (i.e., during bright X-gamma emission) are more limited. Some prompt optical measurements are regularly made, but these are typically in a single wide band, with limited time resolution and no measurement of spectral shape. Some models predict a synchrotron self-absorption spectral break somewhere in the IR-optical region. Measurement of the absorption frequency would give extensive information on each burst, including the electron Lorentz factor, the radius of emission, and more (Shen & Zhang 2008). Thus far the best prompt observations have been explained by invoking a variety of models, but often with a non-unique interpretation. To understand this apparently heterogeneous behavior, and to reduce the number of possible models, it is critical to add data on the optical-IR spectral shape. Long GRB prompt X-gamma emission typically lasts ~40-80 s. The Swift BAT instrument rapidly measures GRB positions to within a few arc minutes and communicates them via the internet within a few seconds. We have measured the time for a fast-moving D=700 mm telescope to point and settle to be less than 9 s anywhere on the observable sky. Therefore, the majority of prompt optical-IR emission can be measured by responding to BAT positions with this telescope. In this presentation, we describe our observing and science programs, and give our design for the Burst Simultaneous Three-channel Instrument (BSTI), which uses dichroics to send separate bands to 3 cameras. Two EMCCD cameras give high time resolution in B and V; a third camera with a HgCdTe sensor covers H band, allowing us to study extinguished bursts. For a total exposure time of 10 s, we find a 5 sigma sensitivity of 21.3 and 20.3 mag in B and R for 1" seeing and Kitt Peak sky brightness, much fainter than typical previous prompt detections. We estimate the 5 sigma H-band sensitivity for an IR-optimized telescope to be

  14. Teaching social-communication skills to preschoolers with autism: efficacy of video versus in vivo modeling in the classroom.

    Science.gov (United States)

    Wilson, Kaitlyn P

    2013-08-01

    Video modeling is a time- and cost-efficient intervention that has been proven effective for children with autism spectrum disorder (ASD); however, the comparative efficacy of this intervention has not been examined in the classroom setting. The present study examines the relative efficacy of video modeling as compared to the more widely-used strategy of in vivo modeling using an alternating treatments design with baseline and replication across four preschool-aged students with ASD. Results offer insight into the heterogeneous treatment response of students with ASD. Additional data reflecting visual attention and social validity were captured to further describe participants' learning preferences and processes, as well as educators' perceptions of the acceptability of each intervention's procedures in the classroom setting.

  15. Prompting a consumer behavior for pollution control1

    Science.gov (United States)

    Geller, E. Scott; Farris, John C.; Post, David S.

    1973-01-01

    A field application of behavior modification studied the relative effectiveness of different prompting procedures for increasing the probability that customers entering a grocery store would select their soft drinks in returnable rather than nonreturnable containers. Six different 2-hr experimental conditions during which bottle purchases were recorded were (1) No Prompt (i.e., control), (2) one student gave incoming customers a handbill urging the purchase of soft drinks in returnable bottles, (3) distribution of the handbill by one student and public charting of each customer's bottle purchases by another student, (4) handbill distribution and charting by a five-member group, (5) handbills distributed and purchases charted by three females. The variant prompting techniques were equally effective, and in general increased the percentage of returnable-bottle customers by an average of 25%. PMID:16795418

  16. Physical basis for prompt-neutron activation analysis

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1982-01-01

    The technique called prompt γ-ray neutron activation analysis has been applied to rapid materials analysis. The radiation following neutron capture is prompt in the sense that the nuclear decay time is on the order of 10^-15 seconds; thus the technique is not strictly activation, and should be called radiative neutron capture spectroscopy or neutron capture γ-ray spectroscopy. This paper reviews the following: sources and detectors, theory of radiative capture, nonstatistical capture, the giant dipole resonance, fast neutron capture, and thermal neutron capture γ-ray spectra. 14 figures

  17. Efficient Temporal Action Localization in Videos

    KAUST Repository

    Alwassel, Humam

    2018-04-17

    State-of-the-art temporal action detectors inefficiently search the entire video for specific actions. Despite the encouraging progress these methods achieve, it is crucial to design automated approaches that only explore parts of the video which are the most relevant to the actions being searched. To address this need, we propose the new problem of action spotting in videos, which we define as finding a specific action in a video while observing a small portion of that video. Inspired by the observation that humans are extremely efficient and accurate in spotting and finding action instances in a video, we propose Action Search, a novel Recurrent Neural Network approach that mimics the way humans spot actions. Moreover, to address the absence of data recording the behavior of human annotators, we put forward the Human Searches dataset, which compiles the search sequences employed by human annotators spotting actions in the AVA and THUMOS14 datasets. We consider temporal action localization as an application of the action spotting problem. Experiments on the THUMOS14 dataset reveal that our model is not only able to explore the video efficiently (observing on average 17.3% of the video) but also accurately finds human activities with 30.8% mAP (0.5 tIoU), outperforming state-of-the-art methods.

  18. Prompt ντ fluxes in present and future τ neutrino experiments

    International Nuclear Information System (INIS)

    Gonzalez-Garcia, M.C.; Gomez-Cadenas, J.J.

    1997-01-01

    We use a nonperturbative QCD approach, the quark-gluon string model, to compute the τ-neutrino fluxes produced by fixed-target pA collisions (where A is a target material) for incident protons of energies ranging from 120 to 800 GeV. The purpose of this calculation is to estimate in a consistent way the prompt background for the ν_μ (ν_e) ↔ ν_τ oscillation search in the ongoing experiments CHORUS and NOMAD, as well as the expected prompt background in future experiments, such as COSMOS at Fermilab and a possible second-generation ν_μ (ν_e) ↔ ν_τ search experiment at the CERN SPS. In addition, we compute the number of ν_τ interactions expected by the experiment E872 at Fermilab. copyright 1997 The American Physical Society

  19. Video game addiction in children and teenagers in Taiwan.

    Science.gov (United States)

    Chiu, Shao-I; Lee, Jie-Zhi; Huang, Der-Hsiang

    2004-10-01

    Video game addiction in children and teenagers in Taiwan is associated with levels of animosity, social skills, and academic achievement. This study suggests that video game addiction can be statistically predicted on measures of hostility, and a group with high video game addiction has more hostility than others. Both gender and video game addiction are negatively associated with academic achievement. Family function, sensation seeking, gender, and boredom have statistically positive relationships with levels of social skills. Current models of video game addiction do not seem to fit the findings of this study.

  20. 31 CFR 8.32 - Prompt disposition of pending matters.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Prompt disposition of pending matters. 8.32 Section 8.32 Money and Finance: Treasury Office of the Secretary of the Treasury PRACTICE... Prompt disposition of pending matters. No attorney, certified public accountant, or enrolled practitioner...

  1. Candidate Smoke Region Segmentation of Fire Video Based on Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Yaqin Zhao

    2015-01-01

    Full Text Available Candidate smoke region segmentation is the key link of smoke video detection; an effective and prompt method of candidate smoke region segmentation plays a significant role in a smoke recognition system. However, the interference of heavy fog and smoke-color moving objects greatly degrades the recognition accuracy. In this paper, a novel method of candidate smoke region segmentation based on rough set theory is presented. First, Kalman filtering is used to update video background in order to exclude the interference of static smoke-color objects, such as blue sky. Second, in RGB color space smoke regions are segmented by defining the upper approximation, lower approximation, and roughness of smoke-color distribution. Finally, in HSV color space small smoke regions are merged by the definition of equivalence relation so as to distinguish smoke images from heavy fog images in terms of V component value variety from center to edge of smoke region. The experimental results on smoke region segmentation demonstrated the effectiveness and usefulness of the proposed scheme.
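    A heavily simplified sketch of the background-update step described above: the paper uses per-pixel Kalman filtering, while here a scalar fixed-gain recursive average (a special case of the Kalman update) stands in for it, with the gain and threshold chosen arbitrarily for illustration:

    ```python
    def update_background(bg, frame, gain=0.05):
        # fixed-gain recursive update: bg <- bg + K * (frame - bg); a simplified
        # stand-in for the per-pixel Kalman filter that keeps the background current
        return [[b + gain * (f - b) for b, f in zip(brow, frow)]
                for brow, frow in zip(bg, frame)]

    def candidate_mask(bg, frame, thresh=20.0):
        # pixels deviating strongly from the background model become
        # candidate smoke regions for the later color/roughness tests
        return [[abs(f - b) > thresh for b, f in zip(brow, frow)]
                for brow, frow in zip(bg, frame)]

    background = [[0.0, 0.0], [0.0, 0.0]]     # learned scene without smoke
    frame      = [[0.0, 100.0], [0.0, 0.0]]   # one pixel brightens as smoke drifts in
    print(candidate_mask(background, frame))  # → [[False, True], [False, False]]
    background = update_background(background, frame, gain=0.1)
    print(background[0][1])                   # → 10.0
    ```

    Because static smoke-colored objects (e.g. blue sky) are gradually folded into the background, only moving regions survive to the rough-set color analysis.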

  2. Application results for an augmented video tracker

    Science.gov (United States)

    Pierce, Bill

    1991-08-01

    The Relay Mirror Experiment (RME) is a research program to determine the pointing accuracy and stability levels achieved when a laser beam is reflected by the RME satellite from one ground station to another. This paper reports the results of using a video tracker augmented with a quad cell signal to improve the RME ground station tracking system performance. The video tracker controls a mirror to acquire the RME satellite, and provides a robust low bandwidth tracking loop to remove line of sight (LOS) jitter. The high-passed, high-gain quad cell signal is added to the low bandwidth, low-gain video tracker signal to increase the effective tracking loop bandwidth, and significantly improves LOS disturbance rejection. The quad cell augmented video tracking system is analyzed, and the math model for the tracker is developed. A MATLAB model is then developed from this, and performance as a function of bandwidth and disturbances is given. Improvements in performance due to the addition of the video tracker and the augmentation with the quad cell are provided. Actual satellite test results are then presented and compared with the simulated results.

  3. GUIDED USE OF WRITING PROMPTS TO IMPROVE ACADEMIC WRITING IN COLLEGE STUDENTS

    Directory of Open Access Journals (Sweden)

    Lina Marcela Trigos Carrillo

    2011-12-01

    Full Text Available The paper presents empirical data supporting the hypothesis that the systematic and guided use of academic writing prompts is a successful instructional strategy to improve the academic writing in Spanish of college students, mainly during their first semesters. A combined methodology, with pre- and post-tests, was used in this research project conducted from July 2009 to June 2010. The participants were freshmen students of different disciplines of the Human Sciences in a private university in Bogota, Colombia. The aim of this research project was twofold. First, it sought to identify the difficulties students faced in the writing process of academic texts when they are related to real communicative contexts. Second, it involved the design and application of the guided and systematic use of writing prompts for academic writing in a sequence called "The Cognitive Pedagogical Model of Writing for Higher Education". The results show empirical evidence supporting the use of writing prompts designed with specific academic purposes to improve the academic writing level of college students in their first stages of study. However, further research is needed to consolidate the results presented here.

  4. Spatial data processing for the purpose of video games

    Directory of Open Access Journals (Sweden)

    Chądzyńska Dominika

    2016-03-01

    Full Text Available Advanced terrain models are currently in common use in many video/computer games. Professional GIS technologies, existing spatial datasets, and cartographic methodology are increasingly used in their development, which allows for achieving a realistic model of the world. On the other hand, so-called game engines have very high capabilities for spatial data visualization. Preparing terrain models for the purpose of video games benefits from the knowledge and experience of GIS specialists and cartographers, although it is also accessible to non-professionals. The authors point out the commonness and variety of uses of terrain models in video games and the existence of a series of ready, advanced tools and procedures for creating terrain models. Finally, the authors describe an experiment in data modeling for the “Condor Soar Simulator”.

  5. Dashboard Videos

    Science.gov (United States)

    Gleue, Alan D.; Depcik, Chris; Peltier, Ted

    2012-01-01

    Last school year, I had a web link emailed to me entitled "A Dashboard Physics Lesson." The link, created and posted by Dale Basier on his "Lab Out Loud" blog, illustrates video of a car's speedometer synchronized with video of the road. These two separate video streams are compiled into one video that students can watch and analyze. After seeing…

  6. Learning from instructional explanations: effects of prompts based on the active-constructive-interactive framework.

    Science.gov (United States)

    Roelle, Julian; Müller, Claudia; Roelle, Detlev; Berthold, Kirsten

    2015-01-01

    Although instructional explanations are commonly provided when learners are introduced to new content, they often fail because they are not integrated into effective learning activities. The recently introduced active-constructive-interactive framework posits an effectiveness hierarchy in which interactive learning activities are at the top; these are then followed by constructive and active learning activities, respectively. Against this background, we combined instructional explanations with different types of prompts that were designed to elicit these learning activities and tested the central predictions of the active-constructive-interactive framework. In Experiment 1, N = 83 students were randomly assigned to one of four combinations of instructional explanations and prompts. To test the active learning hypothesis, the learners received either (1) complete explanations and engaging prompts designed to elicit active activities or (2) explanations that were reduced by inferences and inference prompts designed to engage learners in constructing the withheld information. Furthermore, in order to explore how interactive learning activities can be elicited, we gave the learners who had difficulties in constructing the prompted inferences adapted remedial explanations with either (3) unspecific engaging prompts or (4) revision prompts. In support of the active learning hypothesis, we found that the learners who received reduced explanations and inference prompts outperformed the learners who received complete explanations and engaging prompts. Moreover, revision prompts were more effective in eliciting interactive learning activities than engaging prompts. In Experiment 2, N = 40 students were randomly assigned to either (1) a reduced explanations and inference prompts or (2) a reduced explanations and inference prompts plus adapted remedial explanations and revision prompts condition. In support of the constructive learning hypothesis, the learners who received

  7. Hierarchical structure for audio-video based semantic classification of sports video sequences

    Science.gov (United States)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to the event classifications in other games, those of cricket are very challenging and yet unexplored. We have successfully solved cricket video classification problem using a six level hierarchical structure. The first level performs event detection based on audio energy and Zero Crossing Rate (ZCR) of short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP) using color or motion as a likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sports. Our results are very promising and we have moved a step forward towards addressing semantic classification problems in general.
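    The first-level audio features named above, short-time energy and zero-crossing rate, are straightforward to compute. A small illustrative sketch, with the event threshold values chosen arbitrarily rather than taken from the paper:

    ```python
    import math

    def short_time_energy(frame):
        # average power of the samples in one analysis window
        return sum(s * s for s in frame) / len(frame)

    def zero_crossing_rate(frame):
        # fraction of adjacent sample pairs whose signs differ
        return sum((a >= 0) != (b >= 0) for a, b in zip(frame, frame[1:])) / (len(frame) - 1)

    # a loud, high-frequency burst (e.g. crowd/commentary excitement) vs. near-silence
    n = 200
    loud  = [math.sin(2 * math.pi * 30 * t / n) for t in range(n)]
    quiet = [0.01 * math.sin(2 * math.pi * 2 * t / n) for t in range(n)]

    # flag a candidate event when both features exceed (illustrative) thresholds
    is_event = short_time_energy(loud) > 0.1 and zero_crossing_rate(loud) > 0.05
    print(is_event)  # → True
    ```

    Windows flagged this way would then be passed to the video-based HMM levels of the hierarchy for finer classification.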

  8. 31 CFR 10.23 - Prompt disposition of pending matters.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Prompt disposition of pending matters. 10.23 Section 10.23 Money and Finance: Treasury Office of the Secretary of the Treasury PRACTICE... Revenue Service § 10.23 Prompt disposition of pending matters. A practitioner may not unreasonably delay...

  9. Video game addiction, ADHD symptomatology, and video game reinforcement.

    Science.gov (United States)

    Mathews, Christine L; Morrell, Holly E R; Molle, Jon E

    2018-06-06

    Up to 23% of people who play video games report symptoms of addiction. Individuals with attention deficit hyperactivity disorder (ADHD) may be at increased risk for video game addiction, especially when playing games with more reinforcing properties. The current study tested whether level of video game reinforcement (type of game) places individuals with greater ADHD symptom severity at higher risk for developing video game addiction. Adult video game players (N = 2,801; Mean age = 22.43, SD = 4.70; 93.30% male; 82.80% Caucasian) completed an online survey. Hierarchical multiple linear regression analyses were used to test type of game, ADHD symptom severity, and the interaction between type of game and ADHD symptomatology as predictors of video game addiction severity, after controlling for age, gender, and weekly time spent playing video games. ADHD symptom severity was positively associated with increased addiction severity (b = .73 and .68, ps < .05). The relationship between ADHD symptom severity and addiction severity did not depend on the type of video game played or preferred most, ps > .05. Gamers who have greater ADHD symptom severity may be at greater risk for developing symptoms of video game addiction and its negative consequences, regardless of type of video game played or preferred most. Individuals who report ADHD symptomatology and also identify as gamers may benefit from psychoeducation about the potential risk for problematic play.
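    The hierarchical (nested-models) logic of such a moderation test can be sketched on synthetic data: fit a model with the main effects, then add the interaction term and compare R². The data, coefficients, and variable names below are invented for illustration and have no relation to the study's dataset:

    ```python
    def solve(A, b):
        # Gaussian elimination for the small normal-equation system
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(n):
                if r != c and M[r][c]:
                    f = M[r][c] / M[c][c]
                    for k in range(c, n + 1):
                        M[r][k] -= f * M[c][k]
        return [M[i][n] / M[i][i] for i in range(n)]

    def r_squared(X, y):
        # ordinary least squares via the normal equations, then R^2
        n, p = len(X), len(X[0])
        XtX = [[sum(X[t][i] * X[t][j] for t in range(n)) for j in range(p)] for i in range(p)]
        Xty = [sum(X[t][i] * y[t] for t in range(n)) for i in range(p)]
        beta = solve(XtX, Xty)
        yhat = [sum(b * x for b, x in zip(beta, row)) for row in X]
        ybar = sum(y) / n
        ss_res = sum((a - f) ** 2 for a, f in zip(y, yhat))
        ss_tot = sum((a - ybar) ** 2 for a in y)
        return 1.0 - ss_res / ss_tot

    # invented scores: ADHD severity, game type (0/1), and an outcome that
    # truly contains an interaction effect
    adhd = [0.0, 1.0, 2.0, 3.0, 0.0, 1.0, 2.0, 3.0]
    game = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
    y = [a + 0.5 * g + 0.8 * a * g for a, g in zip(adhd, game)]

    step1 = r_squared([[1.0, a, g] for a, g in zip(adhd, game)], y)         # main effects only
    step2 = r_squared([[1.0, a, g, a * g] for a, g in zip(adhd, game)], y)  # + interaction
    print(step2 > step1)  # → True: the interaction term adds explained variance
    ```

    In the study above the analogous ΔR² for the interaction step was not significant, which is what "the relationship did not depend on type of game" means operationally.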

  10. Multi Elemental Study Using Prompt Gamma Technique

    International Nuclear Information System (INIS)

    Normanshah Dahing; Muhamad Samudi Yasir; Normanshah Dahing; Hanafi Ithnin; Mohd Fitri Abdul Rahman; Hearie Hassan

    2016-01-01

    In this study, the principle of prompt gamma neutron activation analysis has been used as a technique to determine the elements in a sample. The system consists of a collimated isotopic neutron source, Cf-252, with an HPGe detector and a multichannel analyzer (MCA). Concrete samples of size 10x10x10 cm^3 and 15x15x15 cm^3 were analysed. When neutrons enter and interact with elements in the concrete, neutron capture reactions occur and produce characteristic prompt gamma rays of those elements. The preliminary results of this study identified the major elements in the concrete, such as Si, Mg, Ca, Al, Fe, and H, as well as other elements such as Cl, by analysing their respective gamma-ray lines. The results obtained were compared with computer simulation, NAA, and XRF for reference and validation. The potential and capability of neutron-induced prompt gamma rays as a tool for qualitative multi-elemental analysis of the elements present in concrete samples are discussed. (author)

  11. Prompt Gamma Ray Analysis of Soil Samples

    Energy Technology Data Exchange (ETDEWEB)

    Naqvi, A.A.; Khiari, F.Z.; Haseeb, S.M.A.; Hussein, Tanvir; Khateeb-ur-Rehman [Department of Physics, King Fahd University of Petroleum and Minerals, Dhahran (Saudi Arabia); Isab, A.H. [Department of Chemistry, King Fahd University of Petroleum and Minerals, Dhahran (Saudi Arabia)

    2015-07-01

    Neutron moderation effects were measured in bulk soil samples through prompt gamma ray measurements from water and benzene contaminated soil samples using 14 MeV neutron inelastic scattering. The prompt gamma rays were measured using a cylindrical 76 mm x 76 mm (diameter x height) LaBr{sub 3}:Ce detector. Since neutron moderation effects strongly depend upon hydrogen concentration of the sample, for comparison purposes, moderation effects were studied from samples containing different hydrogen concentrations. The soil samples with different hydrogen concentration were prepared by mixing soil with water as well as benzene in different weight proportions. Then, the effects of increasing water and benzene concentrations on the yields of hydrogen, carbon and silicon prompt gamma rays were measured. Moderation effects are more pronounced in soil samples mixed with water as compared to those from soil samples mixed with benzene. This is due to the fact that benzene contaminated soil samples have about 30% less hydrogen concentration by weight than the water contaminated soil samples. Results of the study will be presented. (authors)

  12. A Method for Estimating Surveillance Video Georeferences

    Directory of Open Access Journals (Sweden)

    Aleksandar Milosavljević

    2017-07-01

    Full Text Available The integration of a surveillance camera video with a three-dimensional (3D) geographic information system (GIS) requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM) of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., position and orientation of the camera.
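
The pose-from-matched-points step can be sketched as a nonlinear least-squares problem. The sketch below is illustrative, not the paper's implementation: it assumes a simple pinhole camera with a known focal length and synthetic 3D/2D correspondences, and uses SciPy's Levenberg–Marquardt solver; in the paper the 3D coordinates would come from orthophotos and a DEM.

```python
import numpy as np
from scipy.optimize import least_squares

def rotation(yaw, pitch, roll):
    """World-to-camera rotation from three Euler angles (Z-Y-X order)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def project(params, pts3d, focal=1000.0):
    """Pinhole projection of world points under pose
    params = [x, y, z, yaw, pitch, roll]; focal length assumed known."""
    cam = (pts3d - params[:3]) @ rotation(*params[3:])
    return focal * cam[:, :2] / cam[:, 2:3]

def residuals(params, pts3d, uv):
    """Reprojection error, flattened for least_squares."""
    return (project(params, pts3d) - uv).ravel()

# Synthetic check: recover a known pose from 8 matched point features.
true_pose = np.array([10.0, 5.0, 50.0, 0.2, -0.1, 0.05])
rng = np.random.default_rng(0)
pts3d = rng.uniform(-20, 20, (8, 3))
pts3d[:, 2] += 200.0                 # keep points well in front of the camera
uv = project(true_pose, pts3d)       # "matched" video frame coordinates

guess = true_pose + np.array([5.0, -5.0, 10.0, 0.1, 0.1, -0.1])
fit = least_squares(residuals, guess, args=(pts3d, uv), method="lm")
```

Starting from a rough initial guess, the solver drives the reprojection error to zero and recovers the camera position and orientation.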

  13. Study of Temporal Effects on Subjective Video Quality of Experience.

    Science.gov (United States)

    Bampis, Christos George; Zhi Li; Moorthy, Anush Krishna; Katsavounidis, Ioannis; Aaron, Anne; Bovik, Alan Conrad

    2017-11-01

    HTTP adaptive streaming is being increasingly deployed by network content providers, such as Netflix and YouTube. By dividing video content into data chunks encoded at different bitrates, a client is able to request the appropriate bitrate for the segment to be played next based on the estimated network conditions. However, this can introduce a number of impairments, including compression artifacts and rebuffering events, which can severely impact an end-user's quality of experience (QoE). We have recently created a new video quality database, which simulates a typical video streaming application, using long video sequences and interesting Netflix content. Going beyond previous efforts, the new database contains highly diverse and contemporary content, and it includes the subjective opinions of a sizable number of human subjects regarding the effects on QoE of both rebuffering and compression distortions. We observed that rebuffering is always obvious and unpleasant to subjects, while bitrate changes may be less obvious due to content-related dependencies. Transient bitrate drops were preferable over rebuffering only on low complexity video content, while consistently low bitrates were poorly tolerated. We evaluated different objective video quality assessment algorithms on our database and found that objective video quality models are unreliable for QoE prediction on videos suffering from both rebuffering events and bitrate changes. This implies the need for more general QoE models that take into account objective quality models, rebuffering-aware information, and memory. The publicly available video content as well as metadata for all of the videos in the new database can be found at http://live.ece.utexas.edu/research/LIVE_NFLXStudy/nflx_index.html.

  14. Iterative method for obtaining the prompt and delayed alpha-modes of the diffusion equation

    International Nuclear Information System (INIS)

    Singh, K.P.; Degweker, S.B.; Modak, R.S.; Singh, Kanchhi

    2011-01-01

    Highlights: → A method for obtaining α-modes of the neutron diffusion equation has been developed. → The difference between the prompt and delayed modes is more pronounced for the higher modes. → Prompt and delayed modes differ more in the reflector region. - Abstract: Higher modes of the neutron diffusion equation are required in some applications such as second-order perturbation theory and modal kinetics. In an earlier paper we discussed a method for computing the α-modes of the diffusion equation, under the assumption that all neutrons are prompt. The present paper describes an extension of the method for finding the α-modes of the diffusion equation with the inclusion of delayed neutrons. Such modes are particularly suitable for expanding the time-dependent flux when describing reactor transients. The method is illustrated by applying it to a three-dimensional heavy water reactor model problem. The problem is solved in two and three neutron energy groups, and with one and six delayed neutron groups. The results show that while the delayed α-modes are similar to λ-modes, they are quite different from the prompt modes. The difference grows progressively larger for higher modes.
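
The contrast between prompt and delayed α-modes can be illustrated with a one-point-kinetics toy problem, a drastic simplification of the space-energy diffusion problem the paper actually solves. The parameter values below are illustrative assumptions, with a single delayed-neutron group:

```python
import numpy as np

# One-point reactor model, one delayed-neutron group (illustrative values):
# reactivity, delayed fraction, generation time (s), precursor decay const (1/s)
rho, beta, Lam, lam = -0.001, 0.0065, 1.0e-4, 0.08

# Kinetics matrix acting on [n, C]:
#   dn/dt = ((rho - beta)/Lam) n + lam C
#   dC/dt = (beta/Lam) n - lam C
A = np.array([[(rho - beta) / Lam,  lam],
              [ beta / Lam,        -lam]])

alphas = np.sort(np.linalg.eigvals(A).real)   # alpha-modes with delayed neutrons
prompt_branch = (rho - beta) / Lam            # fast root, dominated by prompt neutrons
# alphas[1] is the slow, delayed-dominated fundamental mode (behaving like the
# lambda-mode); alphas[0] lies close to the prompt branch, orders of magnitude faster.
```

Even this 2x2 system shows the paper's qualitative point: the delayed fundamental mode is slow and λ-mode-like, while the prompt-branch root is entirely different in magnitude.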

  15. Co-viewing supports toddlers' word learning from contingent and noncontingent video.

    Science.gov (United States)

    Strouse, Gabrielle A; Troseth, Georgene L; O'Doherty, Katherine D; Saylor, Megan M

    2018-02-01

    Social cues are one way young children determine that a situation is pedagogical in nature, containing information to be learned and generalized. However, some social cues (e.g., contingent gaze and responsiveness) are missing from prerecorded video, a potential reason why toddlers' language learning from video can be inefficient compared with their learning directly from a person. This study explored two methods for supporting children's word learning from video by adding social-communicative cues. A sample of 88 30-month-olds began their participation with a video training phase. In one manipulation, an on-screen actress responded contingently to children through a live video feed (similar to Skype or FaceTime "video chat") or appeared in a prerecorded demonstration. In the other manipulation, parents either modeled responsiveness to the actress's on-screen bids for participation or sat out of their children's view. Children then viewed a labeling demonstration on video, and their knowledge of the label was tested with three-dimensional objects. Results indicated that both on-screen contingency and parent modeling increased children's engagement with the actress during training. However, only parent modeling increased children's subsequent word learning, perhaps by revealing the symbolic (representational) intentions underlying this video. This study highlights the importance of adult co-viewing in helping toddlers to interpret communicative cues from video. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. A Prompting Procedure for Increasing Sales in a Small Pet Store

    Science.gov (United States)

    Milligan, Jacqueline; Hantula, Donald A.

    2006-01-01

    A simple prompting procedure involving index cards was used to increase suggestive selling by the owner/operator of a small pet grooming business. Over a year of baseline data revealed that no sales prompts were given and few pet products were sold. When the owner was prompted by an index card to ask customers if they wanted to purchase pet…

  17. Scalable gastroscopic video summarization via similar-inhibition dictionary selection.

    Science.gov (United States)

    Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin

    2016-01-01

    This paper aims at developing an automated gastroscopic video summarization algorithm to assist clinicians to more effectively go through the abnormal contents of the video. To select the most representative frames from the original video sequence, we formulate the problem of gastroscopic video summarization as a dictionary selection issue. Different from traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces the similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select the frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of poor-quality images and a segmentation process to reduce the computational complexity. For experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with state-of-the-art methods using the content consistency, index consistency and content-index consistency with the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated based on content consistency, 24 of 30 videos evaluated based on index consistency and all videos evaluated based on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model achieves better performance than other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.
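
A minimal greedy sketch of the similar-inhibition idea follows; it is not the paper's actual dictionary-selection optimization (which also incorporates attention cost and reconstruction terms). Each pick favours frames that cover many others while penalizing similarity to frames already selected; the `beta` weight and the toy two-scene features are assumptions for illustration.

```python
import numpy as np

def select_key_frames(features, k, beta=1.0):
    """Greedy key-frame selection: reward coverage of the whole sequence,
    inhibit frames similar to ones already chosen (similar-inhibition)."""
    F = np.asarray(features, float)
    F = F / np.linalg.norm(F, axis=1, keepdims=True)
    S = F @ F.T                              # cosine similarity between frames
    n = len(F)
    selected = []
    for _ in range(k):
        best, best_score = -1, -np.inf
        for i in range(n):
            if i in selected:
                continue
            coverage = S[i].sum()            # how well frame i represents the rest
            redundancy = max((S[i, j] for j in selected), default=0.0)
            score = coverage - beta * n * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

# Two visually distinct "scenes": a good summary should span both.
frames = [[1.0, 0.0], [0.99, 0.10], [0.98, 0.05],
          [0.0, 1.0], [0.10, 0.99], [0.05, 0.98]]
keys = select_key_frames(frames, k=2)
```

With the inhibition term active, the second pick comes from the other scene rather than duplicating the first.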

  18. The Effects of Mental Imagery with Video-Modeling on Self-Efficacy and Maximal Front Squat Ability

    Directory of Open Access Journals (Sweden)

    Daniel J. M. Buck

    2016-04-01

    Full Text Available This study was designed to assess the effectiveness of mental imagery supplemented with video-modeling on self-efficacy and front squat strength (three repetition maximum; 3RM). Subjects (13 male, 7 female) who had at least 6 months of front squat experience were assigned to either an experimental (n = 10) or a control (n = 10) group. Subjects' 3RM and self-efficacy for the 3RM were measured at baseline. Following this, subjects in the experimental group followed a structured imagery protocol, incorporating video recordings of both their own 3RM performance and a model lifter with excellent technique, twice a day for three days. Subjects in the control group spent the same amount of time viewing a placebo video. Following three days with no physical training, measurements of front squat 3RM and self-efficacy for the 3RM were repeated. Subjects in the experimental group increased in self-efficacy following the intervention, and showed greater 3RM improvement than those in the control group. Self-efficacy was found to significantly mediate the relationship between imagery and front squat 3RM. These findings point to the importance of mental skills training for the enhancement of self-efficacy and front squat performance.

  19. The Effects of Mental Imagery with Video-Modeling on Self-Efficacy and Maximal Front Squat Ability.

    Science.gov (United States)

    Buck, Daniel J M; Hutchinson, Jasmin C; Winter, Christa R; Thompson, Brian A

    2016-04-14

    This study was designed to assess the effectiveness of mental imagery supplemented with video-modeling on self-efficacy and front squat strength (three repetition maximum; 3RM). Subjects (13 male, 7 female) who had at least 6 months of front squat experience were assigned to either an experimental (n = 10) or a control (n = 10) group. Subjects' 3RM and self-efficacy for the 3RM were measured at baseline. Following this, subjects in the experimental group followed a structured imagery protocol, incorporating video recordings of both their own 3RM performance and a model lifter with excellent technique, twice a day for three days. Subjects in the control group spent the same amount of time viewing a placebo video. Following three days with no physical training, measurements of front squat 3RM and self-efficacy for the 3RM were repeated. Subjects in the experimental group increased in self-efficacy following the intervention, and showed greater 3RM improvement than those in the control group. Self-efficacy was found to significantly mediate the relationship between imagery and front squat 3RM. These findings point to the importance of mental skills training for the enhancement of self-efficacy and front squat performance.

  20. High-Speed Video Analysis in a Conceptual Physics Class

    Science.gov (United States)

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software.2,3 Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be easily captured at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper will discuss using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this will involve the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
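
The boost-phase acceleration described above can be estimated from frame-by-frame positions with a second-order central difference. This is a generic sketch, not the article's analysis; the 240 fps rate and constant-acceleration trajectory below are assumed for illustration.

```python
import numpy as np

def frame_acceleration(positions, fps):
    """Second-order central difference on per-frame positions:
    a_i = (y[i+1] - 2 y[i] + y[i-1]) / dt^2, with dt = 1/fps."""
    dt = 1.0 / fps
    y = np.asarray(positions, dtype=float)
    return (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dt**2

# Synthetic boost phase: constant 50 m/s^2 acceleration filmed at 240 fps.
t = np.arange(24) / 240.0
height = 0.5 * 50.0 * t**2
acc = frame_acceleration(height, fps=240)
```

The central difference recovers a constant-acceleration motion exactly; with real tracked positions, the per-frame estimates would scatter around the true value and are usually smoothed or fit.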

  1. Dimensioning Method for Conversational Video Applications in Wireless Convergent Networks

    Directory of Open Access Journals (Sweden)

    Raquel Perez Leal

    2007-12-01

    Full Text Available New convergent services are becoming possible, thanks to the expansion of IP networks based on the availability of innovative advanced coding formats such as H.264, which reduce network bandwidth requirements providing good video quality, and the rapid growth in the supply of dual-mode WiFi cellular terminals. This paper provides, first, a comprehensive subject overview as several technologies are involved, such as medium access protocol in IEEE802.11, H.264 advanced video coding standards, and conversational application characterization and recommendations. Second, the paper presents a new and simple dimensioning model of conversational video over wireless LAN. WLAN is addressed under the optimal network throughput and the perspective of video quality. The maximum number of simultaneous users resulting from throughput is limited by the collisions taking place in the shared medium with the statistical contention protocol. The video quality is conditioned by the packet loss in the contention protocol. Both approaches are analyzed within the scope of the advanced video codecs used in conversational video over IP, to conclude that conversational video dimensioning based on network throughput is not enough to ensure a satisfactory user experience, and video quality also has to be taken into account. Finally, the proposed model has been applied to a real-office scenario.
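
A throughput-only bound of the kind the paper argues is insufficient can be sketched in a few lines. The figures below (effective MAC throughput, per-direction stream rate) are illustrative assumptions, not values from the paper, whose point is precisely that quality under contention losses must also be checked.

```python
def max_conversational_users(mac_throughput_mbps, stream_rate_mbps,
                             activity=1.0):
    """Throughput-only upper bound on simultaneous conversational video
    users: each bidirectional call carries two streams (up and down)
    over the shared WLAN medium. Contention-induced packet loss can
    reduce the real capacity below this bound."""
    per_call_mbps = 2 * stream_rate_mbps * activity
    return int(mac_throughput_mbps // per_call_mbps)

# Illustrative figures: ~20 Mb/s effective MAC throughput and an H.264
# conversational stream at 384 kb/s per direction.
users = max_conversational_users(20.0, 0.384)
```

The bound is optimistic by construction; a quality-aware dimensioning would then reject operating points where contention loss degrades the decoded video.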

  2. Dimensioning Method for Conversational Video Applications in Wireless Convergent Networks

    Directory of Open Access Journals (Sweden)

    Alonso JoséI

    2008-01-01

    Full Text Available New convergent services are becoming possible, thanks to the expansion of IP networks based on the availability of innovative advanced coding formats such as H.264, which reduce network bandwidth requirements providing good video quality, and the rapid growth in the supply of dual-mode WiFi cellular terminals. This paper provides, first, a comprehensive subject overview as several technologies are involved, such as medium access protocol in IEEE802.11, H.264 advanced video coding standards, and conversational application characterization and recommendations. Second, the paper presents a new and simple dimensioning model of conversational video over wireless LAN. WLAN is addressed under the optimal network throughput and the perspective of video quality. The maximum number of simultaneous users resulting from throughput is limited by the collisions taking place in the shared medium with the statistical contention protocol. The video quality is conditioned by the packet loss in the contention protocol. Both approaches are analyzed within the scope of the advanced video codecs used in conversational video over IP, to conclude that conversational video dimensioning based on network throughput is not enough to ensure a satisfactory user experience, and video quality also has to be taken into account. Finally, the proposed model has been applied to a real-office scenario.

  3. Feynman-α correlation analysis by prompt-photon detection

    International Nuclear Information System (INIS)

    Hashimoto, Kengo; Yamada, Sumasu; Hasegawa, Yasuhiro; Horiguchi, Tetsuo

    1998-01-01

    Two-detector Feynman-α measurements were carried out using the UTR-KINKI reactor, a light-water-moderated and graphite-reflected reactor, by detecting high-energy prompt gamma rays. For comparison, conventional measurements by neutron detection were also performed. These measurements were carried out in the subcriticality range from 0 to $1.8. The gate-time dependence of the variance- and covariance-to-mean ratios measured by gamma-ray detection was nearly identical with that obtained using standard neutron-detection techniques. Consequently, the prompt-neutron decay constants inferred from the gamma-ray correlation data agreed with those from the neutron data. Furthermore, the correlated-to-uncorrelated amplitude ratios obtained by gamma-ray detection depended significantly on the low-energy discriminator level of the single-channel analyzer. The discriminator level was set at the optimum for obtaining a maximum value of the amplitude ratio. The maximum amplitude ratio was much larger than that obtained by neutron detection. The subcriticality dependence of the decay constant obtained by gamma-ray detection was consistent with that obtained by neutron detection and followed the linear relation based on the one-point kinetic model in the vicinity of delayed critical. These experimental results suggest that the gamma-ray correlation technique can be applied to measure reactor kinetic parameters more efficiently.
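
The Feynman-α technique itself reduces to computing a variance-to-mean ratio from gated counts and fitting the standard one-point-kinetics gate-time curve for the prompt decay constant. A generic sketch follows (synthetic noiseless data and an illustrative α of 200 1/s, not the UTR-KINKI measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def feynman_y(counts):
    """Variance-to-mean ratio minus one for a sequence of gated counts
    (zero for a purely Poissonian, uncorrelated source)."""
    c = np.asarray(counts, float)
    return c.var(ddof=1) / c.mean() - 1.0

def y_model(T, A, alpha):
    """One-point-kinetics gate-time dependence of the Feynman Y:
    Y(T) = A * (1 - (1 - exp(-alpha*T)) / (alpha*T))."""
    return A * (1.0 - (1.0 - np.exp(-alpha * T)) / (alpha * T))

# Synthetic Y(T) curve for alpha = 200 1/s, then recover alpha by fitting.
T = np.linspace(1e-3, 50e-3, 25)
Y = y_model(T, 0.8, 200.0)
(A_fit, alpha_fit), _ = curve_fit(y_model, T, Y, p0=(1.0, 100.0))
```

In practice `feynman_y` would be evaluated from measured counts at many gate widths, and the fit yields the prompt-neutron decay constant compared in the abstract.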

  4. Enhancement system of nighttime infrared video image and visible video image

    Science.gov (United States)

    Wang, Yue; Piao, Yan

    2016-11-01

    Visibility of nighttime video images is of great significance for military and medical applications, but nighttime video images have such poor quality that the target cannot be distinguished from the background. We therefore enhance nighttime video by fusing infrared video images with visible video images. According to the characteristics of infrared and visible images, we propose an improved SIFT algorithm and an αβ-weighted algorithm to fuse the heterologous nighttime images. A transfer matrix is derived from the improved SIFT algorithm; it rapidly registers the heterologous nighttime images, and the αβ-weighted algorithm can be applied in any scene. In the video image fusion system, we used the transfer matrix to register every frame and then used the αβ-weighted method to fuse every frame, which meets the real-time requirement of video. The fused video not only retains the clear target information of the infrared video image, but also retains the detail and color information of the visible video image, and plays back smoothly.
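
Once the frames are registered, the αβ-weighted fusion step is a pixel-wise weighted average. A minimal sketch (the fixed global `alpha` is an assumption; the abstract applies the weighting per scene after SIFT-based registration):

```python
import numpy as np

def alpha_beta_fuse(ir, vis, alpha=0.5):
    """Pixel-wise weighted fusion of registered infrared and visible
    frames: fused = alpha * IR + beta * VIS, with beta = 1 - alpha."""
    beta = 1.0 - alpha
    fused = alpha * ir.astype(float) + beta * vis.astype(float)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Toy frames: a bright IR frame and a darker visible frame.
ir = np.full((4, 4), 200, dtype=np.uint8)
vis = np.full((4, 4), 100, dtype=np.uint8)
fused = alpha_beta_fuse(ir, vis, alpha=0.6)
```

Applying this per frame, after warping one stream by the transfer matrix, gives the fused video described above.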

  5. Prompt Burst Energetics in the oxide/sodium system

    International Nuclear Information System (INIS)

    Reil, K.O.; Young, M.F.

    1979-01-01

    A series of twelve Prompt Burst Energetics (PBE) experiments utilizing fresh uranium dioxide fuel pins in stagnant sodium coolant has been performed in Sandia Laboratories' Annular Core Pulse Reactor (ACPR). Results and analysis described in the paper include: observation of FCIs (pressures up to 32 MPa) in the UO{sub 2}/Na system, some apparently triggered by small pressure transients (2 MPa); prediction of failure times via the pin model EXPAND; observed thermal-to-mechanical energy conversion ratios up to approximately 0.4%; and identification of potential reactivity effects caused by the pre- and post-failure motion of fuel.

  6. Can Video Self-Modeling Improve Affected Limb Reach and Grasp Ability in Stroke Patients?

    Science.gov (United States)

    Steel, Kylie Ann; Mudie, Kurt; Sandoval, Remi; Anderson, David; Dogramaci, Sera; Rehmanjan, Mohammad; Birznieks, Ingvars

    2018-01-01

    The authors examined whether feedforward video self-modeling (FF VSM) would improve control over the affected limb, movement self-confidence, movement self-consciousness, and well-being in 18 stroke survivors. Participants completed a cup transport task and 2 questionnaires related to psychological processes pre- and postintervention. Pretest video footage of the unaffected limb performing the task was edited to create a best-of or mirror-reversed training DVD, creating the illusion that patients were performing proficiently with the affected limb. The training yielded significant improvements for the forward movement of the affected limb compared to the unaffected limb. Significant improvements were also seen in movement self-confidence, movement self-consciousness, and well-being. FF VSM appears to be a viable way to improve motor ability in populations with movement disorders.

  7. The Twist Tensor Nuclear Norm for Video Completion.

    Science.gov (United States)

    Hu, Wenrui; Tao, Dacheng; Zhang, Wensheng; Xie, Yuan; Yang, Yehui

    2017-12-01

    In this paper, we propose a new low-rank tensor model based on the circulant algebra, namely, the twist tensor nuclear norm (t-TNN). The twist tensor denotes a three-way tensor representation that laterally stores 2-D data slices in order. On one hand, t-TNN convexly relaxes the tensor multirank of the twist tensor in the Fourier domain, which allows an efficient computation using the fast Fourier transform. On the other hand, t-TNN is equal to the nuclear norm of the block circulant matricization of the twist tensor in the original domain, which extends the traditional matrix nuclear norm in a block circulant way. We test the t-TNN model on a video completion application that aims to fill in missing values, and the experimental results validate its effectiveness, especially when dealing with video recorded by a nonstationary panning camera. The block circulant matricization of the twist tensor can be transformed into a circulant block representation with nuclear norm invariance. This representation, after transformation, exploits the horizontal translation relationship between the frames in a video, and endows the t-TNN model with a more powerful ability to reconstruct panning videos than the existing state-of-the-art low-rank models.
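
The Fourier-domain computation of a tensor nuclear norm of this kind can be sketched in a few lines. This follows one common convention (sum of matrix nuclear norms of the Fourier-domain frontal slices, normalized by the third dimension) and is not necessarily the exact t-TNN normalization used in the paper:

```python
import numpy as np

def tensor_nuclear_norm(X):
    """Nuclear norm of a 3-way tensor under the t-product: FFT along
    the third mode, then sum the matrix nuclear norms of the frontal
    slices in the Fourier domain, divided by the third dimension."""
    Xf = np.fft.fft(X, axis=2)
    n3 = X.shape[2]
    return sum(np.linalg.norm(Xf[:, :, k], 'nuc') for k in range(n3)) / n3

# Sanity check: a tensor whose frontal slices are all the same rank-1
# matrix concentrates in the zero-frequency slice after the FFT, so its
# norm equals the nuclear norm of that matrix (here 1).
M = np.outer([1.0, 0.0], [1.0, 0.0])
X = np.stack([M] * 4, axis=2)
val = tensor_nuclear_norm(X)
```

The FFT reduces the tensor norm to independent per-slice matrix SVDs, which is what makes the relaxation efficient to evaluate inside a completion solver.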

  8. Prompt Neutron Lifetime for the NBSR Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Hanson, A.L.; Diamond, D.

    2012-06-24

    In preparation for the proposed conversion of the National Institute of Standards and Technology (NIST) research reactor (NBSR) from high-enriched uranium (HEU) to low-enriched uranium (LEU) fuel, certain point kinetics parameters must be calculated. We report here values of the prompt neutron lifetime that have been calculated using three independent methods. All three sets of calculations demonstrate that the prompt neutron lifetime is shorter for the LEU fuel when compared to the HEU fuel and longer for the equilibrium end-of-cycle (EOC) condition when compared to the equilibrium startup (SU) condition for both the HEU and LEU fuels.

  9. Prompt and non-prompt $J/\\psi$ and $\\psi(2\\mathrm{S})$ suppression at high transverse momentum in 5.02 TeV Pb+Pb collisions with the ATLAS experiment arXiv

    CERN Document Server

    Aaboud, Morad; Abbott, Brad; Abdinov, Ovsat; Abeloos, Baptiste; Abidi, Syed Haider; AbouZeid, Ossama; Abraham, Nicola; Abramowicz, Halina; Abreu, Henso; Abulaiti, Yiming; Acharya, Bobby Samir; Adachi, Shunsuke; Adamczyk, Leszek; Adelman, Jahred; Adersberger, Michael; Adye, Tim; Affolder, Tony; Afik, Yoav; Agheorghiesei, Catalin; Aguilar-Saavedra, Juan Antonio; Ahmadov, Faig; Aielli, Giulio; Akatsuka, Shunichi; Åkesson, Torsten Paul Ake; Akilli, Ece; Akimov, Andrei; Alberghi, Gian Luigi; Albert, Justin; Albicocco, Pietro; Alconada Verzini, Maria Josefina; Alderweireldt, Sara; Aleksa, Martin; Aleksandrov, Igor; Alexa, Calin; Alexander, Gideon; Alexopoulos, Theodoros; Alhroob, Muhammad; Ali, Babar; Aliev, Malik; Alimonti, Gianluca; Alison, John; Alkire, Steven Patrick; Allaire, Corentin; Allbrooke, Benedict; Allen, Benjamin William; Allport, Phillip; Aloisio, Alberto; Alonso, Alejandro; Alonso, Francisco; Alpigiani, Cristiano; Alshehri, Azzah Aziz; Alstaty, Mahmoud; Alvarez Gonzalez, Barbara; Álvarez Piqueras, Damián; Alviggi, Mariagrazia; Amadio, Brian Thomas; Amaral Coutinho, Yara; Ambroz, Luca; Amelung, Christoph; Amidei, Dante; Amor Dos Santos, Susana Patricia; Amoroso, Simone; Amrouche, Cherifa Sabrina; Anastopoulos, Christos; Ancu, Lucian Stefan; Andari, Nansi; Andeen, Timothy; Anders, Christoph Falk; Anders, John Kenneth; Anderson, Kelby; Andreazza, Attilio; Andrei, George Victor; Angelidakis, Stylianos; Angelozzi, Ivan; Angerami, Aaron; Anisenkov, Alexey; Annovi, Alberto; Antel, Claire; Anthony, Matthew; Antonelli, Mario; Antrim, Daniel Joseph; Anulli, Fabio; Aoki, Masato; Aperio Bella, Ludovica; Arabidze, Giorgi; Arai, Yasuo; Araque, Juan Pedro; Araujo Ferraz, Victor; Araujo Pereira, Rodrigo; Arce, Ayana; Ardell, Rose Elisabeth; Arduh, Francisco Anuar; Arguin, Jean-Francois; Argyropoulos, Spyridon; Armbruster, Aaron James; Armitage, Lewis James; Arnaez, Olivier; Arnold, Hannah; Arratia, Miguel; Arslan, Ozan; Artamonov, Andrei; Artoni, Giacomo; Artz, 
Sebastian; Asai, Shoji; Asbah, Nedaa; Ashkenazi, Adi; Asimakopoulou, Eleni Myrto; Asquith, Lily; Assamagan, Ketevi; Astalos, Robert; Atkin, Ryan Justin; Atkinson, Markus; Atlay, Naim Bora; Augsten, Kamil; Avolio, Giuseppe; Avramidou, Rachel Maria; Axen, Bradley; Ayoub, Mohamad Kassem; Azuelos, Georges; Baas, Alessandra; Baca, Matthew John; Bachacou, Henri; Bachas, Konstantinos; Backes, Moritz; Bagnaia, Paolo; Bahmani, Marzieh; Bahrasemani, Sina; Bailey, Adam; Baines, John; Bajic, Milena; Baker, Oliver Keith; Bakker, Pepijn Johannes; Bakshi Gupta, Debottam; Baldin, Evgenii; Balek, Petr; Balli, Fabrice; Balunas, William Keaton; Banas, Elzbieta; Bandyopadhyay, Anjishnu; Banerjee, Swagato; Bannoura, Arwa A E; Barak, Liron; Barbe, William Mickael; Barberio, Elisabetta Luigia; Barberis, Dario; Barbero, Marlon; Barillari, Teresa; Barisits, Martin-Stefan; Barkeloo, Jason Tyler Colt; Barklow, Timothy; Barlow, Nick; Barnea, Rotem; Barnes, Sarah Louise; Barnett, Bruce; Barnett, Michael; Barnovska-Blenessy, Zuzana; Baroncelli, Antonio; Barone, Gaetano; Barr, Alan; Barranco Navarro, Laura; Barreiro, Fernando; Barreiro Guimarães da Costa, João; Bartoldus, Rainer; Barton, Adam Edward; Bartos, Pavol; Basalaev, Artem; Bassalat, Ahmed; Bates, Richard; Batista, Santiago Juan; Batlamous, Souad; Batley, Richard; Battaglia, Marco; Bauce, Matteo; Bauer, Florian; Bauer, Kevin Thomas; Bawa, Harinder Singh; Beacham, James; Beattie, Michael David; Beau, Tristan; Beauchemin, Pierre-Hugues; Bechtle, Philip; Beck, Hans~Peter; Beck, Helge Christoph; Becker, Kathrin; Becker, Maurice; Becot, Cyril; Beddall, Andrew; Beddall, Ayda; Bednyakov, Vadim; Bedognetti, Matteo; Bee, Christopher; Beermann, Thomas; Begalli, Marcia; Begel, Michael; Behera, Arabinda; Behr, Janna Katharina; Bell, Andrew Stuart; Bella, Gideon; Bellagamba, Lorenzo; Bellerive, Alain; Bellomo, Massimiliano; Belotskiy, Konstantin; Belyaev, Nikita; Benary, Odette; Benchekroun, Driss; Bender, Michael; Benekos, Nektarios; Benhammou, Yan; 
Benhar Noccioli, Eleonora; Benitez, Jose; Benjamin, Douglas; Benoit, Mathieu; Bensinger, James; Bentvelsen, Stan; Beresford, Lydia; Beretta, Matteo; Berge, David; Bergeaas Kuutmann, Elin; Berger, Nicolas; Bergsten, Laura Jean; Beringer, Jürg; Berlendis, Simon; Bernard, Nathan Rogers; Bernardi, Gregorio; Bernius, Catrin; Bernlochner, Florian Urs; Berry, Tracey; Berta, Peter; Bertella, Claudia; Bertoli, Gabriele; Bertram, Iain Alexander; Bertsche, Carolyn; Besjes, Geert-Jan; Bessidskaia Bylund, Olga; Bessner, Martin Florian; Besson, Nathalie; Bethani, Agni; Bethke, Siegfried; Betti, Alessandra; Bevan, Adrian John; Beyer, Julien-christopher; Bianchi, Riccardo-Maria; Biebel, Otmar; Biedermann, Dustin; Bielski, Rafal; Bierwagen, Katharina; Biesuz, Nicolo Vladi; Biglietti, Michela; Billoud, Thomas Remy Victor; Bindi, Marcello; Bingul, Ahmet; Bini, Cesare; Biondi, Silvia; Bisanz, Tobias; Biswal, Jyoti Prakash; Bittrich, Carsten; Bjergaard, David Martin; Black, James; Black, Kevin; Blair, Robert; Blazek, Tomas; Bloch, Ingo; Blocker, Craig; Blue, Andrew; Blumenschein, Ulrike; Blunier, Sylvain; Bobbink, Gerjan; Bobrovnikov, Victor; Bocchetta, Simona Serena; Bocci, Andrea; Bock, Christopher; Boerner, Daniela; Bogavac, Danijela; Bogdanchikov, Alexander; Bohm, Christian; Boisvert, Veronique; Bokan, Petar; Bold, Tomasz; Boldyrev, Alexey; Bolz, Arthur Eugen; Bomben, Marco; Bona, Marcella; Bonilla, Johan Sebastian; Boonekamp, Maarten; Borisov, Anatoly; Borissov, Guennadi; Bortfeldt, Jonathan; Bortoletto, Daniela; Bortolotto, Valerio; Boscherini, Davide; Bosman, Martine; Bossio Sola, Jonathan David; Boudreau, Joseph; Bouhova-Thacker, Evelina Vassileva; Boumediene, Djamel Eddine; Bourdarios, Claire; Boutle, Sarah Kate; Boveia, Antonio; Boyd, James; Boyko, Igor; Bozson, Adam James; Bracinik, Juraj; Brahimi, Nihal; Brandt, Andrew; Brandt, Gerhard; Brandt, Oleg; Braren, Frued; Bratzler, Uwe; Brau, Benjamin; Brau, James; Breaden Madden, William Dmitri; Brendlinger, Kurt; Brennan, Amelia 
Jean; Brenner, Lydia; Brenner, Richard; Bressler, Shikma; Brickwedde, Bernard; Briglin, Daniel Lawrence; Bristow, Timothy Michael; Britton, Dave; Britzger, Daniel; Brock, Ian; Brock, Raymond; Brooijmans, Gustaaf; Brooks, Timothy; Brooks, William; Brost, Elizabeth; Broughton, James; Bruckman de Renstrom, Pawel; Bruncko, Dusan; Bruni, Alessia; Bruni, Graziano; Bruni, Lucrezia Stella; Bruno, Salvatore; Brunt, Benjamin; Bruschi, Marco; Bruscino, Nello; Bryant, Patrick; Bryngemark, Lene; Buanes, Trygve; Buat, Quentin; Buchholz, Peter; Buckley, Andrew; Budagov, Ioulian; Buehrer, Felix; Bugge, Magnar Kopangen; Bulekov, Oleg; Bullock, Daniel; Burch, Tyler James; Burdin, Sergey; Burgard, Carsten Daniel; Burger, Angela Maria; Burghgrave, Blake; Burka, Klaudia; Burke, Stephen; Burmeister, Ingo; Burr, Jonathan Thomas Peter; Büscher, Daniel; Büscher, Volker; Buschmann, Eric; Bussey, Peter; Butler, John; Buttar, Craig; Butterworth, Jonathan; Butti, Pierfrancesco; Buttinger, William; Buzatu, Adrian; Buzykaev, Aleksey; Cabras, Grazia; Cabrera Urbán, Susana; Caforio, Davide; Cai, Huacheng; Cairo, Valentina; Cakir, Orhan; Calace, Noemi; Calafiura, Paolo; Calandri, Alessandro; Calderini, Giovanni; Calfayan, Philippe; Callea, Giuseppe; Caloba, Luiz; Calvente Lopez, Sergio; Calvet, David; Calvet, Samuel; Calvet, Thomas Philippe; Calvetti, Milene; Camacho Toro, Reina; Camarda, Stefano; Camarri, Paolo; Cameron, David; Caminal Armadans, Roger; Camincher, Clement; Campana, Simone; Campanelli, Mario; Camplani, Alessandra; Campoverde, Angel; Canale, Vincenzo; Cano Bret, Marc; Cantero, Josu; Cao, Tingting; Cao, Yumeng; Capeans Garrido, Maria Del Mar; Caprini, Irinel; Caprini, Mihai; Capua, Marcella; Carbone, Ryne Michael; Cardarelli, Roberto; Cardillo, Fabio; Carli, Ina; Carli, Tancredi; Carlino, Gianpaolo; Carlson, Benjamin Taylor; Carminati, Leonardo; Carney, Rebecca; Caron, Sascha; Carquin, Edson; Carrá, Sonia; Carrillo-Montoya, German D; Casadei, Diego; Casado, Maria Pilar; Casha, Albert 
Francis; Casolino, Mirkoantonio; Casper, David William; Castelijn, Remco; Castillo Gimenez, Victoria; Castro, Nuno Filipe; Catinaccio, Andrea; Catmore, James; Cattai, Ariella; Caudron, Julien; Cavaliere, Viviana; Cavallaro, Emanuele; Cavalli, Donatella; Cavalli-Sforza, Matteo; Cavasinni, Vincenzo; Celebi, Emre; Ceradini, Filippo; Cerda Alberich, Leonor; Santiago Cerqueira, Augusto; Cerri, Alessandro; Cerrito, Lucio; Cerutti, Fabio; Cervelli, Alberto; Cetin, Serkant Ali; Chafaq, Aziz; Chakraborty, Dhiman; Chan, Stephen Kam-wah; Chan, Wing Sheung; Chan, Yat Long; Chang, Philip; Chapman, John Derek; Charlton, David; Chau, Chav Chhiv; Chavez Barajas, Carlos Alberto; Che, Siinn; Chegwidden, Andrew; Chekanov, Sergei; Chekulaev, Sergey; Chelkov, Gueorgui; Chelstowska, Magda Anna; Chen, Cheng; Chen, Chunhui; Chen, Hucheng; Chen, Jing; Chen, Jue; Chen, Shenjian; Chen, Shion; Chen, Xin; Chen, Ye; Chen, Yu-Heng; Cheng, Hok Chuen; Cheng, Huajie; Cheplakov, Alexander; Cheremushkina, Evgeniya; Cherkaoui El Moursli, Rajaa; Cheu, Elliott; Cheung, Kingman; Chevalier, Laurent; Chiarella, Vitaliano; Chiarelli, Giorgio; Chiodini, Gabriele; Chisholm, Andrew; Chitan, Adrian; Chiu, I-huan; Chiu, Yu Him Justin; Chizhov, Mihail; Choi, Kyungeon; Chomont, Arthur Rene; Chouridou, Sofia; Chow, Yun Sang; Christodoulou, Valentinos; Chu, Ming Chung; Chudoba, Jiri; Chuinard, Annabelle Julia; Chwastowski, Janusz; Chytka, Ladislav; Cinca, Diane; Cindro, Vladimir; Cioară, Irina Antonela; Ciocio, Alessandra; Cirotto, Francesco; Citron, Zvi Hirsh; Citterio, Mauro; Clark, Allan G; Clark, Michael; Clark, Philip James; Clarke, Robert; Clement, Christophe; Coadou, Yann; Cobal, Marina; Coccaro, Andrea; Cochran, James H; Coimbra, Artur Emanuel; Colasurdo, Luca; Cole, Brian; Colijn, Auke-Pieter; Collot, Johann; Conde Muiño, Patricia; Coniavitis, Elias; Connell, Simon Henry; Connelly, Ian; Constantinescu, Serban; Conventi, Francesco; Cooper-Sarkar, Amanda; Cormier, Felix; Cormier, Kyle James Read; Corradi, 
Massimo; Corrigan, Eric Edward; Corriveau, François; Cortes-Gonzalez, Arely; Costa, María José; Costanzo, Davide; Cottin, Giovanna; Cowan, Glen; Cox, Brian; Crane, Jonathan; Cranmer, Kyle; Crawley, Samuel Joseph; Creager, Rachael; Cree, Graham; Crépé-Renaudin, Sabine; Crescioli, Francesco; Cristinziani, Markus; Croft, Vince; Crosetti, Giovanni; Cueto, Ana; Cuhadar Donszelmann, Tulay; Cukierman, Aviv Ruben; Curatolo, Maria; Cúth, Jakub; Czekierda, Sabina; Czodrowski, Patrick; D'amen, Gabriele; D'Auria, Saverio; D'Eramo, Louis; D'Onofrio, Monica; Da Cunha Sargedas De Sousa, Mario Jose; Da Via, Cinzia; Dabrowski, Wladyslaw; Dado, Tomas; Dahbi, Salah-eddine; Dai, Tiesheng; Dale, Orjan; Dallaire, Frederick; Dallapiccola, Carlo; Dam, Mogens; Dandoy, Jeffrey; Daneri, Maria Florencia; Dang, Nguyen Phuong; Dann, Nick; Danninger, Matthias; Dao, Valerio; Darbo, Giovanni; Darmora, Smita; Dartsi, Olympia; Dattagupta, Aparajita; Daubney, Thomas; Davey, Will; David, Claire; Davidek, Tomas; Davis, Douglas; Dawe, Edmund; Dawson, Ian; De, Kaushik; de Asmundis, Riccardo; De Benedetti, Abraham; De Castro, Stefano; De Cecco, Sandro; De Groot, Nicolo; de Jong, Paul; De la Torre, Hector; De Lorenzi, Francesco; De Maria, Antonio; De Pedis, Daniele; De Salvo, Alessandro; De Sanctis, Umberto; De Santo, Antonella; De Vasconcelos Corga, Kevin; De Vivie De Regie, Jean-Baptiste; Debenedetti, Chiara; Dedovich, Dmitri; Dehghanian, Nooshin; Del Gaudio, Michela; Del Peso, Jose; Delgove, David; Deliot, Frederic; Delitzsch, Chris Malena; Dell'Acqua, Andrea; Dell'Asta, Lidia; Della Pietra, Massimo; della Volpe, Domenico; Delmastro, Marco; Delporte, Charles; Delsart, Pierre-Antoine; DeMarco, David; Demers, Sarah; Demichev, Mikhail; Denisov, Sergey; Denysiuk, Denys; Derendarz, Dominik; Derkaoui, Jamal Eddine; Derue, Frederic; Dervan, Paul; Desch, Klaus Kurt; Deterre, Cecile; Dette, Karola; Devesa, Maria Roberta; Deviveiros, Pier-Olivier; Dewhurst, Alastair; Dhaliwal, Saminder; Di Bello, Francesco 
Armando; Di Ciaccio, Anna; Di Ciaccio, Lucia; Di Clemente, William Kennedy; Di Donato, Camilla; Di Girolamo, Alessandro; Di Micco, Biagio; Di Nardo, Roberto; Di Petrillo, Karri Folan; Di Simone, Andrea; Di Sipio, Riccardo; Di Valentino, David; Diaconu, Cristinel; Diamond, Miriam; Dias, Flavia; Dias do Vale, Tiago; Diaz, Marco Aurelio; Dickinson, Jennet; Diehl, Edward; Dietrich, Janet; Díez Cornell, Sergio; Dimitrievska, Aleksandra; Dingfelder, Jochen; Dittus, Fridolin; Djama, Fares; Djobava, Tamar; Djuvsland, Julia Isabell; Barros do Vale, Maria Aline; Dobre, Monica; Dodsworth, David; Doglioni, Caterina; Dolejsi, Jiri; Dolezal, Zdenek; Donadelli, Marisilvia; Donini, Julien; Dopke, Jens; Doria, Alessandra; Dova, Maria-Teresa; Doyle, Tony; Drechsler, Eric; Dreyer, Etienne; Dreyer, Timo; Dris, Manolis; Du, Yanyan; Duarte-Campderros, Jorge; Dubinin, Filipp; Dubreuil, Arnaud; Duchovni, Ehud; Duckeck, Guenter; Ducourthial, Audrey; Ducu, Otilia Anamaria; Duda, Dominik; Dudarev, Alexey; Dudder, Andreas Christian; Duffield, Emily Marie; Duflot, Laurent; Dührssen, Michael; Dülsen, Carsten; Dumancic, Mirta; Dumitriu, Ana Elena; Duncan, Anna Kathryn; Dunford, Monica; Duperrin, Arnaud; Duran Yildiz, Hatice; Düren, Michael; Durglishvili, Archil; Duschinger, Dirk; Dutta, Baishali; Duvnjak, Damir; Dyndal, Mateusz; Dziedzic, Bartosz Sebastian; Eckardt, Christoph; Ecker, Katharina Maria; Edgar, Ryan Christopher; Eifert, Till; Eigen, Gerald; Einsweiler, Kevin; Ekelof, Tord; El Kacimi, Mohamed; El Kosseifi, Rima; Ellajosyula, Venugopal; Ellert, Mattias; Ellinghaus, Frank; Elliot, Alison; Ellis, Nicolas; Elmsheuser, Johannes; Elsing, Markus; Emeliyanov, Dmitry; Enari, Yuji; Ennis, Joseph Stanford; Epland, Matthew Berg; Erdmann, Johannes; Ereditato, Antonio; Errede, Steven; Escalier, Marc; Escobar, Carlos; Esposito, Bellisario; Estrada Pastor, Oscar; Etienvre, Anne-Isabelle; Etzion, Erez; Evans, Hal; Ezhilov, Alexey; Ezzi, Mohammed; Fabbri, Federica; Fabbri, Laura; Fabiani, Veronica; 
Facini, Gabriel; Faisca Rodrigues Pereira, Rui Miguel; Fakhrutdinov, Rinat; Falciano, Speranza; Falke, Peter Johannes; Falke, Saskia; Faltova, Jana; Fang, Yaquan; Fanti, Marcello; Farbin, Amir; Farilla, Addolorata; Farina, Edoardo Maria; Farooque, Trisha; Farrell, Steven; Farrington, Sinead; Farthouat, Philippe; Fassi, Farida; Fassnacht, Patrick; Fassouliotis, Dimitrios; Faucci Giannelli, Michele; Favareto, Andrea; Fawcett, William James; Fayard, Louis; Fedin, Oleg; Fedorko, Wojciech; Feickert, Matthew; Feigl, Simon; Feligioni, Lorenzo; Feng, Cunfeng; Feng, Eric; Feng, Minyu; Fenton, Michael James; Fenyuk, Alexander; Feremenga, Last; Ferrando, James; Ferrari, Arnaud; Ferrari, Pamela; Ferrari, Roberto; Ferreira de Lima, Danilo Enoque; Ferrer, Antonio; Ferrere, Didier; Ferretti, Claudio; Fiedler, Frank; Filipčič, Andrej; Filthaut, Frank; Fincke-Keeler, Margret; Finelli, Kevin Daniel; Fiolhais, Miguel; Fiorini, Luca; Fischer, Cora; Fischer, Julia; Fisher, Wade Cameron; Flaschel, Nils; Fleck, Ivor; Fleischmann, Philipp; Fletcher, Rob Roy MacGregor; Flick, Tobias; Flierl, Bernhard Matthias; Flores, Lucas Macrorie; Flores Castillo, Luis; Fomin, Nikolai; Forcolin, Giulio Tiziano; Formica, Andrea; Förster, Fabian Alexander; Forti, Alessandra; Foster, Andrew Geoffrey; Fournier, Daniel; Fox, Harald; Fracchia, Silvia; Francavilla, Paolo; Franchini, Matteo; Franchino, Silvia; Francis, David; Franconi, Laura; Franklin, Melissa; Frate, Meghan; Fraternali, Marco; Freeborn, David; Fressard-Batraneanu, Silvia; Freund, Benjamin; Spolidoro Freund, Werner; Froidevaux, Daniel; Frost, James; Fukunaga, Chikara; Fusayasu, Takahiro; Fuster, Juan; Gabizon, Ofir; Gabrielli, Alessandro; Gabrielli, Andrea; Gach, Grzegorz; Gadatsch, Stefan; Gadomski, Szymon; Gadow, Philipp; Gagliardi, Guido; Gagnon, Louis Guillaume; Galea, Cristina; Galhardo, Bruno; Gallas, Elizabeth; Gallop, Bruce; Gallus, Petr; Galster, Gorm Aske Gram Krohn; Gamboa Goni, Rodrigo; Gan, KK; Ganguly, Sanmay; Gao, Yanyan; Gao, 
Yongsheng; Garay Walls, Francisca; García, Carmen; García Navarro, José Enrique; García Pascual, Juan Antonio; Garcia-Sciveres, Maurice; Gardner, Robert; Garelli, Nicoletta; Garonne, Vincent; Gasnikova, Ksenia; Gaudiello, Andrea; Gaudio, Gabriella; Gavrilenko, Igor; Gavrilyuk, Alexander; Gay, Colin; Gaycken, Goetz; Gazis, Evangelos; Gee, Norman; Geisen, Jannik; Geisen, Marc; Geisler, Manuel Patrice; Gellerstedt, Karl; Gemme, Claudia; Genest, Marie-Hélène; Geng, Cong; Gentile, Simonetta; Gentsos, Christos; George, Simon; Gerbaudo, Davide; Gessner, Gregor; Ghasemi, Sara; Ghneimat, Mazuza; Giacobbe, Benedetto; Giagu, Stefano; Giangiacomi, Nico; Giannetti, Paola; Gibson, Stephen; Gignac, Matthew; Gillberg, Dag; Gilles, Geoffrey; Gingrich, Douglas; Giordani, MarioPaolo; Giorgi, Filippo Maria; Giraud, Pierre-Francois; Giromini, Paolo; Giugliarelli, Gilberto; Giugni, Danilo; Giuli, Francesco; Giulini, Maddalena; Gkaitatzis, Stamatios; Gkialas, Ioannis; Gkougkousis, Evangelos Leonidas; Gkountoumis, Panagiotis; Gladilin, Leonid; Glasman, Claudia; Glatzer, Julian; Glaysher, Paul; Glazov, Alexandre; Goblirsch-Kolb, Maximilian; Godlewski, Jan; Goldfarb, Steven; Golling, Tobias; Golubkov, Dmitry; Gomes, Agostinho; Gonçalo, Ricardo; Goncalves Gama, Rafael; Gonella, Giulia; Gonella, Laura; Gongadze, Alexi; Gonnella, Francesco; Gonski, Julia; González de la Hoz, Santiago; Gonzalez-Sevilla, Sergio; Goossens, Luc; Gorbounov, Petr Andreevich; Gordon, Howard; Gorini, Benedetto; Gorini, Edoardo; Gorišek, Andrej; Goshaw, Alfred; Gössling, Claus; Gostkin, Mikhail Ivanovitch; Gottardo, Carlo Alberto; Goudet, Christophe Raymond; Goujdami, Driss; Goussiou, Anna; Govender, Nicolin; Goy, Corinne; Gozani, Eitan; Grabowska-Bold, Iwona; Gradin, Per Olov Joakim; Graham, Emily Charlotte; Gramling, Johanna; Gramstad, Eirik; Grancagnolo, Sergio; Gratchev, Vadim; Gravila, Paul Mircea; Gray, Chloe; Gray, Heather; Greenwood, Zeno Dixon; Grefe, Christian; Gregersen, Kristian; Gregor, Ingrid-Maria; 
Grenier, Philippe; Grevtsov, Kirill; Griffiths, Justin; Grillo, Alexander; Grimm, Kathryn; Grinstein, Sebastian; Gris, Philippe Luc Yves; Grivaz, Jean-Francois; Groh, Sabrina; Gross, Eilam; Grosse-Knetter, Joern; Grossi, Giulio Cornelio; Grout, Zara Jane; Grummer, Aidan; Guan, Liang; Guan, Wen; Guenther, Jaroslav; Guerguichon, Antinea; Guescini, Francesco; Guest, Daniel; Gueta, Orel; Gugel, Ralf; Gui, Bin; Guillemin, Thibault; Guindon, Stefan; Gul, Umar; Gumpert, Christian; Guo, Jun; Guo, Wen; Guo, Yicheng; Guo, Ziyu; Gupta, Ruchi; Gurbuz, Saime; Gustavino, Giuliano; Gutelman, Benjamin Jacque; Gutierrez, Phillip; Gutierrez Ortiz, Nicolas Gilberto; Gutschow, Christian; Guyot, Claude; Guzik, Marcin Pawel; Gwenlan, Claire; Gwilliam, Carl; Hönle, Andreas; Haas, Andy; Haber, Carl; Hadavand, Haleh Khani; Haddad, Nacim; Hadef, Asma; Hageböck, Stephan; Hagihara, Mutsuto; Hakobyan, Hrachya; Haleem, Mahsana; Haley, Joseph; Halladjian, Garabed; Hallewell, Gregory David; Hamacher, Klaus; Hamal, Petr; Hamano, Kenji; Hamilton, Andrew; Hamity, Guillermo Nicolas; Han, Kunlin; Han, Liang; Han, Shuo; Hanagaki, Kazunori; Hance, Michael; Handl, David Michael; Haney, Bijan; Hankache, Robert; Hanke, Paul; Hansen, Eva; Hansen, Jørgen Beck; Hansen, Jorn Dines; Hansen, Maike Christina; Hansen, Peter Henrik; Hara, Kazuhiko; Hard, Andrew; Harenberg, Torsten; Harkusha, Siarhei; Harrison, Paul Fraser; Hartmann, Nikolai Marcel; Hasegawa, Yoji; Hasib, Ahmed; Hassani, Samira; Haug, Sigve; Hauser, Reiner; Hauswald, Lorenz; Havener, Laura Brittany; Havranek, Miroslav; Hawkes, Christopher; Hawkings, Richard John; Hayden, Daniel; Hayes, Christopher; Hays, Chris; Hays, Jonathan Michael; Hayward, Helen; Haywood, Stephen; Heath, Matthew Peter; Hedberg, Vincent; Heelan, Louise; Heer, Sebastian; Heidegger, Kim Katrin; Heilman, Jesse; Heim, Sarah; Heim, Timon; Heinemann, Beate; Heinrich, Jochen Jens; Heinrich, Lukas; Heinz, Christian; Hejbal, Jiri; Helary, Louis; Held, Alexander; Hellesund, Simen; Hellman, 
Sten; Helsens, Clement; Henderson, Robert; Heng, Yang; Henkelmann, Steffen; Henriques Correia, Ana Maria; Herbert, Geoffrey Henry; Herde, Hannah; Herget, Verena; Hernández Jiménez, Yesenia; Herr, Holger; Herten, Gregor; Hertenberger, Ralf; Hervas, Luis; Herwig, Theodor Christian; Hesketh, Gavin Grant; Hessey, Nigel; Hetherly, Jeffrey Wayne; Higashino, Satoshi; Higón-Rodriguez, Emilio; Hildebrand, Kevin; Hill, Ewan; Hill, John; Hiller, Karl Heinz; Hillier, Stephen; Hils, Maximilian; Hinchliffe, Ian; Hirose, Minoru; Hirschbuehl, Dominic; Hiti, Bojan; Hladik, Ondrej; Hlaluku, Dingane Reward; Hoad, Xanthe; Hobbs, John; Hod, Noam; Hodgkinson, Mark; Hoecker, Andreas; Hoeferkamp, Martin; Hoenig, Friedrich; Hohn, David; Hohov, Dmytro; Holmes, Tova Ray; Holzbock, Michael; Homann, Michael; Honda, Shunsuke; Honda, Takuya; Hong, Tae Min; Hooberman, Benjamin Henry; Hopkins, Walter; Horii, Yasuyuki; Horn, Philipp; Horton, Arthur James; Horyn, Lesya Anna; Hostachy, Jean-Yves; Hostiuc, Alexandru; Hou, Suen; Hoummada, Abdeslam; Howarth, James; Hoya, Joaquin; Hrabovsky, Miroslav; Hrdinka, Julia; Hristova, Ivana; Hrivnac, Julius; Hryn'ova, Tetiana; Hrynevich, Aliaksei; Hsu, Pai-hsien Jennifer; Hsu, Shih-Chieh; Hu, Qipeng; Hu, Shuyang; Huang, Yanping; Hubacek, Zdenek; Hubaut, Fabrice; Huebner, Michael; Huegging, Fabian; Huffman, Todd Brian; Hughes, Emlyn; Huhtinen, Mika; Hunter, Robert Francis Holub; Huo, Peng; Hupe, Andre Marc; Huseynov, Nazim; Huston, Joey; Huth, John; Hyneman, Rachel; Iacobucci, Giuseppe; Iakovidis, Georgios; Ibragimov, Iskander; Iconomidou-Fayard, Lydia; Idrissi, Zineb; Iengo, Paolo; Ignazzi, Rosanna; Igonkina, Olga; Iguchi, Ryunosuke; Iizawa, Tomoya; Ikegami, Yoichi; Ikeno, Masahiro; Iliadis, Dimitrios; Ilic, Nikolina; Iltzsche, Franziska; Introzzi, Gianluca; Iodice, Mauro; Iordanidou, Kalliopi; Ippolito, Valerio; Isacson, Max Fredrik; Ishijima, Naoki; Ishino, Masaya; Ishitsuka, Masaki; Issever, Cigdem; Istin, Serhat; Ito, Fumiaki; Iturbe Ponce, Julia Mariana; 
Iuppa, Roberto; Ivina, Anna; Iwasaki, Hiroyuki; Izen, Joseph; Izzo, Vincenzo; Jabbar, Samina; Jacka, Petr; Jackson, Paul; Jacobs, Ruth Magdalena; Jain, Vivek; Jäkel, Gunnar; Jakobi, Katharina Bianca; Jakobs, Karl; Jakobsen, Sune; Jakoubek, Tomas; Jamin, David Olivier; Jana, Dilip; Jansky, Roland; Janssen, Jens; Janus, Michel; Janus, Piotr Andrzej; Jarlskog, Göran; Javadov, Namig; Javůrek, Tomáš; Javurkova, Martina; Jeanneau, Fabien; Jeanty, Laura; Jejelava, Juansher; Jelinskas, Adomas; Jenni, Peter; Jeong, Jihyun; Jeske, Carl; Jézéquel, Stéphane; Ji, Haoshuang; Jia, Jiangyong; Jiang, Hai; Jiang, Yi; Jiang, Zihao; Jiggins, Stephen; Jimenez Morales, Fabricio Andres; Jimenez Pena, Javier; Jin, Shan; Jinaru, Adam; Jinnouchi, Osamu; Jivan, Harshna; Johansson, Per; Johns, Kenneth; Johnson, Christian; Johnson, William Joseph; Jon-And, Kerstin; Jones, Roger; Jones, Samuel David; Jones, Sarah; Jones, Tim; Jongmanns, Jan; Jorge, Pedro; Jovicevic, Jelena; Ju, Xiangyang; Junggeburth, Johannes Josef; Juste Rozas, Aurelio; Kaczmarska, Anna; Kado, Marumi; Kagan, Harris; Kagan, Michael; Kaji, Toshiaki; Kajomovitz, Enrique; Kalderon, Charles William; Kaluza, Adam; Kama, Sami; Kamenshchikov, Andrey; Kanjir, Luka; Kano, Yuya; Kantserov, Vadim; Kanzaki, Junichi; Kaplan, Benjamin; Kaplan, Laser Seymour; Kar, Deepak; Kareem, Mohammad Jawad; Karentzos, Efstathios; Karpov, Sergey; Karpova, Zoya; Kartvelishvili, Vakhtang; Karyukhin, Andrey; Kasahara, Kota; Kashif, Lashkar; Kass, Richard; Kastanas, Alex; Kataoka, Yousuke; Kato, Chikuma; Katre, Akshay; Katzy, Judith; Kawade, Kentaro; Kawagoe, Kiyotomo; Kawamoto, Tatsuo; Kawamura, Gen; Kay, Ellis; Kazanin, Vassili; Keeler, Richard; Kehoe, Robert; Keller, John; Kellermann, Edgar; Kempster, Jacob Julian; Kendrick, James; Kepka, Oldrich; Kerševan, Borut Paul; Kersten, Susanne; Keyes, Robert; Khader, Mazin; Khalil-zada, Farkhad; Khanov, Alexander; Kharlamov, Alexey; Kharlamova, Tatyana; Khodinov, Alexander; Khoo, Teng Jian; Khovanskiy, Valery; 
Khramov, Evgeniy; Khubua, Jemal; Kido, Shogo; Kiehn, Moritz; Kilby, Callum; Kim, Hee Yeun; Kim, Shinhong; Kim, Young-Kee; Kimura, Naoki; Kind, Oliver Maria; King, Barry; Kirchmeier, David; Kirk, Julie; Kiryunin, Andrey; Kishimoto, Tomoe; Kisielewska, Danuta; Kitali, Vincent; Kivernyk, Oleh; Kladiva, Eduard; Klapdor-Kleingrothaus, Thorwald; Klein, Matthew Henry; Klein, Max; Klein, Uta; Kleinknecht, Konrad; Klimek, Pawel; Klimentov, Alexei; Klingenberg, Reiner; Klingl, Tobias; Klioutchnikova, Tatiana; Klitzner, Felix Fidelio; Kluit, Peter; Kluth, Stefan; Kneringer, Emmerich; Knoops, Edith; Knue, Andrea; Kobayashi, Aine; Kobayashi, Dai; Kobayashi, Tomio; Kobel, Michael; Kocian, Martin; Kodys, Peter; Koffas, Thomas; Koffeman, Els; Köhler, Nicolas Maximilian; Koi, Tatsumi; Kolb, Mathis; Koletsou, Iro; Kondo, Takahiko; Kondrashova, Nataliia; Köneke, Karsten; König, Adriaan; Kono, Takanori; Konoplich, Rostislav; Konstantinidis, Nikolaos; Konya, Balazs; Kopeliansky, Revital; Koperny, Stefan; Korcyl, Krzysztof; Kordas, Kostantinos; Korn, Andreas; Korolkov, Ilya; Korolkova, Elena; Kortner, Oliver; Kortner, Sandra; Kosek, Tomas; Kostyukhin, Vadim; Kotwal, Ashutosh; Koulouris, Aimilianos; Kourkoumeli-Charalampidi, Athina; Kourkoumelis, Christine; Kourlitis, Evangelos; Kouskoura, Vasiliki; Kowalewska, Anna Bozena; Kowalewski, Robert Victor; Kowalski, Tadeusz; Kozakai, Chihiro; Kozanecki, Witold; Kozhin, Anatoly; Kramarenko, Viktor; Kramberger, Gregor; Krasnopevtsev, Dimitrii; Krasny, Mieczyslaw Witold; Krasznahorkay, Attila; Krauss, Dominik; Kremer, Jakub Andrzej; Kretzschmar, Jan; Kreutzfeldt, Kristof; Krieger, Peter; Krizka, Karol; Kroeninger, Kevin; Kroha, Hubert; Kroll, Jiri; Kroll, Joe; Kroseberg, Juergen; Krstic, Jelena; Kruchonak, Uladzimir; Krüger, Hans; Krumnack, Nils; Kruse, Mark; Kubota, Takashi; Kuday, Sinan; Kuechler, Jan Thomas; Kuehn, Susanne; Kugel, Andreas; Kuger, Fabian; Kuhl, Thorsten; Kukhtin, Victor; Kukla, Romain; Kulchitsky, Yuri; Kuleshov, Sergey; 
Kulinich, Yakov Petrovich; Kuna, Marine; Kunigo, Takuto; Kupco, Alexander; Kupfer, Tobias; Kuprash, Oleg; Kurashige, Hisaya; Kurchaninov, Leonid; Kurochkin, Yurii; Kurth, Matthew Glenn; Kuwertz, Emma Sian; Kuze, Masahiro; Kvita, Jiri; Kwan, Tony; La Rosa, Alessandro; La Rosa Navarro, Jose Luis; La Rotonda, Laura; La Ruffa, Francesco; Lacasta, Carlos; Lacava, Francesco; Lacey, James; Lack, David Philip John; Lacker, Heiko; Lacour, Didier; Ladygin, Evgueni; Lafaye, Remi; Laforge, Bertrand; Lai, Stanley; Lammers, Sabine; Lampl, Walter; Lançon, Eric; Landgraf, Ulrich; Landon, Murrough; Lanfermann, Marie Christine; Lang, Valerie Susanne; Lange, Jörn Christian; Langenberg, Robert Johannes; Lankford, Andrew; Lanni, Francesco; Lantzsch, Kerstin; Lanza, Agostino; Lapertosa, Alessandro; Laplace, Sandrine; Laporte, Jean-Francois; Lari, Tommaso; Lasagni Manghi, Federico; Lassnig, Mario; Lau, Tak Shun; Laudrain, Antoine; Law, Alexander; Laycock, Paul; Lazzaroni, Massimo; Le, Brian; Le Dortz, Olivier; Le Guirriec, Emmanuel; Le Quilleuc, Eloi; LeBlanc, Matthew Edgar; LeCompte, Thomas; Ledroit-Guillon, Fabienne; Lee, Claire Alexandra; Lee, Graham Richard; Lee, Shih-Chang; Lee, Lawrence; Lefebvre, Benoit; Lefebvre, Michel; Legger, Federica; Leggett, Charles; Lehmann Miotto, Giovanna; Leight, William Axel; Leisos, Antonios; Leite, Marco Aurelio Lisboa; Leitner, Rupert; Lellouch, Daniel; Lemmer, Boris; Leney, Katharine; Lenz, Tatjana; Lenzi, Bruno; Leone, Robert; Leone, Sandra; Leonidopoulos, Christos; Lerner, Giuseppe; Leroy, Claude; Les, Robert; Lesage, Arthur; Lester, Christopher; Levchenko, Mikhail; Levêque, Jessica; Levin, Daniel; Levinson, Lorne; Lewis, Dave; Li, Bing; Li, Changqiao; Li, Haifeng; Li, Liang; Li, Qi; Li, Quanyin; Li, Shu; Li, Xingguo; Li, Yichen; Liang, Zhijun; Liberti, Barbara; Liblong, Aaron; Lie, Ki; Liem, Sebastian; Limosani, Antonio; Lin, Chiao-ying; Lin, Kuan-yu; Lin, Simon; Lin, Tai-Hua; Linck, Rebecca Anne; Lindquist, Brian Edward; Lionti, Anthony; 
Lipeles, Elliot; Lipniacka, Anna; Lisovyi, Mykhailo; Liss, Tony; Lister, Alison; Litke, Alan; Little, Jared David; Liu, Bingxuan; Liu, Bo; Liu, Hao; Liu, Hongbin; Liu, Jesse; Liu, Jianbei; Liu, Kun; Liu, Minghui; Liu, Peilian; Liu, Yanlin; Liu, Yanwen; Livan, Michele; Lleres, Annick; Llorente Merino, Javier; Lloyd, Stephen; Lo, Cheuk Yee; Lo Sterzo, Francesco; Lobodzinska, Ewelina Maria; Loch, Peter; Loebinger, Fred; Loesle, Alena; Loew, Kevin Michael; Lohse, Thomas; Lohwasser, Kristin; Lokajicek, Milos; Long, Brian Alexander; Long, Jonathan David; Long, Robin Eamonn; Longo, Luigi; Looper, Kristina Anne; Lopez, Jorge; Lopez Paz, Ivan; Lopez Solis, Alvaro; Lorenz, Jeanette; Lorenzo Martinez, Narei; Losada, Marta; Lösel, Philipp Jonathan; Lou, XinChou; Lou, Xuanhong; Lounis, Abdenour; Love, Jeremy; Love, Peter; Lozano Bahilo, Jose Julio; Lu, Haonan; Lu, Nan; Lu, Yun-Ju; Lubatti, Henry; Luci, Claudio; Lucotte, Arnaud; Luedtke, Christian; Luehring, Frederick; Luise, Ilaria; Lukas, Wolfgang; Luminari, Lamberto; Lund-Jensen, Bengt; Lutz, Margaret Susan; Luzi, Pierre Marc; Lynn, David; Lysak, Roman; Lytken, Else; Lyu, Feng; Lyubushkin, Vladimir; Ma, Hong; Ma, Lian Liang; Ma, Yanhui; Maccarrone, Giovanni; Macchiolo, Anna; Macdonald, Calum Michael; Maček, Boštjan; Machado Miguens, Joana; Madaffari, Daniele; Madar, Romain; Mader, Wolfgang; Madsen, Alexander; Madysa, Nico; Maeda, Junpei; Maeland, Steffen; Maeno, Tadashi; Maevskiy, Artem; Magerl, Veronika; Maidantchik, Carmen; Maier, Thomas; Maio, Amélia; Majersky, Oliver; Majewski, Stephanie; Makida, Yasuhiro; Makovec, Nikola; Malaescu, Bogdan; Malecki, Pawel; Maleev, Victor; Malek, Fairouz; Mallik, Usha; Malon, David; Malone, Claire; Maltezos, Stavros; Malyukov, Sergei; Mamuzic, Judita; Mancini, Giada; Mandić, Igor; Maneira, José; Manhaes de Andrade Filho, Luciano; Manjarres Ramos, Joany; Mankinen, Katja Hannele; Mann, Alexander; Manousos, Athanasios; Mansoulie, Bruno; Mansour, Jason Dhia; Mantifel, Rodger; Mantoani, Matteo; 
Manzoni, Stefano; Marceca, Gino; March, Luis; Marchese, Luigi; Marchiori, Giovanni; Marcisovsky, Michal; Marin Tobon, Cesar Augusto; Marjanovic, Marija; Marley, Daniel; Marroquim, Fernando; Marshall, Zach; Martensson, Mikael; Marti-Garcia, Salvador; Martin, Christopher Blake; Martin, Tim; Martin, Victoria Jane; Martin dit Latour, Bertrand; Martinez, Mario; Martinez Outschoorn, Verena; Martin-Haugh, Stewart; Martoiu, Victor Sorin; Martyniuk, Alex; Marzin, Antoine; Masetti, Lucia; Mashimo, Tetsuro; Mashinistov, Ruslan; Masik, Jiri; Maslennikov, Alexey; Mason, Lara Hannan; Massa, Lorenzo; Mastrandrea, Paolo; Mastroberardino, Anna; Masubuchi, Tatsuya; Mättig, Peter; Maurer, Julien; Maxfield, Stephen; Maximov, Dmitriy; Mazini, Rachid; Maznas, Ioannis; Mazza, Simone Michele; Mc Fadden, Neil Christopher; Mc Goldrick, Garrin; Mc Kee, Shawn Patrick; McCarn, Allison; McCarthy, Thomas; McClymont, Laurie; McDonald, Emily; Mcfayden, Josh; Mchedlidze, Gvantsa; McKay, Madalyn; McLean, Kayla; McMahon, Steve; McNamara, Peter Charles; McNicol, Christopher John; McPherson, Robert; Mdhluli, Joyful Elma; Meadows, Zachary Alden; Meehan, Samuel; Megy, Theo; Mehlhase, Sascha; Mehta, Andrew; Meideck, Thomas; Meirose, Bernhard; Melini, Davide; Mellado Garcia, Bruce Rafael; Mellenthin, Johannes Donatus; Melo, Matej; Meloni, Federico; Melzer, Alexander; Menary, Stephen Burns; Meng, Lingxin; Meng, Xiangting; Mengarelli, Alberto; Menke, Sven; Meoni, Evelin; Mergelmeyer, Sebastian; Merlassino, Claudia; Mermod, Philippe; Merola, Leonardo; Meroni, Chiara; Merritt, Frank; Messina, Andrea; Metcalfe, Jessica; Mete, Alaettin Serhan; Meyer, Christopher; Meyer, Jean-Pierre; Meyer, Jochen; Meyer Zu Theenhausen, Hanno; Miano, Fabrizio; Middleton, Robin; Mijović, Liza; Mikenberg, Giora; Mikestikova, Marcela; Mikuž, Marko; Milesi, Marco; Milic, Adriana; Millar, Declan Andrew; Miller, David; Milov, Alexander; Milstead, David; Minaenko, Andrey; Minashvili, Irakli; Mincer, Allen; Mindur, Bartosz; Mineev, 
Mikhail; Minegishi, Yuji; Ming, Yao; Mir, Lluisa-Maria; Mirto, Alessandro; Mistry, Khilesh; Mitani, Takashi; Mitrevski, Jovan; Mitsou, Vasiliki A; Miucci, Antonio; Miyagawa, Paul; Mizukami, Atsushi; Mjörnmark, Jan-Ulf; Mkrtchyan, Tigran; Mlynarikova, Michaela; Moa, Torbjoern; Mochizuki, Kazuya; Mogg, Philipp; Mohapatra, Soumya; Molander, Simon; Moles-Valls, Regina; Mondragon, Matthew Craig; Mönig, Klaus; Monk, James; Monnier, Emmanuel; Montalbano, Alyssa; Montejo Berlingen, Javier; Monticelli, Fernando; Monzani, Simone; Moore, Roger; Morange, Nicolas; Moreno, Deywis; Moreno Llácer, María; Morettini, Paolo; Morgenstern, Marcus; Morgenstern, Stefanie; Mori, Daniel; Mori, Tatsuya; Morii, Masahiro; Morinaga, Masahiro; Morisbak, Vanja; Morley, Anthony Keith; Mornacchi, Giuseppe; Morris, John; Morvaj, Ljiljana; Moschovakos, Paris; Mosidze, Maia; Moss, Harry James; Moss, Josh; Motohashi, Kazuki; Mount, Richard; Mountricha, Eleni; Moyse, Edward; Muanza, Steve; Mueller, Felix; Mueller, James; Mueller, Ralph Soeren Peter; Muenstermann, Daniel; Mullen, Paul; Mullier, Geoffrey; Munoz Sanchez, Francisca Javiela; Murin, Pavel; Murray, Bill; Murrone, Alessia; Muškinja, Miha; Mwewa, Chilufya; Myagkov, Alexey; Myers, John; Myska, Miroslav; Nachman, Benjamin Philip; Nackenhorst, Olaf; Nagai, Koichi; Nagai, Ryo; Nagano, Kunihiro; Nagasaka, Yasushi; Nagata, Kazuki; Nagel, Martin; Nagy, Elemer; Nairz, Armin Michael; Nakahama, Yu; Nakamura, Koji; Nakamura, Tomoaki; Nakano, Itsuo; Napolitano, Fabrizio; Naranjo Garcia, Roger Felipe; Narayan, Rohin; Narrias Villar, Daniel Isaac; Naryshkin, Iouri; Naumann, Thomas; Navarro, Gabriela; Nayyar, Ruchika; Neal, Homer; Nechaeva, Polina; Neep, Thomas James; Negri, Andrea; Negrini, Matteo; Nektarijevic, Snezana; Nellist, Clara; Nelson, Michael Edward; Nemecek, Stanislav; Nemethy, Peter; Nessi, Marzio; Neubauer, Mark; Neumann, Manuel; Newman, Paul; Ng, Tsz Yu; Ng, Sam Yanwing; Nguyen, Hoang Dai Nghia; Nguyen Manh, Tuan; Nibigira, Emery; Nickerson, 
Richard; Nicolaidou, Rosy; Nielsen, Jason; Nikiforou, Nikiforos; Nikolaenko, Vladimir; Nikolic-Audit, Irena; Nikolopoulos, Konstantinos; Nilsson, Paul; Ninomiya, Yoichi; Nisati, Aleandro; Nishu, Nishu; Nisius, Richard; Nitsche, Isabel; Nitta, Tatsumi; Nobe, Takuya; Noguchi, Yohei; Nomachi, Masaharu; Nomidis, Ioannis; Nomura, Marcelo Ayumu; Nooney, Tamsin; Nordberg, Markus; Norjoharuddeen, Nurfikri; Novak, Tadej; Novgorodova, Olga; Novotny, Radek; Nozaki, Mitsuaki; Nozka, Libor; Ntekas, Konstantinos; Nurse, Emily; Nuti, Francesco; O'Connor, Kelsey; O'Neil, Dugan; O'Rourke, Abigail Alexandra; O'Shea, Val; Oakham, Gerald; Oberlack, Horst; Obermann, Theresa; Ocariz, Jose; Ochi, Atsuhiko; Ochoa, Ines; Ochoa-Ricoux, Juan Pedro; Oda, Susumu; Odaka, Shigeru; Oh, Alexander; Oh, Seog; Ohm, Christian; Ohman, Henrik; Oide, Hideyuki; Okawa, Hideki; Okazaki, Yuta; Okumura, Yasuyuki; Okuyama, Toyonobu; Olariu, Albert; Oleiro Seabra, Luis Filipe; Olivares Pino, Sebastian Andres; Oliveira Damazio, Denis; Oliver, Jason; Olsson, Joakim; Olszewski, Andrzej; Olszowska, Jolanta; Onofre, António; Onogi, Kouta; Onyisi, Peter; Oppen, Henrik; Oreglia, Mark; Oren, Yona; Orestano, Domizia; Orgill, Emily Claire; Orlando, Nicola; Orr, Robert; Osculati, Bianca; Ospanov, Rustem; Otero y Garzon, Gustavo; Otono, Hidetoshi; Ouchrif, Mohamed; Ould-Saada, Farid; Ouraou, Ahmimed; Ouyang, Qun; Owen, Mark; Owen, Rhys Edward; Ozcan, Veysi Erkcan; Ozturk, Nurcan; Pachal, Katherine; Pacheco Pages, Andres; Pacheco Rodriguez, Laura; Padilla Aranda, Cristobal; Pagan Griso, Simone; Paganini, Michela; Palacino, Gabriel; Palazzo, Serena; Palestini, Sandro; Palka, Marek; Pallin, Dominique; Panagoulias, Ilias; Pandini, Carlo Enrico; Panduro Vazquez, William; Pani, Priscilla; Paolozzi, Lorenzo; Papadopoulou, Theodora; Papageorgiou, Konstantinos; Paramonov, Alexander; Paredes Hernandez, Daniela; Parida, Bibhuti; Parker, Adam Jackson; Parker, Michael Andrew; Parker, Kerry Ann; Parodi, Fabrizio; Parsons, John; 
Parzefall, Ulrich; Pascuzzi, Vincent; Pasner, Jacob Martin; Pasqualucci, Enrico; Passaggio, Stefano; Pastore, Francesca; Pasuwan, Patrawan; Pataraia, Sophio; Pater, Joleen; Pathak, Atanu; Pauly, Thilo; Pearson, Benjamin; Pedersen, Maiken; Pedraza Lopez, Sebastian; Pedro, Rute; Peleganchuk, Sergey; Penc, Ondrej; Peng, Cong; Peng, Haiping; Penwell, John; Peralva, Bernardo; Perego, Marta Maria; Pereira Peixoto, Ana Paula; Perepelitsa, Dennis; Peri, Francesco; Perini, Laura; Pernegger, Heinz; Perrella, Sabrina; Peshekhonov, Vladimir; Peters, Krisztian; Peters, Yvonne; Petersen, Brian; Petersen, Troels; Petit, Elisabeth; Petridis, Andreas; Petridou, Chariclia; Petroff, Pierre; Petrolo, Emilio; Petrov, Mariyan; Petrucci, Fabrizio; Pettersson, Nora Emilia; Peyaud, Alan; Pezoa, Raquel; Pham, Thu; Phillips, Forrest Hays; Phillips, Peter William; Piacquadio, Giacinto; Pianori, Elisabetta; Picazio, Attilio; Pickering, Mark Andrew; Piegaia, Ricardo; Pilcher, James; Pilkington, Andrew; Pinamonti, Michele; Pinfold, James; Pitt, Michael; Pleier, Marc-Andre; Pleskot, Vojtech; Plotnikova, Elena; Pluth, Daniel; Podberezko, Pavel; Poettgen, Ruth; Poggi, Riccardo; Poggioli, Luc; Pogrebnyak, Ivan; Pohl, David-leon; Pokharel, Ishan; Polesello, Giacomo; Poley, Anne-luise; Policicchio, Antonio; Polifka, Richard; Polini, Alessandro; Pollard, Christopher Samuel; Polychronakos, Venetios; Ponomarenko, Daniil; Pontecorvo, Ludovico; Popeneciu, Gabriel Alexandru; Portillo Quintero, Dilia María; Pospisil, Stanislav; Potamianos, Karolos; Potrap, Igor; Potter, Christina; Potti, Harish; Poulsen, Trine; Poveda, Joaquin; Powell, Thomas Dennis; Pozo Astigarraga, Mikel Eukeni; Pralavorio, Pascal; Prell, Soeren; Price, Darren; Primavera, Margherita; Prince, Sebastien; Proklova, Nadezda; Prokofiev, Kirill; Prokoshin, Fedor; Protopopescu, Serban; Proudfoot, James; Przybycien, Mariusz; Puri, Akshat; Puzo, Patrick; Qian, Jianming; Qin, Yang; Quadt, Arnulf; Queitsch-Maitland, Michaela; Qureshi, Anum; 
Radhakrishnan, Sooraj Krishnan; Rados, Pere; Ragusa, Francesco; Rahal, Ghita; Raine, John Andrew; Rajagopalan, Srinivasan; Rashid, Tasneem; Raspopov, Sergii; Ratti, Maria Giulia; Rauch, Daniel; Rauscher, Felix; Rave, Stefan; Ravina, Baptiste; Ravinovich, Ilia; Rawling, Jacob Henry; Raymond, Michel; Read, Alexander Lincoln; Readioff, Nathan Peter; Reale, Marilea; Rebuzzi, Daniela; Redelbach, Andreas; Redlinger, George; Reece, Ryan; Reed, Robert; Reeves, Kendall; Rehnisch, Laura; Reichert, Joseph; Reiss, Andreas; Rembser, Christoph; Ren, Huan; Rescigno, Marco; Resconi, Silvia; Resseguie, Elodie Deborah; Rettie, Sebastien; Reynolds, Elliot; Rezanova, Olga; Reznicek, Pavel; Richter, Robert; Richter, Stefan; Richter-Was, Elzbieta; Ricken, Oliver; Ridel, Melissa; Rieck, Patrick; Riegel, Christian Johann; Rifki, Othmane; Rijssenbeek, Michael; Rimoldi, Adele; Rimoldi, Marco; Rinaldi, Lorenzo; Ripellino, Giulia; Ristić, Branislav; Ritsch, Elmar; Riu, Imma; Rivera Vergara, Juan Cristobal; Rizatdinova, Flera; Rizvi, Eram; Rizzi, Chiara; Roberts, Rhys Thomas; Robertson, Steven; Robichaud-Veronneau, Andree; Robinson, Dave; Robinson, James; Robson, Aidan; Rocco, Elena; Roda, Chiara; Rodina, Yulia; Rodriguez Bosca, Sergi; Rodriguez Perez, Andrea; Rodriguez Rodriguez, Daniel; Rodríguez Vera, Ana María; Roe, Shaun; Rogan, Christopher Sean; Røhne, Ole; Röhrig, Rainer; Roland, Christophe Pol A; Roloff, Jennifer; Romaniouk, Anatoli; Romano, Marino; Romero Adam, Elena; Rompotis, Nikolaos; Ronzani, Manfredi; Roos, Lydia; Rosati, Stefano; Rosbach, Kilian; Rose, Peyton; Rosien, Nils-Arne; Rossi, Elvira; Rossi, Leonardo Paolo; Rossini, Lorenzo; Rosten, Jonatan; Rosten, Rachel; Rotaru, Marina; Rothberg, Joseph; Rousseau, David; Roy, Debarati; Rozanov, Alexandre; Rozen, Yoram; Ruan, Xifeng; Rubbo, Francesco; Rühr, Frederik; Ruiz-Martinez, Aranzazu; Rurikova, Zuzana; Rusakovich, Nikolai; Russell, Heather; Rutherfoord, John; Ruthmann, Nils; Rüttinger, Elias Michael; Ryabov, Yury; Rybar, 
Martin; Rybkin, Grigori; Ryu, Soo; Ryzhov, Andrey; Rzehorz, Gerhard Ferdinand; Sabatini, Paolo; Sabato, Gabriele; Sacerdoti, Sabrina; Sadrozinski, Hartmut; Sadykov, Renat; Safai Tehrani, Francesco; Saha, Puja; Sahinsoy, Merve; Saimpert, Matthias; Saito, Masahiko; Saito, Tomoyuki; Sakamoto, Hiroshi; Sakharov, Alexander; Salamani, Dalila; Salamanna, Giuseppe; Salazar Loyola, Javier Esteban; Salek, David; Sales De Bruin, Pedro Henrique; Salihagic, Denis; Salnikov, Andrei; Salt, José; Salvatore, Daniela; Salvatore, Pasquale Fabrizio; Salvucci, Antonio; Salzburger, Andreas; Sammel, Dirk; Sampsonidis, Dimitrios; Sampsonidou, Despoina; Sánchez, Javier; Sanchez Pineda, Arturo Rodolfo; Sandaker, Heidi; Sander, Christian Oliver; Sandhoff, Marisa; Sandoval, Carlos; Sankey, Dave; Sannino, Mario; Sano, Yuta; Sansoni, Andrea; Santoni, Claudio; Santos, Helena; Santoyo Castillo, Itzebelt; Sapronov, Andrey; Saraiva, João; Sasaki, Osamu; Sato, Koji; Sauvan, Emmanuel; Savard, Pierre; Savic, Natascha; Sawada, Ryu; Sawyer, Craig; Sawyer, Lee; Sbarra, Carla; Sbrizzi, Antonio; Scanlon, Tim; Scannicchio, Diana; Schaarschmidt, Jana; Schacht, Peter; Schachtner, Balthasar Maria; Schaefer, Douglas; Schaefer, Leigh; Schaeffer, Jan; Schaepe, Steffen; Schäfer, Uli; Schaffer, Arthur; Schaile, Dorothee; Schamberger, R Dean; Scharmberg, Nicolas; Schegelsky, Valery; Scheirich, Daniel; Schenck, Ferdinand; Schernau, Michael; Schiavi, Carlo; Schier, Sheena; Schildgen, Lara Katharina; Schillaci, Zachary Michael; Schioppa, Enrico Junior; Schioppa, Marco; Schleicher, Katharina; Schlenker, Stefan; Schmidt-Sommerfeld, Korbinian Ralf; Schmieden, Kristof; Schmitt, Christian; Schmitt, Stefan; Schmitz, Simon; Schnoor, Ulrike; Schoeffel, Laurent; Schoening, Andre; Schopf, Elisabeth; Schott, Matthias; Schouwenberg, Jeroen; Schovancova, Jaroslava; Schramm, Steven; Schuh, Natascha; Schulte, Alexandra; Schultz-Coulon, Hans-Christian; Schumacher, Markus; Schumm, Bruce; Schune, Philippe; Schwartzman, Ariel; Schwarz, 
Thomas Andrew; Schweiger, Hansdieter; Schwemling, Philippe; Schwienhorst, Reinhard; Sciandra, Andrea; Sciolla, Gabriella; Scornajenghi, Matteo; Scuri, Fabrizio; Scutti, Federico; Scyboz, Ludovic Michel; Searcy, Jacob; Sebastiani, Cristiano David; Seema, Pienpen; Seidel, Sally; Seiden, Abraham; Seixas, José; Sekhniaidze, Givi; Sekhon, Karishma; Sekula, Stephen; Semprini-Cesari, Nicola; Senkin, Sergey; Serfon, Cedric; Serin, Laurent; Serkin, Leonid; Sessa, Marco; Severini, Horst; Šfiligoj, Tina; Sforza, Federico; Sfyrla, Anna; Shabalina, Elizaveta; Shahinian, Jeffrey David; Shaikh, Nabila Wahab; Shan, Lianyou; Shang, Ruo-yu; Shank, James; Shapiro, Marjorie; Sharma, Abhishek; Sharma, Abhishek; Shatalov, Pavel; Shaw, Kate; Shaw, Savanna Marie; Shcherbakova, Anna; Shehu, Ciwake Yusufu; Shen, Yu-Ting; Sherafati, Nima; Sherman, Alexander David; Sherwood, Peter; Shi, Liaoshan; Shimizu, Shima; Shimmin, Chase Owen; Shimojima, Makoto; Shipsey, Ian Peter Joseph; Shirabe, Shohei; Shiyakova, Mariya; Shlomi, Jonathan; Shmeleva, Alevtina; Shoaleh Saadi, Diane; Shochet, Mel; Shojaii, Seyed Ruhollah; Shope, David Richard; Shrestha, Suyog; Shulga, Evgeny; Sicho, Petr; Sickles, Anne Marie; Sidebo, Per Edvin; Sideras Haddad, Elias; Sidiropoulou, Ourania; Sidoti, Antonio; Siegert, Frank; Sijacki, Djordje; Silva, José; Silva Jr, Manuel; Silverstein, Samuel; Simic, Ljiljana; Simion, Stefan; Simioni, Eduard; Simmons, Brinick; Simon, Manuel; Sinervo, Pekka; Sinev, Nikolai; Sioli, Maximiliano; Siragusa, Giovanni; Siral, Ismet; Sivoklokov, Serguei; Sjölin, Jörgen; Skinner, Malcolm Bruce; Skubic, Patrick; Slater, Mark; Slavicek, Tomas; Slawinska, Magdalena; Sliwa, Krzysztof; Slovak, Radim; Smakhtin, Vladimir; Smart, Ben; Smiesko, Juraj; Smirnov, Nikita; Smirnov, Sergei; Smirnov, Yury; Smirnova, Lidia; Smirnova, Oxana; Smith, Joshua Wyatt; Smith, Matthew; Smith, Russell; Smizanska, Maria; Smolek, Karel; Snesarev, Andrei; Snyder, Ian Michael; Snyder, Scott; Sobie, Randall; Socher, Felix; Soffa, 
Aaron Michael; Soffer, Abner; Søgaard, Andreas; Soh, Dart-yin; Sokhrannyi, Grygorii; Solans Sanchez, Carlos; Solar, Michael; Soldatov, Evgeny; Soldevila, Urmila; Solodkov, Alexander; Soloshenko, Alexei; Solovyanov, Oleg; Solovyev, Victor; Sommer, Philip; Son, Hyungsuk; Song, Weimin; Sopczak, Andre; Sopkova, Filomena; Sosa, David; Sotiropoulou, Calliope Louisa; Sottocornola, Simone; Soualah, Rachik; Soukharev, Andrey; South, David; Sowden, Benjamin; Spagnolo, Stefania; Spalla, Margherita; Spangenberg, Martin; Spanò, Francesco; Sperlich, Dennis; Spettel, Fabian; Spieker, Thomas Malte; Spighi, Roberto; Spigo, Giancarlo; Spiller, Laurence Anthony; Spousta, Martin; Stabile, Alberto; Stamen, Rainer; Stamm, Soren; Stanecka, Ewa; Stanek, Robert; Stanescu, Cristian; Stanitzki, Marcel Michael; Stapf, Birgit Sylvia; Stapnes, Steinar; Starchenko, Evgeny; Stark, Giordon; Stark, Jan; Stark, Simon Holm; Staroba, Pavel; Starovoitov, Pavel; Stärz, Steffen; Staszewski, Rafal; Stegler, Martin; Steinberg, Peter; Stelzer, Bernd; Stelzer, Harald Joerg; Stelzer-Chilton, Oliver; Stenzel, Hasko; Stevenson, Thomas James; Stewart, Graeme; Stockton, Mark; Stoicea, Gabriel; Stolte, Philipp; Stonjek, Stefan; Straessner, Arno; Strandberg, Jonas; Strandberg, Sara; Strauss, Michael; Strizenec, Pavol; Ströhmer, Raimund; Strom, David; Stroynowski, Ryszard; Strubig, Antonia; Stucci, Stefania Antonia; Stugu, Bjarne; Stupak, John; Styles, Nicholas Adam; Su, Dong; Su, Jun; Suchek, Stanislav; Sugaya, Yorihito; Suk, Michal; Sulin, Vladimir; Sultan, D M S; Sultansoy, Saleh; Sumida, Toshi; Sun, Siyuan; Sun, Xiaohu; Suruliz, Kerim; Suster, Carl; Sutton, Mark; Suzuki, Shota; Svatos, Michal; Swiatlowski, Maximilian; Swift, Stewart Patrick; Sydorenko, Alexander; Sykora, Ivan; Sykora, Tomas; Ta, Duc; Tackmann, Kerstin; Taenzer, Joe; Taffard, Anyes; Tafirout, Reda; Tahirovic, Elvedin; Taiblum, Nimrod; Takai, Helio; Takashima, Ryuichi; Takasugi, Eric Hayato; Takeda, Kosuke; Takeshita, Tohru; Takubo, Yosuke; Talby, 
Mossadek; Talyshev, Alexey; Tanaka, Junichi; Tanaka, Masahiro; Tanaka, Reisaburo; Tanioka, Ryo; Tannenwald, Benjamin Bordy; Tapia Araya, Sebastian; Tapprogge, Stefan; Tarek Abouelfadl Mohamed, Ahmed; Tarem, Shlomit; Tarna, Grigore; Tartarelli, Giuseppe Francesco; Tas, Petr; Tasevsky, Marek; Tashiro, Takuya; Tassi, Enrico; Tavares Delgado, Ademar; Tayalati, Yahya; Taylor, Aaron; Taylor, Alan James; Taylor, Geoffrey; Taylor, Pierre Thor Elliot; Taylor, Wendy; Tee, Amy Selvi; Teixeira-Dias, Pedro; Temple, Darren; Ten Kate, Herman; Teng, Ping-Kun; Teoh, Jia Jian; Tepel, Fabian-Phillipp; Terada, Susumu; Terashi, Koji; Terron, Juan; Terzo, Stefano; Testa, Marianna; Teuscher, Richard; Thais, Savannah Jennifer; Theveneaux-Pelzer, Timothée; Thiele, Fabian; Thomas, Juergen; Thompson, Paul; Thompson, Stan; Thomsen, Lotte Ansgaard; Thomson, Evelyn; Tian, Yun; Ticse Torres, Royer Edson; Tikhomirov, Vladimir; Tikhonov, Yury; Timoshenko, Sergey; Tipton, Paul; Tisserant, Sylvain; Todome, Kazuki; Todorova-Nova, Sharka; Todt, Stefanie; Tojo, Junji; Tokár, Stanislav; Tokushuku, Katsuo; Tolley, Emma; Tomoto, Makoto; Tompkins, Lauren; Toms, Konstantin; Tong, Baojia(Tony); Tornambe, Peter; Torrence, Eric; Torres, Heberth; Torró Pastor, Emma; Tosciri, Cecilia; Toth, Jozsef; Touchard, Francois; Tovey, Daniel; Treado, Colleen Jennifer; Trefzger, Thomas; Tresoldi, Fabio; Tricoli, Alessandro; Trigger, Isabel Marian; Trincaz-Duvoid, Sophie; Tripiana, Martin; Trischuk, William; Trocmé, Benjamin; Trofymov, Artur; Troncon, Clara; Trovatelli, Monica; Trovato, Fabrizio; Truong, Loan; Trzebinski, Maciej; Trzupek, Adam; Tsai, Fang-ying; Tsang, Ka Wa; Tseng, Jeffrey; Tsiareshka, Pavel; Tsirintanis, Nikolaos; Tsiskaridze, Shota; Tsiskaridze, Vakhtang; Tskhadadze, Edisher; Tsukerman, Ilya; Tsulaia, Vakhtang; Tsuno, Soshi; Tsybychev, Dmitri; Tu, Yanjun; Tudorache, Alexandra; Tudorache, Valentina; Tulbure, Traian Tiberiu; Tuna, Alexander Naip; Turchikhin, Semen; Turgeman, Daniel; Turk Cakir, Ilkay; 
Turra, Ruggero; Tuts, Michael; Tzovara, Eftychia; Ucchielli, Giulia; Ueda, Ikuo; Ughetto, Michael; Ukegawa, Fumihiko; Unal, Guillaume; Undrus, Alexander; Unel, Gokhan; Ungaro, Francesca; Unno, Yoshinobu; Uno, Kenta; Urban, Jozef; Urquijo, Phillip; Urrejola, Pedro; Usai, Giulio; Usui, Junya; Vacavant, Laurent; Vacek, Vaclav; Vachon, Brigitte; Vadla, Knut Oddvar Hoie; Vaidya, Amal; Valderanis, Chrysostomos; Valdes Santurio, Eduardo; Valente, Marco; Valentinetti, Sara; Valero, Alberto; Valéry, Loïc; Vallance, Robert Adam; Vallier, Alexis; Valls Ferrer, Juan Antonio; Van Daalen, Tal Roelof; Van Den Wollenberg, Wouter; van der Graaf, Harry; van Gemmeren, Peter; Van Nieuwkoop, Jacobus; van Vulpen, Ivo; van Woerden, Marius Cornelis; Vanadia, Marco; Vandelli, Wainer; Vaniachine, Alexandre; Vankov, Peter; Vari, Riccardo; Varnes, Erich; Varni, Carlo; Varol, Tulin; Varouchas, Dimitris; Vartapetian, Armen; Varvell, Kevin; Vasquez, Jared Gregory; Vasquez, Gerardo; Vazeille, Francois; Vazquez Furelos, David; Vazquez Schroeder, Tamara; Veatch, Jason; Vecchio, Valentina; Veloce, Laurelle Maria; Veloso, Filipe; Veneziano, Stefano; Ventura, Andrea; Venturi, Manuela; Venturi, Nicola; Vercesi, Valerio; Verducci, Monica; Verkerke, Wouter; Vermeulen, Ambrosius Thomas; Vermeulen, Jos; Vetterli, Michel; Viaux Maira, Nicolas; Viazlo, Oleksandr; Vichou, Irene; Vickey, Trevor; Vickey Boeriu, Oana Elena; Viehhauser, Georg; Viel, Simon; Vigani, Luigi; Villa, Mauro; Villaplana Perez, Miguel; Vilucchi, Elisabetta; Vincter, Manuella; Vinogradov, Vladimir; Vishwakarma, Akanksha; Vittori, Camilla; Vivarelli, Iacopo; Vlachos, Sotirios; Vogel, Marcelo; Vokac, Petr; Volpi, Guido; von Buddenbrock, Stefan; von Toerne, Eckhard; Vorobel, Vit; Vorobev, Konstantin; Vos, Marcel; Vossebeld, Joost; Vranjes, Nenad; Vranjes Milosavljevic, Marija; Vrba, Vaclav; Vreeswijk, Marcel; Vuillermet, Raphael; Vukotic, Ilija; Wagner, Peter; Wagner, Wolfgang; Wagner-Kuhr, Jeannine; Wahlberg, Hernan; Wahrmund, Sebastian; 
Wakamiya, Kotaro; Walder, James; Walker, Rodney; Walkowiak, Wolfgang; Wallangen, Veronica; Wang, Ann Miao; Wang, Chao; Wang, Fuquan; Wang, Haichen; Wang, Hulin; Wang, Jike; Wang, Jin; Wang, Peilong; Wang, Qing; Wang, Renjie; Wang, Rongkun; Wang, Rui; Wang, Song-Ming; Wang, Tingting; Wang, Wei; Wang, Wenxiao; Wang, Yufeng; Wang, Zirui; Wanotayaroj, Chaowaroj; Warburton, Andreas; Ward, Patricia; Wardrope, David Robert; Washbrook, Andrew; Watkins, Peter; Watson, Alan; Watson, Miriam; Watts, Gordon; Watts, Stephen; Waugh, Ben; Webb, Aaron Foley; Webb, Samuel; Weber, Christian; Weber, Michele; Weber, Sebastian Mario; Weber, Stephen; Webster, Jordan S; Weidberg, Anthony; Weinert, Benjamin; Weingarten, Jens; Weirich, Marcel; Weiser, Christian; Wells, Phillippa; Wenaus, Torre; Wengler, Thorsten; Wenig, Siegfried; Wermes, Norbert; Werner, Michael David; Werner, Per; Wessels, Martin; Weston, Thomas; Whalen, Kathleen; Whallon, Nikola Lazar; Wharton, Andrew Mark; White, Aaron; White, Andrew; White, Martin; White, Ryan; Whiteson, Daniel; Whitmore, Ben William; Wickens, Fred; Wiedenmann, Werner; Wielers, Monika; Wiglesworth, Craig; Wiik-Fuchs, Liv Antje Mari; Wildauer, Andreas; Wilk, Fabian; Wilkens, Henric George; Williams, Hugh; Williams, Sarah; Willis, Christopher; Willocq, Stephane; Wilson, John; Wingerter-Seez, Isabelle; Winkels, Emma; Winklmeier, Frank; Winston, Oliver James; Winter, Benedict Tobias; Wittgen, Matthias; Wobisch, Markus; Wolf, Anton; Wolf, Tim Michael Heinz; Wolff, Robert; Wolter, Marcin Wladyslaw; Wolters, Helmut; Wong, Vincent Wai Sum; Woods, Natasha Lee; Worm, Steven; Wosiek, Barbara; Woźniak, Krzysztof; Wraight, Kenneth; Wu, Miles; Wu, Sau Lan; Wu, Xin; Wu, Yusheng; Wyatt, Terry Richard; Wynne, Benjamin; Xella, Stefania; Xi, Zhaoxu; Xia, Ligang; Xu, Da; Xu, Hanlin; Xu, Lailin; Xu, Tairan; Xu, Wenhao; Yabsley, Bruce; Yacoob, Sahal; Yajima, Kazuki; Yallup, David; Yamaguchi, Daiki; Yamaguchi, Yohei; Yamamoto, Akira; Yamanaka, Takashi; Yamane, Fumiya; 
Yamatani, Masahiro; Yamazaki, Tomohiro; Yamazaki, Yuji; Yan, Zhen; Yang, Haijun; Yang, Hongtao; Yang, Siqi; Yang, Yi; Yang, Yi-lin; Yang, Zongchang; Yao, Weiming; Yap, Yee Chinn; Yasu, Yoshiji; Yatsenko, Elena; Yau Wong, Kaven Henry; Ye, Jingbo; Ye, Shuwei; Yeletskikh, Ivan; Yigitbasi, Efe; Yildirim, Eda; Yorita, Kohei; Yoshihara, Keisuke; Young, Charles; Young, Christopher John; Yu, Jaehoon; Yu, Jie; Yue, Xiaoguang; Yuen, Stephanie P; Yusuff, Imran; Zabinski, Bartlomiej; Zacharis, Georgios; Zaidan, Remi; Zaitsev, Alexander; Zakharchuk, Nataliia; Zalieckas, Justas; Zambito, Stefano; Zanzi, Daniele; Zeitnitz, Christian; Zemaityte, Gabija; Zeng, Jian Cong; Zeng, Qi; Zenin, Oleg; Ženiš, Tibor; Zerwas, Dirk; Zgubič, Miha; Zhang, Dengfeng; Zhang, Dongliang; Zhang, Fangzhou; Zhang, Guangyi; Zhang, Huijun; Zhang, Jinlong; Zhang, Lei; Zhang, Liqing; Zhang, Matt; Zhang, Peng; Zhang, Rui; Zhang, Ruiqi; Zhang, Xueyao; Zhang, Yu; Zhang, Zhiqing; Zhao, Xiandong; Zhao, Yongke; Zhao, Zhengguo; Zhemchugov, Alexey; Zhou, Bing; Zhou, Chen; Zhou, Li; Zhou, Maosen; Zhou, Mingliang; Zhou, Ning; Zhou, You; Zhu, Cheng Guang; Zhu, Heling; Zhu, Hongbo; Zhu, Junjie; Zhu, Yingchun; Zhuang, Xuai; Zhukov, Konstantin; Zhulanov, Vladimir; Zibell, Andre; Zieminska, Daria; Zimine, Nikolai; Zimmermann, Stephanie; Zinonos, Zinonas; Zinser, Markus; Ziolkowski, Michael; Živković, Lidija; Zobernig, Georg; Zoccoli, Antonio; Zoch, Knut; Zorbas, Theodore Georgio; Zou, Rui; zur Nedden, Martin; Zwalinski, Lukasz

    A measurement of $J/\\psi$ and $\\psi(2\\mathrm{S})$ production is presented. It is based on a data sample from Pb+Pb collisions at $\\sqrt{s_{\\mathrm{NN}}}$ = 5.02 TeV and $pp$ collisions at $\\sqrt{s}$ = 5.02 TeV recorded by the ATLAS detector at the LHC in 2015, corresponding to an integrated luminosity of $0.42\\mathrm{nb}^{-1}$ and $25\\mathrm{pb}^{-1}$ in Pb+Pb and $pp$, respectively. The measurements of per-event yields, nuclear modification factors, and non-prompt fractions are performed in the dimuon decay channel for $9 < p_{T}^{\\mu\\mu} < 40$ GeV in dimuon transverse momentum, and $-2.0 < y_{\\mu\\mu} < 2.0$ in rapidity. Strong suppression is found in Pb+Pb collisions for both prompt and non-prompt $J/\\psi$, as well as for prompt and non-prompt $\\psi(2\\mathrm{S})$, increasing with event centrality. The suppression of prompt $\\psi(2\\mathrm{S})$ is observed to be stronger than that of $J/\\psi$, while the suppression of non-prompt $\\psi(2\\mathrm{S})$ is equal to that of the non-prompt $J/\\psi$ withi...
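For reference, the nuclear modification factor quoted in this record has a standard definition in heavy-ion measurements (the symbols below are the conventional ones, not taken from the record itself): the Pb+Pb per-event yield divided by the pp cross-section scaled by the mean nuclear thickness function of the centrality class, with values below unity signalling suppression.

```latex
R_{AA} \;=\; \frac{1}{\langle T_{AA}\rangle}\,
             \frac{\mathrm{d}N_{AA}/\mathrm{d}p_{T}}
                  {\mathrm{d}\sigma_{pp}/\mathrm{d}p_{T}}
```

Here $\langle T_{AA}\rangle$ is the mean nuclear thickness function for the centrality class; $R_{AA}=1$ corresponds to no nuclear modification.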

  10. Prompt photon measurements with the PHENIX MPC-EX detector

    Science.gov (United States)

    Campbell, Sarah

    2013-04-01

    The MPC-EX detector is a preshower extension to PHENIX's Muon Piston Calorimeter (MPC). It consists of eight layers of alternating W absorber and Si mini-pad sensors. Located at forward rapidity, it allows the measurement of prompt photons using the double ratio method. At forward rapidities, prompt photons are dominated by direct photons produced by quark-gluon Compton scattering. In transversely polarized p+p collisions, the prompt photon single spin asymmetry measurement, AN, will resolve the sign discrepancy between the Sivers and twist-3 extractions of AN. In p+Au collisions, the prompt photon RpAu will quantify the level of gluon saturation in the Au nucleus at low x (~10^-3), with a projected systematic error band a factor of four smaller than EPS09's current allowable range. The MPC-EX detector will expand our understanding of gluon nuclear parton distribution functions, provide information about the initial state of heavy ion collisions, and clarify how valence partons' transverse momentum and spin correlate with the proton spin.
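The double ratio mentioned in the record compares the measured inclusive-photon-to-π⁰ ratio with the same ratio expected from hadronic decays alone; in its commonly used form (not quoted in the record, so treat the notation as a standard convention rather than the authors' own):

```latex
R_\gamma \;=\; \frac{\left(\gamma_{\mathrm{incl}}/\pi^{0}\right)_{\mathrm{measured}}}
                    {\left(\gamma_{\mathrm{decay}}/\pi^{0}\right)_{\mathrm{simulated}}},
\qquad
\gamma_{\mathrm{direct}} \;=\; \left(1 - R_\gamma^{-1}\right)\gamma_{\mathrm{incl}}
```

A value of $R_\gamma > 1$ indicates a direct-photon excess over the decay background.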

  11. Examining the Impact of Video Modeling Techniques on the Efficacy of Clinical Voice Assessment.

    Science.gov (United States)

    Werner, Cara; Bowyer, Samantha; Weinrich, Barbara; Gottliebson, Renee; Brehm, Susan Baker

    2017-01-01

    The purpose of the current study was to determine whether presenting patients with a video model improves efficacy of the assessment, as defined by efficiency and decreased variability in trials, during the acoustic component of voice evaluations. Twenty pediatric participants with a mean age of 7.6 years (SD = 1.50; range = 6-11 years), 32 college-age participants with a mean age of 21.32 years (SD = 1.61; range = 18-30 years), and 17 adult participants with a mean age of 54.29 years (SD = 2.78; range = 50-70 years) were included in the study and divided into experimental and control groups. The experimental group viewed a training video prior to receiving verbal instructions and performing acoustic assessment tasks, whereas the control group received verbal instruction only prior to completing the acoustic assessment. Primary measures included the number of clinician cues required and instructional time. Standard deviations of acoustic measurements (e.g., minimum and maximum frequency) were also examined to determine effects on stability. Individuals in the experimental group required significantly fewer cues (P = 0.012) than the control group. Although some trends were observed in instructional time and stability of measurements, no significant differences were found. The findings of this study may be useful for speech-language pathologists in regard to improving assessment of patients' voice disorders with the use of video modeling. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  12. A Comparison of Prompting Tactics for Teaching Intraverbals to Young Adults with Autism.

    Science.gov (United States)

    Vedora, Joseph; Conant, Erin

    2015-10-01

    Several researchers have compared the effectiveness of tact or textual prompts to echoic prompts for teaching intraverbal behavior to young children with autism. We extended this line of research by comparing the effectiveness of visual (textual or tact) prompts to echoic prompts to teach intraverbal responses to three young adults with autism. An adapted alternating treatments design was used with 2 to 3 comparisons for each participant. The results were mixed and did not reveal a more effective prompting procedure across participants, suggesting that the effectiveness of a prompting tactic may be idiosyncratic. The role of one's learning history and the implications for practitioners teaching intraverbal behavior to individuals with autism are discussed.

  13. Security and Privacy in Video Surveillance: Requirements and Challenges

    DEFF Research Database (Denmark)

    Mahmood Rajpoot, Qasim; Jensen, Christian D.

    2014-01-01

    …observed by the system. Several techniques to protect the privacy of individuals have therefore been proposed, but very little research work has focused on the specific security requirements of video surveillance data (in transit or in storage) and on authorizing access to this data. In this paper, we present a general model of video surveillance systems that will help identify the major security and privacy requirements for a video surveillance system, and we use this model to identify practical challenges in ensuring the security of video surveillance data in all stages (in transit and at rest). Our study shows a gap between the identified security requirements and the proposed security solutions, where future research efforts may focus in this domain…

  14. Statistical conditional sampling for variable-resolution video compression.

    Directory of Open Access Journals (Sweden)

    Alexander Wong

    In this study, we investigate a variable-resolution approach to video compression based on Conditional Random Fields (CRFs) and statistical conditional sampling, in order to further improve the compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability of the CRF model, using the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution.
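The scheme can be illustrated in one dimension: even samples survive compression, and the decoder fills in the missing odd samples by conditioning on their neighbours against a dictionary built from the key frame. This sketch replaces true statistical conditional sampling with a deterministic best-match lookup, and all names and data are illustrative, not the paper's method.

```python
def build_dictionary(key_frame):
    """Collect (left, right, middle) context triples from the full-resolution key frame."""
    return [(key_frame[i - 1], key_frame[i + 1], key_frame[i])
            for i in range(1, len(key_frame) - 1)]

def compress(frame):
    """Reduced resolution: keep only even-index samples."""
    return frame[::2]

def restore(low_res, dictionary):
    """Fill each missing odd sample by conditioning on its two surviving neighbours."""
    frame = []
    for i, v in enumerate(low_res):
        frame.append(v)
        if i + 1 < len(low_res):
            left, right = v, low_res[i + 1]
            # MAP-style stand-in for conditional sampling: pick the dictionary
            # entry whose (left, right) context best matches this gap
            _, _, mid = min(dictionary,
                            key=lambda e: (e[0] - left) ** 2 + (e[1] - right) ** 2)
            frame.append(mid)
    return frame

# toy shot: a key frame and a similar frame (smooth ramps)
key_frame = list(range(64))          # stored at full resolution
frame = [v + 1 for v in range(64)]   # stored at reduced resolution
restored = restore(compress(frame), build_dictionary(key_frame))
```

Because the non-key frame resembles the key frame, the conditioned lookup recovers the dropped samples almost exactly; a dissimilar frame would degrade gracefully toward the key frame's statistics.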

  15. Student-produced video for examinations

    DEFF Research Database (Denmark)

    Jensen, Kristian Nøhr; Hansen, Kenneth

    2016-01-01

    The purpose of this article is to show how learning design and scaffolding can be used to create a framework for student-produced video for examinations in higher education. The article takes as its starting point the challenge facing educational institutions of handling and coordinating media productions. Drawing on the Larnaca Declaration's perspectives on learning design and primarily on Jerome Bruner's principles of scaffolding, a model is assembled for supporting video production by students in higher education. By applying this model to teaching sessions and courses, subject-matter and media teachers gain a tool for focusing and coordinating their efforts towards the goal of having students produce and use video for examinations.

  16. Augmented video viewing: transforming video consumption into an active experience

    OpenAIRE

    Wijnants, Maarten; Leën, Jeroen; Quax, Peter; Lamotte, Wim

    2014-01-01

    Traditional video productions fail to cater to the interactivity standards that the current generation of digitally native customers have become accustomed to. This paper therefore advertises the "activation" of the video consumption process. In particular, it proposes to enhance HTML5 video playback with interactive features in order to transform video viewing into a dynamic pastime. The objective is to enable the authoring of more captivating and rewarding video experiences for end-users…

  17. Impact on mortality of prompt admission to critical care for deteriorating ward patients: an instrumental variable analysis using critical care bed strain.

    Science.gov (United States)

    Harris, Steve; Singer, Mervyn; Sanderson, Colin; Grieve, Richard; Harrison, David; Rowan, Kathryn

    2018-05-07

    To estimate the effect of prompt admission to critical care on mortality for deteriorating ward patients, we performed a prospective cohort study of consecutive ward patients assessed for critical care. Prompt admissions (within 4 h of assessment) were compared to a 'watchful waiting' cohort. We used critical care strain (bed occupancy) as a natural randomisation event that would predict prompt transfer to critical care. Strain was classified as low, medium or high (2+, 1 or 0 empty beds). This instrumental variable (IV) analysis was repeated for the subgroup of referrals with a recommendation for critical care once assessed. Risk-adjusted 90-day survival models were also constructed. A total of 12,380 patients from 48 hospitals were available for analysis. There were 2411 (19%) prompt admissions (median delay 1 h, IQR 1-2) and 9969 (81%) controls; 1990 (20%) controls were admitted later (median delay 11 h, IQR 6-26). Prompt admissions were less frequent when critical care strain was high. In the risk-adjusted survival model, 90-day mortality was similar. After allowing for unobserved prognostic differences between the groups, we find that prompt admission to critical care leads to lower 90-day mortality for patients assessed and recommended to critical care.
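With a single instrument (spare beds) and a single exposure (prompt admission), the IV estimate reduces to the Wald ratio cov(z, y)/cov(z, x). The simulation below is entirely hypothetical, not the study's data; it only illustrates why an instrument that shifts admission but is unrelated to severity removes confounding that biases the naive comparison.

```python
import random

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def wald_iv(z, x, y):
    """IV (Wald) estimate of the effect of x on y using instrument z."""
    return cov(z, y) / cov(z, x)

random.seed(0)
n = 20000
# z: empty critical-care beds (the instrument); u: unobserved severity
z = [random.choice([0, 1, 2]) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]
# prompt admission is more likely when beds are free and patients are sicker
x = [1 if 0.4 * zi + 0.4 * ui + random.gauss(0, 1) > 0.5 else 0
     for zi, ui in zip(z, u)]
# assumed true causal effect of prompt admission on the outcome score: -0.3
y = [-0.3 * xi + 1.0 * ui + random.gauss(0, 0.5) for xi, ui in zip(x, u)]

naive = cov(x, y) / cov(x, x)   # confounded: sicker patients get admitted promptly
iv = wald_iv(z, x, y)           # recovers roughly the true -0.3
```

The naive regression is pulled toward zero or even the wrong sign by severity, while the Wald ratio stays close to the simulated causal effect.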

  18. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    Science.gov (United States)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or for model correlation and updating of larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost and agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation, optical flow), video camera based measurements have been successfully used for vibration measurement and subsequent modal analysis, based on techniques such as digital image correlation (DIC) and point tracking. However, these typically require a speckle pattern or high-contrast markers to be placed on the surface of the structure, which poses challenges when the measurement area is large or inaccessible. This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviates the need for structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little…
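The output-only modal analysis step can be caricatured without the phase-based machinery: treat each pixel's intensity time series as a vibration signal, locate the dominant spectral peak, and read the operating deflection shape from the DFT coefficients across pixels. The sketch below assumes intensity tracks displacement linearly, which is a toy stand-in for phase-based motion extraction, not the paper's algorithm.

```python
import cmath
import math

def dft_coeff(signal, k):
    """k-th DFT coefficient (normalised) of a real time series."""
    n = len(signal)
    return sum(s * cmath.exp(-2j * math.pi * k * t / n)
               for t, s in enumerate(signal)) / n

def dominant_mode(pixel_series, fs):
    """Return (frequency in Hz, real-valued mode shape) from per-pixel time series."""
    n = len(pixel_series[0])
    # average power spectrum over pixels; positive frequencies only
    power = [sum(abs(dft_coeff(p, k)) ** 2 for p in pixel_series)
             for k in range(1, n // 2)]
    k_star = power.index(max(power)) + 1
    shape = [dft_coeff(p, k_star) for p in pixel_series]
    # normalise the shape by its largest component
    ref = max(shape, key=abs)
    return k_star * fs / n, [(c / ref).real for c in shape]

# toy structure: 4 "pixels" vibrating at 5 Hz with linearly increasing amplitude
fs = 64.0
series = [[(i + 1) / 4 * math.sin(2 * math.pi * 5 * t / fs) for t in range(128)]
          for i in range(4)]
freq, shape = dominant_mode(series, fs)
```

A real pipeline would first convert pixel intensities to local motion (e.g., via phase of complex filters) before the spectral step, and would separate multiple modes rather than picking one peak.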

  19. Quality and noise measurements in mobile phone video capture

    Science.gov (United States)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
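A typical full-reference metric of the kind computed between encoder input and the reconstructed sequence is peak signal-to-noise ratio (PSNR); the record does not say which metric the authors used, so this is a generic sketch over flattened 8-bit pixel values.

```python
import math

def psnr(reference, reconstructed, peak=255):
    """Full-reference quality: peak signal-to-noise ratio in dB over pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")   # identical sequences
    return 10 * math.log10(peak ** 2 / mse)

# small worked example: two pixels differ by 2 grey levels each
score = psnr([50, 100, 150, 200], [52, 98, 150, 200])   # about 45.1 dB
```

Higher values mean the reconstruction is closer to the encoder input; low-light noise raises the MSE term and lowers the score regardless of encoder capability, which is the system-level effect the study examines.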

  20. Establishing verbal repertoires in children with autism using function-based video modeling.

    Science.gov (United States)

    Plavnick, Joshua B; Ferreri, Summer J

    2011-01-01

    Previous research suggests that language-training procedures for children with autism might be enhanced following an assessment of conditions that evoke emerging verbal behavior. The present investigation examined a methodology to teach recognizable mands based on environmental variables known to evoke participants' idiosyncratic communicative responses in the natural environment. An alternating treatments design was used during Experiment 1 to identify the variables that were functionally related to gestures emitted by 4 children with autism. Results showed that gestures functioned as requests for attention for 1 participant and as requests for assistance to obtain a preferred item or event for 3 participants. Video modeling was used during Experiment 2 to compare mand acquisition when video sequences were either related or unrelated to the results of the functional analysis. An alternating treatments within multiple probe design showed that participants repeatedly acquired mands during the function-based condition but not during the nonfunction-based condition. In addition, generalization of the response was observed during the former but not the latter condition.

  1. Real-Time Human Detection for Aerial Captured Video Sequences via Deep Models

    Directory of Open Access Journals (Sweden)

    Nouar AlDahoul

    2018-01-01

    Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on handcrafted features, which are problem-dependent and optimal only for specific tasks; moreover, they are highly susceptible to dynamical events such as illumination changes, camera jitter, and variations in object size. Feature learning approaches, on the other hand, are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods that combine optical flow with three different deep models (a supervised convolutional neural network (S-CNN), a pretrained CNN feature extractor, and a hierarchical extreme learning machine (H-ELM)) for human detection in videos captured using a non-static camera on an aerial platform with varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset, and are compared in terms of training and testing accuracy and learning speed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrate that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%; S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with support vector machines (SVM); H-ELM has an average accuracy of 95.9%. Using a normal central processing unit (CPU), H-ELM's training takes 445 seconds; learning in S-CNN takes 770 seconds with a high-performance graphical processing unit (GPU).
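The speed advantage of extreme learning machines comes from their training recipe: the hidden layer is random and fixed, so only a linear readout is solved in closed form. The sketch below is a minimal single-hidden-layer ELM (not the paper's hierarchical H-ELM) in pure Python, fitted to a toy 1-D regression problem; all sizes and data are illustrative.

```python
import math
import random

def solve(A, c):
    """Gaussian elimination with partial pivoting for A x = c."""
    n = len(A)
    M = [row[:] + [ci] for row, ci in zip(A, c)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for j in range(k, n + 1):
                M[r][j] -= f * M[k][j]
    x = [0.0] * n
    for k in reversed(range(n)):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

def elm_train(X, y, n_hidden, ridge=1e-3, seed=7):
    """ELM: random fixed hidden layer, ridge-regularised linear readout."""
    rng = random.Random(seed)
    W = [[rng.uniform(-2, 2) for _ in X[0]] for _ in range(n_hidden)]
    b = [rng.uniform(-2, 2) for _ in range(n_hidden)]
    G = [[math.tanh(sum(w * xv for w, xv in zip(W[j], xi)) + b[j])
          for j in range(n_hidden)] for xi in X]
    # normal equations: (G^T G + ridge * I) beta = G^T y
    A = [[sum(G[r][i] * G[r][j] for r in range(len(G)))
          + (ridge if i == j else 0.0) for j in range(n_hidden)]
         for i in range(n_hidden)]
    c = [sum(G[r][i] * y[r] for r in range(len(G))) for i in range(n_hidden)]
    return W, b, solve(A, c)

def elm_predict(model, X):
    W, b, beta = model
    return [sum(bt * math.tanh(sum(w * xv for w, xv in zip(Wj, xi)) + bj)
                for bt, Wj, bj in zip(beta, W, b)) for xi in X]

# toy usage: learn f(x) = x^2 on [0, 1]
X = [[i / 9] for i in range(10)]
y = [xi[0] ** 2 for xi in X]
model = elm_train(X, y, n_hidden=20)
pred = elm_predict(model, X)
```

No gradient descent is involved, which is why ELM training is fast on an ordinary CPU, as the record's timing comparison reflects.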

  2. Correction factor for the experimental prompt neutron decay constant

    International Nuclear Information System (INIS)

    Talamo, Alberto; Gohar, Y.; Sadovich, S.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2013-01-01

    Highlights: • Definition of a spatial correction factor for the experimental prompt neutron decay constant. • Introduction of a MCNP6 calculation methodology to simulate Rossi-alpha distribution for pulsed neutron sources. • Comparison of MCNP6 results with experimental data for count rate, Rossi-alpha, and Feynman-alpha distributions. • Improvement of the comparison between numerical and experimental results by taking into account the dead-time effect. - Abstract: This study introduces a new correction factor to obtain the experimental effective multiplication factor of subcritical assemblies by the point kinetics formulation. The correction factor is defined as the ratio between the MCNP6 prompt neutron decay constant obtained in criticality mode and the one obtained in source mode. The correction factor mainly takes into account the longer neutron lifetime in the reflector region and the effects of the external neutron source. For the YALINA Thermal facility, the comparison between the experimental and computational effective multiplication factors noticeably improves after the application of the correction factor. The accuracy of the MCNP6 computational model of the YALINA Thermal subcritical assembly has been verified by reproducing the neutron count rate, Rossi-α, and Feynman-α distributions obtained from the experimental data
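The prompt neutron decay constant in a Rossi-alpha analysis is the exponent of the correlated term in the coincidence histogram. A minimal, idealised extraction (noise- and background-free, with illustrative numbers, not YALINA data) fits the exponent by log-linear least squares:

```python
import math

def fit_decay_constant(times, counts):
    """Least-squares slope of ln(counts) vs time, for counts ~ A * exp(alpha * t)."""
    logs = [math.log(c) for c in counts]
    n = len(times)
    mt, ml = sum(times) / n, sum(logs) / n
    num = sum((t - mt) * (l - ml) for t, l in zip(times, logs))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# idealised, background-free Rossi-alpha histogram with alpha = -250 s^-1
alpha_true = -250.0
times = [i * 1e-4 for i in range(50)]            # 0 .. 4.9 ms time-gate bins
counts = [1000.0 * math.exp(alpha_true * t) for t in times]
alpha_hat = fit_decay_constant(times, counts)
```

In a real measurement the histogram also contains an uncorrelated (accidental) background and dead-time distortion, both of which must be handled before fitting, as the abstract's dead-time remark indicates.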

  3. Streaming Video--The Wave of the Video Future!

    Science.gov (United States)

    Brown, Laura

    2004-01-01

    Videos and DVDs give teachers more flexibility than slide projectors, filmstrips, and 16mm films, but teachers and students are excited about a new technology called streaming. Streaming allows educators to view videos on demand via the Internet, which works through the transfer of digital media such as video and voice data that is received…

  4. The Measurement and Modeling of a P2P Streaming Video Service

    Science.gov (United States)

    Gao, Peng; Liu, Tao; Chen, Yanming; Wu, Xingyao; El-Khatib, Yehia; Edwards, Christopher

    Most of the work on grid technology in the video area has generally been restricted to aspects of resource scheduling and replica management. The traffic of such a service has many characteristics in common with that of traditional video services. However, the architecture and user behavior in grid networks are quite different from those of the traditional Internet. Given the potential of grid networks and video sharing services, measuring and analyzing P2P IPTV traffic are important and fundamental tasks in the field of grid networks.

  5. Perceptual quality estimation of H.264/AVC videos using reduced-reference and no-reference models

    Science.gov (United States)

    Shahid, Muhammad; Pandremmenou, Katerina; Kondi, Lisimachos P.; Rossholm, Andreas; Lövström, Benny

    2016-09-01

    Reduced-reference (RR) and no-reference (NR) models for video quality estimation, using features that account for the impact of coding artifacts, spatio-temporal complexity, and packet losses, are proposed. The purpose of this study is to analyze a number of potentially quality-relevant features in order to select the most suitable set of features for building the desired models. The proposed sets of features have not been used in the literature and some of the features are used for the first time in this study. The features are employed by the least absolute shrinkage and selection operator (LASSO), which selects only the most influential of them toward perceptual quality. For comparison, we apply feature selection in the complete feature sets and ridge regression on the reduced sets. The models are validated using a database of H.264/AVC encoded videos that were subjectively assessed for quality in an ITU-T compliant laboratory. We infer that just two features selected by RR LASSO and two bitstream-based features selected by NR LASSO are able to estimate perceptual quality with high accuracy, higher than that of ridge, which uses more features. The comparisons with competing works and two full-reference metrics also verify the superiority of our models.
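The LASSO step described above shrinks uninfluential feature weights exactly to zero. A minimal cyclic coordinate descent implementation is sketched below; the two synthetic features are hypothetical stand-ins for the study's bitstream features, not its actual data or pipeline.

```python
import random

def soft_threshold(z, t):
    """Shrink z toward zero by t; exactly zero inside [-t, t]."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/2n)||y - X beta||^2 + lam * ||beta||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            zj = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / zj
    return beta

# feature 0 drives quality; feature 1 is irrelevant and should be zeroed out
random.seed(1)
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * a + random.gauss(0, 0.1) for a in x1]
X = [[a, b] for a, b in zip(x1, x2)]
beta = lasso(X, y, lam=0.2)
```

The irrelevant coefficient lands exactly at zero while the informative one is retained (slightly shrunk by the penalty), which is the feature-selection behaviour the study relies on.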

  6. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system
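The sample-point comparison described above can be sketched directly: both ends derive the same pseudo-random sample points from a shared key, then compare grey-scale values within a tolerance, failing authentication when too many points disagree. Names, tolerances, and point counts below are illustrative, not the paper's parameters.

```python
import random

def sample_points(key, width, height, n_points=16):
    """Both ends derive identical pseudo-random sample points from a shared key."""
    rng = random.Random(key)
    return [(rng.randrange(width), rng.randrange(height)) for _ in range(n_points)]

def authenticate(frame_at_camera, frame_at_recorder, key, tolerance=8,
                 max_mismatches=2):
    """Compare grey-scale values at the shared sample points within a tolerance."""
    h, w = len(frame_at_camera), len(frame_at_camera[0])
    mismatches = sum(
        1 for x, y in sample_points(key, w, h)
        if abs(frame_at_camera[y][x] - frame_at_recorder[y][x]) > tolerance)
    return mismatches <= max_mismatches

# a smoothly varying 8x8 grey-scale frame as seen at the camera
frame = [[x + y for x in range(8)] for y in range(8)]
```

Allowing a small number of mismatches tolerates transmission noise, while a substituted image fails at most sample points; in practice the key (and the sampled points) would change per frame so an attacker cannot learn which pixels to preserve.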

  7. The Effect of Online Violent Video Games on Levels of Aggression

    OpenAIRE

    Hollingdale, Jack; Greitemeyer, Tobias

    2014-01-01

    BACKGROUND: In recent years the video game industry has surpassed both the music and video industries in sales. Violent video games are currently among the most popular video games played by consumers, most notably first-person shooters (FPS). Technological advancements in the game play experience, including the ability to play online, have accounted for this increase in popularity. Previous research, utilising the General Aggression Model (GAM), has identified that violent video games increase...

  8. An Evaluation of Video Modeling with Embedded Instructions to Teach Implementation of Stimulus Preference Assessments

    Science.gov (United States)

    Rosales, Rocío; Gongola, Leah; Homlitas, Christa

    2015-01-01

    A multiple baseline design across participants was used to evaluate the effects of video modeling with embedded instructions on training teachers to implement 3 preference assessments. Each assessment was conducted with a confederate learner or a child with autism during generalization probes. All teachers met the predetermined mastery criterion,…

  9. Perceptual learning during action video game playing.

    Science.gov (United States)

    Green, C Shawn; Li, Renjie; Bavelier, Daphne

    2010-04-01

    Action video games have been shown to enhance behavioral performance on a wide variety of perceptual tasks, from those that require effective allocation of attentional resources across the visual scene, to those that demand the successful identification of fleetingly presented stimuli. Importantly, these effects have not only been shown in expert action video game players, but a causative link has been established between action video game play and enhanced processing through training studies. Although an account based solely on attention fails to capture the variety of enhancements observed after action game playing, a number of models of perceptual learning are consistent with the observed results, with behavioral modeling favoring the hypothesis that avid video game players are better able to form templates for, or extract the relevant statistics of, the task at hand. This may suggest that the neural site of learning is in areas where information is integrated and actions are selected; yet changes in low-level sensory areas cannot be ruled out. Copyright © 2009 Cognitive Science Society, Inc.

  10. A Practitioner Model for Increasing Eye Contact in Children With Autism.

    Science.gov (United States)

    Cook, Jennifer L; Rapp, John T; Mann, Kathryn R; McHugh, Catherine; Burji, Carla; Nuta, Raluca

    2017-05-01

    Although many teaching techniques for children with autism spectrum disorder (ASD) require the instructor to gain the child's eye contact prior to delivering an instructional demand, the literature contains notably few procedures that reliably produce this outcome. To address this problem, we evaluated the effects of a sequential model for increasing eye contact in children with ASD. The model included the following phases: contingent praise only (for eye contact), contingent edibles plus praise, stimulus prompts plus contingent edibles and praise, contingent video and praise, schedule thinning, and maintenance evaluations for up to 2 years. Results indicated that the procedures increased eye contact for 20 participants (one additional participant did not require consequences). For 16 participants, praise (alone) was not sufficient to support eye contact; however, high levels of eye contact were typically maintained with these participants when therapists used combined schedules of intermittent edibles or video and continuous praise. We discuss some limitations of this model and directions for future research on increasing eye contact for children with ASD.

  11. Utilization of actively-induced, prompt radiation emission for nonproliferation applications

    International Nuclear Information System (INIS)

    Blackburn, B.W.; Jones, J.L.; Moss, C.E.; Mihalczo, J.T.; Hunt, A.W.; Harmon, F.; Watson, S.M.; Johnson, J.T.

    2007-01-01

    The pulsed photonuclear assessment (PPA) technique, which has demonstrated the ability to detect shielded nuclear material, is based on utilizing delayed neutrons and photons between accelerator pulses. While most active interrogation systems have focused on delayed neutron and gamma-ray signatures, there is an increasing need to bring faster detection and acquisition capabilities to field inspection applications. This push for decreased interrogation times, increased sensitivity, and mitigation of false positives requires that detection systems take advantage of all available information. Collaborative research between Idaho National Lab (INL), Idaho State University's Idaho Accelerator Center (IAC), Los Alamos National Laboratory (LANL), and Oak Ridge National Laboratory (ORNL), has focused on exploiting actively-induced, prompt radiation signatures from nuclear material within a pulsed photonuclear environment. To date, these prompt emissions have not been effectively exploited due to difficulties in detection and signal processing inherent in the prompt regime as well as an overall poor understanding of the magnitude and yields of these emissions. Exploitation of prompt radiation (defined as during an accelerator pulse/photofission event and/or immediately after (<1 μs)) has the potential to dramatically reduce interrogation times since neutron yields are more than two orders of magnitude greater than delayed emissions. Recent preliminary experiments conducted at the IAC suggest that it is indeed possible to extract prompt neutron information within a pulsed photon environment. Successful exploitation of prompt emissions is critical for the development of an improved robust, high-throughput, low target dose inspection system for detection of shielded nuclear materials

  12. Utilization of Actively-induced, Prompt Radiation Emission for Nonproliferation Applications

    International Nuclear Information System (INIS)

    F. W. Blackburn; J. L. Jones; C. E. Moss; J. T. Mihalzco; A. W. Hunt; F. Harmon

    2006-01-01

    The pulsed Photonuclear Assessment (PPA) technique, which has demonstrated the ability to detect shielded nuclear material, is based on utilizing delayed neutrons and photons between accelerator pulses. While most active interrogation systems have focused on delayed neutron and gamma-ray signatures, the current requirements of various agencies necessitate bringing faster detection and acquisition capabilities to field inspection applications. This push for decreased interrogation times, increased sensitivity and mitigation of false positives requires that detection systems take advantage of all available information. Collaborative research between Idaho National Lab (INL), Idaho State University's Idaho Accelerator Center (IAC), Los Alamos National Laboratory (LANL), and Oak Ridge National Laboratory (ORNL), has focused on exploiting actively-induced, prompt radiation signatures from nuclear material within a pulsed photonuclear environment. To date, these prompt emissions have not been effectively exploited due to difficulties in detection and signal processing inherent in the prompt regime as well as an overall poor understanding of the magnitude and yields of these emissions. Exploitation of prompt radiation (defined as during an accelerator pulse/(photo)fission event and/or immediately after (< 1 ms)) has the potential to dramatically reduce interrogation times since the yields are more than two orders of magnitude greater than delayed emissions. Recent preliminary experiments conducted at the IAC suggest that it is indeed possible to extract prompt neutron information within a pulsed photon environment. Successful exploitation of prompt emissions is critical for the development of an improved robust, high-throughput, low target dose inspection system for detection of shielded nuclear materials

  13. Playing Action Video Games Improves Visuomotor Control.

    Science.gov (United States)

    Li, Li; Chen, Rongrong; Chen, Jing

    2016-08-01

    Can playing action video games improve visuomotor control? If so, can these games be used in training people to perform daily visuomotor-control tasks, such as driving? We found that action gamers have better lane-keeping and visuomotor-control skills than do non-action gamers. We then trained non-action gamers with action or nonaction video games. After they played a driving or first-person-shooter video game for 5 or 10 hr, their visuomotor control improved significantly. In contrast, non-action gamers showed no such improvement after they played a nonaction video game. Our model-driven analysis revealed that although different action video games have different effects on the sensorimotor system underlying visuomotor control, action gaming in general improves the responsiveness of the sensorimotor system to input error signals. The findings support a causal link between action gaming (for as little as 5 hr) and enhancement in visuomotor control, and suggest that action video games can be beneficial training tools for driving. © The Author(s) 2016.

  14. The LivePhoto Physics videos and video analysis site

    Science.gov (United States)

    Abbott, David

    2009-09-01

    The LivePhoto site is similar to an archive of short films for video analysis. Some videos have Flash tools for analyzing the video embedded in the movie. Most of the videos address mechanics topics with titles like Rolling Pencil (check this one out for pedagogy and content knowledge—nicely done!), Juggler, Yo-yo, Puck and Bar (this one is an inelastic collision with rotation), but there are a few titles in other areas (E&M, waves, thermo, etc.).

  15. Methodology for using prompt gamma activation analysis to measure the binary diffusion coefficient of a gas in a porous medium

    International Nuclear Information System (INIS)

    Rios Perez, Carlos A.; Biegalski, Steve R.; Deinert, Mark R.

    2012-01-01

    Highlights: ► Prompt gamma activation analysis is used to study gas diffusion in a porous system. ► Diffusion coefficients are determined using prompt gamma activation analysis. ► Predicted concentrations fit experimental measurements with an R² of 0.98. - Abstract: Diffusion plays a critical role in determining the rate at which gases migrate through porous systems. Accurate estimates of diffusion coefficients are essential if gas transport is to be accurately modeled, and better techniques are needed that can measure these coefficients non-invasively. Here we present a novel method for using prompt gamma activation analysis to determine the binary diffusion coefficient of a gas in a porous system. Argon diffusion experiments were conducted in a 1 m long, 10 cm diameter, horizontal column packed with a SiO2 sand. The temporal variation of argon concentration within the system was measured using prompt gamma activation analysis. The binary diffusion coefficient was obtained by comparing the experimental data with predictions from a numerical model in which the diffusion coefficient was varied until the sum of squared errors between experiment and model was minimized. Predictions of argon concentration using the optimal diffusivity fit experimental measurements with an R² of 0.983.
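
    The fitting procedure, varying the diffusivity until the sum of squared errors between model and measurement is minimized, can be sketched with a simple 1-D diffusion model. The semi-infinite-column solution, detector positions, and diffusivity values below are illustrative assumptions, not the experiment's actual geometry or data.

```python
# Sketch: recover a binary diffusion coefficient by SSE minimization.
import math

def model_conc(x, t, D, c0=1.0):
    """Concentration from a constant source at x=0 in a semi-infinite column."""
    return c0 * math.erfc(x / (2.0 * math.sqrt(D * t)))

# Synthetic "PGAA measurements" generated with a known diffusivity.
D_true = 2.0e-5               # m^2/s, hypothetical effective diffusivity
xs = [0.1, 0.2, 0.3, 0.5]     # measurement positions along the column, m
ts = [600.0, 1800.0, 3600.0]  # sampling times, s
measured = [(x, t, model_conc(x, t, D_true)) for x in xs for t in ts]

def sse(D):
    """Sum of squared errors between model predictions and 'measurements'."""
    return sum((model_conc(x, t, D) - c) ** 2 for x, t, c in measured)

# Brute-force search over candidate diffusivities; a real fit would use a
# proper optimizer (golden-section search, nonlinear least squares, ...).
candidates = [1.0e-5 + i * 1.0e-7 for i in range(300)]
D_fit = min(candidates, key=sse)
print(D_fit)  # recovers ~2.0e-5 m^2/s
```

    With noisy real data the minimum of the SSE curve is flatter, which is why the paper reports the goodness of fit (R² of 0.983) alongside the fitted coefficient.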

  16. Exploration of the impact of a voice activated decision support system (VADSS) with video on resuscitation performance by lay rescuers during simulated cardiopulmonary arrest.

    Science.gov (United States)

    Hunt, Elizabeth A; Heine, Margaret; Shilkofski, Nicole S; Bradshaw, Jamie Haggerty; Nelson-McMillan, Kristen; Duval-Arnould, Jordan; Elfenbein, Ron

    2015-03-01

    To assess whether access to a voice activated decision support system (VADSS) containing video clips demonstrating resuscitation manoeuvres was associated with increased compliance with American Heart Association Basic Life Support (AHA BLS) guidelines. This was a prospective, randomised controlled trial. Subjects with no recent clinical experience were randomised to the VADSS or control group and participated in a 5-min simulated out-of-hospital cardiopulmonary arrest with another 'bystander'. Data on performance for predefined outcome measures based on the AHA BLS guidelines were abstracted from videos and the simulator log. 31 subjects were enrolled (VADSS 16 vs control 15), with no significant differences in baseline characteristics. Study subjects in the VADSS were more likely to direct the bystander to: (1) perform compressions to ventilations at the correct ratio of 30:2 (VADSS 15/16 (94%) vs control 4/15 (27%), p=compressor versus ventilator roles after 2 min (VADSS 12/16 (75%) vs control 2/15 (13%), p=0.001). The VADSS group took longer to initiate chest compressions than the control group: VADSS 159.5 (±53) s versus control 78.2 (±20) s, pcontrol 75.4 (±8.0), p=0.35. The use of an audio and video assisted decision support system during a simulated out-of-hospital cardiopulmonary arrest prompted lay rescuers to follow cardiopulmonary resuscitation (CPR) guidelines but was also associated with an unacceptable delay to starting chest compressions. Future studies should explore: (1) if video is synergistic to audio prompts, (2) how mobile technologies may be leveraged to spread CPR decision support and (3) usability testing to avoid unintended consequences. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  17. Automated UAV-based mapping for airborne reconnaissance and video exploitation

    Science.gov (United States)

    Se, Stephen; Firoozfam, Pezhman; Goldstein, Norman; Wu, Linda; Dutkiewicz, Melanie; Pace, Paul; Naud, J. L. Pierre

    2009-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for force protection, situational awareness, mission planning, damage assessment and other tasks. UAVs gather huge amounts of video data, and it is extremely labour-intensive for operators to analyse hours and hours of received footage. At MDA, we have developed a suite of tools for automated video exploitation, including calibration, visualization, change detection and 3D reconstruction; ongoing work aims to improve the robustness of these tools and automate the process as much as possible. Our calibration tool extracts and matches tie-points in the video frames incrementally to recover the camera calibration and poses, which are then refined by bundle adjustment. Our visualization tool stabilizes the video, expands its field of view and creates a geo-referenced mosaic from the video frames. It is important to identify anomalies in a scene, which may include improvised explosive devices (IEDs), but manually comparing video clips to look for differences is tedious and difficult. Our change detection tool allows the user to load two video clips taken from two passes at different times and flags any changes between them. 3D models are useful for situational awareness, as a scene is easier to understand when visualized in 3D. Our 3D reconstruction tool creates calibrated photo-realistic 3D models from video clips taken from different viewpoints, using both semi-automated and automated approaches. The resulting 3D models also allow distance measurements and line-of-sight analysis.
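
    The core of change detection between two passes, once the frames are co-registered, is differencing and thresholding. A minimal sketch on toy gray-scale "frames" (registration, geo-referencing, and noise filtering are omitted, and the threshold is an illustrative assumption):

```python
# Toy change detection: flag pixels whose gray value changed significantly
# between two co-registered passes.

def changed_pixels(frame_a, frame_b, threshold=30):
    """Return (row, col) positions that differ by more than the threshold."""
    return [(r, c)
            for r, row in enumerate(frame_a)
            for c, va in enumerate(row)
            if abs(va - frame_b[r][c]) > threshold]

pass1 = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
pass2 = [[10, 10, 10], [10, 200, 10], [10, 10, 12]]  # new object at center
print(changed_pixels(pass1, pass2))  # [(1, 1)]
```

    The threshold suppresses small illumination changes (the 12 vs 10 pixel) while flagging the genuinely new object; in practice, clusters of flagged pixels rather than single pixels would be reported to the operator.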

  18. Quality Assessment of Compressed Video for Automatic License Plate Recognition

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Støttrup-Andersen, Jesper; Forchhammer, Søren

    2014-01-01

    Definition of video quality requirements for video surveillance poses new questions in the area of quality assessment. This paper presents a quality assessment experiment for an automatic license plate recognition scenario. We explore the influence of compression by the H.264/AVC and H.265/HEVC standards on the recognition performance. We compare logarithmic and logistic functions for quality modeling. Our results show that a logistic function can better describe the dependence of recognition performance on quality for both compression standards. We observe that automatic license plate recognition in our study behaves similarly to human recognition, allowing the use of the same mathematical models. We furthermore propose an application of one of the models for video surveillance systems.
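
    The logistic quality model favored by the paper can be sketched as follows: recognition rate as a logistic function of a quality indicator, fitted by minimizing squared error. The synthetic data points and the grid-search fit below are illustrative assumptions, not the paper's measurements or fitted parameters.

```python
# Sketch: fit a logistic recognition-vs-quality model by grid search.
import math

def logistic(q, a, b):
    """Recognition rate as a logistic function of a quality indicator q."""
    return 1.0 / (1.0 + math.exp(-(a * q + b)))

# Synthetic (quality, recognition-rate) pairs: performance saturates at both
# ends, a shape a logarithmic model cannot reproduce.
data = [(1, 0.05), (2, 0.12), (3, 0.35), (4, 0.65), (5, 0.88), (6, 0.95)]

def sse(a, b):
    return sum((logistic(q, a, b) - r) ** 2 for q, r in data)

# Coarse grid search for the best (a, b); a real fit would use nonlinear
# least squares.
best = min(((a / 10.0, b / 10.0) for a in range(1, 40) for b in range(-80, 0)),
           key=lambda p: sse(*p))
print(best, sse(*best))
```

    The saturation at low and high quality is the practical argument for the logistic form: below some bitrate recognition fails entirely, and above some bitrate further quality gains no longer help.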

  19. An evaluation of parent-produced video self-modeling to improve independence in an adolescent with intellectual developmental disorder and an autism spectrum disorder: a controlled case study.

    Science.gov (United States)

    Allen, Keith D; Vatland, Christopher; Bowen, Scott L; Burke, Raymond V

    2015-07-01

    We evaluated a parent-created video self-modeling (VSM) intervention to improve independence in an adolescent diagnosed with Intellectual Developmental Disorder (IDD) and Autism Spectrum Disorder (ASD). In a multiple baseline design across routines, a parent and her 17-year-old daughter created self-modeling videos of three targeted routines needed for independence in the community. The parent used a tablet device with a mobile app called "VideoTote" to produce videos of the daughter performing the targeted routines. The mobile app includes a 30-s tutorial about making modeling videos. The parent and daughter produced and watched a VSM scene prior to performing each of the three routines in an analogue community setting. The adolescent showed marked, immediate, and sustained improvements in performing each routine following the production and implementation of the VSM. Performance was found to generalize to the natural community setting. Results suggest that parents can use available technology to promote community independence for transition age individuals. © The Author(s) 2015.

  20. REAL-TIME VIDEO SCALING BASED ON CONVOLUTION NEURAL NETWORK ARCHITECTURE

    Directory of Open Access Journals (Sweden)

    S Safinaz

    2017-08-01

    Full Text Available In recent years, video super-resolution has become a mandatory requirement for obtaining high-resolution videos. Many super-resolution techniques have been researched, but video super-resolution, or scaling, remains a vital challenge. In this paper, we present real-time video scaling based on a convolutional neural network architecture that eliminates blurriness in images and video frames and provides better reconstruction quality when scaling large datasets from low-resolution to high-resolution frames. We compare our outcomes with multiple existing algorithms. Extensive results show that the proposed technique, RemCNN (Reconstruction error minimization Convolutional Neural Network), outperforms existing techniques such as bicubic, bilinear and MCResNet, and provides better reconstructed motion images and video frames. The experimental results show average PSNRs of 47.80474 for 2x upscaling, 41.70209 for 3x upscaling and 36.24503 for 4x upscaling on the Myanmar dataset, which is very high in contrast to other existing techniques. These results prove the high efficiency and better performance of our proposed real-time video scaling architecture based on a convolutional neural network.
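
    PSNR, the fidelity metric quoted above, measures reconstruction quality in decibels relative to the peak pixel value. A minimal reference implementation for 8-bit frames (the paper's exact evaluation pipeline is not specified in this abstract):

```python
# Peak signal-to-noise ratio between two equal-size 8-bit frames.
import math

def psnr(ref, test, peak=255.0):
    """PSNR in dB: 10*log10(peak^2 / MSE)."""
    se = 0.0
    n = 0
    for row_r, row_t in zip(ref, test):
        for r, t in zip(row_r, row_t):
            se += (r - t) ** 2
            n += 1
    if se == 0:
        return float("inf")  # identical frames
    mse = se / n
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [[100, 110], [120, 130]]
degraded = [[101, 108], [123, 129]]
print(round(psnr(ref, degraded), 2))  # ≈ 42.39 dB
```

    Higher is better; the reported values around 36-48 dB sit in the range usually associated with good to very good reconstructions, though PSNR is only a proxy for perceived quality.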

  1. 45 CFR 235.70 - Prompt notice to child support or Medicaid agency.

    Science.gov (United States)

    2010-10-01

    ... 45 Public Welfare 2 2010-10-01 2010-10-01 false Prompt notice to child support or Medicaid agency... Medicaid agency. (a) A State plan under title IV-A of the Social Security Act must provide for prompt.... Prompt notice must also include all relevant information as prescribed by the State medicaid agency for...

  2. Constructing Student Knowledge in the Online Classroom: The Effectiveness of Focal Prompts

    Science.gov (United States)

    Howell, Ginger S.; LaCour, Misty M.; McGlawn, Penny A.

    2017-01-01

    The purpose of this study was to examine the effect of three Structured Divergent discussion board prompt designs on knowledge construction in a graduate online course. According to Andrews (1980), the form of the question affects the extent of the response within a discussion. The Playground prompt, the Brainstorming prompt, and the Focal prompt…

  3. Image/video understanding systems based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, i.e., an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain appears able to emulate similar graph/network models. Symbols, predicates and grammars emerge naturally in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns; spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and separation of figure from ground, are special kinds of network transformations: they convert the primary image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification and analogy with higher-level model-based reasoning in a single framework, working similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on these principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity, enabling intelligent computer vision systems for design and manufacturing.

  4. Secured web-based video repository for multicenter studies.

    Science.gov (United States)

    Yan, Ling; Hicks, Matt; Winslow, Korey; Comella, Cynthia; Ludlow, Christy; Jinnah, H A; Rosen, Ami R; Wright, Laura; Galpern, Wendy R; Perlmutter, Joel S

    2015-04-01

    We developed a novel secured web-based dystonia video repository for the Dystonia Coalition, part of the Rare Disease Clinical Research network funded by the Office of Rare Diseases Research and the National Institute of Neurological Disorders and Stroke. A critical component of phenotypic data collection for all projects of the Dystonia Coalition includes a standardized video of each participant. We now describe our method for collecting, serving and securing these videos that is widely applicable to other studies. Each recruiting site uploads standardized videos to a centralized secured server for processing to permit website posting. The streaming technology used to view the videos from the website does not allow downloading of video files. With appropriate institutional review board approval and agreement with the hosting institution, users can search and view selected videos on the website using customizable, permissions-based access that maintains security yet facilitates research and quality control. This approach provides a convenient platform for researchers across institutions to evaluate and analyze shared video data. We have applied this methodology for quality control, confirmation of diagnoses, validation of rating scales, and implementation of new research projects. We believe our system can be a model for similar projects that require access to common video resources. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. The Use of Individualized Video Modeling to Enhance Positive Peer Interactions in Three Preschool Children

    Science.gov (United States)

    Green, Vanessa A.; Prior, Tessa; Smart, Emily; Boelema, Tanya; Drysdale, Heather; Harcourt, Susan; Roche, Laura; Waddington, Hannah

    2017-01-01

    The study described in this article sought to enhance the social interaction skills of 3 preschool children using video modeling. All children had been assessed as having difficulties in their interactions with peers. Two were above average on internalizing problems and the third was above average on externalizing problems. The study used a…

  6. Determination of prompt neutron decay constant of the AP-600 reactor core

    International Nuclear Information System (INIS)

    Surbakti, T.

    1998-01-01

    Determination of the prompt neutron decay constant of the AP-600 reactor core has been performed using a combination of the WIMS/D4 and Batan-2DIFF codes. The calculation was done at the beginning of cycle with all control rods withdrawn. Cell generation for the various core materials was carried out with 4 neutron energy groups in the 1-D transport code WIMS/D4. The cell is modeled as 1/4 of a fuel assembly in a cluster model with a square pitch arrangement, from which the unit cell dimension is calculated. The unit cell consists of a fuel and a moderator unit; its dimension, input to WIMS/D4 as an annulus, is obtained from the equivalent unit cell. The resulting macroscopic cross sections were used as input to the neutron diffusion code Batan-2DIFF for the core calculation, covering the three fuel enrichment regions of the AP-600 core, namely 2, 2.5 and 3%. The diffusion calculation yields a delayed neutron fraction of 6.932E-03 and an average prompt neutron lifetime of 26.38 μs, giving a prompt neutron decay constant of 262.8 s-1. Compared with the design values of 7.5E-03 for the delayed neutron fraction and 19.6 μs for the average prompt neutron lifetime, the deviations are about 8% and 34%, respectively. These deviations arise because several AP-600 core components were still unknown and could not be included in the calculation.
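
    The quoted decay constant follows from the standard point-kinetics relation at critical, α = β_eff / Λ. A quick check of the arithmetic against the numbers in the abstract:

```python
# Prompt neutron decay constant at critical: alpha = beta_eff / Lambda.
beta_eff = 6.932e-3      # calculated delayed neutron fraction
Lambda = 26.38e-6        # calculated average prompt neutron lifetime, s
alpha = beta_eff / Lambda
print(round(alpha, 1))   # 262.8 s^-1, as quoted

# Deviations from the design values:
beta_design = 7.5e-3
Lambda_design = 19.6e-6
dev_beta = 100 * (beta_design - beta_eff) / beta_design
dev_lifetime = 100 * (Lambda - Lambda_design) / Lambda_design
print(round(dev_beta, 1), round(dev_lifetime, 1))  # ≈ 7.6 and 34.6 percent
```

    The ~8% and ~34% deviations quoted in the abstract are these two relative differences.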

  7. Physics and Video Analysis

    Science.gov (United States)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.

  8. Measurement of Prompt Photon Cross Sections in Photoproduction at HERA

    CERN Document Server

    Aktas, A.; Anthonis, T.; Asmone, A.; Babaev, A.; Backovic, S.; Bahr, J.; Baranov, P.; Barrelet, E.; Bartel, W.; Baumgartner, S.; Becker, J.; Beckingham, M.; Behnke, O.; Behrendt, O.; Belousov, A.; Berger, Ch.; Berger, N.; Berndt, T.; Bizot, J.C.; Bohme, J.; Boenig, M.-O.; Boudry, V.; Bracinik, J.; Brisson, V.; Broker, H.-B.; Brown, D.P.; Bruncko, D.; Busser, F.W.; Bunyatyan, A.; Buschhorn, G.; Bystritskaya, L.; Campbell, A.J.; Caron, S.; Cassol-Brunner, F.; Cerny, K.; Chekelian, V.; Collard, C.; Contreras, J.G.; Coppens, Y.R.; Coughlan, J.A.; Cox, B.E.; Cozzika, G.; Cvach, J.; Dainton, J.B.; Dau, W.D.; Daum, K.; Delcourt, B.; Demirchyan, R.; De Roeck, A.; Desch, K.; De Wolf, E.A.; Diaconu, C.; Dingfelder, J.; Dodonov, V.; Dubak, A.; Duprel, C.; Eckerlin, Guenter; Efremenko, V.; Egli, S.; Eichler, R.; Eisele, F.; Ellerbrock, M.; Elsen, E.; Erdmann, M.; Erdmann, W.; Faulkner, P.J.W.; Favart, L.; Fedotov, A.; Felst, R.; Ferencei, J.; Fleischer, M.; Fleischmann, P.; Fleming, Y.H.; Flucke, G.; Flugge, G.; Fomenko, A.; Foresti, I.; Formanek, J.; Franke, G.; Frising, G.; Gabathuler, E.; Gabathuler, K.; Garutti, E.; Garvey, J.; Gayler, J.; Gerhards, R.; Gerlich, C.; Ghazaryan, Samvel; Goerlich, L.; Gogitidze, N.; Gorbounov, S.; Grab, C.; Grassler, H.; Greenshaw, T.; Gregori, M.; Grindhammer, Guenter; Gwilliam, C.; Haidt, D.; Hajduk, L.; Haller, J.; Hansson, M.; Heinzelmann, G.; Henderson, R.C.W.; Henschel, H.; Henshaw, O.; Heremans, R.; Herrera, G.; Herynek, I.; Heuer, R.-D.; Hildebrandt, M.; Hiller, K.H.; Hladky, J.; Hoting, P.; Hoffmann, D.; Horisberger, R.; Hovhannisyan, A.; Ibbotson, M.; Ismail, M.; Jacquet, M.; Janauschek, L.; Janssen, X.; Jemanov, V.; Jonsson, L.; Johnson, D.P.; Jung, H.; Kant, D.; Kapichine, M.; Karlsson, M.; Katzy, J.; Keller, N.; Kennedy, J.; Kenyon, I.R.; Kiesling, Christian M.; Klein, M.; Kleinwort, C.; Kluge, T.; Knies, G.; Knutsson, A.; Koblitz, B.; Korbel, V.; Kostka, P.; Koutouev, R.; Kropivnitskaya, A.; Kroseberg, J.; Kuckens, J.; Kuhr, 
T.; Landon, M.P.J.; Lange, W.; Lastovicka, T.; Laycock, P.; Lebedev, A.; Leiner, B.; Lemrani, R.; Lendermann, V.; Levonian, S.; Lindfeld, L.; Lipka, K.; List, B.; Lobodzinska, E.; Loktionova, N.; Lopez-Fernandez, R.; Lubimov, V.; Lueders, H.; Luke, D.; Lux, T.; Lytkin, L.; Makankine, A.; Malden, N.; Malinovski, E.; Mangano, S.; Marage, P.; Marks, J.; Marshall, R.; Martisikova, M.; Martyn, H.-U.; Maxfield, S.J.; Meer, D.; Mehta, A.; Meier, K.; Meyer, A.B.; Meyer, H.; Meyer, J.; Michine, S.; Mikocki, S.; Milcewicz-Mika, I.; Milstead, D.; Mohamed, A.; Moreau, F.; Morozov, A.; Morozov, I.; Morris, J.V.; Mozer, Matthias Ulrich; Muller, K.; Murin, P.; Nagovizin, V.; Naroska, B.; Naumann, J.; Naumann, Th.; Newman, Paul R.; Niebuhr, C.; Nikiforov, A.; Nikitin, D.; Nowak, G.; Nozicka, M.; Oganezov, R.; Olivier, B.; Olsson, J.E.; Ossoskov, G.; Ozerov, D.; Pascaud, C.; Patel, G.D.; Peez, M.; Perez, E.; Perieanu, A.; Petrukhin, A.; Pitzl, D.; Placakyte, R.; Poschl, R.; Portheault, B.; Povh, B.; Raicevic, N.; Ratiani, Z.; Reimer, P.; Reisert, B.; Rimmer, A.; Risler, C.; Rizvi, E.; Robmann, P.; Roland, B.; Roosen, R.; Rostovtsev, A.; Rurikova, Z.; Rusakov, S.; Rybicki, K.; Sankey, D.P.C.; Sauvan, E.; Schatzel, S.; Scheins, J.; Schilling, F.-P.; Schleper, P.; Schmidt, S.; Schmitt, S.; Schneider, M.; Schoeffel, L.; Schoning, A.; Schroder, V.; Schultz-Coulon, H.-C.; Schwanenberger, C.; Sedlak, K.; Sefkow, F.; Sheviakov, I.; Shtarkov, L.N.; Sirois, Y.; Sloan, T.; Smirnov, P.; Soloviev, Y.; South, D.; Spaskov, V.; Specka, Arnd E.; Spitzer, H.; Stamen, R.; Stella, B.; Stiewe, J.; Strauch, I.; Straumann, U.; Tchoulakov, V.; Thompson, Graham; Thompson, P.D.; Tomasz, F.; Traynor, D.; Truoel, Peter; Tsipolitis, G.; Tsurin, I.; Turnau, J.; Tzamariudaki, E.; Uraev, A.; Urban, Marcel; Usik, A.; Utkin, D.; Valkar, S.; Valkarova, A.; Vallee, C.; Van Mechelen, P.; Vargas Trevino, A.; Vazdik, Y.; Veelken, C.; Vest, A.; Vinokurova, S.; Volchinski, V.; Wacker, K.; Wagner, J.; Weber, G.; Weber, R.; 
Wegener, D.; Werner, C.; Werner, N.; Wessels, M.; Wessling, B.; Winter, G.-G.; Wissing, Ch.; Woehrling, E.-E.; Wolf, R.; Wunsch, E.; Xella, S.; Yan, W.; Zacek, J.; Zalesak, J.; Zhang, Z.; Zhokin, A.; Zohrabyan, H.; Zomer, F.

    2004-01-01

    Results are presented on the photoproduction of isolated prompt photons, inclusively and associated with jets, in the gamma p center of mass energy range 142 < W < 266 GeV, with jets measured down to transverse energies E_T^jet > 4.5 GeV. They are measured differentially as a function of E_T^gamma, E_T^jet, the pseudorapidities eta^gamma and eta^jet, and estimators of the momentum fractions x_gamma and x_p of the incident photon and proton carried by the constituents participating in the hard process. In order to further investigate the underlying dynamics, the angular correlation between the prompt photon and the jet in the transverse plane is studied. Predictions of perturbative QCD calculations in next-to-leading order are about 30% below the inclusive prompt photon data after corrections for hadronisation and multiple interactions, but are in reasonable agreement with the results for prompt photons associated with jets. Comparisons with the predictions of the event generators PYTHIA and HERWIG are also presented.

  9. A feasibility study for a clinical decision support system prompting HIV testing.

    Science.gov (United States)

    Chadwick, D R; Hall, C; Rae, C; Rayment, Ml; Branch, M; Littlewood, J; Sullivan, A

    2017-07-01

    Levels of undiagnosed HIV infection and late presentation remain high globally despite attempts to increase testing. The objective of this study was to evaluate a risk-based prototype application to prompt HIV testing when patients undergo routine blood tests. Two computer physician order entry (CPOE) systems were modified using the application to prompt health care workers (HCWs) to add an HIV test when other tests selected suggested that the patient was at higher risk of HIV infection. The application was applied for a 3-month period in two areas, in a large London hospital and in general practices in Teesside/North Yorkshire. At the end of the evaluation period, HCWs were interviewed to assess the usability and acceptability of the prompt. Numbers of HIV tests ordered in the general practice areas were also compared before and after the prompt's introduction. The system was found to be both useable and generally acceptable to hospital doctors, general practitioners and nurse practitioners, with little evidence of prompt/alert fatigue. The issue of the prompt appearing late in the patient consultation did lead to some difficulties, particularly around discussion of the test and consent. In the general practices, around 1 in 10 prompts were accepted and there was a 6% increase in testing rates over the 3-month study period (P = 0.169). Using a CPOE-based clinical decision support application to prompt HIV testing appears both feasible and acceptable to HCWs. Refining the application to provide more accurate risk stratification is likely to make it more effective. © 2016 British HIV Association.
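At its core, the risk-based prompt logic described above amounts to checking a clinician's order set against a list of indicator tests. A minimal sketch of such a rule, assuming hypothetical test names and a hypothetical function (neither is taken from the study's actual application):

```python
# Hypothetical sketch of a CPOE prompt rule: if any ordered test is on an
# indicator list suggesting elevated HIV risk, suggest adding an HIV test.
# The indicator-test names below are illustrative, not from the study.
INDICATOR_TESTS = {"hepatitis_b_serology", "hepatitis_c_serology",
                   "syphilis_serology", "lymphocyte_subsets"}

def should_prompt_hiv_test(ordered_tests):
    """Return True if the order set warrants an HIV-test prompt."""
    ordered = {t.lower() for t in ordered_tests}
    if "hiv_test" in ordered:   # already ordered: no prompt needed
        return False
    return bool(ordered & INDICATOR_TESTS)
```

A real system would refine this with weighted risk stratification, as the authors suggest, rather than a flat indicator list.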

  10. Least-Square Prediction for Backward Adaptive Video Coding

    Directory of Open Access Journals (Sweden)

    Li Xin

    2006-01-01

    Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contours in images and motion trajectories in video, we propose to derive the best prediction of the current frame from its causal past using the least-square method. It is demonstrated that LSP is particularly effective for modeling video material with slow motion and can be extended to handle fast motion by temporal warping and forward adaptation. For typical QCIF test sequences, LSP often achieves smaller MSE than the full-search, quarter-pel block matching algorithm (BMA) without the need to transmit any overhead.
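The reason no overhead is transmitted is that the prediction coefficients are re-fitted by least squares over a causal window, which the decoder can reproduce from its own causal past. A simplified 1-D sketch of this backward adaptation (the paper operates on spatio-temporal pixel neighborhoods; this stand-in uses a 1-D autoregressive neighborhood for clarity):

```python
import numpy as np

def lsp_predict(signal, order=3, window=16):
    """Backward-adaptive least-square prediction on a 1-D signal.

    For each sample, prediction coefficients are re-fitted by least squares
    over a causal training window, so a decoder holding the same causal past
    derives identical coefficients and no side information is needed.
    Simplified 1-D stand-in for the spatio-temporal neighborhoods of LSP.
    """
    pred = np.zeros(len(signal), dtype=float)
    for n in range(len(signal)):
        if n < order + window:
            pred[n] = signal[n - 1] if n > 0 else 0.0  # warm-up: trivial predictor
            continue
        # Causal training set: rows of past neighbors -> known targets.
        rows = np.asarray([signal[m - order:m] for m in range(n - window, n)], float)
        targets = np.asarray(signal[n - window:n], float)
        coeffs, *_ = np.linalg.lstsq(rows, targets, rcond=None)
        pred[n] = float(np.asarray(signal[n - order:n], float) @ coeffs)
    return pred
```

On locally linear signals the fitted predictor is exact, which mirrors why LSP suits slowly varying (slow-motion) material.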

  11. Video games

    OpenAIRE

    Kolář, Vojtěch

    2012-01-01

    This thesis is based on a detailed analysis of various topics related to the question of whether video games can be art. First, it analyzes the current academic discussion on this subject and confronts the differing opinions of both supporters and objectors of the idea that video games can be a full-fledged art form. The second aim of this paper is to analyze the properties that are inherent to video games in order to find the reason why the cultural elite considers video games as i...

  12. Prompt gamma-ray activation analysis (PGAA)

    International Nuclear Information System (INIS)

    Kern, J.

    1996-01-01

    The paper deals with a brief description of the principles of prompt gamma-ray activation analysis (PGAA), with the detection of gamma-rays, the PGAA project at SINQ and with the expected performances. 8 figs., 3 tabs., 10 refs

  13. Prompt gamma-ray activation analysis (PGAA)

    Energy Technology Data Exchange (ETDEWEB)

    Kern, J. [Fribourg Univ. (Switzerland). Inst. de Physique]

    1996-11-01

    The paper deals with a brief description of the principles of prompt gamma-ray activation analysis (PGAA), with the detection of gamma-rays, the PGAA project at SINQ and with the expected performances. 8 figs., 3 tabs., 10 refs.

  14. A video for teaching English tenses

    Directory of Open Access Journals (Sweden)

    Frida Unsiah

    2017-04-01

    Students of the English Language Education Program in the Faculty of Cultural Studies, Universitas Brawijaya, should ideally master Grammar before taking the degree of Sarjana Pendidikan. However, the facts show that they are still weak in Grammar, especially tenses. Therefore, the researchers initiated the development of a video as a medium to teach tenses. The objective is that, by using video, students gain a better understanding of tenses so that they can communicate in English accurately and contextually. To develop the video, the researchers used the ADDIE model (Analysis, Design, Development, Implementation, Evaluation). First, the researchers analyzed the students' learning needs to determine the product to be developed, in this case a movie about English tenses. Then, the researchers developed the video as the product. The product was then validated by a media expert, who assessed its attractiveness, typography, audio, images, and usefulness, and by a content expert, who assessed the language aspects and the English tenses used by the actors in the video, covering grammar content, pronunciation, and fluency. The result of the validation shows that the video developed was considered good. Theoretically, it is appropriate for use in English Grammar classes. However, the media expert suggests that it still needs some improvement for the next development, especially concerning the synchronization between lip movement and sound in the scenes, while the content expert suggests that the Grammar content of the video should focus on one tense only to provide a more detailed concept of that tense.

  15. Effectiveness and Efficiency of Peer and Adult Models Used in Video Modeling in Teaching Pretend Play Skills to Children with Autism Spectrum Disorder

    Science.gov (United States)

    Sani-Bozkurt, Sunagul; Ozen, Arzu

    2015-01-01

    This study aimed to examine whether or not there was any difference in the effectiveness and efficiency of the presentation of video modeling interventions using peer and adult models in teaching pretend play skills to children with ASD and to examine the views of parents about the study. Participants were two boys and one girl, aged 5-6 years…

  16. An automatic analyzer for sports video databases using visual cues and real-world modeling

    NARCIS (Netherlands)

    Han, Jungong; Farin, D.S.; With, de P.H.N.; Lao, Weilun

    2006-01-01

    With the advent of hard-disk video recording, video databases gradually emerge for consumer applications. The large capacity of disks requires the need for fast storage and retrieval functions. We propose a semantic analyzer for sports video, which is able to automatically extract and analyze key

  17. Suppression of non-prompt J/psi, prompt J/psi, and Y(1S) in PbPb collisions at sqrt(sNN) = 2.76 TeV

    Energy Technology Data Exchange (ETDEWEB)

    Chatrchyan, Serguei [Yerevan Physics Inst. (Armenia)]; et al.

    2012-05-01

    Yields of prompt and non-prompt J/psi, as well as Y(1S) mesons, are measured by the CMS experiment via their dimuon decays in PbPb and pp collisions at sqrt(sNN) = 2.76 TeV for quarkonium rapidity |y| < 2.4. Differential cross sections and nuclear modification factors are reported as functions of y and transverse momentum pt, as well as collision centrality. For prompt J/psi with relatively high pt (6.5 < pt < 30 GeV/c), a strong, centrality-dependent suppression is observed in PbPb collisions. The suppression of non-prompt J/psi, which is sensitive to the in-medium b-quark energy loss, is measured for the first time. Also the low-pt Y(1S) mesons are suppressed in PbPb collisions.
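The nuclear modification factor quoted in such measurements has a conventional definition: the yield in PbPb divided by the pp cross section scaled by the average nuclear overlap function for the centrality class. As a reminder (standard definition, not reproduced from this abstract):

```latex
R_{AA}(p_T, y) \;=\; \frac{1}{\langle T_{AA} \rangle}\,
\frac{\mathrm{d}^2 N_{AA}/\mathrm{d}p_T\,\mathrm{d}y}
     {\mathrm{d}^2 \sigma_{pp}/\mathrm{d}p_T\,\mathrm{d}y}
```

Here \(\langle T_{AA} \rangle\) is the average nuclear overlap function; \(R_{AA} < 1\) signals suppression relative to a superposition of independent nucleon-nucleon collisions.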

  18. No-Reference Video Quality Assessment using Codec Analysis

    DEFF Research Database (Denmark)

    Søgaard, Jacob; Forchhammer, Søren; Korhonen, Jari

    2015-01-01

    A no-reference video quality assessment (VQA) method is presented for videos distorted by H.264/AVC and MPEG-2. The assessment is performed without access to the bit-stream. Instead we analyze and estimate coefficients based on decoded pixels. The approach involves distinguishing between the two types of videos, estimating the level of quantization used in the I-frames, and exploiting this information to assess the video quality. In order to do this for H.264/AVC, the distribution of the DCT-coefficients after intra-prediction and deblocking is modeled. To obtain VQA features for H.264/AVC, we propose a novel estimation method of the quantization in H.264/AVC videos without bitstream access, which can also be used for Peak Signal-to-Noise Ratio (PSNR) estimation. The results from the MPEG-2 and H.264/AVC analysis are mapped to a perceptual measure of video quality by Support Vector Regression...
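The core idea of estimating quantization without the bit-stream can be illustrated simply: quantized-then-dequantized coefficients cluster at integer multiples of the quantization step, so the step can be recovered from the spacing of the coefficient values. A toy sketch of that principle (not the paper's actual estimator, which models the full DCT-coefficient distributions):

```python
import numpy as np

def estimate_quant_step(coeffs, max_step=64, tol=1e-6):
    """Estimate a quantization step from dequantized DCT coefficients.

    Dequantized coefficients sit on integer multiples of the step, so we
    return the largest candidate step under which every nonzero coefficient
    lies on a multiple. Toy illustration of bitstream-free quantization
    estimation; note that if all coefficients share a larger common factor,
    that larger factor is (unavoidably) returned.
    """
    c = np.abs(np.asarray(coeffs, dtype=float))
    c = c[c > 0]  # zeros are multiples of every candidate step
    for step in range(max_step, 0, -1):
        err = np.max(np.abs(c - step * np.round(c / step)))
        if err < tol:
            return step
    return 1
```

Real decoded pixels add rounding, clipping, and deblocking noise, which is why the paper models coefficient distributions instead of assuming exact multiples.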

  19. Collaborative consumption : live fashion, don’t own it : developing new business models for the fashion industry

    OpenAIRE

    Duml, Valeria; Perlacia, Anna Soler

    2016-01-01

    The rise of collaborative consumption is a phenomenon that appeared in many industries, such as in space sharing (e.g. Airbnb), car sharing (e.g. Uber), video streaming (e.g. Netflix), and more recently also in the fashion industry. This has prompted fashion companies to innovate their business models and start changing the way of doing business (e.g. Rent the Runway, Tradesy, and Vestiaire Collective). Through a qualitative and exploratory study based on a sample of twenty-six companies, thi...

  20. Web-based remote video monitoring system implemented using Java technology

    Science.gov (United States)

    Li, Xiaoming

    2012-04-01

    An HTTP-based video transmission system has been built upon a p2p (peer-to-peer) network structure utilizing Java technologies. This makes video monitoring available to any host connected to the World Wide Web by any means, including hosts behind firewalls or in isolated sub-networks. In order to achieve this, a video source peer has been developed, together with a client video playback peer. The video source peer can respond to video stream requests over the HTTP protocol. An HTTP-based pipe communication model is developed to speed up the transmission of video stream data, which is encoded into fragments using the JPEG codec. To make the system feasible for conveying video streams between arbitrary peers on the web, an HTTP-based relay peer is implemented as well. This video monitoring system has been applied in a tele-robotic system as visual feedback to the operator.

  1. Learning perceptual aspects of diagnosis in medicine via eye movement modeling examples on patient video cases

    NARCIS (Netherlands)

    Jarodzka, Halszka; Balslev, Thomas; Holmqvist, Kenneth; Nyström, Marcus; Scheiter, Katharina; Gerjets, Peter; Eika, Berit

    2010-01-01

    Jarodzka, H., Balslev, T., Holmqvist, K., Nyström, M., Scheiter, K., Gerjets, P., & Eika, B. (2010, August). Learning perceptual aspects of diagnosis in medicine via eye movement modeling examples on patient video cases. Poster presented at the 32nd Annual Conference of the Cognitive Science

  2. Proceedings of the workshop on multiple prompt gamma-ray analysis

    International Nuclear Information System (INIS)

    Ebihara, Mitsuru; Hatsukawa, Yuichi; Oshima, Masumi

    2006-10-01

    The workshop on 'Multiple Prompt Gamma-ray Analysis' was held on March 8, 2006 at Tokai. It is based on the project 'Developments of real time, non-destructive ultra sensitive elemental analysis using multiple gamma-ray detections and prompt gamma ray analysis and its application to real samples', one of the High Priority Cooperative Research Programs performed by the Japan Atomic Energy Agency and the University of Tokyo. In this workshop, the latest results of the Multiple Prompt Gamma-ray Analysis (MPGA) study were presented, together with those of Neutron Activation Analysis with Multiple Gamma-ray Detection (NAAMG). Nine of the presented papers are indexed individually. (J.P.N.)

  3. Deep Learning for Detection of Object-Based Forgery in Advanced Video

    Directory of Open Access Journals (Sweden)

    Ye Yao

    2017-12-01

    Passive video forensics has drawn much attention in recent years. However, detection of object-based forgery, especially in forged video encoded with advanced codec frameworks, remains a great challenge. In this paper, we propose a deep learning-based approach to detect object-based forgery in advanced video. The presented approach utilizes a convolutional neural network (CNN) to automatically extract high-dimension features from the input image patches. Different from the traditional CNN models used in the computer vision domain, we let video frames go through three preprocessing layers before they are fed into our CNN model: a frame absolute difference layer to cut down temporal redundancy between video frames, a max pooling layer to reduce the computational complexity of image convolution, and a high-pass filter layer to enhance the residual signal left by video forgery. In addition, an asymmetric data augmentation strategy has been established to obtain a similar number of positive and negative image patches before training. The experiments have demonstrated that the proposed CNN-based model with the preprocessing layers has achieved excellent results.
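The three preprocessing layers described (frame absolute difference, max pooling, high-pass filtering) can be sketched in NumPy. The kernel and pool size below are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def preprocess(frame_a, frame_b, pool=2):
    """Sketch of the three preprocessing steps described for the forgery CNN:
    1) absolute frame difference (suppresses temporal redundancy),
    2) max pooling (reduces convolution cost),
    3) high-pass filtering (enhances the residual left by forgery).
    Kernel and pool size are illustrative, not the paper's parameters.
    """
    diff = np.abs(frame_a.astype(float) - frame_b.astype(float))
    # Crop to a multiple of the pool size, then non-overlapping max pooling.
    h, w = (diff.shape[0] // pool) * pool, (diff.shape[1] // pool) * pool
    pooled = diff[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
    # Simple 3x3 Laplacian high-pass, applied with zero padding.
    k = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], float)
    padded = np.pad(pooled, 1)
    out = sum(k[i, j] * padded[i:i + pooled.shape[0], j:j + pooled.shape[1]]
              for i in range(3) for j in range(3))
    return out
```

On a static (unchanged) region the difference is constant and the high-pass output vanishes in the interior, so only genuine temporal residual survives into the CNN input.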

  4. Tracking and recognition face in videos with incremental local sparse representation model

    Science.gov (United States)

    Wang, Chao; Wang, Yunhong; Zhang, Zhaoxiang

    2013-10-01

    This paper addresses the problem of tracking and recognizing faces via incremental local sparse representation. First, a robust face tracking algorithm is proposed, employing local sparse appearance and a covariance pooling method. In the subsequent face recognition stage, with a novel template update strategy that combines incremental subspace learning, our recognition algorithm adapts the template to appearance changes and reduces the influence of occlusion and illumination variation. This leads to robust video-based face tracking and recognition with desirable performance. In the experiments, we test the quality of face recognition on real-world noisy videos from the YouTube database, which includes 47 celebrities. Our proposed method produces a high face recognition rate of 95% across all videos. The proposed face tracking and recognition algorithms are also tested on a set of noisy videos under heavy occlusion and illumination variation. The tracking results on challenging benchmark videos demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods. In the case of the challenging dataset in which faces undergo occlusion and illumination variation, and in tracking and recognition experiments under significant pose variation on the University of California, San Diego (Honda/UCSD) database, our proposed method also consistently demonstrates a high recognition rate.

  5. A Bilingual Child Learns Social Communication Skills through Video Modeling--A Single Case Study in a Norwegian School Setting

    Science.gov (United States)

    Özerk, Meral; Özerk, Kamil

    2015-01-01

    "Video modeling" is one of the recognized methods used in the training and teaching of children with Autism Spectrum Disorders (ASD). The model's theoretical base stems from Albert Bandura's (1977; 1986) social learning theory in which he asserts that children can learn many skills and behaviors observationally through modeling. One can…

  6. 3. Barriers to Prompt Malaria Treatment among Under-Five Children

    African Journals Online (AJOL)

    Esem

    strategy need to be established. Therefore, this study aimed at determining barriers to prompt malaria treatment among this vulnerable age group in Mpika district. Objective: To determine the barriers to prompt malaria treatment among children under five years of age with malaria in Mpika district. Study design: This was an ...

  7. 75 FR 82146 - Prompt Payment Interest Rate; Contract Disputes Act

    Science.gov (United States)

    2010-12-29

    ... DEPARTMENT OF THE TREASURY Fiscal Service Prompt Payment Interest Rate; Contract Disputes Act... beginning January 1, 2011, and ending on June 30, 2011, the prompt payment interest rate is 2 5/8 per... calculation of interest due on claims at the rate established by the Secretary of the Treasury. The Secretary...

  8. 77 FR 38888 - Prompt Payment Interest Rate; Contract Disputes Act

    Science.gov (United States)

    2012-06-29

    ... DEPARTMENT OF THE TREASURY Fiscal Service Prompt Payment Interest Rate; Contract Disputes Act... beginning July 1, 2012, and ending on December 31, 2012, the prompt payment interest rate is 1 3/4 per... interest due on claims at the rate established by the Secretary of the Treasury. The Secretary of the...

  9. 75 FR 37881 - Prompt Payment Interest Rate; Contract Disputes Act

    Science.gov (United States)

    2010-06-30

    ... DEPARTMENT OF THE TREASURY Fiscal Service Prompt Payment Interest Rate; Contract Disputes Act... beginning July 1, 2010, and ending on December 31, 2010, the prompt payment interest rate is 3 1/8 per... of interest due on claims at the rate established by the Secretary of the Treasury. The Secretary of...

  10. 78 FR 39063 - Prompt Payment Interest Rate; Contract Disputes Act

    Science.gov (United States)

    2013-06-28

    ... DEPARTMENT OF THE TREASURY Fiscal Service Prompt Payment Interest Rate; Contract Disputes Act..., 2013, and ending on December 31, 2013, the prompt payment interest rate is 1 3/4 per centum per annum... authority to specify the rate by which the interest shall be computed for interest payments under section 12...

  11. 76 FR 38742 - Prompt Payment Interest Rate; Contract Disputes Act

    Science.gov (United States)

    2011-07-01

    ... DEPARTMENT OF THE TREASURY Fiscal Service Prompt Payment Interest Rate; Contract Disputes Act... beginning July 1, 2011, and ending on December 31, 2011, the prompt payment interest rate is 2 1/2 per.... 3902(a), provide for the calculation of interest due on claims at the rate established by the Secretary...

  12. 76 FR 82350 - Prompt Payment Interest Rate; Contract Disputes Act

    Science.gov (United States)

    2011-12-30

    ... DEPARTMENT OF THE TREASURY Fiscal Service Prompt Payment Interest Rate; Contract Disputes Act... beginning January 1, 2012, and ending on June 30, 2012, the prompt payment interest rate is 2 per centum per... of interest due on claims at the rate established by the Secretary of the Treasury. The Secretary of...
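The Treasury rates in the notices above feed a straightforward interest computation on late payments. A simplified simple-interest sketch (the governing regulations add day-count and compounding details not modeled here):

```python
def prompt_payment_interest(principal, annual_rate, days_late, year_days=365):
    """Simplified simple-interest estimate of late-payment interest.

    annual_rate is the Treasury-published rate as a decimal (e.g. 0.02 for
    the "2 per centum per annum" rate above). The governing regulations add
    details (day-count conventions, compounding) not modeled in this sketch.
    """
    return principal * annual_rate * days_late / year_days
```

For example, $10,000 paid a full year late at 2 per centum per annum accrues roughly $200 under this sketch.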

  13. Playing prosocial video games increases the accessibility of prosocial thoughts.

    Science.gov (United States)

    Greitemeyer, Tobias; Osswald, Silvia

    2011-01-01

    Past research has provided abundant evidence that playing violent video games increases aggressive tendencies. In contrast, evidence on possible positive effects of video game exposure on prosocial tendencies has been relatively sparse. The present research tested and found support for the hypothesis that exposure to prosocial video games increases the accessibility of prosocial thoughts. These results provide support to the predictive validity of the General Learning Model (Buckley & Anderson, 2006) for the effects of exposure to prosocial media on social tendencies. Thus, depending on the content of the video game, playing video games can harm but may also benefit social relations.

  14. Student Teachers' Modeling of Acceleration Using a Video-Based Laboratory in Physics Education: A Multimodal Case Study

    Directory of Open Access Journals (Sweden)

    Louis Trudel

    2016-06-01

    This exploratory study intends to model the kinematics learning of a pair of student teachers exposed to prescribed teaching strategies in a video-based laboratory. Two student teachers were chosen from the Francophone B.Ed. program of the Faculty of Education of a Canadian university. The study method consisted of having the participants interact with a video-based laboratory to complete two activities for learning properties of acceleration in rectilinear motion. Time limits were placed on the learning activities, during which the researcher collected detailed multimodal information from the student teachers' answers to questions, the graphs they produced from experimental data, and the videos taken during the learning sessions. As a result, we describe the learning approach each one followed, the evidence of conceptual change, and the difficulties they faced in tackling various aspects of accelerated motion. We then specify the advantages and limits of our research and propose recommendations for further study.

  15. Finite fission chain length and symmetry around prompt-criticality

    International Nuclear Information System (INIS)

    Xie Qilin; Yin Yanpeng; Gao Hui; Huang Po; Fang Xiaoqiang

    2012-01-01

    The probability distribution of finite fission chain length was derived by assuming that all neutrons behave identically. Finite fission chain length was also calculated using a zero-dimension Monte-Carlo method based on point kinetics. Then the symmetry of the finite fission chain length probability distribution around prompt-criticality was deduced, which helps in understanding the emission rate of delayed neutrons and the initiation of fission chains in a super-prompt-critical system. (authors)
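Chain-length statistics of this kind can be reproduced with a zero-dimensional branching-process Monte Carlo: each neutron induces fission with some probability, each fission emits a sampled number of new neutrons, and the chain length is the total number of fissions. A sketch, assuming a simple two-point neutron-multiplicity distribution (an illustrative choice, not the cited paper's exact scheme):

```python
import math
import random

def fission_chain_length(p_fission, nu=2.5, rng=random, max_fissions=10**6):
    """Zero-dimensional Monte Carlo of a single fission chain.

    Each neutron causes fission with probability p_fission; each fission
    emits floor(nu) or ceil(nu) neutrons so that the mean is nu. Returns the
    number of fissions in the chain (a cap guards super-critical runaway).
    Illustrative model only: real multiplicity distributions are broader.
    """
    lo, hi = math.floor(nu), math.ceil(nu)
    p_hi = nu - lo if hi != lo else 0.0
    neutrons, fissions = 1, 0
    while neutrons and fissions < max_fissions:
        neutrons -= 1
        if rng.random() < p_fission:
            fissions += 1
            neutrons += hi if rng.random() < p_hi else lo
    return fissions
```

For a subcritical system (p_fission * nu < 1) the expected chain length solves E = p_fission * (1 + nu * E), i.e. E = p_fission / (1 - p_fission * nu), which the simulation reproduces.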

  16. Using Photogrammetry to Estimate Tank Waste Volumes from Video

    Energy Technology Data Exchange (ETDEWEB)

    Field, Jim G. [Washington River Protection Solutions, LLC, Richland, WA (United States)]

    2013-03-27

    Washington River Protection Solutions (WRPS) contracted with HiLine Engineering & Fabrication, Inc. to assess the accuracy of photogrammetry tools as compared to video Camera/CAD Modeling System (CCMS) estimates. This test report documents the results of using photogrammetry to estimate the volume of waste in tank 241-C-104 from post-retrieval videos, and the results of using photogrammetry to estimate the volume of waste piles in the CCMS test video.

  17. Using Photogrammetry to Estimate Tank Waste Volumes from Video

    International Nuclear Information System (INIS)

    Field, Jim G.

    2013-01-01

    Washington River Protection Solutions (WRPS) contracted with HiLine Engineering and Fabrication, Inc. to assess the accuracy of photogrammetry tools as compared to video Camera/CAD Modeling System (CCMS) estimates. This test report documents the results of using photogrammetry to estimate the volume of waste in tank 241-C-104 from post-retrieval videos, and the results of using photogrammetry to estimate the volume of waste piles in the CCMS test video.

  18. A Comparison of Prompting Tactics for Teaching Intraverbals to Young Adults with Autism

    OpenAIRE

    Vedora, Joseph; Conant, Erin

    2015-01-01

    Several researchers have compared the effectiveness of tact or textual prompts to echoic prompts for teaching intraverbal behavior to young children with autism. We extended this line of research by comparing the effectiveness of visual (textual or tact) prompts to echoic prompts to teach intraverbal responses to three young adults with autism. An adapted alternating treatments design was used with 2 to 3 comparisons for each participant. The results were mixed and did not reveal a more effec...

  19. Experimental Investigation of Aeroelastic Deformation of Slender Wings at Supersonic Speeds Using a Video Model Deformation Measurement Technique

    Science.gov (United States)

    Erickson, Gary E.

    2013-01-01

    A video-based photogrammetric model deformation system was established as a dedicated optical measurement technique at supersonic speeds in the NASA Langley Research Center Unitary Plan Wind Tunnel. This system was used to measure the wing twist due to aerodynamic loads of two supersonic commercial transport airplane models with identical outer mold lines but different aeroelastic properties. One model featured wings with deflectable leading- and trailing-edge flaps and internal channels to accommodate static pressure tube instrumentation. The wings of the second model were of single-piece construction without flaps or internal channels. The testing was performed at Mach numbers from 1.6 to 2.7, unit Reynolds numbers of 1.0 million to 5.0 million, and angles of attack from -4 degrees to +10 degrees. The video model deformation system quantified the wing aeroelastic response to changes in the Mach number, Reynolds number concurrent with dynamic pressure, and angle of attack and effectively captured the differences in the wing twist characteristics between the two test articles.

  20. Multiple player tracking in sports video: a dual-mode two-way bayesian inference approach with progressive observation modeling.

    Science.gov (United States)

    Xing, Junliang; Ai, Haizhou; Liu, Liwei; Lao, Shihong

    2011-06-01

    Multiple object tracking (MOT) is a very challenging task yet of fundamental importance for many practical applications. In this paper, we focus on the problem of tracking multiple players in sports video which is even more difficult due to the abrupt movements of players and their complex interactions. To handle the difficulties in this problem, we present a new MOT algorithm which contributes both in the observation modeling level and in the tracking strategy level. For the observation modeling, we develop a progressive observation modeling process that is able to provide strong tracking observations and greatly facilitate the tracking task. For the tracking strategy, we propose a dual-mode two-way Bayesian inference approach which dynamically switches between an offline general model and an online dedicated model to deal with single isolated object tracking and multiple occluded object tracking integrally by forward filtering and backward smoothing. Extensive experiments on different kinds of sports videos, including football, basketball, as well as hockey, demonstrate the effectiveness and efficiency of the proposed method.

  1. Relacije umetnosti i video igara / Relations of Art and Video Games

    OpenAIRE

    Manojlo Maravić

    2012-01-01

    When discussing the art of video games, three different contexts need to be considered: 'high' art (video games and art), commercial video games (video games as art), and fan art. Video games are a legitimate artistic medium, subject to modifications and recontextualisations in the process of creating a specific experience for the player/user/audience and of political action by referring to particular social problems. They represent a high-technology medium that increases, with p...

  2. Developing Expertise: Using Video to Hone Teacher Candidates' Classroom Observation Skills

    Science.gov (United States)

    Cuthrell, Kristen; Steadman, Sharilyn C.; Stapleton, Joy; Hodge, Elizabeth

    2016-01-01

    This article explores the impact of a video observation model developed for teacher candidates in an early experiences course. Video Grand Rounds (VGR) combines a structured observation protocol, videos, and directed debriefing to enhance teacher candidates' observations skills within nonstructured and field-based observations. A comparative…

  3. Rare Disease Video Portal

    OpenAIRE

    Sánchez Bocanegra, Carlos Luis

    2011-01-01

    Rare Disease Video Portal (RD Video) is a web portal that contains videos from YouTube, including all details from 12 YouTube channels.

  4. Comparison of Everyday and Every-Fourth-Day Probe Sessions with the Simultaneous Prompting Procedure

    Science.gov (United States)

    Reichow, Brian; Wolery, Mark

    2009-01-01

    Simultaneous prompting is a response-prompting procedure requiring two daily sessions: an instructional session in which a controlling prompt is provided on all trials, and a probe session in which no prompt is provided on any trials. In this study, two schedules of conducting the probe sessions (daily vs. every fourth day) were compared using the…

  5. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as its search-indexing feature. As applications of video cameras have increased greatly in recent years, face recognition is a natural fit for searching for targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record videos without fixed postures of the subject, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces from the available information. Experimental results show that the system is highly efficient in processing real-life videos and is very robust to various kinds of face occlusions. Hence it can relieve human reviewers from sitting in front of the monitors and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping to solve real cases.
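The occlusion-handling idea, modeling faces in a principal-component basis and reconstructing missing pixels from the visible ones, can be sketched with plain PCA: fit the component coefficients by least squares on the visible pixels only, then fill the occluded pixels from the model's reconstruction. This is a plain-PCA stand-in; the cited system uses fuzzy PCA (FPCA):

```python
import numpy as np

def reconstruct_occluded(sample, mask, mean, components):
    """Fill occluded entries of `sample` using a linear PCA face model.

    mask is True where a pixel is visible; `components` has shape
    (n_components, n_pixels). Coefficients are fitted by least squares on
    visible pixels only, then the reconstruction replaces occluded pixels.
    Plain-PCA stand-in for the fuzzy PCA (FPCA) of the cited system.
    """
    A = components[:, mask].T                 # (n_visible, n_components)
    b = sample[mask] - mean[mask]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    recon = mean + components.T @ coeffs      # full-face reconstruction
    out = sample.astype(float).copy()
    out[~mask] = recon[~mask]                 # keep visible pixels as observed
    return out
```

If the face truly lies in the span of the components, the occluded region is recovered exactly; in practice the model provides a plausible in-painting that stabilizes downstream recognition.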

  6. Medical students' perceptions of video-linked lectures and video-streaming

    Directory of Open Access Journals (Sweden)

    Karen Mattick

    2010-12-01

    Video-linked lectures allow healthcare students across multiple sites, and between university and hospital bases, to come together for shared teaching. Recording and streaming video-linked lectures allows students to view them at a later date and provides an additional resource to support student learning. As part of a UK Higher Education Academy-funded Pathfinder project, this study explored medical students' perceptions of video-linked lectures and video-streaming, and their impact on learning. The methodology involved semi-structured interviews with 20 undergraduate medical students across four sites and five year groups. Several key themes emerged from the analysis. Students generally preferred live lectures at the home site and saw interaction between sites as a major challenge. Students reported that their attendance at live lectures was not affected by the availability of streamed lectures and tended to be influenced more by the topic and speaker than by the technical arrangements. These findings will inform other educators interested in employing similar video technologies in their teaching. Keywords: video-linked lecture; video-streaming; student perceptions; decision-making; cross-campus teaching.

  7. Medical students review of formative OSCE scores, checklists, and videos improves with student-faculty debriefing meetings.

    Science.gov (United States)

    Bernard, Aaron W; Ceccolini, Gabbriel; Feinn, Richard; Rockfeld, Jennifer; Rosenberg, Ilene; Thomas, Listy; Cassese, Todd

    2017-01-01

    Performance feedback is considered essential to clinical skills development. Formative objective structured clinical exams (F-OSCEs) often include immediate feedback by standardized patients. Students can also be provided access to performance metrics including scores, checklists, and video recordings after the F-OSCE to supplement this feedback. How often students choose to review this data, and how review impacts future performance, has not been documented. We suspect student review of F-OSCE performance data is variable. We hypothesize that students who review this data perform better on subsequent F-OSCEs than those who do not. We also suspect that the frequency of data review can be improved with faculty involvement in the form of student-faculty debriefing meetings. Simulation recording software tracks and time-stamps student review of performance data. We investigated a cohort of first- and second-year medical students from the 2015-16 academic year. Basic descriptive statistics were used to characterize frequency of data review, and a linear mixed-model analysis was used to determine relationships between data review and future F-OSCE performance. Students reviewed scores (64%), checklists (42%), and videos (28%) in decreasing frequency. Frequency of review of all metrics and modalities improved when student-faculty debriefing meetings were conducted (p<.001). Among 92 first-year students, checklist review was associated with improved performance on subsequent F-OSCEs (p = 0.038) by 1.07 percentage points on a scale of 0-100. Among 86 second-year students, no review modality was associated with improved performance on subsequent F-OSCEs. Medical students review F-OSCE checklists and video recordings less than 50% of the time when not prompted. Student-faculty debriefing meetings increased student data reviews. First-year students' review of checklists on F-OSCEs was associated with increases in performance on subsequent F-OSCEs, however this

  8. Measurements of Prompt Radiation-Induced Conductivity of Pyralux®

    Energy Technology Data Exchange (ETDEWEB)

    Hartman, E. Frederick [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Radiation Effects Experimentation Dept.; Zarick, Thomas Andrew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Radiation Effects Experimentation Dept.; McLain, Michael Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Radiation Effects Experimentation Dept.; Sheridan, Timothy J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Radiation Effects Experimentation Dept.; Preston, Eric F. [ITT Exelis, Colorado Springs, CO (United States); Stringer, Thomas Arthur [ITT Exelis, Colorado Springs, CO (United States)

    2014-01-01

    In this report, measurements of the prompt radiation-induced conductivity (RIC) in 3 mil samples of Pyralux® are presented as a function of dose rate, pulse width, and applied bias. The experiments were conducted with the Medusa linear accelerator (LINAC) located at the Little Mountain Test Facility (LMTF) near Ogden, UT. The nominal electron energy for the LINAC is 20 MeV. Prompt conduction current data were obtained for dose rates ranging from ~2 x 10^9 rad(Si)/s to ~1.1 x 10^11 rad(Si)/s and for nominal pulse widths of 50 ns and 500 ns. At a given dose rate, the applied bias across the samples was stepped between -1500 V and 1500 V. Calculated values of the prompt RIC varied between 1.39 x 10^-8 Ω^-1·m^-1 and 2.67 x 10^-7 Ω^-1·m^-1, and the prompt RIC coefficient varied between 1.25 x 10^-18 Ω^-1·m^-1/(rad/s) and 1.93 x 10^-17 Ω^-1·m^-1/(rad/s).
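    Under the common simplifying assumption of a linear dose-rate dependence, the RIC coefficient quoted above is just the measured conductivity divided by the dose rate; a trivial helper makes the relationship explicit. Note the linearity (exponent Δ = 1) is our assumption for illustration — real dielectrics are often fit as σ = k·(dose rate)^Δ with Δ ≤ 1, and the report does not state which model was used.

```python
def ric_coefficient(sigma, dose_rate):
    """RIC coefficient k_p in ohm^-1 m^-1 per rad/s, assuming sigma = k_p * dose_rate
    (i.e., a linear dose-rate dependence, Delta = 1)."""
    return sigma / dose_rate
```

For example, a conductivity of 2 x 10^-8 Ω^-1·m^-1 at 1 x 10^10 rad(Si)/s corresponds to a coefficient of 2 x 10^-18 Ω^-1·m^-1/(rad/s), within the range reported above.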

  9. Automated Indexing and Search of Video Data in Large Collections with inVideo

    Directory of Open Access Journals (Sweden)

    Shuangbao Paul Wang

    2017-08-01

    Full Text Available In this paper, we present a novel system, inVideo, for automatically indexing and searching videos based on the keywords spoken in the audio track and the visual content of the video frames. Using the highly efficient video indexing engine we developed, inVideo is able to analyze videos using machine learning and pattern recognition without the need for initial viewing by a human. The time-stamped commenting and tagging features refine the accuracy of search results. The cloud-based implementation makes it possible to conduct elastic search, augmented search, and data analytics. Our research shows that inVideo is an efficient tool for processing and analyzing videos and for increasing interactions in video-based online learning environments. Data from a cybersecurity program with more than 500 students show that, after applying inVideo to existing course videos, student-student and student-faculty interactions increased significantly across 24 sections program-wide.
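    The core of a keyword-based video search engine like the one described is an inverted index from spoken words to time-stamped positions in videos. The toy sketch below illustrates that data structure only; the class and method names are ours, not part of the inVideo system, and real systems would add speech-to-text, stemming, and ranking.

```python
from collections import defaultdict

class VideoIndex:
    """Toy inverted index: keyword -> list of (video_id, timestamp_seconds)."""

    def __init__(self):
        self.index = defaultdict(list)

    def add_transcript(self, video_id, timed_words):
        """Ingest a transcript as (timestamp, word) pairs for one video."""
        for timestamp, word in timed_words:
            self.index[word.lower()].append((video_id, timestamp))

    def search(self, keyword):
        """Return all (video_id, timestamp) hits for a keyword, sorted."""
        return sorted(self.index.get(keyword.lower(), []))
```

A query then jumps the viewer directly to the moments where the keyword was spoken, which is what enables search without a human pre-viewing the footage.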

  10. NEI You Tube Videos: Amblyopia

    Medline Plus


  11. Learning with Technology: Video Modeling with Concrete-Representational-Abstract Sequencing for Students with Autism Spectrum Disorder

    Science.gov (United States)

    Yakubova, Gulnoza; Hughes, Elizabeth M.; Shinaberry, Megan

    2016-01-01

    The purpose of this study was to determine the effectiveness of a video modeling intervention with concrete-representational-abstract instructional sequence in teaching mathematics concepts to students with autism spectrum disorder (ASD). A multiple baseline across skills design of single-case experimental methodology was used to determine the…

  12. Learning perceptual aspects of diagnosis in medicine via eye movement modeling examples on patient video cases

    NARCIS (Netherlands)

    Jarodzka, Halszka; Balslev, Thomas; Holmqvist, Kenneth; Nyström, Marcus; Scheiter, Katharina; Gerjets, Peter; Eika, Berit

    2010-01-01

    Jarodzka, H., Balslev, T., Holmqvist, K., Nyström, M., Scheiter, K., Gerjets, P., & Eika, B. (2010). Learning perceptual aspects of diagnosis in medicine via eye movement modeling examples on patient video cases. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the

  13. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frame independently, performing relatively simple operations. Therefore…, with DVC the complexity is shifted from encoder to decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application… of the to-be-decoded frame. Another key element is the Residual estimation, indicating the reliability of the SI, which is used to calculate the parameters of the correlation noise model between SI and original frame. In this thesis new methods for Inter-camera SI generation are analyzed in the Stereo

  14. Guerrilla Video: A New Protocol for Producing Classroom Video

    Science.gov (United States)

    Fadde, Peter; Rich, Peter

    2010-01-01

    Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…

  15. Video performance for high security applications

    International Nuclear Information System (INIS)

    Connell, Jack C.; Norman, Bradley C.

    2010-01-01

    The complexity of physical protection systems has increased to address modern threats to national security and emerging commercial technologies. A key element of modern physical protection systems is the data presented to the human operator, used for rapid determination of the cause of an alarm, whether false (e.g., caused by an animal, debris, etc.) or real (e.g., a human adversary). Alarm assessment, the human validation of a sensor alarm, primarily relies on imaging technologies and video systems. Developing measures of effectiveness (MOE) that drive the design or evaluation of a video system or technology becomes a challenge, given the subjectivity of the application (e.g., alarm assessment). Sandia National Laboratories has conducted empirical analysis using field test data and mathematical models such as the binomial distribution and Johnson target transfer functions to develop MOEs for video system technologies. Depending on the technology, the task of the security operator, and the distance to the target, the Probability of Assessment (PA) can be determined as a function of a variety of conditions or assumptions. PA used as an MOE allows the systems engineer to conduct trade studies, make informed design decisions, or evaluate new higher-risk technologies. This paper outlines general video system design trade-offs, discusses ways video can be used to increase system performance, and lists MOEs for video systems used in subjective applications such as alarm assessment.
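    The abstract does not give Sandia's exact formulation, but Johnson-criteria analyses conventionally use the empirical target transfer probability function, which maps the number of resolvable cycles across a target to a task-completion probability. The sketch below implements that widely published form as an illustration; treating "assessment" as the task and the specific N50 value are our assumptions.

```python
def johnson_probability(n_cycles, n50):
    """Empirical target transfer probability: the chance an observer completes a
    discrimination task given n_cycles resolvable cycles on target, where n50 is
    the cycle criterion for 50% success. Uses the standard TTPF form
    P = r^E / (1 + r^E) with r = n_cycles / n50 and E = 2.7 + 0.7 * r."""
    ratio = n_cycles / n50
    e = 2.7 + 0.7 * ratio
    return ratio**e / (1.0 + ratio**e)
```

Evaluating this over the cycles delivered at each range lets a systems engineer trade camera resolution, field of view, and standoff distance against a required PA.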

  16. Surgical gesture classification from video and kinematic data.

    Science.gov (United States)

    Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René

    2013-10-01

    Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.
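    The bag-of-features pipeline described in the second method can be sketched compactly: quantize each clip's spatio-temporal descriptors against a learned visual vocabulary, build a normalized word histogram, and classify in histogram space. This is a minimal illustration under our own simplifications (a pre-learned vocabulary, Euclidean 1-nearest-neighbor instead of the paper's classifiers); the function names are ours.

```python
import numpy as np

def bof_histogram(descriptors, vocabulary):
    """Assign each descriptor to its nearest visual word and return the
    normalized word histogram for the clip."""
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

def classify_nn(hist, train_hists, train_labels):
    """Label a clip by its nearest training histogram (Euclidean distance)."""
    dists = np.linalg.norm(train_hists - hist, axis=1)
    return train_labels[int(dists.argmin())]
```

Each gesture class (e.g., "grabbing the needle") tends to produce a characteristic distribution over visual words, which is what makes the histogram discriminative.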

  17. Using learning analytics to evaluate a video-based lecture series.

    Science.gov (United States)

    Lau, K H Vincent; Farooque, Pue; Leydon, Gary; Schwartz, Michael L; Sadler, R Mark; Moeller, Jeremy J

    2018-01-01

    The video-based lecture (VBL), an important component of the flipped classroom (FC) and massive open online course (MOOC) approaches to medical education, has primarily been evaluated through direct learner feedback. Evaluation may be enhanced through learner analytics (LA) - analysis of quantitative audience usage data generated by video-sharing platforms. We applied LA to an experimental series of ten VBLs on electroencephalography (EEG) interpretation, uploaded to YouTube in the model of a publicly accessible MOOC. Trends in view count, total percentage of video viewed, and audience retention (AR) (the percentage of viewers still watching at a given time point relative to the initial total) were examined. The pattern of average AR decline was characterized using regression analysis, revealing a uniform linear decline in viewership for each video, with no evidence of an optimal VBL length. Segments with transient increases in AR corresponded to those focused on core concepts, indicative of content warranting more detailed evaluation. We propose a model for applying LA at four levels: global, series, video, and feedback. LA may be a useful tool in evaluating a VBL series. Our proposed model combines analytics data and learner self-report for comprehensive evaluation.
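    The "uniform linear decline" finding above comes from regressing audience retention against playback time. A minimal version of that fit, assuming AR(t) ≈ a + b·t with b the per-second drop-off rate, can be sketched as follows (the function name and percentage units are our assumptions):

```python
import numpy as np

def fit_retention_decline(times, retention):
    """Least-squares linear fit of audience retention (percent) versus playback
    time (seconds). Returns (intercept, slope); a negative slope is the
    per-second viewer drop-off rate."""
    slope, intercept = np.polyfit(times, retention, 1)
    return intercept, slope
```

Segments whose measured AR sits well above this fitted line are the transient increases the authors flag as core-concept content.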

  18. Background Reduction around Prompt Gamma-ray Peaks from Korean White Ginseng

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y. N.; Sun, G. M.; Moon, J. H.; Chung, Y. S. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Y. E. [Chung-buk National University, Chungju (Korea, Republic of)

    2007-10-15

    Prompt gamma-ray activation analysis (PGAA) is recognized as a very powerful and unique nuclear method owing to its non-destructive nature, high precision, and minimal time requirements. The method is used for the analysis of trace elements in various sample matrices, such as metallurgical, environmental, and biological samples. When a spectrum is evaluated, the background continuum is a major disturbing factor for precise and accurate analysis. Furthermore, a prompt gamma spectrum is complicated and spans a wide energy range. To overcome this limitation, reducing the background is important for PGAA analysis. Background-reduction methods divide into hardware approaches using electronic equipment, such as suppression modes, and multivariate statistical approaches, such as principal component analysis (PCA). In PGAA analysis, Lee et al. compared background-reduction methods such as PCA and the wavelet transform for prompt gamma-ray spectra. Lim et al. applied the multivariate statistical method to the identification of low-statistics peaks from explosives. In this paper, effective reduction of the background in prompt gamma spectra using PCA is applied to the prompt gamma-ray peaks from Korean Baeksam (Korean white ginseng)
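    The abstract does not spell out the PCA procedure; one simple variant, sketched below under our own assumptions, models the continuum shared by a set of spectra with the leading principal components and subtracts that low-rank reconstruction, on the assumption that the smooth background dominates the leading components while peak statistics fall into the residual. This is an illustration of the general technique, not the authors' exact algorithm.

```python
import numpy as np

def pca_background_subtract(spectra, n_components=1):
    """Subtract a low-rank PCA model of the shared continuum from a stack of
    spectra (one spectrum per row); the residual retains structure not captured
    by the leading components."""
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    # Reconstruct each spectrum from the first n_components only
    low_rank = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    background = mean + low_rank
    return spectra - background
```

When the spectra differ only by a scaling of a common continuum shape, a single component reproduces the background almost exactly and the residual is near zero.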

  19. ENERGY STAR Certified Audio Video

    Data.gov (United States)

    U.S. Environmental Protection Agency — Certified models meet all ENERGY STAR requirements as listed in the Version 3.0 ENERGY STAR Program Requirements for Audio Video Equipment that are effective as of...

  20. The effect of online violent video games on levels of aggression.

    Science.gov (United States)

    Hollingdale, Jack; Greitemeyer, Tobias

    2014-01-01

    In recent years the video game industry has surpassed both the music and video industries in sales. Violent video games are currently among the most popular games played by consumers, most notably first-person shooters (FPS). Technological advancements in game-play experience, including the ability to play online, have accounted for this increase in popularity. Previous research, utilising the General Aggression Model (GAM), has identified that violent video games increase levels of aggression. Little is known, however, about the effect of playing a violent video game online. Participants (N = 101) were randomly assigned to one of four experimental conditions: neutral video game - offline, neutral video game - online, violent video game - offline, and violent video game - online. Following this they completed questionnaires to assess their attitudes towards the game and engaged in a chilli sauce paradigm to measure behavioural aggression. The results identified that participants who played a violent video game exhibited more aggression than those who played a neutral video game. Furthermore, this main effect was not particularly pronounced when the game was played online. These findings suggest that playing violent video games, whether online or offline, increases aggression relative to playing neutral video games.