WorldWideScience

Sample records for gesture workshop gw

  1. Gestures to intuitively control large displays : 7th International Gesture Workshop, GW 2007, Lisbon, Portugal, May 23-25, 2007 : revised selected papers

    NARCIS (Netherlands)

    Fikkert, W.; van der Vet, P.; Rauwerda, H.; Breit, T.; Nijholt, A.; Sales Dias, M.; Gibet, S.; Wanderley, M.M.; Bastos, R.

    2009-01-01

    Large displays are well suited to supporting discussions in empirical science: they can present project results on a large digital surface to feed the discussion. This paper describes our approach to closely involving multidisciplinary omics scientists in the design of an intuitive display

  2. Gestural apraxia.

    Science.gov (United States)

    Etcharry-Bouyx, F; Le Gall, D; Jarry, C; Osiurak, F

    Gestural apraxia was first described in 1905 by Hugo Karl Liepmann. While his description is still used, the actual terms are often confusing. The cognitive approach using models proposes thinking of the condition in terms of production and conceptual knowledge. The underlying cognitive processes are still being debated, as are also the optimal ways to assess them. Several neuroimaging studies have revealed the involvement of a left-lateralized frontoparietal network, with preferential activation of the superior parietal lobe, intraparietal sulcus and inferior parietal cortex. The presence of apraxia after a stroke is prevalent, and the incidence is sufficient to propose rehabilitation. Copyright © 2017. Published by Elsevier Masson SAS.

  3. Gestures Specialized for Dialogue.

    Science.gov (United States)

    Bavelas, Janet Beavin; And Others

    1995-01-01

    Explored how hand gestures help interlocutors coordinate their dialogue. Analysis of dyadic conversations and monologues revealed that requirements of dialogue uniquely affect interactive gestures. Gestures aided the speaker's efforts to include the addressee in the conversation. Gestures also demonstrated the importance of social processes in…

  4. The development of gesture

    OpenAIRE

    Tellier, Marion

    2009-01-01

    Human beings gesture everyday while speaking: they move their hands, their heads, their arms; their whole body is involved in communication. But how does it work? How do we produce gestures and in what purpose? How are gestures connected to speech? When do we begin producing gestures and how do they evolve throughout the life span? These are questions gesture researchers have been trying to answer since the second half of the 20th century. This chapter will first define what a gesture is by d...

  5. Gestures maintain spatial imagery.

    Science.gov (United States)

    Wesp, R; Hesse, J; Keutmann, D; Wheaton, K

    2001-01-01

    Recent theories suggest alternatives to the commonly held belief that the sole role of gestures is to communicate meaning directly to listeners. Evidence suggests that gestures may serve a cognitive function for speakers, possibly acting as lexical primes. We observed that participants gestured more often when describing a picture from memory than when the picture was present and that gestures were not influenced by manipulating eye contact of a listener. We argue that spatial imagery serves a short-term memory function during lexical search and that gestures may help maintain spatial images. When spatial imagery is not necessary, as in conditions of direct visual stimulation, reliance on gestures is reduced or eliminated.

  6. Mainstreaming gesture based interfaces

    Directory of Open Access Journals (Sweden)

    David Procházka

    2013-01-01

    Full Text Available Gestures are a common way of interacting with mobile devices. They emerged notably with the iPhone. Gestures in current devices are usually based on the original gestures Apple presented in iOS (the iPhone operating system); there is therefore wide agreement on mobile gesture design. In recent years, experiments with gestures have also appeared in other areas of consumer electronics and computing, for example televisions and large projections. These gestures can be described as spatial, or 3D, gestures: they are tied to a natural 3D environment rather than to a flat 2D screen. It is hard, however, to find a comparable design agreement for spatial gestures. Various projects are based on completely different gesture sets. This situation is confusing for users and slows down the adoption of spatial gestures. This paper focuses on the standardization of spatial gestures. The first part reviews projects that use spatial gestures, with the main emphasis on usability. On the basis of our analysis, we argue that usability is the key issue enabling wide adoption. Mobile gestures emerged easily because the iPhone gestures were natural and therefore did not have to be learned. The second part of the paper outlines the design and implementation of our gesture-controlled presentation software and reports the results of usability testing. We tested the application on a group of users who had not been instructed in the implemented gesture design, and compared these results with those obtained with our original implementation. The evaluation can serve as a basis for implementing similar projects.

  7. Single gaze gestures

    DEFF Research Database (Denmark)

    Møllenbach, Emilie; Lilholm, Martin; Gail, Alastair

    2010-01-01

    This paper examines gaze gestures and their applicability as a generic selection method for gaze-only controlled interfaces. The method explored here is the Single Gaze Gesture (SGG), i.e. gestures consisting of a single point-to-point eye movement. Horizontal and vertical, long and short SGGs were...

  8. From Gesture to Speech

    Directory of Open Access Journals (Sweden)

    Maurizio Gentilucci

    2012-11-01

    Full Text Available One of the major problems concerning the evolution of human language is to understand how sounds became associated with meaningful gestures. It has been proposed that the circuit controlling gestures and speech evolved from a circuit involved in the control of arm and mouth movements related to ingestion, and that this circuit contributed to the evolution of spoken language, moving from a system of communication based on arm gestures. The discovery of mirror neurons has provided strong support for the gestural theory of speech origin, because they offer a natural substrate for the embodiment of language and create a direct link between the sender and receiver of a message. Behavioural studies indicate that manual gestures are linked to the mouth movements used for syllable emission: grasping with the hand selectively affected movement of inner or outer parts of the mouth according to syllable pronunciation, and hand postures, in addition to hand actions, influenced the control of mouth grasp and vocalization. Gestures and words are also related to each other. It was found that when producing communicative gestures (emblems), the intention to interact directly with a conspecific was transferred from gestures to words, inducing modifications in voice parameters. Transfer effects of the meaning of representational gestures were found on both vocalizations and meaningful words. We conclude that these results suggest the existence of a system relating gesture to vocalization that was the precursor of a more general system reciprocally relating gesture to word.

  9. Large scale GW calculations

    International Nuclear Information System (INIS)

    Govoni, Marco (Argonne National Lab., Argonne, IL); Galli, Giulia (Argonne National Lab., Argonne, IL)

    2015-01-01

    We present GW calculations of molecules, ordered and disordered solids, and interfaces, which employ an efficient contour deformation technique for frequency integration and require neither the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single-particle Green's function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.

  10. Mnemonic Effect of Iconic Gesture and Beat Gesture in Adults and Children: Is Meaning in Gesture Important for Memory Recall?

    Science.gov (United States)

    So, Wing Chee; Chen-Hui, Colin Sim; Wei-Shan, Julie Low

    2012-01-01

    Abundant research has shown that encoding meaningful gesture, such as an iconic gesture, enhances memory. This paper asked whether gesture needs to carry meaning to improve memory recall by comparing the mnemonic effect of meaningful (i.e., iconic gestures) and nonmeaningful gestures (i.e., beat gestures). Beat gestures involve simple motoric…

  11. Natural gesture interfaces

    Science.gov (United States)

    Starodubtsev, Illya

    2017-09-01

    The paper describes the implementation of a system for interacting with virtual objects through gestures, covering the common problems of interaction with virtual objects and the specific requirements for virtual- and augmented-reality interfaces.

  12. Gesture and Power

    OpenAIRE

    Covington-Ward, Yolanda

    2016-01-01

    In Gesture and Power Yolanda Covington-Ward examines the everyday embodied practices and performances of the BisiKongo people of the lower Congo to show how their gestures, dances, and spirituality are critical in mobilizing social and political action. Conceiving of the body as the center of analysis, a catalyst for social action, and as a conduit for the social construction of reality, Covington-Ward focuses on specific flashpoints in the last ninety years of Congo's troubled history, when ...

  13. Gesture Imitation in Schizophrenia

    Science.gov (United States)

    Matthews, Natasha; Gold, Brian J.; Sekuler, Robert; Park, Sohee

    2013-01-01

    Recent evidence suggests that individuals with schizophrenia (SZ) are impaired in their ability to imitate gestures and movements generated by others. This impairment in imitation may be linked to difficulties in generating and maintaining internal representations in working memory (WM). We used a novel quantitative technique to investigate the relationship between WM and imitation ability. SZ outpatients and demographically matched healthy control (HC) participants imitated hand gestures. In Experiment 1, participants imitated single gestures. In Experiment 2, they imitated sequences of 2 gestures, either while viewing the gesture online or after a short delay that forced the use of WM. In Experiment 1, imitation errors were increased in SZ compared with HC. Experiment 2 revealed a significant interaction between imitation ability and WM. SZ produced more errors and required more time to imitate when that imitation depended upon WM compared with HC. Moreover, impaired imitation from WM was significantly correlated with the severity of negative symptoms but not with positive symptoms. In sum, gesture imitation was impaired in schizophrenia, especially when the production of an imitation depended upon WM and when an imitation entailed multiple actions. Such a deficit may have downstream consequences for new skill learning. PMID:21765171

  15. Gesture Modelling for Linguistic Purposes

    CSIR Research Space (South Africa)

    Olivrin, GJ

    2007-05-01

    Full Text Available The study of sign languages attempts to create a coherent model that binds the expressive nature of signs conveyed in gestures to a linguistic framework. Gesture modelling offers an alternative that provides device independence, scalability...

  16. Gesture in the Developing Brain

    Science.gov (United States)

    Dick, Anthony Steven; Goldin-Meadow, Susan; Solodkin, Ana; Small, Steven L.

    2012-01-01

    Speakers convey meaning not only through words, but also through gestures. Although children are exposed to co-speech gestures from birth, we do not know how the developing brain comes to connect meaning conveyed in gesture with speech. We used functional magnetic resonance imaging (fMRI) to address this question and scanned 8- to 11-year-old…

  17. Instant MinGW starter

    CERN Document Server

    Shpigor, Ilya

    2013-01-01

    This is a Starter guide designed to enable the reader to start using MinGW to develop Microsoft Windows applications as quickly, and as efficiently, as possible. This book is for C and C++ developers who are looking for new and effective instruments to use in application development for Microsoft Windows. No experience of MinGW is needed: this book will guide you through the essentials to get you using the software like a pro in a matter of hours.

  18. Eye-based head gestures

    DEFF Research Database (Denmark)

    Mardanbegi, Diako; Witzner Hansen, Dan; Pederson, Thomas

    2012-01-01

    A novel method for video-based head gesture recognition using eye information from an eye tracker is proposed. The method uses a combination of gaze and eye movement to infer head gestures. Compared to other gesture-based methods, a major advantage of the method is that the user keeps the gaze...... mobile phone screens. The user study shows that the method detects a set of defined gestures reliably.

  19. Single progenitor model for GW150914 and GW170104

    Science.gov (United States)

    D'Orazio, Daniel J.; Loeb, Abraham

    2018-04-01

    The merger of stellar-mass black holes (BHs) is not expected to generate detectable electromagnetic (EM) emission. However, the gravitational wave (GW) events GW150914 and GW170104, detected by the Laser Interferometer Gravitational Wave Observatory to be the result of merging, ˜60 M⊙ binary black holes (BBHs), each have claimed coincident gamma-ray emission. Motivated by the intriguing possibility of an EM counterpart to BBH mergers, we construct a model that can reproduce the observed EM and GW signals for GW150914- and GW170104-like events, from a single-star progenitor. Following Loeb [Astrophys. J. Lett. 819, L21 (2016), 10.3847/2041-8205/819/2/L21], we envision a massive, rapidly rotating star within which a rotating-bar instability fractures the core into two overdensities that fragment into clumps which merge to form BHs in a tight binary with arbitrary spin-orbit alignment. Once formed, the BBH inspirals due to gas and gravitational-wave drag until tidal forces trigger strong feeding of the BHs with the surrounding stellar-density gas about 10 sec before merger. The resulting giga-Eddington accretion peak launches a jet that breaks out of the progenitor star and drives a powerful outflow that clears the gas from the orbit of the binary within 1 sec, preserving the vacuum GW waveform in the Laser Interferometer Gravitational Wave Observatory band. The single-progenitor scenario predicts the existence of variability of the gamma-ray burst, modulated at the ˜0.2 sec chirping period of the BBH due to relativistic Doppler boost. The jet breakout should be accompanied by a low-luminosity supernova. Finally, because the BBHs of the single-progenitor model do not exist at large separations, they will not be detectable in the low-frequency gravitational-wave band of the Laser Interferometer Space Antenna. Hence, the single-progenitor BBHs will be unambiguously discernible from BBHs formed through alternate, double-progenitor evolution scenarios.
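
    The ~0.2 sec modulation mentioned above follows from the orbital period of the binary roughly 10 seconds before merger. A minimal sketch of that estimate, using the standard leading-order (Newtonian) chirp formula; the chirp mass of about 28 M⊙ is an assumed, roughly GW150914-like value for illustration, not a number taken from this abstract:

```python
import math

# Leading-order (Newtonian) chirp: GW frequency a time tau before merger,
# f_gw(tau) = (1/pi) * (5 / (256 tau))**(3/8) * (G*Mc/c^3)**(-5/8)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def gw_frequency(tau_s, chirp_mass_msun):
    """GW frequency (Hz) a time tau_s before merger, at leading order."""
    tm = G * chirp_mass_msun * M_SUN / c**3   # chirp mass expressed in seconds
    return (5.0 / (256.0 * tau_s))**0.375 * tm**(-0.625) / math.pi

chirp_mass = 28.0   # assumed GW150914-like chirp mass (illustrative)
tau = 10.0          # seconds before merger, as in the feeding scenario above
f_gw = gw_frequency(tau, chirp_mass)
orbital_period = 2.0 / f_gw   # GW frequency is twice the orbital frequency
print(f"f_gw ~ {f_gw:.1f} Hz, orbital period ~ {orbital_period:.2f} s")
```

    With these assumed inputs the orbital period comes out at roughly a quarter of a second, consistent with the ~0.2 sec Doppler modulation quoted in the abstract.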

  20. Gestures Enhance Foreign Language Learning

    Directory of Open Access Journals (Sweden)

    Manuela Macedonia

    2012-11-01

    Full Text Available Language and gesture are highly interdependent systems that reciprocally influence each other. For example, performing a gesture when learning a word or a phrase enhances its retrieval compared to pure verbal learning. Although the enhancing effects of co-speech gestures on memory are known to be robust, the underlying neural mechanisms are still unclear. Here, we summarize the results of behavioral and neuroscientific studies. They indicate that the neural representation of words consists of complex multimodal networks connecting perception and motor acts that occur during learning. In this context, gestures can reinforce the sensorimotor representation of a word or a phrase, making it resistant to decay. Also, gestures can favor embodiment of abstract words by creating it from scratch. Thus, we propose the use of gesture as a facilitating educational tool that integrates body and mind.

  1. Gestures and multimodal input

    OpenAIRE

    Keates, Simeon; Robinson, Peter

    1999-01-01

    For users with motion impairments, the standard keyboard and mouse arrangement for computer access often presents problems. Other approaches have to be adopted to overcome this. In this paper, we will describe the development of a prototype multimodal input system based on two gestural input channels. Results from extensive user trials of this system are presented. These trials showed that the physical and cognitive loads on the user can quickly become excessive and detrimental to the interac...

  2. Pantomimic gestures for human-robot interaction

    CSIR Research Space (South Africa)

    Burke, Michael G

    2015-10-01

    Full Text Available This work introduces a pantomimic gesture interface, which classifies human hand gestures using...

  3. THE PROGENITOR OF GW150914

    Energy Technology Data Exchange (ETDEWEB)

    Woosley, S. E., E-mail: woosley@ucolick.org [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States)

    2016-06-10

    The spectacular detection of gravitational waves (GWs) from GW150914 and its reported association with a gamma-ray burst (GRB) offer new insights into the evolution of massive stars. Here, it is shown that no single star of any mass and credible metallicity is likely to produce the observed GW signal. Stars with helium cores in the mass range 35–133 M⊙ encounter the pair instability and either explode or pulse until the core mass is less than 45 M⊙, smaller than the combined mass of the observed black holes. The rotation of more massive helium cores is either braked by interaction with a slowly rotating hydrogen envelope, if one is present, or by mass loss, if one is not. The very short interval between the GW signal and the observed onset of the putative GRB in GW150914 is also too short to have come from a single star. A more probable model for making the gravitational radiation is the delayed merger of two black holes made by 70 and 90 M⊙ stars in a binary system. The more massive component was a pulsational pair-instability supernova before making the first black hole.

  4. Geothermal GW cogeneration system GEOCOGEN

    Energy Technology Data Exchange (ETDEWEB)

    Grob, Gustav R

    2010-09-15

    GEOCOGEN is the GW-scale, zero-pollution, no-risk solution to replace nuclear and fossil-fuelled power plants. It can be built near energy consumption centers, is invisible, and produces electricity and heat at a fraction of the cost of the other energy-mix options. It is a breakthrough deep-well geothermal energy technology that lasts indefinitely and can also drive millions of electric vehicles.

  5. Gesture en route to words

    DEFF Research Database (Denmark)

    Jensen de López, Kristine M.

    2010-01-01

    This study explores the communicative production of the gestural and vocal modalities by 8 normally developing children in two different cultures (Danish and Zapotec, a Mexican indigenous culture), aged 16 to 20 months. We analyzed the spontaneous production of gestures and words in the children's transition to the two-word...... the children showed an early preference for the gestural or vocal modality. Through analyses of two-element combinations of words and/or gestures, we observed a relative increase in cross-modal (gesture-word and two-word) combinations. The results are discussed in terms of understanding gestures as a transition...

  6. Gesturing Makes Memories that Last

    Science.gov (United States)

    Cook, Susan Wagner; Yip, Terina KuangYi; Goldin-Meadow, Susan

    2010-01-01

    When people are asked to perform actions, they remember those actions better than if they are asked to talk about the same actions. But when people talk, they often gesture with their hands, thus adding an action component to talking. The question we asked in this study was whether producing gesture along with speech makes the information encoded…

  7. INTEGRAL Observations of GW170104

    DEFF Research Database (Denmark)

    Savchenko, V.; Ferrigno, C.; Bozzo, E.

    2017-01-01

    We used data from the International Gamma-Ray Astrophysics Laboratory (INTEGRAL) to set upper limits on the γ-ray and hard X-ray prompt emission associated with the gravitational-wave event GW170104, discovered by the Laser Interferometer Gravitational-wave Observatory (LIGO)/Virgo collaboration...... the INTEGRAL observations range from F_γ = 1.9 × 10^−7 erg cm^−2 to F_γ = 10^−6 erg cm^−2 (75 keV–2 MeV energy range). This translates into a ratio between the prompt energy released in γ-rays along the direction to the observer and the gravitational-wave energy of E_γ/E_GW

  8. Gestures and Insight in Advanced Mathematical Thinking

    Science.gov (United States)

    Yoon, Caroline; Thomas, Michael O. J.; Dreyfus, Tommy

    2011-01-01

    What role do gestures play in advanced mathematical thinking? We argue that the role of gestures goes beyond merely communicating thought and supporting understanding--in some cases, gestures can help generate new mathematical insights. Gestures feature prominently in a case study of two participants working on a sequence of calculus activities.…

  9. INTEGRAL Observations of GW170104

    Energy Technology Data Exchange (ETDEWEB)

    Savchenko, V.; Ferrigno, C.; Bozzo, E.; Courvoisier, T. J.-L. [ISDC, Department of Astronomy, University of Geneva, chemin d’Écogia, 16 CH-1290 Versoix (Switzerland); Bazzano, A. [INAF-Institute for Space Astrophysics and Planetology, Via Fosso del Cavaliere 100, I-00133-Rome (Italy); Brandt, S.; Chenevez, J.; Ubertini, P. [DTU Space—National Space Institute Elektrovej, Building 327, DK-2800 Kongens Lyngby (Denmark); Diehl, R.; Von Kienlin, A. [Max-Planck-Institut für Extraterrestrische Physik, Garching (Germany); Hanlon, L.; Martin-Carillo, A. [Space Science Group, School of Physics, University College Dublin, Belfield, Dublin 4 (Ireland); Kuulkers, E. [European Space Research and Technology Centre (ESA/ESTEC), Keplerlaan 1, 2201 AZ Noordwijk (Netherlands); Laurent, P. [APC, AstroParticule et Cosmologie, Université Paris Diderot, CNRS/IN2P3, CEA/Irfu, Observatoire de Paris Sorbonne Paris Cité, 10 rue Alice Domont et Léonie Duquet, F-75205 Paris Cedex 13 (France); Lebrun, F. [DSM/Irfu/Service d’Astrophysique, Bat. 709 Orme des Merisiers CEA Saclay, F-91191 Gif-sur-Yvette Cedex (France); Lutovinov, A.; Sunyaev, R. [Space Research Institute of Russian Academy of Sciences, Profsoyuznaya 84/32, 117997 Moscow (Russian Federation); Mereghetti, S. [INAF, IASF-Milano, via E.Bassini 15, I-20133 Milano (Italy); Roques, J. P. [Université Toulouse, UPS-OMP, CNRS, IRAP, 9 Av. Roche, BP 44346, F-31028 Toulouse (France)

    2017-09-10

    We used data from the International Gamma-Ray Astrophysics Laboratory (INTEGRAL) to set upper limits on the γ-ray and hard X-ray prompt emission associated with the gravitational-wave event GW170104, discovered by the Laser Interferometer Gravitational-wave Observatory (LIGO)/Virgo collaboration. The unique omnidirectional viewing capability of the instruments on board INTEGRAL allowed us to examine the full 90% confidence level localization region of the LIGO trigger. Depending on the particular spectral model assumed and the specific position within this region, the upper limits inferred from the INTEGRAL observations range from F_γ = 1.9 × 10^−7 erg cm^−2 to F_γ = 10^−6 erg cm^−2 (75 keV–2 MeV energy range). This translates into a ratio between the prompt energy released in γ-rays along the direction to the observer and the gravitational-wave energy of E_γ/E_GW < 2.6 × 10^−5. Using the INTEGRAL results, we cannot confirm the γ-ray counterpart to GW170104 proposed by the Astro-Rivelatore Gamma a Immagini Leggero (AGILE) team with the mini-Calorimeter (MCAL) instrument. The reported flux of the AGILE/MCAL event, E2, is not compatible with the INTEGRAL upper limits within most of the 90% LIGO localization region. There is only a relatively limited portion of the sky where the sensitivity of the INTEGRAL instruments was not optimal and the lowest-allowed fluence estimated for E2 would still be compatible with the INTEGRAL results. This region was also observed independently by Fermi/Gamma-ray Burst Monitor and AstroSAT, from which, as far as we are aware, there are no reports of any significant detection of a prompt high-energy event.
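
    The quoted E_γ/E_GW limit follows from scaling the fluence upper limit to an isotropic-equivalent energy at the source distance and dividing by the radiated GW energy. A minimal sketch of that arithmetic, assuming a luminosity distance of about 880 Mpc and roughly 2 M⊙ of radiated GW energy for GW170104 (both values come from the LIGO analysis, not from this abstract):

```python
import math

MPC_CM = 3.086e24    # centimetres per megaparsec
M_SUN_G = 1.989e33   # solar mass in grams
C_CM_S = 2.998e10    # speed of light, cm/s

def energy_ratio(fluence_erg_cm2, distance_mpc, e_gw_msun):
    """Isotropic-equivalent gamma-ray energy over radiated GW energy."""
    d_cm = distance_mpc * MPC_CM
    e_gamma = 4.0 * math.pi * d_cm**2 * fluence_erg_cm2   # erg
    e_gw = e_gw_msun * M_SUN_G * C_CM_S**2                # erg (E = m c^2)
    return e_gamma / e_gw

# Upper end of the INTEGRAL fluence limits quoted in the abstract
ratio = energy_ratio(1e-6, 880.0, 2.0)
print(f"E_gamma / E_GW < {ratio:.1e}")
```

    With these assumed values the ratio comes out near 2.6 × 10^−5, matching the limit quoted in the abstract.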

  10. Machine Learning of Musical Gestures

    OpenAIRE

    Caramiaux, Baptiste; Tanaka, Atau

    2013-01-01

    We present an overview of machine learning (ML) techniques and their application in interactive music and new digital instrument design. We first give the non-specialist reader an introduction to two ML tasks, classification and regression, that are particularly relevant for gestural interaction. We then present a review of the literature in current NIME research that uses ML in musical gesture analysis and gestural sound control. We describe the ways in which machine learning is useful for cre...

  11. Kazakh Traditional Dance Gesture Recognition

    Science.gov (United States)

    Nussipbekov, A. K.; Amirgaliyev, E. N.; Hahn, Minsoo

    2014-04-01

    Full-body gesture recognition is an important and interdisciplinary research field that is widely used in many application areas, including dance gesture recognition. The rapid growth of technology in recent years has contributed much to this domain, but it remains a challenging task. In this paper we implement Kazakh traditional dance gesture recognition. We use a Microsoft Kinect camera to obtain human skeleton and depth information. We then apply a tree-structured Bayesian network and the Expectation Maximization algorithm with K-means clustering to calculate conditional linear Gaussians for classifying poses, and finally use a Hidden Markov Model to detect dance gestures. Our main contribution is that we extend the Kinect skeleton by adding headwear as a new skeleton joint, calculated from the depth image. This novelty allows us to significantly improve the accuracy of head gesture recognition of a dancer, which in turn plays a considerable role in whole-body gesture recognition. Experimental results show the efficiency of the proposed method and that its performance is comparable to state-of-the-art systems.
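
    The pipeline described above (pose quantization, then per-gesture sequence models) can be sketched in miniature with a discrete HMM scored by the forward algorithm. This is an illustrative toy, not the authors' implementation: the pose quantizer stands in for their K-means clustering step, and the gesture names and model parameters are invented for the example:

```python
import numpy as np

def quantize(frames, centers):
    """Map each pose frame to its nearest cluster center index
    (stand-in for the K-means pose vocabulary in the paper)."""
    d = ((frames[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (pi: initial probs, A: transition matrix, B: emission probs),
    computed with the forward algorithm in the log domain."""
    log_alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        m = log_alpha.max()  # shift for numerical stability
        log_alpha = m + np.log(np.exp(log_alpha - m) @ A) + np.log(B[:, o])
    return np.logaddexp.reduce(log_alpha)

def classify(obs, models):
    """Pick the gesture whose HMM scores the observation highest."""
    return max(models, key=lambda name: log_likelihood(obs, *models[name]))

# Two toy single-state "gestures" over a 2-symbol pose vocabulary
models = {
    "wave": (np.array([1.0]), np.array([[1.0]]), np.array([[0.9, 0.1]])),
    "bow":  (np.array([1.0]), np.array([[1.0]]), np.array([[0.1, 0.9]])),
}
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
frames = np.array([[0.1, 0.0], [0.0, 0.2], [0.1, 0.1]])  # all near center 0
obs = quantize(frames, centers)
print(classify(obs, models))  # symbols mostly 0 -> "wave"
```

    A real system would use many states per gesture, emissions learned by EM from recorded skeleton sequences, and a richer pose vocabulary, but the scoring step is the same.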

  12. Non Audio-Video gesture recognition system

    DEFF Research Database (Denmark)

    Craciunescu, Razvan; Mihovska, Albena Dimitrova; Kyriazakos, Sofoklis

    2016-01-01

    Gesture recognition is a topic in computer science and language technology whose goal is interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state, but commonly originate from the face or hands. Current research focuses include emotion...... recognition from the face and hand gesture recognition. Gesture recognition enables humans to communicate with machines and interact naturally without any mechanical devices. This paper investigates the possibility of using non-audio/video sensors to design a low-cost gesture recognition device...

  13. Perspectives on gesture from music informatics, performance and aesthetics

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Frimodt-Møller, Søren; Grund, Cynthia

    2014-01-01

    This article chronicles the research of the Nordic Network of Music Informatics, Performance and Aesthetics (NNIMIPA), and shows how the milieux bridge the gap between the disciplines involved. As examples, three projects within NNIMIPA involving performance interaction examine the role of audio...... and gestures in emotional musical expression using motion capture, the visual and auditive cues musicians provide each other in an ensemble when rehearsing, and the decision processes involved when a musician coordinates with other musicians. These projects seek to combine and compare intuitions derived from...... low-tech instructional music workshops that rely heavily on the use of whole-body gestures with the insights provided by high-tech studies and formal logic models of the performing musician, not only with respect to the sound, but also with regard to the movements of the performer and the mechanisms...

  14. Ethics, Gesture and the Western

    Directory of Open Access Journals (Sweden)

    Michael Minden

    2017-06-01

    Full Text Available This paper relates the Western movie to Agamben's implied gestural zone between intention and act. Film is important in the realisation of this zone because it was the first means of representation to capture the body in movement. The Western movie explores the space of ethical indistinction between the acts of individual fighters and the establishment of a rule of law, or, putting this another way, between violence and justice. Two classic examples of an archetypal Western plot (Shane, 1953, and Unforgiven, 1991) that particularly embody this are cited. In both, a gunfighter who has forsworn violence at the start is led by the circumstances of the plot to take it up once more at the conclusion. In these terms, all the gestures contained between these beginning- and end-points are analysable as an ethics of gesture because, captured as gestures, they occupy the human space between abstraction and action, suspended between them and reducible to neither. David Foster Wallace's definition of this narrative arc in Infinite Jest (and embodied in it) is adduced in order to suggest a parallel between Agamben's notion of an ethics of gesture and an ethics of genre.

  15. How do gestures influence thinking and speaking? The gesture-for-conceptualization hypothesis

    OpenAIRE

    Kita, Sotaro; Alibali, M. W.; Chu, Mingyuan

    2017-01-01

    People spontaneously produce gestures during speaking and thinking. The authors focus here on gestures that depict or indicate information related to the contents of concurrent speech or thought (i.e., representational gestures). Previous research indicates that such gestures have not only communicative functions, but also self-oriented cognitive functions. In this article, the authors propose a new theoretical framework, the gesture-for-conceptualization hypothesis, which explains the self-o...

  16. An in-situ trainable gesture classifier

    NARCIS (Netherlands)

    van Diepen, A.; Cox, M.G.H.; de Vries, A.; Duivesteijn, W.; Pechenizkiy, M.; Fletcher, G.H.L.

    2017-01-01

    Gesture recognition, i.e., the recognition of pre-defined gestures by arm or hand movements, enables a natural extension of the way we currently interact with devices (Horsley, 2016). Commercially available gesture recognition systems are usually pre-trained: the developers specify a set of

  17. Gesture Activated Mobile Edutainment (GAME)

    DEFF Research Database (Denmark)

    Rehm, Matthias; Leichtenstern, Karin; Plomer, Joerg

    2010-01-01

    An approach to intercultural training of nonverbal behavior is presented that draws from research on role-plays with virtual agents and ideas from situated learning. To this end, a mobile serious game is realized where the user acquires knowledge about German emblematic gestures and tries them out in role-plays with virtual agents. Gesture performance is evaluated making use of built-in acceleration sensors of smart phones. After an account of the theoretical background covering diverse areas like virtual agents, situated learning and intercultural training, the paper presents the GAME approach along with details on the gesture recognition and content authoring. By its experience-based role-plays with virtual characters, GAME brings together ideas from situated learning and intercultural training in an integrated approach and paves the way for new m-learning concepts.

  18. Gesture, sign, and language: The coming of age of sign language and gesture studies.

    Science.gov (United States)

    Goldin-Meadow, Susan; Brentari, Diane

    2017-01-01

    How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.

  19. Thirty years of great ape gestures.

    Science.gov (United States)

    Tomasello, Michael; Call, Josep

    2018-02-21

    We and our colleagues have been doing studies of great ape gestural communication for more than 30 years. Here we attempt to spell out what we have learned. Some aspects of the process have been reliably established by multiple researchers, for example, its intentional structure and its sensitivity to the attentional state of the recipient. Other aspects are more controversial. We argue here that it is a mistake to assimilate great ape gestures to the species-typical displays of other mammals by claiming that they are fixed action patterns, as there are many differences, including the use of attention-getters. It is also a mistake, we argue, to assimilate great ape gestures to human gestures by claiming that they are used referentially and declaratively in a human-like manner, as apes' "pointing" gesture has many limitations and they do not gesture iconically. Great ape gestures constitute a unique form of primate communication with their own unique qualities.

  20. Gesturing Gives Children New Ideas About Math

    Science.gov (United States)

    Goldin-Meadow, Susan; Cook, Susan Wagner; Mitchell, Zachary A.

    2009-01-01

    How does gesturing help children learn? Gesturing might encourage children to extract meaning implicit in their hand movements. If so, children should be sensitive to the particular movements they produce and learn accordingly. Alternatively, all that may matter is that children move their hands. If so, they should learn regardless of which movements they produce. To investigate these alternatives, we manipulated gesturing during a math lesson. We found that children required to produce correct gestures learned more than children required to produce partially correct gestures, who learned more than children required to produce no gestures. This effect was mediated by whether children took information conveyed solely in their gestures and added it to their speech. The findings suggest that body movements are involved not only in processing old ideas, but also in creating new ones. We may be able to lay foundations for new knowledge simply by telling learners how to move their hands. PMID:19222810

  1. How do gestures influence thinking and speaking? The gesture-for-conceptualization hypothesis.

    Science.gov (United States)

    Kita, Sotaro; Alibali, Martha W; Chu, Mingyuan

    2017-04-01

    People spontaneously produce gestures during speaking and thinking. The authors focus here on gestures that depict or indicate information related to the contents of concurrent speech or thought (i.e., representational gestures). Previous research indicates that such gestures have not only communicative functions, but also self-oriented cognitive functions. In this article, the authors propose a new theoretical framework, the gesture-for-conceptualization hypothesis, which explains the self-oriented functions of representational gestures. According to this framework, representational gestures affect cognitive processes in 4 main ways: gestures activate, manipulate, package, and explore spatio-motoric information for speaking and thinking. These four functions are shaped by gesture's ability to schematize information, that is, to focus on a small subset of available information that is potentially relevant to the task at hand. The framework is based on the assumption that gestures are generated from the same system that generates practical actions, such as object manipulation; however, gestures are distinct from practical actions in that they represent information. The framework provides a novel, parsimonious, and comprehensive account of the self-oriented functions of gestures. The authors discuss how the framework accounts for gestures that depict abstract or metaphoric content, and they consider implications for the relations between self-oriented and communicative functions of gestures. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Gesture Interaction at a Distance

    NARCIS (Netherlands)

    Fikkert, F.W.

    2010-01-01

    The aim of this work is to explore, from a perspective of human behavior, which gestures are suited to control large display surfaces from a short distance away; why that is so; and, equally important, how such an interface can be made a reality. A well-known example of the type of interface that is

  3. GW170817 falsifies dark matter emulators

    Science.gov (United States)

    Boran, S.; Desai, S.; Kahya, E. O.; Woodard, R. P.

    2018-02-01

    On August 17, 2017 the LIGO interferometers detected the gravitational wave (GW) signal (GW170817) from the coalescence of binary neutron stars. This signal was also simultaneously seen throughout the electromagnetic (EM) spectrum from radio waves to gamma rays. We point out that this simultaneous detection of GW and EM signals rules out a class of modified gravity theories, termed "dark matter emulators," which dispense with the need for dark matter by making ordinary matter couple to a different metric from that of GW. We discuss other kinds of modified gravity theories which dispense with the need for dark matter and are still viable. This simultaneous observation also provides the first observational test of Einstein's weak equivalence principle (WEP) between gravitons and photons. We estimate the Shapiro time delay due to the gravitational potential of the total dark matter distribution along the line of sight (complementary to the calculation by Abbott et al. [Astrophys. J. Lett. 848, L13 (2017)], 10.3847/2041-8213/aa920c) to be about 400 days. Using this estimate for the Shapiro delay and the time difference of 1.7 seconds between the GW signal and gamma rays, we can constrain violations of the WEP using the parametrized post-Newtonian parameter γ, and the bound is |γ_GW - γ_EM| < 9.8 × 10^-8.
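    The quoted bound follows directly from the two numbers in the abstract. Under the standard Shapiro-delay comparison, |γ_GW - γ_EM| ≤ 2Δt/Δt_Shapiro, where Δt is the observed GW-EM arrival difference and Δt_Shapiro is the cumulative gravitational delay along the line of sight. A minimal sketch (the ~400-day delay is the authors' estimate, and the formula is the usual WEP test assumed here):

```python
# Reproduce the GW170817 weak-equivalence-principle bound quoted above.
# Assumes the standard Shapiro-delay test: |gamma_GW - gamma_EM| <= 2*dt/dt_shapiro.
dt_observed = 1.7                  # s, gamma rays arrived 1.7 s after the GW signal
dt_shapiro = 400 * 86_400          # s, ~400-day Shapiro delay (authors' estimate)
bound = 2 * dt_observed / dt_shapiro
print(f"|gamma_GW - gamma_EM| < {bound:.1e}")   # 9.8e-08, matching the abstract
```

    The bound is weak in absolute terms only because Δt_Shapiro is enormous; any appreciable difference in how gravitons and photons feel the intervening potential would have separated the two signals by far more than 1.7 seconds.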

  4. Communicative Gestures Facilitate Problem Solving for Both Communicators and Recipients

    Science.gov (United States)

    Lozano, Sandra C.; Tversky, Barbara

    2006-01-01

    Gestures are a common, integral part of communication. Here, we investigate the roles of gesture and speech in explanations, both for communicators and recipients. Communicators explained how to assemble a simple object, using either speech with gestures or gestures alone. Gestures used for explaining included pointing and exhibiting to indicate…

  5. Hand Matters: Left-Hand Gestures Enhance Metaphor Explanation

    Science.gov (United States)

    Argyriou, Paraskevi; Mohr, Christine; Kita, Sotaro

    2017-01-01

    Research suggests that speech-accompanying gestures influence cognitive processes, but it is not clear whether the gestural benefit is specific to the gesturing hand. Two experiments tested the "(right/left) hand-specificity" hypothesis for self-oriented functions of gestures: gestures with a particular hand enhance cognitive processes…

  6. What Iconic Gesture Fragments Reveal about Gesture-Speech Integration: When Synchrony Is Lost, Memory Can Help

    Science.gov (United States)

    Obermeier, Christian; Holle, Henning; Gunter, Thomas C.

    2011-01-01

    The present series of experiments explores several issues related to gesture-speech integration and synchrony during sentence processing. To be able to more precisely manipulate gesture-speech synchrony, we used gesture fragments instead of complete gestures, thereby avoiding the usual long temporal overlap of gestures with their coexpressive…

  7. How early do children understand gesture-speech combinations with iconic gestures?

    Science.gov (United States)

    Stanfield, Carmen; Williamson, Rebecca; Ozçalişkan, Seyda

    2014-03-01

    Children understand gesture+speech combinations in which a deictic gesture adds new information to the accompanying speech by age 1;6 (Morford & Goldin-Meadow, 1992; 'push'+point at ball). This study explores how early children understand gesture+speech combinations in which an iconic gesture conveys additional information not found in the accompanying speech (e.g., 'read'+BOOK gesture). Our analysis of two- to four-year-old children's responses in a gesture+speech comprehension task showed that children grasp the meaning of iconic co-speech gestures by age three and continue to improve their understanding with age. Overall, our study highlights the important role gesture plays in language comprehension as children learn to unpack increasingly complex communications addressed to them at the early ages.

  8. Gestures Towards the Digital Maypole

    Directory of Open Access Journals (Sweden)

    Christian McRea

    2005-01-01

    To paraphrase Blanchot: We are not learned; we are not ignorant. We have known joys. That is saying too little: We are alive, and this life gives us the greatest pleasure. The intensities afforded by mobile communication can be thought of as an extension of the styles and gestures already materialised by multiple maypole cultures, pre-digital community forms and the very clustered natures of speech and being. In his Critique of Judgment, Kant argues that the information selection process at the heart of communication is one of the fundamental activities of any aesthetically produced knowledge form. From this radial point, "Gestures Towards The Digital Maypole" begins the process of reorganising conceptions of modalities of communication around the absent centre and the affective realms that form through the movement of information-energy, like sugar in a hurricane.

  9. Gestures in an Intelligent User Interface

    Science.gov (United States)

    Fikkert, Wim; van der Vet, Paul; Nijholt, Anton

    In this chapter we investigate which hand gestures are intuitive, from a user's perspective, for controlling a large display multimedia interface. Over the course of two sequential user evaluations, we defined a simple gesture set that allows users to fully control a large display multimedia interface, intuitively. First, we evaluated numerous gesture possibilities for a set of commands that can be issued to the interface. These gestures were selected from literature, science fiction movies, and a previous exploratory study. Second, we implemented a working prototype with which users could interact, using both hands and the preferred hand gestures, with 2D and 3D visualizations of biochemical structures. We found that the gestures are influenced to a significant extent by the fast-paced developments in multimedia interfaces such as the Apple iPhone and the Nintendo Wii, and to no lesser degree by decades of experience with the more traditional WIMP-based interfaces.

  10. Initial experiments with Multiple Musical Gestures

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Graugaard, Lars

    2005-01-01

    The classic orchestra has a diminishing role in society, while hard-disc recorded music plays a predominant role today. A simple-to-use pointer interface in 2D for producing music is presented as a means for playing in a social situation. The sounds of the music are produced by a low-level synthesizer, and the music is produced by simple gestures that are repeated easily. The gestures include left-to-right and right-to-left motion shapes for the spectral envelope and temporal envelope of the sounds, with optional backwards motion for the addition of noise; downward motion for note onset; and several other manipulation gestures. The initial position controls which parameter is being affected, the note's intensity is controlled by the downward gesture speed, and a sequence is finalized instantly with one upward gesture. The synthesis employs a novel interface structure, the multiple musical gesture...

  11. Acid chat: gestural interface design

    OpenAIRE

    Gökhan, Ali Oytun; Gokhan, Ali Oytun

    2005-01-01

    AcidChat is an experimental design project that aims to create an innovative computer software interface for Internet chat software using today's well-known technologies: Adobe Photoshop, Macromedia Freehand and digital photography. The aim of the project is to create new understandings of the interface and its usage by adding new conceptions to chat-based interfaces, creating a totally new way of looking at computer software and applications. One of the key features is to add a gestural approach ...

  12. Gesture analysis for physics education researchers

    Directory of Open Access Journals (Sweden)

    Rachel E. Scherr

    2008-01-01

    Systematic observations of student gestures can not only fill in gaps in students’ verbal expressions, but can also offer valuable information about student ideas, including their source, their novelty to the speaker, and their construction in real time. This paper provides a review of the research in gesture analysis that is most relevant to physics education researchers and illustrates gesture analysis for the purpose of better understanding student thinking about physics.

  13. Hand Gesture Recognition with Leap Motion

    OpenAIRE

    Du, Youchen; Liu, Shenglan; Feng, Lin; Chen, Menghui; Wu, Jie

    2017-01-01

    The recent introduction of depth cameras like the Leap Motion Controller allows researchers to exploit depth information to recognize hand gestures more robustly. This paper proposes a novel hand gesture recognition system with the Leap Motion Controller. A series of features are extracted from Leap Motion tracking data, and we feed these features, along with HOG features extracted from sensor images, into a multi-class SVM classifier to recognize the performed gesture; dimension reduction and feature weight...

  14. Grounded Blends and Mathematical Gesture Spaces: Developing Mathematical Understandings via Gestures

    Science.gov (United States)

    Yoon, Caroline; Thomas, Michael O. J.; Dreyfus, Tommy

    2011-01-01

    This paper examines how a person's gesture space can become endowed with mathematical meaning associated with mathematical spaces and how the resulting mathematical gesture space can be used to communicate and interpret mathematical features of gestures. We use the theory of grounded blends to analyse a case study of two teachers who used gestures…

  15. Language, Gesture, Action! A Test of the Gesture as Simulated Action Framework

    Science.gov (United States)

    Hostetter, Autumn B.; Alibali, Martha W.

    2010-01-01

    The Gesture as Simulated Action (GSA) framework (Hostetter & Alibali, 2008) holds that representational gestures are produced when actions are simulated as part of thinking and speaking. Accordingly, speakers should gesture more when describing images with which they have specific physical experience than when describing images that are less…

  16. Quasi-Particle Self-Consistent GW for Molecules.

    Science.gov (United States)

    Kaplan, F; Harding, M E; Seiler, C; Weigend, F; Evers, F; van Setten, M J

    2016-06-14

    We present the formalism and implementation of quasi-particle self-consistent GW (qsGW) and eigenvalue only quasi-particle self-consistent GW (evGW) adapted to standard quantum chemistry packages. Our implementation is benchmarked against high-level quantum chemistry computations (coupled-cluster theory) and experimental results using a representative set of molecules. Furthermore, we compare the qsGW approach for five molecules relevant for organic photovoltaics to self-consistent GW results (scGW) and analyze the effects of the self-consistency on the ground state density by comparing calculated dipole moments to their experimental values. We show that qsGW makes a significant improvement over conventional G0W0 and that partially self-consistent flavors (in particular evGW) can be excellent alternatives.

  17. Gesture facilitates the syntactic analysis of speech

    Directory of Open Access Journals (Sweden)

    Henning eHolle

    2012-03-01

    Recent research suggests that the brain routinely binds together information from gesture and speech. However, most of this research focused on the integration of representational gestures with the semantic content of speech. Much less is known about how other aspects of gesture, such as emphasis, influence the interpretation of the syntactic relations in a spoken message. Here, we investigated whether beat gestures alter which syntactic structure is assigned to ambiguous spoken German sentences. The P600 component of the event-related brain potential indicated that the more complex syntactic structure is easier to process when the speaker emphasizes the subject of a sentence with a beat. Thus, a simple flick of the hand can change our interpretation of who has been doing what to whom in a spoken sentence. We conclude that gestures and speech are an integrated system. Unlike previous studies, which have shown that the brain effortlessly integrates semantic information from gesture and speech, our study is the first to demonstrate that this integration also occurs for syntactic information. Moreover, the effect appears to be gesture-specific and was not found for other stimuli that draw attention to certain parts of speech, including prosodic emphasis, or a moving visual stimulus with the same trajectory as the gesture. This suggests that only visual emphasis produced with a communicative intention in mind (that is, beat gestures) influences language comprehension, but not a simple visual movement lacking such an intention.

  18. Research on Interaction-oriented Gesture Recognition

    Directory of Open Access Journals (Sweden)

    Lu Huang

    2014-01-01

    This thesis designs a series of gesture interactions with the features of natural human-machine interaction and utilizes 3D acceleration sensors as interactive input. It then builds a discrete hidden Markov model for gesture recognition, introducing a collection scheme for gesture interaction based on the acceleration sensors and pre-processing the gesture acceleration signals obtained during collection. Finally, the thesis shows the design proposal to be workable and effective through experiments.
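    The pipeline this abstract describes, accelerometer signals quantized into discrete symbols and scored by a discrete hidden Markov model, can be sketched with the standard scaled forward algorithm. The parameters below are a toy illustration, not the thesis's trained models; in practice one HMM is trained per gesture and a sequence is assigned to the highest-scoring model.

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs | model) for a discrete HMM.

    pi  -- (S,) initial state distribution
    A   -- (S, S) transition matrix, A[i, j] = P(state j | state i)
    B   -- (S, K) emission matrix, B[i, k] = P(symbol k | state i)
    obs -- sequence of observation-symbol indices
    """
    alpha = pi * B[:, obs[0]]
    log_like = 0.0
    for t, o in enumerate(obs):
        if t > 0:
            alpha = (alpha @ A) * B[:, o]
        scale = alpha.sum()          # rescale each step to avoid underflow
        log_like += np.log(scale)
        alpha = alpha / scale
    return log_like

# Toy left-to-right model: two states for two phases of a gesture,
# two quantized accelerometer symbols.
pi = np.array([1.0, 0.0])
A = np.array([[0.7, 0.3],
              [0.0, 1.0]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(forward_log_likelihood(pi, A, B, [0, 0, 1, 1]))   # ≈ -1.8596
```

    Classification then reduces to `argmax` over the per-gesture log-likelihoods, which keeps the runtime cost linear in the sequence length.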

  19. Gesture and Speech in Interaction - 4th edition (GESPIN 4)

    OpenAIRE

    Ferré , Gaëlle; Mark , Tutton

    2015-01-01

    The fourth edition of Gesture and Speech in Interaction (GESPIN) was held in Nantes, France. With more than 40 papers, these proceedings show just what a flourishing field of enquiry gesture studies continues to be. The keynote speeches of the conference addressed three different aspects of multimodal interaction: gesture and grammar, gesture acquisition, and gesture and social interaction. In a talk entitled Qualities of event construal in speech and gesture: Aspect and...

  20. Tests of General Relativity with GW150914

    OpenAIRE

    Abbott, B. P.; Abbott, R.; Abernathy, M. R.; Adhikari, R. X.; Anderson, S. B.; Arai, K.; Araya, M. C.; Barayoga, J. C.; Barish, B. C.; Berger, B. K.; Billingsley, G.; Blackburn, J. K.; Bork, R.; Brooks, A. F.; Cahillane, C.

    2016-01-01

    The LIGO detection of GW150914 provides an unprecedented opportunity to study the two-body motion of a compact-object binary in the large-velocity, highly nonlinear regime, and to witness the final merger of the binary and the excitation of uniquely relativistic modes of the gravitational field. We carry out several investigations to determine whether GW150914 is consistent with a binary black-hole merger in general relativity. We find that the final remnant’s mass and spin, as determined fro...

  1. Hand Gesture Recognition Using Ultrasonic Waves

    KAUST Repository

    AlSharif, Mohammed Hussain

    2016-04-01

    Gesturing is a natural way of communication between people and is used in our everyday conversations. Hand gesture recognition systems are used in many applications in a wide variety of fields, such as mobile phone applications, smart TVs, and video gaming. With the advances in human-computer interaction technology, gesture recognition is becoming an active research area. There are two types of devices for detecting gestures: contact-based devices and contactless devices. Using ultrasonic waves to determine gestures is one approach employed in contactless devices, and hand gesture recognition utilizing ultrasonic waves is the focus of this thesis work. This thesis presents a new method for detecting and classifying a predefined set of hand gestures using a single ultrasonic transmitter and a single ultrasonic receiver. The method uses a linear frequency-modulated ultrasonic signal, designed to meet the project requirements, such as the update rate and the range of detection, while working within hardware limitations such as the limited output power and the limited transmitter and receiver bandwidth. The method can be adapted to other hardware setups. Gestures are identified based on two main features: the range estimate of the moving hand and the received signal strength (RSS). These two features are estimated using two simple methods, the channel impulse response (CIR) and the cross-correlation (CC) of the ultrasonic signal reflected from the gesturing hand. A customized, simple hardware setup was used to classify a set of hand gestures with high accuracy. Detection and classification were done using methods of low computational cost, which gives the proposed method great potential for implementation in many devices, including laptops and mobile phones. The predefined set of gestures can be used for many control applications.
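    The cross-correlation (CC) range estimate described above can be sketched in a few lines: correlate the received signal against the transmitted linear-frequency-modulated chirp and convert the peak lag to a hand range. All numbers here (sample rate, sweep band, simulated delay) are illustrative stand-ins, not the thesis's actual hardware parameters.

```python
import numpy as np

fs = 192_000                       # sample rate, Hz (illustrative)
T = 0.002                          # chirp duration, s
f0, f1 = 20_000, 40_000            # linear frequency sweep, Hz
c = 343.0                          # speed of sound in air, m/s

t = np.arange(int(fs * T)) / fs
chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t**2))

# Simulate an attenuated echo from a hand with a 100-sample round-trip delay.
true_lag = 100
rx = np.zeros(len(chirp) + 400)
rx[true_lag:true_lag + len(chirp)] = 0.3 * chirp

# Cross-correlation: the peak lag gives the round-trip travel time.
corr = np.correlate(rx, chirp, mode="valid")
lag = int(np.argmax(corr))
hand_range = c * (lag / fs) / 2    # halve for the out-and-back path
print(lag, round(hand_range, 4))
```

    The sharpness of the correlation peak comes from the chirp's bandwidth, which is why an LFM signal is preferred over a pure tone for range estimation.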

  2. GESTURE'S ROLE IN CREATING AND LEARNING LANGUAGE.

    Science.gov (United States)

    Goldin-Meadow, Susan

    2010-09-22

    Imagine a child who has never seen or heard language. Would such a child be able to invent a language? Despite what one might guess, the answer is "yes". This chapter describes children who are congenitally deaf and cannot learn the spoken language that surrounds them. In addition, the children have not been exposed to sign language, either by their hearing parents or their oral schools. Nevertheless, the children use their hands to communicate--they gesture--and those gestures take on many of the forms and functions of language (Goldin-Meadow 2003a). The properties of language that we find in these gestures are just those properties that do not need to be handed down from generation to generation, but can be reinvented by a child de novo. They are the resilient properties of language, properties that all children, deaf or hearing, come to language-learning ready to develop. In contrast to these deaf children who are inventing language with their hands, hearing children are learning language from a linguistic model. But they too produce gestures, as do all hearing speakers (Feyereisen and de Lannoy 1991; Goldin-Meadow 2003b; Kendon 1980; McNeill 1992). Indeed, young hearing children often use gesture to communicate before they use words. Interestingly, changes in a child's gestures not only predate but also predict changes in the child's early language, suggesting that gesture may be playing a role in the language-learning process. This chapter begins with a description of the gestures the deaf child produces without speech. These gestures assume the full burden of communication and take on a language-like form--they are language. This phenomenon stands in contrast to the gestures hearing speakers produce with speech. These gestures share the burden of communication with speech and do not take on a language-like form--they are part of language.

  3. Individual differences in the gesture effect on working memory.

    Science.gov (United States)

    Marstaller, Lars; Burianová, Hana

    2013-06-01

    Co-speech gestures have been shown to interact with working memory (WM). However, no study has investigated whether there are individual differences in the effect of gestures on WM. Combining a novel gesture/no-gesture task and an operation span task, we examined the differences in WM accuracy between individuals who gestured and individuals who did not gesture in relation to their WM capacity. Our results showed individual differences in the gesture effect on WM. Specifically, only individuals with low WM capacity showed a reduced WM accuracy when they did not gesture. Individuals with low WM capacity who did gesture, as well as high-capacity individuals (irrespective of whether they gestured or not), did not show the effect. Our findings show that the interaction between co-speech gestures and WM is affected by an individual's WM load.

  4. Tests of General Relativity with GW150914

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.T.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, R.D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, M.J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, A.L.S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, J.G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, T.C; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Bustillo, J. Calderon; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Diaz, J. 
Casanueva; Casentini, C.; Caudill, S.; Cavaglia, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Baiardi, L. Cerboni; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Qian; Chua, S. E.; Chung, E.S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, A.C.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, A.L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Debra, D.; Debreczeni, G.; Degallaix, J.; De laurentis, M.; Deleglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.A.; Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Diaz, M. C.; Di Fiore, L.; Giovanni, M.G.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H. -B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, T. M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.M.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. 
P.; Flaminio, R.; Fletcher, M; Fournier, J. -D.; Franco, S; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.P.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; Gonzalez, Idelmis G.; Castro, J. M. Gonzalez; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.M.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Buffoni-Hall, R.; Hall, E. D.; Hammond, G.L.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, P.J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C. -J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J. -M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, D.H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jimenez-Forteza, F.; Johnson, W.; Johnson-McDaniel, N. K.; Jones, I.D.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, M. K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.H.; Kanner, J. 
B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kefelian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.E.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan., S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, Nam-Gyu; Kim, K.; Kim, Namjun; Kim, Y.M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Krolak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C.H.; Lee, K.H.; Lee, M.H.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lousto, C. O.; Lovelace, G.; Lueck, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magana-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marka, S.; Marka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R.M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. 
L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, J.C.; Moraru, D.; Gutierrez Moreno, M.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P.G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Gutierrez-Neri, M.; Neunzert, A.; Newton-Howes, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J.; Oh, S. H.; Ohme, F.; Oliver, M. B.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pan, Y.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Puerrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. 
J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosinska, D.; Rowan, S.; Ruediger, A.; Ruggi, P.; Ryan, K.A.; Sachdev, P.S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J; Schmidt, P.; Schnabel, R.B.; Schofield, R. M. S.; Schoenbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, M.S.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, António Dias da; Simakov, D.; Singer, A; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, R. J. E.; Smith, N.D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, J.R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepanczyk, M. J.; Tacca, M.D.; Talukder, D.; Tanner, D. B.; Typai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, W.R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. 
V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Toeyrae, D.; Travasso, F.; Traylor, G.; Trifiro, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; Van Bakel, N.; Van Beuzekom, Martin; van den Brand, J. F. J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasuth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P.J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.M.; Wessels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, D.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J.L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrozny, A.; Zangrando, L.; Zanolin, M.; Zendri, J. -P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; Boyle, M.; Campanelli, M.; Hemberger, D. A.; Kidder, L. E.; Ossokine, S.; Scheel, M. A.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.

    2016-01-01

    The LIGO detection of GW150914 provides an unprecedented opportunity to study the two-body motion of a compact-object binary in the large velocity, highly nonlinear regime, and to witness the final merger of the binary and the excitation of uniquely relativistic modes of the gravitational field. We

  5. Functional neuroanatomy of gesture-speech integration in children varies with individual differences in gesture processing.

    Science.gov (United States)

    Demir-Lira, Özlem Ece; Asaridou, Salomi S; Raja Beharelle, Anjali; Holt, Anna E; Goldin-Meadow, Susan; Small, Steven L

    2018-03-08

    Gesture is an integral part of children's communicative repertoire. However, little is known about the neurobiology of speech and gesture integration in the developing brain. We investigated how 8- to 10-year-old children processed gesture that was essential to understanding a set of narratives. We asked whether the functional neuroanatomy of gesture-speech integration varies as a function of (1) the content of speech, and/or (2) individual differences in how gesture is processed. When gestures provided missing information not present in the speech (i.e., disambiguating gesture; e.g., "pet" + flapping palms = bird), the presence of gesture led to increased activity in inferior frontal gyri, the right middle temporal gyrus, and the left superior temporal gyrus, compared to when gesture provided redundant information (i.e., reinforcing gesture; e.g., "bird" + flapping palms = bird). This pattern of activation was found only in children who were able to successfully integrate gesture and speech behaviorally, as indicated by their performance on post-test story comprehension questions. Children who did not glean meaning from gesture did not show differential activation across the two conditions. Our results suggest that the brain activation pattern for gesture-speech integration in children overlaps with, but is broader than, the pattern in adults performing the same task. Overall, our results provide a possible neurobiological mechanism that could underlie children's increasing ability to integrate gesture and speech over childhood, and account for individual differences in that integration. © 2018 John Wiley & Sons Ltd.

  6. Nonsymbolic Gestural Interaction for Ambient Intelligence

    DEFF Research Database (Denmark)

    Rehm, Matthias

    2010-01-01

    the addressee with subtle clues about personality or cultural background. Gestures are an extremely rich source of communication-specific and contextual information for interactions in ambient intelligence environments. This chapter reviews the semantic layers of gestural interaction, focusing on the layer...

  7. Gesture Analysis for Physics Education Researchers

    Science.gov (United States)

    Scherr, Rachel E.

    2008-01-01

    Systematic observations of student gestures can not only fill in gaps in students' verbal expressions, but can also offer valuable information about student ideas, including their source, their novelty to the speaker, and their construction in real time. This paper provides a review of the research in gesture analysis that is most relevant to…

  8. Enhancing Communication through Gesture and Naming Therapy

    Science.gov (United States)

    Caute, Anna; Pring, Tim; Cocks, Naomi; Cruice, Madeline; Best, Wendy; Marshall, Jane

    2013-01-01

    Purpose: In this study, the authors investigated whether gesture, naming, and strategic treatment improved the communication skills of 14 people with severe aphasia. Method: All participants received 15 hr of gesture and naming treatment (reported in a companion article [Marshall et al., 2012]). Half the group received a further 15 hr of strategic…

  9. Pitch Gestures in Generative Modeling of Music

    DEFF Research Database (Denmark)

    Jensen, Kristoffer

    2011-01-01

    Generative models of music are in need of performance and gesture additions, i.e. inclusions of subtle temporal and dynamic alterations, and gestures so as to render the music musical. While much of the research regarding music generation is based on music theory, the work presented here is based...

  10. Towards a Description of East African Gestures

    Science.gov (United States)

    Creider, Chet A.

    1977-01-01

    This paper describes the gestural behavior of four tribal groups, Kipsigis, Luo, Gusii, and Samburu, observed and elicited in the course of two and one-half years of field work in Western Kenya in 1970-72. The gestures are grouped into four categories: (1) initiators and finalizers of interaction; (2) imperatives; (3) responses; (4) qualifiers.…

  11. Aspects of the Multiple Musical Gestures

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2006-01-01

    is finalized instantly with one upward gesture. Several synthesis methods are presented and the control mechanisms are mapped into the multiple musical gesture interface. This enables a number of performers to interact on the same interface, either by each playing the same musical instruments simultaneously...

  12. The comprehension of gesture and speech

    NARCIS (Netherlands)

    Willems, R.M.; Özyürek, A.; Hagoort, P.

    2005-01-01

    Although generally studied in isolation, action observation and speech comprehension go hand in hand during everyday human communication. That is, people gesture while they speak. From previous research it is known that a tight link exists between spoken language and such hand gestures. This study

  13. The ontogenetic ritualization of bonobo gestures.

    Science.gov (United States)

    Halina, Marta; Rossano, Federico; Tomasello, Michael

    2013-07-01

    Great apes communicate with gestures in flexible ways. Based on several lines of evidence, Tomasello and colleagues have posited that many of these gestures are learned via ontogenetic ritualization-a process of mutual anticipation in which particular social behaviors come to function as intentional communicative signals. Recently, Byrne and colleagues have argued that all great ape gestures are basically innate. In the current study, for the first time, we attempted to observe the process of ontogenetic ritualization as it unfolds over time. We focused on one communicative function between bonobo mothers and infants: initiation of "carries" for joint travel. We observed 1,173 carries in ten mother-infant dyads. These were initiated by nine different gesture types, with mothers and infants using many different gestures in ways that reflected their different roles in the carry interaction. There was also a fair amount of variability among the different dyads, including one idiosyncratic gesture used by one infant. This gestural variation could not be attributed to sampling effects alone. These findings suggest that ontogenetic ritualization plays an important role in the origin of at least some great ape gestures.

  14. Integration of speech and gesture in aphasia.

    Science.gov (United States)

    Cocks, Naomi; Byrne, Suzanne; Pritchard, Madeleine; Morgan, Gary; Dipper, Lucy

    2018-02-07

    Information from speech and gesture is often integrated to comprehend a message. This integration process requires the appropriate allocation of cognitive resources to both the gesture and speech modalities. People with aphasia are likely to find integration of gesture and speech difficult. This is due to a reduction in cognitive resources, a difficulty with resource allocation or a combination of the two. Despite it being likely that people who have aphasia will have difficulty with integration, empirical evidence describing this difficulty is limited. Such a difficulty was found in a single case study by Cocks et al. in 2009, and is replicated here with a greater number of participants. To determine whether individuals with aphasia have difficulties understanding messages in which they have to integrate speech and gesture. Thirty-one participants with aphasia (PWA) and 30 control participants watched videos of an actor communicating a message in three different conditions: verbal only, gesture only, and verbal and gesture message combined. The message related to an action in which the name of the action (e.g., 'eat') was provided verbally and the manner of the action (e.g., hands in a position as though eating a burger) was provided gesturally. Participants then selected a picture that 'best matched' the message conveyed from a choice of four pictures which represented a gesture match only (G match), a verbal match only (V match), an integrated verbal-gesture match (Target) and an unrelated foil (UR). To determine the gain that participants obtained from integrating gesture and speech, a measure of multimodal gain (MMG) was calculated. The PWA were less able to integrate gesture and speech than the control participants and had significantly lower MMG scores. When the PWA had difficulty integrating, they more frequently selected the verbal match. The findings suggest that people with aphasia can have difficulty integrating speech and gesture in order to obtain

  15. A unique gesture of sharing

    International Nuclear Information System (INIS)

    Mustafa, T.

    1985-01-01

    The Atoms for Peace program was a unique gesture of sharing on the part of the leading industrialized nation, and has very few parallels in modern history. The author says one of the major advantages of the program for developing nations was the much needed stimulation of their indigenous science and technology efforts and the awakening of their governments to the multifaceted benefits of atomic energy. The author discusses how the program benefited Pakistan in the production of electrical energy and in the application of nuclear techniques in the fields of agriculture and medicine, which help to alleviate hunger and combat disease

  16. The gesture in Physical Culture career teaching

    Directory of Open Access Journals (Sweden)

    Alina Bestard-Revilla

    2015-04-01

    Full Text Available The research interprets the gestures of Physical Culture Career teachers with the objective of revealing the meanings that underlie the pedagogical interaction between the teacher and the students. It also undertakes the analysis and understanding of the teacher's gestures during these pedagogical interactions. The research answers the following question: How can Physical Culture university teachers take advantage of gesture for greater quality in their lessons? It looks precisely for gesture interpretation and analyzes what underlies a gesture in a teaching-learning space; it reveals the meanings contained in a glance, hand signals, body postures, approaches, and smiles, among other expressions important in the teacher's communicative situations, in correspondence with the students' gestures.

  17. Gesturing by Speakers with Aphasia: How Does It Compare?

    Science.gov (United States)

    Mol, Lisette; Krahmer, Emiel; van de Sandt-Koenderman, Mieke

    2013-01-01

    Purpose: To study the independence of gesture and verbal language production. The authors assessed whether gesture can be semantically compensatory in cases of verbal language impairment and whether speakers with aphasia and control participants use similar depiction techniques in gesture. Method: The informativeness of gesture was assessed in 3…

  18. Young Children Create Iconic Gestures to Inform Others

    Science.gov (United States)

    Behne, Tanya; Carpenter, Malinda; Tomasello, Michael

    2014-01-01

    Much is known about young children's use of deictic gestures such as pointing. Much less is known about their use of other types of communicative gestures, especially iconic or symbolic gestures. In particular, it is unknown whether children can create iconic gestures on the spot to inform others. Study 1 provided 27-month-olds with the…

  19. A search for electron antineutrinos associated with gravitational wave events GW150914 and GW151226 using KamLAND

    NARCIS (Netherlands)

    Gando, A.; Gando, Y.; Hachiya, T.; Hayashi, A.; Hayashida, S.; Ikeda, H.; Inoue, K.; Ishidoshiro, K.; Karino, Y.; Koga, M.; Matsuda, S.; Mitsui, T.; Nakamura, K.; Obara, S.; Oura, T.; Ozaki, H.; Shimizu, I.; Shirahata, Y.; Shirai, J.; Suzuki, A.; Takai, T.; Tamae, K.; Teraoka, Y.; Ueshima, K.; Watanabe, H.; Kozlov, A.; Takemoto, Y.; Yoshida, S.; Fushimi, K.; Piepke, A.; Banks, T.I.; Berger, B.E.; Fujikawa, B.K.; O'Donnell, T.; Learned, J.G.; Maricic, J.; Sakai, M.; Winslow, L.A.; Krupczak, E.; Ouellet, J.; Efremenko, Y.; Karwowski, H.J.; Markoff, D.M.; Tornow, W.; Detwiler, J.A.; Enomoto, S.; Decowski, M.P.

    2016-01-01

    We present a search, using KamLAND, a kiloton-scale anti-neutrino detector, for low-energy anti-neutrino events that were coincident with the gravitational-wave (GW) events GW150914 and GW151226, and the candidate event LVT151012. We find no inverse beta-decay neutrino events within ±500 s of either
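The coincidence criterion described here, candidate antineutrino events within ±500 s of a GW trigger, amounts to a simple timestamp filter. A minimal sketch of that selection (our own illustration with made-up event times, not the KamLAND analysis code; it assumes all times are in seconds on a common clock):

```python
def coincident_events(event_times, trigger_time, window=500.0):
    """Return event timestamps within +/- `window` seconds of a trigger time."""
    return [t for t in event_times if abs(t - trigger_time) <= window]

# Hypothetical candidate event times and a GW trigger time (illustrative only).
candidates = [100.0, 5100.0, 9700.0, 10050.0]
print(coincident_events(candidates, trigger_time=10000.0))  # → [9700.0, 10050.0]
```

A null result, as reported above, corresponds to this selection returning no events for any of the GW triggers.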

  20. Do Verbal Children with Autism Comprehend Gesture as Readily as Typically Developing Children?

    OpenAIRE

    Dimitrova, N.; Özçalışkan, Ş.; Adamson, L.B.

    2017-01-01

    Gesture comprehension remains understudied, particularly in children with autism spectrum disorder (ASD) who have difficulties in gesture production. Using a novel gesture comprehension task, Study 1 examined how 2- to 4-year-old typically-developing (TD) children comprehend types of gestures and gesture-speech combinations, and showed better comprehension of deictic gestures and reinforcing gesture-speech combinations than iconic/conventional gestures and supplementary gesture-speech combina...

  1. Comprehensibility and neural substrate of communicative gestures in severe aphasia.

    Science.gov (United States)

    Hogrefe, Katharina; Ziegler, Wolfram; Weidinger, Nicole; Goldenberg, Georg

    2017-08-01

    Communicative gestures can compensate incomprehensibility of oral speech in severe aphasia, but the brain damage that causes aphasia may also have an impact on the production of gestures. We compared the comprehensibility of gestural communication of persons with severe aphasia and non-aphasic persons and used voxel based lesion symptom mapping (VLSM) to determine lesion sites that are responsible for poor gestural expression in aphasia. On group level, persons with aphasia conveyed more information via gestures than controls indicating a compensatory use of gestures in persons with severe aphasia. However, individual analysis showed a broad range of gestural comprehensibility. VLSM suggested that poor gestural expression was associated with lesions in anterior temporal and inferior frontal regions. We hypothesize that likely functional correlates of these localizations are selection of and flexible changes between communication channels as well as between different types of gestures and between features of actions and objects that are expressed by gestures. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Introduction: Towards an Ethics of Gesture

    Directory of Open Access Journals (Sweden)

    Lucia Ruprecht

    2017-06-01

    Full Text Available The introduction to this special section of Performance Philosophy takes Giorgio Agamben’s remarks about the mediality and potentiality of gesture as a starting point to rethink gesture’s nexus with ethics. Shifting the emphasis from philosophical reflection to corporeal practice, it defines gestural ethics as an acting-otherwise which comes into being in the particularities of singular gestural practice, its forms, kinetic qualities, temporal displacements and calls for response. Gestural acting-otherwise is illustrated in a number of ways: We might talk of a gestural ethics when gesturality becomes an object for dedicated analytical exploration and reflection on sites where it is not taken for granted, but exhibited, on stage or on screen, in its mediality, in the ways it quotes, signifies and departs from signification, but also in the ways in which it follows a forward-looking agenda driven by adaptability and inventiveness. It interrupts or modifies operative continua that might be geared towards violence; it appears in situations that are suspended between the possibility of malfunction and the potential of room for play; and it emerges in the ways in which gestures act on their own implication in the signifying structures of gender, sexuality, race, and class, on how these structures play out relationally across time and space, and between historically and locally situated human beings.

  3. Hand gesture recognition by analysis of codons

    Science.gov (United States)

    Ramachandra, Poornima; Shrikhande, Neelima

    2007-09-01

    The problem of recognizing gestures from images using computers can be approached by closely understanding how the human brain tackles it. A full-fledged gesture recognition system could substitute for the mouse and keyboard completely. Humans can recognize most gestures by looking at the characteristic external shape or silhouette of the fingers. Many previous techniques for recognizing gestures dealt with motion and geometric features of hands. In this thesis, gestures are recognized by the Codon-list pattern extracted from the object contour. All edges of an image are described in terms of sequences of Codons. The Codons are defined in terms of the relationship between the maxima, minima, and zeros of curvature encountered as one traverses the boundary of the object. We have concentrated on a catalog of 24 gesture images from the American Sign Language alphabet (letters J and Z are ignored, as they are represented using motion) [2]. The query image given as input to the system is analyzed and tested against the Codon-lists, which are shape descriptors for the external parts of a hand gesture. We have used the Weighted Frequency Indexing Transform (WFIT) approach, which is used in DNA sequence matching, to match the Codon-lists. The matching algorithm consists of two steps: 1) the query sequences are converted to short sequences and assigned weights, and 2) all the sequences of query gestures are pruned into match and mismatch subsequences by the frequency indexing tree based on the weights of the subsequences. The Codon sequences with the most weight are used to determine the most precise match. Once a match is found, the identified gesture and corresponding interpretation are shown as output.
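The core codon idea — describing a closed contour by the sequence of curvature changes met while traversing its boundary — can be illustrated with a toy sketch. The following Python is our own simplification, not the thesis implementation: it reduces codons to three symbols (convex, concave, straight) from discrete turn directions, and stands in longest-common-subsequence similarity for the WFIT weighting step. All function and catalog names are hypothetical.

```python
def curvature_signs(contour):
    """Sign of the discrete turn (cross product of adjacent edges) at each
    vertex of a closed polygonal contour, traversed counterclockwise."""
    n = len(contour)
    signs = []
    for i in range(n):
        x0, y0 = contour[i - 1]
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        signs.append(0 if cross == 0 else (1 if cross > 0 else -1))
    return signs

def codon_string(contour):
    """Collapse runs of equal curvature sign into a codon-like symbol string:
    '+' convex arc, '-' concave arc, '0' straight run."""
    symbols = {1: '+', -1: '-', 0: '0'}
    out = []
    for s in curvature_signs(contour):
        sym = symbols[s]
        if not out or out[-1] != sym:
            out.append(sym)
    return ''.join(out)

def best_match(query, catalog):
    """Pick the catalog gesture whose codon string is most similar to the
    query (similarity = longest common subsequence length)."""
    def lcs(a, b):
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, ca in enumerate(a):
            for j, cb in enumerate(b):
                dp[i + 1][j + 1] = dp[i][j] + 1 if ca == cb else max(dp[i][j + 1], dp[i + 1][j])
        return dp[-1][-1]
    return max(catalog, key=lambda name: lcs(query, catalog[name]))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]            # all convex turns
notched = [(0, 0), (2, 0), (2, 2), (1, 1), (0, 2)]   # one concave vertex
print(codon_string(square), codon_string(notched))    # → + +-+
```

In the full system each codon additionally carries position and weight information; the collapse-and-compare structure above is the part that survives simplification.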

  4. Device Control Using Gestures Sensed from EMG

    Science.gov (United States)

    Wheeler, Kevin R.

    2003-01-01

    In this paper we present neuro-electric interfaces for virtual device control. The examples presented rely upon sampling electromyogram (EMG) data from a participant's forearm. This data is then fed into pattern recognition software that has been trained to distinguish gestures from a given gesture set. The pattern recognition software consists of hidden Markov models, which are used to recognize the gestures as they are being performed in real time. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard.
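The recognition scheme — one HMM per gesture, with an incoming observation sequence assigned to the model that scores it highest — can be sketched with a toy discrete HMM classifier. This is a minimal illustration with hand-set parameters and invented gesture names; the paper's models are trained on real EMG feature streams.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the forward algorithm with per-step scaling for stability."""
    alpha = start * emit[:, obs[0]]
    total = alpha.sum()
    loglik = np.log(total)
    alpha = alpha / total
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        total = alpha.sum()
        loglik += np.log(total)
        alpha = alpha / total
    return loglik

def classify(obs, models):
    """Assign the sequence to the gesture whose HMM gives it the highest likelihood."""
    return max(models, key=lambda g: forward_log_likelihood(obs, *models[g]))

# Two hypothetical 2-state gesture models over a binary symbol alphabet:
# (initial distribution, transition matrix, emission matrix).
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
models = {
    "joystick-left":  (np.array([0.5, 0.5]), trans, np.array([[0.9, 0.1], [0.7, 0.3]])),
    "joystick-right": (np.array([0.5, 0.5]), trans, np.array([[0.1, 0.9], [0.3, 0.7]])),
}
print(classify([0, 0, 0, 0, 1], models))  # → joystick-left
```

In practice the observation symbols would come from vector-quantized EMG features, and the model parameters from Baum-Welch training on labeled gesture recordings.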

  5. Tests of General Relativity with GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. 
C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Johnson-McDaniel, N. K.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, M. K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. 
B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lousto, C. O.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. 
A.; Merilh, E.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pan, Y.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. 
S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. 
I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J. L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; Boyle, M.; Campanelli, M.; Hemberger, D. A.; Kidder, L. E.; Ossokine, S.; Scheel, M. A.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.; LIGO Scientific; Virgo Collaborations

    2016-06-01

    The LIGO detection of GW150914 provides an unprecedented opportunity to study the two-body motion of a compact-object binary in the large-velocity, highly nonlinear regime, and to witness the final merger of the binary and the excitation of uniquely relativistic modes of the gravitational field. We carry out several investigations to determine whether GW150914 is consistent with a binary black-hole merger in general relativity. We find that the final remnant's mass and spin, as determined from the low-frequency (inspiral) and high-frequency (postinspiral) phases of the signal, are mutually consistent with the binary black-hole solution in general relativity. Furthermore, the data following the peak of GW150914 are consistent with the least-damped quasinormal mode inferred from the mass and spin of the remnant black hole. By using waveform models that allow for parametrized general-relativity violations during the inspiral and merger phases, we perform quantitative tests on the gravitational-wave phase in the dynamical regime and we determine the first empirical bounds on several high-order post-Newtonian coefficients. We constrain the graviton Compton wavelength, assuming that gravitons are dispersed in vacuum in the same way as particles with mass, obtaining a 90%-confidence lower bound of 10^13 km. In conclusion, within our statistical uncertainties, we find no evidence for violations of general relativity in the genuinely strong-field regime of gravity.
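
    The quoted Compton-wavelength lower bound translates directly into an upper bound on the graviton mass via lambda_g = h / (m_g c). A quick illustrative check in Python (a back-of-the-envelope sketch, not part of the paper's analysis):

```python
# Graviton Compton wavelength: lambda_g = h / (m_g * c), so the
# 90%-confidence lower bound lambda_g >= 10^13 km gives an upper
# bound on the graviton rest energy m_g * c^2 = h * c / lambda_g.
HC_EV_M = 1.23984198e-6      # h*c in eV*m (CODATA)
LAMBDA_G_M = 1e13 * 1e3      # 10^13 km expressed in meters

m_g_ev = HC_EV_M / LAMBDA_G_M
print(f"m_g <= {m_g_ev:.1e} eV/c^2")  # -> m_g <= 1.2e-22 eV/c^2
```

    This reproduces the m_g <= 1.2e-22 eV/c^2 bound reported by the LIGO and Virgo collaborations for GW150914.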

  6. Tests of General Relativity with GW150914.

    Science.gov (United States)

    Abbott, B P; Abbott, R; Abbott, T D; Abernathy, M R; Acernese, F; Ackley, K; Adams, C; Adams, T; Addesso, P; Adhikari, R X; Adya, V B; Affeldt, C; Agathos, M; Agatsuma, K; Aggarwal, N; Aguiar, O D; Aiello, L; Ain, A; Ajith, P; Allen, B; Allocca, A; Altin, P A; Anderson, S B; Anderson, W G; Arai, K; Araya, M C; Arceneaux, C C; Areeda, J S; Arnaud, N; Arun, K G; Ascenzi, S; Ashton, G; Ast, M; Aston, S M; Astone, P; Aufmuth, P; Aulbert, C; Babak, S; Bacon, P; Bader, M K M; Baker, P T; Baldaccini, F; Ballardin, G; Ballmer, S W; Barayoga, J C; Barclay, S E; Barish, B C; Barker, D; Barone, F; Barr, B; Barsotti, L; Barsuglia, M; Barta, D; Bartlett, J; Bartos, I; Bassiri, R; Basti, A; Batch, J C; Baune, C; Bavigadda, V; Bazzan, M; Behnke, B; Bejger, M; Bell, A S; Bell, C J; Berger, B K; Bergman, J; Bergmann, G; Berry, C P L; Bersanetti, D; Bertolini, A; Betzwieser, J; Bhagwat, S; Bhandare, R; Bilenko, I A; Billingsley, G; Birch, J; Birney, R; Birnholtz, O; Biscans, S; Bisht, A; Bitossi, M; Biwer, C; Bizouard, M A; Blackburn, J K; Blair, C D; Blair, D G; Blair, R M; Bloemen, S; Bock, O; Bodiya, T P; Boer, M; Bogaert, G; Bogan, C; Bohe, A; Bojtos, P; Bond, C; Bondu, F; Bonnand, R; Boom, B A; Bork, R; Boschi, V; Bose, S; Bouffanais, Y; Bozzi, A; Bradaschia, C; Brady, P R; Braginsky, V B; Branchesi, M; Brau, J E; Briant, T; Brillet, A; Brinkmann, M; Brisson, V; Brockill, P; Brooks, A F; Brown, D A; Brown, D D; Brown, N M; Buchanan, C C; Buikema, A; Bulik, T; Bulten, H J; Buonanno, A; Buskulic, D; Buy, C; Byer, R L; Cadonati, L; Cagnoli, G; Cahillane, C; Calderón Bustillo, J; Callister, T; Calloni, E; Camp, J B; Cannon, K C; Cao, J; Capano, C D; Capocasa, E; Carbognani, F; Caride, S; Casanueva Diaz, J; Casentini, C; Caudill, S; Cavaglià, M; Cavalier, F; Cavalieri, R; Cella, G; Cepeda, C B; Cerboni Baiardi, L; Cerretani, G; Cesarini, E; Chakraborty, R; Chalermsongsak, T; Chamberlin, S J; Chan, M; Chao, S; Charlton, P; Chassande-Mottin, E; Chen, H Y; Chen, Y; Cheng, C; 
Chincarini, A; Chiummo, A; Cho, H S; Cho, M; Chow, J H; Christensen, N; Chu, Q; Chua, S; Chung, S; Ciani, G; Clara, F; Clark, J A; Cleva, F; Coccia, E; Cohadon, P-F; Colla, A; Collette, C G; Cominsky, L; Constancio, M; Conte, A; Conti, L; Cook, D; Corbitt, T R; Cornish, N; Corsi, A; Cortese, S; Costa, C A; Coughlin, M W; Coughlin, S B; Coulon, J-P; Countryman, S T; Couvares, P; Cowan, E E; Coward, D M; Cowart, M J; Coyne, D C; Coyne, R; Craig, K; Creighton, J D E; Cripe, J; Crowder, S G; Cumming, A; Cunningham, L; Cuoco, E; Dal Canton, T; Danilishin, S L; D'Antonio, S; Danzmann, K; Darman, N S; Dattilo, V; Dave, I; Daveloza, H P; Davier, M; Davies, G S; Daw, E J; Day, R; DeBra, D; Debreczeni, G; Degallaix, J; De Laurentis, M; Deléglise, S; Del Pozzo, W; Denker, T; Dent, T; Dereli, H; Dergachev, V; De Rosa, R; DeRosa, R T; DeSalvo, R; Dhurandhar, S; Díaz, M C; Di Fiore, L; Di Giovanni, M; Di Lieto, A; Di Pace, S; Di Palma, I; Di Virgilio, A; Dojcinoski, G; Dolique, V; Donovan, F; Dooley, K L; Doravari, S; Douglas, R; Downes, T P; Drago, M; Drever, R W P; Driggers, J C; Du, Z; Ducrot, M; Dwyer, S E; Edo, T B; Edwards, M C; Effler, A; Eggenstein, H-B; Ehrens, P; Eichholz, J; Eikenberry, S S; Engels, W; Essick, R C; Etzel, T; Evans, M; Evans, T M; Everett, R; Factourovich, M; Fafone, V; Fair, H; Fairhurst, S; Fan, X; Fang, Q; Farinon, S; Farr, B; Farr, W M; Favata, M; Fays, M; Fehrmann, H; Fejer, M M; Ferrante, I; Ferreira, E C; Ferrini, F; Fidecaro, F; Fiori, I; Fiorucci, D; Fisher, R P; Flaminio, R; Fletcher, M; Fournier, J-D; Franco, S; Frasca, S; Frasconi, F; Frei, Z; Freise, A; Frey, R; Frey, V; Fricke, T T; Fritschel, P; Frolov, V V; Fulda, P; Fyffe, M; Gabbard, H A G; Gair, J R; Gammaitoni, L; Gaonkar, S G; Garufi, F; Gatto, A; Gaur, G; Gehrels, N; Gemme, G; Gendre, B; Genin, E; Gennai, A; George, J; Gergely, L; Germain, V; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S; Giaime, J A; Giardina, K D; Giazotto, A; Gill, K; Glaefke, A; Goetz, E; Goetz, R; Gondan, L; 
González, G; Gonzalez Castro, J M; Gopakumar, A; Gordon, N A; Gorodetsky, M L; Gossan, S E; Gosselin, M; Gouaty, R; Graef, C; Graff, P B; Granata, M; Grant, A; Gras, S; Gray, C; Greco, G; Green, A C; Groot, P; Grote, H; Grunewald, S; Guidi, G M; Guo, X; Gupta, A; Gupta, M K; Gushwa, K E; Gustafson, E K; Gustafson, R; Hacker, J J; Hall, B R; Hall, E D; Hammond, G; Haney, M; Hanke, M M; Hanks, J; Hanna, C; Hannam, M D; Hanson, J; Hardwick, T; Harms, J; Harry, G M; Harry, I W; Hart, M J; Hartman, M T; Haster, C-J; Haughian, K; Healy, J; Heidmann, A; Heintze, M C; Heitmann, H; Hello, P; Hemming, G; Hendry, M; Heng, I S; Hennig, J; Heptonstall, A W; Heurs, M; Hild, S; Hoak, D; Hodge, K A; Hofman, D; Hollitt, S E; Holt, K; Holz, D E; Hopkins, P; Hosken, D J; Hough, J; Houston, E A; Howell, E J; Hu, Y M; Huang, S; Huerta, E A; Huet, D; Hughey, B; Husa, S; Huttner, S H; Huynh-Dinh, T; Idrisy, A; Indik, N; Ingram, D R; Inta, R; Isa, H N; Isac, J-M; Isi, M; Islas, G; Isogai, T; Iyer, B R; Izumi, K; Jacqmin, T; Jang, H; Jani, K; Jaranowski, P; Jawahar, S; Jiménez-Forteza, F; Johnson, W W; Johnson-McDaniel, N K; Jones, D I; Jones, R; Jonker, R J G; Ju, L; Haris, M K; Kalaghatgi, C V; Kalogera, V; Kandhasamy, S; Kang, G; Kanner, J B; Karki, S; Kasprzack, M; Katsavounidis, E; Katzman, W; Kaufer, S; Kaur, T; Kawabe, K; Kawazoe, F; Kéfélian, F; Kehl, M S; Keitel, D; Kelley, D B; Kells, W; Kennedy, R; Key, J S; Khalaidovski, A; Khalili, F Y; Khan, I; Khan, S; Khan, Z; Khazanov, E A; Kijbunchoo, N; Kim, C; Kim, J; Kim, K; Kim, Nam-Gyu; Kim, Namjun; Kim, Y-M; King, E J; King, P J; Kinzel, D L; Kissel, J S; Kleybolte, L; Klimenko, S; Koehlenbeck, S M; Kokeyama, K; Koley, S; Kondrashov, V; Kontos, A; Korobko, M; Korth, W Z; Kowalska, I; Kozak, D B; Kringel, V; Krishnan, B; Królak, A; Krueger, C; Kuehn, G; Kumar, P; Kuo, L; Kutynia, A; Lackey, B D; Landry, M; Lange, J; Lantz, B; Lasky, P D; Lazzarini, A; Lazzaro, C; Leaci, P; Leavey, S; Lebigot, E O; Lee, C H; Lee, H K; Lee, H M; Lee, 
K; Lenon, A; Leonardi, M; Leong, J R; Leroy, N; Letendre, N; Levin, Y; Levine, B M; Li, T G F; Libson, A; Littenberg, T B; Lockerbie, N A; Logue, J; Lombardi, A L; London, L T; Lord, J E; Lorenzini, M; Loriette, V; Lormand, M; Losurdo, G; Lough, J D; Lousto, C O; Lovelace, G; Lück, H; Lundgren, A P; Luo, J; Lynch, R; Ma, Y; MacDonald, T; Machenschalk, B; MacInnis, M; Macleod, D M; Magaña-Sandoval, F; Magee, R M; Mageswaran, M; Majorana, E; Maksimovic, I; Malvezzi, V; Man, N; Mandel, I; Mandic, V; Mangano, V; Mansell, G L; Manske, M; Mantovani, M; Marchesoni, F; Marion, F; Márka, S; Márka, Z; Markosyan, A S; Maros, E; Martelli, F; Martellini, L; Martin, I W; Martin, R M; Martynov, D V; Marx, J N; Mason, K; Masserot, A; Massinger, T J; Masso-Reid, M; Matichard, F; Matone, L; Mavalvala, N; Mazumder, N; Mazzolo, G; McCarthy, R; McClelland, D E; McCormick, S; McGuire, S C; McIntyre, G; McIver, J; McManus, D J; McWilliams, S T; Meacher, D; Meadors, G D; Meidam, J; Melatos, A; Mendell, G; Mendoza-Gandara, D; Mercer, R A; Merilh, E; Merzougui, M; Meshkov, S; Messenger, C; Messick, C; Meyers, P M; Mezzani, F; Miao, H; Michel, C; Middleton, H; Mikhailov, E E; Milano, L; Miller, J; Millhouse, M; Minenkov, Y; Ming, J; Mirshekari, S; Mishra, C; Mitra, S; Mitrofanov, V P; Mitselmakher, G; Mittleman, R; Moggi, A; Mohan, M; Mohapatra, S R P; Montani, M; Moore, B C; Moore, C J; Moraru, D; Moreno, G; Morriss, S R; Mossavi, K; Mours, B; Mow-Lowry, C M; Mueller, C L; Mueller, G; Muir, A W; Mukherjee, Arunava; Mukherjee, D; Mukherjee, S; Mukund, N; Mullavey, A; Munch, J; Murphy, D J; Murray, P G; Mytidis, A; Nardecchia, I; Naticchioni, L; Nayak, R K; Necula, V; Nedkova, K; Nelemans, G; Neri, M; Neunzert, A; Newton, G; Nguyen, T T; Nielsen, A B; Nissanke, S; Nitz, A; Nocera, F; Nolting, D; Normandin, M E; Nuttall, L K; Oberling, J; Ochsner, E; O'Dell, J; Oelker, E; Ogin, G H; Oh, J J; Oh, S H; Ohme, F; Oliver, M; Oppermann, P; Oram, Richard J; O'Reilly, B; O'Shaughnessy, R; Ottaway, D 
J; Ottens, R S; Overmier, H; Owen, B J; Pai, A; Pai, S A; Palamos, J R; Palashov, O; Palomba, C; Pal-Singh, A; Pan, H; Pan, Y; Pankow, C; Pannarale, F; Pant, B C; Paoletti, F; Paoli, A; Papa, M A; Paris, H R; Parker, W; Pascucci, D; Pasqualetti, A; Passaquieti, R; Passuello, D; Patricelli, B; Patrick, Z; Pearlstone, B L; Pedraza, M; Pedurand, R; Pekowsky, L; Pele, A; Penn, S; Perreca, A; Pfeiffer, H P; Phelps, M; Piccinni, O; Pichot, M; Piergiovanni, F; Pierro, V; Pillant, G; Pinard, L; Pinto, I M; Pitkin, M; Poggiani, R; Popolizio, P; Post, A; Powell, J; Prasad, J; Predoi, V; Premachandra, S S; Prestegard, T; Price, L R; Prijatelj, M; Principe, M; Privitera, S; Prix, R; Prodi, G A; Prokhorov, L; Puncken, O; Punturo, M; Puppo, P; Pürrer, M; Qi, H; Qin, J; Quetschke, V; Quintero, E A; Quitzow-James, R; Raab, F J; Rabeling, D S; Radkins, H; Raffai, P; Raja, S; Rakhmanov, M; Rapagnani, P; Raymond, V; Razzano, M; Re, V; Read, J; Reed, C M; Regimbau, T; Rei, L; Reid, S; Reitze, D H; Rew, H; Reyes, S D; Ricci, F; Riles, K; Robertson, N A; Robie, R; Robinet, F; Rocchi, A; Rolland, L; Rollins, J G; Roma, V J; Romano, R; Romanov, G; Romie, J H; Rosińska, D; Rowan, S; Rüdiger, A; Ruggi, P; Ryan, K; Sachdev, S; Sadecki, T; Sadeghian, L; Salconi, L; Saleem, M; Salemi, F; Samajdar, A; Sammut, L; Sanchez, E J; Sandberg, V; Sandeen, B; Sanders, J R; Sassolas, B; Sathyaprakash, B S; Saulson, P R; Sauter, O; Savage, R L; Sawadsky, A; Schale, P; Schilling, R; Schmidt, J; Schmidt, P; Schnabel, R; Schofield, R M S; Schönbeck, A; Schreiber, E; Schuette, D; Schutz, B F; Scott, J; Scott, S M; Sellers, D; Sengupta, A S; Sentenac, D; Sequino, V; Sergeev, A; Serna, G; Setyawati, Y; Sevigny, A; Shaddock, D A; Shah, S; Shahriar, M S; Shaltev, M; Shao, Z; Shapiro, B; Shawhan, P; Sheperd, A; Shoemaker, D H; Shoemaker, D M; Siellez, K; Siemens, X; Sigg, D; Silva, A D; Simakov, D; Singer, A; Singer, L P; Singh, A; Singh, R; Singhal, A; Sintes, A M; Slagmolen, B J J; Smith, J R; Smith, N D; Smith, 
R J E; Son, E J; Sorazu, B; Sorrentino, F; Souradeep, T; Srivastava, A K; Staley, A; Steinke, M; Steinlechner, J; Steinlechner, S; Steinmeyer, D; Stephens, B C; Stone, R; Strain, K A; Straniero, N; Stratta, G; Strauss, N A; Strigin, S; Sturani, R; Stuver, A L; Summerscales, T Z; Sun, L; Sutton, P J; Swinkels, B L; Szczepańczyk, M J; Tacca, M; Talukder, D; Tanner, D B; Tápai, M; Tarabrin, S P; Taracchini, A; Taylor, R; Theeg, T; Thirugnanasambandam, M P; Thomas, E G; Thomas, M; Thomas, P; Thorne, K A; Thorne, K S; Thrane, E; Tiwari, S; Tiwari, V; Tokmakov, K V; Tomlinson, C; Tonelli, M; Torres, C V; Torrie, C I; Töyrä, D; Travasso, F; Traylor, G; Trifirò, D; Tringali, M C; Trozzo, L; Tse, M; Turconi, M; Tuyenbayev, D; Ugolini, D; Unnikrishnan, C S; Urban, A L; Usman, S A; Vahlbruch, H; Vajente, G; Valdes, G; Vallisneri, M; van Bakel, N; van Beuzekom, M; van den Brand, J F J; Van Den Broeck, C; Vander-Hyde, D C; van der Schaaf, L; van Heijningen, J V; van Veggel, A A; Vardaro, M; Vass, S; Vasúth, M; Vaulin, R; Vecchio, A; Vedovato, G; Veitch, J; Veitch, P J; Venkateswara, K; Verkindt, D; Vetrano, F; Viceré, A; Vinciguerra, S; Vine, D J; Vinet, J-Y; Vitale, S; Vo, T; Vocca, H; Vorvick, C; Voss, D; Vousden, W D; Vyatchanin, S P; Wade, A R; Wade, L E; Wade, M; Walker, M; Wallace, L; Walsh, S; Wang, G; Wang, H; Wang, M; Wang, X; Wang, Y; Ward, R L; Warner, J; Was, M; Weaver, B; Wei, L-W; Weinert, M; Weinstein, A J; Weiss, R; Welborn, T; Wen, L; Weßels, P; Westphal, T; Wette, K; Whelan, J T; White, D J; Whiting, B F; Williams, D; Williams, R D; Williamson, A R; Willis, J L; Willke, B; Wimmer, M H; Winkler, W; Wipf, C C; Wittel, H; Woan, G; Worden, J; Wright, J L; Wu, G; Yablon, J; Yam, W; Yamamoto, H; Yancey, C C; Yap, M J; Yu, H; Yvert, M; Zadrożny, A; Zangrando, L; Zanolin, M; Zendri, J-P; Zevin, M; Zhang, F; Zhang, L; Zhang, M; Zhang, Y; Zhao, C; Zhou, M; Zhou, Z; Zhu, X J; Zucker, M E; Zuraw, S E; Zweizig, J; Boyle, M; Campanelli, M; Hemberger, D A; Kidder, L E; 
Ossokine, S; Scheel, M A; Szilagyi, B; Teukolsky, S; Zlochower, Y

    2016-06-03

    The LIGO detection of GW150914 provides an unprecedented opportunity to study the two-body motion of a compact-object binary in the large-velocity, highly nonlinear regime, and to witness the final merger of the binary and the excitation of uniquely relativistic modes of the gravitational field. We carry out several investigations to determine whether GW150914 is consistent with a binary black-hole merger in general relativity. We find that the final remnant's mass and spin, as determined from the low-frequency (inspiral) and high-frequency (postinspiral) phases of the signal, are mutually consistent with the binary black-hole solution in general relativity. Furthermore, the data following the peak of GW150914 are consistent with the least-damped quasinormal mode inferred from the mass and spin of the remnant black hole. By using waveform models that allow for parametrized general-relativity violations during the inspiral and merger phases, we perform quantitative tests on the gravitational-wave phase in the dynamical regime and we determine the first empirical bounds on several high-order post-Newtonian coefficients. We constrain the graviton Compton wavelength, assuming that gravitons are dispersed in vacuum in the same way as particles with mass, obtaining a 90%-confidence lower bound of 10^{13}  km. In conclusion, within our statistical uncertainties, we find no evidence for violations of general relativity in the genuinely strong-field regime of gravity.

  7. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension

    OpenAIRE

    Klooster, Nathaniel B.; Cook, Susan W.; Uc, Ergun Y.; Duff, Melissa C.

    2015-01-01

    Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture's ability to drive new learning is supported by procedural memory and that proc...

  8. Gestural Control Of Wavefield synthesis

    DEFF Research Database (Denmark)

    Grani, Francesco; Di Carlo, Diego; Portillo, Jorge Madrid

    2016-01-01

    We present a report covering our preliminary research on the control of spatial sound sources in wavefield synthesis through gesture-based interfaces. After a short general introduction on spatial sound and a few basic concepts of wavefield synthesis, we present a graphical application called spAAce, which lets users control real-time movements of sound sources by drawing trajectories on a screen. The first prototype of this application has been developed bound to WFSCollider, an open-source software based on SuperCollider that lets users control wavefield synthesis. The spAAce application has been implemented using Processing, a programming language for sketches and prototypes within the context of visual arts, and communicates with WFSCollider through the Open Sound Control protocol. This application aims to create a new way of interaction for live performance of spatial composition.
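
    The abstract does not spell out the Open Sound Control traffic between spAAce and WFSCollider. As a sketch of the transport layer only, an OSC 1.0 message carrying a 2-D source position can be packed with Python's standard library; the `/source/1/xy` address and the float argument layout are illustrative assumptions, not the actual spAAce protocol:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per OSC 1.0."""
    b += b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Pack an OSC message whose arguments are all 32-bit floats."""
    type_tags = "," + "f" * len(floats)
    msg = osc_pad(address.encode()) + osc_pad(type_tags.encode())
    for value in floats:
        msg += struct.pack(">f", value)  # big-endian float32
    return msg

# Hypothetical per-source position update (address is an assumption):
packet = osc_message("/source/1/xy", 0.5, -0.25)
```

    The resulting packet could then be sent over UDP (for example with `socket.sendto`) to the port on which WFSCollider listens.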

  9. Gesture recognition for an exergame prototype

    NARCIS (Netherlands)

    Gacem, Brahim; Vergouw, Robert; Verbiest, Harm; Cicek, Emrullah; Kröse, Ben; van Oosterhout, Tim; Bakkes, S.C.J.

    2011-01-01

    We will demonstrate a prototype exergame aimed at the serious domain of elderly fitness. The exergame incorporates a straightforward means of gesture recognition, and utilises a Kinect camera to obtain 2.5D sensory data of the human user.

  10. Hand Gesture Recognition Using Ultrasonic Waves

    KAUST Repository

    AlSharif, Mohammed Hussain

    2016-01-01

    estimation of the moving hand and received signal strength (RSS). These two factors are estimated using two simple methods; channel impulse response (CIR) and cross correlation (CC) of the reflected ultrasonic signal from the gesturing hand. A customized
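
    Of the two estimation methods named here, the cross correlation (CC) step can be illustrated with a toy delay estimator in pure Python: slide the transmitted pulse over the received signal and take the best-matching lag as the echo arrival time. The pulse shape, attenuation, and delay below are invented for illustration and are not taken from the thesis:

```python
# Estimate the arrival sample of a reflected pulse by maximizing the
# cross correlation between the received signal and the transmitted
# template.
def cross_correlate_lag(received, template):
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(received) - len(template) + 1):
        window = received[lag:lag + len(template)]
        score = sum(r * t for r, t in zip(window, template))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

template = [0.0, 1.0, -1.0, 0.5]      # transmitted ultrasonic pulse
received = [0.0] * 30                 # quiet channel
for i, v in enumerate(template):
    received[17 + i] += 0.6 * v       # attenuated echo at sample 17

print(cross_correlate_lag(received, template))  # -> 17
```

    Dividing the recovered lag by the sampling rate, then by two (round trip), and multiplying by the speed of sound would convert it into a hand distance.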

  11. Evolutionary Sound Synthesis Controlled by Gestural Data

    Directory of Open Access Journals (Sweden)

    Jose Fornari

    2011-05-01

    This article focuses on interdisciplinary research involving Computer Music and Generative Visual Art. We describe the implementation of two interactive artistic systems based on principles of Gestural Data retrieval (WILSON, 2002) and self-organization (MORONI, 2003) to control an Evolutionary Sound Synthesis method (ESSynth). The first implementation uses, as gestural data, image mapping of handmade drawings. The second one uses gestural data from dynamic body movements of dance. The resulting computer output is generated by an interactive system implemented in Pure Data (PD). This system uses principles of Evolutionary Computation (EC), which yields the generation of a synthetic adaptive population of sound objects. Considering that music can be seen as "organized sound", the contribution of our study is to develop a system that aims to generate "self-organized sound": a method that uses evolutionary computation to bridge gesture, sound, and music.
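
    The evolutionary step can be sketched as a small genetic loop over parameter vectors standing in for sound objects, with fitness defined as proximity to a target gestural vector. The representation, operators, and target below are illustrative assumptions, not the actual ESSynth implementation:

```python
import random

random.seed(1)

TARGET = [0.2, 0.8, 0.5]  # e.g. normalized pitch, amplitude, duration

def fitness(ind):
    # Higher is better: negative squared distance to the gestural target.
    return -sum((a - b) ** 2 for a, b in zip(ind, TARGET))

def evolve(pop, generations=50, mutation=0.05):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]            # truncation selection
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 + random.gauss(0, mutation)
                     for x, y in zip(a, b)]       # crossover + mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

pop = [[random.random() for _ in range(3)] for _ in range(20)]
best = evolve(pop)
```

    In an interactive setting, the target vector would be refreshed continuously from the incoming gestural data, so the sound population keeps adapting rather than converging once.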

  12. Gestural interaction in a virtual environment

    Science.gov (United States)

    Jacoby, Richard H.; Ferneau, Mark; Humphries, Jim

    1994-04-01

    This paper discusses the use of hand gestures (i.e., changing finger flexion) within a virtual environment (VE). Many systems now employ static hand postures (i.e., static finger flexion), often coupled with hand translations and rotations, as a method of interacting with a VE. However, few systems are currently using dynamically changing finger flexion for interacting with VEs. In our system, the user wears an electronically instrumented glove. We have developed a simple algorithm for recognizing gestures for use in two applications: automotive design and visualization of atmospheric data. In addition to recognizing the gestures, we also calculate the rate at which the gestures are made and the rate and direction of hand movement while making the gestures. We report on our experiences with the algorithm design and implementation, and the use of the gestures in our applications. We also talk about our background work in user calibration of the glove, as well as learned and innate posture recognition (postures recognized with and without training, respectively).
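
    A minimal version of the recognition described above can be sketched as nearest-template matching over per-finger flexion frames, reporting a gesture as a posture transition together with its duration. The templates, frame rate, and data are invented for illustration; the paper's actual algorithm is not detailed in this abstract:

```python
# Classify each frame of glove finger-flexion readings against
# calibrated posture templates, then report a gesture as a posture
# transition plus how long it took.
POSTURES = {
    "open": [0.0] * 5,   # per-finger flexion, 0 = open, 1 = fully bent
    "fist": [1.0] * 5,
}

def classify(frame):
    def dist(template):
        return sum((f, v) for f, v in []) if False else \
               sum((f - v) ** 2 for f, v in zip(frame, template)) ** 0.5
    return min(POSTURES, key=lambda name: dist(POSTURES[name]))

def detect_gesture(frames, fps=60):
    """Return (start_posture, end_posture, duration_s) or None."""
    start, end = classify(frames[0]), classify(frames[-1])
    if start == end:
        return None
    return start, end, (len(frames) - 1) / fps

frames = [[i / 9.0] * 5 for i in range(10)]  # hand closing over 10 frames
print(detect_gesture(frames))  # -> ('open', 'fist', 0.15)
```

    Per-frame differences in mean flexion would similarly give the rate at which the gesture is made, mirroring the rate measurements mentioned above.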

  13. A Pulsation Mechanism for GW Virginis Variables

    Science.gov (United States)

    Cox, Arthur N.

    2003-03-01

    The mechanism that produces pulsations in the hottest pre-white dwarfs has been uncertain since the early work indicated that helium is a poison that smooths opacity bumps in the opacity-temperature plane caused by the ionizations of the large observed amounts of carbon and oxygen. Very little helium seemed to be needed to prevent the kappa effect pulsation driving, but helium amounts of almost half of the mass in the surface composition are observed in the pulsating PG 1159-035 stars called the GW Virginis variables. Rather little change in the C and O surface abundances is observed from the hottest (RX J2117.1+3412 at 170,000 K) to the coolest (PG 0122+200 at 80,000 K) GW Vir variables. Actually the shortest observed periods (300-400 s) of these variables are generally predicted to be unstable in all models, but the longest observed periods (up to 1000 s) are difficult to excite. Three recent investigations differ in their conclusions, with two finding that helium and even a slight amount of hydrogen does not prevent the kappa effect of C and O ionizations. A more detailed study reported here confirms the poisoning effect of helium. However, the ionization K- and L-edge opacity of the original iron, whose global abundance is unaffected by all previous evolution, especially if enhanced by radiation absorption levitation, can give different, previously unexplored, opacity driving that can explain the observed pulsations. But even this iron ionization driving can be somewhat poisoned by bump smoothing if the C and O abundances are large. Nonvariable GW Vir stars in the observed instability strip could be the result of small composition variations in the pulsation driving layers.

  14. Make Gestures to Learn: Reproducing Gestures Improves the Learning of Anatomical Knowledge More than Just Seeing Gestures

    Directory of Open Access Journals (Sweden)

    Mélaine Cherdieu

    2017-10-01

    Manual gestures can facilitate problem solving but also language or conceptual learning. Both seeing and making the gestures during learning seem to be beneficial. However, the stronger activation of the motor system in the second case should provide supplementary cues to consolidate and re-enact the mental traces created during learning. We tested this hypothesis in the context of anatomy learning by naïve adult participants. Anatomy is a challenging topic to learn and is of specific interest for research on embodied learning, as the learning content can be directly linked to learners' body. Two groups of participants were asked to look at a video lecture on the forearm anatomy. The video included a model making gestures related to the content of the lecture. Both groups saw the gestures, but only one also imitated the model. Tests of knowledge were run just after learning and a few days later. The results revealed that imitating gestures improves the recall of structure names and their localization on a diagram. This effect was, however, significant only in long-term assessments. This suggests that: (1) the integration of motor actions and knowledge may require sleep; (2) a specific activation of the motor system during learning may improve the consolidation and/or the retrieval of memories.

  15. Make Gestures to Learn: Reproducing Gestures Improves the Learning of Anatomical Knowledge More than Just Seeing Gestures

    Science.gov (United States)

    Cherdieu, Mélaine; Palombi, Olivier; Gerber, Silvain; Troccaz, Jocelyne; Rochet-Capellan, Amélie

    2017-01-01

    Manual gestures can facilitate problem solving but also language or conceptual learning. Both seeing and making the gestures during learning seem to be beneficial. However, the stronger activation of the motor system in the second case should provide supplementary cues to consolidate and re-enact the mental traces created during learning. We tested this hypothesis in the context of anatomy learning by naïve adult participants. Anatomy is a challenging topic to learn and is of specific interest for research on embodied learning, as the learning content can be directly linked to learners' body. Two groups of participants were asked to look at a video lecture on the forearm anatomy. The video included a model making gestures related to the content of the lecture. Both groups saw the gestures, but only one also imitated the model. Tests of knowledge were run just after learning and a few days later. The results revealed that imitating gestures improves the recall of structure names and their localization on a diagram. This effect was, however, significant only in long-term assessments. This suggests that: (1) the integration of motor actions and knowledge may require sleep; (2) a specific activation of the motor system during learning may improve the consolidation and/or the retrieval of memories. PMID:29062287

  17. TOT phenomena: Gesture production in younger and older adults.

    Science.gov (United States)

    Theocharopoulou, Foteini; Cocks, Naomi; Pring, Timothy; Dipper, Lucy T

    2015-06-01

    This study explored age-related changes in gesture to better understand the relationship between gesture and word retrieval from memory. The frequency of gestures during tip-of-the-tongue (TOT) states highlights this relationship. There is a lack of evidence describing the form and content of iconic gestures arising spontaneously in such TOT states and a parallel gap addressing age-related variations. In this study, TOT states were induced in 45 participants from 2 age groups (older and younger adults) using a pseudoword paradigm. The type and frequency of gestures produced were recorded during 2 experimental conditions (single-word retrieval and narrative task). We found that both groups experienced a high number of TOT states, during which they gestured. Iconic co-TOT gestures were more common than noniconic gestures. Although there was no age effect on the type of gestures produced, there was a significant, task-specific age difference in the amount of gesturing. That is, younger adults gestured more in the narrative task, whereas older adults generated more gestures in the single-word-retrieval task. Task-specific age differences suggest that there are age-related differences in terms of the cognitive operations involved in TOT gesture production. (c) 2015 APA, all rights reserved.

  18. Co-Thought and Co-Speech Gestures Are Generated by the Same Action Generation Process

    Science.gov (United States)

    Chu, Mingyuan; Kita, Sotaro

    2016-01-01

    People spontaneously gesture when they speak (co-speech gestures) and when they solve problems silently (co-thought gestures). In this study, we first explored the relationship between these 2 types of gestures and found that individuals who produced co-thought gestures more frequently also produced co-speech gestures more frequently (Experiments…

  19. Hubbard physics in the PAW GW approximation

    Energy Technology Data Exchange (ETDEWEB)

    Booth, J. M., E-mail: jamie.booth@rmit.edu.au; Smith, J. S.; Russo, S. P. [Theoretical Chemical and Quantum Physics, School of Science, RMIT University, Melbourne, VIC (Australia); Drumm, D. W. [Theoretical Chemical and Quantum Physics, School of Science, RMIT University, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Nanoscale BioPhotonics, School of Science, RMIT University, Melbourne, VIC (Australia); Casey, P. S. [CSIRO Manufacturing, Clayton, VIC (Australia)

    2016-06-28

    It is demonstrated that the signatures of the Hubbard Model in the strongly interacting regime can be simulated by modifying the screening in the limit of zero wavevector in Projector-Augmented Wave GW calculations for systems without significant nesting. This modification, when applied to the Mott insulator CuO, results in the opening of the Mott gap by the splitting of states at the Fermi level into upper and lower Hubbard bands, and exhibits a giant transfer of spectral weight upon electron doping. The method is also employed to clearly illustrate that the M₁ and M₂ forms of vanadium dioxide are fundamentally different types of insulator. Standard GW calculations are sufficient to open a gap in M₁ VO₂, which arises from the Peierls pairing filling the valence band, creating homopolar bonds. The valence band wavefunctions are stabilized with respect to the conduction band, reducing polarizability and pushing the conduction band eigenvalues to higher energy. The M₂ structure, however, opens a gap from strong on-site interactions; it is a Mott insulator.

  20. Different visual exploration of tool-related gestures in left hemisphere brain damaged patients is associated with poor gestural imitation.

    Science.gov (United States)

    Vanbellingen, Tim; Schumacher, Rahel; Eggenberger, Noëmi; Hopfner, Simone; Cazzoli, Dario; Preisig, Basil C; Bertschi, Manuel; Nyffeler, Thomas; Gutbrod, Klemens; Bassetti, Claudio L; Bohlhalter, Stephan; Müri, René M

    2015-05-01

    According to the direct matching hypothesis, perceived movements automatically activate existing motor components through matching of the perceived gesture and its execution. The aim of the present study was to test the direct matching hypothesis by assessing whether visual exploration behavior correlates with deficits in gestural imitation in left hemisphere damaged (LHD) patients. Eighteen LHD patients and twenty healthy control subjects took part in the study. Gesture imitation performance was measured by the test for upper limb apraxia (TULIA). Visual exploration behavior was measured by an infrared eye-tracking system. Short videos including forty gestures (20 meaningless and 20 communicative gestures) were presented. Cumulative fixation duration was measured in different regions of interest (ROIs), namely the face, the gesturing hand, the body, and the surrounding environment. Compared to healthy subjects, patients fixated significantly less on the ROIs comprising the face and the gesturing hand during the exploration of emblematic and tool-related gestures. Moreover, visual exploration of tool-related gestures significantly correlated with tool-related imitation as measured by TULIA in LHD patients. Patients and controls did not differ in the visual exploration of meaningless gestures, and no significant relationships were found between visual exploration behavior and the imitation of emblematic and meaningless gestures in TULIA. The present study thus suggests that altered visual exploration may lead to disturbed imitation of tool-related gestures, but not of emblematic and meaningless gestures. Consequently, our findings partially support the direct matching hypothesis. Copyright © 2015 Elsevier Ltd. All rights reserved.
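
    The cumulative-fixation-duration measure described above can be sketched in a few lines. This is an illustrative example only, not the study's analysis pipeline; the fixation log and ROI labels below are hypothetical.

```python
# Illustrative sketch (not the study's pipeline): cumulative fixation
# duration per region of interest (ROI). The fixation log and ROI
# labels below are hypothetical.
from collections import defaultdict

def cumulative_fixation_duration(fixations):
    """Sum fixation durations (ms) per ROI from (roi, duration) records."""
    totals = defaultdict(float)
    for roi, duration_ms in fixations:
        totals[roi] += duration_ms
    return dict(totals)

# Hypothetical fixation log for one video: (ROI, duration in ms).
fixations = [
    ("face", 240), ("hand", 310), ("face", 180),
    ("body", 120), ("environment", 90), ("hand", 200),
]
print(cumulative_fixation_duration(fixations))
# {'face': 420.0, 'hand': 510.0, 'body': 120.0, 'environment': 90.0}
```

    In practice each fixation would first be assigned to an ROI from its gaze coordinates; the aggregation step itself is just this kind of per-ROI sum.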

  1. A Search for Electron Antineutrinos Associated with Gravitational-wave Events GW150914 and GW151226 Using KamLAND

    Science.gov (United States)

    Gando, A.; Gando, Y.; Hachiya, T.; Hayashi, A.; Hayashida, S.; Ikeda, H.; Inoue, K.; Ishidoshiro, K.; Karino, Y.; Koga, M.; Matsuda, S.; Mitsui, T.; Nakamura, K.; Obara, S.; Oura, T.; Ozaki, H.; Shimizu, I.; Shirahata, Y.; Shirai, J.; Suzuki, A.; Takai, T.; Tamae, K.; Teraoka, Y.; Ueshima, K.; Watanabe, H.; Kozlov, A.; Takemoto, Y.; Yoshida, S.; Fushimi, K.; Piepke, A.; Banks, T. I.; Berger, B. E.; Fujikawa, B. K.; O'Donnell, T.; Learned, J. G.; Maricic, J.; Sakai, M.; Winslow, L. A.; Krupczak, E.; Ouellet, J.; Efremenko, Y.; Karwowski, H. J.; Markoff, D. M.; Tornow, W.; Detwiler, J. A.; Enomoto, S.; Decowski, M. P.; KamLAND Collaboration

    2016-10-01

    We present a search, using KamLAND, a kiloton-scale anti-neutrino detector, for low-energy anti-neutrino events that were coincident with the gravitational-wave (GW) events GW150914 and GW151226, and the candidate event LVT151012. We find no inverse beta-decay neutrino events within ±500 s of either GW signal. This non-detection is used to constrain the electron anti-neutrino fluence and the total integrated luminosity of the astrophysical sources.
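
    The logic of turning a non-detection into a fluence constraint can be sketched as follows. This is not the KamLAND analysis itself; the detector target count and cross section below are hypothetical round numbers, and real analyses fold in energy-dependent cross sections, efficiencies, and backgrounds.

```python
# Illustrative sketch of how a null result bounds the antineutrino
# fluence (not the KamLAND analysis; the numbers are hypothetical).
# With zero events observed and negligible background, the Poisson
# upper limit mu on the expected count at confidence level CL solves
# exp(-mu) = 1 - CL, and the fluence limit is F < mu / (N_targets * sigma).
import math

def fluence_upper_limit(n_targets, cross_section_cm2, cl=0.90):
    """Fluence upper limit (cm^-2) from zero observed events."""
    mu_limit = -math.log(1.0 - cl)  # ~2.30 expected events at 90% CL
    return mu_limit / (n_targets * cross_section_cm2)

# Hypothetical inputs: ~8e31 free protons in a kiloton-scale detector
# and an inverse beta-decay cross section of ~1e-42 cm^2 at a few MeV.
limit = fluence_upper_limit(n_targets=8e31, cross_section_cm2=1e-42)
print(f"fluence < {limit:.2e} antineutrinos / cm^2 (90% CL)")
```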

  2. THE PAST AND THE FUTURE OF DIRECT SEARCH OF GW FROM PULSARS IN THE ERA OF GW ANTENNAS

    Directory of Open Access Journals (Sweden)

    L. Milano

    2013-12-01

    In this paper we will give an overview of the past and present status of Gravitational Wave (GW) research associated with pulsars, taking into account the target sensitivity achieved from interferometric laser GW antennas such as Tama, Geo, Ligo and Virgo. We will see that the upper limits obtained with searches for periodic GW begin to be astrophysically interesting by imposing non-trivial constraints on the structure and evolution of neutron stars. We will give prospects for the future detection of pulsar GW signals, with Advanced Ligo and Advanced Virgo and future enhanced detectors, e.g. the Einstein Telescope.

  3. Spontaneous gestures influence strategy choices in problem solving.

    Science.gov (United States)

    Alibali, Martha W; Spencer, Robert C; Knox, Lucy; Kita, Sotaro

    2011-09-01

    Do gestures merely reflect problem-solving processes, or do they play a functional role in problem solving? We hypothesized that gestures highlight and structure perceptual-motor information, and thereby make such information more likely to be used in problem solving. Participants in two experiments solved problems requiring the prediction of gear movement, either with gesture allowed or with gesture prohibited. Such problems can be correctly solved using either a perceptual-motor strategy (simulation of gear movements) or an abstract strategy (the parity strategy). Participants in the gesture-allowed condition were more likely to use perceptual-motor strategies than were participants in the gesture-prohibited condition. Gesture promoted use of perceptual-motor strategies both for participants who talked aloud while solving the problems (Experiment 1) and for participants who solved the problems silently (Experiment 2). Thus, spontaneous gestures influence strategy choices in problem solving.

  4. Gesture Commanding of a Robot with EVA Gloves

    Data.gov (United States)

    National Aeronautics and Space Administration — Gesture commands allow a human operator to directly interact with a robot without the use of intermediary hand controllers. There are two main types of hand gesture...

  5. Bimanual Gesture Imitation in Alzheimer's Disease.

    Science.gov (United States)

    Sanin, Günter; Benke, Thomas

    2017-01-01

    Unimanual gesture production or imitation has often been studied in Alzheimer's disease (AD) during apraxia testing. In the present study, it was hypothesized that bimanual motor tasks may be a sensitive method to detect impairments of motor cognition in AD due to increased demands on the cognitive system. We investigated bimanual, meaningless gesture imitation in 45 AD outpatients, 38 subjects with mild cognitive impairment (MCI), and 50 normal controls (NC) attending a memory clinic. Participants performed neuropsychological background testing and three tasks: the Interlocking Finger Test (ILF), Imitation of Alternating Hand Movements (AHM), and Bimanual Rhythm Tapping (BRT). The tasks were short and easy to administer. Inter-rater reliability was high across all three tests. AD patients performed significantly poorer than NC and MCI participants; a deficit in imitating bimanual gestures was rarely found in MCI and NC participants. Sensitivity to detect AD ranged from 0.5 to 0.7, with specificity beyond 0.9. ROC analyses revealed good diagnostic accuracy (0.77 to 0.92). Impairment in imitating bimanual gestures was mainly predicted by diagnosis and disease severity. Our findings suggest that an impairment in imitating bimanual, meaningless gestures is a valid disease marker of mild to moderate AD and can easily be assessed in memory clinic settings. Based on our preliminary findings, it appears to be a separate impairment which can be distinguished from other cognitive deficits.
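
    The sensitivity and specificity figures quoted above come straight from confusion-matrix counts. A minimal sketch, using invented counts for illustration (these are not the study's data):

```python
# Minimal sketch of the diagnostic metrics quoted above, computed from
# a hypothetical confusion matrix (counts invented for illustration;
# they are not the study's data).
def diagnostic_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true-positive rate: flagged AD / all AD
    specificity = tn / (tn + fp)  # true-negative rate: cleared controls / all controls
    return sensitivity, specificity

# Hypothetical example: a test flags 27 of 45 AD patients and
# falsely flags 3 of 50 controls.
sens, spec = diagnostic_metrics(tp=27, fn=18, tn=47, fp=3)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# sensitivity=0.60, specificity=0.94
```

    ROC accuracy (the 0.77 to 0.92 figures) would additionally sweep a decision threshold over the raw task scores and integrate the resulting true-positive rate against the false-positive rate.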

  6. Spatial reference in a bonobo gesture.

    Science.gov (United States)

    Genty, Emilie; Zuberbühler, Klaus

    2014-07-21

    Great apes frequently produce gestures during social interactions to communicate in flexible, goal-directed ways [1-3], a feature with considerable relevance for the ongoing debate over the evolutionary origins of human language [1, 4]. But despite this shared feature with language, there has been a lack of evidence for semantic content in ape gestures. According to one authoritative view, ape gestures thus do not have any specific referential, iconic, or deictic content, a fundamental difference versus human gestures and spoken language [1, 5] that suggests these features have a more recent origin in human evolution, perhaps caused by a fundamental transition from ape-like individual intentionality to human-like shared intentionality [6]. Here, we revisit this human uniqueness claim with a study of a previously undescribed human-like beckoning gesture in bonobos that has potentially both deictic and iconic character. We analyzed beckoning in two groups of bonobos, kept under near natural environmental and social conditions at the Lola Ya Bonobo sanctuary near Kinshasa, Democratic Republic of Congo, in terms of its linguistic content and underlying communicative intention. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Tactile Feedback for Above-Device Gesture Interfaces

    OpenAIRE

    Freeman, Euan; Brewster, Stephen; Lantz, Vuokko

    2014-01-01

    Above-device gesture interfaces let people interact in the space above mobile devices using hand and finger movements. For example, users could gesture over a mobile phone or wearable without having to use the touchscreen. We look at how above-device interfaces can also give feedback in the space over the device. Recent haptic and wearable technologies give new ways to provide tactile feedback while gesturing, letting touchless gesture interfaces give touch feedback. In this paper we take a f...

  8. Asymmetric coupling between gestures and speech during reasoning

    NARCIS (Netherlands)

    Hoekstra, Lisette

    2017-01-01

    When children learn, insights displayed in gestures typically precede insights displayed in speech. In this study, we investigated how this leading role of gestures in cognitive development is evident in (and emerges from) the dynamic coupling between gestures and speech during one task. We

  9. Iconic Gestures as Undervalued Representations during Science Teaching

    Science.gov (United States)

    Chue, Shien; Lee, Yew-Jin; Tan, Kim Chwee Daniel

    2015-01-01

    Iconic gestures that are ubiquitous in speech are integral to human meaning-making. However, few studies have attempted to map out the role of these gestures in science teaching. This paper provides a review of existing literature in everyday communication and education to articulate potential contributions of iconic gestures for science teaching.…

  10. Adaptation in Gesture: Converging Hands or Converging Minds?

    Science.gov (United States)

    Mol, Lisette; Krahmer, Emiel; Maes, Alfons; Swerts, Marc

    2012-01-01

    Interlocutors sometimes repeat each other's co-speech hand gestures. In three experiments, we investigate to what extent the copying of such gestures' form is tied to their meaning in the linguistic context, as well as to interlocutors' representations of this meaning at the conceptual level. We found that gestures were repeated only if they could…

  11. Recognizing Stress Using Semantics and Modulation of Speech and Gestures

    NARCIS (Netherlands)

    Lefter, I.; Burghouts, G.J.; Rothkrantz, L.J.M.

    2016-01-01

    This paper investigates how speech and gestures convey stress, and how they can be used for automatic stress recognition. As a first step, we look into how humans use speech and gestures to convey stress. In particular, for both speech and gestures, we distinguish between stress conveyed by the

  12. The cortical signature of impaired gesturing: Findings from schizophrenia

    Directory of Open Access Journals (Sweden)

    Petra Verena Viher

    2018-01-01

    Schizophrenia is characterized by deficits in gesturing, which is important for nonverbal communication. Research in healthy participants and brain-damaged patients revealed a left-lateralized fronto-parieto-temporal network underlying gesture performance. First evidence from structural imaging studies in schizophrenia corroborates these results. However, as of yet, it is unclear if cortical thickness abnormalities contribute to impairments in gesture performance. We hypothesized that patients with deficits in gesture production show cortical thinning in 12 regions of interest (ROIs) of a gesture network relevant for gesture performance and recognition. Forty patients with schizophrenia and 41 healthy controls performed hand and finger gestures as either imitation or pantomime. Group differences in cortical thickness between patients with deficits, patients without deficits, and controls were explored using a multivariate analysis of covariance. In addition, the relationship between gesture recognition and cortical thickness was investigated. Patients with deficits in gesture production had reduced cortical thickness in eight ROIs, including the pars opercularis of the inferior frontal gyrus, the superior and inferior parietal lobes, and the superior and middle temporal gyri. Gesture recognition correlated with cortical thickness in fewer, but mainly the same, ROIs within the patient sample. In conclusion, our results show that impaired gesture production and recognition in schizophrenia are associated with cortical thinning in distinct areas of the gesture network.

  13. ASTROPHYSICAL IMPLICATIONS OF THE BINARY BLACK HOLE MERGER GW150914

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.T.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, R.D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Belczynski, C.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, M.J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, A.L.S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, J.G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, T.C; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Bustillo, J. Calderon; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Diaz, J. 
Casanueva; Casentini, C.; Caudill, S.; Cavaglia, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Baiardi, L. Cerboni; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Qian; Chua, S. E.; Chung, E.S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, A.C.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, A.L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Debra, D.; Debreczeni, G.; Degallaix, J.; De laurentis, M.; Deleglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.A.; DeRosa, R. T.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Diaz, M. C.; Di Fiore, L.; Giovanni, M.G.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H. -B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, T. M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.M.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. 
P.; Flaminio, R.; Fletcher, M; Fournier, J. -D.; Franco, S; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.P.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; Gonzalez, Idelmis G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.M.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Buffoni-Hall, R.; Hall, E. D.; Hammond, G.L.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, P.J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C. -J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J. -M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, D.H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jimenez-Forteza, F.; Johnson, W.; Jones, I.D.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.H.; Kanner, J. 
B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kefelian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.E.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan., S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Krolak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C.H.; Lee, K.H.; Lee, M.H.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lueck, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magna-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marka, S.; Marka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R.M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. 
L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, J.C.; Moraru, D.; Gutierrez Moreno, M.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P.G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Gutierrez-Neri, M.; Neunzert, A.; Newton-Howes, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J.; Oh, S. H.; Ohme, F.; Oliver, M. B.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Puerrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. 
S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romanov, G.; Romie, J. H.; Rosinska, D.; Rowan, S.; Ruediger, A.; Ruggi, P.; Ryan, K.A.; Sachdev, P.S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J; Schmidt, P.; Schnabel, R.B.; Schofield, R. M. S.; Schoenbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, M.S.; Sellers, D.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, António Dias da; Simakov, D.; Singer, A; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, R. J. E.; Smith, N.D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson-Moore, P.; Stone, J.R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepanczyk, M. J.; Tacca, M.D.; Talukder, D.; Tanner, D. B.; Tapai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, W.R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. 
V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Toyra, D.; Travasso, F.; Traylor, G.; Trifir, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Van Bakel, N.; Van Beuzekom, Martin; Van den Brand, J. F. J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasuth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P.J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.M.; Wessels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J.L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrozny, A.; Zangrando, L.; Zanolin, M.; Zendri, J. -P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.

    2016-01-01

    The discovery of the gravitational-wave (GW) source GW150914 with the Advanced LIGO detectors provides the first observational evidence for the existence of binary black hole (BH) systems that inspiral and merge within the age of the universe. Such BH mergers have been predicted in two main types of

  14. Editorial: Challenges and solutions in GW calculations for complex systems

    Science.gov (United States)

    Giustino, F.; Umari, P.; Rubio, A.

    2012-09-01

    We report key advances in the area of GW calculations, review the available software implementations and define standardization criteria to render the comparison between GW calculations from different codes meaningful, and identify future major challenges in the area of quasiparticle calculations. This Topical Issue should be a reference point for further developments in the field.

  15. Accelerating GW calculations with optimal polarizability basis

    Energy Technology Data Exchange (ETDEWEB)

    Umari, P.; Stenuit, G. [CNR-IOM DEMOCRITOS Theory Elettra Group, Basovizza (Trieste) (Italy); Qian, X.; Marzari, N. [Department of Materials Science and Engineering, MIT, Cambridge, MA (United States); Giacomazzi, L.; Baroni, S. [CNR-IOM DEMOCRITOS Theory Elettra Group, Basovizza (Trieste) (Italy); SISSA - Scuola Internazionale Superiore di Studi Avanzati, Trieste (Italy)

    2011-03-15

    We present a method for accelerating GW quasi-particle (QP) calculations. This is achieved through the introduction of optimal basis sets for representing polarizability matrices. First the real-space products of Wannier-like orbitals are constructed, and then optimal basis sets are obtained through singular value decomposition. Our method is validated by calculating the vertical ionization energies of the benzene molecule and the band structure of crystalline silicon. Its potentialities are illustrated by calculating the QP spectrum of a model structure of vitreous silica. Finally, we apply our method to studying the electronic structure properties of a model of quasi-stoichiometric amorphous silicon nitride and of its point defects. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
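
    The compression step described above (form real-space products of localized orbitals, then keep only the dominant singular vectors) can be illustrated with a toy NumPy sketch. This is not the authors' implementation; the grid size, orbital count, and truncation threshold are arbitrary assumptions.

    ```python
    import numpy as np

    # Toy sketch of the basis construction, not the authors' code: compress
    # real-space products of localized orbitals with an SVD and keep only
    # the dominant left singular vectors as the reduced polarizability basis.
    rng = np.random.default_rng(0)
    n_grid, n_orb = 200, 8          # assumed grid points and orbitals

    orbitals = rng.standard_normal((n_grid, n_orb))
    # One column per orbital-pair product (i <= j), evaluated on the grid.
    products = np.column_stack([orbitals[:, i] * orbitals[:, j]
                                for i in range(n_orb) for j in range(i, n_orb)])

    u, s, _ = np.linalg.svd(products, full_matrices=False)
    keep = s > 1e-2 * s[0]          # truncate: retain dominant singular vectors
    basis = u[:, keep]              # orthonormal reduced basis for the products

    print(products.shape[1], "products ->", basis.shape[1], "basis vectors")
    ```

    The point of the truncation is that the product space is highly redundant, so the retained basis can be much smaller than the number of orbital pairs without degrading the represented polarizability.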

  16. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension

    OpenAIRE

    Nathaniel Bloem Klooster; Susan Wagner Cook; Ergun Y. Uc; Melissa C. Duff

    2015-01-01

    Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture’s ability to drive new learning is supported by procedural memory and that proc...

  17. An Instability Mechanism for GW Vir Variables

    Science.gov (United States)

    Cox, A. N.

    2002-05-01

    A puzzle for almost 20 years has been the cause of the pulsational instability for the hot post-planetary nebula pre-white dwarfs. It was known right after the discovery of these variable stars that the cyclical ionization of carbon and oxygen can make the stars pulsate by the normal kappa mechanism. However, the presence of helium observed on the surface of these stars poisons this mechanism by diluting the opacity "bump" of C and O. The problem has been to get pulsationally unstable models with significant helium in the layers just below the surface where the pulsations are driven. Now it appears that an additional opacity "bump" in the temperature-opacity plane, due to the K-shell ionization of the small amount of iron in the stellar mixture unaffected by stellar evolution, might give sufficient driving when added to that from the C and O ionizations. Some small ion levitation abundance enhancement from the solar value may be needed, though. The latest extensive theoretical interpretations by Bradley and Dziembowski (1996) show that low-order nonradial g-modes with small motions in deep pulsation damping layers do not suffer much from the helium poison, but the observed longer periods for the hottest stars in this GW Vir (often called PG1159-035) class remained unexplained. The new Los Alamos opacities for the observed abundances, 0.6 solar mass models for GW Vir itself at 140,000 K, and the pulsational analysis for periods around the observed 516 seconds will be presented.

  18. Exploring the Use of Discrete Gestures for Authentication

    Science.gov (United States)

    Chong, Ming Ki; Marsden, Gary

    Research in user authentication has been a growing field in HCI. Previous studies have shown that people's graphical memory can be used to increase password memorability. On the other hand, with the increasing number of devices with built-in motion sensors, kinesthetic memory (or muscle memory) can also be exploited for authentication. This paper presents a novel knowledge-based authentication scheme, called gesture password, which uses discrete gestures as password elements. The research presents a study of multiple password retention using PINs and gesture passwords. The study reports that although participants could use kinesthetic memory to remember gesture passwords, retention of PINs is far superior to retention of gesture passwords.
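
    A knowledge-based scheme of this kind can be stored and checked much like a PIN: each recognized discrete gesture is quantized to a symbol, and the symbol sequence is salted and hashed. The sketch below is a hypothetical illustration, not the paper's system; the gesture names are invented.

    ```python
    import hashlib
    import hmac
    import os

    # Hypothetical sketch (not the paper's system): each recognized discrete
    # gesture maps to a symbol, and the symbol sequence is salted and hashed
    # exactly as a PIN would be.
    def enroll(sequence: list[str]) -> tuple[bytes, bytes]:
        """Store a gesture password as (salt, digest)."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", "|".join(sequence).encode(), salt, 100_000)
        return salt, digest

    def verify(sequence: list[str], salt: bytes, digest: bytes) -> bool:
        """Check an entered gesture sequence against the stored digest."""
        candidate = hashlib.pbkdf2_hmac("sha256", "|".join(sequence).encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)

    salt, stored = enroll(["flick_left", "circle_cw", "shake"])
    print(verify(["flick_left", "circle_cw", "shake"], salt, stored))   # True
    print(verify(["flick_right", "circle_cw", "shake"], salt, stored))  # False
    ```

    The storage side is thus identical to PIN handling; the memorability difference the study reports lies entirely in how well users recall the sequence, not in how it is stored.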

  19. When gestures show us the way: Co-speech gestures selectively facilitate navigation and spatial memory.

    OpenAIRE

    Galati, Alexia; Weisberg, Steven M.; Newcombe, Nora S.; Avraamides, Marios N.

    2017-01-01

    How does gesturing during route learning relate to subsequent spatial performance? We examined the relationship between gestures produced spontaneously while studying route directions and spatial representations of the navigated environment. Participants studied route directions, then navigated those routes from memory in a virtual environment, and finally had their memory of the environment assessed. We found that, for navigators with low spatial perspective-taking pe...

  20. From action to abstraction: Gesture as a mechanism of change.

    Science.gov (United States)

    Goldin-Meadow, Susan

    2015-12-01

    Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how the children understood the task at each point, but also about how they progressed from one point to the next. In this paper, I examine a routine behavior that Piaget overlooked - the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker's talk. But gesture can do more than reflect ideas - it can also change them. In this sense, gesture behaves like any other action; both gesture and action on objects facilitate learning of problems on which training was given. However, only gesture promotes transferring the knowledge gained to problems that require generalization. Gesture is, in fact, a special kind of action in that it represents the world rather than directly manipulating the world (gesture does not move objects around). The mechanisms by which gesture and action promote learning may therefore differ - gesture is able to highlight components of an action that promote abstract learning while leaving out details that could tie learning to a specific context. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas.

  1. Hippocampal declarative memory supports gesture production: Evidence from amnesia.

    Science.gov (United States)

    Hilverman, Caitlin; Cook, Susan Wagner; Duff, Melissa C

    2016-12-01

    Spontaneous co-speech hand gestures provide a visuospatial representation of what is being communicated in spoken language. Although it is clear that gestures emerge from representations in memory for what is being communicated (De Ruiter, 1998; Wesp, Hesse, Keutmann, & Wheaton, 2001), the mechanism supporting the relationship between gesture and memory is unknown. Current theories of gesture production posit that action - supported by motor areas of the brain - is key in determining whether gestures are produced. We propose that when and how gestures are produced is determined in part by hippocampally-mediated declarative memory. We examined the speech and gesture of healthy older adults and of memory-impaired patients with hippocampal amnesia during four discourse tasks that required accessing episodes and information from the remote past. Consistent with previous reports of impoverished spoken language in patients with hippocampal amnesia, we predicted that these patients, who have difficulty generating multifaceted declarative memory representations, may in turn have impoverished gesture production. We found that patients gestured less overall relative to healthy comparison participants, and that this was particularly evident in tasks that may rely more heavily on declarative memory. Thus, gestures do not just emerge from the motor representation activated for speaking, but are also sensitive to the representation available in hippocampal declarative memory, suggesting a direct link between memory and gesture production. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. The role of gestures in spatial working memory and speech.

    Science.gov (United States)

    Morsella, Ezequiel; Krauss, Robert M

    2004-01-01

    Co-speech gestures traditionally have been considered communicative, but they may also serve other functions. For example, hand-arm movements seem to facilitate both spatial working memory and speech production. It has been proposed that gestures facilitate speech indirectly by sustaining spatial representations in working memory. Alternatively, gestures may affect speech production directly by activating embodied semantic representations involved in lexical search. Consistent with the first hypothesis, we found participants gestured more when describing visual objects from memory and when describing objects that were difficult to remember and encode verbally. However, they also gestured when describing a visually accessible object, and gesture restriction produced dysfluent speech even when spatial memory was untaxed, suggesting that gestures can directly affect both spatial memory and lexical retrieval.

  3. Hand use and gestural communication in chimpanzees (Pan troglodytes).

    Science.gov (United States)

    Hopkins, W D; Leavens, D A

    1998-03-01

    Hand use in gestural communication was examined in 115 captive chimpanzees (Pan troglodytes). Hand use was measured in subjects while they gestured to food placed out of their reach. The distribution of hand use was examined in relation to sex, age, rearing history, gesture type, and whether the subjects vocalized while gesturing. Overall, significantly more chimpanzees, especially females and adults, gestured with their right than with their left hand. Food begs were more lateralized to the right hand than pointing, and a greater prevalence of right-hand gesturing was found in subjects who simultaneously vocalized than those who did not. Taken together, these data suggest that referential, intentional communicative behaviors, in the form of gestures, are lateralized to the left hemisphere in chimpanzees.

  4. Humanoid Upper Torso Complexity for Displaying Gestures

    Directory of Open Access Journals (Sweden)

    Robert Richardson

    2008-11-01

    Body language is an important part of human-to-human communication; therefore body language in humanoid robots is very important for successful communication and social interaction with humans. The number of degrees of freedom (d.o.f.) necessary to achieve realistic body language in robots has been investigated. Using animation, three robots were simulated performing body language gestures; the complex model was given 25 d.o.f., the simplified model 18 d.o.f. and the basic model 10 d.o.f. A subjective survey was created online using these animations, to obtain people's opinions on the realism of the gestures and to see if they could recognise the emotions portrayed. It was concluded that the basic system was the least realistic, the complex system the most realistic, and the simplified system was only slightly less realistic than the human. Modular robotic joints were then fabricated so that the gestures could be implemented experimentally. The experimental results demonstrate that through simplification of the required degrees of freedom, the gestures can be experimentally reproduced.

  5. A Prelinguistic Gestural Universal of Human Communication

    Science.gov (United States)

    Liszkowski, Ulf; Brown, Penny; Callaghan, Tara; Takada, Akira; de Vos, Conny

    2012-01-01

    Several cognitive accounts of human communication argue for a language-independent, prelinguistic basis of human communication and language. The current study provides evidence for the universality of a prelinguistic gestural basis for human communication. We used a standardized, semi-natural elicitation procedure in seven very different cultures…

  6. A Brief Overview of Gesture Control Architectures

    Directory of Open Access Journals (Sweden)

    Gheorghe Gîlcă

    2014-12-01

    This paper deals with a detailed study of the literature on artificial vision systems and the applications where they can be used, such as gesture interpretation for robot control, telephone control and video control, as well as presenting the structure of two vision systems: one for face recognition and the second to achieve multi-touch finger detection.

  7. The Authentic Teacher: Gestures of Behavior.

    Science.gov (United States)

    Shimabukuro, Gini

    1998-01-01

    Stresses the importance for Catholic school educators to reveal the Christian message through every gesture of behavior and foster an experiential faith in students' lives. States that this demands a great deal of skill, knowledge, and self-awareness on the teacher's part, and requires self-esteem, authentic caring, humility, and communication…

  8. Grids and Gestures: A Comics Making Exercise

    Science.gov (United States)

    Sousanis, Nick

    2015-01-01

    Grids and Gestures is an exercise intended to offer participants insight into a comics maker's decision-making process for composing the entire page through the hands-on activity of making an abstract comic. It requires no prior drawing experience and serves to help reexamine what it means to draw. In addition to a description of how to proceed…

  9. The Gestural Theory of Language Origins

    Science.gov (United States)

    Armstrong, David F.

    2008-01-01

    The idea that iconic visible gesture had something to do with the origin of language, particularly speech, is a frequent element in speculation about this phenomenon and appears early in its history. Socrates hypothesizes about the origins of Greek words in Plato's satirical dialogue, "Cratylus", and his speculation includes a possible…

  10. Gesture-speech integration in children with specific language impairment.

    Science.gov (United States)

    Mainela-Arnold, Elina; Alibali, Martha W; Hostetter, Autumn B; Evans, Julia L

    2014-11-01

    Previous research suggests that speakers are especially likely to produce manual communicative gestures when they have relative ease in thinking about the spatial elements of what they are describing, paired with relative difficulty organizing those elements into appropriate spoken language. Children with specific language impairment (SLI) exhibit poor expressive language abilities together with within-normal-range nonverbal IQs. This study investigated whether weak spoken language abilities in children with SLI influence their reliance on gestures to express information. We hypothesized that these children would rely on communicative gestures to express information more often than their age-matched typically developing (TD) peers, and that they would sometimes express information in gestures that they do not express in the accompanying speech. Participants were 15 children with SLI (aged 5;6-10;0) and 18 age-matched TD controls. Children viewed a wordless cartoon and retold the story to a listener unfamiliar with the story. Children's gestures were identified and coded for meaning using a previously established system. Speech-gesture combinations were coded as redundant if the information conveyed in speech and gesture was the same, and non-redundant if the information conveyed in speech was different from the information conveyed in gesture. Children with SLI produced more gestures than children in the TD group; however, the likelihood that speech-gesture combinations were non-redundant did not differ significantly across the SLI and TD groups. In both groups, younger children were significantly more likely to produce non-redundant speech-gesture combinations than older children. The gesture-speech integration system functions similarly in children with SLI and TD, but children with SLI rely more on gesture to help formulate, conceptualize or express the messages they want to convey. This provides motivation for future research examining whether interventions

  11. On constraining the speed of gravitational waves following GW150914

    CERN Document Server

    Blas, Diego; Sawicki, Ignacy; Sibiryakov, Sergey

    2016-07-31

    We point out that the observed time delay between the detection of the signal at the Hanford and Livingston LIGO sites from the gravitational wave event GW150914 places an upper bound on the speed of propagation of gravitational waves, $c_{gw}\lesssim 1.7$ in the units of speed of light. Combined with the lower bound from the absence of gravitational Cherenkov losses by cosmic rays that rules out most of subluminal velocities, this gives a model-independent double-sided constraint $1\lesssim c_{gw}\lesssim 1.7$. We compare this result to model-specific constraints from pulsar timing and cosmology.
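
    The arithmetic behind such an upper bound is short: if the same wavefront crossed both sites, its speed obeys c_gw <= separation / observed delay. The round numbers below are my assumptions for illustration, not the paper's exact inputs (the published bound also maximizes over source direction).

    ```python
    # Back-of-envelope version of the bound, with assumed round numbers.
    light_travel_ms = 10.0     # Hanford-Livingston separation, in light-milliseconds
    dt_observed_ms = 6.9       # measured arrival-time difference for GW150914
    dt_uncertainty_ms = 0.5    # assumed timing uncertainty

    # If the signal reached both detectors, c_gw <= separation / delay,
    # evaluated at the smallest delay consistent with the measurement.
    c_gw_max = light_travel_ms / (dt_observed_ms - dt_uncertainty_ms)
    print(f"c_gw <~ {c_gw_max:.2f} c")
    ```

    With these inputs the bound comes out near 1.6 times the speed of light, in the same ballpark as the quoted 1.7.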

  12. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension.

    Science.gov (United States)

    Klooster, Nathaniel B; Cook, Susan W; Uc, Ergun Y; Duff, Melissa C

    2014-01-01

    Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture's ability to drive new learning is supported by procedural memory and that procedural memory deficits will disrupt gesture production and comprehension. We tested this proposal in patients with intact declarative memory, but impaired procedural memory as a consequence of Parkinson's disease (PD), and healthy comparison participants with intact declarative and procedural memory. In separate experiments, we manipulated the gestures participants saw and produced in a Tower of Hanoi (TOH) paradigm. In the first experiment, participants solved the task either on a physical board, requiring high arching movements to manipulate the discs from peg to peg, or on a computer, requiring only flat, sideways movements of the mouse. When explaining the task, healthy participants with intact procedural memory displayed evidence of their previous experience in their gestures, producing higher, more arching hand gestures after solving on a physical board, and smaller, flatter gestures after solving on a computer. In the second experiment, healthy participants who saw high arching hand gestures in an explanation prior to solving the task subsequently moved the mouse with significantly higher curvature than those who saw smaller, flatter gestures prior to solving the task. These patterns were absent in both gesture production and comprehension experiments in patients with procedural memory impairment. These findings suggest that the procedural memory system supports the ability of gesture to drive new learning.
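
    For readers unfamiliar with the Tower of Hanoi task used in the paradigm, a standard recursive solver (my sketch, unrelated to the experimental software) shows what participants had to do: transfer a stack of discs between pegs one disc at a time, never placing a larger disc on a smaller one.

    ```python
    # Standard recursive Tower of Hanoi solver: a sketch of the task the
    # participants solved, not the experiment's software.
    def hanoi(n: int, source: str, spare: str, target: str) -> list[tuple[str, str]]:
        """Return the sequence of (from_peg, to_peg) moves for n discs."""
        if n == 0:
            return []
        # Move n-1 discs out of the way, move the largest, then restack.
        return (hanoi(n - 1, source, target, spare)
                + [(source, target)]
                + hanoi(n - 1, spare, source, target))

    moves = hanoi(3, "A", "B", "C")
    print(len(moves), "moves:", moves)  # 7 moves for 3 discs
    ```

    The minimal solution takes 2^n - 1 moves; it is the disc-moving actions themselves (high arcs on a physical board versus flat mouse slides) that the experiments manipulated.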

  13. 52 GW of photovoltaic capacity and what then?; 52 GW Photovoltaik - und dann?

    Energy Technology Data Exchange (ETDEWEB)

    Bode, Sven [arrhenius Institut fuer Energie- und Klimapolitik, Hamburg (Germany)

    2013-03-15

    Following a phase of rapid, uncontrolled expansion of photovoltaics in Germany, a first absolute capacity target of 52 GW was proclaimed in August 2012. How this target is to be approached, and what is to happen thereafter, has remained unclear, however. One suggested resolution consists in defining annual rates of newly installed capacity that will permit the photovoltaic industry a softer landing, i.e. prevent a violent crash. A possible starting point for such a project would be after the elections to the German parliament.
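
    One way to read the suggested path: fix the remaining headroom to the 52 GW cap and spread it over a declining annual corridor. The starting capacity and glide-path length below are assumptions for illustration, not figures from the article.

    ```python
    # Illustrative "soft landing" glide path toward the 52 GW cap; the
    # installed base and number of years are assumed, not from the article.
    cap_gw = 52.0
    installed_gw = 33.0      # assumed capacity already installed
    years = 6                # assumed length of the glide path

    remaining = cap_gw - installed_gw
    weights = list(range(years, 0, -1))            # linearly declining corridor
    additions = [remaining * w / sum(weights) for w in weights]

    for year, add in enumerate(additions, start=1):
        installed_gw += add
        print(f"year {year}: +{add:.2f} GW -> cumulative {installed_gw:.2f} GW")
    ```

    Annual additions shrink each year while their sum exactly exhausts the headroom, which is the "softer landing" idea: installers see a gradually tightening corridor rather than an abrupt stop at the cap.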

  14. Gestures, but Not Meaningless Movements, Lighten Working Memory Load when Explaining Math

    Science.gov (United States)

    Cook, Susan Wagner; Yip, Terina Kuangyi; Goldin-Meadow, Susan

    2012-01-01

    Gesturing is ubiquitous in communication and serves an important function for listeners, who are able to glean meaningful information from the gestures they see. But gesturing also functions for speakers, whose own gestures reduce demands on their working memory. Here we ask whether gesture's beneficial effects on working memory stem from its…

  15. Meaningful gesture in monkeys? Investigating whether mandrills create social culture.

    Directory of Open Access Journals (Sweden)

    Mark E Laidre

    BACKGROUND: Human societies exhibit a rich array of gestures with cultural origins. Often these gestures are found exclusively in local populations, where their meaning has been crafted by a community into a shared convention. In nonhuman primates like African monkeys, little evidence exists for such culturally-conventionalized gestures. METHODOLOGY/PRINCIPAL FINDINGS: Here I report a striking gesture unique to a single community of mandrills (Mandrillus sphinx among nineteen studied across North America, Africa, and Europe. The gesture was found within a community of 23 mandrills where individuals old and young, female and male covered their eyes with their hands for periods which could exceed 30 min, often while simultaneously raising their elbow prominently into the air. This 'Eye covering' gesture has been performed within the community for a decade, enduring deaths, removals, and births, and it persists into the present. Differential responses to Eye covering versus controls suggested that the gesture might have a locally-respected meaning, potentially functioning over a distance to inhibit interruptions as a 'do not disturb' sign operates. CONCLUSIONS/SIGNIFICANCE: The creation of this gesture by monkeys suggests that the ability to cultivate shared meanings using novel manual acts may be distributed more broadly beyond the human species. Although logistically difficult with primates, the translocation of gesturers between communities remains critical to experimentally establishing the possible cultural origin and transmission of nonhuman gestures.

  16. Musical Shaping Gestures: Considerations about Terminology and Methodology

    Directory of Open Access Journals (Sweden)

    Elaine King

    2013-12-01

    Fulford and Ginsborg's investigation into non-verbal communication during music rehearsal-talk between performers with and without hearing impairments extends existing research in the field of gesture studies by contributing significantly to our understanding of musicians' physical gestures as well as opening up discussion about the relationship between speech, sign and gesture in discourse about music. Importantly, the authors weigh up the possibility of an emerging sign language about music. This commentary focuses on three key considerations in response to their paper: first, use of terminology in the study of gesture, specifically about 'musical shaping gestures' (MSGs; second, methodological issues about capturing physical gestures; and third, evaluation of the application of gesture research beyond the rehearsal context. While the difficulties of categorizing gestures in observational research are acknowledged, I indicate that the consistent application of terminology from outside and within the study is paramount. I also suggest that the classification of MSGs might be based upon a set of observed physical characteristics within a single gesture, including size, duration, speed, plane and handedness, leading towards an alternative taxonomy for interpreting these data. Finally, evaluation of the application of gesture research in education and performance arenas is provided.

  17. Gesture's role in speaking, learning, and creating language.

    Science.gov (United States)

    Goldin-Meadow, Susan; Alibali, Martha Wagner

    2013-01-01

    When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.

  18. Hand gestures support word learning in patients with hippocampal amnesia.

    Science.gov (United States)

    Hilverman, Caitlin; Cook, Susan Wagner; Duff, Melissa C

    2018-06-01

    Co-speech hand gesture facilitates learning and memory, yet the cognitive and neural mechanisms supporting this remain unclear. One possibility is that motor information in gesture may engage procedural memory representations. Alternatively, iconic information from gesture may contribute to declarative memory representations mediated by the hippocampus. To investigate these alternatives, we examined gesture's effects on word learning in patients with hippocampal damage and declarative memory impairment, with intact procedural memory, and in healthy and in brain-damaged comparison groups. Participants learned novel label-object pairings while producing gesture, observing gesture, or observing without gesture. After a delay, recall and object identification were assessed. Unsurprisingly, amnesic patients were unable to recall the labels at test. However, they correctly identified objects at above chance levels, but only if they produced a gesture at encoding. Comparison groups performed well above chance at both recall and object identification regardless of gesture. These findings suggest that gesture production may support word learning by engaging nondeclarative (procedural) memory. © 2018 Wiley Periodicals, Inc.

  19. THE INTERPLANETARY NETWORK RESPONSE TO LIGO GW150914

    Energy Technology Data Exchange (ETDEWEB)

    Hurley, K. [University of California, Berkeley, Space Sciences Laboratory, 7 Gauss Way, Berkeley, CA 94720-7450 (United States); Svinkin, D. S.; Aptekar, R. L.; Golenetskii, S. V.; Frederiks, D. D. [Ioffe Physical Technical Institute, Politekhnicheskaya 26, St. Petersburg 194021 (Russian Federation); Boynton, W. [University of Arizona, Department of Planetary Sciences, Tucson, AZ 85721 (United States); Mitrofanov, I. G.; Golovin, D. V.; Kozyrev, A. S.; Litvak, M. L.; Sanin, A. B. [Space Research Institute, 84/32, Profsoyuznaya, Moscow 117997 (Russian Federation); Rau, A.; Kienlin, A. von; Zhang, X. [Max-Planck-Institut für extraterrestrische Physik, Giessenbachstrasse, Postfach 1312, Garching, D-85748 Germany (Germany); Connaughton, V.; Meegan, C. [University of Alabama in Huntsville, NSSTC, 320 Sparkman Drive, Huntsville, AL 35805 (United States); Cline, T.; Gehrels, N., E-mail: khurley@ssl.berkeley.edu [NASA Goddard Space Flight Center, Code 661, Greenbelt, MD 20771 (United States)

    2016-09-20

    We have performed a blind search for a gamma-ray transient of arbitrary duration and energy spectrum around the time of the LIGO gravitational-wave event GW150914 with the six-spacecraft interplanetary network (IPN). Four gamma-ray bursts were detected between 30 hr prior to the event and 6.1 hr after it, but none could convincingly be associated with GW150914. No other transients were detected down to limiting 15–150 keV fluences of roughly 5 × 10⁻⁸–5 × 10⁻⁷ erg cm⁻². We discuss the search strategies and temporal coverage of the IPN on the day of the event and compare the spatial coverage to the region where GW150914 originated. We also report the negative result of a targeted search for the Fermi-GBM event reported in conjunction with GW150914.

  20. FXR agonist activity of conformationally constrained analogs of GW 4064.

    Science.gov (United States)

    Akwabi-Ameyaw, Adwoa; Bass, Jonathan Y; Caldwell, Richard D; Caravella, Justin A; Chen, Lihong; Creech, Katrina L; Deaton, David N; Madauss, Kevin P; Marr, Harry B; McFadyen, Robert B; Miller, Aaron B; Navas, Frank; Parks, Derek J; Spearing, Paul K; Todd, Dan; Williams, Shawn P; Bruce Wisely, G

    2009-08-15

    Two series of conformationally constrained analogs of the FXR agonist GW 4064 1 were prepared. Replacement of the metabolically labile stilbene with either benzothiophene or naphthalene rings led to the identification of potent full agonists 2a and 2g.

  1. Properties of the Binary Black Hole Merger GW150914

    OpenAIRE

    Abbott, B. P.; Abbott, R.; Abernathy, M. R.; Adhikari, R. X.; Anderson, S. B.; Arai, K.; Araya, M. C.; Barayoga, J. C.; Barish, B. C.; Berger, B. K.; Billingsley, G.; Blackburn, J. K.; Bork, R.; Brooks, A. F.; Cahillane, C.

    2016-01-01

    On September 14, 2015, the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected a gravitational-wave transient (GW150914); we characterize the properties of the source and its parameters. The data around the time of the event were analyzed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity. GW150914 was produced by a nearly equal mass binary black hole of masses 36^(+5...

  3. Astrophysical Implications of the Binary Black Hole Merger GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; hide

    2016-01-01

    The discovery of the gravitational-wave (GW) source GW150914 with the Advanced LIGO detectors provides the first observational evidence for the existence of binary black hole (BH) systems that inspiral and merge within the age of the universe. Such BH mergers have been predicted in two main types of formation models, involving isolated binaries in galactic fields or dynamical interactions in young and old dense stellar environments. The measured masses robustly demonstrate that relatively heavy BHs (≳25 solar masses) can form in nature. This discovery implies relatively weak massive-star winds and thus the formation of GW150914 in an environment with a metallicity lower than about 1/2 of the solar value. The rate of binary-BH (BBH) mergers inferred from the observation of GW150914 is consistent with the higher end of rate predictions (≳1 Gpc⁻³ yr⁻¹) from both types of formation models. The low measured redshift (z ≈ 0.1) of GW150914 and the low inferred metallicity of the stellar progenitor imply either BBH formation in a low-mass galaxy in the local universe and a prompt merger, or formation at high redshift with a time delay between formation and merger of several Gyr. This discovery motivates further studies of binary-BH formation astrophysics. It also has implications for future detections and studies by Advanced LIGO and Advanced Virgo, and GW detectors in space.

  4. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension

    Directory of Open Access Journals (Sweden)

    Nathaniel Bloem Klooster

    2015-01-01

    Full Text Available Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture’s ability to drive new learning is supported by procedural memory and that procedural memory deficits will disrupt gesture production and comprehension. We tested this proposal in patients with intact declarative memory, but impaired procedural memory as a consequence of Parkinson’s disease, and healthy comparison participants with intact declarative and procedural memory. In separate experiments, we manipulated the gestures participants saw and produced in a Tower of Hanoi paradigm. In the first experiment, participants solved the task either on a physical board, requiring high arching movements to manipulate the discs from peg to peg, or on a computer, requiring only flat, sideways movements of the mouse. When explaining the task, healthy participants with intact procedural memory displayed evidence of their previous experience in their gestures, producing higher, more arching hand gestures after solving on a physical board, and smaller, flatter gestures after solving on a computer. In the second experiment, healthy participants who saw high arching hand gestures in an explanation prior to solving the task subsequently moved the mouse with significantly higher curvature than those who saw smaller, flatter gestures prior to solving the task. These patterns were absent in both gesture production and comprehension experiments in patients with procedural memory impairment. These findings suggest that the procedural memory system supports the ability of gesture to drive new learning.

  5. 3D Hand Gesture Analysis through a Real-Time Gesture Search Engine

    Directory of Open Access Journals (Sweden)

    Shahrouz Yousefi

    2015-06-01

    Full Text Available 3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction might be the most effective alternative to current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes images of various articulated hand gestures with annotated 3D position/orientation parameters of the hand joints. Our matching algorithm is based on hierarchical scoring of low-level edge-orientation features between the query frames and the database, retrieving the best match. Once the best match is found in the database at each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.
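    The edge-orientation matching described above can be illustrated with a much-simplified sketch: a single global gradient-orientation histogram as the low-level feature, and nearest-neighbor retrieval over a toy database. The feature, image sizes, and database entries below are illustrative assumptions, not the paper's actual hierarchical algorithm:

```python
import numpy as np

def edge_orientation_histogram(image, bins=8):
    """Coarse low-level feature: a histogram of gradient orientations,
    weighted by gradient magnitude (a stand-in for the paper's
    hierarchical edge-orientation features)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)         # normalize for comparability

def best_match(query, database):
    """Retrieve the database entry whose histogram is closest to the query."""
    q = edge_orientation_histogram(query)
    dists = {label: np.linalg.norm(q - edge_orientation_histogram(img))
             for label, img in database.items()}
    return min(dists, key=dists.get)

# Toy database of hypothetical hand-pose images (random stand-ins).
rng = np.random.default_rng(1)
db = {"open_hand": rng.random((32, 32)), "fist": rng.random((32, 32))}
print(best_match(db["fist"], db))  # -> fist
```

    In the real system, each database entry would carry annotated 3D joint parameters, so the retrieved label would immediately yield a full hand pose.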

  6. Human computer interaction using hand gestures

    CERN Document Server

    Premaratne, Prashan

    2014-01-01

    Human computer interaction (HCI) plays a vital role in bridging the 'Digital Divide', bringing people closer to consumer electronics control in the 'lounge'. Keyboards, mice, and remote controls can alienate old and new generations alike from control interfaces. Hand gesture recognition systems bring hope of connecting people with machines in a natural way. This will lead to consumers being able to use their hands naturally to communicate with any electronic equipment in their 'lounge'. This monograph covers the state-of-the-art hand gesture recognition approaches and how they evolved from their inception. The author also details his research in this area over the past 8 years and how the future of HCI might unfold. This monograph will serve as a valuable guide for researchers venturing into the world of HCI.

  7. Gestures recognition based on wavelet and LLE

    International Nuclear Information System (INIS)

    Ai, Qingsong; Liu, Quan; Lu, Ying; Yuan, Tingting

    2013-01-01

    Wavelet analysis is a time–frequency method suited to non-stationary signals, while the largest Lyapunov exponent (LLE) is used to judge the non-linear characteristics of systems. Because the surface electromyography signal (SEMGS) is a complex signal characterized by both non-stationary and non-linear properties, this paper combines wavelet coefficients and the LLE as a new feature of SEMGS. The proposed method not only reflects the non-stationary and non-linear characteristics of SEMGS, but is also suitable for its classification. A BP (back propagation) neural network is then employed to identify six gestures (fist clench, fist extension, wrist extension, wrist flexion, radial deviation, ulnar deviation). The experimental results indicate that, based on the proposed method, these six gestures can be identified at an average rate of 97.71%.
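    A minimal sketch of the wavelet half of such a feature pipeline (the LLE estimate and the BP classifier are omitted), using a hand-rolled Haar DWT. The window length, number of levels, and random stand-in signal are assumptions for illustration, not values from the paper:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                       # pad to even length if needed
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def feature_vector(window, levels=3):
    """Energy of the detail coefficients at each decomposition level,
    plus the residual approximation energy: a compact wavelet feature
    for one sEMG window."""
    feats = []
    a = window
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(np.sum(d ** 2))
    feats.append(np.sum(a ** 2))
    return np.array(feats)

rng = np.random.default_rng(0)
emg_window = rng.standard_normal(256)    # stand-in for one sEMG window
print(feature_vector(emg_window).shape)  # -> (4,)
```

    Because the Haar transform is orthonormal, the per-level energies sum to the total energy of the window, which makes the feature easy to sanity-check.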

  8. Working memory for meaningless manual gestures.

    Science.gov (United States)

    Rudner, Mary

    2015-03-01

    Effects on working memory performance relating to item similarity have been linked to prior categorisation of representations in long-term memory. However, there is evidence from gesture processing that this link may not be obligatory. The present study investigated whether working memory for incidentally generated meaningless manual gestures is influenced by formational similarity and whether this effect is modulated by working-memory load. Results showed that formational similarity did lower performance, demonstrating that similarity effects are not dependent on prior categorisation. However, this effect was only found when working-memory load was low, supporting a flexible resource allocation model according to which it is the quality rather than quantity of working memory representations that determines performance. This interpretation is in line with proposals suggesting language modality specific allocation of resources in working memory. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  9. Recognition of Gestures using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Marcel MORE

    2013-12-01

    Full Text Available Sensors for motion measurement are now becoming more widespread. Thanks to their parameters and affordability, they are already used not only in the professional sector, but also in devices intended for daily use or entertainment. One of their applications is the control of devices by gestures. Systems that can determine the type of gesture from measured motion have many uses. Some are in medical practice, for example, but they are more often found in devices such as cell phones, where they serve as a non-standard form of input. Today there are already several approaches to solving this problem, but building a sufficiently reliable system is still a challenging task. In our project we are developing a solution based on an artificial neural network. In contrast to other solutions, ours does not require building a model for each measuring system, and thus it can be used in combination with various sensors with only minimal changes to its structure.
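    As a hedged illustration of the general approach (not the authors' actual network), a one-hidden-layer feedforward pass mapping a flattened motion-sensor window to gesture-class probabilities might look like the following; all sizes and weights are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative sizes: a 50-sample window of 3-axis accelerometer data,
# 16 hidden units, 4 hypothetical gesture classes.
n_in, n_hidden, n_classes = 3 * 50, 16, 4

W1 = rng.standard_normal((n_hidden, n_in)) * 0.1   # untrained weights
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_classes, n_hidden)) * 0.1
b2 = np.zeros(n_classes)

def forward(x):
    """Forward pass: hidden tanh layer, then softmax class scores."""
    h = np.tanh(W1 @ x + b1)
    z = W2 @ h + b2
    e = np.exp(z - z.max())                         # stable softmax
    return e / e.sum()

x = rng.standard_normal(n_in)   # stand-in for one flattened motion window
p = forward(x)
print(p.shape, np.isclose(p.sum(), 1.0))  # -> (4,) True
```

    Training (e.g. by backpropagation) would fit W1, W2 to labeled gesture windows; the point of the sketch is only the sensor-agnostic input/output shape.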

  10. Spatial analogies pervade complex relational reasoning: Evidence from spontaneous gestures.

    Science.gov (United States)

    Cooperrider, Kensy; Gentner, Dedre; Goldin-Meadow, Susan

    2016-01-01

    How do people think about complex phenomena like the behavior of ecosystems? Here we hypothesize that people reason about such relational systems in part by creating spatial analogies, and we explore this possibility by examining spontaneous gestures. In two studies, participants read a written lesson describing positive and negative feedback systems and then explained the differences between them. Though the lesson was highly abstract and people were not instructed to gesture, people produced spatial gestures in abundance during their explanations. These gestures used space to represent simple abstract relations (e.g., increase ) and sometimes more complex relational structures (e.g., negative feedback ). Moreover, over the course of their explanations, participants' gestures often cohered into larger analogical models of relational structure. Importantly, the spatial ideas evident in the hands were largely unaccompanied by spatial words. Gesture thus suggests that spatial analogies are pervasive in complex relational reasoning, even when language does not.

  11. Virtual sculpting with advanced gestural interface

    OpenAIRE

    Kılıboz, Nurettin Çağrı

    2013-01-01

    Ankara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013. Thesis (Master's) -- Bilkent University, 2013. Includes bibliographical references leaves 54-58. In this study, we propose a virtual reality application that can be utilized to design preliminary/conceptual models similar to real world clay sculpting. The proposed system makes use of the innovative gestural interface that enhances the experience of...

  12. Distinguishing the communicative functions of gestures

    DEFF Research Database (Denmark)

    Jokinen, Kristiina; Navarretta, Costanza; Paggio, Patrizia

    2008-01-01

    This paper deals with the results of a machine learning experiment conducted on annotated gesture data from two case studies (Danish and Estonian). The data concern mainly facial displays, that are annotated with attributes relating to shape and dynamics, as well as communicative function....... The results of the experiments show that the granularity of the attributes used seems appropriate for the task of distinguishing the desired communicative functions. This is a promising result in view of a future automation of the annotation task....

  13. Designing Gestural Interfaces Touchscreens and Interactive Devices

    CERN Document Server

    Saffer, Dan

    2008-01-01

    If you want to get started in the new era of interaction design, this is the reference you need. Packed with informative illustrations and photos, Designing Gestural Interfaces provides you with essential information about kinesiology, sensors, ergonomics, physical computing, touchscreen technology, and new interface patterns -- information you need to augment your existing skills in traditional websites, software, or product development. This book will help you enter this new world of possibilities.

  14. Towards successful user interaction with systems: focusing on user-derived gestures for smart home systems.

    Science.gov (United States)

    Choi, Eunjung; Kwon, Sunghyuk; Lee, Donghun; Lee, Hogin; Chung, Min K

    2014-07-01

    Various studies that derived gesture commands from users have used the frequency ratio to select popular gestures among the users. However, the users select only one gesture from a limited number of gestures that they could imagine during an experiment, and thus, the selected gesture may not always be the best gesture. Therefore, two experiments including the same participants were conducted to identify whether the participants maintain their own gestures after observing other gestures. As a result, 66% of the top gestures were different between the two experiments. Thus, to verify the changed gestures between the two experiments, a third experiment including another set of participants was conducted, which showed that the selected gestures were similar to those from the second experiment. This finding implies that the method of using the frequency in the first step does not necessarily guarantee the popularity of the gestures. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  15. Does training with beat gestures favour children's narrative discourse abilities?

    OpenAIRE

    Vilà Giménez, Ingrid

    2016-01-01

    There is consensus evidence that gestures and prosody are important precursors of children’s early language abilities and development. Previous literature has investigated the beneficial role of beat gestures in the recall of information by preschoolers (Igualada, Esteve-Gibert, & Prieto, under review; Austin & Sweller, 2014). However, to our knowledge, little is known about whether the use of beat gestures can promote children’s later linguistic abilities and specifically whether training wi...

  16. Effects of the restriction of hand gestures on disfluency.

    OpenAIRE

    Finlayson, Sheena; Forrest, Victoria; Lickley, Robin; Beck, Janet M

    2003-01-01

    This paper describes an experimental pilot study of disfluency and gesture rates in spontaneous speech where speakers perform a communication task in three conditions: hands free, one arm immobilized, both arms immobilized. Previous work suggests that the restriction of the ability to gesture can have an impact on the fluency of speech. In particular, it has been found that the inability to produce iconic gestures, which depict actions and objects, results in a higher rate of disfluency. Mode...

  17. Beat gestures and prosodic prominence: impact on learning

    OpenAIRE

    Kushch, Olga

    2018-01-01

    Previous research has shown that gestures are beneficial for language learning. This doctoral thesis centers on the effects of beat gestures (i.e., hand and arm gestures that are typically associated with prosodically prominent positions in speech) on such processes. Little is known about how the two central properties of beat gestures, namely how they mark both information focus and rhythmic positions in speech, can be beneficial for learning either a first or a second language. The main go...

  18. GESTURE-VERBAL UTTERANCES FROM THE COGNITIVE PERSPECTIVE

    OpenAIRE

    Martynyuk, Alla

    2016-01-01

    The article develops the idea of speech and gesture as an integral system of meaning generation, viewing an individual’s cognitive system as a dynamic, evolving semantic lattice that organizes semantic items of propositional and imagistic modes around a core meaning: linguistic items (propositions) are linked to ideas, concepts and beliefs, as well as to specific feelings, mental states, images of gestures and stereotypic patterns of behaviour. Since gesture and speech are equally engaged in gen...

  19. Adult Gesture in Collaborative Mathematics Reasoning in Different Ages

    Science.gov (United States)

    Noto, M. S.; Harisman, Y.; Harun, L.; Amam, A.; Maarif, S.

    2017-09-01

    This article describes a case study of postgraduate students using a descriptive method. A problem was designed to facilitate reasoning on the topic of the Chi-Square test. The problem was given to two male students of different ages in order to investigate their gesture patterns and relate them to their reasoning processes. The indicators for the reasoning problem were obtaining conclusions by analogy and generalization, and forming conjectures. The study addresses the question of whether gestures are unique to each individual, and seeks to identify the patterns of gesture used by students of different ages. A reasoning problem was employed to collect the data: the two students were asked to collaborate in reasoning about the problem, and their discussion was video-recorded to observe the gestures. The recordings are described in detail in this article. Prosodic cues such as timing, the conversation transcript, and the gestures that appear can help in understanding the gestures. The purpose of this study is to investigate whether age difference influences maturity in collaboration, observed from the gesture perspective. The findings show that age is not a primary factor influencing gesture in this reasoning process. In this case, the gestures performed by the older student do not show that he achieved, maintained, or focused on the problem earlier. The older student’s gestures also did not strengthen or expand meaning when the words or language used in reasoning were unfamiliar to the younger student, and did not affect cognitive uncertainty in mathematical reasoning. Future research should use larger samples to test the consistency of these findings.

  20. Patients with hippocampal amnesia successfully integrate gesture and speech.

    Science.gov (United States)

    Hilverman, Caitlin; Clough, Sharice; Duff, Melissa C; Cook, Susan Wagner

    2018-06-19

    During conversation, people integrate information from co-speech hand gestures with information in spoken language. For example, after hearing the sentence, "A piece of the log flew up and hit Carl in the face" while viewing a gesture directed at the nose, people tend to later report that the log hit Carl in the nose (information only in gesture) rather than in the face (information in speech). The cognitive and neural mechanisms that support the integration of gesture with speech are unclear. One possibility is that the hippocampus - known for its role in relational memory and information integration - is necessary for integrating gesture and speech. To test this possibility, we examined how patients with hippocampal amnesia and healthy and brain-damaged comparison participants express information from gesture in a narrative retelling task. Participants watched videos of an experimenter telling narratives that included hand gestures that contained supplementary information. Participants were asked to retell the narratives and their spoken retellings were assessed for the presence of information from gesture. For features that had been accompanied by supplementary gesture, patients with amnesia retold fewer of these features overall and fewer retellings that matched the speech from the narrative. Yet their retellings included features that contained information that had been present uniquely in gesture in amounts that were not reliably different from comparison groups. Thus, a functioning hippocampus is not necessary for gesture-speech integration over short timescales. Providing unique information in gesture may enhance communication for individuals with declarative memory impairment, possibly via non-declarative memory mechanisms. Copyright © 2018. Published by Elsevier Ltd.

  1. A prelinguistic gestural universal of human communication.

    Science.gov (United States)

    Liszkowski, Ulf; Brown, Penny; Callaghan, Tara; Takada, Akira; de Vos, Conny

    2012-01-01

    Several cognitive accounts of human communication argue for a language-independent, prelinguistic basis of human communication and language. The current study provides evidence for the universality of a prelinguistic gestural basis for human communication. We used a standardized, semi-natural elicitation procedure in seven very different cultures around the world to test for the existence of preverbal pointing in infants and their caregivers. Results were that by 10-14 months of age, infants and their caregivers pointed in all cultures in the same basic situation with similar frequencies and the same proto-typical morphology of the extended index finger. Infants' pointing was best predicted by age and caregiver pointing, but not by cultural group. Further analyses revealed a strong relation between the temporal unfolding of caregivers' and infants' pointing events, uncovering a structure of early prelinguistic gestural conversation. Findings support the existence of a gestural, language-independent universal of human communication that forms a culturally shared, prelinguistic basis for diversified linguistic communication. Copyright © 2012 Cognitive Science Society, Inc.

  2. Gestures, vocalizations, and memory in language origins.

    Science.gov (United States)

    Aboitiz, Francisco

    2012-01-01

    This article discusses the possible homologies between the human language networks and comparable auditory projection systems in the macaque brain, in an attempt to reconcile two existing views on language evolution: one that emphasizes hand control and gestures, and the other that emphasizes auditory-vocal mechanisms. The capacity for language is based on relatively well defined neural substrates whose rudiments have been traced in the non-human primate brain. At its core, this circuit constitutes an auditory-vocal sensorimotor circuit with two main components, a "ventral pathway" connecting anterior auditory regions with anterior ventrolateral prefrontal areas, and a "dorsal pathway" connecting auditory areas with parietal areas and with posterior ventrolateral prefrontal areas via the arcuate fasciculus and the superior longitudinal fasciculus. In humans, the dorsal circuit is especially important for phonological processing and phonological working memory, capacities that are critical for language acquisition and for complex syntax processing. In the macaque, the homolog of the dorsal circuit overlaps with an inferior parietal-premotor network for hand and gesture selection that is under voluntary control, while vocalizations are largely fixed and involuntary. The recruitment of the dorsal component for vocalization behavior in the human lineage, together with a direct cortical control of the subcortical vocalizing system, are proposed to represent a fundamental innovation in human evolution, generating an inflection point that permitted the explosion of vocal language and human communication. In this context, vocal communication and gesturing have a common history in primate communication.

  3. Gliding and Saccadic Gaze Gesture Recognition in Real Time

    DEFF Research Database (Denmark)

    Rozado, David; San Agustin, Javier; Rodriguez, Francisco

    2012-01-01

    , and their corresponding real-time recognition algorithms, Hierarchical Temporal Memory networks and the Needleman-Wunsch algorithm for sequence alignment. Our results show how a specific combination of gaze gesture modality, namely saccadic gaze gestures, and recognition algorithm, Needleman-Wunsch, allows for reliable...... usage of intentional gaze gestures to interact with a computer with accuracy rates of up to 98% and acceptable completion speed. Furthermore, the gesture recognition engine does not interfere with otherwise standard human-machine gaze interaction generating therefore, very low false positive rates...
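    A saccadic gaze gesture can be quantized into a string of saccade directions and matched against stored templates using the Needleman-Wunsch global-alignment score. The direction alphabet, templates, and scoring parameters below are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score between two token
    sequences (e.g. quantized saccade directions 'U','R','D','L')."""
    n, m = len(a), len(b)
    score = np.zeros((n + 1, m + 1))
    score[:, 0] = gap * np.arange(n + 1)   # cost of gapping prefixes
    score[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1, j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i, j] = max(diag, score[i - 1, j] + gap, score[i, j - 1] + gap)
    return score[n, m]

# A recognizer can align the observed saccade string against each
# hypothetical gesture template and pick the highest-scoring one.
templates = {"L-shape": "DR", "zigzag": "RLRL"}
observed = "DR"   # e.g. a downward then rightward saccade
best = max(templates, key=lambda k: needleman_wunsch(observed, templates[k]))
print(best)  # -> L-shape
```

    Thresholding the winning score (rather than always accepting the best template) is one way to keep false positives low during otherwise standard gaze interaction.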

  4. Gesturing more diminishes recall of abstract words when gesture is allowed and concrete words when it is taboo.

    Science.gov (United States)

    Matthews-Saugstad, Krista M; Raymakers, Erik P; Kelty-Stephen, Damian G

    2017-07-01

    Gesture during speech can promote or diminish recall for conversation content. We explored effects of cognitive load on this relationship, manipulating it at two scales: individual-word abstractness and social constraints to prohibit gestures. Prohibited gestures can diminish recall but more so for abstract-word recall. Insofar as movement planning adds to cognitive load, movement amplitude may moderate gesture effects on memory, with greater permitted- and prohibited-gesture movements reducing abstract-word recall and concrete-word recall, respectively. We tested these effects in a dyadic game in which 39 adult participants described words to confederates without naming the word or five related words. Results supported our expectations and indicated that memory effects of gesturing depend on social, cognitive, and motoric aspects of discourse.

  5. Give me a hand: Differential effects of gesture type in guiding young children's problem-solving

    OpenAIRE

    Vallotton, Claire; Fusaro, Maria; Hayden, Julia; Decker, Kalli; Gutowski, Elizabeth

    2015-01-01

    Adults’ gestures support children's learning in problem-solving tasks, but gestures may be differentially useful to children of different ages, and different features of gestures may make them more or less useful to children. The current study investigated parents’ use of gestures to support their young children (1.5 – 6 years) in a block puzzle task (N = 126 parent-child dyads), and identified patterns in parents’ gesture use indicating different gestural strategies. Further, we examined the...

  6. Foundational Issues in Touch-Screen Stroke Gesture Design - An Integrative Review

    OpenAIRE

    Zhai, Shumin; Kristensson, Per Ola; Appert, Caroline; Andersen, Tue Haste; Cao, Xiang

    2012-01-01

    The potential for using stroke gestures to enter, retrieve and select commands and text has been recently unleashed by the popularity of touchscreen devices. This monograph provides a state-of-the-art integrative review of a body of human-computer interaction research on stroke gestures. It begins with an analysis of the design dimensions of stroke gestures as an interaction medium. The analysis classifies gestures into analogue versus abstract gestures, gestures for...

  7. Preparing Interprofessional Faculty to Be Humanistic Mentors for Medical Students: The GW-Gold Mentor Development Program.

    Science.gov (United States)

    Blatt, Benjamin; Plack, Margaret M; Simmens, Samuel J

    2018-01-01

    The GW-Gold Humanistic Mentor Development Program addresses the challenge faced by medical schools to educate faculty to prepare students for humanistic practice. Grounded in Branch's Teaching Professional and Humanistic Values model, the program prepares interprofessional faculty mentoring teams in humanistic communities of practice. The teams consist of physician-psychosocial professional pairs, each mentoring a small student group in their professional development course. Through GW-Gold workshops, faculty mentors develop interprofessional humanistic communities of practice, preparing them to lead second such communities with their students. This article describes the program and its evaluation. To assess outcomes and better understand the mentor experience, we used a mixed-method validating triangulation design consisting of simultaneous collection of quantitative (mentor and student surveys) and qualitative (open-ended survey questions and focus group) data. Data were analyzed in parallel and merged at the point of interpretation, allowing for triangulation and validation of outcomes. Mentors rated the program highly, gained confidence in their humanistic skills, and received high scores from students. Three themes emerged that validated program design, confirmed outcomes, and expanded on the mentor experience: (1) Interprofessional faculty communities developed through observation, collaboration, reflection, and dialogue; (2) Humanistic mentors created safe environments for student engagement; and (3) Engaging in interprofessional humanistic communities of practice expanded mentors' personal and professional identities. Outcomes support the value of the GW-Gold program's distinctive features in preparing faculty to sustain humanism in medical education: an interprofessional approach and small communities of practice built on humanistic values.

  8. AGILE OBSERVATIONS OF THE GRAVITATIONAL-WAVE EVENT GW150914

    Energy Technology Data Exchange (ETDEWEB)

    Tavani, M.; Donnarumma, I.; Argan, A.; Monte, E. Del; Evangelista, Y.; Piano, G.; Munar-Adrover, P. [INAF-IAPS, via del Fosso del Cavaliere 100, I-00133 Roma (Italy); Pittori, C.; Verrecchia, F.; Lucarelli, F.; Antonelli, L. A. [ASI Science Data Center (ASDC), Via del Politecnico, I-00133 Roma (Italy); Bulgarelli, A.; Marisaldi, M.; Fioretti, V.; Zoli, A. [INAF-IASF-Bologna, Via Gobetti 101, I-40129 Bologna (Italy); Giuliani, A.; Caraveo, P. [INAF-IASF Milano, via E.Bassini 15, I-20133 Milano (Italy); Trois, A. [INAF, Osservatorio Astronomico di Cagliari, Poggio dei Pini, strada 54, I-09012 Capoterra (Italy); Barbiellini, G. [Dip. di Fisica, Universita’ di Trieste and INFN, Via Valerio 2, I-34127 Trieste (Italy); Cattaneo, P. W., E-mail: victor@roma2.infn.it.it [INFN-Pavia, Via Bassi 6, I-27100 Pavia (Italy); and others

    2016-07-01

    We report the results of an extensive search through the AGILE data for a gamma-ray counterpart to the LIGO gravitational-wave (GW) event GW150914. Operating in spinning mode, AGILE can cover 80% of the sky with its gamma-ray instrument more than 100 times a day. AGILE came within a minute of the event time of observing the accessible GW150914 localization region. Interestingly, the gamma-ray detector exposed ∼65% of this region during the 100 s time intervals centered at −100 and +300 s from the event time. We determine a 2σ flux upper limit in the band 50 MeV–10 GeV, UL = 1.9 × 10⁻⁸ erg cm⁻² s⁻¹, obtained ∼300 s after the event. This is the fastest upper limit yet obtained for GW150914, and it significantly constrains the electromagnetic emission of a possible high-energy counterpart. We also carried out a search for gamma-ray precursor and delayed emission over five timescales ranging from minutes to days; in particular, we obtained an optimal exposure during the interval −150/−30 s. In none of these observations do we detect a significant signal associated with GW150914, nor do we confirm the weak transient source reported by Fermi-GBM 0.4 s after the event time. Even though a gamma-ray counterpart of the GW150914 event was not detected, the prospects for future AGILE observations of GW sources are decidedly promising.

  9. Archetypal Gesture and Everyday Gesture: a fundamental binomial in Delsartean theory

    Directory of Open Access Journals (Sweden)

    Elena Randi

    2012-11-01

    Full Text Available This text presents François Delsarte’s system from a historical-exploratory viewpoint, focusing on some particular aspects of the work of the French master and the interpretation of his work by some of his main disciples. The article describes the status of the body and its importance in the Delsarte system, taking the notions of archetypal gesture and everyday gesture as the bases of this system. Indeed, the text highlights both historical facts obtained from the Delsarte archive, and arguments questioning the authorship of exercises attributed to Delsarte, which, according to the text, may have been created by his students.

  10. Workshop report

    African Journals Online (AJOL)

    abp

    2017-09-14

    Sep 14, 2017 ... health: report of first EQUIST training workshop in Nigeria ... The difference between the before and after measurements was ... After the administration of the pre-workshop questionnaire the ... represent Likert rating scale of 1-5 points, where 1 point = grossly ... Procedures Manual for the "Evaluating.

  11. INDICO Workshop

    CERN Multimedia

    CERN. Geneva; Fabbrichesi, Marco

    2004-01-01

    The INtegrated DIgital COnferencing EU project has finished building a complete software solution to facilitate the management of conferences, workshops, schools or simple meetings from their announcement to their archival. Everybody involved in the organization of events is welcome to join this workshop, in order to understand the scope of the project and to see demonstrations of the various features.

  12. Beat gestures help preschoolers recall and comprehend discourse information.

    Science.gov (United States)

    Llanes-Coromina, Judith; Vilà-Giménez, Ingrid; Kushch, Olga; Borràs-Comes, Joan; Prieto, Pilar

    2018-08-01

    Although the positive effects of iconic gestures on word recall and comprehension by children have been clearly established, less is known about the benefits of beat gestures (rhythmic hand/arm movements produced together with prominent prosody). This study investigated (a) whether beat gestures combined with prosodic information help children recall contrastively focused words as well as information related to those words in a child-directed discourse (Experiment 1) and (b) whether the presence of beat gestures helps children comprehend a narrative discourse (Experiment 2). In Experiment 1, 51 4-year-olds were exposed to a total of three short stories with contrastive words presented in three conditions, namely with prominence in both speech and gesture, prominence in speech only, and nonprominent speech. Results of a recall task showed that (a) children remembered more words when exposed to prominence in both speech and gesture than in either of the other two conditions and that (b) children were more likely to remember information related to those words when the words were associated with beat gestures. In Experiment 2, 55 5- and 6-year-olds were presented with six narratives with target items either produced with prosodic prominence but no beat gestures or produced with both prosodic prominence and beat gestures. Results of a comprehension task demonstrated that stories told with beat gestures were comprehended better by children. Together, these results constitute evidence that beat gestures help preschoolers not only to recall discourse information but also to comprehend it. Copyright © 2018 Elsevier Inc. All rights reserved.

  13. Workshop Proceedings

    DEFF Research Database (Denmark)

    2012-01-01

    ... the main focus there is on spoken languages in their written and spoken forms. This series of workshops, however, offers a forum for researchers focussing on sign languages. For the third time, the workshop had sign language corpora as its main topic. This time, the focus was on the interaction between corpus and lexicon. More than half of the papers presented contribute to this topic. Once again, the papers at this workshop clearly identify the potential of even closer cooperation between sign linguists and sign language engineers, and we think it is events like this that contribute a lot to a better ...

  14. Gesture, Landscape and Embrace: A Phenomenological Analysis of ...

    African Journals Online (AJOL)

    The 'radical reflection' on the 'flesh of the world' to which this analysis aspires in turn bears upon the general field of gestural reciprocities and connections, providing the insight that intimate gestures of the flesh, such as the embrace, are primordial attunements, motions of rhythm and reciprocity, that emanate from the world ...

  15. Comprehension of iconic gestures by chimpanzees and human children.

    Science.gov (United States)

    Bohn, Manuel; Call, Josep; Tomasello, Michael

    2016-02-01

    Iconic gestures (communicative acts using hand or body movements that resemble their referent) figure prominently in theories of language evolution and development. This study contrasted the abilities of chimpanzees (N=11) and 4-year-old human children (N=24) to comprehend novel iconic gestures. Participants learned to retrieve rewards from apparatuses in two distinct locations, each requiring a different action. In the test, a human adult informed the participant where to go by miming the action needed to obtain the reward. Children used the iconic gestures (more than arbitrary gestures) to locate the reward, whereas chimpanzees did not. Some children also used arbitrary gestures in the same way, but only after they had previously shown comprehension for iconic gestures. Over time, chimpanzees learned to associate iconic gestures with the appropriate location faster than arbitrary gestures, suggesting at least some recognition of the iconicity involved. These results demonstrate the importance of iconicity in referential communication. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Communicative Effectiveness of Pantomime Gesture in People with Aphasia

    Science.gov (United States)

    Rose, Miranda L.; Mok, Zaneta; Sekine, Kazuki

    2017-01-01

    Background: Human communication occurs through both verbal and visual/motoric modalities. Simultaneous conversational speech and gesture occurs across all cultures and age groups. When verbal communication is compromised, more of the communicative load can be transferred to the gesture modality. Although people with aphasia produce meaning-laden…

  17. Does gesture add to the comprehensibility of people with aphasia?

    NARCIS (Netherlands)

    van Nispen, Karin; Sekine, Kazuki; Rose, Miranda; Ferré, Gaëlle; Tutton, Mark

    2015-01-01

    Gesture can convey information co-occurring with and in the absence of speech. As such, it seems a useful strategy for people with aphasia (PWA) to compensate for their impaired speech. To find out whether gestures used by PWA add to the comprehensibility of their communication we looked at the

  18. Enhancement of naming in nonfluent aphasia through gesture.

    Science.gov (United States)

    Hanlon, R E; Brown, J W; Gerstman, L J

    1990-02-01

    In a number of studies that have examined the gestural disturbance in aphasia and the utility of gestural interventions in aphasia therapy, a variable degree of facilitation of verbalization during gestural activity has been reported. The present study examined the effect of different unilateral gestural movements on simultaneous oral-verbal expression, specifically naming to confrontation. It was hypothesized that activation of the phylogenetically older proximal motor system of the hemiplegic right arm in the execution of a communicative but nonrepresentational pointing gesture would have a facilitatory effect on naming ability. Twenty-four aphasic patients, representing five aphasic subtypes, including Broca's, Transcortical Motor, Anomic, Global, and Wernicke's aphasics were assessed under three gesture/naming conditions. The findings indicated that gestures produced through activation of the proximal (shoulder) musculature of the right paralytic limb differentially facilitated naming performance in the nonfluent subgroup, but not in the Wernicke's aphasics. These findings may be explained on the view that functional activation of the archaic proximal motor system of the hemiplegic limb, in the execution of a communicative gesture, permits access to preliminary stages in the formative process of the anterior action microgeny, which ultimately emerges in vocal articulation.

  19. Gestures as Semiotic Resources in the Mathematics Classroom

    Science.gov (United States)

    Arzarello, Ferdinando; Paola, Domingo; Robutti, Ornella; Sabena, Cristina

    2009-01-01

    In this paper, we consider gestures as part of the resources activated in the mathematics classroom: speech, inscriptions, artifacts, etc. As such, gestures are seen as one of the semiotic tools used by students and teacher in mathematics teaching-learning. To analyze them, we introduce a suitable model, the "semiotic bundle." It allows focusing…

  20. The Effects of Prohibiting Gestures on Children's Lexical Retrieval Ability

    Science.gov (United States)

    Pine, Karen J.; Bird, Hannah; Kirk, Elizabeth

    2007-01-01

    Two alternative accounts have been proposed to explain the role of gestures in thinking and speaking. The Information Packaging Hypothesis (Kita, 2000) claims that gestures are important for the conceptual packaging of information before it is coded into a linguistic form for speech. The Lexical Retrieval Hypothesis (Rauscher, Krauss & Chen, 1996)…

  1. Mothers' labeling responses to infants' gestures predict vocabulary outcomes.

    Science.gov (United States)

    Olson, Janet; Masur, Elise Frank

    2015-11-01

    Twenty-nine infants aged 1;1 and their mothers were videotaped while interacting with toys for 18 minutes. Six experimental stimuli were presented to elicit infant communicative bids in two communicative intent contexts - proto-declarative and proto-imperative. Mothers' verbal responses to infants' gestural and non-gestural communicative bids were coded for object and action labels. Relations between maternal labeling responses and infants' vocabularies at 1;1 and 1;5 were examined. Mothers' labeling responses to infants' gestural communicative bids were concurrently and predictively related to infants' vocabularies, whereas responses to non-gestural communicative bids were not. Mothers' object labeling following gestures in the proto-declarative context mediated the association from infants' gesturing in the proto-declarative context to concurrent noun lexicons and was the strongest predictor of subsequent noun lexicons. Mothers' action labeling after infants' gestural bids in the proto-imperative context predicted infants' acquisition of action words at 1;5. Findings show that mothers' responsive labeling explains specific relations between infants' gestures and their vocabulary development.

  2. Gesture and Identity in the Teaching and Learning of Italian

    Science.gov (United States)

    Peltier, Ilaria Nardotto; McCafferty, Steven G.

    2010-01-01

    This study investigated the use of mimetic gestures of identity by foreign language teachers of Italian and their students in college classes as a form of meaning-making. All four of the teachers were found to use a variety of Italian gestures as a regular aspect of their teaching and presentation of self. Students and teachers also were found to…

  3. A Hierarchical Model for Continuous Gesture Recognition Using Kinect

    DEFF Research Database (Denmark)

    Jensen, Søren Kejser; Moesgaard, Christoffer; Nielsen, Christoffer Samuel

    2013-01-01

    Human gesture recognition is an area which has been studied thoroughly in recent years, and close to 100% recognition rates in restricted environments have been achieved, often either with single separated gestures in the input stream, or with computationally intensive systems. The results are unf...

  4. Diagram, Gesture, Agency: Theorizing Embodiment in the Mathematics Classroom

    Science.gov (United States)

    de Freitas, Elizabeth; Sinclair, Nathalie

    2012-01-01

    In this paper, we use the work of philosopher Gilles Chatelet to rethink the gesture/diagram relationship and to explore the ways mathematical agency is constituted through it. We argue for a fundamental philosophical shift to better conceptualize the relationship between gesture and diagram, and suggest that such an approach might open up new…

  5. Body in Mind: How Gestures Empower Foreign Language Learning

    Science.gov (United States)

    Macedonia, Manuela; Knosche, Thomas R.

    2011-01-01

    It has previously been demonstrated that enactment (i.e., performing representative gestures during encoding) enhances memory for concrete words, in particular action words. Here, we investigate the impact of enactment on abstract word learning in a foreign language. We further ask if learning novel words with gestures facilitates sentence…

  6. The Role of Conversational Hand Gestures in a Narrative Task

    Science.gov (United States)

    Jacobs, Naomi; Garnham, Alan

    2007-01-01

    The primary functional role of conversational hand gestures in narrative discourse is disputed. A novel experimental technique investigated whether gestures function primarily to aid speech production by the speaker, or communication to the listener. The experiment involved repeated narration of a cartoon story or stories to a single or multiple…

  7. Probing the Mental Representation of Gesture: Is Handwaving Spatial?

    Science.gov (United States)

    Wagner, Susan M.; Nusbaum, Howard; Goldin-Meadow, Susan

    2004-01-01

    What type of mental representation underlies the gestures that accompany speech? We used a dual-task paradigm to compare the demands gesturing makes on visuospatial and verbal working memories. Participants in one group remembered a string of letters (verbal working memory group) and those in a second group remembered a visual grid pattern…

  8. Reduction in gesture during the production of repeated references

    NARCIS (Netherlands)

    Hoetjes, M.W.; Koolen, R.M.F.; Goudbeek, M.B.; Krahmer, E.J.; Swerts, M.G.J.

    2015-01-01

    In dialogue, repeated references contain fewer words (which are also acoustically reduced) and fewer gestures than initial ones. In this paper, we describe three experiments studying to what extent gesture reduction is comparable to other forms of linguistic reduction. Since previous studies showed

  9. View Invariant Gesture Recognition using 3D Motion Primitives

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Moeslund, Thomas B.

    2008-01-01

    This paper presents a method for automatic recognition of human gestures. The method works with 3D image data from a range camera to achieve invariance to viewpoint. The recognition is based solely on motion from characteristic instances of the gestures. These instances are denoted 3D motion...

  10. Neural correlates of gesture processing across human development.

    Science.gov (United States)

    Wakefield, Elizabeth M; James, Thomas W; James, Karin H

    2013-01-01

    Co-speech gesture facilitates learning to a greater degree in children than in adults, suggesting that the mechanisms underlying the processing of co-speech gesture differ as a function of development. We suggest that this may be partially due to children's lack of experience producing gesture, leading to differences in the recruitment of sensorimotor networks when comparing adults to children. Here, we investigated the neural substrates of gesture processing in a cross-sectional sample of 5-, 7.5-, and 10-year-old children and adults and focused on relative recruitment of a sensorimotor system that included the precentral gyrus (PCG) and the posterior middle temporal gyrus (pMTG). Children and adults were presented with videos in which communication occurred through different combinations of speech and gesture during a functional magnetic resonance imaging (fMRI) session. Results demonstrated that the PCG and pMTG were recruited to different extents in the two populations. We interpret these novel findings as supporting the idea that gesture perception (pMTG) is affected by a history of gesture production (PCG), revealing the importance of considering gesture processing as a sensorimotor process.

  11. Brane-world extra dimensions in light of GW170817

    Science.gov (United States)

    Visinelli, Luca; Bolis, Nadia; Vagnozzi, Sunny

    2018-03-01

    The search for extra dimensions is a challenging endeavor to probe physics beyond the Standard Model. The joint detection of gravitational waves (GW) and electromagnetic (EM) signals from the merging of a binary system of compact objects like neutron stars can help constrain the geometry of extra dimensions beyond our 3+1 spacetime dimensions. A theoretically well-motivated possibility is that our observable Universe is a 3+1-dimensional hypersurface, or brane, embedded in a higher 4+1-dimensional anti-de Sitter (AdS5) spacetime, in which gravity is the only force which propagates through the infinite bulk space, while other forces are confined to the brane. In these types of brane-world models, GW and EM signals between two points on the brane would, in general, travel different paths. This would result in a time lag between the detection of GW and EM signals emitted simultaneously from the same source. We consider the recent near-simultaneous detection of the GW event GW170817 from the LIGO/Virgo collaboration, and its EM counterpart, the short gamma-ray burst GRB170817A detected by the Fermi Gamma-ray Burst Monitor and the International Gamma-Ray Astrophysics Laboratory Anti-Coincidence Shield spectrometer. Assuming the standard Λ-cold-dark-matter scenario and performing a likelihood analysis which takes into account astrophysical uncertainties associated with the measured time lag, we set an upper limit of ℓ ≲ 0.535 Mpc at 68% confidence level on the AdS5 radius of curvature ℓ. Although the bound is not competitive with current Solar System constraints, it is the first time that data from a multimessenger GW-EM measurement is used to constrain extra-dimensional models. Thus, our work provides a proof of principle for the possibility of using multimessenger astronomy for probing the geometry of our space-time.

  12. Conserving GW scheme for nonequilibrium quantum transport in molecular contacts

    DEFF Research Database (Denmark)

    Thygesen, Kristian Sommer; Rubio, Angel

    2008-01-01

    We give a detailed presentation of our recent scheme to include correlation effects in molecular transport calculations using the nonequilibrium Keldysh formalism. The scheme is general and can be used with any quasiparticle self-energy, but for practical reasons, we mainly specialize to the so-called GW self-energy, widely used to describe the quasiparticle band structures and spectroscopic properties of extended and low-dimensional systems. We restrict the GW self-energy to a finite, central region containing the molecule, and we describe the leads by density functional theory (DFT). A minimal ...

  13. Benchmarking GW against exact diagonalization for semiempirical models

    DEFF Research Database (Denmark)

    Kaasbjerg, Kristen; Thygesen, Kristian Sommer

    2010-01-01

    We calculate ground-state total energies and single-particle excitation energies of seven pi-conjugated molecules described with the semiempirical Pariser-Parr-Pople model using self-consistent many-body perturbation theory at the GW level and exact diagonalization. For the total energies GW capt... (Hubbard models) where correlation effects dominate over screening/relaxation effects. Finally we illustrate the important role of the derivative discontinuity of the true exchange-correlation functional by computing the exact Kohn-Sham levels of benzene.

  14. Gestural acquisition in great apes: the Social Negotiation Hypothesis.

    Science.gov (United States)

    Pika, Simone; Fröhlich, Marlen

    2018-01-24

    Scientific interest in the acquisition of gestural signalling dates back to the heroic figure of Charles Darwin. More than a hundred years later, we still know relatively little about the underlying evolutionary and developmental pathways involved. Here, we shed new light on this topic by providing the first systematic, quantitative comparison of gestural development in two different chimpanzee (Pan troglodytes verus and Pan troglodytes schweinfurthii) subspecies and communities living in their natural environments. We conclude that the three most predominant perspectives on gestural acquisition-Phylogenetic Ritualization, Social Transmission via Imitation, and Ontogenetic Ritualization-do not satisfactorily explain our current findings on gestural interactions in chimpanzees in the wild. In contrast, we argue that the role of interactional experience and social exposure on gestural acquisition and communicative development has been strongly underestimated. We introduce the revised Social Negotiation Hypothesis and conclude with a brief set of empirical desiderata for instigating more research into this intriguing research domain.

  15. Barack Obama’s pauses and gestures in humorous speeches

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2017-01-01

    The main aim of this paper is to investigate speech pauses and gestures as means to engage the audience and present the humorous message in an effective way. The data consist of two speeches by the USA president Barack Obama at the 2011 and 2016 Annual White House Correspondents' Association Dinner ... produced significantly more hand gestures in 2016 than in 2011. An analysis of the hand gestures produced by Barack Obama in two political speeches held at the United Nations in 2011 and 2016 confirms that the president produced significantly fewer communicative co-speech hand gestures during his speeches ... and they emphasise the speech segment which they follow or precede. We also found a highly significant correlation between Obama's speech pauses and audience response. Obama produces numerous head movements, facial expressions and hand gestures and their functions are related to both discourse content and structure ...

  16. Predicting an Individual’s Gestures from the Interlocutor’s Co-occurring Gestures and Related Speech

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2016-01-01

    to the prediction of gestures of the same type of the other subject. In this work, we also want to determine whether the speech segments to which these gestures are related to contribute to the prediction. The results of our pilot experiments show that a Naive Bayes classifier trained on the duration and shape...
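    The record above describes training a Naive Bayes classifier on the duration and shape of one speaker's gestures to predict the interlocutor's gestures. As an illustration only (the paper's actual features, data, and labels are not shown here), a minimal categorical Naive Bayes sketch with Laplace smoothing over hypothetical toy features:

    ```python
    from collections import Counter, defaultdict
    import math

    def train_nb(samples):
        """samples: list of (features_dict, label). Returns class counts and per-(feature, label) value counts."""
        labels = Counter(lbl for _, lbl in samples)
        likes = defaultdict(Counter)  # (feature_name, label) -> Counter of observed values
        for feats, lbl in samples:
            for name, val in feats.items():
                likes[(name, lbl)][val] += 1
        return labels, likes

    def predict_nb(labels, likes, feats, alpha=1.0):
        """Pick the label maximizing log prior + smoothed log likelihoods."""
        total = sum(labels.values())
        best, best_lp = None, float("-inf")
        for lbl, cnt in labels.items():
            lp = math.log(cnt / total)
            for name, val in feats.items():
                c = likes.get((name, lbl), Counter())
                # Size of this feature's value vocabulary, for Laplace smoothing.
                vocab = len({v for key in likes if key[0] == name for v in likes[key]})
                lp += math.log((c[val] + alpha) / (sum(c.values()) + alpha * max(vocab, 1)))
            if lp > best_lp:
                best, best_lp = lbl, lp
        return best

    # Hypothetical toy data: speaker A's gesture (duration bucket, handedness)
    # -> does speaker B respond with a gesture of the same type?
    train = [
        ({"dur": "short", "hand": "single"}, "no"),
        ({"dur": "short", "hand": "single"}, "no"),
        ({"dur": "long",  "hand": "both"},   "yes"),
        ({"dur": "long",  "hand": "both"},   "yes"),
        ({"dur": "long",  "hand": "single"}, "yes"),
    ]
    labels, likes = train_nb(train)
    print(predict_nb(labels, likes, {"dur": "long", "hand": "both"}))  # -> yes
    ```

    The feature names and labels here are invented for the sketch; the study's actual annotation scheme (gesture shape categories, durations) is not reproduced in the record.
    
    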

  17. Co-verbal gestures among speakers with aphasia: Influence of aphasia severity, linguistic and semantic skills, and hemiplegia on gesture employment in oral discourse.

    Science.gov (United States)

    Kong, Anthony Pak-Hin; Law, Sam-Po; Wat, Watson Ka-Chun; Lai, Christy

    2015-01-01

    The use of co-verbal gestures is common in human communication and has been reported to assist word retrieval and to facilitate verbal interactions. This study systematically investigated the impact of aphasia severity, integrity of semantic processing, and hemiplegia on the use of co-verbal gestures, with reference to gesture forms and functions, by 131 normal speakers, 48 individuals with aphasia and their controls. All participants were native Cantonese speakers. It was found that the severity of aphasia and verbal-semantic impairment was associated with significantly more co-verbal gestures. However, there was no relationship between right-sided hemiplegia and gesture employment. Moreover, significantly more gestures were employed by the speakers with aphasia, but about 10% of them did not gesture. Among those who used gestures, content-carrying gestures, including iconic, metaphoric, deictic gestures, and emblems, served the function of enhancing language content and providing information additional to the language content. As for the non-content carrying gestures, beats were used primarily for reinforcing speech prosody or guiding speech flow, while non-identifiable gestures were associated with assisting lexical retrieval or with no specific functions. The above findings would enhance our understanding of the use of various forms of co-verbal gestures in aphasic discourse production and their functions. Speech-language pathologists may also refer to the current annotation system and the results to guide clinical evaluation and remediation of gestures in aphasia. None. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Co-verbal gestures among speakers with aphasia: Influence of aphasia severity, linguistic and semantic skills, and hemiplegia on gesture employment in oral discourse

    Science.gov (United States)

    Kong, Anthony Pak-Hin; Law, Sam-Po; Wat, Watson Ka-Chun; Lai, Christy

    2015-01-01

    The use of co-verbal gestures is common in human communication and has been reported to assist word retrieval and to facilitate verbal interactions. This study systematically investigated the impact of aphasia severity, integrity of semantic processing, and hemiplegia on the use of co-verbal gestures, with reference to gesture forms and functions, by 131 normal speakers, 48 individuals with aphasia and their controls. All participants were native Cantonese speakers. It was found that the severity of aphasia and verbal-semantic impairment was associated with significantly more co-verbal gestures. However, there was no relationship between right-sided hemiplegia and gesture employment. Moreover, significantly more gestures were employed by the speakers with aphasia, but about 10% of them did not gesture. Among those who used gestures, content-carrying gestures, including iconic, metaphoric, deictic gestures, and emblems, served the function of enhancing language content and providing information additional to the language content. As for the non-content carrying gestures, beats were used primarily for reinforcing speech prosody or guiding speech flow, while non-identifiable gestures were associated with assisting lexical retrieval or with no specific functions. The above findings would enhance our understanding of the use of various forms of co-verbal gestures in aphasic discourse production and their functions. Speech-language pathologists may also refer to the current annotation system and the results to guide clinical evaluation and remediation of gestures in aphasia. PMID:26186256

  19. Conductor gestures influence evaluations of ensemble performance.

    Science.gov (United States)

    Morrison, Steven J; Price, Harry E; Smedley, Eric M; Meals, Cory D

    2014-01-01

    Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor's gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance: articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and non-majors (N = 285) viewed sixteen 30 s performances and evaluated the quality of the ensemble's articulation, dynamics, technique, and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble's performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity.

  20. Conductor gestures influence evaluations of ensemble performance

    Directory of Open Access Journals (Sweden)

    Steven eMorrison

    2014-07-01

    Full Text Available Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor’s gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance, articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and nonmajors (N = 285) viewed sixteen 30-second performances and evaluated the quality of the ensemble’s articulation, dynamics, technique and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble’s performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity.

  1. View invariant gesture recognition using the CSEMSwissRanger SR-2 camera

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Moeslund, Thomas B.; Fihl, Preben

    2008-01-01

    by a hysteresis bandpass filter. Gestures are represented by concatenating harmonic shape contexts over time. This representation allows for a view invariant matching of the gestures. The system is trained on gestures from one viewpoint and evaluated on gestures from other viewpoints. The results show...

  2. Depth camera-based 3D hand gesture controls with immersive tactile feedback for natural mid-air gesture interactions.

    Science.gov (United States)

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-08

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.
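    The abstract above names DTW (dynamic time warping) as the gesture recognition step layered on top of hand tracking. As a rough illustration of how DTW-based gesture matching works in general, the sketch below computes the classic DTW distance between two 1-D trajectories and classifies a query gesture by its nearest template. This is not the paper's implementation: the template names and sequences are invented, and the real system operates on tracked 3-D hand positions rather than scalar sequences.

    ```python
    def dtw_distance(a, b):
        # Classic dynamic time warping: fill an (n+1) x (m+1) cumulative
        # cost table; the bottom-right cell is the warped distance.
        n, m = len(a), len(b)
        INF = float("inf")
        cost = [[INF] * (m + 1) for _ in range(n + 1)]
        cost[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                     cost[i][j - 1],      # stretch b
                                     cost[i - 1][j - 1])  # step both
        return cost[n][m]

    # Nearest-template classification (templates and query are made up):
    templates = {"swipe": [0, 1, 2, 3, 4], "wave": [0, 2, 0, 2, 0]}
    query = [0, 1, 1, 2, 3, 4]
    best = min(templates, key=lambda k: dtw_distance(query, templates[k]))
    ```

    Because DTW warps the time axis, the query above (a "swipe" performed slightly slower, with one repeated sample) still matches the swipe template at zero cost, which is the property that makes DTW attractive for gestures executed at varying speeds.
    
    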

  3. Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

    Directory of Open Access Journals (Sweden)

    Kwangtaek Kim

    2015-01-01

    Full Text Available Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user’s hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user’s gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.

  4. Individual differences in frequency and saliency of speech-accompanying gestures : the role of cognitive abilities and empathy

    OpenAIRE

    Chu, Mingyuan; Meyer, Antje; Foulkes, Lucy; Kita, Sotaro

    2014-01-01

    The present study concerns individual differences in gesture production. We used correlational and multiple regression analyses to examine the relationship between individuals’ cognitive abilities and empathy levels and their gesture frequency and saliency. We chose predictor variables according to experimental evidence of the functions of gesture in speech production and communication. We examined 3 types of gestures: representational gestures, conduit gestures, and palm-revealing gestures. ...

  5. 200 GW for Germany; 200 Gigawatt fuer Deutschland

    Energy Technology Data Exchange (ETDEWEB)

    Fuhs, Michael; Enkhardt, Sandra

    2012-11-01

    200 GW of solar power, i.e. seven times as much as today: is that a realistic goal, or just propaganda from a lobby seeking to secure a good living for manufacturers and installers? There is much to suggest that it may be a socially relevant goal.

  6. Properties of the Binary Black Hole Merger GW150914

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.T.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, R.D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, M.J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, A.L.S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, J.G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, T.C; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderon Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Diaz, J. Casanueva; Casentini, C.; Caudill, S.; Cavaglia, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Baiardi, L. Cerboni; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Qian; Chua, S. E.; Chung, E.S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, A.C.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, A.L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Debra, D.; Debreczeni, G.; Degallaix, J.; De laurentis, M.; Deleglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.A.; Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Diaz, M. C.; Di Fiore, L.; Giovanni, M.G.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H. -B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, T. M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.M.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. 
J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M; Fournier, J. -D.; Franco, S; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.P.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; Gonzalez, Idelmis G.; Castro, J. M. Gonzalez; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.M.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Buffoni-Hall, R.; Hall, E. D.; Hammond, G.L.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, P.J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C. -J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J. -M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, D.H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jimenez-Forteza, F.; Johnson, W.; Johnson-McDaniel, N. K.; Jones, I.D.; Jones, R.; Jonker, R. J. 
G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.H.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kefelian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.E.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan., S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Krolak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C.H.; Lee, K.H.; Lee, M.H.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lousto, C. O.; Lovelace, G.; Lueck, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magana-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marka, S.; Marka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R.M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. 
D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, J.C.; Moraru, D.; Gutierrez Moreno, M.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P.G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Gutierrez-Neri, M.; Neunzert, A.; Newton-Howes, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J.; Oh, S. H.; Ohme, F.; Oliver, M. B.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pan, Y.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. 
G.; Puncken, O.; Punturo, M.; Puppo, P.; Puerrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosinska, D.; Roever, C.; Rowan, S.; Ruediger, A.; Ruggi, P.; Ryan, K.A.; Sachdev, P.S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J; Schmidt, P.; Schnabel, R.B.; Schofield, R. M. S.; Schoenbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, M.S.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, António Dias da; Simakov, D.; Singer, A; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, R. J. E.; Smith, N.D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson-Moore, P.; Stone, J.R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepanczyk, M. J.; Tacca, M.D.; Talukder, D.; Tanner, D. B.; Tapai, M.; Tarabrin, S. 
P.; Taracchini, A.; Taylor, W.R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Toyra, D.; Travasso, F.; Traylor, G.; Trifiro, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; Van Beuzekom, Martin; van den Brand, J. F. J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; van der Sluys, M. V.; van Heijningen, J. V.; Vano-Vinuales, A.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasuth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P.J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.M.; Wessels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J.L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrozny, A.; Zangrando, L.; Zanolin, M.; Zendri, J. -P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; Boyle, M.; Bruegmann, B.; Campanelli, M.; Clark, M.; Hamberger, D.; Kidder, L. E.; Kinsey, M.; Laguna, P.; Ossokine, S.; Scheel, M. A.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.

    2016-01-01

    On September 14, 2015, the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected a gravitational-wave transient (GW150914); we characterize the properties of the source and its parameters. The data around the time of the event were analyzed coherently across the LIGO network using a...

  7. Observing gravitational-wave transient GW150914 with minimal assumptions

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwa, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. C.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, R.D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, M.J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackburn, L.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, A.L.S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, J.G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, T.C; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brocki, P.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderon Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Diaz, J. 
Casanueva; Casentini, C.; Caudill, S.; Cavaglia, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Baiardi, L. Cerboni; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chatterji, S.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Qian; Chua, S. E.; Chung, E.S.; Ciani, G.; Clara, F.; Clark, J. A.; Clark, M.; Cleva, F.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, A.C.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, A.L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Debra, D.; Debreczeni, G.; Degallaix, J.; De laurentis, M.; Deleglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.A.; DeRosa, R. T.; Rosa, R.; DeSalvo, R.; Dhurandhar, S.; Diaz, M. C.; Di Fiore, L.; Giovanni, M.G.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H. -B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, T. M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.M.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. 
C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. R.; Flaminio, R.; Fletcher, M; Fournier, J. -D.; Franco, S; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritsche, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.P.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; Gonzalez, Idelmis G.; Castro, J. M. Gonzalez; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.M.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; de Haas, R.; Hacker, J. J.; Buffoni-Hall, R.; Hall, E. D.; Hammond, G.L.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, P.J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C. -J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hinder, I.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J. -M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, D.H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jimenez-Forteza, F.; Johnson, W.; Jones, I.D.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.H.; Kanner, J. 
B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kefelian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.E.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan., S.; Khan, Z.; Khazanov, E. A.; Kijhunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.M.; King, E. J.; King, P. J.; Kinsey, M.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krolak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Laguna, P.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, R.; Leavey, S.; Lebigot, E. O.; Lee, C.H.; Lee, K.H.; Lee, M.H.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lueck, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magana-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marka, S.; Marka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R.M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mende, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. 
L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, J.C.; Moraru, D.; Gutierrez Moreno, M.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P.G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Gutierrez-Neri, M.; Neunzert, A.; Newton-Howes, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J.; Oh, S. H.; Ohme, F.; Oliver, M. B.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Page, J.; Paris, H. R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prolchorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Puerrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. 
S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosinska, D.; Rowan, S.; Ruediger, A.; Ruggi, P.; Ryan, K.A.; Sachdev, P.S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J; Schmidt, P.; Schnabel, R.B.; Schofield, R. M. S.; Schoenbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, M.S.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shithriar, M. S.; Shaltev, M.; Shao, Z.M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, António Dias da; Simakov, D.; Singer, A; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, R. J. E.; Smith, N.D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, J.R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepanczyk, M. J.; Tacca, M.D.; Talukder, D.; Tanner, D. B.; Tapai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, W.R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. 
V.; Torrie, C. I.; Toyra, D.; Travasso, F.; Traylor, G.; Trifiro, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlhruch, H.; Vajente, G.; Valdes, G.; Van Bakel, N.; Van Beuzekom, Martin; Van den Brand, J. F. J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasuth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, R. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.M.; Wessels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, D.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J.L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrozny, A.; Zangrando, L.; Zanolin, M.; Zendri, J. -P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.

    2016-01-01

    The gravitational-wave signal GW150914 was first identified on September 14, 2015, by searches for short-duration gravitational-wave transients. These searches identify time-correlated transients in multiple detectors with minimal assumptions about the signal morphology, allowing them to be

  8. First Hours of the GW170817 Kilonova: Why So Blue?

    Science.gov (United States)

    Kohler, Susanna

    2018-04-01

    Now that the hubbub of GW170817, the first coincident detection of gravitational waves and an electromagnetic signature, has died down, scientists are left with the task of taking the spectrum-spanning observations and piecing them together into a coherent picture. Researcher Iair Arcavi examines one particular question: what caused the blue color in the early hours of the neutron-star merger? Observations of the GW170817 kilonova by Hubble over a week-long span. [ESA/Hubble] Early Color: When the two neutron stars of GW170817 merged in August of last year, they produced not only gravitational waves, but a host of electromagnetic signatures. Chief among these was a flare of emission thought to be powered by the radioactive decay of heavy elements formed in the merger: a kilonova. The emission during a kilonova can come from a number of different sources: from the heavy-element-rich tidal tails of the disrupting neutron stars, from fast, light polar jets, or from a wind or a disk outflow. Each of these components could reveal different information about the original neutron stars and the merger. It's therefore important that we understand the sources of the emission that we observed in the GW170817 kilonova. In particular, we'd like to know where the early blue emission came from that was spotted in the first hours of the kilonova. The combined ultraviolet-optical-infrared light curve of the GW170817 kilonova. The rise in the emission occurs on roughly a day-long timescale. [Arcavi 2018] Comparing Models: To explore this question, Iair Arcavi (Einstein Fellow at University of California, Santa Barbara and Las Cumbres Observatory) compiled infrared through ultraviolet observations of the GW170817 kilonova from nearly 20 different telescopes. To try to distinguish between possible sources, Arcavi then compared the resulting combined light curves to a variety of models. Arcavi found that the light curves for the GW170817 kilonova indicate an initial 24-hour rise of emission. This

  9. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech

    Science.gov (United States)

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances. PMID:26925010

  10. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech

    Directory of Open Access Journals (Sweden)

    Paul Adam Bremner

    2016-02-01

    Full Text Available Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realised remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances.

  11. AGILE Observations of the Gravitational-wave Source GW170104

    Energy Technology Data Exchange (ETDEWEB)

    Verrecchia, F.; Pittori, C.; Lucarelli, F. [ASI Space Science Data Center (SSDC), via del Politecnico, I-00133 Roma (Italy); Tavani, M.; Ursi, A.; Argan, A.; Evangelista, Y.; Minervini, G.; Cardillo, M.; Piano, G. [INAF-IAPS, via del Fosso del Cavaliere 100, I-00133 Roma (Italy); Donnarumma, I. [ASI, via del Politecnico snc, I-00133 Roma (Italy); Bulgarelli, A.; Fuschino, F.; Labanti, C.; Fioretti, V. [INAF-IASF-Bologna, via Gobetti 101, I-40129 Bologna (Italy); Marisaldi, M. [Birkeland Centre for Space Science, Department of Physics and Technology, University of Bergen, Bergen (Norway); Giuliani, A. [INAF-IASF Milano, via E.Bassini 15, I-20133 Milano (Italy); Longo, F. [Dipartimento di Fisica, Università di Trieste and INFN, via Valerio 2, I-34127 Trieste (Italy); Munar-Adrover, P. [Unitat de Física de les Radiacions, Departament de Física, and CERES-IEEC, Universitat Autònoma de Barcelona, E-08193 Bellaterra (Spain); Pilia, M. [INAF, Osservatorio Astronomico di Cagliari, via della Scienza 5, I-09047 Selargius (Italy); and others

    2017-10-01

    The LIGO/Virgo Collaboration (LVC) detected on 2017 January 4 a significant gravitational-wave (GW) event (now named GW170104). We report in this Letter the main results obtained from the analysis of hard X-ray and gamma-ray data of the AGILE mission that repeatedly observed the GW170104 localization region (LR). At the LVC detection time T₀ AGILE observed about 36% of the LR. The gamma-ray imaging detector did not reveal any significant emission in the energy range 50 MeV–30 GeV. Furthermore, no significant gamma-ray transients were detected in the LR that was repeatedly exposed over timescales of minutes, hours, and days. We also searched for transient emission using data near T₀ of the omnidirectional detector MCAL operating in the energy band 0.4–100 MeV. A refined analysis of MCAL data shows the existence of a weak event (that we call “E2”) with a signal-to-noise ratio of 4.4σ lasting about 32 ms and occurring 0.46 ± 0.05 s before T₀. A study of the MCAL background and of the false-alarm rate of E2 leads to the determination of a post-trial significance between 2.4σ and 2.7σ for a temporal coincidence with GW170104. We note that E2 has characteristics similar to those detected from the weak precursor of GRB 090510. The candidate event E2 is worth consideration for simultaneous detection by other satellites. If associated with GW170104, it shows emission in the MeV band of a short burst preceding the final coalescence by 0.46 s and involving ∼10⁻⁷ of the total rest mass energy of the system.

  12. Workshop meeting

    International Nuclear Information System (INIS)

    Veland, Oeystein

    2004-04-01

    1-2 September 2003 the Halden Project arranged a workshop on 'Innovative Human-System Interfaces and their Evaluation'. This topic is new in the HRP 2003-2005 programme, and it is important to get feedback from member organizations to the work that is being performed in Halden. It is also essential that relevant activities and experiences in this area from the member organizations are shared with the Halden staff and other HRP members. Altogether 25 persons attended the workshop. The workshop had a mixture of presentations and discussions, and was chaired by Dominique Pirus of EDF, France. Day one focused on the HRP/IFE activities on Human-System Interface design, including Function-oriented displays, Ecological Interface Design, Task-oriented displays, as well as work on innovative display solutions for the oil and gas domain. There were also presentations of relevant work in France, Japan and the Czech Republic. The main focus of day two was the verification and validation of human-system interfaces, with presentations of work at HRP on Human-Centered Validation, Criteria-Based System Validation, and Control Room Verification and Validation. The chairman concluded that it was a successful workshop, although one could have had more time for discussions. The Halden Project got valuable feedback and viewpoints on this new topic during the workshop, and will consider all recommendations related to the future work in this area. (Author)

  13. User-Generated Free-Form Gestures for Authentication: Security and Memorability

    OpenAIRE

    Sherman, Michael; Clark, Gradeigh; Yang, Yulong; Sugrim, Shridatt; Modig, Arttu; Lindqvist, Janne; Oulasvirta, Antti; Roos, Teemu

    2014-01-01

    This paper studies the security and memorability of free-form multitouch gestures for mobile authentication. Towards this end, we collected a dataset with a generate-test-retest paradigm where participants (N=63) generated free-form gestures, repeated them, and were later retested for memory. Half of the participants decided to generate one-finger gestures, and the other half generated multi-finger gestures. Although there has been recent work on template-based gestures, there are yet no metr...

  14. Cross-cultural variation of speech-accompanying gesture : a review

    OpenAIRE

    Kita, Sotaro

    2009-01-01

    This article reviews the literature on cross-cultural variation of gestures. Four factors governing the variation were identified. The first factor is the culture-specific convention for form-meaning associations. This factor is involved in well-known cross-cultural differences in emblem gestures (e.g., the OK-sign), as well as pointing gestures. The second factor is culture-specific spatial cognition. Representational gestures (i.e., iconic and deictic gestures) that express spatial contents...

  15. Seeing iconic gestures while encoding events facilitates children's memory of these events

    OpenAIRE

    Aussems, Suzanne; Kita, Sotaro

    2017-01-01

    An experiment with 72 three-year-olds investigated whether encoding events while seeing iconic gestures boosts children's memory representation of these events. The events, shown in videos of actors moving in an unusual manner, were presented with either iconic gestures depicting how the actors performed these actions, interactive gestures, or no gesture. In a recognition memory task, children in the iconic gesture condition remembered actors and actions better than children in the control co...

  16. SEARCH FOR NEUTRINOS IN SUPER-KAMIOKANDE ASSOCIATED WITH GRAVITATIONAL-WAVE EVENTS GW150914 AND GW151226

    International Nuclear Information System (INIS)

    Abe, K.; Haga, K.; Hayato, Y.; Ikeda, M.; Iyogi, K.; Kameda, J.; Kishimoto, Y.; Miura, M.; Moriyama, S.; Nakahata, M.; Nakajima, T.; Nakano, Y.; Nakayama, S.; Orii, A.; Sekiya, H.; Shiozawa, M.; Takeda, A.; Tanaka, H.; Tasaka, S.; Tomura, T.

    2016-01-01

    We report the results from a search in Super-Kamiokande for neutrino signals coincident with the first detected gravitational-wave events, GW150914 and GW151226, as well as LVT151012, using a neutrino energy range from 3.5 MeV to 100 PeV. We searched for coincident neutrino events within a time window of ±500 s around the gravitational-wave detection time. Four neutrino candidates are found for GW150914, and no candidates are found for GW151226. The remaining neutrino candidates are consistent with the expected background events. We calculated the 90% confidence level upper limits on the combined neutrino fluence for both gravitational-wave events, which depends on event energy and topologies. Considering the upward-going muon data set (1.6 GeV–100 PeV), the neutrino fluence limit for each gravitational-wave event is 14–37 (19–50) cm⁻² for muon neutrinos (muon antineutrinos), depending on the zenith angle of the event. In the other data sets, the combined fluence limits for both gravitational-wave events range from 2.4 × 10⁴ to 7.0 × 10⁹ cm⁻².

  17. UsiGesture: Test and Evaluation of an Environment for Integrating Gestures in User Interfaces

    OpenAIRE

    Beuvens, François; Vanderdonckt, Jean

    2014-01-01

    User interfaces allowing gesture recognition and manipulation have become increasingly popular in recent years. It remains a hard task, however, for programmers to develop such interfaces: some knowledge of recognition systems is required, along with user experience and user interface management knowledge. It is often difficult for a single developer to master all of this knowledge alone, which is why a team gathering different skills is usually needed. We previously presented a...

  18. A Modified Tactile Brush Algorithm for Complex Touch Gestures

    Energy Technology Data Exchange (ETDEWEB)

    Ragan, Eric [Texas A&M University]

    2015-01-01

    Several researchers have investigated phantom tactile sensation (i.e., the perception of a nonexistent actuator between two real actuators) and apparent tactile motion (i.e., the perception of a moving actuator due to time delays between onsets of multiple actuations). Prior work has focused primarily on determining appropriate Durations of Stimulation (DOS) and Stimulus Onset Asynchronies (SOA) for simple touch gestures, such as a single finger stroke. To expand upon this knowledge, we investigated complex touch gestures involving multiple, simultaneous points of contact, such as a whole hand touching the arm. To implement complex touch gestures, we modified the Tactile Brush algorithm to support rectangular areas of tactile stimulation.
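
The abstract's notions of Duration of Stimulation (DOS) and Stimulus Onset Asynchrony (SOA) can be made concrete. The sketch below uses the linear SOA model reported for the original Tactile Brush (SOA ≈ 0.32 × duration + 47.3, in milliseconds); the function names and the single-stroke schedule are illustrative assumptions, not the authors' modified algorithm for multi-contact gestures.

```python
def soa_ms(duration_ms):
    """Stimulus Onset Asynchrony for apparent tactile motion, using the
    linear fit reported for the original Tactile Brush:
    SOA = 0.32 * duration + 47.3 (all quantities in milliseconds)."""
    return 0.32 * duration_ms + 47.3

def stroke_schedule(n_actuators, duration_ms):
    """Onset time (ms) of each actuator along a simple single-point
    stroke. Complex multi-contact gestures, as in the abstract, would
    run several such schedules over rectangular actuator regions."""
    soa = soa_ms(duration_ms)
    return [round(i * soa, 1) for i in range(n_actuators)]

# A 4-actuator stroke with 100 ms of stimulation per actuator:
print(stroke_schedule(4, 100))  # [0.0, 79.3, 158.6, 237.9]
```

Each actuator starts one SOA after its neighbor while still overlapping it in time (SOA < DOS here), which is the condition for perceiving continuous motion rather than discrete taps.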

  19. Network workshop

    DEFF Research Database (Denmark)

    Bruun, Jesper; Evans, Robert Harry

    2014-01-01

    This paper describes the background for, realisation of and author reflections on a network workshop held at ESERA2013. As a new research area in science education, networks offer a unique opportunity to visualise and find patterns and relationships in complicated social or academic network data. These include student relations and interactions and epistemic and linguistic networks of words, concepts and actions. Network methodology has already found use in science education research. However, while networks hold the potential for new insights, they have not yet found wide use in the science education research community. With this workshop, participants were offered a way into network science based on authentic educational research data. The workshop was constructed as an inquiry lesson with emphasis on user autonomy. Learning activities had participants choose to work with one of two cases of networks...

  20. What makes a movement a gesture?

    Science.gov (United States)

    Novack, Miriam A; Wakefield, Elizabeth M; Goldin-Meadow, Susan

    2016-01-01

    Theories of how adults interpret the actions of others have focused on the goals and intentions of actors engaged in object-directed actions. Recent research has challenged this assumption, and shown that movements are often interpreted as being for their own sake (Schachner & Carey, 2013). Here we postulate a third interpretation of movement: movement that represents action, but does not literally act on objects in the world. These movements are gestures. In this paper, we describe a framework for predicting when movements are likely to be seen as representations. In Study 1, adults described one of three scenes: (1) an actor moving objects, (2) an actor moving her hands in the presence of objects (but not touching them), or (3) an actor moving her hands in the absence of objects. Participants systematically described the movements as depicting an object-directed action when the actor moved objects, and favored describing the movements as depicting movement for its own sake when the actor produced the same movements in the absence of objects. However, participants favored describing the movements as representations when the actor produced the movements near, but not on, the objects. Study 2 explored two additional features, the form of an actor's hands and the presence of speech-like sounds, to test the effect of context on observers' classification of movement as representational. When movements are seen as representations, they have the power to influence communication, learning, and cognition in ways that movement for its own sake does not. By incorporating representational gesture into our framework for movement analysis, we take an important step towards developing a more cohesive understanding of action-interpretation. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Virtual Workshop

    DEFF Research Database (Denmark)

    Buus, Lillian; Bygholm, Ann

    In relation to the Tutor course in the Mediterranean Virtual University (MVU) project, a virtual workshop “Getting experiences with different synchronous communication media, collaboration, and group work” was held with all partner institutions in January 2006. More than 25 key-tutors within MVU...

  2. Hands in space: gesture interaction with augmented-reality interfaces.

    Science.gov (United States)

    Billinghurst, Mark; Piumsomboon, Tham; Huidong Bai

    2014-01-01

    Researchers at the Human Interface Technology Laboratory New Zealand (HIT Lab NZ) are investigating free-hand gestures for natural interaction with augmented-reality interfaces. They've applied the results to systems for desktop computers and mobile devices.

  3. Holographic Raman Tweezers Controlled by Hand Gestures and Voice Commands

    Czech Academy of Sciences Publication Activity Database

    Tomori, Z.; Antalík, M.; Kesa, P.; Kaňka, Jan; Jákl, Petr; Šerý, Mojmír; Bernatová, Silvie; Zemánek, Pavel

    2013-01-01

    Roč. 3, 2B (2013), s. 331-336 ISSN 2160-8881 Institutional support: RVO:68081731 Keywords : Holographic Optical Tweezers * Raman Tweezers * Natural User Interface * Leap Motion * Gesture Camera Subject RIV: BH - Optics, Masers, Lasers

  4. An Interactive Astronaut-Robot System with Gesture Control

    Directory of Open Access Journals (Sweden)

    Jinguo Liu

    2016-01-01

    Full Text Available Human-robot interaction (HRI) plays an important role in future planetary exploration missions, where astronauts with extravehicular activities (EVA) have to communicate with robot assistants by speech-type or gesture-type user interfaces embedded in their space suits. This paper presents an interactive astronaut-robot system integrating a data-glove with a space suit for the astronaut to use hand gestures to control a snake-like robot. A support vector machine (SVM) is employed to recognize hand gestures, and a particle swarm optimization (PSO) algorithm is used to optimize the parameters of the SVM to further improve its recognition accuracy. Various hand gestures from American Sign Language (ASL) have been selected and used to test and validate the performance of the proposed system.
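
The SVM-plus-PSO pipeline described in this abstract can be sketched in miniature. Since the paper's data and SVM setup are not given here, the objective below is a toy stand-in for cross-validation error over the usual (C, gamma) hyperparameters; the PSO loop itself is the standard algorithm with conventional inertia and acceleration coefficients.

```python
import random

def validation_error(c, gamma):
    """Toy stand-in for SVM cross-validation error, with a single
    minimum at (C, gamma) = (10, 0.1). In the real system this would
    be the classifier's error on held-out data-glove gesture features."""
    return (c - 10.0) ** 2 + (gamma - 0.1) ** 2

def pso(objective, bounds, n_particles=20, n_iters=100, seed=0):
    """Minimal particle swarm optimization over a 2-D box."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position
    pbest_val = [objective(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5                # inertia, cognitive, social weights
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo[d]), hi[d])
            val = objective(*pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, err = pso(validation_error, bounds=([0.1, 0.001], [100.0, 1.0]))
print(f"best (C, gamma) ~ ({best[0]:.2f}, {best[1]:.3f}), error {err:.4f}")
```

Swapping `validation_error` for a real cross-validated SVM score would reproduce the hyperparameter search the abstract describes.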

  5. Iconic gestures prime related concepts: an ERP study.

    Science.gov (United States)

    Wu, Ying Croon; Coulson, Seana

    2007-02-01

    To assess priming by iconic gestures, we recorded EEG (at 29 scalp sites) in two experiments while adults watched short, soundless videos of spontaneously produced, cospeech iconic gestures followed by related or unrelated probe words. In Experiment 1, participants classified the relatedness between gestures and words. In Experiment 2, they attended to stimuli, and performed an incidental recognition memory test on words presented during the EEG recording session. Event-related potentials (ERPs) time-locked to the onset of probe words were measured, along with response latencies and word recognition rates. Although word relatedness did not affect reaction times or recognition rates, contextually related probe words elicited less-negative ERPs than did unrelated ones between 300 and 500 msec after stimulus onset (N400) in both experiments. These findings demonstrate sensitivity to semantic relations between iconic gestures and words in brain activity engendered during word comprehension.

  6. The Aftermath of GW170817: Neutron Star or Black Hole?

    Science.gov (United States)

    Kohler, Susanna

    2018-06-01

    When two neutron stars merged in August of last year, leading to the first simultaneous detection of gravitational waves and electromagnetic signals, we knew this event was going to shed new light on compact-object mergers. A team of scientists says we now have an answer to one of the biggest mysteries of GW170817: after the neutron stars collided, what object was formed? Artist's illustration of the black hole that resulted from GW170817. Some of the material accreting onto the black hole is flung out in a tightly collimated jet. [NASA/CXC/M.Weiss] A Fuzzy Division: Based on gravitational-wave observations, we know that two neutron stars of about 1.48 and 1.26 solar masses merged in GW170817. But the result, an object of 2.7 solar masses, doesn't have a definitive identity; the remnant formed in the merger is either the most massive neutron star known or the least massive black hole known. The theoretical mass division between neutron stars and black holes is fuzzy, depending strongly on what model you use to describe the physics of these objects. Observations fall short as well: the most massive neutron star known is perhaps 2.3 solar masses, and the least massive black hole is perhaps 4 or 5, leaving the location of the dividing line unclear. For this reason, determining the nature of GW170817's remnant is an important target as we analyze past observations of the remnant and continue to make new ones. Chandra images of the field of GW170817 during three separate epochs. Each image is 30 x 30. [Adapted from Pooley et al. 2018] Luckily, we may not have long to wait! Led by David Pooley (Trinity University and Eureka Scientific, Inc.), a team of scientists has obtained new Chandra X-ray observations of the remnant of GW170817. By combining this new data with previous observations, the authors have drawn conclusions about what object was left behind after this fateful merger. X-Rays Provide Answers: X-ray radiation is generated in a merger of two neutron stars when the mergers

  7. Chaotic Music Generation System Using Music Conductor Gesture

    OpenAIRE

    Chen, Shuai; Maeda, Yoichiro; Takahashi, Yasutake

    2013-01-01

    In the research of interactive music generation, we propose a music generation method in which the computer generates music by recognizing a human music conductor's gestures. In this research, the generated music is tuned by the recognized gestures, which set the parameters of the network of chaotic elements in real time. The music conductor's hand motions are detected by a Microsoft Kinect in this system. Music theories are embedded in the algorithm; as a result, the generated music will be ...

  8. Large Scale GW Calculations on the Cori System

    Science.gov (United States)

    Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven

    The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.

  9. f (T ) gravity after GW170817 and GRB170817A

    Science.gov (United States)

    Cai, Yi-Fu; Li, Chunlong; Saridakis, Emmanuel N.; Xue, Ling-Qin

    2018-05-01

    The combined observation of GW170817 and its electromagnetic counterpart GRB170817A reveals that gravitational waves propagate at the speed of light to high precision. We apply the standard analysis of cosmological perturbations, as well as the effective field theory approach, to investigate the experimental consequences for the theory of f (T ) gravity. Our analysis verifies for the first time that the speed of gravitational waves within f (T ) gravity is equal to the light speed, and hence the constraints from GW170817 and GRB170817A are trivially satisfied. Nevertheless, by examining the dispersion relation and the frequency of cosmological gravitational waves, we observe a deviation from the results of general relativity, quantified by a new parameter. Although its value is relatively small in viable f (T ) models, its possible future measurement in advancing gravitational-wave astronomy would be the smoking gun for testing this type of modified gravity.

  10. The Firework of Electromagnetic Counterparts from GW170817

    Science.gov (United States)

    Siegel, Daniel

    2018-01-01

    The gravitational-wave signal of the binary neutron star merger GW170817 was followed by a firework of electromagnetic transients across the entire electromagnetic spectrum. The gamma-ray emission has provided strong evidence for the association of short gamma-ray bursts (SGRBs) with binary neutron star mergers and the ultraviolet, optical, and near-infrared emission is consistent with a kilonova indicative of the formation of heavy elements in the merger ejecta by the rapid neutron capture process (r-process). In this talk, I will discuss and review theoretical scenarios to interpret the gamma-ray, X-ray, and radio observations. I will present recent results from general-relativistic magnetohydrodynamic simulations and discuss possible scenarios and mass ejection mechanisms that can give rise to the observed kilonova features. In particular, I will argue that massive winds from neutrino-cooled post-merger accretion disks most likely synthesized the heavy r-process elements in GW170817.

  11. Fully self-consistent GW calculations for molecules

    DEFF Research Database (Denmark)

    Rostgaard, Carsten; Jacobsen, Karsten Wedel; Thygesen, Kristian Sommer

    2010-01-01

    We calculate single-particle excitation energies for a series of 34 molecules using fully self-consistent GW, one-shot G0W0, Hartree-Fock (HF), and hybrid density-functional theory (DFT). All calculations are performed within the projector-augmented wave method using a basis set of Wannier functions augmented by numerical atomic orbitals. The GW self-energy is calculated on the real frequency axis including its full frequency dependence and off-diagonal matrix elements. The mean absolute error of the ionization potential (IP) with respect to experiment is found to be 4.4, 2.6, 0.8, 0.4, and 0...

  12. Seeing Iconic Gestures While Encoding Events Facilitates Children's Memory of These Events.

    Science.gov (United States)

    Aussems, Suzanne; Kita, Sotaro

    2017-11-08

    An experiment with 72 three-year-olds investigated whether encoding events while seeing iconic gestures boosts children's memory representation of these events. The events, shown in videos of actors moving in an unusual manner, were presented with either iconic gestures depicting how the actors performed these actions, interactive gestures, or no gesture. In a recognition memory task, children in the iconic gesture condition remembered actors and actions better than children in the control conditions. Iconic gestures were categorized based on how much of the actors was represented by the hands (feet, legs, or body). Only iconic hand-as-body gestures boosted actor memory. Thus, seeing iconic gestures while encoding events facilitates children's memory of those aspects of events that are schematically highlighted by gesture. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  13. Comments on Graviton Propagation in Light of GW150914

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri V.

    2016-01-01

    The observation of gravitational waves from the Laser Interferometer Gravitational-Wave Observatory (LIGO) event GW150914 may be used to constrain the possibility of Lorentz violation in graviton propagation, and the observation by the Fermi Gamma-Ray Burst Monitor of a transient source in apparent coincidence may be used to constrain the difference between the velocities of light and gravitational waves: $c_g - c_\gamma < 10^{-17}$.
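
The quoted bound can be sanity-checked with one line of arithmetic: a fractional speed difference accumulates an arrival-time offset of (Δv/c) × D/c over a propagation distance D. Plugging in the publicly reported ~0.4 s GW-to-GBM offset and ~410 Mpc distance for GW150914 (values assumed here, not stated in this record) recovers the 10⁻¹⁷ order of magnitude:

```python
# Order-of-magnitude check of the GW150914/GBM velocity constraint.
# The 0.4 s offset and 410 Mpc distance are assumptions taken from the
# published event parameters, not from this abstract.
C = 2.998e8           # speed of light, m/s
MPC = 3.086e22        # one megaparsec in metres

delta_t = 0.4         # GW-to-gamma-ray arrival offset, s
distance = 410 * MPC  # luminosity distance to GW150914, m

# If the two signals were emitted together, the fractional speed
# difference is bounded by the time offset over the light travel time:
fractional_diff = C * delta_t / distance
print(f"(c_g - c_gamma)/c ~ {fractional_diff:.1e}")  # ~ 9.5e-18
```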

  14. Hybrid gesture recognition system for short-range use

    Science.gov (United States)

    Minagawa, Akihiro; Fan, Wei; Katsuyama, Yutaka; Takebe, Hiroaki; Ozawa, Noriaki; Hotta, Yoshinobu; Sun, Jun

    2012-03-01

    In recent years, various gesture recognition systems have been studied for use in television and video games[1]. In such systems, motion areas ranging from 1 to 3 meters deep have been evaluated[2]. However, with the burgeoning popularity of small mobile displays, gesture recognition systems capable of operating at much shorter ranges have become necessary. The problems related to such systems are exacerbated by the fact that the camera's field of view is unknown to the user during operation, which imposes several restrictions on his/her actions. To overcome the restrictions generated from such mobile camera devices, and to create a more flexible gesture recognition interface, we propose a hybrid hand gesture system, in which two types of gesture recognition modules are prepared and with which the most appropriate recognition module is selected by a dedicated switching module. The two recognition modules of this system are shape analysis using a boosting approach (detection-based approach)[3] and motion analysis using image frame differences (motion-based approach)(for example, see[4]). We evaluated this system using sample users and classified the resulting errors into three categories: errors that depend on the recognition module, errors caused by incorrect module identification, and errors resulting from user actions. In this paper, we show the results of our investigations and explain the problems related to short-range gesture recognition systems.
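
The proposed architecture, two recognition modules behind a dedicated switching module, can be sketched as follows. The switching rule (a simple motion-energy threshold) and the toy gesture rules are illustrative assumptions; the paper's boosting-based shape analysis and frame-difference motion analysis are far richer than these stand-ins.

```python
class DetectionModule:
    """Shape analysis: recognizes a gesture from hand-shape features
    (a stand-in for the paper's boosting-based detection approach)."""
    def recognize(self, frames):
        # Toy rule: an open hand in the last frame means "stop".
        return "stop" if frames[-1]["shape"] == "open" else None

class MotionModule:
    """Motion analysis: recognizes a gesture from frame differences
    (a stand-in for the image-frame-difference approach)."""
    def recognize(self, frames):
        dx = frames[-1]["x"] - frames[0]["x"]
        if dx > 10:
            return "swipe_right"
        if dx < -10:
            return "swipe_left"
        return None

class HybridRecognizer:
    """Switching module: routes the input to the module best suited to
    it. The motion-energy threshold used here is an assumption; the
    abstract does not describe the actual switching criterion."""
    def __init__(self):
        self.detection = DetectionModule()
        self.motion = MotionModule()

    def recognize(self, frames):
        movement = abs(frames[-1]["x"] - frames[0]["x"])
        module = self.motion if movement > 5 else self.detection
        return module.recognize(frames)

recognizer = HybridRecognizer()
still = [{"x": 0, "shape": "open"}, {"x": 1, "shape": "open"}]
moving = [{"x": 0, "shape": "fist"}, {"x": 30, "shape": "fist"}]
print(recognizer.recognize(still))   # prints: stop (detection-based path)
print(recognizer.recognize(moving))  # prints: swipe_right (motion-based path)
```

The three error categories the authors report map directly onto this structure: errors inside a module, errors in the switching decision, and errors in the user's input.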

  15. How iconic gestures enhance communication: an ERP study.

    Science.gov (United States)

    Wu, Ying Choon; Coulson, Seana

    2007-06-01

    EEG was recorded as adults watched short segments of spontaneous discourse in which the speaker's gestures and utterances contained complementary information. Videos were followed by one of four types of picture probes: cross-modal related probes were congruent with both speech and gestures; speech-only related probes were congruent with information in the speech, but not the gesture; and two sorts of unrelated probes were created by pairing each related probe with a different discourse prime. Event-related potentials (ERPs) elicited by picture probes were measured within the time windows of the N300 (250-350 ms post-stimulus) and N400 (350-550 ms post-stimulus). Cross-modal related probes elicited smaller N300 and N400 than speech-only related ones, indicating that pictures were easier to interpret when they corresponded with gestures. N300 and N400 effects were not due to differences in the visual complexity of each probe type, since the same cross-modal and speech-only picture probes elicited N300 and N400 with similar amplitudes when they appeared as unrelated items. These findings extend previous research on gesture comprehension by revealing how iconic co-speech gestures modulate conceptualization, enabling listeners to better represent visuo-spatial aspects of the speaker's meaning.

  16. Pointing and tracing gestures may enhance anatomy and physiology learning.

    Science.gov (United States)

    Macken, Lucy; Ginns, Paul

    2014-07-01

    Currently, instructional effects generated by Cognitive load theory (CLT) are limited to visual and auditory cognitive processing. In contrast, "embodied cognition" perspectives suggest a range of gestures, including pointing, may act to support communication and learning, but there is relatively little research showing benefits of such "embodied learning" in the health sciences. This study investigated whether explicit instructions to gesture enhance learning through its cognitive effects. Forty-two university-educated adults were randomly assigned to conditions in which they were instructed to gesture, or not gesture, as they learnt from novel, paper-based materials about the structure and function of the human heart. Subjective ratings were used to measure levels of intrinsic, extraneous and germane cognitive load. Participants who were instructed to gesture performed better on a knowledge test of terminology and a test of comprehension; however, instructions to gesture had no effect on subjective ratings of cognitive load. This very simple instructional re-design has the potential to markedly enhance student learning of typical topics and materials in the health sciences and medicine.

  17. Stationary Hand Gesture Authentication Using Edit Distance on Finger Pointing Direction Interval

    Directory of Open Access Journals (Sweden)

    Alex Ming Hui Wong

    2016-01-01

    Full Text Available One of the latest authentication methods is by discerning human gestures. Previous research has shown that different people develop distinct gesture behaviours even when performing the same gesture. The hand gesture is one of the most commonly studied gestures in both communication and authentication research, since it requires less room to perform than other bodily gestures. Many types of hand gesture have been investigated, but the stationary hand gesture has yet to be thoroughly explored. General hand gesture authentication has a number of disadvantages and flaws concerning reliability, usability, and computational cost. Although the stationary hand gesture cannot solve all of these problems, it still provides benefits and advantages over other hand gesture authentication methods: it turns the gesture into a motion flow instead of trivial image capturing, requires less room to perform, needs fewer visual cues during performance, and so forth. In this paper, we introduce stationary hand gesture authentication by implementing edit distance on finger pointing direction intervals (ED-FPDI) from hand gestures to model a behaviour-based authentication system. The accuracy rate of the proposed ED-FPDI shows promising results.
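
    The edit-distance core of such a scheme can be sketched as a standard Levenshtein computation over sequences of quantized pointing directions (an illustrative sketch; the direction labels and threshold idea are hypothetical, not the ED-FPDI implementation itself):

```python
def edit_distance(a, b):
    """Levenshtein distance between two symbol sequences via dynamic
    programming (insertions, deletions, substitutions; unit costs)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[m][n]

# A login attempt could then be accepted when the distance between the
# enrolled and observed direction sequences falls below a threshold:
enrolled = ["N", "NE", "E", "SE"]   # hypothetical quantized directions
observed = ["N", "E", "E", "SE"]
```

    Here `edit_distance(enrolled, observed)` is 1 (a single substitution), so a threshold of, say, 2 would accept the attempt.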

  18. Getting to the elephants: Gesture and preschoolers' comprehension of route direction information.

    Science.gov (United States)

    Austin, Elizabeth E; Sweller, Naomi

    2017-11-01

    During early childhood, children find spatial tasks such as following novel route directions challenging. Spatial tasks place demands on multiple cognitive processes, including language comprehension and memory, at a time in development when resources are limited. As such, gestures accompanying route directions may aid comprehension and facilitate task performance by scaffolding cognitive processes, including language and memory processing. This study examined the effect of presenting gesture during encoding on spatial task performance during early childhood. Three- to five-year-olds were presented with verbal route directions through a zoo-themed spatial array and, depending on assigned condition (no gesture, beat gesture, or iconic/deictic gesture), accompanying gestures. Children presented with verbal route directions accompanied by a combination of iconic (pantomime) and deictic (pointing) gestures verbally recalled more than children presented with beat gestures (rhythmic hand movements) or no gestures accompanying the route directions. The presence of gesture accompanying route directions similarly influenced physical route navigation, such that children presented with gesture (beat, pantomime, and pointing) navigated the route more accurately than children presented with no gestures. Across all gesture conditions, location information (e.g., the penguin pond) was recalled more than movement information (e.g., go around) and descriptive information (e.g., bright red). These findings suggest that speakers' gestures accompanying spatial task information influence listeners' recall and task performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. A Comparison of Coverbal Gesture Use in Oral Discourse Among Speakers With Fluent and Nonfluent Aphasia

    Science.gov (United States)

    Law, Sam-Po; Chak, Gigi Wan-Chi

    2017-01-01

    Purpose: Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Method: Multimedia data of discourse samples from these speakers were extracted from the Cantonese AphasiaBank. Gestures were independently annotated on their forms and functions to determine how gesturing rate and distribution of gestures differed across speaker groups. A multiple regression was conducted to determine the most predictive variable(s) for gesture-to-word ratio. Results: Although speakers with nonfluent aphasia gestured most frequently, the rate of gesture use in counterparts with fluent aphasia did not differ significantly from controls. Different patterns of gesture functions in the 3 speaker groups revealed that gesture plays a minor role in lexical retrieval whereas its role in enhancing communication dominates among the speakers with aphasia. The percentages of complete sentences and dysfluency strongly predicted the gesturing rate in aphasia. Conclusions: The current results supported the sketch model of language–gesture association. The relationship between gesture production and linguistic abilities and clinical implications for gesture-based language intervention for speakers with aphasia are also discussed. PMID:28609510

  20. Cannabis-based medicines--GW pharmaceuticals: high CBD, high THC, medicinal cannabis--GW pharmaceuticals, THC:CBD.

    Science.gov (United States)

    2003-01-01

    GW Pharmaceuticals is undertaking a major research programme in the UK to develop and market distinct cannabis-based prescription medicines [THC:CBD, High THC, High CBD] in a range of medical conditions. The cannabis for this programme is grown in a secret location in the UK. It is expected that the product will be marketed in the US in late 2003. GW's cannabis-based products include selected phytocannabinoids from cannabis plants, including Δ9-tetrahydrocannabinol (THC) and cannabidiol (CBD). The company is investigating their use in three delivery systems, including sublingual spray, sublingual tablet and inhaled (but not smoked) dosage forms. The technology is protected by patent applications. Four different formulations are currently being investigated, including High THC, THC:CBD (narrow ratio), THC:CBD (broad ratio) and High CBD. GW is also developing a specialist security technology that will be incorporated in all its drug delivery systems. This technology allows for the recording and remote monitoring of patient usage to prevent any potential abuse of its cannabis-based medicines. GW plans to enter into agreements with other companies following phase III development, to secure the best commercialisation terms for its cannabis-based medicines. In June 2003, GW announced that exclusive commercialisation rights for the drug in the UK had been licensed to Bayer AG. The drug will be marketed under the Sativex brand name. This agreement also provides Bayer with an option to expand their license to include the European Union and certain world markets. GW was granted a clinical trial exemption certificate by the Medicines Control Agency to conduct clinical studies with cannabis-based medicines in the UK. The exemption includes investigations in the relief of pain of neurological origin and defects of neurological function in the following indications: multiple sclerosis (MS), spinal cord injury, peripheral nerve injury, central nervous system damage, neuroinvasive

  1. Individual differences in frequency and saliency of speech-accompanying gestures: the role of cognitive abilities and empathy.

    Science.gov (United States)

    Chu, Mingyuan; Meyer, Antje; Foulkes, Lucy; Kita, Sotaro

    2014-04-01

    The present study concerns individual differences in gesture production. We used correlational and multiple regression analyses to examine the relationship between individuals' cognitive abilities and empathy levels and their gesture frequency and saliency. We chose predictor variables according to experimental evidence of the functions of gesture in speech production and communication. We examined 3 types of gestures: representational gestures, conduit gestures, and palm-revealing gestures. Higher frequency of representational gestures was related to poorer visual and spatial working memory, spatial transformation ability, and conceptualization ability; higher frequency of conduit gestures was related to poorer visual working memory, conceptualization ability, and higher levels of empathy; and higher frequency of palm-revealing gestures was related to higher levels of empathy. The saliency of all gestures was positively related to level of empathy. These results demonstrate that cognitive abilities and empathy levels are related to individual differences in gesture frequency and saliency.

  2. GestuRe and ACtion Exemplar (GRACE) video database: stimuli for research on manners of human locomotion and iconic gestures.

    Science.gov (United States)

    Aussems, Suzanne; Kwok, Natasha; Kita, Sotaro

    2018-06-01

    Human locomotion is a fundamental class of events, and manners of locomotion (e.g., how the limbs are used to achieve a change of location) are commonly encoded in language and gesture. To our knowledge, there is no openly accessible database containing normed human locomotion stimuli. Therefore, we introduce the GestuRe and ACtion Exemplar (GRACE) video database, which contains 676 videos of actors performing novel manners of human locomotion (i.e., moving from one location to another in an unusual manner) and videos of a female actor producing iconic gestures that represent these actions. The usefulness of the database was demonstrated across four norming experiments. First, our database contains clear matches and mismatches between iconic gesture videos and action videos. Second, the male and female actors whose action videos best matched the gestures perform the same actions in very similar manners and different actions in highly distinct manners. Third, all the actions in the database are distinct from each other. Fourth, adult native English speakers were unable to describe the 26 different actions concisely, indicating that the actions are unusual. This normed stimuli set is useful for experimental psychologists working in the language, gesture, visual perception, categorization, memory, and other related domains.

  3. Collider workshop

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    The promise of initial results after the start of operations at CERN's SPS proton-antiproton collider and the prospects for high energy hadron collisions at Fermilab (Tevatron) and Brookhaven (ISABELLE) provided a timely impetus for the recent Topical Workshop on 'Forward Collider Physics', held at Madison, Wisconsin, from 10-12 December. It became the second such workshop to be held, the first having been in 1979 at the College de France, Paris. The 100 or so participants had the chance to hear preliminary results from the UA1, UA4 and UA5 experiments at the CERN SPS collider, together with other new data, including that from proton-antiproton runs at the CERN Intersecting Storage Rings

  4. Workshop presentations

    International Nuclear Information System (INIS)

    Sanden, Per-Olof; Edland, Anne; Reiersen, Craig; Mullins, Peter; Ingemarsson, Karl-Fredrik; Bouchard, Andre; Watts, Germaine; Johnstone, John; Hollnagel, Erik; Ramberg, Patric; Reiman, Teemu

    2009-01-01

    An important part of the workshop was a series of invited presentations. The presentations were intended both to provide the participants with an understanding of various organisational approaches and activities and to stimulate the exchange of ideas during the small group discussion sessions. The presentation subjects ranged from current organisational regulations and licensee activities to new organisational research and the benefits of viewing organisations from a different perspective. There were more than a dozen invited presentations. The initial set of presentations gave the participants an overview of the background, structure, and aims of the workshop. This included a short presentation on the results from the regulatory responses to the pre-workshop survey. Representatives from four countries (Sweden, Canada, Finland, and the United Kingdom) expanded upon their survey responses with detailed presentations on both regulatory and licensee safety-related organisational activities in their countries. There were also presentations on new research concerning how to evaluate safety critical organisations and on a resilience engineering perspective to safety critical organisations. Below is the list of the presentations, the slides of which are available in Appendix 2: 1 - Workshop Welcome (Per-Olof Sanden); 2 - CSNI Working Group on Human and Organisational Factors (Craig Reiersen); 3 - Regulatory expectations on justification of suitability of licensee organisational structures, resources and competencies (Anne Edland); 4 - Justifying the suitability of licensee organisational structures, resources and competencies (Karl-Fredrik Ingemarsson); 5 - Nuclear Organisational Suitability in Canada (Andre Bouchard); 6 - Designing and Resourcing for Safety and Effectiveness (Germaine Watts); 7 - Organisational Suitability - What do you need and how do you know that you've got it? (Craig Reiersen); 8 - Suitability of Organisations - UK Regulator's View (Peter

  5. Antipneumococcal activities of two novel macrolides, GW 773546 and GW 708408, compared with those of erythromycin, azithromycin, clarithromycin, clindamycin, and telithromycin.

    Science.gov (United States)

    Matic, Vlatka; Kosowska, Klaudia; Bozdogan, Bulent; Kelly, Linda M; Smith, Kathy; Ednie, Lois M; Lin, Gengrong; Credito, Kim L; Clark, Catherine L; McGhee, Pamela; Pankuch, Glenn A; Jacobs, Michael R; Appelbaum, Peter C

    2004-11-01

    The MICs of GW 773546, GW 708408, and telithromycin for 164 macrolide-susceptible and 161 macrolide-resistant pneumococci were low. The MICs of GW 773546, GW 708408, and telithromycin for macrolide-resistant strains were similar, irrespective of the resistance genotypes of the strains. Clindamycin was active against all macrolide-resistant strains except those with erm(B) and one strain with a 23S rRNA mutation. GW 773546, GW 708408, and telithromycin at two times their MICs were bactericidal after 24 h for 7 to 8 of 12 strains. Serial passages of 12 strains in the presence of sub-MICs yielded 54 mutants, 29 of which had changes in the L4 or L22 protein or the 23S rRNA sequence. Among the macrolide-susceptible strains, resistant mutants developed most rapidly after passage in the presence of clindamycin, GW 773546, erythromycin, azithromycin, and clarithromycin and slowest after passage in the presence of GW 708408 and telithromycin. Selection of strains for which MICs were ≥0.5 µg/ml from susceptible parents occurred only with erythromycin, azithromycin, clarithromycin, and clindamycin; 36 resistant clones from susceptible parent strains had changes in the sequences of the L4 or L22 protein or 23S rRNA. No mef(E) strains yielded resistant clones after passage in the presence of erythromycin and azithromycin. Selection with GW 773546, GW 708408, telithromycin, and clindamycin in two mef(E) strains did not raise the erythromycin, azithromycin, and clarithromycin MICs more than twofold. There were no changes in the ribosomal protein (L4 or L22) or 23S rRNA sequences for 15 of 18 mutants selected for macrolide resistance; 3 mutants had changes in the L22-protein sequence. GW 773546, GW 708408, and telithromycin selected clones for which MICs were 0.03 to >2.0 µg/ml. Single-step studies showed mutation frequencies <4.3 × 10⁻³ for resistant strains. The postantibiotic effects of GW 773546, GW 708408, and telithromycin were 2.4 to 9.8 h.

  6. Great ape gestures: intentional communication with a rich set of innate signals.

    Science.gov (United States)

    Byrne, R W; Cartmill, E; Genty, E; Graham, K E; Hobaiter, C; Tanner, J

    2017-09-08

    Great apes give gestures deliberately and voluntarily, in order to influence particular target audiences, whose direction of attention they take into account when choosing which type of gesture to use. These facts make the study of ape gesture directly relevant to understanding the evolutionary precursors of human language; here we present an assessment of ape gesture from that perspective, focusing on the work of the "St Andrews Group" of researchers. Intended meanings of ape gestures are relatively few and simple. As with human words, ape gestures often have several distinct meanings, which are effectively disambiguated by behavioural context. Compared to the signalling of most other animals, great ape gestural repertoires are large. Because of this, and the relatively small number of intended meanings they achieve, ape gestures are redundant, with extensive overlaps in meaning. The great majority of gestures are innate, in the sense that the species' biological inheritance includes the potential to develop each gestural form and use it for a specific range of purposes. Moreover, the phylogenetic origin of many gestures is relatively old, since gestures are extensively shared between different genera in the great ape family. Acquisition of an adult repertoire is a process of first exploring the innate species potential for many gestures and then gradual restriction to a final (active) repertoire that is much smaller. No evidence of syntactic structure has yet been detected.

  7. XMM-NEWTON SLEW SURVEY OBSERVATIONS OF THE GRAVITATIONAL WAVE EVENT GW150914

    Energy Technology Data Exchange (ETDEWEB)

    Troja, E. [NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771 (United States); Read, A. M. [Department of Physics and Astronomy, Leicester University, Leicester LE1 7RH (United Kingdom); Tiengo, A. [Istituto Universitario di Studi Superiori, piazza della Vittoria 15, I-27100 Pavia (Italy); Salvaterra, R. [Istituto di Astrofisica Spaziale e Fisica Cosmica Milano, INAF, via E. Bassini 15, I-20133 Milano (Italy)

    2016-05-01

    The detection of the first gravitational wave (GW) transient GW150914 prompted an extensive campaign of follow-up observations at all wavelengths. Although no dedicated XMM-Newton observations have been performed, the satellite passed through the GW150914 error region during normal operations. Here we report the analysis of the data taken during these satellite slews performed two hours and two weeks after the GW event. Our data cover 1.1 and 4.8 deg² of the final GW localization region. No X-ray counterpart to GW150914 is found down to a sensitivity of 6 × 10⁻¹³ erg cm⁻² s⁻¹ in the 0.2–2 keV band. Nevertheless, these observations show the great potential of XMM-Newton slew observations for searching for the electromagnetic counterparts of GW events. A series of adjacent slews performed in response to a GW trigger would take ≲1.5 days to cover most of the typical GW credible region. We discuss this scenario and its prospects for detecting the X-ray counterpart of future GW detections.

  8. THE CONTRIBUTION OF GESTURES TO PERSONAL BRANDING

    Directory of Open Access Journals (Sweden)

    Brînduşa-Mariana Amălăncei

    2015-07-01

    Full Text Available A form of (self-)promotion but also an authentic strategic choice, the personal brand has become a topical preoccupation of marketing specialists. Personal branding or self-marketing represents an innovative concept that associates the efficiency of personal development with the effectiveness of communication and marketing techniques adapted to the individual and that comprises the entire collection of techniques allowing the identification and promotion of the self/individual. The main objective is a clear communication with regard to personal identity, no matter by means of which method, so that it gives uniqueness and offers a competitive advantage. Although online promotion is increasingly gaining ground for the creation of a personal brand, an individual’s verbal and nonverbal behaviour represent very important differentiating elements. Starting from the premise that gestures often complement, anticipate, substitute or contradict the verbal, we will endeavour to highlight a number of significations that can be attributed to the various body movements and that can successfully contribute to the creation of a powerful personal brand.

  9. Hegel’s Gesture Towards Radical Cosmopolitanism

    Directory of Open Access Journals (Sweden)

    Shannon Brincat

    2009-09-01

    Full Text Available This is a preliminary argument of a much larger research project inquiring into the relation between Hegel’s philosophical system and the project of emancipation in Critical International Relations Theory. Specifically, the paper examines how Hegel’s theory of recognition gestures towards a form of radical cosmopolitanism in world politics to ensure the conditions of rational freedom for all humankind. Much of the paper is a ground-clearing exercise defining what is ‘living’ in Hegel’s thought for emancipatory approaches in world politics, to borrow from Croce’s now famous question. It focuses on Hegel’s unique concept of freedom, which places recognition as central in the formation of self-consciousness and therefore as a key determinant in the conditions necessary for human freedom to emerge in political community. While further research is needed to ascertain the precise relationship between Hegel’s recognition theoretic, emancipation and cosmopolitanism, it is contended that the intersubjective basis of Hegel’s concept of freedom through recognition necessitates some form of radical cosmopolitanism that ensures successful processes of recognition between all peoples, the precise institutional form of which remains unspecified.

  10. Grammatical Aspect and Gesture in French: A kinesiological approach

    Directory of Open Access Journals (Sweden)

    Доминик Бутэ

    2016-12-01

    Full Text Available In this paper, we defend the idea that research on Gesture with Speech can provide ways of studying speakers’ conceptualization of grammatical notions as they are speaking. Expressing an idea involves a dynamic interplay between our construal, shaped by the sensori-motoric and interactive experiences linked to that idea, the plurisemiotic means at our disposal for expressing it, and the linguistic category available for its expression in our language. By analyzing the expression of aspect in Speech with Gesture (GeSp) in semi-guided oral interactions, we would like to make a new contribution to the field of aspect by exploring how speakers’ construal of aspectual differences grammaticalized in their language may be enacted and visible in gesture. More specifically, we want to see the degree to which event structure differences expressed in different grammatical aspects (perfective and imperfective) correlate with kinesiological features of the gestures. To this end, we will focus on the speed and flow of the movements as well as on the segments involved (fingers, hand, forearm, arm, shoulder). A kinesiological approach to gestures enables us to analyze the movements of human bodies according to a biomechanical point of view that includes physiological features. This study is the first contribution focused on the links between speech and gesture in French in the domain of grammatical aspect. Grammatical aspect was defined by Comrie (1976 [1989]) as involving the internal unfurling of the process: «[...] tense is a deictic category, i.e. locates situations in time, usually with reference to the present moment [...]. Aspect is not concerned with relating time of the situation to any other time-point, but rather with the internal temporal constituency of the one situation; one could state the difference as one between situation-internal time (aspect) and situation-external time (tense)» (Comrie, 1976 [1989]: 5). Can kinesic features express and make

  11. Preserved Imitation of Known Gestures in Children with High-Functioning Autism

    Science.gov (United States)

    Carmo, Joana C.; Rumiati, Raffaella I.; Siugzdaite, Roma; Brambilla, Paolo

    2013-01-01

    It has been suggested that children with autism are particularly deficient at imitating novel gestures or gestures without goals. In the present study, we asked high-functioning autistic children and age-matched typically developing children to imitate several types of gestures that could be either already known or novel to them. Known gestures either conveyed a communicative meaning (i.e., intransitive) or involved the use of objects (i.e., transitive). We observed a significant interaction between gesture type and group of participants, with children with autism performing known gestures better than novel gestures. However, imitation of intransitive and transitive gestures did not differ across groups. These findings are discussed in light of a dual-route model for action imitation. PMID:24062956

  12. Verbal working memory predicts co-speech gesture: evidence from individual differences.

    Science.gov (United States)

    Gillespie, Maureen; James, Ariel N; Federmeier, Kara D; Watson, Duane G

    2014-08-01

    Gesture facilitates language production, but there is debate surrounding its exact role. It has been argued that gestures lighten the load on verbal working memory (VWM; Goldin-Meadow, Nusbaum, Kelly, & Wagner, 2001), but gestures have also been argued to aid in lexical retrieval (Krauss, 1998). In the current study, 50 speakers completed an individual differences battery that included measures of VWM and lexical retrieval. To elicit gesture, each speaker described short cartoon clips immediately after viewing. Measures of lexical retrieval did not predict spontaneous gesture rates, but lower VWM was associated with higher gesture rates, suggesting that gestures can facilitate language production by supporting VWM when resources are taxed. These data also suggest that individual variability in the propensity to gesture is partly linked to cognitive capacities. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Web-based interactive drone control using hand gesture

    Science.gov (United States)

    Zhao, Zhenfei; Luo, Hao; Song, Guang-Hua; Chen, Zhou; Lu, Zhe-Ming; Wu, Xiaofeng

    2018-01-01

    This paper develops a drone control prototype based on web technology with the aid of hand gestures. The uplink control commands and downlink data (e.g., video) are transmitted by WiFi communication, and all the information exchange is realized on the web. The control command is translated from various predetermined hand gestures. Specifically, the hardware of this friendly interactive control system is composed of a quadrotor drone, a computer vision-based hand gesture sensor, and a cost-effective computer. The software is simplified as a web-based user interface program. Aided by natural hand gestures, this system significantly reduces the complexity of traditional human-computer interaction, making remote drone operation more intuitive. Meanwhile, a web-based automatic control mode is provided in addition to the hand gesture control mode. For both operation modes, no extra application program needs to be installed on the computer. Experimental results demonstrate the effectiveness and efficiency of the proposed system, including control accuracy, operation latency, etc. This system can be used in many applications, such as controlling a drone in a global positioning system-denied environment or by handlers without professional drone control knowledge, since it is easy to get started.
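
    Translating predetermined hand gestures into control commands, as described above, amounts to a lookup over a fixed vocabulary. A minimal sketch (the gesture labels and command names here are hypothetical, not taken from this system):

```python
# Hypothetical mapping from recognized hand-gesture labels to drone commands.
GESTURE_TO_COMMAND = {
    "open_palm": "hover",
    "fist": "land",
    "point_up": "ascend",
    "point_down": "descend",
    "swipe_left": "yaw_left",
    "swipe_right": "yaw_right",
}

def translate(gesture_label, default="hover"):
    """Map a recognized gesture to a control command, falling back to a
    safe default for unrecognized labels."""
    return GESTURE_TO_COMMAND.get(gesture_label, default)
```

    Falling back to a safe default for unrecognized labels matters in this setting, since a misread gesture should never produce an arbitrary flight command.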

  14. Autonomous learning in gesture recognition by using lobe component analysis

    Science.gov (United States)

    Lu, Jian; Weng, Juyang

    2007-02-01

    Gesture recognition is a new human-machine interface method implemented by pattern recognition (PR). In order to assure robot safety when gestures are used in robot control, the interface must be implemented reliably and accurately. As in other PR applications, (1) feature selection (or model establishment) and (2) training from samples largely affect the performance of gesture recognition. For (1), a simple model with 6 feature points at the shoulders, elbows, and hands is established. The gestures to be recognized are restricted to still arm gestures, and the movement of the arms is not considered. These restrictions reduce misrecognition and are not unreasonable. For (2), a new biological network method, called lobe component analysis (LCA), is used in unsupervised learning. Lobe components, corresponding to high concentrations in the probability of the neuronal input, are orientation-selective cells that follow the Hebbian rule and lateral inhibition. Owing to the advantage of the LCA method for balanced learning between global and local features, a large number of samples can be used in learning efficiently.
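
    The Hebbian-plus-lateral-inhibition update described here can be caricatured as a winner-take-all step (a deliberately simplified sketch; LCA's actual amnesic-averaging and normalization details differ, and the function name is an assumption):

```python
import numpy as np

def lca_step(weights, x, lr=0.1):
    """One simplified LCA-style update: the lobe component (row of
    `weights`) most responsive to input x wins (lateral inhibition
    approximated as winner-take-all), moves toward x under a Hebbian
    rule, and is renormalized to unit length."""
    winner = int(np.argmax(weights @ x))
    weights[winner] += lr * (x - weights[winner])
    weights[winner] /= np.linalg.norm(weights[winner]) + 1e-12
    return winner
```

    Repeating this step over many input samples lets each component specialize on one high-probability region of the input space, which is the balance between global and local features the abstract refers to.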

  16. Pantomimes are special gestures which rely on working memory.

    Science.gov (United States)

    Bartolo, A; Cubelli, R; Della Sala, S; Drei, S

    2003-12-01

    The case is reported of a patient who consistently presented with overt deficits in producing pantomimes in the absence of any other deficit in producing meaningful gestures. This pattern of spared and impaired abilities is difficult to reconcile with the current layout of cognitive models of praxis. The patient also showed clear impairment in a dual-task paradigm, a test taxing the co-ordination aspect of working memory, yet performed normally on a series of other neuropsychological measures assessing language, visuo-spatial functions, reasoning, and executive function. A specific working memory impairment associated with a pantomiming deficit, in the absence of any other disorder in the production of meaningful gestures, suggested a way to modify the model to account for the data. Pantomimes are a particular category of gestures: meaningful, yet novel. We posit that by their very nature they call for the intervention of a mechanism that integrates and synthesises perceptual inputs together with information made available from the action semantics (knowledge about objects and functions) and the output lexicon (stored procedural programmes). This processing stage, conceived as a temporary workspace where gesture information is actively manipulated, would generate new motor programmes to carry out pantomimes. The model of gesture production is refined to include this workspace.

  17. Comparison of gesture and conventional interaction techniques for interventional neuroradiology.

    Science.gov (United States)

    Hettig, Julian; Saalfeld, Patrick; Luz, Maria; Becker, Mathias; Skalej, Martin; Hansen, Christian

    2017-09-01

    Interaction with radiological image data and volume renderings within a sterile environment is a challenging task. Clinically established methods such as joystick control and task delegation can be time-consuming and error-prone and interrupt the workflow. New touchless input modalities may have the potential to overcome these limitations, but their value compared to established methods is unclear. We present a comparative evaluation to analyze the value of two gesture input modalities (Myo Gesture Control Armband and Leap Motion Controller) versus two clinically established methods (task delegation and joystick control). A user study was conducted with ten experienced radiologists by simulating a diagnostic neuroradiological vascular treatment with two frequently used interaction tasks in an experimental operating room. The input modalities were assessed using task completion time, perceived task difficulty, and subjective workload. Overall, the clinically established method of task delegation performed best under the study conditions. In general, gesture control failed to exceed the clinical input approach. However, the Myo Gesture Control Armband showed potential for the simple image selection task. Novel input modalities have the potential to take over single tasks more efficiently than clinically established methods. The results of our user study show the relevance of task characteristics, such as task complexity, to performance with specific input modalities. Accordingly, future work should consider task characteristics to provide a useful gesture interface for a specific use case instead of an all-in-one solution.

  18. Workshops som forskningsmetode

    OpenAIRE

    Ørngreen, Rikke; Levinsen, Karin Tweddell

    2017-01-01

    This paper contributes to knowledge on workshops as a research methodology, and specifically on how such workshops pertain to e-learning. A literature review illustrated that workshops are discussed from three different perspectives: workshops as a means, workshops as practice, and workshops as a research methodology. Focusing primarily on the latter, this paper presents five studies on upper secondary and higher education teachers’ professional development and on teaching and learning...

  19. Creating Fantastic PI Workshops

    Energy Technology Data Exchange (ETDEWEB)

    Biedermann, Laura B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Clark, Blythe G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Colbert, Rachel S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dagel, Amber Lynn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gupta, Vipin P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hibbs, Michael R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perkins, David Nikolaus [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); West, Roger Derek [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    The goal of this SAND report is to provide guidance for other groups hosting workshops and peer-to-peer learning events at Sandia. To that end, it details our team structure, how we brainstormed workshop topics, and how we developed the workshop structure. A Workshop “Nuts and Bolts” section provides our timeline and checklist for workshop activities. The survey section provides examples of the questions we asked and of how we adapted the workshop in response to the feedback.

  20. Desnarrativas: workshop

    Directory of Open Access Journals (Sweden)

    Ivânia Marques

    2014-08-01

    Full Text Available This is a report of a teacher workshop: an encounter among dialogues, pictures, and possibilities of deconstruction in multiple directions. It enables studies that inspire debate in favor of images. Images are loaded with clichés, and they risk breaking with the documentary/real character of photography. This leads us to think about the non-neutrality of an image and how place is hegemonically imposed on us. The workshop does away with blocking forces in a playful experimentation, extended into compositions with photographs, monotype printing, and different ways of perceiving space, dialogues, exchanges, poems and art.

  1. Workshop experience

    Directory of Open Access Journals (Sweden)

    Georgina Holt

    2007-04-01

    Full Text Available The setting for the workshop was a heady mix of history, multiculturalism and picturesque riverscapes. Within the group there was, as in many food studies, a preponderance of female scientists (or ethnographers), but the group interacted on lively, non-gendered terms - focusing instead on an appreciation of local food and an enthusiasm for research shared by all, and on points of theoretical variance within that. The food provided by our hosts was of the very highest eating and local food qualities...

  2. Breaking the theoretical scaling limit for predicting quasiparticle energies: the stochastic GW approach.

    Science.gov (United States)

    Neuhauser, Daniel; Gao, Yi; Arntsen, Christopher; Karshenas, Cyrus; Rabani, Eran; Baer, Roi

    2014-08-15

    We develop a formalism to calculate the quasiparticle energy within the GW many-body perturbation correction to the density functional theory. The occupied and virtual orbitals of the Kohn-Sham Hamiltonian are replaced by stochastic orbitals used to evaluate the Green function G, the polarization potential W, and, thereby, the GW self-energy. The stochastic GW (sGW) formalism relies on novel theoretical concepts such as stochastic time-dependent Hartree propagation, stochastic matrix compression, and spatial or temporal stochastic decoupling techniques. Beyond the theoretical interest, the formalism enables linear scaling GW calculations breaking the theoretical scaling limit for GW as well as circumventing the need for energy cutoff approximations. We illustrate the method for silicon nanocrystals of varying sizes with N_{e}>3000 electrons.
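For context, the objects named in the abstract combine in the standard (deterministic) GW expression for the self-energy; this is the textbook form, not the stochastic-orbital formulation of the paper:

```latex
\Sigma(\mathbf{r},\mathbf{r}';\omega)
  = \frac{i}{2\pi}\int_{-\infty}^{\infty} d\omega'\,
    e^{i\eta\omega'}\,
    G(\mathbf{r},\mathbf{r}';\omega+\omega')\,
    W(\mathbf{r},\mathbf{r}';\omega')
```

The quasiparticle energy then follows as a first-order correction to the Kohn-Sham eigenvalue, $\varepsilon_n^{\mathrm{QP}} \approx \varepsilon_n^{\mathrm{KS}} + \langle \phi_n | \Sigma(\varepsilon_n^{\mathrm{QP}}) - v_{\mathrm{xc}} | \phi_n \rangle$. The stochastic formulation evaluates these same quantities with random orbitals in place of explicit sums over occupied and virtual states, which is what yields the linear scaling.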

  3. A common functional neural network for overt production of speech and gesture.

    Science.gov (United States)

    Marstaller, L; Burianová, H

    2015-01-22

    The perception of co-speech gestures, i.e., hand movements that co-occur with speech, has been investigated by several studies. The results show that the perception of co-speech gestures engages a core set of frontal, temporal, and parietal areas. However, no study has yet investigated the neural processes underlying the production of co-speech gestures. Specifically, it remains an open question whether Broca's area is central to the coordination of speech and gestures as has been suggested previously. The objective of this study was to use functional magnetic resonance imaging to (i) investigate the regional activations underlying overt production of speech, gestures, and co-speech gestures, and (ii) examine functional connectivity with Broca's area. We hypothesized that co-speech gesture production would activate frontal, temporal, and parietal regions that are similar to areas previously found during co-speech gesture perception and that both speech and gesture as well as co-speech gesture production would engage a neural network connected to Broca's area. Whole-brain analysis confirmed our hypothesis and showed that co-speech gesturing did engage brain areas that form part of networks known to subserve language and gesture. Functional connectivity analysis further revealed a functional network connected to Broca's area that is common to speech, gesture, and co-speech gesture production. This network consists of brain areas that play essential roles in motor control, suggesting that the coordination of speech and gesture is mediated by a shared motor control network. Our findings thus lend support to the idea that speech can influence co-speech gesture production on a motoric level. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  4. Lexical learning in mild aphasia: gesture benefit depends on patholinguistic profile and lesion pattern.

    Science.gov (United States)

    Kroenke, Klaus-Martin; Kraft, Indra; Regenbrecht, Frank; Obrig, Hellmuth

    2013-01-01

    Gestures accompany speech and enrich human communication. When aphasia interferes with verbal abilities, gestures become even more relevant, compensating for and/or facilitating verbal communication. However, small-scale clinical studies have yielded diverging results with regard to a therapeutic gesture benefit for lexical retrieval. Based on recent functional neuroimaging results delineating a speech-gesture integration network for lexical learning in healthy adults, we hypothesized that the commonly observed variability may stem from differential patholinguistic profiles, in turn depending on lesion pattern. We therefore used a controlled novel word learning paradigm to probe the impact of gestures on lexical learning in the lesioned language network. Fourteen patients with chronic left hemispheric lesions and mild residual aphasia learned 30 novel words for manipulable objects over four days. Half of the words were trained with gestures while the other half were trained purely verbally. For the gesture condition, root words were visually presented (e.g., Klavier [piano]), followed by videos of the corresponding gestures and the auditory presentation of the novel words (e.g., /krulo/). Participants had to repeat the pseudowords and simultaneously reproduce the gestures. In the verbal condition no gesture video was shown and participants only repeated the pseudowords orally. Correlational analyses confirmed that gesture benefit depends on the patholinguistic profile: lesser lexico-semantic impairment correlated with better gesture-enhanced learning. Conversely, largely preserved segmental-phonological capabilities correlated with better purely verbal learning. Moreover, structural MRI analysis disclosed differential lesion patterns, most interestingly suggesting that integrity of the left anterior temporal pole predicted gesture benefit. Thus largely preserved semantic capabilities and relative integrity of a semantic integration network are prerequisites for successful use of

  5. “TOT” phenomena: Gesture production in younger and older adults

    OpenAIRE

    Theochaaropoulou, F.; Cocks, N.; Pring, T.; Dipper, L.

    2015-01-01

    This study explored age-related changes in gesture in order to better understand the relationship between gesture and word retrieval from memory. The frequency of gestures during “Tip-of-the-Tongue” (TOT) states highlights this relationship. There is a lack of evidence describing the form and content of iconic gestures arising spontaneously in such TOT states, and a parallel gap addressing age-related variations. In this study, TOT states were induced in 45 participants from two age groups (o...

  6. Touch-less interaction with medical images using hand & foot gestures

    DEFF Research Database (Denmark)

    Jalaliniya, Shahram; Smith, Jeremiah; Sousa, Miguel

    2013-01-01

    ...control. In this paper, we present a system for gesture-based interaction with medical images based on a single wristband sensor and capacitive floor sensors, allowing for hand and foot gesture input. The first limited evaluation of the system showed an acceptable level of accuracy for 12 different hand & foot gestures; users also found that our combined hand- and foot-based gestures are intuitive for providing input...

  7. The Effect of Iconic and Beat Gestures on Memory Recall in Greek's First and Second Language

    OpenAIRE

    Eleni Ioanna Levantinou

    2016-01-01

    Gestures play a major role in comprehension and memory recall because they aid the efficient channeling of meaning and support listeners’ comprehension and memory. In the present study, the assistance provided by two kinds of gestures (iconic and beat gestures) is tested with regard to memory and recall. The hypothesis investigated here is whether or not iconic and beat gestures assist memory and recall in Greek and in Greek speakers’ second language. Two gr...

  8. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    OpenAIRE

    M. Favorskaya; A. Nosov; A. Popov

    2015-01-01

    Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at any time instant and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain skin dete...

  9. Recent Workshops

    CERN Multimedia

    Wickens, F. J.

    Since the previous edition of ATLAS e-news, the NIKHEF Institute in Amsterdam has hosted not just one but two workshops related to ATLAS TDAQ activities. The first in October was dedicated to the Detector Control System (DCS). Just three institutes, CERN, NIKHEF and St Petersburg, provide the effort for the central DCS services, but each ATLAS sub-detector provides effort for their own controls. Some 30 people attended, including representatives for all of the ATLAS sub-detectors, representatives of the institutes working on the central services and the project leader of JCOP, which brings together common aspects of detector controls across the LHC experiments. During the three-day workshop the common components were discussed, and each sub-detector described their experiences and plans for their future systems. Whilst many of the components to be used are standard commercial components, a key custom item for ATLAS is the ELMB (Embedded Local Monitor Board). Prototypes for this have now been extensively test...

  10. Gesture analysis of students' majoring mathematics education in micro teaching process

    Science.gov (United States)

    Maldini, Agnesya; Usodo, Budi; Subanti, Sri

    2017-08-01

    In the process of learning, and especially of learning mathematics, the interaction between teachers and students is certainly noteworthy. In these interactions, gestures and other body movements appear spontaneously. Gesture is an important source of information, because it supports oral communication, reduces ambiguity in understanding the concept/meaning of the material, and improves posture. An exploratory research design is particularly suitable for providing an initial illustration of this phenomenon. The goal of the research in this article is to describe the gestures of S1 and S2 students of mathematics education during the micro teaching process. To analyze the subjects' gestures, the researchers used McNeill's classification. The result is that the two subjects used 238 gestures in the micro teaching process as a means of conveying ideas and concepts in mathematics learning. During micro teaching, the subjects used four types of gesture - iconic gestures, deictic gestures, regulator gestures and adapter gestures - as means to facilitate the delivery of the material being taught and communication with the listener. The variance in the gestures that appeared is due to the subjects using different gesture patterns to communicate their own mathematical ideas, so that the intensity of the gestures also differed.
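Tallying coded gestures by McNeill-style category, as done in studies like this one, is a simple frequency count. A sketch follows; the mini-transcript is invented for illustration:

```python
from collections import Counter

# Hypothetical coded transcript: each observed gesture is tagged with one
# of the four categories named in the study.
coded_gestures = ["iconic", "deictic", "iconic", "regulator", "adapter",
                  "deictic", "iconic"]

counts = Counter(coded_gestures)          # raw frequency per category
total = sum(counts.values())
# Relative frequency per category, as a percentage of all coded gestures.
profile = {cat: round(100 * n / total, 1) for cat, n in counts.items()}
```

The resulting profile makes it easy to compare gesture intensity between subjects, which is exactly the kind of variance the study reports.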

  11. Gestural Communication in Children with Autism Spectrum Disorders during Mother-Child Interaction

    Science.gov (United States)

    Mastrogiuseppe, Marilina; Capirci, Olga; Cuva, Simone; Venuti, Paola

    2015-01-01

    Children with autism spectrum disorders display atypical development of gesture production, and gesture impairment is one of the determining factors of autism spectrum disorder diagnosis. Despite the obvious importance of this issue for children with autism spectrum disorder, the literature on gestures in autism is scarce and contradictory. The…

  12. Methodological Reflections on Gesture Analysis in Second Language Acquisition and Bilingualism Research

    Science.gov (United States)

    Gullberg, Marianne

    2010-01-01

    Gestures, i.e. the symbolic movements that speakers perform while they speak, form a closely interconnected system with speech, where gestures serve both addressee-directed ("communicative") and speaker-directed ("internal") functions. This article aims (1) to show that a combined analysis of gesture and speech offers new ways to address…

  13. Parent-Child Gesture Use during Problem Solving in Autistic Spectrum Disorder

    Science.gov (United States)

    Medeiros, Kristen; Winsler, Adam

    2014-01-01

    This study examined the relationship between child language skills and parent and child gestures of 58 youths with and without an autism spectrum disorder (ASD) diagnosis. Frequencies and rates of total gesture use as well as five categories of gestures (deictic, conventional, beat, iconic, and metaphoric) were reliably coded during the…

  14. Traveller: An Interactive Cultural Training System Controlled by User-Defined Body Gestures

    NARCIS (Netherlands)

    Kistler, F.; André, E.; Mascarenhas, S.; Silva, A.; Paiva, A.; Degens, D.M.; Hofstede, G.J.; Krumhuber, E.; Kappas, A.; Aylett, R.

    2013-01-01

    In this paper, we describe a cultural training system based on an interactive storytelling approach and a culturally-adaptive agent architecture, for which a user-defined gesture set was created. 251 full body gestures by 22 users were analyzed to find intuitive gestures for the in-game actions in

  15. Baby Sign but Not Spontaneous Gesture Predicts Later Vocabulary in Children with Down Syndrome

    Science.gov (United States)

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Bailey, Jhonelle; Schmuck, Lauren

    2016-01-01

    Early spontaneous gesture, specifically deictic gesture, predicts subsequent vocabulary development in typically developing (TD) children. Here, we ask whether deictic gesture plays a similar role in predicting later vocabulary size in children with Down Syndrome (DS), who have been shown to have difficulties in speech production, but strengths in…

  16. Effects of eye contact and iconic gestures on message retention in human-robot interaction

    NARCIS (Netherlands)

    Dijk, van E.T.; Torta, E.; Cuijpers, R.H.

    2013-01-01

    The effects of iconic gestures and eye contact on message retention in human-robot interaction were investigated in a series of experiments. A humanoid robot gave short verbal messages to participants, accompanied either by iconic gestures or no gestures while making eye contact with the participant

  17. Gesture and naming therapy for people with severe aphasia: a group study.

    Science.gov (United States)

    Marshall, Jane; Best, Wendy; Cocks, Naomi; Cruice, Madeline; Pring, Tim; Bulcock, Gemma; Creek, Gemma; Eales, Nancy; Mummery, Alice Lockhart; Matthews, Niina; Caute, Anna

    2012-06-01

    In this study, the authors (a) investigated whether a group of people with severe aphasia could learn a vocabulary of pantomime gestures through therapy and (b) compared their learning of gestures with their learning of words. The authors also examined whether gesture therapy cued word production and whether naming therapy cued gestures. Fourteen people with severe aphasia received 15 hr of gesture and naming treatments. Evaluations comprised repeated measures of gesture and word production, comparing treated and untreated items. Baseline measures were stable but improved significantly following therapy. Across the group, improvements in naming were greater than improvements in gesture. This trend was evident in most individuals' results, although 3 participants made better progress in gesture. Gains were item specific, and there was no evidence of cross-modality cueing. Items that received gesture therapy did not improve in naming, and items that received naming therapy did not improve in gesture. Results show that people with severe aphasia can respond to gesture and naming therapies. Given the unequal gains, naming may be a more productive therapy target than gesture for many (although not all) individuals with severe aphasia. The communicative benefits of therapy were not examined but are addressed in a follow-up article.

  18. What is the best strategy for retaining gestures in working memory?

    Science.gov (United States)

    Gimenes, Guillaume; Pennequin, Valérie; Mercer, Tom

    2016-07-01

    This study aimed to determine whether the recall of gestures in working memory could be enhanced by verbal or gestural strategies. We also attempted to examine whether these strategies could help resist verbal or gestural interference. Fifty-four participants were divided into three groups according to the content of the training session. This included a control group, a verbal strategy group (where gestures were associated with labels) and a gestural strategy group (where participants repeated gestures and were told to imagine reproducing the movements). During the experiment, the participants had to reproduce a series of gestures under three conditions: "no interference", gestural interference (gestural suppression) and verbal interference (articulatory suppression). The results showed that task performance was enhanced in the verbal strategy group, but there was no significant difference between the gestural strategy and control groups. Moreover, compared to the "no interference" condition, performance decreased in the presence of gestural interference, except within the verbal strategy group. Finally, verbal interference hindered performance in all groups. The discussion focuses on the use of labels to recall gestures and differentiates the induced strategies from self-initiated strategies.

  19. Domestic Dogs Use Contextual Information and Tone of Voice when following a Human Pointing Gesture

    NARCIS (Netherlands)

    Scheider, Linda; Grassmann, Susanne; Kaminski, Juliane; Tomasello, Michael

    2011-01-01

    Domestic dogs are skillful at using the human pointing gesture. In this study we investigated whether dogs take contextual information into account when following pointing gestures, specifically, whether they follow human pointing gestures more readily in the context in which food has been found

  20. [Verbal and gestural communication in interpersonal interaction with Alzheimer's disease patients].

    Science.gov (United States)

    Schiaratura, Loris Tamara; Di Pastena, Angela; Askevis-Leherpeux, Françoise; Clément, Sylvain

    2015-03-01

    Communication can be defined as a verbal and non-verbal exchange of thoughts and emotions. While the verbal communication deficit in Alzheimer's disease is well documented, very little is known about gestural communication, especially in interpersonal situations. This study examines the production of gestures and its relations with the verbal aspects of communication. Three patients suffering from moderately severe Alzheimer's disease were compared to three healthy adults. Each was given a series of pictures and asked to explain which one she preferred and why. The interpersonal interaction was video recorded. Analyses concerned verbal production (quantity and quality) and gestures. Gestures were either non-representational (i.e., gestures of small amplitude punctuating speech or accentuating some parts of the utterance) or representational (i.e., referring to the object of the speech). Representational gestures were coded as iconic (depicting concrete aspects), metaphoric (depicting abstract meaning) or deictic (pointing toward an object). In comparison with the healthy participants, the patients showed a decrease in the quantity and quality of speech. Nevertheless, their production of gestures was always present. This pattern is in line with the conception that gestures and speech depend on different communication systems and looks inconsistent with the assumption of a parallel dissolution of gesture and speech. Moreover, analyzing the articulation between the verbal and gestural dimensions suggests that representational gestures may compensate for speech deficits. This underlines the importance of gestures in maintaining interpersonal communication.

  1. Does brain injury impair speech and gesture differently?

    Directory of Open Access Journals (Sweden)

    Tilbe Göksun

    2016-09-01

    Full Text Available People often use spontaneous gestures when talking about space, such as when giving directions. In a recent study from our lab, we examined whether focal brain-injured individuals' naming of motion event components of manner and path (represented in English by verbs and prepositions, respectively) is impaired selectively, and whether gestures compensate for impairment in speech. Left or right hemisphere damaged patients and elderly control participants were asked to describe motion events (e.g., walking around) depicted in brief videos. Results suggest that producing verbs and prepositions can be separately impaired in the left hemisphere, and that gesture production compensates for naming impairments when damage involves specific areas in the left temporal cortex.

  2. An Efficient Solution for Hand Gesture Recognition from Video Sequence

    Directory of Open Access Journals (Sweden)

    PRODAN, R.-C.

    2012-08-01

    Full Text Available The paper describes a system of hand gesture recognition by image processing for human-robot interaction. The recognition and interpretation of the hand postures acquired through a video camera allow the control of the robotic arm activity: motion - translation and rotation in 3D - and tightening/releasing the clamp. A gesture dictionary was defined, and heuristic algorithms for recognition were developed and tested. The system can be used for academic and industrial purposes, especially for activities where the movements of the robotic arm are not scheduled in advance, making it easier to train the robot than with a remote control. Besides the gesture dictionary, the novelty of the paper consists in a new technique for detecting the relative positions of the fingers in order to recognize the various hand postures, and in the achievement of a robust system for controlling robots by hand postures.

  3. Hand Gesture Recognition Using Modified 1$ and Background Subtraction Algorithms

    Directory of Open Access Journals (Sweden)

    Hazem Khaled

    2015-01-01

    Full Text Available Computers and computerized machines have tremendously penetrated all aspects of our lives. This raises the importance of the Human-Computer Interface (HCI). Common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which are not enough to convey the latest technology. Hand gestures have become one of the most attractive alternatives to existing traditional HCI techniques. This paper proposes a new hand gesture detection system for human-computer interaction using real-time video streaming. This is achieved by removing the background using an average background algorithm and using the 1$ algorithm for hand template matching. Every hand gesture is then translated into commands that can be used to control robot movements. The simulation results show that the proposed algorithm can achieve a high detection rate and a small recognition time under different light changes, scales, rotations, and backgrounds.
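The average-background step described above can be sketched without any imaging library. The frame sizes, threshold, and update rate below are invented for illustration; a real implementation would operate on full camera frames:

```python
def update_background(bg, frame, alpha=0.05):
    """Running-average background model: every pixel drifts toward the
    current frame, so slow lighting changes are absorbed into bg."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Pixels differing from the background by more than thresh are
    flagged as foreground (e.g., the hand region)."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

# Toy 2x3 grayscale frames: a static scene, then a bright "hand" pixel.
bg = [[10, 10, 10], [10, 10, 10]]
frame = [[10, 10, 200], [10, 10, 10]]
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame)
```

The foreground mask would then feed the posture-matching stage (here, the 1$ template matcher); keeping alpha small prevents a briefly stationary hand from being absorbed into the background.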

  4. Scientific Visualization of Radio Astronomy Data using Gesture Interaction

    Science.gov (United States)

    Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.

    2015-09-01

    MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration which would in turn lead to certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A Heuristic Evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations. However, this is limited by users needing to remember the required gestures. In comparison, the touch-based gesture navigation is typically more familiar to users as these gestures were engineered from standard multi-touch actions. Future work will address a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.
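A rough sketch of how tracked hand motion might be routed to the transforms such a system supports (translation, rotation, scaling); the axis conventions, pinch trigger, and scale factor are invented, not taken from the prototypes described:

```python
def gesture_to_transform(dx, dy, dz, pinch):
    """Route one frame of hand-motion deltas to a volume transform.
    pinch-and-drag scales; dominant depth motion translates along z;
    planar motion orbits (rotates) the volume."""
    if pinch:
        return ("scale", 1.0 + 0.01 * dz)
    if abs(dz) > max(abs(dx), abs(dy)):
        return ("translate_z", dz)
    return ("rotate", dx, dy)
```

The heuristic evaluation's finding (depth interaction is promising but gestures are hard to remember) suggests keeping such a mapping small and mnemonic.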

  5. A Versatile Embedded Platform for EMG Acquisition and Gesture Recognition.

    Science.gov (United States)

    Benatti, Simone; Casamassima, Filippo; Milosevic, Bojan; Farella, Elisabetta; Schönle, Philipp; Fateh, Schekeb; Burger, Thomas; Huang, Qiuting; Benini, Luca

    2015-10-01

    Wearable devices offer interesting features, such as low cost and user friendliness, but their use for medical applications is an open research topic, given the limited hardware resources they provide. In this paper, we present an embedded solution for real-time EMG-based hand gesture recognition. The work focuses on the multi-level design of the system, integrating the hardware and software components to develop a wearable device capable of acquiring and processing EMG signals for real-time gesture recognition. The system combines the accuracy of a custom analog front end with the flexibility of a low power and high performance microcontroller for on-board processing. Our system achieves the same accuracy as high-end and more expensive active EMG sensors used in applications with strict requirements on signal quality. At the same time, due to its flexible configuration, it can be compared to the few wearable platforms designed for EMG gesture recognition available on the market. We demonstrate that we reach similar or better performance while embedding the gesture recognition on board, with the benefit of cost reduction. To validate this approach, we collected a dataset of 7 gestures from 4 users, which was used to evaluate the impact of the number of EMG channels, the number of recognized gestures, and the data rate on the recognition accuracy and on the computational demand of the classifier. As a result, we implemented an SVM recognition algorithm capable of real-time performance on the proposed wearable platform, achieving a classification rate of 90%, which is aligned with state-of-the-art off-line results, and a 29.7 mW power consumption, guaranteeing 44 hours of continuous operation with a 400 mAh battery.
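The on-board pipeline shape (windowed per-channel features, then a lightweight classifier) can be sketched without dependencies. Note the nearest-centroid stand-in below is illustrative only, the paper itself uses an SVM, and all numbers are invented:

```python
import math

def rms_features(window):
    """Per-channel root-mean-square over one analysis window: a common,
    cheap EMG feature suited to microcontroller-class hardware."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

def nearest_centroid(feat, centroids):
    """Classify a feature vector by its closest class centroid
    (a dependency-free stand-in for the SVM used in the paper)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sqdist(feat, centroids[label]))

# Toy 2-channel example: "fist" shows high activity on channel 0.
centroids = {"fist": [0.8, 0.1], "rest": [0.05, 0.05]}
window = [[0.7, -0.9, 0.8], [0.1, -0.1, 0.05]]
label = nearest_centroid(rms_features(window), centroids)
```

The study's accuracy-versus-channels trade-off is then a matter of how many entries each feature vector carries and how many support vectors (or centroids) the on-board classifier must evaluate per window.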

  6. Properties of the Binary Black Hole Merger GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Camp, J. B.

    2016-01-01

    On September 14, 2015, the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected a gravitational-wave transient (GW150914); we characterize the properties of the source and its parameters. The data around the time of the event were analyzed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity. GW150914 was produced by a nearly equal mass binary black hole of masses 36(+5/-4) solar mass and 29(+4/-4) solar mass; for each parameter we report the median value and the range of the 90% credible interval. The dimensionless spin magnitude of the more massive black hole is bound to be less than 0.7 (at 90% probability). The luminosity distance to the source is 410(+160/-180) Mpc, corresponding to a redshift 0.09(+0.03/-0.04) assuming standard cosmology. The source location is constrained to an annulus section of 610 sq deg, primarily in the southern hemisphere. The binary merges into a black hole of mass 62(+4/-4) solar mass and spin 0.67(+0.05/-0.07). This black hole is significantly more massive than any other inferred from electromagnetic observations in the stellar-mass regime.
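    Taking the quoted median values at face value, the mass deficit between the initial and final black holes gives the energy radiated in gravitational waves. This is illustrative arithmetic on the medians only (uncertainties ignored), not part of the paper's analysis.

```python
# Back-of-the-envelope check of the GW150914 median values quoted above.
m1, m2 = 36.0, 29.0          # component masses, solar masses (medians)
m_final = 62.0               # remnant mass, solar masses (median)

e_radiated = m1 + m2 - m_final          # energy radiated in GWs, in M_sun c^2
print(f"radiated energy ~ {e_radiated:.0f} M_sun c^2")   # ~3 M_sun c^2

# Convert to joules via E = m c^2, with M_sun = 1.989e30 kg, c = 2.998e8 m/s.
M_SUN, C = 1.989e30, 2.998e8
print(f"~ {e_radiated * M_SUN * C**2:.1e} J")
```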

  7. GW LIBRAE: STILL HOT EIGHT YEARS POST-OUTBURST

    Energy Technology Data Exchange (ETDEWEB)

    Szkody, Paula; Mukadam, Anjum S. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Gänsicke, Boris T.; Chote, Paul; Toloza, Odette [Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Nelson, Peter; Myers, Gordon; Waagen, Elizabeth O. [AAVSO, 48 Bay State Road, Cambridge, MA 02138 (United States); Sion, Edward M. [Department of Astrophysics and Planetary Science, Villanova University, Villanova, PA 19085 (United States); Sullivan, Denis J. [School of Chemical and Physical Sciences, Victoria University of Wellington, P.O. Box 600, Wellington (New Zealand); Townsley, Dean M., E-mail: szkody@astro.washington.edu [Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487 (United States)

    2016-08-01

    We report continued Hubble Space Telescope (HST) ultraviolet spectra and ground-based optical photometry and spectroscopy of GW Librae eight years after its largest known dwarf nova outburst in 2007. This represents the longest cooling timescale measured for any dwarf nova. The spectra reveal that the white dwarf still remains about 3000 K hotter than its quiescent value. Both ultraviolet and optical light curves show a short period of 364–373 s, similar to one of the non-radial pulsation periods present for years prior to the outburst, and with a similarly large UV/optical amplitude ratio. A large modulation at a period of 2 hr (also similar to that observed prior to outburst) is present in the optical data preceding and during the HST observations, but the satellite observation intervals did not cover the peaks of the optical modulation, so it is not possible to determine its corresponding UV amplitude. The similarity of the short and long periods to quiescent values implies that the pulsating, fast-spinning white dwarf in GW Lib may finally be nearing its quiescent configuration.

  8. Quasiparticle self-consistent GW method: a short summary

    International Nuclear Information System (INIS)

    Kotani, Takao; Schilfgaarde, Mark van; Faleev, Sergey V; Chantis, Athanasios

    2007-01-01

    We have developed the quasiparticle self-consistent GW method (QSGW), a new self-consistent scheme for calculating the electronic structure within the GW approximation. The method is formulated on the idea of self-consistent perturbation theory: the non-interacting Green function G0, which is the starting point from which the GW approximation (GWA) obtains G, is determined self-consistently so as to minimize the perturbative correction generated by the GWA. After self-consistency is attained, we have G0, W (the screened Coulomb interaction), and G determined self-consistently. This G0 can be interpreted as the optimum non-interacting propagator for the quasiparticles. We summarize theoretical arguments that justify QSGW, and we survey the results obtained so far: e.g., band gaps for normal semiconductors are predicted to a precision of 0.1-0.3 eV, and self-consistency including the off-diagonal part is required for NiO and MnO. Some disagreements with experiment remain; however, they are very systematic and can be explained by the neglect of excitonic effects.
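    The self-consistency condition sketched above is often written as an effective static, nonlocal exchange-correlation potential built from the self-energy Σ evaluated at the quasiparticle energies. The following form is the standard QSGW prescription as commonly stated in the literature, reproduced here as a sketch rather than quoted from this paper:

```latex
V^{\mathrm{xc}} = \frac{1}{2}\sum_{ij} \lvert\psi_i\rangle
  \left\{ \operatorname{Re}\,[\Sigma(\varepsilon_i)]_{ij}
        + \operatorname{Re}\,[\Sigma(\varepsilon_j)]_{ij} \right\}
  \langle\psi_j\rvert ,
```

    where the eigenfunctions and eigenvalues of the non-interacting Hamiltonian containing this V^xc define a new G0, and the cycle G0 → W → Σ → V^xc is iterated until G0 no longer changes.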

  9. Properties of the Binary Black Hole Merger GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Carbon Baiardi, L.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. 
M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Johnson-McDaniel, N. K.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalaghatgi, C. 
V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lousto, C. O.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. 
D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pan, Y.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. 
A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. 
G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van der Sluys, M. V.; van Heijningen, J. V.; Vañó-Viñuales, A.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J. L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; Boyle, M.; Brügamin, B.; Campanelli, M.; Clark, M.; Hamberger, D.; Kidder, L. E.; Kinsey, M.; Laguna, P.; Ossokine, S.; Scheel, M. A.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-06-01

    On September 14, 2015, the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected a gravitational-wave transient (GW150914); we characterize the properties of the source and its parameters. The data around the time of the event were analyzed coherently across the LIGO network using a suite of accurate waveform models that describe gravitational waves from a compact binary system in general relativity. GW150914 was produced by a nearly equal mass binary black hole of masses 36(+5/-4) M⊙ and 29(+4/-4) M⊙; for each parameter we report the median value and the range of the 90% credible interval. The dimensionless spin magnitude of the more massive black hole is bound to be <0.7 (at 90% probability). The luminosity distance to the source is 410(+160/-180) Mpc, corresponding to a redshift 0.09(+0.03/-0.04) assuming standard cosmology. The source location is constrained to an annulus section of 610 deg², primarily in the southern hemisphere. The binary merges into a black hole of mass 62(+4/-4) M⊙ and spin 0.67(+0.05/-0.07). This black hole is significantly more massive than any other inferred from electromagnetic observations in the stellar-mass regime.

  10. Smart Remote for the Setup Box Using Gesture Control

    OpenAIRE

    Surepally Uday Kumar; K. Shamini

    2016-01-01

    The basic purpose of this project is to provide a means to control a set-top box capable of infrared communication (in this case, Hathway) using hand gestures. The system thus acts like a remote control for the set-top box, but operated through hand gestures instead of button presses. To send remote control signals, the project uses an infrared LED as a transmitter. Using an infrared receiver, an Arduino can detect the bits being sent by a remo...

  11. Gesture Recognition for Educational Games: Magic Touch Math

    Science.gov (United States)

    Kye, Neo Wen; Mustapha, Aida; Azah Samsudin, Noor

    2017-08-01

    Children nowadays have difficulty learning and understanding basic mathematical operations because they are not interested in studying or learning mathematics. This project proposes an educational game called Magic Touch Math that focuses on basic mathematical operations, targeted at children between three and five years old, and uses gesture recognition to interact with the game. Magic Touch Math was developed in accordance with the Game Development Life Cycle (GDLC) methodology. The prototype has helped children learn basic mathematical operations via intuitive gestures. It is hoped that the application can get children motivated and interested in mathematics.

  12. MICCAI Workshops

    CERN Document Server

    Nedjati-Gilani, Gemma; Venkataraman, Archana; O'Donnell, Lauren; Panagiotaki, Eleftheria

    2014-01-01

    This volume contains the proceedings from two closely related workshops: Computational Diffusion MRI (CDMRI’13) and Mathematical Methods from Brain Connectivity (MMBC’13), held under the auspices of the 16th International Conference on Medical Image Computing and Computer Assisted Intervention, which took place in Nagoya, Japan, September 2013. Inside, readers will find contributions ranging from mathematical foundations and novel methods for the validation of inferring large-scale connectivity from neuroimaging data to the statistical analysis of the data, accelerated methods for data acquisition, and the most recent developments on mathematical diffusion modeling. This volume offers a valuable starting point for anyone interested in learning computational diffusion MRI and mathematical methods for brain connectivity as well as offers new perspectives and insights on current research challenges for those currently in the field. It will be of interest to researchers and practitioners in computer science, ...

  13. Asymmetric Dynamic Attunement of Speech and Gestures in the Construction of Children's Understanding.

    Science.gov (United States)

    De Jonge-Hoekstra, Lisette; Van der Steen, Steffie; Van Geert, Paul; Cox, Ralf F A

    2016-01-01

    As children learn, they use their speech to express words and their hands to gesture. This study investigates the interplay between real-time gestures and speech as children construct cognitive understanding during a hands-on science task. Twelve children (M = 6, F = 6) from Kindergarten (n = 5) and first grade (n = 7) participated in this study. Each verbal utterance and gesture during the task was coded on a complexity scale derived from dynamic skill theory. To explore the interplay between speech and gestures, we applied a cross recurrence quantification analysis (CRQA) to the two coupled time series of the skill levels of verbalizations and gestures. The analysis focused on (1) the temporal relation between gestures and speech, (2) the relative strength and direction of the interaction between gestures and speech, (3) the relative strength and direction between gestures and speech for different levels of understanding, and (4) relations between CRQA measures and other child characteristics. The results show that older and younger children differ in the (temporal) asymmetry in the gestures-speech interaction. For younger children, the balance leans more toward gestures leading speech in time, while for older children the balance leans more toward speech leading gestures. Secondly, at the group level, speech attracts gestures in a more dynamically stable fashion than vice versa, and this asymmetry between gestures and speech extends to lower and higher understanding levels. Yet, for older children, the mutual coupling between gestures and speech is more dynamically stable at the higher understanding levels. Gestures and speech are more synchronized in time as children get older. A higher score on schools' language tests is related to speech attracting gestures more rigidly and to more asymmetry between gestures and speech, only for the less difficult understanding levels. A higher score on math or past science tasks is related to less asymmetry between gestures and

  14. Asymmetric dynamic attunement of speech and gestures in the construction of children’s understanding

    Directory of Open Access Journals (Sweden)

    Lisette eDe Jonge-Hoekstra

    2016-03-01

    Full Text Available As children learn they use their speech to express words and their hands to gesture. This study investigates the interplay between real-time gestures and speech as children construct cognitive understanding during a hands-on science task. 12 children (M = 6, F = 6) from Kindergarten (n = 5) and first grade (n = 7) participated in this study. Each verbal utterance and gesture during the task were coded, on a complexity scale derived from dynamic skill theory. To explore the interplay between speech and gestures, we applied a cross recurrence quantification analysis (CRQA) to the two coupled time series of the skill levels of verbalizations and gestures. The analysis focused on (1) the temporal relation between gestures and speech, (2) the relative strength and direction of the interaction between gestures and speech, (3) the relative strength and direction between gestures and speech for different levels of understanding, and (4) relations between CRQA measures and other child characteristics. The results show that older and younger children differ in the (temporal) asymmetry in the gestures-speech interaction. For younger children, the balance leans more towards gestures leading speech in time, while the balance leans more towards speech leading gestures for older children. Secondly, at the group level, speech attracts gestures in a more dynamically stable fashion than vice versa, and this asymmetry in gestures and speech extends to lower and higher understanding levels. Yet, for older children, the mutual coupling between gestures and speech is more dynamically stable regarding the higher understanding levels. Gestures and speech are more synchronized in time as children are older. A higher score on schools' language tests is related to speech attracting gestures more rigidly and more asymmetry between gestures and speech, only for the less difficult understanding levels. A higher score on math or past science tasks is related to less asymmetry between
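    The core CRQA construction used in this study can be illustrated with a minimal sketch: build a cross-recurrence matrix from two coded time series and read off the recurrence rate and a diagonal-wise (lag) profile. The coding scheme and toy series below are invented for illustration; real CRQA analyses involve further measures (determinism, maxline, laminarity) not shown here.

```python
import numpy as np

def cross_recurrence(x, y, radius=0):
    """Cross-recurrence matrix of two ordinal series:
    R[i, j] = 1 when |x[i] - y[j]| <= radius (exact match for radius=0)."""
    x, y = np.asarray(x), np.asarray(y)
    return (np.abs(x[:, None] - y[None, :]) <= radius).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent points in the matrix."""
    return R.mean()

# Toy stand-in for coded complexity levels of gestures and speech.
gestures = [1, 1, 2, 2, 3, 3, 4, 4]
speech   = [1, 2, 2, 3, 3, 4, 4, 4]

R = cross_recurrence(gestures, speech)
print("recurrence rate:", recurrence_rate(R))   # 0.25

# Diagonal-wise recurrence profile. With this convention, offset k > 0 picks
# R[i, i+k] (gesture at i recurs with later speech), i.e. gestures leading;
# k < 0 means speech leading.
profile = {lag: np.diagonal(R, offset=lag).mean() for lag in range(-2, 3)}
print(profile)
```

    An asymmetric profile (more recurrence on one side of lag 0 than the other) is what the study interprets as one modality leading or attracting the other.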

  15. Early Gesture Provides a Helping Hand to Spoken Vocabulary Development for Children with Autism, Down Syndrome, and Typical Development

    Science.gov (United States)

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie

    2017-01-01

    Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects ("cat"). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the…

  16. Frontal and temporal contributions to understanding the iconic co-speech gestures that accompany speech.

    Science.gov (United States)

    Dick, Anthony Steven; Mok, Eva H; Raja Beharelle, Anjali; Goldin-Meadow, Susan; Small, Steven L

    2014-03-01

    In everyday conversation, listeners often rely on a speaker's gestures to clarify any ambiguities in the verbal message. Using fMRI during naturalistic story comprehension, we examined which brain regions in the listener are sensitive to speakers' iconic gestures. We focused on iconic gestures that contribute information not found in the speaker's talk, compared with those that convey information redundant with the speaker's talk. We found that three regions, the triangular (IFGTr) and opercular (IFGOp) portions of the left inferior frontal gyrus and the left posterior middle temporal gyrus (MTGp), responded more strongly when gestures added information to nonspecific language, compared with when they conveyed the same information in more specific language; in other words, when gesture disambiguated speech as opposed to reinforcing it. An increased BOLD response was not found in these regions when the nonspecific language was produced without gesture, suggesting that IFGTr, IFGOp, and MTGp are involved in integrating semantic information across gesture and speech. In addition, we found that activity in the posterior superior temporal sulcus (STSp), previously thought to be involved in gesture-speech integration, was not sensitive to the gesture-speech relation. Together, these findings clarify the neurobiology of gesture-speech integration and contribute to an emerging picture of how listeners glean meaning from gestures that accompany speech. Copyright © 2012 Wiley Periodicals, Inc.

  17. Localization and Recognition of Dynamic Hand Gestures Based on Hierarchy of Manifold Classifiers

    Science.gov (United States)

    Favorskaya, M.; Nosov, A.; Popov, A.

    2015-05-01

    Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers: trajectory classifiers applied at all time instants and posture classifiers for sub-gestures at selected time instants. The trajectory classifiers comprise a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized along predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples that do not fit the current trajectory. The posture classifiers involve the normalized skeleton representation of palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique is applied for posture recognition of sub-gestures. For the experiments, the dataset "Multi-modal Gesture Recognition Challenge 2013: Dataset and Results", which includes 393 dynamic hand gestures, was chosen. The proposed method yielded 84-91% recognition accuracy, on average, for a restricted set of dynamic gestures.
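    The motion-history step described above (motion vectors normalized along 8 or 16 predetermined directions) can be sketched as a chain-code-style quantization of successive hand positions. The toy trajectory and the exact binning convention are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def quantize_direction(dx, dy, n_bins=8):
    """Map a motion vector to one of n_bins direction codes
    (0 = +x axis, counting counter-clockwise)."""
    angle = np.arctan2(dy, dx) % (2 * np.pi)
    return int(np.round(angle / (2 * np.pi / n_bins))) % n_bins

def motion_history(trajectory, n_bins=8):
    """Directional codes for successive points of a 2-D hand trajectory."""
    return [quantize_direction(x1 - x0, y1 - y0, n_bins)
            for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:])]

# Toy trajectory: right, up-right, up -- a fragment of a "swipe up" gesture.
traj = [(0, 0), (1, 0), (2, 1), (2, 2)]
print(motion_history(traj, n_bins=8))    # [0, 1, 2]
print(motion_history(traj, n_bins=16))   # [0, 2, 4]
```

    Comparing such code sequences against per-gesture templates is one simple way a trajectory classifier can discard gesture samples that do not fit the expected path.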

  18. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    Directory of Open Access Journals (Sweden)

    M. Favorskaya

    2015-05-01

    Full Text Available Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers: trajectory classifiers applied at all time instants and posture classifiers for sub-gestures at selected time instants. The trajectory classifiers comprise a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized along predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples that do not fit the current trajectory. The posture classifiers involve the normalized skeleton representation of palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique is applied for posture recognition of sub-gestures. For the experiments, the dataset “Multi-modal Gesture Recognition Challenge 2013: Dataset and Results”, which includes 393 dynamic hand gestures, was chosen. The proposed method yielded 84–91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  19. Giving cognition a helping hand: the effect of congruent gestures on object name retrieval.

    Science.gov (United States)

    Pine, Karen J; Reeves, Lindsey; Howlett, Neil; Fletcher, Ben C

    2013-02-01

    The gestures that accompany speech are more than just arbitrary hand movements or communicative devices. They are simulated actions that can both prime and facilitate speech and cognition. This study measured participants' reaction times for naming degraded images of objects while simultaneously adopting a gesture that was either congruent or incongruent with the target object, or while making no hand gesture. A within-subjects design was used, with participants (N = 122) naming 10 objects under each condition. Participants named the objects significantly faster when adopting a congruent gesture than when not gesturing at all. Adopting an incongruent gesture resulted in significantly slower naming times. The findings are discussed in the context of the intrapersonal cognitive and facilitatory effects of gestures and underline the relatedness between language, action, and cognition. © 2012 The British Psychological Society.

  20. Generating Culture-Specific Gestures for Virtual Agent Dialogs

    DEFF Research Database (Denmark)

    Endrass, Birgit; Damian, Ionut; Huber, Peter

    2010-01-01

    Integrating culture into the behavioral model of virtual agents has come into focus lately. When investigating verbal aspects of behavior, nonverbal behaviors should ideally be added automatically, driven by the speech act. In this paper, we present a corpus-driven approach to generating gestures...

  1. Effects of a robotic storyteller's moody gestures on storytelling perception

    NARCIS (Netherlands)

    Xu, J.; Broekens, J.; Hindriks, K.; Neerincx, M.A.

    2015-01-01

    A parameterized behavior model was developed for robots to show mood during task execution. In this study, we applied the model to the coverbal gestures of a robotic storyteller. This study investigated whether parameterized mood expression can 1) show mood that is changing over time; 2) reinforce
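    To make the idea of a parameterized mood model concrete, here is a purely hypothetical sketch in which valence scales gesture amplitude and arousal scales gesture speed; the parameter names and coefficients are illustrative only, not the model used in this study:

```python
def apply_mood(base_amplitude, base_duration, valence, arousal):
    """Hypothetical mood parameterization for a coverbal gesture:
    positive valence widens the gesture, higher arousal speeds it up.
    Both mood parameters are assumed to lie in [-1, 1]."""
    amplitude = base_amplitude * (1.0 + 0.5 * valence)  # wider when positive
    duration = base_duration / (1.0 + 0.5 * arousal)    # shorter when aroused
    return amplitude, duration
```

    Varying such parameters continuously over the course of a story is one way a robot storyteller could show a mood that changes over time.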

  2. Static gesture recognition using features extracted from skeletal data

    CSIR Research Space (South Africa)

    Mangera, R

    2013-12-01

    Full Text Available -optimal classification accuracy. Therefore to improve the classification accuracy, a new feature vector, combining joint angles and the relative position of the arm joints with respect to the head, is proposed. A k-means classifier is used to cluster each gesture. New...
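    The k-means classification step described in this snippet amounts to assigning a gesture's feature vector to its nearest cluster centre. A minimal sketch, assuming Euclidean distance and a dict of labelled centroids (names hypothetical; the report's actual implementation is not shown here):

```python
def nearest_centroid(feature, centroids):
    """Assign a feature vector to the closest gesture-cluster centre.
    `centroids` maps a gesture label to its centre vector."""
    def dist2(a, b):
        # squared Euclidean distance (sufficient for comparing centres)
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(feature, centroids[label]))
```

    In practice the feature vector would combine the joint angles and head-relative arm-joint positions described in the abstract.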

  3. RehabGesture: An Alternative Tool for Measuring Human Movement.

    Science.gov (United States)

    Brandão, Alexandre F; Dias, Diego R C; Castellano, Gabriela; Parizotto, Nivaldo A; Trevelin, Luis Carlos

    2016-07-01

    Systems for range of motion (ROM) measurement such as OptoTrak, Motion Capture, Motion Analysis, Vicon, and Visual 3D are so expensive that they become impracticable in public health systems and even in private rehabilitation clinics. Telerehabilitation is a branch within telemedicine intended to offer ways to increase motor and/or cognitive stimuli, aimed at faster and more effective recovery from given disabilities, and to measure kinematic data such as the improvement in ROM. In the development of the RehabGesture tool, we used the gesture recognition sensor Kinect® (Microsoft, Redmond, WA) and the concepts of Natural User Interface and Open Natural Interaction. RehabGesture can measure and record the ROM during rehabilitation sessions while the user interacts with the virtual reality environment. The software allows the measurement of the ROM (in the coronal plane) from 0° extension to 145° flexion of the elbow joint, as well as from 0° adduction to 180° abduction of the glenohumeral (shoulder) joint, starting from a standing position. The proposed tool has application in the fields of training and physical evaluation of professional and amateur athletes in clubs and gyms and may have application in rehabilitation and physiotherapy clinics for patients with compromised motor abilities. RehabGesture represents a low-cost solution to measure the movement of the upper limbs, as well as to stimulate the process of teaching and learning in disciplines related to the study of human movement, such as kinesiology.
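    At its core, the elbow-joint ROM measurement described above reduces to an angle between two limb segments projected into the coronal plane, tracked over a session. A simplified sketch of that geometry (not RehabGesture's actual code; joint positions would come from the Kinect skeleton):

```python
import math

def flexion_deg(shoulder, elbow, wrist):
    """Elbow flexion in degrees from 2-D (coronal-plane) joint positions:
    0 deg corresponds to a fully extended arm, larger values to flexion."""
    v1 = (shoulder[0] - elbow[0], shoulder[1] - elbow[1])
    v2 = (wrist[0] - elbow[0], wrist[1] - elbow[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    inner = math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
    return 180.0 - inner  # inner angle 180 deg = straight arm = 0 deg flexion

def session_rom(frames):
    """Range of motion over a session: max minus min flexion angle.
    `frames` is a sequence of (shoulder, elbow, wrist) position triples."""
    angles = [flexion_deg(*f) for f in frames]
    return max(angles) - min(angles)
```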

  4. Recognition of sign language gestures using neural networks

    Directory of Open Access Journals (Sweden)

    Simon Vamplew

    2007-04-01

    Full Text Available This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.

  5. Recognition of sign language gestures using neural networks

    OpenAIRE

    Simon Vamplew

    2007-01-01

    This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.

  6. Onomatopoeia, Gesture, and Synaesthesia in the Perception of Poetic Meaning.

    Science.gov (United States)

    Salper, Donald R.

    The author states that phonetic symbolism is not a generalizable phenomenon but maintains that those interested in the status of a poem as a speech event need not totally discount or discredit such perceptions. In his discussion of the theories which ascribe meaning to vocal utterance--the two imitative theories, the onomatopoeic and the gestural,…

  7. Gesture and Signing in Support of Expressive Language Development

    Science.gov (United States)

    Baker-Ramos, Leslie K.

    2017-01-01

    The purpose of this teacher inquiry is to explore the effects of signing and gesturing on the expressive language development of non-verbal children. The first phase of my inquiry begins with the observations of several non-verbal students with various etiologies in three different educational settings. The focus of these observations is to…

  8. Children's Use of Gesture to Resolve Lexical Ambiguity

    Science.gov (United States)

    Kidd, Evan; Holler, Judith

    2009-01-01

    We report on a study investigating 3-5-year-old children's use of gesture to resolve lexical ambiguity. Children were told three short stories that contained two homonym senses; for example, "bat" (flying mammal) and "bat" (sports equipment). They were then asked to re-tell these stories to a second experimenter. The data were coded for the means…

  9. Cascading neural networks for upper-body gesture recognition

    CSIR Research Space (South Africa)

    Mangera, R

    2014-01-01

    Full Text Available Gesture recognition has many applications, ranging from health care to entertainment. However, for it to be a feasible method of human-computer interaction, it is essential that only intentional movements are interpreted and that the system can work...

  10. Gesturing on the Telephone: Independent Effects of Dialogue and Visibility

    Science.gov (United States)

    Bavelas, Janet; Gerwing, Jennifer; Sutton, Chantelle; Prevost, Danielle

    2008-01-01

    Speakers often gesture in telephone conversations, even though they are not visible to their addressees. To test whether this effect is due to being in a dialogue, we separated visibility and dialogue with three conditions: face-to-face dialogue (10 dyads), telephone dialogue (10 dyads), and monologue to a tape recorder (10 individuals). For the…

  11. Maternal Mental State Talk and Infants' Early Gestural Communication

    Science.gov (United States)

    Slaughter, Virginia; Peterson, Candida C.; Carpenter, Malinda

    2009-01-01

    Twenty-four infants were tested monthly for the production of imperative and declarative gestures between 0;9 and 1;3, and concurrent mother-infant free-play sessions were conducted at 0;9, 1;0 and 1;3 (Carpenter, Nagell & Tomasello, 1998). Free-play transcripts were subsequently coded for maternal talk about mental states. Results…

  12. Nearest neighbour classification of Indian sign language gestures ...

    Indian Academy of Sciences (India)

    In the ideal case, a gesture recognition ... Every geographical region has developed its own sys- ... et al [10] present a study on vision-based static hand shape .... tures, and neural networks for recognition. ..... We used the city-block dis-.
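    The city-block distance mentioned in the snippet is the L1 metric, and nearest-neighbour classification with it can be sketched in a few lines (function names hypothetical; training data would be labelled gesture feature vectors):

```python
def city_block(a, b):
    """L1 (city-block) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def classify_1nn(query, training):
    """1-nearest-neighbour label for `query` using city-block distance.
    `training` is a list of (feature_vector, label) pairs."""
    best = min(training, key=lambda item: city_block(query, item[0]))
    return best[1]
```

    City-block distance is often preferred over Euclidean for hand-shape features because it is cheaper to compute and less sensitive to a single outlying feature dimension.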

  13. Sound Synthesis Affected by Physical Gestures in Real-Time

    DEFF Research Database (Denmark)

    Graugaard, Lars

    2006-01-01

    Motivation and strategies for affecting electronic music through physical gestures are presented and discussed. Two implementations are presented and experience with their use in performance is reported. A concept of sound shaping and sound colouring that connects an instrumental performer's playing and gestures to sound synthesis is used. The results and future possibilities are discussed....

  14. LOCALIZATION AND BROADBAND FOLLOW-UP OF THE GRAVITATIONAL-WAVE TRANSIENT GW 150914

    NARCIS (Netherlands)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Phythian-Adams, A.T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.T.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, R.D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Barthelmy, S.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, M.J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Bitossi, M.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, A.L.S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, J.G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, T.C; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, A.D.; Brown, D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Bustillo, J. Calderon; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Diaz, J. 
Casanueva; Casentini, C.; Caudill, S.; Cavaglia, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Baiardi, L. Cerboni; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, D. S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Qian; Chua, S. S. Y.; Chung, E.S.; Ciani, G.; Clara, F.; Clark, J. A.; Clark, J. A.; Coccia, E.; Cohadon, P. -F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, A.C.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J. -P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, Laura; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; Debra, D.; Debreczeni, G.; Degallaix, J.; De laurentis, M.; Deleglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.A.; DeRosa, R. T.; Rosa, R.; DeSalvo, R.; Dhurandhar, S.; Diaz, M. C.; Di Fiore, L.; Giovanni, M. Di; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H. -B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, T. M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.M.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. 
C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M; Fournier, J. -D.; Franco, S; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, A.; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.P.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; Gonzalez, Idelmis G.; Castro, J. M. Gonzalez; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Lee-Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.M.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Buffoni-Hall, R.; Hall, E. D.; Hammond, G.L.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, P.J.; Hardwick, T.; Haris, K.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C. -J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hello, P.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J. -M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, D.H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jimenez-Forteza, F.; Johnson, W.; Jones, I.D.; Jones, R.; Jonker, R. J. G.; Ju, L.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.H.; Kanner, J. 
B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kefelian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.E.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan., S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Namjun; Kim, Namjun; Kim, Y.M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krolak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kuo, L.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C.H.; Lee, K.H.; Lee, M.H.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Losurdo, G.; Lueck, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magana-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Marka, S.; Marka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R.M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. 
M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B.C.; Moore, J.C.; Moraru, D.; Gutierrez Moreno, M.; Morriss, S. R.; Mossavi, K.; Mossavi, K.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, A.; Mukherjee, S.D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P.G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Gutierrez-Neri, M.; Neunzert, A.; Newton-Howes, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J.; Oh, S. H.; Ohme, F.; Oliver, M. B.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palliyaguru, N.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.S; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Puerrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. 
S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosinska, D.; Rowan, S.; Ruediger, A.; Ruggi, P.; Ryan, K.A.; Sachdev, Perminder S; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sanders, J. R.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J; Schmidt, P.; Schnabel, R.B.; Schofield, R. M. S.; Schoenbeck, A.; Schreiber, K.E.C.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, M.S.; Sellers, D.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, António Dias da; Simakov, D.; Singer, A; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, R. J. E.; Smith, N.D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Souradeep, T.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, J.R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepanczyk, M. J.; Tacca, M.D.; Talukder, D.; Tanner, D. B.; Tapai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, W.R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. 
I.; Toyra, D.; Travasso, F.; Traylor, G.; Trifiro, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; Van Beuzekom, Martin; van den Brand, J. F. J.; Van Den Broeck, C.F.F.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasuth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P.J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Vicere, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J. -Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, MT; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L. -W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.M.; Wessels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, D.R.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J.L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrozny, A.; Zangrando, L.; Zanolin, M.; Zendri, J. -P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; Allison, J.; Bannister, K.; Bell, E.M.; Chatterjee, S.; Chippendale, A. P.; Edwards, P. G.; Harvey-Smith, L.; Heywood, Ian; Hotan, A.; Indermuehle, B.; Marvil, J.; McConnell, D.; Murphy, Michael T.; Popping, A.; Reynolds, J.; Sault, R. J.; Voronkov, M. A.; Whiting, M. T.; Castro-Tirado, A. J.; Cunniffe, R.; Jelinek, M.; Tello, J. C.; Oates, S. R.; Hu, Y. -D.; Kubanek, P.; Guziy, S.; Castellon, A.; Garcia-Cerezo, A.; Munoz, V. F.; Perez del Pulgar, C.; Castillo-Carrion, S.; Hudec, R.; Caballero-Garcia, M. D.; Pata, P.; Vitek, S.; Adame, J. 
A.; Konig, S.; Rendon, F.; Mateo Sanguino, T. de J.; Munoz-Fernandez, R.; Yock, P. C.; Rattenbury, N.; Allen, W. H.; Querel, R.; Jeong, S.; Park, I. H.; Bai, J.C.; Cui, Ch.; Fan, Y.; Wang, Ch.; Hiriart, D.; Lee, W. H.; Claret, A.; Sanchez-Ramirez, R.; Pandey, S. B.; Mediavilla, T.; Sabau-Graziati, L.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Armstrong, R.; Benoit-Levy, A.; Berger, Charles E H; Bernstein, R. A.; Bertin, E.; Brout, D.; Buckley-Geer, E.; Burke, D. L.; Capozzi, D.; Carretero, J.; Castander, F. J.; Chornock, R.; Cowperthwaite, P. S.; Cowperthwaite, P. S.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Doctor, Z.; Drlica-Wagner, A.; Drout, M. R.; Eifler, T. F.; Estrada, J.; Evrard, A. E.; Fernandez, E.; Finley, D. A.; Flaugher, B.; Fong, W. -F.; Fosalba, P.; Fox, D. B.; Frieman, J.; Fryer, C. L.; Gaztanaga, E.; Gerdes, D. W.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gutierrez, A.G.; Herner, K.; Honscheid, K.; James, J. D Mireles; Johnson, M.D.; Johnson, M. W. G.; Karliner, I.; Kasen, D.; Kent, S.; Kessler, R.; Kim, A. G.; Kind, M. C.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; Lin, H.H.; Maia, M. A. G.; Margutti, R.; Marriner, J.; San Martini, P.; Matheson, T.; Melchior, P.; Metzger, B. D.; Miller, C. J.; Miquel, R.; Neilsen, E.; Nichol, R. C.; Nord, B.; Nugent, P.; Ogando, R.; Petravick, D.; Plazas, A. A.; Quataert, E.; Roe, N. A.; Romer, A. K.; Roodman, A.; Rosell, A. C.; Rykoff, E. S.; Sako, M.; Sanchez, E.; Scarpine, V.; Schindler, R. H.; Schubnell, M.; Scolnic, D.; Sevilla-Noarbe, I.; Sheldon, E.; Smith, N.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Stebbins, A.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thaler, J.; Thomas, D.; Thomas, C.R.; Tucker, D. L.; Tucker, D. L.; Walker, A. R.; Wechsler, R. H.; Wester, W.; Yanny, B.; Zhang, Y.; Zuntz, J.; Connaughton, V.; Burns, J.E.; Goldstein, A.; Briggs, M. S.; Zhang, B.; Hui, C. M.; Jenke, P.; Wilson-Hodge, C. 
A.; Bhat, P. N.; Bissaldi, E.; Cleveland, W.; Fitzpatrick, G.; Giles, M. M.; Gibby, M. H.; Greiner, J.; von Kienlin, A.; Kippen, R. M.; McBreen, S.; Mailyan, B.; Meegan, C. A.; Paciesas, W. S.; Preece, R. D.; Roberts, W.O.; Sparke, L.; Stanbro, M.; Toelge, K.; Veres, P.; Yu, H. -F.; Blackburn, L.; Ackermann, M; Ajello, M.; Albert, M.A.; Anderson, B.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bellazzini, R.; Bissaldi, E.; Blandford, R. D.; Bloom, E. D.; Bonino, R.; Bottacini, E.; Brandt, T. J.; Brandt, T. J.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caragiulo, M.; Caraveo, P. A.; Cavazzuti, E.; Charles, E.; Chekhtman, A.; Chiang, J.; Chiaro, G.; Ciprini, S.; Cohen-Tanugi, J.; Cominsky, L. R.; Costanza, F.; Cuoco, A.; D'Ammando, F.; de Palma, F.; Desiante, R.; Desiante, R.; Di Lalla, N.; Di Mauro, M.; Di Venere, L.; Dominguez, A.; Drell, P. S.; DuBois, RN; Favuzzi, C.; Ferrara, E. C.; Franckowiak, A.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gargano, F.; Gasparrini, D.; Giglietto, N.; Giommi, P.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Gomez-Vargas, G. A.; Green, D.; Grenier, I. A.; Grove, J. E.; Guiriec, S.; Hadasch, D.; Harding, A. K.; Hays, E.; Hewitt, J. W.; Hill, A. B.; Horan, D.; Jogler, T.; Johannesson, G.; Johnson, A.S.; Kensei, S.; Kocevski, D.; Kuss, M.; La Mura, G.; Larsson, S.; Latronico, L.; Li, J.; Li, L.; Lopez-Longo, F.J.; Loparco, F.; Lovellette, M. N.; Lubrano, P.; Magill, J.; Maldera, S.; Manfreda, A.; Marelli, M.; Mayer, M.; Mazziotta, M. N.; McEnery, J. E.; Meyer, M.; Michelson, P. F.; Mirabal, N.; Mizuno, T.; Moiseev, A. A.; Monzani, M. E.; Moretti, E.; Morselli, A.; Moskalenko, I. V.; Negro, M.; Nuss, E.; Ohsugi, T.; Omodei, N.; Orienti, M.; Orlando, E.; Ormes, J. F.; Paneque, D.; Perkins, J. S.; Pesce-Rollins, M.

    2016-01-01

    A gravitational-wave (GW) transient was identified in data recorded by the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) detectors on 2015 September 14. The event, initially designated G184098 and later given the name GW150914, is described in detail elsewhere. By prior

  15. Give me a hand: Differential effects of gesture type in guiding young children's problem-solving.

    Science.gov (United States)

    Vallotton, Claire; Fusaro, Maria; Hayden, Julia; Decker, Kalli; Gutowski, Elizabeth

    2015-11-01

    Adults' gestures support children's learning in problem-solving tasks, but gestures may be differentially useful to children of different ages, and different features of gestures may make them more or less useful to children. The current study investigated parents' use of gestures to support their young children (1.5 - 6 years) in a block puzzle task (N = 126 parent-child dyads), and identified patterns in parents' gesture use indicating different gestural strategies. Further, we examined the effect of child age on both the frequency and types of gestures parents used, and on their usefulness to support children's learning. Children attempted to solve the puzzle independently before and after receiving help from their parent; half of the parents were instructed to sit on their hands while they helped. Parents who could use their hands appear to use gestures in three strategies: orienting the child to the task, providing abstract information, and providing embodied information; further, they adapted their gesturing to their child's age and skill level. Younger children elicited more frequent and more proximal gestures from parents. Despite the greater use of gestures with younger children, it was the oldest group (4.5-6.0 years) who were most affected by parents' gestures. The oldest group was positively affected by the total frequency of parents' gestures, and in particular, parents' use of embodying gestures (indexes that touched their referents, representational demonstrations with object in hand, and physically guiding child's hands). Though parents rarely used the embodying strategy with older children, it was this strategy which most enhanced the problem-solving of children 4.5 - 6 years.

  16. Give me a hand: Differential effects of gesture type in guiding young children's problem-solving

    Science.gov (United States)

    Vallotton, Claire; Fusaro, Maria; Hayden, Julia; Decker, Kalli; Gutowski, Elizabeth

    2015-01-01

    Adults’ gestures support children's learning in problem-solving tasks, but gestures may be differentially useful to children of different ages, and different features of gestures may make them more or less useful to children. The current study investigated parents’ use of gestures to support their young children (1.5 – 6 years) in a block puzzle task (N = 126 parent-child dyads), and identified patterns in parents’ gesture use indicating different gestural strategies. Further, we examined the effect of child age on both the frequency and types of gestures parents used, and on their usefulness to support children's learning. Children attempted to solve the puzzle independently before and after receiving help from their parent; half of the parents were instructed to sit on their hands while they helped. Parents who could use their hands appear to use gestures in three strategies: orienting the child to the task, providing abstract information, and providing embodied information; further, they adapted their gesturing to their child's age and skill level. Younger children elicited more frequent and more proximal gestures from parents. Despite the greater use of gestures with younger children, it was the oldest group (4.5-6.0 years) who were most affected by parents’ gestures. The oldest group was positively affected by the total frequency of parents’ gestures, and in particular, parents’ use of embodying gestures (indexes that touched their referents, representational demonstrations with object in hand, and physically guiding child's hands). Though parents rarely used the embodying strategy with older children, it was this strategy which most enhanced the problem-solving of children 4.5 – 6 years. PMID:26848192

  17. Generating Control Commands From Gestures Sensed by EMG

    Science.gov (United States)

    Wheeler, Kevin R.; Jorgensen, Charles

    2006-01-01

    An effort is under way to develop noninvasive neuro-electric interfaces through which human operators could control systems as diverse as simple mechanical devices, computers, aircraft, and even spacecraft. The basic idea is to use electrodes on the surface of the skin to acquire electromyographic (EMG) signals associated with gestures, digitize and process the EMG signals to recognize the gestures, and generate digital commands to perform the actions signified by the gestures. In an experimental prototype of such an interface, the EMG signals associated with hand gestures are acquired by use of several pairs of electrodes mounted in sleeves on a subject's forearm (see figure). The EMG signals are sampled and digitized. The resulting time-series data are fed as input to pattern-recognition software that has been trained to distinguish gestures from a given gesture set. The software implements, among other things, hidden Markov models, which are used to recognize the gestures as they are being performed in real time. Thus far, two experiments have been performed on the prototype interface to demonstrate feasibility: an experiment in synthesizing the output of a joystick and an experiment in synthesizing the output of a computer or typewriter keyboard. In the joystick experiment, the EMG signals were processed into joystick commands for a realistic flight simulator for an airplane. The acting pilot reached out into the air, grabbed an imaginary joystick, and pretended to manipulate the joystick to achieve left and right banks and up and down pitches of the simulated airplane. In the keyboard experiment, the subject pretended to type on a numerical keypad, and the EMG signals were processed into keystrokes. The results of the experiments demonstrate the basic feasibility of this method while indicating the need for further research to reduce the incidence of errors (including confusion among gestures).
Topics that must be addressed include the numbers and arrangements
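
    The pattern-recognition stage described above scores each incoming EMG time series against per-gesture hidden Markov models. As a rough illustration only, here is a minimal scaled forward-algorithm scorer for discrete-observation HMMs; the two toy gesture models, the observation alphabet, and the test sequence are invented for this sketch and are not the interface's actual models.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm (scaled to avoid underflow)."""
    alpha = start * emit[:, obs[0]]
    log_lik = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()
        log_lik += np.log(c)
        alpha = (alpha / c) @ trans * emit[:, obs[t]]
    log_lik += np.log(alpha.sum())
    return log_lik

# Two toy 2-state gesture models over a 3-symbol observation alphabet.
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.2, 0.8]])
emit_a = np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])  # "gesture A": symbol 0 then 2
emit_b = np.array([[0.1, 0.8, 0.1], [0.1, 0.8, 0.1]])  # "gesture B": mostly symbol 1

seq = [0, 0, 2, 2, 2]
score_a = forward_log_likelihood(seq, start, trans, emit_a)
score_b = forward_log_likelihood(seq, start, trans, emit_b)
recognized = "A" if score_a > score_b else "B"
```

    Recognition then amounts to picking the gesture model with the highest log-likelihood for the observed sequence.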

  18. Car Gestures - Advisory warning using additional steering wheel angles.

    Science.gov (United States)

    Maag, Christian; Schneider, Norbert; Lübbeke, Thomas; Weisswange, Thomas H; Goerick, Christian

    2015-10-01

    Advisory warning systems (AWS) notify the driver about upcoming hazards. This is in contrast to the majority of currently deployed advanced driver assistance systems (ADAS) that manage emergency situations. The target of this study is to investigate the effectiveness, acceptance, and controllability of a specific kind of AWS that uses the haptic information channel for warning the driver. This could be beneficial, as alternatives to using the visual modality can help to reduce the risk of visual overload. The driving simulator study (N=24) compared an AWS based on additional steering wheel angle control (Car Gestures) with a visual warning presented in a simulated head-up display (HUD). Both types of warning were activated 3.5 s before the hazard object was reached. An additional condition of unassisted driving completed the experimental design. The subjects encountered potential hazards in a variety of urban situations (e.g. a pedestrian standing on the curb). For the investigated situations, subjective ratings show that a majority of drivers prefer visual warnings over haptic information via gestures. An analysis of driving behavior indicates that both warning approaches guide the vehicle away from the potential hazard. Whereas gestures lead to a faster lateral driving reaction (compared to HUD warnings), the visual warnings result in a greater safety benefit (measured by the minimum distance to the hazard object). A controllability study with gestures in the wrong direction (i.e. leading toward the hazard object) shows that drivers are able to cope with wrong haptic warnings and that safety is not reduced compared to unassisted driving or to (correct) haptic gestures and visual warnings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. A cross-species study of gesture and its role in symbolic development: Implications for the gestural theory of language evolution

    Directory of Open Access Journals (Sweden)

    Kristen eGillespie-Lynch

    2013-06-01

    Using a naturalistic video database, we examined whether gestures scaffolded the symbolic development of a language-enculturated chimpanzee, a language-enculturated bonobo, and a human child during the second year of life. These three species constitute a complete clade: species possessing a common immediate ancestor. A basic finding was the functional and formal similarity of many gestures between chimpanzee, bonobo, and human child. The child's symbols were spoken words; the apes' symbols were lexigrams, noniconic visual signifiers. A developmental pattern in which gestural representation of a referent preceded symbolic representation of the same referent appeared in all three species (but was statistically significant only for the child). Nonetheless, across species, the ratio of symbol to gesture increased significantly with age. But even though their symbol production increased, the apes continued to communicate more frequently by gesture than by symbol. In contrast, by 15-18 months of age, the child used symbols more frequently than gestures. This ontogenetic sequence from gesture to symbol, present across the clade but more pronounced in child than ape, provides support for the role of gesture in language evolution. In all three species, the overwhelming majority of gestures were communicative (paired with eye contact, vocalization, and/or persistence). However, vocalization was rare for the apes, but accompanied the majority of the child's communicative gestures. This finding suggests the co-evolution of speech and gesture after the evolutionary divergence of the hominid line. Multimodal expressions of communicative intent (e.g., vocalization plus persistence) were normative for the child, but less common for the apes. This finding suggests that multimodal expression of communicative intent was also strengthened after hominids diverged from apes.

  20. Personalized gesture interactions for cyber-physical smart-home environments

    Institute of Scientific and Technical Information of China (English)

    Yihua LOU; Wenjun WU; Radu-Daniel VATAVU; Wei-Tek TSAI

    2017-01-01

    A gesture-based interaction system for smart homes is a part of a complex cyber-physical environment, for which researchers and developers need to address major challenges in providing personalized gesture interactions. However, current research efforts have not tackled the problem of personalized gesture recognition that often involves user identification. To address this problem, we propose in this work a new event-driven service-oriented framework called gesture services for cyber-physical environments (GS-CPE) that extends the architecture of our previous work gesture profile for web services (GPWS). To provide user identification functionality, GS-CPE introduces a two-phase cascading gesture password recognition algorithm for gesture-based user identification, using a two-phase cascading classifier with the hidden Markov model and the Golden Section Search, which achieves an accuracy rate of 96.2% with a small training dataset. To support personalized gesture interaction, an enhanced version of the Dynamic Time Warping algorithm with multiple gestural input sources and dynamic template adaptation support is implemented. Our experimental results demonstrate that the performance of the algorithm can achieve an average accuracy rate of 98.5% in practical scenarios. Comparison results reveal that GS-CPE has a faster response time and a higher accuracy rate than other gesture interaction systems designed for smart-home environments.
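
    GS-CPE's personalization layer builds on Dynamic Time Warping with template adaptation. The enhanced algorithm itself is not spelled out in the abstract, but the underlying DTW template matching it extends can be sketched as follows; the gesture templates and query trajectory are made-up 1-D examples, not GS-CPE data.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic Dynamic Time Warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, and match steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Match an input trajectory against stored per-user gesture templates.
templates = {"circle": np.array([0.0, 1.0, 2.0, 1.0, 0.0]),
             "swipe":  np.array([0.0, 1.0, 2.0, 3.0, 4.0])}
query = np.array([0.0, 0.9, 2.1, 1.1, 0.1])  # a slightly warped "circle"
best = min(templates, key=lambda k: dtw_distance(query, templates[k]))
```

    Template adaptation, as described in the abstract, would additionally update the stored templates from accepted matches; that step is omitted here.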

  1. Beating time: How ensemble musicians' cueing gestures communicate beat position and tempo.

    Science.gov (United States)

    Bishop, Laura; Goebl, Werner

    2018-01-01

    Ensemble musicians typically exchange visual cues to coordinate piece entrances. "Cueing-in" gestures indicate when to begin playing and at what tempo. This study investigated how timing information is encoded in musicians' cueing-in gestures. Gesture acceleration patterns were expected to indicate beat position, while gesture periodicity, duration, and peak gesture velocity were expected to indicate tempo. Same-instrument ensembles (e.g., piano-piano) were expected to synchronize more successfully than mixed-instrument ensembles (e.g., piano-violin). Duos performed short passages as their head and (for violinists) bowing hand movements were tracked with accelerometers and Kinect sensors. Performers alternated between leader/follower roles; leaders heard a tempo via headphones and cued their partner in nonverbally. Violin duos synchronized more successfully than either piano duos or piano-violin duos, possibly because violinists were more experienced in ensemble playing than pianists. Peak acceleration indicated beat position in leaders' head-nodding gestures. Gesture duration and periodicity in leaders' head and bowing hand gestures indicated tempo. The results show that the spatio-temporal characteristics of cueing-in gestures guide beat perception, enabling synchronization with visual gestures that follow a range of spatial trajectories.

  2. Multisensory integration: the case of a time window of gesture-speech integration.

    Science.gov (United States)

    Obermeier, Christian; Gunter, Thomas C

    2015-02-01

    This experiment investigates the integration of gesture and speech from a multisensory perspective. In a disambiguation paradigm, participants were presented with short videos of an actress uttering sentences like "She was impressed by the BALL, because the GAME/DANCE...." The ambiguous noun (BALL) was accompanied by an iconic gesture fragment containing information to disambiguate the noun toward its dominant or subordinate meaning. We used four different temporal alignments between noun and gesture fragment: the identification point (IP) of the noun was either prior to (+120 msec), synchronous with (0 msec), or lagging behind the end of the gesture fragment (-200 and -600 msec). ERPs triggered to the IP of the noun showed significant differences for the integration of dominant and subordinate gesture fragments in the -200, 0, and +120 msec conditions. The outcome of this integration was revealed at the target words. These data suggest a time window for direct semantic gesture-speech integration ranging from at least -200 up to +120 msec. Although the -600 msec condition did not show any signs of direct integration at the homonym, significant disambiguation was found at the target word. An explorative analysis suggested that gesture information was directly integrated at the verb, indicating that there are multiple positions in a sentence where direct gesture-speech integration takes place. Ultimately, this would imply that in natural communication, where a gesture lasts for some time, several aspects of that gesture will have their specific and possibly distinct impact on different positions in an utterance.

  3. Selection of suitable hand gestures for reliable myoelectric human computer interface.

    Science.gov (United States)

    Castro, Maria Claudia F; Arjunan, Sridhar P; Kumar, Dinesh K

    2015-04-09

    A myoelectric-controlled prosthetic hand requires machine-based identification of hand gestures using the surface electromyogram (sEMG) recorded from the forearm muscles. This study has observed that a subset of the hand gestures has to be selected for accurate automated hand gesture recognition, and reports a method to select these gestures to maximize the sensitivity and specificity. Experiments were conducted in which sEMG was recorded from the muscles of the forearm while subjects performed hand gestures, and the recordings were then classified off-line. The performances of ten gestures were ranked using the proposed Positive-Negative Performance Measurement Index (PNM), generated by a series of confusion matrices. When using all ten gestures, the sensitivity and specificity were 80.0% and 97.8%. After ranking the gestures using the PNM, six gestures were selected that gave sensitivity and specificity greater than 95% (96.5% and 99.3%): hand open, hand close, little finger flexion, ring finger flexion, middle finger flexion, and thumb flexion. This work has shown that reliable myoelectric human-computer interface systems require careful selection of the gestures to be recognized; without such selection, the reliability is poor.
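
    The gesture ranking starts from confusion matrices, from which per-gesture sensitivity and specificity follow by one-vs-rest counting. The PNM index itself is the authors' measure and is not reproduced here; this sketch shows only the standard sensitivity/specificity computation on a toy 3-gesture confusion matrix.

```python
import numpy as np

def sensitivity_specificity(conf):
    """Per-class sensitivity and specificity from a square confusion
    matrix (rows = true class, columns = predicted class)."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    tp = np.diag(conf)               # correctly classified per class
    fn = conf.sum(axis=1) - tp       # missed instances of the class
    fp = conf.sum(axis=0) - tp       # other classes predicted as it
    tn = total - tp - fn - fp        # everything else
    return tp / (tp + fn), tn / (tn + fp)

# Toy 3-gesture confusion matrix: gesture 2 is often confused with gesture 1.
conf = [[48, 1, 1],
        [2, 45, 3],
        [0, 10, 40]]
sens, spec = sensitivity_specificity(conf)
```

    Gestures with low sensitivity or specificity (here, gesture 2) would be the first candidates for removal from the recognized set.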

  4. The Hunt for a Counterpart to GW150914

    Science.gov (United States)

    Kohler, Susanna

    2016-07-01

    On 14 September 2015, the Laser Interferometer Gravitational-wave Observatory (LIGO), in a pre-operational testing state at the time, detected its first gravitational-wave signal. The LIGO team sprang into action, performing data-quality checks on this unexpected signal. Within two days, they had sent a notification to 63 observing teams at observatories spanning the entire electromagnetic spectrum, from radio to gamma-ray wavelengths. Illustration of a binary neutron star merger: the neutron stars 1) inspiral, 2) can produce a short gamma-ray burst, 3) can fling out hot, radioactive material in the form of a kilonova, and 4) form a massive neutron star or black hole with a possible remnant debris disk around it. [NASA/ESA/A. Feild (STScI)] Thus began the very first hunt for an electromagnetic counterpart to a detected gravitational-wave signal. What were they looking for? As two compact objects in a binary system merge, the system is expected to emit energy in the form of gravitational waves. If both of the compact objects are black holes, we're unlikely to see any electromagnetic radiation in the process, unless the merger is occurring in an (improbable) environment filled with gas and dust. But if one or both of the two compact objects is a neutron star, then there are a number of electromagnetic signatures that could occur due to energetic outflows. If a relativistic jet forms, we could see a short gamma-ray burst and X-ray, optical, and radio afterglows. Sub-relativistic outflows could produce optical and near-infrared signals, or a radio blast wave. Timeline of observations of GW150914, separated by wavelength band, relative to the time of the gravitational-wave trigger: the top row shows LIGO information releases; the bottom four rows show high-energy, optical, near-infrared, and radio observations, respectively. [Abbott et al. 2016] Surprise signal: since LIGO and Virgo (LIGO's European counterpart) were primarily expecting to detect

  5. Workshop introduction

    International Nuclear Information System (INIS)

    Streeper, Charles

    2010-01-01

    The Department of Energy's National Nuclear Security Administration's Global Threat Reduction Initiative (GTRI) has three subprograms that directly reduce the nuclear/radiological threat; Convert (Highly Enriched Uranium), Protect (Facilities), and Remove (Materials). The primary mission of the Off-Site Source Recovery Project (OSRP) falls under the 'Remove' subset. The purpose of this workshop is to provide a venue for joint-technical collaboration between the OSRP and the Nuclear Radiation Safety Service (NRSS). Eisenhower's Atoms for Peace initiative and the Soviet equivalent both promoted the spread of the paradoxical (peaceful and harmful) properties of the atom. The focus of nonproliferation efforts has been rightly dedicated to fissile materials and the threat they pose. Continued emphasis on radioactive materials must also be encouraged. An unquantifiable threat still exists in the prolific quantity of sealed radioactive sources (sources) spread worldwide. It does not appear that the momentum of the evolution in the numerous beneficial applications of radioactive sources will subside in the near future. Numerous expert studies have demonstrated the potentially devastating economic and psychological impacts of terrorist use of a radiological dispersal or emitting device. The development of such a weapon, from the acquisition of the material to the technical knowledge needed to develop and use it, is straightforward. There are many documented accounts worldwide of accidental and purposeful diversions of radioactive materials from regulatory control. The burden of securing sealed sources often falls upon the source owner, who may not have a disposal pathway once the source reaches the end of its useful life. This disposal problem is exacerbated by some source owners not having the resources to safely and compliantly store them. US Nuclear Regulatory Commission (NRC) data suggests that, in the US alone, there are tens of thousands of high-activity (IAEA

  6. Coronary Heart Disease Preoperative Gesture Interactive Diagnostic System Based on Augmented Reality.

    Science.gov (United States)

    Zou, Yi-Bo; Chen, Yi-Min; Gao, Ming-Ke; Liu, Quan; Jiang, Si-Yu; Lu, Jia-Hui; Huang, Chen; Li, Ze-Yu; Zhang, Dian-Hua

    2017-08-01

    Coronary heart disease preoperative diagnosis plays an important role in the treatment of vascular interventional surgery. In practice, most doctors diagnose the position of a vascular stenosis and then empirically estimate its severity from selective coronary angiography images, rather than using a mouse, keyboard, and computer during preoperative diagnosis. This diagnostic modality lacks intuitive and natural interaction, and the results are not accurate enough. To address these problems, a coronary heart disease preoperative gesture-interactive diagnostic system based on augmented reality is proposed. The system uses the Leap Motion Controller to capture hand gesture video sequences and extract features, namely the position and orientation vectors of the gesture motion trajectory and the change of hand shape. The training plane is determined by the K-means algorithm, and the effect of gesture training is improved by using multiple features and multiple observation sequences. The reusability of gestures is improved by establishing a state transition model. Algorithm efficiency is improved by gesture prejudgment, which applies threshold discrimination before recognition. The integrity of the trajectory is preserved and the gesture motion space is extended by employing a space rotation transformation of the gesture manipulation plane. Ultimately, gesture recognition based on SRT-HMM is realized. The diagnosis and measurement of vascular stenosis are intuitively and naturally realized by operating and measuring the coronary artery model with augmented reality and gesture interaction techniques. The gesture recognition experiments show the discrimination and generalization ability of the algorithm, and the gesture interaction experiments prove the availability and reliability of the system.

  7. Distinguishing the processing of gestures from signs in deaf individuals: an fMRI study.

    Science.gov (United States)

    Husain, Fatima T; Patkin, Debra J; Thai-Van, Hung; Braun, Allen R; Horwitz, Barry

    2009-06-18

    Manual gestures occur on a continuum from co-speech gesticulations to conventionalized emblems to language signs. Our goal in the present study was to understand the neural bases of the processing of gestures along such a continuum. We studied four types of gestures, varying along linguistic and semantic dimensions: linguistic and meaningful American Sign Language (ASL), non-meaningful pseudo-ASL, meaningful emblematic, and nonlinguistic, non-meaningful made-up gestures. Pre-lingually deaf, native signers of ASL participated in the fMRI study and performed two tasks while viewing videos of the gestures: a visuo-spatial (identity) discrimination task and a category discrimination task. We found that the categorization task activated left ventral middle and inferior frontal gyrus, among other regions, to a greater extent compared to the visual discrimination task, supporting the idea of semantic-level processing of the gestures. The reverse contrast resulted in enhanced activity of bilateral intraparietal sulcus, supporting the idea of featural-level processing (analogous to phonological-level processing of speech sounds) of the gestures. Regardless of the task, we found that brain activation patterns for the nonlinguistic, non-meaningful gestures were the most different compared to the ASL gestures. The activation patterns for the emblems were most similar to those of the ASL gestures and those of the pseudo-ASL were most similar to the nonlinguistic, non-meaningful gestures. The fMRI results provide partial support for the conceptualization of different gestures as belonging to a continuum and the variance in the fMRI results was best explained by differences in the processing of gestures along the semantic dimension.

  8. An investigation of co-speech gesture production during action description in Parkinson's disease.

    Science.gov (United States)

    Cleary, Rebecca A; Poliakoff, Ellen; Galpin, Adam; Dick, Jeremy P R; Holler, Judith

    2011-12-01

    Parkinson's disease (PD) can impact enormously on speech communication. One aspect of non-verbal behaviour closely tied to speech is co-speech gesture production. In healthy people, co-speech gestures can add significant meaning and emphasis to speech. There is, however, little research into how this important channel of communication is affected in PD. The present study provides a systematic analysis of co-speech gestures which spontaneously accompany the description of actions in a group of PD patients (N = 23, Hoehn and Yahr Stage III or less) and age-matched healthy controls (N = 22). The analysis considers different co-speech gesture types, using established classification schemes from the field of gesture research. The analysis focuses on the rate of these gestures as well as on their qualitative nature. In doing so, the analysis attempts to overcome several methodological shortcomings of research in this area. Contrary to expectation, gesture rate was not significantly affected in our patient group, with relatively mild PD. This indicates that co-speech gestures could compensate for speech problems. However, while gesture rate seems unaffected, the qualitative precision of gestures representing actions was significantly reduced. This study demonstrates the feasibility of carrying out fine-grained, detailed analyses of gestures in PD and offers insights into an as yet neglected facet of communication in patients with PD. Based on the present findings, an important next step is the closer investigation of the qualitative changes in gesture (including different communicative situations) and an analysis of the heterogeneity in co-speech gesture production in PD. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Climatology of GW-TIDs in the magnetic equatorial upper thermosphere over India

    Science.gov (United States)

    Manju, G.; Aswathy, R. P.

    2017-11-01

    An analysis of gravity-wave-induced travelling ionospheric disturbances (GW-TIDs) in the thermosphere during high and low solar epochs is undertaken using ionosonde data at Trivandrum (8.5°N, 77°E). Wavelet analysis is performed on the temporal variations of foF2, and the amplitudes of waves present in the two period bands of (0.5-1.5) h and (2-4) h are extracted. The real height profiles are generated at 15 min intervals for the whole day (for sample days) during high and low solar activity years. The study reveals that the GW-TID activity is significantly greater for solar minimum compared to solar maximum for the period 8.5-17.5 h. Diurnally, the GW-TID activity in the (2-4) h period band peaks in the post-sunset hours for both high and low solar epochs. For the (0.5-1.5) h period band, the diurnal maximum in GW-TID activity occurs in the post-sunset hours for the high solar epoch, while it occurs in the morning hours around 10 h LT for the low solar epoch. Seasonally, the daytime GW-TID activity maximizes (minimizes) for winter (vernal equinox). The post-sunset GW-TID activity maximizes (minimizes) for summer/winter (vernal equinox). The other interesting observation is the anticorrelation of GW-TID activity in the upper thermosphere with solar activity during the daytime, and the correlation of the same with solar activity in the post-sunset hours. The present daytime results are in agreement with the equatorial daytime GW-TID behaviour reported from CHAMP satellite observations. GW-TID activity during post-sunset time in the equatorial upper thermosphere has not been reported so far.
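
    The study extracts wave amplitudes in the (0.5-1.5) h and (2-4) h period bands from foF2 time series; the paper uses wavelet analysis. A simpler FFT-based band-amplitude estimate on a synthetic record illustrates the idea of isolating those period bands; the signal below is invented and is not the ionosonde data.

```python
import numpy as np

def band_amplitude(x, dt_hours, period_band):
    """Peak spectral amplitude within a band of periods (in hours),
    estimated from the FFT of a mean-removed time series."""
    x = np.asarray(x, float) - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), d=dt_hours)          # cycles per hour
    amps = np.abs(np.fft.rfft(x)) * 2.0 / len(x)         # sinusoid amplitudes
    lo, hi = 1.0 / period_band[1], 1.0 / period_band[0]  # band in frequency
    mask = (freqs >= lo) & (freqs <= hi)
    return amps[mask].max()

# Synthetic foF2-like record: a 1 h wave plus a 3 h wave, 15 min sampling.
dt = 0.25  # hours
t = np.arange(0, 24, dt)
x = 0.3 * np.sin(2 * np.pi * t / 1.0) + 0.8 * np.sin(2 * np.pi * t / 3.0)
a_short = band_amplitude(x, dt, (0.5, 1.5))  # recovers the ~0.3 amplitude
a_long = band_amplitude(x, dt, (2.0, 4.0))   # recovers the ~0.8 amplitude
```

    A wavelet transform, unlike this global FFT estimate, additionally localizes when in the day each band is active, which is what the diurnal analysis in the paper requires.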

  10. IPHE Infrastructure Workshop Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    None

    2010-02-01

    This proceedings contains information from the IPHE Infrastructure Workshop, a two-day interactive workshop held on February 25-26, 2010, to explore the market implementation needs for hydrogen fueling station development.

  11. Workshops as a Research Methodology

    Science.gov (United States)

    Ørngreen, Rikke; Levinsen, Karin

    2017-01-01

    This paper contributes to knowledge on workshops as a research methodology, and specifically on how such workshops pertain to e-learning. A literature review illustrated that workshops are discussed according to three different perspectives: workshops as a means, workshops as practice, and workshops as a research methodology. Focusing primarily on…

  12. Convergence of quasiparticle self-consistent GW calculations of transition metal monoxides

    OpenAIRE

    Das, Suvadip; Coulter, John E.; Manousakis, Efstratios

    2014-01-01

    Finding an accurate ab initio approach for calculating the electronic properties of transition metal oxides has been a problem for several decades. In this paper, we investigate the electronic structure of the transition metal monoxides MnO, CoO, and NiO in their undistorted rock-salt structure within a fully iterated quasiparticle self-consistent GW (QPscGW) scheme. We study the convergence of the QPscGW method, i.e., how the quasiparticle energy eigenvalues and wavefunctions converge as a f...

  13. [George Herbert Mead. Thought as the conversation of interior gestures].

    Science.gov (United States)

    Quéré, Louis

    2010-01-01

    For George Herbert Mead, thinking amounts to holding an "inner conversation of gestures". Such a conception does not seem especially original at first glance. What makes it truly original is the "social-behavioral" approach of which it is a part, and particularly two ideas. The first is that the conversation in question is a conversation of gestures or attitudes; the second, that thought and reflexive intelligence arise from the internalization of an external process supported by the social mechanism of communication: that of the organization of conduct. It is important, then, to understand what distinguishes such ideas from those of the founder of behavioral psychology, John B. Watson, for whom thinking amounts to nothing other than subvocal speech.

  14. Gesture recognition for smart home applications using portable radar sensors.

    Science.gov (United States)

    Wan, Qian; Li, Yiran; Li, Changzhi; Pal, Ranadip

    2014-01-01

    In this article, we consider the design of a human gesture recognition system based on pattern recognition of signatures from a portable smart radar sensor. Powered by AAA batteries, the smart radar sensor operates in the 2.4 GHz industrial, scientific and medical (ISM) band. We analyzed the feature space using principal components and application-specific time and frequency domain features extracted from radar signals for two different sets of gestures. We illustrate that a nearest-neighbor-based classifier can achieve greater than 95% accuracy for multi-class classification using 10-fold cross validation when features are extracted based on magnitude differences and Doppler shifts, as compared to features extracted through orthogonal transformations. The reported results illustrate the potential of intelligent radars integrated with a pattern recognition system for high-accuracy smart home and health monitoring purposes.
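
    The reported classifier is nearest-neighbor evaluated with 10-fold cross validation. A minimal sketch of that evaluation loop on synthetic two-class "gesture feature" clusters (not the radar features) could look like this:

```python
import numpy as np

def one_nn_predict(train_X, train_y, x):
    """Predict the label of x as that of its nearest training sample."""
    d = np.linalg.norm(train_X - x, axis=1)
    return train_y[np.argmin(d)]

def cross_val_accuracy(X, y, n_folds=10):
    """k-fold cross-validated accuracy of the 1-NN classifier."""
    idx = np.arange(len(X))
    folds = np.array_split(idx, n_folds)
    correct = 0
    for fold in folds:
        train = np.setdiff1d(idx, fold)   # hold out this fold
        for i in fold:
            correct += one_nn_predict(X[train], y[train], X[i]) == y[i]
    return correct / len(X)

# Two well-separated synthetic gesture feature clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
acc = cross_val_accuracy(X, y, n_folds=10)
```

    With well-chosen features (the paper's magnitude-difference and Doppler-shift features play that role), even this simple classifier reaches high cross-validated accuracy.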

  15. Historical-critical objects and gestures: Hebreia by Fabio Mauri

    Directory of Open Access Journals (Sweden)

    Maria Augusta Vilalba Nunes

    2015-09-01

    In 1971, the Italian artist Fabio Mauri produced the performance/installation artwork Ebrea, which evokes the memory of the Shoah through performative gestures and the sculpture-objects that make up the scene. In Ebrea, time unfolds and brings the memory of a catastrophe back to the present. So, more than twenty years after the end of the war, why would Mauri take up this memory, which was growing distant and beginning to assume an unreal form? The anachronism of Mauri's gesture is intrinsically connected to the need to account for the hidden memories that do not leave him and whose traces, in his opinion, should not be erased from history.

  16. Finger tips detection for two handed gesture recognition

    Science.gov (United States)

    Bhuyan, M. K.; Kar, Mithun Kumar; Neog, Debanga Raj

    2011-10-01

    In this paper, a novel algorithm is proposed for fingertip detection in view of two-handed static hand pose recognition. In our method, the fingertips of both hands are detected after detecting the hand regions by skin color-based segmentation. First, the face is removed from the image using a Haar classifier; subsequently, the regions corresponding to the gesturing hands are isolated by a region-labeling technique. Next, the key geometric features characterizing the gesturing hands are extracted for the two hands. Finally, for all possible/allowable finger movements, a probabilistic model is developed for pose recognition. The proposed method can be employed in a variety of applications, such as sign language recognition and human-robot interaction.
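
    The first step of the pipeline is skin color-based segmentation. The paper's exact color model is not given in the abstract; the sketch below uses a common rule-based RGB skin heuristic on a toy image as a stand-in for that step.

```python
import numpy as np

def skin_mask(rgb):
    """Rule-based skin-color mask (a widely used RGB heuristic; the
    paper's actual segmentation scheme may differ)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (abs(r - g) > 15)

# Toy image: a "hand" patch of skin tone on a blue background.
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[..., 2] = 200              # blue background
img[2:6, 2:6] = [200, 120, 90] # skin-colored square
mask = skin_mask(img)
```

    In the full method, connected regions of this mask (after face removal) would be labeled and analyzed for fingertip candidates.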

  17. Feasibility of interactive gesture control of a robotic microscope

    Directory of Open Access Journals (Sweden)

    Antoni Sven-Thomas

    2015-09-01

    Robotic devices are becoming increasingly available in clinics. One example is the motorized surgical microscope. While there are different scenarios for using such devices for autonomous tasks, simple and reliable interaction with the device is key for acceptance by surgeons. We study how gesture tracking can be integrated within the setup of a robotic microscope. In our setup, a Leap Motion Controller is used to track hand motion and adjust the field of view accordingly. We demonstrate with a survey that moving the field of view over a specified course is possible even for untrained subjects. Our results indicate that touch-less interaction with robots carrying small, near-field gesture sensors is feasible and can be of use in clinical scenarios where robotic devices are used in direct proximity of patient and physicians.

  18. Gesture Interaction Browser-Based 3D Molecular Viewer.

    Science.gov (United States)

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2016-01-01

    The paper presents an open source system that allows the user to interact with a 3D molecular viewer using associated hand gestures for rotating, scaling, and panning the rendered model. The novelty of this approach is that the entire application is browser-based and doesn't require installation of third-party plug-ins or additional software components in order to visualize the supported chemical file formats. This kind of solution is suitable for instructing users in less IT-oriented environments, like medicine or chemistry. For rendering various molecular geometries, our team used GLmol (a molecular viewer written in JavaScript). The interaction with the 3D models is done with the Leap Motion controller, which allows real-time tracking of the user's hand gestures. The first results confirmed that the resulting application leads to a better way of understanding various types of translational bioinformatics related problems in both biomedical research and education.

  19. Intelligent RF-Based Gesture Input Devices Implemented Using e-Textiles

    Directory of Open Access Journals (Sweden)

    Dana Hughes

    2017-01-01

    We present a radio-frequency (RF) based approach to gesture detection and recognition, using e-textile versions of common transmission lines used in microwave circuits. This approach allows for easy fabrication of input swatches that can detect a continuum of finger positions and similar basic gestures using a single measurement line. We demonstrate that the swatches can perform gesture detection when under thin layers of cloth or when weatherproofed, providing a level of versatility not present in other approaches. Additionally, using small convolutional neural networks, low-level gestures can be identified with high accuracy using a small, inexpensive microcontroller, allowing for an intelligent fabric that reports only gestures of interest, rather than a simple sensor requiring constant surveillance from an external computing device. The resulting e-textile smart composite has applications in controlling wearable devices by providing a simple, eyes-free mechanism for inputting simple gestures.

  20. ICP-MS Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Carman, April J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Eiden, Gregory C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-11-01

    This is a short document that explains the materials that will be transmitted to LLNL and DNN HQ regarding the ICP-MS Workshop held at PNNL June 17-19th. The goal of the information is to pass on to LLNL information regarding the planning and preparations for the Workshop at PNNL in preparation of the SIMS workshop at LLNL.

  1. Fermi Observations of the LIGO Event GW170104

    Energy Technology Data Exchange (ETDEWEB)

    Goldstein, A.; Cleveland, W. H.; Connaughton, V. [Science and Technology Institute, Universities Space Research Association, Huntsville, AL 35805 (United States); Veres, P.; Briggs, M. S.; Hamburg, R.; Jenke, P. A.; Bhat, N. [Center for Space Plasma and Aeronomic Research, University of Alabama in Huntsville, 320 Sparkman Drive, Huntsville, AL 35899 (United States); Burns, E.; Canton, T. Dal [NASA Postdoctoral Program Fellow, Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Blackburn, L. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Christensen, N. [Physics and Astronomy, Carleton College, MN, 55057 (United States); Hui, C. M.; Kocevski, D.; Wilson-Hodge, C. A. [Astrophysics Office, ST12, NASA/Marshall Space Flight Center, Huntsville, AL 35812 (United States); Preece, R. D. [Department of Space Science, University of Alabama in Huntsville, 320 Sparkman Drive, Huntsville, AL 35899 (United States); Siellez, K. [Center for Relativistic Astrophysics and School of Physics, Georgia Institute of Technology, Atlanta, GA 30332 (United States); Veitch, J. [University of Birmingham, Birmingham B15 2TT (United Kingdom); Bissaldi, E. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Gibby, M. H., E-mail: kocevski@slac.stanford.edu, E-mail: melissa.pesce.rollins@pi.infn.it, E-mail: nicola.omodei@stanford.edu, E-mail: giacomov@slac.stanford.edu [Jacobs Technology, Inc., Huntsville, AL (United States); Collaboration: (Fermi-LAT Collaboration); and others

    2017-09-01

    We present the Fermi Gamma-ray Burst Monitor (GBM) and Large Area Telescope (LAT) observations of the LIGO binary black hole (BBH) merger event GW170104. No candidate electromagnetic counterpart was detected by either GBM or LAT. A detailed analysis of the GBM and LAT data over timescales from seconds to days covering the Laser Interferometer Gravitational-wave Observatory (LIGO) localization region is presented. The resulting flux upper bound from the GBM is (5.2–9.4) × 10⁻⁷ erg cm⁻² s⁻¹ in the 10–1000 keV range and from the LAT is (0.2–90) × 10⁻⁹ erg cm⁻² s⁻¹ in the 0.1–1 GeV range. We also describe the improvements to our automated pipelines and analysis techniques for searching for and characterizing potential electromagnetic counterparts of future gravitational-wave events from Advanced LIGO/Virgo.

  2. Dark Energy after GW170817 and GRB170817A

    Science.gov (United States)

    Creminelli, Paolo; Vernizzi, Filippo

    2017-12-01

    The observation of GW170817 and its electromagnetic counterpart implies that gravitational waves travel at the speed of light, with deviations smaller than a few × 10⁻¹⁵. We discuss the consequences of this experimental result for models of dark energy and modified gravity characterized by a single scalar degree of freedom. To avoid tuning, the speed of gravitational waves must be unaffected not only for our particular cosmological solution but also for nearby solutions obtained by slightly changing the matter abundance. For this to happen, the coefficients of various operators must satisfy precise relations that we discuss both in the language of the effective field theory of dark energy and in the covariant one, for Horndeski, beyond Horndeski, and degenerate higher-order theories. The simplification is dramatic: of the three functions describing quartic and quintic beyond Horndeski theories, only one remains and reduces to a standard conformal coupling to the Ricci scalar for Horndeski theories. We show that the deduced relations among operators do not introduce further tuning of the models, since they are stable under quantum corrections.

  4. GW and Bethe-Salpeter study of small water clusters

    Energy Technology Data Exchange (ETDEWEB)

    Blase, Xavier, E-mail: xavier.blase@neel.cnrs.fr; Boulanger, Paul [CNRS, Institut NEEL, F-38042 Grenoble (France); Bruneval, Fabien [CEA, DEN, Service de Recherches de Métallurgie Physique, F-91191 Gif-sur-Yvette (France); Fernandez-Serra, Marivi [Department of Physics and Astronomy, Stony Brook University, Stony Brook, New York 11794-3800 (United States); Institute for Advanced Computational Sciences, Stony Brook University, Stony Brook, New York 11794-3800 (United States); Duchemin, Ivan [INAC, SP2M/L-Sim, CEA/UJF Cedex 09, 38054 Grenoble (France)

    2016-01-21

    We study, within the GW and Bethe-Salpeter many-body perturbation theories, the electronic and optical properties of small (H₂O)ₙ water clusters (n = 1-6). Comparison with high-level coupled-cluster CCSD(T) and third-order algebraic diagrammatic construction ADC(3) Green's function calculations indicates that the standard non-self-consistent G₀W₀@PBE and G₀W₀@PBE0 approaches significantly underestimate the ionization energy, by about 1.1 eV and 0.5 eV, respectively. Consequently, the related Bethe-Salpeter lowest optical excitations are found to lie much too low in energy when transitions are built from a non-self-consistent G₀W₀ description of the quasiparticle spectrum. Simple self-consistent schemes, with update of the eigenvalues only, are shown to provide a weak dependence on the Kohn-Sham starting point and much better agreement with reference calculations. The present findings rationalize possible theory-to-experiment discrepancies observed in previous G₀W₀ and Bethe-Salpeter studies of bulk water. The increase of the optical gap with increasing cluster size is consistent with the evolution from the gas phase to dense ice or water phases, and results from enhanced screening of the electron-hole interaction.

  5. Music Conductor Gesture Recognized Interactive Music Generation System

    OpenAIRE

    CHEN, Shuai; MAEDA, Yoichiro; TAKAHASHI, Yasutake

    2012-01-01

    In research on interactive music generation, we propose a method in which the computer generates music automatically and the music is then arranged according to the human conductor's gestures before being output. In this research, the generated music is derived from chaotic sound, which is generated in real time from a network of chaotic elements. The conductor's hand motions are detected by a Microsoft Kinect in this system. Music theories are embedded ...

  6. Basic Hand Gestures Classification Based on Surface Electromyography

    Directory of Open Access Journals (Sweden)

    Aleksander Palkowski

    2016-01-01

    This paper presents an innovative classification system for hand gestures using two-channel surface electromyography analysis. The system uses a Support Vector Machine classifier whose kernel function and parameters are additionally optimised by the Cuckoo Search swarm algorithm. It is compared with standard Support Vector Machine classifiers with various kernel functions. An average classification rate of 98.12% was achieved for the proposed method.
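
    A minimal sketch of such a pipeline is shown below, using synthetic feature vectors in place of real two-channel sEMG features, and plain grid search as a simpler stand-in for the paper's Cuckoo Search optimisation of the SVM kernel parameters:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for two-channel sEMG features (e.g., RMS and
# zero-crossing counts per channel); real recordings would first be
# windowed and feature-extracted.
rng = np.random.default_rng(0)
n_per_class, n_classes, n_features = 40, 4, 4
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# The paper tunes the kernel and its parameters with Cuckoo Search; grid
# search is used here as a simpler stand-in for that optimisation step.
search = GridSearchCV(SVC(), {"kernel": ["rbf"], "C": [1, 10, 100],
                              "gamma": ["scale", 0.1, 1.0]}, cv=3)
search.fit(X_train, y_train)
accuracy = search.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

    Swapping the grid search for a swarm optimiser changes only how candidate (kernel, C, gamma) settings are proposed, not the rest of the pipeline.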

  7. Gestural representation of event structure in dyadic interaction

    OpenAIRE

    Christensen, Peer; Tylén, Kristian

    2013-01-01

    What are the underlying motivations for the conceptualization of events? Recent studies show that when people are asked to use nonverbal gestures to describe transitive events they prefer the semantic order Agent-Patient-Act, analogous to SOV in grammatical terms. The original explanation has been that this pattern reflects a cognitively “natural order” for the conceptualization of events. However, other types of transitive events have not been investigated in earlier studies. We report exper...

  8. Surgical gesture classification from video and kinematic data.

    Science.gov (United States)

    Zappella, Luca; Béjar, Benjamín; Hager, Gregory; Vidal, René

    2013-10-01

    Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone. Copyright © 2013 Elsevier B.V. All rights reserved.
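
    The bag-of-features (BoF) branch of this pipeline can be sketched as follows. The descriptors are synthetic stand-ins for the spatio-temporal features extracted from video clips, and the codebook size and class structure are invented for the example:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_words = 8  # codebook size (hypothetical)

# Synthetic stand-in for local spatio-temporal descriptors: each clip yields
# a set of descriptor vectors, and clips of the same gesture draw their
# descriptors from the same distribution.
def make_clip(gesture, n_desc=30):
    centers = {0: [0.0, 0.0], 1: [3.0, 3.0], 2: [0.0, 3.0]}
    return rng.normal(centers[gesture], 0.4, size=(n_desc, 2))

clips = [(make_clip(g), g) for g in [0, 1, 2] * 20]

# Learn the visual-word codebook from all descriptors.
all_desc = np.vstack([d for d, _ in clips])
codebook = KMeans(n_clusters=n_words, n_init=10, random_state=1).fit(all_desc)

def bof_histogram(descriptors):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()  # normalised word-frequency histogram

X = np.array([bof_histogram(d) for d, _ in clips])
y = np.array([g for _, g in clips])
clf = SVC(kernel="linear").fit(X[:45], y[:45])
accuracy = clf.score(X[45:], y[45:])
print(f"bag-of-features accuracy: {accuracy:.2f}")
```

    The LDS and MKL branches differ only in the clip representation and in how kernels are combined; the codebook-then-classify structure above is the BoF core.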

  9. Designing natural gesture interaction for archaeological data in immersive environments

    Directory of Open Access Journals (Sweden)

    Niccolò Albertini

    2017-05-01

    Archaeological data are heterogeneous, making it difficult to correlate and combine different types. Datasheets and pictures, stratigraphic data and 3D models, time and space mixed together: these are only a few of the categories a researcher has to deal with. New technologies may be able to help in this process, and solving research-related problems requires innovative solutions. In this paper, we describe the whole process of design and development of a prototype application that uses an immersive Virtual Reality system to access archaeological excavation 3D data through the Gesture Variation Follower (GVF) algorithm. This makes it possible to recognise which gesture is being performed and how it is performed. Archaeologists have participated actively in the design of the interface and of the set of gestures used for triggering the different tasks. Interactive machine learning techniques have been used for real-time detection of the gestures. As a case study, the agora of Segesta (Sicily, Italy) has been selected. Indeed, due to its complex architectural features and still ongoing fieldwork activities, Segesta represents an ideal context in which to test and develop a research approach integrating both traditional and more innovative tools and methods.

  10. ConductHome: Gesture Interface Control of Home Automation Boxes

    OpenAIRE

    J. Branstett; V. Gagneux; A. Leleu; B. Levadoux; J. Pascale

    2015-01-01

    This paper presents the interface ConductHome, which controls home automation systems with a Leap Motion using "invariant gesture protocols". This interface is meant to simplify the user's interaction with their environment. A hardware part allows the Leap Motion to be carried around the house. A software part interacts with the home automation box and displays useful information for the user. An objective of this work is the development of a natural/invariant/simpl...

  11. Voice and gesture-based 3D multimedia presentation tool

    Science.gov (United States)

    Fukutake, Hiromichi; Akazawa, Yoshiaki; Okada, Yoshihiro

    2007-09-01

    This paper proposes a 3D multimedia presentation tool that the user can manipulate intuitively through voice and gesture input alone, without a standard keyboard or mouse device. The authors developed this system as a presentation tool for rooms equipped with a large screen, such as an exhibition room in a museum, because in such an environment voice commands and pointing gestures are preferable to a keyboard or a mouse. The system was developed using IntelligentBox, a component-based 3D graphics software development system. IntelligentBox already provides various types of 3D visible, reactive functional components called boxes, e.g., a voice input component and various multimedia handling components. IntelligentBox also provides a dynamic data linkage mechanism called slot-connection that allows the user to develop 3D graphics applications by combining existing boxes through direct manipulation on a computer screen. Using IntelligentBox, the 3D multimedia presentation tool proposed in this paper was likewise developed by combining components through direct manipulation on a computer screen. The authors have previously proposed a 3D multimedia presentation tool using a stage metaphor and its voice input interface. Here, we extend the system to accept gesture input in addition to voice commands. This paper explains the details of the proposed 3D multimedia presentation tool and, in particular, describes its component-based voice and gesture input interfaces.

  12. Developing A Physical Gesture Acquisition System for Guqin Performance

    OpenAIRE

    He, Jingyin; Kapur, Ajay; Carnegie, Dale

    2015-01-01

    Motion-based musical interfaces are ubiquitous. With the plethora of sensing solutions and the possibility of developing custom designs, it is important that a new musical interface has the capability to perform any number of tasks. This paper presents a theoretical framework for the definition, design, and evaluation of a physical gesture acquisition system for Guqin performance. The framework is based on an iterative design process and draws upon knowledge of Guqin performance to de...

  13. Gestures of grieving and mourning: a transhistoric dance-scheme

    OpenAIRE

    Briand , Michel

    2013-01-01

    This short analysis refers to cultural anthropology and the aesthetics of dance, and presents a few remarkable steps in the long history of a particular kind of danced gesture: expressions of feelings and representations of activities related to grieving and mourning, such as lifting the hands in the air or onto one's head and dramatically waving long hair. The focus is on some universals and similarities, as well as on contextualized variations and differences, in ...

  14. The importance of gestural communication: a study of human-dog communication using incongruent information.

    Science.gov (United States)

    D'Aniello, Biagio; Scandurra, Anna; Alterisio, Alessandra; Valsecchi, Paola; Prato-Previde, Emanuela

    2016-11-01

    We assessed how water rescue dogs, which were equally accustomed to respond to gestural and verbal requests, weighted gestural versus verbal information when asked by their owner to perform an action. Dogs were asked to perform four different actions ("sit", "lie down", "stay", "come") providing them with a single source of information (in Phase 1, gestural, and in Phase 2, verbal) or with incongruent information (in Phase 3, gestural and verbal commands referred to two different actions). In Phases 1 and 2, we recorded the frequency of correct responses as 0 or 1, whereas in Phase 3, we computed a 'preference index' (percentage of gestural commands followed over the total commands responded). Results showed that dogs followed gestures significantly better than words when these two types of information were used separately. Females were more likely to respond to gestural than verbal commands and males responded to verbal commands significantly better than females. In the incongruent condition, when gestures and words simultaneously indicated two different actions, the dogs overall preferred to execute the action required by the gesture rather than that required verbally, except when the verbal command "come" was paired with the gestural command "stay" with the owner moving away from the dog. Our data suggest that in dogs accustomed to respond to both gestural and verbal requests, gestures are more salient than words. However, dogs' responses appeared to be dependent also on the contextual situation: dogs' motivation to maintain proximity with an owner who was moving away could have led them to make the more 'convenient' choices between the two incongruent instructions.
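
    The Phase 3 "preference index" reduces to a simple percentage; a minimal sketch, with hypothetical trial counts, is:

```python
def preference_index(gestural_followed, total_responded):
    """Percentage of incongruent trials in which the dog followed the
    gestural rather than the verbal command (Phase 3 of the study)."""
    if total_responded == 0:
        raise ValueError("no responded trials")
    return 100.0 * gestural_followed / total_responded

# A hypothetical dog that followed the gesture on 7 of 10 responded trials:
print(preference_index(7, 10))  # → 70.0
```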

  15. Gesture-Based Controls for Robots: Overview and Implications for Use by Soldiers

    Science.gov (United States)

    2016-07-01

    application is that of the Kinect system, where camera-based interpretation of user body posture and movements serves to control videogame features...modalities will benefit from respective advantages. The combination of deictic gestures to support human-human interactions has been well...used in spontaneous gesture production were to clarify speech utterances. Studies have shown demonstrable benefits from use of gestures to support

  16. Effects of observing and producing deictic gestures on memory and learning in different age groups

    OpenAIRE

    Ouwehand, Kim

    2016-01-01

    The studies presented in this dissertation aimed to investigate whether observing or producing deictic gestures (i.e., pointing and tracing gestures to index a referent in space or a movement pathway) could facilitate memory and learning in children, young adults, and older adults. More specifically, regarding memory, it was investigated whether the use of deictic gestures would improve performance on tasks targeting cognitive functions that are found to change with age (worki...

  17. A Natural Interaction Interface for UAVs Using Intuitive Gesture Recognition

    Science.gov (United States)

    Chandarana, Meghan; Trujillo, Anna; Shimada, Kenji; Allen, Danette

    2016-01-01

    The popularity of unmanned aerial vehicles (UAVs) is increasing as technological advancements boost their favorability for a broad range of applications. One application is science data collection. In fields like Earth and atmospheric science, researchers are seeking to use UAVs to augment their current portfolio of platforms and increase their access to geographic areas of interest. By increasing the number of data collection platforms, UAVs will significantly improve system robustness and allow for more sophisticated studies. Scientists would like to be able to deploy an available fleet of UAVs to fly a desired flight path and collect sensor data without needing to understand the complex low-level controls required to describe and coordinate such a mission. A natural interaction interface for a Ground Control System (GCS) using gesture recognition is developed to allow non-expert users (e.g., scientists) to define a complex flight path for a UAV using intuitive hand gesture inputs from a constructed gesture library. The GCS calculates the combined trajectory on-line, verifies the trajectory with the user, and sends it to the UAV controller to be flown.

  18. Gestural Interaction for Virtual Reality Environments through Data Gloves

    Directory of Open Access Journals (Sweden)

    G. Rodriguez

    2017-05-01

    In virtual environments, virtual hand interactions play a key role in interactivity and realism, allowing fine motions to be performed. Data gloves are widely used in Virtual Reality (VR), and by simulating the natural anatomy of the human hand (the avatar's hand) in appearance and motion, it is possible to interact with the environment and virtual objects. Recently, hand gestures have come to be considered one of the most meaningful and expressive signals. Consequently, this paper explores the use of hand gestures through data gloves as a means of Human-Computer Interaction (HCI) for VR applications. Using a hand gesture recognition and tracking method, accurate and real-time interactive performance can be obtained. To verify the effectiveness and usability of the system, an experiment on ease of learning based on execution time was performed. The experimental results demonstrate that this interaction approach presents no problems for people experienced with computer applications, while for people with only basic knowledge, who have some initial difficulties, the system becomes easy to use with practice.

  19. Learning Semantics of Gestural Instructions for Human-Robot Collaboration

    Science.gov (United States)

    Shukla, Dadhichi; Erkent, Özgür; Piater, Justus

    2018-01-01

    Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions. PMID:29615888

  20. Conciliatory gestures promote forgiveness and reduce anger in humans.

    Science.gov (United States)

    McCullough, Michael E; Pedersen, Eric J; Tabak, Benjamin A; Carter, Evan C

    2014-07-29

    Conflict is an inevitable component of social life, and natural selection has exerted strong effects on many organisms to facilitate victory in conflict and to deter conspecifics from imposing harms upon them. Like many species, humans likely possess cognitive systems whose function is to motivate revenge as a means of deterring individuals who have harmed them from harming them again in the future. However, many social relationships often retain value even after conflicts have occurred between interactants, so natural selection has very likely also endowed humans with cognitive systems whose function is to motivate reconciliation with transgressors whom they perceive as valuable and nonthreatening, notwithstanding their harmful prior actions. In a longitudinal study with 337 participants who had recently been harmed by a relationship partner, we found that conciliatory gestures (e.g., apologies, offers of compensation) were associated with increases in victims' perceptions of their transgressors' relationship value and reductions in perceptions of their transgressors' exploitation risk. In addition, conciliatory gestures appeared to accelerate forgiveness and reduce reactive anger via their intermediate effects on relationship value and exploitation risk. These results strongly suggest that conciliatory gestures facilitate forgiveness and reduce anger by modifying victims' perceptions of their transgressors' value as relationship partners and likelihood of recidivism.

  1. Gesture and form in the Neolithic graphic expression

    Directory of Open Access Journals (Sweden)

    Philippe HAMEAU

    2011-10-01

    The painted parietal sign retains the memory of the gesture that produced it; this is one of the distinctive features of this particular artefact. However, it is not possible to reconstruct this gesture unless we give the context of the sign and present the numerous physical and cultural parameters that governed its production. For schematic paintings of the Neolithic age, we must take into account a set of criteria such as the topography of the wall and site, the cultural constraints that determine the location of figures, and the ritual practices at the origin of the graphical expression. The painter perceives, adapts and behaves according to this spatial and social environment. We refer here to several strategies: attention to the parietal microtopography in accordance with the signs to be drawn, respect for criteria that govern the choice of site, such as the hygrophily of places and the rubefaction of rock walls, the need to paint at the limits of accessibility of site and wall, and the use of drawing tools to extend the capacities of the body. The efficiency of the gesture consists in producing a sign that bears meaning because it is in harmony with the features of its support.

  3. A biometric authentication model using hand gesture images.

    Science.gov (United States)

    Fong, Simon; Zhuang, Yan; Fister, Iztok; Fister, Iztok

    2013-10-30

    A novel hand biometric authentication method based on measurements of the user's stationary hand gestures in sign language is proposed. The hand gestures can be acquired sequentially by a low-cost video camera, and another level of contextual information can be associated with these hand signs for use in biometric authentication. As an analogy, instead of typing the password 'iloveu' as text, which is relatively vulnerable over a communication network, a signer can encode a biometric password as a sequence of hand signs: 'i', 'l', 'o', 'v', 'e', and 'u'. Features, which are inherently fuzzy, are then extracted from the hand gesture images and recognized by a classification model that determines whether the signer is who he claims to be, by examining his hand shape and the postures made in performing those signs. It is believed that everybody has slight but unique behavioral characteristics in sign language, as well as different hand shape compositions. Simple and efficient image processing algorithms are used in hand sign recognition, including intensity profiling, color histograms and dimensionality analysis, coupled with several popular machine learning algorithms. A computer simulation investigating the efficacy of this novel biometric authentication model shows up to 93.75% recognition accuracy.
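
    The histogram-based feature step can be sketched as below. The images, signer names and nearest-template matching rule are invented stand-ins for the paper's actual data and classifiers:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Concatenated per-channel histogram, normalised to unit sum; a simple
    stand-in for the paper's intensity-profiling/histogram features."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(image.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

rng = np.random.default_rng(2)
# Synthetic "hand sign" images: each enrolled signer is modelled by a
# characteristic colour range, not real sign-language data.
enrolled = {name: rng.integers(lo, hi, size=(32, 32, 3))
            for name, (lo, hi) in {"alice": (0, 128), "bob": (128, 256)}.items()}
templates = {name: color_histogram(img) for name, img in enrolled.items()}

def authenticate(probe_image):
    h = color_histogram(probe_image)
    # Nearest enrolled template by L1 distance between histograms.
    return min(templates, key=lambda n: np.abs(templates[n] - h).sum())

probe = rng.integers(0, 128, size=(32, 32, 3))  # drawn from "alice"'s range
print(authenticate(probe))  # → alice
```

    A real system would classify each sign in the sequence and check the decoded password, rather than matching a single probe image.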

  4. The importance of considering gestures in the study of current spoken Yucatec Maya.

    Directory of Open Access Journals (Sweden)

    Olivier Le Guen

    2018-02-01

    Full Text Available For centuries, linguistic description has been somehow limited because it was not possible to record audio and video. For this reason, the intrinsic multimodal nature of human language has been left out, putting aside various types of information both prosodic and visual. This work analyzes the ways in which gestures complement speech, taking into account several levels of analysis: pragmatic, semantic and syntactic; but also how some gestures can be considered linguistic signs. In order to exemplify the argumentation, I will consider the Yucatec Maya language using examples of spontaneous productions. Although certain processes presented in this work are specific to Yucatec Maya, most can be found in various languages. This paper first presents a definition of language, speech and gestures, and how one can study the way in which speech and gestures are integrated in a composite utterance. Subsequently, I analyze examples of different types of gestures in various areas of communication in Yucatec Maya, such as deictic gestures, the use of expressive gestures, metaphors and the integration of gestures at the pragmatic level. Finally, I explain how gestures can become linguistic signs in Yucatec Maya.

  5. Effects of hand gestures on auditory learning of second-language vowel length contrasts.

    Science.gov (United States)

    Hirata, Yukari; Kelly, Spencer D; Huang, Jessica; Manansala, Michael

    2014-12-01

    Research has shown that hand gestures affect comprehension and production of speech at semantic, syntactic, and pragmatic levels for both native language and second language (L2). This study investigated a relatively less explored question: Do hand gestures influence auditory learning of an L2 at the level of segmental phonology? To examine auditory learning of phonemic vowel length contrasts in Japanese, 88 native English-speaking participants took an auditory test before and after one of the following four types of training, in which they (a) observed an instructor in a video speaking Japanese words while she made a syllabic-rhythm hand gesture, (b) produced this gesture with the instructor, (c) observed the instructor speaking those words while she made a moraic-rhythm hand gesture, or (d) produced the moraic-rhythm gesture with the instructor. All of the training types yielded similar auditory improvement in identifying vowel length contrasts. However, observing the syllabic-rhythm hand gesture yielded the most balanced improvement between word-initial and word-final vowels and between slow and fast speaking rates. The overall effect of hand gesture on learning of segmental phonology is limited. Implications for theories of hand gesture are discussed in terms of the role it plays at different linguistic levels.

  6. Gesture-controlled interfaces for self-service machines and other applications

    Science.gov (United States)

    Cohen, Charles J. (Inventor); Beach, Glenn (Inventor); Cavell, Brook (Inventor); Foulk, Gene (Inventor); Jacobus, Charles J. (Inventor); Obermark, Jay (Inventor); Paul, George (Inventor)

    2004-01-01

    A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measurements are used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.
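    The recognition pipeline described in this record — fit a linear-in-parameters oscillatory model by least squares, then match the fitted parameters against a bank of predictor bins — can be sketched compactly. The snippet below is an illustrative reconstruction, not the patented implementation; the model form (x'' = a·x + b), the bin prototypes, and all function and variable names are assumptions:

```python
import numpy as np

def fit_gesture_params(x, dt):
    """Fit a linear-in-parameters model  x'' = a*x + b  to a 1-D feature trajectory."""
    acc = np.gradient(np.gradient(x, dt), dt)     # numerical second derivative
    A = np.column_stack([x, np.ones_like(x)])     # regressors [x, 1]
    params, *_ = np.linalg.lstsq(A, acc, rcond=None)
    return params                                 # (a, b)

def classify(params, bins):
    """Pick the predictor bin whose seeded parameters best match the fitted ones."""
    names = list(bins)
    dists = [np.linalg.norm(params - np.asarray(bins[n])) for n in names]
    return names[int(np.argmin(dists))]

# Synthetic oscillatory gesture: x'' = -omega^2 * x, i.e. a = -omega^2, b = 0
dt, omega = 0.01, 2.0
t = np.arange(0, 5, dt)
x = np.cos(omega * t)
p = fit_gesture_params(x, dt)
bins = {"slow_wave": (-1.0, 0.0), "fast_wave": (-4.0, 0.0)}
print(classify(p, bins))  # fast_wave (the fitted a is close to -omega^2 = -4)
```

    In a real system the trajectory would come from the feature tracker, and each bin would be seeded with parameters learned from example gestures rather than hand-written prototypes.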

  7. Better together: Simultaneous presentation of speech and gesture in math instruction supports generalization and retention.

    Science.gov (United States)

    Congdon, Eliza L; Novack, Miriam A; Brooks, Neon; Hemani-Lopez, Naureen; O'Keefe, Lucy; Goldin-Meadow, Susan

    2017-08-01

    When teachers gesture during instruction, children retain and generalize what they are taught (Goldin-Meadow, 2014). But why does gesture have such a powerful effect on learning? Previous research shows that children learn most from a math lesson when teachers present one problem-solving strategy in speech while simultaneously presenting a different, but complementary, strategy in gesture (Singer & Goldin-Meadow, 2005). One possibility is that gesture is powerful in this context because it presents information simultaneously with speech. Alternatively, gesture may be effective simply because it involves the body, in which case the timing of information presented in speech and gesture may be less important for learning. Here we find evidence for the importance of simultaneity: 3rd-grade children retain and generalize what they learn from a math lesson better when given instruction containing simultaneous speech and gesture than when given instruction containing sequential speech and gesture. Interpreting these results in the context of theories of multimodal learning, we find that gesture capitalizes on its synchrony with speech to promote learning that lasts and can be generalized.

  8. Iconic Gestures Facilitate Discourse Comprehension in Individuals With Superior Immediate Memory for Body Configurations.

    Science.gov (United States)

    Wu, Ying Choon; Coulson, Seana

    2015-11-01

    To understand a speaker's gestures, people may draw on kinesthetic working memory (KWM)-a system for temporarily remembering body movements. The present study explored whether sensitivity to gesture meaning was related to differences in KWM capacity. KWM was evaluated through sequences of novel movements that participants viewed and reproduced with their own bodies. Gesture sensitivity was assessed through a priming paradigm. Participants judged whether multimodal utterances containing congruent, incongruent, or no gestures were related to subsequent picture probes depicting the referents of those utterances. Individuals with low KWM were primarily inhibited by incongruent speech-gesture primes, whereas those with high KWM showed facilitation-that is, they were able to identify picture probes more quickly when preceded by congruent speech and gestures than by speech alone. Group differences were most apparent for discourse with weakly congruent speech and gestures. Overall, speech-gesture congruency effects were positively correlated with KWM abilities, which may help listeners match spatial properties of gestures to concepts evoked by speech. © The Author(s) 2015.

  9. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.

    Science.gov (United States)

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-11-11

    Although the GW approximation is recognized as one of the most accurate theories for predicting the excited-state properties of materials, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to GW calculations for 2D materials.

  10. Electronic and Optical Properties of CuO Based on DFT+U and GW Approximation

    International Nuclear Information System (INIS)

    Ahmad, F; Agusta, M K; Dipojono, H K

    2016-01-01

    We report ab initio calculations of the electronic structure and optical properties of monoclinic CuO based on DFT+U and the GW approximation. CuO is an antiferromagnetic material with strong electron correlations. Our calculations show that DFT+U and the GW approximation are sufficiently reliable to investigate the material properties of CuO. The band gap calculated with DFT+U for reasonable values of U is slightly underestimated. The use of the GW approximation requires adjustment of the U value to obtain a realistic result. Hybridization of Cu 3d_xz and 3d_yz with O 2p plays an important role in the formation of the band gap. The optical properties calculated with DFT+U and GW corrections by solving the Bethe-Salpeter equation are in good agreement with the calculated electronic properties and the experimental results. (paper)

  11. HTTR workshop (workshop on hydrogen production technology)

    International Nuclear Information System (INIS)

    Shiina, Yasuaki; Takizuka, Takakazu

    2004-12-01

    Various research and development efforts have been made to address the global energy and environmental problems caused by the large-scale consumption of fossil fuels. Research activities on advanced hydrogen production technology using nuclear heat from high-temperature gas-cooled reactors, for example, have flourished in universities, research institutes and companies in many countries. The Department of HTTR Project and the Department of Advanced Nuclear Heat Technology of JAERI held the HTTR Workshop (Workshop on Hydrogen Production Technology) on July 5 and 6, 2004 to survey the present status of R and D on HTGR technology and nuclear hydrogen production in the world, and to discuss the necessity of nuclear hydrogen production and the technical problems facing future development of the technology. More than 110 participants attended the Workshop, including foreign participants from USA, France, Korea, Germany, Canada and United Kingdom. In the Workshop, presentations were made on such topics as R and D programs for nuclear energy and hydrogen production technologies by thermo-chemical or other processes. The possibility of nuclear hydrogen production in the future society was also discussed. The workshop showed that R and D on hydrogen production by the thermo-chemical process has been performed in many countries, and affirmed that nuclear hydrogen production could be one of the competitive suppliers of hydrogen in the future. The second HTTR Workshop will be held in the autumn of next year. (author)

  12. Development of PPAR-agonist GW0742 as antidiabetic drug: study in animals

    Directory of Open Access Journals (Sweden)

    Niu HS

    2015-10-01

    Ho-Shan Niu, Po-Ming Ku, Chiang-Shan Niu, Juei-Tang Cheng, Kung-Shing Lee (Tzu Chi College of Technology, Hualien City; Chi-Mei Medical Center, Yong Kang, Tainan City; Chang Jung Christian University, Guiren, Tainan City; Pingtung Hospital; Kaohsiung Medical University, Kaohsiung City, Taiwan) Background: The development of new drugs for the treatment of diabetes mellitus (DM) is critically important. Insulin resistance (IR) is one of the main problems associated with type-2 DM (T2DM) seen in clinics. GW0742, a selective peroxisome proliferator-activated receptor (PPAR)-δ agonist, has been shown to ameliorate metabolic abnormalities including IR in skeletal muscle in mice fed high-fructose corn syrup. However, the influence of GW0742 on systemic insulin sensitivity has still not been elucidated. Therefore, it is important to investigate the effect of GW0742 on systemic IR in diabetic rats for the development of new drugs. Methods: The present study used a T2DM animal model to compare the effect of GW0742 on IR using homeostasis model assessment-IR (HOMA-IR) and hyperinsulinemic euglycemic clamping. Additionally, the insulinotropic action of GW0742 was investigated in type-1 DM (T1DM) rats. Changes in the protein expression of glucose transporter 4 (GLUT4) and phosphoenolpyruvate carboxykinase (PEPCK) in skeletal muscle and in liver, respectively, were also identified by Western blots. Results: GW0742 attenuated the increased HOMA-IR in diabetic rats fed a fructose-rich diet. This action was blocked by GSK0660 at a dose sufficient to inhibit PPAR-δ. Improvement of IR by GW0742 was also characterized in diabetic rats using hyperinsulinemic euglycemic clamping. Additionally, an
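    The HOMA-IR index used as the primary IR measure in this record has a standard published formula: fasting glucose times fasting insulin divided by a normalizing constant. The helper below is a generic implementation of that textbook formula (the example values are hypothetical, not taken from the study):

```python
def homa_ir(glucose_mg_dl, insulin_uu_ml):
    """Homeostasis model assessment of insulin resistance (HOMA-IR).

    Standard formula: glucose (mg/dL) * insulin (uU/mL) / 405,
    equivalent to glucose (mmol/L) * insulin (uU/mL) / 22.5.
    """
    return glucose_mg_dl * insulin_uu_ml / 405.0

# Hypothetical fasting values: glucose 180 mg/dL, insulin 18 uU/mL
print(round(homa_ir(180, 18), 2))  # 8.0
```

    Higher values indicate greater insulin resistance, which is why a drop in HOMA-IR after GW0742 treatment is read as improved insulin sensitivity.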

  13. Applied antineutrino physics workshop

    International Nuclear Information System (INIS)

    Lund, James C.

    2008-01-01

    This workshop is the fourth of a series that includes the Neutrino Geophysics Conference in Honolulu, Hawaii, which I attended in 2005. This workshop was organized by the Astro-Particle and Cosmology laboratory in the recently opened Condorcet building of the University of Paris. More information on the workshop, including copies of the presentations, is available on the website: www.apc.univ-paris7.fr/AAP2007/. The workshop aims at opening neutrino physics to various fields so that it can be applied in geosciences, the nuclear industry (reactor and spent-fuel monitoring) and non-proliferation. The workshop was attended by over 60 people from Europe, USA, Asia and Brazil. The meeting was also attended by representatives of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) organization and the International Atomic Energy Agency (IAEA). The program also included a workshop dinner on board a river boat sailing the Seine.

  14. Performance comparison of multi-detector detection statistics in targeted compact binary coalescence GW search

    OpenAIRE

    Haris, K; Pai, Archana

    2016-01-01

    A global network of advanced interferometric gravitational wave (GW) detectors is expected to be online soon. Coherent observation of a GW from a distant compact binary coalescence (CBC) with a network of interferometers located on different continents gives crucial information about the source, such as its location and polarization. In this paper we compare different multi-detector network detection statistics for CBC searches. In maximum likelihood ratio (MLR) based detection appro...

  15. Remembering what was said and done: The activation and facilitation of memory for gesture as a consequence of retrieval.

    Science.gov (United States)

    Overoye, Acacia L; Storm, Benjamin C

    2018-04-26

    The gestures that occur alongside speech provide listeners with cues that both improve and alter memory for speech. The present research investigated the interplay of gesture and speech by examining the influence of retrieval on memory for gesture. In three experiments, participants watched video clips of an actor speaking a series of statements with or without gesture before being asked to retrieve the speech portions of half of those statements. Participants were then tested on their ability to recall whether the actor had gestured during each statement and, if so, to recall the nature of the gesture that was produced. Results indicated that attempting to retrieve the speech portion of the statements enhanced participants' ability to remember the gesture portion of the statements. This result was only observed, however, for representational gestures when the speech and gesture components were meaningfully related (Experiments 1 & 2). It was not observed for beat gestures or nonsense gestures (Experiments 2 & 3). These results are consistent with the idea that gestures can be coactivated during the retrieval of speech and that such coactivation is due to the integrated representation of speech and gesture in memory. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  16. Optical observations of LIGO source GW 170817 by the Antarctic Survey Telescopes at Dome A, Antarctica

    Science.gov (United States)

    Hu, Lei; Wu, Xuefeng; Andreoni, Igor; Ashley, Michael C. B.; Cooke, Jeff; Cui, Xiangqun; Du, Fujia; Dai, Zigao; Gu, Bozhong; Hu, Yi; Lu, Haiping; Li, Xiaoyan; Li, Zhengyang; Liang, Ensi; Liu, Liangduan; Ma, Bin; Shang, Zhaohui; Sun, Tianrui; Suntzeff, N. B.; Tao, Charling; Udden, Syed A.; Wang, Lifan; Wang, Xiaofeng; Wen, Haikun; Xiao, Di; Su, Jin; Yang, Ji; Yang, Shihai; Yuan, Xiangyan; Zhou, Hongyan; Zhang, Hui; Zhou, Jilin; Zhu, Zonghong

    2017-10-01

    The LIGO detection of gravitational waves (GW) from merging black holes in 2015 marked the beginning of a new era in observational astronomy. The detection of an electromagnetic signal from a GW source is the critical next step to explore in detail the physics involved. The Antarctic Survey Telescopes (AST3), located at Dome A, Antarctica, is uniquely situated for rapid response time-domain astronomy with its continuous night-time coverage during the austral winter. We report optical observations of the GW source (GW 170817) in the nearby galaxy NGC 4993 using AST3. The data show a rapidly fading transient at around 1 day after the GW trigger, with the i-band magnitude declining from 17.23±0.13 mag to 17.72±0.09 mag in ~0.8 hours. The brightness and time evolution of the optical transient associated with GW 170817 are broadly consistent with the predictions of models involving merging binary neutron stars. We infer from our data that the merging process ejected about ~10^{-2} solar masses of radioactive material at a speed of up to 30% of the speed of light.
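    The reported decline (17.23 to 17.72 mag in roughly 0.8 hours) fixes the fading rate and the corresponding drop in flux, since a magnitude change dm corresponds to a flux ratio of 10^(-0.4*dm). A quick check, using only the numbers quoted above:

```python
def fade_rate(m1, m2, hours):
    """Fading rate in magnitudes per hour (positive = getting fainter)."""
    return (m2 - m1) / hours

def flux_ratio(dm):
    """Fraction of the initial flux remaining after dimming by dm magnitudes."""
    return 10 ** (-0.4 * dm)

rate = fade_rate(17.23, 17.72, 0.8)   # ~0.61 mag/hour
frac = flux_ratio(17.72 - 17.23)      # ~64% of the initial flux left after 0.8 h
```

    A fade of more than half a magnitude per hour is far faster than a supernova and is part of why the transient was identified as a kilonova candidate.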

  17. TaGW2, a Good Reflection of Wheat Polyploidization and Evolution.

    Science.gov (United States)

    Qin, Lin; Zhao, Junjie; Li, Tian; Hou, Jian; Zhang, Xueyong; Hao, Chenyang

    2017-01-01

    Hexaploid wheat consists of three subgenomes, namely, A, B, and D. These well-characterized ancestral genomes also exist at the diploid and tetraploid levels, thereby rendering wheat a good model species for studying polyploidization. Here, we performed intra- and inter-species comparative analyses of wheat and its relatives to dissect polymorphism and differentiation of the TaGW2 genes. Our results showed that genetic diversity of TaGW2 decreased with progression from the diploids to tetraploids and hexaploids. The strongest selection occurred in the promoter regions of TaGW2-6A and TaGW2-6B. Phylogenetic trees clearly indicated that Triticum urartu and Aegilops speltoides were the donors of the A and B genomes in tetraploid and hexaploid wheats. Haplotypes detected among hexaploid genotypes traced back to the tetraploid level. Fst and π values revealed that the strongest selection on TaGW2 occurred at the tetraploid level rather than in hexaploid wheat. This implies that grain size enlargement, especially increased kernel width, mainly occurred in tetraploid genotypes. In addition, relative expression levels of TaGW2s significantly declined from the diploid level to tetraploids and hexaploids, further indicating that these genes negatively regulate kernel size. Our results also revealed that the polyploidization events possibly caused much stronger differentiation than domestication and breeding.
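    The π statistic used above is nucleotide diversity: the mean number of pairwise differences per site over all pairs of aligned sequences in the sample. The toy implementation below illustrates the definition (the 10-bp haplotypes are hypothetical, not TaGW2 data):

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    """Nucleotide diversity (pi): average pairwise differences per site
    across all pairs of aligned sequences."""
    length = len(seqs[0])
    assert all(len(s) == length for s in seqs), "sequences must be aligned"
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s, t)) for s, t in pairs)
    return diffs / (len(pairs) * length)

# Three hypothetical aligned 10-bp haplotypes (4 pairwise differences in total)
hap = ["ACGTACGTAC", "ACGTACGTAA", "ACGGACGTAC"]
print(round(nucleotide_diversity(hap), 4))  # 0.1333
```

    A drop in π between the diploid and polyploid samples, as reported here, signals a loss of variation consistent with selection or a bottleneck during polyploidization.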

  18. The Sternheimer-GW method and the spectral signatures of plasmonic polarons

    Science.gov (United States)

    Giustino, Feliciano

    During the past three decades the GW method has emerged as one of the most promising electronic structure techniques for predictive calculations of quasiparticle band structures. In order to simplify the GW workflow while at the same time improving the calculation accuracy, we developed the Sternheimer-GW method. In Sternheimer-GW, both the screened Coulomb interaction and the electron Green's function are evaluated by using exclusively occupied Kohn-Sham states, as in density-functional perturbation theory. In this talk I will review the basics of Sternheimer-GW, and I will discuss two recent applications to semiconductors and superconductors. In the case of semiconductors we calculated complete energy- and momentum-resolved spectral functions by combining Sternheimer-GW with the cumulant expansion approach. This study revealed the existence of band structure replicas which arise from electron-plasmon interactions. In the case of superconductors we calculated the Coulomb pseudo-potential from first principles, and combined this approach with the Eliashberg theory of the superconducting critical temperature. This work was supported by the Leverhulme Trust (RL-2012-001), the European Research Council (EU FP7/ERC 239578), the UK Engineering and Physical Sciences Research Council (EP/J009857/1), and the Graphene Flagship (EU FP7/604391).

  19. Astrophysical Implications of the Binary Black-hole Merger GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Belczynski, C.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; DeRosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. 
C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. 
B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. 
A.; Merilh, E.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. 
S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. 
I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; van den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J. L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; and; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-02-01

    The discovery of the gravitational-wave (GW) source GW150914 with the Advanced LIGO detectors provides the first observational evidence for the existence of binary black hole (BH) systems that inspiral and merge within the age of the universe. Such BH mergers have been predicted in two main types of formation models, involving isolated binaries in galactic fields or dynamical interactions in young and old dense stellar environments. The measured masses robustly demonstrate that relatively “heavy” BHs (≳25 M⊙) can form in nature. This discovery implies relatively weak massive-star winds and thus the formation of GW150914 in an environment with a metallicity lower than about 1/2 of the solar value. The rate of binary-BH (BBH) mergers inferred from the observation of GW150914 is consistent with the higher end of rate predictions (≳1 Gpc⁻³ yr⁻¹) from both types of formation models. The low measured redshift (z ≃ 0.1) of GW150914 and the low inferred metallicity of the stellar progenitor imply either BBH formation in a low-mass galaxy in the local universe and a prompt merger, or formation at high redshift with a time delay between formation and merger of several Gyr. This discovery motivates further studies of binary-BH formation astrophysics. It also has implications for future detections and studies by Advanced LIGO and Advanced Virgo, and GW detectors in space.

  20. A Magnetar Origin for the Kilonova Ejecta in GW170817

    Science.gov (United States)

    Metzger, Brian D.; Thompson, Todd A.; Quataert, Eliot

    2018-04-01

    The neutron star (NS) merger GW170817 was followed over several days by optical-wavelength (“blue”) kilonova (KN) emission likely powered by the radioactive decay of light r-process nuclei synthesized by ejecta with a low neutron abundance (electron fraction Y_e ≈ 0.25–0.35). While the composition and high velocities of the blue KN ejecta are consistent with shock-heated dynamical material, the large quantity is in tension with the results of numerical simulations. We propose an alternative ejecta source: the neutrino-heated, magnetically accelerated wind from the strongly magnetized hypermassive NS (HMNS) remnant. A rapidly spinning HMNS with an ordered surface magnetic field of strength B ≈ (1–3) × 10¹⁴ G and lifetime t_rem ∼ 0.1–1 s can simultaneously explain the velocity, total mass, and electron fraction of the blue KN ejecta. The inferred HMNS lifetime is close to its Alfvén crossing time, suggesting that global magnetic torques could be responsible for bringing the HMNS into solid-body rotation and instigating its gravitational collapse. Different origins for the KN ejecta may be distinguished by their predictions for the emission in the first hours after the merger, when the luminosity is enhanced by heating from internal shocks; the latter are likely generic to any temporally extended ejecta source (e.g., magnetar or accretion disk wind) and are not unique to the emergence of a relativistic jet. The same shocks could mix and homogenize the composition to a low but nonzero lanthanide mass fraction, X_La ≈ 10⁻³, as advocated by some authors, but only if the mixing occurs after neutrons are consumed in the r-process on a timescale ≳1 s.

  1. The benefit of gestures during communication: evidence from hearing and hearing-impaired individuals.

    Science.gov (United States)

    Obermeier, Christian; Dolk, Thomas; Gunter, Thomas C

    2012-07-01

    There is no doubt that gestures are communicative and can be integrated online with speech. Little is known, however, about the nature of this process, for example, its automaticity and how our own communicative abilities and also our environment influence the integration of gesture and speech. In two event-related potential (ERP) experiments, the effects of gestures during speech comprehension were explored. In both experiments, participants performed a shallow task, thereby avoiding explicit gesture-speech integration. In the first experiment, participants with normal hearing viewed videos in which a gesturing actress uttered sentences that were either embedded in multi-speaker babble noise or not. The sentences contained a homonym that was disambiguated by the information in a gesture, which was presented asynchronously to speech (1000 msec earlier). Downstream, the sentence contained a target word that was either related to the dominant or subordinate meaning of the homonym and was used to indicate the success of the disambiguation. Both the homonym and the target word position showed clear ERP evidence of gesture-speech integration and disambiguation only under babble noise. Thus, during noise, gestures were taken into account as an important communicative cue. In Experiment 2, the same asynchronous stimuli were presented to a group of hearing-impaired students and age-matched controls. Only the hearing-impaired individuals showed significant speech-gesture integration and successful disambiguation at the target word. The age-matched controls did not show any effect. Thus, individuals who chronically experience suboptimal communicative situations in daily life automatically take gestures into account. The data from both experiments indicate that gestures are beneficial in countering difficult communication conditions independent of whether the difficulties are due to external (babble noise) or internal (hearing impairment) factors.

  2. GW-Bodies and P-Bodies Constitute Two Separate Pools of Sequestered Non-Translating RNAs.

    Directory of Open Access Journals (Sweden)

    Prajal H Patel

    Non-translating RNAs that have undergone active translational repression are culled from the cytoplasm into P-bodies for decapping-dependent decay or for sequestration. Organisms that use microRNA-mediated RNA silencing have an additional pathway to remove RNAs from active translation. Consequently, proteins that govern microRNA-mediated silencing, such as GW182/Gw and AGO1, are often associated with the P-bodies of higher eukaryotic organisms. Due to the presence of Gw, these structures have been referred to as GW-bodies. However, several reports have indicated that GW-bodies have different dynamics from P-bodies. Here, we use live imaging to examine GW-body and P-body dynamics in the early Drosophila melanogaster embryo. While P-bodies are present throughout early embryonic development, cytoplasmic GW-bodies only form in significant numbers at the midblastula transition. Unlike P-bodies, which are predominantly cytoplasmic, GW-bodies are present in both nuclei and the cytoplasm. RNA decapping factors such as DCP1, Me31B, and Hpat are not associated with GW-bodies, indicating that P-bodies and GW-bodies are distinct structures. Furthermore, known Gw interactors such as AGO1 and the CCR4-NOT deadenylation complex, which have been shown to be important for Gw function, are also not present in GW-bodies. Use of the translational inhibitors puromycin and cycloheximide, which respectively increase or decrease cellular pools of non-translating RNAs, alters GW-body size, underscoring that GW-bodies are composed of non-translating RNAs. Taken together, these data indicate that active translational silencing most likely does not occur in GW-bodies. Instead, GW-bodies most likely function as repositories for translationally silenced RNAs. Finally, inhibition of zygotic gene transcription is unable to block the formation of either P-bodies or GW-bodies in the early embryo, suggesting that these structures are composed of maternal RNAs.

  3. Individual Differences in Frequency and Saliency of Speech-Accompanying Gestures: The Role of Cognitive Abilities and Empathy

    NARCIS (Netherlands)

    Chu, M.; Meyer, A.S.; Foulkes, L.; Kita, S.

    2014-01-01

    The present study concerns individual differences in gesture production. We used correlational and multiple regression analyses to examine the relationship between individuals’ cognitive abilities and empathy levels and their gesture frequency and saliency. We chose predictor variables according to

  4. Systems Engineering Workshops | Wind | NREL

    Science.gov (United States)

    The Wind Energy Systems Engineering Workshop is a biennial event covering topics relevant to systems engineering and the wind industry. Presentations and agendas are available for all of the Systems Engineering Workshops, beginning with the 1st NREL Wind Energy Systems Engineering Workshop.

  5. Gesture, Meaning-Making, and Embodiment: Second Language Learning in an Elementary Classroom

    Science.gov (United States)

    Rosborough, Alessandro

    2014-01-01

    The purpose of the present study was to investigate the mediational role of gesture and body movement/positioning between a teacher and an English language learner in a second-grade classroom. Responding to Thibault's (2011) call for understanding language through whole-body sense making, aspects of gesture and body positioning were analyzed for…

  6. Cross-Cultural Transfer in Gesture Frequency in Chinese-English Bilinguals

    Science.gov (United States)

    So, Wing Chee

    2010-01-01

    The purpose of this paper is to examine cross-cultural differences in gesture frequency and the extent to which exposure to two cultures would affect the gesture frequency of bilinguals when speaking in both languages. The Chinese-speaking monolinguals from China, English-speaking monolinguals from America, and Chinese-English bilinguals from…

  7. The Role of Gesture in Supporting Mental Representations: The Case of Mental Abacus Arithmetic

    Science.gov (United States)

    Brooks, Neon B.; Barner, David; Frank, Michael; Goldin-Meadow, Susan

    2018-01-01

    People frequently gesture when problem-solving, particularly on tasks that require spatial transformation. Gesture often facilitates task performance by interacting with internal mental representations, but how this process works is not well understood. We investigated this question by exploring the case of mental abacus (MA), a technique in which…

  8. Effects of gestures on older adults' learning from video-based models

    NARCIS (Netherlands)

    Ouwehand, Kim; van Gog, Tamara; Paas, Fred

    2015-01-01

    This study investigated whether the positive effects of gestures on learning by decreasing working memory load, found in children and young adults, also apply to older adults, who might especially benefit from gestures given memory deficits associated with aging. Participants learned a

  9. A Comparison of Intention and Pantomime Gesture Treatment for Noun Retrieval in People with Aphasia

    Science.gov (United States)

    Ferguson, Neina F.; Evans, Kelli; Raymer, Anastasia M.

    2012-01-01

    Purpose: The effects of intention gesture treatment (IGT) and pantomime gesture treatment (PGT) on word retrieval were compared in people with aphasia. Method: Four individuals with aphasia and word retrieval impairments subsequent to left-hemisphere stroke participated in a single-participant crossover treatment design. Each participant viewed…

  10. Coverbal Gestures in the Recovery from Severe Fluent Aphasia: A Pilot Study

    Science.gov (United States)

    Carlomagno, Sergio; Zulian, Nicola; Razzano, Carmelina; De Mercurio, Ilaria; Marini, Andrea

    2013-01-01

    This post hoc study investigated coverbal gesture patterns in two persons with chronic Wernicke's aphasia. They had both received therapy focusing on multimodal communication therapy, and their pre- and post-therapy verbal and gestural skills in face-to-face conversational interaction with their speech therapist were analysed by administering a…

  11. Gesture and Speech Integration: An Exploratory Study of a Man with Aphasia

    Science.gov (United States)

    Cocks, Naomi; Sautin, Laetitia; Kita, Sotaro; Morgan, Gary; Zlotowitz, Sally

    2009-01-01

    Background: In order to comprehend fully a speaker's intention in everyday communication, information is integrated from multiple sources, including gesture and speech. There are no published studies that have explored the impact of aphasia on iconic co-speech gesture and speech integration. Aims: To explore the impact of aphasia on co-speech…

  12. Exploring the Relationship between Gestural Recognition and Imitation: Evidence of Dyspraxia in Autism Spectrum Disorders

    Science.gov (United States)

    Ham, Heidi Stieglitz; Bartolo, Angela; Corley, Martin; Rajendran, Gnanathusharan; Szabo, Aniko; Swanson, Sara

    2011-01-01

    In this study, the relationship between gesture recognition and imitation was explored. Nineteen individuals with Autism Spectrum Disorder (ASD) were compared to a control group of 23 typically developing children on their ability to imitate and recognize three gesture types (transitive, intransitive, and pantomimes). The ASD group performed more…

  13. Signers and co-speech gesturers adopt similar strategies for portraying viewpoint in narratives.

    Science.gov (United States)

    Quinto-Pozos, David; Parrill, Fey

    2015-01-01

    Gestural viewpoint research suggests that several dimensions determine which perspective a narrator takes, including properties of the event described. Events can evoke gestures from the point of view of a character (CVPT), an observer (OVPT), or both perspectives. CVPT and OVPT gestures have been compared to constructed action (CA) and classifiers (CL) in signed languages. We ask how CA and CL, as represented in ASL productions, compare to previous results for CVPT and OVPT from English-speaking co-speech gesturers. Ten ASL signers described cartoon stimuli from Parrill (2010). Events shown by Parrill to elicit a particular gestural strategy (CVPT, OVPT, both) were coded for signers' instances of CA and CL. CA was divided into three categories: CA-torso, CA-affect, and CA-handling. Signers used CA-handling the most when gesturers used CVPT exclusively. Additionally, signers used CL the most when gesturers used OVPT exclusively and CL the least when gesturers used CVPT exclusively. Copyright © 2014 Cognitive Science Society, Inc.

  14. Using Robot Animation to Promote Gestural Skills in Children with Autism Spectrum Disorders

    Science.gov (United States)

    So, W.-C.; Wong, M. K.-Y.; Cabibihan, J.-J.; Lam, C. K.-Y.; Chan, R. Y.-Y.; Qian, H.-H.

    2016-01-01

    School-aged children with autism spectrum disorders (ASDs) have delayed gestural development, in comparison with age-matched typically developing children. In this study, an intervention program taught children with low-functioning ASD gestural comprehension and production using video modelling (VM) by a computer-generated robot animation. Six to…

  15. Maternal Gesture Use and Language Development in Infant Siblings of Children with Autism Spectrum Disorder

    Science.gov (United States)

    Talbott, Meagan R.; Nelson, Charles A.; Tager-Flusberg, Helen

    2015-01-01

    Impairments in language and communication are an early-appearing feature of autism spectrum disorders (ASD), with delays in language and gesture evident as early as the first year of life. Research with typically developing populations highlights the importance of both infant and maternal gesture use in infants' early language development.…

  16. RisQ: Recognizing Smoking Gestures with Inertial Sensors on a Wristband

    Science.gov (United States)

    Parate, Abhinav; Chiu, Meng-Chieh; Chadowitz, Chaniel; Ganesan, Deepak; Kalogerakis, Evangelos

    2015-01-01

    Smoking-induced diseases are known to be the leading cause of death in the United States. In this work, we design RisQ, a mobile solution that leverages a wristband containing a 9-axis inertial measurement unit to capture changes in the orientation of a person's arm, and a machine learning pipeline that processes this data to accurately detect smoking gestures and sessions in real-time. Our key innovations are fourfold: a) an arm trajectory-based method that extracts candidate hand-to-mouth gestures, b) a set of trajectory-based features to distinguish smoking gestures from confounding gestures including eating and drinking, c) a probabilistic model that analyzes sequences of hand-to-mouth gestures and infers which gestures are part of individual smoking sessions, and d) a method that leverages multiple IMUs placed on a person's body together with 3D animation of a person's arm to reduce burden of self-reports for labeled data collection. Our experiments show that our gesture recognition algorithm can detect smoking gestures with high accuracy (95.7%), precision (91%) and recall (81%). We also report a user study that demonstrates that we can accurately detect the number of smoking sessions with very few false positives over the period of a day, and that we can reliably extract the beginning and end of smoking session periods. PMID:26688835
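The candidate-extraction step that RisQ describes can be illustrated with a small sketch. The function name, thresholds, and sampling rate below are hypothetical choices for illustration only; RisQ itself derives full 3D arm trajectories from the quaternion output of a 9-axis IMU and applies a probabilistic sequence model, which this numpy-only fragment does not attempt to reproduce. It merely flags windows where a wrist pitch angle stays raised, a rough proxy for a hand-to-mouth movement:

```python
import numpy as np

def candidate_gestures(pitch_deg, fs=20, up_thresh=60.0, min_len=1.0):
    """Extract candidate hand-to-mouth segments from a wrist pitch-angle
    trace (in degrees), sampled at fs Hz.

    Illustrative sketch only: a segment qualifies as a candidate when the
    wrist stays raised above `up_thresh` degrees for at least `min_len`
    seconds. Returns a list of (start_index, end_index) pairs.
    """
    raised = pitch_deg > up_thresh
    segments, start = [], None
    for i, flag in enumerate(raised):
        if flag and start is None:
            start = i                       # raised run begins
        elif not flag and start is not None:
            if (i - start) / fs >= min_len:  # keep only long-enough runs
                segments.append((start, i))
            start = None
    if start is not None and (len(raised) - start) / fs >= min_len:
        segments.append((start, len(raised)))  # run extends to end of trace
    return segments
```

A real pipeline would then compute trajectory-based features over each candidate segment and pass them to a trained classifier to separate smoking from confounding gestures such as eating and drinking.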

  17. A neuropsychological approach to the study of gesture and pantomime in aphasia

    Directory of Open Access Journals (Sweden)

    Jocelyn Kadish

    1978-11-01

    The impairment of gesture and pantomime in aphasia was examined from a neuropsychological perspective. The Boston Diagnostic Test of Aphasia, Luria's Neuropsychological Investigation, Pickett's tests for gesture and pantomime, and the Performance Scale of the Wechsler Adult Intelligence Scale were administered to six aphasic subjects of varying etiology and severity. Results indicated that severity of aphasia was positively related to severity of gestural disturbance; gestural ability was associated with verbal and non-linguistic aspects of ability, within receptive and expressive levels respectively; performance on gestural tasks was superior to that on verbal tasks irrespective of severity of aphasia; damage to Luria's second and third functional brain units was positively related to deficits in receptive and expressive gesture respectively; and no relationship was found between severity of general intellectual impairment and gestural deficit. It was concluded that the gestural impairment may best be understood as a breakdown in complex sequential manual motor activity. Theoretical and therapeutic implications were discussed.

  18. The Role of Gestures in a Teacher-Student-Discourse about Atoms

    Science.gov (United States)

    Abels, Simone

    2016-01-01

    Recent educational research emphasises the importance of analysing talk and gestures to come to an understanding about students' conceptual learning. Gestures are perceived as complex hand movements being equivalent to other language modes. They can convey experienceable as well as abstract concepts. As well as technical language, gestures…

  19. Prototyping with your hands: the many roles of gesture in the communication of design concepts

    DEFF Research Database (Denmark)

    Cash, Philip; Maier, Anja

    2016-01-01

    There is an on-going focus exploring the use of gesture in design situations; however, there are still significant questions as to how this is related to the understanding and communication of design concepts. This work explores the use of gesture through observing and video-coding four teams of ...

  20. Balancing Direction and Independence in Second Language Vocabulary Learning: A Gesture Pilot Study

    Science.gov (United States)

    Mathison, Lake

    2017-01-01

    This pilot study looks at the effect of learning second language vocabulary with gesture. Specifically, this current study asks whether researcher-instructed or student-constructed gestures are more effective. Depth of processing theories (Craik and Lockhart 1972) as well as more recent educational frameworks like ICAP ("Interactive,"…

  1. A Functional Analysis of Gestural Behaviors Emitted by Young Children with Severe Developmental Disabilities

    Science.gov (United States)

    Ferreri, Summer J.; Plavnick, Joshua B.

    2011-01-01

    Many children with severe developmental disabilities emit idiosyncratic gestures that may function as verbal operants (Sigafoos et al., 2000). This study examined the effectiveness of a functional analysis methodology to identify the variables responsible for gestures emitted by 2 young children with severe developmental disabilities. Potential…

  2. Hospitable Gestures in the University Lecture: Analysing Derrida's Pedagogy

    Science.gov (United States)

    Ruitenberg, Claudia

    2014-01-01

    Based on archival research, this article analyses the pedagogical gestures in Derrida's (largely unpublished) lectures on hospitality (1995/96), with particular attention to the enactment of hospitality in these gestures. The motivation for this analysis is twofold. First, since the large-group university lecture has been widely critiqued as…

  3. Gesture as Input in Language Acquisition: Learning "Who She Is" from "Where She Is"

    Science.gov (United States)

    Goodrich, Whitney Sarah-Iverson

    2009-01-01

    This dissertation explores the role co-speech gesture plays as input in language learning, specifically with respect to the acquisition of anaphoric pronouns. Four studies investigate how both adults and children interpret ambiguous pronouns, and how the order-of-mention tendency develops in children. The results suggest that gesture is a useful…

  4. Long-Term Effects of Gestures on Memory for Foreign Language Words Trained in the Classroom

    Science.gov (United States)

    Macedonia, Manuela; Klimesch, Wolfgang

    2014-01-01

    Language and gesture are viewed as highly interdependent systems. Besides supporting communication, gestures also have an impact on memory for verbal information compared to pure verbal encoding in native but also in foreign language learning. This article presents a within-subject longitudinal study lasting 14 months that tested the use of…

  5. Bridging Gaps in Common Ground: Speakers Design Their Gestures for Their Listeners

    Science.gov (United States)

    Hilliard, Caitlin; Cook, Susan Wagner

    2016-01-01

    Communication is shaped both by what we are trying to say and by whom we are saying it to. We examined whether and how shared information influences the gestures speakers produce along with their speech. Unlike prior work examining effects of common ground on speech and gesture, we examined a situation in which some speakers have the same amount…

  6. Effects of observing and producing deictic gestures on memory and learning in different age groups

    NARCIS (Netherlands)

    K.H.R. Ouwehand (Kim)

    2016-01-01

    The studies presented in this dissertation aimed to investigate whether observing or producing deictic gestures (i.e., pointing and tracing gestures to index a referent in space or a movement pathway) could facilitate memory and learning in children, young adults, and older adults.

  7. Gesture as a Resource for Intersubjectivity in Second-Language Learning Situations

    Science.gov (United States)

    Belhiah, Hassan

    2013-01-01

    This study documents the role of hand gestures in achieving mutual understanding in second-language learning situations. The study tracks the way gesture is coordinated with talk in tutorials between two Korean students and their American teachers. The study adopts an interactional approach to the study of participants' talk and gestural…

  8. Technological evaluation of gesture and speech interfaces for enabling dismounted soldier-robot dialogue

    Science.gov (United States)

    Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan

    2016-05-01

    With increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of experimental results demonstrated the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue based on the high classification accuracy and minimal training required to perform gesture commands.
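The simultaneous-delivery condition in this study amounts to fusing two noisy recognizers. The sketch below is a hypothetical late-fusion rule (the function name, command labels, and confidence scheme are invented for illustration, not taken from the paper) for combining a gesture hypothesis and a speech hypothesis into a single robot command:

```python
def fuse_commands(gesture, speech):
    """Fuse two (label, confidence) hypotheses into one command.

    Hypothetical late-fusion sketch: if both recognizers agree, boost the
    confidence assuming independent errors; otherwise take the more
    confident hypothesis. A deployed system would calibrate recognizer
    scores and reject low-confidence fusions back to the operator.
    """
    g_label, g_conf = gesture
    s_label, s_conf = speech
    if g_label == s_label:
        # agreement: probability that at least one recognizer is correct
        return g_label, 1.0 - (1.0 - g_conf) * (1.0 - s_conf)
    return (g_label, g_conf) if g_conf >= s_conf else (s_label, s_conf)
```

For example, agreeing "move forward" hypotheses at 0.8 and 0.9 confidence fuse to a single command with higher confidence than either input alone.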

  9. Differences in the Ability of Apes and Children to Instruct Others Using Gestures

    Science.gov (United States)

    Grosse, Katja; Call, Josep; Carpenter, Malinda; Tomasello, Michael

    2015-01-01

    In all human cultures, people gesture iconically. However, the evolutionary basis of iconic gestures is unknown. In this study, chimpanzees and bonobos, and 2- and 3-year-old children, learned how to operate two apparatuses to get rewards. Then, at test, only a human adult had access to the apparatuses, and participants could instruct her about…

  10. A gesture-controlled projection display for CT-guided interventions.

    Science.gov (United States)

    Mewes, A; Saalfeld, P; Riabikin, O; Skalej, M; Hansen, C

    2016-01-01

    The interaction with interventional imaging systems within a sterile environment is a challenging task for physicians. Direct physician-machine interaction during an intervention is rather limited because of sterility and workspace restrictions. We present a gesture-controlled projection display that enables a direct and natural physician-machine interaction during computed tomography (CT)-based interventions. Therefore, a graphical user interface is projected on a radiation shield located in front of the physician. Hand gestures in front of this display are captured and classified using a leap motion controller. We propose a gesture set to control basic functions of intervention software such as gestures for 2D image exploration, 3D object manipulation and selection. Our methods were evaluated in a clinically oriented user study with 12 participants. The results of the performed user study confirm that the display and the underlying interaction concept are accepted by clinical users. The recognition of the gestures is robust, although there is potential for improvements. The gesture training times are less than 10 min, but vary heavily between the participants of the study. The developed gestures are connected logically to the intervention software and intuitive to use. The proposed gesture-controlled projection display counters current thinking, namely it gives the radiologist complete control of the intervention software. It opens new possibilities for direct physician-machine interaction during CT-based interventions and is well suited to become an integral part of future interventional suites.

  11. Integrating gesture recognition in airplane seats for in-flight entertainment

    NARCIS (Netherlands)

    van de Westelaken, H.F.M.; Hu, J.; Liu, H.; Rauterberg, G.W.M.; Pan, Z.; Zhang, X.; El Rhalibi, A.; Woo, W.; Li, Y.

    2008-01-01

    In order to reduce both the psychological and physical stress in air travel, sensors are integrated in airplane seats to detect the gestures as input for in-flight entertainment systems. The content provided by the entertainment systems helps to reduce the psychological stress, and the gesture

  12. Embedding gesture recognition into airplane seats for in-flight entertainment

    NARCIS (Netherlands)

    van de Westelaken, H.F.M.; Hu, J.; Liu, H.; Rauterberg, G.W.M.

    2011-01-01

    In order to reduce both psychological and physical stress in air travel, sensors are integrated into airplane seats to detect gestures as input for in-flight entertainment systems. The content provided by the entertainment systems helps to reduce psychological stress, and gesture recognition is used

  13. Symbiotic Gesture and the Sociocognitive Visibility of Grammar in Second Language Acquisition

    Science.gov (United States)

    Churchill, Eton; Okada, Hanako; Nishino, Takako; Atkinson, Dwight

    2010-01-01

    This article argues for the embodied and environmentally embedded nature of second language acquisition (SLA). Through fine-grained analysis of interaction using Goodwin's (2003a) concept of "symbiotic gesture"--gesture coupled with its rich environmental context to produce complex social action--we illustrate how a tutor, learner, and grammar…

  14. Research on gesture recognition of augmented reality maintenance guiding system based on improved SVM

    Science.gov (United States)

    Zhao, Shouwei; Zhang, Yong; Zhou, Bin; Ma, Dongxi

    2014-09-01

    Interaction is one of the key techniques of augmented reality (AR) maintenance guiding systems. Because of the complexity of the maintenance guiding system's image background and the high dimensionality of gesture characteristics, the gesture recognition process is divided into three stages: gesture segmentation, gesture feature modeling, and gesture recognition. In the segmentation stage, to counter the misrecognition of skin-like regions, a segmentation algorithm combining a background model with skin color is adopted to exclude such regions. In the feature modeling stage, a rich set of characteristic features is analyzed and extracted, including structural characteristics, Hu invariant moments, and Fourier descriptors. In the recognition stage, a classifier based on the Support Vector Machine (SVM) is introduced into the AR maintenance guiding process. The SVM is a learning method grounded in statistical learning theory, with a solid theoretical foundation and strong generalization ability; it is widely used in machine learning and offers particular advantages for small samples and non-linear, high-dimensional pattern recognition. Gesture recognition for the AR maintenance guiding system is realized by the SVM after granulation of the characteristic features. Experimental results from a simulation of number-gesture recognition and its application in an AR maintenance guiding system show that the improved SVM greatly enhances the real-time performance and robustness of gesture recognition.
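The Hu-invariant-moment features mentioned in this abstract can be sketched briefly. The snippet below is a hypothetical numpy-only illustration computing just the first Hu invariant (η₂₀ + η₀₂) of a binary hand mask; the paper's actual pipeline, like OpenCV's cv2.HuMoments, would compute all seven invariants from the image moments:

```python
import numpy as np

def hu_first_invariant(mask):
    """First Hu invariant moment (eta20 + eta02) of a binary mask.

    Exactly invariant to translation and approximately invariant to scale
    for discretized shapes, which is what makes such moments useful as
    gesture-shape features.
    """
    ys, xs = np.nonzero(mask)          # coordinates of foreground pixels
    m00 = float(len(xs))               # zeroth raw moment: pixel count
    xbar, ybar = xs.mean(), ys.mean()  # centroid
    mu20 = ((xs - xbar) ** 2).sum()    # second-order central moments
    mu02 = ((ys - ybar) ** 2).sum()
    # normalized central moments: eta_pq = mu_pq / m00**(1 + (p+q)/2)
    return (mu20 + mu02) / m00 ** 2
```

In a full pipeline, each segmented hand region would contribute all seven Hu invariants, concatenated with Fourier descriptors, as the feature vector fed to the SVM classifier.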

  15. Crossover learning of gestures in two ideomotor apraxia patients: A single case experimental design study.

    Science.gov (United States)

    Shimizu, Daisuke; Tanemura, Rumi

    2017-06-01

    Crossover learning may aid rehabilitation in patients with neurological disorders. Ideomotor apraxia (IMA) is a common sequela of left-brain damage that comprises a deficit in the ability to perform gestures to verbal commands or by imitation. This study elucidated whether crossover learning occurred in two post-stroke IMA patients without motor paralysis after gesture training approximately 2 months after stroke onset. We quantitatively analysed the therapeutic intervention history and investigated whether revised actions occurred during gesture production. The treatment intervention examined how to promote improvement and generalisation of gesture-production ability. This study used an alternating-treatments single-subject design, and the intervention method was errorless learning. Results indicated crossover learning in both patients. Qualitative analysis indicated that revised action occurred during the gesture-production process in one patient and that there were two types of post-revision gestures: correct and incorrect. We also found that even when a comparatively short time had elapsed since stroke onset, generalisation was difficult. Information transfer between the left and right hemispheres of the brain via commissural fibres is important in crossover learning. In conclusion, improvements in gesture-production skill should be made with reference to the left cerebral hemisphere disconnection hypothesis.

  16. 77 FR 31371 - Public Workshop: Privacy Compliance Workshop

    Science.gov (United States)

    2012-05-25

    The Department of Homeland Security Privacy Office will host a public workshop, "Privacy Compliance Workshop," featuring presentations on privacy compliance fundamentals and privacy and data security.

  17. The Organization of Words and Symbolic Gestures in 18-Month-Olds' Lexicons: Evidence from a Disambiguation Task

    Science.gov (United States)

    Suanda, Sumarga H.; Namy, Laura L.

    2013-01-01

    Infants' early communicative repertoires include both words and symbolic gestures. The current study examined the extent to which infants organize words and gestures in a single unified lexicon. As a window into lexical organization, eighteen-month-olds' ("N" = 32) avoidance of word-gesture overlap was examined and compared with…

  18. Age-Related Changes in Preschoolers' Ability to Communicate Using Iconic Gestures in the Absence of Speech

    Science.gov (United States)

    Vasc, Dermina; Miclea, Mircea

    2018-01-01

    Iconic gestures illustrate complex meanings and clarify and enrich the speech they accompany. Little is known, however, about how children use iconic gestures in the absence of speech. In this study, we used a cross-sectional design to investigate how 3-, 4- and 5-year-old children (N = 51) communicate using pantomime iconic gestures. Children…

  19. Hearing and seeing meaning in noise. Alpha, beta and gamma oscillations predict gestural enhancement of degraded speech comprehension

    NARCIS (Netherlands)

    Drijvers, L.; Özyürek, A.; Jensen, O.

    2018-01-01

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal

  20. Observation of Depictive Versus Tracing Gestures Selectively Aids Verbal Versus Visual-Spatial Learning in Primary School Children

    NARCIS (Netherlands)

    van Wermeskerken, Margot; Fijan, Nathalie; Eielts, Charly; Pouw, Wim T. J. L.

    2016-01-01

    Previous research has established that gesture observation aids learning in children. The current study examined whether observation of gestures (i.e. depictive and tracing gestures) differentially affected verbal and visual-spatial retention when learning a route and its street names. Specifically,

  1. Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity

    NARCIS (Netherlands)

    W.T.J.L. Pouw (Wim); M.-F. Mavilidi (Myrto-Foteini); T.A.J.M. van Gog (Tamara); G.W.C. Paas (Fred)

    2016-01-01

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One

  2. Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity

    NARCIS (Netherlands)

    Pouw, Wim T J L; Mavilidi, Myrto Foteini; van Gog, Tamara; Paas, Fred

    2016-01-01

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that

  3. A word in the hand: action, gesture and mental representation in humans and non-human primates

    Science.gov (United States)

    Cartmill, Erica A.; Beilock, Sian; Goldin-Meadow, Susan

    2012-01-01

    The movements we make with our hands both reflect our mental processes and help to shape them. Our actions and gestures can affect our mental representations of actions and objects. In this paper, we explore the relationship between action, gesture and thought in both humans and non-human primates and discuss its role in the evolution of language. Human gesture (specifically representational gesture) may provide a unique link between action and mental representation. It is kinaesthetically close to action and is, at the same time, symbolic. Non-human primates use gesture frequently to communicate, and do so flexibly. However, their gestures mainly resemble incomplete actions and lack the representational elements that characterize much of human gesture. Differences in the mirror neuron system provide a potential explanation for non-human primates' lack of representational gestures; the monkey mirror system does not respond to representational gestures, while the human system does. In humans, gesture grounds mental representation in action, but there is no evidence for this link in other primates. We argue that gesture played an important role in the transition to symbolic thought and language in human evolution, following a cognitive leap that allowed gesture to incorporate representational elements. PMID:22106432

  4. Gesture and Body-Movement as Teaching and Learning Tools in the Classical Voice Lesson: A Survey into Current Practice

    Science.gov (United States)

    Nafisi, Julia

    2013-01-01

    This article discusses the use of gesture and body-movement in the teaching of singing and reports on a survey amongst professional singing teachers in Germany regarding their use of gesture and body movement as pedagogic tools in their teaching. The nomenclature of gestures and movements used in the survey is based on a previous study by the…

  5. Tandem mirror theory workshop

    International Nuclear Information System (INIS)

    1981-05-01

    The workshop was divided into three sections which were constituted according to subject matter: RF Heating, MHD Equilibrium and Stability, and Transport and Microstability. An overview from Livermore's point of view was given at the beginning of each session. Each session was assigned a secretary to take notes. These notes have been used in preparing this report on the workshop. The report includes the activities, conclusions, and recommendations of the workshop

  6. Innovative confinement concepts workshop

    International Nuclear Information System (INIS)

    Kirkpatrick, R.C.

    1998-01-01

    The Innovative Confinement Concepts Workshop occurred in California during the week preceding the Second Symposium on Current Trends in International Fusion Research. An informal report was made to the Second Symposium. A summary of the Workshop concluded that some very promising ideas were presented, that innovative concept development is a central element of the restructured US DOE Fusion Energy Sciences program, and that the Workshop should promote real scientific progress in fusion.

  7. Emergency response workers workshop

    International Nuclear Information System (INIS)

    Agapeev, S.A.; Glukhikh, E.N.; Tyurin, R.L.

    2012-01-01

    A training workshop entitled Current issues and potential improvements in Rosatom Corporation emergency prevention and response system was held in May-June, 2012. The workshop combined theoretical training with a full-scale practical exercise that demonstrated the existing innovative capabilities for radiation reconnaissance, diving equipment and robotics, aircraft, emergency response and rescue hardware and machinery. This paper describes the activities carried out during the workshop.

  8. Fermi GBM Observations of LIGO Gravitational-Wave Event Gw150914

    Science.gov (United States)

    Connaughton, V.; Burns, E.; Goldstein, A.; Blackburn, L.; Briggs, M. S.; Zhang, B.-B.; Camp, J.; Christensen, N.; Hui, C. M.; Jenke, P.; hide

    2016-01-01

    With an instantaneous view of 70% of the sky, the Fermi Gamma-ray Burst Monitor (GBM) is an excellent partner in the search for electromagnetic counterparts to gravitational-wave (GW) events. GBM observations at the time of the Laser Interferometer Gravitational-wave Observatory (LIGO) event GW150914 reveal the presence of a weak transient above 50 keV, 0.4 s after the GW event, with a false-alarm probability of 0.0022 (2.9σ). This weak transient lasting 1 s was not detected by any other instrument and does not appear to be connected with other previously known astrophysical, solar, terrestrial, or magnetospheric activity. Its localization is ill-constrained but consistent with the direction of GW150914. The duration and spectrum of the transient event are consistent with a weak short gamma-ray burst (GRB) arriving at a large angle to the direction in which Fermi was pointing, where the GBM detector response is not optimal. If the GBM transient is associated with GW150914, then this electromagnetic signal from a stellar mass black hole binary merger is unexpected. We calculate a luminosity in hard X-ray emission between 1 keV and 10 MeV of 1.8(+1.5/−1.0) × 10⁴⁹ erg/s. Future joint observations of GW events by LIGO/Virgo and Fermi GBM could reveal whether the weak transient reported here is a plausible counterpart to GW150914 or a chance coincidence, and will further probe the connection between compact binary mergers and short GRBs.
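
    The quoted significance can be checked directly: a false-alarm probability of 0.0022 corresponds to roughly a 2.9σ Gaussian tail. A minimal sketch using only the Python standard library (the one-sided convention is an assumption here; conventions vary between collaborations):

```python
from statistics import NormalDist

def far_to_sigma(p, one_sided=True):
    """Convert a false-alarm probability into an equivalent Gaussian
    significance in sigma. One-sided by default (an assumed convention)."""
    if not one_sided:
        p = p / 2.0  # two-sided: split the tail probability across both tails
    return NormalDist().inv_cdf(1.0 - p)

# The abstract's 0.0022 false-alarm probability lands near the quoted 2.9 sigma:
print(round(far_to_sigma(0.0022), 2))  # ~2.85
```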

  9. Recent Progress in GW-based Methods for Excited-State Calculations of Reduced Dimensional Systems

    Science.gov (United States)

    da Jornada, Felipe H.

    2015-03-01

    Ab initio calculations of excited-state phenomena within the GW and GW-Bethe-Salpeter equation (GW-BSE) approaches allow one to accurately study the electronic and optical properties of various materials, including systems with reduced dimensionality. However, several challenges arise when dealing with complicated nanostructures where the electronic screening is strongly spatially and directionally dependent. In this talk, we discuss some recent developments to address these issues. First, we turn to the slow convergence of quasiparticle energies and exciton binding energies with respect to k-point sampling. This is very effectively dealt with using a new hybrid sampling scheme, which results in savings of several orders of magnitude in computation time. A new ab initio method is also developed to incorporate substrate screening into GW and GW-BSE calculations. These two methods have been applied to mono- and few-layer MoSe2, and yielded strongly environment-dependent behaviors in good agreement with experiment. Other issues that arise in confined systems and materials with reduced dimensionality, such as the effect of the Tamm-Dancoff approximation to GW-BSE, and the calculation of non-radiative exciton lifetime, are also addressed. These developments have been efficiently implemented and successfully applied to real systems in an ab initio framework using the BerkeleyGW package. I would like to acknowledge collaborations with Diana Y. Qiu, Steven G. Louie, Meiyue Shao, Chao Yang, and the experimental groups of M. Crommie and F. Wang. This work was supported by Department of Energy under Contract No. DE-AC02-05CH11231 and by National Science Foundation under Grant No. DMR10-1006184.

  10. Insight-HXMT observations of the first binary neutron star merger GW170817

    Science.gov (United States)

    Li, TiPei; Xiong, ShaoLin; Zhang, ShuangNan; Lu, FangJun; Song, LiMing; Cao, XueLei; Chang, Zhi; Chen, Gang; Chen, Li; Chen, TianXiang; Chen, Yong; Chen, YiBao; Chen, YuPeng; Cui, Wei; Cui, WeiWei; Deng, JingKang; Dong, YongWei; Du, YuanYuan; Fu, MinXue; Gao, GuanHua; Gao, He; Gao, Min; Ge, MingYu; Gu, YuDong; Guan, Ju; Guo, ChengCheng; Han, DaWei; Hu, Wei; Huang, Yue; Huo, Jia; Jia, ShuMei; Jiang, LuHua; Jiang, WeiChun; Jin, Jing; Jin, YongJie; Li, Bing; Li, ChengKui; Li, Gang; Li, MaoShun; Li, Wei; Li, Xian; Li, XiaoBo; Li, XuFang; Li, YanGuo; Li, ZiJian; Li, ZhengWei; Liang, XiaoHua; Liao, JinYuan; Liu, CongZhan; Liu, GuoQing; Liu, HongWei; Liu, ShaoZhen; Liu, XiaoJing; Liu, Yuan; Liu, YiNong; Lu, Bo; Lu, XueFeng; Luo, Tao; Ma, Xiang; Meng, Bin; Nang, Yi; Nie, JianYin; Ou, Ge; Qu, JinLu; Sai, Na; Sun, Liang; Tan, Yin; Tao, Lian; Tao, WenHui; Tuo, YouLi; Wang, GuoFeng; Wang, HuanYu; Wang, Juan; Wang, WenShuai; Wang, YuSa; Wen, XiangYang; Wu, BoBing; Wu, Mei; Xiao, GuangCheng; Xu, He; Xu, YuPeng; Yan, LinLi; Yang, JiaWei; Yang, Sheng; Yang, YanJi; Zhang, AiMei; Zhang, ChunLei; Zhang, ChengMo; Zhang, Fan; Zhang, HongMei; Zhang, Juan; Zhang, Qiang; Zhang, Shu; Zhang, Tong; Zhang, Wei; Zhang, WanChang; Zhang, WenZhao; Zhang, Yi; Zhang, Yue; Zhang, YiFei; Zhang, YongJie; Zhang, Zhao; Zhang, ZiLiang; Zhao, HaiSheng; Zhao, JianLing; Zhao, XiaoFan; Zheng, ShiJie; Zhu, Yue; Zhu, YuXuan; Zou, ChangLin

    2018-03-01

    Finding the electromagnetic (EM) counterpart of binary compact star merger, especially the binary neutron star (BNS) merger, is critically important for gravitational wave (GW) astronomy, cosmology and fundamental physics. On Aug. 17, 2017, Advanced LIGO and Fermi/GBM independently triggered the first BNS merger, GW170817, and its high energy EM counterpart, GRB 170817A, respectively, resulting in a global observation campaign covering gamma-ray, X-ray, UV, optical, IR, radio as well as neutrinos. The High Energy X-ray telescope (HE) onboard Insight-HXMT (Hard X-ray Modulation Telescope) is the unique high-energy gamma-ray telescope that monitored the entire GW localization area and especially the optical counterpart (SSS17a/AT2017gfo) with very large collection area (∼1000 cm²) and microsecond time resolution in 0.2-5 MeV. In addition, Insight-HXMT quickly implemented a Target of Opportunity (ToO) observation to scan the GW localization area for potential X-ray emission from the GW source. Although Insight-HXMT did not detect any significant high energy (0.2-5 MeV) radiation from GW170817, its observation helped to confirm the unexpected weak and soft nature of GRB 170817A. Meanwhile, Insight-HXMT/HE provides one of the most stringent constraints (∼10⁻⁷ to 10⁻⁶ erg/cm²/s) for both GRB 170817A and any other possible precursor or extended emissions in 0.2-5 MeV, which help us to better understand the properties of EM radiation from this BNS merger. Therefore the observation of Insight-HXMT constitutes an important chapter in the full context of multi-wavelength and multi-messenger observation of this historical GW event.

  11. Person and gesture tracking with smart stereo cameras

    Science.gov (United States)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control including: video gaming, location based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo cameras platform, describe the person tracking and gesture tracking systems

  12. Alternate fusion fuels workshop

    International Nuclear Information System (INIS)

    1981-06-01

    The workshop was organized to focus on a specific confinement scheme: the tokamak. The workshop was divided into two parts: systems and physics. The topics discussed in the systems session were narrowly focused on systems and engineering considerations in the tokamak geometry. The workshop participants reviewed the status of system studies, trade-offs between d-t and d-d based reactors and engineering problems associated with the design of a high-temperature, high-field reactor utilizing advanced fuels. In the physics session issues were discussed dealing with high-beta stability, synchrotron losses and transport in alternate fuel systems. The agenda for the workshop is attached

  13. MOOC Design Workshop

    DEFF Research Database (Denmark)

    Nørgård, Rikke Toft; Mor, Yishay; Warburton, Steven

    2016-01-01

    For the last two years we have been running a series of successful MOOC design workshops. These workshops build on previous work in learning design and MOOC design patterns. The aim of these workshops is to aid practitioners in defining and conceptualising educational innovations (predominantly, but not exclusively, MOOCs) which are based on an empathic user-centered view of the target learners and teachers. In this paper, we share the main principles, patterns and resources of our workshops and present some initial results for their effectiveness…

  14. Differentiation of energy concepts through speech and gesture in interaction

    Science.gov (United States)

    Close, Hunter G.; Scherr, Rachel E.

    2012-02-01

    Through microanalysis of speech and gesture in one interaction between learners (in a course on energy for in-service teachers), we observe coherent states of conceptual differentiation of different learners. We observe that the interaction among learners across different states of differentiation is not in itself sufficient to accomplish differentiation; however, the real-time receptivity of the learners to conceptually relevant details in each other's actions suggests that future instruction that focuses explicitly on such actions and their meaning in context may assist differentiation.

  15. The Perception of Sound Movements as Expressive Gestures

    DEFF Research Database (Denmark)

    Götzen, Amalia De; Sikström, Erik; Korsgaard, Dannie

    2014-01-01

    This paper is a preliminary attempt to investigate the perception of sound movements as expressive gestures. The idea is that if sound movement is used as a musical parameter, a listener (or a subject) should be able to distinguish among different movements and she/he should be able to group them… by drawing it on a tablet. Preliminary results show that subjects could consistently group the stimuli, and that they primarily used paths and legato-staccato patterns to discriminate among different sound movements/expressive intentions…

  16. NUI framework based on real-time head pose estimation and hand gesture recognition

    Directory of Open Access Journals (Sweden)

    Kim Hyunduk

    2016-01-01

    The natural user interface (NUI) enables natural motion-based interaction without devices or tools such as mice, keyboards, pens and markers. In this paper, we develop a natural user interface framework based on two recognition modules. The first module is a real-time head pose estimation module using random forests, and the second is a hand gesture recognition module, named the Hand gesture Key Emulation Toolkit (HandGKET). Using the head pose estimation module, we can know where the user is looking and what the user's focus of attention is. Moreover, using the hand gesture recognition module, we can also control the computer with the user's hand gestures, without a mouse or keyboard. In the proposed framework, the user's head direction and hand gestures are mapped to mouse and keyboard events, respectively.
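
    The mapping described in the abstract's last sentence can be pictured as a simple lookup table from recognized gesture labels to input events. This is an illustrative sketch, not the HandGKET implementation; the gesture labels and event names are invented:

```python
# Hypothetical gesture-label -> (device, event) lookup table; a real system
# would inject these events through an OS input API.
GESTURE_TO_EVENT = {
    "swipe_left":  ("keyboard", "LEFT_ARROW"),
    "swipe_right": ("keyboard", "RIGHT_ARROW"),
    "fist":        ("mouse", "LEFT_CLICK"),
    "open_palm":   ("mouse", "RIGHT_CLICK"),
}

def emulate(gesture_label):
    """Translate a recognized gesture into a (device, event) pair;
    unrecognized gestures produce no event."""
    return GESTURE_TO_EVENT.get(gesture_label)

print(emulate("swipe_left"))  # -> ('keyboard', 'LEFT_ARROW')
print(emulate("wave"))        # -> None
```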

  17. The role of gestures in achieving understanding in Early English teaching in Denmark

    DEFF Research Database (Denmark)

    aus der Wieschen, Maria Vanessa; Eskildsen, Søren Wind

    …school in Denmark. The use of multimodal resources employed by teachers in foreign language classrooms has been studied by e.g. Muramuto (1999), Lazaraton (2004), Taleghani-Nikazm (2008), Eskildsen & Wagner (2013), Sert (2015). This research has established gestures as a pervasive phenomenon in language… brings this established agreement on the importance of gestures in classroom interaction to bear on early foreign language learning: Whereas prior work on gestures in L2 classrooms has predominantly dealt with adult L2 learners, this paper investigates the extent to which a teacher makes use of gestures… in early child foreign language teaching. Using multimodal conversation analysis of three hours of classroom instruction in a Danish primary school, we uncover how a teacher uses gestures to enhance the comprehension of his L2 talk when teaching English in the 1st and 3rd grade, both of which are beginning…

  18. Magic Ring: A Finger-Worn Device for Multiple Appliances Control Using Static Finger Gestures

    Directory of Open Access Journals (Sweden)

    Tongjun Huang

    2012-05-01

    An ultimate goal of Ubiquitous Computing is to enable people to interact with surrounding electrical devices using the habitual body gestures they use to communicate with each other. The feasibility of such an idea is demonstrated through a wearable gestural device named the Magic Ring (MR), an original compact wireless sensing mote in a ring shape that can recognize various finger gestures. A scenario of wireless control of multiple appliances is selected as a case study to evaluate the usability of such a gestural interface. Experiments comparing the MR and a Remote Controller (RC) were performed to evaluate usability. The results show that, with only 10 minutes of practice, the proposed paradigm of gesture-based control achieves a performance of about six tasks completed per minute, on the same level as the RC-based method.

  19. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    Science.gov (United States)

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Especially, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different size of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
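
    The code-matching step can be made concrete with a toy example: each sign is a 5-tuple of component codes (hand shape, axis, orientation, rotation, trajectory), and an unknown gesture is assigned to the sign whose code agrees with the most predicted components. A hedged sketch with invented codes; the paper's actual code table and component classifiers are not reproduced here:

```python
# Invented code table: sign word -> 5 component codes
# (hand shape, axis, orientation, rotation, trajectory).
CODE_TABLE = {
    "THANKS": (2, 0, 1, 0, 3),
    "HELLO":  (1, 0, 2, 1, 3),
    "GOOD":   (2, 1, 1, 0, 0),
}

def classify(predicted_components):
    """Return the sign whose code agrees with the most predicted components."""
    def matches(code):
        return sum(p == c for p, c in zip(predicted_components, code))
    return max(CODE_TABLE, key=lambda word: matches(CODE_TABLE[word]))

print(classify((1, 0, 2, 1, 0)))  # agrees with HELLO on 4 of 5 components
```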

  20. Investigation of the Relationship between Hand Gestures and Speech in Adults Who Stutter

    Directory of Open Access Journals (Sweden)

    Ali Barikrou

    2008-12-01

    Objective: Gestures of the hands and arms have long been observed to accompany speech in spontaneous conversation. However, the way in which these two modes of expression are related in production is not yet fully understood. The present study therefore investigates the spontaneous gestures that accompany speech in adults who stutter in comparison to fluent controls. Materials & Methods: In this cross-sectional, comparative study, ten adults who stutter were selected randomly from speech and language pathology clinics and compared with ten healthy persons, matched for sex, age and education, as a control group. A cartoon story-retelling task was used to elicit spontaneous gestures that accompany speech. Participants were asked to watch the animation carefully and then retell the storyline in as much detail as possible to a listener sitting across from them; their narration was video-recorded simultaneously. The recorded utterances and gestures were then analyzed, using the Kolmogorov-Smirnov and independent t-tests for data analysis. Results: Stutterers on average used fewer iconic gestures in their narration than controls (P=0.005). Stutterers also used fewer iconic gestures per utterance and per word than controls (P=0.019). Furthermore, the execution of gesture production during moments of dysfluency revealed that more than 70% of the gestures produced with stuttering were frozen or abandoned at the moment of dysfluency. Conclusion: Gesture and speech appear to have such an intricate and deep association that they show similar frequency and timing patterns and move in parallel, such that a deficit in speech results in a deficiency in hand gesture.

  1. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework

    Directory of Open Access Journals (Sweden)

    Shengjing Wei

    2016-04-01

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Especially, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different size of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  2. Touch and You’re Trapp(ck)ed: Quantifying the Uniqueness of Touch Gestures for Tracking

    Directory of Open Access Journals (Sweden)

    Masood Rahat

    2018-04-01

    We argue that touch-based gestures on touch-screen devices enable the threat of a form of persistent and ubiquitous tracking which we call touch-based tracking. Touch-based tracking goes beyond the tracking of virtual identities and has the potential for cross-device tracking as well as identifying multiple users using the same device. We demonstrate the likelihood of touch-based tracking by focusing on touch gestures widely used to interact with touch devices, such as swipes and taps. Our objective is to quantify and measure the information carried by touch-based gestures which may lead to tracking users. For this purpose, we develop an information-theoretic method that measures the amount of information about users leaked by gestures when modelled as feature vectors. Our methodology allows us to evaluate the information leaked by individual features of gestures, samples of gestures, as well as samples of combinations of gestures. Through our purpose-built app, called TouchTrack, we gather gesture samples from 89 users, and demonstrate that touch gestures contain sufficient information to uniquely identify and track users. Our results show that writing samples (on a touch pad) can reveal 73.7% of information (when measured in bits), and left swipes can reveal up to 68.6% of information. Combining different combinations of gestures results in higher uniqueness, with the combination of keystrokes, swipes and writing revealing up to 98.5% of information about users. We further show that, through our methodology, we can correctly re-identify returning users with a success rate of more than 90%.
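
    The paper's central quantity, how many bits a gesture feature reveals about who is touching the screen, can be approximated by the Shannon entropy of the feature's empirical distribution across a user population. A simplified sketch with invented data, not the TouchTrack methodology itself:

```python
from collections import Counter
from math import log2

def bits_of_information(feature_values):
    """Shannon entropy (in bits) of a discretized gesture feature across a
    user population -- a simplified stand-in for the paper's measure."""
    counts = Counter(feature_values)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Toy population: one discretized swipe-curvature bin per user (invented).
# Uniquely identifying one of 8 users requires log2(8) = 3 bits.
swipe_bins = [0, 0, 1, 2, 3, 3, 3, 4]
h = bits_of_information(swipe_bins)
print(f"{h:.2f} of {log2(len(swipe_bins)):.2f} bits")  # ~2.16 of 3.00 bits
```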

  3. Relating Gestures and Speech: An analysis of students' conceptions about geological sedimentary processes

    Science.gov (United States)

    Herrera, Juan Sebastian; Riggs, Eric M.

    2013-08-01

    Advances in cognitive science and educational research indicate that a significant part of spatial cognition is facilitated by gesture (e.g. giving directions, or describing objects or landscape features). We aligned the analysis of gestures with conceptual metaphor theory to probe the use of mental image schemas as a source of concept representations for students' learning of sedimentary processes. A hermeneutical approach enabled us to access student meaning-making from students' verbal reports and gestures about four core geological ideas that involve sea-level change and sediment deposition. The study included 25 students from three US universities. Participants were enrolled in upper-level undergraduate courses on sedimentology and stratigraphy. We used semi-structured interviews for data collection. Our gesture coding focused on three types of gestures: deictic, iconic, and metaphoric. From analysis of video recorded interviews, we interpreted image schemas in gestures and verbal reports. Results suggested that students attempted to make more iconic and metaphoric gestures when dealing with abstract concepts, such as relative sea level, base level, and unconformities. Based on the analysis of gestures that recreated certain patterns including time, strata, and sea-level fluctuations, we reasoned that proper representational gestures may indicate completeness in conceptual understanding. We concluded that students rely on image schemas to develop ideas about complex sedimentary systems. Our research also supports the hypothesis that gestures provide an independent and non-linguistic indicator of image schemas that shape conceptual development, and also play a role in the construction and communication of complex spatial and temporal concepts in the geosciences.

  4. Azasordarins: Susceptibility of Fluconazole-Susceptible and Fluconazole-Resistant Clinical Isolates of Candida spp. to GW 471558

    OpenAIRE

    Cuenca-Estrella, Manuel; Mellado, Emilia; Díaz-Guerra, Teresa M.; Monzón, Araceli; Rodríguez-Tudela, Juan L.

    2001-01-01

    The in vitro activity of the azasordarin GW 471558 was compared with those of amphotericin B, flucytosine, itraconazole, and ketoconazole against 177 clinical isolates of Candida spp. GW 471558 showed potent activity against Candida albicans, Candida glabrata, and Candida tropicalis, even against isolates with decreased susceptibility to azoles. Candida krusei, Candida parapsilosis, Candida lusitaniae, and Candida guilliermondii are resistant to GW 471558 in vitro (MICs, >128 μg/ml).

  5. IMPLICATIONS OF THE TENTATIVE ASSOCIATION BETWEEN GW150914 AND A FERMI -GBM TRANSIENT

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xiang; Yuan, Qiang; Jin, Zhi-Ping; Fan, Yi-Zhong; Liu, Si-Ming; Wei, Da-Ming [Key Laboratory of dark Matter and Space Astronomy, Purple Mountain Observatory, Chinese Academy of Science, Nanjing 210008 (China); Zhang, Fu-Wen, E-mail: yzfan@pmo.ac.cn, E-mail: dmwei@pmo.ac.cn, E-mail: fwzhang@glut.edu.cn [College of Science, Guilin University of Technology, Guilin 541004 (China)

    2016-08-10

The merger-driven gamma-ray bursts (GRBs) and their associated gravitational-wave (GW) radiation, if both are successfully detected, have far-reaching implications, including: (i) testing the general origin model through a statistical comparison of the physical properties of short/long-short GRBs with and without GW detection; (ii) revealing the physical processes taking place at the central engine; and (iii) measuring the velocity of gravitational waves directly and accurately. In this work, we discuss these implications in the case of a possible association of GW150914 with Gamma-ray Burst Monitor (GBM) transient 150914. We compared GBM transient 150914 with other SGRBs and found that such an event may be a distinct outlier in some statistical diagrams, possibly due to its specific binary black hole merger origin. However, the presence of a “new” group of SGRBs with “unusual” physical parameters is also possible. If the outflow of GBM transient 150914 was launched by accretion onto the nascent black hole, magnetic activity rather than the neutrino process is likely responsible for the energy extraction, and the accretion disk mass is estimated to be ∼10^{-5} M_⊙. The GW150914/GBM transient 150914 association, if confirmed, would provide the first opportunity to directly measure the GW velocity, and its fractional departure from the speed of light should be within ∼10^{-17}.

  6. Dark Energy After GW170817: Dead Ends and the Road Ahead

    Science.gov (United States)

    Ezquiaga, Jose María; Zumalacárregui, Miguel

    2017-12-01

Multimessenger gravitational-wave (GW) astronomy has commenced with the detection of the binary neutron star merger GW170817 and its associated electromagnetic counterparts. The almost coincident observation of both signals places an exquisite bound on the GW speed |c_{g}/c-1|≤5×10^{-16}. We use this result to probe the nature of dark energy (DE), showing that a large class of scalar-tensor theories and DE models are highly disfavored. As an example we consider the covariant Galileon, a cosmologically viable, well-motivated gravity theory which predicts a variable GW speed at low redshift. Our results eliminate any late-universe application of these models, as well as their Horndeski and most of their beyond Horndeski generalizations. Three alternatives (and their combinations) emerge as the only possible scalar-tensor DE models: (1) restricting Horndeski's action to its simplest terms, (2) applying a conformal transformation which preserves the causal structure, and (3) compensating the different terms that modify the GW speed (to be robust, the compensation has to be independent of the background on which GWs propagate). Our conclusions extend to any other gravity theory predicting varying c_{g} such as Einstein-Aether, Hořava gravity, Generalized Proca, tensor-vector-scalar gravity (TEVES), and other MOND-like gravities.
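The quoted bound follows from simple light-travel arithmetic: a propagation-time offset Δt over a distance D limits the fractional speed difference to roughly c·Δt/D. As a hedged back-of-envelope check (the ≈1.7 s gamma-ray delay and ≈40 Mpc distance are standard GW170817/GRB 170817A values, not stated in this abstract, and the variable names are ours):

```python
# Back-of-envelope check of the GW-speed bound |c_g/c - 1| <~ c*dt/D.
# The 1.7 s delay and 40 Mpc distance are assumed standard GW170817 numbers.
C = 2.998e8      # speed of light, m/s
MPC = 3.086e22   # metres per megaparsec

delay_s = 1.7
distance_m = 40.0 * MPC

bound = C * delay_s / distance_m
print(f"{bound:.1e}")  # a few times 10^-16, consistent with the quoted bound
```

The result lands at ~4×10^-16, the same order as the published 5×10^-16 bound (the exact published value also folds in conservative assumptions about the intrinsic emission delay).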

  7. Quasiparticle self-consistent GW method for the spectral properties of complex materials.

    Science.gov (United States)

    Bruneval, Fabien; Gatti, Matteo

    2014-01-01

The GW approximation to the formally exact many-body perturbation theory has been applied successfully to materials for several decades. Since the practical calculations are extremely cumbersome, the GW self-energy is most commonly evaluated using a first-order perturbative approach: This is the so-called G₀W₀ scheme. However, the G₀W₀ approximation depends heavily on the mean-field theory that is employed as a basis for the perturbation theory. Recently, a procedure to reach a kind of self-consistency within the GW framework has been proposed. The quasiparticle self-consistent GW (QSGW) approximation retains some positive aspects of a self-consistent approach, but circumvents the intricacies of the complete GW theory, which is inconveniently based on a non-Hermitian and dynamical self-energy. This new scheme allows one to surmount most of the flaws of the usual G₀W₀ at a moderate calculation cost and at a reasonable implementation burden. In particular, the issues of small band gap semiconductors, of large band gap insulators, and of some transition metal oxides are then cured. The QSGW method broadens the range of materials for which the spectral properties can be predicted with confidence.
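The difference between a one-shot, linearized quasiparticle correction and an energy-self-consistent update can be caricatured with a toy single-pole self-energy. This is an invented model for illustration only, not a real GW self-energy, and energy-only iteration is a caricature of the self-consistency idea, not the actual QSGW procedure:

```python
# Toy quasiparticle energies: one-shot (linearized) vs. iterated fixed point.
# sigma() is an invented single-pole model self-energy, arbitrary units.
A, POLE = 0.5, -2.0

def sigma(w):
    """Model self-energy: one pole below the state of interest."""
    return A / (w - POLE)

def dsigma(w):
    """Analytic derivative of the model self-energy."""
    return -A / (w - POLE) ** 2

eps = 1.0  # mean-field eigenvalue (stand-in for an LDA level)

# One-shot correction: linearize sigma around eps with renormalization Z.
Z = 1.0 / (1.0 - dsigma(eps))
e_oneshot = eps + Z * sigma(eps)

# Energy-only self-consistency: iterate E = eps + sigma(E) to a fixed point.
e_sc = eps
for _ in range(100):
    e_sc = eps + sigma(e_sc)

print(e_oneshot, e_sc)  # nearly identical here because the pole is weak
```

For a weak, smooth self-energy the two agree closely; the abstract's point is that for real materials the one-shot answer inherits the starting mean field, which the quasiparticle-self-consistent construction removes.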

  8. Dark Energy After GW170817: Dead Ends and the Road Ahead.

    Science.gov (United States)

    Ezquiaga, Jose María; Zumalacárregui, Miguel

    2017-12-22

Multimessenger gravitational-wave (GW) astronomy has commenced with the detection of the binary neutron star merger GW170817 and its associated electromagnetic counterparts. The almost coincident observation of both signals places an exquisite bound on the GW speed |c_{g}/c-1|≤5×10^{-16}. We use this result to probe the nature of dark energy (DE), showing that a large class of scalar-tensor theories and DE models are highly disfavored. As an example we consider the covariant Galileon, a cosmologically viable, well-motivated gravity theory which predicts a variable GW speed at low redshift. Our results eliminate any late-universe application of these models, as well as their Horndeski and most of their beyond Horndeski generalizations. Three alternatives (and their combinations) emerge as the only possible scalar-tensor DE models: (1) restricting Horndeski's action to its simplest terms, (2) applying a conformal transformation which preserves the causal structure, and (3) compensating the different terms that modify the GW speed (to be robust, the compensation has to be independent of the background on which GWs propagate). Our conclusions extend to any other gravity theory predicting varying c_{g} such as Einstein-Aether, Hořava gravity, Generalized Proca, tensor-vector-scalar gravity (TEVES), and other MOND-like gravities.

  9. Functional Marker Development and Effect Analysis of Grain Size Gene GW2 in Extreme Grain Size Germplasm in Rice

    Directory of Open Access Journals (Sweden)

    Zhang Ya-dong

    2015-03-01

GW2 is an important gene that regulates grain width and weight. We used cDNA cloning to obtain the sequences of GW2 from the large- and small-grained rice varieties TD70 and Kasalath, respectively. We then developed a dCAPS (derived cleaved amplified polymorphic sequence) marker on the basis of the sequence difference between the functional and nonfunctional GW2 genes to analyze the genotypes and phenotypes of recombinant inbred lines. Results showed that the sequence of GW2^TD70 had a single-nucleotide deletion at site 316 that generates a premature termination codon, truncating the GW2 protein. By contrast, the sequence of GW2^Kasalath encoded an intact protein. The dCAPS marker was designed around this base A deletion at site 316. After the PCR product was digested by ApoI, TD70 showed 21 and 30 bp fragments, and Kasalath showed a 51 bp fragment. Among the recombinant inbred lines, 82 lines contained GW2^TD70 and 158 lines contained GW2^Kasalath. The lines that contained the TD70 allele displayed substantial increases in grain width and 1000-grain weight. This result suggested that GW2 plays a critical role in rice breeding.
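The genotype call implied by the digest pattern is mechanical: the deletion allele is cut by ApoI into 21 + 30 bp pieces, while the intact allele leaves the 51 bp amplicon whole. A minimal sketch of that decision rule (the function name and labels are ours, not the authors' code):

```python
# Sketch of a dCAPS genotype call from ApoI-digest fragment sizes of the
# 51 bp GW2 amplicon, following the fragment pattern reported above.
def call_gw2_genotype(fragments_bp):
    """Classify a line from its observed digest fragment sizes (bp)."""
    sizes = sorted(fragments_bp)
    if sizes == [21, 30]:
        return "GW2-TD70 (truncated allele, large grain)"
    if sizes == [51]:
        return "GW2-Kasalath (intact allele)"
    return "unresolved"

print(call_gw2_genotype([30, 21]))  # TD70-type digest pattern
```

Note that 21 + 30 = 51: the two patterns are the same amplicon with and without the engineered ApoI site, which is what makes the marker co-dominant and easy to score on a gel.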

  10. Sampling results, DNAPL monitoring well GW-727, Oak Ridge Y-12 Plant, Oak Ridge, Tennessee. Quarterly report, 1995

    International Nuclear Information System (INIS)

    1996-05-01

In January 1990, dense non-aqueous-phase liquids (DNAPLs) were discovered at a depth of approximately 274 feet below ground surface along the southern border of the Y-12 Plant Burial Grounds. Immediately after the discovery, an investigation was conducted to assess the occurrence of DNAPL at the site and to make recommendations for further action. A major task in the work plan calls for the construction and installation of five multiport wells. This report summarizes purging and sampling activities for one of these multiport wells, GW-727, and presents the analytical results for GW-727.

  11. Comparing GW+DMFT and LDA+DMFT for the testbed material SrVO₃

    Energy Technology Data Exchange (ETDEWEB)

    Taranto, Ciro; Toschi, Alessandro; Held, Karsten [Institute for Solid State Physics, Vienna University of Technology (Austria); Kaltak, Merzuk; Kresse, Georg [University of Vienna, Faculty of Physics and Center for Computational Materials Science (Austria); Parragh, Nicolaus; Sangiovanni, Giorgio [Institut fuer Theoretische Physik und Astrophysik, Universitaet Wuerzburg (Germany)

    2013-07-01

We have implemented the GW+dynamical mean field theory (DMFT) approach in the Vienna ab initio simulation package. Employing the interaction values obtained from the locally unscreened random phase approximation (RPA), we compare GW+DMFT and LDA+DMFT against each other and against experiment for SrVO₃. We observed a partial compensation between stronger electronic correlations due to the reduced GW bandwidth and weaker correlations due to the larger screening of the RPA interaction, so that the spectra obtained from the two approaches are quite similar and agree well with experiment. Notably, GW+DMFT better reproduces the position of the lower Hubbard side band.

  12. DNA Encoding Training Using 3D Gesture Interaction.

    Science.gov (United States)

    Nicola, Stelian; Handrea, Flavia-Laura; Crişan-Vida, Mihaela; Stoicu-Tivadar, Lăcrămioara

    2017-01-01

The work described in this paper summarizes the development process and presents the results of a human genetics training application covering the 20 amino acids formed by combinations of three DNA nucleotides, targeting mainly medical and bioinformatics students. Existing applications that use hand gestures recognized by the Leap Motion sensor support manipulating molecules, learning the periodic table, or visualizing animated reactions of specific molecules with water. The novelty of the current application consists in defining new Leap Motion gestures for application control and in a tag-based algorithm that identifies each amino acid from the positions and types of the four DNA nucleotides in 3D virtual space. The team proposes a 3D application based on the Unity editor and the Leap Motion sensor in which the user is free to form different combinations of the 20 amino acids. The results confirm that this new way of studying medicine/biochemistry, using the Leap Motion sensor to handle amino acids, is suitable for students. The application is original and interactive, and users can create their own amino acid structures in a 3D environment, which they could not do with traditional pen and paper.
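The underlying biology the application teaches, mapping nucleotide triplets (codons) to amino acids, can be sketched in a few lines. The codon assignments below are a small subset of the standard genetic code; the function and its behaviour on unknown codons are our illustrative choices, not the paper's tag-based algorithm:

```python
# Minimal codon-to-amino-acid translation over a subset of the standard
# genetic code (DNA coding-strand codons).
CODON_TABLE = {
    "ATG": "Met", "TTT": "Phe", "GGC": "Gly",
    "GCA": "Ala", "TGG": "Trp", "TAA": "STOP",
}

def translate(dna: str):
    """Translate codon by codon, stopping at a stop codon."""
    amino_acids = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")  # "?" marks codons not in the subset
        if aa == "STOP":
            break
        amino_acids.append(aa)
    return amino_acids

print(translate("ATGTTTGGCTAA"))  # ['Met', 'Phe', 'Gly']
```

The full table has 64 codons for 20 amino acids plus stop signals, which is exactly the many-to-one mapping the 3D gesture interface lets students explore.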

  13. Modelling gesture use and early language development in autism spectrum disorder.

    Science.gov (United States)

    Manwaring, Stacy S; Mead, Danielle L; Swineford, Lauren; Thurm, Audrey

    2017-09-01

Nonverbal communication abilities, including gesture use, are impaired in autism spectrum disorder (ASD). However, little is known about how common gestures may influence or be influenced by other areas of development. This study examined the relationships between gesture, fine motor skills and language in young children with ASD and a comparison group, using multiple measures and methods in a structural equation modelling framework. Participants included 110 children with ASD and a non-ASD comparison group of 87 children (that included children with developmental delays (DD) or typical development (TD)), from 12 to 48 months of age. A construct of gesture use, as measured by the Communication and Symbolic Behavior Scales-Developmental Profile Caregiver Questionnaire (CQ) and the Autism Diagnostic Observation Schedule (ADOS), together with fine motor measures from the Mullen Scales of Early Learning and the Vineland Adaptive Behavior Scales-II (VABS-II), was examined using second-order confirmatory factor analysis (CFA). A series of structural equation models then examined concurrent relationships between the aforementioned latent gesture construct and expressive and receptive language. A series of hierarchical regression analyses was run in a subsample of 36 children with ASD with longitudinal data to determine how gesture factor scores predicted later language outcomes. Across study groups, the gesture CFA model with indicators of gesture use from both the CQ (parent-reported) and ADOS (direct observation), and measures of fine motor provided good fit, with all indicators significantly and strongly loading onto one gesture factor. This model of gesture use, controlling for age, was found to correlate strongly with concurrent expressive and receptive language. The correlations between gestures and concurrent language were similar in magnitude in both the ASD and non-ASD groups. In the longitudinal subsample of children with ASD, gestures at time 1 predicted later receptive (but not expressive) language.

  14. Workshop of medical physics

    International Nuclear Information System (INIS)

    1988-01-01

This event was held in San Carlos de Bariloche, Argentine Republic, from 14 to 18 November 1988. A large proportion of the professionals working in medical physics participated in this workshop. This volume includes the papers presented at this Workshop of Medical Physics.

  15. Workshops on Writing Science

    Indian Academy of Sciences (India)

    2017-09-30

Sep 30, 2017 ... hands-on practice, feedback, mentoring and highly interactive sessions. The focus will be on work done as individuals and in teams. The maximum number of participants for the workshop is limited. The workshop is compulsorily residential. Boarding and lodging are free for selected candidates. Re-imbursement ...

  16. Warehouse Sanitation Workshop Handbook.

    Science.gov (United States)

    Food and Drug Administration (DHHS/PHS), Washington, DC.

    This workshop handbook contains information and reference materials on proper food warehouse sanitation. The materials have been used at Food and Drug Administration (FDA) food warehouse sanitation workshops, and are selected by the FDA for use by food warehouse operators and for training warehouse sanitation employees. The handbook is divided…

  17. SPLASH'13 workshops summary

    DEFF Research Database (Denmark)

    Balzer, S.; Schultz, U. P.

    2013-01-01

    Following its long-standing tradition, SPLASH 2013 will host 19 high-quality workshops, allowing their participants to meet and discuss research questions with peers, to mature new and exciting ideas, and to build up communities and start new collaborations. SPLASH workshops complement the main t...

  18. GW150914: First results from the search for binary black hole coalescence with Advanced LIGO

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bohémier, K.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Clayton, J. H.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Cokelaer, T.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; DeRosa, R. T.; De Rosa, R.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Dietz, A.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. 
M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fotopoulos, N.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, A.; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Goggin, L. M.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, G.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. 
V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Keppel, D. G.; Kennedy, R.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, Nam-Gyu; Kim, Namjun; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McKechan, D. J. A.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. 
D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E.; Merzougui, M.; Meshkov, S.; Messaritaki, E.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pan, Y.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. 
A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Robinson, C.; Rocchi, A.; Rodriguez, A. C.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Santamaría, L.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. 
P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Weßels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Wiesner, K.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Worden, J.; Wright, J. L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-06-01

On September 14, 2015, at 09:50:45 UTC the two detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) simultaneously observed the binary black hole merger GW150914. We report the results of a matched-filter search using relativistic models of compact-object binaries that recovered GW150914 as the most significant event during the coincident observations between the two LIGO detectors from September 12 to October 20, 2015. GW150914 was observed with a matched-filter signal-to-noise ratio of 24 and a false alarm rate estimated to be less than 1 event per 203,000 years, equivalent to a significance greater than 5.1σ.
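The matched-filter statistic behind such searches can be sketched in a toy time-domain form: correlate the data against a template and normalize by the template's norm, so the peak value is a signal-to-noise ratio. Real pipelines work with frequency-domain templates weighted by the detector noise spectrum; the template, injection parameters, and variable names below are invented for illustration:

```python
# Toy matched filter: recover an injected waveform in white Gaussian noise.
import math
import random

random.seed(1)
N, M, INJ, AMP = 512, 64, 200, 5.0  # data length, template length, injection index, amplitude

# Invented template: a damped sinusoid (a crude stand-in for a chirp).
template = [math.exp(-j / 32.0) * math.sin(0.5 * j) for j in range(M)]
norm = math.sqrt(sum(t * t for t in template))

data = [random.gauss(0.0, 1.0) for _ in range(N)]
for j in range(M):                       # inject the known signal at INJ
    data[INJ + j] += AMP * template[j]

# Normalized correlation: snr[i] is the SNR of the template placed at i.
snr = [sum(data[i + j] * template[j] for j in range(M)) / norm
       for i in range(N - M)]
peak = max(range(len(snr)), key=snr.__getitem__)
print(peak, round(snr[peak], 1))  # peak lands at (or next to) the injection
```

Turning such an SNR into a false alarm rate is done empirically, by time-shifting the detectors' data against each other to measure how often noise alone produces comparable peaks.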

  19. The Fermi GBM and LAT follow-up of GW150914

    Directory of Open Access Journals (Sweden)

    Bissaldi E.

    2017-01-01

Here we present observations by the Fermi Gamma-ray Burst Monitor (GBM) [1] and by the Large Area Telescope (LAT) [2] of the LIGO gravitational-wave event GW150914, which has been associated with the merger of two stellar-mass black holes. We report the presence of a weak transient event in GBM data, close in time to the LIGO event. We discuss the characteristics of this GBM transient, which are consistent with a weak short GRB arriving at a large angle to the direction in which Fermi was pointing. Furthermore, we report LAT upper limits (ULs) for GW150914, and we present the strategy for follow-up observations of GW events with the LAT.

  20. Implications from GW170817 and I-Love-Q relations for relativistic hybrid stars

    Science.gov (United States)

    Paschalidis, Vasileios; Yagi, Kent; Alvarez-Castillo, David; Blaschke, David B.; Sedrakian, Armen

    2018-04-01

    Gravitational wave observations of GW170817 placed bounds on the tidal deformabilities of compact stars, allowing one to probe equations of state for matter at supranuclear densities. Here we design new parametrizations for hybrid hadron-quark equations of state, which give rise to low-mass twin stars, and test them against GW170817. We find that GW170817 is consistent with the coalescence of a binary hybrid star-neutron star. We also test and find that the I-Love-Q relations for hybrid stars in the third family agree with those for purely hadronic and quark stars within ˜3 % for both slowly and rapidly rotating configurations, implying that these relations can be used to perform equation-of-state independent tests of general relativity and to break degeneracies in gravitational waveforms for hybrid stars in the third family as well.

  1. GW150914: First Results from the Search for Binary Black Hole Coalescence with Advanced LIGO

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.

    2016-01-01

    On September 14, 2015, at 09:50:45 UTC the two detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) simultaneously observed the binary black hole merger GW150914. We report the results of a matched-filter search using relativistic models of compact-object binaries that recovered GW150914 as the most significant event during the coincident observations between the two LIGO detectors from September 12 to October 20, 2015. GW150914 was observed with a matched-filter signal-to-noise ratio of 24 and a false alarm rate estimated to be less than 1 event per 203000 years, equivalent to a significance greater than 5.1σ.
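The matched-filter statistic behind such a search can be illustrated with a toy sketch. This is a minimal, hypothetical version (a chirp-like stand-in template, white noise, no template bank, no whitening or time maximization, all of which the real pipeline performs): for unit-variance white noise the signal-to-noise ratio is the correlation of the data with the unit-normalized template.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical chirp-like template, a stand-in for the relativistic
# compact-object binary models used in the search.
n = 4096
t = np.arange(n) / n
template = np.sin(2 * np.pi * (30 * t + 80 * t ** 2)) * np.exp(-4 * (1 - t))

# Simulated data: unit-variance white noise plus a scaled copy of the template.
data = rng.normal(0.0, 1.0, n) + 0.5 * template

# Matched-filter SNR for white noise: correlate the data with the
# unit-normalized template, rho = <d, h> / sqrt(<h, h>).
h_hat = template / np.sqrt(np.dot(template, template))
rho = np.dot(data, h_hat)

# Noise alone would give rho distributed as N(0, 1); the injected
# signal pushes the statistic well above that background.
```

In colored detector noise the inner products are instead weighted by the inverse noise power spectral density, but the statistic has the same structure.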

  2. The use of hand gestures to communicate about nonpresent objects in mind among children with autism spectrum disorder.

    Science.gov (United States)

    So, Wing-Chee; Lui, Ming; Wong, Tze-Kiu; Sit, Long-Tin

    2015-04-01

    The current study examined whether children with autism spectrum disorder (ASD), in comparison with typically developing children, perceive and produce gestures to identify nonpresent objects (i.e., referent-identifying gestures), which is crucial for communicating ideas in a discourse. An experimenter described the uses of daily-life objects to 6- to 12-year-old children both orally and with gestures. The children were then asked to describe how they performed daily activities using those objects. All children gestured. A gesture identified a nonpresent referent if it was produced in the same location that had previously been established by the experimenter. Children with ASD gestured at the specific locations less often than typically developing children. Verbal and spatial memory were positively correlated with the ability to produce referent-identifying gestures for all children. However, the positive correlation between Raven's Children Progressive Matrices score and the production of referent-identifying gestures was found only in children with ASD. Children with ASD might be less able to perceive and produce referent-identifying gestures and may rely more heavily on visual-spatial skills in producing referent-identifying gestures. The results have clinical implications for designing an intervention program to enhance the ability of children with ASD to communicate about nonpresent objects with gestures.

  3. Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity.

    Science.gov (United States)

    Pouw, Wim T J L; Mavilidi, Myrto-Foteini; van Gog, Tamara; Paas, Fred

    2016-08-01

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, fewer eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with a relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing.

  4. Activities of two novel macrolides, GW 773546 and GW 708408, compared with those of telithromycin, erythromycin, azithromycin, and clarithromycin against Haemophilus influenzae.

    Science.gov (United States)

    Kosowska, Klaudia; Credito, Kim; Pankuch, Glenn A; Hoellman, Dianne; Lin, Gengrong; Clark, Catherine; Dewasse, Bonifacio; McGhee, Pamela; Jacobs, Michael R; Appelbaum, Peter C

    2004-11-01

    The MIC at which 50% of strains are inhibited (MIC50) and the MIC90 of GW 773546, a novel macrolide, were 1.0 and 2.0 μg/ml, respectively, for 223 beta-lactamase-positive, beta-lactamase-negative, and beta-lactamase-negative ampicillin-resistant Haemophilus influenzae strains. The MIC50s and MIC90s of GW 708408, a second novel macrolide, and telithromycin, an established ketolide, were 2.0 and 4.0 μg/ml, respectively, while the MIC50 and MIC90 of azithromycin were 1.0 and 2.0 μg/ml, respectively. The MIC50 and MIC90 of erythromycin were 4.0 and 8.0 μg/ml, respectively; and those of clarithromycin were 4.0 and 16.0 μg/ml, respectively. All compounds except telithromycin were bactericidal (99.9% killing) against nine strains at two times the MIC after 24 h. Telithromycin was bactericidal against eight of the nine strains. In addition, both novel macrolides and telithromycin at two times the MIC showed 99% killing of all nine strains after 12 h and 90% killing of all strains after 6 h. After 24 h, all drugs were bactericidal against four to seven strains when they were tested at the MIC. Ten of 11 strains tested by multistep selection analysis yielded resistant clones after 14 to 43 passages with erythromycin. Azithromycin gave resistant clones of all strains after 20 to 50 passages, and clarithromycin gave resistant clones of 9 of 11 strains after 14 to 41 passages. By comparison, GW 708408 gave resistant clones of 9 of 11 strains after 14 to 44 passages, and GW 773546 gave resistant clones of 10 of 11 strains after 14 to 45 passages. Telithromycin gave resistant clones of 7 of 11 strains after 18 to 45 passages. Mutations mostly in the L22 and L4 ribosomal proteins and 23S rRNA were detected in resistant strains selected with all compounds, with alterations in the L22 protein predominating. Single-step resistance selection studies at the MIC yielded spontaneous resistant mutants at frequencies of 1.5 × 10⁻⁹ to 2.2 × 10⁻⁶ with
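The MIC50 and MIC90 reported above are simply the MICs at which 50% and 90% of tested isolates are inhibited. A minimal sketch of how these summary values are read off a set of MICs on the usual doubling-dilution scale (the MIC values below are illustrative, not the study's data):

```python
import math

def mic_percentile(mics, pct):
    """MIC at which `pct` percent of strains are inhibited: the smallest
    tested MIC that covers at least pct% of the isolates."""
    ordered = sorted(mics)
    k = math.ceil(pct / 100.0 * len(ordered))  # isolates that must be covered
    return ordered[k - 1]

# Hypothetical MICs (ug/ml) for 10 strains on a doubling-dilution scale.
mics = [0.5, 0.5, 1.0, 1.0, 1.0, 2.0, 2.0, 2.0, 4.0, 8.0]
mic50 = mic_percentile(mics, 50)  # -> 1.0
mic90 = mic_percentile(mics, 90)  # -> 4.0
```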

  5. Effects of waveform model systematics on the interpretation of GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. 
M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. 
R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. 
J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. 
J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. 
A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. 
J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.

    2017-05-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein’s equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than ~0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.

  6. "Slight" of hand: the processing of visually degraded gestures with speech.

    Science.gov (United States)

    Kelly, Spencer D; Hansen, Bruce C; Clark, David T

    2012-01-01

    Co-speech hand gestures influence language comprehension. The present experiment explored what part of the visual processing system is optimized for processing these gestures. Participants viewed short video clips of speech and gestures (e.g., a person saying "chop" or "twist" while making a chopping gesture) and had to determine whether the two modalities were congruent or incongruent. Gesture videos were designed to stimulate the parvocellular or magnocellular visual pathways by filtering out low or high spatial frequencies (HSF versus LSF) at two levels of degradation severity (moderate and severe). Participants were less accurate and slower at processing gesture and speech at severe versus moderate levels of degradation. In addition, they were slower for LSF versus HSF stimuli, and this difference was most pronounced in the severely degraded condition. However, exploratory item analyses showed that the HSF advantage was modulated by the range of motion and amount of motion energy in each video. The results suggest that hand gestures exploit a wide range of spatial frequencies, and depending on what frequencies carry the most motion energy, parvocellular or magnocellular visual pathways are maximized to quickly and optimally extract meaning.
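The HSF/LSF stimuli described above are produced by spatial-frequency filtering of each video frame. A minimal sketch of such a filter using a 2D FFT with a hard radial cutoff (the cutoff value and the test frame are illustrative; the study's exact filter parameters are not given here):

```python
import numpy as np

def spatial_frequency_filter(frame, cutoff, keep="low"):
    """Remove high or low spatial frequencies from a grayscale frame.
    `cutoff` is a radius in cycles per image; keep="low" keeps frequencies
    at or below it (an LSF stimulus), keep="high" keeps those above (HSF)."""
    f = np.fft.fftshift(np.fft.fft2(frame))
    h, w = frame.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.hypot(yy, xx)
    mask = radius <= cutoff if keep == "low" else radius > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# A hypothetical 64x64 frame: a coarse luminance gradient (low frequency)
# plus fine-grained texture (high frequency).
rng = np.random.default_rng(1)
frame = np.linspace(0, 1, 64)[None, :] * np.ones((64, 1)) \
    + 0.1 * rng.normal(size=(64, 64))
lsf = spatial_frequency_filter(frame, cutoff=8, keep="low")
hsf = spatial_frequency_filter(frame, cutoff=8, keep="high")
```

Because the two masks partition the frequency plane, the LSF and HSF versions sum back to the original frame; graded degradation severity corresponds to moving the cutoff.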

  7. The Different Patterns of Gesture between Genders in Mathematical Problem Solving of Geometry

    Science.gov (United States)

    Harisman, Y.; Noto, M. S.; Bakar, M. T.; Amam, A.

    2017-02-01

    This article discusses differences in students’ gestures between genders when answering geometry problems. Gestures are used to check aspects of students’ understanding that cannot be determined from their written work alone. This qualitative study gave seven questions to two eighth-grade junior high school students of equal ability. Data were collected from a mathematical problem-solving test, video recordings of the students’ presentations, and interviews in which questions were asked to check the students’ understanding of the geometry problems while the researchers observed their gestures. The results revealed patterns of gesture accompanying the students’ conversation and prosodic cues such as tone, intonation, speech rate, and pauses. Female students tended to give indecisive gestures, for instance bowing, hesitating, showing embarrassment, nodding many times when shifting cognitive comprehension, leaning their body forward, and asking the interviewer questions when they encountered tough problems. Male students, by contrast, showed gestures such as playing with their fingers, focusing on the questions, taking longer to answer hard questions, and staying calm when shifting cognitive comprehension. We suggest observing a larger sample and focusing on the consistency of students’ gestures in showing their understanding of the given problems.

  8. Appearance-based human gesture recognition using multimodal features for human computer interaction

    Science.gov (United States)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a crucial role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, most previous work in the field of gesture recognition has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative, and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from the single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition and that decision-level fusion performs better than feature-level fusion.
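The two fusion strategies contrasted in this abstract can be sketched on toy data. This sketch uses hypothetical synthetic "hand" and "face" features and a simple nearest-class-mean rule in place of the paper's LDA projection and condensation-based classifier; only the fusion structure (weighted concatenation before classification vs. weighted combination of per-modality class scores) mirrors the text:

```python
import numpy as np

rng = np.random.default_rng(2)

def class_means(x, y):
    # One mean vector per class, rows ordered by class label.
    return np.stack([x[y == c].mean(axis=0) for c in np.unique(y)])

def nearest_mean_scores(x, means):
    # Negative Euclidean distance to each class mean, used as a class score.
    return -np.linalg.norm(x[:, None, :] - means[None, :, :], axis=2)

# Two hypothetical modalities for a 3-class toy gesture problem.
n_per, classes = 30, 3
y = np.repeat(np.arange(classes), n_per)
hand = rng.normal(size=(classes * n_per, 5)) + y[:, None] * 1.5
face = rng.normal(size=(classes * n_per, 4)) + y[:, None] * 0.8

# Feature-level (early) fusion: weight and concatenate before classifying.
fused = np.hstack([1.0 * hand, 0.5 * face])
pred_feat = nearest_mean_scores(fused, class_means(fused, y)).argmax(axis=1)

# Decision-level (late) fusion: combine per-modality class scores with weights.
s_hand = nearest_mean_scores(hand, class_means(hand, y))
s_face = nearest_mean_scores(face, class_means(face, y))
pred_dec = (0.7 * s_hand + 0.3 * s_face).argmax(axis=1)
```

On real data the modality weights would be tuned on held-out sessions; which strategy wins is an empirical question, as the paper's experiments illustrate.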

  9. Real-Time Multiview Recognition of Human Gestures by Distributed Image Processing

    Directory of Open Access Journals (Sweden)

    Sato Kosuke

    2010-01-01

    Since a gesture involves dynamic and complex motion, multiview observation and recognition are desirable. For a better representation of gestures, one needs to know, in the first place, from which views a gesture should be observed. Furthermore, it becomes increasingly important how the recognition results are integrated when larger numbers of camera views are considered. To investigate these problems, we propose a framework under which multiview recognition is carried out, and an integration scheme by which the recognition results are integrated online and in real time. For performance evaluation, we use the ViHASi (Virtual Human Action Silhouette) public image database as a benchmark and our Japanese sign language (JSL) image database that contains 18 kinds of hand signs. By examining the recognition rates of each gesture for each view, we found gestures that exhibit view dependency and gestures that do not. Also, we found that the view dependency itself could vary depending on the target gesture sets. By integrating the recognition results of different views, our swarm-based integration provides more robust and better recognition performance than individual fixed-view recognition agents.

  10. Simple vertex correction improves GW band energies of bulk and two-dimensional crystals

    DEFF Research Database (Denmark)

    Schmidt, Per Simmendefeldt; Patrick, Christopher E.; Thygesen, Kristian Sommer

    2017-01-01

    The GW self-energy method has long been recognized as the gold standard for quasiparticle (QP) calculations of solids, in spite of the fact that the neglect of vertex corrections and the use of a density-functional theory starting point lack rigorous justification. In this work we remedy this situation…

  11. Characterization of Transient Noise in Advanced LIGO Relevant to Gravitational Wave Signal GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adamo, M.; Adams, C.; Adams, T.; Camp, Jordan B.

    2016-01-01

    On 14 September 2015, a gravitational wave signal from a coalescing black hole binary system was observed by the Advanced LIGO detectors. This paper describes the transient noise backgrounds used to determine the significance of the event (designated GW150914) and presents the results of investigations into potential correlated or uncorrelated sources of transient noise in the detectors around the time of the event. The detectors were operating nominally at the time of GW150914. We have ruled out environmental influences and non-Gaussian instrument noise at either LIGO detector as the cause of the observed gravitational wave signal.

  12. First-principles modeling of localized d states with the GW@LDA+U approach

    Science.gov (United States)

    Jiang, Hong; Gomez-Abal, Ricardo I.; Rinke, Patrick; Scheffler, Matthias

    2010-07-01

    First-principles modeling of systems with localized d states is currently a great challenge in condensed-matter physics. Density-functional theory in the standard local-density approximation (LDA) proves to be problematic. This can be partly overcome by including local Hubbard U corrections (LDA+U) but itinerant states are still treated on the LDA level. Many-body perturbation theory in the GW approach offers both a quasiparticle perspective (appropriate for itinerant states) and an exact treatment of exchange (appropriate for localized states), and is therefore promising for these systems. LDA+U has previously been viewed as an approximate GW scheme. We present here a derivation that is simpler and more general, starting from the static Coulomb-hole and screened exchange approximation to the GW self-energy. Following our previous work for f-electron systems [H. Jiang, R. I. Gomez-Abal, P. Rinke, and M. Scheffler, Phys. Rev. Lett. 102, 126403 (2009); DOI: 10.1103/PhysRevLett.102.126403] we conduct a systematic investigation of the GW method based on LDA+U (GW@LDA+U), as implemented in our recently developed all-electron GW code FHI-gap (Green’s function with augmented plane waves) for a series of prototypical d-electron systems: (1) ScN with empty d states, (2) ZnS with semicore d states, and (3) late transition-metal oxides (MnO, FeO, CoO, and NiO) with partially occupied d states. We show that for ZnS and ScN, the GW band gaps only weakly depend on U but for the other transition-metal oxides the dependence on U is as strong as in LDA+U. These different trends can be understood in terms of changes in the hybridization and screening. Our work demonstrates that GW@LDA+U with “physical” values of U provides a balanced and accurate description of both localized and itinerant states.
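For context, a one-shot GW calculation on a Kohn-Sham starting point (here LDA+U) evaluates the textbook first-order quasiparticle correction; this is the generic form of such schemes, not a formula taken from this particular paper:

```latex
% One-shot (G0W0-type) quasiparticle correction on a Kohn-Sham
% (here LDA+U) starting point: the self-energy replaces the
% exchange-correlation (including Hubbard U) potential perturbatively.
\begin{equation}
  \varepsilon_n^{\mathrm{QP}}
    = \varepsilon_n^{\mathrm{KS}}
    + Z_n\,\langle \psi_n \rvert\,
        \Sigma^{GW}\!\bigl(\varepsilon_n^{\mathrm{KS}}\bigr) - v_{xc}
      \,\lvert \psi_n \rangle,
  \qquad
  Z_n = \left[\, 1
    - \left.\frac{\partial\,\mathrm{Re}\,\Sigma^{GW}(\omega)}{\partial\omega}
      \right|_{\omega=\varepsilon_n^{\mathrm{KS}}} \right]^{-1},
\end{equation}
```

where, for GW@LDA+U, the eigenvalues, orbitals, and the subtracted potential \(v_{xc}\) (including the orbital-dependent +U term) all come from the LDA+U calculation, which is what makes the result depend on the chosen U.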

  13. GW151226: Observation of Gravitational Waves from a 22-Solar-Mass Binary Black Hole Coalescence

    OpenAIRE

    Abbott, B. P.; Abbott, R.; Adhikari, R. X.; Anderson, S. B.; Arai, K.; Araya, M. C.; Barayoga, J. C.; Barish, B. C.; Berger, B. K.; Billingsley, G.; Blackburn, J. K.; Bork, R.; Brooks, A. F.; Brunett, S.; Cahillane, C.

    2016-01-01

    We report the observation of a gravitational-wave signal produced by the coalescence of two stellar-mass black holes. The signal, GW151226, was observed by the twin detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) on December 26, 2015 at 03:38:53 UTC. The signal was initially identified within 70 s by an online matched-filter search targeting binary coalescences. Subsequent off-line analyses recovered GW151226 with a network signal-to-noise ratio of 13 and a signifi...
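
    As a toy illustration of the matched-filter statistic quoted above (this is not LIGO's pipeline; the chirp template, noise model, and injected amplitude are invented for the sketch), a known waveform buried in white noise can be recovered by projecting the data onto the unit-normalized template; for unit-variance white noise that projection is itself the signal-to-noise ratio:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1024
t = np.arange(0.0, 1.0, 1.0 / fs)

# Toy "chirp" template: frequency sweeping upward, loosely like a coalescence
template = np.sin(2 * np.pi * (30.0 * t + 40.0 * t ** 2))
template /= np.linalg.norm(template)        # unit norm

noise = rng.normal(0.0, 1.0, t.size)        # white, unit-variance noise
data = noise + 8.0 * template               # signal injected with amplitude 8

# Matched-filter output at zero lag: projection of the data onto the template.
# Expected value is the injected amplitude; the noise contributes N(0, 1).
snr = float(template @ data)
print(round(snr, 1))
```

    Real searches additionally maximize this statistic over arrival time (via FFT correlation) and over a bank of templates, and whiten the data against the detector's colored noise spectrum first.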

  14. Good and bad in the hands of politicians: spontaneous gestures during positive and negative speech.

    Directory of Open Access Journals (Sweden)

    Daniel Casasanto

    2010-07-01

    Full Text Available According to the body-specificity hypothesis, people with different bodily characteristics should form correspondingly different mental representations, even in highly abstract conceptual domains. In a previous test of this proposal, right- and left-handers were found to associate positive ideas like intelligence, attractiveness, and honesty with their dominant side and negative ideas with their non-dominant side. The goal of the present study was to determine whether 'body-specific' associations of space and valence can be observed beyond the laboratory in spontaneous behavior, and whether these implicit associations have visible consequences. We analyzed speech and gesture (3012 spoken clauses, 1747 gestures) from the final debates of the 2004 and 2008 US presidential elections, which involved two right-handers (Kerry, Bush) and two left-handers (Obama, McCain). Blind, independent coding of speech and gesture allowed objective hypothesis testing. Right- and left-handed candidates showed contrasting associations between gesture and speech. In both of the left-handed candidates, left-hand gestures were associated more strongly with positive-valence clauses and right-hand gestures with negative-valence clauses; the opposite pattern was found in both right-handed candidates. Speakers associate positive messages more strongly with dominant-hand gestures and negative messages with non-dominant-hand gestures, revealing a hidden link between action and emotion. This pattern cannot be explained by conventions in language or culture, which associate 'good' with 'right' but not with 'left'; rather, results support and extend the body-specificity hypothesis. Furthermore, results suggest that the hand speakers use to gesture may have unexpected (and probably unintended) communicative value, providing the listener with a subtle index of how the speaker feels about the content of the co-occurring speech.
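
    The kind of association test behind such an analysis can be sketched with a chi-squared statistic and a permutation null (the counts below are invented for illustration, not the paper's data): do positive-valence clauses co-occur with dominant-hand gestures more often than shuffled labels would predict?

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented per-gesture labels: 0=positive / 1=negative clause,
# 0=dominant / 1=non-dominant gesturing hand
valence = np.repeat([0, 1], 100)
hand = np.concatenate([np.repeat([0, 1], [60, 40]),   # gestures in positive clauses
                       np.repeat([0, 1], [35, 65])])  # gestures in negative clauses

def contingency(v, h):
    # 2x2 table of (valence, hand) counts
    return np.bincount(2 * v + h, minlength=4).reshape(2, 2)

def chi2_stat(table):
    # Pearson chi-squared: sum of (observed - expected)^2 / expected
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

observed = chi2_stat(contingency(valence, hand))

# Permutation null: shuffling hand labels destroys any valence-hand association
null = np.array([chi2_stat(contingency(valence, rng.permutation(hand)))
                 for _ in range(2000)])
p_value = (np.sum(null >= observed) + 1) / (null.size + 1)
print(round(observed, 2), p_value < 0.05)
```

    The permutation approach avoids distributional assumptions, which matters when gesture tokens within a debate are not independent draws.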

  15. A comparison of sung and spoken phonation onset gestures using high-speed digital imaging.

    Science.gov (United States)

    Freeman, Ena; Woo, Peak; Saxman, John H; Murry, Thomas

    2012-03-01

    Phonation onset is important in the maintenance of healthy vocal production for speech and singing. The purpose of this preliminary study was to examine differences in vocal fold vibratory behavior between sung and spoken phonation onset gestures. Given the greater degree of precision required for the abrupt onset sung gestures, we hypothesize that differences exist in the timing and coordination of the vocal fold adductory gesture with the onset of vocal fold vibration. Staccato and German (a modified glottal plosive, so named for its occurrence in German classical singing) onset gestures were compared with breathy, normal, and hard onset gestures, using high-speed digital imaging. Samples were obtained from two subjects with no history of voice disorders (a female trained singer and a male nonsinger). Simultaneous capture of acoustical data confirmed the distinction among gestures. Image data were compared for glottal area configurations, degree of adductory positioning, number of small-amplitude prephonatory oscillations (PPOs), and timing of onset gesture events, the latter marked by maximum vocal fold abduction, maximum adduction, beginning of PPOs, and beginning of steady-state oscillation. Results reveal closer adductory positioning of the vocal folds for the staccato and German gestures. The data also suggest a direct relationship between the degree of adductory positioning and the number of PPOs. Results for the timing of onset gesture events suggest a relationship between discrete adductory positioning and more evenly spaced PPOs. By contrast, the overlapping of prephonatory adductory positioning with vibration onset revealed more unevenly spaced PPOs. This may support an existing hypothesis that less well-defined boundaries interfere with normal modes of vibration of the vocal fold tissue. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  16. Evaluation of the safety and usability of touch gestures in operating in-vehicle information systems with visual occlusion.

    Science.gov (United States)

    Kim, Huhn; Song, Haewon

    2014-05-01

    Nowadays, many automobile manufacturers are interested in applying the touch gestures used in smartphones to the operation of their in-vehicle information systems (IVISs). In this study, an experiment was performed to verify the applicability of touch gestures to the operation of IVISs from the viewpoints of both driving safety and usability. Two devices were used: one was the Apple iPad, on which various touch gestures such as flicking, panning, and pinching were enabled; the other was the SK EnNavi, which only allowed tapping gestures. The participants performed the touch operations using the two devices under visually occluded conditions, a well-known technique for estimating the load on visual attention while driving. In scrolling through a list, flicking gestures required more time than tapping gestures; interestingly, both flicking and simple tapping gestures required slightly higher visual attention. In moving a map, the average time per operation and the visual attention load required for panning gestures did not differ from those of the simple tapping gestures used in existing car navigation systems. In zooming in/out of a map, the average time per pinching gesture was similar to that of the tapping gesture, but pinching required higher visual attention. Moreover, pinching gestures at a display angle of 75° required participants to bend their wrists severely. Because the display angles of many car navigation systems tend to be more than 75°, pinching gestures can cause severe fatigue in users' wrists. Furthermore, contrary to their evaluation of the other gestures, several participants answered that the pinching gesture was not necessary when operating IVISs. It was found that the panning gesture is the only touch gesture that can be used without negative consequences when operating IVISs while driving. The flicking gesture is likely to be used if the screen moving speed is slower or ...

  17. Data Fusion Research of Triaxial Human Body Motion Gesture based on Decision Tree

    Directory of Open Access Journals (Sweden)

    Feihong Zhou

    2014-05-01

    Full Text Available The state of research on human body motion gesture data fusion, both domestic and international, is first reviewed. A triaxial accelerometer is then adopted to develop a wearable human body motion gesture monitoring system aimed at healthcare for the elderly. After a brief introduction to the decision tree algorithm, the WEKA workbench is used to generate a human body motion gesture decision tree. Finally, the classification quality of the decision tree is validated through experiments. The experimental results show that the decision tree algorithm reaches an average prediction accuracy of 97.5% at a low time cost.
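
    The core step of such a decision-tree classifier can be sketched in a few lines (a minimal single-split "stump" on synthetic triaxial windows; the feature set, gesture classes, and data are invented, and a full learner such as WEKA's J48 applies this split search recursively):

```python
import numpy as np

rng = np.random.default_rng(0)

def window_features(acc):
    # acc: (3, n) triaxial accelerometer window -> per-axis mean and std
    return np.concatenate([acc.mean(axis=1), acc.std(axis=1)])

# Synthetic windows: "still" posture (low variance) vs "walking" (high variance)
X = np.array([window_features(rng.normal(0.0, s, (3, 50)))
              for s in [0.05] * 100 + [1.0] * 100])
y = np.array([0] * 100 + [1] * 100)   # 0=still, 1=walking

def gini(labels):
    # Gini impurity of a label vector
    if labels.size == 0:
        return 0.0
    p = np.bincount(labels, minlength=2) / labels.size
    return 1.0 - float((p ** 2).sum())

def fit_stump(X, y):
    # Exhaustive search for the (feature, threshold) split minimizing the
    # weighted Gini impurity -- the step a tree learner repeats at every node
    best_j, best_t, best_score = 0, 0.0, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            score = (left.size * gini(left) + right.size * gini(right)) / y.size
            if score < best_score:
                best_j, best_t, best_score = j, t, score
    return best_j, best_t

j, t = fit_stump(X, y)
accuracy = float(((X[:, j] > t).astype(int) == y).mean())
print(j, round(accuracy, 3))
```

    On this separable synthetic data the stump picks a standard-deviation feature (a natural proxy for motion intensity) and classifies the training windows essentially perfectly; real gesture data needs deeper trees and held-out evaluation.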

  18. Common neural substrates support speech and non-speech vocal tract gestures

    OpenAIRE

    Chang, Soo-Eun; Kenney, Mary Kay; Loucks, Torrey M.J.; Poletto, Christopher J.; Ludlow, Christy L.

    2009-01-01

    The issue of whether speech is supported by the same neural substrates as non-speech vocal-tract gestures has been contentious. In this fMRI study we tested whether producing non-speech vocal tract gestures in humans shares the same functional neuroanatomy as nonsense speech syllables. Production of non-speech vocal tract gestures, devoid of phonological content but similar to speech in that they had familiar acoustic and somatosensory targets, was compared to the production of speech sylla...

  19. PV radiometrics workshop proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Myers, D.R.

    1995-09-01

    This report documents presentations and discussions held at the Photovoltaics Radiometric Measurements Workshop conducted at Vail, Colorado, on July 24 and 25, 1995. The workshop was sponsored and financed by the Photovoltaic Module and Systems Performance and Engineering Project managed by Richard DeBlasio, Principal Investigator. That project is a component of the National Renewable Energy Laboratory (NREL) Photovoltaic Research and Development Program, conducted by NREL for the US Department of Energy through the NREL Photovoltaic Engineering and Applications Branch, managed by Roland Hulstrom. Separate abstracts have been prepared for articles from this workshop.

  20. Nuclear Innovation Workshops Report

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, John Howard [Idaho National Lab. (INL), Idaho Falls, ID (United States); Allen, Todd Randall [Idaho National Lab. (INL), Idaho Falls, ID (United States); Hildebrandt, Philip Clay [Idaho National Lab. (INL), Idaho Falls, ID (United States); Baker, Suzanne Hobbs [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The Nuclear Innovation Workshops were held at six locations across the United States on March 3-5, 2015. The data collected during these workshops has been analyzed and sorted to bring out consistent themes toward enhancing innovation in nuclear energy. These themes include development of a test bed and demonstration platform, improved regulatory processes, improved communications, and increased public-private partnerships. This report contains a discussion of the workshops and resulting themes. Actionable steps are suggested at the end of the report. This revision has a small amount of the data in Appendix C removed in order to avoid potential confusion.

  1. The OsSPL16-GW7 regulatory module determines grain shape and simultaneously improves rice yield and grain quality.

    Science.gov (United States)

    Wang, Shaokui; Li, Shan; Liu, Qian; Wu, Kun; Zhang, Jianqing; Wang, Shuansuo; Wang, Yi; Chen, Xiangbin; Zhang, Yi; Gao, Caixia; Wang, Feng; Huang, Haixiang; Fu, Xiangdong

    2015-08-01

    The deployment of heterosis in the form of hybrid rice varieties has boosted grain yield, but grain quality improvement still remains a challenge. Here we show that a quantitative trait locus for rice grain quality, qGW7, reflects allelic variation of GW7, a gene encoding a TONNEAU1-recruiting motif protein with similarity to C-terminal motifs of the human centrosomal protein CAP350. Upregulation of GW7 expression was correlated with the production of more slender grains, as a result of increased cell division in the longitudinal direction and decreased cell division in the transverse direction. OsSPL16 (GW8), an SBP-domain transcription factor that regulates grain width, bound directly to the GW7 promoter and repressed its expression. The presence of a semidominant GW7(TFA) allele from tropical japonica rice was associated with higher grain quality without the yield penalty imposed by the Basmati gw8 allele. Manipulation of the OsSPL16-GW7 module thus represents a new strategy to simultaneously improve rice yield and grain quality.

  2. Gesturing Entangled Journeys - Mobilities Design in Aalborg East, Denmark

    DEFF Research Database (Denmark)

    Lanng, Ditte Bendix

    2015-01-01

    situations are used in re-design experiments, which explore potential architectures of the suburban functionalist transit spaces to not only invite safe and effective transport but also gesture towards a richness of wayfaring ways of life. The empirical and experimental work was developed in a reciprocal...... and for being desensitised, placeless environments with little cultural and social value. This critique is acknowledged in the thesis, but it is also challenged by asking whether these transit Spaces could, in fact, be some of our cherished public spaces. The thesis explores how social and sensorial mobile...... on-the-move in transit spaces, the thesis explores what happens between point A and point B during daily life journeys to and from school, work, shopping, and other destinations. Along these journeys life is lived in transit spaces. While wayfarers are on the move they are also dwelling...

  3. Localization and Broadband Follow-up of the Gravitational-wave Transient GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Barthelmy, S.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Behnke, B.; Bejger, M.; Bell, A. S.; Bell, C. J.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Bustillo, J. C.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Diaz, J. C.; Casentini, C.; Caudill, S.; Cavagliá, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. 
B.; Baiardi, L. C.; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; DeRosa, R. T.; De Rosa, R.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. 
P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gaur, G.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, A.; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Castro, J. M. G.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Haris, K.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hofman, D.; Hollitt, S. E.; Holt, K.; Holz, D. E.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Islas, G.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. 
S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalaidovski, A.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, J.; Kim, K.; Kim, N.; Kim, N.; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Logue, J.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. 
E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Muir, A. W.; Mukherjee, A.; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, R. J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palliyaguru, N.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. 
H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Setyawati, Y.; Sevigny, A.; Shaddock, D. A.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. 
A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Wright, J. L.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, H.; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, F.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Allison, J.; Bannister, K.; Bell, M. E.; Chatterjee, S.; Chippendale, A. P.; Edwards, P. G.; Harvey-Smith, L.; Heywood, Ian; Hotan, A.; Indermuehle, B.; Marvil, J.; McConnell, D.; Murphy, T.; Popping, A.; Reynolds, J.; Sault, R. J.; Voronkov, M. A.; Whiting, M. T.; Australian Square Kilometer Array Pathfinder (ASKAP Collaboration); Castro-Tirado, A. J.; Cunniffe, R.; Jelínek, M.; Tello, J. C.; Oates, S. R.; Hu, Y.-D.; Kubánek, P.; Guziy, S.; Castellón, A.; García-Cerezo, A.; Muñoz, V. F.; Pérez del Pulgar, C.; Castillo-Carrión, S.; Castro Cerón, J. M.; Hudec, R.; Caballero-García, M. D.; Páta, P.; Vitek, S.; Adame, J. A.; Konig, S.; Rendón, F.; Mateo Sanguino, T. 
de J.; Fernández-Muñoz, R.; Yock, P. C.; Rattenbury, N.; Allen, W. H.; Querel, R.; Jeong, S.; Park, I. H.; Bai, J.; Cui, Ch.; Fan, Y.; Wang, Ch.; Hiriart, D.; Lee, W. H.; Claret, A.; Sánchez-Ramírez, R.; Pandey, S. B.; Mediavilla, T.; Sabau-Graziati, L.;