WorldWideScience

Sample records for multimodal human-computer interaction

  1. Modeling multimodal human-computer interaction

    NARCIS (Netherlands)

    Obrenovic, Z.; Starcevic, D.

    2004-01-01

    Incorporating the well-known Unified Modeling Language into a generic modeling framework makes research on multimodal human-computer interaction accessible to a wide range off software engineers. Multimodal interaction is part of everyday human discourse: We speak, move, gesture, and shift our gaze

  2. Multimodal Information Presentation for High-Load Human Computer Interaction

    NARCIS (Netherlands)

    Cao, Y.

    2011-01-01

    This dissertation addresses multimodal information presentation in human computer interaction. Information presentation refers to the manner in which computer systems/interfaces present information to human users. More specifically, the focus of our work is not on which information to present, but

  3. A Software Framework for Multimodal Human-Computer Interaction Systems

    NARCIS (Netherlands)

    Shen, Jie; Pantic, Maja

    2009-01-01

    This paper describes a software framework we designed and implemented for the development and research in the area of multimodal human-computer interface. The proposed framework is based on publish / subscribe architecture, which allows developers and researchers to conveniently configure, test and

  4. Measuring Multimodal Synchrony for Human-Computer Interaction

    NARCIS (Netherlands)

    Reidsma, Dennis; Nijholt, Antinus; Tschacher, Wolfgang; Ramseyer, Fabian; Sourin, A.

    2010-01-01

    Nonverbal synchrony is an important and natural element in human-human interaction. It can also play various roles in human-computer interaction. In particular this is the case in the interaction between humans and the virtual humans that inhabit our cyberworlds. Virtual humans need to adapt their

  5. HCI^2 Workbench: A Development Tool for Multimodal Human-Computer Interaction Systems

    NARCIS (Netherlands)

    Shen, Jie; Wenzhe, Shi; Pantic, Maja

    In this paper, we present a novel software tool designed and implemented to simplify the development process of Multimodal Human-Computer Interaction (MHCI) systems. This tool, which is called the HCI^2 Workbench, exploits a Publish / Subscribe (P/S) architecture [13] [14] to facilitate efficient

  6. Human-computer interaction for alert warning and attention allocation systems of the multimodal watchstation

    Science.gov (United States)

    Obermayer, Richard W.; Nugent, William A.

    2000-11-01

    The SPAWAR Systems Center San Diego is currently developing an advanced Multi-Modal Watchstation (MMWS); design concepts and software from this effort are intended for transition to future United States Navy surface combatants. The MMWS features multiple flat panel displays and several modes of user interaction, including voice input and output, natural language recognition, 3D audio, stylus and gestural inputs. In 1999, an extensive literature review was conducted on basic and applied research concerned with alerting and warning systems. After summarizing that literature, a human computer interaction (HCI) designer's guide was prepared to support the design of an attention allocation subsystem (AAS) for the MMWS. The resultant HCI guidelines are being applied in the design of a fully interactive AAS prototype. An overview of key findings from the literature review, a proposed design methodology with illustrative examples, and an assessment of progress made in implementing the HCI designers guide are presented.

  7. HCI^2 Framework: A software framework for multimodal human-computer interaction systems

    NARCIS (Netherlands)

    Shen, Jie; Pantic, Maja

    2013-01-01

    This paper presents a novel software framework for the development and research in the area of multimodal human-computer interface (MHCI) systems. The proposed software framework, which is called the HCI∧2 Framework, is built upon publish/subscribe (P/S) architecture. It implements a

  8. Integrated multimodal human-computer interface and augmented reality for interactive display applications

    Science.gov (United States)

    Vassiliou, Marius S.; Sundareswaran, Venkataraman; Chen, S.; Behringer, Reinhold; Tam, Clement K.; Chan, M.; Bangayan, Phil T.; McGee, Joshua H.

    2000-08-01

    We describe new systems for improved integrated multimodal human-computer interaction and augmented reality for a diverse array of applications, including future advanced cockpits, tactical operations centers, and others. We have developed an integrated display system featuring: speech recognition of multiple concurrent users equipped with both standard air- coupled microphones and novel throat-coupled sensors (developed at Army Research Labs for increased noise immunity); lip reading for improving speech recognition accuracy in noisy environments, three-dimensional spatialized audio for improved display of warnings, alerts, and other information; wireless, coordinated handheld-PC control of a large display; real-time display of data and inferences from wireless integrated networked sensors with on-board signal processing and discrimination; gesture control with disambiguated point-and-speak capability; head- and eye- tracking coupled with speech recognition for 'look-and-speak' interaction; and integrated tetherless augmented reality on a wearable computer. The various interaction modalities (speech recognition, 3D audio, eyetracking, etc.) are implemented a 'modality servers' in an Internet-based client-server architecture. Each modality server encapsulates and exposes commercial and research software packages, presenting a socket network interface that is abstracted to a high-level interface, minimizing both vendor dependencies and required changes on the client side as the server's technology improves.

  9. Appearance-based human gesture recognition using multimodal features for human computer interaction

    Science.gov (United States)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays an utmost important role for achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions such as motion of hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous works have focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines the different groups of features such as facial expression features and hand motion features which are extracted from image frames captured by a single web camera. We refer 12 classes of human gestures with facial expression including neutral, negative and positive meanings from American Sign Languages (ASL). We combine the features in two levels by employing two fusion strategies. At the feature level, an early feature combination can be performed by concatenating and weighting different feature groups, and LDA is used to choose the most discriminative elements by projecting the feature on a discriminative expression space. The second strategy is applied on decision level. Weighted decisions from single modalities are fused in a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improve hand gesture recognition, decision level fusion performs better than feature level fusion.

  10. Minimal mobile human computer interaction

    NARCIS (Netherlands)

    el Ali, A.

    2013-01-01

    In the last 20 years, the widespread adoption of personal, mobile computing devices in everyday life, has allowed entry into a new technological era in Human Computer Interaction (HCI). The constant change of the physical and social context in a user's situation made possible by the portability of

  11. Occupational stress in human computer interaction.

    Science.gov (United States)

    Smith, M J; Conway, F T; Karsh, B T

    1999-04-01

    There have been a variety of research approaches that have examined the stress issues related to human computer interaction including laboratory studies, cross-sectional surveys, longitudinal case studies and intervention studies. A critical review of these studies indicates that there are important physiological, biochemical, somatic and psychological indicators of stress that are related to work activities where human computer interaction occurs. Many of the stressors of human computer interaction at work are similar to those stressors that have historically been observed in other automated jobs. These include high workload, high work pressure, diminished job control, inadequate employee training to use new technology, monotonous tasks, por supervisory relations, and fear for job security. New stressors have emerged that can be tied primarily to human computer interaction. These include technology breakdowns, technology slowdowns, and electronic performance monitoring. The effects of the stress of human computer interaction in the workplace are increased physiological arousal; somatic complaints, especially of the musculoskeletal system; mood disturbances, particularly anxiety, fear and anger; and diminished quality of working life, such as reduced job satisfaction. Interventions to reduce the stress of computer technology have included improved technology implementation approaches and increased employee participation in implementation. Recommendations for ways to reduce the stress of human computer interaction at work are presented. These include proper ergonomic conditions, increased organizational support, improved job content, proper workload to decrease work pressure, and enhanced opportunities for social support. A model approach to the design of human computer interaction at work that focuses on the system "balance" is proposed.

  12. HCIDL: Human-computer interface description language for multi-target, multimodal, plastic user interfaces

    Directory of Open Access Journals (Sweden)

    Lamia Gaouar

    2018-06-01

    Full Text Available From the human-computer interface perspectives, the challenges to be faced are related to the consideration of new, multiple interactions, and the diversity of devices. The large panel of interactions (touching, shaking, voice dictation, positioning … and the diversification of interaction devices can be seen as a factor of flexibility albeit introducing incidental complexity. Our work is part of the field of user interface description languages. After an analysis of the scientific context of our work, this paper introduces HCIDL, a modelling language staged in a model-driven engineering approach. Among the properties related to human-computer interface, our proposition is intended for modelling multi-target, multimodal, plastic interaction interfaces using user interface description languages. By combining plasticity and multimodality, HCIDL improves usability of user interfaces through adaptive behaviour by providing end-users with an interaction-set adapted to input/output of terminals and, an optimum layout. Keywords: Model driven engineering, Human-computer interface, User interface description languages, Multimodal applications, Plastic user interfaces

  13. Language evolution and human-computer interaction

    Science.gov (United States)

    Grudin, Jonathan; Norman, Donald A.

    1991-01-01

    Many of the issues that confront designers of interactive computer systems also appear in natural language evolution. Natural languages and human-computer interfaces share as their primary mission the support of extended 'dialogues' between responsive entities. Because in each case one participant is a human being, some of the pressures operating on natural languages, causing them to evolve in order to better support such dialogue, also operate on human-computer 'languages' or interfaces. This does not necessarily push interfaces in the direction of natural language - since one entity in this dialogue is not a human, this is not to be expected. Nonetheless, by discerning where the pressures that guide natural language evolution also appear in human-computer interaction, we can contribute to the design of computer systems and obtain a new perspective on natural languages.

  14. Proxemics in Human-Computer Interaction

    OpenAIRE

    Greenberg, Saul; Honbaek, Kasper; Quigley, Aaron; Reiterer, Harald; Rädle, Roman

    2014-01-01

    In 1966, anthropologist Edward Hall coined the term "proxemics." Proxemics is an area of study that identifies the culturally dependent ways in which people use interpersonal distance to understand and mediate their interactions with others. Recent research has demonstrated the use of proxemics in human-computer interaction (HCI) for supporting users' explicit and implicit interactions in a range of uses, including remote office collaboration, home entertainment, and games. One promise of pro...

  15. Human-Computer Interaction in Smart Environments

    Science.gov (United States)

    Paravati, Gianluca; Gatteschi, Valentina

    2015-01-01

    Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues.

  16. Quantifying Quality Aspects of Multimodal Interactive Systems

    CERN Document Server

    Kühnel, Christine

    2012-01-01

    This book systematically addresses the quantification of quality aspects of multimodal interactive systems. The conceptual structure is based on a schematic view on human-computer interaction where the user interacts with the system and perceives it via input and output interfaces. Thus, aspects of multimodal interaction are analyzed first, followed by a discussion of the evaluation of output and input and concluding with a view on the evaluation of a complete system.

  17. Human-Computer Interaction in Smart Environments

    Directory of Open Access Journals (Sweden)

    Gianluca Paravati

    2015-08-01

    Full Text Available Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues.

  18. Fundamentals of human-computer interaction

    CERN Document Server

    Monk, Andrew F

    1985-01-01

    Fundamentals of Human-Computer Interaction aims to sensitize the systems designer to the problems faced by the user of an interactive system. The book grew out of a course entitled """"The User Interface: Human Factors for Computer-based Systems"""" which has been run annually at the University of York since 1981. This course has been attended primarily by systems managers from the computer industry. The book is organized into three parts. Part One focuses on the user as processor of information with studies on visual perception; extracting information from printed and electronically presented

  19. Introduction to human-computer interaction

    CERN Document Server

    Booth, Paul

    2014-01-01

    Originally published in 1989 this title provided a comprehensive and authoritative introduction to the burgeoning discipline of human-computer interaction for students, academics, and those from industry who wished to know more about the subject. Assuming very little knowledge, the book provides an overview of the diverse research areas that were at the time only gradually building into a coherent and well-structured field. It aims to explain the underlying causes of the cognitive, social and organizational problems typically encountered when computer systems are introduced. It is clear and co

  20. Human computer interaction using hand gestures

    CERN Document Server

    Premaratne, Prashan

    2014-01-01

    Human computer interaction (HCI) plays a vital role in bridging the 'Digital Divide', bringing people closer to consumer electronics control in the 'lounge'. Keyboards and mouse or remotes do alienate old and new generations alike from control interfaces. Hand Gesture Recognition systems bring hope of connecting people with machines in a natural way. This will lead to consumers being able to use their hands naturally to communicate with any electronic equipment in their 'lounge.' This monograph will include the state of the art hand gesture recognition approaches and how they evolved from their inception. The author would also detail his research in this area for the past 8 years and how the future might turn out to be using HCI. This monograph will serve as a valuable guide for researchers (who would endeavour into) in the world of HCI.

  1. Human-Computer Interaction The Agency Perspective

    CERN Document Server

    Oliveira, José

    2012-01-01

    Agent-centric theories, approaches and technologies are contributing to enrich interactions between users and computers. This book aims at highlighting the influence of the agency perspective in Human-Computer Interaction through a careful selection of research contributions. Split into five sections; Users as Agents, Agents and Accessibility, Agents and Interactions, Agent-centric Paradigms and Approaches, and Collective Agents, the book covers a wealth of novel, original and fully updated material, offering:   ü  To provide a coherent, in depth, and timely material on the agency perspective in HCI ü  To offer an authoritative treatment of the subject matter presented by carefully selected authors ü  To offer a balanced and broad coverage of the subject area, including, human, organizational, social, as well as technological concerns. ü  To offer a hands-on-experience by covering representative case studies and offering essential design guidelines   The book will appeal to a broad audience of resea...

  2. Human-computer interaction : Guidelines for web animation

    OpenAIRE

    Galyani Moghaddam, Golnessa; Moballeghi, Mostafa

    2006-01-01

    Human-computer interaction in the large is an interdisciplinary area which attracts researchers, educators, and practioners from many differenf fields. Human-computer interaction studies a human and a machine in communication, it draws from supporting knowledge on both the machine and the human side. This paper is related to the human side of human-computer interaction and focuses on animations. The growing use of animation in Web pages testifies to the increasing ease with which such multim...

  3. Mobile human-computer interaction perspective on mobile learning

    CSIR Research Space (South Africa)

    Botha, Adèle

    2010-10-01

    Full Text Available Applying a Mobile Human Computer Interaction (MHCI) view to the domain of education using Mobile Learning (Mlearning), the research outlines its understanding of the influences and effects of different interactions on the use of mobile technology...

  4. Human-Computer Interaction and Information Management Research Needs

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — In a visionary future, Human-Computer Interaction HCI and Information Management IM have the potential to enable humans to better manage their lives through the use...

  5. Benefits of Subliminal Feedback Loops in Human-Computer Interaction

    OpenAIRE

    Walter Ritter

    2011-01-01

    A lot of efforts have been directed to enriching human-computer interaction to make the user experience more pleasing or efficient. In this paper, we briefly present work in the fields of subliminal perception and affective computing, before we outline a new approach to add analog communication channels to the human-computer interaction experience. In this approach, in addition to symbolic predefined mappings of input to output, a subliminal feedback loop is used that provides feedback in evo...

  6. Humor in Human-Computer Interaction : A Short Survey

    NARCIS (Netherlands)

    Nijholt, Anton; Niculescu, Andreea; Valitutti, Alessandro; Banchs, Rafael E.; Joshi, Anirudha; Balkrishan, Devanuj K.; Dalvi, Girish; Winckler, Marco

    2017-01-01

    This paper is a short survey on humor in human-computer interaction. It describes how humor is designed and interacted with in social media, virtual agents, social robots and smart environments. Benefits and future use of humor in interactions with artificial entities are discussed based on

  7. Audio Technology and Mobile Human Computer Interaction

    DEFF Research Database (Denmark)

    Chamberlain, Alan; Bødker, Mads; Hazzard, Adrian

    2017-01-01

    Audio-based mobile technology is opening up a range of new interactive possibilities. This paper brings some of those possibilities to light by offering a range of perspectives based in this area. It is not only the technical systems that are developing, but novel approaches to the design...... and understanding of audio-based mobile systems are evolving to offer new perspectives on interaction and design and support such systems to be applied in areas, such as the humanities....

  8. Human-computer interaction and management information systems

    CERN Document Server

    Galletta, Dennis F

    2014-01-01

    ""Human-Computer Interaction and Management Information Systems: Applications"" offers state-of-the-art research by a distinguished set of authors who span the MIS and HCI fields. The original chapters provide authoritative commentaries and in-depth descriptions of research programs that will guide 21st century scholars, graduate students, and industry professionals. Human-Computer Interaction (or Human Factors) in MIS is concerned with the ways humans interact with information, technologies, and tasks, especially in business, managerial, organizational, and cultural contexts. It is distinctiv

  9. Humans, computers and wizards human (simulated) computer interaction

    CERN Document Server

    Fraser, Norman; McGlashan, Scott; Wooffitt, Robin

    2013-01-01

    Using data taken from a major European Union funded project on speech understanding, the SunDial project, this book considers current perspectives on human computer interaction and argues for the value of an approach taken from sociology which is based on conversation analysis.

  10. The epistemology and ontology of human-computer interaction

    NARCIS (Netherlands)

    Brey, Philip A.E.

    2005-01-01

    This paper analyzes epistemological and ontological dimensions of Human-Computer Interaction (HCI) through an analysis of the functions of computer systems in relation to their users. It is argued that the primary relation between humans and computer systems has historically been epistemic:

  11. Choice of Human-Computer Interaction Mode in Stroke Rehabilitation.

    Science.gov (United States)

    Mousavi Hondori, Hossein; Khademi, Maryam; Dodakian, Lucy; McKenzie, Alison; Lopes, Cristina V; Cramer, Steven C

    2016-03-01

    Advances in technology are providing new forms of human-computer interaction. The current study examined one form of human-computer interaction, augmented reality (AR), whereby subjects train in the real-world workspace with virtual objects projected by the computer. Motor performances were compared with those obtained while subjects used a traditional human-computer interaction, that is, a personal computer (PC) with a mouse. Patients used goal-directed arm movements to play AR and PC versions of the Fruit Ninja video game. The 2 versions required the same arm movements to control the game but had different cognitive demands. With AR, the game was projected onto the desktop, where subjects viewed the game plus their arm movements simultaneously, in the same visual coordinate space. In the PC version, subjects used the same arm movements but viewed the game by looking up at a computer monitor. Among 18 patients with chronic hemiparesis after stroke, the AR game was associated with 21% higher game scores (P = .0001), 19% faster reaching times (P = .0001), and 15% less movement variability (P = .0068), as compared to the PC game. Correlations between game score and arm motor status were stronger with the AR version. Motor performances during the AR game were superior to those during the PC game. This result is due in part to the greater cognitive demands imposed by the PC game, a feature problematic for some patients but clinically useful for others. Mode of human-computer interface influences rehabilitation therapy demands and can be individualized for patients. © The Author(s) 2015.

  12. Multimodal Desktop Interaction: The Face –Object-Gesture–Voice Example

    DEFF Research Database (Denmark)

    Vidakis, Nikolas; Vlasopoulos, Anastasios; Kounalakis, Tsampikos

    2013-01-01

    This paper presents a natural user interface system based on multimodal human computer interaction, which operates as an intermediate module between the user and the operating system. The aim of this work is to demonstrate a multimodal system which gives users the ability to interact with desktop...

  13. Human-Computer Interaction, Tourism and Cultural Heritage

    Science.gov (United States)

    Cipolla Ficarra, Francisco V.

    We present a state of the art of the human-computer interaction aimed at tourism and cultural heritage in some cities of the European Mediterranean. In the work an analysis is made of the main problems deriving from training understood as business and which can derail the continuous growth of the HCI, the new technologies and tourism industry. Through a semiotic and epistemological study the current mistakes in the context of the interrelations of the formal and factual sciences will be detected and also the human factors that have an influence on the professionals devoted to the development of interactive systems in order to safeguard and boost cultural heritage.

  14. Human-computer systems interaction backgrounds and applications 3

    CERN Document Server

    Kulikowski, Juliusz; Mroczek, Teresa; Wtorek, Jerzy

    2014-01-01

    This book contains an interesting and state-of the art collection of papers on the recent progress in Human-Computer System Interaction (H-CSI). It contributes the profound description of the actual status of the H-CSI field and also provides a solid base for further development and research in the discussed area. The contents of the book are divided into the following parts: I. General human-system interaction problems; II. Health monitoring and disabled people helping systems; and III. Various information processing systems. This book is intended for a wide audience of readers who are not necessarily experts in computer science, machine learning or knowledge engineering, but are interested in Human-Computer Systems Interaction. The level of particular papers and specific spreading-out into particular parts is a reason why this volume makes fascinating reading. This gives the reader a much deeper insight than he/she might glean from research papers or talks at conferences. It touches on all deep issues that ...

  15. From Human-Computer Interaction to Human-Robot Social Interaction

    OpenAIRE

    Toumi, Tarek; Zidani, Abdelmadjid

    2014-01-01

    Human-Robot Social Interaction became one of active research fields in which researchers from different areas propose solutions and directives leading robots to improve their interactions with humans. In this paper we propose to introduce works in both human robot interaction and human computer interaction and to make a bridge between them, i.e. to integrate emotions and capabilities concepts of the robot in human computer model to become adequate for human robot interaction and discuss chall...

  16. The Past, Present and Future of Human Computer Interaction

    KAUST Repository

    Churchill, Elizabeth

    2018-01-16

    Human Computer Interaction (HCI) focuses on how people interact with, and are transformed by computation. Our current technology landscape is changing rapidly. Interactive applications, devices and services are increasingly becoming embedded into our environments. From our homes to the urban and rural spaces, we traverse everyday. We are increasingly able toヨoften required toヨmanage and configure multiple, interconnected devices and program their interactions. Artificial intelligence (AI) techniques are being used to create dynamic services that learn about us and others, that make conclusions about our intents and affiliations, and that mould our digital interactions based in predictions about our actions and needs, nudging us toward certain behaviors. Computation is also increasingly embedded into our bodies. Understanding human interactions in the everyday digital and physical context. During this lecture, Elizabeth Churchill -Director of User Experience at Google- will talk about how an emerging landscape invites us to revisit old methods and tactics for understanding how people interact with computers and computation, and how it challenges us to think about new methods and frameworks for understanding the future of human-centered computation.

  17. A multimodal parallel architecture: A cognitive framework for multimodal interactions.

    Science.gov (United States)

    Cohn, Neil

    2016-01-01

    Human communication is naturally multimodal, and substantial focus has examined the semantic correspondences in speech-gesture and text-image relationships. However, visual narratives, like those in comics, provide an interesting challenge to multimodal communication because the words and/or images can guide the overall meaning, and both modalities can appear in complicated "grammatical" sequences: sentences use a syntactic structure and sequential images use a narrative structure. These dual structures create complexity beyond those typically addressed by theories of multimodality where only a single form uses combinatorial structure, and also poses challenges for models of the linguistic system that focus on single modalities. This paper outlines a broad theoretical framework for multimodal interactions by expanding on Jackendoff's (2002) parallel architecture for language. Multimodal interactions are characterized in terms of their component cognitive structures: whether a particular modality (verbal, bodily, visual) is present, whether it uses a grammatical structure (syntax, narrative), and whether it "dominates" the semantics of the overall expression. Altogether, this approach integrates multimodal interactions into an existing framework of language and cognition, and characterizes interactions between varying complexity in the verbal, bodily, and graphic domains. The resulting theoretical model presents an expanded consideration of the boundaries of the "linguistic" system and its involvement in multimodal interactions, with a framework that can benefit research on corpus analyses, experimentation, and the educational benefits of multimodality. Copyright © 2015.

  18. Experimental evaluation of multimodal human computer interface for tactical audio applications

    NARCIS (Netherlands)

    Obrenovic, Z.; Starcevic, D.; Jovanov, E.; Oy, S.

    2002-01-01

    Mission critical and information overwhelming applications require careful design of the human computer interface. Typical applications include night vision or low visibility mission navigation, guidance through a hostile territory, and flight navigation and orientation. Additional channels of

  19. Institutionalizing human-computer interaction for global health.

    Science.gov (United States)

    Gulliksen, Jan

    2017-06-01

    Digitalization is the societal change process in which new ICT-based solutions bring forward completely new ways of doing things, new businesses and new movements in the society. Digitalization also provides completely new ways of addressing issues related to global health. This paper provides an overview of the field of human-computer interaction (HCI) and in what way the field has contributed to international development in different regions of the world. Additionally, it outlines the United Nations' new sustainability goals from December 2015 and what these could contribute to the development of global health and its relationship to digitalization. Finally, it argues why and how HCI could be adopted and adapted to fit the contextual needs, the need for localization and for the development of new digital innovations. The research methodology is mostly qualitative following an action research paradigm in which the actual change process that the digitalization is evoking is equally important as the scientific conclusions that can be drawn. In conclusion, the paper argues that digitalization is fundamentally changing the society through the development and use of digital technologies and may have a profound effect on the digital development of every country in the world. But it needs to be developed based on local practices, it needs international support and to not be limited by any technological constraints. Particularly digitalization to support global health requires a profound understanding of the users and their context, arguing for user-centred systems design methodologies as particularly suitable.

  20. The Emotiv EPOC interface paradigm in Human-Computer Interaction

    OpenAIRE

    Ancău Dorina; Roman Nicolae-Marius; Ancău Mircea

    2017-01-01

    Numerous studies have suggested the use of decoded error potentials in the brain to improve human-computer communication. Together with state-of-the-art scientific equipment, experiments have also tested instruments with more limited performance for the time being, such as Emotiv EPOC. This study presents a review of these trials and a summary of the results obtained. However, the level of these results indicates a promising prospect for using this headset as a human-computer interface for er...

  1. Evidence Report: Risk of Inadequate Human-Computer Interaction

    Science.gov (United States)

    Holden, Kritina; Ezer, Neta; Vos, Gordon

    2013-01-01

    Human-computer interaction (HCI) encompasses all the methods by which humans and computer-based systems communicate, share information, and accomplish tasks. When HCI is poorly designed, crews have difficulty entering, navigating, accessing, and understanding information. HCI has rarely been studied in an operational spaceflight context, and detailed performance data that would support evaluation of HCI have not been collected; thus, we draw much of our evidence from post-spaceflight crew comments, and from other safety-critical domains like ground-based power plants, and aviation. Additionally, there is a concern that any potential or real issues to date may have been masked by the fact that crews have near constant access to ground controllers, who monitor for errors, correct mistakes, and provide additional information needed to complete tasks. We do not know what types of HCI issues might arise without this "safety net". Exploration missions will test this concern, as crews may be operating autonomously due to communication delays and blackouts. Crew survival will be heavily dependent on available electronic information for just-in-time training, procedure execution, and vehicle or system maintenance; hence, the criticality of the Risk of Inadequate HCI. Future work must focus on identifying the most important contributing risk factors, evaluating their contribution to the overall risk, and developing appropriate mitigations. The Risk of Inadequate HCI includes eight core contributing factors based on the Human Factors Analysis and Classification System (HFACS): (1) Requirements, policies, and design processes, (2) Information resources and support, (3) Allocation of attention, (4) Cognitive overload, (5) Environmentally induced perceptual changes, (6) Misperception and misinterpretation of displayed information, (7) Spatial disorientation, and (8) Displays and controls.

  2. The Emotiv EPOC interface paradigm in Human-Computer Interaction

    Directory of Open Access Journals (Sweden)

    Ancău Dorina

    2017-01-01

    Full Text Available Numerous studies have suggested the use of decoded error potentials in the brain to improve human-computer communication. Together with state-of-the-art scientific equipment, experiments have also tested instruments with more limited performance for the time being, such as Emotiv EPOC. This study presents a review of these trials and a summary of the results obtained. However, the level of these results indicates a promising prospect for using this headset as a human-computer interface for error decoding.

  3. Stereo Vision for Unrestricted Human-Computer Interaction

    OpenAIRE

    Eldridge, Ross; Rudolph, Heiko

    2008-01-01

    Human computer interfaces have come long way in recent years, but the goal of a computer interpreting unrestricted human movement remains elusive. The use of stereo vision in this field has enabled the development of systems that begin to approach this goal. As computer technology advances we come ever closer to a system that can react to the ambiguities of human movement in real-time. In the foreseeable future stereo computer vision is not likely to replace the keyboard or mouse. There is at...

  4. Design of a compact low-power human-computer interaction equipment for hand motion

    Science.gov (United States)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demand of convenience, endurance, responsiveness and naturalness. This paper describes a design of a compact wearable low-power HCI equipment applied to gesture recognition. System combines multi-mode sense signals: the vision sense signal and the motion sense signal, and the equipment is equipped with the depth camera and the motion sensor. The dimension (40 mm × 30 mm) and structure is compact and portable after tight integration. System is built on a module layered framework, which contributes to real-time collection (60 fps), process and transmission via synchronous confusion with asynchronous concurrent collection and wireless Blue 4.0 transmission. To minimize equipment's energy consumption, system makes use of low-power components, managing peripheral state dynamically, switching into idle mode intelligently, pulse-width modulation (PWM) of the NIR LEDs of the depth camera and algorithm optimization by the motion sensor. To test this equipment's function and performance, a gesture recognition algorithm is applied to system. As the result presents, general energy consumption could be as low as 0.5 W.

  5. Virtual reality/ augmented reality technology : the next chapter of human-computer interaction

    OpenAIRE

    Huang, Xing

    2015-01-01

    No matter how many different size and shape the computer has, the basic components of computers are still the same. If we use the user perspective to look for the development of computer history, we can surprisingly find that it is the input output device that leads the development of the industry development, in one word, human-computer interaction changes the development of computer history. Human computer interaction has been gone through three stages, the first stage relies on the inpu...

  6. Applying systemic-structural activity theory to design of human-computer interaction systems

    CERN Document Server

    Bedny, Gregory Z; Bedny, Inna

    2015-01-01

    Human-Computer Interaction (HCI) is an interdisciplinary field that has gained recognition as an important field in ergonomics. HCI draws on ideas and theoretical concepts from computer science, psychology, industrial design, and other fields. Human-Computer Interaction is no longer limited to trained software users. Today people interact with various devices such as mobile phones, tablets, and laptops. How can you make such interaction user friendly, even when user proficiency levels vary? This book explores methods for assessing the psychological complexity of computer-based tasks. It also p

  7. The Past, Present and Future of Human Computer Interaction

    KAUST Repository

    Churchill, Elizabeth

    2018-01-01

    into our environments. From our homes to the urban and rural spaces, we traverse everyday. We are increasingly able toヨoften required toヨmanage and configure multiple, interconnected devices and program their interactions. Artificial intelligence (AI

  8. Proceedings of the Third International Conference on Intelligent Human Computer Interaction

    CERN Document Server

    Pokorný, Jaroslav; Snášel, Václav; Abraham, Ajith

    2013-01-01

    The Third International Conference on Intelligent Human Computer Interaction 2011 (IHCI 2011) was held at Charles University, Prague, Czech Republic from August 29 - August 31, 2011. This conference was third in the series, following IHCI 2009 and IHCI 2010 held in January at IIIT Allahabad, India. Human computer interaction is a fast growing research area and an attractive subject of interest for both academia and industry. There are many interesting and challenging topics that need to be researched and discussed. This book aims to provide excellent opportunities for the dissemination of interesting new research and discussion about presented topics. It can be useful for researchers working on various aspects of human computer interaction. Topics covered in this book include user interface and interaction, theoretical background and applications of HCI and also data mining and knowledge discovery as a support of HCI applications.

  9. Situated dialog in speech-based human-computer interaction

    CERN Document Server

    Raux, Antoine; Lane, Ian; Misu, Teruhisa

    2016-01-01

    This book provides a survey of the state-of-the-art in the practical implementation of Spoken Dialog Systems for applications in everyday settings. It includes contributions on key topics in situated dialog interaction from a number of leading researchers and offers a broad spectrum of perspectives on research and development in the area. In particular, it presents applications in robotics, knowledge access and communication and covers the following topics: dialog for interacting with robots; language understanding and generation; dialog architectures and modeling; core technologies; and the analysis of human discourse and interaction. The contributions are adapted and expanded contributions from the 2014 International Workshop on Spoken Dialog Systems (IWSDS 2014), where researchers and developers from industry and academia alike met to discuss and compare their implementation experiences, analyses and empirical findings.

  10. Human-computer interaction handbook fundamentals, evolving technologies and emerging applications

    CERN Document Server

    Sears, Andrew

    2007-01-01

    This second edition of The Human-Computer Interaction Handbook provides an updated, comprehensive overview of the most important research in the field, including insights that are directly applicable throughout the process of developing effective interactive information technologies. It features cutting-edge advances to the scientific knowledge base, as well as visionary perspectives and developments that fundamentally transform the way in which researchers and practitioners view the discipline. As the seminal volume of HCI research and practice, The Human-Computer Interaction Handbook feature

  11. Advancements in Violin-Related Human-Computer Interaction

    DEFF Research Database (Denmark)

    Overholt, Daniel

    2014-01-01

    of human intelligence and emotion is at the core of the Musical Interface Technology Design Space, MITDS. This is a framework that endeavors to retain and enhance such traits of traditional instruments in the design of interactive live performance interfaces. Utilizing the MITDS, advanced Human...

  12. Computerized Cognitive Rehabilitation: Comparing Different Human-Computer Interactions.

    Science.gov (United States)

    Quaglini, Silvana; Alloni, Anna; Cattani, Barbara; Panzarasa, Silvia; Pistarini, Caterina

    2017-01-01

    In this work we describe an experiment involving aphasic patients, where the same speech rehabilitation exercise was administered in three different modalities, two of which are computer-based. In particular, one modality exploits the "Makey Makey", an electronic board which allows interacting with the computer using physical objects.

  13. Transnational HCI: Humans, Computers and Interactions in Global Contexts

    DEFF Research Database (Denmark)

    Vertesi, Janet; Lindtner, Silvia; Shklovski, Irina

    2011-01-01

    , but as evolving in relation to global processes, boundary crossings, frictions and hybrid practices. In doing so, we expand upon existing research in HCI to consider the effects, implications for individuals and communities, and design opportunities in times of increased transnational interactions. We hope...... to broaden the conversation around the impact of technology in global processes by bringing together scholars from HCI and from related humanities, media arts and social sciences disciplines....

  14. The Human-Computer Interaction of Cross-Cultural Gaming Strategy

    Science.gov (United States)

    Chakraborty, Joyram; Norcio, Anthony F.; Van Der Veer, Jacob J.; Andre, Charles F.; Miller, Zachary; Regelsberger, Alexander

    2015-01-01

    This article explores the cultural dimensions of the human-computer interaction that underlies gaming strategies. The article is a desktop study of existing literature and is organized into five sections. The first examines the cultural aspects of knowledge processing. The social constructs technology interaction is discussed. Following this, the…

  15. Enhancing Human-Computer Interaction Design Education: Teaching Affordance Design for Emerging Mobile Devices

    Science.gov (United States)

    Faiola, Anthony; Matei, Sorin Adam

    2010-01-01

    The evolution of human-computer interaction design (HCID) over the last 20 years suggests that there is a growing need for educational scholars to consider new and more applicable theoretical models of interactive product design. The authors suggest that such paradigms would call for an approach that would equip HCID students with a better…

  16. Multimodal Desktop Interaction: The Face –Object-Gesture–Voice Example

    OpenAIRE

    Vidakis, Nikolas; Vlasopoulos, Anastasios; Kounalakis, Tsampikos; Varchalamas, Petros; Dimitriou, Michalis; Kalliatakis, Gregory; Syntychakis, Efthimios; Christofakis, John; Triantafyllidis, Georgios

    2013-01-01

    This paper presents a natural user interface systembased on multimodal human computer interaction, whichoperates as an intermediate module between the user and theoperating system. The aim of this work is to demonstrate amultimodal system which gives users the ability to interact withdesktop applications using face, objects, voice and gestures.These human behaviors constitute the input qualifiers to thesystem. Microsoft Kinect multi-sensor was utilized as inputdevice in order to succeed the n...

  17. A Multimodal Interaction Framework for Blended Learning

    DEFF Research Database (Denmark)

    Vidakis, Nikolaos; Kalafatis, Konstantinos; Triantafyllidis, Georgios

    2016-01-01

    Humans interact with each other by utilizing the five basic senses as input modalities, whereas sounds, gestures, facial expressions etc. are utilized as output modalities. Multimodal interaction is also used between humans and their surrounding environment, although enhanced with further senses ...... framework enabling deployment of a vast variety of modalities, tailored appropriately for use in blended learning environment....

  18. Multimodal Student Interaction Online: An Ecological Perspective

    Science.gov (United States)

    Berglund, Therese Ornberg

    2009-01-01

    This article describes the influence of tool and task design on student interaction in language learning at a distance. Interaction in a multimodal desktop video conferencing environment, FlashMeeting, is analyzed from an ecological perspective with two main foci: participation rates and conversational feedback strategies. The quantitative…

  19. Implementations of the CC'01 Human-Computer Interaction Guidelines Using Bloom's Taxonomy

    Science.gov (United States)

    Manaris, Bill; Wainer, Michael; Kirkpatrick, Arthur E.; Stalvey, RoxAnn H.; Shannon, Christine; Leventhal, Laura; Barnes, Julie; Wright, John; Schafer, J. Ben; Sanders, Dean

    2007-01-01

    In today's technology-laden society human-computer interaction (HCI) is an important knowledge area for computer scientists and software engineers. This paper surveys existing approaches to incorporate HCI into computer science (CS) and such related issues as the perceived gap between the interests of the HCI community and the needs of CS…

  20. The Study on Human-Computer Interaction Design Based on the Users’ Subconscious Behavior

    Science.gov (United States)

    Li, Lingyuan

    2017-09-01

    Human-computer interaction is human-centered. An excellent interaction design should focus on the study of user experience, which greatly comes from the consistence between design and human behavioral habit. However, users’ behavioral habits often result from subconsciousness. Therefore, it is smart to utilize users’ subconscious behavior to achieve design's intention and maximize the value of products’ functions, which gradually becomes a new trend in this field.

  1. Multimodal interaction in image and video applications

    CERN Document Server

    Sappa, Angel D

    2013-01-01

    Traditional Pattern Recognition (PR) and Computer Vision (CV) technologies have mainly focused on full automation, even though full automation often proves elusive or unnatural in many applications, where the technology is expected to assist rather than replace the human agents. However, not all the problems can be automatically solved being the human interaction the only way to tackle those applications. Recently, multimodal human interaction has become an important field of increasing interest in the research community. Advanced man-machine interfaces with high cognitive capabilities are a hot research topic that aims at solving challenging problems in image and video applications. Actually, the idea of computer interactive systems was already proposed on the early stages of computer science. Nowadays, the ubiquity of image sensors together with the ever-increasing computing performance has open new and challenging opportunities for research in multimodal human interaction. This book aims to show how existi...

  2. Multimodal Embodied Mimicry in Interaction

    NARCIS (Netherlands)

    Sun, X.; Esposito, Anna; Vinciarelli, Alessandro; Vicsi, Klára; Pelachaud, Catherine; Nijholt, Antinus

    2011-01-01

    Nonverbal behaviors play an important role in communicating with others. One particular kind of nonverbal interaction behavior is mimicry. It has been argued that behavioral mimicry supports harmonious relationships in social interaction through creating affiliation, rapport, and liking between

  3. Reference Resolution in Multi-modal Interaction: Position paper

    NARCIS (Netherlands)

    Fernando, T.; Nijholt, Antinus

    2002-01-01

    In this position paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can

  4. Reference resolution in multi-modal interaction: Preliminary observations

    NARCIS (Netherlands)

    González González, G.R.; Nijholt, Antinus

    2002-01-01

    In this paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can apply

  5. USING RESEARCH METHODS IN HUMAN COMPUTER INTERACTION TO DESIGN TECHNOLOGY FOR RESILIENCE

    OpenAIRE

    Lopes, Arminda Guerra

    2016-01-01

    ABSTRACT Research in human computer interaction (HCI) covers both technological and human behavioural concerns. As a consequence, the contributions made in HCI research tend to be aware to either engineering or the social sciences. In HCI the purpose of practical research contributions is to reveal unknown insights about human behaviour and its relationship to technology. Practical research methods normally used in HCI include formal experiments, field experiments, field studies, interviews, ...

  6. Cross-cultural human-computer interaction and user experience design a semiotic perspective

    CERN Document Server

    Brejcha, Jan

    2015-01-01

    This book describes patterns of language and culture in human-computer interaction (HCI). Through numerous examples, it shows why these patterns matter and how to exploit them to design a better user experience (UX) with computer systems. It provides scientific information on the theoretical and practical areas of the interaction and communication design for research experts and industry practitioners and covers the latest research in semiotics and cultural studies, bringing a set of tools and methods to benefit the process of designing with the cultural background in mind.

  7. Engageability: a new sub-principle of the learnability principle in human-computer interaction

    Directory of Open Access Journals (Sweden)

    B Chimbo

    2011-12-01

    Full Text Available The learnability principle relates to improving the usability of software, as well as users’ performance and productivity. A gap has been identified as the current definition of the principle does not distinguish between users of different ages. To determine the extent of the gap, this article compares the ways in which two user groups, adults and children, learn how to use an unfamiliar software application. In doing this, we bring together the research areas of human-computer interaction (HCI, adult and child learning, learning theories and strategies, usability evaluation and interaction design. A literature survey conducted on learnability and learning processes considered the meaning of learnability of software applications across generations. In an empirical investigation, users aged from 9 to 12 and from 35 to 50 were observed in a usability laboratory while learning to use educational software applications. Insights that emerged from data analysis showed different tactics and approaches that children and adults use when learning unfamiliar software. Eye tracking data was also recorded. Findings indicated that subtle re- interpretation of the learnability principle and its associated sub-principles was required. An additional sub-principle, namely engageability was proposed to incorporate aspects of learnability that are not covered by the existing sub-principles. Our re-interpretation of the learnability principle and the resulting design recommendations should help designers to fulfill the varying needs of different-aged users, and improve the learnability of their designs. Keywords: Child computer interaction, Design principles, Eye tracking, Generational differences, human-computer interaction, Learning theories, Learnability, Engageability, Software applications, Uasability Disciplines: Human-Computer Interaction (HCI Studies, Computer science, Observational Studies

  8. Human-Computer Interaction Handbook Fundamentals, Evolving Technologies, and Emerging Applications

    CERN Document Server

    Jacko, Julie A

    2012-01-01

    The third edition of a groundbreaking reference, The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications raises the bar for handbooks in this field. It is the largest, most complete compilation of HCI theories, principles, advances, case studies, and more that exists within a single volume. The book captures the current and emerging sub-disciplines within HCI related to research, development, and practice that continue to advance at an astonishing rate. It features cutting-edge advances to the scientific knowledge base as well as visionary perspe

  9. Advances in Human-Computer Interaction: Graphics and Animation Components for Interface Design

    Science.gov (United States)

    Cipolla Ficarra, Francisco V.; Nicol, Emma; Cipolla-Ficarra, Miguel; Richardson, Lucy

    We present an analysis of communicability methodology in graphics and animation components for interface design, called CAN (Communicability, Acceptability and Novelty). This methodology was developed between 2005 and 2010, obtaining excellent results in cultural heritage, education and microcomputing contexts, in studies where there is a bi-directional interrelation between ergonomics, usability, user-centered design, software quality and human-computer interaction. We also present heuristic results on iconography and layout design in blogs and websites from the following countries: Spain, Italy, Portugal and France.

  10. Real-time non-invasive eyetracking and gaze-point determination for human-computer interaction and biomedicine

    Science.gov (United States)

    Talukder, Ashit; Morookian, John-Michael; Monacos, S.; Lam, R.; Lebaw, C.; Bond, A.

    2004-01-01

    Eyetracking is one of the latest technologies that has shown potential in several areas including human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals.

  11. Soft Electronics Enabled Ergonomic Human-Computer Interaction for Swallowing Training

    Science.gov (United States)

    Lee, Yongkuk; Nicholls, Benjamin; Sup Lee, Dong; Chen, Yanfei; Chun, Youngjae; Siang Ang, Chee; Yeo, Woon-Hong

    2017-04-01

    We introduce a skin-friendly electronic system that enables human-computer interaction (HCI) for swallowing training in dysphagia rehabilitation. For an ergonomic HCI, we utilize a soft, highly compliant (“skin-like”) electrode, which addresses critical issues of an existing rigid and planar electrode combined with a problematic conductive electrolyte and adhesive pad. The skin-like electrode offers a highly conformal, user-comfortable interaction with the skin for long-term wearable, high-fidelity recording of swallowing electromyograms on the chin. Mechanics modeling and experimental quantification capture the ultra-elastic mechanical characteristics of an open mesh microstructured sensor, conjugated with an elastomeric membrane. Systematic in vivo studies investigate the functionality of the soft electronics for HCI-enabled swallowing training, which includes the application of a biofeedback system to detect swallowing behavior. The collection of results demonstrates clinical feasibility of the ergonomic electronics in HCI-driven rehabilitation for patients with swallowing disorders.
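
    The swallow-detection step that drives the biofeedback loop can be approximated in software as envelope-based event detection on the chin sEMG. The Python sketch below is illustrative only, not the authors' implementation; the sampling rate, filter band, smoothing window and threshold are assumed values.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        FS = 1000.0  # assumed sampling rate (Hz) of the chin EMG channel

        def bandpass(x, lo=20.0, hi=450.0, fs=FS, order=4):
            """Standard surface-EMG band-pass to suppress motion artifacts and noise."""
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        def detect_swallows(emg, fs=FS, threshold=3.0, min_gap_s=1.0):
            """Return sample indices where the smoothed EMG envelope crosses a
            robust threshold (median + threshold * MAD): candidate swallows."""
            env = np.abs(hilbert(bandpass(emg)))              # amplitude envelope
            win = int(0.1 * fs)                               # 100 ms smoothing window
            env = np.convolve(env, np.ones(win) / win, mode="same")
            med = np.median(env)
            level = med + threshold * np.median(np.abs(env - med))
            above = env > level
            onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
            # enforce a refractory gap so one swallow is not counted twice
            keep, last = [], -np.inf
            for i in onsets:
                if i - last > min_gap_s * fs:
                    keep.append(i)
                    last = i
            return np.asarray(keep)

    In a biofeedback setting, each detected onset would trigger the visual or auditory feedback given to the trainee.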

  12. Cognitive engineering models: A prerequisite to the design of human-computer interaction in complex dynamic systems

    Science.gov (United States)

    Mitchell, Christine M.

    1993-01-01

    This chapter examines a class of human-computer interaction applications, specifically the design of human-computer interaction for the operators of complex systems. Such systems include space systems (e.g., manned systems such as the Shuttle or space station, and unmanned systems such as NASA scientific satellites), aviation systems (e.g., the flight deck of 'glass cockpit' airplanes or air traffic control) and industrial systems (e.g., power plants, telephone networks, and sophisticated, e.g., 'lights out,' manufacturing facilities). The main body of human-computer interaction (HCI) research complements but does not directly address the primary issues involved in human-computer interaction design for operators of complex systems. Interfaces to complex systems are somewhat special. The 'user' in such systems - i.e., the human operator responsible for safe and effective system operation - is highly skilled, someone who in human-machine systems engineering is sometimes characterized as 'well trained, well motivated'. The 'job' or task context is paramount and, thus, human-computer interaction is subordinate to human job interaction. The design of human interaction with complex systems, i.e., the design of human job interaction, is sometimes called cognitive engineering.

  13. Interactivity in Educational Apps for Young Children: A Multimodal Analysis

    Science.gov (United States)

    Blitz-Raith, Alexandra H.; Liu, Jianxin

    2017-01-01

    Interactivity is an important indicator of an educational app's reception. Since most educational apps are multimodal, it justifies a methodological initiative to understand meaningful involvement of multimodality in enacting and even amplifying interactivity in an educational app. Yet research so far has largely concentrated on algorithm…

  14. Human-Computer Interaction and Sociological Insight: A Theoretical Examination and Experiment in Building Affinity in Small Groups

    Science.gov (United States)

    Oren, Michael Anthony

    2011-01-01

    The juxtaposition of classic sociological theory and the, relatively, young discipline of human-computer interaction (HCI) serves as a powerful mechanism for both exploring the theoretical impacts of technology on human interactions as well as the application of technological systems to moderate interactions. It is the intent of this dissertation…

  15. Multimodal interaction for human-robot teams

    Science.gov (United States)

    Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle

    2013-05-01

    Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. In order for such autonomous systems to integrate with the team, we must move beyond current interaction methods using heads-down teleoperation which require intensive human attention and affect the human operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.
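
    The graceful degradation across partially redundant modes can be pictured as a small arbitration layer: each recognizer emits candidate commands with confidences, and the situational context downweights unusable channels. The sketch below is hypothetical glue code, not the paper's system; the context table, thresholds and command names are invented.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Command:
            action: str        # e.g. "move_to_waypoint", "halt"
            confidence: float  # 0..1 recognizer confidence
            modality: str      # "gesture", "voice", or "tablet"

        # Hypothetical situational weighting: under stealth, voice is disabled;
        # in high noise, voice confidence is treated as unreliable.
        CONTEXT_WEIGHTS = {
            "stealth":    {"gesture": 1.0, "voice": 0.0, "tablet": 0.8},
            "high_noise": {"gesture": 1.0, "voice": 0.3, "tablet": 1.0},
            "nominal":    {"gesture": 1.0, "voice": 1.0, "tablet": 1.0},
        }

        def arbitrate(candidates: list[Command], context: str = "nominal",
                      min_score: float = 0.5) -> Optional[Command]:
            """Pick the highest context-weighted candidate; agreeing modalities
            reinforce each other, which is what makes the redundancy graceful."""
            weights = CONTEXT_WEIGHTS[context]
            scored = {}
            for c in candidates:
                scored[c.action] = scored.get(c.action, 0.0) + c.confidence * weights[c.modality]
            if not scored:
                return None
            best_action = max(scored, key=scored.get)
            if scored[best_action] < min_score:
                return None  # no modality trustworthy enough: ask the operator to repeat
            return max((c for c in candidates if c.action == best_action),
                       key=lambda c: c.confidence)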

  16. Twenty Years of Creativity Research in Human-Computer Interaction: Current State and Future Directions

    DEFF Research Database (Denmark)

    Frich Pedersen, Jonas; Biskjaer, Michael Mose; Dalsgaard, Peter

    2018-01-01

    Creativity has been a growing topic within the ACM community since the 1990s. However, no clear overview of this trend has been offered. We present a thorough survey of 998 creativity-related publications in the ACM Digital Library, collected using keyword search, to determine prevailing approaches, topics, and characteristics of creativity-oriented Human-Computer Interaction (HCI) research. A selected sample based on yearly citations yielded 221 publications, which were analyzed using constant comparison analysis. We found that HCI is almost exclusively responsible for creativity-oriented publications; they focus on collaborative creativity rather than individual creativity; there is a general lack of definition of the term ‘creativity’; empirically based contributions are prevalent; and many publications focus on new tools, often developed by researchers. On this basis, we present three...

  17. Multimodality

    DEFF Research Database (Denmark)

    Buhl, Mie

    2010-01-01

    In this paper, I address an ongoing discussion in Danish E-learning research about how to take advantage of the fact that digital media facilitate other communication forms than text, so-called ‘multimodal’ communication, which should not be confused with the term ‘multimedia’. While multimedia … on their teaching and learning situations. The choices they make involve e-learning resources like videos, social platforms and mobile devices, not just as digital artefacts we interact with, but the entire practice of using digital media. In a life-long learning perspective, multimodality is potentially very...

  18. Enrichment of Human-Computer Interaction in Brain-Computer Interfaces via Virtual Environments

    Directory of Open Access Journals (Sweden)

    Alonso-Valerdi Luz María

    2017-01-01

    Full Text Available Tridimensional representations stimulate cognitive processes that are the core and foundation of human-computer interaction (HCI). Those cognitive processes take place while a user navigates and explores a virtual environment (VE) and are mainly related to spatial memory storage, attention, and perception. VEs have many distinctive features (e.g., involvement, immersion, and presence) that can significantly improve HCI in highly demanding and interactive systems such as brain-computer interfaces (BCI). BCI is a nonmuscular communication channel that attempts to reestablish the interaction between an individual and his/her environment. Although BCI research started in the sixties, this technology is not yet efficient or reliable for everyone at any time. Over the past few years, researchers have argued that the main BCI flaws could be associated with HCI issues. The evidence presented thus far shows that VEs can (1) set out working environmental conditions, (2) maximize the efficiency of BCI control panels, (3) implement navigation systems based not only on user intentions but also on user emotions, and (4) regulate user mental state to increase the differentiation between control and noncontrol modalities.

  19. Multi-step EMG Classification Algorithm for Human-Computer Interaction

    Science.gov (United States)

    Ren, Peng; Barreto, Armando; Adjouadi, Malek

    A three-electrode human-computer interaction system, based on digital processing of the Electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles in the head, respectively. The signal processing algorithm used translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than other previous approaches.
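
    The three classification principles just quoted (per-channel energy dominance, spectral differences, and correlated energy on adjacent channels) can be compressed into a simple decision rule. The sketch below is an illustrative reconstruction rather than the authors' Matlab algorithm; the channel order and the dominance ratio are assumptions.

        import numpy as np

        # assumed channel order: 0 = right frontalis, 1 = left temporalis, 2 = right temporalis

        def classify_window(window, ratio=1.5):
            """window: (3, N) array of band-passed EMG for one analysis frame.
            Applies the energy-dominance and correlated-energy principles."""
            energy = np.sum(window.astype(float) ** 2, axis=1)
            frontalis, l_temp, r_temp = energy
            # simultaneous left & right jaw clenching -> correlated temporalis energy
            if l_temp > ratio * frontalis and r_temp > ratio * frontalis:
                return "left_click"
            if l_temp > ratio * max(frontalis, r_temp):
                return "left"            # left jaw clenching
            if r_temp > ratio * max(frontalis, l_temp):
                return "right"           # right jaw clenching
            if frontalis > ratio * max(l_temp, r_temp):
                # eyebrows up vs. down would then be separated by spectral
                # features (e.g. mean power frequency), omitted here for brevity
                return "up_or_down"
            return None                  # rest / ambiguous frame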

  20. Eye Tracking Based Control System for Natural Human-Computer Interaction

    Directory of Open Access Journals (Sweden)

    Xuebai Zhang

    2017-01-01

    Full Text Available Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disabilities. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode by using only the user’s eyes. The usage flow of the proposed system is designed to follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (searching an article and browsing a multimedia web page) were performed to compare the proposed eye control tool with an existing system. The Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design.

  1. Eye Tracking Based Control System for Natural Human-Computer Interaction.

    Science.gov (United States)

    Zhang, Xuebai; Liu, Xiaolong; Yuan, Shyan-Ming; Lin, Shu-Fan

    2017-01-01

    Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disabilities. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode by using only the user's eyes. The usage flow of the proposed system is designed to follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (searching an article and browsing a multimedia web page) were performed to compare the proposed eye control tool with an existing system. The Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design.
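
    The interaction flow described in both records above (gaze positioning, dwell-based selection, and a magnifier for small targets) amounts to a small state machine. A minimal sketch, assuming dwell time, jitter radius and magnifier threshold values that the paper does not specify:

        import math, time

        DWELL_S = 0.8          # assumed dwell time that triggers a click
        JITTER_PX = 40         # gaze is "stable" if it stays within this radius
        MAGNIFY_BELOW_PX = 24  # targets smaller than this open the magnifier

        class EyeMouse:
            def __init__(self):
                self.anchor = None      # (x, y) where the current dwell started
                self.t0 = 0.0

            def update(self, gaze_xy, target_size_px):
                """Feed one gaze sample; returns 'click', 'magnify', or None."""
                now = time.monotonic()
                if (self.anchor is None or
                        math.dist(gaze_xy, self.anchor) > JITTER_PX):
                    self.anchor, self.t0 = gaze_xy, now   # new fixation begins
                    return None
                if now - self.t0 >= DWELL_S:
                    self.anchor = None                    # consume the dwell
                    if target_size_px < MAGNIFY_BELOW_PX:
                        return "magnify"  # zoom so the next dwell can be accurate
                    return "click"
                return None

    The magnifier branch is what restores accuracy: after zooming, the same dwell logic operates on an enlarged target.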

  2. Multimodal Sensing Interface for Haptic Interaction

    Directory of Open Access Journals (Sweden)

    Carlos Diaz

    2017-01-01

    Full Text Available This paper investigates the integration of a multimodal sensing system for exploring the limits of vibrotactile haptic feedback when interacting with 3D representations of real objects. In this study, the spatial locations of the objects are mapped to the work volume of the user using a Kinect sensor. The position of the user’s hand is obtained using marker-based visual processing. The depth information is used to build a vibrotactile map on a haptic glove enhanced with vibration motors. The users can perceive the location and dimension of remote objects by moving their hand inside a scanning region. A marker detection camera provides the location and orientation of the user’s hand (glove) to map the corresponding tactile message. A preliminary study was conducted to explore how different users can perceive such haptic experiences. Factors such as total number of objects detected, object separation resolution, and dimension-based and shape-based discrimination were evaluated. The preliminary results showed that the localization and counting of objects can be attained with a high degree of success. The users were able to classify groups of objects of different dimensions based on the perceived haptic feedback.
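
    The depth-to-vibrotactile mapping can be sketched as pooling a Kinect-style depth region around the hand into a coarse grid, one cell per motor, with nearer surfaces driving stronger vibration. Grid size, depth range and the intensity scaling below are assumptions, not the authors' parameters.

        import numpy as np

        GRID = (3, 3)                  # assumed 3x3 motor layout on the glove
        NEAR_MM, FAR_MM = 500, 1500    # assumed working depth range of the scene

        def depth_to_motor_levels(depth_roi):
            """depth_roi: (H, W) depth image (mm) around the user's hand position.
            Returns a GRID-shaped array of motor intensities in [0, 1]."""
            h, w = depth_roi.shape
            gh, gw = GRID
            levels = np.zeros(GRID)
            for i in range(gh):
                for j in range(gw):
                    cell = depth_roi[i * h // gh:(i + 1) * h // gh,
                                     j * w // gw:(j + 1) * w // gw]
                    valid = cell[cell > 0]            # 0 = no depth reading
                    if valid.size == 0:
                        continue
                    d = float(valid.min())            # nearest surface in the cell
                    # nearer -> stronger vibration, clipped to the working range
                    levels[i, j] = np.clip((FAR_MM - d) / (FAR_MM - NEAR_MM), 0.0, 1.0)
            return levels

        # Each level would then be scaled to a PWM duty cycle for its motor.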

  3. USING OLFACTORY DISPLAYS AS A NONTRADITIONAL INTERFACE IN HUMAN COMPUTER INTERACTION

    Directory of Open Access Journals (Sweden)

    Alper Efe

    2017-07-01

    Full Text Available Smell has its limitations and disadvantages as a display medium, but it also has its strengths, and many have recognized its potential. At present, in communications and virtual technologies, smell is either forgotten or improperly stimulated, because of uncontrolled odorants present in the physical space surrounding the user. Nonetheless, a controlled presentation of olfactory information can give advantages in various application fields. Therefore, two enabling technologies, electronic noses and especially olfactory displays, are reviewed. Scenarios of usage are discussed together with relevant psycho-physiological issues. End-to-end systems including olfactory interfaces are quantitatively characterised in many respects. Recent work done by the authors in the field is reported. The article touches briefly on the control of scent emissions, an important factor to consider when building scented computer systems. As a sample application, the SUBSMELL system is investigated. A look at areas of human computer interaction where olfactory output may prove useful is presented. The article finishes with some brief conclusions and discusses some shortcomings and gaps in the topic. In particular, the addition of olfactory cues to a virtual environment increased the user's sense of presence and memory of the environment. The article also discusses the educational aspect of SUBSMELL systems.

  4. Redesign of a computerized clinical reminder for colorectal cancer screening: a human-computer interaction evaluation

    Directory of Open Access Journals (Sweden)

    Saleem Jason J

    2011-11-01

    Full Text Available Abstract Background: Based on barriers to the use of computerized clinical decision support (CDS) learned in an earlier field study, we prototyped design enhancements to the Veterans Health Administration's (VHA's) colorectal cancer (CRC) screening clinical reminder to compare against the VHA's current CRC reminder. Methods: In a controlled simulation experiment, 12 primary care providers (PCPs) used prototypes of the current and redesigned CRC screening reminder in a within-subject comparison. Quantitative measurements were based on a usability survey, workload assessment instrument, and workflow integration survey. We also collected qualitative data on both designs. Results: Design enhancements to the VHA's existing CRC screening clinical reminder positively impacted aspects of usability and workflow integration but not workload. The qualitative analysis revealed broad support across participants for the design enhancements, with specific suggestions for improving the reminder further. Conclusions: This study demonstrates the value of a human-computer interaction evaluation in informing the redesign of information tools to foster uptake, integration into workflow, and use in clinical practice.

  5. How should Fitts' Law be applied to human-computer interaction?

    Science.gov (United States)

    Gillan, D. J.; Holden, K.; Adam, S.; Rudisill, M.; Magee, L.

    1992-01-01

    The paper challenges the notion that any Fitts' Law model can be applied generally to human-computer interaction, and proposes instead that applying Fitts' Law requires knowledge of the users' sequence of movements, direction of movement, and typical movement amplitudes as well as target sizes. Two experiments examined a text selection task with sequences of controlled movements (point-click and point-drag). For the point-click sequence, a Fitts' Law model that used the diagonal across the text object in the direction of pointing (rather than the horizontal extent of the text object) as the target size provided the best fit for the pointing time data, whereas for the point-drag sequence, a Fitts' Law model that used the vertical size of the text object as the target size gave the best fit. Dragging times were fitted well by Fitts' Law models that used either the vertical or horizontal size of the terminal character in the text object. Additional results of note were that pointing in the point-click sequence was consistently faster than in the point-drag sequence, and that pointing in either sequence was consistently faster than dragging. The discussion centres around the need to define task characteristics before applying Fitts' Law to an interface design or analysis, analyses of pointing and of dragging, and implications for interface design.
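
    For reference, the models being compared all instantiate the classic Fitts' Law relation; the experiments differ only in what is substituted for the target width W. In LaTeX:

        % Fitts' Law: movement time MT to a target of width W at amplitude (distance) A
        MT = a + b \log_2\!\left(\frac{2A}{W}\right)
        % (Shannon variant often used in HCI: MT = a + b \log_2(A/W + 1))
        %
        % The paper's point is that W must match the movement actually made, e.g.
        %   point-click: W = diagonal of the text object along the pointing direction
        %   point-drag:  W = vertical extent of the text object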

  6. User involvement in the design of human-computer interactions: some similarities and differences between design approaches

    NARCIS (Netherlands)

    Bekker, M.M.; Long, J.B.

    1998-01-01

    This paper presents a general review of user involvement in the design of human-computer interactions, as advocated by a selection of different approaches to design. The selection comprises User-Centred Design, Participatory Design, Socio-Technical Design, Soft Systems Methodology, and Joint

  7. Using GOMS and NASA-TLX to Evaluate Human-Computer Interaction Process in Interactive Segmentation

    NARCIS (Netherlands)

    Ramkumar, A.; Stappers, P.J.; Niessen, W.J.; Adebahr, S; Schimek-Jasch, T; Nestle, U; Song, Y.

    2016-01-01

    HCI plays an important role in interactive medical image segmentation. The Goals, Operators, Methods, and Selection rules (GOMS) model and the National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire are different methods that are often used to evaluate the HCI

  8. Exploring multimodal robotic interaction through storytelling for aphasics

    NARCIS (Netherlands)

    Mubin, O.; Al Mahmud, A.; Abuelma'atti, O.; England, D.

    2008-01-01

    In this poster, we propose the design of a multimodal robotic interaction mechanism that is intended to be used by Aphasics for storytelling. Through limited physical interaction, mild to moderate aphasic people can interact with a robot that may help them to be more active in their day to day

  9. Multimodality

    DEFF Research Database (Denmark)

    Buhl, Mie

    In this paper, I address an ongoing discussion in Danish E-learning research about how to take advantage of the fact that digital media facilitate other communication forms than text, so-called ‘multimodal’ communication, which should not be confused with the term ‘multimedia’. While multimedia … and learning situations. The choices they make involve E-learning resources like videos, social platforms and mobile devices, not just as digital artefacts we interact with, but the entire practice of using digital media. In a life-long learning perspective, multimodality is potentially very useful...

  10. Ontology for assessment studies of human-computer-interaction in surgery.

    Science.gov (United States)

    Machno, Andrej; Jannin, Pierre; Dameron, Olivier; Korb, Werner; Scheuermann, Gerik; Meixensberger, Jürgen

    2015-02-01

    New technologies improve modern medicine, but may result in unwanted consequences. Some occur due to inadequate human-computer-interactions (HCI). To assess these consequences, an investigation model was developed to facilitate the planning, implementation and documentation of studies for HCI in surgery. The investigation model was formalized in Unified Modeling Language and implemented as an ontology. Four different top-level ontologies were compared: Object-Centered High-level Reference, Basic Formal Ontology, General Formal Ontology (GFO) and Descriptive Ontology for Linguistic and Cognitive Engineering, according to the three major requirements of the investigation model: the domain-specific view, the experimental scenario and the representation of fundamental relations. Furthermore, this article emphasizes the distinction of "information model" and "model of meaning" and shows the advantages of implementing the model in an ontology rather than in a database. The results of the comparison show that GFO fits the defined requirements adequately: the domain-specific view and the fundamental relations can be implemented directly, only the representation of the experimental scenario requires minor extensions. The other candidates require wide-ranging extensions, concerning at least one of the major implementation requirements. Therefore, the GFO was selected to realize an appropriate implementation of the developed investigation model. The ensuing development considered the concrete implementation of further model aspects and entities: sub-domains, space and time, processes, properties, relations and functions. The investigation model and its ontological implementation provide a modular guideline for study planning, implementation and documentation within the area of HCI research in surgery. This guideline helps to navigate through the whole study process in the form of a kind of standard or good clinical practice, based on the involved foundational frameworks.
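
    To make the "ontology rather than database" argument concrete, here is a minimal hypothetical sketch of encoding one assessment-study scenario as RDF triples with rdflib. It is not the authors' GFO implementation; every name in the ex: namespace, and the example value, is invented.

        from rdflib import Graph, Namespace, Literal
        from rdflib.namespace import RDF, RDFS

        EX = Namespace("http://example.org/hci-surgery#")  # hypothetical namespace
        g = Graph()
        g.bind("ex", EX)

        # Domain-specific view: a study has an experimental scenario with roles and devices
        g.add((EX.Study01, RDF.type, EX.HCIAssessmentStudy))
        g.add((EX.Study01, EX.hasScenario, EX.SimulatedOsteotomy))
        g.add((EX.SimulatedOsteotomy, RDF.type, EX.ExperimentalScenario))
        g.add((EX.SimulatedOsteotomy, EX.involvesRole, EX.Surgeon))
        g.add((EX.SimulatedOsteotomy, EX.usesDevice, EX.NavigationDisplay))

        # Fundamental relation: a measured property inheres in the scenario process
        g.add((EX.TaskCompletionTime, RDF.type, EX.Property))
        g.add((EX.TaskCompletionTime, EX.inheresIn, EX.SimulatedOsteotomy))
        g.add((EX.TaskCompletionTime, EX.hasValue, Literal(412)))  # seconds, invented
        g.add((EX.TaskCompletionTime, RDFS.comment,
               Literal("Time from incision planning to cut confirmation")))

        print(g.serialize(format="turtle"))

    Unlike a fixed database schema, new entity types and relations (e.g. further sub-domains) can be added as triples without migrating existing data.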

  11. Multimodal interaction design in collocated mobile phone use

    NARCIS (Netherlands)

    El-Ali, A.; Lucero, A.; Aaltonen, V.

    2011-01-01

    In the context of the Social and Spatial Interactions (SSI) platform, we explore how multimodal interaction design (input and output) can augment and improve the experience of collocated, collaborative activities using mobile phones. Based largely on our prototype evaluations, we reflect on and

  12. An agent-based architecture for multimodal interaction

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    In this paper, an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The agent-based architecture can be used to create multimodal interaction. The generic process model has been designed, implemented and used to simulate

  13. An agent-based architecture for multimodal interaction

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    2001-01-01

    In this paper, an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The agent-based architecture can be used to create multimodal interaction. The generic process model has been designed, implemented and used to simulate

  14. Interactivity in Educational Apps for Young children: A Multimodal Analysis

    Directory of Open Access Journals (Sweden)

    Alexandra H. Blitz-Raith

    2017-11-01

    Full Text Available Interactivity is an important indicator of an educational app's reception. Since most educational apps are multimodal, it justifies a methodological initiative to understand meaningful involvement of multimodality in enacting and even amplifying interactivity in an educational app. Yet research so far has largely concentrated on algorithm construct and user feedback rather than on multimodal interactions, especially from a social semiotics perspective. Drawing on social semiotics approaches, this article proposes a multimodal analytic framework to examine three layers of mode in engendering interaction; namely, multiplicity, function, and relationship. Using the analytic framework in an analysis of The Farm Adventure for Kids, a popular educational app for pre-school children, we found that still images are dominant proportionally and are central in the interactive process. We also found that tapping still images of animals on screen is the main action, with other screen actions deliberately excluded. Such findings suggest that aligning children’s cognitive and physical capabilities with the use of mode becomes the primary consideration in educational app design, and that consistent attendance to this alignment in mobilizing modes significantly affects an educational app’s interactivity, and consequently its reception by young children.

  15. Stress and Cognitive Load in Multimodal Conversational Interactions

    NARCIS (Netherlands)

    Niculescu, A.I.; Cao, Y.; Nijholt, Antinus; Stephanides, C.

    2009-01-01

    The quality assessment of multimodal conversational interactions is determined by many influence parameters. Stress and cognitive load are two of them. In order to assess the impact of stress and cognitive load on the perceived conversational quality it is essential to control their levels during

  16. Artificial Intelligence for Human Computing

    NARCIS (Netherlands)

    Huang, Th.S.; Nijholt, Antinus; Pantic, Maja; Pentland, A.; Unknown, [Unknown

    2007-01-01

    This book constitutes the thoroughly refereed post-proceedings of two events discussing AI for Human Computing: one Special Session during the Eighth International ACM Conference on Multimodal Interfaces (ICMI 2006), held in Banff, Canada, in November 2006, and a Workshop organized in conjunction

  17. Knowledge translation in health care as a multimodal interactional accomplishment

    DEFF Research Database (Denmark)

    Kjær, Malene

    2014-01-01

    In the theory of health care, knowledge translation is regarded as a crucial phenomenon that makes the whole health care system work in a desired manner. The present paper studies knowledge translation from the student nurses’ perspective and does that through a close analysis of the part of their education where they are in clinical practice. The analysis is made possible through video recordings of how student nurses translate their theoretical knowledge into professional situational conduct in everyday interactional accomplishments among supervisors and patients. The analysis shows how some knowledge gets translated through the use of rich multimodal embodied interactions, whereas the more abstract aspects of knowledge remain untranslated. Overall, the study contributes to the understanding of knowledge translation as a multimodal, locally situated accomplishment.

  18. Quality of human-computer interaction - results of a national usability survey of hospital-IT in Germany

    Directory of Open Access Journals (Sweden)

    Bundschuh Bettina B

    2011-11-01

    Full Text Available Abstract Background: Due to the increasing functionality of medical information systems, it is hard to imagine day-to-day work in hospitals without IT support. Therefore, the design of dialogues between humans and information systems is one of the most important issues to be addressed in health care. This survey presents an analysis of the current quality level of human-computer interaction of healthcare IT in German hospitals, focused on the users' point of view. Methods: To evaluate the usability of clinical IT according to the design principles of EN ISO 9241-10, the IsoMetrics Inventory, an assessment tool, was used. The focus of this paper has been put on suitability for the task, training effort and conformity with user expectations, differentiated by information system. Effectiveness has been evaluated with a focus on interoperability and functionality of different IT systems. Results: 4521 persons from 371 hospitals visited the start page of the study, and 1003 persons from 158 hospitals completed the questionnaire. The results show relevant variations between different information systems. Conclusions: Specialised information systems with defined functionality received better assessments than clinical information systems in general. This can be attributed to the improved customisation of these specialised systems for specific working environments. The results can be used as reference data for the evaluation and benchmarking of human-computer engineering in the clinical health IT context in future studies.

  19. An Egocentric Approach Towards Ubiquitous Multimodal Interaction

    DEFF Research Database (Denmark)

    Pederson, Thomas; Jalaliniya, Shahram

    2015-01-01

    In this position paper we present our take on the possibilities that emerge from a mix of recent ideas in interaction design, wearable computers, and context-aware systems which taken together could allow us to get closer to Marc Weiser's vision of calm computing. Multisensory user experience plays...

  20. Blackthorn: Large-Scale Interactive Multimodal Learning

    DEFF Research Database (Denmark)

    Zahálka, Jan; Rudinac, Stevan; Jónsson, Björn Thór

    2018-01-01

    …learning process. The Ratio-64 data representation introduced in this work costs only tens of bytes per item, yet preserves most of the visual and textual semantic information with good accuracy. The optimized interactive learning model scores the Ratio-64-compressed data directly, greatly reducing … outperforming the baseline with respect to the relevance of results: it vastly outperforms the baseline on recall over time and reaches up to 108% of its precision. Compared to the product quantization variant, Blackthorn is just as fast, while producing more relevant results. On the full YFCC100M dataset...
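
    The core idea, scoring a compact compressed representation directly with a linear interactive-learning model, can be sketched as follows. This is not the actual Ratio-64 codec (the record does not give its details); the sparse top-k index/ratio encoding is an assumption used to illustrate scoring without decompression.

        import numpy as np

        K = 16  # keep the 16 strongest semantic concepts per item (~64 bytes total)

        def compress(features):
            """features: (D,) nonnegative concept scores. Keep top-K (index, ratio)
            pairs, with ratios quantized to one byte each."""
            idx = np.argsort(features)[-K:].astype(np.uint16)
            vals = features[idx]
            q = np.round(255 * vals / (vals.max() + 1e-12)).astype(np.uint8)
            return idx, q

        def score(weights, item):
            """Linear model applied in the compressed domain: only K multiply-adds
            per item instead of D, which is what makes interactive rounds fast."""
            idx, q = item
            return float(np.dot(weights[idx], q.astype(np.float32) / 255.0))

        # Interactive round: rank the collection by the current linear-SVM-like weights
        rng = np.random.default_rng(0)
        collection = [compress(rng.random(1024)) for _ in range(10_000)]
        w = rng.standard_normal(1024)
        top = sorted(range(len(collection)),
                     key=lambda i: score(w, collection[i]), reverse=True)[:25]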

  1. Multimode interaction in axially excited cylindrical shells

    Directory of Open Access Journals (Sweden)

    Silva F. M. A.

    2014-01-01

    Full Text Available Cylindrical shells exhibit a dense frequency spectrum, especially near the lowest frequency range. In addition, due to the circumferential symmetry, frequencies occur in pairs. So, in the vicinity of the lowest natural frequencies, several equal or nearly equal frequencies may occur, leading to a complex dynamic behavior. The aim of the present work is therefore to investigate the dynamic behavior and stability of cylindrical shells under axial forcing with multiple equal or nearly equal natural frequencies. The shell is modelled using the Donnell nonlinear shallow shell theory, and the discretized equations of motion are obtained by applying the Galerkin method. For this, a modal solution that takes into account the modal interaction among the relevant modes and the influence of their companion modes (modes with rotational symmetry), and that satisfies the boundary and continuity conditions of the shell, is derived. Special attention is given to the 1:1:1:1 internal resonance (four interacting modes). Solving the governing equations of motion numerically and using several tools of nonlinear dynamics, a detailed parametric analysis is conducted to clarify the influence of the internal resonances on the bifurcations, stability boundaries, nonlinear vibration modes and basins of attraction of the structure.
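
    A sketch of the kind of modal solution meant here, for a simply supported shell of length L: the transverse displacement is expanded in driven and companion mode pairs (plus the axisymmetric corrections the nonlinear theory requires). The exact expansion used by the authors is not given in the record; the form below is a generic example.

        % Transverse displacement with two pairs of driven and companion modes
        % (the 1:1:1:1 internal resonance case, four interacting modes):
        w(x,\theta,t) =
            \sum_{i=1}^{2} \Big[ A_i(t)\,\cos(n_i\theta) + B_i(t)\,\sin(n_i\theta) \Big]
            \sin\!\left(\frac{m_i \pi x}{L}\right)
          + \text{(axisymmetric terms)}
        %
        % with \omega_1 \approx \omega_2 (equal or nearly equal natural frequencies);
        % A_i: driven-mode amplitudes, B_i: companion-mode amplitudes.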

  2. Characterizing multimode interaction in renal autoregulation

    International Nuclear Information System (INIS)

    Pavlov, A N; Pavlova, O N; Sosnovtseva, O V; Mosekilde, E; Holstein-Rathlou, N-H

    2008-01-01

    The purpose of this paper is to demonstrate how modern statistical techniques of non-stationary time-series analysis can be used to characterize the mutual interaction among three coexisting rhythms in nephron pressure and flow regulation. Besides a relatively fast vasomotoric rhythm with a period of 5–8 s and a somewhat slower mode arising from an instability in the tubuloglomerular feedback mechanism, we also observe a very slow mode with a period of 100–200 s. Double-wavelet techniques are used to study how the very slow rhythm influences the two faster modes. In a broader perspective, the paper emphasizes the significance of complex dynamic phenomena in the normal and pathological function of physiological systems and discusses how simulation methods can help to understand the underlying biological mechanisms. At present, there is no causal explanation of the very slow mode; however, vascular oscillations with similar frequencies have been observed in other tissues.
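
    As a simplified stand-in for the double-wavelet analysis (described but not specified in the abstract), the sketch below extracts the amplitude envelope of the fast vasomotor band and then looks for very slow periodicity in that envelope; the sampling rate and band edges are assumed, and ordinary filtering replaces the wavelet machinery.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert, periodogram

        FS = 10.0  # assumed sampling rate (Hz) of the nephron pressure signal

        def bandpass(x, lo, hi, fs=FS, order=3):
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, x)

        def slow_modulation_of_fast_rhythm(pressure):
            """Envelope of the 5-8 s vasomotor rhythm (0.125-0.2 Hz), then the
            dominant very-slow component (100-200 s, 0.005-0.01 Hz) within it."""
            fast = bandpass(pressure, 0.125, 0.2)
            env = np.abs(hilbert(fast))               # amplitude of the fast rhythm
            f, pxx = periodogram(env - env.mean(), fs=FS)
            band = (f >= 0.004) & (f <= 0.012)
            f_slow = f[band][np.argmax(pxx[band])]    # peak very-slow frequency
            return env, f_slow

    A strong peak in that band indicates that the very slow mode modulates the vasomotor amplitude, which is the question the double-wavelet method addresses more rigorously.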

  3. Gaze-and-brain-controlled interfaces for human-computer and human-robot interaction

    Directory of Open Access Journals (Sweden)

    Shishkin S. L.

    2017-09-01

    Full Text Available Background. Human-machine interaction technology has greatly evolved during the last decades, but manual and speech modalities remain single output channels with their typical constraints imposed by the motor system’s information transfer limits. Will brain-computer interfaces (BCIs) and gaze-based control be able to convey human commands or even intentions to machines in the near future? We provide an overview of basic approaches in this new area of applied cognitive research. Objective. We test the hypothesis that the use of communication paradigms and a combination of eye tracking with unobtrusive forms of registering brain activity can improve human-machine interaction. Methods and Results. Three groups of ongoing experiments at the Kurchatov Institute are reported. First, we discuss the communicative nature of human-robot interaction, and approaches to building a more efficient technology. Specifically, “communicative” patterns of interaction can be based on joint attention paradigms from developmental psychology, including a mutual “eye-to-eye” exchange of looks between human and robot. Further, we provide an example of “eye mouse” superiority over the computer mouse, here in emulating the task of selecting a moving robot from a swarm. Finally, we demonstrate a passive, noninvasive BCI that uses EEG correlates of expectation. This may become an important filter to separate intentional gaze dwells from non-intentional ones. Conclusion. The current noninvasive BCIs are not well suited for human-robot interaction, and their performance, when they are employed by healthy users, is critically dependent on the impact of the gaze on selection of spatial locations. The new approaches discussed show a high potential for creating alternative output pathways for the human brain. When support from passive BCIs becomes mature, the hybrid technology of the eye-brain-computer (EBCI) interface will have a chance to enable natural, fluent…
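
    In software terms, using expectation-related EEG as a filter for intentional dwells reduces to gating a dwell-time selector with a classifier score. The sketch below is hypothetical glue code, not the Kurchatov Institute system; eeg_classifier stands for a pre-trained passive-BCI model, and both thresholds are assumed.

        DWELL_S = 0.5       # assumed dwell threshold for a candidate selection
        P_INTENT_MIN = 0.7  # assumed decision threshold of the passive BCI

        def select_targets(dwell_events, eeg_classifier):
            """dwell_events: iterable of (target_id, dwell_duration_s, eeg_epoch).
            Yields targets only when gaze dwell and EEG expectation agree, which
            suppresses the Midas-touch problem of pure gaze control."""
            for target_id, duration, epoch in dwell_events:
                if duration < DWELL_S:
                    continue                       # too short: ordinary visual scanning
                p_intent = eeg_classifier(epoch)   # P(intentional | EEG around dwell)
                if p_intent >= P_INTENT_MIN:
                    yield target_id                # confirmed: issue the command
                # otherwise: treated as an unintentional dwell, no action taken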

  4. Proceedings of the 5th Danish Human-Computer Interaction Research Symposium

    DEFF Research Database (Denmark)

    Clemmensen, Torkil; Nielsen, Lene

    2005-01-01

    … for the symposium, of which 14 were presented orally in four panel sessions. The symposium was previously held at the University of Aarhus (2001), the University of Copenhagen (2002), Roskilde University Center (2003) and Aalborg University (2004). Torkil Clemmensen & Lene Nielsen, Copenhagen, November 2005. Contents include: Lene Nielsen, 'Dealing with Reality - in Theory'; Gitte Skou Petersen, 'A New IFIP Working Group - Human Work Interaction Design'; Rikke Ørngreen, Torkil Clemmensen & Annelise Mark-Pejtersen, 'Classification of Descriptions Used in Software and Interaction Design'; Georg Strøm, 'Obstacles to Design in Volunteer-Based...'

  5. The Importance of Human-Computer Interaction in Radiology E-learning

    NARCIS (Netherlands)

    den Harder, Annemarie M; Frijlingh, Marissa; Ravesloot, Cécile J; Oosterbaan, Anne E; van der Gijp, Anouk

    2016-01-01

    With the development of cross-sectional imaging techniques and transformation to digital reading of radiological imaging, e-learning might be a promising tool in undergraduate radiology education. In this systematic review of the literature, we evaluate the emergence of image interaction

  6. Design Science in Human-Computer Interaction: A Model and Three Examples

    Science.gov (United States)

    Prestopnik, Nathan R.

    2013-01-01

    Humanity has entered an era where computing technology is virtually ubiquitous. From websites and mobile devices to computers embedded in appliances on our kitchen counters and automobiles parked in our driveways, information and communication technologies (ICTs) and IT artifacts are fundamentally changing the ways we interact with our world…

  7. Trends in Human-Computer Interaction to Support Future Intelligence Analysis Capabilities

    Science.gov (United States)

    2011-06-01

    Oblong Industries Inc. (Oblong, 2011). In addition to the camera-based gesture interaction (Figure 4), this system offers a management capability… The report also surveys display technologies including EyeTap, Lumus Eyewear LOE, FogScreen, HP LiM PC, Microvision PEK and SHOWWX pico projectors, head-mounted displays, and the Chinese Holo Screen.

  8. Brain computer interfaces as intelligent sensors for enhancing human-computer interaction

    NARCIS (Netherlands)

    Poel, M.; Nijboer, F.; Broek, E.L. van den; Fairclough, S.; Nijholt, A.

    2012-01-01

    BCIs are traditionally conceived as a way to control apparatus, an interface that allows you to "act on" external devices as a form of input control. We propose an alternative use of BCIs, that of monitoring users as an additional intelligent sensor to enrich traditional means of interaction. This

  9. Brain-Computer Interfaces. Applying our Minds to Human-Computer Interaction

    NARCIS (Netherlands)

    Tan, Desney S.; Nijholt, Antinus

    2010-01-01

    For generations, humans have fantasized about the ability to create devices that can see into a person’s mind and thoughts, or to communicate and interact with machines through thought alone. Such ideas have long captured the imagination of humankind in the form of ancient myths and modern science

  10. Brain computer interfaces as intelligent sensors for enhancing human-computer interaction

    NARCIS (Netherlands)

    Poel, Mannes; Nijboer, Femke; van den Broek, Egon; Fairclough, Stephen; Morency, Louis-Philippe; Bohus, Dan; Aghajan, Hamid; Nijholt, Antinus; Cassell, Justine; Epps, Julien

    2012-01-01

    BCIs are traditionally conceived as a way to control apparatus, an interface that allows you to "act on" external devices as a form of input control. We propose an alternative use of BCIs, that of monitoring users as an additional intelligent sensor to enrich traditional means of interaction. This

  11. The Importance of Human-Computer Interaction in Radiology E-learning.

    Science.gov (United States)

    den Harder, Annemarie M; Frijlingh, Marissa; Ravesloot, Cécile J; Oosterbaan, Anne E; van der Gijp, Anouk

    2016-04-01

    With the development of cross-sectional imaging techniques and transformation to digital reading of radiological imaging, e-learning might be a promising tool in undergraduate radiology education. In this systematic review of the literature, we evaluate the emergence of image interaction possibilities in radiology e-learning programs and evidence for effects of radiology e-learning on learning outcomes and perspectives of medical students and teachers. A systematic search in PubMed, EMBASE, Cochrane, ERIC, and PsycInfo was performed. Articles were screened by two authors and included when they concerned the evaluation of radiological e-learning tools for undergraduate medical students. Nineteen articles were included. Seven studies evaluated e-learning programs with image interaction possibilities. Students perceived e-learning with image interaction possibilities to be a useful addition to learning with hard copy images and to be effective for learning 3D anatomy. Both e-learning programs with and without image interaction possibilities were found to improve radiological knowledge and skills. In general, students found e-learning programs easy to use, rated image quality high, and found the difficulty level of the courses appropriate. Furthermore, they felt that their knowledge and understanding of radiology improved by using e-learning. In conclusion, the addition of radiology e-learning in undergraduate medical education can improve radiological knowledge and image interpretation skills. Differences between the effect of e-learning with and without image interpretation possibilities on learning outcomes are unknown and should be subject to future research.

  12. Brain-Computer Interfaces Applying Our Minds to Human-computer Interaction

    CERN Document Server

    Tan, Desney S

    2010-01-01

    For generations, humans have fantasized about the ability to create devices that can see into a person's mind and thoughts, or to communicate and interact with machines through thought alone. Such ideas have long captured the imagination of humankind in the form of ancient myths and modern science fiction stories. Recent advances in cognitive neuroscience and brain imaging technologies have started to turn these myths into a reality, and are providing us with the ability to interface directly with the human brain. This ability is made possible through the use of sensors that monitor physical p

  13. AirDraw: Leveraging Smart Watch Motion Sensors for Mobile Human Computer Interactions

    OpenAIRE

    Sajjadi, Seyed A; Moazen, Danial; Nahapetian, Ani

    2017-01-01

    Wearable computing is one of the fastest growing technologies today. Smart watches are poised to take over at least half of the wearable devices market in the near future. Smart watch screen size, however, is a limiting factor for growth, as it restricts practical text input. On the other hand, wearable devices have some features, such as consistent user interaction and hands-free, heads-up operations, which pave the way for gesture recognition methods of text entry. This paper proposes a new...

  14. Guest Editorial Special Issue on Human Computing

    NARCIS (Netherlands)

    Pantic, Maja; Santos, E.; Pentland, A.; Nijholt, Antinus

    2009-01-01

    The seven articles in this special issue focus on human computing. Most focus on two challenging issues in human computing, namely, machine analysis of human behavior in group interactions and context-sensitive modeling.

  15. Ergonomic guidelines for using notebook personal computers. Technical Committee on Human-Computer Interaction, International Ergonomics Association.

    Science.gov (United States)

    Saito, S; Piccoli, B; Smith, M J; Sotoyama, M; Sweitzer, G; Villanueva, M B; Yoshitake, R

    2000-10-01

    In the 1980s, the visual display terminal (VDT) was introduced in workplaces in many countries. Soon thereafter, an upsurge in reported cases of related health problems, such as musculoskeletal disorders and eyestrain, was seen. Recently, the flat panel display or notebook personal computer (PC) has become the most remarkable feature of modern workplaces with VDTs, and even of homes. A proactive approach must be taken to avert foreseeable ergonomic and occupational health problems arising from the use of this new technology. Because of its distinct physical and optical characteristics, the ergonomic requirements for notebook PCs in terms of machine layout, workstation design and lighting conditions, among others, should differ from those for CRT-based computers. The Japan Ergonomics Society (JES) technical committee came up with a set of guidelines for notebook PC use following exploratory discussions that dwelt on its ergonomic aspects. To keep in stride with this development, the Technical Committee on Human-Computer Interaction under the auspices of the International Ergonomics Association worked towards the international issuance of the guidelines. This paper unveils the result of this collaborative effort.

  16. A mobile Nursing Information System based on human-computer interaction design for improving quality of nursing.

    Science.gov (United States)

    Su, Kuo-Wei; Liu, Cheng-Li

    2012-06-01

    A conventional Nursing Information System (NIS), which supports the role of the nurse in some areas, is typically deployed as an immobile system. However, such a traditional information system cannot respond to patients' conditions in real time, causing delays in the availability of this information. With the advances of information technology, mobile devices are increasingly being used to extend the human mind's limited capacity to recall and process large numbers of relevant variables and to support information management, general administration, and clinical practice. Unfortunately, there have been few studies about the combination of a well-designed small-screen interface with a personal digital assistant (PDA) in clinical nursing. Some researchers found that user interface design is an important factor in determining the usability and potential use of a mobile system. Therefore, this study proposed a systematic approach to the development of a mobile nursing information system (MNIS) based on Mobile Human-Computer Interaction (M-HCI) for use in clinical nursing. The system combines principles of small-screen interface design with user-specified requirements. In addition, the iconic functions were designed with a metaphor concept that helps users learn the system more quickly with less working-memory load. An experiment involving learnability testing, thinking aloud and a questionnaire investigation was conducted to evaluate the effect of the MNIS on a PDA. The results show that the proposed MNIS performs well with respect to learnability and yields higher satisfaction with regard to symbols, terminology and system information.

  17. Using Noninvasive Brain Measurement to Explore the Psychological Effects of Computer Malfunctions on Users during Human-Computer Interactions

    Directory of Open Access Journals (Sweden)

    Leanne M. Hirshfield

    2014-01-01

    Full Text Available In today’s technologically driven world, there is a need to better understand the ways that common computer malfunctions affect computer users. These malfunctions may have measurable influences on computer users’ cognitive, emotional, and behavioral responses. An experiment was conducted in which participants performed a series of web search tasks while wearing functional near-infrared spectroscopy (fNIRS) and galvanic skin response sensors. Two computer malfunctions were introduced during the sessions, which had the potential to influence correlates of user trust and suspicion. Surveys were given after each session to measure users’ perceived emotional state, cognitive load, and perceived trust. Results suggest that fNIRS can be used to measure the different cognitive and emotional responses associated with computer malfunctions. These cognitive and emotional changes were correlated with users’ self-reported levels of suspicion and trust, and they in turn suggest future work that further explores the capability of fNIRS for the measurement of user experience during human-computer interactions.

  18. After-effects of human-computer interaction indicated by P300 of the event-related brain potential.

    Science.gov (United States)

    Trimmel, M; Huber, R

    1998-05-01

    After-effects of human-computer interaction (HCI) were investigated by using the P300 component of the event-related brain potential (ERP). Forty-nine subjects (naive non-users, beginners, experienced users, programmers) completed three paper/pencil tasks (text editing, solving intelligence test items, filling out a questionnaire on sensation seeking) and three HCI tasks (text editing, executing a tutor program or programming, playing Tetris). The sequence of 7-min tasks was randomized between subjects and balanced between groups. After each experimental condition ERPs were recorded during an acoustic discrimination task at F3, F4, Cz, P3 and P4. Data indicate that: (1) mental after-effects of HCI can be detected by P300 of the ERP; (2) HCI showed in general a reduced amplitude; (3) P300 amplitude varied also with type of task, mainly at F4 where it was smaller after cognitive tasks (intelligence test/programming) and larger after emotion-based tasks (sensation seeking/Tetris); (4) cognitive tasks showed shorter latencies; (5) latencies were widely location-independent (within the range of 356-358 ms at F3, F4, P3 and P4) after executing the tutor program or programming; and (6) all observed after-effects were independent of the user's experience in operating computers and may therefore reflect short-term after-effects only and no structural changes of information processing caused by HCI.
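
    The dependent measures in such a study, P300 amplitude and latency per electrode site, are typically read off the averaged ERP in a fixed post-stimulus window. A minimal sketch under assumed epoch parameters (sampling rate, baseline length, and search window are not taken from the paper):

        import numpy as np

        FS = 250                 # assumed EEG sampling rate (Hz)
        BASELINE_S = 0.2         # assumed pre-stimulus baseline in each epoch
        WINDOW = (0.25, 0.50)    # assumed P300 search window (s after stimulus)

        def p300_features(epochs):
            """epochs: (n_trials, n_samples) single-site EEG, stimulus-locked.
            Returns (amplitude_uV, latency_ms) of the averaged-ERP peak."""
            erp = epochs.mean(axis=0)                # average over trials
            base = int(BASELINE_S * FS)
            erp = erp - erp[:base].mean()            # baseline correction
            i0 = base + int(WINDOW[0] * FS)
            i1 = base + int(WINDOW[1] * FS)
            peak = i0 + int(np.argmax(erp[i0:i1]))   # P300 is a positive deflection
            latency_ms = (peak - base) / FS * 1000.0
            return float(erp[peak]), latency_ms

    Comparing these two numbers across post-task conditions is what reveals the amplitude reductions and latency shifts reported above.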

  19. Multimodal human-machine interaction for service robots in home-care environments

    OpenAIRE

    Goetze, Stefan; Fischer, S.; Moritz, Niko; Appell, Jens-E.; Wallhoff, Frank

    2012-01-01

    This contribution focuses on multimodal interaction techniques for a mobile communication and assistance system on a robot platform. The system comprises acoustic, visual and haptic input modalities. Feedback is given to the user by a graphical user interface and a speech synthesis system. By this, multimodal and natural communication with the robot system is possible.

  20. Adaptive multimodal interaction in mobile augmented reality: A conceptual framework

    Science.gov (United States)

    Abidin, Rimaniza Zainal; Arshad, Haslina; Shukri, Saidatul A'isyah Ahmad

    2017-10-01

    Recently, Augmented Reality (AR) has become an emerging technology in many mobile applications. Mobile AR can be defined as a medium for displaying information merged with the real-world environment in a single view. There are four main types of mobile augmented reality interfaces, and one of them is the multimodal interface. A multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture, gaze, and head and body movements) in a coordinated manner with multimedia system output. Many frameworks have been proposed to guide designers in developing multimodal applications, including in augmented reality environments, but there has been little work reviewing frameworks for adaptive multimodal interfaces in mobile augmented reality. The main goal of this study is to propose a conceptual framework to illustrate the adaptive multimodal interface in mobile augmented reality. We reviewed several frameworks that have been proposed in the fields of multimodal interfaces, adaptive interfaces and augmented reality. We analyzed the components in the previous frameworks and assessed which of them can be applied on mobile devices. Our framework can be used as a guide for designers and developers to develop mobile AR applications with adaptive multimodal interfaces.

  1. Human computer interaction and communication aids for hearing-impaired, deaf and deaf-blind people: Introduction to the special thematic session

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

    2008-01-01

    This paper gives an overview and extends the Special Thematic Session (STS) on research and development of technologies for hearing-impaired, deaf, and deaf-blind people. The topics of the session focus on special equipment or services to improve communication and human computer interaction. The papers are related to visual communication using captions, sign language, speech-reading, to vibro-tactile stimulation, or to general services for hearing-impaired persons.

  2. Une approche pragmatique cognitive de l'interaction personne/système informatisé (A Cognitive Pragmatic Approach of Human/Computer Interaction)

    Directory of Open Access Journals (Sweden)

    Madeleine Saint-Pierre

    1998-06-01

    Full Text Available This article proposes an inferential approach for the study of human/computer interaction. It is by taking into account the user's cognitive activity while working with a computer system that we propose to understand this type of interaction. This approach leads to a genuine user/interface evaluation and can serve as a guide for interfaces under development. Our analyses describe the inferential process involved in the dynamic context of task performance, through a categorization of cognitive activity derived from the verbalizations of users who "think aloud" while working. We present the methodological instruments developed in our research for the analysis and categorization of the protocols. The results are interpreted within the framework of Sperber and Wilson's (1995) relevance theory, in terms of the cognitive effort involved in processing the objects (linguistic, iconic, graphic...) appearing on the screen and of the cognitive effect of these objects. This approach can be generalized to any other human/computer interaction context, such as, for example, distance learning.

  3. Realtime Interaction Analysis of Social Interplay in a Multimodal Musical-Sonic Interaction Context

    DEFF Research Database (Denmark)

    Hansen, Anne-Marie

    2010-01-01

    This paper presents an approach to the analysis of social interplay among users in a multimodal interaction and musical performance situation. The approach consists of a combined method of realtime sensor data analysis for the description and interpretation of player gestures and video micro......-analysis methods used to describe the interaction situation and the context in which the social interplay takes place. This combined method is used in an iterative process, where the design of interactive games with musical-sonic feedback is improved according to newly discovered understandings and interpretations...

  4. Optimization Model for Web Based Multimodal Interactive Simulations.

    Science.gov (United States)

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-07-15

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update . In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure best possible computational resource allocation. The optimum solution is used for rendering (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.

  5. A Multimodal Emotion Detection System during Human-Robot Interaction

    Science.gov (United States)

    Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.

    2013-01-01

    In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: the voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system, Gender and Emotion Facial Analysis (GEFA), has been also developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to get a greater satisfaction degree during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately. PMID:24240598

  6. Multimodal interaction with W3C standards toward natural user interfaces to everything

    CERN Document Server

    2017-01-01

    This book presents new standards for multimodal interaction published by the W3C and other standards bodies in straightforward and accessible language, while also illustrating the standards in operation through case studies and chapters on innovative implementations. The book illustrates how, as smart technology becomes ubiquitous, and appears in more and more different shapes and sizes, vendor-specific approaches to multimodal interaction become impractical, motivating the need for standards. This book covers standards for voice, emotion, natural language understanding, dialog, and multimodal architectures. The book describes the standards in a practical manner, making them accessible to developers, students, and researchers. Comprehensive resource that explains the W3C standards for multimodal interaction clear and straightforward way; Includes case studies of the use of the standards on a wide variety of devices, including mobile devices, tablets, wearables and robots, in applications such as assisted livi...

  7. An evaluation framework for multimodal interaction determining quality aspects and modality choice

    CERN Document Server

    Wechsung, Ina

    2014-01-01

    This book presents (1) an exhaustive and empirically validated taxonomy of quality aspects of multimodal interaction as well as respective measurement methods, (2) a validated questionnaire specifically tailored to the evaluation of multimodal systems and covering most of the taxonomy‘s quality aspects, (3) insights on how the quality perceptions of multimodal systems relate to the quality perceptions of its individual components, (4) a set of empirically tested factors which influence modality choice, and (5) models regarding the relationship of the perceived quality of a modality and the actual usage of a modality.

  8. Creating Standardized Video Recordings of Multimodal Interactions across Cultures

    DEFF Research Database (Denmark)

    Rehm, Matthias; André, Elisabeth; Bee, Nikolaus

    2009-01-01

    the literature is often too anecdotal to serve as the basis for modeling a system’s behavior, making it necessary to collect multimodal corpora in a standardized fashion in different cultures. In this chapter, the challenges of such an endeavor are introduced and solutions are presented by examples from a German......-Japanese project that aims at modeling culture-specific behaviors for Embodied Conversational Agents....

  9. Toward Optimization of Gaze-Controlled Human-Computer Interaction: Application to Hindi Virtual Keyboard for Stroke Patients.

    Science.gov (United States)

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, Kongfatt; Dutta, Ashish; Prasad, Girijesh

    2018-04-01

    Virtual keyboard applications and alternative communication devices provide new means of communication to assist disabled people. To date, virtual keyboard optimization schemes based on script-specific information, along with multimodal input access facility, are limited. In this paper, we propose a novel method for optimizing the position of the displayed items for gaze-controlled tree-based menu selection systems by considering a combination of letter frequency and command selection time. The optimized graphical user interface layout has been designed for a Hindi language virtual keyboard based on a menu wherein 10 commands provide access to type 88 different characters, along with additional text editing commands. The system can be controlled in two different modes: eye-tracking alone and eye-tracking with an access soft-switch. Five different keyboard layouts have been presented and evaluated with ten healthy participants. Furthermore, the two best performing keyboard layouts have been evaluated with eye-tracking alone on ten stroke patients. The overall performance analysis demonstrated significantly superior typing performance, high usability (87% SUS score), and low workload (NASA TLX with 17 scores) for the letter frequency and time-based organization with script specific arrangement design. This paper represents the first optimized gaze-controlled Hindi virtual keyboard, which can be extended to other languages.

  10. Sensor-based assessment of the in-situ quality of human computer interaction in the cars : final research report.

    Science.gov (United States)

    2016-01-01

    Human attention is a finite resource. When interrupted while performing a task, this : resource is split between two interactive tasks. People have to decide whether the benefits : from the interruptive interaction will be enough to offset the loss o...

  11. Cooperation in human-computer communication

    OpenAIRE

    Kronenberg, Susanne

    2000-01-01

    The goal of this thesis is to simulate cooperation in human-computer communication to model the communicative interaction process of agents in natural dialogs in order to provide advanced human-computer interaction in that coherence is maintained between contributions of both agents, i.e. the human user and the computer. This thesis contributes to certain aspects of understanding and generation and their interaction in the German language. In spontaneous dialogs agents cooperate by the pro...

  12. Human Computer Music Performance

    OpenAIRE

    Dannenberg, Roger B.

    2012-01-01

    Human Computer Music Performance (HCMP) is the study of music performance by live human performers and real-time computer-based performers. One goal of HCMP is to create a highly autonomous artificial performer that can fill the role of a human, especially in a popular music setting. This will require advances in automated music listening and understanding, new representations for music, techniques for music synchronization, real-time human-computer communication, music generation, sound synt...

  13. The effect of a pretest in an interactive, multimodal pretraining system for learning science concepts

    NARCIS (Netherlands)

    Bos, Floor/Floris; Terlouw, C.; Pilot, Albert

    2009-01-01

    In line with the cognitive theory of multimedia learning by Moreno and Mayer (2007), an interactive, multimodal learning environment was designed for the pretraining of science concepts in the joint area of physics, chemistry, biology, applied mathematics, and computer sciences. In the experimental

  14. International workshop on multimodal analyses enabling artificial agents in human-machine interaction (workshop summary)

    NARCIS (Netherlands)

    Böck, Ronald; Bonin, Francesca; Campbell, Nick; Poppe, R.W.

    2016-01-01

    In this paper a brief overview of the third workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction. The paper is focussing on the main aspects intended to be discussed in the workshop reflecting the main scope of the papers presented during the meeting. The MA3HMI

  15. Contradictory Explorative Assessment. Multimodal Teacher/Student Interaction in Scandinavian Digital Learning Environments

    Science.gov (United States)

    Kjällander, Susanne

    2018-01-01

    Assessment in the much-discussed digital divide in Scandinavian technologically advanced schools, is the study object of this article. Interaction is studied to understand assessment; and to see how assessment can be didactically designed to recognise students' learning. With a multimodal, design theoretical perspective on learning teachers' and…

  16. A multimodal architecture for simulating natural interactive walking in virtual environments

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Serafin, Stefania; Turchet, Luca

    2011-01-01

    We describe a multimodal system that exploits the use of footwear-based interaction in virtual environments. We developed a pair of shoes enhanced with pressure sensors, actuators, and markers. These shoes control a multichannel surround sound system and drive a physically based audio...

  17. SPATIO-TEMPORAL CLUSTERING OF MOVEMENT DATA: AN APPLICATION TO TRAJECTORIES GENERATED BY HUMAN-COMPUTER INTERACTION

    Directory of Open Access Journals (Sweden)

    G. McArdle

    2012-07-01

    Full Text Available Advances in ubiquitous positioning technologies and their increasing availability in mobile devices has generated large volumes of movement data. Analysing these datasets is challenging. While data mining techniques can be applied to this data, knowledge of the underlying spatial region can assist interpreting the data. We have developed a geovisual analysis tool for studying movement data. In addition to interactive visualisations, the tool has features for analysing movement trajectories, in terms of their spatial and temporal similarity. The focus in this paper is on mouse trajectories of users interacting with web maps. The results obtained from a user trial can be used as a starting point to determine which parts of a mouse trajectory can assist personalisation of spatial web maps.

  18. Time-dependent, multimode interaction analysis of the gyroklystron amplifier

    Energy Technology Data Exchange (ETDEWEB)

    Swati, M. V., E-mail: swati.mv.ece10@iitbhu.ac.in; Chauhan, M. S.; Jain, P. K. [Department of Electronics Engineering, Indian Institute of Technology, Banaras Hindu University, Varanasi 221005 (India)

    2016-08-15

    In this paper, a time-dependent multimode nonlinear analysis for the gyroklystron amplifier has been developed by extending the analysis of gyrotron oscillators by employing the self-consistent approach. The nonlinear analysis developed here has been validated by taking into account the reported experimental results for a 32.3 GHz, three cavity, second harmonic gyroklystron operating in the TE{sub 02} mode. The analysis has been used to estimate the temporal RF growth in the operating mode as well as the nearby competing modes. Device gain and bandwidth have been computed for different drive powers and frequencies. The effect of various beam parameters, such as beam voltage, beam current, and pitch factor, has also been studied. The computational results have estimated the gyroklystron saturated RF power ∼319 kW at 32.3 GHz with efficiency ∼23% and gain ∼26.3 dB with device bandwidth ∼0.027% (8 MHz) for a 70 kV, 20 A electron beam. The computed results are found to be in agreement with the experimental values within 10%.

  19. Ubiquitous human computing.

    Science.gov (United States)

    Zittrain, Jonathan

    2008-10-28

    Ubiquitous computing means network connectivity everywhere, linking devices and systems as small as a drawing pin and as large as a worldwide product distribution chain. What could happen when people are so readily networked? This paper explores issues arising from two possible emerging models of ubiquitous human computing: fungible networked brainpower and collective personal vital sign monitoring.

  20. Exploring the requirements for multimodal interaction for mobile devices in an end-to-end journey context.

    Science.gov (United States)

    Krehl, Claudia; Sharples, Sarah

    2012-01-01

    The paper investigates the requirements for multimodal interaction on mobile devices in an end-to-end journey context. Traditional interfaces are deemed cumbersome and inefficient for exchanging information with the user. Multimodal interaction provides a different user-centred approach allowing for more natural and intuitive interaction between humans and computers. It is especially suitable for mobile interaction as it can overcome additional constraints including small screens, awkward keypads, and continuously changing settings - an inherent property of mobility. This paper is based on end-to-end journeys where users encounter several contexts during their journeys. Interviews and focus groups explore the requirements for multimodal interaction design for mobile devices by examining journey stages and identifying the users' information needs and sources. Findings suggest that multimodal communication is crucial when users multitask. Choosing suitable modalities depend on user context, characteristics and tasks.

  1. Multimodal Challenge: Analytics Beyond User-computer Interaction Data

    NARCIS (Netherlands)

    Di Mitri, Daniele; Schneider, Jan; Specht, Marcus; Drachsler, Hendrik

    2018-01-01

    This contribution describes one the challenges explored in the Fourth LAK Hackathon. This challenge aims at shifting the focus from learning situations which can be easily traced through user-computer interactions data and concentrate more on user-world interactions events, typical of co-located and

  2. Human computer interactions in next-generation of aircraft smart navigation management systems: task analysis and architecture under an agent-oriented methodological approach.

    Science.gov (United States)

    Canino-Rodríguez, José M; García-Herrero, Jesús; Besada-Portas, Juan; Ravelo-García, Antonio G; Travieso-González, Carlos; Alonso-Hernández, Jesús B

    2015-03-04

    The limited efficiency of current air traffic systems will require a next-generation of Smart Air Traffic System (SATS) that relies on current technological advances. This challenge means a transition toward a new navigation and air-traffic procedures paradigm, where pilots and air traffic controllers perform and coordinate their activities according to new roles and technological supports. The design of new Human-Computer Interactions (HCI) for performing these activities is a key element of SATS. However efforts for developing such tools need to be inspired on a parallel characterization of hypothetical air traffic scenarios compatible with current ones. This paper is focused on airborne HCI into SATS where cockpit inputs came from aircraft navigation systems, surrounding traffic situation, controllers' indications, etc. So the HCI is intended to enhance situation awareness and decision-making through pilot cockpit. This work approach considers SATS as a system distributed on a large-scale with uncertainty in a dynamic environment. Therefore, a multi-agent systems based approach is well suited for modeling such an environment. We demonstrate that current methodologies for designing multi-agent systems are a useful tool to characterize HCI. We specifically illustrate how the selected methodological approach provides enough guidelines to obtain a cockpit HCI design that complies with future SATS specifications.

  3. Human Computer Interactions in Next-Generation of Aircraft Smart Navigation Management Systems: Task Analysis and Architecture under an Agent-Oriented Methodological Approach

    Science.gov (United States)

    Canino-Rodríguez, José M.; García-Herrero, Jesús; Besada-Portas, Juan; Ravelo-García, Antonio G.; Travieso-González, Carlos; Alonso-Hernández, Jesús B.

    2015-01-01

    The limited efficiency of current air traffic systems will require a next-generation of Smart Air Traffic System (SATS) that relies on current technological advances. This challenge means a transition toward a new navigation and air-traffic procedures paradigm, where pilots and air traffic controllers perform and coordinate their activities according to new roles and technological supports. The design of new Human-Computer Interactions (HCI) for performing these activities is a key element of SATS. However efforts for developing such tools need to be inspired on a parallel characterization of hypothetical air traffic scenarios compatible with current ones. This paper is focused on airborne HCI into SATS where cockpit inputs came from aircraft navigation systems, surrounding traffic situation, controllers’ indications, etc. So the HCI is intended to enhance situation awareness and decision-making through pilot cockpit. This work approach considers SATS as a system distributed on a large-scale with uncertainty in a dynamic environment. Therefore, a multi-agent systems based approach is well suited for modeling such an environment. We demonstrate that current methodologies for designing multi-agent systems are a useful tool to characterize HCI. We specifically illustrate how the selected methodological approach provides enough guidelines to obtain a cockpit HCI design that complies with future SATS specifications. PMID:25746092

  4. What is the value of embedding artificial emotional prosody in human computer interactions? Implications for theory and design in psychological science.

    Directory of Open Access Journals (Sweden)

    Rachel L. C. Mitchell

    2015-11-01

    Full Text Available In computerised technology, artificial speech is becoming increasingly important, and is already used in ATMs, online gaming and healthcare contexts. However, today’s artificial speech typically sounds monotonous, a main reason for this being the lack of meaningful prosody. One particularly important function of prosody is to convey different emotions. This is because successful encoding and decoding of emotions is vital for effective social cognition, which is increasingly recognised in human-computer interaction contexts. Current attempts to artificially synthesise emotional prosody are much improved relative to early attempts, but there remains much work to be done due to methodological problems, lack of agreed acoustic correlates, and lack of theoretical grounding. If the addition of synthetic emotional prosody is not of sufficient quality, it may risk alienating users instead of enhancing their experience. So the value of embedding emotion cues in artificial speech may ultimately depend on the quality of the synthetic emotional prosody. However, early evidence on reactions to synthesised nonverbal cues in the facial modality bodes well. Attempts to implement the recognition of emotional prosody into artificial applications and interfaces have perhaps been met with greater success, but the ultimate test of synthetic emotional prosody will be to critically compare how people react to synthetic emotional prosody vs. natural emotional prosody, at the behavioural, socio-cognitive and neural levels.

  5. Human Computer Interactions in Next-Generation of Aircraft Smart Navigation Management Systems: Task Analysis and Architecture under an Agent-Oriented Methodological Approach

    Directory of Open Access Journals (Sweden)

    José M. Canino-Rodríguez

    2015-03-01

    Full Text Available The limited efficiency of current air traffic systems will require a next-generation of Smart Air Traffic System (SATS that relies on current technological advances. This challenge means a transition toward a new navigation and air-traffic procedures paradigm, where pilots and air traffic controllers perform and coordinate their activities according to new roles and technological supports. The design of new Human-Computer Interactions (HCI for performing these activities is a key element of SATS. However efforts for developing such tools need to be inspired on a parallel characterization of hypothetical air traffic scenarios compatible with current ones. This paper is focused on airborne HCI into SATS where cockpit inputs came from aircraft navigation systems, surrounding traffic situation, controllers’ indications, etc. So the HCI is intended to enhance situation awareness and decision-making through pilot cockpit. This work approach considers SATS as a system distributed on a large-scale with uncertainty in a dynamic environment. Therefore, a multi-agent systems based approach is well suited for modeling such an environment. We demonstrate that current methodologies for designing multi-agent systems are a useful tool to characterize HCI. We specifically illustrate how the selected methodological approach provides enough guidelines to obtain a cockpit HCI design that complies with future SATS specifications.

  6. ARZombie: A Mobile Augmented Reality Game with Multimodal Interaction

    Directory of Open Access Journals (Sweden)

    Diogo Cordeiro

    2015-11-01

    Full Text Available Augmented reality games have the power to extend virtual gaming into real world scenarios with real people, while enhancing the senses of the user. This paper describes the AR- Zombie game developed with the aim of studying and developing mobile augmented reality applications, specifically for tablets, using face recognition interaction techniques. The goal of the ARZombie player is to kill zombies that are detected through the display of the device. Instead of using markers as a mean of tracking the zombies, this game incorporates a facial recognition system, which will enhance the user experience by improving the interaction of players with the real world. As the player moves around the environment, the game will display virtual zombies on the screen if the detected faces are recognized as belonging to the class of the zombies. ARZombie was tested with users to evaluate the interaction proposals and its components were evaluated regarding the performance in order to ensure a better gaming experience.

  7. Spatial Sound and Multimodal Interaction in Immersive Environments

    DEFF Research Database (Denmark)

    Grani, Francesco; Overholt, Daniel; Erkut, Cumhur

    2015-01-01

    primary problem areas: 1) creation of interactive spatial audio experiences for immersive virtual and augmented reality scenarios, and 2) production and mixing of spatial audio for cinema, music, and other artistic contexts. Several ongoing research projects are described, wherein the latest developments...

  8. Eyeblink Synchrony in Multimodal Human-Android Interaction.

    Science.gov (United States)

    Tatsukawa, Kyohei; Nakano, Tamami; Ishiguro, Hiroshi; Yoshikawa, Yuichiro

    2016-12-23

    As the result of recent progress in technology of communication robot, robots are becoming an important social partner for humans. Behavioral synchrony is understood as an important factor in establishing good human-robot relationships. In this study, we hypothesized that biasing a human's attitude toward a robot changes the degree of synchrony between human and robot. We first examined whether eyeblinks were synchronized between a human and an android in face-to-face interaction and found that human listeners' eyeblinks were entrained to android speakers' eyeblinks. This eyeblink synchrony disappeared when the android speaker spoke while looking away from the human listeners but was enhanced when the human participants listened to the speaking android while touching the android's hand. These results suggest that eyeblink synchrony reflects a qualitative state in human-robot interactions.

  9. Multimodality and Design of Interactive Virtual Environments for Creative Collaboration

    DEFF Research Database (Denmark)

    Gürsimsek, Remzi Ates

    . The three-dimensional representation of space and the resources for non-verbal communication enable the users to interact with the digital content in more complex yet engaging ways. However, understanding the communicative resources in virtual spaces with the theoretical tools that are conventionally used...... perspective particularly emphasizes the role of audio-visual resources in co-creating representations for effective collaboration, and the socio-cultural factors in construction of meaningful virtual environments....

  10. IMMERSE: Interactive Mentoring for Multimodal Experiences in Realistic Social Encounters

    Science.gov (United States)

    2015-08-28

    ride to the airport, asking someone to watch your kids for an hour or pet for a few days; 1000 is characteristic of a large imposition—borrowing a...Influences of Sex and Status in Group Interactions. Journal of Nonverbal Behavior, Vol.(34), pp. 137-153. 44 Dovidio, J. F., Ellyson, S. L., Keating, C...Studies 1, pp. 328–333. 65 de Waal, F. (1982). Chimpanzee politics: Sex and power among apes. Baltimore: Johns Hopkins University Press. 67

  11. Multi-mode interactions in an FEL oscillator

    CERN Document Server

    Dong Zhi Wei; Masuda, K; Yamazaki, T; Yoshikawa, K

    2000-01-01

    A 3D time-dependent FEL oscillator simulation code has been developed by using the transverse mode spectral method to analyze interaction among transverse modes. The competition among them in an FEL oscillator was investigated based on the parameters of LANL FEL experiments. It is found that under typical FEL oscillator operation conditions, the TEM sub 0 sub 0 mode is dominant, and the effects of other transverse modes can be negligible.

  12. Interactive visualization and analysis of multimodal datasets for surgical applications.

    Science.gov (United States)

    Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James

    2012-12-01

    Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.

  13. Handbook of human computation

    CERN Document Server

    Michelucci, Pietro

    2013-01-01

    This volume addresses the emerging area of human computation, The chapters, written by leading international researchers, explore existing and future opportunities to combine the respective strengths of both humans and machines in order to create powerful problem-solving capabilities. The book bridges scientific communities, capturing and integrating the unique perspective and achievements of each. It coalesces contributions from industry and across related disciplines in order to motivate, define, and anticipate the future of this exciting new frontier in science and cultural evolution. Reade

  14. Interactive Multimodal Molecular Set – Designing Ludic Engaging Science Learning Content

    DEFF Research Database (Denmark)

    Thorsen, Tine Pinholt; Christiansen, Kasper Holm Bonde; Jakobsen Sillesen, Kristian

    2014-01-01

    This paper reports on an exploratory study investigating 10 primary school students’ interaction with an interactive multimodal molecular set fostering ludic engaging science learning content in primary schools (8th and 9th grade). The concept of the prototype design was to bridge the physical...... and virtual worlds with electronic tags and, through this, blend the familiarity of the computer and toys, to create a tool that provided a ludic approach to learning about atoms and molecules. The study was inspired by the participatory design and informant design methodologies and included design...

  15. The integration of audio−tactile information is modulated by multimodal social interaction with physical contact in infancy

    Directory of Open Access Journals (Sweden)

    Yukari Tanaka

    2018-04-01

    Full Text Available Interaction between caregivers and infants is multimodal in nature. To react interactively and smoothly to such multimodal signals, infants must integrate all these signals. However, few empirical infant studies have investigated how multimodal social interaction with physical contact facilitates multimodal integration, especially regarding audio − tactile (A-T information. By using electroencephalogram (EEG and event-related potentials (ERPs, the present study investigated how neural processing involved in A-T integration is modulated by tactile interaction. Seven- to 8-months-old infants heard one pseudoword both whilst being tickled (multimodal ‘A-T’ condition, and not being tickled (unimodal ‘A’ condition. Thereafter, their EEG was measured during the perception of the same words. Compared to the A condition, the A-T condition resulted in enhanced ERPs and higher beta-band activity within the left temporal regions, indicating neural processing of A-T integration. Additionally, theta-band activity within the middle frontal region was enhanced, which may reflect enhanced attention to social information. Furthermore, differential ERPs correlated with the degree of engagement in the tickling interaction. We provide neural evidence that the integration of A-T information in infants’ brains is facilitated through tactile interaction with others. Such plastic changes in neural processing may promote harmonious social interaction and effective learning in infancy. Keywords: Electroencephalogram (EEG, Infants, Multisensory integration, Touch interaction

  16. The integration of audio-tactile information is modulated by multimodal social interaction with physical contact in infancy.

    Science.gov (United States)

    Tanaka, Yukari; Kanakogi, Yasuhiro; Kawasaki, Masahiro; Myowa, Masako

    2018-04-01

    Interaction between caregivers and infants is multimodal in nature. To react interactively and smoothly to such multimodal signals, infants must integrate all these signals. However, few empirical infant studies have investigated how multimodal social interaction with physical contact facilitates multimodal integration, especially regarding audio - tactile (A-T) information. By using electroencephalogram (EEG) and event-related potentials (ERPs), the present study investigated how neural processing involved in A-T integration is modulated by tactile interaction. Seven- to 8-months-old infants heard one pseudoword both whilst being tickled (multimodal 'A-T' condition), and not being tickled (unimodal 'A' condition). Thereafter, their EEG was measured during the perception of the same words. Compared to the A condition, the A-T condition resulted in enhanced ERPs and higher beta-band activity within the left temporal regions, indicating neural processing of A-T integration. Additionally, theta-band activity within the middle frontal region was enhanced, which may reflect enhanced attention to social information. Furthermore, differential ERPs correlated with the degree of engagement in the tickling interaction. We provide neural evidence that the integration of A-T information in infants' brains is facilitated through tactile interaction with others. Such plastic changes in neural processing may promote harmonious social interaction and effective learning in infancy. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  17. Interactive multimodal ambulatory monitoring to investigate the association between physical activity and affect

    Directory of Open Access Journals (Sweden)

    Ulrich W. Ebner-Priemer

    2013-01-01

    Full Text Available Although there is a wealth of evidence that physical activity has positive effects on psychological health, a large proportion of people are inactive. Data regarding counts, steps, and movement patterns are limited in their ability to explain why people remain inactive. We propose that multimodal ambulatory monitoring, which combines the assessment of physical activity with the assessment of psychological variables, helps to elucidate real world physical activity. Whereas physical activity can be monitored continuously, psychological variables can only be assessed at discrete intervals, such as every hour. Moreover, the assessment of psychological variables must be linked to the activity of interest. For example, if an inactive and overweight person is physically active once a week, psychological variables should be assessed during this episode. Linking the assessment of psychological variables to episodes of an activity of interest can be achieved with interactive monitoring. The primary aim of our interactive multimodal ambulatory monitoring approach was to intentionally increase the number of e-diary assessments during active episodes.We developed and tested an interactive monitoring algorithm that continuously monitors physical activity in everyday life. When predefined thresholds are surpassed, the algorithm triggers a signal for participants to answer questions in their electronic diary.Using data from 70 participants wearing an accelerative device for 24 hours each, we found that our algorithm quadrupled the frequency of e-diary assessments during the activity episodes of interest compared to random sampling. Multimodal interactive ambulatory monitoring appears to be a promising approach to enhancing our understanding of real world physical activity and movement.

  18. A new multimodal interactive way of subjective scoring of 3D video quality of experience

    Science.gov (United States)

    Kim, Taewan; Lee, Kwanghyun; Lee, Sanghoon; Bovik, Alan C.

    2014-03-01

    People that watch today's 3D visual programs, such as 3D cinema, 3D TV and 3D games, experience wide and dynamically varying ranges of 3D visual immersion and 3D quality of experience (QoE). It is necessary to be able to deploy reliable methodologies that measure each viewers subjective experience. We propose a new methodology that we call Multimodal Interactive Continuous Scoring of Quality (MICSQ). MICSQ is composed of a device interaction process between the 3D display and a separate device (PC, tablet, etc.) used as an assessment tool, and a human interaction process between the subject(s) and the device. The scoring process is multimodal, using aural and tactile cues to help engage and focus the subject(s) on their tasks. Moreover, the wireless device interaction process makes it possible for multiple subjects to assess 3D QoE simultaneously in a large space such as a movie theater, and at di®erent visual angles and distances.

  19. Child-Computer Interaction: ICMI 2012 special session

    NARCIS (Netherlands)

    Nijholt, Antinus; Morency, L.P.; Bohus, L.; Aghajan, H.; Nijholt, Antinus; Cassell, J.; Epps, J.

    2012-01-01

    This is a short introduction to the special session on child computer interaction at the International Conference on Multimodal Interaction 2012 (ICMI 2012). In human-computer interaction users have become participants in the design process. This is not different for child computer interaction

  20. Emotional pictures and sounds: A review of multimodal interactions of emotion cues in multiple domains

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2014-12-01

    Full Text Available In everyday life, multiple sensory channels jointly trigger emotional experiences and one channel may alter processing in another channel. For example, seeing an emotional facial expression and hearing the voice’s emotional tone will jointly create the emotional experience. This example, where auditory and visual input is related to social communication, has gained considerable attention by researchers. However, interactions of visual and auditory emotional information are not limited to social communication but can extend to much broader contexts including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended in considering an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies in which emotional sounds and interactions with complex pictures of scenes were investigated. In addition to behavioral studies, we focus on neuroimaging, electro- and peripher-physiological findings. Furthermore, we integrate these findings and identify similarities or differences. We conclude with suggestions for future research.

  1. A Qualitative Analysis of NASA’s Human Computer Interaction Group Examining the Root Causes of Focusing on Derivative System Improvements Versus Core User Needs

    Science.gov (United States)

    2017-12-01

    toward qualitative analysis methods where they excelled at user research and workflow process analysis consistent with their formal training, rather...to a single one (e.g., one type of user research or graphic design) at larger Silicon Valley firms. The core competency of the design team tended...NAVAL POSTGRADUATE SCHOOL MONTEREY, CALIFORNIA MBA PROFESSIONAL REPORT A QUALITATIVE ANALYSIS OF NASA’S HUMAN COMPUTER

  2. Towards an intelligent framework for multimodal affective data analysis.

    Science.gov (United States)

    Poria, Soujanya; Cambria, Erik; Hussain, Amir; Huang, Guang-Bin

    2015-03-01

    An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook everyday. In order to cope with the growth of such so much multimodal data, there is an urgent need to develop an intelligent multi-modal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information extraction agent, which infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such as e-learning, e-health, automatic video content tagging and human-computer interaction. In particular, the developed intelligent agent adopts an ensemble feature extraction approach by exploiting the joint use of tri-modal (text, audio and video) features to enhance the multimodal information extraction process. In preliminary experiments using the eNTERFACE dataset, our proposed multi-modal system is shown to achieve an accuracy of 87.95%, outperforming the best state-of-the-art system by more than 10%, or in relative terms, a 56% reduction in error rate. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Toward Multimodal Human-Robot Interaction to Enhance Active Participation of Users in Gait Rehabilitation.

    Science.gov (United States)

    Gui, Kai; Liu, Honghai; Zhang, Dingguo

    2017-11-01

    Robotic exoskeletons for physical rehabilitation have been utilized for retraining patients suffering from paraplegia and enhancing motor recovery in recent years. However, users are not voluntarily involved in most systems. This paper aims to develop a locomotion trainer with multiple gait patterns, which can be controlled by the active motion intention of users. A multimodal human-robot interaction (HRI) system is established to enhance subject's active participation during gait rehabilitation, which includes cognitive HRI (cHRI) and physical HRI (pHRI). The cHRI adopts brain-computer interface based on steady-state visual evoked potential. The pHRI is realized via admittance control based on electromyography. A central pattern generator is utilized to produce rhythmic and continuous lower joint trajectories, and its state variables are regulated by cHRI and pHRI. A custom-made leg exoskeleton prototype with the proposed multimodal HRI is tested on healthy subjects and stroke patients. The results show that voluntary and active participation can be effectively involved to achieve various assistive gait patterns.

  4. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction

    Science.gov (United States)

    XU, TIAN (LINGER); ZHANG, HUI; YU, CHEN

    2016-01-01

    We focus on a fundamental looking behavior in human-robot interactions – gazing at each other’s face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user’s face as a response to the human’s gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot’s gaze toward the human partner’s face in real time and then analyzed the human’s gaze behavior as a response to the robot’s gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot’s face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained. PMID:28966875

  5. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.

    Science.gov (United States)

    Xu, Tian Linger; Zhang, Hui; Yu, Chen

    2016-05-01

    We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.

  6. Designing Multimodal Mobile Interaction for a Text Messaging Application for Visually Impaired Users

    Directory of Open Access Journals (Sweden)

    Carlos Duarte

    2017-12-01

    Full Text Available While mobile devices have experienced important accessibility advances in the past years, people with visual impairments still face important barriers, especially in specific contexts when both their hands are not free to hold the mobile device, like when walking outside. By resorting to a multimodal combination of body based gestures and voice, we aim to achieve full hands and vision free interaction with mobile devices. In this article, we describe this vision and present the design of a prototype, inspired by that vision, of a text messaging application. The article also presents a user study where the suitability of the proposed approach was assessed, and a performance comparison between our prototype and existing SMS applications was conducted. Study participants received positively the prototype, which also supported better performance in tasks that involved text editing.

  7. A System for Multimodal Interaction with Kinect-Enabled Virtual Windows

    Directory of Open Access Journals (Sweden)

    Ana M. Bernardos

    2015-11-01

    Full Text Available Commercial off-the-shelf gaming devices (e.g. such as Kinect are demonstrating to have a great potential beyond their initial service purpose. In particular, when integrated within the environment or as part of smart objects, peripheral COTS for gaming may facilitate the definition of novel interaction methods, particularly applicable to smart spaces service concepts. In this direction, this paper describes a system prototype that makes possible to deliver multimodal interaction with the media contents in a Virtual Window. Using a Kinect device, the Interactive Window itself adjusts the video clipping to the real time perspective of the user – who can freely move within the sensor coverage are. On the clipped video, the user is able to select objects by pointing at meaningful image sections and to initiate actions related to them. Voice orders may also complete the interaction when necessary. Although implemented for smart spaces, the service concept can also be applied to learning, remote control processes or teleconference.

  8. A 3D character animation engine for multimodal interaction on mobile devices

    Science.gov (United States)

    Sandali, Enrico; Lavagetto, Fabio; Pisano, Paolo

    2005-03-01

    Talking virtual characters are graphical simulations of real or imaginary persons that enable natural and pleasant multimodal interaction with the user, by means of voice, eye gaze, facial expression and gestures. This paper presents an implementation of a 3D virtual character animation and rendering engine, compliant with the MPEG-4 standard, running on Symbian-based SmartPhones. Real-time animation of virtual characters on mobile devices represents a challenging task, since many limitations must be taken into account with respect to processing power, graphics capabilities, disk space and execution memory size. The proposed optimization techniques allow to overcome these issues, guaranteeing a smooth and synchronous animation of facial expressions and lip movements on mobile phones such as Sony-Ericsson's P800 and Nokia's 6600. The animation engine is specifically targeted to the development of new "Over The Air" services, based on embodied conversational agents, with applications in entertainment (interactive story tellers), navigation aid (virtual guides to web sites and mobile services), news casting (virtual newscasters) and education (interactive virtual teachers).

  9. Tunable-Range, Photon-Mediated Atomic Interactions in Multimode Cavity QED

    Directory of Open Access Journals (Sweden)

    Varun D. Vaidya

    2018-01-01

    Full Text Available Optical cavity QED provides a platform with which to explore quantum many-body physics in driven-dissipative systems. Single-mode cavities provide strong, infinite-range photon-mediated interactions among intracavity atoms. However, these global all-to-all couplings are limiting from the perspective of exploring quantum many-body physics beyond the mean-field approximation. The present work demonstrates that local couplings can be created using multimode cavity QED. This is established through measurements of the threshold of a superradiant, self-organization phase transition versus atomic position. Specifically, we experimentally show that the interference of near-degenerate cavity modes leads to both a strong and tunable-range interaction between Bose-Einstein condensates (BECs trapped within the cavity. We exploit the symmetry of a confocal cavity to measure the interaction between real BECs and their virtual images without unwanted contributions arising from the merger of real BECs. Atom-atom coupling may be tuned from short range to long range. This capability paves the way toward future explorations of exotic, strongly correlated systems such as quantum liquid crystals and driven-dissipative spin glasses.

  10. Multi-Modal, Multi-Touch Interaction with Maps in Disaster Management Applications

    Directory of Open Access Journals (Sweden)

    V. Paelke

    2012-07-01

    Full Text Available Multi-touch interaction has become popular in recent years and impressive advances in technology have been demonstrated, with the presentation of digital maps as a common presentation scenario. However, most existing systems are really technology demonstrators and have not been designed with real applications in mind. A critical factor in the management of disaster situations is the access to current and reliable data. New sensors and data acquisition platforms (e.g. satellites, UAVs, mobile sensor networks have improved the supply of spatial data tremendously. However, in many cases this data is not well integrated into current crisis management systems and the capabilities to analyze and use it lag behind sensor capabilities. Therefore, it is essential to develop techniques that allow the effective organization, use and management of heterogeneous data from a wide variety of data sources. Standard user interfaces are not well suited to provide this information to crisis managers. Especially in dynamic situations conventional cartographic displays and mouse based interaction techniques fail to address the need to review a situation rapidly and act on it as a team. The development of novel interaction techniques like multi-touch and tangible interaction in combination with large displays provides a promising base technology to provide crisis managers with an adequate overview of the situation and to share relevant information with other stakeholders in a collaborative setting. However, design expertise on the use of such techniques in interfaces for real-world applications is still very sparse. In this paper we report on interdisciplinary research with a user and application centric focus to establish real-world requirements, to design new multi-modal mapping interfaces, and to validate them in disaster management applications. Initial results show that tangible and pen-based interaction are well suited to provide an intuitive and visible way to

  11. Model-based acquisition and analysis of multimodal interactions for improving human-robot interaction

    OpenAIRE

    Renner, Patrick; Pfeiffer, Thies

    2014-01-01

    For solving complex tasks cooperatively in close interaction with robots, they need to understand natural human communication. To achieve this, robots could benefit from a deeper understanding of the processes that humans use for successful communication. Such skills can be studied by investigating human face-to-face interactions in complex tasks. In our work the focus lies on shared-space interactions in a path planning task and thus 3D gaze directions and hand movements are of particular in...

  12. Analysis of psychological factors for quality assessment of interactive multimodal service

    Science.gov (United States)

    Yamagishi, Kazuhisa; Hayashi, Takanori

    2005-03-01

    We proposed a subjective quality assessment model for interactive multimodal services. First, psychological factors of an audiovisual communication service were extracted by using the semantic differential (SD) technique and factor analysis. Forty subjects participated in subjective tests and performed point-to-point conversational tasks on a PC-based TV phone that exhibits various network qualities. The subjects assessed those qualities on the basis of 25 pairs of adjectives. Two psychological factors, i.e., an aesthetic feeling and a feeling of activity, were extracted from the results. Then, quality impairment factors affecting these two psychological factors were analyzed. We found that the aesthetic feeling is mainly affected by IP packet loss and video coding bit rate, and the feeling of activity depends on delay time and video frame rate. We then proposed an opinion model derived from the relationships among quality impairment factors, psychological factors, and overall quality. The results indicated that the estimation error of the proposed model is almost equivalent to the statistical reliability of the subjective score. Finally, using the proposed model, we discuss guidelines for quality design of interactive audiovisual communication services.

  13. Best of Affective Computing and Intelligent Interaction 2013 in Multimodal Interactions

    NARCIS (Netherlands)

    Soleymani, Mohammad; Soleymani, M.; Pun, T.; Pun, Thierry; Nijholt, Antinus

    The fifth biannual Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII 2013) was held in Geneva, Switzerland. This conference featured the recent advancement in affective computing and relevant applications in education, entertainment and health. A number of

  14. Introducing the Interactive Model for the Training of Audiovisual Translators and Analysis of Multimodal Texts

    Directory of Open Access Journals (Sweden)

    Pietro Luigi Iaia

    2015-07-01

    Full Text Available Abstract – This paper introduces the ‘Interactive Model’ of audiovisual translation developed in the context of my PhD research on the cognitive-semantic, functional and socio-cultural features of the Italian-dubbing translation of a corpus of humorous texts. The Model is based on two interactive macro-phases – ‘Multimodal Critical Analysis of Scripts’ (MuCrAS and ‘Multimodal Re-Textualization of Scripts’ (MuReTS. Its construction and application are justified by a multidisciplinary approach to the analysis and translation of audiovisual texts, so as to focus on the linguistic and extralinguistic dimensions affecting both the reception of source texts and the production of target ones (Chaume 2004; Díaz Cintas 2004. By resorting to Critical Discourse Analysis (Fairclough 1995, 2001, to a process-based approach to translation and to a socio-semiotic analysis of multimodal texts (van Leeuwen 2004; Kress and van Leeuwen 2006, the Model is meant to be applied to the training of audiovisual translators and discourse analysts in order to help them enquire into the levels of pragmalinguistic equivalence between the source and the target versions. Finally, a practical application shall be discussed, detailing the Italian rendering of a comic sketch from the American late-night talk show Conan.Abstract – Questo studio introduce il ‘Modello Interattivo’ di traduzione audiovisiva sviluppato durante il mio dottorato di ricerca incentrato sulle caratteristiche cognitivo-semantiche, funzionali e socio-culturali della traduzione italiana per il doppiaggio di un corpus di testi comici. Il Modello è costituito da due fasi: la prima, di ‘Analisi critica e multimodale degli script’ (MuCrAS e la seconda, di ‘Ritestualizzazione critica e multimodale degli script’ (MuReTS, e la sua costruzione e applicazione sono frutto di un approccio multidisciplinare all’analisi e traduzione dei testi audiovisivi, al fine di esaminare le

  15. Facial Emotion Recognition Using Context Based Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Priya Metri

    2011-12-01

    Full Text Available Emotions play a crucial role in person to person interaction. In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions is desirable for the computer in several applications especially by observing facial expressions. This paper explores a ways of human-computer interaction that enable the computer to be more aware of the user’s emotional expressions we present a approach for the emotion recognition from a facial expression, hand and body posture. Our model uses multimodal emotion recognition system in which we use two different models for facial expression recognition and for hand and body posture recognition and then combining the result of both classifiers using a third classifier which give the resulting emotion . Multimodal system gives more accurate result than a signal or bimodal system

  16. Interactive natural language acquisition in a multi-modal recurrent neural architecture

    Science.gov (United States)

    Heinrich, Stefan; Wermter, Stefan

    2018-01-01

    For the complex human brain that enables us to communicate in natural language, we gathered good understandings of principles underlying language acquisition and processing, knowledge about sociocultural conditions, and insights into activity patterns in the brain. However, we were not yet able to understand the behavioural and mechanistic characteristics for natural language and how mechanisms in the brain allow to acquire and process language. In bridging the insights from behavioural psychology and neuroscience, the goal of this paper is to contribute a computational understanding of appropriate characteristics that favour language acquisition. Accordingly, we provide concepts and refinements in cognitive modelling regarding principles and mechanisms in the brain and propose a neurocognitively plausible model for embodied language acquisition from real-world interaction of a humanoid robot with its environment. In particular, the architecture consists of a continuous time recurrent neural network, where parts have different leakage characteristics and thus operate on multiple timescales for every modality and the association of the higher level nodes of all modalities into cell assemblies. The model is capable of learning language production grounded in both, temporal dynamic somatosensation and vision, and features hierarchical concept abstraction, concept decomposition, multi-modal integration, and self-organisation of latent representations.

  17. Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.

    Science.gov (United States)

    Vicente, Natalin S; Halloy, Monique

    2017-12-01

    Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.

  18. A hardware and software architecture to deal with multimodal and collaborative interactions in multiuser virtual reality environments

    Science.gov (United States)

    Martin, P.; Tseu, A.; Férey, N.; Touraine, D.; Bourdot, P.

    2014-02-01

    Most advanced immersive devices provide collaborative environment within several users have their distinct head-tracked stereoscopic point of view. Combining with common used interactive features such as voice and gesture recognition, 3D mouse, haptic feedback, and spatialized audio rendering, these environments should faithfully reproduce a real context. However, even if many studies have been carried out on multimodal systems, we are far to definitively solve the issue of multimodal fusion, which consists in merging multimodal events coming from users and devices, into interpretable commands performed by the application. Multimodality and collaboration was often studied separately, despite of the fact that these two aspects share interesting similarities. We discuss how we address this problem, thought the design and implementation of a supervisor that is able to deal with both multimodal fusion and collaborative aspects. The aim of this supervisor is to ensure the merge of user's input from virtual reality devices in order to control immersive multi-user applications. We deal with this problem according to a practical point of view, because the main requirements of this supervisor was defined according to a industrial task proposed by our automotive partner, that as to be performed with multimodal and collaborative interactions in a co-located multi-user environment. In this task, two co-located workers of a virtual assembly chain has to cooperate to insert a seat into the bodywork of a car, using haptic devices to feel collision and to manipulate objects, combining speech recognition and two hands gesture recognition as multimodal instructions. Besides the architectural aspect of this supervisor, we described how we ensure the modularity of our solution that could apply on different virtual reality platforms, interactive contexts and virtual contents. A virtual context observer included in this supervisor in was especially designed to be independent to the

  19. Multimodal approaches for emotion recognition: a survey

    Science.gov (United States)

    Sebe, Nicu; Cohen, Ira; Gevers, Theo; Huang, Thomas S.

    2005-01-01

    Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing-emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user's emotional and attentional expressions. We present the basic research in the field and the recent advances into the emotion recognition from facial, voice, and physiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and we advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.

  20. Multimodal interaction in the perception of impact events displayed via a multichannel audio and simulated structure-borne vibration

    Science.gov (United States)

    Martens, William L.; Woszczyk, Wieslaw

    2005-09-01

    For multimodal display systems in which realistic reproduction of impact events is desired, presenting structure-borne vibration along with multichannel audio recordings has been observed to create a greater sense of immersion in a virtual acoustic environment. Furthermore, there is an increased proportion of reports that the impact event took place within the observer's local area (this is termed ``presence with'' the event, in contrast to ``presence in'' the environment in which the event occurred). While holding the audio reproduction constant, varying the intermodal arrival time and level of mechanically displayed, synthetic whole-body vibration revealed a number of other subjective attributes that depend upon multimodal interaction in the perception of a representative impact event. For example, when the structure-borne component of the displayed impact event arrived 10 to 20 ms later than the airborne component, the intermodal delay was not only tolerated, but gave rise to an increase in the proportion of reports that the impact event had greater power. These results have enabled the refinement of a multimodal simulation in which the manipulation of synthetic whole-body vibration can be used to control perceptual attributes of impact events heard within an acoustic environment reproduced via a multichannel loudspeaker array.

  1. project SENSE : multimodal simulation with full-body real-time verbal and nonverbal interactions

    NARCIS (Netherlands)

    Miri, Hossein; Kolkmeier, Jan; Taylor, Paul Jonathon; Poppe, Ronald; Heylen, Dirk; Poppe, Ronald; Meyer, John-Jules; Veltkamp, Remco; Dastani, Mehdi

    2016-01-01

    This paper presents a multimodal simulation system, project-SENSE, that combines virtual reality and full-body motion capture technologies with real-time verbal and nonverbal communication. We introduce the technical setup and employed hardware and software of a first prototype. We discuss the

  2. Accident sequence analysis of human-computer interface design

    International Nuclear Information System (INIS)

    Fan, C.-F.; Chen, W.-H.

    2000-01-01

    It is important to predict potential accident sequences of human-computer interaction in a safety-critical computing system so that vulnerable points can be disclosed and removed. We address this issue by proposing a Multi-Context human-computer interaction Model along with its analysis techniques, an Augmented Fault Tree Analysis, and a Concurrent Event Tree Analysis. The proposed augmented fault tree can identify the potential weak points in software design that may induce unintended software functions or erroneous human procedures. The concurrent event tree can enumerate possible accident sequences due to these weak points

  3. The Sweet-Home speech and multimodal corpus for home automation interaction

    OpenAIRE

    Vacher , Michel; Lecouteux , Benjamin; Chahuara , Pedro; Portet , François; Meillon , Brigitte; Bonnefond , Nicolas

    2014-01-01

    International audience; Ambient Assisted Living aims at enhancing the quality of life of older and disabled people at home thanks to Smart Homes and Home Automation. However, many studies do not include tests in real settings, because data collection in this domain is very expensive and challenging and because of the few available data sets. The SWEET-H OME multimodal corpus is a dataset recorded in realistic conditions in D OMUS, a fully equipped Smart Home with microphones and home automati...

  4. Human-computer interaction fundamentals and practice

    CERN Document Server

    Kim, Gerard Jounghyun

    2015-01-01

    Introduction What HCI Is and Why It Is Important Principles of HCI     ""Know Thy User""      Understand the Task      Reduce Memory Load      Strive for Consistency      Remind Users and Refresh Their Memory      Prevent Errors/Reversal of Action      Naturalness SummaryReferences Specific HCI Guidelines Guideline Categories Examples of HCI Guidelines      Visual Display Layout (General HCI Design)      Information Structuring and Navigation (General HCI Design)      Taking User Input (General H

  5. Aesthetic Approaches to Human-Computer Interaction

    DEFF Research Database (Denmark)

    This volume consists of revised papers from the First International Workshop on Activity Theory Based Practical Methods for IT Design. The workshop took place in Copenhagen, Denmark, September 2-3, 2004. The particular focus of the workshop was the development of methods based on activity theory ...

  6. Measuring Appeal in Human Computer Interaction

    DEFF Research Database (Denmark)

    Neben, Tillmann; Xiao, Bo Sophia; Lim, Eric T.

    2015-01-01

    Appeal refers to the positive emotional response to an aesthetic, beautiful, or in another way desirable stimulus. It is a recurring topic in information systems (IS) research, and is important for understanding many phenomena of user behavior and decision-making. While past IS research on appeal...

  7. Handbook of human-computer interaction

    National Research Council Canada - National Science Library

    Helander, Martin; Landauer, Thomas K; Prabhu, Prasad V

    1997-01-01

    ... of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior wr...

  8. MIDA - Optimizing control room performance through multi-modal design

    International Nuclear Information System (INIS)

    Ronan, A. M.

    2006-01-01

    Multi-modal interfaces can support the integration of humans with information processing systems and computational devices to maximize the unique qualities that comprise a complex system. In a dynamic environment, such as a nuclear power plant control room, multi-modal interfaces, if designed correctly, can provide complementary interaction between the human operator and the system which can improve overall performance while reducing human error. Developing such interfaces can be difficult for a designer without explicit knowledge of Human Factors Engineering principles. The Multi-modal Interface Design Advisor (MIDA) was developed as a support tool for system designers and developers. It provides design recommendations based upon a combination of Human Factors principles, a knowledge base of historical research, and current interface technologies. MIDA's primary objective is to optimize available multi-modal technologies within a human computer interface in order to balance operator workload with efficient operator performance. The purpose of this paper is to demonstrate MIDA and illustrate its value as a design evaluation tool within the nuclear power industry. (authors)

  9. A multimodal dataset for authoring and editing multimedia content: The MAMEM project

    Directory of Open Access Journals (Sweden)

    Spiros Nikolopoulos

    2017-12-01

    Full Text Available We present a dataset that combines multimodal biosignals and eye tracking information gathered under a human-computer interaction framework. The dataset was developed in the vein of the MAMEM project that aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and Heart rate signals collected from 34 individuals (18 able-bodied and 16 motor-impaired. Data were collected during the interaction with specifically designed interface for web browsing and multimedia content manipulation and during imaginary movement tasks. The presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that would foster the integration of people with severe motor impairments back into society.

  10. The Next Wave: Humans, Computers, and Redefining Reality

    Science.gov (United States)

    Little, William

    2018-01-01

    The Augmented/Virtual Reality (AVR) Lab at KSC is dedicated to " exploration into the growing computer fields of Extended Reality and the Natural User Interface (it is) a proving ground for new technologies that can be integrated into future NASA projects and programs." The topics of Human Computer Interface, Human Computer Interaction, Augmented Reality, Virtual Reality, and Mixed Reality are defined; examples of work being done in these fields in the AVR Lab are given. Current new and future work in Computer Vision, Speech Recognition, and Artificial Intelligence are also outlined.

  11. A new strategic neurosurgical planning tool for brainstem cavernous malformations using interactive computer graphics with multimodal fusion images.

    Science.gov (United States)

    Kin, Taichi; Nakatomi, Hirofumi; Shojima, Masaaki; Tanaka, Minoru; Ino, Kenji; Mori, Harushi; Kunimatsu, Akira; Oyama, Hiroshi; Saito, Nobuhito

    2012-07-01

    In this study, the authors used preoperative simulation employing 3D computer graphics (interactive computer graphics) to fuse all imaging data for brainstem cavernous malformations. The authors evaluated whether interactive computer graphics or 2D imaging correlated better with the actual operative field, particularly in identifying a developmental venous anomaly (DVA). The study population consisted of 10 patients scheduled for surgical treatment of brainstem cavernous malformations. Data from preoperative imaging (MRI, CT, and 3D rotational angiography) were automatically fused using a normalized mutual information method, and then reconstructed by a hybrid method combining surface rendering and volume rendering methods. With surface rendering, multimodality and multithreshold techniques for 1 tissue were applied. The completed interactive computer graphics were used for simulation of surgical approaches and assumed surgical fields. Preoperative diagnostic rates for a DVA associated with brainstem cavernous malformation were compared between conventional 2D imaging and interactive computer graphics employing receiver operating characteristic (ROC) analysis. The time required for reconstruction of 3D images was 3-6 hours for interactive computer graphics. Observation in interactive mode required approximately 15 minutes. Detailed anatomical information for operative procedures, from the craniotomy to microsurgical operations, could be visualized and simulated three-dimensionally as 1 computer graphic using interactive computer graphics. Virtual surgical views were consistent with actual operative views. This technique was very useful for examining various surgical approaches. Mean (±SEM) area under the ROC curve for rate of DVA diagnosis was significantly better for interactive computer graphics (1.000±0.000) than for 2D imaging (0.766±0.091; pcomputer graphics than with 2D images. Interactive computer graphics was also useful in helping to plan the surgical

  12. The case for multimodal analysis of atypical interaction: questions, answers and gaze in play involving a child with autism.

    Science.gov (United States)

    Muskett, Tom; Body, Richard

    2013-01-01

    Conversation analysis (CA) continues to accrue interest within clinical linguistics as a methodology that can enable elucidation of structural and sequential orderliness in interactions involving participants who produce ostensibly disordered communication behaviours. However, it can be challenging to apply CA to re-examine clinical phenomena that have initially been defined in terms of linguistics, as a logical starting point for analysis may be to focus primarily on the organisation of language ("talk") in such interactions. In this article, we argue that CA's methodological power can only be fully exploited in this research context when a multimodal analytic orientation is adopted, where due consideration is given to participants' co-ordinated use of multiple semiotic resources including, but not limited to, talk (e.g., gaze, embodied action, object use and so forth). To evidence this argument, a two-layered analysis of unusual question-answer sequences in a play episode involving a child with autism is presented. It is thereby demonstrated that only when the scope of enquiry is broadened to include gaze and other embodied action can an account be generated of orderliness within these sequences. This finding has important implications for CA's application as a research methodology within clinical linguistics.

  13. Feasibility Study of Increasing Multimodal Interaction between Private and Public Transport Based on the Use of Intellectual Transport Systems and Services

    Directory of Open Access Journals (Sweden)

    Ulrich Weidmann

    2011-04-01

    Full Text Available The introduction of intellectual transport systems and services (ITS into the public and private transport sectors is closely connected with the development of multimodality in transport system (particularly, in towns and their suburbs. Taking into consideration the problems of traffic jams, the need for increasing the efficiency of power consumption and reducing the amount of burnt gases ejected into the air and the harmful effect of noise, the use of multimodal transport concept has been growing fast recently in most cities. It embraces a system of integrated tickets, the infrastructure, allowing a passenger to leave a car or a bike near a public transport station and to continue his/her travel by public transport (referred to as ‘Park&Ride’, ‘Bike&Ride’, as well as, real-time information system, universal design, and computer-aided traffic control. These concepts seem to be even more effective, when multimodal intellectual transport systems and services (ITS are introduced. In Lithuania, ITS is not widely used in passenger transportation, though its potential is great, particularly, taking into consideration the critical state of the capacity of public transport infrastructure. The paper considers the possibilities of increasing the effectiveness of public transport system ITS by increasing its interaction with private transport in the context of multimodal concept realization.Article in Lithuanian

  14. Human computer confluence applied in healthcare and rehabilitation.

    Science.gov (United States)

    Viaud-Delmon, Isabelle; Gaggioli, Andrea; Ferscha, Alois; Dunne, Stephen

    2012-01-01

    Human computer confluence (HCC) is an ambitious research program studying how the emerging symbiotic relation between humans and computing devices can enable radically new forms of sensing, perception, interaction, and understanding. It is an interdisciplinary field, bringing together researches from horizons as various as pervasive computing, bio-signals processing, neuroscience, electronics, robotics, virtual & augmented reality, and provides an amazing potential for applications in medicine and rehabilitation.

  15. Predicting protein-protein interactions from multimodal biological data sources via nonnegative matrix tri-factorization.

    Science.gov (United States)

    Wang, Hua; Huang, Heng; Ding, Chris; Nie, Feiping

    2013-04-01

    Protein interactions are central to all the biological processes and structural scaffolds in living organisms, because they orchestrate a number of cellular processes such as metabolic pathways and immunological recognition. Several high-throughput methods, for example, yeast two-hybrid system and mass spectrometry method, can help determine protein interactions, which, however, suffer from high false-positive rates. Moreover, many protein interactions predicted by one method are not supported by another. Therefore, computational methods are necessary and crucial to complete the interactome expeditiously. In this work, we formulate the problem of predicting protein interactions from a new mathematical perspective--sparse matrix completion, and propose a novel nonnegative matrix factorization (NMF)-based matrix completion approach to predict new protein interactions from existing protein interaction networks. Through using manifold regularization, we further develop our method to integrate different biological data sources, such as protein sequences, gene expressions, protein structure information, etc. Extensive experimental results on four species, Saccharomyces cerevisiae, Drosophila melanogaster, Homo sapiens, and Caenorhabditis elegans, have shown that our new methods outperform related state-of-the-art protein interaction prediction methods.

  16. Learning multimodal dictionaries.

    Science.gov (United States)

    Monaci, Gianluca; Jost, Philippe; Vandergheynst, Pierre; Mailhé, Boris; Lesage, Sylvain; Gribonval, Rémi

    2007-09-01

    Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans are used to integrate at each instant perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can be also extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when considering the signals independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm for iteratively learning multimodal generating functions that can be shifted at all positions in the signal is proposed, as well. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences and it is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips allows to effectively localize the sound source on the video in presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.

  17. A multimodal virtual reality interface for 3D interaction with VTK

    NARCIS (Netherlands)

    Kok, A.J.F.; Liere, van R.

    2007-01-01

    The object-oriented visualization Toolkit (VTK) is widely used for scientific visualization. VTK is a visualization library that provides a large number of functions for presenting three-dimensional data. Interaction with the visualized data is controlled with two-dimensional input devices, such as

  18. Multimodal Interaction in Ambient Intelligence Environments Using Speech, Localization and Robotics

    Science.gov (United States)

    Galatas, Georgios

    2013-01-01

    An Ambient Intelligence Environment is meant to sense and respond to the presence of people, using its embedded technology. In order to effectively sense the activities and intentions of its inhabitants, such an environment needs to utilize information captured from multiple sensors and modalities. By doing so, the interaction becomes more natural…

  19. Multimodal feedback for finger-based interaction in mobile augmented reality

    NARCIS (Netherlands)

    Hürst, W.O.; Vriens, Kevin

    2016-01-01

    Mobile or handheld augmented reality uses a smartphone's live video stream and enriches it with superimposed graphics. In such scenarios, tracking one's fingers in front of the camera and interpreting these traces as gestures offers interesting perspectives for interaction. Yet, the lack of haptic

  20. An Evaluation of Multimodal Interactions with Technology while Learning Science Concepts

    Science.gov (United States)

    Anastopoulou, Stamatina; Sharples, Mike; Baber, Chris

    2011-01-01

    This paper explores the value of employing multiple modalities to facilitate science learning with technology. In particular, it is argued that when multiple modalities are employed, learners construct strong relations between physical movement and visual representations of motion. Body interactions with visual representations, enabled by…

  1. Augmented reality as a tool for linguistic research: Intercepting and manipulating multimodal interaction

    OpenAIRE

    Pitsch, Karola; Neumann, Alexander; Schnier, Christian; Hermann, Thomas

    2013-01-01

    We suggest that an Augmented Reality (AR) system for coupled interaction partners provides a new tool for linguistic research that allows to manipulate the coparticipants’ real-time perception and action. It encompasses novel facilities for recording heterogeneous sensor-rich data sets to be accessed in parallel with qualitative/manual and quantitative/computational methods.

  2. The collision of multimode dromions and a firewall in the two-component long-wave-short-wave resonance interaction equation

    International Nuclear Information System (INIS)

    Radha, R; Kumar, C Senthil; Lakshmanan, M; Gilson, C R

    2009-01-01

    In this communication, we investigate the two-component long-wave-short-wave resonance interaction equation and show that it admits the Painleve property. We then suitably exploit the recently developed truncated Painleve approach to generate exponentially localized solutions for the short-wave components S (1) and S (2) while the long wave L admits a line soliton only. The exponentially localized solutions driving the short waves S (1) and S (2) in the y-direction are endowed with different energies (intensities) and are called 'multimode dromions'. We also observe that the multimode dromions suffer from intramodal inelastic collision while the existence of a firewall across the modes prevents the switching of energy between the modes. (fast track communication)

  3. Navigating the fifth dimension: new concepts in interactive multimodality and multidimensional image navigation

    Science.gov (United States)

    Ratib, Osman; Rosset, Antoine; Dahlbom, Magnus; Czernin, Johannes

    2005-04-01

    Display and interpretation of multi dimensional data obtained from the combination of 3D data acquired from different modalities (such as PET-CT) require complex software tools allowing the user to navigate and modify the different image parameters. With faster scanners it is now possible to acquire dynamic images of a beating heart or the transit of a contrast agent adding a fifth dimension to the data. We developed a DICOM-compliant software for real time navigation in very large sets of 5 dimensional data based on an intuitive multidimensional jog-wheel widely used by the video-editing industry. The software, provided under open source licensing, allows interactive, single-handed, navigation through 3D images while adjusting blending of image modalities, image contrast and intensity and the rate of cine display of dynamic images. In this study we focused our effort on the user interface and means for interactively navigating in these large data sets while easily and rapidly changing multiple parameters such as image position, contrast, intensity, blending of colors, magnification etc. Conventional mouse-driven user interface requiring the user to manipulate cursors and sliders on the screen are too cumbersome and slow. We evaluated several hardware devices and identified a category of multipurpose jogwheel device that is used in the video-editing industry that is particularly suitable for rapidly navigating in five dimensions while adjusting several display parameters interactively. The application of this tool will be demonstrated in cardiac PET-CT imaging and functional cardiac MRI studies.

  4. Multimodal training between agents

    DEFF Research Database (Denmark)

    Rehm, Matthias

    2003-01-01

    In the system Locator1, agents are treated as individual and autonomous subjects that are able to adapt to heterogenous user groups. Applying multimodal information from their surroundings (visual and linguistic), they acquire the necessary concepts for a successful interaction. This approach has...

  5. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  6. Feedback Loops in Communication and Human Computing

    NARCIS (Netherlands)

    op den Akker, Hendrikus J.A.; Heylen, Dirk K.J.; Pantic, Maja; Pentland, Alex; Nijholt, Antinus; Huang, Thomas S.

    Building systems that are able to analyse communicative behaviours or take part in conversations requires a sound methodology in which the complex organisation of conversations is understood and tested on real-life samples. The data-driven approaches to human computing not only have a value for the

  7. Multimodal user interfaces to improve social integration of elderly and mobility impaired.

    Science.gov (United States)

    Dias, Miguel Sales; Pires, Carlos Galinho; Pinto, Fernando Miguel; Teixeira, Vítor Duarte; Freitas, João

    2012-01-01

    Technologies for Human-Computer Interaction (HCI) and Communication have evolved tremendously over the past decades. However, citizens such as mobility impaired or elderly or others, still face many difficulties interacting with communication services, either due to HCI issues or intrinsic design problems with the services. In this paper we start by presenting the results of two user studies, the first one conducted with a group of mobility impaired users, comprising paraplegic and quadriplegic individuals; and the second one with elderly. The study participants carried out a set of tasks with a multimodal (speech, touch, gesture, keyboard and mouse) and multi-platform (mobile, desktop) system, offering an integrated access to communication and entertainment services, such as email, agenda, conferencing, instant messaging and social media, referred to as LHC - Living Home Center. The system was designed to take into account the requirements captured from these users, with the objective of evaluating if the adoption of multimodal interfaces for audio-visual communication and social media services, could improve the interaction with such services. Our study revealed that a multimodal prototype system, offering natural interaction modalities, especially supporting speech and touch, can in fact improve access to the presented services, contributing to the reduction of social isolation of mobility impaired, as well as elderly, and improving their digital inclusion.

  8. INTERACT

    DEFF Research Database (Denmark)

    Jochum, Elizabeth; Borggreen, Gunhild; Murphey, TD

    This paper considers the impact of visual art and performance on robotics and human-computer interaction and outlines a research project that combines puppetry and live performance with robotics. Kinesics—communication through movement—is the foundation of many theatre and performance traditions ...

  9. Evaluation of Binocular Eye Trackers and Algorithms for 3D Gaze Interaction in Virtual Reality Environments

    OpenAIRE

    Thies Pfeiffer; Ipke Wachsmuth; Marc E. Latoschik

    2009-01-01

    Tracking user's visual attention is a fundamental aspect in novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communications with virtual and real agents greatly benefit from the analysis of the user's visual attention as a vital source for deictic references or turn-taking signals. Current approaches to determine visual attention rely primarily on monocular eye trackers. Hence they are restricted to the interpretation of...

  10. Human-computer interface incorporating personal and application domains

    Science.gov (United States)

    Anderson, Thomas G [Albuquerque, NM

    2011-03-29

    The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.

  11. Multimodality and Ambient Intelligence

    NARCIS (Netherlands)

    Nijholt, Antinus; Verhaegh, W.; Aarts, E.; Korst, J.

    2004-01-01

    In this chapter we discuss multimodal interface technology. We present eexamples of multimodal interfaces and show problems and opportunities. Fusion of modalities is discussed and some roadmap discussions on research in multimodality are summarized. This chapter also discusses future developments

  12. An innovative multimodal virtual platform for communication with devices in a natural way

    Science.gov (United States)

    Kinkar, Chhayarani R.; Golash, Richa; Upadhyay, Akhilesh R.

    2012-03-01

    As technology grows people are diverted and are more interested in communicating with machine or computer naturally. This will make machine more compact and portable by avoiding remote, keyboard etc. also it will help them to live in an environment free from electromagnetic waves. This thought has made 'recognition of natural modality in human computer interaction' a most appealing and promising research field. Simultaneously it has been observed that using single mode of interaction limit the complete utilization of commands as well as data flow. In this paper a multimodal platform, where out of many natural modalities like eye gaze, speech, voice, face etc. human gestures are combined with human voice is proposed which will minimize the mean square error. This will loosen the strict environment needed for accurate and robust interaction while using single mode. Gesture complement Speech, gestures are ideal for direct object manipulation and natural language is used for descriptive tasks. Human computer interaction basically requires two broad sections recognition and interpretation. Recognition and interpretation of natural modality in complex binary instruction is a tough task as it integrate real world to virtual environment. The main idea of the paper is to develop a efficient model for data fusion coming from heterogeneous sensors, camera and microphone. Through this paper we have analyzed that the efficiency is increased if heterogeneous data (image & voice) is combined at feature level using artificial intelligence. The long term goal of this paper is to design a robust system for physically not able or having less technical knowledge.

  13. Multimodal label-free microscopy

    Directory of Open Access Journals (Sweden)

    Nicolas Pavillon

    2014-09-01

    Full Text Available This paper reviews the different multimodal applications based on a large extent of label-free imaging modalities, ranging from linear to nonlinear optics, while also including spectroscopic measurements. We put specific emphasis on multimodal measurements going across the usual boundaries between imaging modalities, whereas most multimodal platforms combine techniques based on similar light interactions or similar hardware implementations. In this review, we limit the scope to focus on applications for biology such as live cells or tissues, since by their nature of being alive or fragile, we are often not free to take liberties with the image acquisition times and are forced to gather the maximum amount of information possible at one time. For such samples, imaging by a given label-free method usually presents a challenge in obtaining sufficient optical signal or is limited in terms of the types of observable targets. Multimodal imaging is then particularly attractive for these samples in order to maximize the amount of measured information. While multimodal imaging is always useful in the sense of acquiring additional information from additional modes, at times it is possible to attain information that could not be discovered using any single mode alone, which is the essence of the progress that is possible using a multimodal approach.

  14. Human-computer interface glove using flexible piezoelectric sensors

    Science.gov (United States)

    Cha, Youngsu; Seo, Jeonggyu; Kim, Jun-Sik; Park, Jung-Min

    2017-05-01

    In this note, we propose a human-computer interface glove based on flexible piezoelectric sensors. We select polyvinylidene fluoride as the piezoelectric material for the sensors because of advantages such as a steady piezoelectric characteristic and good flexibility. The sensors are installed in a fabric glove by means of pockets and Velcro bands. We detect changes in the angles of the finger joints from the outputs of the sensors, and use them for controlling a virtual hand that is utilized in virtual object manipulation. To assess the sensing ability of the piezoelectric sensors, we compare the processed angles from the sensor outputs with the real angles from a camera recoding. With good agreement between the processed and real angles, we successfully demonstrate the user interaction system with the virtual hand and interface glove based on the flexible piezoelectric sensors, for four hand motions: fist clenching, pinching, touching, and grasping.

  15. Investigation and evaluation into the usability of human-computer interfaces using a typical CAD system

    Energy Technology Data Exchange (ETDEWEB)

    Rickett, J D

    1987-01-01

    This research program covers three topics relating to the human-computer interface namely, voice recognition, tools and techniques for evaluation, and user and interface modeling. An investigation into the implementation of voice-recognition technologies examines how voice recognizers may be evaluated in commercial software. A prototype system was developed with the collaboration of FEMVIEW Ltd. (marketing a CAD package). A theoretical approach to evaluation leads to the hypothesis that human-computer interaction is affected by personality, influencing types of dialogue, preferred methods for providing helps, etc. A user model based on personality traits, or habitual-behavior patterns (HBP) is presented. Finally, a practical framework is provided for the evaluation of human-computer interfaces. It suggests that evaluation is an integral part of design and that the iterative use of evaluation techniques throughout the conceptualization, design, implementation and post-implementation stages will ensure systems that satisfy the needs of the users and fulfill the goal of usability.

  16. The Bursts and Lulls of Multimodal Interaction: Temporal Distributions of Behavior Reveal Differences Between Verbal and Non-Verbal Communication.

    Science.gov (United States)

    Abney, Drew H; Dale, Rick; Louwerse, Max M; Kello, Christopher T

    2018-04-06

    Recent studies of naturalistic face-to-face communication have demonstrated coordination patterns such as the temporal matching of verbal and non-verbal behavior, which provides evidence for the proposal that verbal and non-verbal communicative control derives from one system. In this study, we argue that the observed relationship between verbal and non-verbal behaviors depends on the level of analysis. In a reanalysis of a corpus of naturalistic multimodal communication (Louwerse, Dale, Bard, & Jeuniaux, ), we focus on measuring the temporal patterns of specific communicative behaviors in terms of their burstiness. We examined burstiness estimates across different roles of the speaker and different communicative modalities. We observed more burstiness for verbal versus non-verbal channels, and for more versus less informative language subchannels. Using this new method for analyzing temporal patterns in communicative behaviors, we show that there is a complex relationship between verbal and non-verbal channels. We propose a "temporal heterogeneity" hypothesis to explain how the language system adapts to the demands of dialog. Copyright © 2018 Cognitive Science Society, Inc.

  17. Overview Electrotactile Feedback for Enhancing Human Computer Interface

    Science.gov (United States)

    Pamungkas, Daniel S.; Caesarendra, Wahyu

    2018-04-01

    To achieve effective interaction between a human and a computing device or machine, adequate feedback from the computing device or machine is required. Recently, haptic feedback is increasingly being utilised to improve the interactivity of the Human Computer Interface (HCI). Most existing haptic feedback enhancements aim at producing forces or vibrations to enrich the user’s interactive experience. However, these force and/or vibration actuated haptic feedback systems can be bulky and uncomfortable to wear and only capable of delivering a limited amount of information to the user which can limit both their effectiveness and the applications they can be applied to. To address this deficiency, electrotactile feedback is used. This involves delivering haptic sensations to the user by electrically stimulating nerves in the skin via electrodes placed on the surface of the skin. This paper presents a review and explores the capability of electrotactile feedback for HCI applications. In addition, a description of the sensory receptors within the skin for sensing tactile stimulus and electric currents alsoseveral factors which influenced electric signal to transmit to the brain via human skinare explained.

  18. Individual Difference Effects in Human-Computer Interaction

    Science.gov (United States)

    1991-10-01

    service staff. The subjects who participated in the experiment constituted the organization’s decision network . The subjects were presented the same...at the centek of an information-collection network . From the center of this network , the person can access and conmunicate with a variety of...with reverse video option) in slot 3; a software-controlled switch (i.e., the Videx " Softswitch ") for switching between 40- and 80-column display

  19. Questioning Mechanisms During Tutoring, Conversation, and Human-Computer Interaction

    Science.gov (United States)

    1993-06-01

    Robert J. Seidel Dept. Psicologia Basica OERI US Army Research Institute Univ. Barcelona 555 New Jersey Ave., NW 5001 Eisenhower Ave. 08028 Barcelona...having a different set of random starting weights in the weight space and running the simulation 10 times. As a crude, but conservative estimate, a GOP

  20. Impact of Cognitive Architectures on Human-Computer Interaction

    Science.gov (United States)

    2014-09-01

    activation, reinforced learning, emotion, semantic memory , episodic memory , and visual imagery.12 In 2010 Rosenbloom created a variant of the Soar...being added to almost every new version. In 2004 Nuxoll and Laird added episodic memory to the Soar architecture.11 In 2008 Laird presented...York (NY): Psychology Press; 2014; p. 1–50. 11. Nuxoll A, Laird JE. A cognitive model of episodic memory integrated with a general cognitive

  1. Brain-Computer Interfaces Revolutionizing Human-Computer Interaction

    CERN Document Server

    Graimann, Bernhard; Allison, Brendan

    2010-01-01

    A brain-computer interface (BCI) establishes a direct output channel between the human brain and external devices. BCIs infer user intent via direct measures of brain activity and thus enable communication and control without movement. This book, authored by experts in the field, provides an accessible introduction to the neurophysiological and signal-processing background required for BCI, presents state-of-the-art non-invasive and invasive approaches, gives an overview of current hardware and software solutions, and reviews the most interesting as well as new, emerging BCI applications. The book is intended not only for students and young researchers, but also for newcomers and other readers from diverse backgrounds keen to learn about this vital scientific endeavour.

  2. Visualization of hierarchically structured information for human-computer interaction

    Energy Technology Data Exchange (ETDEWEB)

    Cheon, Suh Hyun; Lee, J. K.; Choi, I. K.; Kye, S. C.; Lee, N. K. [Dongguk University, Seoul (Korea)

    2001-11-01

    Visualization techniques can be used to support operator's information navigation tasks on the system especially consisting of an enormous volume of information, such as operating information display system and computerized operating procedure system in advanced control room of nuclear power plants. By offering an easy understanding environment of hierarchically structured information, these techniques can reduce the operator's supplementary navigation task load. As a result of that, operators can pay more attention on the primary tasks and ultimately improve the cognitive task performance. In this report, an interface was designed and implemented using hyperbolic visualization technique, which is expected to be applied as a means of optimizing operator's information navigation tasks. 15 refs., 19 figs., 32 tabs. (Author)

  3. Questioning Mechanisms during Tutoring, Conversation, and Human-Computer Interaction

    Science.gov (United States)

    1993-06-01

    Dialogue Paterris Researchers in discourse processing. sociology, and sociolinguistics have analy/ed prominent dialogue patterns (Clark & Schaefer...78284-7801 Dr. M. Diane Langston Dr. Marcy Lansman Richard Lanterman ICL North America University of North Carolina Commandant (G-PWP) 11490 Commerce

  4. Human-Computer Interaction Software: Lessons Learned, Challenges Ahead

    Science.gov (United States)

    1989-01-01

    domain communi- Iatelligent s t s s Me cation. Users familiar with problem Inteligent support systes. High-func- anddomains but inxperienced with comput...8217i. April 1987, pp. 7.3-78. His research interests include artificial intel- Creating better HCI softw-are will have a 8. S.K Catrd. I.P. Moran. arid

  5. Eye Detection and Tracking for Intelligent Human Computer Interaction

    National Research Council Canada - National Science Library

    Yin, Lijun

    2006-01-01

    .... In this project, Dr. Lijun Yin has developed a new algorithm for detecting and tracking eyes under an unconstrained environment using a single ordinary camera or webcam. The new algorithm is advantageous in that it works in a non-intrusive way based on a socalled Topographic Context approach.

  6. The Review of Visual Analysis Methods of Multi-modal Spatio-temporal Big Data

    Directory of Open Access Journals (Sweden)

    ZHU Qing

    2017-10-01

    Full Text Available The visual analysis of spatio-temporal big data is not only the state-of-art research direction of both big data analysis and data visualization, but also the core module of pan-spatial information system. This paper reviews existing visual analysis methods at three levels:descriptive visual analysis, explanatory visual analysis and exploratory visual analysis, focusing on spatio-temporal big data's characteristics of multi-source, multi-granularity, multi-modal and complex association.The technical difficulties and development tendencies of multi-modal feature selection, innovative human-computer interaction analysis and exploratory visual reasoning in the visual analysis of spatio-temporal big data were discussed. Research shows that the study of descriptive visual analysis for data visualizationis is relatively mature.The explanatory visual analysis has become the focus of the big data analysis, which is mainly based on interactive data mining in a visual environment to diagnose implicit reason of problem. And the exploratory visual analysis method needs a major break-through.

  7. Human Computation An Integrated Approach to Learning from the Crowd

    CERN Document Server

    Law, Edith

    2011-01-01

    Human computation is a new and evolving research area that centers around harnessing human intelligence to solve computational problems that are beyond the scope of existing Artificial Intelligence (AI) algorithms. With the growth of the Web, human computation systems can now leverage the abilities of an unprecedented number of people via the Web to perform complex computation. There are various genres of human computation applications that exist today. Games with a purpose (e.g., the ESP Game) specifically target online gamers who generate useful data (e.g., image tags) while playing an enjoy

  8. Nucleophosmin integrates within the nucleolus via multi-modal interactions with proteins displaying R-rich linear motifs and rRNA.

    Science.gov (United States)

    Mitrea, Diana M; Cika, Jaclyn A; Guy, Clifford S; Ban, David; Banerjee, Priya R; Stanley, Christopher B; Nourse, Amanda; Deniz, Ashok A; Kriwacki, Richard W

    2016-02-02

    The nucleolus is a membrane-less organelle formed through liquid-liquid phase separation of its components from the surrounding nucleoplasm. Here, we show that nucleophosmin (NPM1) integrates within the nucleolus via a multi-modal mechanism involving multivalent interactions with proteins containing arginine-rich linear motifs (R-motifs) and ribosomal RNA (rRNA). Importantly, these R-motifs are found in canonical nucleolar localization signals. Based on a novel combination of biophysical approaches, we propose a model for the molecular organization within liquid-like droplets formed by the N-terminal domain of NPM1 and R-motif peptides, thus providing insights into the structural organization of the nucleolus. We identify multivalency of acidic tracts and folded nucleic acid binding domains, mediated by N-terminal domain oligomerization, as structural features required for phase separation of NPM1 with other nucleolar components in vitro and for localization within mammalian nucleoli. We propose that one mechanism of nucleolar localization involves phase separation of proteins within the nucleolus.

  9. Proposition d’une grille d’analyse d’interactions tutorales dans un dispositif multimodal en ligne

    Directory of Open Access Journals (Sweden)

    Vincent Caroline

    2012-07-01

    Full Text Available Cet article présente une recherche menée sur un corpus d’interactions pédagogiques synchrones entre des tuteurs novices et des apprenants de français. Les interactants communiquent via le logiciel Skype, les tuteurs et les apprenants disposent ainsi de trois modalités : le chat, l’audio et la vidéo, pouvant ainsi écrire, voir, entendre et parler à leur(s interlocuteur(s. Lesquelles vont être utilisés par chaque tuteur, dans quelle proportion et avec quel "degré investissement" ? Quelles conséquences ces différents choix d’utilisation ont sur la nature de l’interaction ? Nous émettons l’hypothèse que des corrélations pourraient être établies à deux niveaux : Premièrement, le profil initial des tuteurs (compétences individuelles, expériences professionnelles en présentiel ou à distance, habitude de l’environnement informatique et le contexte des interactions (perturbations extérieures, problèmes techniques, type de tâche, besoin exprimé des apprenants ont une influence sur la façon dont le tuteur utilise les outils à disposition. Deuxièmement, les choix d’utilisations de ces différentes modalités influencent la nature de l’interaction et la relation entre tuteurs et apprenants. Nous nous proposons donc dans cet article de présenter la grille d’analyse que nous avons mise au point dans un double objectif : d’une part analyser un corpus d’interactions pédagogiques en ligne en prenant en charge la globalité de leur multimodalité, et d’autre part participer à la réflexion sur la méthodologie des corpus multimodaux et la formation des tuteurs à la pédagogie synchrone en ligne.

  10. Working memory and referential communication – multimodal aspects of interaction between children with sensorineural hearing impairment and normal hearing peers

    Directory of Open Access Journals (Sweden)

    Olof eSandgren

    2015-03-01

    Full Text Available Whereas the language development of children with sensorineural hearing impairment (SNHI has repeatedly been shown to differ from that of peers with normal hearing (NH, few studies have used an experimental approach to investigate the consequences on everyday communicative interaction. This mini review gives an overview of a range of studies on children with SNHI and NH exploring intra- and inter-individual cognitive and linguistic systems during communication.Over the last decade, our research group has studied the conversational strategies of Swedish speaking children and adolescents with SNHI and NH using referential communication, an experimental analogue to problem-solving in the classroom. We have established verbal and nonverbal control and validation mechanisms, related to working memory capacity (WMC and phonological short term memory (PSTM. We present main findings and future directions relevant for the field of cognitive hearing science and for the clinical and school-based management of children and adolescents with SNHI.

  11. A multimodal interface to resolve the Midas-Touch problem in gaze controlled wheelchair.

    Science.gov (United States)

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, KongFatt; Prasad, Girijesh

    2017-07-01

    Human-computer interaction (HCI) research has been playing an essential role in the field of rehabilitation. The usability of the gaze controlled powered wheelchair is limited due to Midas-Touch problem. In this work, we propose a multimodal graphical user interface (GUI) to control a powered wheelchair that aims to help upper-limb mobility impaired people in daily living activities. The GUI was designed to include a portable and low-cost eye-tracker and a soft-switch wherein the wheelchair can be controlled in three different ways: 1) with a touchpad 2) with an eye-tracker only, and 3) eye-tracker with soft-switch. The interface includes nine different commands (eight directions and stop) and integrated within a powered wheelchair system. We evaluated the performance of the multimodal interface in terms of lap-completion time, the number of commands, and the information transfer rate (ITR) with eight healthy participants. The analysis of the results showed that the eye-tracker with soft-switch provides superior performance with an ITR of 37.77 bits/min among the three different conditions (pusers.

  12. Evaluation of multimodal ground cues

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Lecuyer, Anatole; Serafin, Stefania

    2012-01-01

    This chapter presents an array of results on the perception of ground surfaces via multiple sensory modalities,with special attention to non visual perceptual cues, notably those arising from audition and haptics, as well as interactions between them. It also reviews approaches to combining...... synthetic multimodal cues, from vision, haptics, and audition, in order to realize virtual experiences of walking on simulated ground surfaces or other features....

  13. Human Computing and Machine Understanding of Human Behavior: A Survey

    NARCIS (Netherlands)

    Pantic, Maja; Pentland, Alex; Nijholt, Antinus; Huang, Thomas; Quek, F.; Yang, Yie

    2006-01-01

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next generation computing, which we will call human computing, should

  14. Multimodal fluorescence imaging spectroscopy

    NARCIS (Netherlands)

    Stopel, Martijn H W; Blum, Christian; Subramaniam, Vinod; Engelborghs, Yves; Visser, Anthonie J.W.G.

    2014-01-01

    Multimodal fluorescence imaging is a versatile method that has a wide application range from biological studies to materials science. Typical observables in multimodal fluorescence imaging are intensity, lifetime, excitation, and emission spectra which are recorded at chosen locations at the sample.

  15. Multimodality in organization studies

    DEFF Research Database (Denmark)

    Van Leeuwen, Theo

    2017-01-01

    This afterword reviews the chapters in this volume and reflects on the synergies between organization and management studies and multimodality studies that emerge from the volume. These include the combination of strong sociological theorizing and detailed multimodal analysis, a focus on material...

  16. An intelligent multi-media human-computer dialogue system

    Science.gov (United States)

    Neal, J. G.; Bettinger, K. E.; Byoun, J. S.; Dobes, Z.; Thielman, C. Y.

    1988-01-01

    Sophisticated computer systems are being developed to assist in the human decision-making process for very complex tasks performed under stressful conditions. The human-computer interface is a critical factor in these systems. The human-computer interface should be simple and natural to use, require a minimal learning period, assist the user in accomplishing his task(s) with a minimum of distraction, present output in a form that best conveys information to the user, and reduce cognitive load for the user. In pursuit of this ideal, the Intelligent Multi-Media Interfaces project is devoted to the development of interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. The objective of the project is to develop interface technology that uses the media/modalities intelligently in a flexible, context-sensitive, and highly integrated manner modelled after the manner in which humans converse in simultaneous coordinated multiple modalities. As part of the project, a knowledge-based interface system, called CUBRICON (CUBRC Intelligent CONversationalist) is being developed as a research prototype. The application domain being used to drive the research is that of military tactical air control.

  17. Interaction Design in a Context of Rehabilitation

    DEFF Research Database (Denmark)

    Høgn, Pia; Lykke, Marianne; Missel, Pernille

    2016-01-01

    Information and communication technology(ICT) mediated learning processes for people suffering from aphasia after an acquired brain injury are a relatively uncovered area of research. Helmer-Nielsen et al. (2014)[1] report about projects that study the effect of ICT-mediated speech therapists...... suffering from aphasia, with the purpose of increasing their possibilities for living independent and active lives. In the project, we design a digital environment for communication and learning for people with aphasia who require special learning processes for rebuilding language after a brain injury...... of a combination of learning theory, theories on the brain function and human-computer interaction. The user study showed the needs for collaborative learning processes with communication and social interaction as their focus, tools to support multimodality expressions and customised teaching materials...

  18. Multimodal freight investment criteria.

    Science.gov (United States)

    2010-07-01

    Literature was reviewed on multi-modal investment criteria for freight projects, examining measures and techniques for quantifying project benefits and costs, as well as ways to describe the economic importance of freight transportation. : A limited ...

  19. Multimodality Registration without a Dedicated Multimodality Scanner

    Directory of Open Access Journals (Sweden)

    Bradley J. Beattie

    2007-03-01

    Full Text Available Multimodality scanners that allow the acquisition of both functional and structural image sets on a single system have recently become available for animal research use. Although the resultant registered functional/structural image sets can greatly enhance the interpretability of the functional data, the cost of multimodality systems can be prohibitive, and they are often limited to two modalities, which generally do not include magnetic resonance imaging. Using a thin plastic wrap to immobilize and fix a mouse or other small animal atop a removable bed, we are able to calculate registrations between all combinations of four different small animal imaging scanners (positron emission tomography, single-photon emission computed tomography, magnetic resonance, and computed tomography [CT] at our disposal, effectively equivalent to a quadruple-modality scanner. A comparison of serially acquired CT images, with intervening acquisitions on other scanners, demonstrates the ability of the proposed procedures to maintain the rigidity of an anesthetized mouse during transport between scanners. Movement of the bony structures of the mouse was estimated to be 0.62 mm. Soft tissue movement was predominantly the result of the filling (or emptying of the urinary bladder and thus largely constrained to this region. Phantom studies estimate the registration errors for all registration types to be less than 0.5 mm. Functional images using tracers targeted to known structures verify the accuracy of the functional to structural registrations. The procedures are easy to perform and produce robust and accurate results that rival those of dedicated multimodality scanners, but with more flexible registration combinations and while avoiding the expense and redundancy of multimodality systems.

  20. Evolution of the field quantum entropy and entanglement in a system of multimode light field interacting resonantly with a two-level atom through N_j-degenerate N~Σ-photon process

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The time evolution of the field quantum entropy and entanglement in a system of multi-mode coherent light field resonantly interacting with a two-level atom by de-generating the multi-photon process is studied by utilizing the Von Neumann re-duced entropy theory,and the analytical expressions of the quantum entropy of the multimode field and the numerical calculation results for three-mode field inter-acting with the atom are obtained. Our attention focuses on the discussion of the influences of the initial average photon number,the atomic distribution angle and the phase angle of the atom dipole on the evolution of the quantum field entropy and entanglement. The results obtained from the numerical calculation indicate that: the stronger the quantum field is,the weaker the entanglement between the quan-tum field and the atom will be,and when the field is strong enough,the two sub-systems may be in a disentangled state all the time; the quantum field entropy is strongly dependent on the atomic distribution angle,namely,the quantum field and the two-level atom are always in the entangled state,and are nearly stable at maximum entanglement after a short time of vibration; the larger the atomic dis-tribution angle is,the shorter the time for the field quantum entropy to evolve its maximum value is; the phase angles of the atom dipole almost have no influences on the entanglement between the quantum field and the two-level atom. Entangled states or pure states based on these properties of the field quantum entropy can be prepared.

  1. Multimodal Resources in Transnational Adoption

    DEFF Research Database (Denmark)

    Raudaskoski, Pirkko Liisa

    The paper discusses an empirical analysis which highlights the multimodal nature of identity construction. A documentary on transnational adoption provides real life incidents as research material. The incidents involve (or from them emerge) various kinds of multimodal resources and participants...

  2. Safety Metrics for Human-Computer Controlled Systems

    Science.gov (United States)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modem production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems.This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modem systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  3. Multimodal sequence learning.

    Science.gov (United States)

    Kemény, Ferenc; Meier, Beat

    2016-02-01

    While sequence learning research models complex phenomena, previous studies have mostly focused on unimodal sequences. The goal of the current experiment is to put implicit sequence learning into a multimodal context: to test whether it can operate across different modalities. We used the Task Sequence Learning paradigm to test whether sequence learning varies across modalities, and whether participants are able to learn multimodal sequences. Our results show that implicit sequence learning is very similar regardless of the source modality. However, the presence of correlated task and response sequences was required for learning to take place. The experiment provides new evidence for implicit sequence learning of abstract conceptual representations. In general, the results suggest that correlated sequences are necessary for implicit sequence learning to occur. Moreover, they show that elements from different modalities can be automatically integrated into one unitary multimodal sequence. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Robust Multimodal Dictionary Learning

    Science.gov (United States)

    Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by lack of correspondence between image modalities in training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates identification of poorly corresponding patches and re-finements of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674

  5. Critical Analysis of Multimodal Discourse

    DEFF Research Database (Denmark)

    van Leeuwen, Theo

    2013-01-01

    This is an encyclopaedia article which defines the fields of critical discourse analysis and multimodality studies, argues that within critical discourse analysis more attention should be paid to multimodality, and within multimodality to critical analysis, and ends reviewing a few examples of re...

  6. Multimodality imaging techniques.

    Science.gov (United States)

    Martí-Bonmatí, Luis; Sopena, Ramón; Bartumeus, Paula; Sopena, Pablo

    2010-01-01

    In multimodality imaging, the need to combine morphofunctional information can be approached by either acquiring images at different times (asynchronous), and fused them through digital image manipulation techniques or simultaneously acquiring images (synchronous) and merging them automatically. The asynchronous post-processing solution presents various constraints, mainly conditioned by the different positioning of the patient in the two scans acquired at different times in separated machines. The best solution to achieve consistency in time and space is obtained by the synchronous image acquisition. There are many multimodal technologies in molecular imaging. In this review we will focus on those multimodality image techniques more commonly used in the field of diagnostic imaging (SPECT-CT, PET-CT) and new developments (as PET-MR). The technological innovations and development of new tracers and smart probes are the main key points that will condition multimodality image and diagnostic imaging professionals' future. Although SPECT-CT and PET-CT are standard in most clinical scenarios, MR imaging has some advantages, providing excellent soft-tissue contrast and multidimensional functional, structural and morphological information. The next frontier is to develop efficient detectors and electronics systems capable of detecting two modality signals at the same time. Not only PET-MR but also MR-US or optic-PET will be introduced in clinical scenarios. Even more, MR diffusion-weighted, pharmacokinetic imaging, spectroscopy or functional BOLD imaging will merge with PET tracers to further increase molecular imaging as a relevant medical discipline. Multimodality imaging techniques will play a leading role in relevant clinical applications. The development of new diagnostic imaging research areas, mainly in the field of oncology, cardiology and neuropsychiatry, will impact the way medicine is performed today. Both clinical and experimental multimodality studies, in

  7. Congruency versus strategic effects in multimodal affective picture categorization

    NARCIS (Netherlands)

    Lemmens, P.M.C.

    2005-01-01

    In communication between humans, emotion is an important aspect having a powerful influence on the structure as well as content of a conversation. In human-factors research, the interaction between humans is often used as a guide to improve the quality of human-computer interaction. Despite its

  8. Multimodal Processes Rescheduling

    DEFF Research Database (Denmark)

    Bocewicz, Grzegorz; Banaszak, Zbigniew A.; Nielsen, Peter

    2013-01-01

    Cyclic scheduling problems concerning multimodal processes are usually observed in FMSs producing multi-type parts where the Automated Guided Vehicles System (AGVS) plays a role of a material handling system. Schedulability analysis of concurrently flowing cyclic processes (SCCP) exe-cuted in the......Cyclic scheduling problems concerning multimodal processes are usually observed in FMSs producing multi-type parts where the Automated Guided Vehicles System (AGVS) plays a role of a material handling system. Schedulability analysis of concurrently flowing cyclic processes (SCCP) exe...

  9. Multimodal follow-up questions to multimodal answers in a QA system

    NARCIS (Netherlands)

    van Schooten, B.W.; op den Akker, Hendrikus J.A.

    2007-01-01

    We are developing a dialogue manager (DM) for a multimodal interactive Question Answering (QA) system. Our QA system presents answers using text and pictures, and the user may pose follow-up questions using text or speech, while indicating screen elements with the mouse. We developed a corpus of

  10. Multimode optical fiber

    Science.gov (United States)

    Bigot-Astruc, Marianne; Molin, Denis; Sillard, Pierre

    2014-11-04

    A depressed graded-index multimode optical fiber includes a central core, an inner depressed cladding, a depressed trench, an outer depressed cladding, and an outer cladding. The central core has an alpha-index profile. The depressed claddings limit the impact of leaky modes on optical-fiber performance characteristics (e.g., bandwidth, core size, and/or numerical aperture).

  11. Multimodal news framing effects

    NARCIS (Netherlands)

    Powell, T.E.

    2017-01-01

    Visuals in news media play a vital role in framing citizens’ political preferences. Yet, compared to the written word, visual images are undervalued in political communication research. Using framing theory, this thesis redresses the balance by studying the combined, or multimodal, effects of visual

  12. Multimodal Strategies of Theorization

    DEFF Research Database (Denmark)

    Cartel, Melodie; Colombero, Sylvain; Boxenbaum, Eva

    This paper examines the role of multimodal strategies in processes of theorization. Empirically, we investigate the theorization process of a highly disruptive innovation in the history of architecture: reinforced concrete. Relying on archival data from a dominant French architectural journal from...... with well-known rhetorical strategies and develop a process model of theorization....

  13. Multimodal integration in statistical learning

    DEFF Research Database (Denmark)

    Mitchell, Aaron; Christiansen, Morten Hyllekvist; Weiss, Dan

    2014-01-01

    , we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker’s face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally...... facilitated participants’ ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.......Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study...

  14. Multimodality image analysis work station

    International Nuclear Information System (INIS)

    Ratib, O.; Huang, H.K.

    1989-01-01

    The goal of this project is to design and implement a PACS (picture archiving and communication system) workstation for quantitative analysis of multimodality images. The Macintosh II personal computer was selected for its friendly user interface, its popularity among the academic and medical community, and its low cost. The Macintosh operates as a stand alone workstation where images are imported from a central PACS server through a standard Ethernet network and saved on a local magnetic or optical disk. A video digitizer board allows for direct acquisition of images from sonograms or from digitized cine angiograms. The authors have focused their project on the exploration of new means of communicating quantitative data and information through the use of an interactive and symbolic user interface. The software developed includes a variety of image analysis, algorithms for digitized angiograms, sonograms, scintigraphic images, MR images, and CT scans

  15. Learning new skills in Multimodal Enactive Environments

    Directory of Open Access Journals (Sweden)

    Bardy Benoît G.

    2011-12-01

    Full Text Available A European consortium of researchers in movement and cognitive sciences, robotics, and interaction design developed multimodal technologies to accelerate and transfer the (relearning of complex skills from virtual to real environments. The decomposition of skill into functional elements — the subskills — and the enactment of informational variables used as accelerators are here described. One illustration of accelerator using virtual reality in team rowing is described.

  16. Frequency tripling with multimode-lasers

    International Nuclear Information System (INIS)

    Langer, H.; Roehr, H.; Wrobel, W.G.

    1978-10-01

    The presence of different modes with random phases in a laser beam leads to fluctuations in nonlinear optical interactions. This paper describes the influence of the linewidth of a dye laser on the generation of intensive Lyman-alpha radiation by frequency tripling. Using this Lyman-alpha source for resonance scattering on strongly doppler-broadened lines in fusion plasmas the detection limit of neutral hydrogen is nearly two orders higher with the multimode than the singlemode dye laser. (orig.) [de

  17. Cultural differences in human-computer interaction towards culturally adaptive human-machine interaction

    CERN Document Server

    Heimgärtner, Rüdiger

    2012-01-01

    Es wird eine Methode zur Bestimmung von quantitativ klassifizierenden kulturellen Variablen der Mensch-Maschine-Interaktion (MMI) präsentiert und in einem Werkzeug für die interkulturelle Interaktionsanalyse umgesetzt. Rüdiger Heimgärtner zeigt, dass MMI anhand der kulturell geprägten Interaktionsmuster des Benutzers automatisch an dessen kulturellen Hintergrund angepasst werden kann. Empfehlungen für das Design interkultureller Benutzungsschnittstellen sowie für die Architekturbildung kulturell-adaptiver Systeme runden die Arbeit ab. Der Arbeitsbericht der Dissertation ist in elektronischer F

  18. Gestures and multimodal input

    OpenAIRE

    Keates, Simeon; Robinson, Peter

    1999-01-01

    For users with motion impairments, the standard keyboard and mouse arrangement for computer access often presents problems. Other approaches have to be adopted to overcome this. In this paper, we will describe the development of a prototype multimodal input system based on two gestural input channels. Results from extensive user trials of this system are presented. These trials showed that the physical and cognitive loads on the user can quickly become excessive and detrimental to the interac...

  19. Multimodal integration of anatomy and physiology classes: How instructors utilize multimodal teaching in their classrooms

    Science.gov (United States)

    McGraw, Gerald M., Jr.

    Multimodality is the theory of communication as it applies to social and educational semiotics (making meaning through the use of multiple signs and symbols). The term multimodality describes a communication methodology that includes multiple textual, aural, and visual applications (modes) that are woven together to create what is referred to as an artifact. Multimodal teaching methodology attempts to create a deeper meaning to course content by activating the higher cognitive areas of the student's brain, creating a more sustained retention of the information (Murray, 2009). The introduction of multimodality educational methodologies as a means to more optimally engage students has been documented within educational literature. However, studies analyzing the distribution and penetration into basic sciences, more specifically anatomy and physiology, have not been forthcoming. This study used a quantitative survey design to determine the degree to which instructors integrated multimodality teaching practices into their course curricula. The instrument used for the study was designed by the researcher based on evidence found in the literature and sent to members of three associations/societies for anatomy and physiology instructors: the Human Anatomy and Physiology Society; the iTeach Anatomy & Physiology Collaborate; and the American Physiology Society. Respondents totaled 182 instructor members of two- and four-year, private and public higher learning colleges collected from the three organizations collectively with over 13,500 members in over 925 higher learning institutions nationwide. The study concluded that the expansion of multimodal methodologies into anatomy and physiology classrooms is at the beginning of the process and that there is ample opportunity for expansion. Instructors continue to use lecture as their primary means of interaction with students. Email is still the major form of out-of-class communication for full-time instructors. Instructors with

  20. A structural approach to constructing perspective efficient and reliable human-computer interfaces

    International Nuclear Information System (INIS)

    Balint, L.

    1989-01-01

    The principles of human-computer interface (HCI) realizations are investigated with the aim of getting closer to a general framework and thus, to a more or less solid background of constructing perspective efficient, reliable and cost-effective human-computer interfaces. On the basis of characterizing and classifying the different HCI solutions, the fundamental problems of interface construction are pointed out especially with respect to human error occurrence possibilities. The evolution of HCI realizations is illustrated by summarizing the main properties of past, present and foreseeable future interface generations. HCI modeling is pointed out to be a crucial problem in theoretical and practical investigations. Suggestions concerning HCI structure (hierarchy and modularity), HCI functional dynamics (mapping from input to output information), minimization of human error caused system failures (error-tolerance, error-recovery and error-correcting) as well as cost-effective HCI design and realization methodology (universal and application-oriented vs. application-specific solutions) are presented. The concept of RISC-based and SCAMP-type HCI components is introduced with the aim of having a reduced interaction scheme in communication and a well defined architecture in HCI components' internal structure. HCI efficiency and reliability are dealt with, by taking into account complexity and flexibility. The application of fast computerized prototyping is also briefly investigated as an experimental device of achieving simple, parametrized, invariant HCI models. Finally, a concise outline of an approach of how to construct ideal HCI's is also suggested by emphasizing the open questions and the need of future work related to the proposals, as well. (author). 14 refs, 6 figs

  1. Multimodal processes scheduling in mesh-like network environment

    Directory of Open Access Journals (Sweden)

    Bocewicz Grzegorz

    2015-06-01

    Full Text Available Multimodal processes planning and scheduling play a pivotal role in many different domains including city networks, multimodal transportation systems, computer and telecommunication networks and so on. Multimodal process can be seen as a process partially processed by locally executed cyclic processes. In that context the concept of a Mesh-like Multimodal Transportation Network (MMTN in which several isomorphic subnetworks interact each other via distinguished subsets of common shared intermodal transport interchange facilities (such as a railway station, bus station or bus/tram stop as to provide a variety of demand-responsive passenger transportation services is examined. Consider a mesh-like layout of a passengers transport network equipped with different lines including buses, trams, metro, trains etc. where passenger flows are treated as multimodal processes. The goal is to provide a declarative model enabling to state a constraint satisfaction problem aimed at multimodal transportation processes scheduling encompassing passenger flow itineraries. Then, the main objective is to provide conditions guaranteeing solvability of particular transport lines scheduling, i.e. guaranteeing the right match-up of local cyclic acting bus, tram, metro and train schedules to a given passengers flow itineraries.

  2. Timing of Multimodal Robot Behaviors during Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Jensen, Lars Christian; Fischer, Kerstin; Suvei, Stefan-Daniel

    2017-01-01

    In this paper, we address issues of timing between robot behaviors in multimodal human-robot interaction. In particular, we study what effects sequential order and simultaneity of robot arm and body movement and verbal behavior have on the fluency of interactions. In a study with the Care-O-bot, ...... output plays a special role because participants carry their expectations from human verbal interaction into the interactions with robots....

  3. Experiments in Multimodal Information Presentation

    NARCIS (Netherlands)

    van Hooijdonk, Charlotte; Bosma, W.E.; Krahmer, Emiel; Maes, Alfons; Theune, Mariet; van den Bosch, Antal; Bouma, Gosse

    In this chapter we describe three experiments investigating multimodal information presentation in the context of a medical QA system. In Experiment 1, we wanted to know how non-experts design (multimodal) answers to medical questions, distinguishing between what questions and how questions. In

  4. Simplified Multimodal Biometric Identification

    Directory of Open Access Journals (Sweden)

    Abhijit Shete

    2014-03-01

    Full Text Available Multibiometric systems are expected to be more reliable than unimodal biometric systems for personal identification due to the presence of multiple, fairly independent pieces of evidence e.g. Unique Identification Project "Aadhaar" of Government of India. In this paper, we present a novel wavelet based technique to perform fusion at the feature level and score level by considering two biometric modalities, face and fingerprint. The results indicate that the proposed technique can lead to substantial improvement in multimodal matching performance. The proposed technique is simple because of no preprocessing of raw biometric traits as well as no feature and score normalization.

  5. Multimodality, politics and ideology

    DEFF Research Database (Denmark)

    Machin, David; Van Leeuwen, T.

    2016-01-01

    This journal's editorial statement is clear that political discourse should be studied not only as regards parliamentary type politics. In this introduction we argue precisely for the need to pay increasing attention to the way that political ideologies are infused into culture more widely...... of power, requires meanings and identities which can hold them in place. We explain the processes by which critical multimodal discourse analysis can best draw out this ideology as it is realized through different semiotics resources. © John Benjamins Publishing Company....

  6. Multimodal Speaker Diarization.

    Science.gov (United States)

    Noulas, A; Englebienne, G; Krose, B J A

    2012-01-01

    We present a novel probabilistic framework that fuses information coming from the audio and video modality to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that is an extension of a factorial Hidden Markov Model (fHMM) and models the people appearing in an audiovisual recording as multimodal entities that generate observations in the audio stream, the video stream, and the joint audiovisual space. The framework is very robust to different contexts, makes no assumptions about the location of the recording equipment, and does not require labeled training data as it acquires the model parameters using the Expectation Maximization (EM) algorithm. We apply the proposed model to two meeting videos and a news broadcast video, all of which come from publicly available data sets. The results acquired in speaker diarization are in favor of the proposed multimodal framework, which outperforms the single modality analysis results and improves over the state-of-the-art audio-based speaker diarization.

  7. Investigation of protein selectivity in multimodal chromatography using in silico designed Fab fragment variants.

    Science.gov (United States)

    Karkov, Hanne Sophie; Krogh, Berit Olsen; Woo, James; Parimal, Siddharth; Ahmadian, Haleh; Cramer, Steven M

    2015-11-01

    In this study, a unique set of antibody Fab fragments was designed in silico and produced to examine the relationship between protein surface properties and selectivity in multimodal chromatographic systems. We hypothesized that multimodal ligands containing both hydrophobic and charged moieties would interact strongly with protein surface regions where charged groups and hydrophobic patches were in close spatial proximity. Protein surface property characterization tools were employed to identify the potential multimodal ligand binding regions on the Fab fragment of a humanized antibody and to evaluate the impact of mutations on surface charge and hydrophobicity. Twenty Fab variants were generated by site-directed mutagenesis, recombinant expression, and affinity purification. Column gradient experiments were carried out with the Fab variants in multimodal, cation-exchange, and hydrophobic interaction chromatographic systems. The results clearly indicated that selectivity in the multimodal system was different from the other chromatographic modes examined. Column retention data for the reduced charge Fab variants identified a binding site comprising light chain CDR1 as the main electrostatic interaction site for the multimodal and cation-exchange ligands. Furthermore, the multimodal ligand binding was enhanced by additional hydrophobic contributions as evident from the results obtained with hydrophobic Fab variants. The use of in silico protein surface property analyses combined with molecular biology techniques, protein expression, and chromatographic evaluations represents a previously undescribed and powerful approach for investigating multimodal selectivity with complex biomolecules. © 2015 Wiley Periodicals, Inc.

  8. Design Principles for Interactive Software

    DEFF Research Database (Denmark)

    The book addresses the crucial intersection of human-computer interaction (HCI) and software engineering by asking both what users require from interactive systems and what developers need to produce well-engineered software. Needs are expressed as...

  9. Multimodal responsive action

    DEFF Research Database (Denmark)

    Oshima, Sae

    ; Raymond 2003; Schegloff and Lerner 2009), including those with multimodal actions (e.g. Olsher 2004; Fasulo & Monzoni 2009). Some responsive actions can also be completed with bodily behavior alone, such as: when an agreement display is achieved by using only nonvocal actions (Jarmon 1996), when...... the recipient’s gaze shift becomes a significant part of the speaker’s turn construction (Goodwin 1980), and when head nods show the recipient’s affiliation with the speaker’s stance (Stivers 2008). Still, much room remains for extending our current understanding of responding actions that necessarily involve...... a hairstylist and a client negotiate the quality of the service that has been provided. Here, the first action is usually the stylist’s question and/or explanation of the new cut that invites the client’s assessment/(dis)agreement, accompanied with embodied actions that project an imminent self...

  10. Investigation on heat transfer analysis and its effect on a multi-mode, beam-wave interaction for a 140 GHz, MW-class gyrotron

    Science.gov (United States)

    Liu, Qiao; Liu, Yinghui; Chen, Zhaowei; Niu, Xinjian; Li, Hongfu; Xu, Jianhua

    2018-04-01

    The interaction cavity of a 140 GHz, 1 MW continuous wave gyrotron developed in UESTC will be loaded with a very large heat load in the inner surface during operation. In order to reduce the heat, the axial wedge grooves of the outside surface of the cavity are considered and employed as the heat radiation structure. Thermoanalysis and structural analysis were discussed in detail to obtain the effects of heat on the cavity. In thermoanalysis, the external coolant-flow rates ranging from 20 L/min to 50 L/min were considered, and the distribution of wall loading was loaded as the heat flux source. In structural analysis, the cavity's deformation caused by the loads of heat and pressure was calculated. Compared with a non-deformed cavity, the effects of deformation on the performance of a cavity were discussed. For a cold-cavity, the results show that the quality factor would be reduced by 72, 89, 99 and 171 at the flow rates of 50 L/min, 40 L/min, 30 L/min and 20 L/min, respectively. Correspondingly, the cold-cavity frequencies would be decreased by 0.13 GHz, 0.15 GHz, 0.19 GHz and 0.38 GHz, respectively. For a hot-cavity, the results demonstrate that the output port frequencies would be dropped down, but the offset would be gradually decreased with increasing coolant-flow rate. Meanwhile, the output powers would be reduced dramatically with decreasing coolant-flow rate. In addition, when the coolant-flow rate reaches 40 L/min, the output power and the frequency are just reduced by 30 kW and 0.151 GHz, respectively.

  11. Impact of familiarity on information complexity in human-computer interfaces

    Directory of Open Access Journals (Sweden)

    Bakaev Maxim

    2016-01-01

    Full Text Available A quantitative measure of information complexity remains very much desirable in HCI field, since it may aid in optimization of user interfaces, especially in human-computer systems for controlling complex objects. Our paper is dedicated to exploration of subjective (subject-depended aspect of the complexity, conceptualized as information familiarity. Although research of familiarity in human cognition and behaviour is done in several fields, the accepted models in HCI, such as Human Processor or Hick-Hyman’s law do not generally consider this issue. In our experimental study the subjects performed search and selection of digits and letters, whose familiarity was conceptualized as frequency of occurrence in numbers and texts. The analysis showed significant effect of information familiarity on selection time and throughput in regression models, although the R2 values were somehow low. Still, we hope that our results might aid in quantification of information complexity and its further application for optimizing interaction in human-machine systems.

  12. Challenges in Transcribing Multimodal Data: A Case Study

    Science.gov (United States)

    Helm, Francesca; Dooly, Melinda

    2017-01-01

    Computer-mediated communication (CMC) once meant principally text-based communication mediated by computers, but rapid technological advances in recent years have heralded an era of multimodal communication with a growing emphasis on audio and video synchronous interaction. As CMC, in all its variants (text chats, video chats, forums, blogs, SMS,…

  13. User interface issues in supporting human-computer integrated scheduling

    Science.gov (United States)

    Cooper, Lynne P.; Biefeld, Eric W.

    1991-01-01

    The topics are presented in view graph form and include the following: characteristics of Operations Mission Planner (OMP) schedule domain; OMP architecture; definition of a schedule; user interface dimensions; functional distribution; types of users; interpreting user interaction; dynamic overlays; reactive scheduling; and transitioning the interface.

  14. A Human/Computer Learning Network to Improve Biodiversity Conservation and Research

    OpenAIRE

    Kelling, Steve; Gerbracht, Jeff; Fink, Daniel; Lagoze, Carl; Wong, Weng-Keen; Yu, Jun; Damoulas, Theodoros; Gomes, Carla

    2012-01-01

    In this paper we describe eBird, a citizen-science project that takes advantage of the human observational capacity to identify birds to species, which is then used to accurately represent patterns of bird occurrences across broad spatial and temporal extents. eBird employs artificial intelligence techniques such as machine learning to improve data quality by taking advantage of the synergies between human computation and mechanical computation. We call this a Human-Computer Learning Network,...

  15. My4Sight: A Human Computation Platform for Improving Flu Predictions

    OpenAIRE

    Akupatni, Vivek Bharath

    2015-01-01

    While many human computation (human-in-the-loop) systems exist in the field of Artificial Intelligence (AI) to solve problems that can't be solved by computers alone, comparatively fewer platforms exist for collecting human knowledge, and evaluation of various techniques for harnessing human insights in improving forecasting models for infectious diseases, such as Influenza and Ebola. In this thesis, we present the design and implementation of My4Sight, a human computation system develope...

  16. Multi-Modal Interaction for Robotic Mules

    Science.gov (United States)

    2014-02-26

    could be detected by adversaries. Field of view (FOV) and range are also both major issues to content with: it is very difficult for the user to know...another body part such as the head or helmet Body Frame: 77.2% Other: 22.8% Freeze Pace Count Continuous...The noise is similar to being next to a motorcycle that occasionally revs its engine, ranging from roughly 75-95 decibels. Results are given in

  17. Characterizing Multimode Interaction in Renal Autoregulation

    DEFF Research Database (Denmark)

    Pavlov, A. N.; Sosnovtseva, Olga; Pavlova, O. N.

    2008-01-01

    of 5-8 s and a somewhat slower mode arising from an instability in the tubuloglomerular feedback mechanism, we also observe a very slow mode with a period of 100-200 s. Double-wavelet techniques are used to study how the very slow rhythm influences the two faster modes. In a broader perspective......, the paper emphasizes the significance of complex dynamic phenomena in the normal and pathological function of physiological systems and discusses how simulation methods can help to understand the underlying biological mechanisms. At the present there is no causal explanation of the very slow mode. However...

  18. The Stability of Multi-modal Traffic Network

    International Nuclear Information System (INIS)

    Han Linghui; Sun Huijun; Zhu Chengjuan; Jia Bin; Wu Jianjun

    2013-01-01

    There is an explicit and implicit assumption in multimodal traffic equilibrium models, that is, if the equilibrium exists, then it will also occur. The assumption is very idealized; in fact, it may be shown that the quite contrary could happen, because in multimodal traffic network, especially in mixed traffic conditions the interaction among traffic modes is asymmetric and the asymmetric interaction may result in the instability of traffic system. In this paper, to study the stability of multimodal traffic system, we respectively present the travel cost function in mixed traffic conditions and in traffic network with dedicated bus lanes. Based on a day-to-day dynamical model, we study the evolution of daily route choice of travelers in multimodal traffic network using 10000 random initial values for different cases. From the results of simulation, it can be concluded that the asymmetric interaction between the cars and buses in mixed traffic conditions can lead the traffic system to instability when traffic demand is larger. We also study the effect of travelers' perception error on the stability of multimodal traffic network. Although the larger perception error can alleviate the effect of interaction between cars and buses and improve the stability of traffic system in mixed traffic conditions, the traffic system also become instable when the traffic demand is larger than a number. For all cases simulated in this study, with the same parameters, traffic system with dedicated bus lane has better stability for traffic demand than that in mixed traffic conditions. We also find that the network with dedicated bus lane has higher portion of travelers by bus than it of mixed traffic network. So it can be concluded that building dedicated bus lane can improve the stability of traffic system and attract more travelers to choose bus reducing the traffic congestion. (general)

  19. Multimodal Friction Ignition Tester

    Science.gov (United States)

    Davis, Eddie; Howard, Bill; Herald, Stephen

    2009-01-01

    The multimodal friction ignition tester (MFIT) is a testbed for experiments on the thermal and mechanical effects of friction on material specimens in pressurized, oxygen-rich atmospheres. In simplest terms, a test involves recording sensory data while rubbing two specimens against each other at a controlled normal force, with either a random stroke or a sinusoidal stroke having controlled amplitude and frequency. The term multimodal in the full name of the apparatus refers to a capability for imposing any combination of widely ranging values of the atmospheric pressure, atmospheric oxygen content, stroke length, stroke frequency, and normal force. The MFIT was designed especially for studying the tendency toward heating and combustion of nonmetallic composite materials and the fretting of metals subjected to dynamic (vibrational) friction forces in the presence of liquid oxygen or pressurized gaseous oxygen test conditions approximating conditions expected to be encountered in proposed composite material oxygen tanks aboard aircraft and spacecraft in flight. The MFIT includes a stainless-steel pressure vessel capable of retaining the required test atmosphere. Mounted atop the vessel is a pneumatic cylinder containing a piston for exerting the specified normal force between the two specimens. Through a shaft seal, the piston shaft extends downward into the vessel. One of the specimens is mounted on a block, denoted the pressure block, at the lower end of the piston shaft. This specimen is pressed down against the other specimen, which is mounted in a recess in another block, denoted the slip block, that can be moved horizontally but not vertically. The slip block is driven in reciprocating horizontal motion by an electrodynamic vibration exciter outside the pressure vessel. The armature of the electrodynamic exciter is connected to the slip block via a horizontal shaft that extends into the pressure vessel via a second shaft seal. The reciprocating horizontal

  20. Multimodal Discourse Analysis of the Movie "Argo"

    Science.gov (United States)

    Bo, Xu

    2018-01-01

    Based on multimodal discourse theory, this paper makes a multimodal discourse analysis of some shots in the movie "Argo" from the perspective of context of culture, context of situation and meaning of image. Results show that this movie constructs multimodal discourse through particular context, language and image, and successfully…

  1. Interaction Widget

    DEFF Research Database (Denmark)

    Ingstrup, Mads

    2003-01-01

    This pattern describes the idea of making a user interface of discrete, reusable entities---here called interaction widgets. The idea behind widgets is described using two perspectives, that of the user and that of the developer. It is the forces from these two perspectives that are balanced...... in the pattern. The intended audience of the pattern is developers and researchers within the field of human computer interaction....

  2. Using Tablet PCs in Classroom for Teaching Human-Computer Interaction: An Experience in High Education

    Science.gov (United States)

    da Silva, André Constantino; Marques, Daniela; de Oliveira, Rodolfo Francisco; Noda, Edgar

    2014-01-01

    The use of computers in the teaching and learning process is investigated by many researches and, nowadays, due the available diversity of computing devices, tablets are become popular in classroom too. So what are the advantages and disadvantages to use tablets in classroom? How can we shape the teaching and learning activities to get the best of…

  3. Human Computer Interaction (HCI) and Internet Residency: Implications for Both Personal Life and Teaching/Learning

    Science.gov (United States)

    Crearie, Linda

    2016-01-01

    Technological advances over the last decade have had a significant impact on the teaching and learning experiences students encounter today. We now take technologies such as Web 2.0, mobile devices, cloud computing, podcasts, social networking, super-fast broadband, and connectedness for granted. So what about the student use of these types of…

  4. Guidelines for the use of vibro-tactile displays in human computer interaction

    NARCIS (Netherlands)

    Erp, J.B.F. van

    2002-01-01

    Vibro-tactile displays convey messages by presenting vibration to the user's skin. In recent years, the interest in and application of vibro-tactile displays is growing. Vibratory displays are introduced in mobile devices, desktop applications and even in aircraft [1]. Despite the growing interest,

  5. A Single Camera Motion Capture System for Human-Computer Interaction

    Science.gov (United States)

    Okada, Ryuzo; Stenger, Björn

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. Anew likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i. e. the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body-parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.

  6. Risk Issues in Developing Novel User Interfaces for Human-Computer Interaction

    KAUST Repository

    Klinker, Gudrun; Huber, Manuel; Tö nnis, Marcus

    2014-01-01

    © 2014 Springer International Publishing Switzerland. All rights are reserved. When new user interfaces or information visualization schemes are developed for complex information processing systems, it is not readily clear how much they do, in fact, support and improve users' understanding and use of such systems. Is a new interface better than an older one? In what respect, and in which situations? To provide answers to such questions, user testing schemes are employed. This chapter reports on a range of risks pertaining to the design and implementation of user interfaces in general, and to newly emerging interfaces (3-dimensionally, immersive, mobile) in particular.

  7. An Innovative Solution Based on Human-Computer Interaction to Support Cognitive Rehabilitation

    Directory of Open Access Journals (Sweden)

    José M. Cogollor

    2014-10-01

    Full Text Available This contribution focuses its objective in describing the design and implementation of an innovative system to provide cognitive rehabilitation. People who will take advantage of this platform suffer from a post-stroke disease called Apraxia and Action Disorganisation Syndrome (AADS. The platform has been integrated at Universidad Politécnica de Madrid and tries to reduce the stay in hospital or rehabilitation center by supporting self-rehabilitation at home. So, the system acts as an intelligent machine which guides patients while executing Activities of Daily Living (ADL, such as preparing a simple tea, by informing them about the errors committed and possible actions to correct them. A short introduction to other works related to stroke, patients to work with, how the system works and how it is implemented are provided in the document. Finally, some relevant information from experiment made with healthy people for technical validation is also shown.

  8. Modeling Goal-Directed User Exploration in Human-Computer Interaction

    Science.gov (United States)

    2011-02-01

    SNIF-ACT 1.0 — Scent-based Navigation and Information Foraging in the ACT cognitive architecture (Pirolli & Fu, 2003) — is the first of two process models of goal-directed Web navigation based on Information Foraging Theory (Pirolli, Chen & Pitkow, 2001) and implemented in the ACT-R cognitive architecture.

  9. Incorporating a Human-Computer Interaction Course into Software Development Curriculums

    Science.gov (United States)

    Janicki, Thomas N.; Cummings, Jeffrey; Healy, R. Joseph

    2015-01-01

    Individuals have increasing options for retrieving information related to hardware and software. Specific hardware devices include desktops, tablets and smart devices. The number of software applications has also significantly increased users' ability to access data. Software applications include the traditional web site, smart device…

  10. Cognitive engineering in the design of human-computer interaction and expert systems

    International Nuclear Information System (INIS)

    Salvendy, G.

    1987-01-01

    The 68 papers contributing to this book cover the following areas: Theories of Interface Design; Methodologies of Interface Design; Applications of Interface Design; Software Design; Human Factors in Speech Technology and Telecommunications; Design of Graphic Dialogues; Knowledge Acquisition for Knowledge-Based Systems; Design, Evaluation and Use of Expert Systems. This demonstrates the dual role of cognitive engineering. On the one hand, cognitive engineering is utilized to design computing systems which are compatible with human cognition and can be effectively and easily utilized by all individuals. On the other hand, cognitive engineering is utilized to transfer human cognition into the computer for the purpose of building expert systems. Two papers are of interest to INIS

  11. A Data Collection and Representation Framework for Software and Human-Computer Interaction Measurements.

    Science.gov (United States)

    2000-01-04

    A study by Miara, Musselman, Navarro, and Shneiderman [Miara et al. 1983] found that program indentation correlated strongly with comprehension; they tested 47…

  12. Risk Issues in Developing Novel User Interfaces for Human-Computer Interaction

    KAUST Repository

    Klinker, Gudrun

    2014-01-01

    When new user interfaces or information visualization schemes are developed for complex information processing systems, it is not readily clear how much they do, in fact, support and improve users' understanding and use of such systems. Is a new interface better than an older one? In what respect, and in which situations? To provide answers to such questions, user testing schemes are employed. This chapter reports on a range of risks pertaining to the design and implementation of user interfaces in general, and to newly emerging interfaces (3-dimensional, immersive, mobile) in particular.

  13. Investigating the Human Computer Interaction Problems with Automated Teller Machine Navigation Menus

    Science.gov (United States)

    Curran, Kevin; King, David

    2008-01-01

    Purpose: The automated teller machine (ATM) has become an integral part of our society. However, using the ATM can often be a frustrating experience as people frequently reinsert cards to conduct multiple transactions. This has led to the research question of whether ATM menus are designed in an optimal manner. This paper aims to address the…

  14. Knowledge Management of Web Financial Reporting in Human-Computer Interactive Perspective

    Science.gov (United States)

    Wang, Dong; Chen, Yujing; Xu, Jing

    2017-01-01

    Handling and analyzing web financial data is becoming a challenging issue in knowledge management and education for accounting practitioners. eXtensible Business Reporting Language (XBRL), which is a type of web financial reporting, describes and recognizes financial items by tagging metadata. The goal is to make it possible for financial reports…

  15. Psychosocial and Cultural Modeling in Human Computation Systems: A Gamification Approach

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Riensche, Roderick M.; Haack, Jereme N.; Butner, R. Scott

    2013-11-20

    “Gamification”, the application of gameplay to real-world problems, enables the development of human computation systems that support decision-making through the integration of social and machine intelligence. One of gamification’s major benefits includes the creation of a problem solving environment where the influence of cognitive and cultural biases on human judgment can be curtailed through collaborative and competitive reasoning. By reducing biases on human judgment, gamification allows human computation systems to exploit human creativity relatively unhindered by human error. Operationally, gamification uses simulation to harvest human behavioral data that provide valuable insights for the solution of real-world problems.

  16. Advanced Multimodal Solutions for Information Presentation

    Science.gov (United States)

    Wenzel, Elizabeth M.; Godfroy-Cooper, Martine

    2018-01-01

    High-workload, fast-paced, and degraded sensory environments are the likeliest candidates to benefit from multimodal information presentation. For example, during EVA (Extra-Vehicular Activity) and telerobotic operations, the sensory restrictions associated with a space environment provide a major challenge to maintaining the situation awareness (SA) required for safe operations. Multimodal displays hold promise to enhance situation awareness and task performance by utilizing different sensory modalities and maximizing their effectiveness based on appropriate interaction between modalities. During EVA, the visual and auditory channels are likely to be the most utilized with tasks such as monitoring the visual environment, attending visual and auditory displays, and maintaining multichannel auditory communications. Previous studies have shown that compared to unimodal displays (spatial auditory or 2D visual), bimodal presentation of information can improve operator performance during simulated extravehicular activity on planetary surfaces for tasks as diverse as orientation, localization or docking, particularly when the visual environment is degraded or workload is increased. Tactile displays offer a third sensory channel that may both offload information processing effort and provide a means to capture attention when urgently required. For example, recent studies suggest that including tactile cues may result in increased orientation and alerting accuracy, improved task response time and decreased workload, as well as provide self-orientation cues in microgravity on the ISS (International Space Station). An important overall issue is that context-dependent factors like task complexity, sensory degradation, peripersonal vs. extrapersonal space operations, workload, experience level, and operator fatigue tend to vary greatly in complex real-world environments and it will be difficult to design a multimodal interface that performs well under all conditions. As a

  17. Multimodal Aspects of Corporate Social Responsibility Communication

    Directory of Open Access Journals (Sweden)

    Carmen Daniela Maier

    2014-12-01

    This article addresses how the multimodal persuasive strategies of corporate social responsibility communication can highlight a company's commitment to gender empowerment and environmental protection while simultaneously advertising its products. Drawing on an interdisciplinary methodological framework related to CSR communication, multimodal discourse analysis and gender theory, the article proposes a multimodal analysis model through which it is possible to map and explain the multimodal persuasive strategies employed by the Coca-Cola company in their community-related films. By examining the semiotic modes' interconnectivity and functional differentiation, this analytical endeavour expands the existing research work, as the usual textual focus is extended to a multimodal one.

  18. The Multimodal Possibilities of Online Instructions

    DEFF Research Database (Denmark)

    Kampf, Constance

    2006-01-01

    The WWW simplifies the process of delivering online instructions through multimodal channels because of the ease of use for voice, video, pictures, and text modes of communication built into it. Given that instructions are being produced in multimodal format for the WWW, how do multimodal analy…

  19. Developing Human-Computer Interface Models and Representation Techniques(Dialogue Management as an Integral Part of Software Engineering)

    OpenAIRE

    Hartson, H. Rex; Hix, Deborah; Kraly, Thomas M.

    1987-01-01

    The Dialogue Management Project at Virginia Tech is studying the poorly understood problem of human-computer dialogue development. This problem often leads to low usability in human-computer dialogues. The Dialogue Management Project approaches solutions to low usability in interfaces by addressing human-computer dialogue development as an integral and equal part of the total system development process. This project consists of two rather distinct, but dependent, parts. One is development of ...

  20. A Learning Algorithm for Multimodal Grammar Inference.

    Science.gov (United States)

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

    The high costs of developing and maintaining multimodal grammars for integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions for automating grammar generation and update processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metric to improve the grammar description and to avoid the over-generalization problem. The experimental results highlight the acceptable performance of the proposed algorithm, since it has a very high probability of parsing valid sentences.
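
    To make the chunking-plus-MDL idea concrete, here is a hedged Python sketch. It is not the authors' algorithm: the description-length proxy, the single chunking operator, and the toy "multimodal sentences" (speech tokens interleaved with gesture events) are illustrative simplifications of the two-operator scheme the abstract describes.

        from collections import Counter

        def mdl(grammar):
            # Crude description-length proxy: one unit per right-hand-side symbol.
            return sum(len(rhs) for rhss in grammar.values() for rhs in rhss)

        def best_pair(grammar):
            # Most frequent adjacent symbol pair across all productions.
            pairs = Counter()
            for rhss in grammar.values():
                for rhs in rhss:
                    pairs.update(zip(rhs, rhs[1:]))
            return pairs.most_common(1)[0][0] if pairs else None

        def chunk(grammar, pair, name):
            # Chunking operator: abstract every occurrence of `pair` into `name`.
            new = {name: [list(pair)]}
            for lhs, rhss in grammar.items():
                new[lhs] = []
                for rhs in rhss:
                    out, i = [], 0
                    while i < len(rhs):
                        if tuple(rhs[i:i + 2]) == pair:
                            out.append(name)
                            i += 2
                        else:
                            out.append(rhs[i])
                            i += 1
                    new[lhs].append(out)
            return new

        def infer(samples):
            # Start from one flat production per positive sample; accept only
            # rewrites that shorten the description (guards over-generalization).
            grammar, k = {"S": [list(s) for s in samples]}, 0
            while (pair := best_pair(grammar)) is not None:
                candidate = chunk(grammar, pair, "N%d" % k)
                if mdl(candidate) >= mdl(grammar):
                    break
                grammar, k = candidate, k + 1
            return grammar

        print(infer([["put", "that", "<point>", "there", "<point>"],
                     ["move", "that", "<point>", "there", "<point>"],
                     ["delete", "that", "<point>"]]))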

  1. Multimodal network design and assessment

    NARCIS (Netherlands)

    Brands, Ties; Alkim, T.P.; van Eck, Gijs; van Arem, Bart; Arentze, T.

    2010-01-01

    A framework is proposed for the design of an optimal multimodal transport network for the Randstad area. This research framework consists of a multi-objective optimization heuristic and a fast network assessment module, which results in a set of Pareto optimal solutions. Subsequently, a proper

  2. Rational behavior in decision making. A comparison between humans, computers and fast and frugal strategies

    NARCIS (Netherlands)

    Snijders, C.C.P.

    2007-01-01

    Rational behavior in decision making: a comparison between humans, computers, and fast and frugal strategies. Chris Snijders and Frits Tazelaar (Eindhoven University of Technology, The Netherlands). Real-life decisions often have to be made in "noisy" circumstances: not all crucial information is

  3. Human Computation

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...

  4. Contextual Interaction Design Research: Enabling HCI

    OpenAIRE

    Murer , Martin; Meschtscherjakov , Alexander; Fuchsberger , Verena; Giuliani , Manuel; Neureiter , Katja; Moser , Christiane; Aslan , Ilhan; Tscheligi , Manfred

    2015-01-01

    Human-Computer Interaction (HCI) has always been about humans, their needs and desires. Contemporary HCI thinking investigates interactions in everyday life and puts an emphasis on the emotional and experiential qualities of interactions. At the Center for Human-Computer Interaction we seek to bridge meandering strands in the field by following a guiding metaphor that shifts focus to what has always been the core quality of our research field: Enabling HCI, as a leitmo...

  5. Connecting multimodality in human communication.

    Science.gov (United States)

    Regenbogen, Christina; Habel, Ute; Kellermann, Thilo

    2013-01-01

    A successful reciprocal evaluation of social signals serves as a prerequisite for social coherence and empathy. In a previous fMRI study we studied naturalistic communication situations by presenting video clips to our participants and recording their behavioral responses regarding empathy and its components. In two conditions, all three channels transported congruent emotional or neutral information, respectively. Three conditions selectively presented two emotional channels and one neutral channel and were thus bimodally emotional. We reported channel-specific emotional contributions in modality-related areas, elicited by dynamic video clips with varying combinations of emotionality in facial expressions, prosody, and speech content. However, to better understand the underlying mechanisms accompanying a naturalistically displayed human social interaction in some key regions that presumably serve as specific processing hubs for facial expressions, prosody, and speech content, we pursued a reanalysis of the data. Here, we focused on two different descriptions of temporal characteristics within these three modality-related regions [right fusiform gyrus (FFG), left auditory cortex (AC), left angular gyrus (AG) and left dorsomedial prefrontal cortex (dmPFC)]. By means of a finite impulse response (FIR) analysis within each of the three regions we examined the post-stimulus time-courses as a description of the temporal characteristics of the BOLD response during the video clips. Second, effective connectivity between these areas and the left dmPFC was analyzed using dynamic causal modeling (DCM) in order to describe condition-related modulatory influences on the coupling between these regions. The FIR analysis showed initially diminished activation in bimodally emotional conditions but stronger activation than that observed in neutral videos toward the end of the stimuli, possibly by bottom-up processes in order to compensate for a lack of emotional information. The

  6. Testing Two Tools for Multimodal Navigation

    Directory of Open Access Journals (Sweden)

    Mats Liljedahl

    2012-01-01

    The latest smartphones with GPS, electronic compasses, directional audio, touch screens, and so forth hold potential for location-based services that are easier to use and that let users focus on their activities and the environment around them. Rather than interpreting maps, users can search for information by pointing in a direction, and database queries can be created from GPS location and compass data. Users can also get guidance to locations through point and sweep gestures, spatial sound, and simple graphics. This paper describes two studies testing two applications with multimodal user interfaces for navigation and information retrieval. The applications allow users to search for information and get navigation support using combinations of point and sweep gestures, nonspeech audio, graphics, and text. Tests show that users appreciated both applications for their ease of use and for allowing users to interact directly with the surrounding environment.
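
    The "point to query" interaction described above can be sketched in a few lines of Python: a GPS fix and a compass heading select the points of interest (POIs) inside a pointing cone. The function names, the cone width, and the POI list are invented for illustration; a real application would also rank hits by distance.

        import math

        def bearing_deg(lat1, lon1, lat2, lon2):
            # Initial great-circle bearing from (lat1, lon1) to (lat2, lon2).
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dlon = math.radians(lon2 - lon1)
            y = math.sin(dlon) * math.cos(phi2)
            x = (math.cos(phi1) * math.sin(phi2)
                 - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
            return math.degrees(math.atan2(y, x)) % 360

        def pointed_at(user, heading, pois, cone_deg=30):
            # Keep POIs whose bearing is within cone_deg/2 of the compass heading.
            hits = []
            for name, lat, lon in pois:
                b = bearing_deg(user[0], user[1], lat, lon)
                diff = abs((b - heading + 180) % 360 - 180)
                if diff < cone_deg / 2:
                    hits.append(name)
            return hits

        pois = [("Cafe", 59.335, 18.070), ("Museum", 59.330, 18.060),
                ("Station", 59.340, 18.080)]
        print(pointed_at((59.332, 18.065), heading=45.0, pois=pois))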

  7. Improving treatment planning accuracy through multimodality imaging

    International Nuclear Information System (INIS)

    Sailer, Scott L.; Rosenman, Julian G.; Soltys, Mitchel; Cullip, Tim J.; Chen, Jun

    1996-01-01

    Purpose: In clinical practice, physicians are constantly comparing multiple images taken at various times during the patient's treatment course. One goal of such a comparison is to accurately define the gross tumor volume (GTV). The introduction of three-dimensional treatment planning has greatly enhanced the ability to define the GTV, but there are times when the GTV is not visible on the treatment-planning computed tomography (CT) scan. We have modified our treatment-planning software to allow for interactive display of multiple, registered images that enhance the physician's ability to accurately determine the GTV. Methods and Materials: Images are registered using interactive tools developed at the University of North Carolina at Chapel Hill (UNC). Automated methods are also available. Images registered with the treatment-planning CT scan are digitized from film. After a physician has approved the registration, the registered images are made available to the treatment-planning software. Structures and volumes of interest are contoured on all images. In the beam's eye view, wire loop representations of these structures can be visualized from all image types simultaneously. Each registered image can be seamlessly viewed during the treatment-planning process, and all contours from all image types can be seen on any registered image. A beam may, therefore, be designed based on any contour. Results: Nineteen patients have been planned and treated using multimodality imaging from November 1993 through August 1994. All registered images were digitized from film, and many were from outside institutions. Brain has been the most common site (12), but the techniques of registration and image display have also been used for the thorax (4), abdomen (2), and extremity (1). The registered image has been a magnetic resonance (MR) scan in 15 cases and a diagnostic CT scan in 5 cases. In one case, sequential MRs, one before treatment and another after 30 Gy, were used to plan

  8. Glove-Enabled Computer Operations (GECO): Design and Testing of an Extravehicular Activity Glove Adapted for Human-Computer Interface

    Science.gov (United States)

    Adams, Richard J.; Olowin, Aaron; Krepkovich, Eileen; Hannaford, Blake; Lindsay, Jack I. C.; Homer, Peter; Patrie, James T.; Sands, O. Scott

    2013-01-01

    The Glove-Enabled Computer Operations (GECO) system enables an extravehicular activity (EVA) glove to be dual-purposed as a human-computer interface device. This paper describes the design and human participant testing of a right-handed GECO glove in a pressurized glove box. As part of an investigation into the usability of the GECO system for EVA data entry, twenty participants were asked to complete activities including (1) a Simon Says game in which they attempted to duplicate random sequences of targeted finger strikes and (2) a Text Entry activity in which they used the GECO glove to enter target phrases in two different virtual keyboard modes. In a within-subjects design, both activities were performed both with and without vibrotactile feedback. Participants' mean accuracies in correctly generating finger strikes with the pressurized glove were surprisingly high, both with and without the benefit of tactile feedback. Five of the subjects achieved mean accuracies exceeding 99% in both conditions. In Text Entry, tactile feedback provided a statistically significant performance benefit, quantified by characters entered per minute, as well as a reduction in error rate. Secondary analyses of responses to NASA Task Load Index (TLX) subjective workload assessments reveal a benefit of tactile feedback in GECO glove use for data entry. This first-ever investigation of employing a pressurized EVA glove for human-computer interface opens up a wide range of future applications, including text chat communications, manipulation of procedures/checklists, cataloguing/annotating images, scientific note taking, human-robot interaction, and control of suit and/or other EVA systems.

  9. Multimode-singlemode-multimode fiber sensor for alcohol sensing application

    Science.gov (United States)

    Rofi'ah, Iftihatur; Hatta, A. M.; Sekartedjo, Sekartedjo

    2016-11-01

    Alcohol is a volatile, flammable liquid, soluble in both polar and non-polar substances, that is used in several industrial sectors. Among the alcohol-detection methods now in wide use is the optical fiber sensor. This paper uses a fiber-optic sensor based on a multimode-singlemode-multimode (MSM) structure to detect alcohol solutions in the 0-3% concentration range. The sensor's working principle exploits modal interference between the core modes and the cladding modes, which makes the sensor sensitive to environmental changes. The results showed that the sensor's characteristics are not affected by the length of the singlemode fiber (SMF). A sensor with a 5 mm singlemode section can sense alcohol with a sensitivity of 0.107 dB/v%.
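
    For context, the modal interference this record relies on is often written in a simplified two-mode form (a textbook model assumed here, not an equation taken from the paper):

        P_{\mathrm{out}} = P_1 + P_2 + 2\sqrt{P_1 P_2}\,\cos\!\left(\frac{2\pi\,\Delta n_{\mathrm{eff}}\,L}{\lambda}\right)

    where P_1 and P_2 are the powers carried by the interfering core and cladding modes, \Delta n_{\mathrm{eff}} their effective-index difference, L the interference length, and \lambda the wavelength. The surrounding alcohol concentration changes the cladding-mode index and hence \Delta n_{\mathrm{eff}}, which shifts the output power and yields the reported dB-per-percent sensitivity.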

  10. Systemic multimodal approach to speech therapy treatment in autistic children.

    Science.gov (United States)

    Tamas, Daniela; Marković, Slavica; Milankov, Vesela

    2013-01-01

    The conditions in which speech therapy is delivered to autistic children are often not in accordance with the characteristic ways in which people with autism think and learn. A systemic multimodal approach means motivating autistic people to develop their language and speech skills through a procedure that allows them to relive personal experience connected to the contents presented in their natural social environment. This research was aimed at evaluating the efficiency of speech therapy based on the systemic multimodal approach to working with autistic children. The study sample consisted of 34 children, aged from 8 to 16 years, diagnosed with different autistic disorders, whose results showed a moderate to severe clinical picture of autism on the Childhood Autism Rating Scale. The instruments applied for the evaluation of ability were the Childhood Autism Rating Scale and the Ganzberg II test. The study subjects were divided into two groups according to the type of treatment: children who received continuing treatment with the systemic multimodal approach, and children who received classical speech therapy. It is shown that the systemic multimodal approach to teaching autistic children stimulates communication, socialization, self-care and work, and that the progress achieved in these areas of functioning was retained over the long term. By applying the systemic multimodal approach with autistic children and comparing their achievements on tests applied before, during and after its application, it was concluded that a certain improvement was achieved in functioning within the diagnosed category. The results point to a possible direction for the creation of new methods, plans and programs for working with autistic children based on empirical and interactive learning.

  11. Gastric Adenocarcinoma: A Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Humair S. Quadri

    2017-08-01

    Despite its declining incidence, gastric cancer (GC) remains a leading cause of cancer-related deaths worldwide. A multimodal approach to GC is critical to ensure optimal patient outcomes. Pretherapy fine-resolution contrast-enhanced cross-sectional imaging, endoscopic ultrasound and staging laparoscopy play an important role in patients with newly diagnosed, ostensibly operable GC to avoid unnecessary non-therapeutic laparotomies. Currently, margin-negative gastrectomy and adequate lymphadenectomy performed at high-volume hospitals remain the backbone of GC treatment. Importantly, adequate GC surgery should be integrated in the setting of a multimodal treatment approach. Treatment for advanced GC continues to expand with the emergence of additional lines of systemic and targeted therapies.

  12. Robustness of multimodal processes itineraries

    DEFF Research Database (Denmark)

    Bocewicz, G.; Banaszak, Z.; Nielsen, Izabela Ewa

    2013-01-01

    This paper concerns multimodal transport systems (MTS), represented by supernetworks in which several unimodal networks are connected by transfer links, and focuses on the scheduling problems encountered in these systems. Unimodal networks are modeled as cyclic lines, i.e. the routes… itineraries for an assumed origin-destination (O-D) trip. Since the itinerary planning problem constitutes a common routing and scheduling decision faced by travelers, the main question concerns itinerary replanning, and particularly a method aimed at prototyping mode sequences and path selections. The declarative model… of the multimodal-process-driven itinerary planning problem is our main contribution. Illustrative examples providing alternative itineraries in some cases of MTS malfunction are presented.

  13. Inorganic Nanoparticles for Multimodal Molecular Imaging

    Directory of Open Access Journals (Sweden)

    Magdalena Swierczewska

    2011-01-01

    Multimodal molecular imaging can offer a synergistic improvement of diagnostic ability over a single imaging modality. Recent development of hybrid imaging systems has profoundly impacted the pool of available multimodal imaging probes. In particular, much interest has been focused on biocompatible, inorganic nanoparticle-based multimodal probes. Inorganic nanoparticles offer exceptional advantages to the field of multimodal imaging owing to their unique characteristics, such as nanometer dimensions, tunable imaging properties, and multifunctionality. Nanoparticles mainly based on iron oxide, quantum dots, gold, and silica have been applied to various imaging modalities to characterize and image specific biologic processes on a molecular level. A combination of nanoparticles and other materials such as biomolecules, polymers, and radiometals continue to increase functionality for in vivo multimodal imaging and therapeutic agents. In this review, we discuss the unique concepts, characteristics, and applications of the various multimodal imaging probes based on inorganic nanoparticles.

  14. Coherent multimoded dielectric wakefield accelerators

    International Nuclear Information System (INIS)

    Power, J.

    1998-01-01

    There has recently been a study of the potential uses of multimode dielectric structures for wakefield acceleration [1]. This technique is based on adjusting the wakefield modes of the structure to constructively interfere at certain delays with respect to the drive bunch, thus providing an accelerating gradient enhancement over single mode devices. In this report we examine and attempt to clarify the issues raised by this work in the light of the present state of the art in wakefield acceleration
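
    As a rough worked form of the idea (standard wakefield notation, an assumption here rather than the report's own equation), the longitudinal wake behind the drive bunch can be written as a sum over structure modes,

        W_z(\tau) = \sum_n k_n \cos(\omega_n \tau),

    where k_n and \omega_n are the loss factor and frequency of mode n. If the structure is designed so that several \omega_n are near-commensurate, the cosines add coherently at a chosen witness delay \tau_w, enhancing the accelerating gradient there relative to a single-mode device.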

  15. An audio-visual dataset of human-human interactions in stressful situations

    NARCIS (Netherlands)

    Lefter, I.; Burghouts, G.J.; Rothkrantz, L.J.M.

    2014-01-01

    Stressful situations are likely to occur at human-operated service desks, as well as at human-computer interfaces used in the public domain. Automatic surveillance can help by notifying when extra assistance is needed. Human communication is inherently multimodal, e.g. speech, gestures, facial expressions.

  16. Multimodality imaging of pulmonary infarction

    International Nuclear Information System (INIS)

    Bray, T.J.P.; Mortensen, K.H.; Gopalan, D.

    2014-01-01

    Highlights: • A plethora of pulmonary and systemic disorders, often associated with grave outcomes, may cause pulmonary infarction. • A stereotypical infarct is a peripheral wedge shaped pleurally based opacity but imaging findings can be highly variable. • Multimodality imaging is key to diagnosing the presence, aetiology and complications of pulmonary infarction. • Multimodality imaging of pulmonary infarction together with any ancillary features often guide to early targeted treatment. • CT remains the principal imaging modality with MRI increasingly used alongside nuclear medicine studies and ultrasound. - Abstract: The impact of absent pulmonary arterial and venous flow on the pulmonary parenchyma depends on a host of factors. These include location of the occlusive insult, the speed at which the occlusion develops and the ability of the normal dual arterial supply to compensate through increased bronchial arterial flow. Pulmonary infarction occurs when oxygenation is cut off secondary to sudden occlusion with lack of recruitment of the dual supply arterial system. Thromboembolic disease is the commonest cause of such an insult but a whole range of disease processes intrinsic and extrinsic to the pulmonary arterial and venous lumen may also result in infarcts. Recognition of the presence of infarction can be challenging as imaging manifestations often differ from the classically described wedge shaped defect and a number of weighty causes need consideration. This review highlights aetiologies and imaging appearances of pulmonary infarction, utilising cases to illustrate the essential role of a multimodality imaging approach in order to arrive at the appropriate diagnosis

  17. Multimodality imaging of pulmonary infarction

    Energy Technology Data Exchange (ETDEWEB)

    Bray, T.J.P., E-mail: timothyjpbray@gmail.com [Department of Radiology, Papworth Hospital NHS Foundation Trust, Ermine Street, Papworth Everard, Cambridge CB23 3RE (United Kingdom); Mortensen, K.H., E-mail: mortensen@doctors.org.uk [Department of Radiology, Papworth Hospital NHS Foundation Trust, Ermine Street, Papworth Everard, Cambridge CB23 3RE (United Kingdom); University Department of Radiology, Addenbrookes Hospital, Cambridge University Hospitals NHS Foundation Trust, Hills Road, Box 318, Cambridge CB2 0QQ (United Kingdom); Gopalan, D., E-mail: deepa.gopalan@btopenworld.com [Department of Radiology, Papworth Hospital NHS Foundation Trust, Ermine Street, Papworth Everard, Cambridge CB23 3RE (United Kingdom)

    2014-12-15

    Highlights: • A plethora of pulmonary and systemic disorders, often associated with grave outcomes, may cause pulmonary infarction. • A stereotypical infarct is a peripheral wedge shaped pleurally based opacity but imaging findings can be highly variable. • Multimodality imaging is key to diagnosing the presence, aetiology and complications of pulmonary infarction. • Multimodality imaging of pulmonary infarction together with any ancillary features often guide to early targeted treatment. • CT remains the principal imaging modality with MRI increasingly used alongside nuclear medicine studies and ultrasound. - Abstract: The impact of absent pulmonary arterial and venous flow on the pulmonary parenchyma depends on a host of factors. These include location of the occlusive insult, the speed at which the occlusion develops and the ability of the normal dual arterial supply to compensate through increased bronchial arterial flow. Pulmonary infarction occurs when oxygenation is cut off secondary to sudden occlusion with lack of recruitment of the dual supply arterial system. Thromboembolic disease is the commonest cause of such an insult but a whole range of disease processes intrinsic and extrinsic to the pulmonary arterial and venous lumen may also result in infarcts. Recognition of the presence of infarction can be challenging as imaging manifestations often differ from the classically described wedge shaped defect and a number of weighty causes need consideration. This review highlights aetiologies and imaging appearances of pulmonary infarction, utilising cases to illustrate the essential role of a multimodality imaging approach in order to arrive at the appropriate diagnosis.

  18. Diffusion Maps for Multimodal Registration

    Directory of Open Access Journals (Sweden)

    Gemma Piella

    2014-06-01

    Multimodal image registration is a difficult task, due to the significant intensity variations between the images. A common approach is to use sophisticated similarity measures, such as mutual information, that are robust to those intensity variations. However, these similarity measures are computationally expensive and, moreover, often fail to capture the geometry and the associated dynamics linked with the images. Another approach is the transformation of the images into a common space where modalities can be directly compared. Within this approach, we propose to register multimodal images by using diffusion maps to describe the geometric and spectral properties of the data. Through diffusion maps, the multimodal data is transformed into a new set of canonical coordinates that reflect its geometry uniformly across modalities, so that meaningful correspondences can be established between them. Images in this new representation can then be registered using a simple Euclidean distance as a similarity measure. Registration accuracy was evaluated on both real and simulated brain images with known ground-truth for both rigid and non-rigid registration. Results showed that the proposed approach achieved higher accuracy than the conventional approach using mutual information.
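
    A hedged Python sketch of the diffusion-map construction follows. The kernel scale, the toy patch features, and the direct use of Euclidean distance afterwards are assumptions for illustration, not the paper's settings.

        import numpy as np

        def diffusion_map(X, eps, k=3, t=1):
            # Gaussian affinities between samples (rows of X).
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            W = np.exp(-d2 / eps)
            d = W.sum(axis=1)
            # Symmetric conjugate of the Markov matrix keeps the eigenproblem real.
            A = W / np.sqrt(np.outer(d, d))
            vals, vecs = np.linalg.eigh(A)
            vals, vecs = vals[::-1], vecs[:, ::-1]         # descending eigenvalues
            psi = vecs / np.sqrt(d)[:, None]               # eigenvectors of the Markov matrix
            return psi[:, 1:k + 1] * (vals[1:k + 1] ** t)  # drop the trivial top one

        rng = np.random.default_rng(1)
        patches = rng.random((50, 16))   # toy per-patch intensity features
        emb = diffusion_map(patches, eps=0.5)
        print(emb.shape)                 # (50, 3): data mapped into this space can be
                                         # compared with plain Euclidean distance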

  19. Multimodal Estimation of Distribution Algorithms.

    Science.gov (United States)

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the strength of estimation of distribution algorithms (EDAs) in preserving high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternative utilization of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of Gaussian and Cauchy distributions, we generate the offspring at the niche level by alternately using these two distributions. Such utilization can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
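
    A toy Python sketch of a niche-level EDA in the spirit of this record follows: the population is partitioned into niches, each niche's distribution is estimated, and offspring are drawn alternately from Gaussian and heavy-tailed Cauchy models. The test function, the fixed-size "clustering", and the crowding rule are all illustrative choices, not the paper's.

        import numpy as np

        def f(x):  # multimodal test function to maximize on [0, 3]
            return np.sin(3 * x) + 0.8 * np.sin(5 * x)

        rng = np.random.default_rng(2)
        pop = rng.uniform(0, 3, size=60)

        for gen in range(40):
            # Crude niching: split the sorted population into fixed-size niches.
            niches = np.array_split(np.sort(pop), 6)
            offspring = []
            for niche in niches:
                mu, sigma = niche.mean(), niche.std() + 1e-3  # estimate niche model
                if gen % 2 == 0:   # alternate Gaussian and heavy-tailed Cauchy sampling
                    cand = rng.normal(mu, sigma, len(niche))
                else:
                    cand = mu + sigma * rng.standard_cauchy(len(niche))
                cand = np.clip(cand, 0, 3)
                # Crowding-style selection: keep the better of parent and offspring.
                offspring.append(np.where(f(cand) > f(niche), cand, niche))
            pop = np.concatenate(offspring)

        for niche in np.array_split(np.sort(pop), 6):
            m = float(niche.mean())
            print(round(m, 3), round(float(f(m)), 3))  # niche centers and fitness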

  20. Multimodal Diversity of Postmodernist Fiction Text

    Directory of Open Access Journals (Sweden)

    U. I. Tykha

    2016-12-01

    The article is devoted to the analysis of structural and functional manifestations of multimodal diversity in postmodernist fiction texts. Multimodality is defined as the coexistence of more than one semiotic mode within a certain context. Multimodal texts feature a diversity of semiotic modes in the communication and development of their narrative. Such experimental texts subvert conventional patterns by introducing various semiotic resources – verbal or non-verbal.

  1. Experiencia de enseñanza multimodal en una clase de idiomas [Experience of multimodal teaching in a language classroom]

    Directory of Open Access Journals (Sweden)

    María Martínez Lirola

    2013-12-01

    Our society is becoming ever more technological and multimodal, and consequently teaching has to adapt to the new times. This article analyses the way in which the subject English Language IV of the degree in English Studies at the University of Alicante combines the development of the five skills (listening, speaking, reading, writing and interacting), evaluated by means of a portfolio, with multimodality in teaching practices and in each of the activities that make up the portfolio. The results of a survey administered at the end of the 2011-2012 academic year point out the main competences that university students develop thanks to multimodal teaching, and the importance of tutorials in this kind of teaching.

  2. Multimodal exemplification: The expansion of meaning in electronic ...

    African Journals Online (AJOL)

    Functional Multimodal Discourse Analysis (SF-MDA) and argues for improving their exemplification multimodally. Multimodal devices, if well coordinated, can help optimize e-dictionary examples in informativity, diversity, dynamicity and ...

  3. Human Computer Collaboration at the Edge: Enhancing Collective Situation Understanding with Controlled Natural Language

    Science.gov (United States)

    2016-09-06

    conversational agent with information exchange disabled until the end of the experiment run. The meaning of the indicator in the top-right of the agent… Authors: Alun Preece (email: PreeceAD@cardiff.ac.uk), William…; affiliations include Emerging Technology Services, IBM United Kingdom Ltd, Hursley Park, Winchester, UK, and the US Army Research Laboratory.

  4. Preterm EEG: a multimodal neurophysiological protocol.

    Science.gov (United States)

    Stjerna, Susanna; Voipio, Juha; Metsäranta, Marjo; Kaila, Kai; Vanhatalo, Sampsa

    2012-02-18

    Since its introduction in the early 1950s, electroencephalography (EEG) has been widely used in neonatal intensive care units (NICU) for assessment and monitoring of brain function in preterm and term babies. The most common indications are the diagnosis of epileptic seizures, assessment of brain maturity, and recovery from hypoxic-ischemic events. EEG recording techniques and the understanding of neonatal EEG signals have improved dramatically, but these advances have been slow to penetrate clinical traditions. The aim of this presentation is to make the theory and practice of advanced EEG recording available to neonatal units. In the theoretical part, we present animations to illustrate how a preterm brain gives rise to spontaneous and evoked EEG activities, both of which are unique to this developmental phase as well as crucial for proper brain maturation. Recent animal work has shown that structural brain development is clearly reflected in early EEG activity. The most important structures in this regard are the growing long-range connections and a transient cortical structure, the subplate. Sensory stimuli in a preterm baby generate responses that are seen at the single-trial level, and they have underpinnings in the subplate-cortex interaction. This brings neonatal EEG readily into a multimodal study, where EEG not only records cortical function but also tests subplate function via different sensory modalities. Finally, the introduction of clinically suitable dense-array EEG caps, as well as amplifiers capable of recording low frequencies, has disclosed a multitude of brain activities that have as yet been overlooked. In the practical part of this video, we show how a multimodal, dense-array EEG study is performed in the neonatal intensive care unit on a preterm baby in an incubator. The video demonstrates preparation of the baby and incubator, application of the EEG cap, and performance of the sensory stimulations.

  5. The integration of emotional and symbolic components in multimodal communication

    Directory of Open Access Journals (Sweden)

    Marc Mehu

    2015-07-01

    Human multimodal communication can be said to serve two main purposes: information transfer and social influence. In this paper, I argue that different components of multimodal signals play different roles in the processes of information transfer and social influence. Although the symbolic components of communication (e.g. verbal and denotative signals) are well suited to transfer conceptual information, emotional components (e.g. nonverbal signals that are difficult to manipulate voluntarily) likely take a function that is closer to social influence. I suggest that emotion should be considered a property of communicative signals, rather than an entity that is transferred as content by nonverbal signals. In this view, the effect of emotional processes on communication serves to change the quality of social signals to make them more efficient at producing responses in perceivers, whereas symbolic components increase the signals' efficiency at interacting with the cognitive processes dedicated to the assessment of relevance. The interaction between symbolic and emotional components will be discussed in relation to the need for perceivers to evaluate the reliability of multimodal signals.

  6. The integration of emotional and symbolic components in multimodal communication

    Science.gov (United States)

    Mehu, Marc

    2015-01-01

    Human multimodal communication can be said to serve two main purposes: information transfer and social influence. In this paper, I argue that different components of multimodal signals play different roles in the processes of information transfer and social influence. Although the symbolic components of communication (e.g., verbal and denotative signals) are well suited to transfer conceptual information, emotional components (e.g., non-verbal signals that are difficult to manipulate voluntarily) likely take a function that is closer to social influence. I suggest that emotion should be considered a property of communicative signals, rather than an entity that is transferred as content by non-verbal signals. In this view, the effect of emotional processes on communication serves to change the quality of social signals to make them more efficient at producing responses in perceivers, whereas symbolic components increase the signals’ efficiency at interacting with the cognitive processes dedicated to the assessment of relevance. The interaction between symbolic and emotional components will be discussed in relation to the need for perceivers to evaluate the reliability of multimodal signals. PMID:26217280

  7. Multimodal targeted high relaxivity thermosensitive liposome for in vivo imaging

    Science.gov (United States)

    Kuijten, Maayke M. P.; Hannah Degeling, M.; Chen, John W.; Wojtkiewicz, Gregory; Waterman, Peter; Weissleder, Ralph; Azzi, Jamil; Nicolay, Klaas; Tannous, Bakhos A.

    2015-11-01

    Liposomes are spherical, self-closed structures formed by lipid bilayers that can encapsulate drugs and/or imaging agents in their hydrophilic core or within their membrane moiety, making them suitable delivery vehicles. We have synthesized a new liposome containing a gadolinium-DOTA lipid bilayer as a targeted multimodal molecular imaging agent for magnetic resonance and optical imaging. We showed that this liposome has much higher molar relaxivities r1 and r2 compared to a more conventional liposome containing gadolinium-DTPA-BSA lipid. By incorporating both gadolinium and rhodamine in the lipid bilayer, as well as biotin on its surface, we used this agent for multimodal imaging and targeting of tumors through the strong biotin-streptavidin interaction. Since this new liposome is thermosensitive, it can be used for ultrasound-mediated drug delivery at specific sites, such as tumors, and can be guided by magnetic resonance imaging.

  8. Multimodal Pedagogies for Teacher Education in TESOL

    Science.gov (United States)

    Yi, Youngjoo; Angay-Crowder, Tuba

    2016-01-01

    As a growing number of English language learners (ELLs) engage in digital and multimodal literacy practices in their daily lives, teachers are starting to incorporate multimodal approaches into their instruction. However, anecdotal and empirical evidence shows that teachers often feel unprepared for integrating such practices into their curricula…

  9. Multimode optical fibers: steady state mode exciter.

    Science.gov (United States)

    Ikeda, M; Sugimura, A; Ikegami, T

    1976-09-01

    The steady state mode power distribution of the multimode graded index fiber was measured. A simple and effective steady state mode exciter was fabricated by an etching technique. Its insertion loss was 0.5 dB for an injection laser. Deviation in transmission characteristics of multimode graded index fibers can be avoided by using the steady state mode exciter.

  10. Filter. Remix. Make.: Cultivating Adaptability through Multimodality

    Science.gov (United States)

    Dusenberry, Lisa; Hutter, Liz; Robinson, Joy

    2015-01-01

    This article establishes traits of adaptable communicators in the 21st century, explains why adaptability should be a goal of technical communication educators, and shows how multimodal pedagogy supports adaptability. Three examples of scalable, multimodal assignments (infographics, research interviews, and software demonstrations) that evidence…

  11. (Re-)Examination of Multimodal Augmented Reality

    NARCIS (Netherlands)

    Rosa, N.E.; Werkhoven, P.J.; Hürst, W.O.

    2016-01-01

    The majority of augmented reality (AR) research has been concerned with visual perception; however, the move towards multimodality is imminent. At the same time, there is no clear vision of what multimodal AR is. The purpose of this position paper is to consider possible ways of examining AR other

  12. Multimodal pain management after arthroscopic surgery

    DEFF Research Database (Denmark)

    Rasmussen, Sten

    The thesis is based on four randomized controlled trials. The main hypothesis was that multimodal pain treatment provides faster recovery after arthroscopic surgery. NSAID was tested against placebo after knee arthroscopy...

  13. Adhesion of multimode adhesives to enamel and dentin after one year of water storage.

    Science.gov (United States)

    Vermelho, Paulo Moreira; Reis, André Figueiredo; Ambrosano, Glaucia Maria Bovi; Giannini, Marcelo

    2017-06-01

    This study aimed to evaluate the ultramorphological characteristics of tooth-resin interfaces and the bond strength (BS) of multimode adhesive systems to enamel and dentin. Multimode adhesives (Scotchbond Universal (SBU) and All-Bond Universal) were tested in both self-etch and etch-and-rinse modes and compared to control groups (Optibond FL and Clearfil SE Bond (CSB)). Adhesives were applied to human molars and composite blocks were incrementally built up. Teeth were sectioned to obtain specimens for microtensile BS and TEM analysis. Specimens were tested after storage for either 24 h or 1 year. SEM analyses were performed to classify the failure pattern of beam specimens after BS testing. Etching increased the enamel BS of multimode adhesives; however, BS decreased after storage for 1 year. No significant differences in dentin BS were noted between multimode and control in either evaluation period. Storage for 1 year only reduced the dentin BS for SBU in self-etch mode. TEM analysis identified hybridization and interaction zones in dentin and enamel for all adhesives. Silver impregnation was detected on dentin-resin interfaces after storage of specimens for 1 year only with the SBU and CSB. Storage for 1 year reduced enamel BS when adhesives are applied on etched surface; however, BS of multimode adhesives did not differ from those of the control group. In dentin, no significant difference was noted between the multimode and control group adhesives, regardless of etching mode. In general, multimode adhesives showed similar behavior when compared to traditional adhesive techniques. Multimode adhesives are one-step self-etching adhesives that can also be used after enamel/dentin phosphoric acid etching, but each product may work better in specific conditions.

  14. Drusen Characterization with Multimodal Imaging

    Science.gov (United States)

    Spaide, Richard F.; Curcio, Christine A.

    2010-01-01

    Summary Multimodal imaging findings and histological demonstration of soft drusen, cuticular drusen, and subretinal drusenoid deposits provided information used to develop a model explaining their imaging characteristics. Purpose To characterize the known appearance of cuticular drusen, subretinal drusenoid deposits (reticular pseudodrusen), and soft drusen as revealed by multimodal fundus imaging; to create an explanatory model that accounts for these observations. Methods Reported color, fluorescein angiographic, autofluorescence, and spectral domain optical coherence tomography (SD-OCT) images of patients with cuticular drusen, soft drusen, and subretinal drusenoid deposits were reviewed, as were actual images from affected eyes. Representative histological sections were examined. The geometry, location, and imaging characteristics of these lesions were evaluated. A hypothesis based on the Beer-Lambert Law of light absorption was generated to fit these observations. Results Cuticular drusen appear as numerous uniform round yellow-white punctate accumulations under the retinal pigment epithelium (RPE). Soft drusen are larger yellow-white dome-shaped mounds of deposit under the RPE. Subretinal drusenoid deposits are polymorphous light-grey interconnected accumulations above the RPE. Based on the model, both cuticular and soft drusen appear yellow due to the removal of shorter wavelength light by a double pass through the RPE. Subretinal drusenoid deposits, which are located on the RPE, are not subjected to short wavelength attenuation and therefore are more prominent when viewed with blue light. The location and morphology of extracellular material in relationship to the RPE, and associated changes to RPE morphology and pigmentation, appeared to be primary determinants of druse appearance in different imaging modalities. Conclusion Although cuticular drusen, subretinal drusenoid deposits, and soft drusen are composed of common components, they are distinguishable
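
    The Beer-Lambert model invoked in this record has the standard form (general optics, not a formula specific to the study):

        I(\lambda) = I_0(\lambda) \, e^{-\mu(\lambda)\, d}

    where I_0 is the incident intensity, \mu(\lambda) the wavelength-dependent attenuation coefficient of the absorbing layer, and d the path length. A double pass through the RPE doubles d and removes short (blue) wavelengths disproportionately, which is consistent with the yellow appearance of sub-RPE drusen and the blue-light prominence of deposits lying above the RPE described here.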

  15. Multifuel multimodal network design; Projeto de redes multicombustiveis multimodal

    Energy Technology Data Exchange (ETDEWEB)

    Lage, Carolina; Dias, Gustavo; Bahiense, Laura; Ferreira Filho, Virgilio J.M. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Programa de Engenharia de Producao

    2008-07-01

    The objective of the Multicommodity Multimodal Network Project is the development of modeling tools and methodologies for the optimal sizing of production networks and the multimodal distribution of multiple fuels and their incomes, considering investment and transportation costs. Given the inherently non-linear, combinatorial nature of the problem, solving real instances with the complete model in an exact way becomes computationally intractable. Thus, the resolution strategy should combine exact and heuristic methods applied to subdivisions of the original problem. This paper deals with one of these subdivisions, tackling the problem of modeling a network of pipelines to carry the production of ethanol away from the producing plants. The objective consists in defining the best network topology, minimizing investment and operational costs while meeting total demand. To that end, the network is considered a tree, where the nodes are the centers of producing regions and the edges are the pipelines through which the ethanol produced by the plants must be carried. The main objective also includes deciding the optimal diameter of each pipeline and the optimal size of the pumps, in order to minimize pumping costs. (author)
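
    One minimal way to write the sizing problem described above, in illustrative notation rather than the authors' formulation:

        \min_{d,\,q} \sum_{(i,j) \in T} \left( c_{\mathrm{pipe}}(d_{ij})\,\ell_{ij} + c_{\mathrm{pump}}(q_{ij}, d_{ij}) \right)
        \quad \text{s.t.} \quad \sum_j q_{ji} - \sum_j q_{ij} = b_i \quad \forall i

    where T is the tree of pipeline edges, d_{ij} and \ell_{ij} are the diameter and length of edge (i,j), q_{ij} the ethanol flow, and b_i the net supply or demand at each producing-region node i; the flow-dependent pumping term is what couples the diameter and pump-sizing decisions.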

  16. Metawidgets in the multimodal interface

    Energy Technology Data Exchange (ETDEWEB)

    Blattner, M.M. (Lawrence Livermore National Lab., CA (United States) Anderson (M.D.) Cancer Center, Houston, TX (United States)); Glinert, E.P.; Jorge, J.A.; Ormsby, G.R. (Rensselaer Polytechnic Inst., Troy, NY (United States). Dept. of Computer Science)

    1991-01-01

    We analyze two intertwined and fundamental issues concerning computer-to-human communication in the multimodal interfaces: the interplay between sound and graphics, and the role of object persistence. Our observations lead us to introduce metawidgets as abstract entities capable of manifesting themselves to users as image, as sound, or as various combinations and/or sequences of the two media. We show examples of metawidgets in action, and discuss mechanisms for choosing among alternative media for metawidget instantiation. Finally, we describe a couple of experimental microworlds we have implemented to test out some of our ideas. 17 refs., 7 figs.

  17. Multimodal approach to postoperative recovery

    DEFF Research Database (Denmark)

    Kehlet, Henrik

    2009-01-01

    PURPOSE OF REVIEW: To provide updated information on recent developments within individual components of multimodal interventions to improve postoperative outcome (fast-track methodology). RECENT FINDINGS: The value of the fast-track methodology to improve recovery and decrease hospital stay and morbidity has been firmly consolidated, especially in colorectal procedures. An increasing amount of data from other procedures supports the value of the fast-track concept across procedures. Fast-track programs should be based on the analysis of procedure-specific factors that may influence outcome...

  18. Ketamina en analgesia multimodal postcesarea [Ketamine in multimodal post-cesarean analgesia]

    OpenAIRE

    Monzón Rubio, Eva María

    2011-01-01

    Multimodal analgesia allows us to act on the different pain pathways while minimizing the potential adverse effects of the various drugs administered. In the case of post-cesarean pain this takes on particular importance, given the need to reduce the use of opioids, which pass into breast milk when the mother is breastfeeding. The use of subanesthetic doses of ketamine has been shown in several studies to reduce opioid requirements during the first...

  19. Haptic-Multimodal Flight Control System Update

    Science.gov (United States)

    Goodrich, Kenneth H.; Schutte, Paul C.; Williams, Ralph A.

    2011-01-01

    The rapidly advancing capabilities of autonomous aircraft suggest a future where many of the responsibilities of today's pilot transition to the vehicle, transforming the pilot's job into something akin to driving a car or simply being a passenger. Notionally, this transition will reduce the specialized skills, training, and attention required of the human user while improving safety and performance. However, our experience with highly automated aircraft highlights many challenges to this transition including: lack of automation resilience; adverse human-automation interaction under stress; and the difficulty of developing certification standards and methods of compliance for complex systems performing critical functions traditionally performed by the pilot (e.g., sense and avoid vs. see and avoid). Recognizing these opportunities and realities, researchers at NASA Langley are developing a haptic-multimodal flight control (HFC) system concept that can serve as a bridge between today's state-of-the-art aircraft, which are highly automated but have little autonomy and can only be operated safely by highly trained experts (i.e., pilots), and a future in which non-experts (e.g., drivers) can safely and reliably use autonomous aircraft to perform a variety of missions. This paper reviews the motivation and theoretical basis of the HFC system, describes its current state of development, and presents results from two pilot-in-the-loop simulation studies. These preliminary studies suggest the HFC reshapes human-automation interaction in a way well-suited to revolutionary ease-of-use.

  20. Multimodality localization of epileptic foci

    Science.gov (United States)

    Desco, Manuel; Pascau, Javier; Pozo, M. A.; Santos, Andres; Reig, Santiago; Gispert, Juan D.; Garcia-Barreno, Pedro

    2001-05-01

    This paper presents a multimodality approach for the localization of epileptic foci using PET, MRI and EEG combined without the need of external markers. A mutual information algorithm is used for MRI-PET registration. Dipole coordinates (provided by BESA software) are projected onto the MRI using a specifically developed algorithm. The four anatomical references used for electrode positioning (nasion, inion and two preauricular points) are located on the MRI using a triplanar viewer combined with a surface-rendering tool. A geometric transformation using deformation of the ideal sphere used for dipole calculations is then applied to match the patient's brain size and shape. Eight treatment-refractory epileptic patients have been studied. The combination of the anatomical information from the MRI, hypoperfusion areas in PET, and dipole position and orientation helped the physician in the diagnosis of epileptic focus location. Neurosurgery was not indicated for patients where PET and dipole results were inconsistent; in two cases it was clinically indicated despite the mismatch, showing a negative follow-up. The multimodality approach presented does not require external markers for dipole projection onto the MRI, this being the main difference with previous methods. The proposed method may play an important role in the indication of surgery for treatment-refractory epileptic patients.
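
    The mutual-information score that drives this kind of MRI-PET registration can be sketched from a joint intensity histogram; a registration loop would maximize it over candidate transforms. This is a generic sketch, not the authors' implementation, and the stand-in data below is random.

```python
import numpy as np

def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    """Mutual information between two aligned images/volumes from their joint histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)        # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)        # marginal of image B
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# A registration loop would search over rigid transforms of the PET volume
# and keep the transform that maximizes this score against the MRI volume.
rng = np.random.default_rng(0)
mri = rng.random((64, 64))
pet = mri + 0.1 * rng.random((64, 64))         # roughly aligned stand-in data
print(mutual_information(mri, pet))
```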

  1. Cardiac imaging. A multimodality approach

    Energy Technology Data Exchange (ETDEWEB)

    Thelen, Manfred [Johannes Gutenberg University Hospital, Mainz (Germany); Erbel, Raimund [University Hospital Essen (Germany). Dept. of Cardiology; Kreitner, Karl-Friedrich [Johannes Gutenberg University Hospital, Mainz (Germany). Clinic and Polyclinic for Diagnostic and Interventional Radiology; Barkhausen, Joerg (eds.) [University Hospital Schleswig-Holstein, Luebeck (Germany). Dept. of Radiology and Nuclear Medicine

    2009-07-01

    An excellent atlas on modern diagnostic imaging of the heart. Written by an interdisciplinary team of experts, Cardiac Imaging: A Multimodality Approach features an in-depth introduction to all current imaging modalities for the diagnostic assessment of the heart as well as a clinical overview of cardiac diseases and main indications for cardiac imaging. With a particular emphasis on CT and MRI, the first part of the atlas also covers conventional radiography, echocardiography, angiography and nuclear medicine imaging. Leading specialists demonstrate the latest advances in the field, and compare the strengths and weaknesses of each modality. The book's second part features clinical chapters on heart defects, endocarditis, coronary heart disease, cardiomyopathies, myocarditis, cardiac tumors, pericardial diseases, pulmonary vascular diseases, and diseases of the thoracic aorta. The authors address anatomy, pathophysiology, and clinical features, and evaluate the various diagnostic options. Key features: - Highly regarded experts in cardiology and radiology offer image-based teaching of the latest techniques - Readers learn how to decide which modality to use for which indication - Visually highlighted tables and essential points allow for easy navigation through the text - More than 600 outstanding images show up-to-date technology and current imaging protocols Cardiac Imaging: A Multimodality Approach is a must-have desk reference for cardiologists and radiologists in practice, as well as a study guide for residents in both fields. It will also appeal to cardiac surgeons, general practitioners, and medical physicists with a special interest in imaging of the heart. (orig.)

  2. Cardiac imaging. A multimodality approach

    International Nuclear Information System (INIS)

    Thelen, Manfred; Erbel, Raimund; Kreitner, Karl-Friedrich; Barkhausen, Joerg

    2009-01-01

    An excellent atlas on modern diagnostic imaging of the heart. Written by an interdisciplinary team of experts, Cardiac Imaging: A Multimodality Approach features an in-depth introduction to all current imaging modalities for the diagnostic assessment of the heart as well as a clinical overview of cardiac diseases and main indications for cardiac imaging. With a particular emphasis on CT and MRI, the first part of the atlas also covers conventional radiography, echocardiography, angiography and nuclear medicine imaging. Leading specialists demonstrate the latest advances in the field, and compare the strengths and weaknesses of each modality. The book's second part features clinical chapters on heart defects, endocarditis, coronary heart disease, cardiomyopathies, myocarditis, cardiac tumors, pericardial diseases, pulmonary vascular diseases, and diseases of the thoracic aorta. The authors address anatomy, pathophysiology, and clinical features, and evaluate the various diagnostic options. Key features: - Highly regarded experts in cardiology and radiology offer image-based teaching of the latest techniques - Readers learn how to decide which modality to use for which indication - Visually highlighted tables and essential points allow for easy navigation through the text - More than 600 outstanding images show up-to-date technology and current imaging protocols Cardiac Imaging: A Multimodality Approach is a must-have desk reference for cardiologists and radiologists in practice, as well as a study guide for residents in both fields. It will also appeal to cardiac surgeons, general practitioners, and medical physicists with a special interest in imaging of the heart. (orig.)

  3. Multimodal Hyper-connectivity Networks for MCI Classification.

    Science.gov (United States)

    Li, Yang; Gao, Xinqiang; Jie, Biao; Yap, Pew-Thian; Kim, Min-Jeong; Wee, Chong-Yaw; Shen, Dinggang

    2017-09-01

    A hyper-connectivity network is a network in which an edge can connect more than two nodes, and it can be naturally represented as a hypergraph. Hyper-connectivity brain networks, based on either structural or functional interactions among brain regions, have been used for brain disease diagnosis. However, the conventional hyper-connectivity network is constructed solely from single-modality data, ignoring potential complementary information conveyed by other modalities. The integration of complementary information from multiple modalities has been shown to provide a more comprehensive representation of brain disruptions. In this paper, a novel multimodal hyper-network modelling method is proposed for improving the diagnostic accuracy of mild cognitive impairment (MCI). Specifically, we first constructed a multimodal hyper-connectivity network by simultaneously considering information from diffusion tensor imaging and resting-state functional magnetic resonance imaging data. We then extracted different types of network features from the hyper-connectivity network, and further exploited a manifold-regularized multi-task feature selection method to jointly select the most discriminative features. Our proposed multimodal hyper-connectivity network demonstrated better MCI classification performance than conventional single-modality hyper-connectivity networks.
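
    As a rough sketch of the representation, a hyper-connectivity network can be stored as a region-by-hyperedge incidence matrix. The construction rule below (one hyperedge per seed region, linking its strongest correlates across the two modalities) is an assumed simplification, not the paper's exact procedure.

```python
import numpy as np

def hyperedges_from_connectivity(fmri_conn: np.ndarray,
                                 dti_conn: np.ndarray,
                                 k: int = 3) -> np.ndarray:
    """Build a hypergraph incidence matrix (regions x hyperedges).

    One hyperedge per seed region: the seed plus its k strongest
    neighbours in either modality, so each edge can span more than
    two nodes and mixes functional and structural information.
    """
    n = fmri_conn.shape[0]
    incidence = np.zeros((n, n), dtype=int)
    for seed in range(n):
        combined = np.maximum(np.abs(fmri_conn[seed]), np.abs(dti_conn[seed]))
        combined[seed] = -np.inf                 # exclude self from neighbour ranking
        members = np.argsort(combined)[-k:]      # k strongest neighbours
        incidence[seed, seed] = 1
        incidence[members, seed] = 1
    return incidence

# Node degree in the hypergraph is one simple feature to feed a classifier.
rng = np.random.default_rng(1)
f = rng.random((10, 10)); d = rng.random((10, 10))
H = hyperedges_from_connectivity((f + f.T) / 2, (d + d.T) / 2)
print(H.sum(axis=1))  # hypergraph node degrees
```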

  4. Human-computer interfaces applied to numerical solution of the Plateau problem

    Science.gov (United States)

    Elias Fabris, Antonio; Soares Bandeira, Ivana; Ramos Batista, Valério

    2015-09-01

    In this work we present a code in Matlab that solves the Plateau problem numerically and includes a human-computer interface. The Plateau problem has applications in areas such as Computer Graphics. The solution method is the same as that of the Surface Evolver, but with the difference of a complete graphical interface for the user. This will enable us to implement other kinds of interfaces, such as ocular mouse, voice, and touch. To date, Evolver does not include any graphical interface, which restricts its use by the scientific community. In particular, its use is practically impossible for most physically challenged people.

  5. MOBILTEL - Mobile Multimodal Telecommunications dialogue system based on VoIP telephony

    Directory of Open Access Journals (Sweden)

    Anton Čižmár

    2009-10-01

    Full Text Available In this paper the MobilTel project is presented. Communication itself is becoming a multimodal interactive process. The MobilTel project provides research and development activities in the area of multimodal interfaces. The result is a functional architecture for a mobile multimodal telecommunication system running on a handheld device. The MobilTel communicator is a multimodal Slovak speech and graphical interface with an integrated VoIP client. The other possible modalities are pen (touch screen) interaction, keyboard, and a display on which the information is presented in a user-friendly way (icons, emoticons, etc.), with hyperlink and scrolling-menu availability. We describe the method of interaction between a mobile terminal (PDA) and the MobilTel multimodal PC communicator over a VoIP WLAN connection based on the SIP protocol. We also present graphical examples of services that enable users to obtain information about the weather or about train connections between two stations.

  6. Practical multimodal care for cancer cachexia.

    Science.gov (United States)

    Maddocks, Matthew; Hopkinson, Jane; Conibear, John; Reeves, Annie; Shaw, Clare; Fearon, Ken C H

    2016-12-01

    Cancer cachexia is common and reduces function, treatment tolerability and quality of life. Given its multifaceted pathophysiology, a multimodal approach to cachexia management is advocated, but can be difficult to realise in practice. We use a case-based approach to highlight practical approaches to the multimodal management of cachexia for patients across the cancer trajectory. Four cases with lung cancer spanning surgical resection, radical chemoradiotherapy, palliative chemotherapy and no anticancer treatment are presented. We propose multimodal care approaches that incorporate nutritional support, exercise, and anti-inflammatory agents, on a background of personalized oncology care and family-centred education. Collectively, the cases reveal that multimodal care is part of everyone's remit, often focuses on supported self-management, and demands buy-in from the patient and their family. Once operationalized, multimodal care approaches can be tested pragmatically, including alongside emerging pharmacological cachexia treatments. We demonstrate that multimodal care for cancer cachexia can be achieved using simple treatments and without a dedicated team of specialists. The sharing of advice between health professionals can help build collective confidence and expertise, moving towards a position in which every team member feels they can contribute to multimodal care.

  7. A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems

    Science.gov (United States)

    Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  8. Effects of muscle fatigue on the usability of a myoelectric human-computer interface.

    Science.gov (United States)

    Barszap, Alexander G; Skavhaug, Ida-Maria; Joshi, Sanjay S

    2016-10-01

    Electromyography-based human-computer interface development is an active field of research. However, knowledge on the effects of muscle fatigue for specific devices is limited. We have developed a novel myoelectric human-computer interface in which subjects continuously navigate a cursor to targets by manipulating a single surface electromyography (sEMG) signal. Two-dimensional control is achieved through simultaneous adjustments of power in two frequency bands through a series of dynamic low-level muscle contractions. Here, we investigate the potential effects of muscle fatigue during the use of our interface. In the first session, eight subjects completed 300 cursor-to-target trials without breaks; four using a wrist muscle and four using a head muscle. The wrist subjects returned for a second session in which a static fatiguing exercise took place at regular intervals in-between cursor-to-target trials. In the first session we observed no declines in performance as a function of use, even after the long period of use. In the second session, we observed clear changes in cursor trajectories, paired with a target-specific decrease in hit rates. Copyright © 2016 Elsevier B.V. All rights reserved.
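
    A minimal sketch of the two-band control idea the abstract describes: power in two frequency bands of a single sEMG channel yields a two-dimensional command. The band edges, sampling rate, and normalization below are assumptions, not the authors' parameters.

```python
import numpy as np

def band_power(semg: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Power of an sEMG window within [lo, hi) Hz via the FFT periodogram."""
    spectrum = np.abs(np.fft.rfft(semg)) ** 2
    freqs = np.fft.rfftfreq(semg.size, d=1.0 / fs)
    return float(spectrum[(freqs >= lo) & (freqs < hi)].sum())

def cursor_command(semg_window: np.ndarray, fs: float = 1000.0):
    """Map power in two frequency bands to a normalized 2D cursor command.

    Illustrative band edges and gains; the published interface derives two
    simultaneous control signals from one sEMG channel in a similar spirit.
    """
    low_band = band_power(semg_window, fs, 20.0, 60.0)
    high_band = band_power(semg_window, fs, 60.0, 120.0)
    total = low_band + high_band + 1e-12
    return (low_band / total, high_band / total)

rng = np.random.default_rng(2)
print(cursor_command(rng.standard_normal(1000)))
```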

  9. US Army Weapon Systems Human-Computer Interface (WSHCI) style guide, Version 1

    Energy Technology Data Exchange (ETDEWEB)

    Avery, L.W.; O`Mara, P.A.; Shepard, A.P.

    1996-09-30

    A stated goal of the U.S. Army has been the standardization of the human-computer interfaces (HCIs) of its systems. Some of the tools being used to accomplish this standardization are HCI design guidelines and style guides. Currently, the Army is employing a number of style guides. While these style guides provide good guidance for the command, control, communications, computers, and intelligence (C4I) domain, they do not necessarily represent the more unique requirements of the Army's real-time and near-real-time (RT/NRT) weapon systems. The Office of the Director of Information for Command, Control, Communications, and Computers (DISC4), in conjunction with the Weapon Systems Technical Architecture Working Group (WSTAWG), recognized this need as part of their activities to revise the Army Technical Architecture (ATA). To address this need, DISC4 tasked the Pacific Northwest National Laboratory (PNNL) to develop an Army weapon systems unique HCI style guide. This document, the U.S. Army Weapon Systems Human-Computer Interface (WSHCI) Style Guide, represents the first version of that style guide. The purpose of this document is to provide HCI design guidance for RT/NRT Army systems across the weapon systems domains of ground, aviation, missile, and soldier systems. Each domain should customize and extend this guidance by developing its own domain-specific style guide, which will be used to guide the development of future systems within that domain.

  10. Mediating multimodal environmental knowledge across animation techniques

    DEFF Research Database (Denmark)

    Maier, Carmen Daniela

    2011-01-01

    The growing awareness of and concern about present environmental problems generates a proliferation of new forms of environmental discourses that are mediated in various ways. This chapter explores issues related to the ways in which environmental knowledge is multimodally communicated... on www.sustainlane.com/. The multimodal discourse analysis is meant to reveal how the selection and representation of environmental knowledge about social actors, social actions, resources, time and space are influenced by animation techniques. Furthermore, in the context of this multimodal discourse analysis, their influence upon...

  11. Simulation of the dynamics of a multimode bipolarisation class B laser with intracavity frequency doubling

    International Nuclear Information System (INIS)

    Khandokhin, Pavel A

    2006-01-01

    A model of a multimode bipolarisation solid-state laser with intracavity frequency doubling is developed. The interaction of different longitudinal modes is described within the rate-equation approximation, while the interaction of each pair of orthogonally polarised modes with identical longitudinal indices is described taking into account the phase-sensitive interaction of these modes. A comparison with the experimental data is performed. (dynamics processes in lasers)

  12. Multimodal signalling in estrildid finches

    DEFF Research Database (Denmark)

    Gomes, A. C. R.; Funghi, C.; Soma, M.

    2017-01-01

    Sexual traits (e.g. visual ornaments, acoustic signals, courtship behaviour) are often displayed together as multimodal signals. Some hypotheses predict joint evolution of different sexual signals (e.g. to increase the efficiency of communication) or that different signals trade off with each other...... (e.g. due to limited resources). Alternatively, multiple signals may evolve independently for different functions, or to communicate different information (multiple message hypothesis). We evaluated these hypotheses with a comparative study in the family Estrildidae, one of the largest songbird...... compromise, but generally courtship dance also evolved independently from other signals. Instead of correlated evolution, we found that song, dance and colour are each related to different socio-ecological traits. Song complexity evolved together with ecological generalism, song performance with investment...

  13. Multispectral analysis of multimodal images

    Energy Technology Data Exchange (ETDEWEB)

    Kvinnsland, Yngve; Brekke, Njaal (Dept. of Surgical Sciences, Univ. of Bergen, Bergen (Norway)); Taxt, Torfinn M.; Gruener, Renate (Dept. of Biomedicine, Univ. of Bergen, Bergen (Norway))

    2009-02-15

    An increasing number of multimodal images represent a valuable increase in available image information, but at the same time complicate the extraction of diagnostic information across the images. Multispectral analysis (MSA) has the potential to simplify this problem substantially, as an unlimited number of images can be combined and tissue properties across the images can be extracted automatically. Materials and methods: We have developed a software solution for MSA containing two algorithms for unsupervised classification, an EM algorithm finding multinormal class descriptions and the k-means clustering algorithm, and two for supervised classification, a Bayesian classifier using multinormal class descriptions and a kNN algorithm. The software has an efficient user interface for the creation and manipulation of class descriptions, and it has proper tools for displaying the results. Results: The software has been tested on different sets of images. One application is to segment cross-sectional images of brain tissue (T1- and T2-weighted MR images) into its main normal tissues and brain tumors. Another interesting set of images are the perfusion and diffusion maps derived from raw MR images. The software returns segmentations that appear sensible. Discussion: The MSA software appears to be a valuable tool for image analysis when multimodal images are at hand. It readily gives a segmentation of image volumes that visually seems sensible. However, to really learn how to use MSA, it will be necessary to gain more insight into what tissues the different segments contain, and upcoming work will therefore focus on examining the tissues through, for example, histological sections.
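
    The unsupervised path through such a tool can be sketched with a plain k-means over voxel feature vectors stacked across modalities; the published software also offers an EM algorithm and supervised classifiers, which this sketch omits.

```python
import numpy as np

def kmeans_segment(volumes: list, k: int = 3, iters: int = 20) -> np.ndarray:
    """Cluster voxels using one feature per modality (e.g. T1, T2, perfusion)."""
    feats = np.stack([v.ravel() for v in volumes], axis=1).astype(float)
    feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-12)   # z-score each modality
    rng = np.random.default_rng(0)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        # Assign each voxel to its nearest class center, then update the means.
        labels = np.argmin(((feats[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(0)
    return labels.reshape(volumes[0].shape)

rng = np.random.default_rng(3)
t1, t2 = rng.random((32, 32)), rng.random((32, 32))   # stand-in modality images
print(np.bincount(kmeans_segment([t1, t2]).ravel()))  # voxels per segment
```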

  14. Evaluation of 3D Positioned Sound in Multimodal Scenarios

    DEFF Research Database (Denmark)

    Møller, Anders Kalsgaard

    This Ph.D. study has dealt with different binaural methods for implementing 3D sound in selected multimodal applications, with the purpose of evaluating the feasibility of using 3D sound in these applications. The thesis dealt with a teleconference application in which one person is not physically present but interacts with the other meeting members using different virtual reality technologies. The thesis also dealt with a 3D sound system in trucks: it was investigated whether 3D sound could be used to give the truck driver an audible and lifelike experience of a cyclist's position, in relation...

  15. Nonreciprocal frequency conversion in a multimode microwave optomechanical circuit

    Science.gov (United States)

    Feofanov, A. K.; Bernier, N. R.; Toth, L. D.; Koottandavida, A.; Kippenberg, T. J.

    Nonreciprocal devices such as isolators, circulators, and directional amplifiers are pivotal to quantum signal processing with superconducting circuits. In the microwave domain, commercially available nonreciprocal devices are based on ferrite materials. They are barely compatible with superconducting quantum circuits, lossy, and cannot be integrated on chip. Significant potential exists for implementing non-magnetic chip-scale nonreciprocal devices using microwave optomechanical circuits. Here we demonstrate the possibility of nonreciprocal frequency conversion in a multimode microwave optomechanical circuit using solely the optomechanical interaction between modes. The conversion scheme and results reflecting the current progress on its experimental implementation will be presented.

  16. From Annotated Multimodal Corpora to Simulated Human-Like Behaviors

    DEFF Research Database (Denmark)

    Rehm, Matthias; André, Elisabeth

    2008-01-01

    Multimodal corpora prove useful at different stages of the development process of embodied conversational agents. Insights into human-human communicative behaviors can be drawn from such corpora. Rules for planning and generating such behavior in agents can be derived from this information. And even the evaluation of human-agent interactions can rely on corpus data from human-human communication. In this paper, we exemplify how corpora can be exploited at the different development steps, starting with the question of how corpora are annotated and at what level of granularity. The corpus data...

  17. Kinesthetic Interaction

    DEFF Research Database (Denmark)

    Fogtmann, Maiken Hillerup; Fritsch, Jonas; Kortbek, Karen Johanne

    2008-01-01

    Within the Human-Computer Interaction community there is a growing interest in designing for the whole body in interaction design. The attempts aimed at addressing the body have very different outcomes, spanning from theoretical arguments for understanding the body in the design process to more practical examples of designing for bodily potential. This paper presents Kinesthetic Interaction as a unifying concept for describing the body in motion as a foundation for designing interactive systems. Based on the theoretical foundation for Kinesthetic Interaction, a conceptual framework is introduced to reveal bodily potential in relation to three design themes – kinesthetic development, kinesthetic means and kinesthetic disorder – and seven design parameters – engagement, sociality, movability, explicit motivation, implicit motivation, expressive meaning and kinesthetic empathy. The framework is a tool...

  18. Multilinguality, Multimodality, and Multicompetence: Code- and Modeswitching by Minority Ethnic Children in Complementary Schools

    Science.gov (United States)

    Wei, Li

    2011-01-01

    This article examines the multilingual and multimodal practices of British Chinese children in complementary school classes from a multicompetence perspective. Using classroom interaction data from a number of Chinese complementary schools in 3 different cities in England, the article argues that the multicompetence perspective enables a holistic…

  19. Multimodal Classification of Violent Online Political Extremism Content with Graph Convolutional Networks

    NARCIS (Netherlands)

    Rudinac, S.; Gornishka, I.; Worring, M.

    2017-01-01

    In this paper we present a multimodal approach to categorizing user posts based on their discussion topic. To integrate heterogeneous information extracted from the posts, i.e. text, visual content and the information about user interactions with the online platform, we deploy graph convolutional

  20. Multimodal Corpus Analysis as a Method for Ensuring Cultural Usability of Embodied Conversational Agents

    DEFF Research Database (Denmark)

    Nakano, Yukiko; Rehm, Matthias

    2009-01-01

    In this paper we propose the method of multimodal corpus analysis to collect enough empirical data for modeling the behavior of embodied conversational agents. This is a prerequisite to ensure the usability of such complex interactive systems. So far, the development of embodied agents suffers fr...

  1. Double-Wavelet Approach to Studying the Modulation Properties of Nonstationary Multimode Dynamics

    DEFF Research Database (Denmark)

    Sosnovtseva, Olga; Mosekilde, Erik; Pavlov, A.N.

    2005-01-01

    On the basis of double-wavelet analysis, the paper proposes a method to study interactions in the form of frequency and amplitude modulation in nonstationary multimode data series. Special emphasis is given to the problem of quantifying the strength of modulation for a fast signal by a coexisting...

  2. Driver Education for New Multimodal Facilities

    Science.gov (United States)

    2016-05-24

    Local and state transportation agencies are redesigning roads to accommodate multimodal travel, including the addition of new configurations, infrastructures, and rules that may be unfamiliar to current drivers and other road users. Education and out...

  3. Responsive Multimodal Transportation Management Strategies And IVHS

    Science.gov (United States)

    1995-02-01

    The purpose of this study was to investigate new and innovative ways to incorporate IVHS technologies into multimodal transportation management strategies. Much of the IVHS research done to date has addressed the modes individually. This project focu...

  4. Influence of Blood Contamination During Multimode Adhesive ...

    African Journals Online (AJOL)

    2018-01-30

    Jan 30, 2018 ... microtensile bond strength (μTBS) of multimode adhesives to dentin when using the self-etch approach. Introduction: ... adhesion, the collagen fibers collapse during the... Materials and Methods: ... The failure mode was determined using an optical...

  5. Spatial sound in the use of multimodal interfaces for the acquisition of motor skills

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.

    2008-01-01

    This paper discusses the potential effectiveness of spatial sound in the use of multimodal interfaces and virtual environment technologies for the acquisition of motor skills. Because skills are generally of a multimodal nature, spatial sound is discussed in terms of the role that it may play in facilitating skill acquisition by complementing, or substituting, other sensory modalities. An overview of related research areas on audiovisual and audiotactile interaction is given in connection to the potential benefits of spatial sound as a means to improve the perceptual quality of the interfaces as well as to convey information considered critical for the transfer of motor skills.

  6. Multimodal and ubiquitous computing systems: supporting independent-living older users.

    Science.gov (United States)

    Perry, Mark; Dowdall, Alan; Lines, Lorna; Hone, Kate

    2004-09-01

    We document the rationale and design of a multimodal interface to a pervasive/ubiquitous computing system that supports independent living by older people in their own homes. The Millennium Home system involves fitting a resident's home with sensors--these sensors can be used to trigger sequences of interaction with the resident to warn them about dangerous events, or to check if they need external help. We draw lessons from the design process and conclude the paper with implications for the design of multimodal interfaces to ubiquitous systems developed for the elderly and in healthcare, as well as for more general ubiquitous computing applications.
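
    A toy sketch of the sensor-triggered interaction sequence such a system might run; the stages and timeouts are invented for illustration and are not the Millennium Home protocol.

```python
# Illustrative escalation ladder for a sensor-triggered check on the resident;
# the stages and timeouts are assumptions, not the Millennium Home protocol.
ESCALATION = [
    ("speech_prompt", 30),   # ask via the nearest speaker, wait 30 s
    ("phone_call", 60),      # ring the resident's phone, wait 60 s
    ("alert_carer", 0),      # finally notify an external carer
]

def respond_to_event(event: str, resident_answers: bool) -> str:
    """Walk the multimodal escalation ladder until the resident responds."""
    for action, timeout in ESCALATION:
        print(f"{event}: trying {action} (timeout {timeout} s)")
        if resident_answers:        # a real system would wait up to `timeout`
            return f"resolved via {action}"
    return "external help dispatched"

print(respond_to_event("no movement detected for 12 hours", resident_answers=False))
```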

  7. Treatment of human-computer interface in a decision support system

    International Nuclear Information System (INIS)

    Heger, A.S.; Duran, F.A.; Cox, R.G.

    1992-01-01

    One of the most challenging applications facing the computer community is the development of effective adaptive human-computer interfaces. This challenge stems from the complex nature of the human part of this symbiosis. The application of this discipline to environmental restoration and waste management is further complicated by the nature of environmental data. The information that is required to manage the environmental impacts of human activity is fundamentally complex. This paper discusses the efforts at Sandia National Laboratories to develop an adaptive conceptual model manager within the constraints of environmental decision-making. A computer workstation that hosts the Conceptual Model Manager and the Sandia Environmental Decision Support System will also be discussed.

  8. U.S. Army weapon systems human-computer interface style guide. Version 2

    Energy Technology Data Exchange (ETDEWEB)

    Avery, L.W.; O`Mara, P.A.; Shepard, A.P.; Donohoo, D.T.

    1997-12-31

    A stated goal of the US Army has been the standardization of the human-computer interfaces (HCIs) of its systems. Some of the tools being used to accomplish this standardization are HCI design guidelines and style guides. Currently, the Army is employing a number of HCI design guidance documents. While these style guides provide good guidance for the command, control, communications, computers, and intelligence (C4I) domain, they do not necessarily represent the more unique requirements of the Army's real-time and near-real-time (RT/NRT) weapon systems. The Office of the Director of Information for Command, Control, Communications, and Computers (DISC4), in conjunction with the Weapon Systems Technical Architecture Working Group (WSTAWG), recognized this need as part of their activities to revise the Army Technical Architecture (ATA), now termed the Joint Technical Architecture-Army (JTA-A). To address this need, DISC4 tasked the Pacific Northwest National Laboratory (PNNL) to develop an Army weapon systems unique HCI style guide, which resulted in the US Army Weapon Systems Human-Computer Interface (WSHCI) Style Guide Version 1. Based on feedback from the user community, DISC4 further tasked PNNL to revise Version 1 and publish Version 2. The intent was to update some of the research and incorporate some enhancements. This document provides that revision. The purpose of this document is to provide HCI design guidance for the RT/NRT Army system domain across the weapon systems subdomains of ground, aviation, missile, and soldier systems. Each subdomain should customize and extend this guidance by developing its own domain-specific style guide, which will be used to guide the development of future systems within that subdomain.

  9. Empowering Prospective Teachers to Become Active Sense-Makers: Multimodal Modeling of the Seasons

    Science.gov (United States)

    Kim, Mi Song

    2015-10-01

    Situating science concepts in concrete and authentic contexts, using information and communications technologies, including multimodal modeling tools, is important for promoting the development of higher-order thinking skills in learners. However, teachers often struggle to integrate emergent multimodal models into a technology-rich informal learning environment. Our design-based research co-designs and develops engaging, immersive, and interactive informal learning activities called "Embodied Modeling-Mediated Activities" (EMMA) to support not only Singaporean learners' deep learning of astronomy but also the capacity of teachers. As part of the research on EMMA, this case study describes two prospective teachers' co-design processes involving multimodal models for teaching and learning the concept of the seasons in a technology-rich informal learning setting. Our study uncovers four prominent themes emerging from our data concerning the contextualized nature of learning and teaching involving multimodal models in informal learning contexts: (1) promoting communication and emerging questions, (2) offering affordances through limitations, (3) explaining one concept involving multiple concepts, and (4) integrating teaching and learning experiences. This study has an implication for the development of a pedagogical framework for teaching and learning in technology-enhanced learning environments—that is empowering teachers to become active sense-makers using multimodal models.

  10. The Catchment Feature Model: A Device for Multimodal Fusion and a Bridge between Signal and Sense

    Science.gov (United States)

    Quek, Francis

    2004-12-01

    The catchment feature model addresses two questions in the field of multimodal interaction: how we bridge video and audio processing with the realities of human multimodal communication, and how information from the different modes may be fused. We argue from a detailed literature review that gestural research has clustered around manipulative and semaphoric use of the hands, motivate the catchment feature model from psycholinguistic research, and present the model. In contrast to "whole gesture" recognition, the catchment feature model applies a feature decomposition approach that facilitates cross-modal fusion at the level of discourse planning and conceptualization. We present our experimental framework for catchment feature-based research, cite three concrete examples of catchment features, and propose new directions of multimodal research based on the model.

  11. The Catchment Feature Model: A Device for Multimodal Fusion and a Bridge between Signal and Sense

    Directory of Open Access Journals (Sweden)

    Francis Quek

    2004-09-01

    Full Text Available The catchment feature model addresses two questions in the field of multimodal interaction: how we bridge video and audio processing with the realities of human multimodal communication, and how information from the different modes may be fused. We argue from a detailed literature review that gestural research has clustered around manipulative and semaphoric use of the hands, motivate the catchment feature model from psycholinguistic research, and present the model. In contrast to “whole gesture” recognition, the catchment feature model applies a feature decomposition approach that facilitates cross-modal fusion at the level of discourse planning and conceptualization. We present our experimental framework for catchment feature-based research, cite three concrete examples of catchment features, and propose new directions of multimodal research based on the model.

  12. GAIML: A New Language for Verbal and Graphical Interaction in Chatbots

    Directory of Open Access Journals (Sweden)

    Roberto Pirrone

    2008-01-01

    Full Text Available Natural and intuitive interaction between users and complex systems is a crucial research topic in human-computer interaction. A major direction is the definition and implementation of systems with natural language understanding capabilities. Interaction in natural language is often performed by means of systems called chatbots. A chatbot is a conversational agent with a proper knowledge base, able to interact with users. A chatbot's appearance can be very sophisticated, with 3D avatars and speech processing modules. However, the interaction between the system and the user is only performed through textual areas for inputs and replies. An interaction that adds graphical widgets to natural language could be more effective. Conversely, a graphical interaction that also involves natural language can be more comfortable for the user than graphical widgets alone. In many applications multimodal communication must be preferred when the user and the system have a tight and complex interaction. Typical examples are cultural heritage applications (intelligent museum guides, picture browsing) or systems providing the user with integrated information taken from different and heterogeneous sources, as in the case of the iGoogle™ interface. We propose to mix the two modalities (verbal and graphical) to build systems with a reconfigurable interface, which is able to change with respect to the particular application context. The result of this proposal is the Graphical Artificial Intelligence Markup Language (GAIML), an extension of AIML that allows merging both interaction modalities. In this context a suitable chatbot system called Graphbot is presented to support this language. With this language it is possible to define personalized interface patterns that are the most suitable in relation to the data types exchanged between the user and the system, according to the context of the dialogue.

  13. Recent developments in multimodality fluorescence imaging probes

    Directory of Open Access Journals (Sweden)

    Jianhong Zhao

    2018-05-01

    Full Text Available Multimodality optical imaging probes have emerged as powerful tools that improve detection sensitivity and accuracy, important in disease diagnosis and treatment. In this review, we focus on recent developments in optical fluorescence imaging (OFI) probe integration with other imaging modalities such as X-ray computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), single-photon emission computed tomography (SPECT), and photoacoustic imaging (PAI). The imaging technologies are briefly described in order to introduce the strengths and limitations of each technique and the need for further multimodality optical imaging probe development. The emphasis of this account is placed on how design strategies are currently implemented to afford physicochemically and biologically compatible multimodality optical fluorescence imaging probes. We also present studies that overcame the intrinsic disadvantages of each imaging technique by a multimodality approach with improved detection sensitivity and accuracy. KEY WORDS: Optical imaging, Fluorescence, Multimodality, Near-infrared fluorescence, Nanoprobe, Computed tomography, Magnetic resonance imaging, Positron emission tomography, Single-photon emission computed tomography, Photoacoustic imaging

  14. Multimodality, creativity and children's meaning-making: Drawings ...

    African Journals Online (AJOL)

    Multimodality, creativity and children's meaning-making: Drawings, writings, imaginings. ... Framed by social semiotic theories of communication, multimodal ... to create imaginary worlds and express meanings according to their interests.

  15. Multimodal Behavior Therapy: Case Study of a High School Student.

    Science.gov (United States)

    Seligman, Linda

    1981-01-01

    A case study of a high school student concerned with weight problems illustrates multimodal behavior therapy and its use in a high school setting. Multimodal therapy allows the school counselor to maximize referral sources while emphasizing growth and actualization. (JAC)

  16. Polarization Characterization of a Multi-Moded Feed Structure

    Data.gov (United States)

    National Aeronautics and Space Administration — The Polarization Characterization of a Multi-Moded Feed Structure project characterizes the polarization response of a multi-moded feed horn as an innovative...

  17. Multimodale trafiknet i GIS (Multimodal Traffic Network in GIS)

    DEFF Research Database (Denmark)

    Kronbak, Jacob; Brems, Camilla Riff

    1996-01-01

    The report introduces the use of multi-modal traffic networks within a Geographical Information System (GIS). The necessary theory of modelling multi-modal traffic networks is reviewed and applied to the ARC/INFO GIS through an explorative example.

  18. Multimodality instrument for tissue characterization

    Science.gov (United States)

    Mah, Robert W. (Inventor); Andrews, Russell J. (Inventor)

    2004-01-01

    A system with a multimodality instrument for tissue identification includes a computer-controlled motor-driven heuristic probe with a multisensory tip. For neurosurgical applications, the instrument is mounted on a stereotactic frame for the probe to penetrate the brain in a precisely controlled fashion. The resistance of the brain tissue being penetrated is continually monitored by a miniaturized strain gauge attached to the probe tip. Other modality sensors may be mounted near the probe tip to provide real-time tissue characterizations and the ability to detect the proximity of blood vessels, thus eliminating errors normally associated with registration of pre-operative scans, tissue swelling, elastic tissue deformation, human judgement, etc., and rendering surgical procedures safer, more accurate, and efficient. A neural network program adaptively learns the information on resistance and other characteristic features of normal brain tissue during the surgery and provides near real-time modeling. A fuzzy logic interface to the neural network program incorporates expert medical knowledge in the learning process. Identification of abnormal brain tissue is determined by the detection of change and comparison with previously learned models of abnormal brain tissues. The operation of the instrument is controlled through a user-friendly graphical interface. Patient data is presented in a 3D stereographics display. Acoustic feedback of selected information may optionally be provided. Upon detection of close proximity to blood vessels or abnormal brain tissue, the computer-controlled motor immediately stops probe penetration. The use of this system will make surgical procedures safer, more accurate, and more efficient. Other applications of this system include the detection, prognosis and treatment of breast cancer, prostate cancer, spinal diseases, and use in general exploratory surgery.

  19. Multimodality Data Integration in Epilepsy

    Directory of Open Access Journals (Sweden)

    Otto Muzik

    2007-01-01

    Full Text Available An important goal of software development in the medical field is the design of methods which are able to integrate information obtained from various imaging and nonimaging modalities into a cohesive framework in order to understand the results of qualitatively different measurements in a larger context. Moreover, it is essential to assess the various features of the data quantitatively so that relationships in anatomical and functional domains between complementing modalities can be expressed mathematically. This paper presents a clinically feasible software environment for the quantitative assessment of the relationship among biochemical functions as assessed by PET imaging and electrophysiological parameters derived from intracranial EEG. Based on the developed software tools, quantitative results obtained from individual modalities can be merged into a data structure allowing a consistent framework for advanced data mining techniques and 3D visualization. Moreover, an effort was made to derive quantitative variables (such as the spatial proximity index, SPI) characterizing the relationship between complementing modalities on a more generic level as a prerequisite for efficient data mining strategies. We describe the implementation of this software environment in twelve children (mean age 5.2±4.3 years) with medically intractable partial epilepsy who underwent both high-resolution structural MR and functional PET imaging. Our experiments demonstrate that our approach will lead to a better understanding of the mechanisms of epileptogenesis and might ultimately have an impact on treatment. Moreover, our software environment holds promise to be useful in many other neurological disorders, where integration of multimodality data is crucial for a better understanding of the underlying disease mechanisms.
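
    Since the abstract does not reproduce the SPI formula, the sketch below uses a hypothetical stand-in for such a proximity variable: the distance from each intracranial EEG contact to the nearest PET-abnormal voxel.

```python
import numpy as np

def spatial_proximity(pet_focus_voxels: np.ndarray,
                      electrode_xyz: np.ndarray) -> np.ndarray:
    """Distance (mm) from each electrode to the nearest PET-abnormal voxel.

    A hypothetical stand-in for a proximity index relating PET findings
    to intracranial EEG contacts; the published SPI may differ.
    """
    diffs = electrode_xyz[:, None, :] - pet_focus_voxels[None, :, :]
    return np.sqrt((diffs ** 2).sum(-1)).min(axis=1)

focus = np.array([[10.0, 42.0, 18.0], [11.0, 43.0, 18.0]])     # mm coordinates
electrodes = np.array([[12.0, 44.0, 19.0], [40.0, -10.0, 5.0]])
print(spatial_proximity(focus, electrodes))  # near contact vs. distant contact
```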

  20. The mind-writing pupil : A human-computer interface based on decoding of covert attention through pupillometry

    NARCIS (Netherlands)

    Mathôt, Sebastiaan; Melmi, Jean Baptiste; Van Der Linden, Lotje; Van Der Stigchel, Stefan

    2016-01-01

    We present a new human-computer interface that is based on decoding of attention through pupillometry. Our method builds on the recent finding that covert visual attention affects the pupillary light response: Your pupil constricts when you covertly (without looking at it) attend to a bright,

  1. Video genre classification using multimodal features

    Science.gov (United States)

    Jin, Sung Ho; Bae, Tae Meon; Choo, Jin Ho; Ro, Yong Man

    2003-12-01

    We propose a video genre classification method using multimodal features. The proposed method is applied for the preprocessing of automatic video summarization or the retrieval and classification of broadcasting video contents. Through a statistical analysis of low-level and middle-level audio-visual features in video, the proposed method can achieve good performance in classifying several broadcasting genres such as cartoon, drama, music video, news, and sports. In this paper, we adopt MPEG-7 audio-visual descriptors as multimodal features of video contents and evaluate the performance of the classification by feeding the features into a decision tree-based classifier which is trained by CART. The experimental results show that the proposed method can recognize several broadcasting video genres with high accuracy and that the classification performance with multimodal features is superior to that with unimodal features.
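
    A compact sketch of the classification step, using scikit-learn's CART-style decision tree in place of the authors' tooling; the feature matrix here is randomly generated and merely stands in for MPEG-7 audio-visual descriptors.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

GENRES = ["cartoon", "drama", "music_video", "news", "sports"]

# Stand-in for MPEG-7 audio-visual descriptors (e.g. motion activity,
# color layout, audio spectrum features); real features come from the clips.
rng = np.random.default_rng(4)
X = rng.random((500, 12))
y = rng.integers(0, len(GENRES), 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)  # CART-style tree
print(f"accuracy on held-out clips: {clf.score(X_te, y_te):.2f}")  # near chance on random data
```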

  2. Training of Perceptual Motor Skills in Multimodal Virtual Environments

    Directory of Open Access Journals (Sweden)

    Gopher Daniel

    2011-12-01

    Full Text Available Multimodal, immersive, virtual reality (VR) techniques open new perspectives for perceptual-motor skill trainers. They also introduce new risks and dangers. This paper describes the benefits and pitfalls of multimodal training and the cognitive building blocks of multimodal VR training simulators.

  3. A Novel Wearable Forehead EOG Measurement System for Human Computer Interfaces.

    Science.gov (United States)

    Heo, Jeong; Yoon, Heenam; Park, Kwang Suk

    2017-06-23

    Amyotrophic lateral sclerosis (ALS) patients whose voluntary muscles are paralyzed commonly communicate with the outside world using eye movement. There have been many efforts to support this method of communication by tracking or detecting eye movement. An electrooculogram (EOG), an electro-physiological signal, is generated by eye movements and can be measured with electrodes placed around the eye. In this study, we proposed a new practical electrode position on the forehead to measure EOG signals, and we developed a wearable forehead EOG measurement system for use in Human Computer/Machine interfaces (HCIs/HMIs). Four electrodes, including the ground electrode, were placed on the forehead. The two channels were arranged vertically and horizontally, sharing a positive electrode. Additionally, a real-time eye movement classification algorithm was developed based on the characteristics of the forehead EOG. Three applications were employed to evaluate the proposed system: a virtual keyboard using a modified Bremen BCI speller and an automatic sequential row-column scanner, and a drivable power wheelchair. The mean typing speeds of the modified Bremen brain-computer interface (BCI) speller and automatic row-column scanner were 10.81 and 7.74 letters per minute, and the mean classification accuracies were 91.25% and 95.12%, respectively. In the power wheelchair demonstration, the user drove the wheelchair through an 8-shape course without collision with obstacles.

  4. Controlling a human-computer interface system with a novel classification method that uses electrooculography signals.

    Science.gov (United States)

    Wu, Shang-Lin; Liao, Lun-De; Lu, Shao-Wei; Jiang, Wei-Ling; Chen, Shi-An; Lin, Chin-Teng

    2013-08-01

    Electrooculography (EOG) signals can be used to control human-computer interface (HCI) systems, if properly classified. The ability to measure and process these signals may help HCI users to overcome many of the physical limitations and inconveniences in daily life. However, there are currently no effective multidirectional classification methods for monitoring eye movements. Here, we describe a classification method used in a wireless EOG-based HCI device for detecting eye movements in eight directions. This device includes wireless EOG signal acquisition components, wet electrodes and an EOG signal classification algorithm. The EOG classification algorithm is based on extracting features from the electrical signals corresponding to eight directions of eye movement (up, down, left, right, up-left, down-left, up-right, and down-right) and blinking. The recognition and processing of these eight different features were achieved in real-life conditions, demonstrating that this device can reliably measure the features of EOG signals. This system and its classification procedure provide an effective method for identifying eye movements. Additionally, it may be applied to study eye functions in real-life conditions in the near future.
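
    The eight-direction idea can be sketched with simple thresholds on the horizontal and vertical channels; the published algorithm extracts richer features (and also detects blinks), so treat the thresholds below as placeholders.

```python
def classify_eog(h_amp: float, v_amp: float, thresh: float = 50.0) -> str:
    """Classify an eye movement from horizontal/vertical EOG deflections (µV).

    A threshold-on-two-channels sketch; the published method extracts
    richer features and additionally recognizes blinking.
    """
    h = "right" if h_amp > thresh else "left" if h_amp < -thresh else ""
    v = "up" if v_amp > thresh else "down" if v_amp < -thresh else ""
    if h and v:
        return f"{v}-{h}"          # diagonal: up-left, down-right, ...
    return h or v or "fixation"

for h, v in [(120, 0), (-90, 80), (0, -70), (10, 5)]:
    print((h, v), "->", classify_eog(h, v))
```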

  5. Selection of suitable hand gestures for reliable myoelectric human computer interface.

    Science.gov (United States)

    Castro, Maria Claudia F; Arjunan, Sridhar P; Kumar, Dinesh K

    2015-04-09

    A myoelectric-controlled prosthetic hand requires machine-based identification of hand gestures using surface electromyogram (sEMG) recorded from the forearm muscles. This study observed that a sub-set of the hand gestures has to be selected for accurate automated hand gesture recognition, and reports a method to select these gestures to maximize sensitivity and specificity. Experiments were conducted in which sEMG was recorded from the muscles of the forearm while subjects performed hand gestures; the recordings were then classified off-line. The performances of ten gestures were ranked using the proposed Positive-Negative Performance Measurement Index (PNM), generated by a series of confusion matrices. When using all ten gestures, the sensitivity and specificity were 80.0% and 97.8%. After ranking the gestures using the PNM, six gestures were selected that gave sensitivity and specificity greater than 95% (96.5% and 99.3%): hand open, hand close, little finger flexion, ring finger flexion, middle finger flexion and thumb flexion. This work has shown that reliable myoelectric-based human-computer interface systems require careful selection of the gestures to be recognized; without such selection, the reliability is poor.
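
    The selection idea can be sketched from a confusion matrix: score each gesture, rank, and keep the best subset. The score below (sensitivity plus specificity) is a stand-in, since the abstract does not give the PNM formula.

```python
import numpy as np

def per_gesture_scores(conf: np.ndarray) -> np.ndarray:
    """Sensitivity + specificity per gesture from a confusion matrix.

    A stand-in ranking score: the paper's Positive-Negative Performance
    Measurement Index (PNM) is also derived from confusion matrices, but
    its exact formula is not reproduced in the abstract.
    """
    tp = np.diag(conf).astype(float)
    fn = conf.sum(axis=1) - tp
    fp = conf.sum(axis=0) - tp
    tn = conf.sum() - tp - fn - fp
    return tp / (tp + fn + 1e-12) + tn / (tn + fp + 1e-12)

rng = np.random.default_rng(5)
conf = np.diag(rng.integers(60, 90, 10)) + rng.integers(0, 5, (10, 10))  # 10 gestures
keep = np.argsort(per_gesture_scores(conf))[-6:]  # retain the six best-separated gestures
print("selected gesture indices:", sorted(keep.tolist()))
```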

  6. Resolving the Paradox of the Active User: Stable Suboptimal Performance in Interactive Tasks

    National Research Council Canada - National Science Library

    Fu, Wai-Tat; Gray, Wayne D

    2004-01-01

    ...: Cognitive Aspects of Human Computer Interaction, Cambridge, MIT Press, MA, USA] the persistent use of inefficient procedures in interactive tasks by experienced or even expert users when demonstrably more efficient procedures exist...

  7. PET-MRI and multimodal cancer imaging

    International Nuclear Information System (INIS)

    Wang Taisong; Zhao Jinhua; Song Jianhua

    2011-01-01

    Multimodality imaging, specifically PET-CT, has brought a new perspective to the field of clinical imaging. Clinical cases have shown that PET-CT has great value in clinical diagnosis and experimental research, but PET-CT still has some limitations. A major drawback is that CT provides only limited soft tissue contrast and exposes the patient to a significant radiation dose. MRI overcomes these limitations: it has excellent soft tissue contrast, high temporal and spatial resolution, and causes no radiation damage. Additionally, since MRI also provides functional information, PET-MRI will point to a new direction for multimodality imaging in the future. (authors)

  8. Strategy development management of Multimodal Transport Network

    Directory of Open Access Journals (Sweden)

    Nesterova Natalia S.

    2016-01-01

    Full Text Available The article gives a brief overview of works on the development of transport infrastructure for multimodal transportation and the integration of the Russian transport system into the international transport corridors. The technology for controlling the strategy that changes the shape and capacity of a Multimodal Transport Network (MTN) is considered as part of the methodology for designing and developing the MTN. This technology allows strategic and operational management of the strategy implementation based on the use of the balanced scorecard.

  9. Multimodal surveillance sensors, algorithms, and systems

    CERN Document Server

    Zhu, Zhigang

    2007-01-01

    From front-end sensors to systems and environmental issues, this practical resource guides you through the many facets of multimodal surveillance. The book examines thermal, vibration, video, and audio sensors in a broad context of civilian and military applications. This cutting-edge volume provides an in-depth treatment of data fusion algorithms that takes you to the core of multimodal surveillance, biometrics, and sentient computing. The book discusses such people and activity topics as tracking people and vehicles and identifying individuals by their speech.Systems designers benefit from d

  10. Interruption of People in Human-Computer Interaction: A General Unifying Definition of Human Interruption and Taxonomy

    National Research Council Canada - National Science Library

    McFarlane, Daniel

    1997-01-01

    .... This report asserts that a single unifying definition of user-interruption and the accompanying practical taxonomy would be useful theoretical tools for driving effective investigation of this crucial...

  11. Combining multivariate statistics and the think-aloud protocol to assess Human-Computer Interaction barriers in symptom checkers.

    Science.gov (United States)

    Marco-Ruiz, Luis; Bønes, Erlend; de la Asunción, Estela; Gabarron, Elia; Aviles-Solis, Juan Carlos; Lee, Eunji; Traver, Vicente; Sato, Keiichi; Bellika, Johan G

    2017-10-01

    Symptom checkers are software tools that allow users to submit a set of symptoms and receive advice related to them in the form of a diagnosis list, health information or triage. The heterogeneity of their potential users and the number of different components in their user interfaces can make testing with end-users unaffordable. We designed and executed a two-phase method to test the respiratory diseases module of the symptom checker Erdusyk. Phase I consisted of an online test with a large sample of users (n=53). In Phase I, users evaluated the system remotely and completed a questionnaire based on the Technology Acceptance Model. Principal Component Analysis was used to correlate each section of the interface with the questionnaire responses, thus identifying which areas of the user interface presented significant contributions to the technology acceptance. In the second phase, the think-aloud procedure was executed with a small sample (n=15), focusing on the areas with significant contributions in order to analyze the reasons for such contributions. Our method was used effectively to optimize the testing of symptom checker user interfaces. The method kept the cost of testing at reasonable levels by restricting the use of the think-aloud procedure while still assuring a high amount of coverage. The main barriers detected in Erdusyk were related to problems understanding time repetition patterns, the selection of levels in scales to record intensities, navigation, the quantification of some symptom attributes, and the characteristics of the symptoms. Copyright © 2017 Elsevier Inc. All rights reserved.
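
    A sketch of the Phase I analysis pattern: standardize per-section interaction measures together with questionnaire scores and inspect principal-component loadings to see which interface sections co-vary with acceptance. All names, shapes, and data here are illustrative assumptions, not the study's materials.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows: 53 remote participants. Columns: per-section interaction measures
# (e.g. time spent, errors) followed by TAM questionnaire scores. The
# section/item layout is an assumption; the data below is synthetic.
rng = np.random.default_rng(6)
sections = rng.random((53, 6))            # 6 interface sections
tam = 0.6 * sections[:, :3].mean(1, keepdims=True) + 0.4 * rng.random((53, 4))

X = np.hstack([sections, tam])
X = (X - X.mean(0)) / X.std(0)            # standardize all columns
pca = PCA(n_components=2).fit(X)

# Sections whose loadings align with the TAM items on the same component
# are the candidates for focused think-aloud testing in Phase II.
print(np.round(pca.components_, 2))
```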

  12. Enhancing a CAVE with Eye Tracking System for Human-Computer Interaction Research in 3D Visualization

    National Research Council Canada - National Science Library

    Hix, Deborah

    1999-01-01

    The objective of this award was to purchase and install two ISCAN Inc. Eye Tracking Systems and associated equipment to create a unique set-up for research in fully immersive virtual environments (VEs...

  13. Medical students' cognitive load in volumetric image interpretation: Insights from human-computer interaction and eye movements

    NARCIS (Netherlands)

    Stuijfzand, Bobby G.; Van Der Schaaf, Marieke F.; Kirschner, Femke C.; Ravesloot, Cécile J.; Van Der Gijp, Anouk; Vincken, Koen L.

    2016-01-01

    Medical image interpretation is moving from using 2D- to volumetric images, thereby changing the cognitive and perceptual processes involved. This is expected to affect medical students' experienced cognitive load, while learning image interpretation skills. With two studies this explorative

  14. Open-Box Muscle-Computer Interface: Introduction to Human-Computer Interactions in Bioengineering, Physiology, and Neuroscience Courses

    Science.gov (United States)

    Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.

    2016-01-01

    A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…

  15. Custom-designed motion-based games for older adults: a review of literature in human-computer interaction

    OpenAIRE

    Gerling, Kathrin; Mandryk, Regan

    2014-01-01

    Many older adults, particularly persons living in senior residences and care homes, lead sedentary lifestyles, which reduces their life expectancy. Motion-based video games encourage physical activity and might be an opportunity for these adults to remain active and engaged; however, research efforts in the field have frequently focused on younger audiences and little is known about the requirements and benefits of motion-based games for elderly players. In this paper, we present an overview ...

  16. Reading Multimodal Texts for Learning – a Model for Cultivating Multimodal Literacy

    Directory of Open Access Journals (Sweden)

    Kristina Danielsson

    2016-08-01

    Full Text Available The re-conceptualisation of texts over the last 20 years, as well as the development of a multimodal understanding of communication and representation of knowledge, has profound consequences for the reading and understanding of multimodal texts, not least in educational contexts. However, if teachers and students are given tools to “unwrap” multimodal texts, they can develop a deeper understanding of texts, information structures, and the textual organisation of knowledge. This article presents a model for working with multimodal texts in education with the intention to highlight mutual multimodal text analysis in relation to the subject content. Examples are taken from a Singaporean science textbook as well as a Chilean science textbook, in order to demonstrate that the framework is versatile and applicable across different cultural contexts. The model takes into account the following aspects of texts: the general structure, how different semiotic resources operate, the ways in which different resources are combined (including coherence), the use of figurative language, and explicit/implicit values. Since learning operates on different dimensions – such as social and affective dimensions besides the cognitive ones – our inclusion of figurative language and values as components for textual analysis is a contribution to multimodal text analysis for learning.

  17. Influence of Blood Contamination During Multimode Adhesive ...

    African Journals Online (AJOL)

    Objectives: The present study evaluated the effects of blood contamination performed at different steps of bonding on the microtensile bond strength (μTBS) of multimode adhesives to dentin when using the self-etch approach. Materials and Methods: Seventy-five molars were randomly assigned to three adhesive groups ...

  18. Multimodal representations in collaborative history learning

    NARCIS (Netherlands)

    Prangsma, M.E.

    2007-01-01

    This dissertation focuses on the question: How does making and connecting different types of multimodal representations affect the collaborative learning process and the acquisition of a chronological frame of reference in 12 to 14-year olds in pre vocational education? A chronological frame of

  19. Multimodal Dialogue Management - State of the art

    NARCIS (Netherlands)

    Bui Huu Trung, B.H.T.

    This report is about the state of the art in dialogue management. We first introduce an overview of a multimodal dialogue system and its components. Second, four main approaches to dialogue management are described (finite-state and frame-based, information-state based and probabilistic, plan-based,

  20. Multimode waveguide speckle patterns for compressive sensing.

    Science.gov (United States)

    Valley, George C; Sefler, George A; Justin Shaw, T

    2016-06-01

    Compressive sensing (CS) of sparse gigahertz-band RF signals using microwave photonics may achieve better performances with smaller size, weight, and power than electronic CS or conventional Nyquist rate sampling. The critical element in a CS system is the device that produces the CS measurement matrix (MM). We show that passive speckle patterns in multimode waveguides potentially provide excellent MMs for CS. We measure and calculate the MM for a multimode fiber and perform simulations using this MM in a CS system. We show that the speckle MM exhibits the sharp phase transition and coherence properties needed for CS and that these properties are similar to those of a sub-Gaussian MM with the same mean and standard deviation. We calculate the MM for a multimode planar waveguide and find dimensions of the planar guide that give a speckle MM with a performance similar to that of the multimode fiber. The CS simulations show that all measured and calculated speckle MMs exhibit a robust performance with equal amplitude signals that are sparse in time, in frequency, and in wavelets (Haar wavelet transform). The planar waveguide results indicate a path to a microwave photonic integrated circuit for measuring sparse gigahertz-band RF signals using CS.
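
    A toy version of such a simulation, with a dense Gaussian random matrix standing in for a measured speckle measurement matrix (the dimensions and sparsity level are illustrative assumptions, not values from the paper), can be sketched as:

        # Sketch: compressive sensing with a sub-Gaussian surrogate for the
        # speckle MM; a k-sparse signal is recovered from m << n measurements.
        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(0)
        n, m, k = 256, 80, 5  # signal length, measurements, sparsity

        A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))  # stand-in for the speckle MM
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)  # k-sparse signal
        y = A @ x  # compressive measurements

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(A, y)
        x_hat = omp.coef_
        print("support recovered:", set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x)))
        print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))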

  1. Single versus multimodality training basic laparoscopic skills

    NARCIS (Netherlands)

    Brinkman, W.M.; Havermans, S.Y.; Buzink, S.N.; Botden, S.M.B.I.; Jakimowicz, J.J.; Schoot, B.C.

    2012-01-01

    Introduction - Even though literature provides compelling evidence of the value of simulators for training of basic laparoscopic skills, the best way to incorporate them into a surgical curriculum is unclear. This study compares the training outcome of single modality training with multimodality

  2. New two-port multimode interference reflectors

    NARCIS (Netherlands)

    Kleijn, E.; Smit, M.K.; Wale, M.J.; Leijtens, X.J.M.

    2012-01-01

    Multi-mode interference reflectors (MIRs) are versatile components. Two new MIR designs with a fixed 50/50 reflection to transmission ratio are introduced. Measurements on these new devices and on devices similar to those in [1] are presented and compared to the design values. Measured losses are

  3. Scenemash: multimodal route summarization for city exploration

    NARCIS (Netherlands)

    Berg, J. van den; Rudinac, S.; Worring, M.

    2016-01-01

    The potential of mining tourist information from social multimedia data gives rise to new applications offering much richer impressions of the city. In this paper we propose Scenemash, a system that generates multimodal summaries of multiple alternative routes between locations in a city. To get

  4. Naming Block Structures: A Multimodal Approach

    Science.gov (United States)

    Cohen, Lynn; Uhry, Joanna

    2011-01-01

    This study describes symbolic representation in block play in a culturally diverse suburban preschool classroom. Block play is "multimodal" and can allow children to experiment with materials to represent the world in many forms of literacy. Combined qualitative and quantitative data from seventy-seven block structures were collected and analyzed.…

  5. Academic Knowledge Construction and Multimodal Curriculum Development

    Science.gov (United States)

    Loveless, Douglas J., Ed.; Griffith, Bryant, Ed.; Bérci, Margaret E., Ed.; Ortlieb, Evan, Ed.; Sullivan, Pamela, Ed.

    2014-01-01

    While incorporating digital technologies into the classroom has introduced new ways of teaching and learning, it is essential to examine how the digital shift impacts teachers, school administration, and curriculum development. "Academic Knowledge Construction and Multimodal Curriculum Development" presents…

  6. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  7. Designing Interactions for Learning: Physicality, Interactivity, and Interface Effects in Digital Environments

    Science.gov (United States)

    Hoffman, Daniel L.

    2013-01-01

    The purpose of the study is to better understand the role of physicality, interactivity, and interface effects in learning with digital content. Drawing on work in cognitive science, human-computer interaction, and multimedia learning, the study argues that interfaces that promote physical interaction can provide "conceptual leverage"…

  8. Facilitating Multiple Intelligences Through Multimodal Learning Analytics

    Directory of Open Access Journals (Sweden)

    Ayesha PERVEEN

    2018-01-01

    Full Text Available This paper develops a theoretical framework for employing learning analytics in online education to trace the learning variations of online students, treating them as bearers of multiple intelligences in the sense of Howard Gardner's 1983 theory. The study first emphasizes the need for online education systems to support students' multiple intelligences, and then proposes a framework based on an advanced form of learning analytics, i.e., multimodal learning analytics, for tracing and facilitating multiple intelligences while students are engaged in ubiquitous online learning. As multimodal learning analytics is still an evolving area, it poses many challenges for technologists and educationists as well as organizational managers. Learning analytics makes machines meet humans: educationists with expertise in learning theories can help technologists devise up-to-date technological methods for multimodal learning analytics, and organizational managers can implement them to improve online education. A careful instructional design, based on a deep understanding of students' learning abilities, is therefore required to develop teaching plans and technological means for monitoring students' learning paths. Learning analytics can then support an adaptive instructional design grounded in a quick analysis of the data gathered. Based on that analysis, academicians can critically reflect on the immediate or delayed implementation of the existing instructional design in light of students' cognitive abilities, or on single- versus double-loop learning designs. The researcher concludes that online education is multimodal in nature and has the capacity to endorse multiliteracies, and that multiple intelligences can therefore be tracked and facilitated through multimodal learning analytics in an online mode. However, online teachers' training both in technological implementations and

  9. A Conceptual Architecture for Adaptive Human-Computer Interface of a PT Operation Platform Based on Context-Awareness

    Directory of Open Access Journals (Sweden)

    Qing Xue

    2014-01-01

    Full Text Available We present a conceptual architecture for an adaptive human-computer interface of a PT operation platform based on context-awareness. This architecture will form the basis of design for such an interface. This paper describes the components, key technologies, and working principles of the architecture. The critical content covers context-information modeling and processing, establishing relationships between contexts and interface-design knowledge through adaptive knowledge reasoning, and implementing the visualization of the adaptive interface with the aid of interface-tool technology.
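
    The mapping from context to interface-design knowledge can be pictured with a minimal rule-based sketch (the context fields, rules, and adaptations below are hypothetical illustrations, not taken from the paper):

        # Sketch: a context model is matched against rules that select
        # interface adaptations. All fields and rules are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class Context:
            task: str           # e.g. "monitoring", "fault-diagnosis"
            workload: str       # "low" | "high"
            ambient_light: str  # "normal" | "dim"

        # Each rule: (predicate over the context, interface adaptation to apply)
        RULES = [
            (lambda c: c.workload == "high", "suppress low-priority notifications"),
            (lambda c: c.task == "fault-diagnosis", "surface alarm-history panel"),
            (lambda c: c.ambient_light == "dim", "switch to high-contrast theme"),
        ]

        def adapt(context: Context) -> list[str]:
            """Return all adaptations whose predicates match the current context."""
            return [action for predicate, action in RULES if predicate(context)]

        print(adapt(Context(task="fault-diagnosis", workload="high", ambient_light="dim")))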

  10. "Look at what I am saying": Multimodal science teaching

    Science.gov (United States)

    Pozzer-Ardenghi, Lilian

    Language constitutes the dominant representational mode in science teaching, and lectures are still the most prevalent of the teaching methods in school science. In this dissertation, I investigate lectures from a multimodal and communicative perspective to better understand how teaching as a cultural-historical and social activity unfolds; that is, I am concerned with teaching as a communicative event, where a variety of signs (or semiotic resources), expressed in diverse modalities (or modes of communication), are produced and reproduced while the teacher articulates very specific conceptual meanings for the students. Within a trans-disciplinary approach that merges theoretical and methodical frameworks of social and cultural studies of human activity and interaction, communicative and gesture studies, linguistics, semiotics, pragmatics, and studies on teaching and learning science, I investigate teaching as a communicative, dynamic, multimodal, and social activity. My research questions include: What are the resources produced and reproduced in the classroom when the teacher is lecturing? How do these resources interact with each other? What meanings do they carry and how are these associated to achieve the coherence necessary to accomplish the communication of complex and abstract scientific concepts, not only within one lecture, but also within an entire unit of the curriculum encompassing various lectures? My results show that, when lecturing, the communication of scientific concepts occurs along trajectories driven by the dialectical relation among the various semiotic resources a lecturer makes available, which together constitute a unit: the idea. Speech, gestures, and other nonverbal resources are but one-sided expressions of a higher-order communicative meaning unit. The iterable nature of the signs produced and reproduced during science lectures permits, supports, and encourages the repetition, variation, and translation of ideas, themes, and languages and

  11. Historical Overview, Current Status, and Future Trends in Human-Computer Interfaces for Process Control

    International Nuclear Information System (INIS)

    Owre, Fridtjov

    2003-01-01

    Approximately 25 years ago, the first computer-based process control systems, including computer-generated displays, appeared. It is remarkable how slowly the human-computer interfaces (HCIs) of such systems have developed over the years. The display design approach in those early days had its roots in the topology of the process. Usually, the information came from the piping and instrumentation diagrams. Later, some important additional functions were added to the basic system, such as alarm and trend displays. Today, these functions are still the basic ones, and the end-user displays have not changed much except for improved display quality in terms of colors, font types and sizes, resolution, and object shapes, resulting from improved display hardware. Today, there are two schools of display design competing for supremacy in the process control segment of the HCI community. One can be characterized by extension and integration of current practice, while the other is more revolutionary. The extension-of-current-practice approach can be described in terms of added system functionality and integration. This means that important functions for the plant operator - such as signal validation, plant overview information, safety parameter displays, procedures, prediction of future states, and plant performance optimization - are added to the basic functions and integrated in a total unified HCI for the plant operator. The revolutionary approach, however, takes as its starting point the design process itself. The functioning of the plant is described in terms of the plant goals and subgoals, as well as the means available to reach these goals. Then, displays are designed representing this functional structure - in clear contrast to the earlier plant topology representation. Depending on the design approach used, the corresponding displays have various designations, e.g., function-oriented, task-oriented, or ecological displays. This paper gives a historical overview of past

  12. A Novel Feature Optimization for Wearable Human-Computer Interfaces Using Surface Electromyography Sensors

    Directory of Open Access Journals (Sweden)

    Han Sun

    2018-03-01

    Full Text Available The novel human-computer interface (HCI) using bioelectrical signals as input is a valuable tool to improve the lives of people with disabilities. In this paper, surface electromyography (sEMG) signals induced by four classes of wrist movements were acquired from four sites on the lower arm with our designed system. Forty-two features were extracted from the time, frequency and time-frequency domains. Optimal channels were determined from a single-channel classification performance rank. The optimal-feature selection was according to a modified entropy criterion (EC) and a Fisher discrimination (FD) criterion. The feature selection results were evaluated by four different classifiers, and compared with other conventional feature subsets. In online tests, the wearable system acquired real-time sEMG signals. The selected features and trained classifier model were used to control a telecar through four different paradigms in a designed environment with simple obstacles. Performance was evaluated based on travel time (TT) and recognition rate (RR). The results of the hardware evaluation verified the feasibility of our acquisition systems and ensured signal quality. Single-channel analysis results indicated that the channel located on the extensor carpi ulnaris (ECU) performed best, with a mean classification accuracy of 97.45% for all movement pairs. Channels placed on the ECU and the extensor carpi radialis (ECR) were selected according to the accuracy rank. Experimental results showed that the proposed FD method was better than other feature selection methods and single-type features. The combination of FD and random forest (RF) performed best in offline analysis, with 96.77% multi-class RR. Online results illustrated that the state-machine paradigm with a 125 ms window had the highest maneuverability and was closest to real-life control. Subjects could accomplish online sessions by three sEMG-based paradigms, with average times of 46.02, 49.06 and 48.08 s
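
    A simplified sketch of the FD-plus-random-forest pipeline on synthetic stand-in data (the 42-feature count mirrors the paper, but the data, class count, and selection size are placeholders, not the authors' sEMG setup):

        # Sketch: rank features by a Fisher-discriminant score (between-class
        # over within-class variance), then classify with a random forest.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=400, n_features=42, n_informative=10,
                                   n_classes=4, n_clusters_per_class=1, random_state=0)

        def fisher_scores(X, y):
            classes = np.unique(y)
            mean_all = X.mean(axis=0)
            between = sum(len(X[y == c]) * (X[y == c].mean(axis=0) - mean_all) ** 2
                          for c in classes)
            within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
                         for c in classes)
            return between / within

        top = np.argsort(fisher_scores(X, y))[::-1][:12]  # keep the 12 best features
        acc = cross_val_score(RandomForestClassifier(random_state=0), X[:, top], y, cv=5)
        print("CV accuracy with FD-selected features:", acc.mean().round(3))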

  13. Optimal design methods for a digital human-computer interface based on human reliability in a nuclear power plant

    International Nuclear Information System (INIS)

    Jiang, Jianjun; Zhang, Li; Xie, Tian; Wu, Daqing; Li, Min; Wang, Yiqun; Peng, Yuyuan; Peng, Jie; Zhang, Mengjia; Li, Peiyao; Ma, Congmin; Wu, Xing

    2017-01-01

    Highlights: • A complete optimization process is established for digital human-computer interfaces of NPPs. • A quick convergence search method is proposed. • The authors propose an affinity error probability mapping function to test human reliability. - Abstract: This is the second in a series of papers describing the optimal design method for a digital human-computer interface of a nuclear power plant (NPP) from three different perspectives based on human reliability. The purpose of this series is to explore different optimization methods from varying perspectives. The present paper mainly discusses the optimal design method for the quantity of components of the same factor. In the monitoring process, the quantity of components places a heavy burden on operators; thus, human errors are easily triggered. To solve the problem, the authors propose an optimization process, a quick convergence search method, and an affinity error probability mapping function. Two balanceable parameter values of the affinity error probability function are obtained by experiments. The experimental results show that the affinity error probability mapping function for the human-computer interface has very good sensitivity and stability, and that the quick convergence search method for fuzzy segments divided by component quantity performs better than a general algorithm.

  14. Recontextualizing social practices and globalization: Multimodal metaphor and fictional storytelling in printed and internet ads

    OpenAIRE

    Downing, Laura Hidalgo; Mujic, Blanca Kraljevic

    2015-01-01

    This article presents a study of ongoing global and local changing practices by exploring the interaction between multimodal metaphor and narrative in advertising discourse. Thus, we make use of Conceptual Metaphor Theory (CMT) and Conceptual Integration Theory to compare how social changes and continuities are represented and re-contextualized in advertising discourse, across time, genres and cultures. Changes in time and across genres are addressed through the analysis of printed ads from 2...

  15. Mode-selective mapping and control of vectorial nonlinear-optical processes in multimode photonic-crystal fibers.

    Science.gov (United States)

    Hu, Ming-Lie; Wang, Ching-Yue; Song, You-Jian; Li, Yan-Feng; Chai, Lu; Serebryannikov, Evgenii; Zheltikov, Aleksei

    2006-02-06

    We demonstrate an experimental technique that allows a mapping of vectorial nonlinear-optical processes in multimode photonic-crystal fibers (PCFs). Spatial and polarization modes of PCFs are selectively excited in this technique by varying the tilt angle of the input beam and rotating the polarization of the input field. Intensity spectra of the PCF output plotted as a function of the input field power and polarization then yield mode-resolved maps of nonlinear-optical interactions in multimode PCFs, facilitating the analysis and control of nonlinear-optical transformations of ultrashort laser pulses in such fibers.

  16. Computer-aided psychotherapy based on multimodal elicitation, estimation and regulation of emotion.

    Science.gov (United States)

    Cosić, Krešimir; Popović, Siniša; Horvat, Marko; Kukolja, Davor; Dropuljić, Branimir; Kovač, Bernard; Jakovljević, Miro

    2013-09-01

    Contemporary psychiatry is looking to the affective sciences to understand human behavior, cognition and the mind in health and disease. Since it has been recognized that emotions play a pivotal role for the human mind, an ever increasing number of laboratories and research centers are interested in affective sciences, affective neuroscience, affective psychology and affective psychopathology. Therefore, this paper presents multidisciplinary research results of the Laboratory for Interactive Simulation System at the Faculty of Electrical Engineering and Computing, University of Zagreb, on stress resilience. A patient's distortion in emotional processing of multimodal input stimuli is predominantly a consequence of a cognitive deficit resulting from the individual's mental health disorder. These emotional distortions in the patient's multimodal physiological, facial, acoustic, and linguistic features related to the presented stimulation can be used as indicators of the patient's mental illness. Real-time processing and analysis of the patient's multimodal response related to annotated input stimuli is based on appropriate machine learning methods from computer science. Comprehensive longitudinal multimodal analysis of the patient's emotion, mood, feelings, attention, motivation, decision-making, and working memory in synchronization with multimodal stimuli provides an extremely valuable database for data mining, machine learning and machine reasoning. The presented multimedia stimuli sequence includes personalized images, movies and sounds, as well as semantically congruent narratives. Simultaneously with stimuli presentation, the patient provides subjective emotional ratings of the presented stimuli in terms of subjective units of discomfort/distress, discrete emotions, or valence and arousal. These subjective emotional ratings of input stimuli and the corresponding physiological, speech, and facial output features provide enough information for evaluation of the patient's cognitive appraisal deficit.
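
    The machine learning step described can be pictured schematically as feature-level fusion of the multimodal response features against subjective ratings (all arrays below are random placeholders, so the resulting score only illustrates the mechanics, not a real effect):

        # Sketch: concatenate physiological, facial, and acoustic features and
        # regress them against subjective arousal ratings. Placeholder data only.
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n_stimuli = 120
        physio = rng.normal(size=(n_stimuli, 8))   # e.g. heart rate, skin conductance
        facial = rng.normal(size=(n_stimuli, 12))  # e.g. action-unit intensities
        speech = rng.normal(size=(n_stimuli, 10))  # e.g. prosodic statistics
        arousal = rng.normal(size=n_stimuli)       # subjective ratings per stimulus

        fused = np.hstack([physio, facial, speech])  # simple feature-level fusion
        r2 = cross_val_score(Ridge(alpha=1.0), fused, arousal, cv=5, scoring="r2")
        print("cross-validated R^2 (chance-level on random data):", r2.mean().round(3))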

  17. Multimodal mechanisms of food creaminess sensation.

    Science.gov (United States)

    Chen, Jianshe; Eaton, Louise

    2012-12-01

    In this work, the sensory creaminess of a set of four viscosity-matched fluid foods (single cream, evaporated milk, corn starch solution, and corn starch solution containing long chain free fatty acids) was tested by a panel of 16 assessors via controlled sensation mechanisms of smell only, taste only, taste and tactile, and integrated multimodal. It was found that all sensation channels were able to discriminate between creamy and non-creamy foods, but only the multimodal method gave creaminess ratings in agreement with the samples' fat content. Results from this study show that the presence of long chain free fatty acids has no influence on creaminess perception. It is certain that food creaminess is not a primary sensory property but an integrated sensory perception (or sensory experience) derived from combined sensations of visual, olfactory, gustatory, and tactile cues. Creamy colour, milky flavour, and smooth texture are probably the most important sensory features of food creaminess.

  18. Multimodal location estimation of videos and images

    CERN Document Server

    Friedland, Gerald

    2015-01-01

    This book presents an overview of the field of multimodal location estimation, i.e. using acoustic, visual, and/or textual cues to estimate the shown location of a video recording. The authors present sample research results in this field in a unified way, integrating research work on this topic that focuses on different modalities, viewpoints, and applications. The book describes fundamental methods of acoustic, visual, textual, social-graph, and metadata processing, as well as multimodal integration methods used for location estimation. In addition, the text covers benchmark metrics and explores the limits of the technology based on a human baseline. The book discusses localization of multimedia data; examines fundamental methods of establishing location metadata for images and videos (other than GPS tagging); and covers data-driven as well as semantic location estimation.

  19. Multimodality imaging of the postoperative shoulder

    Energy Technology Data Exchange (ETDEWEB)

    Woertler, Klaus [Technische Universitaet Muenchen, Department of Radiology, Munich (Germany)

    2007-12-15

    Multimodality imaging of the postoperative shoulder includes radiography, magnetic resonance (MR) imaging, MR arthrography, computed tomography (CT), CT arthrography, and ultrasound. Target-oriented evaluation of the postoperative shoulder necessitates familiarity with surgical techniques, their typical complications and sources of failure, knowledge of normal and abnormal postoperative findings, awareness of the advantages and weaknesses with the different radiologic techniques, and clinical information on current symptoms and function. This article reviews the most commonly used surgical procedures for treatment of anterior glenohumeral instability, lesions of the labral-bicipital complex, subacromial impingement, and rotator cuff lesions and highlights the significance of imaging findings with a view to detection of recurrent lesions and postoperative complications in a multimodality approach. (orig.)

  20. Semiconductor laser using multimode interference principle

    Science.gov (United States)

    Gong, Zisu; Yin, Rui; Ji, Wei; Wu, Chonghao

    2018-01-01

    A multimode interference (MMI) structure is introduced into a semiconductor laser used in optical communication systems to realize higher power and better temperature tolerance. Using the beam propagation method (BPM), a multimode interference laser diode (MMI-LD) is designed and fabricated in InGaAsP/InP-based material. As a comparison, a conventional semiconductor laser using a straight single-mode waveguide is also fabricated on the same wafer. With a low injection current (about 230 mA), the output power of the implemented MMI-LD is up to 2.296 mW, which is about four times higher than the output power of the conventional semiconductor laser. The implemented MMI-LD exhibits stable output operating at a wavelength of 1.52 μm and better temperature tolerance when the temperature varies from 283.15 K to 293.15 K.

  1. Multimodality therapy of local regional esophageal cancer.

    Science.gov (United States)

    Kelsen, David P

    2005-12-01

    Recent trials regarding the use of multimodality therapy for patients with cancers of the esophagus and gastroesophageal junction have not conclusively shown benefit. Regimens containing cisplatin and fluorouracil administered preoperatively appear to be tolerable and do not increase operative morbidity or mortality when compared with surgery alone. Yet clinical trials have not clearly shown that such regimens improve outcome as measured by survival. Likewise, trials of postoperative chemoradiation have not reported a significant improvement in median or overall survival. The reasons for the lack of clinical benefit from multimodality therapy are not completely understood, but improvements in systemic therapy will probably be necessary before disease-free or overall survival improves substantially. Some new single agents such as the taxanes (docetaxel or paclitaxel) and the camptothecin analog irinotecan have shown modest activity for palliative therapy.

  2. Pattern recognition of neurotransmitters using multimode sensing.

    Science.gov (United States)

    Stefan-van Staden, Raluca-Ioana; Moldoveanu, Iuliana; van Staden, Jacobus Frederick

    2014-05-30

    Pattern recognition is essential in the chemical analysis of biological fluids, and reliable, sensitive methods for neurotransmitter analysis are needed. We therefore developed a method based on multimode sensing for pattern recognition of the neurotransmitters dopamine, epinephrine, and norepinephrine. Multimode sensing was performed using microsensors based on diamond paste modified with 5,10,15,20-tetraphenyl-21H,23H-porphyrine, hemin and protoporphyrin IX, operated in stochastic and differential pulse voltammetry modes. Optimized working conditions (phosphate buffer solution of pH 3.01 and 0.1 mol/L KCl as supporting electrolyte) were determined using cyclic voltammetry and used in all measurements. The lowest limits of quantification were 10^(-10) mol/L for dopamine and epinephrine, and 10^(-11) mol/L for norepinephrine. The multimode microsensors were selective over ascorbic and uric acids, and the method facilitated reliable assay of neurotransmitters in urine samples; the pattern recognition showed high reliability (low RSD values), permitting the assay of neurotransmitters in biological fluids at a lower determination level than chromatographic methods. Sampling of the biological fluids requires only buffering (1:1, v/v) with a phosphate buffer of pH 3.01, while for chromatographic methods the sampling is laborious. According to the statistical evaluation of the results at the 99.00% confidence level, both modes can be used for pattern recognition and quantification of neurotransmitters with high reliability. The best multimode microsensor was the one based on diamond paste modified with protoporphyrin IX. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. The multimodal treatment of eating disorders

    OpenAIRE

    HALMI, KATHERINE A.

    2005-01-01

    The treatment of eating disorders is based on a multimodal model, recognizing that these disorders do not have a single cause or a predictable course. The treatment strategy is determined by the severity of illness and the specific eating disorder diagnosis. For the treatment of anorexia nervosa, the key elements are medical management, behavioral therapy, cognitive therapy and family therapy, while pharmacotherapy is at best an adjunct to other therapies. In bulimia nervosa...

  4. A Multimodal Robot Game for Seniors

    DEFF Research Database (Denmark)

    Hansen, Søren Tranberg; Krogsager, Anders; Fredslund, Jakob

    2017-01-01

    This paper describes the initial findings of a multimodal game which has been implemented on a humanoid robot platform and tested with seniors suffering from dementia. Physical and cognitive activities can improve the overall wellbeing of seniors, but it is often difficult to motivate seniors...... feedback and includes animated gestures and sounds. The game has been tested in a nursing home with four seniors suffering from moderate to severe dementia....

  5. Speech Perception as a Multimodal Phenomenon

    OpenAIRE

    Rosenblum, Lawrence D.

    2008-01-01

    Speech perception is inherently multimodal. Visual speech (lip-reading) information is used by all perceivers and readily integrates with auditory speech. Imaging research suggests that the brain treats auditory and visual speech similarly. These findings have led some researchers to consider that speech perception works by extracting amodal information that takes the same form across modalities. From this perspective, speech integration is a property of the input information itself. Amodal s...

  6. Digital Storytelling and Multimodal Literacy in Education

    OpenAIRE

    Gregori-Signes, Carmen

    2014-01-01

    This article argues in favour of using digital storytelling to encourage a critical socio-educational focus in education and to include multimodal explicit teaching in the curriculum. The analysis of fifty digital stories indicates that the students developed a certain awareness of the issue chosen for their story (e.g. violence, racism, war) since the final product transmits a critical perspective on the topic itself. Further work, however, needs to be invested in the development of the digital ...

  7. Multimodal treatment for unresectable pancreatic cancer

    International Nuclear Information System (INIS)

    Katayama, Kanji; Iida, Atsushi; Fujita, Takashi; Kobayashi, Taizo; Shinmoto, Syuichi; Hirose, Kazuo; Yamaguchi, Akio; Yoshida, Masanori

    1998-01-01

    In order to improve prognosis and quality of life (QOL), multimodal treatment was performed for unresectable pancreatic cancers. Bypass surgery with intraoperative irradiation (IOR) was carried out for unresectable pancreatic cancer. After surgery, patients were treated with a combination of CDDP (25 mg) and MMC (4 mg) administration, continuous intravenous injection of 5-FU (250 mg over 24 hours), external radiation with high-voltage X-rays (1.5 Gy per irradiation, 4 times a week, and 3 Gy per irradiation during hyperthermia), and hyperthermia using the Thermotron RF-8 warmer. Six out of 13 patients who received hyperthermia at over 40°C obtained PR; their survival periods were 22, 21, 19, 18, 11 and 8 months, and they could return to work. For all patients with pain, the symptom was abolished or reduced. The survival periods in cases given the multimodal treatment were longer than those of bypass surgery alone or of resected cases with curability C. The multimodal treatment combining radiation, hyperthermia and surgery is more useful than pancreatectomy for the removal of pain and the improvement of QOL, and can also be expected to improve prognosis. Hyperthermia plays an important role in the effect of this treatment. (K.H.)

  9. Optimal Face-Iris Multimodal Fusion Scheme

    Directory of Open Access Journals (Sweden)

    Omid Sharifi

    2016-06-01

    Full Text Available Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score level, feature level and decision level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, the CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, the Backtracking Search Algorithm (BSA), a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of the schemes by reducing the number of features and selecting the optimized weights for feature level and score level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of the proposed fusion schemes over unimodal and other multimodal fusion methods.
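
    A minimal sketch of weighted score-level fusion (the scores, weights, and acceptance threshold below are illustrative; in the paper the weights are selected by the BSA optimizer):

        # Sketch: min-max normalise each matcher's scores, then combine them
        # by weighted sum. All numbers here are made-up illustrations.
        import numpy as np

        def fuse_scores(face_scores, iris_scores, w_face=0.6, w_iris=0.4):
            def minmax(s):
                s = np.asarray(s, dtype=float)
                return (s - s.min()) / (s.max() - s.min())
            return w_face * minmax(face_scores) + w_iris * minmax(iris_scores)

        face = [0.62, 0.15, 0.88, 0.40]  # similarity scores for four probes
        iris = [0.70, 0.22, 0.95, 0.10]

        fused = fuse_scores(face, iris)
        print("accepted:", fused >= 0.5)  # threshold chosen for illustration only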

  10. A novel balance training system using multimodal biofeedback.

    Science.gov (United States)

    Afzal, Muhammad Raheel; Oh, Min-Kyun; Choi, Hye Young; Yoon, Jungwon

    2016-04-22

    A biofeedback-based balance training system can be used to provide compromised sensory information to subjects in order to retrain their sensorimotor function. In this study, the design and evaluation of the low-cost, intuitive biofeedback system developed at Gyeongsang National University is extended to provide multimodal biofeedback for balance training by utilizing the visual and haptic modalities. The system consists of a smartphone attached to the waist of the subject to provide information about torso tilt, a personal computer running purpose-built software to process the smartphone data and provide visual biofeedback to the subject by means of a dedicated monitor, and a dedicated Phantom Omni(®) device for haptic biofeedback. For experimental verification of the system, eleven healthy young participants performed balance tasks assuming two distinct postures for 30 s each while torso tilt was acquired. The postures used were the one-foot stance and the tandem Romberg stance. For both postures, the subjects stood on a foam platform which provided a certain amount of ground instability. Post-experiment data analysis was performed using MATLAB(®) to analyze the reduction in body sway. Analysis parameters based on the projection of trunk tilt information were calculated in order to ascertain the reduction in body sway and improvements in postural control. Two-way analysis of variance (ANOVA) showed no statistically significant interactions between postures and biofeedback. Post-hoc analysis revealed a statistically significant reduction in body sway on provision of biofeedback. Subjects exhibited maximum body sway during the no-biofeedback trial, followed by either haptic or visual biofeedback; in most of the trials the combined visual and haptic biofeedback minimized body sway, indicating that the multimodal biofeedback worked well to provide a significant reduction in sway. The multimodal biofeedback system can offer more customized training
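
    Standard post-experiment sway measures can be sketched from a torso-tilt trace as follows (the study's own parameters are projections of trunk tilt; the RMS and 95% ellipse formulas below are common textbook choices, not necessarily the authors'):

        # Sketch: RMS sway and 95% confidence-ellipse area from two tilt axes.
        # The synthetic trace below stands in for smartphone tilt data.
        import numpy as np

        def sway_metrics(tilt_ml, tilt_ap):
            """tilt_ml / tilt_ap: mediolateral and anteroposterior tilt samples."""
            ml = np.asarray(tilt_ml) - np.mean(tilt_ml)
            ap = np.asarray(tilt_ap) - np.mean(tilt_ap)
            rms = np.sqrt(np.mean(ml**2 + ap**2))
            cov = np.cov(ml, ap)                      # 2x2 covariance of the axes
            eigvals = np.linalg.eigvalsh(cov)
            area = np.pi * 5.991 * np.sqrt(np.prod(eigvals))  # chi^2(2), p = 0.95
            return rms, area

        t = np.linspace(0, 30, 1500)                  # 30 s at 50 Hz (illustrative)
        rms, area = sway_metrics(np.sin(t) * 0.5, np.cos(1.3 * t) * 0.3)
        print(f"RMS sway: {rms:.3f}, 95% ellipse area: {area:.3f}")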

  11. Multi-modal Interfacing for Human-Robot Interaction

    National Research Council Canada - National Science Library

    Perzanowski, Dennis; Schultz, Alan; Adams, William; Bugajska, Magda; March, Elaine

    2001-01-01

    ...., predicates in the stack that have not been completed. 3. "Locative" expressions, e.g. "there," give us a kind of handle in command and control applications to attempt error correction when locative goals are being discussed. 4...

  12. The Danish NOMCO Corpus: Multimodal Interaction in First Acquaintance Conversations

    DEFF Research Database (Denmark)

    Paggio, Patrizia; Navarretta, Costanza

    2016-01-01

    , specifically head movements, facial expressions, and body posture. The corpus has served as the empirical basis for a number of studies of communication phenomena related to turn management, feedback exchange, information packaging and the expression of emotional attitudes. We describe the annotation scheme...

  13. Effects of arginine on multimodal anion exchange chromatography.

    Science.gov (United States)

    Hirano, Atsushi; Arakawa, Tsutomu; Kameda, Tomoshi

    2015-12-01

    The effects of arginine on binding and elution properties of a multimodal anion exchanger, Capto adhere, were examined using bovine serum albumin (BSA) and a monoclonal antibody against interleukin-8 (mAb-IL8). Negatively charged BSA was bound to the positively charged Capto adhere and was readily eluted from the column with a stepwise or gradient elution using 1M NaCl at pH 7.0. For heat-treated BSA, small oligomers and remaining monomers were also eluted using a NaCl gradient, whereas larger oligomers required arginine for effective elution. The positively charged mAb-IL8 was bound to Capto adhere at pH 7.0. Arginine was also more effective for elution of the bound mAb-IL8 than was NaCl. The results imply that arginine interacts with the positively charged Capto adhere. The mechanism underlying the interactions of arginine with Capto adhere was examined by calculating the binding free energy between an arginine molecule and a Capto adhere ligand in water through molecular dynamics simulations. The overall affinity of arginine for Capto adhere is attributed to the hydrophobic and π-π interactions between an arginine side chain and the aromatic moiety of the ligand as well as hydrogen bonding between arginine and the ligand hydroxyl group, which may account for the characteristics of protein elution using arginine. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. A multimodal interface for real-time soldier-robot teaming

    Science.gov (United States)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools to robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smart-phones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g. response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.
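
    The redundancy that MMC provides can be pictured with a toy fusion step over two recognizers' outputs (the labels, confidences, and agreement bonus are hypothetical, not from the prototype described):

        # Sketch: each recogniser emits (label, confidence); agreement between
        # channels boosts confidence, otherwise the more confident channel wins.
        def fuse_command(speech, gesture, agree_bonus=0.15):
            s_label, s_conf = speech
            g_label, g_conf = gesture
            if s_label == g_label:  # redundant confirmation across modalities
                return s_label, min(1.0, max(s_conf, g_conf) + agree_bonus)
            return (s_label, s_conf) if s_conf >= g_conf else (g_label, g_conf)

        print(fuse_command(("move_forward", 0.78), ("move_forward", 0.64)))
        print(fuse_command(("halt", 0.55), ("turn_left", 0.81)))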

  15. Releasing the constraints on aphasia therapy: the positive impact of gesture and multimodality treatments.

    Science.gov (United States)

    Rose, Miranda L

    2013-05-01

    There is a 40-year history of interest in the use of arm and hand gestures in treatments that target the reduction of aphasic linguistic impairment and compensatory methods of communication (Rose, 2006). Arguments for constraining aphasia treatment to the verbal modality have arisen from proponents of constraint-induced aphasia therapy (Pulvermüller et al., 2001). Confusion exists concerning the role of nonverbal treatments in treating people with aphasia. The central argument of this paper is that given the state of the empirical evidence and the strong theoretical accounts of modality interactions in human communication, gesture-based and multimodality aphasia treatments are at least as legitimate an option as constraint-based aphasia treatment. Theoretical accounts of modality interactions in human communication and the gesture production abilities of individuals with aphasia that are harnessed in treatments are reviewed. The negative effects on word retrieval of restricting gesture production are also reviewed, and an overview of the neurological architecture subserving language processing is provided as rationale for multimodality treatments. The evidence for constrained and unconstrained treatments is critically reviewed. Together, these data suggest that constraint treatments and multimodality treatments are equally efficacious, and there is limited support for constraining client responses to the spoken modality.

  16. Language and Identity in Multimodal Text: Case Study of Thailand’s Bank Pamphlet

    Directory of Open Access Journals (Sweden)

    Korapat Pruekchaikul

    2017-12-01

    Full Text Available With the main objective of presenting a linguistic model for the analysis of identity construction in multimodal texts, particularly in advertising, this article attempts to integrate three theoretical frameworks, namely the types of discourse of Socio-Discursive Interactionism, Greimas' actantial roles, and the symbolic processes of the Grammar of Visual Design proposed by Kress and van Leeuwen. The first two theories are used to analyze verbal language forms whereas the third is exclusively for images in advertising. The data sample is a Thai bank pamphlet of Siam Commercial Bank, collected in Bangkok, Thailand, in June 2015. According to the data analysis, the theoretical frameworks employed here show that identity, a psychological product, exists in the human mind and can be indexed by language in interaction. The analysis also found that identity can be projected multimodally, through language manifestations whose forms are not only verbal but also pictorial.

  17. How multimodality shapes creative choice in dance

    Directory of Open Access Journals (Sweden)

    Muntanyola, Dafne

    2014-12-01

    Full Text Available Creative choice is an individual act. As in other fields such as filmmaking, dance creation is based on a cognitive dualism that considers the choreographer as the creative decision-maker, while the dancer is objectified. The dancer's body is an instrument for exploration of the choreographer's imagery. We claim that the products of creativity are minute but crucial modifications of transitory stages of a dance rehearsal. On the one hand, attention is given to a dance company as a distributed cognitive system. The choreographer communicates in diverse modalities, which carry specific information, physical as well as symbolic. Through the analysis of an audiovisual and cognitive ethnography with ELAN software, we find differences in decision-making patterns across multimodal instructions. On the other hand, we apply Social Network Analysis and UCINET software as a methodological innovation in order to formalize data from observed rehearsal settings. In all, the choice of modalities in the choreographic instruction shapes movement production, which is based on dyads, triads and other forms of creative interaction.

  18. Human Computer Confluence in Rehabilitation: Digital Media Plasticity and Human Performance Plasticity

    DEFF Research Database (Denmark)

    Brooks, Anthony Lewis

    2013-01-01

    Digital media plasticity evocative of embodied interaction is presented as a utilitarian tool when mixed and matched to target human performance potentials specific to nuances of development for those with impairment. A distinct intervention strategy trains via alternative channeling of external s

  19. Human Computing in the Life Sciences: What does the future hold?

    NARCIS (Netherlands)

    Fikkert, F.W.

    2007-01-01

    In future computing environments you will be surrounded and supported by all kinds of technologies. A key characteristic is that you can interact with them in a natural way: you can speak to, point at, or even frown at some piece of presented information, and the environment understands your intent.

  20. Developing multimodal conversational agents for an enhanced e-learning experience

    Directory of Open Access Journals (Sweden)

    David GRIOL

    2014-10-01

    Full Text Available Conversational agents have become a strong alternative for enhancing educational systems with intelligent communicative capabilities: they provide motivation and engagement, increase meaningful learning, and help in the acquisition of meta-cognitive skills. In this paper, we present Geranium, a multimodal conversational agent that helps children to appreciate and protect their environment. The system, which integrates an interactive chatbot, has been developed by means of a modular and scalable framework that eases building pedagogic conversational agents that can interact with the students using speech and natural language.
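
    A minimal sketch of such a modular pipeline (the stage functions and toy rules below are hypothetical stand-ins for the framework's actual modules):

        # Sketch: interchangeable input, dialogue-management, and output stages.
        # The intents and replies are toy illustrations, not Geranium's content.
        from typing import Callable

        def recognize(text: str) -> dict:
            """Stand-in for ASR + natural-language understanding."""
            intent = "ask_recycling" if "recycle" in text.lower() else "unknown"
            return {"intent": intent, "utterance": text}

        def manage(state: dict) -> str:
            """Stand-in dialogue manager: map intent to a pedagogic response."""
            replies = {"ask_recycling": "Great question! Glass goes in the green bin.",
                       "unknown": "Can you tell me more about what you want to learn?"}
            return replies[state["intent"]]

        def respond(reply: str) -> None:
            """Stand-in for multimodal output (speech synthesis, avatar, text)."""
            print("AGENT:", reply)

        pipeline: list[Callable] = [recognize, manage, respond]
        data = "How do I recycle a bottle?"
        for stage in pipeline:
            data = stage(data)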

  1. Interference of Multi-Mode Gaussian States and "non Appearance" of Quantum Correlations

    Science.gov (United States)

    Olivares, Stefano

    2012-01-01

    We theoretically investigate bilinear, mode-mixing interactions involving two modes of uncorrelated multi-mode Gaussian states. In particular, we introduce the notion of "locally the same states" (LSS) and prove that two uncorrelated LSS modes are invariant under the mode mixing, i.e. the interaction does not lead to the birth of correlations between the outgoing modes. We also study the interference of orthogonally polarized Gaussian states by means of an interferometric scheme based on a beam splitter, rotators of polarization and polarization filters.
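
    In the standard beam-splitter convention (textbook notation and parameterisation, not necessarily the paper's), the bilinear mode mixing acts on the annihilation operators of the two interfering modes as

        \hat{a}_{\mathrm{out}} = \cos\theta\,\hat{a}_{\mathrm{in}} + e^{i\phi}\sin\theta\,\hat{b}_{\mathrm{in}}, \qquad
        \hat{b}_{\mathrm{out}} = -e^{-i\phi}\sin\theta\,\hat{a}_{\mathrm{in}} + \cos\theta\,\hat{b}_{\mathrm{in}}.

    The LSS result then states that if the two input modes are uncorrelated and locally the same, this transformation leaves the outgoing modes uncorrelated as well.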

  2. Connecting multimodality in human communication

    Directory of Open Access Journals (Sweden)

    Christina Regenbogen

    2013-11-01

    Full Text Available A successful reciprocal evaluation of social signals serves as a prerequisite for social coherence and empathy. In a previous fMRI study we studied naturalistic communication situations by presenting video clips to our participants and recording their behavioral responses regarding empathy and its components. In two conditions, all three channels transported congruent emotional or neutral information, respectively. Three conditions selectively presented two emotional channels and one neutral channel and were thus bimodally emotional. We reported channel-specific emotional contributions in modality-related areas, elicited by dynamic video clips with varying combinations of emotionality in facial expressions, prosody, and speech content. However, to better understand the underlying mechanisms accompanying a naturalistically displayed human social interaction in some key regions that presumably serve as specific processing hubs for facial expressions, prosody, and speech content, we pursued a reanalysis of the data. In addition to the specificity of these regions to information channels, we demonstrated that they were also sensitive to variations in the respective emotional content. Here, we focused on two different descriptions of temporal characteristics within these three modality-related regions (right fusiform gyrus (FFG), left auditory cortex (AC), left angular gyrus (AG)) and the left dorsomedial prefrontal cortex (dmPFC). By means of a finite impulse response (FIR) analysis within each of the three regions we examined the post-stimulus time-courses as a description of the temporal characteristics of the BOLD response during the video clips. Second, effective connectivity between these areas and the left dmPFC was analyzed using dynamic causal modeling (DCM) in order to describe condition-related modulatory influences on the coupling between these regions. The FIR analysis showed initially diminished activation in bimodally emotional conditions but

  3. The semiotics of typography in literary texts. A multimodal approach

    DEFF Research Database (Denmark)

    Nørgaard, Nina

    2009-01-01

    to multimodal discourse proposed, for instance, by Kress & Van Leeuwen (2001) and Baldry & Thibault (2006), and, more specifically, the multimodal approach to typography suggested by Van Leeuwen (2005b; 2006), in order to sketch out a methodological framework applicable to the description and analysis...... of the semiotic potential of typography in literary texts....

  4. Validation of a multimodal travel simulator with travel information provision

    NARCIS (Netherlands)

    Chorus, C.G.; Molin, E.J.E.; Arentze, T.A.; Hoogendoorn, S.P.; Timmermans, H.J.P.; Wee, van G.P.

    2007-01-01

    This paper presents a computer-based travel simulator for collecting data concerning the use of next-generation ATIS and their effects on traveler decision making in a multimodal travel environment. The tool distinguishes itself by presenting a completely abstract multimodal transport network, where

  5. A Multimodal Discourse Analysis of Tmall's Double Eleven Advertisement

    Science.gov (United States)

    Hu, Chunyu; Luo, Mengxi

    2016-01-01

    Since the 1990s, the multimodal turn in discourse studies has made multimodal discourse analysis a popular topic in linguistics and communication studies. An important approach to applying Systemic Functional Linguistics to non-verbal modes is Visual Grammar, initially proposed by Kress and van Leeuwen (1996). Considering that commercial advertisement…

  6. Deterministic multimode photonic device for quantum-information processing

    DEFF Research Database (Denmark)

    Nielsen, Anne E. B.; Mølmer, Klaus

    2010-01-01

    We propose the implementation of a light source that can deterministically generate a rich variety of multimode quantum states. The desired states are encoded in the collective population of different ground hyperfine states of an atomic ensemble and converted to multimode photonic states by exci...

  7. Cultural Shifts, Multimodal Representations, and Assessment Practices: A Case Study

    Science.gov (United States)

    Curwood, Jen Scott

    2012-01-01

    Multimodal texts involve the presence, absence, and co-occurrence of alphabetic text with visual, audio, tactile, gestural, and spatial representations. This article explores how teachers' evaluation of students' multimodal work can be understood in terms of cognition and culture. When teachers apply a paradigm of assessment rooted in print-based…

  8. Multimodal versus Unimodal Instruction in a Complex Learning Context.

    Science.gov (United States)

    Gellevij, Mark; van der Meij, Hans; de Jong, Ton; Pieters, Jules

    2002-01-01

    Compared multimodal instruction with text and pictures with unimodal text-only instruction as 44 college students used a visual or textual manual to learn a complex software application. Results initially support dual coding theory and indicate that multimodal instruction led to better performance than unimodal instruction. (SLD)

  9. Multimodal warnings to enhance risk communication and safety

    NARCIS (Netherlands)

    Haas, E.C.; Erp, J.B.F. van

    2014-01-01

    Multimodal warnings incorporate audio and/or skin-based (tactile) cues to supplement or replace visual cues in environments where the user’s visual perception is busy, impaired, or nonexistent. This paper describes characteristics of audio, tactile, and multimodal warning displays and their role in

  10. Player/Avatar Body Relations in Multimodal Augmented Reality Games

    NARCIS (Netherlands)

    Rosa, N.E.

    2016-01-01

    Augmented reality research is finally moving towards multimodal experiences: more and more applications do not only include visuals, but also audio and even haptics. The purpose of multimodality in these applications can be to increase realism or to increase the amount or quality of communicated

  11. Multimodal network design for sustainable household plastic recycling

    NARCIS (Netherlands)

    Bing, Xiaoyun; Groot, J.J.; Bloemhof, J.M.; Vorst, van der J.G.A.J.

    2013-01-01

    Purpose – This research studies a plastic recycling system from a reverse logistics angle and investigates the potential benefits of a multimodality strategy to the network design of plastic recycling. This research aims to quantify the impact of multimodality on the network, to provide decision

  12. A Multimodal Database for Affect Recognition and Implicit Tagging

    NARCIS (Netherlands)

    Soleymani, Mohammad; Lichtenauer, Jeroen; Pun, Thierry; Pantic, Maja

    MAHNOB-HCI is a multimodal database recorded in response to affective stimuli with the goal of emotion recognition and implicit tagging research. A multimodal setup was arranged for synchronized recording of face videos, audio signals, eye gaze data, and peripheral/central nervous system

  13. The Big Five: Addressing Recurrent Multimodal Learning Data Challenges

    NARCIS (Netherlands)

    Di Mitri, Daniele; Schneider, Jan; Specht, Marcus; Drachsler, Hendrik

    2018-01-01

    The analysis of multimodal data in learning is a growing field of research, which has led to the development of different analytics solutions. However, there is no standardised approach to handle multimodal data. In this paper, we describe and outline a solution for five recurrent challenges in

  14. Composition at Washington State University: Building a Multimodal Bricolage

    Science.gov (United States)

    Ericsson, Patricia; Hunter, Leeann Downing; Macklin, Tialitha Michelle; Edwards, Elizabeth Sue

    2016-01-01

    Multimodal pedagogy is increasingly accepted among composition scholars. However, putting such pedagogy into practice presents significant challenges. In this profile of Washington State University's first-year composition program, we suggest a multi-vocal and multi-theoretical approach to addressing the challenges of multimodal pedagogy. Patricia…

  15. Multimodal Scaffolding in the Secondary English Classroom Curriculum

    Science.gov (United States)

    Boche, Benjamin; Henning, Megan

    2015-01-01

    This article examines the topic of multimodal scaffolding in the secondary English classroom curriculum through the viewpoint of one teacher's experiences. With technology becoming more commonplace and readily available in the English classroom, we must pinpoint specific and tangible ways to help teachers use and teach multimodalities in their…

  16. MULTIMODAL ANALGESIA AFTER TOTAL HIP ARTHROPLASTY

    Directory of Open Access Journals (Sweden)

    I. G. Mukutsa

    2012-01-01

    Full Text Available Purpose - to assess the effect of multimodal analgesia on the early rehabilitation of patients after hip replacement. Materials and methods. A prospective single-centre randomized study, which included 32 patients. Patients of the 1st group received paracetamol, ketorolac and tramadol; patients of the 2nd group, ketorolac intravenously; and patients of the 3rd group, etoricoxib and gabapentin. Patients of the 2nd and 3rd groups underwent epidural analgesia with ropivacaine. Multimodal analgesia was carried out for 48 hours after the surgery. Pain intensity was assessed by the VAS (visual analogue scale) and the neuropathic pain component by the DN4 questionnaire. Times of the first and second verticalization of patients were recorded, using distance walkers and measuring the distance covered within 2 minutes. Results. Pain intensity of more than 50 mm on the VAS at movement, at least once within 48 hours after the surgery, occurred in 9% of patients of the 1st group, 22% of the 2nd group and 8% of the 3rd group. The number of patients with a neuropathic pain component decreased from 25% to 3% (p ≤ 0.05). The first verticalization was performed 10 ± 8 hours after the surgery, the second 21 ± 8 hours later. Two-minute walk distance was 5 ± 3 and 8 ± 4 m, respectively. More frequent adverse events were noted in patients of the 1st group compared to patients of the 2nd and 3rd groups during the first (91%, 33% and 25%, p ≤ 0.05) and the second verticalization (70%, 25% and 17%, p ≤ 0.05). Multimodal analgesia allows successful activation of patients after hip replacement to proceed within the first day after the surgery. Patients of the 3rd group showed a tendency toward the optimal combination of efficacy and safety of analgesic therapy.

  17. New developments in multimodal clinical multiphoton tomography

    Science.gov (United States)

    König, Karsten

    2011-03-01

    80 years ago, the PhD student Maria Goeppert predicted two-photon effects in her thesis in Goettingen, Germany. It took 30 years to prove her theory, and another three decades to realize the first two-photon microscope. With the beginning of this millennium, the first clinical multiphoton tomographs started operation in research institutions, hospitals, and in the cosmetic industry. The multiphoton tomograph MPTflexTM with its miniaturized flexible scan head won the 2010 Prism Award in the category Life Sciences. Multiphoton tomographs, with their superior submicron spatial resolution, can be upgraded to 5D imaging tools by adding spectral time-correlated single photon counting units. Furthermore, multimodal hybrid tomographs provide chemical fingerprinting and fast wide-field imaging. The world's first clinical CARS studies were performed with a hybrid multimodal multiphoton tomograph in spring 2010. In particular, nonfluorescent lipids and water, as well as fluorescent mitochondrial NAD(P)H, fluorescent elastin, keratin, and melanin, and SHG-active collagen have been imaged in patients with dermatological disorders. Further multimodal approaches include the combination of multiphoton tomographs with low-resolution imaging tools such as ultrasound, optoacoustic, OCT, and dermoscopy systems. Multiphoton tomographs are currently employed in Australia, Japan, the US, and in several European countries for early diagnosis of skin cancer (malignant melanoma), optimization of treatment strategies (wound healing, dermatitis), and cosmetic research, including long-term biosafety tests of ZnO sunscreen nanoparticles and the measurement of the stimulated biosynthesis of collagen by anti-ageing products.

  18. Invariant measures on multimode quantum Gaussian states

    Science.gov (United States)

    Lupo, C.; Mancini, S.; De Pasquale, A.; Facchi, P.; Florio, G.; Pascazio, S.

    2012-12-01

    We derive the invariant measure on the manifold of multimode quantum Gaussian states, induced by the Haar measure on the group of Gaussian unitary transformations. To this end, by introducing a bipartition of the system in two disjoint subsystems, we use a parameterization highlighting the role of nonlocal degrees of freedom—the symplectic eigenvalues—which characterize quantum entanglement across the given bipartition. A finite measure is then obtained by imposing a physically motivated energy constraint. By averaging over the local degrees of freedom we finally derive the invariant distribution of the symplectic eigenvalues in some cases of particular interest for applications in quantum optics and quantum information.
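
    As background for the parameterization described above, the Williamson normal form separates local from nonlocal degrees of freedom (a standard result, stated here in units where the vacuum has \nu = 1; not quoted from the paper):

```latex
% Williamson normal form of an N-mode covariance matrix \sigma:
% S is symplectic, and the symplectic eigenvalues \nu_j carry the
% nonlocal (entanglement-related) degrees of freedom.
\sigma = S \, D \, S^{\mathsf{T}},
\qquad
D = \bigoplus_{j=1}^{N} \nu_j I_2,
\qquad
\nu_j \ge 1 .
```

    Roughly speaking, integrating the Haar-induced measure over the factor S leaves a marginal distribution over the \nu_j alone, which is the invariant distribution of symplectic eigenvalues the abstract refers to.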

  19. Invariant measures on multimode quantum Gaussian states

    International Nuclear Information System (INIS)

    Lupo, C.; Mancini, S.; De Pasquale, A.; Facchi, P.; Florio, G.; Pascazio, S.

    2012-01-01

    We derive the invariant measure on the manifold of multimode quantum Gaussian states, induced by the Haar measure on the group of Gaussian unitary transformations. To this end, by introducing a bipartition of the system in two disjoint subsystems, we use a parameterization highlighting the role of nonlocal degrees of freedom—the symplectic eigenvalues—which characterize quantum entanglement across the given bipartition. A finite measure is then obtained by imposing a physically motivated energy constraint. By averaging over the local degrees of freedom we finally derive the invariant distribution of the symplectic eigenvalues in some cases of particular interest for applications in quantum optics and quantum information.

  20. Invariant measures on multimode quantum Gaussian states

    Energy Technology Data Exchange (ETDEWEB)

    Lupo, C. [School of Science and Technology, Università di Camerino, I-62032 Camerino (Italy); Mancini, S. [School of Science and Technology, Università di Camerino, I-62032 Camerino (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Perugia, I-06123 Perugia (Italy); De Pasquale, A. [NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, I-56126 Pisa (Italy); Facchi, P. [Dipartimento di Matematica and MECENAS, Università di Bari, I-70125 Bari (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Florio, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Museo Storico della Fisica e Centro Studi e Ricerche Enrico Fermi, Piazza del Viminale 1, I-00184 Roma (Italy); Dipartimento di Fisica and MECENAS, Università di Bari, I-70126 Bari (Italy); Pascazio, S. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Dipartimento di Fisica and MECENAS, Università di Bari, I-70126 Bari (Italy)

    2012-12-15

    We derive the invariant measure on the manifold of multimode quantum Gaussian states, induced by the Haar measure on the group of Gaussian unitary transformations. To this end, by introducing a bipartition of the system in two disjoint subsystems, we use a parameterization highlighting the role of nonlocal degrees of freedom—the symplectic eigenvalues—which characterize quantum entanglement across the given bipartition. A finite measure is then obtained by imposing a physically motivated energy constraint. By averaging over the local degrees of freedom we finally derive the invariant distribution of the symplectic eigenvalues in some cases of particular interest for applications in quantum optics and quantum information.

  1. Sensory Substitution and Multimodal Mental Imagery.

    Science.gov (United States)

    Nanay, Bence

    2017-09-01

    Many philosophers use findings about sensory substitution devices in the grand debate about how we should individuate the senses. The big question is this: Is "vision" assisted by (tactile) sensory substitution really vision? Or is it tactile perception? Or some sui generis novel form of perception? My claim is that sensory substitution assisted "vision" is neither vision nor tactile perception, because it is not perception at all. It is mental imagery: visual mental imagery triggered by tactile sensory stimulation. But it is a special form of mental imagery that is triggered by corresponding sensory stimulation in a different sense modality, which I call "multimodal mental imagery."

  2. Multimodal Landscaping in Higher Education Research

    DEFF Research Database (Denmark)

    Lueg, Klarissa

    2018-01-01

    The purpose of this paper is to introduce Multimodal Landscaping (ML) as a conceptual framework, and to illustrate how this approach can be applied within the field of higher education research. It is argued that ML is a suitable tool, especially in studies investigating university internationalization, and in studies focusing on the agent level of higher education organizations. ML is argued to add to the diversity of methods within a social constructivist methodology. The author illustrates how ML is connected to and/or different from kindred approaches. Pathways are proposed as to how ...

  3. Multimodality Imaging of Heart Valve Disease

    International Nuclear Information System (INIS)

    Rajani, Ronak; Khattar, Rajdeep; Chiribiri, Amedeo; Victor, Kelly; Chambers, John

    2014-01-01

    Unidentified heart valve disease is associated with a significant morbidity and mortality. It has therefore become important to accurately identify, assess and monitor patients with this condition in order that appropriate and timely intervention can occur. Although echocardiography has emerged as the predominant imaging modality for this purpose, recent advances in cardiac magnetic resonance and cardiac computed tomography indicate that they may have an important contribution to make. The current review describes the assessment of regurgitant and stenotic heart valves by multimodality imaging (echocardiography, cardiac computed tomography and cardiac magnetic resonance) and discusses their relative strengths and weaknesses.

  4. Multimodality Imaging of Heart Valve Disease

    Energy Technology Data Exchange (ETDEWEB)

    Rajani, Ronak, E-mail: Dr.R.Rajani@gmail.com [Department of Cardiology, St. Thomas’ Hospital, London (United Kingdom); Khattar, Rajdeep [Department of Cardiology, Royal Brompton Hospital, London (United Kingdom); Chiribiri, Amedeo [Divisions of Imaging Sciences, The Rayne Institute, St. Thomas' Hospital, London (United Kingdom); Victor, Kelly; Chambers, John [Department of Cardiology, St. Thomas’ Hospital, London (United Kingdom)

    2014-09-15

    Unidentified heart valve disease is associated with a significant morbidity and mortality. It has therefore become important to accurately identify, assess and monitor patients with this condition in order that appropriate and timely intervention can occur. Although echocardiography has emerged as the predominant imaging modality for this purpose, recent advances in cardiac magnetic resonance and cardiac computed tomography indicate that they may have an important contribution to make. The current review describes the assessment of regurgitant and stenotic heart valves by multimodality imaging (echocardiography, cardiac computed tomography and cardiac magnetic resonance) and discusses their relative strengths and weaknesses.

  5. Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox

    Directory of Open Access Journals (Sweden)

    Andre Santos Ribeiro

    2015-07-01

    Full Text Available Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time consuming as rigorous procedures must be assured, from structuring data and pre-processing to modality-specific data procedures. Until now, no single toolbox has been able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox with the goal of diminishing time waste in data processing and allowing an innovative and comprehensive approach to brain connectivity. Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph-theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). It was developed in the MATLAB environment and pipelines well-known neuroimaging software such as FreeSurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as routines for the extraction and calculation of imaging and graph-theory metrics, the latter also using functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19–73 years old) with volumetric T1-weighted, diffusion tensor imaging, and resting-state fMRI data, and 10 subjects with 18F-altanserin PET data. Results. It was observed both a high inter

  6. Registration of deformed multimodality medical images

    International Nuclear Information System (INIS)

    Moshfeghi, M.; Naidich, D.

    1989-01-01

    The registration and combination of images from different modalities have several potential applications, such as functional and anatomic studies, 3D radiation treatment planning, surgical planning, and retrospective studies. Image registration algorithms should correct for any local deformations caused by respiration, heart beat, imaging device distortions, and so forth. This paper reports on an elastic matching technique for registering deformed multimodality images. Correspondences between contours in the two images are used to stretch the deformed image toward its goal image. This process is repeated a number of times, with decreasing image stiffness. As the iterations continue, the stretched image better approximates its goal image

  7. Multimodal neuromonitoring in pediatric cardiac anesthesia

    Directory of Open Access Journals (Sweden)

    Alexander J. C. Mittnacht

    2014-01-01

    Full Text Available Despite significant improvements in overall outcome, neurological injury remains a feared complication following pediatric congenital heart surgery (CHS). Only if adverse events are detected early enough can effective actions be initiated to prevent potentially serious injury. The multifactorial etiology of neurological injury in CHS patients makes it unlikely that any single monitoring modality will be effective in capturing all possible threats. Improving current technologies, developing new ones, and combining them according to the concept of multimodal monitoring may allow for early detection and possible intervention, with the goal of further improving neurological outcome in children undergoing CHS.

  8. Development of Multimodal Human Interface Technology

    Science.gov (United States)

    Hirose, Michitaka

    About 20 years have passed since the word “Virtual Reality” became popular. During these two decades, a novel human interface technology, so-called “multimodal interface technology,” has taken shape. In this paper, recent progress in real-time CG, BCI and five-senses IT is first briefly reviewed. Since the life cycle of an information technology is said to be 20 years or so, novel directions and paradigms of VR technology can be found in conjunction with the technologies mentioned above. At the end of the paper, these futuristic directions, such as ultra-realistic media, are briefly introduced.

  9. Multimodal approach to the international transit transport

    Directory of Open Access Journals (Sweden)

    D. Bazaras

    2003-12-01

    Full Text Available The article analyses not only the problems of multi-modal and inter-modal conveyances in Lithuania, the concept of transit, and the factors stimulating the transit system, but also presents the modelling of transit transport and the flows of loads. The main part of the article is devoted to the analysis of the recent situation in Lithuania: the place of the transport sector in the market of transit services is analysed and the transit benefit for the Lithuanian economy is evaluated. Conclusions and proposals are given at the end of the article.

  10. Percorsi linguistici e semiotici: Critical Multimodal Analysis of Digital Discourse

    Directory of Open Access Journals (Sweden)

    edited by Ilaria Moschini

    2014-12-01

    Full Text Available The language section of LEA - edited by Ilaria Moschini - is dedicated to the Critical Multimodal Analysis of Digital Discourse, an approach that encompasses the linguistic and semiotic detailed investigation of texts within a socio-cultural perspective. It features an interview with Professor Theo van Leeuwen by Ilaria Moschini and four essays: “Retwitting, reposting, repinning; reshaping identities online: Towards a social semiotic multimodal analysis of digital remediation” by Elisabetta Adami; “Multimodal aspects of corporate social responsibility communication” by Carmen Daniela Maier; “Pervasive Technologies and the Paradoxes of Multimodal Digital Communication” by Sandra Petroni and “Can the powerless speak? Linguistic and multimodal corporate media manipulation in digital environments: the case of Malala Yousafzai” by Maria Grazia Sindoni. 

  11. Parallel and Cooperative Particle Swarm Optimizer for Multimodal Problems

    Directory of Open Access Journals (Sweden)

    Geng Zhang

    2015-01-01

    Full Text Available Although the original particle swarm optimizer (PSO) method and its related variant methods show some effectiveness for solving optimization problems, they may easily get trapped in a local optimum, especially when solving complex multimodal problems. Aiming to solve this issue, this paper puts forward a novel method called parallel and cooperative particle swarm optimizer (PCPSO). In case the elements of the D-dimensional function vector X=[x1,x2,…,xd,…,xD] do not interact, the cooperative particle swarm optimizer (CPSO) is used. Based on this, the PCPSO is presented to solve real problems. Since in real problems the dimensions cannot be split into several lower-dimensional search spaces because the elements interact, PCPSO exploits the cooperation of two parallel CPSO algorithms through orthogonal experimental design (OED) learning. Firstly, the CPSO algorithm is used to generate two locally optimal vectors separately; then OED is used to learn the merits of these two vectors and create a better combination of them for further search. Experimental studies on a set of test functions show that PCPSO exhibits better robustness and converges much closer to the global optimum than several other peer algorithms.
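
    To make the dimension-splitting idea concrete, here is a minimal cooperative-PSO sketch (hypothetical: it implements plain CPSO with one sub-swarm per coordinate and a shared context vector, without the parallel swarms and OED learning of PCPSO; the test function and all parameter values are placeholders):

```python
# Minimal cooperative PSO: the D-dimensional space is split into one
# sub-swarm per coordinate; each particle is evaluated by plugging its
# coordinate into a shared context vector holding the best-known solution.
import numpy as np

def sphere(x):
    """Toy test function: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def cooperative_pso(f, dim=10, particles=20, iters=200,
                    bounds=(-5.0, 5.0), w=0.72, c1=1.49, c2=1.49, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (dim, particles))   # one row per sub-swarm
    vel = np.zeros((dim, particles))
    pbest = pos.copy()
    pbest_val = np.full((dim, particles), np.inf)
    context = rng.uniform(lo, hi, dim)            # shared context vector
    best_val = f(context)

    for _ in range(iters):
        for d in range(dim):
            for p in range(particles):
                trial = context.copy()            # evaluate inside context
                trial[d] = pos[d, p]
                val = f(trial)
                if val < pbest_val[d, p]:
                    pbest_val[d, p] = val
                    pbest[d, p] = pos[d, p]
                if val < best_val:                # improve shared context
                    best_val = val
                    context[d] = pos[d, p]
            # standard PSO update for this coordinate's sub-swarm
            r1, r2 = rng.random(particles), rng.random(particles)
            vel[d] = (w * vel[d]
                      + c1 * r1 * (pbest[d] - pos[d])
                      + c2 * r2 * (context[d] - pos[d]))
            pos[d] = np.clip(pos[d] + vel[d], lo, hi)
    return context, best_val

if __name__ == "__main__":
    x_best, f_best = cooperative_pso(sphere)
    print(f"best value {f_best:.3e} at {x_best.round(3)}")
```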

  12. Multi-Modal Inference in Animacy Perception for Artificial Object

    Directory of Open Access Journals (Sweden)

    Kohske Takahashi

    2011-10-01

    Full Text Available Sometimes we feel animacy for artificial objects and their motion. Animals usually interact with environments through multiple sensory modalities. Here we investigated how the sensory responsiveness of artificial objects to the environment would contribute to animacy judgments about them. In a 90-s trial, observers freely viewed four objects moving in a virtual 3D space. The objects, whose position and motion were determined following Perlin-noise series, kept drifting independently in the space. Visual flashes, auditory bursts, or synchronous flashes and bursts appeared at 1–2 s intervals. The first object abruptly accelerated its motion just after visual flashes, giving an impression of responding to the flash. The second object responded to bursts. The third object responded to synchronous flashes and bursts. The fourth object accelerated at a random timing independent of flashes and bursts. The observers rated how strongly they felt animacy for each object. The results showed that the object responding to the auditory bursts was rated as having weaker animacy compared to the other objects. This implies that the sensory modality through which an object interacts with the environment may be a factor in animacy perception of the object and may serve as the basis of multi-modal and cross-modal inference of animacy.

  13. MULTIMODAL CONSTRUCTION OF CHILDREN'S ARGUMENTS IN DISPUTES DURING PLAY

    Directory of Open Access Journals (Sweden)

    Rosemberg, Celia Renata

    2013-09-01

    Full Text Available Within the framework of a sociocultural theory of human development and learning (Vigotsky, 2009; Bruner, 1986; Nelson, 1996; Tomasello, 1998, 2003), this paper aims to investigate the multimodal construction of arguments produced by 5-year-old children during disputes in a kindergarten play situation. We considered the juxtaposition of information provided by resources from different semiotic fields (Goodwin, 2000, 2007). The corpus consists of the interactions of a group of children while they play with building blocks. This play situation was videotaped in a kindergarten classroom attended by an urban marginalized population of outer Buenos Aires, Argentina. The analysis makes use of the qualitative logic derived from the methodological tools of Conversation Analysis developed in previous research (Goodwin, 2000, 2007; Goodwin and Goodwin, 1990, 2000; Goodwin, Goodwin and Yaeger-Dror, 2002). The results show the different semiotic fields that overlap with the linguistic expression of the arguments or points of view that children maintain while quarrelling during play. This demonstrates the importance of attending to intonation, the use of space, the direction of gaze, gestures, and body positioning, as they are components that contribute to the argumentative force of the utterances in disputes. These elements emerge as indicators of the emotions that parties experience in disputes, which cannot be disregarded when attempting to account for how argumentation occurs in real situations of interaction. This paper is written in Spanish.

  14. E-publishing and multimodalities

    Directory of Open Access Journals (Sweden)

    Yngve Nordkvelle

    2008-12-01

    Full Text Available In the literature on e-publishing there has been a consistent call, from the advent of e-publishing until now, to explore new ways of expressing ideas through the new media. It has been claimed that the Internet opens an alley of possibilities and opportunities for publishing that will change the ways of publishing once and for all. In the area of e-journal publication, however, the call for change has received very modest responses. The conventional paper journal, it appears, has a solid grip on the accepted formats of publishing. In a published research paper, Mayernik (2007) explains some of the reasons for this. Although pioneers of e-publishing suggested various areas where academic publishing could be expanded, the opportunities given are scarcely used. Mayernik outlines "Non-linearity", "Multimedia", "Multiple use", "Interactivity" and "Rapid Publication" as areas of expansion for the academic e-journal (2007). The paper deserves a thorough reading in itself, and I will briefly quote from his conclusion: "It is likely that the traditional linear article will continue to be the prevalent format for scholarly journals, both print and electronic, for the foreseeable future, and while electronic features will garner more and more use as technology improves, they will continue to be used to supplement, and not supplant, the traditional article." This is a challenging situation. If we accept the present dominant style of presenting scientific literature, we would best use our energy in seeking ways of improving the efficiency of that communication style. The use of multimedia, non-linearity etc. would perfect the present state, but still keep the scientific article as the main template. It is very unlikely that scientific publication will substitute the scholarly article with unproven alternatives. What we face is a rather conservative style of remediation that blurs the impact of the new media, - or "transparency" if

  15. Modeling Strategic Use of Human Computer Interfaces with Novel Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Laura Jane Mariano

    2015-07-01

    Full Text Available Immersive software tools are virtual environments designed to give their users an augmented view of real-world data and ways of manipulating that data. As virtual environments, every action users make while interacting with these tools can be carefully logged, as can the state of the software and the information it presents to the user, giving these actions context. This data provides a high-resolution lens through which dynamic cognitive and behavioral processes can be viewed. In this report, we describe new methods for the analysis and interpretation of such data, utilizing a novel implementation of the Beta Process Hidden Markov Model (BP-HMM for analysis of software activity logs. We further report the results of a preliminary study designed to establish the validity of our modeling approach. A group of 20 participants were asked to play a simple computer game, instrumented to log every interaction with the interface. Participants had no previous experience with the game’s functionality or rules, so the activity logs collected during their naïve interactions capture patterns of exploratory behavior and skill acquisition as they attempted to learn the rules of the game. Pre- and post-task questionnaires probed for self-reported styles of problem solving, as well as task engagement, difficulty, and workload. We jointly modeled the activity log sequences collected from all participants using the BP-HMM approach, identifying a global library of activity patterns representative of the collective behavior of all the participants. Analyses show systematic relationships between both pre- and post-task questionnaires, self-reported approaches to analytic problem solving, and metrics extracted from the BP-HMM decomposition. Overall, we find that this novel approach to decomposing unstructured behavioral data within software environments provides a sensible means for understanding how users learn to integrate software functionality for strategic
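
    As a rough illustration of this style of analysis (a shared library of hidden activity states fitted jointly across participants), the sketch below uses a standard Gaussian HMM from hmmlearn as a simplified stand-in for the BP-HMM, which is not available in mainstream libraries; all feature names, dimensions and data are invented:

```python
# Fit one Gaussian HMM jointly over all participants' activity-log feature
# sequences, so the learned hidden states act as a shared library of
# activity patterns. A true Beta-Process HMM would additionally let each
# participant use only a subset of the states.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Toy stand-in: 20 participants, each a sequence of 3 log-derived features
# (e.g., action rate, dwell time, UI-region entropy -- hypothetical names).
sequences = [rng.normal(size=(rng.integers(80, 120), 3)) for _ in range(20)]
X = np.concatenate(sequences)           # stacked observations
lengths = [len(s) for s in sequences]   # per-participant sequence lengths

model = hmm.GaussianHMM(n_components=6, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(X, lengths)

# Decode one participant's log into activity-pattern labels and summarize.
states = model.predict(sequences[0])
occupancy = np.bincount(states, minlength=6) / len(states)
print("state occupancy for participant 0:", occupancy.round(2))
```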

  16. Rhythmic interaction in VR

    DEFF Research Database (Denmark)

    Erkut, Cumhur

    2017-01-01

    Cinematic virtual reality is a new and relatively unexplored area in academia. While research in guiding the spectator's attention in this new medium has been conducted for some time, a focus on editing in conjunction with spectator orientation is only currently emerging. In this paper, we consid...... in rhythm perception, and complement it with applications in traditional editing. Through the notion of multimodal listening we provide guidelines that can be used in rhythmic and sonic interaction design in VR....

  17. Multimodal EEG Recordings, Psychometrics and Behavioural Analysis.

    Science.gov (United States)

    Boeijinga, Peter H

    2015-01-01

    High spatial and temporal resolution measurements of neuronal activity are preferably combined. In an overview of how this approach can take shape, multimodal electroencephalography (EEG) is treated in two main parts: experiments without a task, and the experimentally cued working brain. The first part concentrates on alpha rhythm properties and then on the data-driven search for patterns such as the default mode network. The high-resolution volumic distributions of neuronal metabolic indices result in distributed cortical regions and possibly relate to numerous nuclei, observable in a non-invasive manner in the human central nervous system. The second part deals with paradigms in which present-day assessment of target-related networks can align blood-oxygenation-level-dependent signals, electrical responses and behaviour, taking advantage of the temporal resolution of event-related potentials. Evidence-based electrical propagation in serial tasks during performance is now to a large extent attributed to interconnected pathways, particularly chronometry-dependent ones, throughout a chain including a dorsal stream, with ventral cortical areas next taking the flow of information towards inferior temporal domains. The influence of aging is documented, and results of the first multimodal studies in neuropharmacology are consistent. Finally, a scope on implementation of advanced clinical applications and personalized marker strategies in neuropsychiatry is indicated. © 2016 S. Karger AG, Basel.

  18. Fiber Optic Pressure Sensor using Multimode Interference

    International Nuclear Information System (INIS)

    Ruiz-Perez, V I; Sanchez-Mondragon, J J; Basurto-Pensado, M A; LiKamWa, P; May-Arrioja, D A

    2011-01-01

    Based on the theory of multimode interference (MMI) and self-image formation, we developed a novel intrinsic optical fiber pressure sensor. The sensing element consists of a section of multimode fiber (MMF) without cladding spliced between two single mode fibers (SMF). The MMI pressure sensor is based on the intensity changes that occur in the transmitted light when the effective refractive index of the MMF is changed. Basically, a thick layer of Polydimethylsiloxane (PDMS) is placed in direct contact with the MMF section, such that the contact area between the PDMS and the fiber will change proportionally with the applied pressure, which results in a variation of the transmitted light intensity. Using this configuration, a good correlation between the measured intensity variations and the applied pressure is obtained. The sensitivity of the sensor is 3 μV/psi, for a range of 0-60 psi, and the maximum resolution of our system is 0.25 psi. Good repeatability is also observed with a standard deviation of 0.0019. The key feature of the proposed pressure sensor is its low fabrication cost, since the cost of the MMF is minimal.
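
    For context, the self-imaging that such a sensor exploits is usually summarized through the beat length of the two lowest-order guided modes (standard MMI theory in the style of guided-mode propagation analysis; the formula is background knowledge, not quoted from this record):

```latex
% Beat length of the two lowest-order modes of a multimode section with
% effective index n_eff, effective width W_e and free-space wavelength
% \lambda_0; single images of the input field recur periodically.
L_\pi = \frac{\pi}{\beta_0 - \beta_1}
      \approx \frac{4\, n_{\mathrm{eff}}\, W_e^{2}}{3 \lambda_0},
\qquad
z_{\mathrm{image}} = p \,(3 L_\pi), \quad p = 1, 2, \dots
```

    A pressure-induced change of the effective index shifts these image planes, which the sensor reads out as an intensity change at the output single-mode fiber.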

  19. Fiber Optic Pressure Sensor using Multimode Interference

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz-Perez, V I; Sanchez-Mondragon, J J [INAOE, Apartado Postal 51 y 216, Puebla 72000 (Mexico); Basurto-Pensado, M A [CIICAp, Universidad Autonoma del Estado de Morelos (Mexico); LiKamWa, P [CREOL, University of Central Florida, Orlando, FL 32816 (United States); May-Arrioja, D A, E-mail: iruiz@inaoep.mx, E-mail: mbasurto@uaem.mx, E-mail: delta_dirac@hotmail.com, E-mail: daniel_may_arrioja@hotmail.com [UAT Reynosa Rodhe, Universidad Autonoma de Tamaulipas (Mexico)

    2011-01-01

    Based on the theory of multimode interference (MMI) and self-image formation, we developed a novel intrinsic optical fiber pressure sensor. The sensing element consists of a section of multimode fiber (MMF) without cladding spliced between two single mode fibers (SMF). The MMI pressure sensor is based on the intensity changes that occur in the transmitted light when the effective refractive index of the MMF is changed. Basically, a thick layer of Polydimethylsiloxane (PDMS) is placed in direct contact with the MMF section, such that the contact area between the PDMS and the fiber will change proportionally with the applied pressure, which results in a variation of the transmitted light intensity. Using this configuration, a good correlation between the measured intensity variations and the applied pressure is obtained. The sensitivity of the sensor is 3 μV/psi, for a range of 0-60 psi, and the maximum resolution of our system is 0.25 psi. Good repeatability is also observed with a standard deviation of 0.0019. The key feature of the proposed pressure sensor is its low fabrication cost, since the cost of the MMF is minimal.

  20. ASIC3 channels in multimodal sensory perception.

    Science.gov (United States)

    Li, Wei-Guang; Xu, Tian-Le

    2011-01-19

    Acid-sensing ion channels (ASICs), which are members of the sodium-selective cation channels belonging to the epithelial sodium channel/degenerin (ENaC/DEG) family, act as membrane-bound receptors for extracellular protons as well as nonproton ligands. At least five ASIC subunits have been identified in mammalian neurons, which form both homotrimeric and heterotrimeric channels. The highly proton sensitive ASIC3 channels are predominantly distributed in peripheral sensory neurons, correlating with their roles in multimodal sensory perception, including nociception, mechanosensation, and chemosensation. Different from other ASIC subunit composing ion channels, ASIC3 channels can mediate a sustained window current in response to mild extracellular acidosis (pH 7.3-6.7), which often occurs accompanied by many sensory stimuli. Furthermore, recent evidence indicates that the sustained component of ASIC3 currents can be enhanced by nonproton ligands including the endogenous metabolite agmatine. In this review, we first summarize the growing body of evidence for the involvement of ASIC3 channels in multimodal sensory perception and then discuss the potential mechanisms underlying ASIC3 activation and mediation of sensory perception, with a special emphasis on its role in nociception. We conclude that ASIC3 activation and modulation by diverse sensory stimuli represent a new avenue for understanding the role of ASIC3 channels in sensory perception. Furthermore, the emerging implications of ASIC3 channels in multiple sensory dysfunctions including nociception allow the development of new pharmacotherapy.

  1. Multimodal 2D Brain Computer Interface.

    Science.gov (United States)

    Almajidy, Rand K; Boudria, Yacine; Hofmann, Ulrich G; Besio, Walter; Mankodiya, Kunal

    2015-08-01

    In this work we used multimodal, non-invasive brain signal recording systems, namely Near Infrared Spectroscopy (NIRS), disc-electrode electroencephalography (EEG) and tripolar concentric ring electrode (TCRE) electroencephalography (tEEG). Seven healthy subjects participated in our experiments to control a 2-D Brain Computer Interface (BCI). Four motor imagery tasks were performed: imagined motion of the left hand, the right hand, both hands and both feet. The signal slope (SS) of the change in oxygenated hemoglobin concentration measured by NIRS was used for feature extraction, while the power spectral density (PSD) of both EEG and tEEG in the 8–30 Hz frequency band was used for feature extraction. Linear Discriminant Analysis (LDA) was used to classify different combinations of the aforementioned features. The highest classification accuracy (85.2%) was achieved by using features from all three brain signal recording modules. The improvement in classification accuracy was highly significant (p = 0.0033) when using the multimodal signal features as compared to pure EEG features.
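
    A minimal sketch of the kind of feature pipeline the record describes (band-power features plus a NIRS signal slope, classified with LDA) is given below; the sampling rate, channel counts and synthetic data are placeholders, not the authors' setup:

```python
# Band-power (8-30 Hz) features from EEG/tEEG-like signals plus a NIRS
# signal-slope feature, concatenated and classified with LDA.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 256  # assumed sampling rate in Hz

def bandpower_features(trials, fs=FS, lo=8.0, hi=30.0):
    """Mean 8-30 Hz PSD per channel; trials: (n_trials, n_ch, n_samples)."""
    freqs, psd = welch(trials, fs=fs, axis=-1)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[..., band].mean(axis=-1)

def slope_features(nirs_trials):
    """Least-squares slope of each NIRS channel over the trial window."""
    n_samp = nirs_trials.shape[-1]
    t = np.arange(n_samp, dtype=float)
    t = (t - t.mean()) / n_samp
    return (nirs_trials * t).sum(axis=-1) / (t ** 2).sum()

# Toy data: 80 trials, 8 EEG-like channels x 512 samples, 4 NIRS channels.
rng = np.random.default_rng(1)
eeg = rng.normal(size=(80, 8, 512))
nirs = rng.normal(size=(80, 4, 64))
y = rng.integers(0, 4, size=80)          # four motor-imagery classes

X = np.hstack([bandpower_features(eeg), slope_features(nirs)])
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f} (chance level ~0.25)")
```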

  2. Photonic lantern with multimode fibers embedded

    Science.gov (United States)

    Yu, Hai-Jiao; Yan, Qi; Huang, Zong-Jun; Tian, He; Jiang, Yu; Liu, Yong-Jun; Zhang, Jian-Zhong; Sun, Wei-Min

    2014-08-01

    A photonic lantern is studied which is formed by seven multimode fibers inserted into a pure silica capillary tube. The core of the tapered end has a uniform refractive index because the polymer claddings are removed before the fibers are inserted. Consequently, the light distribution is also uniform. Two theories describing a slowly varying waveguide and multimode coupling are used to analyze the photonic lantern. The transmission loss decreases as the length of the tapered part increases. For a device with a taper length of 3.4 cm, the loss is about 1.06 dB on average for light propagating through the taper from an inserted fiber to the tapered end and 0.99 dB in the reverse direction. For a device with a taper length of 0.7 cm, the two loss values are 2.63 dB and 2.53 dB, respectively. The results show that it is possible to achieve a uniform light distribution with the tapered end and a low-loss transmission in the device if parameters related to the lantern are reasonably defined.

  3. Photonic lantern with multimode fibers embedded

    International Nuclear Information System (INIS)

    Yu Hai-Jiao; Yan Qi; Huang Zong-Jun; Tian He; Jiang Yu; Liu Yong-Jun; Zhang Jian-Zhong; Sun Wei-Min

    2014-01-01

    A photonic lantern is studied which is formed by seven multimode fibers inserted into a pure silica capillary tube. The core of the tapered end has a uniform refractive index because the polymer claddings are removed before the fibers are inserted. Consequently, the light distribution is also uniform. Two theories describing a slowly varying waveguide and multimode coupling are used to analyze the photonic lantern. The transmission loss decreases as the length of the tapered part increases. For a device with a taper length of 3.4 cm, the loss is about 1.06 dB on average for light propagating through the taper from an inserted fiber to the tapered end and 0.99 dB in the reverse direction. For a device with a taper length of 0.7 cm, the two loss values are 2.63 dB and 2.53 dB, respectively. The results show that it is possible to achieve a uniform light distribution with the tapered end and a low-loss transmission in the device if parameters related to the lantern are reasonably defined.

  4. Multimodal brain monitoring in fulminant hepatic failure

    Science.gov (United States)

    Paschoal Jr, Fernando Mendes; Nogueira, Ricardo Carvalho; Ronconi, Karla De Almeida Lins; de Lima Oliveira, Marcelo; Teixeira, Manoel Jacobsen; Bor-Seng-Shu, Edson

    2016-01-01

    Acute liver failure, also known as fulminant hepatic failure (FHF), embraces a spectrum of clinical entities characterized by acute liver injury, severe hepatocellular dysfunction, and hepatic encephalopathy. Cerebral edema and intracranial hypertension are common causes of mortality in patients with FHF. The management of patients who present with acute liver failure starts with determining the cause and an initial evaluation of prognosis. Regardless of whether or not patients are listed for liver transplantation, they should still be monitored for recovery, death, or transplantation. In the past, neuromonitoring was restricted to serial clinical neurologic examination and, in some cases, intracranial pressure monitoring. Over the years, this monitoring has proven insufficient, as brain abnormalities were detected at late and irreversible stages. The need for real-time monitoring of brain functions to favor prompt treatment and avert irreversible brain injuries led to the concepts of multimodal monitoring and neurophysiological decision support. New monitoring techniques, such as brain tissue oxygen tension, continuous electroencephalography, transcranial Doppler, and cerebral microdialysis, have been developed. These techniques enable early diagnosis of brain hemodynamic, electrical, and biochemical changes, allow brain anatomical and physiological monitoring-guided therapy, and have improved patient survival rates. The purpose of this review is to discuss the multimodality methods available for monitoring patients with FHF in the neurocritical care setting. PMID:27574545

  5. Quantitative multi-modal NDT data analysis

    International Nuclear Information System (INIS)

    Heideklang, René; Shokouhi, Parisa

    2014-01-01

    A single NDT technique is often not adequate to provide assessments of the integrity of test objects with the required coverage or accuracy. In such situations, one often resorts to multi-modal testing, where complementary and overlapping information from different NDT techniques is combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task which involves several diverse fields of research, including signal and image processing, statistics and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these will be discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of Eddy Current, GMR and Thermography measurements on a reference metallic specimen with built-in grooves will be presented. Results show that fusion is able to outperform the best single sensor regarding detection specificity, while retaining the same level of sensitivity.
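
    To make the idea of a high-level fusion scheme concrete, here is a minimal decision-level sketch (hypothetical; the abstract does not specify the authors' actual scheme): co-registered per-modality detection maps are normalized and averaged, so that redundant evidence from several modalities reinforces a detection:

```python
# Decision-level fusion of co-registered 2-D detection maps: normalize
# each modality to [0, 1], take a weighted average, and flag defects
# where the fused score exceeds a threshold.
import numpy as np

def normalize(m):
    m = m.astype(float)
    return (m - m.min()) / (m.max() - m.min() + 1e-12)

def fuse(maps, weights=None, threshold=0.6):
    maps = np.stack([normalize(m) for m in maps])
    w = np.ones(len(maps)) if weights is None else np.asarray(weights, float)
    fused = np.tensordot(w / w.sum(), maps, axes=1)
    return fused, fused > threshold

# Toy co-registered maps standing in for eddy current, GMR, thermography.
rng = np.random.default_rng(2)
shape = (64, 64)
groove = np.zeros(shape)
groove[30:34, 10:54] = 1.0                       # simulated groove defect
maps = [groove + 0.4 * rng.normal(size=shape) for _ in range(3)]
fused, detected = fuse(maps)
print("defect pixels flagged:", int(detected.sum()))
```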

  6. Multimodal Network Equilibrium with Stochastic Travel Times

    Directory of Open Access Journals (Sweden)

    M. Meng

    2014-01-01

    Full Text Available The private car, unlike public traffic modes (e.g., subway, trolley) running along dedicated track-ways, is invariably subject to various uncertainties resulting in travel time variation. A multimodal network equilibrium model is formulated that explicitly considers stochastic link capacity variability in the road network. The travel time of combined-mode trips is accumulated based on the concept of the mean excess travel time (METT), which is a summation of the estimated buffer time and tardy time. The problem is characterized by an equivalent VI (variational inequality) formulation where the mode choice is expressed in a hierarchical logit structure. Specifically, supernetwork theory and an expansion technique are used herein to represent the multimodal transportation network, which completely represents combined-mode trips as constituting multiple modes within a trip. The method of successive weighted averages is adopted for the problem solution. The model and solution method are further applied to study the trip distribution and METT variations caused by different levels of road conditions. Results of numerical examples show that travelers prefer to choose the combined travel mode as road capacity decreases. Travelers with different attitudes towards risk are shown to exhibit significant differences when making travel choice decisions.
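
    For reference, the mean excess travel time is commonly defined as the conditional expectation of travel time beyond a confidence-level budget (a standard definition from the travel-time reliability literature, consistent with the buffer-plus-tardy decomposition mentioned above; not quoted from this abstract):

```latex
% Travel-time budget \xi_\alpha at confidence level \alpha for random
% travel time T, and the mean excess travel time beyond that budget.
\xi_\alpha = \min\{\xi : \Pr(T \le \xi) \ge \alpha\},
\qquad
\mathrm{METT}_\alpha
= \mathbb{E}\!\left[\, T \mid T \ge \xi_\alpha \,\right]
= \xi_\alpha + \mathbb{E}\!\left[\, T - \xi_\alpha \mid T \ge \xi_\alpha \,\right].
```

    The budget \xi_\alpha covers the mean travel time plus a buffer, and the second term is the expected tardiness when the budget is exceeded.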

  7. Multimodal human communication – targeting facial expressions, speech content and prosody.

    Science.gov (United States)

    Regenbogen, Christina; Schneider, Daniel A; Gur, Raquel E; Schneider, Frank; Habel, Ute; Kellermann, Thilo

    2012-05-01

    Human communication is based on a dynamic information exchange of the communication channels facial expressions, prosody, and speech content. This fMRI study elucidated the impact of multimodal emotion processing and the specific contribution of each channel on behavioral empathy and its prerequisites. Ninety-six video clips displaying actors who told self-related stories were presented to 27 healthy participants. In two conditions, all channels uniformly transported only emotional or neutral information. Three conditions selectively presented two emotional channels and one neutral channel. Subjects indicated the actors' emotional valence and their own while fMRI was recorded. Activation patterns of tri-channel emotional communication reflected multimodal processing and facilitative effects for empathy. Accordingly, subjects' behavioral empathy rates significantly deteriorated once one source was neutral. However, emotionality expressed via two of three channels yielded activation in a network associated with theory-of-mind-processes. This suggested participants' effort to infer mental states of their counterparts and was accompanied by a decline of behavioral empathy, driven by the participants' emotional responses. Channel-specific emotional contributions were present in modality-specific areas. The identification of different network-nodes associated with human interactions constitutes a prerequisite for understanding dynamics that underlie multimodal integration and explain the observed decline in empathy rates. This task might also shed light on behavioral deficits and neural changes that accompany psychiatric diseases. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. MANIFESTATION OF MANIPULATION IN POLITICAL TALK-SHOWS: COGNITIVE AND MULTIMODAL ASPECTS

    Directory of Open Access Journals (Sweden)

    Petrova Anna Aleksandrovna

    2014-11-01

    Full Text Available The article deals with the manifestation of manipulation in political television talk shows. The suggestive processes of interaction in the analyzed genre of media political discourse are studied in two aspects: (a) monomodal, as speech manipulation by verbal means at the level of emotional suggestion; (b) multimodal, as counter-suggestion that restricts the effect of suggestion with visual and kinetic resources. The foundation of the cognitive analysis is a modeling method with a linguistic model which contains components of the cognitive and emotional processing of meaning, conclusions and reasoning. According to this three-component model, speech manipulation consists in the activation of an addressee's dominant scripts and is ensured by the verbal resources of suggestion associated with these scripts. The foundation of the multimodal study of situations with counter-suggestion in mass-media discourse is an ethnomethodological method with a reconstruction device. With this scientific attitude the authors have divided the resources of protection from activating manipulation into two groups: (1) passive interactive communication of a suggestee in a verbal pause; (2) active interactive communication of a suggestee aimed at changing status and role domination. The empirical study of two isolated modalities and their correlations in specific situations of political talk shows allowed the authors to develop the hypothesis of the existence of a fourth visual and kinetic component which represents spatial and corporal constellations with other models (or modalities) of communication and their configurations. This study emphasizes the need to extend the research frame for complex interactive processes of communication by studying them in the multimodal aspect.

  9. Nuclear power plant human computer interface design incorporating console simulation, operations personnel, and formal evaluation techniques

    International Nuclear Information System (INIS)

    Chavez, C.; Edwards, R.M.; Goldberg, J.H.

    1993-01-01

    New CRT-based information displays which enhance the human-machine interface are playing a very important role and are increasingly used in control rooms, since they offer a higher degree of flexibility compared to conventional hardwired instrumentation. To prototype a new console configuration and information display system at the Experimental Breeder Reactor II (EBR-II), an iterative process of console simulation and evaluation involving operations personnel is being pursued. Entire panels, including selector switches and information displays, are simulated and driven by plant dynamical simulations with realistic responses that reproduce the actual cognitive and physical environment. Careful analysis and formal evaluation of operator interaction while using the simulated console will be conducted to determine underlying principles for effective control console design for this particular group of operations personnel. Additional iterations of design, simulation, and evaluation will then be conducted as necessary.

  10. Human-Centered Design of Human-Computer-Human Dialogs in Aerospace Systems

    Science.gov (United States)

    Mitchell, Christine M.

    1998-01-01

    A series of ongoing research programs at Georgia Tech established a need for a simulation support tool for aircraft computer-based aids. This led to the design and development of the Georgia Tech Electronic Flight Instrument Research Tool (GT-EFIRT). GT-EFIRT is a part-task flight simulator specifically designed to study aircraft display design and single-pilot interaction. The simulator, using commercially available graphics and Unix workstations, replicates to a high level of fidelity the Electronic Flight Instrument System (EFIS), Flight Management Computer (FMC) and Auto Flight Director System (AFDS) of the Boeing 757/767 aircraft. The simulator can be configured to present information using conventional-looking B757/767 displays or next-generation Primary Flight Displays (PFD) such as found on the Beech Starship and MD-11.

  11. Cortical inter-hemispheric circuits for multimodal vocal learning in songbirds.

    Science.gov (United States)

    Paterson, Amy K; Bottjer, Sarah W

    2017-10-15

    Vocal learning in songbirds and humans is strongly influenced by social interactions based on sensory inputs from several modalities. Songbird vocal learning is mediated by cortico-basal ganglia circuits that include the SHELL region of the lateral magnocellular nucleus of the anterior nidopallium (LMAN), but little is known concerning neural pathways that could integrate multimodal sensory information with SHELL circuitry. In addition, cortical pathways that mediate the precise coordination between hemispheres required for song production have been little studied. In order to identify candidate mechanisms for multimodal sensory integration and bilateral coordination for vocal learning in zebra finches, we investigated the anatomical organization of two regions that receive input from SHELL: the dorsal caudolateral nidopallium (dNCL_SHELL) and a region within the ventral arcopallium (Av). Anterograde and retrograde tracing experiments revealed a topographically organized inter-hemispheric circuit: SHELL and dNCL_SHELL, as well as adjacent nidopallial areas, send axonal projections to ipsilateral Av; Av in turn projects to contralateral SHELL, dNCL_SHELL, and regions of nidopallium adjacent to each. Av on each side also projects directly to contralateral Av. dNCL_SHELL and Av each integrate inputs from ipsilateral SHELL with inputs from sensory regions in surrounding nidopallium, suggesting that they function to integrate multimodal sensory information with song-related responses within LMAN-SHELL during vocal learning. Av projections share this integrated information from the ipsilateral hemisphere with contralateral sensory and song-learning regions. Our results suggest that the inter-hemispheric pathway through Av may function to integrate multimodal sensory feedback with vocal-learning circuitry and coordinate bilateral vocal behavior. © 2017 Wiley Periodicals, Inc.

  12. Immersion in Movement-Based Interaction

    NARCIS (Netherlands)

    Pasch, M.; Bianchi-Berthouze, N.; van Dijk, Elisabeth M.A.G.; Nijholt, Antinus; Nijholt, A.; Reidsma, Dennis; Reidsma, D.; Hondorp, G.H.W.

    2009-01-01

    The phenomenon of immersing oneself into virtual environments has been established widely. Yet to date, to the best of our knowledge, the physical dimension has been neglected in studies investigating immersion in Human-Computer Interaction (HCI). In this paper we investigate how the concept of immersion …

  13. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification

    Directory of Open Access Journals (Sweden)

    Gayathri Rajagopal

    2015-01-01

    This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase recognition accuracy through feature-level fusion. The features at the feature level are raw biometric data, which contain richer information than what is available at the decision and matching-score levels; hence information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality, so PCA (principal component analysis) is used to reduce the dimensionality of the high-dimensional feature sets. The proposed multimodal system was compared with other multimodal and monomodal approaches; among these, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested on a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
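
    To make the fusion-plus-reduction pipeline concrete, here is a minimal, illustrative Python sketch (not the authors' implementation): it concatenates per-sample palmprint and iris feature vectors at the feature level, applies PCA to tame the dimensionality, and classifies with KNN. The feature arrays, subject counts, and PCA size are synthetic placeholders standing in for real extractor outputs.

        # Illustrative sketch only (not the authors' code): feature-level fusion
        # of palmprint and iris features, PCA reduction, KNN classification.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        n_subjects, n_samples = 40, 10
        labels = np.repeat(np.arange(n_subjects), n_samples)

        # Synthetic per-subject features standing in for real extractor outputs.
        palm = rng.normal(size=(n_subjects, 512))[labels] + 0.3 * rng.normal(size=(len(labels), 512))
        iris = rng.normal(size=(n_subjects, 256))[labels] + 0.3 * rng.normal(size=(len(labels), 256))

        # Feature-level fusion: concatenate the raw feature vectors of both traits.
        fused = np.hstack([palm, iris])

        X_train, X_test, y_train, y_test = train_test_split(
            fused, labels, test_size=0.3, stratify=labels, random_state=0)

        # PCA counters the curse of dimensionality in the fused vectors;
        # KNN then classifies subjects in the reduced space.
        model = make_pipeline(StandardScaler(),
                              PCA(n_components=100),
                              KNeighborsClassifier(n_neighbors=3))
        model.fit(X_train, y_train)
        print(f"recognition accuracy: {model.score(X_test, y_test):.3f}")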

  14. Spaces of interaction, places for experience

    CERN Document Server

    Benyon, David

    2014-01-01

    Spaces of Interaction, Places for Experience is a book about Human-Computer Interaction (HCI), interaction design (ID) and user experience (UX) in the age of ubiquitous computing. The book explores interaction and experience through the different spaces that contribute to interaction until it arrives at an understanding of the rich and complex places for experience that will be the focus of the next period of interaction design. The book begins by looking at the multilayered nature of interaction and UX: not just with new technologies, but with technologies that are embedded in the world. …

  15. Acoustic multimode interference and self-imaging phenomena realized in multimodal phononic crystal waveguides

    International Nuclear Information System (INIS)

    Zou, Qiushun; Yu, Tianbao; Liu, Jiangtao; Wang, Tongbiao; Liao, Qinghua; Liu, Nianhua

    2015-01-01

    We report an acoustic multimode interference effect and self-imaging phenomena in an acoustic multimode waveguide system consisting of M parallel phononic crystal waveguides (M-PnCWs). Results show that the self-imaging principle remains applicable to acoustic waveguides just as it does to optical multimode waveguides. To obtain the dispersion relations and the replicas of the input acoustic waves formed along the propagation direction, we applied the finite element method to M-PnCWs, which support M guided modes within the target frequency range. The simulation results show that single images (including direct and mirrored images) and N-fold images (N an integer) are identified along the propagation direction, with asymmetric and symmetric incidence considered separately. The simulated positions of the replicas agree well with the values theoretically determined by the self-imaging conditions based on guided-mode propagation analysis. Moreover, potential applications of this self-imaging effect for acoustic wavelength de-multiplexing and beam splitting in the acoustic field are also presented. (paper)
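
    For reference, the self-imaging conditions from guided-mode propagation analysis take the following standard form in optical multimode-interference theory; the sketch below assumes, as the record's results suggest, that the acoustic case obeys the same relations (notation ours, not the paper's):

        % Beat length of the two lowest-order guided modes, with propagation
        % constants \beta_0 and \beta_1:
        L_\pi = \frac{\pi}{\beta_0 - \beta_1}
        % General interference: single images (direct for even p, mirrored for
        % odd p) recur along the propagation direction at
        z = p \, (3 L_\pi), \qquad p = 1, 2, 3, \dots
        % and N-fold images appear at
        z = \frac{p}{N} \, (3 L_\pi), \qquad p,\ N \ \text{coprime positive integers}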

  16. Sensitivity-Bandwidth Limit in a Multimode Optoelectromechanical Transducer

    Science.gov (United States)

    Moaddel Haghighi, I.; Malossi, N.; Natali, R.; Di Giuseppe, G.; Vitali, D.

    2018-03-01

    An optoelectromechanical system formed by a nanomembrane capacitively coupled to an LC resonator and to an optical interferometer has recently been employed for the highly sensitive optical readout of rf signals [T. Bagci et al., Nature (London) 507, 81 (2013), 10.1038/nature13029]. We propose and experimentally demonstrate how the bandwidth of such a transducer can be increased by controlling the interference between two electromechanical interaction pathways of a two-mode mechanical system. With a proof-of-principle device operating at room temperature, we achieve a sensitivity of 300 nV/√Hz over a bandwidth of 15 kHz in the presence of radio-frequency noise, and an optimal shot-noise-limited sensitivity of 10 nV/√Hz over a bandwidth of 5 kHz. We discuss strategies for improving the performance of the device, showing that, for the same given sensitivity, a mechanical multimode transducer can achieve a bandwidth significantly larger than that of a single-mode one.
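
    To see why interfering two pathways can widen the usable band, recall the susceptibility of a single damped mechanical mode; a minimal sketch in our own notation, not taken from the record:

        % Response of one mechanical mode (resonance frequency \Omega_m, damping
        % rate \gamma_m, effective mass m) to a drive at frequency \omega:
        \chi(\omega) = \frac{1}{m \left( \Omega_m^2 - \omega^2 - i \, \gamma_m \, \omega \right)}

    Since |χ| rolls off away from Ω_m, a single-mode transducer holds its peak sensitivity only over a band set by γ_m; superposing two such responses with a controlled relative phase can keep the combined response flat over a wider band, consistent with the interference effect described above.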

  17. Multimodal charge-induction chromatography for antibody purification.

    Science.gov (United States)

    Tong, Hong-Fei; Lin, Dong-Qiang; Chu, Wen-Ning; Zhang, Qi-Lei; Gao, Dong; Wang, Rong-Zhu; Yao, Shan-Jing

    2016-01-15

    Hydrophobic charge-induction chromatography (HCIC) has the advantages of high capacity, salt tolerance and convenient pH-controlled elution. However, its binding specificity might be improved with multimodal molecular interactions. A new ligand, W-ABI, combining tryptophan and 5-amino-benzimidazole was designed under the concept of multimodal charge-induction chromatography (MCIC). The indole and benzimidazole groups of the ligand could provide oriented multimodal binding to target IgG under neutral pH, while the imidazole groups could induce electrostatic repulsion forces for efficient elution under acidic pH. The W-ABI ligand was coupled successfully onto agarose gel, and IgG adsorption behaviors were investigated. High affinity to IgG was found, with a saturated adsorption capacity of 70.4 mg/ml at pH 7, and the flow rate of the mobile phase showed little impact on the dynamic binding capacity. In addition, efficient elution could be achieved at mildly acidic pH with high recovery. Two separation cases (IgG separation from albumin-containing feedstock and monoclonal antibody purification from cell culture supernatant) were verified with high purity and recovery. In general, MCIC with the specially designed ligand is an extension of HCIC with improved adsorption selectivity, and would be a potential alternative to Protein A-based capture for the cost-effective purification of antibodies. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Multi-modality molecular imaging: pre-clinical laboratory configuration

    Science.gov (United States)

    Wu, Yanjun; Wellen, Jeremy W.; Sarkar, Susanta K.

    2006-02-01

    In recent years, the prevalence of in vivo molecular imaging applications has rapidly increased. Here we report on the construction of a multi-modality imaging facility in a pharmaceutical setting that is expected to further advance existing capabilities for in vivo imaging of drug distribution and drug-target interactions. The imaging instrumentation in our facility includes a microPET scanner, a four-wavelength time-domain optical imaging scanner, a 9.4 T/30 cm MRI scanner and a SPECT/X-ray CT scanner. An electronics shop and a computer room dedicated to image analysis are additional features of the facility. The layout of the facility was designed around a central animal preparation room surrounded by separate laboratory rooms for each of the major imaging modalities, to accommodate the work-flow of simultaneous in vivo imaging experiments. This report focuses on the design of, and anticipated applications for, our microPET and optical imaging laboratory spaces. Additionally, we discuss efforts to maximize the daily throughput of animal scans through the development of efficient experimental work-flows and the use of multiple animals in a single scanning session.

  19. Investigating multimodal communication in virtual meetings

    DEFF Research Database (Denmark)

    Persson, John Stouby; Mathiassen, Lars

    2014-01-01

    To manage distributed work, organizations increasingly rely on virtual meetings based on multimodal, synchronous communication technologies. However, despite technological advances, it is still challenging to coordinate knowledge through these meetings with spatial and cultural separation. Against this backdrop, drawing on recordings of a distributed team's oral exchanges and video recordings of their shared dynamic representation of the project's status and plans, our analysis reveals how their interrelating of visual and verbal communication acts enabled effective communication and coordination. In conclusion, we offer theoretical propositions that explain how the interrelating of verbal and visual acts based on shared dynamic representations enables communication repairs during virtual meetings. We argue the proposed framework provides researchers with a novel and practical approach to investigate the complex data involved in virtual meetings.

  20. Automatic processing of multimodal tomography datasets.

    Science.gov (United States)

    Parsons, Aaron D; Price, Stephen W T; Wadeson, Nicola; Basham, Mark; Beale, Andrew M; Ashton, Alun W; Mosselmans, J Frederick W; Quinn, Paul D

    2017-01-01

    With the development of fourth-generation high-brightness synchrotrons on the horizon, the already large volume of data collected on imaging and mapping beamlines is set to increase by orders of magnitude. As such, an easy and accessible way of dealing with such large datasets as quickly as possible is required, so that core scientific problems can be addressed during experimental data collection. Savu is an accessible and flexible big-data processing framework able to deal with both the variety and the volume of multimodal and multidimensional scientific datasets, such as those output by chemical tomography experiments on the I18 microfocus scanning beamline at Diamond Light Source.
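
    As an illustration of the plugin-chain pattern such processing frameworks use, here is a hypothetical Python sketch; it is NOT Savu's actual API, and the plugin functions, names, and dummy data are placeholders of our own.

        # Hypothetical sketch of a plugin-chain pattern for tomography data
        # processing; not Savu's real interface.
        from typing import Callable
        import numpy as np

        class Pipeline:
            """Runs a sequence of processing plugins over a dataset."""

            def __init__(self) -> None:
                self.plugins: list[Callable[[np.ndarray], np.ndarray]] = []

            def add(self, plugin: Callable[[np.ndarray], np.ndarray]) -> "Pipeline":
                self.plugins.append(plugin)
                return self

            def run(self, data: np.ndarray) -> np.ndarray:
                for plugin in self.plugins:
                    data = plugin(data)  # each plugin transforms the data in turn
                return data

        # Placeholder plugins for a chemical-tomography-style processing chain.
        def dark_field_correct(data: np.ndarray) -> np.ndarray:
            return data - data.min()

        def normalise(data: np.ndarray) -> np.ndarray:
            return data / data.max()

        projections = np.random.rand(180, 128, 128)  # dummy projection stack
        result = Pipeline().add(dark_field_correct).add(normalise).run(projections)
        print(result.shape, float(result.max()))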