WorldWideScience

Sample records for multimodal human-computer interaction

  1. A Software Framework for Multimodal Human-Computer Interaction Systems

    NARCIS (Netherlands)

    Shen, Jie; Pantic, Maja

    2009-01-01

    This paper describes a software framework we designed and implemented for development and research in the area of multimodal human-computer interfaces. The proposed framework is based on a publish/subscribe architecture, which allows developers and researchers to conveniently configure, test and
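
    A minimal sketch of the publish/subscribe pattern such a framework builds on is given below; the Broker class, topic names and message format are illustrative assumptions, not the API of the framework described above.

```python
from collections import defaultdict
from typing import Any, Callable

class Broker:
    """Minimal publish/subscribe hub: modules exchange messages by topic name."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        for callback in self._subscribers[topic]:
            callback(message)

# Hypothetical modules: a face-analysis module publishes, a fusion module consumes.
broker = Broker()
broker.subscribe("face.expression", lambda msg: print("fusion received:", msg))
broker.publish("face.expression", {"label": "smile", "confidence": 0.93})
```

    Decoupling modules behind topics in this way is what lets individual recognizers be reconfigured or swapped without touching the rest of the system.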

  3. HCI^2 Workbench: A Development Tool for Multimodal Human-Computer Interaction Systems

    NARCIS (Netherlands)

    Shen, Jie; Wenzhe, Shi; Pantic, Maja

    2011-01-01

    In this paper, we present a novel software tool designed and implemented to simplify the development process of Multimodal Human-Computer Interaction (MHCI) systems. This tool, which is called the HCI^2 Workbench, exploits a Publish / Subscribe (P/S) architecture [13] [14] to facilitate efficient an

  4. Appearance-based human gesture recognition using multimodal features for human computer interaction

    Science.gov (United States)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a vitally important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, most previous work in the field of gesture recognition has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework that combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from the single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition and that decision-level fusion performs better than feature-level fusion.
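
    A small sketch contrasting the two fusion strategies described above (feature-level concatenation followed by LDA versus weighted decision-level fusion) follows; the weights, the toy data and the use of scikit-learn's LDA are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def early_fusion(face_feats, hand_feats, labels, w_face=0.5, w_hand=0.5):
    """Feature-level fusion: weight and concatenate, then project with LDA."""
    fused = np.hstack([w_face * face_feats, w_hand * hand_feats])
    return LinearDiscriminantAnalysis().fit_transform(fused, labels)

def late_fusion(face_scores, hand_scores, w_face=0.5, w_hand=0.5):
    """Decision-level fusion: weighted sum of per-class scores, then argmax."""
    return (w_face * face_scores + w_hand * hand_scores).argmax(axis=1)

# Toy stand-ins for real feature extractors and per-modality classifiers.
rng = np.random.default_rng(0)
face_feats, hand_feats = rng.normal(size=(60, 40)), rng.normal(size=(60, 30))
labels = rng.integers(0, 12, size=60)                  # 12 gesture classes
projected = early_fusion(face_feats, hand_feats, labels)
predicted = late_fusion(rng.random((60, 12)), rng.random((60, 12)))
```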

  5. Speech Dialogue with Facial Displays: Multimodal Human-Computer Conversation

    CERN Document Server

    Nagao, Katashi; Takeuchi, Akikazu

    1994-01-01

    Human face-to-face conversation is an ideal model for human-computer dialogue. One of the major features of face-to-face communication is its multiplicity of communication channels that act on multiple modalities. To realize a natural multimodal dialogue, it is necessary to study how humans perceive information and determine the information to which humans are sensitive. A face is an independent communication channel that conveys emotional and conversational signals, encoded as facial expressions. We have developed an experimental system that integrates speech dialogue and facial animation, to investigate the effect of introducing communicative facial expressions as a new modality in human-computer conversation. Our experiments have shown that facial expressions are helpful, especially upon first contact with the system. We have also discovered that featuring facial expressions at an early stage improves subsequent interaction.

  6. Minimal mobile human computer interaction

    NARCIS (Netherlands)

    el Ali, A.

    2013-01-01

    In the last 20 years, the widespread adoption of personal, mobile computing devices in everyday life has allowed entry into a new technological era in Human Computer Interaction (HCI). The constant change of the physical and social context in a user's situation made possible by the portability of m

  7. Pen-Based Gesture Recognition in Multimodal Human-Computer Interaction

    Institute of Scientific and Technical Information of China (English)

    王延江; 袁保宗

    2001-01-01

    This paper proposes a fast, one-stroke pen gesture recognition approach for studying the theory and construction methods of multimodal human-computer interaction systems. In the approach, a pen gesture is characterized by a sequence of dominant points along the gesture trajectory and a sequence of writing directions between consecutive dominant points. The recognition result is obtained by matching the feature code of the input gesture with the various possible feature codes of each standard gesture. The directional feature is used for gesture pre-classification and the positional information is used for fine classification. Experimental results show that this approach is fast and achieves a high recognition rate.
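
    The following sketch illustrates the idea of coding a pen stroke as quantized writing directions between dominant points and matching it against templates; the bin count, the template set and the coordinate convention (y pointing up) are illustrative assumptions, not the paper's actual feature codes.

```python
import math

def direction_codes(points, n_bins=8):
    """Quantize the writing direction between consecutive dominant points into n_bins codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(int(angle / (2 * math.pi / n_bins)) % n_bins)
    return codes

# Hypothetical templates: gesture name -> accepted direction-code sequences.
TEMPLATES = {"right_flick": [[0]], "caret": [[1, 7]]}  # caret = ^ shape, up-right then down-right

def classify(points):
    """Pre-classify by direction codes; dominant-point positions could break ties afterwards."""
    codes = direction_codes(points)
    for name, variants in TEMPLATES.items():
        if codes in variants:
            return name
    return "unknown"

print(classify([(0, 0), (10, 0)]))            # right_flick
print(classify([(0, 0), (5, 5), (15, -5)]))   # caret
```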

  8. Language evolution and human-computer interaction

    Science.gov (United States)

    Grudin, Jonathan; Norman, Donald A.

    1991-01-01

    Many of the issues that confront designers of interactive computer systems also appear in natural language evolution. Natural languages and human-computer interfaces share as their primary mission the support of extended 'dialogues' between responsive entities. Because in each case one participant is a human being, some of the pressures operating on natural languages, causing them to evolve in order to better support such dialogue, also operate on human-computer 'languages' or interfaces. This does not necessarily push interfaces in the direction of natural language - since one entity in this dialogue is not a human, this is not to be expected. Nonetheless, by discerning where the pressures that guide natural language evolution also appear in human-computer interaction, we can contribute to the design of computer systems and obtain a new perspective on natural languages.

  10. Quantifying Quality Aspects of Multimodal Interactive Systems

    CERN Document Server

    Kühnel, Christine

    2012-01-01

    This book systematically addresses the quantification of quality aspects of multimodal interactive systems. The conceptual structure is based on a schematic view on human-computer interaction where the user interacts with the system and perceives it via input and output interfaces. Thus, aspects of multimodal interaction are analyzed first, followed by a discussion of the evaluation of output and input and concluding with a view on the evaluation of a complete system.

  11. Deep architectures for Human Computer Interaction

    NARCIS (Netherlands)

    Noulas, A.K.; Kröse, B.J.A.

    2008-01-01

    In this work we present the application of Conditional Restricted Boltzmann Machines in Human Computer Interaction. These provide a well-suited framework to model the complex temporal patterns produced by humans in the audio and video modalities. They can be trained in a semi-supervised fashion and

  12. Human Computer Interaction: An intellectual approach

    Directory of Open Access Journals (Sweden)

    Kuntal Saroha

    2011-08-01

    Full Text Available This paper discusses the research that has been done in the field of Human Computer Interaction (HCI) relating to human psychology. Human-computer interaction (HCI) is the study of how people design, implement, and use interactive computer systems and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. It involves input and output devices and the interaction techniques that use them; how information is presented and requested; how the computer's actions are controlled and monitored; all forms of help, documentation, and training; the tools used to design, build, test, and evaluate user interfaces; and the processes that developers follow when creating interfaces.

  13. Human-Computer Interaction in Smart Environments

    Directory of Open Access Journals (Sweden)

    Gianluca Paravati

    2015-08-01

    Full Text Available Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues.

  14. Fundamentals of human-computer interaction

    CERN Document Server

    Monk, Andrew F

    1985-01-01

    Fundamentals of Human-Computer Interaction aims to sensitize the systems designer to the problems faced by the user of an interactive system. The book grew out of a course entitled "The User Interface: Human Factors for Computer-based Systems" which has been run annually at the University of York since 1981. This course has been attended primarily by systems managers from the computer industry. The book is organized into three parts. Part One focuses on the user as processor of information with studies on visual perception; extracting information from printed and electronically presented

  15. Introduction to human-computer interaction

    CERN Document Server

    Booth, Paul

    2014-01-01

    Originally published in 1989 this title provided a comprehensive and authoritative introduction to the burgeoning discipline of human-computer interaction for students, academics, and those from industry who wished to know more about the subject. Assuming very little knowledge, the book provides an overview of the diverse research areas that were at the time only gradually building into a coherent and well-structured field. It aims to explain the underlying causes of the cognitive, social and organizational problems typically encountered when computer systems are introduced. It is clear and co

  16. Unimodal and Multimodal Human Perception of Affective States During Human-Computer Interaction

    Institute of Scientific and Technical Information of China (English)

    姜雪婷; 王宇; 田丽迎

    2016-01-01

    Emotion perception is one of the most important research topics in the field of human-computer interaction. As the number of channels studied increases, research costs and workload also increase. In this paper, natural expressions of basic human emotional states (disgust, fear, happiness, sadness and surprise) during human-computer interaction were detected by untrained observers under seven conditions, and the stability of these judgements was assessed. Agreement between two observers was computed and analyzed with mixed-effects logistic regression models; the resulting agreement was generally low. In addition to comparing overall agreement in the unimodal and multimodal conditions, agreement on individual affective states was compared within single conditions, and the results were classified as superadditive, additive, redundant or inhibitory effects. The significance of these results for automatic emotion detection is still under discussion.
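
    As a much-simplified stand-in for the mixed-effects logistic regression analysis mentioned above, the sketch below computes raw agreement and Cohen's kappa between two observers on synthetic labels; the data and the choice of kappa are illustrative assumptions only.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings: two observers label the same 20 clips with one of five states.
states = ["disgust", "fear", "happiness", "sadness", "surprise"]
rng = np.random.default_rng(1)
observer_a = rng.choice(states, size=20)
# Observer B agrees about 60% of the time, otherwise picks a random label.
observer_b = np.where(rng.random(20) < 0.6, observer_a, rng.choice(states, size=20))

raw_agreement = float(np.mean(observer_a == observer_b))
kappa = cohen_kappa_score(observer_a, observer_b)
print(f"raw agreement: {raw_agreement:.2f}, Cohen's kappa: {kappa:.2f}")
```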

  17. Human computer interaction using hand gestures

    CERN Document Server

    Premaratne, Prashan

    2014-01-01

    Human computer interaction (HCI) plays a vital role in bridging the 'Digital Divide', bringing people closer to consumer electronics control in the 'lounge'. Keyboards, mice and remotes alienate old and new generations alike from control interfaces. Hand gesture recognition systems bring hope of connecting people with machines in a natural way. This will lead to consumers being able to use their hands naturally to communicate with any electronic equipment in their 'lounge'. This monograph covers state-of-the-art hand gesture recognition approaches and how they have evolved from their inception. The author also details his research in this area over the past 8 years and considers how the future of HCI might turn out. This monograph will serve as a valuable guide for researchers who would venture into the world of HCI.

  18. Human-Computer Interaction The Agency Perspective

    CERN Document Server

    Oliveira, José

    2012-01-01

    Agent-centric theories, approaches and technologies are contributing to enrich interactions between users and computers. This book aims at highlighting the influence of the agency perspective in Human-Computer Interaction through a careful selection of research contributions. Split into five sections (Users as Agents, Agents and Accessibility, Agents and Interactions, Agent-centric Paradigms and Approaches, and Collective Agents), the book covers a wealth of novel, original and fully updated material, offering: a coherent, in-depth and timely treatment of the agency perspective in HCI; an authoritative treatment of the subject matter presented by carefully selected authors; balanced and broad coverage of the subject area, including human, organizational and social as well as technological concerns; and hands-on experience through representative case studies and essential design guidelines. The book will appeal to a broad audience of resea...

  19. On the Rhetorical Contract in Human-Computer Interaction.

    Science.gov (United States)

    Wenger, Michael J.

    1991-01-01

    An exploration of the rhetorical contract--i.e., the expectations for appropriate interaction--as it develops in human-computer interaction revealed that direct manipulation interfaces were more likely to establish social expectations. Study results suggest that the social nature of human-computer interactions can be examined with reference to the…

  20. Human-Computer Interactions and Decision Behavior

    Science.gov (United States)

    1984-01-01

    software interfaces. The major components of the research program included the Dialogue Management System (DMS) operating environment, the role of...specification; and new methods for modeling, designing, and developing human-computer interfaces based on syntactic and semantic specification. The DMS...achieving communication is language. Accordingly, the transaction model employs a linguistic model consisting of parts that relate computer responses

  1. Human-Computer Interaction and Information Management Research Needs

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — In a visionary future, Human-Computer Interaction (HCI) and Information Management (IM) have the potential to enable humans to better manage their lives through the use...

  2. Human-computer interaction and management information systems

    CERN Document Server

    Galletta, Dennis F

    2014-01-01

    ""Human-Computer Interaction and Management Information Systems: Applications"" offers state-of-the-art research by a distinguished set of authors who span the MIS and HCI fields. The original chapters provide authoritative commentaries and in-depth descriptions of research programs that will guide 21st century scholars, graduate students, and industry professionals. Human-Computer Interaction (or Human Factors) in MIS is concerned with the ways humans interact with information, technologies, and tasks, especially in business, managerial, organizational, and cultural contexts. It is distinctiv

  3. Audio Technology and Mobile Human Computer Interaction

    DEFF Research Database (Denmark)

    Chamberlain, Alan; Bødker, Mads; Hazzard, Adrian

    2017-01-01

    Audio-based mobile technology is opening up a range of new interactive possibilities. This paper brings some of those possibilities to light by offering a range of perspectives based in this area. It is not only the technical systems that are developing, but novel approaches to the design...

  4. Human computer interaction issues in Clinical Trials Management Systems.

    Science.gov (United States)

    Starren, Justin B; Payne, Philip R O; Kaufman, David R

    2006-01-01

    Clinical trials increasingly rely upon web-based Clinical Trials Management Systems (CTMS). As with clinical care systems, Human Computer Interaction (HCI) issues can greatly affect the usefulness of such systems. Evaluation of the user interface of one web-based CTMS revealed a number of potential human-computer interaction problems, in particular, increased workflow complexity associated with a web application delivery model and potential usability problems resulting from the use of ambiguous icons. Because these design features are shared by a large fraction of current CTMS, the implications extend beyond this individual system.

  5. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    Haan, de G.; Veer, van der G.C.; Vliet, van J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in hum

  6. The epistemology and ontology of human-computer interaction

    NARCIS (Netherlands)

    Brey, Philip

    2005-01-01

    This paper analyzes epistemological and ontological dimensions of Human-Computer Interaction (HCI) through an analysis of the functions of computer systems in relation to their users. It is argued that the primary relation between humans and computer systems has historically been epistemic: computer

  7. Humans, computers and wizards human (simulated) computer interaction

    CERN Document Server

    Fraser, Norman; McGlashan, Scott; Wooffitt, Robin

    2013-01-01

    Using data taken from a major European Union funded project on speech understanding, the SunDial project, this book considers current perspectives on human computer interaction and argues for the value of an approach taken from sociology which is based on conversation analysis.

  8. Visual Interpretation Of Hand Gestures For Human Computer Interaction

    Directory of Open Access Journals (Sweden)

    M.S.Sahane

    2014-01-01

    Full Text Available The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. We propose pointing-gesture-based large-display interaction using a depth camera. A user interacts with applications on a large display by using pointing gestures with the bare hand. The calibration between the large display and the depth camera can be performed automatically by using an RGB-D camera. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. We discuss directions of future research in gesture recognition, including its integration with other natural modes of human-computer interaction.
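
    One way to turn a pointing gesture from a depth camera into a display coordinate is to intersect the shoulder-to-hand ray with the display plane. The sketch below shows that geometry only; the coordinate frames and numbers are illustrative, not the calibration procedure used in the paper.

```python
import numpy as np

def pointing_target(shoulder, hand, display_origin, display_normal):
    """Intersect the shoulder-to-hand pointing ray with the display plane (camera coordinates)."""
    direction = hand - shoulder
    denom = display_normal @ direction
    if abs(denom) < 1e-9:
        return None                      # ray is parallel to the display plane
    t = (display_normal @ (display_origin - shoulder)) / denom
    return None if t < 0 else shoulder + t * direction

# Illustrative numbers: display plane at z = 0, user standing about 2 m in front of it.
shoulder = np.array([0.0, 1.4, 2.0])
hand = np.array([0.2, 1.3, 1.5])
hit = pointing_target(shoulder, hand, np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
print(hit)  # 3-D point on the display plane; a calibration step would map it to pixels
```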

  9. A Glance into the Future of Human Computer Interactions

    CERN Document Server

    Farooq, Umer; Nazir, Sohail

    2011-01-01

    Computers have a direct impact on our lives nowadays. Human interaction with the computer has changed with the passage of time; as technology improved, so did human-computer interaction. Today we are facilitated by the operating system, which has hidden all the complexity of the hardware, and we carry out our computation in a very convenient way irrespective of the processes occurring at the hardware level. Although human-computer interaction has improved, it is not done yet. In the future the computer's role in our lives will be far greater, and our lives will revolve around artificial intelligence. In the future the biggest resource will be time, and wasting time on a keyboard entry or a mouse input will be unbearable, so there will be a need for a computer interaction environment that, along with reducing complexity, also minimizes the time wasted in human-computer interaction. Accordingly, in the future computation would also be increased; it would n...

  11. Study on Human-Computer Interaction in Immersive Virtual Environment

    Institute of Scientific and Technical Information of China (English)

    段红; 黄柯棣

    2002-01-01

    Human-computer interaction is one of the most important issues in virtual environment research. This paper introduces interaction software developed for a virtual operating environment for space experiments. Core components of the interaction software are: an object-oriented database for behavior management of virtual objects, a software agent called the virtual eye for viewpoint control, and a software agent called the virtual hand for object manipulation. Based on the above components, some example programs for object manipulation have been developed. The user can observe the virtual environment through a head-mounted display, control the viewpoint with a head tracker and/or keyboard, and select and manipulate virtual objects with a 3D mouse.
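
    A toy sketch of the described division of labour between a viewpoint-control agent ("virtual eye") and an object-manipulation agent ("virtual hand") over a simple object database follows; the class names and scene structure are illustrative, not the paper's software.

```python
class VirtualEye:
    """Viewpoint-control agent: pose updates could come from a head tracker or keyboard."""
    def __init__(self):
        self.position = [0.0, 1.6, 0.0]

    def move(self, dx=0.0, dy=0.0, dz=0.0):
        self.position = [p + d for p, d in zip(self.position, (dx, dy, dz))]

class VirtualHand:
    """Object-manipulation agent: selects and moves entries in a simple scene database."""
    def __init__(self, scene):
        self.scene, self.held = scene, None

    def grab(self, name):
        self.held = name if name in self.scene else None

    def move_held(self, new_pos):
        if self.held:
            self.scene[self.held]["position"] = list(new_pos)

scene = {"sample_rack": {"position": [0.5, 1.0, -1.0]}}  # toy object database
eye, hand = VirtualEye(), VirtualHand(scene)
eye.move(dz=-0.5)
hand.grab("sample_rack")
hand.move_held((0.2, 1.1, -0.8))
print(eye.position, scene["sample_rack"]["position"])
```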

  12. Human-Computer Interaction, Tourism and Cultural Heritage

    Science.gov (United States)

    Cipolla Ficarra, Francisco V.

    We present a state of the art of human-computer interaction aimed at tourism and cultural heritage in some cities of the European Mediterranean. The work analyzes the main problems deriving from training understood as a business, problems which can derail the continuous growth of HCI, new technologies and the tourism industry. Through a semiotic and epistemological study, the current mistakes in the context of the interrelations of the formal and factual sciences are detected, as well as the human factors that influence the professionals devoted to the development of interactive systems for safeguarding and boosting cultural heritage.

  13. Combining Natural Human-Computer Interaction and Wireless Communication

    Directory of Open Access Journals (Sweden)

    Ştefan Gheorghe PENTIUC

    2011-01-01

    Full Text Available In this paper we present how human-computer interaction can be improved by using wireless communication between devices. Devices that offer natural user interaction, like the Microsoft Surface table and tablet PCs, can work together to enhance the experience of an application. Users can employ physical objects for a more natural way of handling the virtual world on one hand, and interact with other wirelessly connected users on the other. Physical objects that interact with the Surface table have a tag attached to them, allowing us to identify them and take the required action. The TCP/IP protocol was used to handle communication over the wireless network. A server and a client application were developed for the devices used. To target a wide range of mobile devices, different frameworks for developing cross-platform applications were analyzed.
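
    A minimal TCP client/server exchange of the kind the abstract describes might look like the sketch below; the port, the message format and the tag id are illustrative assumptions, and both ends run in one process here only for demonstration.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9000  # illustrative address/port

def server() -> None:
    """Accept one client over TCP and acknowledge a tagged-object event."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"ack:" + data)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)  # give the server a moment to start listening

# Client side (e.g., running on a tablet) reports the id of a tagged physical object.
with socket.create_connection((HOST, PORT), timeout=5) as cli:
    cli.sendall(b"tag:42 placed")
    print(cli.recv(1024))  # b'ack:tag:42 placed'
```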

  14. Human-computer systems interaction backgrounds and applications 3

    CERN Document Server

    Kulikowski, Juliusz; Mroczek, Teresa; Wtorek, Jerzy

    2014-01-01

    This book contains an interesting and state-of-the-art collection of papers on recent progress in Human-Computer System Interaction (H-CSI). It provides a profound description of the current status of the H-CSI field and also a solid base for further development and research in the discussed area. The contents of the book are divided into the following parts: I. General human-system interaction problems; II. Health monitoring and disabled people helping systems; and III. Various information processing systems. This book is intended for a wide audience of readers who are not necessarily experts in computer science, machine learning or knowledge engineering, but are interested in Human-Computer Systems Interaction. The level of the particular papers and their distribution into the particular parts make this volume fascinating reading. This gives the reader a much deeper insight than he/she might glean from research papers or talks at conferences. It touches on all deep issues that ...

  15. Advancements in Violin-Related Human-Computer Interaction

    DEFF Research Database (Denmark)

    Overholt, Daniel

    2014-01-01

    Finesse is required while performing with many traditional musical instruments, as they are extremely responsive to human inputs. The violin is specifically examined here, as it excels at translating a performer's gestures into sound in manners that evoke a wide range of affective qualities... of human intelligence and emotion is at the core of the Musical Interface Technology Design Space, MITDS. This is a framework that endeavors to retain and enhance such traits of traditional instruments in the design of interactive live performance interfaces. Utilizing the MITDS, advanced Human-Computer Interaction technologies for the violin are developed in order to allow musicians to explore new methods of creating music. Through this process, the aim is to provide musicians with control systems that let them transcend the interface itself, and focus on musically compelling performances.

  16. Interaction in Information Systems - Beyond Human-Computer Interaction

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The purpose of this paper is to discuss and analyze the role of interaction in information systems. Interaction represents dynamic relations between actors and other elements in information systems. We introduce a semi-formal notation that we use to describe a set of interaction patterns, and we illustrate how the notation can be used to describe mediated interaction. We use the interaction patterns to evaluate a set of modeling languages. No single language supports all relevant aspects of interaction modeling. We use the interaction patterns to identify two general and supplementary forms of interaction: interaction based on exchange of objects and interaction based on exchange of commands. None of the modeling languages that we analyze support both forms in a rich way.

  17. Human-computer interaction: psychology as a science of design.

    Science.gov (United States)

    Carroll, J M

    1997-01-01

    Human-computer interaction (HCI) study is the region of intersection between psychology and the social sciences, on the one hand, and computer science and technology, on the other. HCI researchers analyze and design specific user interface technologies (e.g. pointing devices). They study and improve the processes of technology development (e.g. task analysis, design rationale). They develop and evaluate new applications of technology (e.g. word processors, digital libraries). Throughout the past two decades, HCI has progressively integrated its scientific concerns with the engineering goal of improving the usability of computer systems and applications, which has resulted in a body of technical knowledge and methodology. HCI continues to provide a challenging test domain for applying and developing psychological and social theory in the context of technology development and use.

  18. Multimodal Interaction Control

    Science.gov (United States)

    Beskow, Jonas; Carlson, Rolf; Edlund, Jens; Granström, Björn; Heldner, Mattias; Hjalmarsson, Anna; Skantze, Gabriel

    No matter how well hidden our systems are and how well they do their magic unnoticed in the background, there are times when direct interaction between system and human is a necessity. As long as the interaction can take place unobtrusively and without techno-clutter, this is desirable. It is hard to picture a means of interaction less obtrusive and techno-cluttered than spoken communication on human terms. Spoken face-to-face communication is the most intuitive and robust form of communication between humans imaginable. In order to exploit such human spoken communication to its full potential as an interface between human and machine, we need a much better understanding of how the more human-like aspects of spoken communication work.

  19. Wearable joystick for gloves-on human/computer interaction

    Science.gov (United States)

    Bae, Jaewook; Voyles, Richard M.

    2006-05-01

    In this paper, we present preliminary work on a novel wearable joystick for gloves-on human/computer interaction in hazardous environments. Interacting with traditional input devices can be clumsy and inconvenient for the operator in hazardous environments due to the bulkiness of multiple system components and troublesome wires. During a collapsed structure search, for example, protective clothing, uneven footing, and "snag" points in the environment can render traditional input devices impractical. Wearable computing has been studied by various researchers to increase the portability of devices and to improve the proprioceptive sense of the wearer's intentions. Specifically, glove-like input devices to recognize hand gestures have been developed for general-purpose applications. But, regardless of their performance, prior gloves have been fragile and cumbersome to use in rough environments. In this paper, we present a new wearable joystick to remove the wires from a simple, two-degree of freedom glove interface. Thus, we develop a wearable joystick that is low cost, durable and robust, and wire-free at the glove. In order to evaluate the wearable joystick, we take into consideration two metrics during operator tests of a commercial robot: task completion time and path tortuosity. We employ fractal analysis to measure path tortuosity. Preliminary user test results are presented that compare the performance of both a wearable joystick and a traditional joystick.
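
    The paper measures path tortuosity with fractal analysis. A simplified box-counting estimate of a 2-D trajectory's fractal dimension, one common way to quantify tortuosity, is sketched below; the box sizes and test paths are illustrative, not the authors' exact metric.

```python
import numpy as np

def box_counting_dimension(points, box_sizes=(1.0, 0.5, 0.25, 0.125)):
    """Estimate path tortuosity as the box-counting (fractal) dimension of a 2-D trajectory."""
    pts = np.asarray(points, dtype=float)
    pts -= pts.min(axis=0)                       # shift into the positive quadrant
    counts = []
    for s in box_sizes:
        occupied = {tuple(cell) for cell in np.floor(pts / s).astype(int)}
        counts.append(len(occupied))
    # Slope of log(count) vs log(1/size) approximates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

straight = [(x, 0.0) for x in np.linspace(0, 10, 200)]
wiggly = [(x, np.sin(8 * x)) for x in np.linspace(0, 10, 200)]
print(box_counting_dimension(straight), box_counting_dimension(wiggly))  # wiggly path scores higher
```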

  20. Human Computer Interaction Approach in Developing Customer Relationship Management

    Directory of Open Access Journals (Sweden)

    Mohd H.N.M. Nasir

    2008-01-01

    Full Text Available Problem statement: Many published studies have found that more than 50% of Customer Relationship Management (CRM) system implementations have failed due to failures of system usability and unmet user expectations. This study presents the issues that contributed to the failures of CRM systems and proposes a prototype CRM system developed using Human Computer Interaction approaches in order to resolve the identified issues. Approach: In order to capture the users' requirements, a single in-depth case study of a multinational company was chosen for this research, in which the background, current conditions and environmental interactions were observed, recorded and analyzed for patterns relating to internal and external influences. Several blended data-gathering techniques, namely interviews, naturalistic observation and the study of user documentation, were employed, and a prototype CRM system was then developed incorporating a User-Centered Design (UCD) approach, Hierarchical Task Analysis (HTA), metaphor, and identification of users' behaviors and characteristics. The implementation of these techniques was then measured in terms of usability. Results: Based on the usability testing conducted, the results showed that most of the users agreed that the system is comfortable to work with, taking learnability, memorizability, utility, sortability, font, visualization, user metaphor, ease of viewing information and color as measurement parameters. Conclusions/Recommendations: By combining all these techniques, a level of comfort that leads to user satisfaction and a higher degree of usability can be achieved in the proposed CRM system. It is therefore important that companies take usability quality attributes into consideration before developing or procuring a CRM system, to ensure its successful implementation.

  1. Effective Use of Human Computer Interaction in Digital Academic Supportive Devices

    OpenAIRE

    Thuseethan, S.; Kuhanesan, S.

    2015-01-01

    In this research, a literature in human-computer interaction is reviewed and the technology aspect of human computer interaction related with digital academic supportive devices is also analyzed. According to all these concerns, recommendations to design good human-computer digital academic supportive devices are analyzed and proposed. Due to improvements in both hardware and software, digital devices have unveiled continuous advances in efficiency and processing capacity. However, many of th...

  2. Multimodal Interactions with Agents in Virtual Worlds

    NARCIS (Netherlands)

    Nijholt, A.; Hulstijn, J.; Kasabov, N.

    2000-01-01

    In this chapter we discuss our research on multimodal interaction in a virtual environment. The environment we have developed can be considered as a ‘laboratory’ for research on multimodal interactions and multimedia presentation, where we have multiple users and various agents that help the users t

  3. Design of a compact low-power human-computer interaction equipment for hand motion

    Science.gov (United States)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demands of convenience, endurance, responsiveness and naturalness. This paper describes the design of a compact, wearable, low-power HCI device applied to gesture recognition. The system combines multi-mode sense signals, the vision sense signal and the motion sense signal, and the equipment is fitted with a depth camera and a motion sensor. The dimensions (40 mm × 30 mm) and structure are compact and portable after tight integration. The system is built on a module-layered framework, which supports real-time collection (60 fps), processing and transmission via synchronous fusion with asynchronous concurrent collection and wireless Bluetooth 4.0 transmission. To minimize the equipment's energy consumption, the system uses low-power components, manages peripheral state dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes the algorithm run on the motion sensor. To test the equipment's function and performance, a gesture recognition algorithm was applied to the system. The results show that overall energy consumption can be as low as 0.5 W.
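
    The power-saving measures listed above (idle switching plus PWM of the NIR LEDs) lend themselves to a back-of-envelope budget. The sketch below shows such an estimate; all wattages and fractions are invented for illustration and are not measurements from the paper.

```python
def average_power_w(p_base: float = 0.35, p_led: float = 0.30, p_idle: float = 0.05,
                    led_duty: float = 0.5, active_fraction: float = 0.6) -> float:
    """Average draw when the device is active part of the time and its NIR LEDs are PWM duty-cycled.

    All parameter values are illustrative assumptions, not figures from the paper.
    """
    return active_fraction * (p_base + led_duty * p_led) + (1 - active_fraction) * p_idle

print(round(average_power_w(), 3))  # ~0.32 W, in the same ballpark as the reported 0.5 W figure
```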

  4. Design of Food Management Information System Based on Human-computer Interaction

    Directory of Open Access Journals (Sweden)

    Xingkai Cui

    2015-07-01

    Full Text Available Food safety is directly related to public health. This study takes the necessity of establishing a food management information system as its starting point; through an overview of human-computer interaction technology and the conceptual framework of human-computer interaction, it discusses the construction of a food management information system, with the aim of advancing China's food safety management process and safeguarding public health.

  5. The human-computer interaction design of self-operated mobile telemedicine devices

    OpenAIRE

    Zheng, Shaoqing

    2015-01-01

    Human-computer interaction (HCI) is an important issue in the area of medicine, for example, the operation of surgical simulators, virtual rehabilitation systems, telemedicine treatments, and so on. In this thesis, the human-computer interaction of a self-operated mobile telemedicine device is designed. The mobile telemedicine device (i.e. intelligent Medication Box or iMedBox) is used for remotely monitoring patient health and activity information such as ECG (electrocardiogram) signals, hom...

  6. Applying systemic-structural activity theory to design of human-computer interaction systems

    CERN Document Server

    Bedny, Gregory Z; Bedny, Inna

    2015-01-01

    Human-Computer Interaction (HCI) is an interdisciplinary field that has gained recognition as an important field in ergonomics. HCI draws on ideas and theoretical concepts from computer science, psychology, industrial design, and other fields. Human-Computer Interaction is no longer limited to trained software users. Today people interact with various devices such as mobile phones, tablets, and laptops. How can you make such interaction user friendly, even when user proficiency levels vary? This book explores methods for assessing the psychological complexity of computer-based tasks. It also p

  7. Proceedings of the Third International Conference on Intelligent Human Computer Interaction

    CERN Document Server

    Pokorný, Jaroslav; Snášel, Václav; Abraham, Ajith

    2013-01-01

    The Third International Conference on Intelligent Human Computer Interaction 2011 (IHCI 2011) was held at Charles University, Prague, Czech Republic from August 29 - August 31, 2011. This conference was third in the series, following IHCI 2009 and IHCI 2010 held in January at IIIT Allahabad, India. Human computer interaction is a fast growing research area and an attractive subject of interest for both academia and industry. There are many interesting and challenging topics that need to be researched and discussed. This book aims to provide excellent opportunities for the dissemination of interesting new research and discussion about presented topics. It can be useful for researchers working on various aspects of human computer interaction. Topics covered in this book include user interface and interaction, theoretical background and applications of HCI and also data mining and knowledge discovery as a support of HCI applications.

  8. Human-computer interaction handbook fundamentals, evolving technologies and emerging applications

    CERN Document Server

    Sears, Andrew

    2007-01-01

    This second edition of The Human-Computer Interaction Handbook provides an updated, comprehensive overview of the most important research in the field, including insights that are directly applicable throughout the process of developing effective interactive information technologies. It features cutting-edge advances to the scientific knowledge base, as well as visionary perspectives and developments that fundamentally transform the way in which researchers and practitioners view the discipline. As the seminal volume of HCI research and practice, The Human-Computer Interaction Handbook feature

  9. Situated dialog in speech-based human-computer interaction

    CERN Document Server

    Raux, Antoine; Lane, Ian; Misu, Teruhisa

    2016-01-01

    This book provides a survey of the state-of-the-art in the practical implementation of Spoken Dialog Systems for applications in everyday settings. It includes contributions on key topics in situated dialog interaction from a number of leading researchers and offers a broad spectrum of perspectives on research and development in the area. In particular, it presents applications in robotics, knowledge access and communication and covers the following topics: dialog for interacting with robots; language understanding and generation; dialog architectures and modeling; core technologies; and the analysis of human discourse and interaction. The contributions are adapted and expanded contributions from the 2014 International Workshop on Spoken Dialog Systems (IWSDS 2014), where researchers and developers from industry and academia alike met to discuss and compare their implementation experiences, analyses and empirical findings.

  10. Transnational HCI: Humans, Computers and Interactions in Global Contexts

    DEFF Research Database (Denmark)

    Vertesi, Janet; Lindtner, Silvia; Shklovski, Irina

    2011-01-01

    , but as evolving in relation to global processes, boundary crossings, frictions and hybrid practices. In doing so, we expand upon existing research in HCI to consider the effects, implications for individuals and communities, and design opportunities in times of increased transnational interactions. We hope...

  11. A Multimodal Interaction Framework for Blended Learning

    DEFF Research Database (Denmark)

    Vidakis, Nikolaos; Kalafatis, Konstantinos; Triantafyllidis, Georgios

    2017-01-01

    Humans interact with each other by utilizing the five basic senses as input modalities, whereas sounds, gestures, facial expressions etc. are utilized as output modalities. Multimodal interaction is also used between humans and their surrounding environment, although enhanced with further senses ... framework enabling deployment of a vast variety of modalities, tailored appropriately for use in a blended learning environment.

  12. Brain-Computer Interaction: Can Multimodality Help?

    NARCIS (Netherlands)

    Nijholt, Antinus; Allison, Brendan Z.; Jacobs, Robert J.K.; Vidal, E.; Gatica-Perez, D.; Morency, L.P.; Sebe, N.

    2011-01-01

    This paper is a short introduction to a special ICMI session on brain-computer interaction. During this paper, we first discuss problems, solutions, and a five-year view for brain-computer interaction. We then talk further about unique issues with multimodal and hybrid brain-computer interfaces,

  13. Interactive Multimodal Learning for Venue Recommendation

    NARCIS (Netherlands)

    Zahálka, J.; Rudinac, S.; Worring, M.

    2015-01-01

    In this paper, we propose City Melange, an interactive and multimodal content-based venue explorer. Our framework matches the interacting user to the users of social media platforms exhibiting similar taste. The data collection integrates location-based social networks such as Foursquare with genera

  14. Enhancing Human-Computer Interaction Design Education: Teaching Affordance Design for Emerging Mobile Devices

    Science.gov (United States)

    Faiola, Anthony; Matei, Sorin Adam

    2010-01-01

    The evolution of human-computer interaction design (HCID) over the last 20 years suggests that there is a growing need for educational scholars to consider new and more applicable theoretical models of interactive product design. The authors suggest that such paradigms would call for an approach that would equip HCID students with a better…

  15. The Human-Computer Interaction of Cross-Cultural Gaming Strategy

    Science.gov (United States)

    Chakraborty, Joyram; Norcio, Anthony F.; Van Der Veer, Jacob J.; Andre, Charles F.; Miller, Zachary; Regelsberger, Alexander

    2015-01-01

    This article explores the cultural dimensions of the human-computer interaction that underlies gaming strategies. The article is a desktop study of existing literature and is organized into five sections. The first examines the cultural aspects of knowledge processing. The social constructs of technology interaction are then discussed. Following this, the…

  16. A Project-Based Learning Setting to Human-Computer Interaction for Teenagers

    Science.gov (United States)

    Geyer, Cornelia; Geisler, Stefan

    2012-01-01

    Knowledge of the fundamentals of human-computer interaction and usability engineering is becoming more and more important in technical domains. However, this interdisciplinary field of work and the corresponding degree programs are not widely known. Therefore, at the Hochschule Ruhr West, University of Applied Sciences, a program was developed to give…

  17. A Framework and Implementation of User Interface and Human-Computer Interaction Instruction

    Science.gov (United States)

    Peslak, Alan

    2005-01-01

    Researchers have suggested that up to 50 % of the effort in development of information systems is devoted to user interface development (Douglas, Tremaine, Leventhal, Wills, & Manaris, 2002; Myers & Rosson, 1992). Yet little study has been performed on the inclusion of important interface and human-computer interaction topics into a current…

  18. Human-Computer Interaction (HCI) in Educational Environments: Implications of Understanding Computers as Media.

    Science.gov (United States)

    Berg, Gary A.

    2000-01-01

    Reviews literature in the field of human-computer interaction (HCI) as it applies to educational environments. Topics include the origin of HCI; human factors; usability; computer interface design; goals, operations, methods, and selection (GOMS) models; command language versus direct manipulation; hypertext; visual perception; interface…

  19. Multimodal interaction in image and video applications

    CERN Document Server

    Sappa, Angel D

    2013-01-01

    Traditional Pattern Recognition (PR) and Computer Vision (CV) technologies have mainly focused on full automation, even though full automation often proves elusive or unnatural in many applications, where the technology is expected to assist rather than replace the human agents. However, not all problems can be solved automatically, and human interaction is the only way to tackle those applications. Recently, multimodal human interaction has become an important field of increasing interest in the research community. Advanced man-machine interfaces with high cognitive capabilities are a hot research topic that aims at solving challenging problems in image and video applications. In fact, the idea of interactive computer systems was already proposed in the early stages of computer science. Nowadays, the ubiquity of image sensors together with ever-increasing computing performance has opened new and challenging opportunities for research in multimodal human interaction. This book aims to show how existi...

  20. Towards a semio-cognitive theory of human-computer interaction

    OpenAIRE

    Scolari, Carlos Alberto

    2001-01-01

    The research presented here is theoretical and introduces a critical analysis of instrumental approaches in Human-Computer Interaction (HCI). From a semiotic point of view, interfaces are not "natural" or "neutral" instruments, but rather complex sense-production devices. Interaction, in other words, is far from being a "transparent" process. In this abstract we present the foundations of a theoretical model that combines Semiotics with Cognitive Science approaches.

  1. Reference Resolution in Multi-modal Interaction: Position paper

    NARCIS (Netherlands)

    Fernando, T.; Nijholt, Antinus

    2002-01-01

    In this position paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can

  2. Reference resolution in multi-modal interaction: Preliminary observations

    NARCIS (Netherlands)

    González González, G.R.; Nijholt, Antinus

    2002-01-01

    In this paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can apply

  4. Multimodal Embodied Mimicry in Interaction

    NARCIS (Netherlands)

    Sun, X.; Esposito, Anna; Vinciarelli, Alessandro; Vicsi, Klára; Pelachaud, Catherine; Nijholt, Antinus

    2011-01-01

    Nonverbal behaviors play an important role in communicating with others. One particular kind of nonverbal interaction behavior is mimicry. It has been argued that behavioral mimicry supports harmonious relationships in social interaction through creating affiliation, rapport, and liking between

  5. Real Time Multiple Hand Gesture Recognition System for Human Computer Interaction

    Directory of Open Access Journals (Sweden)

    Siddharth S. Rautaray

    2012-05-01

    Full Text Available With the increasing use of computing devices in day-to-day life, the need for user-friendly interfaces has led to the evolution of different types of interfaces for human-computer interaction. Real-time, vision-based hand gesture recognition affords users the ability to interact with computers in more natural and intuitive ways. Direct use of the hands as an input device is an attractive method, since hands can communicate much more information by themselves in comparison to mice, joysticks etc., allowing a greater variety of recognition systems that can be used in human-computer interaction applications. The gesture recognition system consists of three main modules: hand segmentation, hand tracking and gesture recognition from hand features. The designed system is further integrated with different applications, like an image browser and a virtual game, opening up possibilities for human-computer interaction. Computer-vision-based systems have the potential to provide more natural, non-contact solutions. The present research work focuses on designing and developing a practical framework for real-time hand gesture recognition.
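
    A skeleton of the three modules named above (segmentation, tracking, recognition) is sketched below using OpenCV; the skin-colour thresholds, the swipe rule and the synthetic test frame are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def segment_hand(frame_bgr):
    """Rough skin-colour segmentation in HSV space (thresholds are illustrative)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))

def track_centroid(mask):
    """Track the hand as the centroid of the largest contour, if any."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    return None if m["m00"] == 0 else (m["m10"] / m["m00"], m["m01"] / m["m00"])

def recognize(centroid_history, threshold=40):
    """Toy rule: classify a left/right swipe from centroid displacement over recent frames."""
    if len(centroid_history) < 2:
        return "none"
    dx = centroid_history[-1][0] - centroid_history[0][0]
    return "swipe_right" if dx > threshold else "swipe_left" if dx < -threshold else "none"

# Synthetic frame with a skin-coloured patch standing in for a hand.
frame = np.zeros((120, 160, 3), np.uint8)
frame[40:80, 60:100] = (120, 160, 200)
print(track_centroid(segment_hand(frame)))  # approx (79.5, 59.5)
```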

  6. Multi-Modal Interaction for Robotic Mules

    Science.gov (United States)

    2014-02-26

    Glenn Taylor, Mike Quist, Matt Lanting, Cory Dunham, and Patrick Theisen are with Soar Technology, Inc.; Paul Muench is with the US Army TARDEC.

  7. AFFECTIVE AND EMOTIONAL ASPECTS OF HUMAN-COMPUTER INTERACTION: Game-Based and Innovative Learning Approaches

    Directory of Open Access Journals (Sweden)

    A. Askim GULUMBAY, Anadolu University, TURKEY

    2006-07-01

    Full Text Available This book was edited by Maja Pivec, an educator at the University of Applied Sciences, and published by IOS Press in 2006. The learning process can be seen as an emotional and personal experience that is addictive and leads learners to proactive behavior. New research methods in this field are related to affective and emotional approaches to computer-supported learning and human-computer interaction. Bringing together scientists and research aspects from psychology, educational sciences, cognitive sciences, various aspects of communication and human-computer interaction, interface design and computer science on the one hand, and educators and the game industry on the other, should open the gates to evolutionary changes in the learning industry. The major topics discussed are emotions, motivation, games and game experience.

  8. 08292 Abstracts Collection -- The Study of Visual Aesthetics in Human-Computer Interaction

    OpenAIRE

    Hassenzahl, Marc; Lindgaard, Gitte; Platz, Axel; Tractinsky, Noam

    2008-01-01

    From 13.07. to 16.07.2008, the Dagstuhl Seminar 08292 "The Study of Visual Aesthetics in Human-Computer Interaction" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first secti...

  10. Cross-cultural human-computer interaction and user experience design a semiotic perspective

    CERN Document Server

    Brejcha, Jan

    2015-01-01

    This book describes patterns of language and culture in human-computer interaction (HCI). Through numerous examples, it shows why these patterns matter and how to exploit them to design a better user experience (UX) with computer systems. It provides scientific information on the theoretical and practical areas of the interaction and communication design for research experts and industry practitioners and covers the latest research in semiotics and cultural studies, bringing a set of tools and methods to benefit the process of designing with the cultural background in mind.

  11. Portable tongue-supported human computer interaction system design and implementation.

    Science.gov (United States)

    Quain, Rohan; Khan, Masood Mehmood

    2014-01-01

    Tongue supported human-computer interaction (TSHCI) systems can help critically ill patients interact with both computers and people. These systems can be particularly useful for patients suffering injuries above C7 on their spinal vertebrae. Despite recent successes in their application, several limitations restrict performance of existing TSHCI systems and discourage their use in real life situations. This paper proposes a low-cost, less-intrusive, portable and easy to use design for implementing a TSHCI system. Two applications of the proposed system are reported. Design considerations and performance of the proposed system are also presented.

  12. Multimodality

    DEFF Research Database (Denmark)

    Buhl, Mie

    2010-01-01

    In this paper, I address an ongoing discussion in Danish E-learning research about how to take advantage of the fact that digital media facilitate other communication forms than text, so-called ‘multimodal’ communication, which should not be confused with the term ‘multimedia’. While multimedia represent the use of various media for communication, multimodality refers to the different symbol systems we employ in communication practices. As new educational practices emerge from the application of ICT, multimodality becomes a matter for all teachers when they plan, practice and reflect on their teaching and learning situations. The choices they make involve e-learning resources like videos, social platforms and mobile devices, not just as digital artefacts we interact with, but the entire practice of using digital media. In a life-long learning perspective, multimodality is potentially very useful...

  13. Multimodality

    DEFF Research Database (Denmark)

    Buhl, Mie

    2010-01-01

    In this paper, I address an ongoing discussion in Danish E-learning research about how to take advantage of the fact that digital media facilitate other communication forms than text, so-called ‘multimodal’ communication, which should not be confused with the term ‘multimedia’. While multimedia represent the use of various media for communication, multimodality refers to the different symbol systems we employ in communication practices. As new educational practices emerge from the application of ICT, multimodality becomes a matter for all teachers when they plan, practice and reflect on their teaching and learning situations. The choices they make involve e-learning resources like videos, social platforms and mobile devices, not just as digital artefacts we interact with, but the entire practice of using digital media. In a life-long learning perspective, multimodality is potentially very useful...

  14. Real-time non-invasive eyetracking and gaze-point determination for human-computer interaction and biomedicine

    Science.gov (United States)

    Talukder, Ashit; Morookian, John-Michael; Monacos, S.; Lam, R.; Lebaw, C.; Bond, A.

    2004-01-01

    Eyetracking is one of the latest technologies that has shown potential in several areas including human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals.

  15. Human-Computer Interaction Handbook Fundamentals, Evolving Technologies, and Emerging Applications

    CERN Document Server

    Jacko, Julie A

    2012-01-01

    The third edition of a groundbreaking reference, The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications raises the bar for handbooks in this field. It is the largest, most complete compilation of HCI theories, principles, advances, case studies, and more that exist within a single volume. The book captures the current and emerging sub-disciplines within HCI related to research, development, and practice that continue to advance at an astonishing rate. It features cutting-edge advances to the scientific knowledge base as well as visionary perspe

  16. Advances in Human-Computer Interaction: Graphics and Animation Components for Interface Design

    Science.gov (United States)

    Cipolla Ficarra, Francisco V.; Nicol, Emma; Cipolla-Ficarra, Miguel; Richardson, Lucy

    We present an analysis of communicability methodology in graphics and animation components for interface design, called CAN (Communicability, Acceptability and Novelty). This methodology has been under development between 2005 and 2010, obtaining excellent results in cultural heritage, education and microcomputing contexts, in studies where there is a bi-directional interrelation between ergonomics, usability, user-centered design, software quality and human-computer interaction. We also present heuristic results on iconography and layout design in blogs and websites from the following countries: Spain, Italy, Portugal and France.

  17. Hand gesture recognition based on motion history images for a simple human-computer interaction system

    Science.gov (United States)

    Timotius, Ivanna K.; Setyawan, Iwan

    2013-03-01

    A human-computer interaction system can be developed using several kinds of tools. One choice is to use images captured by a camera. This paper proposes a simple human-computer interaction system based on hand movement captured by a web camera. The system aims to classify the captured movement into one of three classes. The first two classes contain hand movements to the left and right, respectively. The third class contains non-hand movements or hand movements in other directions. The method used in this paper is based on Motion History Images (MHIs) and a nearest neighbor classifier. The resulting MHIs are processed in two manners, namely by summing the pixel values along the vertical axis and by reshaping them into vectors. We also use two distance criteria, namely the Euclidean distance and cross-correlation. This paper compares the performance of the combinations of the different MHI processing schemes and distance criteria using 10 runs of 2-fold cross validation. Our experiments show that reshaping the MHI data into vectors combined with a Euclidean distance criterion gives the highest average accuracy, namely 55.67%.
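
    The MHI-plus-nearest-neighbor pipeline described above is simple enough to sketch. The following is a minimal, hypothetical Python illustration (not the authors' implementation); the motion threshold, decay duration, and the reshape-into-vector variant with Euclidean distance are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): builds a Motion History Image (MHI) from
# frame differences and classifies it with a 1-nearest-neighbor rule using
# Euclidean distance. Threshold, duration and frame format are illustrative choices.
import numpy as np

def motion_history_image(frames, diff_thresh=30, duration=1.0):
    """frames: list of grayscale frames (2D uint8 arrays) of equal size."""
    h, w = frames[0].shape
    mhi = np.zeros((h, w), dtype=np.float32)
    decay = duration / len(frames)            # amount removed per frame without motion
    for prev, curr in zip(frames[:-1], frames[1:]):
        motion = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > diff_thresh
        mhi = np.where(motion, duration, np.maximum(mhi - decay, 0.0))
    return mhi / duration                      # normalize to [0, 1]

def to_vector(mhi):
    return mhi.reshape(-1)                     # the "reshape into vector" variant

def nearest_neighbor(train_vecs, train_labels, query_vec):
    dists = np.linalg.norm(train_vecs - query_vec, axis=1)   # Euclidean distance
    return train_labels[int(np.argmin(dists))]

# Usage (hypothetical data): labels 0 = left, 1 = right, 2 = other
# train_vecs = np.stack([to_vector(motion_history_image(seq)) for seq in train_sequences])
# prediction = nearest_neighbor(train_vecs, train_labels,
#                               to_vector(motion_history_image(test_sequence)))
```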

  18. Metaphors for the Nature of Human-Computer Interaction in an Empowering Environment: Interaction Style Influences the Manner of Human Accomplishment.

    Science.gov (United States)

    Weller, Herman G.; Hartson, H. Rex

    1992-01-01

    Describes human-computer interface needs for empowering environments in computer usage in which the machine handles the routine mechanics of problem solving while the user concentrates on its higher order meanings. A closed-loop model of interaction is described, interface as illusion is discussed, and metaphors for human-computer interaction are…

  19. Cognitive engineering models: A prerequisite to the design of human-computer interaction in complex dynamic systems

    Science.gov (United States)

    Mitchell, Christine M.

    1993-01-01

    This chapter examines a class of human-computer interaction applications, specifically the design of human-computer interaction for the operators of complex systems. Such systems include space systems (e.g., manned systems such as the Shuttle or space station, and unmanned systems such as NASA scientific satellites), aviation systems (e.g., the flight deck of 'glass cockpit' airplanes or air traffic control) and industrial systems (e.g., power plants, telephone networks, and sophisticated, e.g., 'lights out,' manufacturing facilities). The main body of human-computer interaction (HCI) research complements but does not directly address the primary issues involved in human-computer interaction design for operators of complex systems. Interfaces to complex systems are somewhat special. The 'user' in such systems - i.e., the human operator responsible for safe and effective system operation - is highly skilled, someone who in human-machine systems engineering is sometimes characterized as 'well trained, well motivated'. The 'job' or task context is paramount and, thus, human-computer interaction is subordinate to human job interaction. The design of human interaction with complex systems, i.e., the design of human job interaction, is sometimes called cognitive engineering.

  20. Categorisation of visualisation methods to support the design of Human-Computer Interaction Systems.

    Science.gov (United States)

    Li, Katie; Tiwari, Ashutosh; Alcock, Jeffrey; Bermell-Garcia, Pablo

    2016-07-01

    During the design of Human-Computer Interaction (HCI) systems, the creation of visual artefacts forms an important part of design. On the one hand, producing a visual artefact has a number of advantages: it helps designers to externalise their thoughts and acts as a common language between different stakeholders. On the other hand, if an inappropriate visualisation method is employed it can hinder the design process. To support the design of HCI systems, this paper reviews the categorisation of visualisation methods used in HCI. A keyword search is conducted to identify (a) current HCI design methods and (b) approaches for selecting these methods. The resulting design methods are filtered to create a list of visualisation methods only. These are then categorised using the approaches identified in (b). As a result, 23 HCI visualisation methods are identified and categorised under 5 selection approaches (The Recipient, Primary Purpose, Visual Archetype, Interaction Type, and The Design Process).

  1. Soft Electronics Enabled Ergonomic Human-Computer Interaction for Swallowing Training

    Science.gov (United States)

    Lee, Yongkuk; Nicholls, Benjamin; Sup Lee, Dong; Chen, Yanfei; Chun, Youngjae; Siang Ang, Chee; Yeo, Woon-Hong

    2017-04-01

    We introduce a skin-friendly electronic system that enables human-computer interaction (HCI) for swallowing training in dysphagia rehabilitation. For an ergonomic HCI, we utilize a soft, highly compliant (“skin-like”) electrode, which addresses critical issues of an existing rigid and planar electrode combined with a problematic conductive electrolyte and adhesive pad. The skin-like electrode offers a highly conformal, user-comfortable interaction with the skin for long-term wearable, high-fidelity recording of swallowing electromyograms on the chin. Mechanics modeling and experimental quantification captures the ultra-elastic mechanical characteristics of an open mesh microstructured sensor, conjugated with an elastomeric membrane. Systematic in vivo studies investigate the functionality of the soft electronics for HCI-enabled swallowing training, which includes the application of a biofeedback system to detect swallowing behavior. The collection of results demonstrates clinical feasibility of the ergonomic electronics in HCI-driven rehabilitation for patients with swallowing disorders.

  2. Multimodality

    DEFF Research Database (Denmark)

    Buhl, Mie

    In this paper, I address an ongoing discussion in Danish E-learning research about how to take advantage of the fact that digital media facilitate other communication forms than text, so-called ‘multimodal’ communication, which should not be confused with the term ‘multimedia’. While multimedia represent the use of various media for communication, multimodality refers to the different symbol systems we employ in communication practices. As new educational practices emerge from the application of ICT, all teachers address multimodality when they plan, practice and reflect on their teaching and learning situations. The choices they make involve E-learning resources like videos, social platforms and mobile devices, not just as digital artefacts we interact with, but the entire practice of using digital media. In a life-long learning perspective, multimodality is potentially very useful...

  3. Multimodality

    DEFF Research Database (Denmark)

    Buhl, Mie

    In this paper, I address an ongoing discussion in Danish E-learning research about how to take advantage of the fact that digital media facilitate other communication forms than text, so-called ‘multimodal’ communication, which should not be confused with the term ‘multimedia’. While multimedia represent the use of various media for communication, multimodality refers to the different symbol systems we employ in communication practices. As new educational practices emerge from the application of ICT, all teachers address multimodality when they plan, practice and reflect on their teaching and learning situations. The choices they make involve E-learning resources like videos, social platforms and mobile devices, not just as digital artefacts we interact with, but the entire practice of using digital media. In a life-long learning perspective, multimodality is potentially very useful...

  4. Multimodal interaction for human-robot teams

    Science.gov (United States)

    Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle

    2013-05-01

    Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. In order for such autonomous systems to integrate with the team, we must move beyond current interaction methods using heads-down teleoperation which require intensive human attention and affect the human operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.
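
    The "partially redundant modes" idea described above can be illustrated with a minimal, hypothetical sketch (not the system in the paper): each modality reports a command with a confidence, and an arbiter prefers the modality best suited to the current constraints, for example silent modalities when stealth is required, falling back to the others otherwise. Names, thresholds, and the stealth flag are illustrative assumptions.

```python
# Hypothetical modality-arbitration sketch: pick a command from gesture, voice or
# tablet input according to situational constraints and per-modality confidence.
def choose_command(gesture, voice, tablet, stealth=False, min_conf=0.5):
    """Each input is None or a dict like {"command": "follow_me", "confidence": 0.8}."""
    # Ordered preference: silent modalities first when stealth is required.
    preference = [gesture, tablet, voice] if stealth else [voice, gesture, tablet]
    for candidate in preference:
        if candidate and candidate["confidence"] >= min_conf:
            return candidate["command"]
    return None   # no modality confident enough; ask the operator to repeat

# choose_command({"command": "halt", "confidence": 0.9}, None, None, stealth=True) -> "halt"
```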

  5. An Overview of a Decade of Journal Publications about Culture and Human-Computer Interaction (HCI)

    Science.gov (United States)

    Clemmensen, Torkil; Roese, Kerstin

    In this paper, we analyze the concept of human-computer interaction in cultural and national contexts. Building and extending upon the framework for understanding research in usability and culture by Honold [3], we give an overview of publications on culture and HCI between 1998 and 2008, with a narrow focus on high-level journal publications only. The purpose is to review current practice in how cultural HCI issues are studied and to analyse problems with the measures and interpretation of these studies. We find that Hofstede's cultural dimensions have been the dominant model of culture, that participants have been picked because they could speak English, and that most studies have been large-scale quantitative studies. In order to balance this situation, we recommend that more researchers and practitioners do qualitative, empirical work studies.

  6. Multi-mode interaction middleware for software services

    Institute of Scientific and Technical Information of China (English)

    TAO XianPing; MA XiaoXing; LU Jian; YU Ping; ZHOU Yu

    2008-01-01

    Due to the independency, variability, and tailorability of software service in the open environment, the research of middleware which supports software services multi-mode interaction is thus of great importance. In this paper, an agent-based multi-mode interaction middleware model and its supporting system for software services were proposed. This model includes an interaction feature decomposition and configuration model to enable interaction programming, an agent-based middleware model, and a programmable coordination media based on reflection technology. The decomposition and configuration model for interaction features can assist programmers in interaction programming by analyzing and synthesizing interaction features. The agent-based middleware model provides a runtime framework for service multi-mode interaction. The programmable coordination media is able to effectively support software service coordination based on multi-mode interaction. To verify the feasibility and efficiency of the above method, the design, implementation and performance analysis of Artemis-M3C, a multi-mode interaction middleware for software services, were introduced. The result shows that the above method is feasible and that the Artemis-M3C system is practical and effective in multi-mode interaction.

  7. The experience of agency in human-computer interactions: a review.

    Science.gov (United States)

    Limerick, Hannah; Coyle, David; Moore, James W

    2014-01-01

    The sense of agency is the experience of controlling both one's body and the external environment. Although the sense of agency has been studied extensively, there is a paucity of studies in applied "real-life" situations. One applied domain that seems highly relevant is human-computer-interaction (HCI), as an increasing number of our everyday agentive interactions involve technology. Indeed, HCI has long recognized the feeling of control as a key factor in how people experience interactions with technology. The aim of this review is to summarize and examine the possible links between sense of agency and understanding control in HCI. We explore the overlap between HCI and sense of agency for computer input modalities and system feedback, computer assistance, and joint actions between humans and computers. An overarching consideration is how agency research can inform HCI and vice versa. Finally, we discuss the potential ethical implications of personal responsibility in an ever-increasing society of technology users and intelligent machine interfaces.

  8. Multimodal Sensing Interface for Haptic Interaction

    Directory of Open Access Journals (Sweden)

    Carlos Diaz

    2017-01-01

    Full Text Available This paper investigates the integration of a multimodal sensing system for exploring the limits of vibrotactile haptic feedback when interacting with 3D representations of real objects. In this study, the spatial locations of the objects are mapped to the work volume of the user using a Kinect sensor. The position of the user's hand is obtained using marker-based visual processing. The depth information is used to build a vibrotactile map on a haptic glove enhanced with vibration motors. The users can perceive the location and dimension of remote objects by moving their hand inside a scanning region. A marker detection camera provides the location and orientation of the user's hand (glove) to map the corresponding tactile message. A preliminary study was conducted to explore how different users perceive such haptic experiences. Factors such as the total number of objects detected, object separation resolution, and dimension-based and shape-based discrimination were evaluated. The preliminary results showed that the localization and counting of objects can be attained with a high degree of success. The users were able to classify groups of objects of different dimensions based on the perceived haptic feedback.
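
    As a rough illustration of the depth-to-vibrotactile mapping described above, the sketch below (hypothetical, not the authors' code) bins the depth values inside the scanning region and scales them into per-motor intensities for a glove with a small grid of vibration motors; the grid size and depth range are assumptions.

```python
# Hypothetical sketch: map a depth image region to intensities for a grid of
# vibration motors on a glove. Grid size, depth range and scaling are assumptions.
import numpy as np

def depth_to_vibration(depth_region, motor_grid=(3, 3), near=0.5, far=2.0):
    """depth_region: 2D array of depths in meters for the area under the hand.
    Returns a motor_grid-shaped array of intensities in [0, 1] (1 = closest)."""
    rows, cols = motor_grid
    h, w = depth_region.shape
    intensities = np.zeros(motor_grid, dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            cell = depth_region[r * h // rows:(r + 1) * h // rows,
                                c * w // cols:(c + 1) * w // cols]
            valid = cell[(cell > near) & (cell < far)]
            if valid.size:
                d = valid.min()                          # nearest surface in the cell
                intensities[r, c] = 1.0 - (d - near) / (far - near)
    return intensities

# Usage (hypothetical): intensities = depth_to_vibration(kinect_depth[y0:y1, x0:x1])
# Each value would then be sent as a PWM duty cycle to the corresponding motor.
```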

  9. Human-Centered Software Engineering: Software Engineering Architectures, Patterns, and Models for Human Computer Interaction

    Science.gov (United States)

    Seffah, Ahmed; Vanderdonckt, Jean; Desmarais, Michel C.

    The Computer-Human Interaction and Software Engineering (CHISE) series of edited volumes originated from a number of workshops and discussions over the latest research and developments in the field of Human Computer Interaction (HCI) and Software Engineering (SE) integration, convergence and cross-pollination. A first volume in this series (CHISE Volume I - Human-Centered Software Engineering: Integrating Usability in the Development Lifecycle) aims at bridging the gap between the field of SE and HCI, and addresses specifically the concerns of integrating usability and user-centered systems design methods and tools into the software development lifecycle and practices. This has been done by defining techniques, tools and practices that can fit into the entire software engineering lifecycle as well as by defining ways of addressing the knowledge and skills needed, and the attitudes and basic values that a user-centered development methodology requires. The first volume has been edited as Vol. 8 in the Springer HCI Series (Seffah, Gulliksen and Desmarais, 2005).

  10. Computer Aided Design in Digital Human Modeling for Human Computer Interaction in Ergonomic Assessment: A Review

    Directory of Open Access Journals (Sweden)

    Suman Mukhopadhyay , Sanjib Kumar Das and Tania Chakraborty

    2012-12-01

    Full Text Available Research in Human-Computer Interaction (HCI) has been enormously successful in the area of computer-aided ergonomics or human-centric design. A perfect fit for people has always been a target for product design. Designers traditionally used anthropometric dimensions for 3D product design, which created a lot of fitting problems when dealing with the complexities of human body shapes. Computer-aided design (CAD), also known as computer-aided design and drafting (CADD), is the computer technology used for design processing and design documentation. CAD has now been used extensively in many applications such as the automotive, shipbuilding and aerospace industries, architectural and industrial design, prosthetics, computer animation for special effects in movies, advertising and technical manuals. As a technology, digital human modeling (DHM) has rapidly emerged as a technology that creates, manipulates and controls human representations and human-machine system scenes on computers for interactive ergonomic design problem solving. DHM promises to profoundly change how products or systems are designed, how ergonomics analysis is performed, how disorders and impairments are assessed and how therapies and surgeries are conducted. The imperative and emerging need for DHM appears to be consistent with the fact that the past decade has witnessed significant growth in both the software systems offering DHM capabilities and the corporate world adapting the technology. The authors shall dwell at length and deliberate on how research in DHM has finally brought about an enhanced HCI, in the context of computer-aided ergonomics or human-centric design, and discuss future trends in this context.

  11. Combining heterogeneous inputs for the development of adaptive and multimodal interaction systems

    Directory of Open Access Journals (Sweden)

    David GRIOL

    2013-11-01

    Full Text Available In this paper we present a novel framework for the integration of visual sensor networks and speech-based interfaces. Our proposal follows the standard reference architecture in fusion systems (JDL), and combines different techniques related to Artificial Intelligence, Natural Language Processing and User Modeling to provide an enhanced interaction with its users. Firstly, the framework integrates a Cooperative Surveillance Multi-Agent System (CS-MAS), which includes several types of autonomous agents working in a coalition to track and make inferences on the positions of the targets. Secondly, enhanced conversational agents facilitate human-computer interaction by means of speech interaction. Thirdly, a statistical methodology allows modeling of the user's conversational behavior, which is learned from an initial corpus and improved with the knowledge acquired from successive interactions. A technique is proposed to facilitate the multimodal fusion of these information sources and to consider the result in deciding the next system action.
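
    To make the fusion step concrete, here is a hypothetical late-fusion sketch (not the framework's actual code): a target position hypothesis from the visual sensor network is combined with an intent decoded from the speech interface to pick the next system action. The data structures, confidence threshold, and action names are assumptions for illustration only.

```python
# Hypothetical late-fusion sketch: combine a visual tracking hypothesis with a
# spoken intent to decide the next system action, or ask for clarification.
from dataclasses import dataclass

@dataclass
class VisualHypothesis:
    target_id: str
    position: tuple        # (x, y) in map coordinates
    confidence: float      # 0..1 from the tracking agents

@dataclass
class SpeechIntent:
    action: str            # e.g. "show_camera", "track_target"
    target_id: str
    confidence: float      # 0..1 from the dialogue manager

def fuse(visual: VisualHypothesis, speech: SpeechIntent, min_conf=0.4):
    # Require both modalities to refer to the same target with enough joint confidence.
    joint = visual.confidence * speech.confidence
    if speech.target_id == visual.target_id and joint >= min_conf:
        return {"action": speech.action, "target": visual.target_id,
                "position": visual.position, "score": joint}
    return {"action": "ask_clarification", "target": speech.target_id, "score": joint}

# Usage: decision = fuse(VisualHypothesis("t1", (12.0, 3.5), 0.9),
#                        SpeechIntent("track_target", "t1", 0.8))
```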

  12. Using minimal human-computer interfaces for studying the interactive development of social awareness

    Directory of Open Access Journals (Sweden)

    Tom eFroese

    2014-09-01

    Full Text Available According to the enactive approach to cognitive science, perception is essentially a skillful engagement with the world. Learning how to engage via a human-computer interface (HCI can therefore be taken as an instance of developing a new mode of experiencing. Similarly, social perception is theorized to be primarily constituted by skillful engagement between people, which implies that it is possible to investigate the origins and development of social awareness using multi-user HCIs. We analyzed the trial-by-trial objective and subjective changes in sociality that took place during a perceptual crossing experiment in which embodied interaction between pairs of adults was mediated over a minimalist haptic HCI. Since that study required participants to implicitly relearn how to mutually engage so as to perceive each other’s presence, we hypothesized that there would be indications that the initial developmental stages of social awareness were recapitulated. Preliminary results reveal that, despite the lack of explicit feedback about task performance, there was a trend for the clarity of social awareness to increase over time. We discuss the methodological challenges involved in evaluating whether this trend was characterized by distinct developmental stages of objective behavior and subjective experience.

  13. Using minimal human-computer interfaces for studying the interactive development of social awareness.

    Science.gov (United States)

    Froese, Tom; Iizuka, Hiroyuki; Ikegami, Takashi

    2014-01-01

    According to the enactive approach to cognitive science, perception is essentially a skillful engagement with the world. Learning how to engage via a human-computer interface (HCI) can therefore be taken as an instance of developing a new mode of experiencing. Similarly, social perception is theorized to be primarily constituted by skillful engagement between people, which implies that it is possible to investigate the origins and development of social awareness using multi-user HCIs. We analyzed the trial-by-trial objective and subjective changes in sociality that took place during a perceptual crossing experiment in which embodied interaction between pairs of adults was mediated over a minimalist haptic HCI. Since that study required participants to implicitly relearn how to mutually engage so as to perceive each other's presence, we hypothesized that there would be indications that the initial developmental stages of social awareness were recapitulated. Preliminary results reveal that, despite the lack of explicit feedback about task performance, there was a trend for the clarity of social awareness to increase over time. We discuss the methodological challenges involved in evaluating whether this trend was characterized by distinct developmental stages of objective behavior and subjective experience.

  14. A Human-Computer Interactive Augmented Reality System for Coronary Artery Diagnosis Planning and Training.

    Science.gov (United States)

    Li, Qiming; Huang, Chen; Lv, Shengqing; Li, Zeyu; Chen, Yimin; Ma, Lizhuang

    2017-09-02

    To let doctors carry out coronary artery diagnosis and preoperative planning in a more intuitive and natural way, and to improve training for interns, an augmented reality system for coronary artery diagnosis planning and training (ARS-CADPT) is designed and realized in this paper. First, a 3D reconstruction algorithm based on computed tomographic (CT) images is proposed to model the coronary artery vessels (CAV). Second, algorithms for static gesture recognition and for dynamic gesture spotting and recognition are presented to provide the real-time, user-friendly human-computer interaction (HCI) that is the distinguishing characteristic of ARS-CADPT. Third, a Sort-First parallel rendering and splicing display subsystem is developed, which greatly expands the number of student users the system can accommodate. The experimental results show that, with ARS-CADPT, the reconstruction accuracy of the CAV model is high, the HCI is natural and fluent, and the visual quality is good. In short, the system fully meets the application requirements.
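
    The CT-based reconstruction step can be illustrated with a short, hypothetical sketch (not the ARS-CADPT implementation): a crude intensity threshold segments the contrast-enhanced lumen, and marching cubes extracts a surface mesh that a 3D or AR renderer could display. The threshold and voxel spacing are assumptions.

```python
# Hypothetical sketch: reconstruct a vessel surface mesh from a stack of CT slices
# by thresholding and running marching cubes (scikit-image).
import numpy as np
from skimage import measure

def reconstruct_vessels(ct_volume, hu_threshold=300, voxel_spacing=(1.0, 0.5, 0.5)):
    """ct_volume: 3D array of Hounsfield units, ordered (z, y, x)."""
    mask = (ct_volume > hu_threshold).astype(np.float32)   # crude lumen segmentation
    verts, faces, normals, _ = measure.marching_cubes(mask, level=0.5,
                                                      spacing=voxel_spacing)
    return verts, faces, normals

# The resulting vertex/face arrays could be handed to any 3D engine, e.g. uploaded
# as a mesh for the AR overlay and the parallel rendering subsystem.
```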

  15. Adaptation of hybrid human-computer interaction systems using EEG error-related potentials.

    Science.gov (United States)

    Chavarriaga, Ricardo; Biasiucci, Andrea; Forster, Killian; Roggen, Daniel; Troster, Gerhard; Millan, Jose Del R

    2010-01-01

    Performance improvement in both humans and artificial systems strongly relies on the ability to recognize erroneous behavior or decisions. This paper, which builds upon previous studies on EEG error-related signals, presents a hybrid approach to human-computer interaction that uses human gestures to send commands to a computer and exploits brain activity to provide implicit feedback about the recognition of such commands. Using a simple computer game as a case study, we show that EEG activity evoked by erroneous gesture recognition can be classified in single trials above random levels. Automatic artifact rejection techniques are used, taking into account that subjects are allowed to move during the experiment. Moreover, we present a simple adaptation mechanism that uses the EEG signal to label newly acquired samples and can be used to re-calibrate the gesture recognition system in a supervised manner. Offline analysis shows that, although the achieved EEG decoding accuracy is far from perfect, these signals convey sufficient information to significantly improve overall system performance.
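
    A minimal, hypothetical sketch of the adaptation idea described above (not the authors' implementation): an EEG error-related potential (ErrP) detector flags freshly acquired gesture trials, only the error-free ones are kept with their predicted labels, and the gesture classifier is retrained on the enlarged set. Both classifiers are stand-ins; any scikit-learn estimator would do.

```python
# Hypothetical ErrP-driven re-calibration loop for a gesture classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def adapt_gesture_classifier(gesture_clf, errp_clf, new_gesture_feats, new_eeg_feats,
                             X_train, y_train):
    """Pseudo-label new gesture trials with the current classifier, keep only the
    trials the ErrP detector marks as error-free, and retrain on the enlarged set."""
    pseudo_labels = gesture_clf.predict(new_gesture_feats)
    error_flags = errp_clf.predict(new_eeg_feats)          # 1 = error potential detected
    keep = error_flags == 0
    X_train = np.vstack([X_train, new_gesture_feats[keep]])
    y_train = np.concatenate([y_train, pseudo_labels[keep]])
    gesture_clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return gesture_clf, X_train, y_train
```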

  16. Redesign of a computerized clinical reminder for colorectal cancer screening: a human-computer interaction evaluation

    Directory of Open Access Journals (Sweden)

    Saleem Jason J

    2011-11-01

    Full Text Available Background: Based on barriers to the use of computerized clinical decision support (CDS) learned in an earlier field study, we prototyped design enhancements to the Veterans Health Administration's (VHA's) colorectal cancer (CRC) screening clinical reminder to compare against the VHA's current CRC reminder. Methods: In a controlled simulation experiment, 12 primary care providers (PCPs) used prototypes of the current and redesigned CRC screening reminder in a within-subject comparison. Quantitative measurements were based on a usability survey, a workload assessment instrument, and a workflow integration survey. We also collected qualitative data on both designs. Results: Design enhancements to the VHA's existing CRC screening clinical reminder positively impacted aspects of usability and workflow integration but not workload. The qualitative analysis revealed broad support across participants for the design enhancements, with specific suggestions for improving the reminder further. Conclusions: This study demonstrates the value of a human-computer interaction evaluation in informing the redesign of information tools to foster uptake, integration into workflow, and use in clinical practice.

  17. Delays and user performance in human-computer-network interaction tasks.

    Science.gov (United States)

    Caldwell, Barrett S; Wang, Enlie

    2009-12-01

    This article describes a series of studies conducted to examine factors affecting user perceptions, responses, and tolerance for network-based computer delays affecting distributed human-computer-network interaction (HCNI) tasks. HCNI tasks, even with increasing computing and network bandwidth capabilities, are still affected by human perceptions of delay and of appropriate waiting times for information flow latencies. Six laboratory studies were conducted with university participants in China (Preliminary Experiments 1 through 3) and the United States (Experiments 4 through 6) to examine users' perceptions of elapsed time, the effect of perceived network task performance partners on delay tolerance, and expectations of appropriate delays based on task, situation, and network conditions. Results across the six experiments indicate that users' delay tolerance and estimated delay were affected by multiple task and expectation factors, including task complexity and importance, situation urgency and time availability, file size, and network bandwidth capacity. Results also suggest a range of user strategies for incorporating delay tolerance in task planning and performance. HCNI user experience is influenced by combinations of task requirements, constraints, and understandings of system performance; tolerance is a nonlinear function of time constraint ratios or decay. Appropriate user interface tools providing delay feedback information can help modify user expectations and delay tolerance. These tools are especially valuable when delay conditions exceed a few seconds or when task constraints and system demands are high. Interface designs for HCNI tasks should consider assistant-style presentations of delay feedback, information freshness, and network characteristics. Assistants should also gather awareness of user time constraints.

  18. Using minimal human-computer interfaces for studying the interactive development of social awareness

    Science.gov (United States)

    Froese, Tom; Iizuka, Hiroyuki; Ikegami, Takashi

    2014-01-01

    According to the enactive approach to cognitive science, perception is essentially a skillful engagement with the world. Learning how to engage via a human-computer interface (HCI) can therefore be taken as an instance of developing a new mode of experiencing. Similarly, social perception is theorized to be primarily constituted by skillful engagement between people, which implies that it is possible to investigate the origins and development of social awareness using multi-user HCIs. We analyzed the trial-by-trial objective and subjective changes in sociality that took place during a perceptual crossing experiment in which embodied interaction between pairs of adults was mediated over a minimalist haptic HCI. Since that study required participants to implicitly relearn how to mutually engage so as to perceive each other's presence, we hypothesized that there would be indications that the initial developmental stages of social awareness were recapitulated. Preliminary results reveal that, despite the lack of explicit feedback about task performance, there was a trend for the clarity of social awareness to increase over time. We discuss the methodological challenges involved in evaluating whether this trend was characterized by distinct developmental stages of objective behavior and subjective experience. PMID:25309490

  19. Knowledge translation in health care as a multimodal interactional accomplishment

    DEFF Research Database (Denmark)

    Kjær, Malene

    2014-01-01

    In the theory of health care, knowledge translation is regarded as a crucial phenomenon that makes the whole health care system work in a desired manner. The present paper studies knowledge translation from the student nurses’ perspective and does that through a close analysis of the part...... knowledge gets translated through the use of rich multimodal embodied interactions, whereas the more abstract aspects of knowledge remain untranslated. Overall, the study contributes to the understanding of knowledge translation as a multimodal, locally situated accomplishment....

  20. Multimodal Interaction with a Virtual Guide

    NARCIS (Netherlands)

    Hofs, D.H.W.; Theune, Mariet; op den Akker, Hendrikus J.A.

    2008-01-01

    We demonstrate the Virtual Guide, an embodied conversational agent that gives directions in a 3D environment. We briefly describe multimodal dialogue management, language and gesture generation, and a special feature of the Virtual Guide: the ability to align her linguistic style to the user’s level

  1. An agent-based architecture for multimodal interaction

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    2001-01-01

    In this paper, an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The agent-based architecture can be used to create multimodal interaction. The generic process model has been designed, implemented and used to simulate

  2. An agent-based architecture for multimodal interaction

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    2001-01-01

    In this paper, an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The agent-based architecture can be used to create multimodal interaction. The generic process model has been designed, implemented and used to simulate di

  3. REVIEW: Affective and Emotional Aspects of Human-Computer Interaction: Game-Based and Innovative Learning Approaches

    OpenAIRE

    GULUMBAY, Reviewed By Dr. A. Askim

    2006-01-01

    This book was edited by Maja Pivec, an educator at the University of Applied Sciences, and published by IOS Press in 2006. The learning process can be seen as an emotional and personal experience that is addictive and leads learners to proactive behavior. New research methods in this field are related to affective and emotional approaches to computer-supported learning and human-computer interactions. Bringing together scientists and research aspects from psychology, educational sciences, cog...

  4. Ontology for assessment studies of human-computer-interaction in surgery.

    Science.gov (United States)

    Machno, Andrej; Jannin, Pierre; Dameron, Olivier; Korb, Werner; Scheuermann, Gerik; Meixensberger, Jürgen

    2015-02-01

    New technologies improve modern medicine, but may result in unwanted consequences. Some occur due to inadequate human-computer interaction (HCI). To assess these consequences, an investigation model was developed to facilitate the planning, implementation and documentation of studies of HCI in surgery. The investigation model was formalized in Unified Modeling Language and implemented as an ontology. Four different top-level ontologies were compared: Object-Centered High-level Reference, Basic Formal Ontology, General Formal Ontology (GFO) and Descriptive Ontology for Linguistic and Cognitive Engineering, according to the three major requirements of the investigation model: the domain-specific view, the experimental scenario and the representation of fundamental relations. Furthermore, this article emphasizes the distinction between "information model" and "model of meaning" and shows the advantages of implementing the model in an ontology rather than in a database. The results of the comparison show that GFO fits the defined requirements adequately: the domain-specific view and the fundamental relations can be implemented directly, and only the representation of the experimental scenario requires minor extensions. The other candidates require wide-ranging extensions concerning at least one of the major implementation requirements. Therefore, GFO was selected to realize an appropriate implementation of the developed investigation model. The ensuing development considered the concrete implementation of further model aspects and entities: sub-domains, space and time, processes, properties, relations and functions. The investigation model and its ontological implementation provide a modular guideline for study planning, implementation and documentation within the area of HCI research in surgery. This guideline helps to navigate the whole study process, in the manner of a standard or good clinical practice, based on the foundational frameworks involved.

  5. Rapid Human-Computer Interactive Conceptual Design of Mobile and Manipulative Robot Systems

    Science.gov (United States)

    2015-05-19

    Learning Comparative User Models for Accelerating Human-Computer Collaborative Search; Evolutionary and Biologically Inspired Music, Sound, Art and ... has been investigated theoretically to some extent ([12]) and successfully applied to artistic tasks ([11, 5]). Our hypothesis is that it is possible ... the model's prediction to the sign of the original entry. If the signs coincide for all entries, the network is considered to be successfully trained.

  6. Advancements in remote physiological measurement and applications in human-computer interaction

    Science.gov (United States)

    McDuff, Daniel

    2017-04-01

    Physiological signals are important for tracking health and emotional states. Imaging photoplethysmography (iPPG) is a set of techniques for remotely recovering cardio-pulmonary signals from video of the human body. Advances in iPPG methods over the past decade, combined with the ubiquity of digital cameras, present the possibility of many new, low-cost applications of physiological monitoring. This talk will highlight methods for recovering physiological signals, work characterizing the impact of video parameters and hardware on these measurements, and applications of this technology in human-computer interfaces.
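
    A common iPPG recipe can be sketched in a few lines. The example below is a hypothetical illustration (not the methods from the talk): it averages the green channel over a face region in each frame, band-passes the trace around plausible heart rates, and reads the dominant frequency. The region-of-interest selection, filter order, and rate band are assumptions.

```python
# Hypothetical iPPG sketch: green-channel mean over a face ROI -> band-pass ->
# dominant frequency -> heart rate in beats per minute.
import numpy as np
from scipy.signal import butter, filtfilt

def heart_rate_from_frames(frames, face_roi, fps=30.0, low_hz=0.7, high_hz=4.0):
    """frames: sequence of RGB frames (H, W, 3); face_roi: (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = face_roi
    trace = np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])   # green channel
    trace = trace - trace.mean()
    b, a = butter(3, [low_hz / (fps / 2), high_hz / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, trace)
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]             # beats per minute
```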

  7. 人机交互的若干关键技术%Some Key Techniques on Human-Computer Interaction

    Institute of Scientific and Technical Information of China (English)

    王红兵; 瞿裕忠; 徐冬梅; 王; 尧

    2001-01-01

    Human-Computer Interaction (HCI) is the study of humans, computers and the ways they influence each other. Human-computer integration with the human in the leading role will be a characteristic of future computer systems, and achieving efficient human-computer cooperation will be the main goal of the next generation of user interfaces. Multimodal user interfaces, computer-supported cooperative work and three-dimensional human-computer interaction are key techniques for achieving efficient and natural human-computer interaction.

  8. 人机交互中的场景开发%Scenarios Development in Human-Computer Interaction

    Institute of Scientific and Technical Information of China (English)

    张向波; 邢朝伟

    2003-01-01

    Scenarios are an important technique in human-computer interaction (HCI). Addressing problems that commonly arise in the design of interactive systems, this article analyzes the model-based human-computer interaction process in some depth and discusses the role, application and content of scenarios in task analysis. The results show that scenario development is one of the key steps in the in-depth study and successful development of interactive systems.

  9. INTERACTIVE SPACE OF MEDIA POLITICAL DISCOURSE: COMMUNICATIVE AND MULTIMODAL ASPECTS

    Directory of Open Access Journals (Sweden)

    Shamne Nikolay Leonidovich

    2014-09-01

    Full Text Available The article addresses communication in two respects: (a) a classical conversation analysis of the interaction process, and (b) the category of multimodality, which gives a new interpretation of communication and reveals new communication units and their functioning in mass media discourse. Multiple theoretical models of a person's speech behavior in the space of media political discourse are developed and tested on video footage, which allows the study of complex micro-units of interaction, including verbal and nonverbal components, called multimodal signs. The concepts of interactional space and interactional ensemble are central to considering communication substructures. Two types of interaction space of a political talk show are revealed: (1) a primary space, formed by all the people present in the television studio, including the audience, the talk show participants (guests), the moderator, the camera operator and other people in the recording studio, all of whom form certain body-spatial constellations; and (2) a secondary space, formed by the television camera for people who are not currently in the studio and who "quasi-participate" in the discussion of the problem. The camera covering the primary interaction space creates interaction dyads, triads and larger units which include speech, sign and bodily communication modules, i.e. body-spatial constellations. Interactional space is ordered by inter- and intrapersonal connections, which represent the process of coordination, in which the verbal activity of communicants is necessarily accompanied by simultaneous, reflexive, unemphasized actions of a nonverbal character. The multimodal perspective on the study of the interaction process, as a type of conversation analysis, does not simply follow the conversation analysis method but also changes the research focus on communication as a social action and reveals new concepts in the discourse construction of the social world.

  10. Characterizing Multimode Interaction in Renal Autoregulation

    DEFF Research Database (Denmark)

    Pavlov, A. N.; Sosnovtseva, Olga; Pavlova, O. N.;

    2008-01-01

    The purpose of this paper is to demonstrate how modern statistical techniques of non-stationary time-series analysis can be used to characterize the mutual interaction among three coexisting rhythms in nephron pressure and flow regulation. Besides a relatively fast vasomotoric rhythm with a period...

  11. An Egocentric Approach Towards Ubiquitous Multimodal Interaction

    DEFF Research Database (Denmark)

    Pederson, Thomas; Jalaliniya, Shahram

    2015-01-01

    In this position paper we present our take on the possibilities that emerge from a mix of recent ideas in interaction design, wearable computers, and context-aware systems which taken together could allow us to get closer to Marc Weiser's vision of calm computing. Multisensory user experience pla...

  12. Perspectives on the Design of Human-Computer Interactions: Issues and Implications.

    Science.gov (United States)

    Gavora, Mark J.; Hannafin, Michael

    1994-01-01

    Considers several perspectives on interaction strategies for computer-aided learning; examines dimensions of interaction; and presents a model for the design of interaction strategies. Topics include pacing; navigation; mental processes; cognitive and physical responses; the role of quality and quantity; a conceptual approach; and suggestions for…

  13. Multimodality and Design of Interactive Virtual Environments for Creative Collaboration

    DEFF Research Database (Denmark)

    Gürsimsek, Remzi Ates

    ...multi-user interaction, customization and interdisciplinary collaboration. These spaces accommodate new forms of spatial and social practices, provide multimodal communication resources in physical and virtual environments, and allow individuals (or groups) to actively engage with collaborative creative experiences. The three-dimensional representation of space and the resources for non-verbal communication enable the users to interact with the digital content in more complex yet engaging ways. However, understanding the communicative resources in virtual spaces with the theoretical tools that are conventionally used...

  14. Multimode interaction in axially excited cylindrical shells

    Directory of Open Access Journals (Sweden)

    Silva F. M. A.

    2014-01-01

    Full Text Available Cylindrical shells exhibit a dense frequency spectrum, especially near the lowest frequency range. In addition, due to the circumferential symmetry, frequencies occur in pairs. Thus, in the vicinity of the lowest natural frequencies, several equal or nearly equal frequencies may occur, leading to a complex dynamic behavior. The aim of the present work is therefore to investigate the dynamic behavior and stability of cylindrical shells under axial forcing with multiple equal or nearly equal natural frequencies. The shell is modelled using the Donnell nonlinear shallow shell theory and the discretized equations of motion are obtained by applying the Galerkin method. For this, a modal solution that takes into account the modal interaction among the relevant modes and the influence of their companion modes (modes with rotational symmetry), and which satisfies the boundary and continuity conditions of the shell, is derived. Special attention is given to the 1:1:1:1 internal resonance (four interacting modes). Solving the governing equations of motion numerically and using several tools of nonlinear dynamics, a detailed parametric analysis is conducted to clarify the influence of the internal resonances on the bifurcations, stability boundaries, nonlinear vibration modes and basins of attraction of the structure.
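
    As a schematic illustration only (not the equations derived in the paper), a Galerkin discretization of an axially excited shell commonly leads to coupled modal equations of Mathieu-Duffing type with parametric excitation; under assumed modal damping and cubic coupling they can be written as

    $$\ddot{q}_i + 2\zeta_i\omega_i\dot{q}_i + \omega_i^2\left[1 - p\cos(\Omega t)\right]q_i + \sum_{j,k,l}\alpha_{ijkl}\, q_j q_k q_l = 0, \qquad i = 1,\dots,4,$$

    where the $q_i$ are the amplitudes of the four interacting modes (including the companion modes), $p$ is the axial load parameter, $\Omega$ the forcing frequency, $\zeta_i$ the damping ratios and $\alpha_{ijkl}$ the nonlinear coupling coefficients; the 1:1:1:1 internal resonance corresponds to $\omega_1 \approx \omega_2 \approx \omega_3 \approx \omega_4$.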

  15. An Investigation of Human-Computer Interaction Approaches Beneficial to Weak Learners in Complex Animation Learning

    Science.gov (United States)

    Yeh, Yu-Fang

    2016-01-01

    Animation is one of the useful contemporary educational technologies in teaching complex subjects. There is a growing interest in proper use of learner-technology interaction to promote learning quality for different groups of learner needs. The purpose of this study is to investigate if an interaction approach supports weak learners, who have…

  16. 计算机人机界面交互的美感体现%Beauty of Human-computer Interface Interaction

    Institute of Scientific and Technical Information of China (English)

    高超; 王坤茜

    2014-01-01

    From the angle of applying aesthetic principles to the computer human-machine interface, this paper explores the use of aesthetics in human-computer interface design and summarizes how enhancing the beauty of the human-computer interface can improve users' efficiency and experience during human-computer interaction.

  17. Emotion Modelling and Facial Affect Recognition in Human-Computer and Human-Robot Interaction

    OpenAIRE

    Malatesta, Lori; Murray, John; Raouzaiou, Amaryllis; Hiolle, Antoine; Cañamero, Lola; Karpouzis, Kostas

    2009-01-01

    This work is funded by the EU FP6 project Feelix Growing: FEEL, Interact, eXpress: a Global appRoach to develOpment With INterdisciplinary Grounding, Contract FP6 IST-045169 (http://feelix-growing.org).

  18. The Electronic Mirror: Human-Computer Interaction and Change in Self-Appraisals.

    Science.gov (United States)

    De Laere, Kevin H.; Lundgren, David C.; Howe, Steven R.

    1998-01-01

    Compares humanlike versus machinelike interactional styles of computer interfaces, testing hypotheses that evaluative feedback conveyed through a humanlike interface will have greater impact on individuals' self-appraisals. Reflected appraisals were more influenced by computer feedback than were self-appraisals. Humanlike and machinelike interface…

  19. Brain-Computer Interfaces. Applying our Minds to Human-Computer Interaction

    NARCIS (Netherlands)

    Tan, Desney S.; Nijholt, Antinus

    2010-01-01

    For generations, humans have fantasized about the ability to create devices that can see into a person’s mind and thoughts, or to communicate and interact with machines through thought alone. Such ideas have long captured the imagination of humankind in the form of ancient myths and modern science

  20. Design Science in Human-Computer Interaction: A Model and Three Examples

    Science.gov (United States)

    Prestopnik, Nathan R.

    2013-01-01

    Humanity has entered an era where computing technology is virtually ubiquitous. From websites and mobile devices to computers embedded in appliances on our kitchen counters and automobiles parked in our driveways, information and communication technologies (ICTs) and IT artifacts are fundamentally changing the ways we interact with our world.…

  1. Brain computer interfaces as intelligent sensors for enhancing human-computer interaction

    NARCIS (Netherlands)

    Poel, M.; Nijboer, F.; Broek, E.L. van den; Fairclough, S.; Nijholt, A.

    2012-01-01

    BCIs are traditionally conceived as a way to control apparatus, an interface that allows you to "act on" external devices as a form of input control. We propose an alternative use of BCIs: that of monitoring users as an additional intelligent sensor to enrich traditional means of interaction. This vi

  2. Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction

    NARCIS (Netherlands)

    Tan, Desney S.; Nijholt, Anton

    2010-01-01

    For generations, humans have fantasized about the ability to create devices that can see into a person’s mind and thoughts, or to communicate and interact with machines through thought alone. Such ideas have long captured the imagination of humankind in the form of ancient myths and modern science f

  3. The Importance of Human-Computer Interaction in Radiology E-learning

    NARCIS (Netherlands)

    den Harder, Annemarie M; Frijlingh, Marissa; Ravesloot, Cécile J; Oosterbaan, Anne E; van der Gijp, Anouk

    2015-01-01

    With the development of cross-sectional imaging techniques and transformation to digital reading of radiological imaging, e-learning might be a promising tool in undergraduate radiology education. In this systematic review of the literature, we evaluate the emergence of image interaction possibiliti

  4. Human-Computer Interaction for BCI Games: Usability and User Experience

    NARCIS (Netherlands)

    Plass-Oude Bos, Danny; Reuderink, Boris; Laar, van de Bram; Gürkök, Hayrettin; Mühl, Christian; Poel, Mannes; Heylen, Dirk; Nijholt, Anton; Sourin, A.

    2010-01-01

    Brain-computer interfaces (BCI) come with a lot of issues, such as delays, bad recognition, long training times, and cumbersome hardware. Gamers are a large potential target group for this new interaction modality, but why would healthy subjects want to use it? BCI provides a combination of informat

  5. Improving Human-Computer Interaction by Developing Culture-sensitive Applications based on Common Sense Knowledge

    CERN Document Server

    Anacleto, Junia Coutinho

    2010-01-01

    The advent of Web 3.0, calling for personalization in interactive systems (Lassila & Hendler, 2007), and the need for systems capable of interacting in a more natural way in a future society flooded with computer systems and devices (Harper et al., 2008) show that great advances in HCI are still needed. This chapter presents some contributions of LIA to the future of HCI, arguing that the use of common sense knowledge is one way to improve HCI, especially because people assign meaning to their messages based on their common sense; therefore, the use of this knowledge in developing user interfaces can make them more intuitive to the end user. Moreover, as common sense knowledge varies from group to group of people, it can be used to develop applications capable of giving different feedback to different target groups, as the applications presented along this chapter illustrate, allowing, in this way, interface personalization that takes cultural issues into account. For the purpose of using com...

  6. The Importance of Human-Computer Interaction in Radiology E-learning.

    Science.gov (United States)

    den Harder, Annemarie M; Frijlingh, Marissa; Ravesloot, Cécile J; Oosterbaan, Anne E; van der Gijp, Anouk

    2016-04-01

    With the development of cross-sectional imaging techniques and transformation to digital reading of radiological imaging, e-learning might be a promising tool in undergraduate radiology education. In this systematic review of the literature, we evaluate the emergence of image interaction possibilities in radiology e-learning programs and evidence for effects of radiology e-learning on learning outcomes and perspectives of medical students and teachers. A systematic search in PubMed, EMBASE, Cochrane, ERIC, and PsycInfo was performed. Articles were screened by two authors and included when they concerned the evaluation of radiological e-learning tools for undergraduate medical students. Nineteen articles were included. Seven studies evaluated e-learning programs with image interaction possibilities. Students perceived e-learning with image interaction possibilities to be a useful addition to learning with hard copy images and to be effective for learning 3D anatomy. Both e-learning programs with and without image interaction possibilities were found to improve radiological knowledge and skills. In general, students found e-learning programs easy to use, rated image quality high, and found the difficulty level of the courses appropriate. Furthermore, they felt that their knowledge and understanding of radiology improved by using e-learning. In conclusion, the addition of radiology e-learning in undergraduate medical education can improve radiological knowledge and image interpretation skills. Differences between the effect of e-learning with and without image interpretation possibilities on learning outcomes are unknown and should be subject to future research.

  7. Brain-Computer Interfaces Applying Our Minds to Human-computer Interaction

    CERN Document Server

    Tan, Desney S

    2010-01-01

    For generations, humans have fantasized about the ability to create devices that can see into a person's mind and thoughts, or to communicate and interact with machines through thought alone. Such ideas have long captured the imagination of humankind in the form of ancient myths and modern science fiction stories. Recent advances in cognitive neuroscience and brain imaging technologies have started to turn these myths into a reality, and are providing us with the ability to interface directly with the human brain. This ability is made possible through the use of sensors that monitor physical p

  8. Social network extraction and analysis based on multimodal dyadic interaction.

    Science.gov (United States)

    Escalera, Sergio; Baró, Xavier; Vitrià, Jordi; Radeva, Petia; Raducanu, Bogdan

    2012-01-01

    Social interactions are a very important component in people's lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to New York Times' Blogging Heads opinion blog. The Social Network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links' weights are a measure of the "influence" a person has over the other. The states of the Influence Model encode automatically extracted audio/visual features from our videos using state-of-the art algorithms. Our results are reported in terms of accuracy of audio/visual data fusion for speaker segmentation and centrality measures used to characterize the extracted social network.

  9. Social Network Extraction and Analysis Based on Multimodal Dyadic Interaction

    Directory of Open Access Journals (Sweden)

    Bogdan Raducanu

    2012-02-01

    Full Text Available Social interactions are a very important component in people’s lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to New York Times’ Blogging Heads opinion blog. The Social Network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links’ weights are a measure of the “influence” a person has over the other. The states of the Influence Model encode automatically extracted audio/visual features from our videos using state-of-the art algorithms. Our results are reported in terms of accuracy of audio/visual data fusion for speaker segmentation and centrality measures used to characterize the extracted social network.

  10. The effects of syntactic complexity on the human-computer interaction

    Science.gov (United States)

    Chechile, R. A.; Fleischman, R. N.; Sadoski, D. M.

    1986-01-01

    Three divided-attention experiments were performed to evaluate the effectiveness of a syntactic analysis of the primary task of editing flight route-way-point information. For all editing conditions, a formal syntactic expression was developed for the operator's interaction with the computer. In terms of the syntactic expression, four measures of syntactic complexity were examined. Increased syntactic complexity did increase the time needed to train operators, but once the operators were trained, syntactic complexity did not influence divided-attention performance. However, the number of memory retrievals required of the operator significantly accounted for the variation in accuracy, workload, and task completion time found on the different editing tasks under attention-sharing conditions.

  11. Composite pattern structured light projection for human computer interaction in space

    Science.gov (United States)

    Guan, Chun; Hassebrook, Laurence G.; Lau, Daniel L.; Yalla, Veera Ganesh

    2005-05-01

    Interacting with computer technology while wearing a space suit is difficult at best. We present a sensor that can interpret body gestures in 3-Dimensions. Having the depth dimension allows simple thresholding to isolate the hands as well as use their positioning and orientation as input controls to digital devices such as computers and/or robotic devices. Structured light pattern projection is a well known method of accurately extracting 3-Dimensional information of a scene. Traditional structured light methods require several different patterns to recover the depth, without ambiguity and albedo sensitivity, and are corrupted by object motion during the projection/capture process. The authors have developed a methodology for combining multiple patterns into a single composite pattern by using 2-Dimensional spatial modulation techniques. A single composite pattern projection does not require synchronization with the camera so the data acquisition rate is only limited by the video rate. We have incorporated dynamic programming to greatly improve the resolution of the scan. Other applications include machine vision, remote controlled robotic interfacing in space, advanced cockpit controls and computer interfacing for the disabled. We will present performance analysis, experimental results and video examples.
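
    The record above describes folding several fringe patterns into one composite projection pattern. The Python sketch below illustrates that general idea under assumed parameters: the fringe frequency, the carrier frequencies and the three-phase fringe set are illustrative choices, not the authors' published values.

        # Hedged sketch: phase-shifted fringe patterns are each modulated onto a
        # distinct spatial carrier along the orthogonal image axis and summed into
        # a single composite pattern. All frequencies and sizes are illustrative.
        import numpy as np

        H, W = 480, 640
        y, x = np.mgrid[0:H, 0:W]
        fringe_freq = 8 / W                       # fringes vary along x
        carriers = [40 / H, 80 / H, 120 / H]      # carriers vary along y
        phases = [0, 2 * np.pi / 3, 4 * np.pi / 3]

        composite = np.zeros((H, W))
        for fc, phi in zip(carriers, phases):
            pattern = 0.5 + 0.5 * np.cos(2 * np.pi * fringe_freq * x + phi)
            composite += pattern * np.cos(2 * np.pi * fc * y)   # modulate on carrier

        composite = (composite - composite.min()) / np.ptp(composite)  # scale to [0, 1]
        print(composite.shape, composite.min(), composite.max())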

  12. Human computer interaction and communication aids for hearing-impaired, deaf and deaf-blind people: Introduction to the special thematic session

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

    2008-01-01

    This paper gives an overview of and extends the Special Thematic Session (STS) on research and development of technologies for hearing-impaired, deaf, and deaf-blind people. The topics of the session focus on special equipment or services to improve communication and human computer interaction...

  13. Human computer interaction and communication aids for hearing-impaired, deaf and deaf-blind people: Introduction to the special thematic session

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

    2008-01-01

    This paper gives an overview of and extends the Special Thematic Session (STS) on research and development of technologies for hearing-impaired, deaf, and deaf-blind people. The topics of the session focus on special equipment or services to improve communication and human computer interaction...

  14. Support vector machines to detect physiological patterns for EEG and EMG-based human-computer interaction: a review

    Science.gov (United States)

    Quitadamo, L. R.; Cavrini, F.; Sbernini, L.; Riillo, F.; Bianchi, L.; Seri, S.; Saggio, G.

    2017-02-01

    Support vector machines (SVMs) are widely used classifiers for detecting physiological patterns in human-computer interaction (HCI). Their success is due to their versatility, robustness and large availability of free dedicated toolboxes. Frequently in the literature, insufficient details about the SVM implementation and/or parameters selection are reported, making it impossible to reproduce study analysis and results. In order to perform an optimized classification and report a proper description of the results, it is necessary to have a comprehensive critical overview of the applications of SVM. The aim of this paper is to provide a review of the usage of SVM in the determination of brain and muscle patterns for HCI, by focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, an overview of the basic principles of SVM theory is outlined, together with a description of several relevant literature implementations. Furthermore, details concerning reviewed papers are listed in tables and statistics of SVM use in the literature are presented. Suitability of SVM for HCI is discussed and critical comparisons with other classifiers are reported.
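
    Since the review's main complaint is under-reported SVM settings, the minimal Python sketch below (using scikit-learn, with synthetic stand-ins for EEG/EMG features) shows one way to make the kernel, hyperparameter grid and cross-validation scheme explicit and reportable; the data and grids are purely illustrative.

        # Minimal sketch of an SVM pipeline for physiological-pattern classification.
        # The feature matrix X and labels y are synthetic placeholders; kernel and
        # hyperparameter grids are illustrative, not recommended values.
        import numpy as np
        from sklearn.model_selection import GridSearchCV
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 16))        # 200 trials x 16 features (e.g. band powers)
        y = rng.integers(0, 2, size=200)      # binary labels (e.g. rest vs. intended movement)

        # Make every choice explicit and reportable: kernel, C/gamma grids, CV scheme.
        pipeline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.1]}
        search = GridSearchCV(pipeline, grid, cv=5).fit(X, y)
        print("best parameters:", search.best_params_)
        print("cross-validated accuracy: %.3f" % search.best_score_)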

  15. A mobile Nursing Information System based on human-computer interaction design for improving quality of nursing.

    Science.gov (United States)

    Su, Kuo-Wei; Liu, Cheng-Li

    2012-06-01

    A conventional Nursing Information System (NIS), which supports the role of the nurse in some areas, is typically deployed as an immobile system. However, such a traditional information system cannot respond to patients' conditions in real time, delaying the availability of this information. With the advances of information technology, mobile devices are increasingly being used to extend the human mind's limited capacity to recall and process large numbers of relevant variables and to support information management, general administration, and clinical practice. Unfortunately, there have been few studies about the combination of a well-designed small-screen interface with a personal digital assistant (PDA) in clinical nursing. Some researchers found that user interface design is an important factor in determining the usability and potential use of a mobile system. Therefore, this study proposed a systematic approach to the development of a mobile nursing information system (MNIS) based on Mobile Human-Computer Interaction (M-HCI) for use in clinical nursing. The system combines principles of small-screen interface design with user-specified requirements. In addition, the iconic functions were designed around a metaphor concept that helps users learn the system more quickly with less working-memory load. An experiment involving learnability testing, thinking aloud and a questionnaire investigation was conducted to evaluate the effect of the MNIS on a PDA. The results show that the proposed MNIS supports learning well and yields high satisfaction with its symbols, terminology and system information.

  16. Using Noninvasive Brain Measurement to Explore the Psychological Effects of Computer Malfunctions on Users during Human-Computer Interactions

    Directory of Open Access Journals (Sweden)

    Leanne M. Hirshfield

    2014-01-01

    Full Text Available In today’s technologically driven world, there is a need to better understand the ways that common computer malfunctions affect computer users. These malfunctions may have measurable influences on computer user’s cognitive, emotional, and behavioral responses. An experiment was conducted where participants conducted a series of web search tasks while wearing functional near-infrared spectroscopy (fNIRS and galvanic skin response sensors. Two computer malfunctions were introduced during the sessions which had the potential to influence correlates of user trust and suspicion. Surveys were given after each session to measure user’s perceived emotional state, cognitive load, and perceived trust. Results suggest that fNIRS can be used to measure the different cognitive and emotional responses associated with computer malfunctions. These cognitive and emotional changes were correlated with users’ self-report levels of suspicion and trust, and they in turn suggest future work that further explores the capability of fNIRS for the measurement of user experience during human-computer interactions.

  17. Application of Human-computer Interaction in Modern Display Design

    Institute of Scientific and Technical Information of China (English)

    周波; 杨京玲

    2011-01-01

    Taking the application of human-computer interaction technology in modern display design as its starting point, this paper analyzes the key technologies for efficient and natural human-computer interaction, such as multi-channel user interfaces, computer-supported cooperative work, and three-dimensional human-computer interaction. Drawing on the history of human-computer interaction, it then discusses the significance of multimedia and hypermedia interaction modes for display design, and further analyzes the principles and advantages of applying human-computer interaction in display design. On this basis, it points out that display design should develop towards human-computer interaction: to achieve the desired result, the designer should choose an appropriate interaction method based on a correct analysis and understanding of the display subject and object, following the corresponding design principles.

  18. High-Level Modeling of Multimodal Interaction Techniques Using NiMMiT

    Directory of Open Access Journals (Sweden)

    Chris Raymaekers

    2007-09-01

    Full Text Available The past few years, multimodal interaction has been gaining importance in virtual environments. Although multimodality renders interacting with an environment more natural and intuitive, the development cycle of such an application is often long and expensive. In our overall field of research, we investigate how model-based design can facilitate the development process by designing environments through the use of high-level diagrams. In this scope, we present ‘NiMMiT′, a graphical notation for expressing and evaluating multimodal user interaction; we elaborate on the NiMMiT primitives and demonstrate its use by means of a comprehensive example.

  19. Vocabularies for description of accessibility issues in multimodal user interfaces

    NARCIS (Netherlands)

    Z. Obrenovic; R. Troncy (Raphael); L. Hardman (Lynda)

    2007-01-01

    textabstractIn previous work, we proposed a unified approach for describing multimodal human-computer interaction and interaction constraints in terms of sensual, motor, perceptual and cognitive functions of users. In this paper, we extend this work by providing formalised vocabularies that express

  20. Realtime Interaction Analysis of Social Interplay in a Multimodal Musical-Sonic Interaction Context

    DEFF Research Database (Denmark)

    Hansen, Anne-Marie

    2010-01-01

    This paper presents an approach to the analysis of social interplay among users in a multimodal interaction and musical performance situation. The approach consists of a combined method of realtime sensor data analysis for the description and interpretation of player gestures and video micro-analysis methods used to describe the interaction situation and the context in which the social interplay takes place. This combined method is used in an iterative process, where the design of interactive games with musical-sonic feedback is improved according to newly discovered understandings and interpretations...

  1. A Cognitive Pragmatic Approach of Human/Computer Interaction

    Directory of Open Access Journals (Sweden)

    Madeleine Saint-Pierre

    1998-06-01

    Full Text Available This article proposes an inferential approach for the study of human/computer interaction. It is by taking into account the user's cognitive activity while working at a computer that we propose to understand this interaction. This approach leads to a real user/interface evaluation and, hopefully, will serve as guidelines for the design of new interfaces. Our analysis describes the inferential process involved in the dynamics of task performance. The cognitive activity of the user is grasped by means of a "thinking aloud" method through which the user is asked to verbalize while working at the computer. Tools developed by our research team for the analysis and categorization of the verbal protocols are presented. The results are interpreted within the framework of Sperber and Wilson's (1995) relevance theory, in terms of the cognitive effort required to process the objects (linguistic, iconic, graphic, etc.) appearing on the screen and their cognitive effects. This approach can be generalized to other human/computer interaction contexts, such as distance learning.

  2. Multimodal interaction with W3C standards toward natural user interfaces to everything

    CERN Document Server

    2017-01-01

    This book presents new standards for multimodal interaction published by the W3C and other standards bodies in straightforward and accessible language, while also illustrating the standards in operation through case studies and chapters on innovative implementations. The book illustrates how, as smart technology becomes ubiquitous, and appears in more and more different shapes and sizes, vendor-specific approaches to multimodal interaction become impractical, motivating the need for standards. This book covers standards for voice, emotion, natural language understanding, dialog, and multimodal architectures. The book describes the standards in a practical manner, making them accessible to developers, students, and researchers. Comprehensive resource that explains the W3C standards for multimodal interaction in a clear and straightforward way; Includes case studies of the use of the standards on a wide variety of devices, including mobile devices, tablets, wearables and robots, in applications such as assisted livi...

  3. A multimodal emotion detection system during human-robot interaction.

    Science.gov (United States)

    Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F; Salichs, Miguel A

    2013-11-14

    In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human-robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: the voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system, Gender and Emotion Facial Analysis (GEFA), has been also developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to get a greater satisfaction degree during the human-robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately.

  4. A Multimodal Emotion Detection System during Human–Robot Interaction

    Directory of Open Access Journals (Sweden)

    Miguel A. Salichs

    2013-11-01

    Full Text Available In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: the voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system, Gender and Emotion Facial Analysis (GEFA), has also been developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to get a greater satisfaction degree during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately.
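
    The two records above mention a decision rule that combines the voice (GEVA) and face (GEFA) channels but do not spell it out. The Python sketch below shows one plausible confidence-weighted rule; the weights and per-emotion scores are hypothetical, and this is not the authors' actual rule.

        # Hedged sketch of a decision rule fusing per-emotion confidence scores
        # from a voice analyser and a face analyser. Weights and scores are
        # illustrative placeholders only.
        def fuse_emotions(voice_scores, face_scores, w_voice=0.4, w_face=0.6):
            """Return the emotion with the highest weighted combined confidence."""
            emotions = set(voice_scores) | set(face_scores)
            combined = {
                e: w_voice * voice_scores.get(e, 0.0) + w_face * face_scores.get(e, 0.0)
                for e in emotions
            }
            return max(combined, key=combined.get), combined

        # Hypothetical outputs of the two channels for one utterance.
        voice = {"neutral": 0.2, "happiness": 0.7, "anger": 0.1}
        face = {"neutral": 0.5, "happiness": 0.4, "anger": 0.1}
        label, scores = fuse_emotions(voice, face)
        print(label, scores)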

  5. An evaluation framework for multimodal interaction determining quality aspects and modality choice

    CERN Document Server

    Wechsung, Ina

    2014-01-01

    This book presents (1) an exhaustive and empirically validated taxonomy of quality aspects of multimodal interaction as well as respective measurement methods, (2) a validated questionnaire specifically tailored to the evaluation of multimodal systems and covering most of the taxonomy‘s quality aspects, (3) insights on how the quality perceptions of multimodal systems relate to the quality perceptions of its individual components, (4) a set of empirically tested factors which influence modality choice, and (5) models regarding the relationship of the perceived quality of a modality and the actual usage of a modality.

  6. Multimodal interfaces : a framework based on modality appropriateness

    NARCIS (Netherlands)

    Erp, J.B.F. van; Kooi, F.L.; Bronkhorst, A.; Leeuwen, D.L. van; Esch, M. van; Wijngaarden, S.J. van

    2006-01-01

    Our sensory modalities are specialized in perceiving different attributes of an object or event. This fact is the basis of the approach towards multimodal interfaces we describe in this paper. We rated the match between 20 possible information attributes (common in human computer interaction) and th

  7. Wearable Computing System with Input-Output Devices Based on Eye-Based Human Computer Interaction Allowing Location Based Web Services

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-08-01

    Full Text Available A wearable computing system with input-output devices based on Eye-Based Human Computer Interaction (EBHCI), which allows location-based web services including navigation and location/attitude/health-condition monitoring, is proposed. Through implementation of the proposed wearable computing system, all of its functionality was confirmed, and the system was found to work well; it is easy to use and inexpensive. Experimental results for EBHCI show excellent performance in terms of key-in accuracy as well as input speed. The system also provides Internet access and search-engine capability.

  8. Human-computer Interaction Based on Gaze Tracking and Gesture Recognition

    Institute of Scientific and Technical Information of China (English)

    肖志勇; 秦华标

    2009-01-01

    This paper presents a novel human-computer interaction method for long-distance operation based on gaze tracking and gesture recognition. The system analyzes the image captured by a camera and finds the positions of the eyes and fingers using recognition algorithms; the position the user points to on the screen is determined by the line from the eye to the fingertip. By recognizing the user's gestures, the system executes various operations. Experimental results demonstrate that the interaction can locate the pointed-at position on the screen and recognize the user's gestures, achieving friendly and natural long-distance human-computer interaction.
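
    The abstract locates the pointed-at screen position on the line from the eye to the fingertip. Assuming calibrated 3D coordinates with the screen lying on the plane z = 0, a minimal ray-plane intersection sketch might look like this; all coordinates are hypothetical, and the paper's actual geometry may differ.

        # Sketch: the pointed-at screen position as the intersection of the
        # eye-to-fingertip ray with the screen plane (assumed here to be z = 0).
        # The 3D coordinates are hypothetical calibrated values in centimetres.
        import numpy as np

        def pointing_target(eye, fingertip, plane_z=0.0):
            eye, fingertip = np.asarray(eye, float), np.asarray(fingertip, float)
            direction = fingertip - eye
            if abs(direction[2]) < 1e-9:
                raise ValueError("ray is parallel to the screen plane")
            t = (plane_z - eye[2]) / direction[2]
            return eye + t * direction          # 3D point on the screen plane

        eye = (10.0, 35.0, 60.0)        # eye position in front of the screen
        finger = (12.0, 30.0, 45.0)     # fingertip between eye and screen
        print(pointing_target(eye, finger))   # x, y on the screen plane, z == 0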

  9. Interactive Multimodal Molecular Set – Designing Ludic Engaging Science Learning Content

    DEFF Research Database (Denmark)

    Thorsen, Tine Pinholt; Christiansen, Kasper Holm Bonde; Jakobsen Sillesen, Kristian

    2014-01-01

    This paper reports on an exploratory study investigating 10 primary school students’ interaction with an interactive multimodal molecular set fostering ludic engaging science learning content in primary schools (8th and 9th grade). The concept of the prototype design was to bridge the physical...

  10. Multimodality and interactivity: connecting properties of serious games with educational outcomes.

    Science.gov (United States)

    Ritterfeld, Ute; Shen, Cuihua; Wang, Hua; Nocera, Luciano; Wong, Wee Ling

    2009-12-01

    Serious games have become an important genre of digital media and are often acclaimed for their potential to enhance deeper learning because of their unique technological properties. Yet the discourse has largely remained at a conceptual level. For an empirical evaluation of educational games, extra effort is needed to separate intertwined and confounding factors in order to manipulate and thus attribute the outcome to one property independent of another. This study represents one of the first attempts to empirically test the educational impact of two important properties of serious games, multimodality and interactivity, through a partial 2 x 3 (interactive, noninteractive by high, moderate, low in multimodality) factorial between-participants follow-up experiment. Results indicate that both multimodality and interactivity contribute to educational outcomes individually. Implications for educational strategies and future research directions are discussed.

  11. The Danish NOMCO Corpus Multimodal Interaction in First Acquaintance Conversations

    DEFF Research Database (Denmark)

    Paggio, Patrizia; Navarretta, Costanza

    2016-01-01

    This article presents the Danish NOMCO Corpus, an annotated multimodal collection of video-recorded first acquaintance conversations between Danish speakers. The annotation includes speech transcription including word boundaries, and formal as well as functional coding of gestural behaviours, specifically head movements, facial expressions, and body posture. The corpus has served as the empirical basis for a number of studies of communication phenomena related to turn management, feedback exchange, information packaging and the expression of emotional attitudes. We describe the annotation scheme...

  12. Human computer interaction positioning system based on RFID for museum

    Institute of Scientific and Technical Information of China (English)

    周祥云; 钱慧; 余轮

    2011-01-01

    A human-computer interaction positioning system based on RFID was designed for a digital museum, providing zone-level location of visitors, monitoring of visitor-flow distribution and tracking of visitors' movement traces, thus meeting the museum's management and application requirements. Building on the movement-trace tracking, the paper proposes a human-computer game-interaction application scheme combined with RFID location technology, and this scheme has been applied in the museum.
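
    The record describes zone-level localization of visitors via RFID. A minimal sketch of one common approach, assigning a tag to the zone of the reader reporting the strongest signal, is shown below; the reader identifiers, zones and RSSI values are hypothetical, and the paper's actual algorithm may differ.

        # Hedged sketch of zone-level RFID positioning: a visitor's tag is assigned
        # to the zone of the reader reporting the strongest signal (RSSI).
        def locate(tag_readings, reader_zone):
            """tag_readings: {reader_id: rssi_dbm}; returns the most likely zone."""
            best_reader = max(tag_readings, key=tag_readings.get)
            return reader_zone[best_reader]

        reader_zone = {"R1": "Bronze Hall", "R2": "Ceramics Hall", "R3": "Entrance"}
        readings = {"R1": -71.0, "R2": -58.0, "R3": -83.0}   # dBm, higher means closer
        print(locate(readings, reader_zone))                 # -> Ceramics Hall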

  13. A survey of speech emotion recognition in human computer interaction

    Institute of Scientific and Technical Information of China (English)

    张石清; 李乐民; 赵知劲

    2013-01-01

    Speech emotion recognition is currently an active research topic in the fields of signal processing, pattern recognition, artificial intelligence, and human-computer interaction. The ultimate purpose of such research is to endow computers with emotional ability and make human-computer interaction genuinely harmonious and natural. This paper reviews recent advances on several key problems involved in speech emotion recognition, including emotion description theory, emotional speech databases, emotional acoustic analysis, and emotion recognition methods. In addition, existing research problems and future directions are presented.

  14. Cooperation in human-computer communication

    OpenAIRE

    Kronenberg, Susanne

    2000-01-01

    The goal of this thesis is to simulate cooperation in human-computer communication to model the communicative interaction process of agents in natural dialogs in order to provide advanced human-computer interaction in that coherence is maintained between contributions of both agents, i.e. the human user and the computer. This thesis contributes to certain aspects of understanding and generation and their interaction in the German language. In spontaneous dialogs agents cooperate by the pro...

  15. The effect of a pretest in an interactive, multimodal pretraining system for learning science concepts

    NARCIS (Netherlands)

    Bos, Floris A.B.H.; Terlouw, Cees; Pilot, Albert

    2009-01-01

    In line with the cognitive theory of multimedia learning by Moreno and Mayer (2007), an interactive, multimodal learning environment was designed for the pretraining of science concepts in the joint area of physics, chemistry, biology, applied mathematics, and computer sciences. In the experimental

  16. Unraveling Students' Interaction around a Tangible Interface Using Multimodal Learning Analytics

    Science.gov (United States)

    Schneider, Bertrand; Blikstein, Paulo

    2015-01-01

    In this paper, we describe multimodal learning analytics (MMLA) techniques to analyze data collected around an interactive learning environment. In a previous study (Schneider & Blikstein, submitted), we designed and evaluated a Tangible User Interface (TUI) where dyads of students were asked to learn about the human hearing system by…

  17. Using the Interactive Whiteboard to Resource Continuity and Support Multimodal Teaching in a Primary Science Classroom

    Science.gov (United States)

    Gillen, J.; Littleton, K.; Twiner, A.; Staarman, J. K.; Mercer, N.

    2008-01-01

    All communication is inherently multimodal, and understandings of science need to be multidimensional. The interactive whiteboard offers a range of potential benefits to the primary science classroom in terms of relative ease of integration of a number of presentational and ICT functions, which, taken together, offers new opportunities for…

  18. Multimodal Desktop Interaction: The Face –Object-Gesture–Voice Example

    DEFF Research Database (Denmark)

    Vidakis, Nikolas; Vlasopoulos, Anastasios; Kounalakis, Tsampikos

    2013-01-01

    applications using face, objects, voice and gestures. These human behaviors constitute the input qualifiers to the system. Microsoft Kinect multi-sensor was utilized as input device in order to achieve natural user interaction, mainly due to the multimodal capabilities offered by this device. We...

  19. Citation Counting, Citation Ranking, and h-Index of Human-Computer Interaction Researchers: A Comparison between Scopus and Web of Science

    CERN Document Server

    Meho, Lokman I

    2008-01-01

    This study examines the differences between Scopus and Web of Science in the citation counting, citation ranking, and h-index of 22 top human-computer interaction (HCI) researchers from EQUATOR--a large British Interdisciplinary Research Collaboration project. Results indicate that Scopus provides significantly more coverage of HCI literature than Web of Science, primarily due to coverage of relevant ACM and IEEE peer-reviewed conference proceedings. No significant differences exist between the two databases if citations in journals only are compared. Although broader coverage of the literature does not significantly alter the relative citation ranking of individual researchers, Scopus helps distinguish between the researchers in a more nuanced fashion than Web of Science in both citation counting and h-index. Scopus also generates significantly different maps of citation networks of individual scholars than those generated by Web of Science. The study also presents a comparison of h-index scores based on Goo...

  20. Handling emotions in human-computer dialogues

    CERN Document Server

    Pittermann, Johannes; Minker, Wolfgang

    2010-01-01

    This book presents novel methods to perform robust speech-based emotion recognition at low complexity. It describes a flexible dialogue model to conveniently integrate emotions and other dialogue-influencing parameters in human-computer interaction.

  1. Interaction Analysis in Performing Arts: A Case Study in Multimodal Choreography

    Science.gov (United States)

    Christou, Maria; Luciani, Annie

    The growing openness towards interactive virtual worlds, and the variety of their uses, have brought great changes to the performing arts that are worth a profound analysis in order to understand the emerging issues. We examine the conception of a performance in terms of its capacity for embodiment, with a methodology based on interaction analysis. Finally, we propose a new multimodal choreography situation that respects the aforementioned analysis, and we evaluate the results on a simulation exercise.

  2. A multimodal architecture for simulating natural interactive walking in virtual environments

    OpenAIRE

    Nordahl, Rolf; Serafin, Stefania; Turchet, Luca; Nilsson, Niels Christian

    2011-01-01

    We describe a multimodal system that exploits the use of footwear-based interaction in virtual environments. We developed a pair of shoes enhanced with pressure sensors, actuators, and markers. These shoes control a multichannel surround sound system and drive a physically based audio-haptic synthesis engine that simulates the act of walking on different surfaces. We present the system in all its components, and explain its ability to simulate natural interactive walking in virtual environmen...

  3. Perceptive Media: Machine Perception and Human Computer Interaction

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Computer hardware has always changed rapidly, but input/output devices, interaction techniques, and software for human-computer interaction have not experienced similar growth and improvement. The GUI-based style of interaction has made computers simpler and easier to use, especially for office productivity applications where computers are used as tools to accomplish specific tasks. However, as the way we use computers changes and computing becomes more pervasive and ubiquitous, largely due to advances in bandwidth and mobility, GUIs will not easily support the range of interactions necessary to meet users' needs. In order to accommodate a wider range of scenarios, tasks, users, and preferences, we need to move toward interfaces that are natural, intuitive, adaptive, and unobtrusive. "Perceptive media" is an interdisciplinary initiative to combine multimedia display and machine perception to create useful, adaptive, responsive interfaces between people and technology. This article describes and investigates aspects of perceptive media and gives examples of work in one particular sub-area, Vision Based Interfaces.

  4. Evaluation of mental workload and familiarity in human computer interaction with integrated development environments using single-channel EEG

    OpenAIRE

    2015-01-01

    With modern developments in sensing technology it has become possible to detect and classify brain activity into distinct states such as attention and relaxation using commercially available EEG devices. These devices provide a low-cost and minimally intrusive method to observe a subject’s cognitive load whilst interacting with a computer system, thus providing a basis for determining the overall effectiveness of the design of a computer interface. In this paper, a single-channel dry sens...
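
    As an illustration of the kind of single-channel processing such studies rely on, the sketch below computes a theta/beta band-power ratio, one common workload proxy, from a synthetic signal with SciPy; it is not necessarily the metric used in the paper, and the signal and parameters are assumptions.

        # Sketch of a band-power workload proxy from single-channel EEG: the
        # theta/beta ratio, computed with Welch's method on a synthetic signal.
        import numpy as np
        from scipy.signal import welch

        fs = 256                                   # sampling rate in Hz
        t = np.arange(0, 10, 1 / fs)
        eeg = (np.sin(2 * np.pi * 6 * t)           # theta component (4-8 Hz)
               + 0.5 * np.sin(2 * np.pi * 20 * t)  # beta component (13-30 Hz)
               + 0.1 * np.random.randn(t.size))    # measurement noise

        f, psd = welch(eeg, fs=fs, nperseg=fs * 2)

        def band_power(f, psd, lo, hi):
            mask = (f >= lo) & (f < hi)
            return np.trapz(psd[mask], f[mask])

        theta = band_power(f, psd, 4, 8)
        beta = band_power(f, psd, 13, 30)
        print("theta/beta ratio: %.2f" % (theta / beta))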

  5. A multimodal architecture for simulating natural interactive walking in virtual environments

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Serafin, Stefania; Turchet, Luca

    2011-01-01

    We describe a multimodal system that exploits the use of footwear-based interaction in virtual environments. We developed a pair of shoes enhanced with pressure sensors, actuators, and markers. These shoes control a multichannel surround sound system and drive a physically based audio-haptic synthesis engine that simulates the act of walking on different surfaces. We present the system in all its components, and explain its ability to simulate natural interactive walking in virtual environments. We describe two experiments where the possibilities offered by the system are tested. In the first...

  6. Ubiquitous Human Computing

    OpenAIRE

    Zittrain, Jonathan L.

    2008-01-01

    Ubiquitous computing means network connectivity everywhere, linking devices and systems as small as a thumb tack and as large as a worldwide product distribution chain. What could happen when people are so readily networked? This short essay explores issues arising from two possible emerging models of ubiquitous human computing: fungible networked brainpower and collective personal vital sign monitoring.

  7. COGNITIVE TECHNOLOGIES OF DEVELOPMENT, TRANSLATING AND PERCEPTION OF CONCEPTUAL SYSTEMS OF MULTIMODAL INTERACTION PARTICIPANTS

    Directory of Open Access Journals (Sweden)

    Korobova Olga Valeryevna

    2014-11-01

    Full Text Available The article deals with the cognitive technologies and translating techniques, as well as the literal perception and construction of conceptual systems of participants of TV political talk shows in the process of multimodal interaction. It is shown that cognitive technologies and techniques aim at effecting the processes of acquiring, assimilating and storing human knowledge in different forms within a personal conceptual system, which is formed in the human consciousness as reality is mentally acquired. The communication between TV political talk show participants is based on such an effect. The basic intention of such communication is the translating and construction of conceptual systems of participants of the TV political talk show in multimodal interaction. It is realized with the help of verbal and non-verbal cognitive factors which are actualized by means of different modalities. The communicants who find themselves in the united media scene of the TV political talk show, rather than the interactants (the actual show participants), are the main objects of this effect. Thus, the authors determine that the interaction of communicants in multimodal communication is possible due to the correspondence of fragments of their conceptual systems.

  8. One-Bit human-computer interactive system

    Institute of Scientific and Technical Information of China (English)

    程煜; 张鸣宇; 陶霖密

    2015-01-01

    By developing a universal asymmetric interactive system, the obstacle that prevents people with motor disabilities from communicating over the Internet through general computing equipment is addressed. A virtual mouse takes the place of the traditional mouse, while a Chinese input method and shortcut keys replace the original keyboard input, so users can operate computers entirely according to their own will. A one-bit input device further reduces the operating burden for people with motor disabilities. Based on this method, a one-bit human-computer interaction system was developed and implemented that can operate an ordinary computer through the one-bit input device. User tests and analysis indicate that users can operate a computer with multiple functions, including text entry, web browsing, audio and video playback, and daily care, via the one-bit input device, meeting their everyday computing needs. The system is easy to learn and its functions can be extended easily.
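
    The abstract does not detail how a single one-bit input selects commands. A classic single-switch technique is linear scanning, sketched below with a simulated switch; the item set, scan timing and press positions are hypothetical and may differ from the paper's design.

        # Hedged sketch of single-switch (1-bit) text entry by linear scanning:
        # items are highlighted one at a time and a switch press selects the
        # currently highlighted item. The "switch" is simulated by scripted
        # scan-step indices; a real system would read an input device instead.
        import itertools

        ITEMS = list("ABCDEFG") + ["SPACE", "DEL"]

        def scan_select(press_steps, max_steps=50):
            """press_steps: scan-step indices at which the single switch was pressed."""
            presses = set(press_steps)
            text = []
            for step, item in enumerate(itertools.cycle(ITEMS)):
                if step >= max_steps:
                    break
                if step in presses:
                    if item == "SPACE":
                        text.append(" ")
                    elif item == "DEL":
                        if text:
                            text.pop()
                    else:
                        text.append(item)
            return "".join(text)

        # Simulated presses at scan steps 1, 11 and 16 select B, C and SPACE.
        print(scan_select([1, 11, 16]))    # -> "BC "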

  9. Research and implementation of 3D human computer interactive technology in close scenario

    Institute of Scientific and Technical Information of China (English)

    刘川; 邬春学

    2015-01-01

    3D human-computer interaction is an amalgamation of computer graphics, virtual reality and pattern recognition, consisting of virtual-environment display and 3D object recognition. A complete solution comprising virtual environment rendering and 3D object recognition is proposed and applied to close-range human-computer interaction scenes that simulate a virtual scenario at a 1:1 ratio. The research analyses the coordinate transformation between the virtual environment and reality; three main factors influencing the stereoscopic display of virtual objects are explored: the camera angle, the distance between the two cameras in OpenGL, and the generation of stereo image pairs. 3D object recognition is implemented on Intel Perceptual Computing. Experiments show that the solution produces an excellent 3D effect in the 1:1 simulated virtual scenario together with a high gesture recognition rate.
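
    To make the stereo-display factors concrete, the sketch below uses the standard parallel-camera parallax relation for a 1:1 setup, where the camera separation matches the viewer's interocular distance and the zero-parallax plane sits at the viewing distance; the formula and all numbers are illustrative, not the paper's.

        # Illustrative math for a 1:1 stereo setup with parallel cameras: screen
        # parallax tells how far apart the left/right images of a point appear.
        def screen_parallax(point_depth_cm, eye_sep_cm=6.5, view_dist_cm=75.0):
            """Horizontal parallax (cm) of a point at the given depth from the viewer."""
            return eye_sep_cm * (1.0 - view_dist_cm / point_depth_cm)

        for z in (50.0, 75.0, 150.0, 1000.0):
            print(f"depth {z:6.1f} cm -> parallax {screen_parallax(z):+.2f} cm")
        # Negative parallax: the point appears in front of the screen;
        # positive parallax: it recedes behind the screen (bounded by eye_sep_cm).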

  10. Spatial Sound and Multimodal Interaction in Immersive Environments

    DEFF Research Database (Denmark)

    Grani, Francesco; Overholt, Daniel; Erkut, Cumhur

    2015-01-01

    Spatial sound and interactivity are key elements of investigation at the Sound And Music Computing master program at Aalborg University Copenhagen. We present a collection of research directions and recent results from work in these areas, with the focus on our multi-faceted approaches to two areas, which are discussed. These include elements in which we have provided sonic interaction in virtual environments, interactivity with volumetric sound sources using VBAP and Wave Field Synthesis (WFS), and binaural sound for virtual environments and spatial audio mixing. We show that the variety of approaches presented...
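
    For the VBAP component mentioned above, a minimal 2-D pairwise panning sketch follows: the source direction is expressed in the basis of the two adjacent loudspeaker directions and the resulting gains are power-normalised. The loudspeaker angles are hypothetical, and this is a textbook formulation rather than the authors' implementation.

        # Hedged sketch of 2-D vector base amplitude panning (VBAP).
        import numpy as np

        def vbap_2d(source_deg, spk1_deg, spk2_deg):
            def unit(angle_deg):
                a = np.radians(angle_deg)
                return np.array([np.cos(a), np.sin(a)])
            basis = np.column_stack([unit(spk1_deg), unit(spk2_deg)])  # speaker directions
            gains = np.linalg.solve(basis, unit(source_deg))           # raw gains
            return gains / np.linalg.norm(gains)                       # constant-power scaling

        # A source at +10 degrees panned between loudspeakers at -30 and +30 degrees.
        print(vbap_2d(10, -30, 30))   # gains for the -30 and +30 degree speakers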

  11. ARZombie: A Mobile Augmented Reality Game with Multimodal Interaction

    Directory of Open Access Journals (Sweden)

    Diogo Cordeiro

    2015-11-01

    Full Text Available Augmented reality games have the power to extend virtual gaming into real world scenarios with real people, while enhancing the senses of the user. This paper describes the ARZombie game developed with the aim of studying and developing mobile augmented reality applications, specifically for tablets, using face recognition interaction techniques. The goal of the ARZombie player is to kill zombies that are detected through the display of the device. Instead of using markers as a means of tracking the zombies, this game incorporates a facial recognition system, which will enhance the user experience by improving the interaction of players with the real world. As the player moves around the environment, the game will display virtual zombies on the screen if the detected faces are recognized as belonging to the class of the zombies. ARZombie was tested with users to evaluate the interaction proposals, and its components were evaluated regarding performance in order to ensure a better gaming experience.

  12. Gazing Based Non-Wearable and Natural Human-Computer Interaction

    Institute of Scientific and Technical Information of China (English)

    王佳雯; 管业鹏

    2016-01-01

    A novel non-wearable and natural human-computer interaction (HCI) method based on eye gazing is proposed. According to the biological structural characteristics of the human body, an active shape model is employed to locate feature points on the eye contour. A histogram of eye features is built in the HSV color space, and a particle filter is adopted to track and locate the eye. A 2D eye geometric model is constructed from a maximal triangulation of the eye contour features, and a temporal median filter over image frames is used to determine a stable gaze interaction target. A non-wearable, natural HCI mode is thus realized in which the user can move flexibly, comfortably and freely. Experimental results indicate that the developed approach is effective and suitable for natural, non-wearable HCI.
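
    The abstract's temporal filtering step can be illustrated with a sliding median over per-frame gaze estimates, as sketched below with synthetic points; the window length and data are assumptions, not the paper's values.

        # Hedged sketch of temporal stabilisation: per-frame gaze-point estimates
        # are smoothed with a sliding median so a single bad frame does not
        # change the selected target. The gaze estimates below are synthetic.
        import numpy as np

        def median_filter_gaze(points, window=5):
            """points: (N, 2) array of per-frame gaze estimates in screen pixels."""
            points = np.asarray(points, float)
            out = np.empty_like(points)
            half = window // 2
            for i in range(len(points)):
                lo, hi = max(0, i - half), min(len(points), i + half + 1)
                out[i] = np.median(points[lo:hi], axis=0)
            return out

        raw = [(400, 300), (402, 301), (880, 120), (401, 299), (399, 302)]  # one outlier
        print(median_filter_gaze(raw, window=5))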

  13. Gesture Recognition for Human-computer Interaction by Using Neural Networks

    Institute of Scientific and Technical Information of China (English)

    袁景和; 王勇; 常胜江; 张延炘

    2002-01-01

    For the purpose of human-computer interaction (HCI), a visual approach based on gesture recognition is proposed in this paper. The technique essentially includes detection and segmentation, feature extraction, and recognition of a number of gestures that are assigned to control commands. Each processing stage employs neural network methods, such as skin-color detection, principal component analysis (PCA), and clustering-based encoding for recognition. Details of the approach and experimental results are provided. The experimental recognition accuracy is 94%.
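
    A hedged sketch of the PCA stage follows: flattened, segmented hand images are projected onto a few principal components and classified. Random arrays stand in for the images, and a nearest-centroid classifier stands in for the paper's neural-network and clustering components.

        # Sketch of PCA-based gesture features with a stand-in classifier.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import NearestCentroid

        rng = np.random.default_rng(1)
        n_per_class, n_classes, img_size = 30, 4, 32 * 32
        X = np.vstack([rng.normal(loc=c, size=(n_per_class, img_size))
                       for c in range(n_classes)])          # fake segmented hand images
        y = np.repeat(np.arange(n_classes), n_per_class)    # gesture labels

        features = PCA(n_components=10).fit_transform(X)    # low-dimensional code
        clf = NearestCentroid().fit(features, y)
        print("training accuracy:", clf.score(features, y))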

  14. Human Computer Interactions in Next-Generation of Aircraft Smart Navigation Management Systems: Task Analysis and Architecture under an Agent-Oriented Methodological Approach

    Directory of Open Access Journals (Sweden)

    José M. Canino-Rodríguez

    2015-03-01

    Full Text Available The limited efficiency of current air traffic systems will require a next-generation of Smart Air Traffic System (SATS that relies on current technological advances. This challenge means a transition toward a new navigation and air-traffic procedures paradigm, where pilots and air traffic controllers perform and coordinate their activities according to new roles and technological supports. The design of new Human-Computer Interactions (HCI for performing these activities is a key element of SATS. However efforts for developing such tools need to be inspired on a parallel characterization of hypothetical air traffic scenarios compatible with current ones. This paper is focused on airborne HCI into SATS where cockpit inputs came from aircraft navigation systems, surrounding traffic situation, controllers’ indications, etc. So the HCI is intended to enhance situation awareness and decision-making through pilot cockpit. This work approach considers SATS as a system distributed on a large-scale with uncertainty in a dynamic environment. Therefore, a multi-agent systems based approach is well suited for modeling such an environment. We demonstrate that current methodologies for designing multi-agent systems are a useful tool to characterize HCI. We specifically illustrate how the selected methodological approach provides enough guidelines to obtain a cockpit HCI design that complies with future SATS specifications.

  15. What is the value of embedding artificial emotional prosody in human computer interactions? Implications for theory and design in psychological science.

    Directory of Open Access Journals (Sweden)

    Rachel L. C. Mitchell

    2015-11-01

    Full Text Available In computerised technology, artificial speech is becoming increasingly important, and is already used in ATMs, online gaming and healthcare contexts. However, today’s artificial speech typically sounds monotonous, a main reason for this being the lack of meaningful prosody. One particularly important function of prosody is to convey different emotions. This is because successful encoding and decoding of emotions is vital for effective social cognition, which is increasingly recognised in human-computer interaction contexts. Current attempts to artificially synthesise emotional prosody are much improved relative to early attempts, but there remains much work to be done due to methodological problems, lack of agreed acoustic correlates, and lack of theoretical grounding. If the addition of synthetic emotional prosody is not of sufficient quality, it may risk alienating users instead of enhancing their experience. So the value of embedding emotion cues in artificial speech may ultimately depend on the quality of the synthetic emotional prosody. However, early evidence on reactions to synthesised nonverbal cues in the facial modality bodes well. Attempts to implement the recognition of emotional prosody into artificial applications and interfaces have perhaps been met with greater success, but the ultimate test of synthetic emotional prosody will be to critically compare how people react to synthetic emotional prosody vs. natural emotional prosody, at the behavioural, socio-cognitive and neural levels.

  16. Feature selection for speech emotion recognition in Spanish and Basque: on the use of machine learning to improve human-computer interaction.

    Directory of Open Access Journals (Sweden)

    Andoni Arruti

    Full Text Available Study of emotions in human-computer interaction is a growing research area. This paper shows an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish Languages using different methods for feature selection. RekEmozio database was used as the experimental data set. Several Machine Learning paradigms were used for the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied at each phase in order to seek for the most relevant feature subset. The three phases approach was selected to check the validity of the proposed approach. Achieved results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best Machine Learning paradigm in automatic emotion recognition, with all different feature sets, obtaining a mean of 80,05% emotion recognition rate in Basque and a 74,82% in Spanish. In order to check the goodness of the proposed process, a greedy searching approach (FSS-Forward has been applied and a comparison between them is provided. Based on achieved results, a set of most relevant non-speaker dependent features is proposed for both languages and new perspectives are suggested.

  17. Feature selection for speech emotion recognition in Spanish and Basque: on the use of machine learning to improve human-computer interaction.

    Science.gov (United States)

    Arruti, Andoni; Cearreta, Idoia; Alvarez, Aitor; Lazkano, Elena; Sierra, Basilio

    2014-01-01

    Study of emotions in human-computer interaction is a growing research area. This paper shows an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish Languages using different methods for feature selection. RekEmozio database was used as the experimental data set. Several Machine Learning paradigms were used for the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied at each phase in order to seek for the most relevant feature subset. The three phases approach was selected to check the validity of the proposed approach. Achieved results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best Machine Learning paradigm in automatic emotion recognition, with all different feature sets, obtaining a mean of 80,05% emotion recognition rate in Basque and a 74,82% in Spanish. In order to check the goodness of the proposed process, a greedy searching approach (FSS-Forward) has been applied and a comparison between them is provided. Based on achieved results, a set of most relevant non-speaker dependent features is proposed for both languages and new perspectives are suggested.
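
    The greedy FSS-Forward baseline mentioned in both records can be sketched as sequential forward selection scored by a cross-validated instance-based (k-NN) classifier, as below; the data are synthetic stand-ins for acoustic emotion features and the stopping rule is illustrative, not the authors'.

        # Hedged sketch of greedy forward feature selection around a k-NN learner.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 8))                        # 120 utterances x 8 features
        y = (X[:, 2] + 0.5 * X[:, 5] + 0.3 * rng.normal(size=120) > 0).astype(int)

        def forward_selection(X, y, max_features=3):
            selected, remaining = [], list(range(X.shape[1]))
            clf = KNeighborsClassifier(n_neighbors=5)
            while remaining and len(selected) < max_features:
                scores = {f: cross_val_score(clf, X[:, selected + [f]], y, cv=5).mean()
                          for f in remaining}
                best = max(scores, key=scores.get)           # feature with best CV accuracy
                selected.append(best)
                remaining.remove(best)
            return selected

        print("selected features:", forward_selection(X, y))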

  18. Human computer interactions in next-generation of aircraft smart navigation management systems: task analysis and architecture under an agent-oriented methodological approach.

    Science.gov (United States)

    Canino-Rodríguez, José M; García-Herrero, Jesús; Besada-Portas, Juan; Ravelo-García, Antonio G; Travieso-González, Carlos; Alonso-Hernández, Jesús B

    2015-03-04

    The limited efficiency of current air traffic systems will require a next-generation of Smart Air Traffic System (SATS) that relies on current technological advances. This challenge means a transition toward a new navigation and air-traffic procedures paradigm, where pilots and air traffic controllers perform and coordinate their activities according to new roles and technological supports. The design of new Human-Computer Interactions (HCI) for performing these activities is a key element of SATS. However efforts for developing such tools need to be inspired on a parallel characterization of hypothetical air traffic scenarios compatible with current ones. This paper is focused on airborne HCI into SATS where cockpit inputs came from aircraft navigation systems, surrounding traffic situation, controllers' indications, etc. So the HCI is intended to enhance situation awareness and decision-making through pilot cockpit. This work approach considers SATS as a system distributed on a large-scale with uncertainty in a dynamic environment. Therefore, a multi-agent systems based approach is well suited for modeling such an environment. We demonstrate that current methodologies for designing multi-agent systems are a useful tool to characterize HCI. We specifically illustrate how the selected methodological approach provides enough guidelines to obtain a cockpit HCI design that complies with future SATS specifications.

  19. What is the Value of Embedding Artificial Emotional Prosody in Human-Computer Interactions? Implications for Theory and Design in Psychological Science.

    Science.gov (United States)

    Mitchell, Rachel L C; Xu, Yi

    2015-01-01

    In computerized technology, artificial speech is becoming increasingly important, and is already used in ATMs, online gaming and healthcare contexts. However, today's artificial speech typically sounds monotonous, a main reason for this being the lack of meaningful prosody. One particularly important function of prosody is to convey different emotions. This is because successful encoding and decoding of emotions is vital for effective social cognition, which is increasingly recognized in human-computer interaction contexts. Current attempts to artificially synthesize emotional prosody are much improved relative to early attempts, but there remains much work to be done due to methodological problems, lack of agreed acoustic correlates, and lack of theoretical grounding. If the addition of synthetic emotional prosody is not of sufficient quality, it may risk alienating users instead of enhancing their experience. So the value of embedding emotion cues in artificial speech may ultimately depend on the quality of the synthetic emotional prosody. However, early evidence on reactions to synthesized non-verbal cues in the facial modality bodes well. Attempts to implement the recognition of emotional prosody into artificial applications and interfaces have perhaps been met with greater success, but the ultimate test of synthetic emotional prosody will be to critically compare how people react to synthetic emotional prosody vs. natural emotional prosody, at the behavioral, socio-cognitive and neural levels.

  20. Multimodal Transcription of Video: Examining Interaction in Early Years Classrooms

    Science.gov (United States)

    Cowan, Kate

    2014-01-01

    Video is an increasingly popular data collection tool for those undertaking social research, offering a temporal, sequential, fine-grained record which is durable, malleable and sharable. These characteristics make video a valuable resource for researching Early Years classrooms, particularly with regard to the study of children's interaction in…

  1. Affordances in conversational interactions with multimodal QA systems

    NARCIS (Netherlands)

    Niculescu, Andreea; Holzinger, Andreas

    2008-01-01

    Implementation of adequate conversational structures is a key issue in developing successful interactive user interfaces. A way of testing the adequacy of the structures is to prove the correct orientation of each communicative action towards a preceding action. We refer to this orientation leading

  2. Eyeblink Synchrony in Multimodal Human-Android Interaction.

    Science.gov (United States)

    Tatsukawa, Kyohei; Nakano, Tamami; Ishiguro, Hiroshi; Yoshikawa, Yuichiro

    2016-12-23

    As a result of recent progress in communication-robot technology, robots are becoming important social partners for humans. Behavioral synchrony is understood to be an important factor in establishing good human-robot relationships. In this study, we hypothesized that biasing a human's attitude toward a robot changes the degree of synchrony between human and robot. We first examined whether eyeblinks were synchronized between a human and an android in face-to-face interaction and found that human listeners' eyeblinks were entrained to the android speaker's eyeblinks. This eyeblink synchrony disappeared when the android speaker spoke while looking away from the human listeners, but was enhanced when the human participants listened to the speaking android while touching the android's hand. These results suggest that eyeblink synchrony reflects a qualitative state in human-robot interactions.

  3. A Study on Potential of Integrating Multimodal Interaction into Musical Conducting Education

    CERN Document Server

    Siang, Gilbert Phuah Leong; Yong, Pang Yee

    2010-01-01

    With the rapid development of computer technology, computer music has begun to appear in laboratories, and its potential uses are gradually increasing. The purpose of this paper is to analyze the possibility of integrating multimodal interaction, such as vision-based hand gestures and speech, into musical conducting education. To this end, the paper reviews related research and traditional musical conducting education, and six musical conductors were interviewed about their experience of learning and teaching conducting. These interviews are analyzed to show the syllabus and the focus of musical conducting education for beginners.

  4. Crossover Method for Interactive Genetic Algorithms to Estimate Multimodal Preferences

    Directory of Open Access Journals (Sweden)

    Misato Tanaka

    2013-01-01

    We apply an interactive genetic algorithm (iGA) to generate product recommendations. iGAs search for a single optimum point based on a user's Kansei through interaction between the user and the machine. However, especially in the domain of product recommendations, there may be numerous optimum points. The purpose of this study is therefore to develop a new iGA crossover method that concurrently searches for multiple optimum points corresponding to multiple user preferences. The proposed method estimates the locations of the optimum areas with a clustering method and then searches for the maximum values of each area with a probabilistic model. To confirm the effectiveness of this method, two experiments were performed. In the first, a pseudo-user operated an experimental system that implemented the proposed and conventional methods, and the solutions obtained were evaluated against a set of pseudo multiple preferences. This experiment showed that, when there are multiple preferences, the proposed method searches faster and more diversely than the conventional one. The second experiment was a subjective experiment, which showed that the proposed method was able to search concurrently for more preferences when subjects had multiple preferences.
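
    The abstract describes the crossover only at a high level (estimate the preferred regions by clustering, then search around them probabilistically). The sketch below is a minimal illustration of that idea and not the authors' algorithm; the above-average rating filter, the Gaussian sampling and all parameter values are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def multimodal_crossover(population, ratings, n_clusters=3, per_cluster=5, sigma=0.1):
            """Cluster highly rated genomes, then sample offspring around each cluster centre."""
            population = np.asarray(population, dtype=float)
            ratings = np.asarray(ratings, dtype=float)
            preferred = population[ratings >= ratings.mean()]   # crude estimate of preferred regions
            k = min(n_clusters, len(preferred))
            centres = KMeans(n_clusters=k, n_init=10).fit(preferred).cluster_centers_
            # Probabilistic search near each estimated optimum area.
            offspring = [c + np.random.normal(0.0, sigma, size=c.shape)
                         for c in centres for _ in range(per_cluster)]
            return np.vstack(offspring)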

  5. Handbook of human computation

    CERN Document Server

    Michelucci, Pietro

    2013-01-01

    This volume addresses the emerging area of human computation. The chapters, written by leading international researchers, explore existing and future opportunities to combine the respective strengths of both humans and machines in order to create powerful problem-solving capabilities. The book bridges scientific communities, capturing and integrating the unique perspective and achievements of each. It coalesces contributions from industry and across related disciplines in order to motivate, define, and anticipate the future of this exciting new frontier in science and cultural evolution.

  6. Structural optimization using human-computer interaction for an aerospace assembly

    Institute of Scientific and Technical Information of China (English)

    刘磊; 刘洪英; 马爱军; 胡清华; 冯雪梅; 石蒙; 董睿; 赵亚雄

    2016-01-01

    To solve the problem of optimizing a complicated structure under dynamic response constraints, a human-computer interaction method is proposed that combines the respective advantages of human designers and computers in structural optimization, and it is applied to the structural optimization of an aerospace assembly. After optimization, the assembly exhibits a remarkable performance improvement: with the mass remaining essentially unchanged, the first overall vibration frequency increases by 41.1% and the maximum frequency-response acceleration at the monitored points drops by 24.3% under the sinusoidal vibration test load conditions. The result satisfies the requirements of the optimal design and demonstrates the effectiveness and feasibility of the method.

  7. Multimodal Study of Adult-Infant Interaction: A Review of Its Origins and Its Current Status

    Directory of Open Access Journals (Sweden)

    Soledad Carretero Pérez

    An interpretative review of research on adult-infant interactions involving the analysis of movement behaviors is presented, systematically linking previous studies to current research on the subject. Forty-two articles analyzing the dyad's interactive movement in the period 1970-2015 were found; twelve papers were excluded, retaining only those that studied the phenomenon in the baby's first year of life. The results revealed that movement was a central topic in early interaction studies in the 1970s. In the 1980s and 1990s its study was marginal, and it is currently resurging under the embodiment perspective. The conceptual framework and research methods used in the pioneering work are presented, and the thematic foci shared with current research are highlighted. Thus, essential keys are provided for the updated study of early interactions from a multimodal perspective.

  8. Emotional pictures and sounds: A review of multimodal interactions of emotion cues in multiple domains

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2014-12-01

    In everyday life, multiple sensory channels jointly trigger emotional experiences and one channel may alter processing in another channel. For example, seeing an emotional facial expression and hearing the voice's emotional tone will jointly create the emotional experience. This example, where auditory and visual input is related to social communication, has gained considerable attention from researchers. However, interactions of visual and auditory emotional information are not limited to social communication but extend to much broader contexts including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended to consider an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies in which emotional sounds and interactions with complex pictures of scenes were investigated. In addition to behavioral studies, we focus on neuroimaging, electrophysiological and peripheral physiological findings. Furthermore, we integrate these findings and identify similarities and differences. We conclude with suggestions for future research.

  9. Research on the Design of Human-Computer Interaction Interfaces for Children's Learning Games

    Institute of Scientific and Technical Information of China (English)

    杨明朗; 郭峰; 刘贺

    2012-01-01

    Prompted by the rapid emergence of children's learning games, this paper analyzes the current state of human-computer interaction interface design for such games and discusses its main problems: designs that do not match children's cognitive habits, an overemphasis on visual effects at the expense of sound, and poor interactivity. On this basis, it proposes that the design of human-computer interaction interfaces for children's learning games should proceed from the graphical user interface, the voice user interface and the tangible user interface, in search of the interface that best fits children's cognitive characteristics.

  10. Multimode Kapitza-Dirac interferometer on Bose-Einstein condensates with atomic interactions

    Science.gov (United States)

    He, Tianchen; Niu, Pengbin

    2017-03-01

    The dynamics of multimode interferometers for Bose-Einstein condensates (BECs) with atomic interactions confined to a harmonic trap is investigated. At the initial time t = 0, several spatially addressable wave packets (modes) with different momenta are created by the first Kapitza-Dirac pulse. These modes are coherently recombined by the harmonic potential in the presence of atomic interactions. The second Kapitza-Dirac pulse splits the evolved modes again and separates them along different paths. The signal-to-noise ratio is numerically calculated from the Fisher information and the Cramér-Rao lower bound. We find that small atomic interactions decrease the measurement accuracy of current atom interferometers when measuring the gravitational acceleration; their impact on measurement precision can be reduced by adjusting the Kapitza-Dirac pulse strength.
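
    For reference, the precision estimate mentioned above rests on the standard Fisher-information and Cramér-Rao relations (textbook definitions, not results of this paper); a minimal LaTeX statement, assuming a measurement outcome distribution p(x|θ) and ν independent repetitions:

        F(\theta) = \sum_{x} \frac{1}{p(x \mid \theta)} \left( \frac{\partial p(x \mid \theta)}{\partial \theta} \right)^{2},
        \qquad
        \Delta\theta \;\ge\; \frac{1}{\sqrt{\nu \, F(\theta)}}

    Here θ is the estimated parameter (e.g. the gravitational acceleration); a larger Fisher information gives a tighter bound on the achievable uncertainty.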

  11. Towards an intelligent framework for multimodal affective data analysis.

    Science.gov (United States)

    Poria, Soujanya; Cambria, Erik; Hussain, Amir; Huang, Guang-Bin

    2015-03-01

    An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook every day. In order to cope with the growth of this multimodal data, there is an urgent need to develop an intelligent multimodal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information extraction agent, which infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such as e-learning, e-health, automatic video content tagging and human-computer interaction. In particular, the developed intelligent agent adopts an ensemble feature extraction approach, exploiting the joint use of tri-modal (text, audio and video) features to enhance the multimodal information extraction process. In preliminary experiments on the eNTERFACE dataset, the proposed multimodal system achieves an accuracy of 87.95%, outperforming the best state-of-the-art system by more than 10%, or in relative terms, a 56% reduction in error rate.
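
    As an illustration of joint use of tri-modal features, the sketch below shows plain feature-level fusion: per-sample text, audio and video feature vectors are concatenated and a single classifier is trained on the joint representation. The classifier choice and the function name are assumptions for illustration, not the paper's agent.

        import numpy as np
        from sklearn.svm import SVC

        def train_trimodal_classifier(text_feats, audio_feats, video_feats, labels):
            """Feature-level fusion: concatenate the three modality feature matrices column-wise."""
            X = np.hstack([np.asarray(text_feats),
                           np.asarray(audio_feats),
                           np.asarray(video_feats)])
            clf = SVC(kernel="rbf", probability=True)   # any standard classifier would do here
            clf.fit(X, labels)
            return clf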

  12. Code Communication Psychology of Internet Users from the Perspective of Human-Computer Interaction

    Institute of Scientific and Technical Information of China (English)

    方艳; 明珠; 陈佩

    2016-01-01

    Human-Computer Interaction (HCI) is the study of people, computers and the mutual influences between them. Internet users communicate with one another through symbolic codes, via technological platforms and means; these exchanges carry the psychological imprint of both sides and are an expression of communication ethics. This paper examines the psychology behind Internet users' communication through text, images, and combined text-image codes from the perspective of human-computer interaction.

  13. Discussion on the Technology and Development of Human-Computer Interaction in Submarine Command and Control Systems

    Institute of Scientific and Technical Information of China (English)

    程飞

    2013-01-01

    This paper introduces and analyzes the development and technical features of human-computer interaction in foreign submarine command and control (C2) systems, and then briefly reviews the human-computer interaction of domestic submarine C2 systems. With a view to meeting the operational requirements of future submarines, a forward-looking analysis of the development of human-computer interaction for submarine C2 systems is given.

  14. The Study on the Interactive Effect of Multimodality and Meta Cognition upon College Students' Listening Ability

    Institute of Scientific and Technical Information of China (English)

    崔中原

    2015-01-01

    Multimodality and metacognition are hot topics in SLA and FLT research. This paper integrates the theoretical frameworks of multimodality and metacognition, proposing that a multimodal, integrative and metacognitive process approach can improve learners' listening performance. On this basis, the paper closes with some implications for college English teaching and for further research on listening instruction.

  15. Design of 3D Virtual Manipulatives Supported by Natural Human-Computer Interaction

    Institute of Scientific and Technical Information of China (English)

    袁丽一; 张宝运

    2012-01-01

    Current learning software and virtual manipulatives suffer from unnatural interaction, poor representation of abstract concepts and a lack of intelligence. Using natural human-computer interaction technology, and taking horizontal projectile motion in physics instruction as an example, a 3D virtual manipulative was designed that can simulate real teaching aids. Gesture recognition is used to realize natural human-computer interaction, while virtual reality technology is used to build the manipulative itself. A 3D virtual manipulative supported by natural human-computer interaction offers natural interaction, a strong sense of space and strong abstract expressiveness, which reflects the broad application prospects of natural human-computer interaction in education.

  16. A System for Multimodal Interaction with Kinect-Enabled Virtual Windows

    Directory of Open Access Journals (Sweden)

    Ana M. Bernardos

    2015-11-01

    Commercial off-the-shelf (COTS) gaming devices such as Kinect are demonstrating great potential beyond their initial purpose. In particular, when integrated within the environment or as part of smart objects, peripheral gaming COTS may facilitate the definition of novel interaction methods that are particularly applicable to smart-space service concepts. In this direction, this paper describes a system prototype that makes it possible to deliver multimodal interaction with the media contents in a Virtual Window. Using a Kinect device, the Interactive Window adjusts the video clipping to the real-time perspective of the user, who can move freely within the sensor coverage area. On the clipped video, the user is able to select objects by pointing at meaningful image sections and to initiate actions related to them. Voice orders may also complete the interaction when necessary. Although implemented for smart spaces, the service concept can also be applied to learning, remote control processes or teleconferencing.

  17. A 3D character animation engine for multimodal interaction on mobile devices

    Science.gov (United States)

    Sandali, Enrico; Lavagetto, Fabio; Pisano, Paolo

    2005-03-01

    Talking virtual characters are graphical simulations of real or imaginary persons that enable natural and pleasant multimodal interaction with the user by means of voice, eye gaze, facial expression and gestures. This paper presents an implementation of a 3D virtual character animation and rendering engine, compliant with the MPEG-4 standard, running on Symbian-based smartphones. Real-time animation of virtual characters on mobile devices is a challenging task, since many limitations must be taken into account with respect to processing power, graphics capabilities, disk space and execution memory size. The proposed optimization techniques make it possible to overcome these issues, guaranteeing smooth and synchronous animation of facial expressions and lip movements on mobile phones such as Sony-Ericsson's P800 and Nokia's 6600. The animation engine is specifically targeted at the development of new "Over The Air" services based on embodied conversational agents, with applications in entertainment (interactive story tellers), navigation aid (virtual guides to web sites and mobile services), news casting (virtual newscasters) and education (interactive virtual teachers).

  18. Numerical investigation of nonlinear interactions between multimodal guided waves and delamination in composite structures

    Science.gov (United States)

    Shen, Yanfeng

    2017-04-01

    This paper presents a numerical investigation of the nonlinear interactions between multimodal guided waves and delamination in composite structures. The elastodynamic wave equations for anisotropic composite laminate were formulated using an explicit Local Interaction Simulation Approach (LISA). The contact dynamics was modeled using the penalty method. In order to capture the stick-slip contact motion, a Coulomb friction law was integrated into the computation procedure. A random gap function was defined for the contact pairs to model distributed initial closures or openings to approximate the nature of rough delamination interfaces. The LISA procedure was coded using the Compute Unified Device Architecture (CUDA), which enables the highly parallelized computation on powerful graphic cards. Several guided wave modes centered at various frequencies were investigated as the incident wave. Numerical case studies of different delamination locations across the thickness were carried out. The capability of different wave modes at various frequencies to trigger the Contact Acoustic Nonlinearity (CAN) was studied. The correlation between the delamination size and the signal nonlinearity was also investigated. Furthermore, the influence from the roughness of the delamination interfaces was discussed as well. The numerical investigation shows that the nonlinear features of wave delamination interactions can enhance the evaluation capability of guided wave Structural Health Monitoring (SHM) system. This paper finishes with discussion, concluding remarks, and suggestions for future work.
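
    The contact model named in the abstract (penalty method plus Coulomb friction) is usually written as follows; this is the generic textbook form, and the paper's exact formulation, including the random gap function, may differ:

        F_N = \begin{cases} k_p \, \delta, & \delta > 0 \quad (\text{faces interpenetrating}) \\ 0, & \delta \le 0 \end{cases},
        \qquad
        \lvert F_T \rvert \le \mu \, F_N

    Here δ is the interpenetration of the delamination faces, k_p the penalty stiffness, F_N and F_T the normal and tangential contact forces, and μ the friction coefficient that governs the stick-slip motion.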

  19. Benchmarking a Multimodal and Multiview and Interactive Dataset for Human Action Recognition.

    Science.gov (United States)

    Liu, An-An; Xu, Ning; Nie, Wei-Zhi; Su, Yu-Ting; Wong, Yongkang; Kankanhalli, Mohan

    2016-07-18

    Human action recognition is an active research area in both the computer vision and machine learning communities. In the past decades, the machine learning problem has evolved from the conventional single-view learning problem to cross-view learning, cross-domain learning and multi-task learning, and a large number of algorithms have been proposed in the literature. Despite the large number of action recognition datasets, most of them are designed for a subset of these four learning problems, and comparisons between algorithms can be further limited by variances within datasets, experimental configurations, and other factors. To the best of our knowledge, there exists no dataset that allows concurrent analysis of the four learning problems. In this paper, we introduce a novel multimodal, multiview and interactive (M²I) dataset, which is designed for the evaluation of human action recognition methods under all four scenarios. This dataset consists of 1760 action samples from 22 action categories, including nine person-person interactive actions and 13 person-object interactive actions. We systematically benchmark state-of-the-art approaches on the M²I dataset for all four learning problems, evaluating 13 approaches with nine popular feature and descriptor combinations. Our comprehensive analysis demonstrates that the M²I dataset is challenging due to significant intraclass and view variations and multiple similar action categories, and that it provides a solid foundation for the evaluation of existing state-of-the-art algorithms.

  20. Glycosylated Conductive Polymer: A Multimodal Biointerface for Studying Carbohydrate-Protein Interactions.

    Science.gov (United States)

    Zeng, Xiangqun; Qu, Ke; Rehman, Abdul

    2016-09-20

    Carbohydrate-protein interactions occur through glycoproteins, glycolipids, or polysaccharides displayed on the cell surface with lectins. However, studying these interactions is challenging because of the complexity and heterogeneity of the cell surface, the inherent structural complexity of carbohydrates, and the typically weak affinities of the binding reactions between the lectins and monovalent carbohydrates. The lack of chromophores and fluorophores in carbohydrate structures often drives such investigations toward fluorescence labeling techniques, which usually require tedious and complex synthetic work to conjugate fluorescent tags with additional risk of altering the reaction dynamics. Probing these interactions directly on the cell surface is even more difficult since cells could be too fragile for labeling or labile dynamics could be affected by the labeled molecules that may interfere with the cellular activities, resulting in unwanted cell responses. In contrast, label-free biosensors allow real-time monitoring of carbohydrate-protein interactions in their natural states. A prerequisite, though, for this strategy to work is to mimic the coding information on potential interactions of cell surfaces onto different biosensing platforms, while the complementary binding process can be transduced into a useful signal noninvasively. Through carbohydrate self-assembled monolayers and glycopolymer scaffolds, the multivalency of the naturally existing simple and complex carbohydrates can be mimicked and exploited with label-free readouts (e.g., optical, acoustic, mechanical, electrochemical, and electrical sensors), yet such inquiries reflect only limited aspects of complicated biointeraction processes due to the unimodal transduction. In this Account, we illustrate that functionalized glycosylated conductive polymer scaffolds are the ideal multimodal biointerfaces that not only simplify the immobilization process for surface fabrication via electrochemical

  1. Multi-Modal, Multi-Touch Interaction with Maps in Disaster Management Applications

    Directory of Open Access Journals (Sweden)

    V. Paelke

    2012-07-01

    Multi-touch interaction has become popular in recent years and impressive advances in technology have been demonstrated, with the presentation of digital maps as a common scenario. However, most existing systems are really technology demonstrators and have not been designed with real applications in mind. A critical factor in the management of disaster situations is access to current and reliable data. New sensors and data acquisition platforms (e.g. satellites, UAVs, mobile sensor networks) have improved the supply of spatial data tremendously. However, in many cases this data is not well integrated into current crisis management systems and the capabilities to analyze and use it lag behind sensor capabilities. Therefore, it is essential to develop techniques that allow the effective organization, use and management of heterogeneous data from a wide variety of data sources. Standard user interfaces are not well suited to providing this information to crisis managers. Especially in dynamic situations, conventional cartographic displays and mouse-based interaction techniques fail to address the need to review a situation rapidly and act on it as a team. The development of novel interaction techniques like multi-touch and tangible interaction, in combination with large displays, provides a promising base technology for giving crisis managers an adequate overview of the situation and for sharing relevant information with other stakeholders in a collaborative setting. However, design expertise on the use of such techniques in interfaces for real-world applications is still very sparse. In this paper we report on interdisciplinary research with a user- and application-centric focus to establish real-world requirements, to design new multi-modal mapping interfaces, and to validate them in disaster management applications. Initial results show that tangible and pen-based interaction are well suited to provide an intuitive and visible way to

  2. Facial Emotion Recognition Using Context Based Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Priya Metri

    2011-12-01

    Emotions play a crucial role in person-to-person interaction. In recent years, there has been growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions is desirable for the computer in several applications, especially by observing facial expressions. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user's emotional expressions. We present an approach to emotion recognition from facial expression and hand and body posture. Our model uses a multimodal emotion recognition system in which two different models are used for facial expression recognition and for hand and body posture recognition, and the results of both classifiers are then combined using a third classifier that gives the resulting emotion. The multimodal system gives more accurate results than a single-modality or bimodal system.
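
    A minimal sketch of the decision-level fusion described above: two per-modality classifiers whose class-probability outputs feed a third, fusing classifier. The library, classifier types and function name are illustrative assumptions; in practice the fusing classifier would be trained on held-out (cross-validated) probabilities rather than on training-set outputs.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression

        def train_late_fusion(face_feats, posture_feats, labels):
            """Train one classifier per modality and a third classifier that fuses their decisions."""
            face_clf = RandomForestClassifier(n_estimators=100).fit(face_feats, labels)
            posture_clf = RandomForestClassifier(n_estimators=100).fit(posture_feats, labels)
            stacked = np.hstack([face_clf.predict_proba(face_feats),
                                 posture_clf.predict_proba(posture_feats)])
            fusion_clf = LogisticRegression(max_iter=1000).fit(stacked, labels)
            return face_clf, posture_clf, fusion_clf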

  3. Introducing the Interactive Model for the Training of Audiovisual Translators and Analysis of Multimodal Texts

    Directory of Open Access Journals (Sweden)

    Pietro Luigi Iaia

    2015-07-01

    This paper introduces the ‘Interactive Model’ of audiovisual translation developed in the context of my PhD research on the cognitive-semantic, functional and socio-cultural features of the Italian-dubbing translation of a corpus of humorous texts. The Model is based on two interactive macro-phases: ‘Multimodal Critical Analysis of Scripts’ (MuCrAS) and ‘Multimodal Re-Textualization of Scripts’ (MuReTS). Its construction and application are justified by a multidisciplinary approach to the analysis and translation of audiovisual texts, focusing on the linguistic and extralinguistic dimensions affecting both the reception of source texts and the production of target ones (Chaume 2004; Díaz Cintas 2004). By resorting to Critical Discourse Analysis (Fairclough 1995, 2001), to a process-based approach to translation and to a socio-semiotic analysis of multimodal texts (van Leeuwen 2004; Kress and van Leeuwen 2006), the Model is meant to be applied to the training of audiovisual translators and discourse analysts in order to help them enquire into the levels of pragmalinguistic equivalence between the source and the target versions. Finally, a practical application is discussed, detailing the Italian rendering of a comic sketch from the American late-night talk show Conan.

  4. Interactive Feature Space Explorer© for multi-modal magnetic resonance imaging.

    Science.gov (United States)

    Özcan, Alpay; Türkbey, Barış; Choyke, Peter L; Akin, Oguz; Aras, Ömer; Mun, Seong K

    2015-07-01

    The wider information content of multi-modal biomedical imaging is advantageous for the detection, diagnosis and prognosis of various pathologies. However, the necessity of evaluating a large number of images might hinder these advantages and reduce efficiency. Herein, a new computer-aided approach based on the utilization of feature space (FS), with reduced reliance on multiple image evaluations, is proposed for research and routine clinical use. The method introduces the physician's experience into the discovery process of FS biomarkers to address biological complexity, e.g., disease heterogeneity. This, in turn, elucidates relevant biophysical information which would not be available when automated algorithms are utilized. Accordingly, a prototype platform was designed and built for interactively investigating the features and their corresponding anatomic loci in order to identify pathologic FS regions. While the platform might be beneficial for decision support generally, and specifically for evaluating outlier cases, it is also potentially suitable for accurate ground-truth determination in FS for algorithm development. Initial assessments conducted on two different pathologies from two different institutions provided a valuable biophysical perspective. Investigation of the prostate magnetic resonance imaging data located a potential aggressiveness biomarker for prostate cancer, and preliminary findings on renal cell carcinoma imaging data demonstrated potential for characterizing disease subtypes in the FS. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Interaction of the topological phase and the Magnus effect in a multimode fiber

    Science.gov (United States)

    Aleksiev, A.; Modnikova, E.; Fadeyeva, Tatyana A.; Lapayeva, S. N.; Volyar, Alexander V.

    1996-04-01

    The phenomenon of topological birefringence of light in a multimode fiber has been discussed in earlier papers. The physical mechanism of the topological birefringence, as expected earlier, is determined by the interaction of two effects, namely the topological Berry phase and the optical Magnus effect. For a square distribution of the refractive index in the fiber cross section, the near-meridional local waves propagate along ray trajectories represented by strongly stretched ellipses of an elliptical helix. The plane formed by the fiber axis and the major axis of a ray's elliptical helix rotates within the fiber, and the hand of this rotation is defined by the sign of the wave's circular polarization. If linearly polarized light enters the fiber input, its left-hand polarized component rotates the major axis of the elliptical helix counterclockwise, while the right-hand polarized component rotates it clockwise. Since the angular rates of these rotations differ slightly from one another, the local waves with opposite circulation of polarization should subsequently recombine into a single linearly polarized local wave. However, if the local wave trajectory does not lie in a plane, then it is a space curve.

  6. Research on Human-Computer Interaction Design for Children's Smart Toys

    Institute of Scientific and Technical Information of China (English)

    吴国荣; 王微

    2012-01-01

    Inspired by the way human-computer interaction disseminates and exchanges information in everyday life, this paper analyzes the functions and forms of toys in the electronic age. Combining this with the widespread everyday use of machines, it then reviews the human-computer interface designs developed in recent years in the fields of computer software and product design. Through new research areas such as voice recognition, physical touch, image interaction and digital interaction, interactive smart toys can play an important role at every stage of children's development, subtly combining education with entertainment and supporting healthy growth.

  7. Towards Multimodal Content Representation

    CERN Document Server

    Bunt, Harry

    2009-01-01

    Multimodal interfaces, combining the use of speech, graphics, gestures, and facial expressions in input and output, promise to provide new possibilities to deal with information in more effective and efficient ways, supporting for instance: - the understanding of possibly imprecise, partial or ambiguous multimodal input; - the generation of coordinated, cohesive, and coherent multimodal presentations; - the management of multimodal interaction (e.g., task completion, adapting the interface, error prevention) by representing and exploiting models of the user, the domain, the task, the interactive context, and the media (e.g. text, audio, video). The present document is intended to support the discussion on multimodal content representation, its possible objectives and basic constraints, and how the definition of a generic representation framework for multimodal content representation may be approached. It takes into account the results of the Dagstuhl workshop, in particular those of the informal working group...

  8. Application of a pH responsive multimodal hydrophobic interaction chromatography medium for the analysis of glycosylated proteins.

    Science.gov (United States)

    Kallberg, K; Becker, K; Bülow, L

    2011-02-04

    Protein glycosylation has significant effects on the structure and function of proteins. The efficient separation and enrichment of glycoproteins from complex biological samples is one key aspect, and represents a major bottleneck, of glycoproteome research. In this paper, we have explored pH-responsive multimodal hydrophobic interaction chromatography (HIC) to separate glycosylated from non-glycosylated forms of proteins. Three different proteins (ribonuclease, invertase and IgG) have been examined and different glycoforms have been identified. The medium itself shows strong responsiveness to small variations in pH, which makes it possible to fine-tune the chromatographic conditions according to the properties of the protein being isolated. Optimal glycoprotein separation was obtained at pH 4. The pH-responsive multimodal HIC medium, in contrast to conventional HIC media, is also able to resolve contaminating DNA. Copyright © 2010 Elsevier B.V. All rights reserved.

  9. Multimodal approaches for emotion recognition: a survey

    Science.gov (United States)

    Sebe, Nicu; Cohen, Ira; Gevers, Theo; Huang, Thomas S.

    2005-01-01

    Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing: emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user's emotional and attentional expressions. We present the basic research in the field and recent advances in emotion recognition from facial, voice, and physiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.

  10. Human-Computer Interaction Design for the Flipped Classroom Based on Cognitive Coupling States

    Institute of Scientific and Technical Information of China (English)

    陈凤燕; 朱旭; 程仁贵; 孟世敏

    2014-01-01

    The difficulty of the flipped classroom is keeping students learning persistently and effectively in an unsupervised environment. The unsupervised learning environment of the flipped classroom is a human-computer situation. From the perspective of human-computer interaction, persistent learning requires that students immerse themselves in the interactive situation so as to achieve effective and deep learning, a state referred to here as the Cognitive Coupling State (CCS). The CCS is a match between the cognitive structure, personality and ability of students and the learning content, situation and track designed by teachers; students and machines depend on each other and form an efficient learning unit. Conceptually, designing for cognitive coupling requires understanding students' psychological patterns and processes and letting the computer act as a teaching assistant that guides students' learning; in practice, it requires collecting human-computer interaction data, observing the learning process, creating coupling situations and modulating the cognitive process. The key points of human-computer cognitive coupling design in the flipped classroom are the structure of teaching resources, the cognitive thinking process, online guided-learning interaction, the forms in which learning is visualized, cognitive big-data processing techniques, and methods for implementing evidence-based teaching. A flipped classroom based on the human-computer cognitive coupling state puts the ideas of digitized, evidence-based education into practice and is an attempt at the deep integration of information technology and education.

  11. A hardware and software architecture to deal with multimodal and collaborative interactions in multiuser virtual reality environments

    Science.gov (United States)

    Martin, P.; Tseu, A.; Férey, N.; Touraine, D.; Bourdot, P.

    2014-02-01

    Most advanced immersive devices provide a collaborative environment in which several users have their own head-tracked stereoscopic point of view. Combined with commonly used interactive features such as voice and gesture recognition, 3D mice, haptic feedback, and spatialized audio rendering, these environments should faithfully reproduce a real context. However, even if many studies have been carried out on multimodal systems, we are far from definitively solving the issue of multimodal fusion, which consists in merging multimodal events coming from users and devices into interpretable commands performed by the application. Multimodality and collaboration have often been studied separately, despite the fact that these two aspects share interesting similarities. We discuss how we address this problem through the design and implementation of a supervisor that is able to deal with both multimodal fusion and collaborative aspects. The aim of this supervisor is to merge users' input from virtual reality devices in order to control immersive multi-user applications. We deal with this problem from a practical point of view, because the main requirements of the supervisor were defined according to an industrial task proposed by our automotive partner, which has to be performed with multimodal and collaborative interactions in a co-located multi-user environment. In this task, two co-located workers of a virtual assembly chain have to cooperate to insert a seat into the bodywork of a car, using haptic devices to feel collisions and to manipulate objects, and combining speech recognition and two-hand gesture recognition as multimodal instructions. Besides the architectural aspects of this supervisor, we describe how we ensure the modularity of our solution, which can be applied to different virtual reality platforms, interactive contexts and virtual contents. A virtual context observer included in this supervisor was especially designed to be independent of the

  12. Real-time interactive tractography analysis for multimodal brain visualization tool: MultiXplore

    Science.gov (United States)

    Bakhshmand, Saeed M.; de Ribaupierre, Sandrine; Eagleson, Roy

    2017-03-01

    Most debilitating neurological disorders can have anatomical origins. Yet unlike other body organs, the anatomy alone cannot easily provide an understanding of brain functionality. In fact, addressing the challenge of linking structural and functional connectivity remains at the frontiers of neuroscience. Aggregating multimodal neuroimaging datasets may be critical for developing theories that span brain functionality, global neuroanatomy and internal microstructures. Functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) are the main techniques employed to investigate the brain under normal and pathological conditions. fMRI records the blood oxygenation level of the grey matter (GM), whereas DTI is able to reveal the underlying structure of the white matter (WM). Global brain activity is assumed to be an integration of GM functional hubs and the WM neural pathways that connect them. In this study we developed and evaluated a two-phase algorithm, employed in a 3D interactive connectivity visualization framework, that helps to accelerate the clustering of virtual neural pathways. The algorithm makes use of an index-based membership array formed from a whole-brain tractography file and a corresponding parcellated brain atlas. We demonstrate the efficiency of the algorithm by measuring the times required to extract a variety of fiber clusters, chosen so as to resemble all probable sizes of output data files that the algorithm will generate. The proposed algorithm facilitates real-time visual inspection of neuroimaging data to further discovery of the structure-function relationship of brain networks.
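
    The abstract does not give the membership array's exact layout; the sketch below shows one plausible reading under stated assumptions (a labeled 3D atlas volume, a caller-supplied mm-to-voxel mapping, and endpoint-based membership), where the pre-built index turns bundle extraction into a dictionary lookup.

        import numpy as np
        from collections import defaultdict

        def build_membership_index(streamlines, atlas, mm_to_voxel):
            """Map each pair of atlas region labels to the IDs of streamlines ending in those regions."""
            index = defaultdict(list)
            for fid, fiber in enumerate(streamlines):          # fiber: (n_points, 3) array in mm
                ends = np.vstack([fiber[0], fiber[-1]])
                vox = np.round(mm_to_voxel(ends)).astype(int)  # hypothetical mm -> voxel transform
                labels = tuple(sorted(int(atlas[tuple(v)]) for v in vox))
                index[labels].append(fid)
            return index

        # Extracting the bundle connecting atlas regions 12 and 34 then reduces to:
        #     bundle_ids = index[(12, 34)]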

  13. project SENSE : multimodal simulation with full-body real-time verbal and nonverbal interactions

    NARCIS (Netherlands)

    Miri, Hossein; Kolkmeier, Jan; Taylor, Paul Jonathon; Poppe, Ronald; Heylen, Dirk; Poppe, Ronald; Meyer, John-Jules; Veltkamp, Remco; Dastani, Mehdi

    2016-01-01

    This paper presents a multimodal simulation system, project-SENSE, that combines virtual reality and full-body motion capture technologies with real-time verbal and nonverbal communication. We introduce the technical setup and employed hardware and software of a first prototype. We discuss the

  14. Research on Interface Design Based on Human-Computer Interaction

    Institute of Scientific and Technical Information of China (English)

    孙扬; 浦云明; 黄淑贞

    2012-01-01

    Taking the application of interaction design to computer application software interfaces as its starting point, this paper analyzes and summarizes the principles of interaction design and the construction of prototypes in interaction design. It then analyzes and improves the existing 91Note interaction design scheme according to the principles of usability and ease of use, showing how interaction design principles are applied in practice. It also discusses some non-technical issues in improving the quality of software products so that they serve users better.

  15. Application of Human-Computer Interaction Technology in Modern Exhibition Design

    Institute of Scientific and Technical Information of China (English)

    方芳

    2014-01-01

    As new forms of interaction fill human living space, people are able to understand and explore a whole new world of the senses. By designing a variety of interactive media facilities and applying these new forms of interaction to modern exhibition spaces, audiences can appreciate, savour and even discuss the "extension" and "connotation" of exhibits by seeing, hearing and touching. This makes exhibitions lively and interesting, arouses the audience's sense of novelty and excitement, and effectively improves their ability to read and understand the exhibits.

  16. Research on Human-Computer Interaction Methods in Game Applications Based on Skin Color Recognition

    Institute of Scientific and Technical Information of China (English)

    闫玉宝; 夏露; 侯宪锋

    2012-01-01

    Using computer vision technology to realize human-computer interaction in games, and so improve their entertainment value, is a current research hotspot at home and abroad. This paper proposes applying skin-color detection to game interaction. Skin color is sampled through the camera, and a skin-color model is established by analyzing the samples with statistical methods; to reduce the influence of the background on color recognition, the RGB model is converted to the HSV model. Background-difference threshold segmentation and the CamShift algorithm are used for hand tracking to obtain the position of the hand, which is then transmitted as a signal to the game character in order to control the game. In VC++ 6.0, using the open-source libraries OpenCV and OpenGL, an experimental vision-based game platform for an ordinary camera was built, in which the trajectory of hand gestures controls the emission direction of a particle system. The experimental results show that tracking hand gestures through skin color and using them to control the movement of game characters gives good real-time performance and interactivity.
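
    A minimal sketch of the skin-color tracking loop described above, written against the OpenCV Python API. The HSV thresholds, the initial search window and the use of the binary skin mask as the CamShift probability image are assumptions for illustration, not the paper's calibrated values.

        import cv2
        import numpy as np

        LOWER_SKIN = np.array([0, 40, 60], dtype=np.uint8)      # assumed HSV lower bound for skin
        UPPER_SKIN = np.array([25, 180, 255], dtype=np.uint8)   # assumed HSV upper bound for skin

        def track_hand(camera_index=0, initial_window=(200, 150, 120, 120)):
            """Yield the (x, y) centre of a skin-colored region tracked with CamShift, frame by frame."""
            cap = cv2.VideoCapture(camera_index)
            track_window = initial_window
            term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
                mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)   # binary skin-color mask
                rot_rect, track_window = cv2.CamShift(mask, track_window, term_crit)
                (cx, cy), _, _ = rot_rect
                yield (cx, cy)                                    # pass this position to the game logic
            cap.release()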

  17. Smart TV Human-Computer Interaction System Based on Multi-Screen Collaboration

    Institute of Scientific and Technical Information of China (English)

    黄兴旺; 孙鹏; 韩锐; 刘春梅

    2016-01-01

    The smart TV human-computer interaction system based on multi-screen collaboration defines a communication mechanism by which mobile devices remotely control, and enter text on, a smart TV, so as to solve the problem that operating a smart TV is inflexible and, in particular, to improve the user experience of text input. This mechanism is stable and highly scalable, and is suitable for remote control and text input requirements on different platforms. Based on the UPnP protocol, the system implements fast connection between mobile devices and the smart TV; building on the Android broadcast and input mechanisms, it proposes an input extension mechanism based on a virtual driver, which achieves the effect of native mouse and keyboard events and thus realizes interactive control of the smart TV from a mobile device. Experiments show that this human-computer interaction system has the advantages of seamless connectivity and simple operation, and is particularly well suited to text input.
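
    The abstract names UPnP for the fast connection but not the exact procedure; the sketch below only illustrates the standard SSDP discovery step of UPnP (a multicast M-SEARCH on 239.255.255.250:1900), with the search target and timeout chosen as illustrative assumptions rather than the system's actual configuration.

        import socket

        SSDP_ADDR = ("239.255.255.250", 1900)
        MSEARCH = "\r\n".join([
            "M-SEARCH * HTTP/1.1",
            "HOST: 239.255.255.250:1900",
            'MAN: "ssdp:discover"',
            "MX: 2",
            "ST: upnp:rootdevice",   # assumed search target; a real system would filter for the TV's device type
            "", ""])

        def discover_devices(timeout=3.0):
            """Send an SSDP M-SEARCH and collect the raw responses from UPnP devices on the LAN."""
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.settimeout(timeout)
            sock.sendto(MSEARCH.encode("ascii"), SSDP_ADDR)
            responses = []
            try:
                while True:
                    data, addr = sock.recvfrom(65507)
                    responses.append((addr[0], data.decode("ascii", "ignore")))
            except socket.timeout:
                pass
            finally:
                sock.close()
            return responses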

  18. Research on a Human-Computer Interaction Method Combining Somatosensory Equipment with Passive Stereo

    Institute of Scientific and Technical Information of China (English)

    谭剑波; 张光刘; 李琳

    2011-01-01

    Human-computer interaction technology with a strong sense of immersion and manipulation is the goal of researchers in virtual reality, and three-dimensional display and somatosensory interaction are currently hot topics. By introducing somatosensory equipment into a passive-stereo display environment, this paper designs and develops a three-dimensional tennis game. Taking advantage of the three-dimensional visual characteristics of stereoscopic display, and by reasonably planning the relationships and mappings among the objects in the game, the participant's visual perception and the somatosensory equipment, the senses of immersion and manipulation are greatly improved.

  19. Analysis of the Development Trend of Multimedia Human-Computer Interaction Techniques in the Field of Product Design

    Institute of Scientific and Technical Information of China (English)

    宋培培

    2012-01-01

    Since the 1990s, science and technology have flourished, international exchange has become frequent, and global economic and cultural interdependence has grown. Multimedia human-computer interaction technology has developed rapidly and plays a significant role in product design as well as in product display and sales. Based on a detailed analysis of the main interaction modes of multimedia human-computer interaction in product design, including networked virtual interaction and multimedia, multi-channel intelligent human-computer interaction, as well as the problems that exist in these interaction modes, and through a study of existing human-computer interaction technology, this paper proposes the future direction and development trend of multimedia human-computer interaction technology in the field of product design.

  20. The Application of 2.5D Human-Computer Interaction Inversion to Aeromagnetic Anomaly Interpretation

    Institute of Scientific and Technical Information of China (English)

    周子阳; 常树帅; 宁媛丽; 陈江源

    2016-01-01

    The 2.5D joint gravity and magnetic inversion module in the RGIS data processing software uses a 2.5D human-computer interaction inversion method for gravity and magnetic anomalies, which has the advantages of a simple interface, convenient operation and real-time display of the inversion curve. Taking the inversion of the GanC-2011-0011 aeromagnetic anomaly in the Dunhuang area of Gansu Province as an example, this paper describes the concrete procedures for data import, parameter setting and model building. The inversion result is basically consistent with the drilling verification, indicating that the inversion result is reliable.

  1. Investigation of a Lecture Method for Fitts' Law in Human-Computer Interaction Courses

    Institute of Scientific and Technical Information of China (English)

    涂华伟

    2016-01-01

    Fitts' law plays an important role in human-computer interface design, so lecturing on Fitts' law is a vital part of Human-Computer Interaction (HCI) courses. In this article, the author proposes a three-level lecture model (theory, application and research levels) based on the author's teaching experience, in order to teach the theory and application of Fitts' law systematically. Specifically, the model first compares Shannon's theorem with Fitts' law to analyze the origin of Fitts' law and the meanings of its parameters. It then refers to commercial operating system designs such as iOS and Windows to illustrate application scenarios of Fitts' law. Finally, it demonstrates the role of Fitts' law as theoretical guidance by taking novel research published at CHI, the top HCI conference, as examples. Initial classroom feedback indicates the effectiveness of the lecture model. The proposed method not only provides a reference for lecturing on Fitts' law more effectively, but also offers ideas for teaching other HCI topics.
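
    For reference, the Shannon formulation of Fitts' law that such a lecture would set against Shannon's channel-capacity theorem is the standard one below (textbook form, not taken from this paper):

        ID = \log_2\!\left(\frac{D}{W} + 1\right), \qquad MT = a + b \cdot ID
        \qquad \text{cf.} \qquad C = B \log_2\!\left(\frac{S}{N} + 1\right)

    Here D is the distance to the target, W the target width, MT the movement time, a and b empirically fitted constants; the index of difficulty ID mirrors the signal-to-noise term in Shannon's capacity formula.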

  2. Aesthetic Approaches to Human-Computer Interaction

    DEFF Research Database (Denmark)

    This volume consists of revised papers from the First International Workshop on Activity Theory Based Practical Methods for IT Design. The workshop took place in Copenhagen, Denmark, September 2-3, 2004. The particular focus of the workshop was the development of methods based on activity theory … for practical development of IT-based systems…

  3. Measuring Appeal in Human Computer Interaction

    DEFF Research Database (Denmark)

    Neben, Tillmann; Xiao, Bo Sophia; Lim, Eric T.

    2015-01-01

    Appeal refers to the positive emotional response to an aesthetic, beautiful, or in another way desirable stimulus. It is a recurring topic in information systems (IS) research, and is important for understanding many phenomena of user behavior and decision-making. While past IS research on appeal has relied predominantly on subjective self-rating scales, this research-in-progress paper proposes complementary objective measurement for appeal. We start by reviewing the linkages between the theoretical constructs related to appeal and their neurophysiological correlates. We then review past…

  4. Human-computer interaction fundamentals and practice

    CERN Document Server

    Kim, Gerard Jounghyun

    2015-01-01

    Contents (excerpt): Introduction: What HCI Is and Why It Is Important. Principles of HCI: "Know Thy User"; Understand the Task; Reduce Memory Load; Strive for Consistency; Remind Users and Refresh Their Memory; Prevent Errors/Reversal of Action; Naturalness. Summary. References. Specific HCI Guidelines: Guideline Categories. Examples of HCI Guidelines: Visual Display Layout (General HCI Design); Information Structuring and Navigation (General HCI Design); Taking User Input (General H

  6. Multi-Channel Virtual Reality Human-Computer Interaction Terminal Design and Application

    Institute of Scientific and Technical Information of China (English)

    徐守祥; 胡文; 于成龙; 马超

    2015-01-01

    设计了一款多通道虚拟现实人机交互终端,它依据虚拟环境产生真实的环境模拟,为真人带来沉浸式的环境带入体验。给出了利用虚拟环境中的语义对象控制该交互终端,产生三维环境、立体声音、自然气象、碰撞接触和气味仿真等感知功能的方法,为人的大脑依附于虚拟世界的化身上给出了一种新途径,通过虚拟世界环境的变换,实现真人的时空穿越体验。借助Unity虚拟现实开发平台和虚拟现实头盔,给出了该方案的原型系统。%In order to bring immersive environment into reality experience, we propose a multi-channel virtual reality human-computer interactive terminal, which is based on the virtual environment to simulate real environment. In the virtual environment, semantic objects are given as the controller of the interactive terminal. It produces three-dimensional environment, stereo sound, natural calamities, impact and odor perception. The embodiment of human brain attached to the virtual world generates a new way by the transformation of virtual world environment to achieve a real-time travel experience. By means of Unity development platform and virtual reality helmet, the scheme of prototype system is presented.

  7. STUDY ON HUMAN-COMPUTER SYSTEM FOR STABLE VIRTUAL DISASSEMBLY

    Institute of Scientific and Technical Information of China (English)

    Guan Qiang; Zhang Shensheng; Liu Jihong; Cao Pengbing; Zhong Yifang

    2003-01-01

    The cooperative work between human beings and computers based on virtual reality (VR) is investigated in order to plan disassembly sequences more efficiently. A three-layer model of human-computer cooperative virtual disassembly is built, and the corresponding human-computer system for stable virtual disassembly is developed. In this system, an immersive and interactive virtual disassembly environment has been created to provide planners with a more visual working scene. For cooperative disassembly, an intelligent module for stability analysis of disassembly operations is embedded into the human-computer system to assist planners in carrying out disassembly tasks. The supporting matrix for stability analysis of disassembly operations is defined and the method of stability analysis is detailed. Based on this approach, the stability of any disassembly operation can be analyzed to guide the manual virtual disassembly. Finally, a disassembly case in the virtual environment is given to prove the validity of the above ideas.

  8. Human-computer interface design

    Energy Technology Data Exchange (ETDEWEB)

    Bowser, S.E.

    1995-04-01

    Modern military forces assume that computer-based information is reliable, timely, available, usable, and shared. The importance of computer-based information is based on the assumption that "shared situation awareness, coupled with the ability to conduct continuous operations, will allow information age armies to observe, decide, and act faster, more correctly and more precisely than their enemies." (Sullivan and Dubik 1994). Human-Computer Interface (HCI) design standardization is critical to the realization of the previously stated assumptions. Given that a key factor of a high-performance, high-reliability system is an easy-to-use, effective design of the interface between the hardware, software, and the user, it follows logically that the interface between the computer and the military user is critical to the success of the information-age military. The proliferation of computer technology has resulted in the development of an extensive variety of computer-based systems and the implementation of varying HCI styles on these systems. To accommodate the continued growth in computer-based systems, minimize HCI diversity, and improve system performance and reliability, the U.S. Department of Defense (DoD) is continuing to adopt interface standards for developing computer-based systems.

  9. Dynamics of bell-nonlocality for two atoms interacting with a vacuum multi-mode noise field

    Science.gov (United States)

    Liu, Yu-Jie; Zheng, Li; Han, Dong-Mei; Lü, Huan-Lin; Zheng, Tai-Yu

    2016-06-01

    We investigate the internal-state Bell nonlocal entanglement dynamics, as measured by the CHSH inequality, of two atoms interacting with a vacuum multi-mode noise field, taking into account the spatial degrees of freedom of the two atoms. The dynamics of Bell nonlocality of the atoms, with the atomic internal states initially in a Werner-type state, is studied by deriving the analytical solutions of the Schrödinger equation and tracing over the degrees of freedom of the field and the external motion of the two atoms. In addition, through comparison with entanglement as measured by concurrence, we find that the survival time of entanglement is much longer than that of the Bell-inequality violation. The comparison of the quantum correlation times of the two Werner-type states is also discussed.

  10. Inter-signal interaction and uncertain information in anuran multimodal signals

    Institute of Scientific and Technical Information of China (English)

    Ryan C.TAYLOR; Barrett A.KLEIN; Michael J.RYAN

    2011-01-01

    Disentangling the influence of multiple signal components on receivers and elucidating general processes influencing complex signal evolution are difficult tasks. In this study we test mate preferences of female squirrel treefrogs Hyla squirella and female túngara frogs Physalaemus pustulosus for similar combinations of acoustic and visual components of their multimodal courtship signals. In a two-choice playback experiment with squirrel treefrogs, the visual stimulus of a male model significantly increased the attractiveness of a relatively unattractive slow call rate. A previous study demonstrated that faster call rates are more attractive to female squirrel treefrogs, and, all else being equal, models of male frogs with large body stripes are more attractive. In a similar experiment with female túngara frogs, the visual stimulus of a robotic frog failed to increase the attractiveness of a relatively unattractive call. Females also showed no preference for the distinct stripe on the robot that males commonly bear on their throat. Thus, features of conspicuous signal components such as body stripes are not universally important, and signal function is likely to differ even among species with similar ecologies and communication systems. Finally, we discuss the putative information content of anuran signals and suggest that the categorization of redundant versus multiple messages may not be sufficient as a general explanation for the evolution of multimodal signaling. Instead of relying on untested assumptions concerning the information content of signals, we discuss the value of initially collecting comparative empirical data sets related to receiver responses.

  11. Mathematical morphology based electro-oculography recognition algorithm for human-computer interaction%基于数学形态学的眼电信号识别及其应用

    Institute of Scientific and Technical Information of China (English)

    陈卫东; 李昕; 刘俊; 郝耀耀; 廖玉玺; 苏煜; 张韶岷; 郑筱祥

    2011-01-01

    Electro-oculography (EOG) signals can be used to recognize the directions of eye movements and voluntary eye blinks, which makes it possible to develop a new human-computer interaction (HCI) system. EOG signals usually contain interference components such as baseline drift, EMG interference and movement artifacts. A mathematical morphology based algorithm is presented to process the EOG signals; the approach effectively reduces these artifacts and recognizes the directions of eye movements and voluntary eye blinks by using a set of thresholds. An HCI system for the disabled using this method was designed and tested with both healthy and disabled participants. Experimental results showed that the average correct rate was 96.2%. The system can be employed in clinical HCI applications.
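
    The record does not give the algorithm's details; the following is a minimal Python sketch of the general idea (morphological filtering to suppress baseline drift, then threshold tests), in which the window size, threshold values and input file are illustrative assumptions rather than the paper's parameters:

        # Minimal sketch: morphological filtering of a 1-D EOG trace, then
        # threshold-based detection of large deflections (eye movements / blinks).
        import numpy as np
        from scipy.ndimage import grey_opening, grey_closing

        def detrend_eog(signal, win=201):
            # An opening followed by a closing with a long structuring element
            # tracks the slow baseline; subtracting it suppresses drift.
            baseline = grey_closing(grey_opening(signal, size=win), size=win)
            return signal - baseline

        def detect_events(signal, up_thresh=150.0, down_thresh=-150.0):
            # Simple threshold test on the detrended trace.
            events = np.zeros(len(signal), dtype=int)
            events[signal > up_thresh] = 1     # e.g. upward saccade / blink
            events[signal < down_thresh] = -1  # e.g. downward saccade
            return events

        eog = np.loadtxt("eog_trace.txt")       # hypothetical single-channel recording
        events = detect_events(detrend_eog(eog))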

  12. The Quantum Human Computer (QHC) Hypothesis

    Science.gov (United States)

    Salmani-Nodoushan, Mohammad Ali

    2008-01-01

    This article attempts to suggest the existence of a human computer called Quantum Human Computer (QHC) on the basis of an analogy between human beings and computers. To date, there are two types of computers: Binary and Quantum. The former operates on the basis of binary logic where an object is said to exist in either of the two states of 1 and…

  13. 大学英语口语写作教学中的多模态互动%Multimodal interaction in university spoken English writing teaching

    Institute of Scientific and Technical Information of China (English)

    韩丹

    2013-01-01

    Based on multimodal and interactive teaching theory, this paper discusses multimodal interaction in university spoken English and writing teaching. Through the multimodal interactive mode, students' English writing and oral communication levels have been greatly improved. A multimodal teaching mode in classroom teaching can fully mobilize the students' sensory systems and integrate the teaching process, thereby improving the efficiency of spoken English and writing classes at university.

  14. Study on augmented reality human-computer interactive technology with Vizard and Kinect%基于Vizard和Kinect的增强现实人机交互技术研究

    Institute of Scientific and Technical Information of China (English)

    张利利; 刘江城; 林晓斌

    2016-01-01

    Natural human-computer interaction technology based on posture and speech has largely solved the experience problems of traditional games and enhanced the user experience. This paper presents a method for developing augmented reality interactive technology with Vizard and Kinect. Kinect is used to obtain the tracked identification points of the human skeleton, and the three-dimensional coordinates of the corresponding skeleton nodes are obtained through Vizard and FAAST. By processing the space vectors, the body posture is converted to a state value and the corresponding control command is output, realizing interactive control between the user and the system. The experimental results show that the method has low cost and a short development cycle, fuses real-world and virtual-world information seamlessly, and thus enhances the game user experience.
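
    As a rough illustration of the "space vector" step described above, the sketch below computes a joint angle from three tracked skeleton points and maps it to a discrete state value; the joint names, angle thresholds and command labels are assumptions made for illustration, not the paper's mapping:

        # Minimal sketch: three tracked joint positions -> elbow angle -> state value.
        import numpy as np

        def joint_angle(a, b, c):
            # Angle at joint b formed by the vectors b->a and b->c, in degrees.
            v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

        def posture_state(shoulder, elbow, wrist):
            angle = joint_angle(shoulder, elbow, wrist)
            if angle < 60:
                return "ARM_BENT"      # e.g. mapped to a "jump" command
            if angle > 150:
                return "ARM_EXTENDED"  # e.g. mapped to an "attack" command
            return "NEUTRAL"

        print(posture_state([0.0, 1.4, 0.0], [0.3, 1.1, 0.0], [0.6, 1.4, 0.0]))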

  15. The Human-Computer Interface and Information Literacy: Some Basics and Beyond.

    Science.gov (United States)

    Church, Gary M.

    1999-01-01

    Discusses human/computer interaction research, human/computer interface, and their relationships to information literacy. Highlights include communication models; cognitive perspectives; task analysis; theory of action; problem solving; instructional design considerations; and a suggestion that human/information interface may be a more appropriate…

  16. 超越HCI--多媒体信息的多模式处理%Beyond HCI: Multimodal Manipulation of Multimedia Information

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Human communication operates over a variety of modalities between humans and computers. When we communicate with other people and with information systems, we exchange or/and retrieve multimedia information. Over the last few years, the Interactive Systems Laboratory at Carnegie Mellon University has developed multimodal systems to empower all of us with increased access to information, and the ability to communicate through diverse media in increasingly varied environments. In this paper, we review our research activities in developing multimodal systems. We show that both verbal and non-verbal cues can significantly enhance robustness, flexibility, naturalness and performance of human computer interaction. We demonstrate that multimodal systems can enhance human-human communication and cooperation by efficient manipulation of multimedia information.

  17. 基于多点手势识别的人机交互技术框架%Framework of human-computer interaction based on multi-point gesture recognition

    Institute of Scientific and Technical Information of China (English)

    李文生; 解梅; 邓春健

    2011-01-01

    A framework of human-computer interaction based on multi-point gesture recognition is presented. The fingertip tracking and gesture recognition server first captures the movement of the user's hands with an ordinary camera and detects and tracks multiple fingertips in real time; multi-point gesture recognition is then realized from the fingertip tracking results using a BP neural network. Finally, the server constructs messages (including low-level pointing messages and high-level gesture messages) according to the results of fingertip tracking and gesture recognition and sends them to the client application, which responds to the messages and performs the corresponding processing. The framework helps developers add multi-point gesture control functions, similar to the multi-touch functions of the iPhone, to their applications, achieving more natural human-computer interaction and improving the user's operating experience.
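
    The record describes the pipeline only at a high level; the following Python sketch illustrates the recognition-and-messaging idea with a small feed-forward (BP-trained) network whose weights, gesture classes and message format are all illustrative assumptions:

        # Minimal sketch: fingertip-trajectory features -> small MLP -> gesture message.
        import numpy as np

        GESTURES = ["tap", "pinch", "swipe_left", "swipe_right"]

        def mlp_predict(features, w1, b1, w2, b2):
            hidden = np.tanh(features @ w1 + b1)   # hidden layer
            logits = hidden @ w2 + b2              # output layer
            return GESTURES[int(np.argmax(logits))]

        def make_message(features, gesture):
            # A low-level pointing payload plus a high-level gesture label.
            return {"features": features.tolist(), "gesture": gesture}

        rng = np.random.default_rng(0)
        w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
        w2, b2 = rng.normal(size=(16, len(GESTURES))), np.zeros(len(GESTURES))
        features = rng.normal(size=8)              # stand-in for tracked fingertip features
        print(make_message(features, mlp_predict(features, w1, b1, w2, b2)))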

  18. Supporting Negotiation Behavior with Haptics-Enabled Human-Computer Interfaces

    OpenAIRE

    Küçükyılmaz, Ayşe; Sezgin, Tevfik Metin; Başdoğan, Çağatay

    2012-01-01

    An active research goal for human-computer interaction is to allow humans to communicate with computers in an intuitive and natural fashion, especially in real-life interaction scenarios. One approach that has been advocated to achieve this has been to build computer systems with human-like qualities and capabilities. In this paper, we present insight on how human-computer interaction can be enriched by employing the computers with behavioral patterns that naturally appear in human-human nego...

  19. 基于MapX的电力系统GIS人机交互设计%Design of GIS Human-Computer Interaction Based on MapX for Electric Power System

    Institute of Scientific and Technical Information of China (English)

    朱作欣; 朱全胜; 马超; 李卫东

    2011-01-01

    In this paper, based on a geographic information system (GIS) visual interface development method for electric power systems, a GIS map is designed and drawn with MapInfo, comprising a basic geographic layer, a power station layer, a substation layer, a 220 kV transmission line layer, and a 500 kV transmission line layer. On this basis, basic and advanced human-computer interaction functions of the electric power system GIS are implemented with MapX. The basic functions include spatial functions, search, detailed data display, thematic maps, an eagle-eye (overview) map, contour maps, and 3D visualization. The advanced functions include linked multi-screen display and adaptive regulation. According to different usage features and demands, further advanced functions can be developed on this basis. The design presented in this paper is of significant benefit for managing, analyzing and maintaining power grid data.

  20. Feasibility Study of Increasing Multimodal Interaction between Private and Public Transport Based on the Use of Intellectual Transport Systems and Services

    Directory of Open Access Journals (Sweden)

    Ulrich Weidmann

    2011-04-01

    Full Text Available The introduction of intellectual transport systems and services (ITS) into the public and private transport sectors is closely connected with the development of multimodality in the transport system (particularly in towns and their suburbs). Taking into consideration the problems of traffic jams, the need to increase the efficiency of power consumption, and the reduction of exhaust gases emitted into the air and of the harmful effect of noise, the use of the multimodal transport concept has been growing fast recently in most cities. It embraces a system of integrated tickets; the infrastructure allowing a passenger to leave a car or a bike near a public transport station and to continue his/her travel by public transport (referred to as 'Park&Ride' and 'Bike&Ride'); as well as real-time information systems, universal design, and computer-aided traffic control. These concepts seem to be even more effective when multimodal intellectual transport systems and services (ITS) are introduced. In Lithuania, ITS is not widely used in passenger transportation, though its potential is great, particularly taking into consideration the critical state of the capacity of the public transport infrastructure. The paper considers the possibilities of increasing the effectiveness of the public transport system's ITS by increasing its interaction with private transport in the context of multimodal concept realization. Article in Lithuanian

  1. Multimodal Afslapning

    Directory of Open Access Journals (Sweden)

    Stephen Palmer

    2012-10-01

    Full Text Available The article describes the Multimodal Relaxation Method (MRM), which can be used in life, leadership, business, sports or health coaching to improve individual performance and to reduce or manage stress. Within sports and health coaching, the method can be used to reduce physical tension and strengthen physiological control, e.g. lowering heart rate and blood pressure.

  2. CD-ROM Multimodal Affordances: Classroom Interaction Perspectives in the Malaysian English Literacy Hour

    Science.gov (United States)

    Gardner, Sheena; Yaacob, Aizan

    2009-01-01

    CD-ROM affordances are explored in this article through participation in classroom interaction. CD-ROMs for shared reading of animated stories and language work were introduced to all Malaysian primary schools in 2003 for the Year 1 English Literacy Hour. We present classroom interaction extracts that show how the same CD-ROMs offer different…

  3. A Study of Multimodal Discourse in the Design of Interactive Digital Material for Language Learning

    Science.gov (United States)

    Burset, Silvia; Bosch, Emma; Pujolà, Joan-Tomàs

    2016-01-01

    This study analyses some published interactive materials for the learning of Spanish as a f?irst language and English as a Foreign Language (EFL) commonly used in primary and secondary education in Spain. The present investigation looks into the relationships between text and image on the interface of Interactive Digital Material (IDM) to develop…

  4. Generic Multimedia Multimodal Agents Paradigms and Their Dynamic Reconfiguration at the Architectural Level

    Directory of Open Access Journals (Sweden)

    H. Djenidi

    2004-09-01

    Full Text Available The multimodal fusion for natural human-computer interaction involves complex intelligent architectures which are subject to the unexpected errors and mistakes of users. These architectures should react to events occurring simultaneously, and possibly redundantly, from different input media. In this paper, intelligent agent-based generic architectures for multimedia multimodal dialog protocols are proposed. Global agents are decomposed into their relevant components. Each element is modeled separately. The elementary models are then linked together to obtain the full architecture. The generic components of the application are then monitored by an agent-based expert system which can then perform dynamic changes in reconfiguration, adaptation, and evolution at the architectural level. For validation purposes, the proposed multiagent architectures and their dynamic reconfiguration are applied to practical examples, including a W3C application.

  5. Gelatin-based Hydrogel Degradation and Tissue Interaction in vivo: Insights from Multimodal Preclinical Imaging in Immunocompetent Nude Mice

    Science.gov (United States)

    Tondera, Christoph; Hauser, Sandra; Krüger-Genge, Anne; Jung, Friedrich; Neffe, Axel T.; Lendlein, Andreas; Klopfleisch, Robert; Steinbach, Jörg; Neuber, Christin; Pietzsch, Jens

    2016-01-01

    Hydrogels based on gelatin have evolved as promising multifunctional biomaterials. Gelatin is crosslinked with lysine diisocyanate ethyl ester (LDI) and the molar ratio of gelatin and LDI in the starting material mixture determines elastic properties of the resulting hydrogel. In order to investigate the clinical potential of these biopolymers, hydrogels with different ratios of gelatin and diisocyanate (3-fold (G10_LNCO3) and 8-fold (G10_LNCO8) molar excess of isocyanate groups) were subcutaneously implanted in mice (uni- or bilateral implantation). Degradation and biomaterial-tissue-interaction were investigated in vivo (MRI, optical imaging, PET) and ex vivo (autoradiography, histology, serum analysis). Multimodal imaging revealed that the number of covalent net points correlates well with degradation time, which allows for targeted modification of hydrogels based on properties of the tissue to be replaced. Importantly, the degradation time was also dependent on the number of implants per animal. Despite local mechanisms of tissue remodeling no adverse tissue responses could be observed neither locally nor systemically. Finally, this preclinical investigation in immunocompetent mice clearly demonstrated a complete restoration of the original healthy tissue. PMID:27698944

  6. An interactive, multi-modal Anatomy workshop improves academic performance in the health sciences: a cohort study.

    Science.gov (United States)

    Nicholson, Leslie L; Reed, Darren; Chan, Cliffton

    2016-01-12

    Students often strategically adopt surface approaches to learning anatomy in order to pass this necessarily content-heavy subject. The consequence of this approach, without understanding and contextualisation, limits transfer of anatomical knowledge to clinical applications. Encouraging deep approaches to learning is challenging in the current environment of lectures and laboratory-based practica. A novel interactive anatomy workshop was proposed in an attempt to address this issue. This workshop comprised of body painting, clay modelling, white-boarding and quizzes, and was undertaken by 66 health science students utilising their preferred learning styles. Performance was measured prior to the workshop at the mid-semester examination and after the workshop at the end-semester examination. Differences between mid- and end-semester performances were calculated and compared between workshop attendees and non-attendees. Baseline, post-workshop and follow-up surveys were administered to identify learning styles, goals for attendance, useful aspects of the workshop and self-confidence ratings. Workshop attendees significantly improved their performance compared to non-attendees (p = 0.001) despite a difference at baseline (p = 0.05). Increased self-confidence was reported by the attendees (p learning, 97% of attendees reported utilising multi-modal learning styles. Five main goals for participating in the workshop included: understanding, strategic engagement, examination preparation, memorisation and increasing self-confidence. All attendees reported achieving these goals. The most useful components of the workshop were body painting and clay modelling. This interactive workshop improved attendees' examination performance and promoted engaged-enquiry and deeper learning. This tool accommodates varied learning styles and improves self-confidence, which may be a valuable supplement to traditional anatomy teaching.

  7. Design of human computer interaction system of virtual crops based on Leap Motion%基于Leap Motion的虚拟农作物人机交互系统设计

    Institute of Scientific and Technical Information of China (English)

    吴福理; 丁胤; 丁维龙; 谢涛

    2016-01-01

    In recent years, somatosensory technology has been applied in many fields including entertainment, education, automation and medicine, but it is still rarely used in agriculture. Traditional human-computer interaction systems for virtual plants run on a particular operating system or mobile platform and interact through the mouse and keyboard, requiring users to enter parameters and commands in a cumbersome way, so they lack a good interactive user experience. In view of this situation, we designed and developed a virtual crop interaction system based on cloud computing and somatosensory interaction technology. The system first generates a 3D model of the virtual crop in the cloud, where the model is stored; the virtual crop models include rice and tomato. The cloud side provides the data computation capability and responds to browser requests, the browser side is responsible for display, caching and a small amount of computation, and Leap Motion handles interaction on the browser side. To obtain the parameters needed for rice modelling, experiments were carried out at the China National Rice Research Institute in Hangzhou, Zhejiang between 2015 and 2016. The selected growth period of rice was from the jointing stage to the heading stage. For each plant, we measured three blades at different leaf positions, recording blade lengths, blade widths, the width variation along the blades, and blade growth positions. The 3D data of the virtual crops are generated by algorithms on the Amazon cloud platform. The topological structures of tomato plants are described by a parametric L-system, separated into stems, rachises, blades, fruit branches and flower branches. WebGL is used to render the 3D crop models in the browser, allowing users to interact with them directly. In this paper, we define a 3D virtual crop data exchange protocol
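
    The record mentions that tomato topology is described by a parametric L-system but gives no rules; the sketch below shows only the general string-rewriting mechanism, with a made-up axiom and rule set rather than the paper's model:

        # Minimal sketch of an L-system rewriter for plant topology.
        RULES = {
            "A": "I[L]A",   # apex -> internode, lateral leaf, new apex
            "I": "I",       # internodes persist
            "L": "L",       # leaves persist
        }

        def rewrite(axiom, rules, steps):
            s = axiom
            for _ in range(steps):
                s = "".join(rules.get(ch, ch) for ch in s)
            return s

        print(rewrite("A", RULES, 4))   # -> "I[L]I[L]I[L]I[L]A"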

  8. Multimodal Interaction in Ambient Intelligence Environments Using Speech, Localization and Robotics

    Science.gov (United States)

    Galatas, Georgios

    2013-01-01

    An Ambient Intelligence Environment is meant to sense and respond to the presence of people, using its embedded technology. In order to effectively sense the activities and intentions of its inhabitants, such an environment needs to utilize information captured from multiple sensors and modalities. By doing so, the interaction becomes more natural…

  9. On the usability of multimodal interaction for mobile access to information services

    NARCIS (Netherlands)

    Sturm, J.

    2005-01-01

    The dynamics of our everyday life and the ubiquitous presence of information services create increasing demands for mobile access. Advanced mobile devices are now equipped with a sizeable screen, which opens the door to novel ways of interaction besides speech. This thesis investigates the advantage

  10. Multimodal feedback for finger-based interaction in mobile augmented reality

    NARCIS (Netherlands)

    Hürst, W.O.; Vriens, Kevin

    2016-01-01

    Mobile or handheld augmented reality uses a smartphone's live video stream and enriches it with superimposed graphics. In such scenarios, tracking one's fingers in front of the camera and interpreting these traces as gestures offers interesting perspectives for interaction. Yet, the lack of haptic

  11. Multimodal Adaptation and Enriched Interaction of Multimedia Content for Mobile Users

    NARCIS (Netherlands)

    Cesar Garcia, P.S.; Bulterman, D.C.A.; Kernchen, R.; Hesselman, C.; Boussard, M.; Spedalieri, A.; Vaishnavi, I.; Gao, B.

    2008-01-01

    This paper introduces an architecture, together with an implemented scenario, capable of dynamically adapting the way mobile users consume and interact with multimedia content. The architecture is based on a representative scenario identified by the European project SPICE, in which multimedia content i

  12. Multimodal user interfaces to improve social integration of elderly and mobility impaired.

    Science.gov (United States)

    Dias, Miguel Sales; Pires, Carlos Galinho; Pinto, Fernando Miguel; Teixeira, Vítor Duarte; Freitas, João

    2012-01-01

    Technologies for Human-Computer Interaction (HCI) and communication have evolved tremendously over the past decades. However, citizens such as the mobility impaired or the elderly still face many difficulties interacting with communication services, either due to HCI issues or intrinsic design problems with the services. In this paper we start by presenting the results of two user studies, the first one conducted with a group of mobility impaired users, comprising paraplegic and quadriplegic individuals, and the second one with elderly users. The study participants carried out a set of tasks with a multimodal (speech, touch, gesture, keyboard and mouse) and multi-platform (mobile, desktop) system offering integrated access to communication and entertainment services, such as email, agenda, conferencing, instant messaging and social media, referred to as LHC - Living Home Center. The system was designed to take into account the requirements captured from these users, with the objective of evaluating whether the adoption of multimodal interfaces for audio-visual communication and social media services could improve the interaction with such services. Our study revealed that a multimodal prototype system, offering natural interaction modalities, especially supporting speech and touch, can in fact improve access to the presented services, contributing to the reduction of social isolation of the mobility impaired as well as the elderly, and improving their digital inclusion.

  13. Multimodal interaction with BCL-2 family proteins underlies the proapoptotic activity of PUMA BH3.

    Science.gov (United States)

    Edwards, Amanda L; Gavathiotis, Evripidis; LaBelle, James L; Braun, Craig R; Opoku-Nsiah, Kwadwo A; Bird, Gregory H; Walensky, Loren D

    2013-07-25

    PUMA is a proapoptotic BCL-2 family member that drives the apoptotic response to a diversity of cellular insults. Deciphering the spectrum of PUMA interactions that confer its context-dependent proapoptotic properties remains a high priority goal. Here, we report the synthesis of PUMA SAHBs, structurally stabilized PUMA BH3 helices that, in addition to broadly targeting antiapoptotic proteins, directly bind to proapoptotic BAX. NMR, photocrosslinking, and biochemical analyses revealed that PUMA SAHBs engage an α1/α6 trigger site on BAX to initiate its functional activation. We further demonstrated that a cell-permeable PUMA SAHB analog induces apoptosis in neuroblastoma cells and, like expressed PUMA protein, engages BCL-2, MCL-1, and BAX. Thus, we find that PUMA BH3 is a dual antiapoptotic inhibitor and proapoptotic direct activator, and its mimetics may serve as effective pharmacologic triggers of apoptosis in resistant human cancers.

  14. The GuideView System for Interactive, Structured, Multi-modal Delivery of Clinical Guidelines

    Science.gov (United States)

    Iyengar, Sriram; Florez-Arango, Jose; Garcia, Carlos Andres

    2009-01-01

    GuideView is a computerized clinical guideline system which delivers clinical guidelines in an easy-to-understand and easy-to-use package. It may potentially enhance the quality of medical care or allow non-medical personnel to provide acceptable levels of care in situations where physicians or nurses may not be available. Such a system can be very valuable during space flight missions when a physician is not readily available, or perhaps the designated medical personnel is unable to provide care. Complex clinical guidelines are broken into simple steps. At each step clinical information is presented in multiple modes, including voice, audio, text, pictures, and video. Users can respond via mouse clicks or via voice navigation. GuideView can also interact with medical sensors using wireless or wired connections. The system's interface is illustrated and the results of a usability study are presented.

  15. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  16. Automatic Multimodal Cognitive Load Measurement (AMCLM)

    Science.gov (United States)

    2011-06-01

    Final Project Report, Grant AOARD-10-4029, Automatic Multimodal Cognitive Load Measurement (AMCLM), June 2011, NICTA DSIM Team. ...human-computer interface, such as air traffic control, in-car safety and electronic games. By quantifying the mental efforts of a person when

  17. Human -Computer Interface using Gestures based on Neural Network

    Directory of Open Access Journals (Sweden)

    Aarti Malik

    2014-10-01

    Full Text Available Gestures are powerful tools for non-verbal communication. Human-computer interface (HCI) is a growing field which reduces the complexity of interaction between human and machine, in which gestures are used for conveying information or controlling the machine. In the present paper, static hand gestures are utilized for this purpose. The paper presents a novel technique for recognizing hand gestures, i.e. the A-Z alphabets, the 0-9 numbers and 6 additional control signals (for keyboard and mouse control), by extracting various features of the hand, creating a feature vector table and training a neural network. The proposed work has a recognition rate of 99%.

  18. The effect of parent training in music and multimodal stimulation on parent-neonate interactions in the neonatal intensive care unit.

    Science.gov (United States)

    Whipple, J

    2000-01-01

    This study examined the effects of parent training in music and multimodal stimulation on the quantity and quality of parent-neonate interactions and the weight gain and length of hospitalization of premature and low birthweight (LBW) infants in a Neonatal Intensive Care Unit (NICU). Twenty sets of parents and premature LBW infants participated in the study. Parents in the experimental group (n = 10) received approximately one hour of instruction in appropriate uses of music, multimodal stimulation including massage techniques, and signs of infant overstimulation and techniques for its avoidance. Parent-neonate interactions, specifically parent actions and responses and infant stress and nonstress behaviors, were observed for subjects in both groups. Infant stress behaviors were significantly fewer and appropriateness of parent actions and responses were significantly greater for experimental infants and parents than for control subjects. Parents in the experimental group also self-reported spending significantly more time visiting in the NICU than did parents of control infants. In addition, length of hospitalization was shorter and average daily weight gain was greater for infants whose parents received training, although these differences were not significant. A one month, postdischarge follow-up showed little difference between experimental and control group parent-infant interactions in the home.

  19. Multimodal stilistik

    DEFF Research Database (Denmark)

    Nørgaard, Nina

    2012-01-01

    social semiotic multimodality theory. The aim of such a multimodal stylistics is thus to develop a consistent, systematic analytical apparatus that can capture and describe the multimodal semiosis realised in the novel as well as in other types of text. Drawing on a selection of Scandinavian and translated...

  20. Quantum equations of motion for multimode laser generation with a spatial dependence of the atom interaction with the field taken into account

    Science.gov (United States)

    Kozlovskii, A. V.

    2011-04-01

    We derive equations of motion for the electromagnetic field operators $a^{+}_{q'} a_{q''}$ for a three-level multimode laser, taking into account the spatial dependence of the interaction of atoms with the field of a standing wave in a cavity. We calculate and analyze the dynamics of the mean photon numbers in the field modes and of the correlation function of the field modes. We explore the effect of intermode correlations on the dynamics of establishing stationary laser generation. We find that taking into account the spatial dependence of the atom-field interaction and the intermode correlation when investigating the mean photon numbers reveals new properties of laser generation, such as saturation of the laser radiation intensity in a single-mode regime and generation of short light pulses of side below-threshold modes with amplitudes depending on the initial state of the field in the cavity.

  1. Vibrational configuration interaction using a tiered multimode scheme and tests of approximate treatments of vibrational angular momentum coupling: a case study for methane.

    Science.gov (United States)

    Mielke, Steven L; Chakraborty, Arindam; Truhlar, Donald G

    2013-08-15

    We present vibrational configuration interaction calculations employing the Watson Hamiltonian and a multimode expansion. Results for the lowest 36 eigenvalues of the zero total angular momentum rovibrational spectrum of methane agree with the accurate benchmarks of Wang and Carrington to within a mean unsigned deviation of 0.68, 0.033, and 0.014 cm(-1) for 4-mode, 5-mode, and 6-mode representations, respectively. We note that in the case of the 5-mode results, this is a factor of 10 better agreement than for 5-mode calculations reported earlier by Wu, Huang, Carter, and Bowman for the same set of eigenvalues, which indicates that the multimode expansion is even more rapidly convergent than previously demonstrated. Our largest calculations employ a tiered approach with matrix elements treated using a variable-order multimode expansion with orders ranging from 4-mode to 7-mode; strategies for assigning matrix elements to particular multimode tiers are discussed. Improvements of 7-mode coupling over 6-mode coupling are small (averaging 0.002 cm(-1) for the first 36 eigenvalues) suggesting that 7-mode coupling is sufficient to fully converge the results. A number of approximate treatments of the computationally expensive vibrational angular momentum terms are explored. The use of optimized vibrational quadratures allows rapid integration of the matrix elements, especially the vibrational angular momentum terms, which require significantly fewer quadrature points than are required to integrate the potential. We assign the lowest 243 states and compare our results to those of Wang and Carrington, who provided assignments for the same set of states. Excellent agreement is observed for most states, but our results are lower for some of the higher-energy states by as much as 20 cm(-1), with the largest deviations being for the states with six quanta of excitation in the F2 bends, suggesting that the earlier results were not fully converged with respect to the basis set. We

  2. Human-computer interface incorporating personal and application domains

    Science.gov (United States)

    Anderson, Thomas G.

    2011-03-29

    The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.

  3. Does textual feedback hinder spoken interaction in natural language?

    Science.gov (United States)

    Le Bigot, Ludovic; Terrier, Patrice; Jamet, Eric; Botherel, Valerie; Rouet, Jean-Francois

    2010-01-01

    The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a natural language dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants, familiar with the system, had to find restaurants based on simple or difficult scenarios using a real natural language service system in a speech-only (phone), speech plus textual dialogue history (multimodal) or text-only (web) modality. The linguistic contents of the dialogues differed as a function of modality, but were similar whether the textual feedback was included in the spoken condition or not. These results add to burgeoning research efforts on multimodal feedback, in suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback to tasks that depend on the visual sense benefits human-computer interaction. The addition of textual output when the spoken modality is heavily taxed by the task was investigated.

  4. Cognitive Principles in Robust Multimodal Interpretation

    CERN Document Server

    Chai, J Y; Qu, S; 10.1613/jair.1936

    2011-01-01

    Multimodal conversational interfaces provide a natural means for users to communicate with computer systems through multiple modalities such as speech and gesture. To build effective multimodal interfaces, automated interpretation of user multimodal inputs is important. Inspired by the previous investigation on cognitive status in multimodal human machine interaction, we have developed a greedy algorithm for interpreting user referring expressions (i.e., multimodal reference resolution). This algorithm incorporates the cognitive principles of Conversational Implicature and Givenness Hierarchy and applies constraints from various sources (e.g., temporal, semantic, and contextual) to resolve references. Our empirical results have shown the advantage of this algorithm in efficiently resolving a variety of user references. Because of its simplicity and generality, this approach has the potential to improve the robustness of multimodal input interpretation.

  5. An innovative multimodal virtual platform for communication with devices in a natural way

    Science.gov (United States)

    Kinkar, Chhayarani R.; Golash, Richa; Upadhyay, Akhilesh R.

    2012-03-01

    As technology grows, people are increasingly interested in communicating with machines or computers naturally. This makes the machine more compact and portable by avoiding remotes, keyboards and the like, and it also helps people live in an environment freer of electromagnetic waves. This thought has made recognition of natural modalities in human-computer interaction a most appealing and promising research field. At the same time, it has been observed that using a single mode of interaction limits the full utilization of commands as well as data flow. In this paper a multimodal platform is proposed in which, out of many natural modalities such as eye gaze, speech, voice and face, human gestures are combined with human voice, which will minimize the mean square error. This loosens the strict environment needed for accurate and robust interaction when a single mode is used. Gestures complement speech: gestures are ideal for direct object manipulation, while natural language is used for descriptive tasks. Human-computer interaction basically requires two broad stages, recognition and interpretation. Recognition and interpretation of natural modalities in complex binary instructions is a tough task, as it connects the real world to the virtual environment. The main idea of the paper is to develop an efficient model for the fusion of data coming from heterogeneous sensors, a camera and a microphone. In this paper we show that efficiency is increased if heterogeneous data (image and voice) are combined at the feature level using artificial intelligence. The long-term goal of this paper is to design a robust system for users who are physically impaired or have limited technical knowledge.
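
    As a rough illustration of feature-level fusion of image and voice data, the sketch below normalises and concatenates two per-modality feature vectors into a single vector for a downstream classifier; the dimensions and weights are assumptions, not taken from the paper:

        # Minimal sketch of feature-level fusion: per-modality features are
        # normalised, weighted and concatenated before classification.
        import numpy as np

        def fuse(image_feats, voice_feats, w_image=0.5, w_voice=0.5):
            def norm(v):
                v = np.asarray(v, dtype=float)
                return v / (np.linalg.norm(v) + 1e-9)
            return np.concatenate([w_image * norm(image_feats),
                                   w_voice * norm(voice_feats)])

        fused = fuse(np.random.rand(128), np.random.rand(13))  # e.g. image + MFCC features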

  6. Evaluating Multimodal Literacies in Student Blogs

    Science.gov (United States)

    O'Byrne, Barbara; Murrell, Stacey

    2014-01-01

    This research presents ways in which high school students used the multimodal and interactive affordances of blogs to create, organize, communicate and participate on an educational blog. Their actions demonstrated how plural modes of literacy are infiltrating digital environments and reshaping literacy and learning. Multimodal blogging practices…

  7. Promoting diagnostic accuracy in general practitioner management of otitis media in children: findings from a multimodal, interactive workshop on tympanometry and pneumatic otoscopy.

    Science.gov (United States)

    Rosenkranz, Sara; Abbott, Penelope; Reath, Jennifer; Gunasekera, Hasantha; Hu, Wendy

    2012-01-01

    Previous research has shown that general practitioners (GPs) rarely use pneumatic otoscopy or tympanometry as recommended by best practice guidelines for diagnosing otitis media. The purpose of this study was to determine whether a multimodal, interactive training workshop on the techniques of pneumatic otoscopy and tympanometry would improve the confidence of GPs for the diagnosis of otitis media with effusion (OME) and acute otitis media (AOM), and for using pneumatic otoscopy and tympanometry. Additionally, we sought to determine whether this training could change GPs' intentions for using pneumatic otoscopy and tympanometry in their practices. Twenty-three GPs participated in a three-hour training workshop led by an ear, nose and throat (ENT) surgeon, a paediatrician and an audiologist. Prior to and following the workshop, GPs completed questionnaires indicating their previous use and beliefs about the usefulness of pneumatic otoscopy and tympanometry, confidence for diagnosing AOM and OME, confidence for using pneumatic otoscopy and tympanometry, and intention to use pneumatic otoscopy and tympanometry in the future. There were no differences (P > 0.05) from pre- to post-workshop in GP confidence for diagnosing AOM. There were increases in GP confidence for diagnosis of OME (pre: 4.5 ± 0.9, post: 4.9 ± 0.4, P pneumatic otoscopy (pre: 3.6 ± 1.6, post: 4.8 ± 1.0, P 0.05) in intention to use pneumatic otoscopy or tympanometry in their practices in the future. These results suggest that a multimodal, interactive workshop can significantly increase the confidence of GPs for diagnosis of OME and also for using pneumatic otoscopy and tympanometry. It is likely, however, that GPs will need follow-up and further practice with these techniques to implement them in their practices.

  8. A multimodal interactive control method based on recurrent neural network with parametric bias model for companion robots%基于PB递归神经网络的陪护机器人多模式交互控制方法

    Institute of Scientific and Technical Information of China (English)

    徐敏; 陶永; 魏洪兴

    2011-01-01

    A multimodal interactive control method based on the recurrent neural network with parametric bias (RNNPB) model is proposed for companion robots. First, a multimodal interaction framework composed of a multimodal interaction agent, an interaction recognition agent and an interaction decision agent is proposed. A PB-based learning algorithm is then applied to the interactive process of the companion robot, forming a multimodal interaction control method based on the RNNPB model. The RNNPB method handles interaction-state recognition and decision analysis and generates behavior patterns as the output of the interaction process, realizing complex task planning and the learning and adaptation of the companion robot's interaction process. Experimental results show the effectiveness of the control method.

  9. Human-computer interface glove using flexible piezoelectric sensors

    Science.gov (United States)

    Cha, Youngsu; Seo, Jeonggyu; Kim, Jun-Sik; Park, Jung-Min

    2017-05-01

    In this note, we propose a human-computer interface glove based on flexible piezoelectric sensors. We select polyvinylidene fluoride as the piezoelectric material for the sensors because of advantages such as a steady piezoelectric characteristic and good flexibility. The sensors are installed in a fabric glove by means of pockets and Velcro bands. We detect changes in the angles of the finger joints from the outputs of the sensors, and use them for controlling a virtual hand that is utilized in virtual object manipulation. To assess the sensing ability of the piezoelectric sensors, we compare the processed angles from the sensor outputs with the real angles from a camera recording. With good agreement between the processed and real angles, we successfully demonstrate the user interaction system with the virtual hand and interface glove based on the flexible piezoelectric sensors, for four hand motions: fist clenching, pinching, touching, and grasping.
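
    The note does not spell out how sensor outputs are converted to joint angles; one common approach, sketched below under stated assumptions (the sampling rate, scale factor and input file are all hypothetical), is to treat the piezoelectric output as roughly proportional to the bending rate and integrate it over time:

        # Minimal sketch: piezoelectric voltage trace -> crude joint-angle estimate.
        import numpy as np

        def estimate_angle(voltage, fs=100.0, scale=35.0):
            v = np.asarray(voltage, dtype=float)
            v -= v.mean()                       # crude baseline correction
            return scale * np.cumsum(v) / fs    # integrate to an angle-like signal (deg)

        angles = estimate_angle(np.loadtxt("index_joint.txt"))  # hypothetical recording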

  10. The Human-Computer Domain Relation in UX Models

    DEFF Research Database (Denmark)

    Clemmensen, Torkil

    This paper argues that the conceptualization of the human, the computer and the domain of use in competing lines of UX research have problematic similarities and superficial differences. The paper qualitatively analyses concepts and models in five research papers that together represent two influential lines of UX research, aesthetics and temporal UX, and two use situations: using a website and starting to use a smartphone. The results suggest that the two lines of UX research share a focus on users' evaluative judgments of technology, both focus on product qualities rather than activity domains, give few details about users, and treat human-computer interaction as perception. The conclusion gives similarities and differences between the approaches to UX. The implications for theory building are indicated.

  11. Application Exploration of Techniques of New Type Human-computer Interaction in Games%新型人机互动技术在游戏中的应用探索

    Institute of Scientific and Technical Information of China (English)

    薛凯

    2011-01-01

    With the continuous improvement of computer performance, the bottleneck of human-computer interaction has become more and more prominent. Traditional human-computer interaction techniques are far from providing the amount of information needed by modern interactive computer games. Therefore, the cheapest and most widespread interaction devices (a web camera and a microphone) are used to expand the information bandwidth of human-computer interaction and thereby improve its efficiency, realizing a new game form in which the game is controlled without a mouse or keyboard.

  12. Human-Computer Etiquette Cultural Expectations and the Design Implications They Place on Computers and Technology

    CERN Document Server

    Hayes, Caroline C

    2010-01-01

    Written by experts from various fields, this edited collection explores a wide range of issues pertaining to how computers evoke human social expectations. The book illustrates how socially acceptable conventions can strongly impact the effectiveness of human-computer interactions and how to consider such norms in the design of human-computer interfaces. Providing a complete introduction to the design of social responses to computers, the text emphasizes the value of social norms in the development of usable and enjoyable technology. It also describes the role of socially correct behavior in t

  13. WEB-BASED PERSONAL DIGITAL PHOTO COLLECTIONS: MULTIMODAL RETRIEVAL

    Directory of Open Access Journals (Sweden)

    Nor Azman Ismail

    2010-09-01

    Full Text Available When personal photo collections get large, retrieval of specific photos or sets of photos becomes difficult, mainly due to the fairly primitive means by which they are organised. Commercial photo handling systems help, but often have only elementary search features. In this paper, we describe an interactive web-based photo retrieval system that enables personal digital photo users to browse their photos using multimodal interaction. The system enables users to browse their personal digital photos in the World Wide Web (WWW) environment not only through mouse-click input but also through a speech input modality. The prototype system and its architecture are built with web technology, using web scripting (JavaScript, XHTML, ASP), an XML-based mark-up language and an image database. All prototype programs and data files, including the user's photo repository, profiles, dialogues, grammars, prompts and the retrieval engine, are stored on the web server. Our approach also includes human-computer speech dialogue for browsing photos by four main categories of image content (Who? What? When? Where?). Our user study with 20 digital photo users showed that the participants reacted positively to their experience with the system's interactions.

  14. Multimodal Dimensional Interactions Teaching Model in English Listening Teaching%英语专业听力多模态互动教学模式研究

    Institute of Scientific and Technical Information of China (English)

    罗玉枝

    2011-01-01

    This paper explores a multimodal dimensional-interactions teaching model for English listening teaching and discusses its main content: (1) "interaction" is the soul of the model (interaction between subjects, between subject and object, and between the senses); (2) "knowing students' interests, difficulties and emotions" is the key; (3) "enhancing students' listening comprehension ability" is the target; (4) "rich multimodal resources" are the technology; (5) "listening with pleasure" is the main feature. By introducing this teaching model, we hope to encourage more experts and scholars to devote themselves to research on the theory and practice of English listening teaching.

  15. Multimodality as a Sociolinguistic Resource

    Science.gov (United States)

    Collister, Lauren Brittany

    2013-01-01

    This work explores the use of multimodal communication in a community of expert "World of Warcraft"® players and its impact on politeness, identity, and relationships. Players in the community regularly communicated using three linguistic modes quasi-simultaneously: text chat, voice chat, and face-to-face interaction. Using the…

  16. Learners' Multimodal Displays of Willingness to Participate in Classroom Interaction in the L2 and CLIL Contexts

    Science.gov (United States)

    Evnitskaya, Natalia; Berger, Evelyne

    2017-01-01

    Drawing on recent conversation-analytic and socio-interactionist research on students' participation in L1 and L2 classroom interaction in teacher-fronted activities, this paper makes a step further by presenting an exploratory study of students' displays of willingness to participate (WTP) in classroom interaction and pedagogical activities…

  17. A Physics-driven Neural Networks-based Simulation System (PhyNNeSS) for multimodal interactive virtual environments involving nonlinear deformable objects.

    Science.gov (United States)

    De, Suvranu; Deo, Dhannanjay; Sankaranarayanan, Ganesh; Arikatla, Venkata S

    2011-08-01

    BACKGROUND: While an update rate of 30 Hz is considered adequate for real time graphics, a much higher update rate of about 1 kHz is necessary for haptics. Physics-based modeling of deformable objects, especially when large nonlinear deformations and complex nonlinear material properties are involved, at these very high rates is one of the most challenging tasks in the development of real time simulation systems. While some specialized solutions exist, there is no general solution for arbitrary nonlinearities. METHODS: In this work we present PhyNNeSS - a Physics-driven Neural Networks-based Simulation System - to address this long-standing technical challenge. The first step is an off-line pre-computation step in which a database is generated by applying carefully prescribed displacements to each node of the finite element models of the deformable objects. In the next step, the data is condensed into a set of coefficients describing neurons of a Radial Basis Function network (RBFN). During real-time computation, these neural networks are used to reconstruct the deformation fields as well as the interaction forces. RESULTS: We present realistic simulation examples from interactive surgical simulation with real time force feedback. As an example, we have developed a deformable human stomach model and a Penrose-drain model used in the Fundamentals of Laparoscopic Surgery (FLS) training tool box. CONCLUSIONS: A unique computational modeling system has been developed that is capable of simulating the response of nonlinear deformable objects in real time. The method distinguishes itself from previous efforts in that a systematic physics-based pre-computational step allows training of neural networks which may be used in real time simulations. We show, through careful error analysis, that the scheme is scalable, with the accuracy being controlled by the number of neurons used in the simulation. PhyNNeSS has been integrated into SoFMIS (Software Framework for Multimodal
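
    The run-time reconstruction step can be pictured as a standard radial-basis-function evaluation; the sketch below is a generic illustration under assumed shapes (the numbers of neurons, input dimensions and output degrees of freedom are made up), not the PhyNNeSS implementation:

        # Minimal sketch: an RBF network, fitted off-line to precomputed responses,
        # maps a contact input to a deformation/force estimate at run time.
        import numpy as np

        def rbf_predict(x, centers, widths, weights):
            # Gaussian kernels phi_i(x) = exp(-||x - c_i||^2 / w_i^2);
            # output = weights @ phi, one row of weights per output DOF.
            d2 = np.sum((centers - x) ** 2, axis=1)
            phi = np.exp(-d2 / widths ** 2)
            return weights @ phi

        rng = np.random.default_rng(1)
        centers = rng.normal(size=(50, 3))      # 50 neurons over a 3-D contact input
        widths = np.full(50, 0.5)
        weights = rng.normal(size=(300, 50))    # e.g. 100 nodes x 3 displacement components
        u = rbf_predict(np.array([0.1, -0.2, 0.05]), centers, widths, weights)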

  18. Multimodal perception and simulation

    NARCIS (Netherlands)

    Werkhoven, P.J.; Erp, J.B.F. van

    2013-01-01

    This chapter discusses mechanisms of multimodal perception in the context of multimodal simulators and virtual worlds. We review some notable findings from psychophysical experiments with a focus on what we call touch-inclusive multimodal perception—that is, the sensory integration of the tactile sy

  19. INTERACT

    DEFF Research Database (Denmark)

    Jochum, Elizabeth; Borggreen, Gunhild; Murphey, TD

    This paper considers the impact of visual art and performance on robotics and human-computer interaction and outlines a research project that combines puppetry and live performance with robotics. Kinesics—communication through movement—is the foundation of many theatre and performance traditions...... interaction between a human operator and an artificial actor or agent. We can apply insights from puppetry to develop culturally-aware robots. Here we describe the development of a robotic marionette theatre wherein robotic controllers assume the role of human puppeteers. The system has been built, tested...

  20. Individual Difference Effects in Human-Computer Interaction

    Science.gov (United States)

    1991-10-01

    evaluated in terms of the amount of sales revenue after deducting production costs. The time variable was measured in terms of the amount of time a subject...subject acted as an inventory/production manager of a hypothetical firm which was simulated by a computer program. The subject's task was to obtain the..."search list" will be examined. Thus, the user will probably match "apple pie" but not "apple cider" or "apple butter" because these items would not

  1. Human-Computer Interaction and Information Management Research Needs

    Science.gov (United States)

    2003-10-01

    4.1 NSF's Digital Libraries and Digital Government Programs and their Joint Workshop with the Library...and retrieval across multiple digital libraries • Efficient management and distribution of large data sets • Approaches for efficiently and...later HCI&IM CG meetings. 4.1 NSF's Digital Libraries and Digital Government Programs and their Joint Workshop with the Library of Congress on

  2. Questioning Mechanisms During Tutoring, Conversation, and Human-Computer Interaction

    Science.gov (United States)

    1993-06-01

    ...show a positive relationship between question asking and achievement (Fishbein, Eckart, Lauver, van Leeuwen, & Langmeyer, 1990). In summary, the... Fishbein, H. D., Eckart, T., Lauver, E., Van Leeuwen, R., & Langmeyer, D. (1990). Learners' questions and comprehension in a tutoring setting

  3. Brain-Computer Interfaces and Human-Computer Interaction

    NARCIS (Netherlands)

    Tan, Desney; Nijholt, Anton; Tan, Desney S.; Nijholt, Anton

    2010-01-01

    Advances in cognitive neuroscience and brain imaging technologies have started to provide us with the ability to interface directly with the human brain. This ability is made possible through the use of sensors that can monitor some of the physical processes that occur within the brain that correspo

  4. Impact of Cognitive Architectures on Human-Computer Interaction

    Science.gov (United States)

    2014-09-01

    simulation. In this work they were preparing for the Synthetic Theatre of War-1997 exercise where between 10,000 and 50,000 automated agents would...work with up to 1,000 humans. The results of this exercise are documented by Laird et al. 5. Conclusions and Future Work To assess whether cognitive...RW, MacKenzie IS. Towards a standard for pointing device evaluation, perspectives on 27 years of Fitts' law research in HCI. International Journal of

  5. Brain-Computer Interfaces and Human-Computer Interaction

    NARCIS (Netherlands)

    Tan, Desney; Tan, Desney S.; Nijholt, Antinus

    2010-01-01

    Advances in cognitive neuroscience and brain imaging technologies have started to provide us with the ability to interface directly with the human brain. This ability is made possible through the use of sensors that can monitor some of the physical processes that occur within the brain that

  6. Human-Computer Interaction Software: Lessons Learned, Challenges Ahead

    Science.gov (United States)

    1989-01-01

    ...domain communication. Users familiar with problem domains but inexperienced with comput... Intelligent support systems. High-func... April 1987, pp. 73-78. His research interests include artificial intel... Creating better HCI software will have a... 8. S. K. Card, T. P. Moran, and

  7. Brain-Computer Interfaces Revolutionizing Human-Computer Interaction

    CERN Document Server

    Graimann, Bernhard; Allison, Brendan

    2010-01-01

    A brain-computer interface (BCI) establishes a direct output channel between the human brain and external devices. BCIs infer user intent via direct measures of brain activity and thus enable communication and control without movement. This book, authored by experts in the field, provides an accessible introduction to the neurophysiological and signal-processing background required for BCI, presents state-of-the-art non-invasive and invasive approaches, gives an overview of current hardware and software solutions, and reviews the most interesting as well as new, emerging BCI applications. The book is intended not only for students and young researchers, but also for newcomers and other readers from diverse backgrounds keen to learn about this vital scientific endeavour.

  8. Mobile human-computer interaction perspective on mobile learning

    CSIR Research Space (South Africa)

    Botha, Adèle

    2010-10-01

    Full Text Available , will have to be incorporated in some sense, as virtual reality through mobile technology becomes a reality. Elements of context can be naively described as situations where the user’s physical relation to space and time would be significant (high context... mobile technology as an ICT in education. This investigation has led our research to suggest additional insights for MHCI and simultaneously provided a better understanding of the development and implementation of mobiles in teaching and learning...

  9. Questioning Mechanisms during Tutoring, Conversation, and Human-Computer Interaction

    Science.gov (United States)

    1993-06-01


  10. Human Computation An Integrated Approach to Learning from the Crowd

    CERN Document Server

    Law, Edith

    2011-01-01

    Human computation is a new and evolving research area that centers around harnessing human intelligence to solve computational problems that are beyond the scope of existing Artificial Intelligence (AI) algorithms. With the growth of the Web, human computation systems can now leverage the abilities of an unprecedented number of people via the Web to perform complex computation. There are various genres of human computation applications that exist today. Games with a purpose (e.g., the ESP Game) specifically target online gamers who generate useful data (e.g., image tags) while playing an enjoy

  11. Gesture controlled human-computer interface for the disabled.

    Science.gov (United States)

    Szczepaniak, Oskar M; Sawicki, Dariusz J

    2017-02-28

    Enabling a disabled person to use a computer is one of the difficult problems of human-computer interaction (HCI), while professional activity (employment) is one of the most important factors affecting quality of life, especially for disabled people. The aim of the project has been to propose a new HCI system that would allow people who have lost the ability to operate a standard computer to resume employment. The basic requirement was to replace all functions of a standard mouse without the need to perform precise hand movements or use the fingers. Microsoft's Kinect motion controller was selected as the device to recognize hand movements. Several tests were performed in order to create an optimal working environment with the new device. A new communication system consisting of the Kinect device and purpose-built software was constructed. The proposed system was tested by means of standard subjective evaluations and objective metrics according to ISO 9241-411:2012. The overall rating of the new HCI system shows acceptance of the solution. The objective tests show that although the new system is a bit slower, it may effectively replace the computer mouse. The new HCI system fulfilled its task for a specific disabled person, enabling a return to work. Additionally, the project confirmed the possibility of effective but nonstandard use of the Kinect device. Med Pr 2017;68(1):1-21.
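
    The objective comparison mentioned above typically rests on the Fitts's-law throughput metric defined for pointing-device evaluation in ISO 9241-411. The fragment below is a minimal sketch of that standard effective-throughput calculation, not code from the study; the function name, block structure and sample numbers are invented.

      import math
      import statistics

      def effective_throughput(distances, movement_times, endpoint_sds):
          """Fitts's-law throughput (bits/s) using the effective target width.

          distances      : nominal movement amplitudes per trial block (px)
          movement_times : mean movement time per block (s)
          endpoint_sds   : standard deviation of selection endpoints per block (px)
          """
          tps = []
          for d, mt, sd in zip(distances, movement_times, endpoint_sds):
              we = 4.133 * sd                  # effective width from endpoint scatter
              ide = math.log2(d / we + 1.0)    # effective index of difficulty (bits)
              tps.append(ide / mt)             # throughput in bits per second
          return statistics.mean(tps)

      # Invented sample blocks: amplitude, mean movement time, endpoint SD
      print(round(effective_throughput([512, 256], [1.9, 1.6], [14.0, 10.0]), 2))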

  12. The Intersection of Multimodality and Critical Perspective: Multimodality as Subversion

    Science.gov (United States)

    Huang, Shin-ying

    2015-01-01

    This study explores the relevance of multimodality to critical media literacy. It is based on the understanding that communication is intrinsically multimodal and multimodal communication is inherently social and ideological. By analysing two English-language learners' multimodal ensembles, the study reports on how multimodality contributes to a…

  13. Proposition d’une grille d’analyse d’interactions tutorales dans un dispositif multimodal en ligne

    Directory of Open Access Journals (Sweden)

    Vincent Caroline

    2012-07-01

    Full Text Available This article presents research conducted on a corpus of synchronous pedagogical interactions between novice tutors and learners of French. The interactants communicate via Skype, so tutors and learners have three modalities at their disposal: text chat, audio and video, allowing them to write to, see, hear and speak with their interlocutor(s). Which modalities does each tutor use, in what proportion, and with what "degree of investment"? What consequences do these different usage choices have for the nature of the interaction? We hypothesise that correlations could be established at two levels. First, the tutors' initial profile (individual skills, professional experience face-to-face or at a distance, familiarity with the computer environment) and the context of the interactions (outside disturbances, technical problems, type of task, needs expressed by the learners) influence the way a tutor uses the available tools. Second, the choices made in using these different modalities influence the nature of the interaction and the relationship between tutors and learners. In this article we therefore present the analysis grid we have developed, with a twofold objective: on the one hand, to analyse a corpus of online pedagogical interactions while taking their multimodality into account in its entirety, and on the other hand, to contribute to the discussion on the methodology of multimodal corpora and on the training of tutors in synchronous online pedagogy.

  14. Secure Human-Computer Identification against Peeping Attacks (SecHCI): A Survey

    OpenAIRE

    Li, SJ; Shum, HY

    2003-01-01

    This paper focuses on human-computer identification systems against peeping attacks, in which adversaries can observe (and even control) interactions between humans (provers) and computers (verifiers). Real cases of peeping attacks were reported by Ross J. Anderson ten years earlier. Fixed passwords are insecure against peeping attacks since adversaries can simply replay the observed passwords. Some identification techniques can be used to defeat peeping attacks, but auxiliary devices must be used ...

  15. Multimodal responsive action

    DEFF Research Database (Denmark)

    Oshima, Sae

    While a first pair part projects a limited set of second pair parts to be provided next, responders select different types and formats for second pair parts to assemble activities (Schegloff 2007). Accordingly, various ways of shaping responses have been extensively studied (e.g. Pomerantz 1984......; Raymond 2003; Schegloff and Lerner 2009), including those with multimodal actions (e.g. Olsher 2004; Fasulo & Monzoni 2009). Some responsive actions can also be completed with bodily behavior alone, such as: when an agreement display is achieved by using only nonvocal actions (Jarmon 1996), when...... both verbal and body-behavioral elements. This paper explores one such situation in professional-client interaction, during the event of evaluating a service outcome in a haircutting session. In general, a haircutting session is brought to its closure through the service-assessment sequence, in which...

  16. Evaluation of multimodal ground cues

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Lecuyer, Anatole; Serafin, Stefania

    2012-01-01

    This chapter presents an array of results on the perception of ground surfaces via multiple sensory modalities, with special attention to non-visual perceptual cues, notably those arising from audition and haptics, as well as interactions between them. It also reviews approaches to combining synthetic multimodal cues, from vision, haptics, and audition, in order to realize virtual experiences of walking on simulated ground surfaces or other features.

  17. Working memory and referential communication – multimodal aspects of interaction between children with sensorineural hearing impairment and normal hearing peers

    Directory of Open Access Journals (Sweden)

    Olof eSandgren

    2015-03-01

    Full Text Available Whereas the language development of children with sensorineural hearing impairment (SNHI) has repeatedly been shown to differ from that of peers with normal hearing (NH), few studies have used an experimental approach to investigate the consequences for everyday communicative interaction. This mini review gives an overview of a range of studies on children with SNHI and NH exploring intra- and inter-individual cognitive and linguistic systems during communication. Over the last decade, our research group has studied the conversational strategies of Swedish-speaking children and adolescents with SNHI and NH using referential communication, an experimental analogue to problem-solving in the classroom. We have established verbal and nonverbal control and validation mechanisms, related to working memory capacity (WMC) and phonological short-term memory (PSTM). We present main findings and future directions relevant for the field of cognitive hearing science and for the clinical and school-based management of children and adolescents with SNHI.

  18. Human/computer control of undersea teleoperators

    Science.gov (United States)

    Sheridan, T. B.; Verplank, W. L.; Brooks, T. L.

    1978-01-01

    The potential of supervisory controlled teleoperators for accomplishment of manipulation and sensory tasks in deep ocean environments is discussed. Teleoperators and supervisory control are defined, the current problems of human divers are reviewed, and some assertions are made about why supervisory control has potential use to replace and extend human diver capabilities. The relative roles of man and computer and the variables involved in man-computer interaction are next discussed. Finally, a detailed description of a supervisory controlled teleoperator system, SUPERMAN, is presented.

  19. Training to interact interculturally on perceptions and identities: Implementing multimodal analysis in a cultural approach to discourse

    Directory of Open Access Journals (Sweden)

    Michelangelo Conoscenti

    2009-12-01

    Full Text Available The impact of communication on society via computers (computer-mediated communication, or CMC) is still evolving, and our understanding of its social, psychological, political and economic implications is far from complete. Online communication systems structure interaction in accordance with new forms of relating that affect the different social organisations that have emerged through the use of these systems. The situation is even more complex if we consider how the different actors that participate in a negotiation process – often with different motivations and diverging interests – will react to this new environment. The aim of this article is to carry out an analysis of intragroupal processes in a scenario of virtual diplomacy in order to better understand the influence of regulations on groups. To this end, we observe an intercultural group to understand how – even in such a specific case as that of CMC – people construct the world with language instead of merely describing it as it is.

  20. Multimodal probing of oxygen and water interaction with metallic and semiconducting carbon nanotube networks under ultraviolet irradiation

    Science.gov (United States)

    Muckley, Eric S.; Nelson, Anthony J.; Jacobs, Christopher B.; Ivanov, Ilia N.

    2016-04-01

    Interaction between ultraviolet (UV) light and carbon nanotube (CNT) networks plays a central role in gas adsorption, sensor sensitivity, and stability of CNT-based electronic devices. To determine the effect of UV light on sorption kinetics and resistive gas/vapor response of different CNT networks, films of semiconducting single-wall nanotubes (s-SWNTs), metallic single-wall nanotubes, and multiwall nanotubes were exposed to O2 and H2O vapor in the dark and under UV irradiation. Changes in film resistance and mass were measured in situ. In the dark, resistance of metallic nanotube networks increases in the presence of O2 and H2O, whereas resistance of s-SWNT networks decreases. UV irradiation decreases the resistance of metallic nanotube networks in the presence of O2 and H2O and increases the gas/vapor sensitivity of s-SWNT networks by nearly a factor of 2 compared to metallic nanotube networks. s-SWNT networks show evidence of delamination from the gold-plated quartz crystal microbalance crystal, possibly due to preferential adsorption of O2 and H2O on gold. UV irradiation increases the sensitivity of all CNT networks to O2 and H2O by an order of magnitude, which demonstrates the importance of UV light for enhancing response and lowering detection limits in CNT-based gas/vapor sensors.

  1. Rationale awareness for quality assurance in iterative human computation processes

    CERN Document Server

    Xiao, Lu

    2012-01-01

    Human computation refers to the outsourcing of computation tasks to human workers. It offers a new direction for solving a variety of problems and calls for innovative ways of managing human computation processes. The majority of human computation tasks take a parallel approach, whereas the potential of an iterative approach, i.e., having workers iteratively build on each other's work, has not been sufficiently explored. This study investigates whether and how human workers' awareness of previous workers' rationales affects the performance of the iterative approach in a brainstorming task and a rating task. Rather than viewing this work as a conclusive piece, the author believes that this research endeavor is just the beginning of a new research focus that examines and supports meta-cognitive processes in crowdsourcing activities.

  2. Multimodal Design for Enactive Toys

    DEFF Research Database (Denmark)

    De Goetzen, Amalia; Mion, Luca; Avanzini, Federico;

    2008-01-01

    This book constitutes the thoroughly refereed post-conference proceedings of the 4th International Computer Music Modeling and Retrieval Symposium, CMMR 2007, held in Copenhagen, Denmark, in August 2007 jointly with the International Computer Music Conference 2007, ICMC 2007. The 33 revised full papers presented were carefully selected during two rounds of reviewing and improvement. Due to the interdisciplinary nature of the area, the papers address a broad variety of topics in computer science and engineering areas such as information retrieval, programming, human computer interaction, digital...

  3. Multimodality, Ethnodrama, and the Preparation of Pre-Service Teachers of Writing

    Science.gov (United States)

    Hobson, Sarah

    2014-01-01

    Given the prevalence of multimodal texts in today's world, it is not surprising that adolescent literacies are as dynamic, multimodal and visual as the texts with which they interact. As Kress & Van Leeuwen (1996) have outlined, the multimodal texts that make up a large percentage of our world consist of a range of modes (auditory,…

  4. Sub-Pixel Multimodal Image Registration by Human Interaction

    Institute of Scientific and Technical Information of China (English)

    金宏彬; 范春晓; 李永; 杨仁杰

    2015-01-01

    Image registration based on key-point mappings usually provides alignment at integer-pixel precision, and sub-pixel registration remains a challenge for techniques that exploit key-point mappings; on multimodal image sets such techniques may fail to achieve even integer-pixel registration. The authors propose an interactive algorithm to address the sub-pixel registration problem. The algorithm comprises two steps: the first step is to input control points and obtain a rough registration by using a projective transform and a linear least-squares algorithm; the second step is to adjust the control points in sub-pixel steps according to the bilateral average registration error. The average distance between control points is applied to quantitatively measure registration quality, and evaluation combines subjective and objective judgment. Qualitative and quantitative experiments show that the proposed algorithm achieves sub-pixel registration, performs more reliably than registration techniques based on the scale-invariant feature transform (SIFT) and the partial intensity invariant feature descriptor (PIIFD), and significantly improves multimodal image registration performance.
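
    The first step of the method, fitting a projective transform to user-supplied control points with linear least squares and scoring it by the average control-point distance, can be sketched as follows. This is a generic direct-linear-transform homography fit assuming at least four point pairs; it is not the authors' code, the point data are invented, and the sub-pixel adjustment loop driven by the bilateral average registration error is omitted.

      import numpy as np

      def fit_homography(src, dst):
          """Least-squares projective transform mapping src (N,2) points onto dst (N,2)."""
          rows = []
          for (x, y), (u, v) in zip(src, dst):
              rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
              rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
          A = np.asarray(rows, dtype=float)
          _, _, vt = np.linalg.svd(A)          # linear least-squares solution (DLT)
          return vt[-1].reshape(3, 3)

      def mean_control_point_error(H, src, dst):
          """Average distance between warped control points and their references."""
          pts = np.hstack([src, np.ones((len(src), 1))]) @ H.T
          pts = pts[:, :2] / pts[:, 2:3]
          return float(np.mean(np.linalg.norm(pts - dst, axis=1)))

      # Invented control points from the reference image and the image to be registered
      dst = np.array([[10.0, 12.0], [200.0, 15.0], [190.0, 180.0], [12.0, 175.0]])
      src = dst + np.array([3.2, -1.7])        # pretend the second image is shifted
      H = fit_homography(src, dst)
      print(mean_control_point_error(H, src, dst))  # close to 0 for a pure shift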

  5. Supporting Negotiation Behavior with Haptics-Enabled Human-Computer Interfaces.

    Science.gov (United States)

    Oguz, S O; Kucukyilmaz, A; Sezgin, Tevfik Metin; Basdogan, C

    2012-01-01

    An active research goal for human-computer interaction is to allow humans to communicate with computers in an intuitive and natural fashion, especially in real-life interaction scenarios. One approach that has been advocated to achieve this has been to build computer systems with human-like qualities and capabilities. In this paper, we present insight into how human-computer interaction can be enriched by endowing computers with behavioral patterns that naturally appear in human-human negotiation scenarios. For this purpose, we introduce a two-party negotiation game specifically built for studying the effectiveness of haptic and audio-visual cues in conveying negotiation-related behaviors. The game is centered around a real-time continuous two-party negotiation scenario based on the existing game-theory and negotiation literature. During the game, humans are confronted with a computer opponent, which can display different behaviors, such as concession, competition, and negotiation. Through a user study, we show that the behaviors that are associated with human negotiation can be incorporated into human-computer interaction, and the addition of haptic cues provides a statistically significant increase in the human-recognition accuracy of machine-displayed behaviors. In addition to aspects of conveying these negotiation-related behaviors, we also focus on and report game-theoretical aspects of the overall interaction experience. In particular, we show that, as reported in the game-theory literature, certain negotiation strategies such as tit-for-tat may generate maximum combined utility for the negotiating parties, providing an excellent balance between the energy spent by the user and the combined utility of the negotiating parties.
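
    The tit-for-tat strategy mentioned at the end of the abstract is simple to state: concede on the first round, then mirror the opponent's previous move. The toy sketch below, with invented move labels and a trivial opponent, is only meant to make the strategy concrete; it is not the negotiation game or utility model used in the study.

      def tit_for_tat(opponent_history):
          """Concede on the first round, then mirror the opponent's last move."""
          return "concede" if not opponent_history else opponent_history[-1]

      def play(strategy_a, strategy_b, rounds=5):
          history_a, history_b = [], []
          for _ in range(rounds):
              move_a = strategy_a(history_b)    # each player sees the other's history
              move_b = strategy_b(history_a)
              history_a.append(move_a)
              history_b.append(move_b)
          return list(zip(history_a, history_b))

      always_compete = lambda history: "compete"
      print(play(tit_for_tat, always_compete))
      # [('concede', 'compete'), ('compete', 'compete'), ...] - retaliation after round 1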

  6. A novel machine fault diagnosis method based on multivariate graph visualization analysis and human-computer interaction (HCI)

    Institute of Scientific and Technical Information of China (English)

    崔建新; 洪文学; 高海波

    2011-01-01

    Aiming at the limitations of data-oriented fault diagnosis methods, this paper presents a novel fault diagnosis technology based on the visualization analysis of empirical samples' fault patterns expressed by multivariate graphs and on human-computer interaction (HCI), according to the basic theory of multivariate graph representation. By having experts participate in the machine diagnosis process, it combines data-oriented machine fault diagnosis with object-oriented, expert-driven fault diagnosis, thus overcoming the limitations of machine learning used alone. The technology was tested in experiments using fault data sets from the University of California, Irvine (UCI) machine learning repository. The experimental results show that the visual analysis and HCI process can improve the classification accuracy of data-oriented fault diagnosis.
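
    One common way to express a multivariate sample as a graph that an expert can inspect visually is a star (radar) glyph, one axis per feature. The sketch below shows that generic construction with invented feature values; it is only an assumption about what such a multivariate-graph representation might look like, not the graph family or fault data used by the authors.

      import math

      def star_glyph(sample, value_ranges):
          """Map a multivariate fault sample to 2-D vertices of a star glyph.

          sample       : list of feature values
          value_ranges : list of (min, max) pairs used to normalise each axis
          """
          n = len(sample)
          vertices = []
          for i, (value, (lo, hi)) in enumerate(zip(sample, value_ranges)):
              r = (value - lo) / (hi - lo)       # normalised radius in [0, 1]
              angle = 2.0 * math.pi * i / n      # one axis per feature
              vertices.append((r * math.cos(angle), r * math.sin(angle)))
          return vertices

      # Invented features for one machine state, each normalised to [0, 1]
      print(star_glyph([0.8, 0.3, 0.55, 0.1], [(0, 1)] * 4))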

  7. Multimodal nanoparticulate bioimaging contrast agents.

    Science.gov (United States)

    Sharma, Parvesh; Singh, Amit; Brown, Scott C; Bengtsson, Niclas; Walter, Glenn A; Grobmyer, Stephen R; Iwakuma, Nobutaka; Santra, Swadeshmukul; Scott, Edward W; Moudgil, Brij M

    2010-01-01

    A wide variety of bioimaging techniques (e.g., ultrasound, computed X-ray tomography, magnetic resonance imaging (MRI), and positron emission tomography) are commonly employed for clinical diagnostics and scientific research. While all of these methods use a characteristic "energy-matter" interaction to provide specific details about biological processes, each modality differs from another in terms of spatial and temporal resolution, anatomical and molecular details, imaging depth, as well as the desirable material properties of contrast agents needed for augmented imaging. On many occasions, it is advantageous to apply multiple complementary imaging modalities for faster and more accurate prognosis. Since most imaging modalities employ exogenous contrast agents to improve the signal-to-noise ratio, the development and use of multimodal contrast agents is considered to be highly advantageous for obtaining improved imagery from sought-after imaging modalities. Multimodal contrast agents offer improvements in patient care, and at the same time can reduce costs and enhance safety by limiting the number of contrast agent administrations required for imaging purposes. Herein, we describe the synthesis and characterization of nanoparticulate-based multimodal contrast agent for noninvasive bioimaging using MRI, optical, and photoacoustic tomography (PAT)-imaging modalities. The synthesis of these agents is described using microemulsions, which enable facile integration of the desired diversity of contrast agents and material components into a single entity.

  8. Critical Analysis of Multimodal Discourse

    DEFF Research Database (Denmark)

    van Leeuwen, Theo

    2013-01-01

    This is an encyclopaedia article which defines the fields of critical discourse analysis and multimodality studies, argues that within critical discourse analysis more attention should be paid to multimodality, and within multimodality to critical analysis, and ends by reviewing a few examples of recent work in the critical analysis of multimodal discourse.

  9. Differentiated effects of the multimodal antidepressant vortioxetine on sleep architecture: Part 2, pharmacological interactions in rodents suggest a role of serotonin-3 receptor antagonism

    OpenAIRE

    Steven C Leiser; Iglesias-Bregna, Deborah; Westrich, Ligia; Pehrson, Alan L.; Sanchez, Connie

    2015-01-01

    Antidepressants often disrupt sleep. Vortioxetine, a multimodal antidepressant acting through serotonin (5-HT) transporter (SERT) inhibition, 5-HT3, 5-HT7 and 5-HT1D receptor antagonism, 5-HT1B receptor partial agonism, and 5-HT1A receptor agonism, had fewer incidences of sleep-related adverse events reported in depressed patients. In the accompanying paper a polysomnographic electroencephalography (sleep-EEG) study of vortioxetine and paroxetine in healthy subjects indicated that at low/inte...

  10. Natural multimodal communication for human–robot collaboration

    National Research Council Canada - National Science Library

    Maurtua, Iñaki; Fernández, Izaskun; Tellaeche, Alberto; Kildal, Johan; Susperregi, Loreto; Ibarguren, Aitor; Sierra, Basilio

    2017-01-01

    This article presents a semantic approach for multimodal interaction between humans and industrial robots to enhance the dependability and naturalness of the collaboration between them in real industrial settings...

  11. Translating board games: multimodality and play

    OpenAIRE

    Evans, Jonathan

    2013-01-01

    This article examines the translation of modern board games as multimodal texts. It argues that games are produced in the interaction between players, pieces and rules, making them a participatory form of text. The article analyses the elements of the rules and in-game text in order to show how the multimodal elements of the text are essential to the experience of the game and how they affect the translation process. Many games are designed to be translated for many markets and avoid unnecess...

  12. Human Computer Interface Design Criteria. Volume 1. User Interface Requirements

    Science.gov (United States)

    2010-03-19

    2 entitled Human Computer Interface (HCI) Design Criteria Volume 1: User Interface Requirements which contains the following major changes from...MISSILE SYSTEMS CENTER Air Force Space Command 483 N. Aviation Blvd. El Segundo, CA 90245 4. This standard has been approved for use on all Space and...and efficient model of how the system works and can generalize this knowledge to other systems. According to Mayhew in Principles and Guidelines in

  13. Unmanned Surface Vehicle Human-Computer Interface for Amphibious Operations

    Science.gov (United States)

    2013-08-01

    Figure 1. MOCU Baseline HCI Using Both Aerial Photo and Digital Nautical Chart (DNC) Maps to Control and Monitor Land, Sea, and Air Vehicles. ... 3.2 BASELINE MOCU HCI The Baseline MOCU interface is a tiled

  14. Multimodal Interfaces: Literature Review of Ecological Interface Design, Multimodal Perception and Attention, and Intelligent Adaptive Multimodal Interfaces

    Science.gov (United States)

    2010-05-01

    which can be used in generating tactile messages using the vest. It is important to note that other types of tactile and haptic interfaces exist...field haptics in virtual environments. In Proceedings of the 2003 IEEE Virtual Reality Conference (pp. 287-288). Los Alamitos, CA: IEEE Computer... Tactile Information presentation in the cockpit. In Proceedings of the First International Workshop on Haptic Human-Computer Interaction (pp. 174-181

  15. Kansei Colour Concepts to Improve Effective Colour Selection in Designing Human Computer Interfaces

    Directory of Open Access Journals (Sweden)

    Tharangie K G D

    2010-05-01

    Full Text Available Colours have a major impact on human-computer interaction. Although there is a very thin line between appropriate and inappropriate use of colours, if used properly, colours can be a powerful tool to improve the usefulness of a computer interface in a wide variety of areas. Many designers consider mostly the physical aspect of colour and tend to forget that a psychological aspect of colour also exists. However, the findings of this study confirm that the psychological aspect, or affective dimension, of colour also plays an important role in colour interface design and in user satisfaction. Using Kansei Engineering principles, the study explores the affective variability of colours and how it can be manipulated to provide better design guidance and solutions. A group of twenty adults from Sri Lanka, aged 30 to 40, took part in the study. The survey was conducted using a Kansei colour questionnaire in normal atmospheric conditions. The results reveal that the affective variability of colours plays an important role in human-computer interaction as an influential factor in drawing the user towards, or driving the user away from, the interface, thereby improving or degrading user satisfaction.

  16. Multimodal Resources in Transnational Adoption

    DEFF Research Database (Denmark)

    Raudaskoski, Pirkko Liisa

    The paper discusses an empirical analysis which highlights the multimodal nature of identity construction. A documentary on transnational adoption provides real life incidents as research material. The incidents involve (or from them emerge) various kinds of multimodal resources and participants...

  18. Evolution of the field quantum entropy and entanglement in a system of multimode light field interacting resonantly with a two-level atom through Nj-degenerate NΣ-photon process

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The time evolution of the field quantum entropy and entanglement in a system of multi-mode coherent light field resonantly interacting with a two-level atom through a degenerate multi-photon process is studied by utilizing the Von Neumann reduced entropy theory, and the analytical expressions of the quantum entropy of the multimode field and the numerical calculation results for a three-mode field interacting with the atom are obtained. Our attention focuses on the discussion of the influences of the initial average photon number, the atomic distribution angle and the phase angle of the atom dipole on the evolution of the quantum field entropy and entanglement. The results obtained from the numerical calculation indicate that: the stronger the quantum field is, the weaker the entanglement between the quantum field and the atom will be, and when the field is strong enough, the two subsystems may be in a disentangled state all the time; the quantum field entropy is strongly dependent on the atomic distribution angle, namely, the quantum field and the two-level atom are always in the entangled state, and are nearly stable at maximum entanglement after a short time of vibration; the larger the atomic distribution angle is, the shorter the time for the field quantum entropy to evolve to its maximum value is; the phase angles of the atom dipole have almost no influence on the entanglement between the quantum field and the two-level atom. Entangled states or pure states based on these properties of the field quantum entropy can be prepared.
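
    For reference, the Von Neumann reduced entropy invoked above is the standard definition. Writing the reduced density operator of the field as the partial trace of the joint state over the atom, the field entropy is

      S_f(t) = -\mathrm{Tr}\left[\rho_f(t)\,\ln\rho_f(t)\right], \qquad \rho_f(t) = \mathrm{Tr}_{\mathrm{atom}}\,\rho(t),

    and, assuming the joint field-atom state remains pure under the unitary evolution, S_f(t) equals the atomic entropy and so directly quantifies the field-atom entanglement. This restates textbook material for clarity; the notation is not taken from the record itself.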

  19. Evolution of the field quantum entropy and entanglement in a system of multimode light field interacting resonantly with a two-level atom through Nj-degenerate NΣ-photon process

    Institute of Scientific and Technical Information of China (English)

    LIU WangYun; YANG ZhiYong; AN YuYing

    2008-01-01

    The time evolution of the field quantum entropy and entanglement in a system of multi-mode coherent light field resonantly interacting with a two-level atom through a degenerate multi-photon process is studied by utilizing the Von Neumann reduced entropy theory, and the analytical expressions of the quantum entropy of the multimode field and the numerical calculation results for a three-mode field interacting with the atom are obtained. Our attention focuses on the discussion of the influences of the initial average photon number, the atomic distribution angle and the phase angle of the atom dipole on the evolution of the quantum field entropy and entanglement. The results obtained from the numerical calculation indicate that: the stronger the quantum field is, the weaker the entanglement between the quantum field and the atom will be, and when the field is strong enough, the two subsystems may be in a disentangled state all the time; the quantum field entropy is strongly dependent on the atomic distribution angle, namely, the quantum field and the two-level atom are always in the entangled state, and are nearly stable at maximum entanglement after a short time of vibration; the larger the atomic distribution angle is, the shorter the time for the field quantum entropy to evolve to its maximum value is; the phase angles of the atom dipole have almost no influence on the entanglement between the quantum field and the two-level atom. Entangled states or pure states based on these properties of the field quantum entropy can be prepared.

  20. Multimodal sequence learning.

    Science.gov (United States)

    Kemény, Ferenc; Meier, Beat

    2016-02-01

    While sequence learning research models complex phenomena, previous studies have mostly focused on unimodal sequences. The goal of the current experiment is to put implicit sequence learning into a multimodal context: to test whether it can operate across different modalities. We used the Task Sequence Learning paradigm to test whether sequence learning varies across modalities, and whether participants are able to learn multimodal sequences. Our results show that implicit sequence learning is very similar regardless of the source modality. However, the presence of correlated task and response sequences was required for learning to take place. The experiment provides new evidence for implicit sequence learning of abstract conceptual representations. In general, the results suggest that correlated sequences are necessary for implicit sequence learning to occur. Moreover, they show that elements from different modalities can be automatically integrated into one unitary multimodal sequence.

  1. Multimodal Processes Rescheduling

    DEFF Research Database (Denmark)

    Bocewicz, Grzegorz; Banaszak, Zbigniew A.; Nielsen, Peter

    2013-01-01

    Cyclic scheduling problems concerning multimodal processes are usually observed in FMSs producing multi-type parts, where the Automated Guided Vehicles System (AGVS) plays the role of a material handling system. Schedulability analysis of concurrently flowing cyclic processes (SCCP) executed in the...

  2. Safety Metrics for Human-Computer Controlled Systems

    Science.gov (United States)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  3. Metalogue: A Multiperspective Multimodal Dialogue System with Metacognitive Abilities for Highly Adaptive and Flexible Dialogue Management

    NARCIS (Netherlands)

    Alexandersson, Jan; Aretoulaki, Maria; Campbell, Nick; Gardner, Michael; Girenko, Andrey; Klakow, Dietrich; Koryzis, Dimitris; Petukhova, Volha; Specht, Marcus; Spiliotopoulos, Dimitris; Stricker, Alexander; Taatgen, Niels

    2014-01-01

    This poster paper presents a high-level description of the Metalogue project that is developing a multi-modal dialogue system that is able to implement interactive behaviors that seem natural to users and is flexible enough to exploit the full potential of multimodal interaction. We provide an outli

  4. Cognitive Construction of Multimodal Metaphor in COLOURFUL GUIZHOU promo

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    Based on multimodal metaphor theory, this paper analyzes the processes of cognitive construction of multimodal metaphor in dynamic discourse, taking the Colourful Guizhou promo as its subject. It focuses on the interaction of the different modes and on the process of dynamic metaphor construction, in relation to the genre characteristics of promos. We find that this promo not only impresses audiences with language, image and voice, but also connects directly with audiences for the purpose of wide publicity.

  5. Cognitive Construction of Multimodal Metaphor in COLOURFUL GUIZHOU promo

    Institute of Scientific and Technical Information of China (English)

    王然; 颜至敏; 曾贤模

    2015-01-01

    Based on multimodal metaphor theory, this paper analyzes the processes of cognitive construction of multimodal metaphor in dynamic discourse, taking the Colourful Guizhou promo as its subject. It focuses on the interaction of the different modes and on the process of dynamic metaphor construction, in relation to the genre characteristics of promos. We find that this promo not only impresses audiences with language, image and voice, but also connects directly with audiences for the purpose of wide publicity.

  6. Multimode geodesic branching components

    Science.gov (United States)

    Schulz, D.; Voges, E.

    1983-01-01

    Geodesic branching components are investigated for multimode guided wave optics. Geodesic structures with particular properties, e.g. focussing star couplers, are derived by a synthesis technique based on a theorem of Toraldo di Francia. Experimentally, the geodesic surfaces are printed on acrylic glass and are spin-coated with organic film waveguides.

  7. Multimodal emergens via musik

    DEFF Research Database (Denmark)

    Bonde, Anders

    2010-01-01

    This article presents and argues for a work-analytical approach to investigating multimodal meaning-making in audiovisual media products such as commercials and documentary films, in which several different modalities or semiotic resources interact. As a theoretical...

  8. Multimodal Strategies of Theorization

    DEFF Research Database (Denmark)

    Cartel, Melodie; Colombero, Sylvain; Boxenbaum, Eva

    This paper examines the role of multimodal strategies in processes of theorization. Empirically, we investigate the theorization process of a highly disruptive innovation in the history of architecture: reinforced concrete. Relying on archival data from a dominant French architectural journal fro...

  9. Multimodal Strategies of Theorization

    DEFF Research Database (Denmark)

    Cartel, Melodie; Colombero, Sylvain; Boxenbaum, Eva

    This paper examines the role of multimodal strategies in processes of theorization. Empirically, we investigate the theorization process of a highly disruptive innovation in the history of architecture: reinforced concrete. Relying on archival data from a dominant French architectural journal from...... with well-known rhetorical strategies and develop a process model of theorization....

  10. Modeling Multimodal Stratification

    DEFF Research Database (Denmark)

    Boeriis, Morten

    2017-01-01

    This article discusses one of the core axioms of social semiotic theory, namely stratification, in the light of developments in multimodality in recent years. The discussion takes a point of departure in the approaches to stratification taken by Hjelmslev, Halliday, and Kress and van Leeuwen...

  11. Multimodal Semantic Analysis of Public Transport Movements

    Science.gov (United States)

    Halb, Wolfgang; Neuschmied, Helmut

    We present a system for multimodal, semantic analysis of person movements that incorporates data from surveillance cameras, weather sensors, and third-party information providers. The interactive demonstration will show the automated creation of a survey of passenger transfer behavior at a public transport hub. Such information is vital for public transportation planning and the presented approach increases the cost-effectiveness and data accuracy as compared to traditional methods.

  12. On a Combined Analysis Framework for Multimodal Discourse Analysis

    Institute of Scientific and Technical Information of China (English)

    窦瑞芳

    2015-01-01

    When people communicate, they do not only use language, that is, a single mode of communication, but also simultaneously use body language, eye contact, pictures, etc., which is called multimodal communication. Multimodal communication is, as a matter of fact, the most natural way of communicating. Therefore, in order to make a complete discourse analysis, all the modes involved in an interaction or discourse should be taken into account, and a new analysis framework for Multimodal Discourse Analysis ought to be created to move this type of analysis forward. In this paper, the author makes a tentative move to shape a new analysis framework for Multimodal Discourse Analysis.

  13. On a Combined Analysis Framework for Multimodal Discourse Analysis

    Institute of Scientific and Technical Information of China (English)

    窦瑞芳

    2015-01-01

    When people communicate, they do not only use language, that is, a single mode of communication, but also simultaneously use body language, eye contact, pictures, etc., which is called multimodal communication. Multimodal communication is, as a matter of fact, the most natural way of communicating. Therefore, in order to make a complete discourse analysis, all the modes involved in an interaction or discourse should be taken into account, and a new analysis framework for Multimodal Discourse Analysis ought to be created to move this type of analysis forward. In this paper, the author makes a tentative move to shape a new analysis framework for Multimodal Discourse Analysis.

  14. Differentiated effects of the multimodal antidepressant vortioxetine on sleep architecture: Part 2, pharmacological interactions in rodents suggest a role of serotonin-3 receptor antagonism.

    Science.gov (United States)

    Leiser, Steven C; Iglesias-Bregna, Deborah; Westrich, Ligia; Pehrson, Alan L; Sanchez, Connie

    2015-10-01

    Antidepressants often disrupt sleep. Vortioxetine, a multimodal antidepressant acting through serotonin (5-HT) transporter (SERT) inhibition, 5-HT3, 5-HT7 and 5-HT1D receptor antagonism, 5-HT1B receptor partial agonism, and 5-HT1A receptor agonism, had fewer incidences of sleep-related adverse events reported in depressed patients. In the accompanying paper a polysomnographic electroencephalography (sleep-EEG) study of vortioxetine and paroxetine in healthy subjects indicated that at low/intermediate levels of SERT occupancy, vortioxetine affected rapid eye movement (REM) sleep differently than paroxetine. Here we investigated clinically meaningful doses (80-90% SERT occupancy) of vortioxetine and paroxetine on sleep-EEG in rats to further elucidate the serotoninergic receptor mechanisms mediating this difference. Cortical EEG, electromyography (EMG), and locomotion were recorded telemetrically for 10 days, following an acute dose, from rats receiving vortioxetine-infused chow or paroxetine-infused water and respective controls. Sleep stages were manually scored into active wake, quiet wake, and non-REM or REM sleep. Acute paroxetine or vortioxetine delayed REM onset latency (ROL) and decreased REM episodes. After repeated administration, vortioxetine yielded normal sleep-wake rhythms while paroxetine continued to suppress REM. Paroxetine, unlike vortioxetine, increased transitions from non-REM to wake, suggesting fragmented sleep. Next, we investigated the role of 5-HT3 receptors in eliciting these differences. The 5-HT3 receptor antagonist ondansetron significantly reduced paroxetine's acute effects on ROL, while the 5-HT3 receptor agonist SR57227A significantly increased vortioxetine's acute effect on ROL. Overall, our data are consistent with the clinical findings that vortioxetine impacts REM sleep differently than paroxetine, and suggests a role for 5-HT3 receptor antagonism in mitigating these differences.

  15. Multimodal eye recognition

    Science.gov (United States)

    Zhou, Zhi; Du, Yingzi; Thomas, N. L.; Delp, Edward J., III

    2010-04-01

    Multimodal biometrics use more than one means of biometric identification to achieve higher recognition accuracy, since a unimodal biometric is sometimes not sufficient for identification and classification. In this paper, we propose a multimodal eye recognition system which can obtain both iris and sclera patterns from one color eye image. Gabor filter and 1-D Log-Gabor filter algorithms have been applied as the iris recognition algorithms. In sclera recognition, we introduce automatic sclera segmentation, sclera pattern enhancement, sclera pattern template generation, and sclera pattern matching. We apply kernel-based matching score fusion to improve the performance of the eye recognition system. The experimental results show that the proposed eye recognition method can achieve better performance compared to unimodal biometric identification, and the accuracy of our proposed kernel-based matching score fusion method is higher than two classic linear matching score fusion methods: Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA).
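
    Score-level fusion of the iris and sclera matchers is the system-level idea here. The fragment below shows only the simplest linear baseline, min-max normalisation followed by a weighted sum, as a point of reference; it does not reproduce the paper's kernel-based fusion or its PCA/LDA comparisons, and the weights and scores are invented.

      import numpy as np

      def min_max(scores):
          s = np.asarray(scores, dtype=float)
          return (s - s.min()) / (s.max() - s.min() + 1e-12)

      def fuse(iris_scores, sclera_scores, w_iris=0.6):
          """Weighted-sum fusion of normalised matching scores from the two modalities."""
          return w_iris * min_max(iris_scores) + (1.0 - w_iris) * min_max(sclera_scores)

      # Invented matching scores for five probe/gallery comparisons
      iris = [0.82, 0.40, 0.77, 0.15, 0.90]
      sclera = [0.60, 0.35, 0.80, 0.20, 0.70]
      print(np.round(fuse(iris, sclera), 3))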

  16. Programmable Multimode Quantum Networks

    CERN Document Server

    Armstrong, Seiji; Janousek, Jiri; Hage, Boris; Treps, Nicolas; Lam, Ping Koy; Bachor, Hans-A

    2012-01-01

    Entanglement between large numbers of quantum modes is the quintessential resource for quantum information processing and future applications such as the quantum internet. Conventionally the generation of multimode entanglement in optics requires complex layouts of beam-splitters and phase shifters in order to transform the input modes into entangled modes. These networks need substantial modification for every new set of entangled modes to be generated. Further, the complexity grows rapidly with the number of entangled modes as the number of detectors, phase locks and optical components needs to be increased. Here we report on the highly efficient and versatile generation of various multimode entangled states within one optical beam. By defining our modes to be combinations of different spatial regions of the beam, we may use just one pair of multi-pixel detectors and one local oscillator to measure an orthogonal set of modes. The transformation of this set into a desired set of entangled modes is calculate...

  17. Multimodal integration of anatomy and physiology classes: How instructors utilize multimodal teaching in their classrooms

    Science.gov (United States)

    McGraw, Gerald M., Jr.

    Multimodality is the theory of communication as it applies to social and educational semiotics (making meaning through the use of multiple signs and symbols). The term multimodality describes a communication methodology that includes multiple textual, aural, and visual applications (modes) that are woven together to create what is referred to as an artifact. Multimodal teaching methodology attempts to create a deeper meaning to course content by activating the higher cognitive areas of the student's brain, creating a more sustained retention of the information (Murray, 2009). The introduction of multimodality educational methodologies as a means to more optimally engage students has been documented within educational literature. However, studies analyzing the distribution and penetration into basic sciences, more specifically anatomy and physiology, have not been forthcoming. This study used a quantitative survey design to determine the degree to which instructors integrated multimodality teaching practices into their course curricula. The instrument used for the study was designed by the researcher based on evidence found in the literature and sent to members of three associations/societies for anatomy and physiology instructors: the Human Anatomy and Physiology Society; the iTeach Anatomy & Physiology Collaborate; and the American Physiology Society. Respondents totaled 182 instructor members of two- and four-year, private and public higher learning colleges collected from the three organizations collectively with over 13,500 members in over 925 higher learning institutions nationwide. The study concluded that the expansion of multimodal methodologies into anatomy and physiology classrooms is at the beginning of the process and that there is ample opportunity for expansion. Instructors continue to use lecture as their primary means of interaction with students. Email is still the major form of out-of-class communication for full-time instructors. Instructors with

  18. Multimodal Person Identification

    Science.gov (United States)

    Pnevmatikakis, Aristodemos; Ekenel, Hazım K.; Barras, Claude; Hernando, Javier

    Person identification is of paramount importance in security, surveillance, human-computer interfaces, and smart spaces. All these applications attempt the recognition of people based on audiovisual data. The way the systems collect these data divides them into two categories: Near-field systems: Both the sensor and the person to be identified focus on each other. Far-field systems: The sensors monitor an entire space in which the person appears, occasionally collecting useful data (face and/or speech) about that person. Also, the person pays no attention to the sensors and is possibly unaware of their existence.

  19. Promoting Multilingual Communicative Competence through Multimodal Academic Learning Situations

    Science.gov (United States)

    Kyppö, Anna; Natri, Teija

    2016-01-01

    This paper presents information on the factors affecting the development of multilingual and multicultural communicative competence in interactive multimodal learning environments in an academic context. The interdisciplinary course in multilingual interaction offered at the University of Jyväskylä aims to enhance students' competence in…

  20. Hand Gesture and Neural Network Based Human Computer Interface

    Directory of Open Access Journals (Sweden)

    Aekta Patel

    2014-06-01

    Full Text Available Computers are used by almost everyone, whether at work or at home. Our aim is to make computers that can understand human language and to develop user-friendly human-computer interfaces (HCI). Human gestures are perceived by vision, and this research is concerned with recognizing human gestures to create an HCI. Coding these gestures into machine language demands a complex programming algorithm. In this project, we first detect, recognize and pre-process the hand gestures using a general recognition method. We then compute the recognized image's properties and use them to control mouse movement, clicking, and the VLC media player. Finally, we implement the same functions using a neural network technique and compare it with the general recognition method, from which we conclude that the neural network technique performs better. The results based on the neural network technique, and the comparison between the neural network method and the general method, are presented.
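
    As a rough illustration of the pipeline described above (region properties of a segmented hand fed to a neural network whose output drives an interface action), the sketch below uses invented features, random training data and scikit-learn's MLP classifier; it does not reproduce the authors' recognition method or their mouse/VLC control code.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def region_properties(mask):
          """Simple shape features of a binary hand mask: area, centroid, bounding-box aspect."""
          ys, xs = np.nonzero(mask)
          area = xs.size / mask.size
          cy, cx = ys.mean() / mask.shape[0], xs.mean() / mask.shape[1]
          aspect = (xs.max() - xs.min() + 1) / (ys.max() - ys.min() + 1)
          return [area, cy, cx, aspect]

      # Invented training data: feature vectors and gesture labels (0=move, 1=click, 2=play/pause)
      rng = np.random.default_rng(1)
      X = rng.random((60, 4))
      y = rng.integers(0, 3, size=60)
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=1).fit(X, y)

      mask = np.zeros((120, 160), dtype=bool)
      mask[30:90, 40:100] = True               # stand-in for a segmented hand region
      print(clf.predict([region_properties(mask)]))  # predicted gesture class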

  1. Develop and Evaluate the Effects of Multimodal Presentation System on Elementary ESL Students

    Science.gov (United States)

    Kuo, Fang-O; Yu, Pao-Ta; Hsiao, Wei-Hung

    2013-01-01

    The purpose of this study is to develop and evaluate the effects of multimodal presentation system (MPS), a multimodal presentation software integrated with interactive whiteboard (IWB), on student learning in the elementary English as second language (ESL) course. It focuses primarily on techniques and tools to enhance the students' ESL learning…

  2. A Single-Photon Subtractor for Multimode Quantum States

    Science.gov (United States)

    Ra, Young-Sik; Jacquard, Clément; Averchenko, Valentin; Roslund, Jonathan; Cai, Yin; Dufour, Adrien; Fabre, Claude; Treps, Nicolas

    2016-05-01

    In the last decade, single-photon subtraction has proved to be a key operation in optical quantum information processing and quantum state engineering. Implementations of photon subtraction have been based on linear optics and single-photon detection acting on single-mode resources. This technique, however, becomes unsuitable with multimode resources such as spectrally multimode squeezed states or continuous-variable cluster states. We implement a single-photon subtractor for such multimode resources based on sum-frequency generation and single-photon detection. An input multimode quantum state interacts with a bright control beam whose spectrum has been engineered through ultrafast pulse shaping. The multimode quantum state resulting from the single-photon subtractor is analyzed with multimode homodyne detection whose local oscillator spectrum is independently engineered. We characterize the single-photon subtractor via coherent-state quantum process tomography, which provides its mode selectivity and subtraction modes. The ability to simultaneously control the state engineering and its detection ensures both flexibility and scalability in the production of highly entangled non-Gaussian quantum states.

  3. Multimodal processes scheduling in mesh-like network environment

    Directory of Open Access Journals (Sweden)

    Bocewicz Grzegorz

    2015-06-01

    Full Text Available Multimodal process planning and scheduling play a pivotal role in many different domains, including city networks, multimodal transportation systems, and computer and telecommunication networks. A multimodal process can be seen as a process partially handled by locally executed cyclic processes. In that context, the concept of a Mesh-like Multimodal Transportation Network (MMTN), in which several isomorphic subnetworks interact with each other via distinguished subsets of common shared intermodal transport interchange facilities (such as a railway station, bus station or bus/tram stop) so as to provide a variety of demand-responsive passenger transportation services, is examined. Consider a mesh-like layout of a passenger transport network equipped with different lines including buses, trams, metro, trains, etc., where passenger flows are treated as multimodal processes. The goal is to provide a declarative model enabling one to state a constraint satisfaction problem aimed at scheduling multimodal transportation processes encompassing passenger flow itineraries. The main objective is then to provide conditions guaranteeing solvability of particular transport line scheduling, i.e. guaranteeing the right match-up of the local cyclic bus, tram, metro and train schedules to a given set of passenger flow itineraries.
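
    The match-up condition can be pictured as a small constraint satisfaction problem. The sketch below is only illustrative (it is not the paper's declarative model; period, waiting limit and itinerary are invented): it searches by brute force for phase offsets of a few cyclic lines so that every transfer in a hypothetical itinerary stays within a maximum waiting time.

```python
# Illustrative sketch (not the paper's model): brute-force search for
# phase offsets of cyclic lines so that every transfer in an itinerary
# waits at most MAX_WAIT time units. All numbers are hypothetical.
from itertools import product

PERIOD = 10                      # common cycle length of all lines
MAX_WAIT = 3
# (line, arrival time at the interchange within its cycle) per leg
ITINERARY = [("bus", 2), ("tram", 5), ("metro", 1)]

def feasible(offsets):
    """Check that each transfer between consecutive legs is short enough."""
    for (l1, t1), (l2, t2) in zip(ITINERARY, ITINERARY[1:]):
        arrive = (offsets[l1] + t1) % PERIOD
        depart = (offsets[l2] + t2) % PERIOD
        if (depart - arrive) % PERIOD > MAX_WAIT:
            return False
    return True

lines = sorted({l for l, _ in ITINERARY})
for combo in product(range(PERIOD), repeat=len(lines)):
    offsets = dict(zip(lines, combo))
    if feasible(offsets):
        print("feasible match-up of line schedules:", offsets)
        break
```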

  4. Generalized modulational instability in multimode fibers: wideband multimode parametric amplification

    CERN Document Server

    Guasoni, M

    2015-01-01

    In this paper intermodal modulational instability (IM-MI) is analyzed in a multimode fiber where several spatial and polarization modes propagate. The coupled nonlinear Schrödinger equations describing the modal evolution in the fiber are linearized and reduced to an eigenvalue problem. As a result, the amplification of each mode can be described by means of the eigenvalues and eigenvectors of a matrix that stores the information about the dispersion properties of the modes and the modal power distribution of the pump. Some useful analytical formulas are also provided that estimate the modal amplification as a function of the system parameters. Finally, the impact of third-order dispersion and of absorption losses is evaluated, which reveals some surprising phenomena in the IM-MI dynamics. These outcomes generalize previous studies on bimodal MI, related to the interaction between 2 spatial or polarization modes, to the most general case of $N>2$ interacting modes. Moreover, they pave the way towards the ...
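
    In generic terms (this is only a schematic restatement, not the paper's notation), linearizing the coupled equations around the pump solution yields a linear system for the perturbation amplitudes whose eigenvalues set the parametric gain of each mode:

```latex
% Schematic only: M depends on the modal dispersion coefficients and the
% modal power distribution of the pump, as described in the abstract.
\begin{aligned}
  i\,\frac{\partial \mathbf{a}}{\partial z} &= M(\Omega)\,\mathbf{a},
  \qquad \mathbf{a} = \bigl(a_1, a_1^{*}, \dots, a_N, a_N^{*}\bigr)^{T},\\
  M(\Omega)\,\mathbf{v}_k &= \lambda_k(\Omega)\,\mathbf{v}_k,
  \qquad g_k(\Omega) = 2\,\operatorname{Im}\lambda_k(\Omega).
\end{aligned}
```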

  5. Mining Multimodal Sequential Patterns: A Case Study on Affect Detection

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez; Yannakakis, Georgios N.

    2011-01-01

    Temporal data from multimodal interaction such as speech and bio-signals cannot be easily analysed without a preprocessing phase through which some key characteristics of the signals are extracted. Typically, standard statistical signal features such as average values are calculated prior...... to the analysis and, subsequently, are presented either to a multimodal fusion mechanism or a computational model of the interaction. This paper proposes a feature extraction methodology which is based on frequent sequence mining within and across multiple modalities of user input. The proposed method is applied...
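
    A toy sketch of the idea of mining frequent sequences across modalities (this is not the paper's algorithm; the event labels and support threshold are invented): two discretised streams are merged in time order and short subsequences that recur often enough become candidate multimodal features.

```python
# Illustrative sketch (not the paper's algorithm): discretise two input
# modalities into symbolic events, merge them by timestamp and count
# recurring short subsequences as candidate multimodal features.
from collections import Counter

speech = [(0.5, "pitch_up"), (2.0, "pause"), (4.1, "pitch_up")]
biosig = [(0.7, "scl_rise"), (2.2, "scl_rise"), (4.3, "scl_rise")]

events = [sym for _, sym in sorted(speech + biosig)]   # fused event stream

def ngrams(seq, n):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

min_support = 2
counts = Counter(ngrams(events, 2))
frequent = {pat: c for pat, c in counts.items() if c >= min_support}
print(frequent)   # -> {('pitch_up', 'scl_rise'): 2}
```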

  6. Simplified Multimodal Biometric Identification

    Directory of Open Access Journals (Sweden)

    Abhijit Shete

    2014-03-01

    Full Text Available Multibiometric systems are expected to be more reliable than unimodal biometric systems for personal identification due to the presence of multiple, fairly independent pieces of evidence, e.g. the Unique Identification Project "Aadhaar" of the Government of India. In this paper, we present a novel wavelet-based technique to perform fusion at the feature level and at the score level by considering two biometric modalities, face and fingerprint. The results indicate that the proposed technique can lead to substantial improvement in multimodal matching performance. The proposed technique is simple because it requires no preprocessing of the raw biometric traits and no feature or score normalization.
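
    A minimal sketch of the two fusion levels mentioned above (not the paper's exact scheme; the images, wavelet choice and weights are placeholders): wavelet approximation coefficients serve as features, which are either concatenated across modalities or matched per modality and combined with a weighted sum.

```python
# Illustrative sketch (not the paper's exact scheme): wavelet features from
# face and fingerprint images, fused at feature level and at score level.
import numpy as np
import pywt

def wavelet_features(img):
    """Single-level 2-D Haar DWT; use the approximation band as features."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
    return cA.ravel()

face = np.random.rand(64, 64)            # placeholder enrolled images
finger = np.random.rand(64, 64)
probe_face = face + 0.05 * np.random.rand(64, 64)
probe_finger = finger + 0.05 * np.random.rand(64, 64)

def score(a, b):
    """Cosine similarity as a simple matching score."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Feature-level fusion: concatenate the two modality feature vectors.
enrolled = np.concatenate([wavelet_features(face), wavelet_features(finger)])
probe = np.concatenate([wavelet_features(probe_face), wavelet_features(probe_finger)])

# Score-level fusion: weighted sum of per-modality match scores.
s_face = score(wavelet_features(face), wavelet_features(probe_face))
s_finger = score(wavelet_features(finger), wavelet_features(probe_finger))
fused = 0.5 * s_face + 0.5 * s_finger

print(round(score(enrolled, probe), 3), round(fused, 3))
```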

  7. Investigation of protein selectivity in multimodal chromatography using in silico designed Fab fragment variants.

    Science.gov (United States)

    Karkov, Hanne Sophie; Krogh, Berit Olsen; Woo, James; Parimal, Siddharth; Ahmadian, Haleh; Cramer, Steven M

    2015-11-01

    In this study, a unique set of antibody Fab fragments was designed in silico and produced to examine the relationship between protein surface properties and selectivity in multimodal chromatographic systems. We hypothesized that multimodal ligands containing both hydrophobic and charged moieties would interact strongly with protein surface regions where charged groups and hydrophobic patches were in close spatial proximity. Protein surface property characterization tools were employed to identify the potential multimodal ligand binding regions on the Fab fragment of a humanized antibody and to evaluate the impact of mutations on surface charge and hydrophobicity. Twenty Fab variants were generated by site-directed mutagenesis, recombinant expression, and affinity purification. Column gradient experiments were carried out with the Fab variants in multimodal, cation-exchange, and hydrophobic interaction chromatographic systems. The results clearly indicated that selectivity in the multimodal system was different from the other chromatographic modes examined. Column retention data for the reduced charge Fab variants identified a binding site comprising light chain CDR1 as the main electrostatic interaction site for the multimodal and cation-exchange ligands. Furthermore, the multimodal ligand binding was enhanced by additional hydrophobic contributions as evident from the results obtained with hydrophobic Fab variants. The use of in silico protein surface property analyses combined with molecular biology techniques, protein expression, and chromatographic evaluations represents a previously undescribed and powerful approach for investigating multimodal selectivity with complex biomolecules.

  8. Multimodality of Learning Through Anchored Instruction

    Science.gov (United States)

    Love, Mary Susan

    2004-01-01

    Multimodality of learning results from the intertextual relationship between multimodal design and other meaning-making modes. Meaning making is becoming more multimodal because language is continually reshaped by new forms of communication media. This article examines anchored instruction from a multimodal perspective. The first section includes…

  9. Learning multimodal latent attributes.

    Science.gov (United States)

    Fu, Yanwei; Hospedales, Timothy M; Xiang, Tao; Gong, Shaogang

    2014-02-01

    The rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. Attribute learning has emerged as a promising paradigm for bridging the semantic gap and addressing data sparsity via transferring attribute knowledge in object recognition and relatively simple action classification. In this paper, we address the task of attribute learning for understanding multimedia data with sparse and incomplete labels. In particular, we focus on videos of social group activities, which are particularly challenging and topical examples of this task because of their multimodal content and complex and unstructured nature relative to the density of annotations. To solve this problem, we 1) introduce a concept of semilatent attribute space, expressing user-defined and latent attributes in a unified framework, and 2) propose a novel scalable probabilistic topic model for learning multimodal semilatent attributes, which dramatically reduces requirements for an exhaustive accurate attribute ontology and expensive annotation effort. We show that our framework is able to exploit latent attributes to outperform contemporary approaches for addressing a variety of realistic multimedia sparse data learning tasks including: multitask learning, learning with label noise, N-shot transfer learning, and importantly zero-shot learning.

  10. MUVA: a MUltimodal Visceral design Ambient device

    DEFF Research Database (Denmark)

    Kivac, Robert; Klem, Sune Øllgaard; Olsen, Sophus Béneé

    2016-01-01

    This paper presents MUVA (MUltimodal Visceral design Ambient device), a prototype for a storytelling light- and sound-based ambient device. The aim of this device is to encourage social interaction and expand the emotional closeness in families with children where at least one parent has irregular...... work schedule. MUVA differs from the other ambient devices, because it is targeted to children, and it adopts a visceral design approach in order to be appealing to its users. It is a raindrop-shaped lamp, which features audio playing, while its light color is affected by the audio playing. MUVA can...

  11. Challenges in Transcribing Multimodal Data: A Case Study

    Science.gov (United States)

    Helm, Francesca; Dooly, Melinda

    2017-01-01

    Computer-mediated communication (CMC) once meant principally text-based communication mediated by computers, but rapid technological advances in recent years have heralded an era of multimodal communication with a growing emphasis on audio and video synchronous interaction. As CMC, in all its variants (text chats, video chats, forums, blogs, SMS,…

  12. Robustness of multimodal processes itineraries

    DEFF Research Database (Denmark)

    Bocewicz, G.; Banaszak, Z.; Nielsen, Izabela Ewa

    2013-01-01

    This paper concerns multimodal transport systems (MTS) represented by a supernetworks in which several unimodal networks are connected by transfer links and focuses on the scheduling problems encountered in these systems. Assuming unimodal networks are modeled as cyclic lines, i.e. the routes det...... of multimodal processes driven itinerary planning problem is our main contribution. Illustrative examples providing alternative itineraries in some cases of MTS malfunction are presented....

  13. Design Principles for Interactive Software

    DEFF Research Database (Denmark)

    The book addresses the crucial intersection of human-computer interaction (HCI) and software engineering by asking both what users require from interactive systems and what developers need to produce well-engineered software. Needs are expressed as...

  14. A Language/Action Model of Human-Computer Communication in a Psychiatric Hospital

    Science.gov (United States)

    Morelli, R. A.; Goethe, J. W.; Bronzino, J. D.

    1990-01-01

    When a staff physician says to an intern he is supervising “I think you should try medication X,” this statement may differ in meaning from the same string of words spoken between colleagues. In the first case, the statement may have the force of an order (“Do this!”), while in the latter it is merely a suggestion. In either case, the utterance sets up important expectations which constrain the future actions of the parties involved. This paper lays out an analytic framework, based on speech act theory, for representing such “conversations for action” so that they may be used to inform the design of human-computer interaction. The language/action design perspective views the information system -- in this case an expert system that monitors drug treatment -- as one of many “agents” within a broad communicative network. Speech act theory is used to model a typical psychiatric hospital unit as a system of communicative action. In addition to identifying and characterizing the primary communicative agents and speech acts, the model presents a taxonomy of key conversational patterns and shows how they may be applied to the design of a clinical monitoring system. In the final section, the advantages and implications of this design approach are discussed.

  15. Impact of familiarity on information complexity in human-computer interfaces

    Directory of Open Access Journals (Sweden)

    Bakaev Maxim

    2016-01-01

    Full Text Available A quantitative measure of information complexity remains very much desirable in the HCI field, since it may aid in the optimization of user interfaces, especially in human-computer systems for controlling complex objects. Our paper is dedicated to exploring the subjective (subject-dependent) aspect of this complexity, conceptualized as information familiarity. Although research on familiarity in human cognition and behaviour has been done in several fields, the accepted models in HCI, such as the Human Processor model or the Hick-Hyman law, do not generally consider this issue. In our experimental study the subjects performed search and selection of digits and letters, whose familiarity was conceptualized as frequency of occurrence in numbers and texts. The analysis showed a significant effect of information familiarity on selection time and throughput in regression models, although the R2 values were somewhat low. Still, we hope that our results might aid in the quantification of information complexity and its further application for optimizing interaction in human-machine systems.
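
    The kind of model the study points towards can be sketched as an ordinary regression that extends the Hick-Hyman relation T = a + b·log2(n + 1) with a familiarity term (the data and coefficients below are made up for illustration only):

```python
# Illustrative sketch: fit selection time against the Hick-Hyman set-size
# term plus a familiarity term (frequency of occurrence). Data is made up.
import numpy as np

n_items = np.array([4, 8, 16, 32, 4, 8, 16, 32])            # alternatives shown
familiarity = np.array([0.9, 0.9, 0.9, 0.9, 0.2, 0.2, 0.2, 0.2])
time_s = np.array([0.6, 0.8, 1.0, 1.2, 0.8, 0.9, 1.2, 1.5])  # selection times

# T = a + b*log2(n + 1) + c*familiarity   (extended Hick-Hyman model)
X = np.column_stack([np.ones_like(time_s),
                     np.log2(n_items + 1),
                     familiarity])
coef, *_ = np.linalg.lstsq(X, time_s, rcond=None)
a, b, c = coef
print(f"a={a:.3f}  b={b:.3f}  c={c:.3f}")   # c < 0: familiar items are selected faster
```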

  16. A multimodal approach to estimating vigilance using EEG and forehead EOG

    Science.gov (United States)

    Zheng, Wei-Long; Lu, Bao-Liang

    2017-04-01

    Objective. Covert aspects of ongoing user mental states provide key context information for user-aware human computer interactions. In this paper, we focus on the problem of estimating the vigilance of users using EEG and EOG signals. Approach. The PERCLOS index as vigilance annotation is obtained from eye tracking glasses. To improve the feasibility and wearability of vigilance estimation devices for real-world applications, we adopt a novel electrode placement for forehead EOG and extract various eye movement features, which contain the principal information of traditional EOG. We explore the effects of EEG from different brain areas and combine EEG and forehead EOG to leverage their complementary characteristics for vigilance estimation. Considering that the vigilance of users is a dynamic changing process because the intrinsic mental states of users involve temporal evolution, we introduce continuous conditional neural field and continuous conditional random field models to capture dynamic temporal dependency. Main results. We propose a multimodal approach to estimating vigilance by combining EEG and forehead EOG and incorporating the temporal dependency of vigilance into model training. The experimental results demonstrate that modality fusion can improve the performance compared with a single modality, EOG and EEG contain complementary information for vigilance estimation, and the temporal dependency-based models can enhance the performance of vigilance estimation. From the experimental results, we observe that theta and alpha frequency activities are increased, while gamma frequency activities are decreased in drowsy states in contrast to awake states. Significance. The forehead setup allows for the simultaneous collection of EEG and EOG and achieves comparative performance using only four shared electrodes in comparison with the temporal and posterior sites.
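
    A highly simplified sketch of the fusion idea (this is not the CCNF/CCRF modelling used in the paper; the feature sets, data and smoothing window are invented): EEG band-power and forehead-EOG features are concatenated, a regressor predicts the PERCLOS index, and a moving average stands in for the temporal-dependency modelling.

```python
# Illustrative sketch (not the paper's models): concatenate EEG band-power
# and forehead-EOG features, regress PERCLOS, then smooth over time as a
# crude stand-in for the continuous conditional random field.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
T = 200
eeg = rng.random((T, 5))   # e.g. delta/theta/alpha/beta/gamma band power
eog = rng.random((T, 3))   # e.g. blink rate, saccade rate, fixation duration
perclos = 0.4 * eeg[:, 1] + 0.3 * eeg[:, 2] - 0.3 * eog[:, 0] + 0.05 * rng.random(T)

X = np.hstack([eeg, eog])                       # modality fusion at feature level
model = Ridge(alpha=1.0).fit(X[:150], perclos[:150])
pred = model.predict(X[150:])

def smooth(x, w=5):
    """Moving average to inject temporal dependency into the estimates."""
    return np.convolve(x, np.ones(w) / w, mode="same")

print(np.corrcoef(smooth(pred), perclos[150:])[0, 1])
```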

  17. User interface issues in supporting human-computer integrated scheduling

    Science.gov (United States)

    Cooper, Lynne P.; Biefeld, Eric W.

    1991-09-01

    The topics are presented in view graph form and include the following: characteristics of Operations Mission Planner (OMP) schedule domain; OMP architecture; definition of a schedule; user interface dimensions; functional distribution; types of users; interpreting user interaction; dynamic overlays; reactive scheduling; and transitioning the interface.

  18. Towards human-computer synergetic analysis of large-scale biological data.

    Science.gov (United States)

    Singh, Rahul; Yang, Hui; Dalziel, Ben; Asarnow, Daniel; Murad, William; Foote, David; Gormley, Matthew; Stillman, Jonathan; Fisher, Susan

    2013-01-01

    Advances in technology have led to the generation of massive amounts of complex and multifarious biological data in areas ranging from genomics to structural biology. The volume and complexity of such data leads to significant challenges in terms of its analysis, especially when one seeks to generate hypotheses or explore the underlying biological processes. At the state-of-the-art, the application of automated algorithms followed by perusal and analysis of the results by an expert continues to be the predominant paradigm for analyzing biological data. This paradigm works well in many problem domains. However, it also is limiting, since domain experts are forced to apply their instincts and expertise such as contextual reasoning, hypothesis formulation, and exploratory analysis after the algorithm has produced its results. In many areas where the organization and interaction of the biological processes is poorly understood and exploratory analysis is crucial, what is needed is to integrate domain expertise during the data analysis process and use it to drive the analysis itself. In context of the aforementioned background, the results presented in this paper describe advancements along two methodological directions. First, given the context of biological data, we utilize and extend a design approach called experiential computing from multimedia information system design. This paradigm combines information visualization and human-computer interaction with algorithms for exploratory analysis of large-scale and complex data. In the proposed approach, emphasis is laid on: (1) allowing users to directly visualize, interact, experience, and explore the data through interoperable visualization-based and algorithmic components, (2) supporting unified query and presentation spaces to facilitate experimentation and exploration, (3) providing external contextual information by assimilating relevant supplementary data, and (4) encouraging user-directed information

  19. The Stability of Multi-modal Traffic Network

    Institute of Scientific and Technical Information of China (English)

    HAN Ling-Hui; Sun Hui-Jun; ZHU Cheng-Juan; WU Jian-Jun; JIA Bin

    2013-01-01

    There is an explicit and implicit assumption in multimodal traffic equilibrium models, namely that if the equilibrium exists then it will also occur. The assumption is very idealized; in fact, quite the contrary can happen, because in a multimodal traffic network, and especially in mixed traffic conditions, the interaction among traffic modes is asymmetric, and this asymmetric interaction may make the traffic system unstable. In this paper, to study the stability of a multimodal traffic system, we present the travel cost functions for mixed traffic conditions and for a traffic network with dedicated bus lanes. Based on a day-to-day dynamical model, we study the evolution of travelers' daily route choices in a multimodal traffic network using 10000 random initial values for different cases. From the simulation results, it can be concluded that the asymmetric interaction between cars and buses in mixed traffic conditions can drive the traffic system to instability when traffic demand is large. We also study the effect of travelers' perception error on the stability of the multimodal traffic network. Although a larger perception error can alleviate the effect of the interaction between cars and buses and improve the stability of the traffic system in mixed traffic conditions, the traffic system still becomes unstable when traffic demand exceeds a certain level. For all cases simulated in this study, with the same parameters, the traffic system with a dedicated bus lane has better stability with respect to traffic demand than the one in mixed traffic conditions. We also find that the network with a dedicated bus lane carries a higher share of travelers by bus than the mixed traffic network. It can therefore be concluded that building dedicated bus lanes can improve the stability of the traffic system and attract more travelers to buses, reducing traffic congestion.
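
    The flavour of such a day-to-day model can be conveyed with a toy simulation (this is not the paper's model; all cost coefficients, the adjustment step and the demand levels are hypothetical): cars slow buses more than buses slow cars, and at high demand the car/bus split keeps swinging instead of settling.

```python
# Toy sketch (not the paper's model): day-to-day logit adjustment of the
# car/bus split in mixed traffic, where cars slow buses more than buses
# slow cars. All coefficients are hypothetical.
import math

def simulate(demand, days=60, theta=2.0, step=0.3):
    share_car = 0.5
    history = []
    for _ in range(days):
        car, bus = demand * share_car, demand * (1 - share_car)
        cost_car = 10 + 0.03 * car                      # cars mainly delay cars
        cost_bus = 12 + 0.01 * car + 0.002 * bus        # asymmetric: cars delay buses
        p_car = 1 / (1 + math.exp(-theta * (cost_bus - cost_car)))
        share_car += step * (p_car - share_car)         # partial day-to-day adjustment
        history.append(share_car)
    return history

for demand in (200, 2000):
    tail = [round(s, 3) for s in simulate(demand)[-5:]]
    print(demand, tail)   # low demand settles; high demand keeps oscillating
```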

  20. Multimode Sociosemiotics and Foreign Language Listening and Speaking Teaching

    Institute of Scientific and Technical Information of China (English)

    周健敏

    2012-01-01

    The forms and structures of the communicative signs used in foreign language classroom teaching are undergoing fundamental change as computer-assisted language teaching moves towards multimedia, multimodal human-computer interaction. Listening and speaking instruction, which builds communicative language competence, plays an increasingly important role in foreign language teaching; the foreign language listening and speaking classroom is a micro-society in which teachers and students communicate by means of multimedia tools and a variety of modal signs, including spoken language. From the perspective of multimodal social interaction semiotics, foreign language listening and speaking teaching is moving towards a model that takes the multidimensional interactive listening and speaking classroom as its "one body" and auxiliary means such as multimodal online learning platforms as its "many wings".

  1. Vector-Resonance-Multimode Instability

    Science.gov (United States)

    Sergeyev, S. V.; Kbashi, H.; Tarasov, N.; Loiko, Yu.; Kolpakov, S. A.

    2017-01-01

    The modulation and multimode instabilities are the main mechanisms which drive spontaneous spatial and temporal pattern formation in a vast number of nonlinear systems ranging from biology to laser physics. Using an Er-doped fiber laser as a test bed, here for the first time we demonstrate both experimentally and theoretically a new type of a low-threshold vector-resonance-multimode instability which inherits features of multimode and modulation instabilities. The same as for the multimode instability, a large number of longitudinal modes can be excited without mode synchronization. To enable modulation instability, we modulate the state of polarization of the lasing signal with the period of the beat length by an adjustment of the in-cavity birefringence and the state of polarization of the pump wave. As a result, we show the regime's tunability from complex oscillatory to periodic with longitudinal mode synchronization in the case of resonance matching between the beat and cavity lengths. Apart from the interest in laser physics for unlocking the tunability and stability of dynamic regimes, the proposed mechanism of the vector-resonance-multimode instability can be of fundamental interest for the nonlinear dynamics of various distributed systems.

  2. A multimodal interaction system for navigation

    NARCIS (Netherlands)

    Hofs, Dennis; Akker, op den Rieks; Nijholt, Anton; Hondorp, Hendri; Kruijff-Korbayova, I.; Kosny, C.

    2003-01-01

    To help users find their way in a virtual theatre we developed a navigation agent. In natural language dialogue the agent assists users looking for the location of an object or room, and it shows routes between locations. The speech-based dialogue system allows users to ask questions such as “Where

  3. Multimode interaction in axially excited cylindrical shells

    OpenAIRE

    2014-01-01

    Cylindrical shells exhibit a dense frequency spectrum, especially near the lowest frequency range. In addition, due to the circumferential symmetry, frequencies occur in pairs. So, in the vicinity of the lowest natural frequencies, several equal or nearly equal frequencies may occur, leading to a complex dynamic behavior. So, the aim of the present work is to investigate the dynamic behavior and stability of cylindrical shells under axial forcing with multiple equal or nearly equal natural fr...

  4. The Multimodal Possibilities of Online Instructions

    DEFF Research Database (Denmark)

    Kampf, Constance

    2006-01-01

    The WWW simplifies the process of delivering online instructions through multimodal channels because of the ease of use for voice, video, pictures, and text modes of communication built into it.  Given that instructions are being produced in multimodal format for the WWW, how do multi-modal analy...

  5. Multimodal Friction Ignition Tester

    Science.gov (United States)

    Davis, Eddie; Howard, Bill; Herald, Stephen

    2009-01-01

    The multimodal friction ignition tester (MFIT) is a testbed for experiments on the thermal and mechanical effects of friction on material specimens in pressurized, oxygen-rich atmospheres. In simplest terms, a test involves recording sensory data while rubbing two specimens against each other at a controlled normal force, with either a random stroke or a sinusoidal stroke having controlled amplitude and frequency. The term multimodal in the full name of the apparatus refers to a capability for imposing any combination of widely ranging values of the atmospheric pressure, atmospheric oxygen content, stroke length, stroke frequency, and normal force. The MFIT was designed especially for studying the tendency toward heating and combustion of nonmetallic composite materials and the fretting of metals subjected to dynamic (vibrational) friction forces in the presence of liquid oxygen or pressurized gaseous oxygen test conditions approximating conditions expected to be encountered in proposed composite material oxygen tanks aboard aircraft and spacecraft in flight. The MFIT includes a stainless-steel pressure vessel capable of retaining the required test atmosphere. Mounted atop the vessel is a pneumatic cylinder containing a piston for exerting the specified normal force between the two specimens. Through a shaft seal, the piston shaft extends downward into the vessel. One of the specimens is mounted on a block, denoted the pressure block, at the lower end of the piston shaft. This specimen is pressed down against the other specimen, which is mounted in a recess in another block, denoted the slip block, that can be moved horizontally but not vertically. The slip block is driven in reciprocating horizontal motion by an electrodynamic vibration exciter outside the pressure vessel. The armature of the electrodynamic exciter is connected to the slip block via a horizontal shaft that extends into the pressure vessel via a second shaft seal. The reciprocating horizontal

  6. Multimodal pain management and arthrofibrosis.

    Science.gov (United States)

    Lavernia, Carlos; Cardona, Diego; Rossi, Mark D; Lee, David

    2008-09-01

    Pain control after arthroplasty has been a key concern for orthopedic surgeons. After total knee arthroplasty (TKA), a small group of patients developed a painful joint with suboptimal range of motion. Manipulation under anesthesia increases flexion and extension while decreasing pain in most cases. The objective of the present investigation is to assess the effect of a multimodal pain management protocol on arthrofibrosis in primary TKAs. A cohort of 1136 patients who underwent primary TKA was selected. Patients were divided into 2 groups: group A had 778 procedures performed using a traditional approach to pain control; group B included 358 procedures that received multimodal pain management. Group A had an incidence of manipulation of 4.75% (37/778). In group B, 8 of 357 patients required manipulation, an incidence of 2.24%. We recommend that orthopedic surgeons consider using a multimodal pain management protocol for TKA.

  7. Integration Model of Eye—Gaze,Voice and Manual Response in Multimodal User Interface

    Institute of Scientific and Technical Information of China (English)

    王坚

    1996-01-01

    This paper reports the utility of eye-gaze, voice and manual response in the design of a multimodal user interface. A device- and application-independent user interface model (VisualMan) of 3D object selection and manipulation was developed and validated in a prototype interface based on a 3D cube manipulation task. The multimodal inputs are integrated in the prototype interface based on the priority of modalities and the interaction context. The implications of the model for virtual reality interfaces are discussed, and a virtual environment using the multimodal user interface model is proposed.
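
    One way to picture priority- and context-based integration of this kind is the following sketch (it is not VisualMan itself; the priority ordering, fusion window and event names are invented): gaze supplies the target object, while the highest-priority command arriving within a short window supplies the action.

```python
# Illustrative sketch (not VisualMan): combine eye-gaze, voice and manual
# events by modality priority within the current interaction context.
from dataclasses import dataclass

@dataclass
class Event:
    modality: str      # "gaze", "voice" or "manual"
    payload: str
    time: float

PRIORITY = {"manual": 3, "voice": 2, "gaze": 1}   # hypothetical ordering
FUSION_WINDOW = 0.8                               # seconds

def integrate(events, context):
    """Pick the target from gaze and the action from the highest-priority
    command arriving within the fusion window."""
    recent = [e for e in events if events[-1].time - e.time <= FUSION_WINDOW]
    target = next((e.payload for e in reversed(recent) if e.modality == "gaze"), None)
    commands = [e for e in recent if e.modality in ("voice", "manual")]
    if not commands:
        return None
    best = max(commands, key=lambda e: (PRIORITY[e.modality], e.time))
    return {"context": context, "action": best.payload, "target": target}

events = [Event("gaze", "cube_3", 10.1),
          Event("voice", "rotate left", 10.4),
          Event("manual", "drag", 10.5)]
print(integrate(events, context="3d_manipulation"))
# -> manual "drag" wins on priority, applied to the gazed-at "cube_3"
```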

  8. Research on a Mobile Terminal Based on Virtual Imaging Technology and Voice Human-Computer Interaction

    Institute of Scientific and Technical Information of China (English)

    林峻; 陈锦彪

    2016-01-01

    Against the background of smart grid development, the tablet computers and PDAs used by front-line power-grid workers are inconvenient to carry outdoors, do not simplify the existing work, and require both hands for data entry. Combining this with the grid's need for electronic field operations, this work explores applying virtual imaging technology and voice human-computer interaction to a mobile terminal and describes the technical principles and implementation in detail. Built on the mature Mediatek platform and the Android operating system, the terminal is equipped with a high-resolution digital camera and a dual-microphone noise-reduction system, and supports WCDMA, GSM, GPS, Bluetooth and WiFi communication and positioning functions. All functions are integrated into a power-line safety helmet, forming a new platform for mobile field work. The system frees the workers' hands, enabling electronic office work during operations, data exchange within the power system, and remote visual guidance. Trials of the new work mode in distribution-network maintenance operations yielded an improved approach to operator training based on these technical results, and produced an efficient new way of working with good application prospects.

  9. Switchable lasing in multimode microcavities

    DEFF Research Database (Denmark)

    Zhukovsky, Sergei V.; Chigrin, Dmitry N.; Lavrinenko, Andrei

    2007-01-01

    We propose the new concept of a switchable multimode microlaser. As a generic, realistic model of a multimode microresonator a system of two coupled defects in a two-dimensional photonic crystal is considered. We demonstrate theoretically that lasing of the cavity into one selected resonator mode...... can be caused by injecting an appropriate optical pulse at the onset of laser action (injection seeding). Temporal mode-to-mode switching by reseeding the cavity after a short cooldown period is demonstrated by direct numerical solution. A qualitative analytical explanation of the mode switching...

  10. A multimodal behavioral approach to performance anxiety.

    Science.gov (United States)

    Lazarus, Arnold A; Abramovitz, Arnold

    2004-08-01

    Cognitive-behavior therapy (CBT) stresses a trimodal assessment framework (affect, behavior, and cognition [ABC]), whereas the multimodal approach assesses seven discrete but interactive components--behavior, affect, sensation, imagery, cognition, interpersonal relationships, and drugs/biological factors (BASIC I.D.). Only complex or recalcitrant cases call for the entire seven-pronged range of multimodal interventions. Various case illustrations are offered as examples of how a clinician might proceed when confronted with problems that fall under the general heading of performance anxiety. The main example is of a violinist in a symphony orchestra whose career was in serious jeopardy because of his extreme fear of performing in public. He responded very well to a focused but elaborate desensitization procedure. The hierarchy that was eventually constructed contained many dimensions and subhierarchies featuring interlocking elements that evoked his anxiety. In addition to imaginal systematic desensitization, sessions were devoted to his actual performance in the clinical setting. As a homework assignment, he found it helpful to listen to a long-playing record of an actual rehearsal and to play along with the world-renowned orchestra and conductor. The subsequent disclosure by the client of an important sexual problem was dealt with concomitantly by using a fairly conventional counseling procedure. Therapy required 20 sessions over a 3-month period.

  11. A Wizard-of-Oz Platform for Studying the Contribution of Embodied Conversational Agents to Interaction

    CERN Document Server

    Simonin, Jérôme; Carbonell, Noëlle

    2007-01-01

    In order to evaluate the contribution of Embodied (Animated) Conversational Agents (ECAs) to the effectiveness and usability of human-computer interaction, we developed a software platform meant to collect usage data. This platform, which implements the Wizard of Oz paradigm, makes it possible to simulate user interfaces integrating ECAs for any Windows software application. It can also save and "replay" a rich interaction trace including user and system events, screen captures, users' speech and eye fixations. This platform has been used to assess users' subjective judgements and reactions to a multimodal online help system meant to facilitate the use of software for the general public (Flash). The online help system is embodied using a 3D talking head (developed by FT R&D) which "says" the oral help messages, illustrated with Flash screen copies.

  12. Multimodal Aspects of Corporate Social Responsibility Communication

    Directory of Open Access Journals (Sweden)

    Carmen Daniela Maier

    2014-12-01

    Full Text Available This article addresses how the multimodal persuasive strategies of corporate social responsibility communication can highlight a company's commitment to gender empowerment and environmental protection while simultaneously advertising its products. Drawing on an interdisciplinary methodological framework related to CSR communication, multimodal discourse analysis and gender theory, the article proposes a multimodal analysis model through which it is possible to map and explain the multimodal persuasive strategies employed by the Coca-Cola company in their community-related films. By examining the semiotic modes' interconnectivity and functional differentiation, this analytical endeavour expands the existing research work as the usual textual focus is extended to a multimodal one.

  13. Multi-modal applicability of a reversed-phase/weak-anion exchange material in reversed-phase, anion-exchange, ion-exclusion, hydrophilic interaction and hydrophobic interaction chromatography modes.

    Science.gov (United States)

    Lämmerhofer, Michael; Nogueira, Raquel; Lindner, Wolfgang

    2011-06-01

    We recently introduced a mixed-mode reversed-phase/weak anion-exchange type separation material based on silica particles which consisted of a hydrophobic alkyl strand with polar embedded groups (thioether and amide functionalities) and a terminal weak anion-exchange-type quinuclidine moiety. This stationary phase was designed to separate molecules by lipophilicity and charge differences and was mainly devised for peptide separations with hydroorganic reversed-phase type elution conditions. Herein, we demonstrate the extraordinary flexibility of this RP/WAX phase, in particular for peptide separations, by illustrating its applicability in various chromatographic modes. The column packed with this material can, depending on the solute character and employed elution conditions, exploit attractive or repulsive electrostatic interactions, and/or hydrophobic or hydrophilic interactions as retention and selectivity increments. As a consequence, the column can be operated in a reversed-phase mode (neutral compounds), anion-exchange mode (acidic compounds), ion-exclusion chromatography mode (cationic solutes), hydrophilic interaction chromatography mode (polar compounds), and hydrophobic interaction chromatography mode (e.g., hydrophobic peptides). Mixed-modes of these chromatographic retention principles may be materialized as well. This allows an exceptionally flexible adjustment of retention and selectivity by tuning experimental conditions. The distinct separation mechanisms will be outlined by selected examples of peptide separations in the different modes.

  14. Multimodal Analysis on Teacher Talk—A Case Study of an Excellent Grammar Lesson

    Institute of Scientific and Technical Information of China (English)

    张丽君

    2016-01-01

    With the development of multimedia and computer technology, discourse is becoming multi-semiotic, a more complex pattern containing pictures, flash animations, font colour, tables, layout, etc. Based on a multimodal analysis of an excellent lesson on English grammar, this paper explores how meaning construction and interaction are realized multimodally in class and how classroom discourse can be made enlightening, vivid and coherent, thus realizing the experiential, interpersonal and textual functions of classroom discourse.

  15. A Learning Algorithm for Multimodal Grammar Inference.

    Science.gov (United States)

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

    The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions in automating grammar generation and in updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from its positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metrics in improving the grammar description and in avoiding the over-generalization problem. The experimental results highlight the acceptable performances of the algorithm proposed in this paper since it has a very high probability of parsing valid sentences.

  16. Psychosocial and Cultural Modeling in Human Computation Systems: A Gamification Approach

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Riensche, Roderick M.; Haack, Jereme N.; Butner, R. Scott

    2013-11-20

    “Gamification”, the application of gameplay to real-world problems, enables the development of human computation systems that support decision-making through the integration of social and machine intelligence. One of gamification’s major benefits includes the creation of a problem solving environment where the influence of cognitive and cultural biases on human judgment can be curtailed through collaborative and competitive reasoning. By reducing biases on human judgment, gamification allows human computation systems to exploit human creativity relatively unhindered by human error. Operationally, gamification uses simulation to harvest human behavioral data that provide valuable insights for the solution of real-world problems.

  17. Integrated processing in multimodal argumentation

    NARCIS (Netherlands)

    Hoven, P.J. van den; Jiang, W.

    2011-01-01

    The question addressed in this paper is simple. If the argumentative function of a multimodal narrative text requires the integration of the information from different modes, among which verbal ones, what model for the order of processing and the integration of information do we need to adopt? Using

  18. Multimodal approach to postoperative recovery

    DEFF Research Database (Denmark)

    Kehlet, Henrik

    2009-01-01

    PURPOSE OF REVIEW: To provide updated information on recent developments within individual components of multimodal interventions to improve postoperative outcome (fast-track methodology). RECENT FINDINGS: The value of the fast-track methodology to improve recovery and decrease hospital stay...

  19. Multimodal imaging of ischemic wounds

    Science.gov (United States)

    Zhang, Shiwu; Gnyawali, Surya; Huang, Jiwei; Liu, Peng; Gordillo, Gayle; Sen, Chandan K.; Xu, Ronald

    2012-12-01

    The wound healing process involves the reparative phases of inflammation, proliferation, and remodeling. Interrupting any of these phases may result in chronically unhealed wounds, amputation, or even patient death. Quantitative assessment of wound tissue ischemia, perfusion, and inflammation provides critical information for appropriate detection, staging, and treatment of chronic wounds. However, no method is available for noninvasive, simultaneous, and quantitative imaging of these tissue parameters. We integrated hyperspectral, laser speckle, and thermographic imaging modalities into a single setup for multimodal assessment of tissue oxygenation, perfusion, and inflammation characteristics. Advanced algorithms were developed for accurate reconstruction of wound oxygenation and appropriate co-registration between the different imaging modalities. The multimodal wound imaging system was validated in an ongoing clinical trial approved by the OSU IRB. In the clinical trial, a wound of 3 mm in diameter was introduced on a healthy subject's lower extremity and the healing process was serially monitored by the multimodal imaging setup. Our experiments demonstrate the clinical usability of multimodal wound imaging.

  20. Multi-Modality Phantom Development

    Energy Technology Data Exchange (ETDEWEB)

    Huber, Jennifer S.; Peng, Qiyu; Moses, William W.

    2009-03-20

    Multi-modality imaging has an increasing role in the diagnosis and treatment of a large number of diseases, particularly if both functional and anatomical information are acquired and accurately co-registered. Hence, there is a resulting need for multi-modality phantoms in order to validate image co-registration and calibrate the imaging systems. We present our PET-ultrasound phantom development, including PET and ultrasound images of a simple prostate phantom. We use agar and gelatin mixed with a radioactive solution. We also present our development of custom multi-modality phantoms that are compatible with PET, transrectal ultrasound (TRUS), MRI and CT imaging. We describe both our selection of tissue-mimicking materials and our phantom construction procedures. These custom PET-TRUS-CT-MRI prostate phantoms use agar-gelatin radioactive mixtures with additional contrast agents and preservatives. We show multi-modality images of these custom prostate phantoms, as well as discuss phantom construction alternatives. Although we are currently focused on prostate imaging, this phantom development is applicable to many multi-modality imaging applications.

  1. Risk Issues in Developing Novel User Interfaces for Human-Computer Interaction

    KAUST Repository

    Klinker, Gudrun

    2014-01-01

    © 2014 Springer International Publishing Switzerland. All rights are reserved. When new user interfaces or information visualization schemes are developed for complex information processing systems, it is not readily clear how much they do, in fact, support and improve users' understanding and use of such systems. Is a new interface better than an older one? In what respect, and in which situations? To provide answers to such questions, user testing schemes are employed. This chapter reports on a range of risks pertaining to the design and implementation of user interfaces in general, and to newly emerging interfaces (3-dimensional, immersive, mobile) in particular.

  2. Human-Computer Interaction and Operators' Performance Optimizing Work Design with Activity Theory

    CERN Document Server

    Bedny, Gregory Z

    2010-01-01

    Directed to a broad and interdisciplinary audience, this book provides a complete account of what has been accomplished in applied and systemic-structural activity theory. It presents a new approach to applied psychology and the study of human work that has derived from activity theory. The selected articles demonstrate the basic principles of studying human work and particularly computer-based work in complex sociotechnical systems. The book includes examples of applied and systemic-structural activity theory to HCI and man-machine-systems, aviation, safety, design and optimization of human p

  3. Understanding Usefulness in Human-Computer Interaction to Enhance User Experience Evaluation

    Science.gov (United States)

    MacDonald, Craig Matthew

    2012-01-01

    The concept of usefulness has implicitly played a pivotal role in evaluation research, but the meaning of usefulness has changed over time from system reliability to user performance and learnability/ease of use for non-experts. Despite massive technical and social changes, usability remains the "gold standard" for system evaluation.…

  4. Trends in Human-Computer Interaction to Support Future Intelligence Analysis Capabilities

    Science.gov (United States)

    2011-06-01

    strategies including (DARPA, 2011a): intelligent interruption to improve limited working memory; attention management to improve focus during ... complex tasks; cued memory retrieval to improve situational awareness and context recovery; and modality switching (i.e., audio, visual) to increase ... Examples of related technologies include vein-pattern palm reading by Fujitsu and consumer brain-computer interfaces such as the NeuroSky MindSet.

  5. Guidelines for the use of vibro-tactile displays in human computer interaction

    NARCIS (Netherlands)

    Erp, J.B.F. van

    2002-01-01

    Vibro-tactile displays convey messages by presenting vibration to the user's skin. In recent years, the interest in and application of vibro-tactile displays is growing. Vibratory displays are introduced in mobile devices, desktop applications and even in aircraft [1]. Despite the growing interest,

  6. Proceedings of the 5th Danish Human-Computer Interaction Research Symposium

    DEFF Research Database (Denmark)

    Clemmensen, Torkil; Nielsen, Lene

    2005-01-01

    ORGANISATIONS Olav W. Bertelsen & Pär-Ola Zander PROCESS MANAGEMENT TOOLS IN HIGHER EDUCATION E-LEARNING - A NEW RESEARCH AREA Karin Tweddell Levinsen FROM HANDICRAFT SCHOOL TO DESIGN UNIVERSITY Eva Brandt THE USE PROJECT: BRIDGING THE GAP BETWEEN USABILITY EVALUATION AND SOFTWARE DEVELOPMENT Als, B., Frøkjær, E...

  7. Human Computer Interaction (HCI) and Internet Residency: Implications for Both Personal Life and Teaching/Learning

    Science.gov (United States)

    Crearie, Linda

    2016-01-01

    Technological advances over the last decade have had a significant impact on the teaching and learning experiences students encounter today. We now take technologies such as Web 2.0, mobile devices, cloud computing, podcasts, social networking, super-fast broadband, and connectedness for granted. So what about the student use of these types of…

  8. Human-Computer Interaction Based on Hand Gestures Using RGB-D Sensors

    Directory of Open Access Journals (Sweden)

    Sergio Llorente

    2013-09-01

    Full Text Available In this paper we present a new method for hand gesture recognition based on an RGB-D sensor. The proposed approach takes advantage of depth information to cope with the most common problems of traditional video-based hand segmentation methods: cluttered backgrounds and occlusions. The algorithm also uses colour and semantic information to accurately identify any number of hands present in the image. Ten different static hand gestures are recognised, including all different combinations of spread fingers. Additionally, movements of an open hand are followed and 6 dynamic gestures are identified. The main advantage of our approach is the freedom of the user’s hands to be at any position of the image without the need of wearing any specific clothing or additional devices. Besides, the whole method can be executed without any initial training or calibration. Experiments carried out with different users and in different environments prove the accuracy and robustness of the method which, additionally, can be run in real-time.
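
    A bare-bones sketch of depth-based hand analysis in the same spirit (this is not the paper's pipeline; the depth range, angle threshold and the convexity-defect recipe below are a common textbook approach rather than the method described): threshold the depth map to isolate the hand, take the largest contour, and count spread fingers from convexity defects.

```python
# Illustrative sketch (not the paper's pipeline): segment the hand from a
# depth frame by thresholding, then count spread fingers from convexity
# defects. Depth range and angle threshold are hypothetical values.
import cv2
import numpy as np

def count_fingers(depth_mm, near=400, far=800, angle_deg=90):
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)     # assume largest blob is the hand
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    fingers = 0
    for s, e, f, _ in defects[:, 0]:
        a, b, c = hand[s][0], hand[e][0], hand[f][0]
        v1, v2 = a - c, b - c                     # vectors from the valley to hull points
        cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-6)
        if np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) < angle_deg:
            fingers += 1                          # narrow valley lies between two fingers
    return fingers + 1 if fingers else 0

# Usage (depth_frame would come from an RGB-D sensor, values in millimetres):
# print(count_fingers(depth_frame))
```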

  9. Knowledge Management of Web Financial Reporting in Human-Computer Interactive Perspective

    Science.gov (United States)

    Wang, Dong; Chen, Yujing; Xu, Jing

    2017-01-01

    Handling and analyzing to web financial data is becoming a challenge issue in knowledge management and education to accounting practitioners. eXtensible Business Reporting Language (XBRL), which is a type of web financial reporting, describes and recognizes financial items by tagging metadata. The goal is to make it possible for financial reports…

  10. An Innovative Solution Based on Human-Computer Interaction to Support Cognitive Rehabilitation

    Directory of Open Access Journals (Sweden)

    José M. Cogollor

    2014-10-01

    Full Text Available This contribution focuses on describing the design and implementation of an innovative system to provide cognitive rehabilitation. The people who will take advantage of this platform suffer from a post-stroke condition called Apraxia and Action Disorganisation Syndrome (AADS). The platform has been integrated at Universidad Politécnica de Madrid and aims to reduce the stay in hospital or rehabilitation centres by supporting self-rehabilitation at home. The system acts as an intelligent machine which guides patients while they execute Activities of Daily Living (ADL), such as preparing a simple cup of tea, by informing them about the errors committed and the possible actions to correct them. A short introduction to other works related to stroke, the patients to work with, how the system works and how it is implemented are provided in the document. Finally, some relevant information from experiments made with healthy people for technical validation is also shown.

  11. Using Tablet PCs in Classroom for Teaching Human-Computer Interaction: An Experience in High Education

    Science.gov (United States)

    da Silva, André Constantino; Marques, Daniela; de Oliveira, Rodolfo Francisco; Noda, Edgar

    2014-01-01

    The use of computers in the teaching and learning process has been investigated by many researchers and, nowadays, due to the available diversity of computing devices, tablets have become popular in the classroom too. So what are the advantages and disadvantages of using tablets in the classroom? How can we shape teaching and learning activities to get the best of…

  12. Prevention or Identification of Web Intrusion via Human Computer Interaction Behaviour - A Proposal

    Science.gov (United States)

    2004-10-25

    Ana Fred and António Alves Vieira; Escola Superior de Tecnologia de Setúbal, Campus do IPS, Estefanilha, Setúbal, Portugal.

  13. Robot Arm Control and Having Meal Aid System with Eye Based Human-Computer Interaction (HCI)

    Science.gov (United States)

    Arai, Kohei; Mardiyanto, Ronny

    A robot arm control and meal-assistance system with eye-based HCI is proposed. The proposed system allows a disabled person to select the desired food from the meal tray using only their eyes. The robot arm used for retrieving the desired food is controlled by the human eye. A tiny camera is mounted at the tip of the robot arm. The disabled person wears glasses on which a single head-mounted display (HMD) and a tiny camera are mounted, so that the person can look at the desired food and retrieve it by looking at the food displayed on the HMD. Experimental results show that a disabled person can retrieve the desired food successfully. It is also confirmed that robot arm control by eye-based HCI is much faster than control by hand.
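
    The selection step can be imagined as a dwell-time check over the gazed-at regions of the HMD image. The sketch below is hypothetical (it is not the paper's implementation; the regions, sampling rate and dwell threshold are invented):

```python
# Illustrative sketch (not the paper's implementation): dwell-time selection
# of a food item from gaze coordinates on the HMD image.
FOOD_REGIONS = {"rice": (0, 0, 160, 240), "soup": (160, 0, 320, 240)}
DWELL_SECONDS = 1.5

def region_at(x, y):
    for name, (x0, y0, x1, y1) in FOOD_REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

def select_by_dwell(gaze_samples, rate_hz=30):
    """gaze_samples: iterable of (x, y); returns the item fixated long enough."""
    needed = int(DWELL_SECONDS * rate_hz)
    current, run = None, 0
    for x, y in gaze_samples:
        item = region_at(x, y)
        run = run + 1 if item == current and item else 1
        current = item
        if item and run >= needed:
            return item          # here the robot arm would be sent to the item
    return None

samples = [(40, 120)] * 50       # ~1.7 s of steady gaze on "rice" at 30 Hz
print(select_by_dwell(samples))  # -> "rice"
```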

  14. Preface (to: Brain-Computer Interfaces. Applying our Minds to Human-Computer Interaction)

    NARCIS (Netherlands)

    Tan, Desney; Tan, Desney S.; Nijholt, Antinus

    2010-01-01

    The advances in cognitive neuroscience and brain imaging technologies provide us with the increasing ability to interface directly with activity in the brain. Researchers have begun to use these technologies to build brain-computer interfaces. Originally, these interfaces were meant to allow

  15. The Changing Face of Human-Computer Interaction in the Age of Ubiquitous Computing

    Science.gov (United States)

    Rogers, Yvonne

    HCI is reinventing itself. No longer only about being user-centered, it has set its sights on pastures new, embracing a much broader and far-reaching set of interests. From emotional, eco-friendly, embodied experiences to context, constructivism and culture, HCI research is changing apace: from what it looks at, the lenses it uses and what it has to offer. Part of this is as a reaction to what is happening in the world; ubiquitous technologies are proliferating and transforming how we live our lives. We are becoming more connected and more dependent on technology. The home, the crèche, outdoors, public places and even the human body are now being experimented with as potential places to embed computational devices, even to the extent of invading previously private and taboo aspects of our lives. In this paper, I examine the diversity of lifestyle and technological transformations in our midst and outline some 'difficult' questions these raise together with alternative directions for HCI research and practice.

  16. Incorporating a Human-Computer Interaction Course into Software Development Curriculums

    Science.gov (United States)

    Janicki, Thomas N.; Cummings, Jeffrey; Healy, R. Joseph

    2015-01-01

    Individuals have increasing options on retrieving information related to hardware and software. Specific hardware devices include desktops, tablets and smart devices. Also, the number of software applications has significantly increased the user's capability to access data. Software applications include the traditional web site, smart device…

  17. Non-Speech Sound in Human-Computer Interaction: A Review and Design Guidelines.

    Science.gov (United States)

    Hereford, James; Winn, William

    1994-01-01

    Reviews research on uses of computer sound and suggests how sound might be used effectively by instructional and interface designers. Topics include principles of interface design; the perception of sound; earcons, both symbolic and iconic; sound in data analysis; sound in virtual environments; and guidelines for using sound. (70 references) (LRW)

  18. Undergraduate Use of CD-ROM Databases: Observations of Human-Computer Interaction and Relevance Judgments.

    Science.gov (United States)

    Shaw, Debora

    1996-01-01

    Describes a study that observed undergraduates as they searched bibliographic databases on a CD-ROM local area network. Topics include related research, information needs, evolution of search topics, database selection, search strategies, relevance judgments, CD-ROM interfaces, and library instruction. (Author/LRW)

  19. Modeling Goal-Directed User Exploration in Human-Computer Interaction

    Science.gov (United States)

    2011-02-01

    is implemented as a LISP program outside the confines of a cognitive architecture. The normalization assumption is implemented by simply normalizing...invoke a LISP function to compute the infoscent of the link with respect to the goal. The LISP function will then update the utilities of the three...competing productions (see Section 4.2.1.1) based on the link’s infoscent. This LISP function is an example of a black-box implementation of the

  20. Connecting multimodality in human communication.

    Science.gov (United States)

    Regenbogen, Christina; Habel, Ute; Kellermann, Thilo

    2013-01-01

    A successful reciprocal evaluation of social signals serves as a prerequisite for social coherence and empathy. In a previous fMRI study we studied naturalistic communication situations by presenting video clips to our participants and recording their behavioral responses regarding empathy and its components. In two conditions, all three channels transported congruent emotional or neutral information, respectively. Three conditions selectively presented two emotional channels and one neutral channel and were thus bimodally emotional. We reported channel-specific emotional contributions in modality-related areas, elicited by dynamic video clips with varying combinations of emotionality in facial expressions, prosody, and speech content. However, to better understand the underlying mechanisms accompanying a naturalistically displayed human social interaction in some key regions that presumably serve as specific processing hubs for facial expressions, prosody, and speech content, we pursued a reanalysis of the data. Here, we focused on two different descriptions of temporal characteristics within these three modality-related regions [right fusiform gyrus (FFG), left auditory cortex (AC), left angular gyrus (AG) and left dorsomedial prefrontal cortex (dmPFC)]. By means of a finite impulse response (FIR) analysis within each of the three regions we examined the post-stimulus time-courses as a description of the temporal characteristics of the BOLD response during the video clips. Second, effective connectivity between these areas and the left dmPFC was analyzed using dynamic causal modeling (DCM) in order to describe condition-related modulatory influences on the coupling between these regions. The FIR analysis showed initially diminished activation in bimodally emotional conditions but stronger activation than that observed in neutral videos toward the end of the stimuli, possibly by bottom-up processes in order to compensate for a lack of emotional information. The

  1. Testing Two Tools for Multimodal Navigation

    Directory of Open Access Journals (Sweden)

    Mats Liljedahl

    2012-01-01

    Full Text Available The latest smartphones with GPS, electronic compasses, directional audio, touch screens, and so forth, hold a potential for location-based services that are easier to use and that let users focus on their activities and the environment around them. Rather than interpreting maps, users can search for information by pointing in a direction and database queries can be created from GPS location and compass data. Users can also get guidance to locations through point and sweep gestures, spatial sound, and simple graphics. This paper describes two studies testing two applications with multimodal user interfaces for navigation and information retrieval. The applications allow users to search for information and get navigation support using combinations of point and sweep gestures, nonspeech audio, graphics, and text. Tests show that users appreciated both applications for their ease of use and for allowing users to interact directly with the surrounding environment.

  2. Multimode-singlemode-multimode fiber sensor for alcohol sensing application

    Science.gov (United States)

    Rofi'ah, Iftihatur; Hatta, A. M.; Sekartedjo, Sekartedjo

    2016-11-01

    Alcohol is a volatile, flammable liquid, soluble in both polar and non-polar substances, that is used in several industrial sectors. Among the alcohol detection methods now in wide use is the optical fiber sensor. In this paper, a fiber-optic sensor based on a multimode-singlemode-multimode (MSM) structure is used to detect alcohol solutions in the concentration range 0-3%. The sensor's working principle exploits modal interference between the core modes and the cladding modes, which makes the sensor sensitive to environmental changes. The results show that the sensor's characteristics are not affected by the length of the singlemode fiber (SMF), and that a sensor with a 5 mm singlemode section can sense alcohol with a sensitivity of 0.107 dB/vol%.
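
    Assuming the reported 0.107 dB/vol% response is approximately linear over the 0-3% range, and assuming that transmitted power drops with increasing concentration (a sign convention not stated in the abstract), converting a measured power change into a concentration reduces to a one-line calibration:

```python
# Linear calibration sketch for the MSM fiber sensor, assuming the reported
# sensitivity of 0.107 dB per vol% holds approximately linearly over 0-3 vol%
# and that transmission drops as concentration rises (an assumption).
SENSITIVITY_DB_PER_VOLPCT = 0.107

def alcohol_concentration(power_ref_dbm: float, power_meas_dbm: float) -> float:
    """Estimate alcohol concentration (vol%) from the drop in transmitted power."""
    delta_db = power_ref_dbm - power_meas_dbm   # attenuation relative to a pure-water reference
    return delta_db / SENSITIVITY_DB_PER_VOLPCT

# Example: a 0.21 dB drop corresponds to roughly 2 vol% alcohol.
print(round(alcohol_concentration(-10.00, -10.21), 2))
```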

  3. Human Computation

    CERN Document Server

    CERN. Geneva

    2008-01-01

    What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...

  4. Glove-Enabled Computer Operations (GECO): Design and Testing of an Extravehicular Activity Glove Adapted for Human-Computer Interface

    Science.gov (United States)

    Adams, Richard J.; Olowin, Aaron; Krepkovich, Eileen; Hannaford, Blake; Lindsay, Jack I. C.; Homer, Peter; Patrie, James T.; Sands, O. Scott

    2013-01-01

    The Glove-Enabled Computer Operations (GECO) system enables an extravehicular activity (EVA) glove to be dual-purposed as a human-computer interface device. This paper describes the design and human participant testing of a right-handed GECO glove in a pressurized glove box. As part of an investigation into the usability of the GECO system for EVA data entry, twenty participants were asked to complete activities including (1) a Simon Says game in which they attempted to duplicate random sequences of targeted finger strikes and (2) a text entry activity in which they used the GECO glove to enter target phrases in two different virtual keyboard modes. In a within-subjects design, both activities were performed both with and without vibrotactile feedback. Participants' mean accuracies in correctly generating finger strikes with the pressurized glove were surprisingly high, both with and without the benefit of tactile feedback. Five of the subjects achieved mean accuracies exceeding 99% in both conditions. In text entry, tactile feedback provided a statistically significant performance benefit, quantified by characters entered per minute, as well as a reduction in error rate. Secondary analyses of responses to NASA Task Load Index (TLX) subjective workload assessments reveal a benefit of tactile feedback in GECO glove use for data entry. This first-ever investigation of the use of a pressurized EVA glove as a human-computer interface opens up a wide range of future applications, including text chat communications, manipulation of procedures/checklists, cataloguing/annotating images, scientific note taking, human-robot interaction, and control of suit and/or other EVA systems.

  5. Inorganic Nanoparticles for Multimodal Molecular Imaging

    Directory of Open Access Journals (Sweden)

    Magdalena Swierczewska

    2011-01-01

    Full Text Available Multimodal molecular imaging can offer a synergistic improvement of diagnostic ability over a single imaging modality. Recent development of hybrid imaging systems has profoundly impacted the pool of available multimodal imaging probes. In particular, much interest has been focused on biocompatible, inorganic nanoparticle-based multimodal probes. Inorganic nanoparticles offer exceptional advantages to the field of multimodal imaging owing to their unique characteristics, such as nanometer dimensions, tunable imaging properties, and multifunctionality. Nanoparticles mainly based on iron oxide, quantum dots, gold, and silica have been applied to various imaging modalities to characterize and image specific biologic processes on a molecular level. A combination of nanoparticles and other materials such as biomolecules, polymers, and radiometals continue to increase functionality for in vivo multimodal imaging and therapeutic agents. In this review, we discuss the unique concepts, characteristics, and applications of the various multimodal imaging probes based on inorganic nanoparticles.

  6. Multimodal Biometrics using Feature Fusion

    Directory of Open Access Journals (Sweden)

    K. Krishneswari

    2012-01-01

    Full Text Available Problem statement: Biometrics is a unique, measurable physiological or behavioural characteristic of a person and finds extensive applications in authentication and authorization. Fingerprint, palm print, iris, and voice are some of the most widely used biometrics for personal identification. To reduce error rates and enhance the usability of biometric systems, multimodal biometric systems are used, in which more than one biometric characteristic is employed. Approach: In this study we investigate the performance of multimodal biometrics using palm print and fingerprint. Features are extracted using the Discrete Cosine Transform (DCT) and attributes are selected using Information Gain (IG). Results and Conclusion: The proposed technique shows an average improvement of 8.52% compared to using the palmprint technique alone. The processing time for verification does not increase compared to palm print techniques.
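
    A hedged sketch of this kind of feature-level fusion is shown below, using a 2-D DCT for feature extraction, mutual information as a stand-in for information gain, and a linear SVM as the matcher. Array shapes and parameter values are illustrative, not those of the paper.

```python
# Feature-level fusion sketch: 2-D DCT features from palmprint and fingerprint
# images, feature selection by mutual information (a stand-in for information
# gain), and an SVM matcher. Block size and k are illustrative choices.
import numpy as np
from scipy.fft import dctn
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def dct_features(img, keep=16):
    """Low-frequency block of the 2-D DCT, flattened into a feature vector."""
    coeffs = dctn(img, norm="ortho")
    return coeffs[:keep, :keep].ravel()

def fuse(palm_img, finger_img):
    """Concatenate the per-modality DCT features (feature-level fusion)."""
    return np.concatenate([dct_features(palm_img), dct_features(finger_img)])

def train_matcher(X, y, n_features=200):
    """X: fused feature vectors for enrolled samples, y: subject identities."""
    model = make_pipeline(SelectKBest(mutual_info_classif, k=n_features),
                          SVC(kernel="linear"))
    return model.fit(X, y)
```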

  7. Continuous verification using multimodal biometrics.

    Science.gov (United States)

    Sim, Terence; Zhang, Sheng; Janakiraman, Rajkumar; Kumar, Sandeep

    2007-04-01

    Conventional verification systems, such as those controlling access to a secure room, do not usually require the user to reauthenticate himself for continued access to the protected resource. This may not be sufficient for high-security environments in which the protected resource needs to be continuously monitored for unauthorized use. In such cases, continuous verification is needed. In this paper, we present the theory, architecture, implementation, and performance of a multimodal biometrics verification system that continuously verifies the presence of a logged-in user. Two modalities are currently used--face and fingerprint--but our theory can be readily extended to include more modalities. We show that continuous verification imposes additional requirements on multimodal fusion when compared to conventional verification systems. We also argue that the usual performance metrics of false accept and false reject rates are insufficient yardsticks for continuous verification and propose new metrics against which we benchmark our system.
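
    The continuous aspect can be illustrated with a simple time-decaying fusion rule: each new face or fingerprint match score refreshes a trust value that otherwise decays, so the absence of fresh evidence eventually locks the session. This is only a sketch of the idea; the decay constant, weights, and threshold are illustrative and not the fusion model of the paper.

```python
# Sketch of continuous verification: each biometric observation updates a
# fused trust score that decays over time, so a user who walks away is
# eventually logged out. Half-life, weights, and threshold are illustrative.
import time

class ContinuousVerifier:
    def __init__(self, half_life_s=30.0, lockout_threshold=0.4):
        self.half_life = half_life_s
        self.threshold = lockout_threshold
        self.trust = 1.0
        self.last_update = time.monotonic()

    def _decay(self):
        dt = time.monotonic() - self.last_update
        self.trust *= 0.5 ** (dt / self.half_life)   # exponential decay of old evidence
        self.last_update = time.monotonic()

    def observe(self, match_score, weight):
        """Fold in a new match score in [0, 1]; weight reflects modality reliability."""
        self._decay()
        self.trust = min(1.0, (1 - weight) * self.trust + weight * match_score)

    def still_authorized(self):
        self._decay()
        return self.trust >= self.threshold
```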

  8. Development and validation of the Multimodal Presence Scale for virtual reality environments: A confirmatory factor analysis and item response theory approach

    DEFF Research Database (Denmark)

    Makransky, Guido; Jensen, Lau Lilleholt; Aaby, Anders

    2017-01-01

    Presence is one of the most important psychological constructs for understanding human-computer interaction. However, different terminology and operationalizations of presence across fields have plagued the comparability and generalizability of results across studies. Lee's (2004) unified...

  9. The bicriterion multimodal assignment problem

    DEFF Research Database (Denmark)

    Pedersen, Christian Roed; Nielsen, Lars Relund; Andersen, Kim Allan

    2008-01-01

    We consider the bicriterion multimodal assignment problem, which is a new generalization of the classical linear assignment problem. A two-phase solution method using an effective ranking scheme is presented. The algorithm is valid for generating all nondominated criterion points or an approximation. Extensive computational results are conducted on a large library of test instances to test the performance of the algorithm and to identify hard test instances. Also, test results of the algorithm applied to the bicriterion assignment problem are provided.
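
    For intuition, the supported nondominated points of a small bicriterion assignment instance can be approximated by sweeping a weighted-sum scalarization over the classical assignment solver, as in the sketch below. This is not the paper's two-phase ranking method and only recovers supported points; the cost matrices are illustrative.

```python
# Weighted-sum scalarization over the classical linear assignment solver,
# collecting distinct (cost1, cost2) outcomes as an approximation of the
# supported nondominated frontier. Illustrative only.
import numpy as np
from scipy.optimize import linear_sum_assignment

def supported_points(c1, c2, n_weights=21):
    """Sweep weights, solve each scalarized assignment, collect distinct outcomes."""
    points = set()
    for lam in np.linspace(0.0, 1.0, n_weights):
        cost = lam * c1 + (1.0 - lam) * c2
        rows, cols = linear_sum_assignment(cost)
        points.add((int(c1[rows, cols].sum()), int(c2[rows, cols].sum())))
    return sorted(points)

rng = np.random.default_rng(0)
c1, c2 = rng.integers(1, 20, (6, 6)), rng.integers(1, 20, (6, 6))
print(supported_points(c1, c2))
```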

  10. The Theme in the Multimodality

    Institute of Scientific and Technical Information of China (English)

    Liu; Xiaolin

    2014-01-01

    1. Systemic Functional Linguistics (SFL) and Multimodality. The theoretical foundation for this analysis is mainly extrapolated from the approach to language as a social semiotic process (Halliday 1978, 2004). SFL deals with the way texts are articulated to be appropriate for particular situations of use. Halliday develops a Systemic Functional approach in relation to verbal language and offers a set of grammatical systems which realize the three metafunctions of language. In them, the clause…

  11. Experiencia de enseñanza multimodal en una clase de idiomas [Experience of multimodal teaching in a language classroom]

    Directory of Open Access Journals (Sweden)

    María Martínez Lirola

    2013-12-01

    Full Text Available Our society is becoming more technological and multimodal and, consequently, teaching has to be adapted to the new times. This article analyses the way in which the subject English Language IV of the degree in English Studies at the University of Alicante combines the development of the five skills (listening, speaking, reading, writing and interacting), evaluated through a portfolio, with multimodality in the teaching practices and in each of the activities that make up the portfolio. The results of a survey administered at the end of the 2011-2012 academic year point out the main competences that university students develop thanks to multimodal teaching and the importance of tutorials in this kind of teaching.

  12. Multi-Mode Broadband Patch Antenna

    Science.gov (United States)

    Romanofsky, Robert R. (Inventor)

    2001-01-01

    A multi-mode broadband patch antenna is provided that allows the same aperture to be used at independent frequencies, such as reception at 19 GHz and transmission at 29 GHz. Furthermore, the multi-mode broadband patch antenna incorporates a ferroelectric film that allows tuning of the antenna over a relatively large tuning range. The alternative use of a semiconductor substrate permits reduced control voltages since the semiconductor functions as a counter electrode.

  13. Multimodal CT in stroke imaging: new concepts.

    Science.gov (United States)

    Ledezma, Carlos J; Wintermark, Max

    2009-01-01

    A multimodal CT protocol provides a comprehensive noninvasive survey of acute stroke patients with accurate demonstration of the site of arterial occlusion and its hemodynamic tissue status. It combines widespread availability with the ability to provide functional characterization of cerebral ischemia, and could potentially allow more accurate selection of candidates for acute stroke reperfusion therapy. This article discusses the individual components of multimodal CT and addresses the potential role of a combined multimodal CT stroke protocol in acute stroke therapy.

  14. Multimodal Diversity of Postmodernist Fiction Text

    Directory of Open Access Journals (Sweden)

    U. I. Tykha

    2016-12-01

    Full Text Available The article is devoted to the analysis of structural and functional manifestations of multimodal diversity in postmodernist fiction texts. Multimodality is defined as the coexistence of more than one semiotic mode within a certain context. Multimodal texts feature a diversity of semiotic modes in the communication and development of their narrative. Such experimental texts subvert conventional patterns by introducing various semiotic resources – verbal or non-verbal.

  15. Advances in multimodality molecular imaging

    Directory of Open Access Journals (Sweden)

    Zaidi Habib

    2009-01-01

    Full Text Available Multimodality molecular imaging using high-resolution positron emission tomography (PET) combined with other modalities is now playing a pivotal role in basic and clinical research. The introduction of combined PET/CT systems in the clinical setting has revolutionized the practice of diagnostic imaging. The complementarity between the intrinsically aligned anatomic (CT) and functional or metabolic (PET) information provided in a "one-stop shop", and the possibility of using the CT images for attenuation correction of the PET data, has been the driving force behind the success of this technology. On the other hand, combining PET with Magnetic Resonance Imaging (MRI) in a single gantry is technically more challenging owing to the strong magnetic fields. Nevertheless, significant progress has been made, resulting in the design of a few preclinical PET systems and one human prototype dedicated to simultaneous PET/MR brain imaging. This paper discusses recent advances in PET instrumentation and the advantages and challenges of multimodality imaging systems. Future opportunities and the challenges facing the adoption of multimodality imaging instrumentation will also be addressed.

  16. Diffusion Maps for Multimodal Registration

    Directory of Open Access Journals (Sweden)

    Gemma Piella

    2014-06-01

    Full Text Available Multimodal image registration is a difficult task, due to the significant intensity variations between the images. A common approach is to use sophisticated similarity measures, such as mutual information, that are robust to those intensity variations. However, these similarity measures are computationally expensive and, moreover, often fail to capture the geometry and the associated dynamics linked with the images. Another approach is the transformation of the images into a common space where modalities can be directly compared. Within this approach, we propose to register multimodal images by using diffusion maps to describe the geometric and spectral properties of the data. Through diffusion maps, the multimodal data is transformed into a new set of canonical coordinates that reflect its geometry uniformly across modalities, so that meaningful correspondences can be established between them. Images in this new representation can then be registered using a simple Euclidean distance as a similarity measure. Registration accuracy was evaluated on both real and simulated brain images with known ground-truth for both rigid and non-rigid registration. Results showed that the proposed approach achieved higher accuracy than the conventional approach using mutual information.
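
    The core embedding step can be sketched as follows: build a Gaussian affinity over feature vectors, row-normalize it into a Markov matrix, and take its leading non-trivial eigenvectors as the diffusion coordinates in which a plain Euclidean distance becomes meaningful. The kernel width, diffusion time, and number of coordinates below are illustrative, and the registration step itself is omitted.

```python
# Minimal diffusion-map embedding sketch: Gaussian affinity -> Markov matrix
# -> leading non-trivial eigenvectors as modality-independent coordinates.
# epsilon, t, and n_coords are illustrative choices.
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_map(features, epsilon=1.0, n_coords=3, t=1):
    """features: (n_samples, n_dims) array; returns (n_samples, n_coords) embedding."""
    d2 = cdist(features, features, "sqeuclidean")
    W = np.exp(-d2 / epsilon)                 # Gaussian affinity kernel
    P = W / W.sum(axis=1, keepdims=True)      # row-normalized Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector; scale by eigenvalues^t (diffusion time).
    idx = order[1:n_coords + 1]
    return vecs.real[:, idx] * (vals.real[idx] ** t)
```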

  17. Multimodality imaging of pulmonary infarction

    Energy Technology Data Exchange (ETDEWEB)

    Bray, T.J.P., E-mail: timothyjpbray@gmail.com [Department of Radiology, Papworth Hospital NHS Foundation Trust, Ermine Street, Papworth Everard, Cambridge CB23 3RE (United Kingdom); Mortensen, K.H., E-mail: mortensen@doctors.org.uk [Department of Radiology, Papworth Hospital NHS Foundation Trust, Ermine Street, Papworth Everard, Cambridge CB23 3RE (United Kingdom); University Department of Radiology, Addenbrookes Hospital, Cambridge University Hospitals NHS Foundation Trust, Hills Road, Box 318, Cambridge CB2 0QQ (United Kingdom); Gopalan, D., E-mail: deepa.gopalan@btopenworld.com [Department of Radiology, Papworth Hospital NHS Foundation Trust, Ermine Street, Papworth Everard, Cambridge CB23 3RE (United Kingdom)

    2014-12-15

    Highlights: • A plethora of pulmonary and systemic disorders, often associated with grave outcomes, may cause pulmonary infarction. • A stereotypical infarct is a peripheral wedge shaped pleurally based opacity but imaging findings can be highly variable. • Multimodality imaging is key to diagnosing the presence, aetiology and complications of pulmonary infarction. • Multimodality imaging of pulmonary infarction together with any ancillary features often guide to early targeted treatment. • CT remains the principal imaging modality with MRI increasingly used alongside nuclear medicine studies and ultrasound. - Abstract: The impact of absent pulmonary arterial and venous flow on the pulmonary parenchyma depends on a host of factors. These include location of the occlusive insult, the speed at which the occlusion develops and the ability of the normal dual arterial supply to compensate through increased bronchial arterial flow. Pulmonary infarction occurs when oxygenation is cut off secondary to sudden occlusion with lack of recruitment of the dual supply arterial system. Thromboembolic disease is the commonest cause of such an insult but a whole range of disease processes intrinsic and extrinsic to the pulmonary arterial and venous lumen may also result in infarcts. Recognition of the presence of infarction can be challenging as imaging manifestations often differ from the classically described wedge shaped defect and a number of weighty causes need consideration. This review highlights aetiologies and imaging appearances of pulmonary infarction, utilising cases to illustrate the essential role of a multimodality imaging approach in order to arrive at the appropriate diagnosis.

  18. Multimodal responsive action

    DEFF Research Database (Denmark)

    Oshima, Sae

    1) the participants' sensitivity toward negative client feedback; and 2) their orientation to the informed response validated with an adequate self-inspection. Contrary to some services that may be assessed by a clear measure of whether something now works or not (e.g. mechanical repair), service evaluations in spoken interaction …

  19. Multimodal 3D PET/CT system for bronchoscopic procedure planning

    Science.gov (United States)

    Cheirsilp, Ronnarit; Higgins, William E.

    2013-02-01

    Integrated positron emission tomography (PET) / computed-tomography (CT) scanners give 3D multimodal data sets of the chest. Such data sets offer the potential for more complete and specific identification of suspect lesions and lymph nodes for lung-cancer assessment. This in turn enables better planning of staging bronchoscopies. The richness of the data, however, makes the visualization and planning process difficult. We present an integrated multimodal 3D PET/CT system that enables efficient region identification and bronchoscopic procedure planning. The system first invokes a series of automated 3D image-processing methods that construct a 3D chest model. Next, the user interacts with a set of interactive multimodal graphical tools that facilitate procedure planning for specific regions of interest (ROIs): 1) an interactive region candidate list that enables efficient ROI viewing in all tools; 2) a virtual PET-CT bronchoscopy rendering with SUV quantitative visualization to give a "fly through" endoluminal view of prospective ROIs; 3) transverse, sagittal, coronal multi-planar reformatted (MPR) views of the raw CT, PET, and fused CT-PET data; and 4) interactive multimodal volume/surface rendering to give a 3D perspective of the anatomy and candidate ROIs. In addition the ROI selection process is driven by a semi-automatic multimodal method for region identification. In this way, the system provides both global and local information to facilitate more specific ROI identification and procedure planning. We present results to illustrate the system's function and performance.
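
    For reference, the SUV values behind such a quantitative overlay are typically computed with the body-weight formula sketched below (shown here for an F-18 tracer). The function and variable names are illustrative rather than taken from the described system.

```python
# Standardized uptake value (SUV) sketch of the kind used for quantitative PET
# overlays. Decay correction of the injected dose to scan time is included;
# names are illustrative.
import math

def suv_body_weight(activity_bq_per_ml, injected_dose_bq, body_weight_kg,
                    minutes_since_injection, half_life_min=109.8):  # F-18 half-life
    """SUV = tissue activity concentration / (decay-corrected dose / body weight)."""
    decayed_dose = injected_dose_bq * math.exp(
        -math.log(2) * minutes_since_injection / half_life_min)
    # kg -> g; 1 g of tissue is taken as approximately 1 ml.
    return activity_bq_per_ml / (decayed_dose / (body_weight_kg * 1000.0))
```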

  20. Learners Performing Tasks in a Japanese EFL Classroom: A Multimodal and Interpersonal Approach to Analysis

    Science.gov (United States)

    Stone, Paul

    2012-01-01

    In this paper I describe and analyse learner task-based interactions from a multimodal perspective with the aim of better understanding how learners' interpersonal relationships might affect task performance. Task-based pedagogy is focused on classroom interaction between learners, yet analysis of tasks has often neglected the analysis of this…

  1. Measuring stress and cognitive load effects on the perceived quality of a multimodal dialogue system

    NARCIS (Netherlands)

    Niculescu, A.I.; van Dijk, Elisabeth M.A.G.; Cao, Y.; Nijholt, Antinus; Spink, A.J.; Grieco, F; Krips, O.E.; Loyens, L.W.S.; Noldus, L.P.J.J.; Zimmerman, P.H.

    2010-01-01

    In this paper we present the results of a pilot study investigating the impact of stress and cognitive load on the perceived interaction quality of a multimodal dialogue system for crisis management. Four test subjects interacted with the system in four differently configured trials aiming to induce

  2. Measuring stress and cognitive load effects on the perceived quality of a multimodal dialogue system

    NARCIS (Netherlands)

    Niculescu, Andreea; Dijk, van Betsy; Cao, Yujia; Nijholt, Anton; Spink, A.J.; Grieco, F.; Krips, O.E.; Loyens, L.W.S.; Noldus, L.P.J.J.; Zimmerman, P.H.

    2010-01-01

    In this paper we present the results of a pilot study investigating the impact of stress and cognitive load on the perceived interaction quality of a multimodal dialogue system for crisis management. Four test subjects interacted with the system in four differently configured trials aiming to induce

  3. The integration of emotional and symbolic components in multimodal communication.

    Science.gov (United States)

    Mehu, Marc

    2015-01-01

    Human multimodal communication can be said to serve two main purposes: information transfer and social influence. In this paper, I argue that different components of multimodal signals play different roles in the processes of information transfer and social influence. Although the symbolic components of communication (e.g., verbal and denotative signals) are well suited to transfer conceptual information, emotional components (e.g., non-verbal signals that are difficult to manipulate voluntarily) likely take a function that is closer to social influence. I suggest that emotion should be considered a property of communicative signals, rather than an entity that is transferred as content by non-verbal signals. In this view, the effect of emotional processes on communication serves to change the quality of social signals to make them more efficient at producing responses in perceivers, whereas symbolic components increase the signals' efficiency at interacting with the cognitive processes dedicated to the assessment of relevance. The interaction between symbolic and emotional components will be discussed in relation to the need for perceivers to evaluate the reliability of multimodal signals.

  4. Multimodal targeted high relaxivity thermosensitive liposome for in vivo imaging

    Science.gov (United States)

    Kuijten, Maayke M. P.; Hannah Degeling, M.; Chen, John W.; Wojtkiewicz, Gregory; Waterman, Peter; Weissleder, Ralph; Azzi, Jamil; Nicolay, Klaas; Tannous, Bakhos A.

    2015-11-01

    Liposomes are spherical, self-closed structures formed by lipid bilayers that can encapsulate drugs and/or imaging agents in their hydrophilic core or within their membrane moiety, making them suitable delivery vehicles. We have synthesized a new liposome containing a gadolinium-DOTA lipid bilayer, as a targeted multimodal molecular imaging agent for magnetic resonance and optical imaging. We showed that this liposome has much higher molar relaxivities r1 and r2 than a more conventional liposome containing a gadolinium-DTPA-BSA lipid. By incorporating both gadolinium and rhodamine in the lipid bilayer, as well as biotin on its surface, we used this agent for multimodal imaging and targeting of tumors through the strong biotin-streptavidin interaction. Since this new liposome is thermosensitive, it can be used for ultrasound-mediated drug delivery at specific sites, such as tumors, and can be guided by magnetic resonance imaging.

  5. OPTIMIZATION DESIGN OF HYDRAU-LIC MANIFOLD BLOCKS BASED ON HUMAN-COMPUTER COOPERATIVE GENETIC ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    Feng Yi; Li Li; Tian Shujun

    2003-01-01

    Optimization design of hydraulic manifold blocks (HMB) is studied as a complex solid spatial layout problem. Based on comprehensive research into the structural features and design rules of HMB, an optimization model for this problem is presented. Using a human-computer cooperative genetic algorithm (GA) and its hybrid optimization strategies, integrated layout and connection design schemes of HMB can be automatically optimized. An example is given to verify the approach.

  6. Human-Computer Interface Design Considering Visual Attention

    Institute of Scientific and Technical Information of China (English)

    王宁; 余隋怀; 肖琳臻; 周宪

    2016-01-01

    In order to improve design efficiency and quality, a new method for human-computer interface design that considers human visual attention and ergonomics is proposed. The visual attention over the human-computer interface is analyzed and calculated by a context-aware saliency detection algorithm, and a visual attention map is established. Meanwhile, the importance and frequency of use of each component are obtained by the paired-comparison method from a user survey, and a significance distribution map of the human-computer interface is drawn as a grayscale image. By comparing the two maps, the designer can estimate whether a component with high significance also receives high visual attention. The proposed method is validated on the human-computer interaction interface design of a smartphone. The results show that the method optimizes the user's visual characteristics; compared with the traditional method, the quality and efficiency of the interface design are improved, and the designed interface enhances the user's interactive experience.
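
    A rough sketch of the saliency side of this comparison is given below. OpenCV's spectral-residual detector (from opencv-contrib-python) is used here as a stand-in for the context-aware saliency model referred to in the abstract, and the component bounding boxes are illustrative.

```python
# Saliency-map sketch for scoring interface components. Spectral-residual
# saliency stands in for the paper's context-aware saliency model; the
# component bounding boxes are illustrative.
import cv2
import numpy as np

def component_attention(screenshot_bgr, components):
    """components: {name: (x, y, w, h)}; returns mean saliency per component."""
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency = detector.computeSaliency(screenshot_bgr)   # float map in [0, 1]
    assert ok, "saliency computation failed"
    scores = {}
    for name, (x, y, w, h) in components.items():
        scores[name] = float(np.mean(saliency[y:y + h, x:x + w]))
    return scores

# Comparing these scores with the importance/frequency map highlights components
# whose visual prominence does not match their importance.
```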

  7. Adhesion of multimode adhesives to enamel and dentin after one year of water storage.

    Science.gov (United States)

    Vermelho, Paulo Moreira; Reis, André Figueiredo; Ambrosano, Glaucia Maria Bovi; Giannini, Marcelo

    2017-06-01

    This study aimed to evaluate the ultramorphological characteristics of tooth-resin interfaces and the bond strength (BS) of multimode adhesive systems to enamel and dentin. Multimode adhesives (Scotchbond Universal (SBU) and All-Bond Universal) were tested in both self-etch and etch-and-rinse modes and compared to control groups (Optibond FL and Clearfil SE Bond (CSB)). Adhesives were applied to human molars and composite blocks were incrementally built up. Teeth were sectioned to obtain specimens for microtensile BS and TEM analysis. Specimens were tested after storage for either 24 h or 1 year. SEM analyses were performed to classify the failure pattern of beam specimens after BS testing. Etching increased the enamel BS of multimode adhesives; however, BS decreased after storage for 1 year. No significant differences in dentin BS were noted between multimode and control in either evaluation period. Storage for 1 year only reduced the dentin BS for SBU in self-etch mode. TEM analysis identified hybridization and interaction zones in dentin and enamel for all adhesives. Silver impregnation was detected on dentin-resin interfaces after storage of specimens for 1 year only with the SBU and CSB. Storage for 1 year reduced enamel BS when adhesives are applied on etched surface; however, BS of multimode adhesives did not differ from those of the control group. In dentin, no significant difference was noted between the multimode and control group adhesives, regardless of etching mode. In general, multimode adhesives showed similar behavior when compared to traditional adhesive techniques. Multimode adhesives are one-step self-etching adhesives that can also be used after enamel/dentin phosphoric acid etching, but each product may work better in specific conditions.

  8. Multimode optical fibers: steady state mode exciter.

    Science.gov (United States)

    Ikeda, M; Sugimura, A; Ikegami, T

    1976-09-01

    The steady state mode power distribution of the multimode graded index fiber was measured. A simple and effective steady state mode exciter was fabricated by an etching technique. Its insertion loss was 0.5 dB for an injection laser. Deviation in transmission characteristics of multimode graded index fibers can be avoided by using the steady state mode exciter.

  9. Multimodal Literacies in the Secondary English Classroom

    Science.gov (United States)

    Sewell, William C.; Denton, Shawn

    2011-01-01

    To provide insight into the issue of multimodal literacy instruction, the authors explore presentation techniques and instructional activities employed in their secondary language arts classes. They collaborate on assignments that focus students on "anchored media instruction" and engage them in producing multimodal, technology-infused projects,…

  10. Multimodal Narrative Inquiry: Six Teacher Candidates Respond

    Science.gov (United States)

    Morawski, Cynthia M.; Rottmann, Jennifer

    2016-01-01

    In this paper we present findings of a study on the implementation of a multimodal teacher narrative inquiry component, theoretically grounded by Rosenblatt's theory of transaction analysis, methodologically supported by action research and practically enacted by narrative inquiry and multimodal learning. In particular, the component offered…

  11. Multimodal Pedagogies for Teacher Education in TESOL

    Science.gov (United States)

    Yi, Youngjoo; Angay-Crowder, Tuba

    2016-01-01

    As a growing number of English language learners (ELLs) engage in digital and multimodal literacy practices in their daily lives, teachers are starting to incorporate multimodal approaches into their instruction. However, anecdotal and empirical evidence shows that teachers often feel unprepared for integrating such practices into their curricula…

  12. (Re-)Examination of Multimodal Augmented Reality

    NARCIS (Netherlands)

    Rosa, N.E.; Werkhoven, P.J.; Hürst, W.O.

    The majority of augmented reality (AR) research has been concerned with visual perception, however the move towards multimodality is imminent. At the same time, there is no clear vision of what multimodal AR is. The purpose of this position paper is to consider possible ways of examining AR other

  13. A cuckoo search algorithm for multimodal optimization.

    Science.gov (United States)

    Cuevas, Erik; Reyna-Orta, Adolfo

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and their distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms on a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy provides better and more consistent performance than existing well-known multimodal algorithms for the majority of test problems, while avoiding any serious computational deterioration.
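
    A compact sketch of the memory idea is given below: a standard Levy-flight cuckoo step plus a distance-gated archive that keeps one entry per neighbourhood of the search space. The population size, step scaling, and archive radius are illustrative, and the depuration and modified selection steps of MCS are omitted.

```python
# Cuckoo search with Levy flights plus a distance-gated archive of distinct
# local optima (the "memory" idea). Parameters are illustrative; this is not
# the full MCS algorithm of the paper.
import math
import numpy as np

def levy_step(dim, rng, beta=1.5):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def multimodal_cuckoo(f, lo, hi, n=25, iters=200, pa=0.25, radius=0.1, seed=0):
    """Return an archive [(x, f(x)), ...] of distinct local optima found so far."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    nests = rng.uniform(lo, hi, (n, dim))
    fit = np.array([f(x) for x in nests])
    archive = []                                   # memory of distinct good solutions
    min_dist = radius * np.linalg.norm(hi - lo)    # optima closer than this count as the same
    for _ in range(iters):
        best = nests[fit.argmin()]
        for i in range(n):                         # Levy-flight moves biased toward the best nest
            cand = np.clip(nests[i] + 0.01 * levy_step(dim, rng) * (nests[i] - best), lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                nests[i], fit[i] = cand, fc
        for i in np.where(rng.random(n) < pa)[0]:  # abandon a fraction of nests (host discovery)
            nests[i] = rng.uniform(lo, hi)
            fit[i] = f(nests[i])
        for x, fx in zip(nests, fit):              # memory mechanism: one entry per neighbourhood
            close = [j for j, (m, _) in enumerate(archive) if np.linalg.norm(m - x) < min_dist]
            if not close:
                archive.append((x.copy(), float(fx)))
            elif fx < archive[close[0]][1]:
                archive[close[0]] = (x.copy(), float(fx))
    return archive

# Example: this 1-D function has two minima, near x = -1 and x = 2.
print(multimodal_cuckoo(lambda x: (x[0] + 1) ** 2 * (x[0] - 2) ** 2, [-4.0], [4.0]))
```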

  14. Addressing multimodality in overt aggression detection

    NARCIS (Netherlands)

    Lefter, I.; Rothkrantz, L.J.M.; Burghouts, G.; Yang, Z.; Wiggers, P.

    2011-01-01

    Automatic detection of aggressive situations has a high societal and scientific relevance. It has been argued that using data from multimodal sensors, such as video and sound, as opposed to unimodal data, is bound to increase the accuracy of detection. We approach the problem of multimodal aggression detection…

  15. The Multimodal Possibilities of Online Instructions

    DEFF Research Database (Denmark)

    Kampf, Constance

    2006-01-01

    The WWW simplifies the process of delivering online instructions through multimodal channels because of the ease of use for voice, video, pictures, and text modes of communication built into it. Given that instructions are being produced in multimodal format for the WWW, how do multimodal analysis tools help us understand the impact of the multimodal channels used in instructions? This paper looks at Kress and Van Leeuwen's question of "how narrative is shaped in a specific mode and how it is reshaped when it appears in different modes?" (2001:128). In addition, the intersection of linguistics and rhetoric is explored as a means of understanding the multimodal possibilities for technical communication genres such as instructions.

  16. Multifuel multimodal network design; Projeto de redes multicombustiveis multimodal

    Energy Technology Data Exchange (ETDEWEB)

    Lage, Carolina; Dias, Gustavo; Bahiense, Laura; Ferreira Filho, Virgilio J.M. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Programa de Engenharia de Producao

    2008-07-01

    The objective of the Multicommodity Multimodal Network Project is the development of modeling tools and methodologies for the optimal sizing of production networks and the multimodal distribution of multiple fuels, considering investment and transportation costs. Given the inherently non-linear, combinatorial nature of the problem, the exact resolution of real instances by the complete model becomes computationally intractable. Thus, the resolution strategy should combine exact and heuristic methods applied to subdivisions of the original problem. This paper deals with one of these subdivisions, tackling the problem of modeling a network of pipelines to carry the ethanol production away from the producing plants. The objective is to define the best network topology, minimizing investment and operational costs while meeting the total demand. To this end, the network is modeled as a tree, where the nodes are the centers of producing regions and the edges are the pipelines through which the ethanol produced by the plants must flow. The main objective also includes deciding the optimal diameter of each pipeline and the optimal size of the pumps, in order to minimize pumping costs. (author)

  17. Metawidgets in the multimodal interface

    Energy Technology Data Exchange (ETDEWEB)

    Blattner, M.M. (Lawrence Livermore National Lab., CA (United States) Anderson (M.D.) Cancer Center, Houston, TX (United States)); Glinert, E.P.; Jorge, J.A.; Ormsby, G.R. (Rensselaer Polytechnic Inst., Troy, NY (United States). Dept. of Computer Science)

    1991-01-01

    We analyze two intertwined and fundamental issues concerning computer-to-human communication in the multimodal interfaces: the interplay between sound and graphics, and the role of object persistence. Our observations lead us to introduce metawidgets as abstract entities capable of manifesting themselves to users as image, as sound, or as various combinations and/or sequences of the two media. We show examples of metawidgets in action, and discuss mechanisms for choosing among alternative media for metawidget instantiation. Finally, we describe a couple of experimental microworlds we have implemented to test out some of our ideas. 17 refs., 7 figs.

  18. Multimodal Treatment of Chronic Pain.

    Science.gov (United States)

    Dale, Rebecca; Stacey, Brett

    2016-01-01

    Most patients with chronic pain receive multimodal treatment. There is scant literature to guide us, but when approaching combination pharmacotherapy, the practitioner and patient must weigh the benefits with the side effects; many medications have modest effect yet carry significant side effects that can be additive. Chronic pain often leads to depression, anxiety, and deconditioning, which are targets for treatment. Structured interdisciplinary programs are beneficial but costly. Interventions have their place in the treatment of chronic pain and should be a part of a multidisciplinary treatment plan. Further research is needed to validate many common combination treatments.

  19. Modal dynamics in multimode fibers

    CERN Document Server

    Fridman, Moti; Nixon, Micha; Friesem, Asher A; Davidson, Nir

    2010-01-01

    The dynamics of modes and their states of polarization in multimode fibers, as a function of time, space, and wavelength, are experimentally and theoretically investigated. The results reveal that the states of polarization are displaced in the Poincaré sphere representation when the angular orientation of the polarization of the incident light is varied. Such displacements, which complicate the interpretation of the results, are overcome by resorting to a modified Poincaré sphere representation. With such a modification it should be possible to predict the output modes and their states of polarization when the input mode and state of polarization are known.

  20. Multimodal therapy in perioperative analgesia.

    Science.gov (United States)

    Gritsenko, Karina; Khelemsky, Yury; Kaye, Alan David; Vadivelu, Nalini; Urman, Richard D

    2014-03-01

    This article reviews the current evidence for multimodal analgesic options for common surgical procedures. As perioperative physicians, we have come a long way from using only opioids for postoperative pain to combinations of acetaminophen, nonsteroidal anti-inflammatory drugs (NSAIDs), selective Cyclo-oxygenase (COX-2) inhibitors, local anesthetics, N-methyl-d-aspartate (NMDA) receptor antagonists, and regional anesthetics. As discussed in this article, many of these agents have decreased narcotic requirements, improved patient satisfaction, and decreased postanesthesia care unit (PACU) times, as well as morbidity in the perioperative period.

  1. Haptic-Multimodal Flight Control System Update

    Science.gov (United States)

    Goodrich, Kenneth H.; Schutte, Paul C.; Williams, Ralph A.

    2011-01-01

    The rapidly advancing capabilities of autonomous aircraft suggest a future where many of the responsibilities of today's pilot transition to the vehicle, transforming the pilot's job into something akin to driving a car or simply being a passenger. Notionally, this transition will reduce the specialized skills, training, and attention required of the human user while improving safety and performance. However, our experience with highly automated aircraft highlights many challenges to this transition, including lack of automation resilience, adverse human-automation interaction under stress, and the difficulty of developing certification standards and methods of compliance for complex systems performing critical functions traditionally performed by the pilot (e.g., sense and avoid vs. see and avoid). Recognizing these opportunities and realities, researchers at NASA Langley are developing a haptic-multimodal flight control (HFC) system concept that can serve as a bridge between today's state-of-the-art aircraft, which are highly automated but have little autonomy and can only be operated safely by highly trained experts (i.e., pilots), and a future in which non-experts (e.g., drivers) can safely and reliably use autonomous aircraft to perform a variety of missions. This paper reviews the motivation and theoretical basis of the HFC system, describes its current state of development, and presents results from two pilot-in-the-loop simulation studies. These preliminary studies suggest the HFC reshapes human-automation interaction in a way well-suited to revolutionary ease-of-use.

  2. MOBILTEL - Mobile Multimodal Telecommunications dialogue system based on VoIP telephony

    Directory of Open Access Journals (Sweden)

    Anton Čižmár

    2009-10-01

    Full Text Available In this paper the MobilTel project is presented. Communication itself is becoming a multimodal interactive process, and the MobilTel project provides research and development activities in the area of multimodal interfaces. The result is a functional architecture for a mobile multimodal telecommunication system running on a handheld device. The MobilTel communicator is a multimodal Slovak speech and graphical interface with an integrated VoIP client. The other possible modalities are pen (touch screen) interaction and keyboard, together with a display on which the information is presented in a more user-friendly way (icons, emoticons, etc.) and which provides hyperlinks and scrolling menus. We describe the method of interaction between a mobile terminal (PDA) and the MobilTel multimodal PC communicator over a VoIP WLAN connection based on the SIP protocol. We also present graphical examples of services that enable users to obtain information about the weather or about train connections between two stations.

  3. Cardiac imaging. A multimodality approach

    Energy Technology Data Exchange (ETDEWEB)

    Thelen, Manfred [Johannes Gutenberg University Hospital, Mainz (Germany); Erbel, Raimund [University Hospital Essen (Germany). Dept. of Cardiology; Kreitner, Karl-Friedrich [Johannes Gutenberg University Hospital, Mainz (Germany). Clinic and Polyclinic for Diagnostic and Interventional Radiology; Barkhausen, Joerg (eds.) [University Hospital Schleswig-Holstein, Luebeck (Germany). Dept. of Radiology and Nuclear Medicine

    2009-07-01

    An excellent atlas on modern diagnostic imaging of the heart Written by an interdisciplinary team of experts, Cardiac Imaging: A Multimodality Approach features an in-depth introduction to all current imaging modalities for the diagnostic assessment of the heart as well as a clinical overview of cardiac diseases and main indications for cardiac imaging. With a particular emphasis on CT and MRI, the first part of the atlas also covers conventional radiography, echocardiography, angiography and nuclear medicine imaging. Leading specialists demonstrate the latest advances in the field, and compare the strengths and weaknesses of each modality. The book's second part features clinical chapters on heart defects, endocarditis, coronary heart disease, cardiomyopathies, myocarditis, cardiac tumors, pericardial diseases, pulmonary vascular diseases, and diseases of the thoracic aorta. The authors address anatomy, pathophysiology, and clinical features, and evaluate the various diagnostic options. Key features: - Highly regarded experts in cardiology and radiology offer image-based teaching of the latest techniques - Readers learn how to decide which modality to use for which indication - Visually highlighted tables and essential points allow for easy navigation through the text - More than 600 outstanding images show up-to-date technology and current imaging protocols Cardiac Imaging: A Multimodality Approach is a must-have desk reference for cardiologists and radiologists in practice, as well as a study guide for residents in both fields. It will also appeal to cardiac surgeons, general practitioners, and medical physicists with a special interest in imaging of the heart. (orig.)

  4. Untangled modes in multimode waveguides

    Science.gov (United States)

    Plöschner, Martin; Tyc, Tomáš; Čižmár, Tomáš

    2016-03-01

    Small, fibre-based endoscopes have already improved our ability to image deep within the human body. A novel approach introduced recently utilised disordered light within a standard multimode optical fibre for lensless imaging. Importantly, this approach brought a very significant reduction of the instrument's footprint, to dimensions below 100 μm. The most important limitation of this exciting technology is the lack of bending flexibility: imaging is only possible as long as the fibre remains stationary. The only route to allowing flexibility of such endoscopes is to trade in all the knowledge we have about the optical system, particularly the cylindrical symmetry of the refractive index distribution. In perfect straight step-index cylindrical waveguides we can find optical modes that do not change their spatial distribution as they propagate through. In this paper we present a theoretical background that describes such modes in a more realistic model of a real-life step-index multimode fibre, taking into account common deviations of the refractive index distribution from its ideal step-index profile. Separately, we discuss how to include the influence of fibre bending.

  5. Collection of Information Directly from Patients through an Adaptive Human-computer Interface

    Science.gov (United States)

    Lobach, David F.; Arbanas, Jennifer M.; Mishra, Dharani D.; Wildemuth, Barbara; Campbell, Marci

    2002-01-01

    Clinical information collected directly from patients is critical to the practice of medicine. Past efforts to collect this information using computers have had limited utility because these efforts required users to be facile with the information collecting system. This poster describes the development and function of a computer system that uses technology to overcome the limitations of previous computer-based data collection tools by adapting the human-computer interface to fit the skills of the user. The system has been successfully used at two diverse clinical sites.

  6. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    Science.gov (United States)

    Bejczy, Antal K.

    1995-01-01

    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

  7. Mediating multimodal environmental knowledge across animation techniques

    DEFF Research Database (Denmark)

    Maier, Carmen Daniela

    2011-01-01

    This paper examines how environmental knowledge is mediated in the technology-mediated discourse of short animation films. This is done by employing the model of multimodal discourse analysis proposed by Van Leeuwen (2008) and Machin and Van Leeuwen (2007) in the analysis of a series of animation films focused on environmental problems. These films are united under http://www.sustainlane.com/. The multimodal discourse analysis is meant to reveal how the selection and representation of environmental knowledge about social actors, social actions, resources, time and space are influenced by animation techniques. Furthermore, in the context of this multimodal discourse analysis, their influence upon…

  8. Tibial cortical lesions: A multimodality pictorial review

    Energy Technology Data Exchange (ETDEWEB)

    Tyler, P.A., E-mail: philippa.tyler@rnoh.nhs.uk [Department of Radiology, Royal National Orthopaedic Hospital, Brockley Hill, Stanmore HA7 4LP (United Kingdom); Mohaghegh, P., E-mail: pegah1000@gmail.com [Department of Radiology, Royal National Orthopaedic Hospital, Brockley Hill, Stanmore HA7 4LP (United Kingdom); Foley, J., E-mail: jfoley1@nhs.net [Department of Radiology, Glasgow Royal Infirmary, 16 Alexandra Parade, Glasgow G31 2ES (United Kingdom); Isaac, A., E-mail: amandaisaac@doctors.org.uk [Department of Radiology, King' s College Hospital, Denmark Hill, London SE5 9RS (United Kingdom); Zavareh, A., E-mail: ali.zavareh@gmail.com [Department of Radiology, North Bristol NHS Trust, Frenchay, Bristol BS16 1LE (United Kingdom); Thorning, C., E-mail: cthorning@doctors.org.uk [Department of Radiology, East Surrey Hospital, Canada Avenue, Redhill, Surrey RH1 5RH (United Kingdom); Kirwadi, A., E-mail: anandkirwadi@gmail.com [Department of Radiology, Manchester Royal Infirmary, Oxford Road, Manchester M13 9WL (United Kingdom); Pressney, I., E-mail: ipressney@hotmail.com [Department of Radiology, Royal National Orthopaedic Hospital, Brockley Hill, Stanmore HA7 4LP (United Kingdom); Amary, F., E-mail: fernanda.amary@rnoh.nhs.uk [Department of Histopathology, Royal National Orthopaedic Hospital, Brockley Hill, Stanmore HA7 4LP (United Kingdom); Rajeswaran, G., E-mail: grajeswaran@gmail.com [Department of Radiology, Chelsea and Westminster Hospital, 369 Fulham Road, London SW10 9NH (United Kingdom)

    2015-01-15

    Highlights: • Multimodality imaging plays an important role in the investigation and diagnosis of shin pain. • We review the multimodality imaging findings of common cortically based tibial lesions. • We also describe the rarer pathologies of tibial cortical lesions. - Abstract: Shin pain is a common complaint, particularly in young and active patients, with a wide range of potential diagnoses and resulting implications. We review the natural history and multimodality imaging findings of the more common causes of cortically-based tibial lesions, as well as the rarer pathologies less frequently encountered in a general radiology department.

  9. The multimodal argumentation of persuasive counter discourses

    DEFF Research Database (Denmark)

    Maier, Carmen Daniela

    … are given prominence in the argumentation by examining their complex interplay and functional differentiation. The ways in which speech, writing and images articulate the counter discourse occupy a central position in the analysis. A special focus is put on the multimodal configuration of specific … and new multimodal ways of discussing them.

  10. Timing of Multimodal Robot Behaviors during Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Jensen, Lars Christian; Fischer, Kerstin; Suvei, Stefan-Daniel

    2017-01-01

    In this paper, we address issues of timing between robot behaviors in multimodal human-robot interaction. In particular, we study what effects sequential order and simultaneity of robot arm and body movement and verbal behavior have on the fluency of interactions. In a study with the Care-O-bot, a large service robot, in a medical measurement scenario, we compare the timing of the robot's behaviors in three between-subject conditions. The results show that the relative timing of robot behaviors has significant effects on the number of problems participants encounter, and that the robot's verbal output plays a special role because participants carry their expectations from human verbal interaction into the interactions with robots.

  11. Practical applications of multimodal NDT data

    Science.gov (United States)

    Frankle, Robert S.

    1993-01-01

    Today's powerful computer workstations enable multimodal nondestructive testing (NDT) to be used for such practical applications as detecting and evaluating defects in structures. Radiography (x ray) and ultrasonics (UT) are examples of two different nondestructive tests, or modalities, which measure characteristics of materials and structures without affecting them. Traditionally, NDT produced an analog result, such as an image on x-ray film, which was difficult to review, interpret, and store. New and more powerful digital NDT techniques, such as industrial x-ray computed tomography (CT), produce digital output that is readily amenable to computerized analysis and storage. Computers are now available with sufficient memory and performance to support interactive processing of digital NDT data sets, which can easily exceed 100 megabytes. Numerous data sets can be stored on small, inexpensive tape cassettes. Failure Analysis Associates, Inc. (FaAA) has developed software-based techniques for using NDT to identify defects in structures. These techniques are also used to visualize the NDT data and to analyze the structural integrity of parts containing NDT-detected defects. FaAA's approach employs state-of-the-art scientific visualization and computer workstation technology. For some types of materials, such as advanced composites, data from different NDT modalities are needed to locate different types of defects. Applications of this technology include assessment of impact damage in composite aerospace structures, investigation of failed assemblies, and evaluation of metallic casting defects.

  12. Photon correlations in multimode waveguides

    Energy Technology Data Exchange (ETDEWEB)

    Poem, Eilon; Silberberg, Yaron [Department of Physics of Complex Systems, Weizmann Institute of Science, Rehovot 76100 (Israel)

    2011-10-15

    We consider the propagation of classical and nonclassical light in multimode optical waveguides. We focus on the evolution of the few-photon correlation functions, which, much as the light-intensity distribution in such systems, evolve in a periodic manner, culminating in the "revival" of the initial correlation pattern at the end of each period. It is found that when the input state possesses nontrivial symmetries, the correlation revival period can be longer than that of the intensity, and thus the same intensity pattern can display different correlation patterns. We experimentally demonstrate this effect for classical, pseudothermal light, and compare the results with the predictions for nonclassical, quantum light.

  13. Multimode waveguide based directional coupler

    Science.gov (United States)

    Ahmed, Rajib; Rifat, Ahmmed A.; Sabouri, Aydin; Al-Qattan, Bader; Essa, Khamis; Butt, Haider

    2016-07-01

    The Silicon-on-Insulator (SOI) based platform overcomes limitations of the previous copper and fiber based technologies. Due to its high index difference, SOI waveguides (WG) and directional couplers (DC) are widely used for high speed optical networks and hybrid electro-optical interconnections. TE00-TE01, TE00-TE00 and TM00-TM00 SOI directional couplers are designed with symmetrical and asymmetrical configurations to couple with TE00, TE01 and TM00 modes in a multi-mode semi-triangular ring-resonator configuration which will be applicable for multi-analyte sensing. The couplers are designed with the effective index method and their structural parameters are optimized with consideration to coupler length, wavelength and polarization dependence. Lastly, the performance of the couplers is analyzed in terms of cross-talk, mode overlap factor, coupling length and coupling efficiency.
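
    As a reminder of the physics such designs rely on, the power transfer in a directional coupler and the resulting coupling length follow from standard coupled-mode theory; the expressions below are the textbook form, not parameters taken from this particular paper:

        \[
        \frac{P_{\mathrm{cross}}(L)}{P_{\mathrm{in}}} = \sin^{2}\!\left(\frac{(\beta_e-\beta_o)\,L}{2}\right),
        \qquad
        L_c = \frac{\pi}{\beta_e-\beta_o} = \frac{\lambda}{2\,(n_{\mathrm{eff},e}-n_{\mathrm{eff},o})},
        \]

    where \beta_e and \beta_o (n_{\mathrm{eff},e} and n_{\mathrm{eff},o}) are the propagation constants (effective indices) of the even and odd supermodes of the coupled pair; complete power transfer first occurs at the coupling length L_c.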

  14. Multimodal integration in statistical learning

    DEFF Research Database (Denmark)

    Mitchell, Aaron; Christiansen, Morten Hyllekvist; Weiss, Dan

    2014-01-01

    Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study......, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker’s face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally...... incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables...

  15. Multimodal signalling in estrildid finches

    DEFF Research Database (Denmark)

    Gomes, A. C. R.; Funghi, C.; Soma, M.

    2017-01-01

    radiations, and one that includes many model species for research in sexual selection and communication. We found little evidence for either joint evolution or trade-offs between song and colour ornamentation. Some negative correlations between dance repertoire and song traits may suggest a functional...... compromise, but generally courtship dance also evolved independently from other signals. Instead of correlated evolution, we found that song, dance and colour are each related to different socio-ecological traits. Song complexity evolved together with ecological generalism, song performance with investment...... in reproduction, dance with commonness and habitat type, whereas colour ornamentation was shown previously to correlate mostly with gregariousness. We conclude that multimodal signals evolve in response to various socio-ecological traits, suggesting the accumulation of distinct signalling functions....

  16. Kinesthetic Interaction

    DEFF Research Database (Denmark)

    Fogtmann, Maiken Hillerup; Fritsch, Jonas; Kortbek, Karen Johanne

    2008-01-01

    Within the Human-Computer Interaction community there is a growing interest in designing for the whole body in interaction design. The attempts aimed at addressing the body have very different outcomes spanning from theoretical arguments for understanding the body in the design process, to more...... to reveal bodily potential in relation to three design themes – kinesthetic development, kinesthetic means and kinesthetic disorder; and seven design parameters – engagement, sociality, movability, explicit motivation, implicit motivation, expressive meaning and kinesthetic empathy. The framework is a tool...

  17. Collaboration of Miniature Multi-Modal Mobile Smart Robots over a Network

    Science.gov (United States)

    2015-08-14

    interactions between independently evolving research directions based on physics-based models of mechanical, electromechanical and electronic devices, operational constraints... theoretical research on mathematics of failures in sensor-network-based miniature multimodal mobile robots and electromechanical systems. The views

  18. Using a Multimodal Approach to Facilitate Articulation, Phonemic Awareness, and Literacy in Young Children

    Science.gov (United States)

    Pieretti, Robert A.; Kaul, Sandra D.; Zarchy, Razi M.; O'Hanlon, Laureen M.

    2015-01-01

    The primary focus of this research study was to examine the benefit of a using a multimodal approach to speech sound correction with preschool children. The approach uses the auditory, tactile, and kinesthetic modalities and includes a unique, interactive visual focus that attempts to provide a visual representation of a phonemic category. The…

  19. Multimodal integration of haptics, speech, and affect in an educational environment

    NARCIS (Netherlands)

    Nijholt, Antinus; Chu, H-W; Savoie, M.; Sanchez, B.

    2004-01-01

    In this paper we investigate the introduction of haptics in a multimodal tutoring environment. In this environment a haptic device is used to control a virtual injection needle, and speech input and output are provided to interact with a virtual tutor, available as a talking head, and a virtual patient.

  20. Multimodal Discourse Analysis of College English Teacher’s Non-verbal Behaviors

    Institute of Scientific and Technical Information of China (English)

    WANG Qian

    2015-01-01

    This paper analyzes a College English teacher's non-verbal behaviors in the multimedia classroom from the perspective of Multimodal Discourse Analysis, aiming to explore how non-verbal behaviors in classroom teaching can be applied in the meaning-making process to facilitate teacher-student interaction and realize the three metafunctions of language in class.

  1. US Army Weapon Systems Human-Computer Interface (WSHCI) style guide, Version 1

    Energy Technology Data Exchange (ETDEWEB)

    Avery, L.W.; O'Mara, P.A.; Shepard, A.P.

    1996-09-30

    A stated goal of the U.S. Army has been the standardization of the human-computer interfaces (HCIs) of its systems. Some of the tools being used to accomplish this standardization are HCI design guidelines and style guides. Currently, the Army is employing a number of style guides. While these style guides provide good guidance for the command, control, communications, computers, and intelligence (C4I) domain, they do not necessarily represent the more unique requirements of the Army's real time and near-real time (RT/NRT) weapon systems. The Office of the Director of Information for Command, Control, Communications, and Computers (DISC4), in conjunction with the Weapon Systems Technical Architecture Working Group (WSTAWG), recognized this need as part of their activities to revise the Army Technical Architecture (ATA). To address this need, DISC4 tasked the Pacific Northwest National Laboratory (PNNL) to develop an Army weapon systems unique HCI style guide. This document, the U.S. Army Weapon Systems Human-Computer Interface (WSHCI) Style Guide, represents the first version of that style guide. The purpose of this document is to provide HCI design guidance for RT/NRT Army systems across the weapon systems domains of ground, aviation, missile, and soldier systems. Each domain should customize and extend this guidance by developing their domain-specific style guides, which will be used to guide the development of future systems within their domains.

  2. The use of analytical models in human-computer interface design

    Science.gov (United States)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.
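
    To illustrate the kind of estimate such analytical models produce, the sketch below applies the Keystroke-Level Model (a simplified member of the GOMS family named above) to a hypothetical "rename a file" task. The operator times are the commonly cited textbook averages, and the task breakdown is invented for illustration; neither comes from this paper.

        # Keystroke-Level Model (KLM) sketch: predict task time by summing operator times.
        # Operator times are commonly cited averages (Card, Moran & Newell); treat as illustrative.
        OPERATOR_TIMES = {
            "K": 0.28,  # keystroke (average typist)
            "P": 1.10,  # point at a target with the mouse
            "B": 0.10,  # press or release a mouse button
            "H": 0.40,  # home hands between keyboard and mouse
            "M": 1.35,  # mental preparation
        }

        def klm_time(operators):
            """Return the predicted execution time (seconds) for a sequence of KLM operators."""
            return sum(OPERATOR_TIMES[op] for op in operators)

        # Hypothetical task: point at a file, double-click it, type an 8-character name, press Enter.
        rename_file = ["M", "P", "B", "B", "H", "M"] + ["K"] * 8 + ["K"]
        print(f"Predicted time: {klm_time(rename_file):.2f} s")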

  3. A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems

    Science.gov (United States)

    Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  4. Human Computation: Object Recognition for Mobile Games Based on Single Player

    Directory of Open Access Journals (Sweden)

    Mohamed Sakr

    2014-07-01

    Full Text Available Smart phones and their applications have gained a lot of popularity. Many people depend on them for tasks such as banking, social networking, and entertainment. Games with a purpose (GWAPs) and microtask crowdsourcing are two techniques of human computation. GWAPs depend on humans to accomplish their tasks, so porting GWAPs to smart phones can greatly increase the number of participating humans. One human-computation system is the ESP Game, a game with a purpose and a good candidate for porting to smart phones. This paper presents a new single-player mobile game called MemoryLabel. It helps in labeling images and gives descriptions for them; in addition, the game describes objects in the image rather than the whole image. We deployed our game at the University of Menoufia for evaluation, and it is also published on the Google Play market for Android applications. In this trial, we first focused on measuring the total number of labels generated by our game and the number of objects that have been labeled. The results reveal that the proposed game shows promise in describing images and objects.

  5. Effects of muscle fatigue on the usability of a myoelectric human-computer interface.

    Science.gov (United States)

    Barszap, Alexander G; Skavhaug, Ida-Maria; Joshi, Sanjay S

    2016-10-01

    Electromyography-based human-computer interface development is an active field of research. However, knowledge on the effects of muscle fatigue for specific devices is limited. We have developed a novel myoelectric human-computer interface in which subjects continuously navigate a cursor to targets by manipulating a single surface electromyography (sEMG) signal. Two-dimensional control is achieved through simultaneous adjustments of power in two frequency bands through a series of dynamic low-level muscle contractions. Here, we investigate the potential effects of muscle fatigue during the use of our interface. In the first session, eight subjects completed 300 cursor-to-target trials without breaks; four using a wrist muscle and four using a head muscle. The wrist subjects returned for a second session in which a static fatiguing exercise took place at regular intervals in-between cursor-to-target trials. In the first session we observed no declines in performance as a function of use, even after the long period of use. In the second session, we observed clear changes in cursor trajectories, paired with a target-specific decrease in hit rates.
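
    The control principle described above, mapping power in two frequency bands of one sEMG channel to two cursor axes, can be sketched roughly as follows. The band edges, gains, baseline, and velocity mapping are illustrative assumptions, not the authors' published parameters.

        import numpy as np
        from scipy.signal import welch

        FS = 1000              # assumed sEMG sampling rate (Hz)
        LOW_BAND = (30, 80)    # illustrative frequency band driving the x-axis
        HIGH_BAND = (130, 250) # illustrative frequency band driving the y-axis

        def band_power(window, fs, band):
            """Mean power spectral density of `window` within `band` (Hz)."""
            freqs, psd = welch(window, fs=fs, nperseg=min(256, len(window)))
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return psd[mask].mean()

        def cursor_velocity(window, gain=50.0, baseline=(1.0, 1.0)):
            """Map low/high band powers of one sEMG window to a 2D cursor velocity."""
            px = band_power(window, FS, LOW_BAND)
            py = band_power(window, FS, HIGH_BAND)
            # Deviation from a resting baseline drives the cursor; clip to keep it bounded.
            vx = np.clip(gain * (px / baseline[0] - 1.0), -10, 10)
            vy = np.clip(gain * (py / baseline[1] - 1.0), -10, 10)
            return vx, vy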

  6. An analysis of the aesthetic embodiment of human computer interface%对于计算机人机界面交互美感体现的分析

    Institute of Scientific and Technical Information of China (English)

    宋发君

    2016-01-01

    Starting from the application of aesthetic principles to the computer human-machine interface, this paper discusses how aesthetics can be implemented in interface design and, on the basis of this discussion, summarizes how improving the aesthetic quality of the interactive interface can enhance both the efficiency and the experience of users in human-computer interaction.

  7. Histology image search using multimodal fusion.

    Science.gov (United States)

    Caicedo, Juan C; Vanegas, Jorge A; Páez, Fabian; González, Fabio A

    2014-10-01

    This work proposes a histology image indexing strategy based on multimodal representations obtained from the combination of visual features and associated semantic annotations. Both data modalities are complementary information sources for an image retrieval system, since visual features lack explicit semantic information and semantic terms do not usually describe the visual appearance of images. The paper proposes a novel strategy to build a fused image representation using matrix factorization algorithms and data reconstruction principles to generate a set of multimodal features. The methodology can seamlessly recover the multimodal representation of images without semantic annotations, allowing us to index new images using visual features only, and also accepting single example images as queries. Experimental evaluations on three different histology image data sets show that our strategy is a simple, yet effective approach to building multimodal representations for histology image search, and outperforms the response of the popular late fusion approach to combine information.
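
    A minimal sketch of the general idea, joint non-negative matrix factorization of visual features and annotation terms, with the latent code of an unannotated image recovered from its visual features alone, is shown below. The matrix sizes, the NMF solver, and the non-negative least-squares back-projection are assumptions for illustration, not the paper's exact algorithm.

        import numpy as np
        from sklearn.decomposition import NMF
        from scipy.optimize import nnls

        # Toy data: n images with d visual features and t binary annotation terms.
        rng = np.random.default_rng(0)
        n, d, t, k = 100, 64, 20, 10
        V = rng.random((n, d))                # visual features (non-negative)
        T = (rng.random((n, t)) > 0.8) * 1.0  # semantic annotations

        # Jointly factorize the concatenated [visual | text] matrix: X ~ W @ H.
        X = np.hstack([V, T])
        nmf = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
        W = nmf.fit_transform(X)   # multimodal latent representation per image
        H = nmf.components_        # shared basis; first d columns visual, rest textual
        H_vis = H[:, :d]

        def multimodal_code(visual_features):
            """Recover the multimodal latent code of an unannotated image from visuals only."""
            w, _ = nnls(H_vis.T, visual_features)  # min ||H_vis.T w - v|| with w >= 0
            return w

        query = rng.random(d)
        print(multimodal_code(query).shape)  # (k,) latent code usable for multimodal search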

  8. Intelligent Multimodal Signal Adaptation System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Micro Analysis and Design (MA&D) is pleased to submit this proposal to design an Intelligent Multimodal Signal Adaptation System. This system will dynamically...

  9. Multimode siloxane polymer components for optical interconnects

    Science.gov (United States)

    Bamiedakis, Nikolaos; Beals, Joseph, IV; Penty, Richard V.; White, Ian H.; DeGroot, Jon v., Jr.; Clapp, Terry V.; De Shazer, David

    2009-02-01

    This paper presents an overview of multimode waveguides and waveguide components formed from siloxane polymer materials which are suitable for use in optical interconnection applications. The components can be cost-effectively integrated onto conventional PCBs and offer increased functionality in optical transmission. The multimode waveguides exhibit low loss (0.04 dB/cm at 850 nm) and low crosstalk, and benefit from the multimode nature of the waveguides, which allows low-loss combining (4 dB for an 8×1 device). A large range of power splitting ratios between 30% and 75% is achieved with multimode coupler devices. Examples of system applications benefiting from the use of these components are briefly presented, including a terabit capacity optical backplane, a radio-over-fibre multicasting system and an SCM passive optical network.

  10. Mediating multimodal environmental knowledge across animation techniques

    DEFF Research Database (Denmark)

    Maier, Carmen Daniela

    2011-01-01

    The growing awareness of and concern about present environmental problems generate a proliferation of new forms of environmental discourses that are mediated in various ways. This chapter explores issues related to the ways in which environmental knowledge is multimodally communicated...... in the technology-mediated discourse of short animation films. This is done by employing the model of multimodal discourse analysis proposed by Van Leeuwen (2008) and Machin and Van Leeuwen (2007) to the analysis of a series of animation films focused on environmental problems. These films are united under......://www.sustainlane.com/. The multimodal discourse analysis is meant to reveal how selection and representation of environmental knowledge about social actors, social actions, resources, time and space are influenced by animation techniques. Furthermore, in the context of this multimodal discourse analysis, their influence upon...

  11. Multi-Modal Treatment of Nocturnal Enuresis.

    Science.gov (United States)

    Mohr, Caroline; Sharpley, Christopher F.

    1988-01-01

    The article reports a multimodal treatment of nocturnal enuresis and anxious behavior in a mildly mentally retarded woman. Behavioral treatment and removal of caffeine from the subject's diet eliminated both nocturnal enuresis and anxious behavior. (Author/DB)

  12. Spatial sound in the use of multimodal interfaces for the acquisition of motor skills

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.

    2008-01-01

    This paper discusses the potential effectiveness of spatial sound in the use of multimodal interfaces and virtual environment technologies for the acquisition of motor skills. Because skills are generally of a multimodal nature, spatial sound is discussed in terms of the role that it may play in facilitating skill acquisition by complementing, or substituting, other sensory modalities. An overview of related research areas on audiovisual and audiotactile interaction is given in connection to the potential benefits of spatial sound as a means to improve the perceptual quality of the interfaces as well as to convey information considered critical for the transfer of motor skills.

  13. Engineering gestures for multimodal user interfaces

    OpenAIRE

    Echtler, Florian; Kammer, Dietrich; Vanacken, Davy; Hoste, Lode; Signer, Beat

    2014-01-01

    Despite the increased presence of gestural and multimodal user interfaces in research as well as daily life, development of such systems still mostly relies on programming concepts which have emerged from classic WIMP user interfaces. This workshop proposes to explore the gap between attempts to formalize and structure development for multimodal interfaces in the research community on the one hand and the lack of adoption of these formal languages and frameworks by practitioners and other researchers on the other hand.

  14. Esthesioneuroblastoma: Multimodal management and review of literature

    Science.gov (United States)

    Kumar, Ritesh

    2015-01-01

    Esthesioneuroblastoma (ENB) is a rare malignant neoplasm arising from the olfactory neuroepithelium. ENB constitutes only 3% of all malignant intranasal neoplasms. Because of this rarity, the number of patients with ENB treated in individual departments is small. Most of these patients present in locally advanced stages and require multimodality treatment in the form of surgery, chemotherapy and radiotherapy. A multimodality approach with a risk-adapted strategy is required to achieve good control rates while minimizing treatment-related toxicity. PMID:26380824

  15. Multimodal pain stimulation of the gastrointestinal tract

    Institute of Scientific and Technical Information of China (English)

    Asbjørn Mohr Drewes; Hans Gregersen

    2006-01-01

    Understanding and characterization of pain and other sensory symptoms are among the most important issues in the diagnosis and assessment of patients with gastrointestinal disorders. Methods to evoke and assess experimental pain have recently developed into a new area with the possibility of multimodal stimulation (e.g., electrical, mechanical, thermal and chemical stimulation) of different nerves and pain pathways in the human gut. Such methods mimic to a high degree the pain experienced in the clinic. Multimodal pain methods have increased our basic understanding of different peripheral receptors in the gut in health and disease. Together with advanced muscle analysis, the methods have increased our understanding of receptors sensitive to mechanical, chemical and temperature stimuli in diseases such as systemic sclerosis and diabetes. The methods can also be used to unravel central pain mechanisms, such as those involved in allodynia, hyperalgesia and referred pain. Abnormalities in central pain mechanisms are often seen in patients with chronic gut pain, and hence methods relying on multimodal pain stimulation may help to understand the symptoms in these patients. Sex differences have been observed in several diseases of the gut, and differences in central pain processing between males and females have been hypothesized using multimodal pain stimulation. Finally, multimodal methods have recently been used to gain more insight into the effect of drugs against pain in the GI tract. Hence, multimodal methods undoubtedly represent a major step forward in the future characterization and treatment of patients with various diseases of the gut.

  16. Empowering Prospective Teachers to Become Active Sense-Makers: Multimodal Modeling of the Seasons

    Science.gov (United States)

    Kim, Mi Song

    2015-10-01

    Situating science concepts in concrete and authentic contexts, using information and communications technologies, including multimodal modeling tools, is important for promoting the development of higher-order thinking skills in learners. However, teachers often struggle to integrate emergent multimodal models into a technology-rich informal learning environment. Our design-based research co-designs and develops engaging, immersive, and interactive informal learning activities called "Embodied Modeling-Mediated Activities" (EMMA) to support not only Singaporean learners' deep learning of astronomy but also the capacity of teachers. As part of the research on EMMA, this case study describes two prospective teachers' co-design processes involving multimodal models for teaching and learning the concept of the seasons in a technology-rich informal learning setting. Our study uncovers four prominent themes emerging from our data concerning the contextualized nature of learning and teaching involving multimodal models in informal learning contexts: (1) promoting communication and emerging questions, (2) offering affordances through limitations, (3) explaining one concept involving multiple concepts, and (4) integrating teaching and learning experiences. This study has an implication for the development of a pedagogical framework for teaching and learning in technology-enhanced learning environments—that is empowering teachers to become active sense-makers using multimodal models.

  17. Criteria of Human-computer Interface Design for Computer Assisted Surgery Systems

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jian-guo; LIN Yan-ping; WANG Cheng-tao; LIU Zhi-hong; YANG Qing-ming

    2008-01-01

    In recent years, computer assisted surgery (CAS) systems have become more and more common in clinical practice, but few specific design criteria have been proposed for the human-computer interface (HCI) in CAS systems. This paper attempts to give universal criteria for HCI design in CAS systems through the introduction of a demonstration application, total knee replacement (TKR) with a nonimage-based navigation system. A typical computer assisted process can be divided into four phases: the preoperative planning phase, the intraoperative registration phase, the intraoperative navigation phase and finally the postoperative assessment phase. The interface design for each of the four phases is described in the demonstration application. The criteria summarized in this paper can help software developers achieve reliable and effective interfaces for new CAS systems more easily.

  18. U.S. Army weapon systems human-computer interface style guide. Version 2

    Energy Technology Data Exchange (ETDEWEB)

    Avery, L.W.; O'Mara, P.A.; Shepard, A.P.; Donohoo, D.T.

    1997-12-31

    A stated goal of the US Army has been the standardization of the human-computer interfaces (HCIs) of its systems. Some of the tools being used to accomplish this standardization are HCI design guidelines and style guides. Currently, the Army is employing a number of HCI design guidance documents. While these style guides provide good guidance for the command, control, communications, computers, and intelligence (C4I) domain, they do not necessarily represent the more unique requirements of the Army's real time and near-real time (RT/NRT) weapon systems. The Office of the Director of Information for Command, Control, Communications, and Computers (DISC4), in conjunction with the Weapon Systems Technical Architecture Working Group (WSTAWG), recognized this need as part of their activities to revise the Army Technical Architecture (ATA), now termed the Joint Technical Architecture-Army (JTA-A). To address this need, DISC4 tasked the Pacific Northwest National Laboratory (PNNL) to develop an Army weapon systems unique HCI style guide, which resulted in the US Army Weapon Systems Human-Computer Interface (WSHCI) Style Guide Version 1. Based on feedback from the user community, DISC4 further tasked PNNL to revise Version 1 and publish Version 2. The intent was to update some of the research and incorporate some enhancements. This document provides that revision. The purpose of this document is to provide HCI design guidance for the RT/NRT Army system domain across the weapon systems subdomains of ground, aviation, missile, and soldier systems. Each subdomain should customize and extend this guidance by developing their domain-specific style guides, which will be used to guide the development of future systems within their subdomains.

  19. Multimodal analgesia and regional anaesthesia.

    Science.gov (United States)

    Tornero Tornero, C; Fernández Rodríguez, L E; Orduña Valls, J

    Multimodal analgesia provides quality analgesia, with fewer side effects due to the use of combined analgesics or analgesic techniques. Regional anaesthesia plays a fundamental role in achieving this goal. The different techniques of regional anaesthesia that include both peripheral and central blocks in either a single dose or in continuous infusion help to modulate the nociceptive stimuli that access the central level. The emergence of the ultrasound as an effective system to perform regional anaesthesia techniques has allowed the development of new regional anaesthesia techniques that formerly could not be carried out since only neurostimulation or skin references were used. It is essential to take into account that even with effective blocking it is advisable to associate other drugs by other routes, in this way we will be able to reduce the required doses individually and attempt to achieve a synergistic, not purely additive, effect. Copyright © 2017 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Publicado por Elsevier España, S.L.U. All rights reserved.

  20. Multimodality Management of Trigeminal Schwannomas.

    Science.gov (United States)

    Niranjan, Ajay; Barnett, Samuel; Anand, Vijay; Agazzi, Siviero

    2016-08-01

    Patients presenting with trigeminal schwannomas require multimodality management by a skull base surgical team that can offer expertise in both transcranial and transnasal approaches as well as radiosurgical and microsurgical strategies. Improvement in neurologic symptoms, preservation of cranial nerve function, and control of mass effect are the primary goals of management for trigeminal schwannomas. Complete surgical resection is the treatment of choice but may not be possible in all cases. Radiosurgery is an option as primary management for small- to moderate-sized tumors and can be used for postoperative residuals or recurrences. Planned surgical resection followed by SRS for residual tumor is an effective option for larger trigeminal schwannomas. The endoscopic resection is an excellent approach for patients with an extradural tumor or tumors isolated to the Meckel cave. A detailed analysis of a tumor and its surroundings based on high-quality imaging can help better estimate the expected outcome from each treatment. An expert skull base team should be able to provide precise counseling for each patient's situation for selecting the best option.

  1. Miniature multimode monolithic flextensional transducers.

    Science.gov (United States)

    Hladky-Hennion, Anne-Christine; Uzgur, A Erman; Markley, Douglas C; Safari, Ahmad; Cochran, Joe K; Newnham, Robert E

    2007-10-01

    Traditional flextensional transducers, classified into seven groups based on their designs, have been used extensively in the 1-100 kHz range for mine hunting, fish finding, oil exploration, and biomedical applications. In this study, a new family of small, low-cost underwater and biomedical transducers has been developed. After the fabrication of the transducers, finite-element analysis (FEA) was used extensively in order to optimize these miniature versions of high-power, low-frequency flextensional transducer designs to achieve broad bandwidth for both transmitting and receiving, engineered vibration modes, and optimized acoustic directivity patterns. Transducer topologies with various shapes, cross sections, and symmetries can be fabricated through high-volume, low-cost ceramic and metal extrusion processes. The miniaturized transducers possess resonance frequencies ranging from below 10 kHz to above 1 MHz. The symmetry and design of the transducer, poling patterns, driving and receiving electrode geometries, and driving conditions have a strong effect on the vibration modes, resonance frequencies, and radiation patterns. This paper is devoted to small, multimode flextensional transducers with active shells, which combine the advantages of small size and low-cost manufacturing with control of the shape of the acoustic radiation/receive pattern. The performance of the transducers is emphasized.

  2. Development of Principles for Multimodal Displays in Army Human-Robot Operations

    Science.gov (United States)

    2010-06-01

    ...proceedings for the past 5 years: Human Factors, Presence, Human Computer Interaction (HCI), and IEEE. 2.2.2 Coding Procedure and Inclusion Criteria... aviation and HRI professionals. Tasks predominantly included navigating platforms to targets or areas of interest, executing an action (e.g. ...) in these related streams of research to date and provides guidelines for future integrative research. From a practical perspective, this

  3. GAIML: A New Language for Verbal and Graphical Interaction in Chatbots

    Directory of Open Access Journals (Sweden)

    Roberto Pirrone

    2008-01-01

    Full Text Available Natural and intuitive interaction between users and complex systems is a crucial research topic in human-computer interaction. A major direction is the definition and implementation of systems with natural language understanding capabilities. Interaction in natural language is often performed by means of systems called chatbots. A chatbot is a conversational agent with a proper knowledge base able to interact with users. A chatbot's appearance can be very sophisticated, with 3D avatars and speech processing modules; however, the interaction between the system and the user is only performed through textual areas for inputs and replies. An interaction able to add graphical widgets to natural language could be more effective. Conversely, a graphical interaction that also involves natural language can be more comfortable for the user than one based on graphical widgets alone. In many applications multimodal communication is to be preferred when the user and the system have a tight and complex interaction. Typical examples are cultural heritage applications (intelligent museum guides, picture browsing) or systems providing the user with integrated information taken from different and heterogeneous sources, as in the case of the iGoogle™ interface. We propose to mix the two modalities (verbal and graphical) to build systems with a reconfigurable interface, which is able to change with respect to the particular application context. The result of this proposal is the Graphical Artificial Intelligence Markup Language (GAIML), an extension of AIML that allows merging both interaction modalities. In this context a suitable chatbot system called Graphbot is presented to support this language. With this language it is possible to define personalized interface patterns that are the most suitable ones in relation to the data types exchanged between the user and the system, according to the context of the dialogue.

  4. Squeezing Properties of the Generalized Multimode Squeezed States

    Institute of Scientific and Technical Information of China (English)

    SONG Tong-Qiang

    2001-01-01

    By means of the invariance of Weyl ordering under similar transformations we derive the explicit form of the generalized multimode squeezed states. Moreover, the completeness relation and the squeezing properties of the generalized multimode squeezed states are discussed.

  5. Squeezing Properties of the Generalized Multimode Squeezed States

    Institute of Scientific and Technical Information of China (English)

    SONG Tong-Qiang

    2001-01-01

    By means of the invariance of Weyl ordering under similar transformations we derive the explicit form of the generalized multimode squeezed states. Moreover, the completeness relation and the squeezing properties of the generalized multimode squeezed states are discussed.

  6. Discussion on the Visual Design Principles of the Multi-Sense Human-Computer Interface

    Institute of Scientific and Technical Information of China (English)

    肖红; 郭歌

    2012-01-01

    By expounding the concept of the multi-sense human-computer interface, this paper analyzes the trends in the visual design of interactive interfaces: they are becoming dynamic, multidimensional and integrated. On this basis, it proposes that the visual design of human-computer interfaces should follow the general principle of "keep it simple and easy to use" and, according to the functions and features of the multi-sense human-computer interface, puts forward five design principles: simplicity coexisting with aesthetic appeal, unity with diversity, ease of use with interactivity, the static with the dynamic, and the rational with the perceptual.

  7. Single LP(0,n) mode excitation in multimode fibers.

    Science.gov (United States)

    Bhatia, Nitin; Rustagi, Kailash C; John, Joseph

    2014-07-14

    We analyze the transmission of a Single mode-Multimode-Multimode (SMm) fiber structure with the aim of exciting a single radial mode in the second multimode fiber. We show that by an appropriate choice of the length of the central multimode fiber one can obtain > 90% of the total core power in a chosen mode. We also discuss methods of removing undesirable cladding and radiation modes and estimate tolerances for practical applications.
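
    The selection mechanism described in this abstract can be summarized in the generic mode-decomposition picture (this is the textbook framework, not the authors' detailed analysis): the field launched into the central multimode fiber excites LP(0,n) modes \psi_n with amplitudes c_n and propagation constants \beta_n, so that after a length L

        \[
        E(r,L) = \sum_n c_n\,\psi_n(r)\,e^{i\beta_n L},
        \qquad
        \eta_m(L) = \Bigl|\sum_n c_n\,e^{i\beta_n L}\,\langle \phi_m \mid \psi_n \rangle\Bigr|^{2},
        \]

    and the length L is chosen so that the overlap \eta_m(L) with the target mode \phi_m of the second multimode fiber is maximized (here, more than 90% of the core power).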

  8. Multimodale trafiknet i GIS (Multimodal Traffic Network in GIS)

    DEFF Research Database (Denmark)

    Kronbak, Jacob; Brems, Camilla Riff

    1996-01-01

    The report introduces the use of multi-modal traffic networks within a Geographical Information System (GIS). The necessary theory of modelling multi-modal traffic networks is reviewed and applied to the ARC/INFO GIS by an explorative example.
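
    One common way to encode such a multi-modal network outside any particular GIS is sketched below: nodes are (location, mode) pairs, mode changes are explicit transfer edges, and shortest paths are found with Dijkstra's algorithm. The locations, modes, and costs are made-up illustrations, not data from the report.

        import heapq

        # Nodes are (location, mode) pairs; edge weights are travel/transfer times in minutes.
        graph = {
            ("A", "walk"): [(("B", "walk"), 15), (("A", "bus"), 3)],  # 3 min to board the bus
            ("A", "bus"):  [(("B", "bus"), 5)],
            ("B", "bus"):  [(("B", "walk"), 2)],                      # alight: 2 min transfer
            ("B", "walk"): [(("C", "walk"), 10)],
            ("C", "walk"): [],
        }

        def shortest_path(graph, source, target):
            """Dijkstra over the multi-modal graph; returns (total cost, node sequence)."""
            queue = [(0, source, [source])]
            seen = set()
            while queue:
                cost, node, path = heapq.heappop(queue)
                if node in seen:
                    continue
                seen.add(node)
                if node == target:
                    return cost, path
                for neighbour, weight in graph.get(node, []):
                    if neighbour not in seen:
                        heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
            return float("inf"), []

        print(shortest_path(graph, ("A", "walk"), ("C", "walk")))
        # -> (20, [('A','walk'), ('A','bus'), ('B','bus'), ('B','walk'), ('C','walk')])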

  9. Self-organized instability in graded-index multimode fibre

    CERN Document Server

    Wright, Logan G; Nolan, Daniel A; Li, Ming-Jun; Christodoulides, Demetrios N; Wise, Frank W

    2016-01-01

    Multimode fibres (MMFs) are attracting interest for complex spatiotemporal dynamics, and for ultrafast fibre sources, imaging and telecommunications. This new interest is based on three key properties: their high spatiotemporal complexity (information capacity), the important role of disorder, and complex intermodal interactions. To date, phenomena in MMFs have been studied only in limiting cases where one or more of these properties can be neglected. Here we study MMFs in a regime in which all these elements are integral. We observe a spatial beam-cleaning process preceding spatiotemporal modulation instability. We show that the origin of these processes is a universal unstable attractor in graded-index MMFs. Both the self-organization of the attractor, as well as its instability, are caused by intermodal interactions characterized by cooperating disorder, nonlinearity and dissipation. The demonstration of a disorder-enhanced nonlinear process in MMF has important implications for telecommunications, and the...

  10. [Multimodal pain therapy: principles and indications].

    Science.gov (United States)

    Arnold, B; Brinkschmidt, T; Casser, H-R; Gralow, I; Irnich, D; Klimczyk, K; Müller, G; Nagel, B; Pfingsten, M; Schiltenwolf, M; Sittl, R; Söllner, W

    2009-04-01

    Multimodal pain therapy describes an integrated multidisciplinary treatment in small groups with a closely coordinated therapeutic approach. Somatic and psychotherapeutic procedures are combined with physical and psychological training programs. For chronic pain syndromes with complex somatic, psychological and social consequences, a therapeutic intensity of at least 100 hours is recommended. Under these conditions multimodal pain therapy has proven to be more effective than other kinds of treatment. If monodisciplinary and/or outpatient therapies fail, health insurance holders have a legitimate claim to this form of therapy. Medical indications are given for patients with chronic pain syndromes, but also where there is an elevated risk of chronic pain in the early stage of the disease, with the aim of delaying the process of chronification. Relative contraindications are a lack of motivation for behavioural change, severe mental disorders or psychopathologies, and addiction problems. The availability of multimodal pain treatment centers in Germany is currently insufficient.

  11. Instrumentation challenges in multi-modality imaging

    Energy Technology Data Exchange (ETDEWEB)

    Brasse, D., E-mail: david.brasse@iphc.cnrs.fr [Institut Pluridisciplinaire Hubert Curien, Université de Strasbourg, 23 rue du Loess 67037 Strasbourg (France); CNRS, UMR7178, 67037 Strasbourg (France); Boisson, F. [Institut Pluridisciplinaire Hubert Curien, Université de Strasbourg, 23 rue du Loess 67037 Strasbourg (France); CNRS, UMR7178, 67037 Strasbourg (France)

    2016-02-11

    Imaging procedures currently used in both clinical and preclinical applications are based on different physical principles and offer different performance characteristics, allowing researchers to carry out a large number of studies. However, the value of obtaining as much information as possible about the same subject is undeniable. The last two decades have thus seen the advent of a full-fledged research axis: multimodal in vivo imaging. Whether from an instrumentation point of view, for medical research, or for the development of new probes, this body of work illustrates the growing interest of the scientific community in multimodal imaging, which can be approached from different backgrounds and perspectives, from engineers' to end-users' points of view. In the present review, we discuss the multimodal imaging concept, focusing not only on PET/CT and PET/MRI instrumentation but also on recent investigations of what could become a possible future in the field.

  12. Multimodality Data Integration in Epilepsy

    Directory of Open Access Journals (Sweden)

    Otto Muzik

    2007-01-01

    Full Text Available An important goal of software development in the medical field is the design of methods which are able to integrate information obtained from various imaging and nonimaging modalities into a cohesive framework in order to understand the results of qualitatively different measurements in a larger context. Moreover, it is essential to assess the various features of the data quantitatively so that relationships in anatomical and functional domains between complementing modalities can be expressed mathematically. This paper presents a clinically feasible software environment for the quantitative assessment of the relationship among biochemical functions as assessed by PET imaging and electrophysiological parameters derived from intracranial EEG. Based on the developed software tools, quantitative results obtained from individual modalities can be merged into a data structure allowing a consistent framework for advanced data mining techniques and 3D visualization. Moreover, an effort was made to derive quantitative variables (such as the spatial proximity index, SPI) characterizing the relationship between complementing modalities on a more generic level as a prerequisite for efficient data mining strategies. We describe the implementation of this software environment in twelve children (mean age 5.2±4.3 years) with medically intractable partial epilepsy who underwent both high-resolution structural MR and functional PET imaging. Our experiments demonstrate that our approach will lead to a better understanding of the mechanisms of epileptogenesis and might ultimately have an impact on treatment. Moreover, our software environment holds promise to be useful in many other neurological disorders, where integration of multimodality data is crucial for a better understanding of the underlying disease mechanisms.

  13. Interaction with geospatial data

    OpenAIRE

    SCHOENING, Johannes

    2015-01-01

    My research interest lies at the intersection of human-computer interaction (HCI) and geoinformatics. I am interested in developing new methods and novel user interfaces to navigate through spatial information. This article will give a brief overview of my past and current research topics and streams. Generally speaking, geography is playing an increasingly important role in computer science and also in the field of HCI, ranging from social computing to natural user interfaces (NUIs). At t...

  14. Multimodal surveillance sensors, algorithms, and systems

    CERN Document Server

    Zhu, Zhigang

    2007-01-01

    From front-end sensors to systems and environmental issues, this practical resource guides you through the many facets of multimodal surveillance. The book examines thermal, vibration, video, and audio sensors in a broad context of civilian and military applications. This cutting-edge volume provides an in-depth treatment of data fusion algorithms that takes you to the core of multimodal surveillance, biometrics, and sentient computing. The book discusses such people and activity topics as tracking people and vehicles and identifying individuals by their speech.Systems designers benefit from d

  15. Detecting multimode entanglement by symplectic uncertainty relations

    CERN Document Server

    Serafini, A

    2005-01-01

    Quantities invariant under symplectic (i.e. linear and canonical) transformations are constructed as functions of the second moments of N pairs of bosonic field operators. A general multimode uncertainty relation is derived as a necessary constraint on such symplectic invariants. In turn, necessary conditions for the separability of multimode continuous variable states under (MxN)-mode bipartitions are derived from the uncertainty relation. These conditions are proven to be necessary and sufficient for (1+N)-mode Gaussian states and for (M+N)-mode bisymmetric Gaussian states.
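
    In the standard continuous-variable notation (conventions differ between papers, so this is the textbook form rather than necessarily the exact normalization used in the work above), the uncertainty relation and the separability test it implies can be summarized as

        \[
        \sigma + \tfrac{i}{2}\,\Omega \;\ge\; 0
        \;\Longleftrightarrow\;
        \nu_k \ge \tfrac{1}{2} \quad \forall k,
        \qquad
        \Omega = \bigoplus_{k=1}^{N}
        \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},
        \]

    where \sigma is the covariance matrix of the N-mode state and \nu_k are its symplectic eigenvalues. For a bipartition, a separable state must also satisfy \tilde{\sigma} + \tfrac{i}{2}\Omega \ge 0, where \tilde{\sigma} is obtained from \sigma by reversing the momenta of one subsystem (partial transposition); violation of this condition certifies entanglement.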

  16. Strategy development management of Multimodal Transport Network

    Directory of Open Access Journals (Sweden)

    Nesterova Natalia S.

    2016-01-01

    Full Text Available The article gives a brief overview of work on the development of transport infrastructure for multimodal transportation and the integration of the Russian transport system into the international transport corridors. The technology for controlling the strategy that changes the shape and capacity of the Multimodal Transport Network (MTN) is considered as part of the methodology for designing and developing the MTN. This technology allows strategic and operational management of the strategy implementation based on the use of the balanced scorecard.

  17. Latex nanoparticles for multimodal imaging and detection in vivo

    Energy Technology Data Exchange (ETDEWEB)

    Cartier, R [Clinic for Anaesthesiology and Intensive Care Medicine, Charite Campus Virchow-Klinikum, Berlin University Medical School, Berlin (Germany); Kaufner, L [Clinic for Anaesthesiology and Intensive Care Medicine, Charite Campus Virchow-Klinikum, Berlin University Medical School, Berlin (Germany); Paulke, B R [Fraunhofer Institute for Applied Polymer Research, Potsdam (Germany); Wuestneck, R [Clinic for Anaesthesiology and Intensive Care Medicine, Charite Campus Virchow-Klinikum, Berlin University Medical School, Berlin (Germany); Pietschmann, S [Clinic for Anaesthesiology and Intensive Care Medicine, Charite Campus Virchow-Klinikum, Berlin University Medical School, Berlin (Germany); Michel, R [Department of Radiology, Charite Campus Virchow-Klinikum, Berlin University Medical School, Berlin (Germany); Bruhn, H [Department of Radiology, Charite Campus Virchow-Klinikum, Berlin University Medical School, Berlin (Germany); Pison, U [Clinic for Anaesthesiology and Intensive Care Medicine, Charite Campus Virchow-Klinikum, Berlin University Medical School, Berlin (Germany)

    2007-05-16

    The aim of the present work was to develop a multimodal imaging and detection approach to study the behaviour of nanoparticles in animal studies. Highly carboxylated 144 nm-sized latex nanoparticles were labelled with {sup 68}Ga for positron emission tomography, {sup 111}In for quantitative gamma scintigraphy or Gd{sup 3+} for magnetic resonance imaging. Following intravenous injection into rats, precise localization was achieved revealing the tracer in the blood compartment with a time-dependent accumulation in the liver. In addition, rhodamine B was also incorporated to examine specific interactions with blood cells. Flow cytometry and fluorescent microscopy show uptake of nanoparticles by leucocytes and, unexpectedly, thrombocytes, but not erythrocytes. Cellular internalization was an active and selective process. Further incorporation of polyethylene glycol into the nanoparticle corona could prevent uptake by thrombocytes but not macrophages or monocytes. Our data demonstrate the feasibility of a multimodal approach and its usefulness to analyse the fate of nanoparticles at the macroscopic and cellular level. It will facilitate the development of functionalized nanocarrier systems and extend their biomedical applications.

  18. Multimodal chromatography: an efficient tool in downstream processing of proteins.

    Science.gov (United States)

    Kallberg, Kristian; Johansson, Hans-Olof; Bulow, Leif

    2012-12-01

    Chromatography has become an indispensable tool for the purification of proteins. Since the regulatory demands on protein purity are expected to become stricter, the need for generating improved resins for chromatographic separations has increased. More advanced scientific investigations of protein structure/function relationships, in particular, have also been a driving force for generating more sophisticated chromatographic materials for protein separations. As a consequence, the development of alternative chromatographic strategies has been very rapid during the past decade and several new ligands have been designed and explored both in the laboratory and in large-scale industrial settings. This review describes some of these efforts using multimodal chromatography, where two or more physicochemical properties are used to enhance the specificity of the interactions between the protein and the ligand on the chromatographic matrix. In addition to experimental studies, computer modeling of ligand-protein binding has improved the design of ligands for protein recognition. The use of descriptors as well as in silico docking methods have been implemented to design multimodal resins in several instances.

  19. The mind-writing pupil : A human-computer interface based on decoding of covert attention through pupillometry

    NARCIS (Netherlands)

    Mathôt, Sebastiaan; Melmi, Jean Baptiste; Van Der Linden, Lotje; Van Der Stigchel, Stefan

    2016-01-01

    We present a new human-computer interface that is based on decoding of attention through pupillometry. Our method builds on the recent finding that covert visual attention affects the pupillary light response: Your pupil constricts when you covertly (without looking at it) attend to a bright, compar
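
    A minimal sketch of the decoding principle is given below: each selectable item alternates between bright and dark with its own phase, and the item whose luminance cycle most strongly shrinks the pupil is taken as the attended one. The stimulus timing, noise, and correlation-based decision rule are invented for illustration and are not the published protocol.

        import numpy as np

        FS = 60.0     # assumed sampling rate of the eye tracker (Hz)
        CYCLE = 2.5   # assumed duration of one bright/dark cycle (s)

        def item_luminance(n_samples, phase):
            """Square-wave luminance of one item (1 = bright, 0 = dark), shifted by `phase` (s)."""
            t = np.arange(n_samples) / FS
            return (np.sin(2 * np.pi * (t - phase) / CYCLE) > 0).astype(float)

        def decode_attended(pupil_trace, phases):
            """Return the index of the item whose brightness most strongly constricts the pupil.

            Covert attention to a bright stimulus constricts the pupil, so the attended item
            is the one whose luminance is most negatively correlated with pupil size.
            """
            pupil = (pupil_trace - pupil_trace.mean()) / pupil_trace.std()
            scores = []
            for phase in phases:
                lum = item_luminance(len(pupil), phase)
                scores.append(np.corrcoef(pupil, lum)[0, 1])
            return int(np.argmin(scores))  # most negative correlation wins

        # Usage: two items in anti-phase; a pupil trace that shrinks when item 0 is bright.
        phases = [0.0, CYCLE / 2]
        lum0 = item_luminance(600, phases[0])
        fake_pupil = 5.0 - 0.3 * lum0 + 0.05 * np.random.default_rng(1).standard_normal(600)
        print(decode_attended(fake_pupil, phases))  # -> 0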

  20. The Mind-Writing Pupil : A Human-Computer Interface Based on Decoding of Covert Attention through Pupillometry

    NARCIS (Netherlands)

    Mathot, Sebastiaan; Melmi, Jean-Baptiste; van der Linden, Lotje; van der Stigchel, Stefan

    2016-01-01

    We present a new human-computer interface that is based on decoding of attention through pupillometry. Our method builds on the recent finding that covert visual attention affects the pupillary light response: Your pupil constricts when you covertly (without looking at it) attend to a bright, compar