WorldWideScience

Sample records for multimodal human-computer interaction

  1. Modeling multimodal human-computer interaction

    NARCIS (Netherlands)

    Obrenovic, Z.; Starcevic, D.

    2004-01-01

    Incorporating the well-known Unified Modeling Language into a generic modeling framework makes research on multimodal human-computer interaction accessible to a wide range of software engineers. Multimodal interaction is part of everyday human discourse: We speak, move, gesture, and shift our gaze

  2. Multimodal Information Presentation for High-Load Human Computer Interaction

    NARCIS (Netherlands)

    Cao, Y.

    2011-01-01

    This dissertation addresses multimodal information presentation in human computer interaction. Information presentation refers to the manner in which computer systems/interfaces present information to human users. More specifically, the focus of our work is not on which information to present, but

  3. HCI^2 Workbench: A Development Tool for Multimodal Human-Computer Interaction Systems

    NARCIS (Netherlands)

    Shen, Jie; Wenzhe, Shi; Pantic, Maja

    In this paper, we present a novel software tool designed and implemented to simplify the development process of Multimodal Human-Computer Interaction (MHCI) systems. This tool, which is called the HCI^2 Workbench, exploits a Publish / Subscribe (P/S) architecture [13] [14] to facilitate efficient

  4. A Software Framework for Multimodal Human-Computer Interaction Systems

    NARCIS (Netherlands)

    Shen, Jie; Pantic, Maja

    2009-01-01

    This paper describes a software framework we designed and implemented for the development and research in the area of multimodal human-computer interface. The proposed framework is based on publish / subscribe architecture, which allows developers and researchers to conveniently configure, test and
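
    The publish/subscribe architecture named in records 3, 4 and 7 decouples each modality module from the components that consume its output. A minimal in-process sketch of the pattern in Python follows; the broker, topic names and messages are invented for illustration and are not taken from the HCI^2 code.

        from collections import defaultdict

        class Broker:
            """Minimal in-process publish/subscribe broker."""
            def __init__(self):
                self._subscribers = defaultdict(list)

            def subscribe(self, topic, callback):
                self._subscribers[topic].append(callback)

            def publish(self, topic, message):
                for callback in self._subscribers[topic]:
                    callback(message)

        broker = Broker()
        # A fusion module subscribes to whatever modality topics it needs ...
        broker.subscribe("speech", lambda m: print("fusion received speech:", m))
        broker.subscribe("gesture", lambda m: print("fusion received gesture:", m))
        # ... and each modality module publishes without knowing its consumers,
        # which is what lets modules be added, removed or swapped independently.
        broker.publish("speech", {"text": "open menu", "confidence": 0.87})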

  5. HCIDL: Human-computer interface description language for multi-target, multimodal, plastic user interfaces

    Directory of Open Access Journals (Sweden)

    Lamia Gaouar

    2018-06-01

    From the human-computer interface perspective, the challenges to be faced are related to the consideration of new, multiple interactions, and the diversity of devices. The large panel of interactions (touching, shaking, voice dictation, positioning …) and the diversification of interaction devices can be seen as a factor of flexibility, albeit one introducing incidental complexity. Our work is part of the field of user interface description languages. After an analysis of the scientific context of our work, this paper introduces HCIDL, a modelling language staged in a model-driven engineering approach. Among the properties related to human-computer interfaces, our proposition is intended for modelling multi-target, multimodal, plastic interaction interfaces using user interface description languages. By combining plasticity and multimodality, HCIDL improves usability of user interfaces through adaptive behaviour by providing end-users with an interaction-set adapted to the input/output of terminals and an optimum layout. Keywords: Model driven engineering, Human-computer interface, User interface description languages, Multimodal applications, Plastic user interfaces

  6. Measuring Multimodal Synchrony for Human-Computer Interaction

    NARCIS (Netherlands)

    Reidsma, Dennis; Nijholt, Antinus; Tschacher, Wolfgang; Ramseyer, Fabian; Sourin, A.

    2010-01-01

    Nonverbal synchrony is an important and natural element in human-human interaction. It can also play various roles in human-computer interaction. In particular this is the case in the interaction between humans and the virtual humans that inhabit our cyberworlds. Virtual humans need to adapt their

  7. HCI^2 Framework: A software framework for multimodal human-computer interaction systems

    NARCIS (Netherlands)

    Shen, Jie; Pantic, Maja

    2013-01-01

    This paper presents a novel software framework for the development and research in the area of multimodal human-computer interface (MHCI) systems. The proposed software framework, which is called the HCI^2 Framework, is built upon publish/subscribe (P/S) architecture. It implements a

  8. Multimodal interaction in image and video applications

    CERN Document Server

    Sappa, Angel D

    2013-01-01

    Traditional Pattern Recognition (PR) and Computer Vision (CV) technologies have mainly focused on full automation, even though full automation often proves elusive or unnatural in many applications, where the technology is expected to assist rather than replace the human agents. Not all problems can be solved automatically, however, and for those applications human interaction is the only way to tackle them. Recently, multimodal human interaction has become an important field of increasing interest in the research community. Advanced man-machine interfaces with high cognitive capabilities are a hot research topic that aims at solving challenging problems in image and video applications. Actually, the idea of interactive computer systems was already proposed in the early stages of computer science. Nowadays, the ubiquity of image sensors together with ever-increasing computing performance has opened new and challenging opportunities for research in multimodal human interaction. This book aims to show how existi...

  9. Integrated multimodal human-computer interface and augmented reality for interactive display applications

    Science.gov (United States)

    Vassiliou, Marius S.; Sundareswaran, Venkataraman; Chen, S.; Behringer, Reinhold; Tam, Clement K.; Chan, M.; Bangayan, Phil T.; McGee, Joshua H.

    2000-08-01

    We describe new systems for improved integrated multimodal human-computer interaction and augmented reality for a diverse array of applications, including future advanced cockpits, tactical operations centers, and others. We have developed an integrated display system featuring: speech recognition of multiple concurrent users equipped with both standard air-coupled microphones and novel throat-coupled sensors (developed at Army Research Labs for increased noise immunity); lip reading for improving speech recognition accuracy in noisy environments; three-dimensional spatialized audio for improved display of warnings, alerts, and other information; wireless, coordinated handheld-PC control of a large display; real-time display of data and inferences from wireless integrated networked sensors with on-board signal processing and discrimination; gesture control with disambiguated point-and-speak capability; head- and eye-tracking coupled with speech recognition for 'look-and-speak' interaction; and integrated tetherless augmented reality on a wearable computer. The various interaction modalities (speech recognition, 3D audio, eyetracking, etc.) are implemented as 'modality servers' in an Internet-based client-server architecture. Each modality server encapsulates and exposes commercial and research software packages, presenting a socket network interface that is abstracted to a high-level interface, minimizing both vendor dependencies and required changes on the client side as the server's technology improves.
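
    The 'modality server' design described above wraps each recognition package behind a socket interface so that clients depend only on a high-level protocol. A minimal sketch under that assumption, with a stubbed recognizer standing in for the commercial and research packages the record mentions:

        import socketserver

        class SpeechModalityHandler(socketserver.StreamRequestHandler):
            """Wraps a (stubbed) speech recognizer behind a line-oriented socket."""
            def handle(self):
                for line in self.rfile:                       # raw input arrives here
                    text = self.recognize(line.decode().strip())
                    self.wfile.write((text + "\n").encode())  # high-level result out

            def recognize(self, utterance):
                return "RECOGNIZED: " + utterance             # stand-in for a real engine

        if __name__ == "__main__":
            # Clients talk to the socket interface, never to the engine itself,
            # so the underlying package can be swapped without client changes.
            with socketserver.TCPServer(("localhost", 9099), SpeechModalityHandler) as srv:
                srv.serve_forever()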

  10. Appearance-based human gesture recognition using multimodal features for human computer interaction

    Science.gov (United States)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a vitally important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous work has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level, where weighted decisions from the single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition and that decision-level fusion performs better than feature-level fusion.
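
    The two fusion strategies summarized above are standard; a sketch on synthetic data shows the shape of each, with illustrative weights and scikit-learn models rather than the paper's exact components:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        y = rng.integers(0, 12, 200)                    # 12 gesture classes, as in the paper
        face = rng.normal(size=(200, 30)) + y[:, None]  # synthetic facial features
        hand = rng.normal(size=(200, 50)) + y[:, None]  # synthetic hand-motion features

        # Early (feature-level) fusion: weight and concatenate the groups, then LDA.
        w_face, w_hand = 0.4, 0.6                       # illustrative modality weights
        fused = np.hstack([w_face * face, w_hand * hand])
        projected = LinearDiscriminantAnalysis(n_components=11).fit_transform(fused, y)

        # Late (decision-level) fusion: weight per-modality posteriors, pick the best.
        p_face = LogisticRegression(max_iter=1000).fit(face, y).predict_proba(face)
        p_hand = LogisticRegression(max_iter=1000).fit(hand, y).predict_proba(hand)
        decision = np.argmax(w_face * p_face + w_hand * p_hand, axis=1)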

  11. Quantifying Quality Aspects of Multimodal Interactive Systems

    CERN Document Server

    Kühnel, Christine

    2012-01-01

    This book systematically addresses the quantification of quality aspects of multimodal interactive systems. The conceptual structure is based on a schematic view on human-computer interaction where the user interacts with the system and perceives it via input and output interfaces. Thus, aspects of multimodal interaction are analyzed first, followed by a discussion of the evaluation of output and input and concluding with a view on the evaluation of a complete system.

  12. Human-computer interaction for alert warning and attention allocation systems of the multimodal watchstation

    Science.gov (United States)

    Obermayer, Richard W.; Nugent, William A.

    2000-11-01

    The SPAWAR Systems Center San Diego is currently developing an advanced Multi-Modal Watchstation (MMWS); design concepts and software from this effort are intended for transition to future United States Navy surface combatants. The MMWS features multiple flat panel displays and several modes of user interaction, including voice input and output, natural language recognition, 3D audio, stylus and gestural inputs. In 1999, an extensive literature review was conducted on basic and applied research concerned with alerting and warning systems. After summarizing that literature, a human computer interaction (HCI) designer's guide was prepared to support the design of an attention allocation subsystem (AAS) for the MMWS. The resultant HCI guidelines are being applied in the design of a fully interactive AAS prototype. An overview of key findings from the literature review, a proposed design methodology with illustrative examples, and an assessment of progress made in implementing the HCI designer's guide are presented.

  13. Multimodal Desktop Interaction: The Face –Object-Gesture–Voice Example

    DEFF Research Database (Denmark)

    Vidakis, Nikolas; Vlasopoulos, Anastasios; Kounalakis, Tsampikos

    2013-01-01

    This paper presents a natural user interface system based on multimodal human computer interaction, which operates as an intermediate module between the user and the operating system. The aim of this work is to demonstrate a multimodal system which gives users the ability to interact with desktop...

  14. Multimodal interaction for human-robot teams

    Science.gov (United States)

    Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle

    2013-05-01

    Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. In order for such autonomous systems to integrate with the team, we must move beyond current interaction methods using heads-down teleoperation which require intensive human attention and affect the human operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.
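
    The graceful degradation described here amounts to a priority rule that selects among partially redundant interaction modes given the situation. A toy version of such an arbiter, with invented context keys and thresholds:

        def select_modality(context):
            """Pick an interaction mode from partially redundant options.

            The context keys are illustrative: stealth_required, line_of_sight,
            tablet_available, ambient_noise_db.
            """
            if context.get("stealth_required"):
                return "gesture"                 # silent arm/hand gestures
            if not context.get("line_of_sight") and context.get("tablet_available"):
                return "tablet"                  # waypoint map for beyond-line-of-sight
            if context.get("ambient_noise_db", 0) < 70:
                return "voice"                   # voice commands when audible
            return "gesture"                     # wearable sensing needs no line of sight

        print(select_modality({"stealth_required": True}))                     # -> gesture
        print(select_modality({"line_of_sight": False, "tablet_available": True}))  # -> tablet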

  15. Child-Computer Interaction: ICMI 2012 special session

    NARCIS (Netherlands)

    Nijholt, Antinus; Morency, L.P.; Bohus, L.; Aghajan, H.; Nijholt, Antinus; Cassell, J.; Epps, J.

    2012-01-01

    This is a short introduction to the special session on child computer interaction at the International Conference on Multimodal Interaction 2012 (ICMI 2012). In human-computer interaction users have become participants in the design process. This is not different for child computer interaction

  16. A multimodal parallel architecture: A cognitive framework for multimodal interactions.

    Science.gov (United States)

    Cohn, Neil

    2016-01-01

    Human communication is naturally multimodal, and substantial focus has examined the semantic correspondences in speech-gesture and text-image relationships. However, visual narratives, like those in comics, provide an interesting challenge to multimodal communication because the words and/or images can guide the overall meaning, and both modalities can appear in complicated "grammatical" sequences: sentences use a syntactic structure and sequential images use a narrative structure. These dual structures create complexity beyond that typically addressed by theories of multimodality, where only a single form uses combinatorial structure, and also pose challenges for models of the linguistic system that focus on single modalities. This paper outlines a broad theoretical framework for multimodal interactions by expanding on Jackendoff's (2002) parallel architecture for language. Multimodal interactions are characterized in terms of their component cognitive structures: whether a particular modality (verbal, bodily, visual) is present, whether it uses a grammatical structure (syntax, narrative), and whether it "dominates" the semantics of the overall expression. Altogether, this approach integrates multimodal interactions into an existing framework of language and cognition, and characterizes interactions between varying complexity in the verbal, bodily, and graphic domains. The resulting theoretical model presents an expanded consideration of the boundaries of the "linguistic" system and its involvement in multimodal interactions, with a framework that can benefit research on corpus analyses, experimentation, and the educational benefits of multimodality.

  17. Reference resolution in multi-modal interaction: Preliminary observations

    NARCIS (Netherlands)

    González González, G.R.; Nijholt, Antinus

    2002-01-01

    In this paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can apply

  18. Reference Resolution in Multi-modal Interaction: Position paper

    NARCIS (Netherlands)

    Fernando, T.; Nijholt, Antinus

    2002-01-01

    In this position paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can

  19. Multimodal Desktop Interaction: The Face –Object-Gesture–Voice Example

    OpenAIRE

    Vidakis, Nikolas; Vlasopoulos, Anastasios; Kounalakis, Tsampikos; Varchalamas, Petros; Dimitriou, Michalis; Kalliatakis, Gregory; Syntychakis, Efthimios; Christofakis, John; Triantafyllidis, Georgios

    2013-01-01

    This paper presents a natural user interface system based on multimodal human computer interaction, which operates as an intermediate module between the user and the operating system. The aim of this work is to demonstrate a multimodal system which gives users the ability to interact with desktop applications using face, objects, voice and gestures. These human behaviors constitute the input qualifiers to the system. Microsoft Kinect multi-sensor was utilized as input device in order to succeed the n...

  20. From Human-Computer Interaction to Human-Robot Social Interaction

    OpenAIRE

    Toumi, Tarek; Zidani, Abdelmadjid

    2014-01-01

    Human-Robot Social Interaction has become one of the most active research fields, in which researchers from different areas propose solutions and directives that lead robots to improve their interactions with humans. In this paper we introduce work in both human-robot interaction and human-computer interaction and build a bridge between them, i.e. we integrate the robot's emotions and capabilities concepts into a human-computer model so that it becomes adequate for human-robot interaction, and discuss chall...

  1. Exploring the requirements for multimodal interaction for mobile devices in an end-to-end journey context.

    Science.gov (United States)

    Krehl, Claudia; Sharples, Sarah

    2012-01-01

    The paper investigates the requirements for multimodal interaction on mobile devices in an end-to-end journey context. Traditional interfaces are deemed cumbersome and inefficient for exchanging information with the user. Multimodal interaction provides a different, user-centred approach allowing for more natural and intuitive interaction between humans and computers. It is especially suitable for mobile interaction as it can overcome additional constraints including small screens, awkward keypads, and continuously changing settings - an inherent property of mobility. This paper is based on end-to-end journeys, where users encounter several contexts during their journeys. Interviews and focus groups explore the requirements for multimodal interaction design for mobile devices by examining journey stages and identifying the users' information needs and sources. Findings suggest that multimodal communication is crucial when users multitask. The choice of suitable modalities depends on user context, characteristics and tasks.

  2. Toward Multimodal Human-Robot Interaction to Enhance Active Participation of Users in Gait Rehabilitation.

    Science.gov (United States)

    Gui, Kai; Liu, Honghai; Zhang, Dingguo

    2017-11-01

    Robotic exoskeletons for physical rehabilitation have been utilized for retraining patients suffering from paraplegia and enhancing motor recovery in recent years. However, users are not voluntarily involved in most systems. This paper aims to develop a locomotion trainer with multiple gait patterns, which can be controlled by the active motion intention of users. A multimodal human-robot interaction (HRI) system is established to enhance subjects' active participation during gait rehabilitation, which includes cognitive HRI (cHRI) and physical HRI (pHRI). The cHRI adopts a brain-computer interface based on steady-state visual evoked potential. The pHRI is realized via admittance control based on electromyography. A central pattern generator is utilized to produce rhythmic and continuous lower joint trajectories, and its state variables are regulated by the cHRI and pHRI. A custom-made leg exoskeleton prototype with the proposed multimodal HRI is tested on healthy subjects and stroke patients. The results show that voluntary and active participation can be effectively involved to achieve various assistive gait patterns.
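
    A central pattern generator of the kind used here is commonly implemented as a limit-cycle oscillator whose state variables the cHRI and pHRI channels modulate. The Hopf oscillator below is one conventional choice, sketched with illustrative parameters rather than the paper's actual formulation:

        import numpy as np

        def hopf_cpg(mu=1.0, omega=2 * np.pi * 0.5, alpha=5.0, dt=0.001, T=10.0):
            """Integrate a Hopf oscillator; sqrt(mu) sets amplitude, omega frequency.

            In a system like the one described, the BCI (cHRI) would switch gait
            patterns by changing mu/omega, while EMG-driven admittance control
            (pHRI) would perturb the state variables x, y online.
            """
            x, y = 0.1, 0.0
            traj = []
            for _ in range(int(T / dt)):
                r2 = x * x + y * y
                dx = alpha * (mu - r2) * x - omega * y
                dy = alpha * (mu - r2) * y + omega * x
                x, y = x + dx * dt, y + dy * dt
                traj.append(x)                   # x drives a joint-angle offset
            return np.array(traj)

        knee = 0.3 * hopf_cpg() + 0.5            # rhythmic, continuous trajectory (rad)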

  3. A Multimodal Interaction Framework for Blended Learning

    DEFF Research Database (Denmark)

    Vidakis, Nikolaos; Kalafatis, Konstantinos; Triantafyllidis, Georgios

    2016-01-01

    Humans interact with each other by utilizing the five basic senses as input modalities, whereas sounds, gestures, facial expressions etc. are utilized as output modalities. Multimodal interaction is also used between humans and their surrounding environment, although enhanced with further senses ...... framework enabling deployment of a vast variety of modalities, tailored appropriately for use in blended learning environment....

  4. Occupational stress in human computer interaction.

    Science.gov (United States)

    Smith, M J; Conway, F T; Karsh, B T

    1999-04-01

    There have been a variety of research approaches that have examined the stress issues related to human computer interaction including laboratory studies, cross-sectional surveys, longitudinal case studies and intervention studies. A critical review of these studies indicates that there are important physiological, biochemical, somatic and psychological indicators of stress that are related to work activities where human computer interaction occurs. Many of the stressors of human computer interaction at work are similar to those stressors that have historically been observed in other automated jobs. These include high workload, high work pressure, diminished job control, inadequate employee training to use new technology, monotonous tasks, poor supervisory relations, and fear for job security. New stressors have emerged that can be tied primarily to human computer interaction. These include technology breakdowns, technology slowdowns, and electronic performance monitoring. The effects of the stress of human computer interaction in the workplace are increased physiological arousal; somatic complaints, especially of the musculoskeletal system; mood disturbances, particularly anxiety, fear and anger; and diminished quality of working life, such as reduced job satisfaction. Interventions to reduce the stress of computer technology have included improved technology implementation approaches and increased employee participation in implementation. Recommendations for ways to reduce the stress of human computer interaction at work are presented. These include proper ergonomic conditions, increased organizational support, improved job content, proper workload to decrease work pressure, and enhanced opportunities for social support. A model approach to the design of human computer interaction at work that focuses on the system "balance" is proposed.

  5. International workshop on multimodal analyses enabling artificial agents in human-machine interaction (workshop summary)

    NARCIS (Netherlands)

    Böck, Ronald; Bonin, Francesca; Campbell, Nick; Poppe, R.W.

    2016-01-01

    In this paper, a brief overview of the third workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction is given. The paper focuses on the main aspects intended to be discussed in the workshop, reflecting the main scope of the papers presented during the meeting. The MA3HMI

  6. Human-computer interaction : Guidelines for web animation

    OpenAIRE

    Galyani Moghaddam, Golnessa; Moballeghi, Mostafa

    2006-01-01

    Human-computer interaction in the large is an interdisciplinary area which attracts researchers, educators, and practitioners from many different fields. Human-computer interaction studies a human and a machine in communication; it draws from supporting knowledge on both the machine and the human side. This paper is related to the human side of human-computer interaction and focuses on animations. The growing use of animation in Web pages testifies to the increasing ease with which such multim...

  7. Perceptually-Inspired Computing

    Directory of Open Access Journals (Sweden)

    Ming Lin

    2015-08-01

    Human sensory systems allow individuals to see, hear, touch, and interact with the surrounding physical environment. Understanding human perception and its limits enables us to better exploit the psychophysics of human perceptual systems to design more efficient, adaptive algorithms and develop perceptually-inspired computational models. In this talk, I will survey some recent efforts on perceptually-inspired computing with applications to crowd simulation and multimodal interaction. In particular, I will present data-driven personality modeling based on the results of user studies, example-guided physics-based sound synthesis using auditory perception, as well as perceptually-inspired simplification for multimodal interaction. These perceptually guided principles can be used to accelerate multi-modal interaction and visual computing, thereby creating more natural human-computer interaction and providing more immersive experiences. I will also present their use in interactive applications for entertainment, such as video games, computer animation, and shared social experience. I will conclude by discussing possible future research directions.

  8. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.

    Science.gov (United States)

    Xu, Tian Linger; Zhang, Hui; Yu, Chen

    2016-05-01

    We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.

  9. Fundamentals of human-computer interaction

    CERN Document Server

    Monk, Andrew F

    1985-01-01

    Fundamentals of Human-Computer Interaction aims to sensitize the systems designer to the problems faced by the user of an interactive system. The book grew out of a course entitled "The User Interface: Human Factors for Computer-based Systems" which has been run annually at the University of York since 1981. This course has been attended primarily by systems managers from the computer industry. The book is organized into three parts. Part One focuses on the user as processor of information with studies on visual perception; extracting information from printed and electronically presented

  10. Language evolution and human-computer interaction

    Science.gov (United States)

    Grudin, Jonathan; Norman, Donald A.

    1991-01-01

    Many of the issues that confront designers of interactive computer systems also appear in natural language evolution. Natural languages and human-computer interfaces share as their primary mission the support of extended 'dialogues' between responsive entities. Because in each case one participant is a human being, some of the pressures operating on natural languages, causing them to evolve in order to better support such dialogue, also operate on human-computer 'languages' or interfaces. This does not necessarily push interfaces in the direction of natural language - since one entity in this dialogue is not a human, this is not to be expected. Nonetheless, by discerning where the pressures that guide natural language evolution also appear in human-computer interaction, we can contribute to the design of computer systems and obtain a new perspective on natural languages.

  11. Facial Emotion Recognition Using Context Based Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Priya Metri

    2011-12-01

    Emotions play a crucial role in person-to-person interaction. In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions is desirable for the computer in several applications, especially by observing facial expressions. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user's emotional expressions. We present an approach to emotion recognition from facial expression, hand and body posture. Our model uses a multimodal emotion recognition system in which two different models, one for facial expression recognition and one for hand and body posture recognition, are combined, with the results of both classifiers fused by a third classifier which gives the resulting emotion. The multimodal system gives more accurate results than a unimodal or bimodal system.
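
    The combination scheme in this record, two per-channel models fused by a third classifier, is essentially stacking. A compact sketch on synthetic data, with generic scikit-learn models standing in for the paper's unnamed classifiers:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        y = rng.integers(0, 6, 300)                         # six emotion classes (invented)
        face = rng.normal(size=(300, 40)) + y[:, None]      # facial-expression features
        posture = rng.normal(size=(300, 20)) + y[:, None]   # hand/body posture features

        # One classifier per channel ...
        clf_face = RandomForestClassifier(random_state=0).fit(face, y)
        clf_pose = RandomForestClassifier(random_state=0).fit(posture, y)

        # ... and a third classifier fuses their posteriors (in practice the
        # meta-model should be trained on held-out predictions to avoid overfitting).
        meta_X = np.hstack([clf_face.predict_proba(face),
                            clf_pose.predict_proba(posture)])
        emotion = LogisticRegression(max_iter=1000).fit(meta_X, y).predict(meta_X)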

  12. Multimodal human-machine interaction for service robots in home-care environments

    OpenAIRE

    Goetze, Stefan; Fischer, S.; Moritz, Niko; Appell, Jens-E.; Wallhoff, Frank

    2012-01-01

    This contribution focuses on multimodal interaction techniques for a mobile communication and assistance system on a robot platform. The system comprises acoustic, visual and haptic input modalities. Feedback is given to the user by a graphical user interface and a speech synthesis system. By this, multimodal and natural communication with the robot system is possible.

  13. Computer-aided psychotherapy based on multimodal elicitation, estimation and regulation of emotion.

    Science.gov (United States)

    Cosić, Krešimir; Popović, Siniša; Horvat, Marko; Kukolja, Davor; Dropuljić, Branimir; Kovač, Bernard; Jakovljević, Miro

    2013-09-01

    Contemporary psychiatry is looking at affective sciences to understand human behavior, cognition and the mind in health and disease. Since it has been recognized that emotions have a pivotal role for the human mind, an ever increasing number of laboratories and research centers are interested in affective sciences, affective neuroscience, affective psychology and affective psychopathology. Therefore, this paper presents multidisciplinary research results of the Laboratory for Interactive Simulation System at the Faculty of Electrical Engineering and Computing, University of Zagreb, on stress resilience. A patient's distortion in emotional processing of multimodal input stimuli is predominantly a consequence of his or her cognitive deficit, which is a result of an individual mental health disorder. These emotional distortions in the patient's multimodal physiological, facial, acoustic, and linguistic features related to the presented stimulation can be used as an indicator of the patient's mental illness. Real-time processing and analysis of the patient's multimodal response related to annotated input stimuli is based on appropriate machine learning methods from computer science. Comprehensive longitudinal multimodal analysis of the patient's emotion, mood, feelings, attention, motivation, decision-making, and working memory in synchronization with multimodal stimuli provides an extremely valuable database for data mining, machine learning and machine reasoning. The presented multimedia stimuli sequence includes personalized images, movies and sounds, as well as semantically congruent narratives. Simultaneously with stimuli presentation, the patient provides subjective emotional ratings of the presented stimuli in terms of subjective units of discomfort/distress, discrete emotions, or valence and arousal. These subjective emotional ratings of input stimuli and the corresponding physiological, speech, and facial output features provide enough information for evaluation of the patient's cognitive appraisal deficit

  14. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction

    Science.gov (United States)

    XU, TIAN (LINGER); ZHANG, HUI; YU, CHEN

    2016-01-01

    We focus on a fundamental looking behavior in human-robot interactions – gazing at each other’s face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user’s face as a response to the human’s gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot’s gaze toward the human partner’s face in real time and then analyzed the human’s gaze behavior as a response to the robot’s gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot’s face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained. PMID:28966875

  15. Human-computer interaction and management information systems

    CERN Document Server

    Galletta, Dennis F

    2014-01-01

    ""Human-Computer Interaction and Management Information Systems: Applications"" offers state-of-the-art research by a distinguished set of authors who span the MIS and HCI fields. The original chapters provide authoritative commentaries and in-depth descriptions of research programs that will guide 21st century scholars, graduate students, and industry professionals. Human-Computer Interaction (or Human Factors) in MIS is concerned with the ways humans interact with information, technologies, and tasks, especially in business, managerial, organizational, and cultural contexts. It is distinctiv

  16. Multimodal and ubiquitous computing systems: supporting independent-living older users.

    Science.gov (United States)

    Perry, Mark; Dowdall, Alan; Lines, Lorna; Hone, Kate

    2004-09-01

    We document the rationale and design of a multimodal interface to a pervasive/ubiquitous computing system that supports independent living by older people in their own homes. The Millennium Home system involves fitting a resident's home with sensors--these sensors can be used to trigger sequences of interaction with the resident to warn them about dangerous events, or to check if they need external help. We draw lessons from the design process and conclude the paper with implications for the design of multimodal interfaces to ubiquitous systems developed for the elderly and in healthcare, as well as for more general ubiquitous computing applications.
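
    The sensor-triggered interaction sequences described above follow an escalation pattern: prompt the resident, wait for confirmation, and summon help only when every prompt goes unanswered. A schematic sketch of that control flow, with invented prompt names:

        def escalate(prompts, resident_ok, alert_caregiver):
            """Run a sensor-triggered interaction sequence, escalating on no response.

            prompts: ordered interaction attempts, e.g. ["voice prompt", "alarm + screen"].
            resident_ok: callable(prompt) -> True if the resident confirms all is well.
            alert_caregiver: called when every prompt goes unanswered.
            """
            for prompt in prompts:
                if resident_ok(prompt):
                    return "resolved at: " + prompt
            alert_caregiver()                    # no response: request external help
            return "external help requested"

        # Example: the resident answers only the second, louder prompt.
        print(escalate(["voice prompt", "alarm + screen"],
                       lambda p: p == "alarm + screen",
                       lambda: print("calling caregiver")))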

  17. Humans, computers and wizards human (simulated) computer interaction

    CERN Document Server

    Fraser, Norman; McGlashan, Scott; Wooffitt, Robin

    2013-01-01

    Using data taken from a major European Union funded project on speech understanding, the SunDial project, this book considers current perspectives on human computer interaction and argues for the value of an approach taken from sociology which is based on conversation analysis.

  18. Timing of Multimodal Robot Behaviors during Human-Robot Collaboration

    DEFF Research Database (Denmark)

    Jensen, Lars Christian; Fischer, Kerstin; Suvei, Stefan-Daniel

    2017-01-01

    In this paper, we address issues of timing between robot behaviors in multimodal human-robot interaction. In particular, we study what effects sequential order and simultaneity of robot arm and body movement and verbal behavior have on the fluency of interactions. In a study with the Care-O-bot, ...... output plays a special role because participants carry their expectations from human verbal interaction into the interactions with robots....

  19. Minimal mobile human computer interaction

    NARCIS (Netherlands)

    el Ali, A.

    2013-01-01

    In the last 20 years, the widespread adoption of personal, mobile computing devices in everyday life, has allowed entry into a new technological era in Human Computer Interaction (HCI). The constant change of the physical and social context in a user's situation made possible by the portability of

  20. MIDA - Optimizing control room performance through multi-modal design

    International Nuclear Information System (INIS)

    Ronan, A. M.

    2006-01-01

    Multi-modal interfaces can support the integration of humans with information processing systems and computational devices to maximize the unique qualities that comprise a complex system. In a dynamic environment, such as a nuclear power plant control room, multi-modal interfaces, if designed correctly, can provide complementary interaction between the human operator and the system which can improve overall performance while reducing human error. Developing such interfaces can be difficult for a designer without explicit knowledge of Human Factors Engineering principles. The Multi-modal Interface Design Advisor (MIDA) was developed as a support tool for system designers and developers. It provides design recommendations based upon a combination of Human Factors principles, a knowledge base of historical research, and current interface technologies. MIDA's primary objective is to optimize available multi-modal technologies within a human computer interface in order to balance operator workload with efficient operator performance. The purpose of this paper is to demonstrate MIDA and illustrate its value as a design evaluation tool within the nuclear power industry. (authors)

  1. A new strategic neurosurgical planning tool for brainstem cavernous malformations using interactive computer graphics with multimodal fusion images.

    Science.gov (United States)

    Kin, Taichi; Nakatomi, Hirofumi; Shojima, Masaaki; Tanaka, Minoru; Ino, Kenji; Mori, Harushi; Kunimatsu, Akira; Oyama, Hiroshi; Saito, Nobuhito

    2012-07-01

    In this study, the authors used preoperative simulation employing 3D computer graphics (interactive computer graphics) to fuse all imaging data for brainstem cavernous malformations. The authors evaluated whether interactive computer graphics or 2D imaging correlated better with the actual operative field, particularly in identifying a developmental venous anomaly (DVA). The study population consisted of 10 patients scheduled for surgical treatment of brainstem cavernous malformations. Data from preoperative imaging (MRI, CT, and 3D rotational angiography) were automatically fused using a normalized mutual information method, and then reconstructed by a hybrid method combining surface rendering and volume rendering methods. With surface rendering, multimodality and multithreshold techniques for 1 tissue were applied. The completed interactive computer graphics were used for simulation of surgical approaches and assumed surgical fields. Preoperative diagnostic rates for a DVA associated with brainstem cavernous malformation were compared between conventional 2D imaging and interactive computer graphics employing receiver operating characteristic (ROC) analysis. The time required for reconstruction of 3D images was 3-6 hours for interactive computer graphics. Observation in interactive mode required approximately 15 minutes. Detailed anatomical information for operative procedures, from the craniotomy to microsurgical operations, could be visualized and simulated three-dimensionally as 1 computer graphic using interactive computer graphics. Virtual surgical views were consistent with actual operative views. This technique was very useful for examining various surgical approaches. Mean (±SEM) area under the ROC curve for rate of DVA diagnosis was significantly better for interactive computer graphics (1.000±0.000) than for 2D imaging (0.766±0.091); DVAs were identified more readily with interactive computer graphics than with 2D images. Interactive computer graphics was also useful in helping to plan the surgical
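
    The ROC comparison reported here is a standard computation once each case has a ground-truth label and a reader confidence rating. A sketch with made-up ratings, not the study's data:

        from sklearn.metrics import roc_auc_score

        has_dva = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]      # ground truth per patient (invented)
        rating_icg = [5, 5, 1, 4, 1, 2, 5, 1, 5, 1]   # reader confidence, interactive CG
        rating_2d = [4, 2, 2, 5, 1, 3, 3, 2, 4, 1]    # reader confidence, 2D imaging

        # Higher AUC means better discrimination of DVA-positive from DVA-negative cases.
        print(roc_auc_score(has_dva, rating_icg))
        print(roc_auc_score(has_dva, rating_2d))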

  2. The Past, Present and Future of Human Computer Interaction

    KAUST Repository

    Churchill, Elizabeth

    2018-01-16

    Human Computer Interaction (HCI) focuses on how people interact with, and are transformed by, computation. Our current technology landscape is changing rapidly. Interactive applications, devices and services are increasingly becoming embedded into our environments, from our homes to the urban and rural spaces we traverse every day. We are increasingly able to, and often required to, manage and configure multiple, interconnected devices and program their interactions. Artificial intelligence (AI) techniques are being used to create dynamic services that learn about us and others, that draw conclusions about our intents and affiliations, and that mould our digital interactions based on predictions about our actions and needs, nudging us toward certain behaviors. Computation is also increasingly embedded into our bodies. Understanding human interactions in this everyday digital and physical context is the theme of the talk: Elizabeth Churchill, Director of User Experience at Google, will talk about how an emerging landscape invites us to revisit old methods and tactics for understanding how people interact with computers and computation, and how it challenges us to think about new methods and frameworks for understanding the future of human-centered computation.

  3. Proxemics in Human-Computer Interaction

    OpenAIRE

    Greenberg, Saul; Hornbæk, Kasper; Quigley, Aaron; Reiterer, Harald; Rädle, Roman

    2014-01-01

    In 1966, anthropologist Edward Hall coined the term "proxemics." Proxemics is an area of study that identifies the culturally dependent ways in which people use interpersonal distance to understand and mediate their interactions with others. Recent research has demonstrated the use of proxemics in human-computer interaction (HCI) for supporting users' explicit and implicit interactions in a range of uses, including remote office collaboration, home entertainment, and games. One promise of pro...

  4. The epistemology and ontology of human-computer interaction

    NARCIS (Netherlands)

    Brey, Philip A.E.

    2005-01-01

    This paper analyzes epistemological and ontological dimensions of Human-Computer Interaction (HCI) through an analysis of the functions of computer systems in relation to their users. It is argued that the primary relation between humans and computer systems has historically been epistemic:

  5. Multimodal approaches for emotion recognition: a survey

    Science.gov (United States)

    Sebe, Nicu; Cohen, Ira; Gevers, Theo; Huang, Thomas S.

    2005-01-01

    Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing: emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user's emotional and attentional expressions. We present the basic research in the field and the recent advances into the emotion recognition from facial, voice, and physiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and we advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.

  6. Human-Computer Interaction The Agency Perspective

    CERN Document Server

    Oliveira, José

    2012-01-01

    Agent-centric theories, approaches and technologies are contributing to enriching interactions between users and computers. This book aims at highlighting the influence of the agency perspective in Human-Computer Interaction through a careful selection of research contributions. Split into five sections - Users as Agents, Agents and Accessibility, Agents and Interactions, Agent-centric Paradigms and Approaches, and Collective Agents - the book covers a wealth of novel, original and fully updated material, offering: coherent, in-depth, and timely material on the agency perspective in HCI; an authoritative treatment of the subject matter presented by carefully selected authors; balanced and broad coverage of the subject area, including human, organizational, and social, as well as technological concerns; and hands-on experience through representative case studies and essential design guidelines. The book will appeal to a broad audience of resea...

  7. Using Noninvasive Wearable Computers to Recognize Human Emotions from Physiological Signals

    Directory of Open Access Journals (Sweden)

    Nasoz Fatma

    2004-01-01

    We discuss the strong relationship between affect and cognition and the importance of emotions in multimodal human computer interaction (HCI) and user modeling. We introduce the overall paradigm for our multimodal system that aims at recognizing its users' emotions and at responding to them accordingly depending upon the current context or application. We then describe the design of the emotion elicitation experiment we conducted by collecting, via wearable computers, physiological signals from the autonomic nervous system (galvanic skin response, heart rate, temperature) and mapping them to certain emotions (sadness, anger, fear, surprise, frustration, and amusement). We show the results of three different supervised learning algorithms that categorize these collected signals in terms of emotions, and generalize their learning to recognize emotions from new collections of signals. We finally discuss possible broader impact and potential applications of emotion recognition for multimodal intelligent systems.
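
    The pipeline described, mapping galvanic skin response, heart rate and temperature to discrete emotions with supervised learning, can be sketched as follows; the data are synthetic and kNN merely stands in for the three algorithms the authors compared:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(2)
        emotions = rng.integers(0, 6, 120)   # sadness..amusement coded 0-5 (synthetic)
        # One row per elicitation trial: mean GSR, mean heart rate, mean skin temperature.
        signals = rng.normal(size=(120, 3)) + emotions[:, None] * 0.5

        model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
        # Cross-validation approximates the "generalize to new signals" claim.
        print(cross_val_score(model, signals, emotions, cv=5).mean())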

  8. Benefits of Subliminal Feedback Loops in Human-Computer Interaction

    OpenAIRE

    Walter Ritter

    2011-01-01

    A lot of efforts have been directed to enriching human-computer interaction to make the user experience more pleasing or efficient. In this paper, we briefly present work in the fields of subliminal perception and affective computing, before we outline a new approach to add analog communication channels to the human-computer interaction experience. In this approach, in addition to symbolic predefined mappings of input to output, a subliminal feedback loop is used that provides feedback in evo...

  9. Human-computer systems interaction backgrounds and applications 3

    CERN Document Server

    Kulikowski, Juliusz; Mroczek, Teresa; Wtorek, Jerzy

    2014-01-01

    This book contains an interesting and state-of-the-art collection of papers on recent progress in Human-Computer System Interaction (H-CSI). It contributes a profound description of the current status of the H-CSI field and also provides a solid base for further development and research in the discussed area. The contents of the book are divided into the following parts: I. General human-system interaction problems; II. Health monitoring and disabled people helping systems; and III. Various information processing systems. This book is intended for a wide audience of readers who are not necessarily experts in computer science, machine learning or knowledge engineering, but are interested in Human-Computer Systems Interaction. The level of the particular papers and their distribution across the parts make this volume fascinating reading, giving the reader a much deeper insight than he/she might glean from research papers or talks at conferences. It touches on all deep issues that ...

  10. Human-Computer Interaction in Smart Environments

    Science.gov (United States)

    Paravati, Gianluca; Gatteschi, Valentina

    2015-01-01

    Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues.

  11. Introduction to human-computer interaction

    CERN Document Server

    Booth, Paul

    2014-01-01

    Originally published in 1989 this title provided a comprehensive and authoritative introduction to the burgeoning discipline of human-computer interaction for students, academics, and those from industry who wished to know more about the subject. Assuming very little knowledge, the book provides an overview of the diverse research areas that were at the time only gradually building into a coherent and well-structured field. It aims to explain the underlying causes of the cognitive, social and organizational problems typically encountered when computer systems are introduced. It is clear and co

  12. Human-Computer Interaction in Smart Environments

    Directory of Open Access Journals (Sweden)

    Gianluca Paravati

    2015-08-01

    Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues.

  13. From Annotated Multimodal Corpora to Simulated Human-Like Behaviors

    DEFF Research Database (Denmark)

    Rehm, Matthias; André, Elisabeth

    2008-01-01

    Multimodal corpora prove useful at different stages of the development process of embodied conversational agents. Insights into human-human communicative behaviors can be drawn from such corpora. Rules for planning and generating such behavior in agents can be derived from this information....... And even the evaluation of human-agent interactions can rely on corpus data from human-human communication. In this paper, we exemplify how corpora can be exploited at the different development steps, starting with the question of how corpora are annotated and on what level of granularity. The corpus data...

  14. Humor in Human-Computer Interaction : A Short Survey

    NARCIS (Netherlands)

    Nijholt, Anton; Niculescu, Andreea; Valitutti, Alessandro; Banchs, Rafael E.; Joshi, Anirudha; Balkrishan, Devanuj K.; Dalvi, Girish; Winckler, Marco

    2017-01-01

    This paper is a short survey on humor in human-computer interaction. It describes how humor is designed and interacted with in social media, virtual agents, social robots and smart environments. Benefits and future use of humor in interactions with artificial entities are discussed based on

  15. A Multimodal Emotion Detection System during Human-Robot Interaction

    Science.gov (United States)

    Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.

    2013-01-01

    In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: the voice and face expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system, Gender and Emotion Facial Analysis (GEFA), has been also developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to get a greater satisfaction degree during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving the results given by the two information channels (audio and visual) separately. PMID:24240598
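
    The abstract does not spell out the final decision rule that merges GEVA's and GEFA's outputs; one plausible confidence-based rule is sketched below, purely as an illustration of how such audio and visual results can be combined:

        def fuse_emotion(voice, face):
            """Combine (emotion, confidence) pairs from the audio and visual channels.

            Illustrative rule only; the paper's actual rule was set experimentally.
            """
            v_emotion, v_conf = voice
            f_emotion, f_conf = face
            if v_emotion == f_emotion:           # agreement: boost overall confidence
                return v_emotion, min(1.0, v_conf + f_conf)
            if abs(v_conf - f_conf) < 0.1:       # near-tie between channels: back off
                return "neutral", max(v_conf, f_conf)
            return (v_emotion, v_conf) if v_conf > f_conf else (f_emotion, f_conf)

        print(fuse_emotion(("joy", 0.8), ("joy", 0.6)))    # -> ('joy', 1.0)
        print(fuse_emotion(("anger", 0.7), ("sad", 0.4)))  # -> ('anger', 0.7)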

  16. Mobile human-computer interaction perspective on mobile learning

    CSIR Research Space (South Africa)

    Botha, Adèle

    2010-10-01

    Applying a Mobile Human Computer Interaction (MHCI) view to the domain of education using Mobile Learning (Mlearning), the research outlines its understanding of the influences and effects of different interactions on the use of mobile technology...

  17. Eyeblink Synchrony in Multimodal Human-Android Interaction.

    Science.gov (United States)

    Tatsukawa, Kyohei; Nakano, Tamami; Ishiguro, Hiroshi; Yoshikawa, Yuichiro

    2016-12-23

    As a result of recent progress in communication robot technology, robots are becoming important social partners for humans. Behavioral synchrony is understood as an important factor in establishing good human-robot relationships. In this study, we hypothesized that biasing a human's attitude toward a robot changes the degree of synchrony between human and robot. We first examined whether eyeblinks were synchronized between a human and an android in face-to-face interaction and found that human listeners' eyeblinks were entrained to android speakers' eyeblinks. This eyeblink synchrony disappeared when the android speaker spoke while looking away from the human listeners but was enhanced when the human participants listened to the speaking android while touching the android's hand. These results suggest that eyeblink synchrony reflects a qualitative state in human-robot interactions.
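
    Eyeblink entrainment of the sort reported here is often quantified by counting listener blinks that fall inside a short window after each speaker blink and comparing against a shuffled baseline; the window length and blink times below are invented:

        import numpy as np

        def entrained_fraction(speaker_blinks, listener_blinks, window=0.5):
            """Fraction of speaker blinks followed by a listener blink within `window` s."""
            listener = np.asarray(listener_blinks)
            hits = sum(np.any((listener > t) & (listener <= t + window))
                       for t in speaker_blinks)
            return hits / len(speaker_blinks)

        speaker = [1.2, 3.8, 6.1, 9.4]     # blink onset times in seconds (made up)
        listener = [1.5, 4.0, 7.9, 9.6]
        # Compare against the same statistic on shuffled blink times to test for chance.
        print(entrained_fraction(speaker, listener))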

  18. Human-Computer Interaction and Information Management Research Needs

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — In a visionary future, Human-Computer Interaction HCI and Information Management IM have the potential to enable humans to better manage their lives through the use...

  19. Multimodal Challenge: Analytics Beyond User-computer Interaction Data

    NARCIS (Netherlands)

    Di Mitri, Daniele; Schneider, Jan; Specht, Marcus; Drachsler, Hendrik

    2018-01-01

    This contribution describes one of the challenges explored in the Fourth LAK Hackathon. This challenge aims at shifting the focus from learning situations that can be easily traced through user-computer interaction data and concentrating more on user-world interaction events, typical of co-located and

  20. An audio-visual dataset of human-human interactions in stressful situations

    NARCIS (Netherlands)

    Lefter, I.; Burghouts, G.J.; Rothkrantz, L.J.M.

    2014-01-01

    Stressful situations are likely to occur at human-operated service desks, as well as at human-computer interfaces used in the public domain. Automatic surveillance can help notify when extra assistance is needed. Human communication is inherently multimodal, e.g. speech, gestures, facial expressions.

  1. Human-computer interaction handbook fundamentals, evolving technologies and emerging applications

    CERN Document Server

    Sears, Andrew

    2007-01-01

    This second edition of The Human-Computer Interaction Handbook provides an updated, comprehensive overview of the most important research in the field, including insights that are directly applicable throughout the process of developing effective interactive information technologies. It features cutting-edge advances to the scientific knowledge base, as well as visionary perspectives and developments that fundamentally transform the way in which researchers and practitioners view the discipline. As the seminal volume of HCI research and practice, The Human-Computer Interaction Handbook feature

  2. Choice of Human-Computer Interaction Mode in Stroke Rehabilitation.

    Science.gov (United States)

    Mousavi Hondori, Hossein; Khademi, Maryam; Dodakian, Lucy; McKenzie, Alison; Lopes, Cristina V; Cramer, Steven C

    2016-03-01

    Advances in technology are providing new forms of human-computer interaction. The current study examined one form of human-computer interaction, augmented reality (AR), whereby subjects train in the real-world workspace with virtual objects projected by the computer. Motor performances were compared with those obtained while subjects used a traditional human-computer interaction, that is, a personal computer (PC) with a mouse. Patients used goal-directed arm movements to play AR and PC versions of the Fruit Ninja video game. The 2 versions required the same arm movements to control the game but had different cognitive demands. With AR, the game was projected onto the desktop, where subjects viewed the game plus their arm movements simultaneously, in the same visual coordinate space. In the PC version, subjects used the same arm movements but viewed the game by looking up at a computer monitor. Among 18 patients with chronic hemiparesis after stroke, the AR game was associated with 21% higher game scores (P = .0001), 19% faster reaching times (P = .0001), and 15% less movement variability (P = .0068), as compared to the PC game. Correlations between game score and arm motor status were stronger with the AR version. Motor performances during the AR game were superior to those during the PC game. This result is due in part to the greater cognitive demands imposed by the PC game, a feature problematic for some patients but clinically useful for others. Mode of human-computer interface influences rehabilitation therapy demands and can be individualized for patients. © The Author(s) 2015.

  3. User localization during human-robot interaction.

    Science.gov (United States)

    Alonso-Martín, F; Gorostiza, Javi F; Malfaz, María; Salichs, Miguel A

    2012-01-01

    This paper presents a user localization system based on the fusion of visual information and sound source localization, implemented on a social robot called Maggie. One of the main requisites for natural interaction, whether human-human or human-robot, is an adequate spatial arrangement between the interlocutors; that is, being oriented toward each other and situated at the right distance during the conversation in order to have a satisfactory communicative process. Our social robot uses a complete multimodal dialog system which manages the user-robot interaction during the communicative process. One of its main components is the presented user localization system. To determine the most suitable placement of the robot in relation to the user, a proxemic study of human-robot interaction is required, which is described in this paper. The study was carried out with two groups of users: children, aged between 8 and 17, and adults. Finally, experimental results with the proposed multimodal dialog system are presented.
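
    As a minimal sketch of the kind of audio-visual fusion such a localization system performs (assuming each modality yields a bearing estimate with a confidence; the weighting is hypothetical, not Maggie's actual implementation):

        import math

        def fuse_bearings(audio_deg, audio_conf, visual_deg, visual_conf):
            """Confidence-weighted circular mean of two bearing estimates (degrees)."""
            x = y = 0.0
            for deg, weight in ((audio_deg, audio_conf), (visual_deg, visual_conf)):
                x += weight * math.cos(math.radians(deg))
                y += weight * math.sin(math.radians(deg))
            return math.degrees(math.atan2(y, x)) % 360.0

        # sound source heard at 20 degrees, face detected at 10 degrees
        print(fuse_bearings(20.0, 0.3, 10.0, 0.9))  # lands nearer the visual estimate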

  4. Best of Affective Computing and Intelligent Interaction 2013 in Multimodal Interactions

    NARCIS (Netherlands)

    Soleymani, Mohammad; Soleymani, M.; Pun, T.; Pun, Thierry; Nijholt, Antinus

    The fifth biennial Humaine Association Conference on Affective Computing and Intelligent Interaction (ACII 2013) was held in Geneva, Switzerland. The conference featured recent advances in affective computing and relevant applications in education, entertainment and health. A number of

  5. Applying systemic-structural activity theory to design of human-computer interaction systems

    CERN Document Server

    Bedny, Gregory Z; Bedny, Inna

    2015-01-01

    Human-Computer Interaction (HCI) is an interdisciplinary field that has gained recognition as an important field in ergonomics. HCI draws on ideas and theoretical concepts from computer science, psychology, industrial design, and other fields. Human-Computer Interaction is no longer limited to trained software users. Today people interact with various devices such as mobile phones, tablets, and laptops. How can you make such interaction user friendly, even when user proficiency levels vary? This book explores methods for assessing the psychological complexity of computer-based tasks. It also p

  6. Learning multimodal dictionaries.

    Science.gov (United States)

    Monaci, Gianluca; Jost, Philippe; Vandergheynst, Pierre; Mailhé, Boris; Lesage, Sylvain; Gribonval, Rémi

    2007-09-01

    Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans are used to integrating, at each instant, perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can also be extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when considering the signals independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm for iteratively learning multimodal generating functions that can be shifted to all positions in the signal is proposed as well. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences and is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips makes it possible to localize the sound source in the video in the presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.
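
    The following drastically simplified sketch conveys the core idea of extracting one joint audio-visual atom from temporally aligned feature snippets; the paper's actual algorithm additionally handles shift invariance and solves a generalized eigenvector problem, which this toy version replaces with a plain SVD:

        import numpy as np

        def learn_joint_atom(audio_patches, video_patches):
            """audio_patches: (n, da), video_patches: (n, dv), temporally aligned.
            Returns a unit-norm multimodal atom split into its two modal parts."""
            X = np.hstack([audio_patches, video_patches])   # stack the modalities
            X = X - X.mean(axis=0)                          # center the data
            _, _, vt = np.linalg.svd(X, full_matrices=False)
            atom = vt[0]                                    # dominant joint direction
            da = audio_patches.shape[1]
            return atom[:da], atom[da:]

        rng = np.random.default_rng(0)
        audio_atom, video_atom = learn_joint_atom(rng.normal(size=(200, 16)),
                                                  rng.normal(size=(200, 64)))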

  7. Interactive Multimodal Molecular Set – Designing Ludic Engaging Science Learning Content

    DEFF Research Database (Denmark)

    Thorsen, Tine Pinholt; Christiansen, Kasper Holm Bonde; Jakobsen Sillesen, Kristian

    2014-01-01

    This paper reports on an exploratory study investigating 10 primary school students’ interaction with an interactive multimodal molecular set fostering ludic, engaging science learning content in primary schools (8th and 9th grade). The concept of the prototype design was to bridge the physical... and virtual worlds with electronic tags and, through this, blend the familiarity of the computer and toys, to create a tool that provided a ludic approach to learning about atoms and molecules. The study was inspired by the participatory design and informant design methodologies and included design...

  8. An Efficient Human Identification through MultiModal Biometric System

    Directory of Open Access Journals (Sweden)

    K. Meena

    Full Text Available ABSTRACT Human identification is essential for the proper functioning of society. Human identification through multimodal biometrics is becoming an emerging trend, and one of the reasons is to improve recognition accuracy. Unimodal biometric systems are affected by various problems such as noisy sensor data, non-universality, lack of individuality, lack of invariant representation and susceptibility to circumvention. A unimodal system has limited accuracy. Hence, multimodal biometric systems, which combine more than one biometric feature at different levels, are proposed in order to enhance the performance of the system. A supervisor module combines the different opinions or decisions delivered by each subsystem and then makes a final decision. In this paper, a multimodal biometric authentication system is proposed, combining face, iris and fingerprint features. Biometric features are extracted by the Local Derivative Ternary Pattern (LDTP) in the Contourlet domain, and an extensive evaluation of LDTP is done using a Support Vector Machine and a Nearest Neighborhood Classifier. The experimental evaluations are performed on a public dataset, demonstrating the accuracy of the proposed system compared with existing systems. It is observed that the combination of face, fingerprint and iris gives better performance in terms of accuracy, False Acceptance Rate and False Rejection Rate, with minimum computation time.
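
    A minimal sketch of the supervisor module's fusion step, here as a weighted score-level combination (the weights and threshold are illustrative only; the paper evaluates its own feature extraction and classifiers):

        def supervisor_decision(face_score, iris_score, finger_score,
                                weights=(0.4, 0.35, 0.25), threshold=0.6):
            """Fuse normalized matcher scores in [0, 1] into accept/reject."""
            fused = (weights[0] * face_score
                     + weights[1] * iris_score
                     + weights[2] * finger_score)
            return fused >= threshold, fused

        accepted, fused_score = supervisor_decision(0.82, 0.74, 0.55)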

  9. Artificial Intelligence for Human Computing

    NARCIS (Netherlands)

    Huang, Th.S.; Nijholt, Antinus; Pantic, Maja; Pentland, A.; Unknown, [Unknown

    2007-01-01

    This book constitutes the thoroughly refereed post-proceedings of two events discussing AI for Human Computing: one Special Session during the Eighth International ACM Conference on Multimodal Interfaces (ICMI 2006), held in Banff, Canada, in November 2006, and a Workshop organized in conjunction

  10. Interactivity in Educational Apps for Young Children: A Multimodal Analysis

    Science.gov (United States)

    Blitz-Raith, Alexandra H.; Liu, Jianxin

    2017-01-01

    Interactivity is an important indicator of an educational app's reception. Since most educational apps are multimodal, it justifies a methodological initiative to understand meaningful involvement of multimodality in enacting and even amplifying interactivity in an educational app. Yet research so far has largely concentrated on algorithm…

  11. The effect of a pretest in an interactive, multimodal pretraining system for learning science concepts

    NARCIS (Netherlands)

    Bos, Floor/Floris; Terlouw, C.; Pilot, Albert

    2009-01-01

    In line with the cognitive theory of multimedia learning by Moreno and Mayer (2007), an interactive, multimodal learning environment was designed for the pretraining of science concepts in the joint area of physics, chemistry, biology, applied mathematics, and computer sciences. In the experimental

  12. Gaze-and-brain-controlled interfaces for human-computer and human-robot interaction

    Directory of Open Access Journals (Sweden)

    Shishkin S. L.

    2017-09-01

    Full Text Available Background. Human-machine interaction technology has greatly evolved during the last decades, but manual and speech modalities remain single output channels with their typical constraints imposed by the motor system’s information transfer limits. Will brain-computer interfaces (BCIs) and gaze-based control be able to convey human commands or even intentions to machines in the near future? We provide an overview of basic approaches in this new area of applied cognitive research. Objective. We test the hypothesis that the use of communication paradigms and a combination of eye tracking with unobtrusive forms of registering brain activity can improve human-machine interaction. Methods and Results. Three groups of ongoing experiments at the Kurchatov Institute are reported. First, we discuss the communicative nature of human-robot interaction, and approaches to building a more efficient technology. Specifically, “communicative” patterns of interaction can be based on joint attention paradigms from developmental psychology, including a mutual “eye-to-eye” exchange of looks between human and robot. Further, we provide an example of “eye mouse” superiority over the computer mouse, here in emulating the task of selecting a moving robot from a swarm. Finally, we demonstrate a passive, noninvasive BCI that uses EEG correlates of expectation. This may become an important filter to separate intentional gaze dwells from non-intentional ones. Conclusion. The current noninvasive BCIs are not well suited for human-robot interaction, and their performance, when they are employed by healthy users, is critically dependent on the impact of the gaze on selection of spatial locations. The new approaches discussed show a high potential for creating alternative output pathways for the human brain. When support from passive BCIs becomes mature, the hybrid technology of the eye-brain-computer (EBCI) interface will have a chance to enable natural, fluent, and the

  13. Interactivity in Educational Apps for Young children: A Multimodal Analysis

    Directory of Open Access Journals (Sweden)

    Alexandra H. Blitz-Raith

    2017-11-01

    Full Text Available Interactivity is an important indicator of an educational app's reception. Since most educational apps are multimodal, it justifies a methodological initiative to understand the meaningful involvement of multimodality in enacting and even amplifying interactivity in an educational app. Yet research so far has largely concentrated on algorithm construct and user feedback rather than on multimodal interactions, especially from a social semiotics perspective. Drawing from social semiotics approaches, this article proposes a multimodal analytic framework to examine three layers of mode in engendering interaction; namely, multiplicity, function, and relationship. Using the analytic framework in an analysis of The Farm Adventure for Kids, a popular educational app for pre-school children, we found that still images are dominant proportionally and are central in the interactive process. We also found that tapping still images of animals on screen is the main action, with other screen actions deliberately excluded. Such findings suggest that aligning children’s cognitive and physical capabilities with the use of mode should become the primary consideration in educational app design, and that consistent attention to this alignment in mobilizing modes significantly affects an educational app’s interactivity, and consequently its reception by young children.

  14. Virtual reality/ augmented reality technology : the next chapter of human-computer interaction

    OpenAIRE

    Huang, Xing

    2015-01-01

    No matter how many different sizes and shapes computers take, their basic components remain the same. If we look at the history of computers from the user's perspective, we find, surprisingly, that it is the input/output devices that have led the industry's development; in a word, human-computer interaction has driven the development of computer history. Human-computer interaction has gone through three stages: the first stage relies on the inpu...

  15. Proceedings of the Third International Conference on Intelligent Human Computer Interaction

    CERN Document Server

    Pokorný, Jaroslav; Snášel, Václav; Abraham, Ajith

    2013-01-01

    The Third International Conference on Intelligent Human Computer Interaction 2011 (IHCI 2011) was held at Charles University, Prague, Czech Republic from August 29 - August 31, 2011. This conference was the third in the series, following IHCI 2009 and IHCI 2010 held in January at IIIT Allahabad, India. Human computer interaction is a fast-growing research area and an attractive subject of interest for both academia and industry. There are many interesting and challenging topics that need to be researched and discussed. This book aims to provide excellent opportunities for the dissemination of interesting new research and discussion about the presented topics. It can be useful for researchers working on various aspects of human computer interaction. Topics covered in this book include user interface and interaction, theoretical background and applications of HCI, and also data mining and knowledge discovery as a support of HCI applications.

  16. Towards an intelligent framework for multimodal affective data analysis.

    Science.gov (United States)

    Poria, Soujanya; Cambria, Erik; Hussain, Amir; Huang, Guang-Bin

    2015-03-01

    An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook every day. In order to cope with the growth of such a large amount of multimodal data, there is an urgent need to develop an intelligent multi-modal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information extraction agent, which infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such as e-learning, e-health, automatic video content tagging and human-computer interaction. In particular, the developed intelligent agent adopts an ensemble feature extraction approach by exploiting the joint use of tri-modal (text, audio and video) features to enhance the multimodal information extraction process. In preliminary experiments using the eNTERFACE dataset, our proposed multi-modal system is shown to achieve an accuracy of 87.95%, outperforming the best state-of-the-art system by more than 10%, or in relative terms, a 56% reduction in error rate. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Human-Computer Interaction, Tourism and Cultural Heritage

    Science.gov (United States)

    Cipolla Ficarra, Francisco V.

    We present a state of the art of human-computer interaction aimed at tourism and cultural heritage in some cities of the European Mediterranean. The work analyzes the main problems deriving from training treated as a business, problems which can derail the continuous growth of HCI, new technologies and the tourism industry. Through a semiotic and epistemological study, we detect current mistakes in the interrelations of the formal and factual sciences, as well as the human factors that influence the professionals devoted to developing interactive systems for safeguarding and promoting cultural heritage.

  18. Design of a compact low-power human-computer interaction equipment for hand motion

    Science.gov (United States)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demands for convenience, endurance, responsiveness and naturalness. This paper describes the design of compact, wearable, low-power HCI equipment for gesture recognition. The system combines multi-mode sensing signals, vision and motion, using a depth camera and a motion sensor. At 40 mm × 30 mm after tight integration, the structure is compact and portable. The system is built on a layered module framework, which supports real-time collection (60 fps), processing and transmission, combining synchronous fusion with asynchronous concurrent collection and wireless Bluetooth 4.0 transmission. To minimize energy consumption, the system uses low-power components, manages peripheral state dynamically, switches into idle mode intelligently, applies pulse-width modulation (PWM) to the NIR LEDs of the depth camera, and optimizes the algorithm with the motion sensor. To test the equipment's function and performance, a gesture recognition algorithm was applied to the system. Results show that overall energy consumption can be as low as 0.5 W.
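
    A toy sketch of the dynamic peripheral-state management described above (the states, timings and PWM comments are invented for illustration, not taken from the paper):

        import time

        class PowerManager:
            """Switch a peripheral into idle mode after a period of inactivity."""

            def __init__(self, idle_after_s=2.0):
                self.idle_after_s = idle_after_s
                self.last_activity = time.monotonic()
                self.state = "active"

            def notify_activity(self):
                # e.g., hand detected: restore full PWM duty on the NIR LEDs
                self.last_activity = time.monotonic()
                self.state = "active"

            def tick(self):
                # e.g., no hand for a while: lower the duty cycle, gate the camera
                if time.monotonic() - self.last_activity > self.idle_after_s:
                    self.state = "idle"
                return self.state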

  19. An agent-based architecture for multimodal interaction

    NARCIS (Netherlands)

    Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    2001-01-01

    In this paper, an executable generic process model is proposed for combined verbal and non-verbal communication processes and their interaction. The agent-based architecture can be used to create multimodal interaction. The generic process model has been designed, implemented and used to simulate

  1. Modelling Engagement in Multi-Party Conversations : Data-Driven Approaches to Understanding Human-Human Communication Patterns for Use in Human-Robot Interactions

    OpenAIRE

    Oertel, Catharine

    2016-01-01

    The aim of this thesis is to study human-human interaction in order to provide virtual agents and robots with the capability to engage in multi-party conversations in a human-like manner. The focus lies on the modelling of conversational dynamics and the appropriate realization of multi-modal feedback behaviour. For such an undertaking, it is important to understand how human-human communication unfolds in varying contexts and constellations over time. To this end, multi-modal human-human...

  2. Multimodal Student Interaction Online: An Ecological Perspective

    Science.gov (United States)

    Berglund, Therese Ornberg

    2009-01-01

    This article describes the influence of tool and task design on student interaction in language learning at a distance. Interaction in a multimodal desktop video conferencing environment, FlashMeeting, is analyzed from an ecological perspective with two main foci: participation rates and conversational feedback strategies. The quantitative…

  3. Multi-modal RGB–Depth–Thermal Human Body Segmentation

    DEFF Research Database (Denmark)

    Palmero, Cristina; Clapés, Albert; Bahnsen, Chris

    2016-01-01

    This work addresses the problem of human body segmentation from multi-modal visual cues as a first stage of automatic human behavior analysis. We propose a novel RGB-Depth-Thermal dataset along with a multi-modal segmentation baseline. The several modalities are registered using a calibration... to other state-of-the-art methods, obtaining an overlap above 75% on the novel dataset when compared to the manually annotated ground-truth of human segmentations.

  4. The Human-Computer Interaction of Cross-Cultural Gaming Strategy

    Science.gov (United States)

    Chakraborty, Joyram; Norcio, Anthony F.; Van Der Veer, Jacob J.; Andre, Charles F.; Miller, Zachary; Regelsberger, Alexander

    2015-01-01

    This article explores the cultural dimensions of the human-computer interaction that underlies gaming strategies. The article is a desktop study of existing literature and is organized into five sections. The first examines the cultural aspects of knowledge processing. The social constructs of technology interaction are then discussed. Following this, the…

  5. Cognitive engineering models: A prerequisite to the design of human-computer interaction in complex dynamic systems

    Science.gov (United States)

    Mitchell, Christine M.

    1993-01-01

    This chapter examines a class of human-computer interaction applications, specifically the design of human-computer interaction for the operators of complex systems. Such systems include space systems (e.g., manned systems such as the Shuttle or space station, and unmanned systems such as NASA scientific satellites), aviation systems (e.g., the flight deck of 'glass cockpit' airplanes or air traffic control) and industrial systems (e.g., power plants, telephone networks, and sophisticated, e.g., 'lights out,' manufacturing facilities). The main body of human-computer interaction (HCI) research complements but does not directly address the primary issues involved in human-computer interaction design for operators of complex systems. Interfaces to complex systems are somewhat special. The 'user' in such systems - i.e., the human operator responsible for safe and effective system operation - is highly skilled, someone who in human-machine systems engineering is sometimes characterized as 'well trained, well motivated'. The 'job' or task context is paramount and, thus, human-computer interaction is subordinate to human job interaction. The design of human interaction with complex systems, i.e., the design of human job interaction, is sometimes called cognitive engineering.

  6. HCI in Mobile and Ubiquitous Computing

    OpenAIRE

    椎尾, 一郎; 安村, 通晃; 福本, 雅明; 伊賀, 聡一郎; 増井, 俊之

    2003-01-01

    This paper provides some perspectives on human-computer interaction in mobile and ubiquitous computing. The review covers an overview of ubiquitous computing, mobile computing and wearable computing. It also summarizes HCI topics in these fields, including real-world oriented interfaces, multi-modal interfaces, context awareness and invisible computers. Finally we discuss killer applications for the coming ubiquitous computing era.

  7. Exploring multimodal robotic interaction through storytelling for aphasics

    NARCIS (Netherlands)

    Mubin, O.; Al Mahmud, A.; Abuelma'atti, O.; England, D.

    2008-01-01

    In this poster, we propose the design of a multimodal robotic interaction mechanism that is intended to be used by aphasics for storytelling. Through limited physical interaction, people with mild to moderate aphasia can interact with a robot that may help them to be more active in their day to day

  8. Optimization Model for Web Based Multimodal Interactive Simulations.

    Science.gov (United States)

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-07-15

    This paper presents a technique for optimizing the performance of web-based multimodal interactive simulations. For such applications, where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation while satisfying application-specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user-specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimal solution is used to set rendering parameters (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
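
    A toy version of the optimization phase might exhaustively search a small discrete design space for the highest-quality settings that still fit the frame-time budget estimated by the proxy code; the cost and quality models below are invented for illustration (the paper formulates this step as a mixed integer program):

        from itertools import product

        def optimize_settings(ms_per_unit_cost, budget_ms=16.0):
            """Pick texture size and canvas resolution maximizing a crude
            quality proxy subject to an estimated per-frame time budget."""
            textures = [256, 512, 1024, 2048]
            canvases = [(640, 480), (1280, 720), (1920, 1080)]
            best = None
            for tex, (w, h) in product(textures, canvases):
                cost = ms_per_unit_cost * (tex / 256 + (w * h) / (640 * 480))
                quality = tex * w * h
                if cost <= budget_ms and (best is None or quality > best[0]):
                    best = (quality, tex, (w, h))
            return best

        print(optimize_settings(ms_per_unit_cost=2.5))  # -> (quality, 512, (1280, 720))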

  9. Multimodal interaction design in collocated mobile phone use

    NARCIS (Netherlands)

    El-Ali, A.; Lucero, A.; Aaltonen, V.

    2011-01-01

    In the context of the Social and Spatial Interactions (SSI) platform, we explore how multimodal interaction design (input and output) can augment and improve the experience of collocated, collaborative activities using mobile phones. Based largely on our prototype evaluations, we reflect on and

  10. Human-robot interaction strategies for walker-assisted locomotion

    CERN Document Server

    Cifuentes, Carlos A

    2016-01-01

    This book presents the development of a new multimodal human-robot interface for testing and validating control strategies applied to robotic walkers for assisting human mobility and gait rehabilitation. The aim is to achieve a closer interaction between the robotic device and the individual, empowering the rehabilitation potential of such devices in clinical applications. Trends and opportunities for future advances in the field of assistive locomotion via the development of hybrid solutions based on the combination of smart walkers and biomechatronic exoskeletons are also discussed.

  11. Multimodal human communication--targeting facial expressions, speech content and prosody.

    Science.gov (United States)

    Regenbogen, Christina; Schneider, Daniel A; Gur, Raquel E; Schneider, Frank; Habel, Ute; Kellermann, Thilo

    2012-05-01

    Human communication is based on a dynamic information exchange of the communication channels facial expressions, prosody, and speech content. This fMRI study elucidated the impact of multimodal emotion processing and the specific contribution of each channel on behavioral empathy and its prerequisites. Ninety-six video clips displaying actors who told self-related stories were presented to 27 healthy participants. In two conditions, all channels uniformly transported only emotional or neutral information. Three conditions selectively presented two emotional channels and one neutral channel. Subjects indicated the actors' emotional valence and their own while fMRI was recorded. Activation patterns of tri-channel emotional communication reflected multimodal processing and facilitative effects for empathy. Accordingly, subjects' behavioral empathy rates significantly deteriorated once one source was neutral. However, emotionality expressed via two of three channels yielded activation in a network associated with theory-of-mind-processes. This suggested participants' effort to infer mental states of their counterparts and was accompanied by a decline of behavioral empathy, driven by the participants' emotional responses. Channel-specific emotional contributions were present in modality-specific areas. The identification of different network-nodes associated with human interactions constitutes a prerequisite for understanding dynamics that underlie multimodal integration and explain the observed decline in empathy rates. This task might also shed light on behavioral deficits and neural changes that accompany psychiatric diseases. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. A new multimodal interactive way of subjective scoring of 3D video quality of experience

    Science.gov (United States)

    Kim, Taewan; Lee, Kwanghyun; Lee, Sanghoon; Bovik, Alan C.

    2014-03-01

    People who watch today's 3D visual programs, such as 3D cinema, 3D TV and 3D games, experience wide and dynamically varying ranges of 3D visual immersion and 3D quality of experience (QoE). It is necessary to be able to deploy reliable methodologies that measure each viewer's subjective experience. We propose a new methodology that we call Multimodal Interactive Continuous Scoring of Quality (MICSQ). MICSQ is composed of a device interaction process between the 3D display and a separate device (PC, tablet, etc.) used as an assessment tool, and a human interaction process between the subject(s) and the device. The scoring process is multimodal, using aural and tactile cues to help engage and focus the subject(s) on their tasks. Moreover, the wireless device interaction process makes it possible for multiple subjects to assess 3D QoE simultaneously in a large space such as a movie theater, and at different visual angles and distances.

  13. Model-based acquisition and analysis of multimodal interactions for improving human-robot interaction

    OpenAIRE

    Renner, Patrick; Pfeiffer, Thies

    2014-01-01

    For robots to solve complex tasks cooperatively in close interaction with humans, they need to understand natural human communication. To achieve this, robots could benefit from a deeper understanding of the processes that humans use for successful communication. Such skills can be studied by investigating human face-to-face interactions in complex tasks. In our work the focus lies on shared-space interactions in a path planning task, and thus 3D gaze directions and hand movements are of particular in...

  14. A multimodal dataset for authoring and editing multimedia content: The MAMEM project

    Directory of Open Access Journals (Sweden)

    Spiros Nikolopoulos

    2017-12-01

    Full Text Available We present a dataset that combines multimodal biosignals and eye tracking information gathered under a human-computer interaction framework. The dataset was developed in the vein of the MAMEM project, which aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals collected from 34 individuals (18 able-bodied and 16 motor-impaired). Data were collected during interaction with a specifically designed interface for web browsing and multimedia content manipulation, and during imaginary movement tasks. The presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that would foster the integration of people with severe motor impairments back into society.

  15. Multimodal 2D Brain Computer Interface.

    Science.gov (United States)

    Almajidy, Rand K; Boudria, Yacine; Hofmann, Ulrich G; Besio, Walter; Mankodiya, Kunal

    2015-08-01

    In this work we used multimodal, non-invasive brain signal recording systems, namely Near Infrared Spectroscopy (NIRS), disc electrode electroencephalography (EEG), and tripolar concentric ring electrode (TCRE) electroencephalography (tEEG). Seven healthy subjects participated in our experiments to control a 2-D Brain Computer Interface (BCI). Four motor imagery tasks were performed: imagined motion of the left hand, the right hand, both hands, and both feet. The signal slope (SS) of the change in oxygenated hemoglobin concentration measured by NIRS was used for feature extraction, while the power spectral density (PSD) of both EEG and tEEG in the 8–30 Hz frequency band was used for feature extraction from the electrical signals. Linear Discriminant Analysis (LDA) was used to classify different combinations of the aforementioned features. The highest classification accuracy (85.2%) was achieved by using features from all three brain signal recording modules. The improvement in classification accuracy was highly significant (p = 0.0033) when using the multimodal signal features as compared to pure EEG features.
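
    A compact sketch of the final classification step, using synthetic stand-ins for the real features (signal-slope values from NIRS plus band-power features from EEG and tEEG) and assuming scikit-learn is available:

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(42)
        n_trials = 120
        nirs_ss = rng.normal(size=(n_trials, 8))     # NIRS signal-slope features
        eeg_psd = rng.normal(size=(n_trials, 32))    # EEG 8-30 Hz band power
        teeg_psd = rng.normal(size=(n_trials, 32))   # tEEG 8-30 Hz band power
        y = rng.integers(0, 4, size=n_trials)        # four motor imagery classes

        X = np.hstack([nirs_ss, eeg_psd, teeg_psd])  # multimodal feature vector
        acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
        print(f"accuracy on random stand-in data (chance ~0.25): {acc:.2f}")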

  16. Multimodal interaction with W3C standards toward natural user interfaces to everything

    CERN Document Server

    2017-01-01

    This book presents new standards for multimodal interaction published by the W3C and other standards bodies in straightforward and accessible language, while also illustrating the standards in operation through case studies and chapters on innovative implementations. The book illustrates how, as smart technology becomes ubiquitous and appears in more and more different shapes and sizes, vendor-specific approaches to multimodal interaction become impractical, motivating the need for standards. This book covers standards for voice, emotion, natural language understanding, dialog, and multimodal architectures. The book describes the standards in a practical manner, making them accessible to developers, students, and researchers. Comprehensive resource that explains the W3C standards for multimodal interaction in a clear and straightforward way; Includes case studies of the use of the standards on a wide variety of devices, including mobile devices, tablets, wearables and robots, in applications such as assisted livi...

  17. Design Science in Human-Computer Interaction: A Model and Three Examples

    Science.gov (United States)

    Prestopnik, Nathan R.

    2013-01-01

    Humanity has entered an era where computing technology is virtually ubiquitous. From websites and mobile devices to computers embedded in appliances on our kitchen counters and automobiles parked in our driveways, information and communication technologies (ICTs) and IT artifacts are fundamentally changing the ways we interact with our world.…

  18. Implementations of the CC'01 Human-Computer Interaction Guidelines Using Bloom's Taxonomy

    Science.gov (United States)

    Manaris, Bill; Wainer, Michael; Kirkpatrick, Arthur E.; Stalvey, RoxAnn H.; Shannon, Christine; Leventhal, Laura; Barnes, Julie; Wright, John; Schafer, J. Ben; Sanders, Dean

    2007-01-01

    In today's technology-laden society human-computer interaction (HCI) is an important knowledge area for computer scientists and software engineers. This paper surveys existing approaches to incorporate HCI into computer science (CS) and such related issues as the perceived gap between the interests of the HCI community and the needs of CS…

  19. Stress and Cognitive Load in Multimodal Conversational Interactions

    NARCIS (Netherlands)

    Niculescu, A.I.; Cao, Y.; Nijholt, Antinus; Stephanides, C.

    2009-01-01

    The quality assessment of multimodal conversational interactions is determined by many influencing parameters. Stress and cognitive load are two of them. In order to assess the impact of stress and cognitive load on the perceived conversational quality, it is essential to control their levels during

  20. The Study on Human-Computer Interaction Design Based on the Users’ Subconscious Behavior

    Science.gov (United States)

    Li, Lingyuan

    2017-09-01

    Human-computer interaction is human-centered. An excellent interaction design should focus on the study of user experience, which largely comes from the consistency between the design and human behavioral habits. However, users’ behavioral habits often result from the subconscious. Therefore, it is smart to utilize users’ subconscious behavior to achieve the design’s intention and maximize the value of a product’s functions, which is gradually becoming a new trend in this field.

  1. Ghost-in-the-Machine reveals human social signals for human-robot interaction.

    Science.gov (United States)

    Loth, Sebastian; Jettka, Katharina; Giuliani, Manuel; de Ruiter, Jan P

    2015-01-01

    We used a new method called "Ghost-in-the-Machine" (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer's requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human-robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience.

  2. Eye Tracking Based Control System for Natural Human-Computer Interaction

    Directory of Open Access Journals (Sweden)

    Xuebai Zhang

    2017-01-01

    Full Text Available Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disability. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode using only the user’s eyes. The usage flow of the proposed system is designed to closely follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (searching an article and browsing multimedia web pages) were carried out to compare the proposed eye control tool with an existing system. The Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design.

  3. Eye Tracking Based Control System for Natural Human-Computer Interaction.

    Science.gov (United States)

    Zhang, Xuebai; Liu, Xiaolong; Yuan, Shyan-Ming; Lin, Shu-Fan

    2017-01-01

    Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disability. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode using only the user's eyes. The usage flow of the proposed system is designed to closely follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (searching an article and browsing multimedia web pages) were carried out to compare the proposed eye control tool with an existing system. The Technology Acceptance Model (TAM) measures are used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design.
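
    As a sketch of the dwell-based selection that eye-control systems of this kind typically rely on (the radius and dwell time are hypothetical; the actual system also integrates keyboard functions and a magnifier module):

        def dwell_click(gaze_samples, radius_px=30.0, dwell_ms=800.0):
            """gaze_samples: iterable of (t_ms, x, y). Returns the fixation
            point once gaze stays within radius_px for dwell_ms, else None."""
            start = anchor = None
            for t, x, y in gaze_samples:
                moved = (anchor is None or
                         ((x - anchor[0]) ** 2 + (y - anchor[1]) ** 2) ** 0.5 > radius_px)
                if moved:
                    start, anchor = t, (x, y)   # gaze moved: restart the dwell timer
                elif t - start >= dwell_ms:
                    return anchor               # held long enough: trigger a click
            return None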

  4. Multimodal user interfaces to improve social integration of elderly and mobility impaired.

    Science.gov (United States)

    Dias, Miguel Sales; Pires, Carlos Galinho; Pinto, Fernando Miguel; Teixeira, Vítor Duarte; Freitas, João

    2012-01-01

    Technologies for Human-Computer Interaction (HCI) and Communication have evolved tremendously over the past decades. However, citizens such as the mobility impaired or the elderly still face many difficulties interacting with communication services, either due to HCI issues or intrinsic design problems with the services. In this paper we start by presenting the results of two user studies, the first one conducted with a group of mobility-impaired users, comprising paraplegic and quadriplegic individuals, and the second one with the elderly. The study participants carried out a set of tasks with a multimodal (speech, touch, gesture, keyboard and mouse) and multi-platform (mobile, desktop) system, offering integrated access to communication and entertainment services, such as email, agenda, conferencing, instant messaging and social media, referred to as LHC - Living Home Center. The system was designed to take into account the requirements captured from these users, with the objective of evaluating whether the adoption of multimodal interfaces for audio-visual communication and social media services could improve the interaction with such services. Our study revealed that a multimodal prototype system, offering natural interaction modalities, especially supporting speech and touch, can in fact improve access to the presented services, contributing to the reduction of the social isolation of the mobility impaired, as well as the elderly, and improving their digital inclusion.

  5. Human-Centric Interfaces for Ambient Intelligence

    CERN Document Server

    Aghajan, Hamid; Delgado, Ramon Lopez-Cozar

    2009-01-01

    To create truly effective human-centric ambient intelligence systems both engineering and computing methods are needed. This is the first book to bridge data processing and intelligent reasoning methods for the creation of human-centered ambient intelligence systems. Interdisciplinary in nature, the book covers topics such as multi-modal interfaces, human-computer interaction, smart environments and pervasive computing, addressing principles, paradigms, methods and applications. This book will be an ideal reference for university researchers, R&D engineers, computer engineers, and graduate s

  6. Engageability: a new sub-principle of the learnability principle in human-computer interaction

    Directory of Open Access Journals (Sweden)

    B Chimbo

    2011-12-01

    Full Text Available The learnability principle relates to improving the usability of software, as well as users’ performance and productivity. A gap has been identified, as the current definition of the principle does not distinguish between users of different ages. To determine the extent of the gap, this article compares the ways in which two user groups, adults and children, learn how to use an unfamiliar software application. In doing this, we bring together the research areas of human-computer interaction (HCI), adult and child learning, learning theories and strategies, usability evaluation and interaction design. A literature survey conducted on learnability and learning processes considered the meaning of learnability of software applications across generations. In an empirical investigation, users aged from 9 to 12 and from 35 to 50 were observed in a usability laboratory while learning to use educational software applications. Insights that emerged from data analysis showed different tactics and approaches that children and adults use when learning unfamiliar software. Eye tracking data was also recorded. Findings indicated that subtle re-interpretation of the learnability principle and its associated sub-principles was required. An additional sub-principle, namely engageability, was proposed to incorporate aspects of learnability that are not covered by the existing sub-principles. Our re-interpretation of the learnability principle and the resulting design recommendations should help designers to fulfill the varying needs of different-aged users, and improve the learnability of their designs. Keywords: Child computer interaction, Design principles, Eye tracking, Generational differences, Human-computer interaction, Learning theories, Learnability, Engageability, Software applications, Usability. Disciplines: Human-Computer Interaction (HCI) Studies, Computer science, Observational Studies

  7. An evaluation framework for multimodal interaction determining quality aspects and modality choice

    CERN Document Server

    Wechsung, Ina

    2014-01-01

    This book presents (1) an exhaustive and empirically validated taxonomy of quality aspects of multimodal interaction as well as respective measurement methods, (2) a validated questionnaire specifically tailored to the evaluation of multimodal systems and covering most of the taxonomy's quality aspects, (3) insights on how the quality perceptions of multimodal systems relate to the quality perceptions of its individual components, (4) a set of empirically tested factors which influence modality choice, and (5) models regarding the relationship of the perceived quality of a modality and the actual usage of a modality.

  8. Knowledge translation in health care as a multimodal interactional accomplishment

    DEFF Research Database (Denmark)

    Kjær, Malene

    2014-01-01

    of their education where they are in clinical practice. The analysis is made possible through video recordings of how student nurses translate their theoretical knowledge into a professional situational conduct in everyday interactional accomplishments among supervisors and patients. The analysis shows how some......In the theory of health care, knowledge translation is regarded as a crucial phenomenon that makes the whole health care system work in a desired manner. The present paper studies knowledge translation from the student nurses’ perspective and does that through a close analysis of the part...... knowledge gets translated through the use of rich multimodal embodied interactions, whereas the more abstract aspects of knowledge remain untranslated. Overall, the study contributes to the understanding of knowledge translation as a multimodal, locally situated accomplishment....

  9. Cooperation in human-computer communication

    OpenAIRE

    Kronenberg, Susanne

    2000-01-01

    The goal of this thesis is to simulate cooperation in human-computer communication in order to model the communicative interaction process of agents in natural dialogs, and thereby provide advanced human-computer interaction in which coherence is maintained between the contributions of both agents, i.e. the human user and the computer. This thesis contributes to certain aspects of understanding and generation and their interaction in the German language. In spontaneous dialogs agents cooperate by the pro...

  10. EEG Classification for Hybrid Brain-Computer Interface Using a Tensor Based Multiclass Multimodal Analysis Scheme.

    Science.gov (United States)

    Ji, Hongfei; Li, Jie; Lu, Rongrong; Gu, Rong; Cao, Lei; Gong, Xiaoliang

    2016-01-01

    Electroencephalogram- (EEG-) based brain-computer interface (BCI) systems usually utilize one type of change in the dynamics of brain oscillations for control, such as event-related desynchronization/synchronization (ERD/ERS), steady state visual evoked potentials (SSVEP), and P300 evoked potentials. There is a recent trend to detect more than one of these signals in one system to create a hybrid BCI. However, in this case, EEG data have typically been divided into groups and analyzed by separate processing procedures. As a result, the interactive effects were ignored when different types of BCI tasks were executed simultaneously. In this work, we propose an improved tensor-based multiclass multimodal scheme especially for hybrid BCI, in which EEG signals are denoted as multiway tensors, a nonredundant rank-one tensor decomposition model is proposed to obtain nonredundant tensor components, a weighted Fisher criterion is designed to select multimodal discriminative patterns without ignoring the interactive effects, and support vector machine (SVM) is extended to multiclass classification. Experimental results suggest that the proposed scheme can not only identify the different changes in the dynamics of brain oscillations induced by different types of tasks but also capture the interactive effects of simultaneous tasks properly. Therefore, it has great potential for use in hybrid BCI.
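
    At the heart of the scheme is a rank-one tensor decomposition; a bare-bones alternating least squares version for a single channels x frequency x time EEG tensor could look like this (the paper adds a nonredundancy constraint and a weighted Fisher criterion on top, which this sketch omits):

        import numpy as np

        def rank_one_als(T, n_iter=50, seed=0):
            """Best-effort rank-one approximation T ~ a o b o c of a 3-way tensor."""
            rng = np.random.default_rng(seed)
            a, b, c = (rng.random(s) for s in T.shape)
            for _ in range(n_iter):
                a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
                b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
                c = np.einsum('ijk,i,j->k', T, a, b)   # scale absorbed into c
            return a, b, c

        a, b, c = rank_one_als(np.random.rand(16, 24, 100))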

  11. Stereo Vision for Unrestricted Human-Computer Interaction

    OpenAIRE

    Eldridge, Ross; Rudolph, Heiko

    2008-01-01

    Human computer interfaces have come a long way in recent years, but the goal of a computer interpreting unrestricted human movement remains elusive. The use of stereo vision in this field has enabled the development of systems that begin to approach this goal. As computer technology advances, we come ever closer to a system that can react to the ambiguities of human movement in real time. In the foreseeable future stereo computer vision is not likely to replace the keyboard or mouse. There is at...

  12. Enrichment of Human-Computer Interaction in Brain-Computer Interfaces via Virtual Environments

    Directory of Open Access Journals (Sweden)

    Alonso-Valerdi Luz María

    2017-01-01

    Full Text Available Tridimensional representations stimulate cognitive processes that are the core and foundation of human-computer interaction (HCI). Those cognitive processes take place while a user navigates and explores a virtual environment (VE) and are mainly related to spatial memory storage, attention, and perception. VEs have many distinctive features (e.g., involvement, immersion, and presence) that can significantly improve HCI in highly demanding and interactive systems such as brain-computer interfaces (BCI). BCI is a nonmuscular communication channel that attempts to reestablish the interaction between an individual and his/her environment. Although BCI research started in the sixties, this technology is not yet efficient or reliable for everyone at any time. Over the past few years, researchers have argued that the main BCI flaws could be associated with HCI issues. The evidence presented thus far shows that VEs can (1) set out working environmental conditions, (2) maximize the efficiency of BCI control panels, (3) implement navigation systems based not only on user intentions but also on user emotions, and (4) regulate the user's mental state to increase the differentiation between control and noncontrol modalities.

  13. Emotional pictures and sounds: A review of multimodal interactions of emotion cues in multiple domains

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2014-12-01

    Full Text Available In everyday life, multiple sensory channels jointly trigger emotional experiences and one channel may alter processing in another channel. For example, seeing an emotional facial expression and hearing the voice’s emotional tone will jointly create the emotional experience. This example, where auditory and visual input is related to social communication, has gained considerable attention from researchers. However, interactions of visual and auditory emotional information are not limited to social communication but can extend to much broader contexts including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended to consider an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies in which emotional sounds and interactions with complex pictures of scenes were investigated. In addition to behavioral studies, we focus on neuroimaging and electro- and peripheral-physiological findings. Furthermore, we integrate these findings and identify similarities or differences. We conclude with suggestions for future research.

  14. Interactive natural language acquisition in a multi-modal recurrent neural architecture

    Science.gov (United States)

    Heinrich, Stefan; Wermter, Stefan

    2018-01-01

    For the complex human brain that enables us to communicate in natural language, we have gathered good understandings of the principles underlying language acquisition and processing, knowledge about sociocultural conditions, and insights into activity patterns in the brain. However, we have not yet been able to understand the behavioural and mechanistic characteristics of natural language, nor how mechanisms in the brain allow us to acquire and process language. In bridging the insights from behavioural psychology and neuroscience, the goal of this paper is to contribute a computational understanding of the appropriate characteristics that favour language acquisition. Accordingly, we provide concepts and refinements in cognitive modelling regarding principles and mechanisms in the brain and propose a neurocognitively plausible model for embodied language acquisition from real-world interaction of a humanoid robot with its environment. In particular, the architecture consists of a continuous-time recurrent neural network in which parts have different leakage characteristics, and thus operate on multiple timescales, for every modality, together with the association of the higher-level nodes of all modalities into cell assemblies. The model is capable of learning language production grounded in both temporal dynamic somatosensation and vision, and features hierarchical concept abstraction, concept decomposition, multi-modal integration, and self-organisation of latent representations.
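
    A minimal sketch of the multiple-timescale idea: a continuous-time recurrent network update in which each unit's time constant (leakage) differs, so fast units can track raw sensation while slow units form more abstract representations (dimensions and constants are illustrative, not the paper's architecture):

        import numpy as np

        def ctrnn_step(u, x, W, tau, dt=0.01):
            """One Euler step of tau * du/dt = -u + W @ tanh(u) + x."""
            return u + (dt / tau) * (-u + W @ np.tanh(u) + x)

        n = 12
        rng = np.random.default_rng(1)
        W = 0.1 * rng.normal(size=(n, n))
        tau = np.concatenate([np.full(6, 0.05),   # fast units (perception-like)
                              np.full(6, 1.0)])   # slow units (abstraction-like)
        u = np.zeros(n)
        for _ in range(100):
            u = ctrnn_step(u, x=0.1 * rng.normal(size=n), W=W, tau=tau)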

  15. Man-machine interactions 3

    CERN Document Server

    Czachórski, Tadeusz; Kozielski, Stanisław

    2014-01-01

    Man-Machine Interaction is an interdisciplinary field of research that covers many aspects of science focused on humans and machines in conjunction. The basic goal of the study is to improve and invent new ways of communication between users and computers, and many different subjects are involved in reaching the long-term research objective of an intuitive, natural and multimodal way of interacting with machines. A rapid evolution of the methods by which humans interact with computers can be observed nowadays, and new approaches allow computing technologies to support people on a daily basis, making computers more usable and receptive to users' needs. This monograph is the third edition in the series and presents important ideas, current trends and innovations in the man-machine interactions area. The aim of this book is to introduce not only hardware and software interfacing concepts, but also to give insights into the related theoretical background. The reader is provided with a compilation of high...

  16. The integration of audio-tactile information is modulated by multimodal social interaction with physical contact in infancy

    Directory of Open Access Journals (Sweden)

    Yukari Tanaka

    2018-04-01

    Full Text Available Interaction between caregivers and infants is multimodal in nature. To react interactively and smoothly to such multimodal signals, infants must integrate all these signals. However, few empirical infant studies have investigated how multimodal social interaction with physical contact facilitates multimodal integration, especially regarding audio-tactile (A-T) information. Using electroencephalogram (EEG) and event-related potentials (ERPs), the present study investigated how neural processing involved in A-T integration is modulated by tactile interaction. Seven- to eight-month-old infants heard one pseudoword both whilst being tickled (multimodal 'A-T' condition) and whilst not being tickled (unimodal 'A' condition). Thereafter, their EEG was measured during the perception of the same words. Compared to the A condition, the A-T condition resulted in enhanced ERPs and higher beta-band activity within the left temporal regions, indicating neural processing of A-T integration. Additionally, theta-band activity within the middle frontal region was enhanced, which may reflect enhanced attention to social information. Furthermore, differential ERPs correlated with the degree of engagement in the tickling interaction. We provide neural evidence that the integration of A-T information in infants' brains is facilitated through tactile interaction with others. Such plastic changes in neural processing may promote harmonious social interaction and effective learning in infancy. Keywords: Electroencephalogram (EEG), Infants, Multisensory integration, Touch interaction
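
    The band-specific effects reported above (beta over left temporal sites, theta over middle frontal sites) are typically quantified as mean spectral power within a frequency band. A minimal sketch of that computation with SciPy's Welch estimator follows; the sampling rate, band edges, and the random placeholder signal are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Mean power of a single-channel EEG segment within a frequency band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # 2-second windows
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 250                                  # assumed sampling rate (Hz)
segment = np.random.randn(fs * 10)        # placeholder for one channel, 10 s

theta = band_power(segment, fs, (4, 8))   # theta band, ~4-8 Hz
beta = band_power(segment, fs, (13, 30))  # beta band, ~13-30 Hz
print(f"theta={theta:.4f}, beta={beta:.4f}")
```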

  17. Cross-cultural human-computer interaction and user experience design a semiotic perspective

    CERN Document Server

    Brejcha, Jan

    2015-01-01

    This book describes patterns of language and culture in human-computer interaction (HCI). Through numerous examples, it shows why these patterns matter and how to exploit them to design a better user experience (UX) with computer systems. It provides scientific information on the theoretical and practical aspects of interaction and communication design for research experts and industry practitioners, and covers the latest research in semiotics and cultural studies, bringing a set of tools and methods to benefit the process of designing with the cultural background in mind.

  18. Virtual microscopy : Merging of computer mediated collaboration and intuitive interfacing

    NARCIS (Netherlands)

    De Ridder, H.; De Ridder-Sluiter, J.G.; Kluin, P.M.; Christiaans, H.H.C.M.

    2009-01-01

    Ubiquitous computing (or Ambient Intelligence) is an upcoming technology that is usually associated with futuristic smart environments in which information is available anytime anywhere and with which humans can interact in a natural, multimodal way. However spectacular the corresponding scenarios

  19. The integration of audio-tactile information is modulated by multimodal social interaction with physical contact in infancy.

    Science.gov (United States)

    Tanaka, Yukari; Kanakogi, Yasuhiro; Kawasaki, Masahiro; Myowa, Masako

    2018-04-01

    Interaction between caregivers and infants is multimodal in nature. To react interactively and smoothly to such multimodal signals, infants must integrate all these signals. However, few empirical infant studies have investigated how multimodal social interaction with physical contact facilitates multimodal integration, especially regarding audio-tactile (A-T) information. By using electroencephalogram (EEG) and event-related potentials (ERPs), the present study investigated how neural processing involved in A-T integration is modulated by tactile interaction. Seven- to 8-month-old infants heard one pseudoword both whilst being tickled (multimodal 'A-T' condition) and whilst not being tickled (unimodal 'A' condition). Thereafter, their EEG was measured during the perception of the same words. Compared to the A condition, the A-T condition resulted in enhanced ERPs and higher beta-band activity within the left temporal regions, indicating neural processing of A-T integration. Additionally, theta-band activity within the middle frontal region was enhanced, which may reflect enhanced attention to social information. Furthermore, differential ERPs correlated with the degree of engagement in the tickling interaction. We provide neural evidence that the integration of A-T information in infants' brains is facilitated through tactile interaction with others. Such plastic changes in neural processing may promote harmonious social interaction and effective learning in infancy. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Human computer interaction using hand gestures

    CERN Document Server

    Premaratne, Prashan

    2014-01-01

    Human computer interaction (HCI) plays a vital role in bridging the 'Digital Divide', bringing people closer to consumer electronics control in the 'lounge'. Keyboards, mice and remote controls can alienate old and new generations alike from control interfaces. Hand gesture recognition systems bring hope of connecting people with machines in a natural way. This will lead to consumers being able to use their hands naturally to communicate with any electronic equipment in their 'lounge'. This monograph covers state-of-the-art hand gesture recognition approaches and how they evolved from their inception. The author also details his research in this area over the past eight years, and how the future of HCI might unfold. This monograph will serve as a valuable guide for researchers venturing into the world of HCI.

  1. Challenges in Transcribing Multimodal Data: A Case Study

    Science.gov (United States)

    Helm, Francesca; Dooly, Melinda

    2017-01-01

    Computer-mediated communication (CMC) once meant principally text-based communication mediated by computers, but rapid technological advances in recent years have heralded an era of multimodal communication with a growing emphasis on audio and video synchronous interaction. As CMC, in all its variants (text chats, video chats, forums, blogs, SMS,…

  2. Quantifying Trust, Distrust, and Suspicion in Human-System Interactions

    Science.gov (United States)

    2015-10-26

    communication, psychology, human factors, management, marketing, information technology, and brain/neurology. We first developed a generic model of state...

  3. USING RESEARCH METHODS IN HUMAN COMPUTER INTERACTION TO DESIGN TECHNOLOGY FOR RESILIENCE

    OpenAIRE

    Lopes, Arminda Guerra

    2016-01-01

    ABSTRACT Research in human computer interaction (HCI) covers both technological and human behavioural concerns. As a consequence, the contributions made in HCI research tend to be oriented towards either engineering or the social sciences. In HCI the purpose of practical research contributions is to reveal unknown insights about human behaviour and its relationship to technology. Practical research methods normally used in HCI include formal experiments, field experiments, field studies, interviews, ...

  4. Realtime Interaction Analysis of Social Interplay in a Multimodal Musical-Sonic Interaction Context

    DEFF Research Database (Denmark)

    Hansen, Anne-Marie

    2010-01-01

    This paper presents an approach to the analysis of social interplay among users in a multimodal interaction and musical performance situation. The approach consists of a combined method of realtime sensor data analysis for the description and interpretation of player gestures, and video micro-analysis methods used to describe the interaction situation and the context in which the social interplay takes place. This combined method is used in an iterative process, where the design of interactive games with musical-sonic feedback is improved according to newly discovered understandings and interpretations...

  5. Beyond image quality : designing engaging interactions with digital products

    NARCIS (Netherlands)

    Ridder, de H.; Rozendaal, M.C.

    2008-01-01

    Ubiquitous computing (or Ambient Intelligence) promises a world in which information is available anytime, anywhere and with which humans can interact in a natural, multimodal way. In such a world, perceptual image quality remains an important criterion, since most information will be displayed

  6. Beyond image quality : Designing engaging interactions with digital products

    NARCIS (Netherlands)

    De Ridder, H.; Rozendaal, M.C.

    2008-01-01

    Ubiquitous computing (or Ambient Intelligence) promises a world in which information is available anytime, anywhere and with which humans can interact in a natural, multimodal way. In such a world, perceptual image quality remains an important criterion, since most information will be displayed

  7. Reciprocity in computer-human interaction: source-based, norm-based, and affect-based explanations.

    Science.gov (United States)

    Lee, Seungcheol Austin; Liang, Yuhua Jake

    2015-04-01

    Individuals often apply social rules when they interact with computers, and this is known as the Computers Are Social Actors (CASA) effect. Following previous work, one approach to understanding the mechanism responsible for CASA is to utilize computer agents and have the agents attempt to gain human compliance (e.g., completing a pattern recognition task). The current study focuses on three key factors frequently cited to influence traditional notions of compliance: evaluations toward the source (competence and warmth), normative influence (reciprocity), and affective influence (mood). Structural equation modeling assessed the effects of these factors on human compliance with a computer agent's request. The final model shows that norm-based influence (reciprocity) increased the likelihood of compliance, while evaluations toward the computer agent did not significantly influence compliance.

  8. A hardware and software architecture to deal with multimodal and collaborative interactions in multiuser virtual reality environments

    Science.gov (United States)

    Martin, P.; Tseu, A.; Férey, N.; Touraine, D.; Bourdot, P.

    2014-02-01

    Most advanced immersive devices provide a collaborative environment in which several users have their own distinct head-tracked stereoscopic points of view. Combined with commonly used interactive features such as voice and gesture recognition, 3D mice, haptic feedback, and spatialized audio rendering, these environments should faithfully reproduce a real context. However, even though many studies have been carried out on multimodal systems, we are far from definitively solving the issue of multimodal fusion, which consists in merging multimodal events coming from users and devices into interpretable commands performed by the application. Multimodality and collaboration have often been studied separately, despite the fact that these two aspects share interesting similarities. We discuss how we address this problem through the design and implementation of a supervisor that is able to deal with both multimodal fusion and collaborative aspects. The aim of this supervisor is to merge users' inputs from virtual reality devices in order to control immersive multi-user applications. We approach this problem from a practical point of view, because the main requirements of this supervisor were defined by an industrial task proposed by our automotive partner, which has to be performed with multimodal and collaborative interactions in a co-located multi-user environment. In this task, two co-located workers of a virtual assembly chain have to cooperate to insert a seat into the bodywork of a car, using haptic devices to feel collisions and to manipulate objects, and combining speech recognition and two-handed gesture recognition as multimodal instructions. Besides the architectural aspects of this supervisor, we describe how we ensure the modularity of our solution so that it can be applied to different virtual reality platforms, interactive contexts and virtual contents. A virtual context observer included in this supervisor was especially designed to be independent of the
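
    As a concrete illustration of the fusion step such a supervisor performs, the sketch below pairs time-stamped speech and gesture events into combined commands using a simple temporal window. This late-fusion baseline, the 0.8 s window, and the event names are assumptions for illustration, not the paper's actual implementation.

```python
from dataclasses import dataclass

# Toy late-fusion step: pair a speech event with a gesture event that
# occurred close enough in time, and emit a combined command. Event types,
# the fusion window, and command names are illustrative assumptions, not
# the supervisor described in the paper.

@dataclass
class Event:
    modality: str   # "speech" or "gesture"
    value: str      # e.g. "insert", "point_at_seat"
    t: float        # timestamp in seconds

def fuse(events, window=0.8):
    speech = [e for e in events if e.modality == "speech"]
    gesture = [e for e in events if e.modality == "gesture"]
    commands = []
    for s in speech:
        # take the nearest gesture within the fusion window, if any
        near = [g for g in gesture if abs(g.t - s.t) <= window]
        if near:
            g = min(near, key=lambda g: abs(g.t - s.t))
            commands.append((s.value, g.value, max(s.t, g.t)))
    return commands

stream = [Event("gesture", "point_at_seat", 10.1),
          Event("speech", "insert", 10.4),
          Event("speech", "release", 15.0)]
print(fuse(stream))   # [('insert', 'point_at_seat', 10.4)]
```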

  9. Embodied conversational agents for multimodal automated social skills training in people with autism spectrum disorders.

    Science.gov (United States)

    Tanaka, Hiroki; Negoro, Hideki; Iwasaka, Hidemi; Nakamura, Satoshi

    2017-01-01

    Social skills training, performed by human trainers, is a well-established method for obtaining appropriate skills in social interaction. Previous work automated the process of social skills training by developing a dialogue system that teaches social communication skills through interaction with a computer avatar. However, while that work considered only acoustic and linguistic information, human social skills trainers also take into account visual and other non-verbal features. In this paper, we create and evaluate a social skills training system that closes this gap by considering the audiovisual features of the smiling ratio and the head pose (yaw and pitch). In addition, the previous system was only tested with graduate students; in this paper, we apply our system to children and young adults with autism spectrum disorders. For our experimental evaluation, we recruited 18 members of the general population and 10 people with autism spectrum disorders and gave them our proposed multimodal system to use. An experienced human social skills trainer rated the social skills of the users. We evaluated the system's effectiveness by comparing pre- and post-training scores and identified significant improvement in social skills using our proposed multimodal system. Computer-based social skills training is useful for people who experience social difficulties. Such a system can be used by teachers, therapists, and social skills trainers for rehabilitation and as a supplement to human-based training anywhere and anytime.

  10. Multi-step EMG Classification Algorithm for Human-Computer Interaction

    Science.gov (United States)

    Ren, Peng; Barreto, Armando; Adjouadi, Malek

    A three-electrode human-computer interaction system, based on digital processing of the electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles of the head, respectively. The signal processing algorithm translates the EMG signals produced during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; and the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than previous approaches.
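
    The first of the three classification principles, channel-energy dominance, can be sketched as a simple per-window decision rule. The sketch below covers only the jaw-clench cases and omits the spectral step that separates the eyebrow movements; thresholds, window contents and the action table are illustrative assumptions, not the paper's tuned algorithm.

```python
import numpy as np

# Energy-based first step: decide which EMG channel dominates within a
# window, then map contraction patterns to cursor actions.
ACTIONS = {
    "left_jaw": "cursor_left",
    "right_jaw": "cursor_right",
    "both_jaws": "left_click",
}

def window_energy(x):
    return float(np.sum(np.square(x)))

def classify(frontalis, left_temp, right_temp, thresh=1.0):
    e = {"frontalis": window_energy(frontalis),
         "left": window_energy(left_temp),
         "right": window_energy(right_temp)}
    if max(e.values()) < thresh:
        return None                       # no intentional contraction
    # correlated energy on both temporalis channels -> simultaneous clench
    if e["left"] >= thresh and e["right"] >= thresh:
        return ACTIONS["both_jaws"]
    if e["left"] > e["right"]:
        return ACTIONS["left_jaw"]
    return ACTIONS["right_jaw"]

rng = np.random.default_rng(0)
left_clench = rng.normal(0.0, 2.0, 200)   # strong left temporalis activity
quiet = rng.normal(0.0, 0.05, 200)
print(classify(frontalis=quiet, left_temp=left_clench, right_temp=quiet))
```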

  11. Developing a multimodal biometric authentication system using soft computing methods.

    Science.gov (United States)

    Malcangi, Mario

    2015-01-01

    Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.
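
    In the spirit of the data-fusion stage above, the sketch below fuses normalized voiceprint and fingerprint match scores into an accept/reject decision. The chapter's system uses an ANN and a fuzzy logic engine; the weighted sum here is a simplified stand-in, and all score ranges, weights, and thresholds are assumptions.

```python
# Minimal score-level fusion sketch: two matcher scores are normalized to
# a common range, combined, and thresholded into a decision.

def min_max_normalize(score, lo, hi):
    return (score - lo) / (hi - lo)

def fuse_and_decide(voice_score, finger_score,
                    w_voice=0.4, w_finger=0.6, threshold=0.65):
    v = min_max_normalize(voice_score, lo=0.0, hi=100.0)   # assumed ranges
    f = min_max_normalize(finger_score, lo=0.0, hi=1.0)
    fused = w_voice * v + w_finger * f
    return "accept" if fused >= threshold else "reject"

print(fuse_and_decide(voice_score=72.0, finger_score=0.81))  # accept
```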

  12. Enhancing Human-Computer Interaction Design Education: Teaching Affordance Design for Emerging Mobile Devices

    Science.gov (United States)

    Faiola, Anthony; Matei, Sorin Adam

    2010-01-01

    The evolution of human-computer interaction design (HCID) over the last 20 years suggests that there is a growing need for educational scholars to consider new and more applicable theoretical models of interactive product design. The authors suggest that such paradigms would call for an approach that would equip HCID students with a better…

  13. Affective Computing used in an imaging interaction paradigm

    DEFF Research Database (Denmark)

    Schultz, Nette

    2003-01-01

    This paper combines affective computing with an imaging interaction paradigm. An imaging interaction paradigm means that human and computer communicate primarily by images. Images evoke emotions in humans, so the computer must be able to behave in an emotionally intelligent way. An affective image selection...

  14. The Review of Visual Analysis Methods of Multi-modal Spatio-temporal Big Data

    Directory of Open Access Journals (Sweden)

    ZHU Qing

    2017-10-01

    Full Text Available The visual analysis of spatio-temporal big data is not only a state-of-the-art research direction of both big data analysis and data visualization, but also a core module of the pan-spatial information system. This paper reviews existing visual analysis methods at three levels: descriptive visual analysis, explanatory visual analysis and exploratory visual analysis, focusing on spatio-temporal big data's characteristics of multiple sources, multiple granularities, multiple modalities and complex associations. The technical difficulties and development tendencies of multi-modal feature selection, innovative human-computer interaction analysis and exploratory visual reasoning in the visual analysis of spatio-temporal big data are discussed. Research shows that the study of descriptive visual analysis for data visualization is relatively mature. Explanatory visual analysis has become the focus of big data analysis, and is mainly based on interactive data mining in a visual environment to diagnose the implicit causes of problems. Exploratory visual analysis methods still need a major breakthrough.

  15. Human-Computer Interaction and Sociological Insight: A Theoretical Examination and Experiment in Building Affinity in Small Groups

    Science.gov (United States)

    Oren, Michael Anthony

    2011-01-01

    The juxtaposition of classic sociological theory and the relatively young discipline of human-computer interaction (HCI) serves as a powerful mechanism both for exploring the theoretical impacts of technology on human interactions and for applying technological systems to moderate interactions. It is the intent of this dissertation…

  16. Proceedings of the topical meeting on advances in human factors research on man/computer interactions

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    This book discusses the following topics: expert systems and knowledge engineering-I; verification and validation of software; methods for modeling man/computer performance; man/computer interaction problems in producing procedures-1-2; progress and problems with automation-1-2; experience with electronic presentation of procedures-2; intelligent displays and monitors; modeling the user/computer interface; and computer-based human decision-making aids

  17. A multimodal architecture for simulating natural interactive walking in virtual environments

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Serafin, Stefania; Turchet, Luca

    2011-01-01

    We describe a multimodal system that exploits the use of footwear-based interaction in virtual environments. We developed a pair of shoes enhanced with pressure sensors, actuators, and markers. These shoes control a multichannel surround sound system and drive a physically based audio...

  18. SnapAnatomy, a computer-based interactive tool for independent learning of human anatomy.

    Science.gov (United States)

    Yip, George W; Rajendran, Kanagasuntheram

    2008-06-01

    Computer-aided instruction materials are becoming increasingly popular in medical education, particularly in the teaching of human anatomy. This paper describes SnapAnatomy, a new interactive program that the authors designed for independent learning of anatomy. SnapAnatomy is primarily tailored for the beginner student to encourage the learning of anatomy by developing a three-dimensional visualization of human structure that is essential to applications in clinical practice and to the understanding of function. The program allows the student to take apart and to accurately put together body components in an interactive, self-paced and variable manner to achieve the learning outcome.

  19. Experimental evaluation of multimodal human computer interface for tactical audio applications

    NARCIS (Netherlands)

    Obrenovic, Z.; Starcevic, D.; Jovanov, E.; Oy, S.

    2002-01-01

    Mission critical and information overwhelming applications require careful design of the human computer interface. Typical applications include night vision or low visibility mission navigation, guidance through a hostile territory, and flight navigation and orientation. Additional channels of

  20. Evidence Report: Risk of Inadequate Human-Computer Interaction

    Science.gov (United States)

    Holden, Kritina; Ezer, Neta; Vos, Gordon

    2013-01-01

    Human-computer interaction (HCI) encompasses all the methods by which humans and computer-based systems communicate, share information, and accomplish tasks. When HCI is poorly designed, crews have difficulty entering, navigating, accessing, and understanding information. HCI has rarely been studied in an operational spaceflight context, and detailed performance data that would support evaluation of HCI have not been collected; thus, we draw much of our evidence from post-spaceflight crew comments, and from other safety-critical domains like ground-based power plants, and aviation. Additionally, there is a concern that any potential or real issues to date may have been masked by the fact that crews have near constant access to ground controllers, who monitor for errors, correct mistakes, and provide additional information needed to complete tasks. We do not know what types of HCI issues might arise without this "safety net". Exploration missions will test this concern, as crews may be operating autonomously due to communication delays and blackouts. Crew survival will be heavily dependent on available electronic information for just-in-time training, procedure execution, and vehicle or system maintenance; hence, the criticality of the Risk of Inadequate HCI. Future work must focus on identifying the most important contributing risk factors, evaluating their contribution to the overall risk, and developing appropriate mitigations. The Risk of Inadequate HCI includes eight core contributing factors based on the Human Factors Analysis and Classification System (HFACS): (1) Requirements, policies, and design processes, (2) Information resources and support, (3) Allocation of attention, (4) Cognitive overload, (5) Environmentally induced perceptual changes, (6) Misperception and misinterpretation of displayed information, (7) Spatial disorientation, and (8) Displays and controls.

  1. Multimodal processes scheduling in mesh-like network environment

    Directory of Open Access Journals (Sweden)

    Bocewicz Grzegorz

    2015-06-01

    Full Text Available Multimodal process planning and scheduling play a pivotal role in many different domains, including city networks, multimodal transportation systems, and computer and telecommunication networks. A multimodal process can be seen as a process executed in parts by locally running cyclic processes. In that context, the concept of a Mesh-like Multimodal Transportation Network (MMTN), in which several isomorphic subnetworks interact with each other via distinguished subsets of commonly shared intermodal transport interchange facilities (such as railway stations, bus stations or bus/tram stops) so as to provide a variety of demand-responsive passenger transportation services, is examined. Consider a mesh-like layout of a passenger transport network equipped with different lines including buses, trams, metro, trains, etc., where passenger flows are treated as multimodal processes. The goal is to provide a declarative model enabling one to state a constraint satisfaction problem aimed at scheduling multimodal transportation processes encompassing passenger flow itineraries. The main objective is then to provide conditions guaranteeing the solvability of particular transport line scheduling, i.e., guaranteeing the right match-up of locally acting cyclic bus, tram, metro and train schedules to given passenger flow itineraries.
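
    The match-up condition described above can be made concrete with a toy check: given two cyclic lines sharing one interchange, search for a phase offset of one line such that every transfer waits no longer than a bound. The periods, first arrival time, and waiting bound below are illustrative assumptions, not the paper's declarative model.

```python
# Brute-force match-up of two cyclic schedules at one shared interchange:
# find every departure phase of line B such that a passenger arriving on
# line A never waits more than `max_wait` minutes.

def feasible_offsets(period_a, period_b, arrival_a, max_wait):
    """Offsets (departure phase) of line B satisfying the transfer bound."""
    good = []
    hyper = period_a * period_b          # one hyper-period of both lines
    for offset in range(period_b):
        departures = range(offset, hyper + period_b, period_b)
        arrivals = range(arrival_a, hyper, period_a)
        ok = all(min(d - t for d in departures if d >= t) <= max_wait
                 for t in arrivals)
        if ok:
            good.append(offset)
    return good

# line A arrives every 30 min (first at minute 4); line B departs every
# 15 min; each transfer must take at most 5 min
print(feasible_offsets(period_a=30, period_b=15, arrival_a=4, max_wait=5))
```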

  2. Investigation on human serum albumin and Gum Tragacanth interactions using experimental and computational methods.

    Science.gov (United States)

    Moradi, Sajad; Taran, Mojtaba; Shahlaei, Mohsen

    2018-02-01

    A study of the interaction between human serum albumin and Gum Tragacanth, a biodegradable biopolymer, has been undertaken. For this purpose, several experimental and computational methods were used. Thermodynamic parameters and the mode of interaction were investigated using fluorescence spectroscopy at 300 and 310 K. Fourier-transform infrared spectra and synchronous fluorescence spectroscopy were also acquired. To give detailed insight into the possible interactions, docking and molecular dynamics simulations were also applied. Results show that the interaction is based on hydrogen bonding and van der Waals forces. Structural analysis implies no adverse change in protein conformation upon binding of GT. Furthermore, the computational methods confirm evidence of enhancement of the protein's secondary structure in the presence of Gum Tragacanth. Copyright © 2017 Elsevier B.V. All rights reserved.
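
    The hydrogen-bonding/van der Waals conclusion above is conventionally drawn from the signs of the van't Hoff enthalpy and entropy estimated from binding constants at the two temperatures (negative ΔH and ΔS indicate these forces). A worked sketch of that calculation follows; the binding constants are made-up placeholders, not the paper's data.

```python
import math

# Van't Hoff estimate of binding thermodynamics from constants at two
# temperatures. K1 and K2 below are assumed values for illustration only.
R = 8.314                   # gas constant, J / (mol K)
T1, T2 = 300.0, 310.0       # temperatures used in the study (K)
K1, K2 = 2.0e4, 1.2e4       # assumed binding constants at T1, T2

dH = R * math.log(K2 / K1) / (1.0 / T1 - 1.0 / T2)   # van't Hoff enthalpy
dG1 = -R * T1 * math.log(K1)                         # free energy at T1
dS = (dH - dG1) / T1                                 # entropy change

print(f"dH = {dH/1000:.1f} kJ/mol, dS = {dS:.1f} J/(mol K), "
      f"dG(300 K) = {dG1/1000:.1f} kJ/mol")
# both dH and dS come out negative here, consistent with hydrogen
# bonding / van der Waals driven binding
```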

  3. An innovative multimodal virtual platform for communication with devices in a natural way

    Science.gov (United States)

    Kinkar, Chhayarani R.; Golash, Richa; Upadhyay, Akhilesh R.

    2012-03-01

    As technology advances, people are increasingly interested in communicating with machines and computers naturally. This makes machines more compact and portable by avoiding remotes, keyboards, etc., and, the authors argue, can help users live in an environment freer from electromagnetic emissions. This thought has made 'recognition of natural modalities in human computer interaction' a most appealing and promising research field. At the same time, it has been observed that using a single mode of interaction limits the complete utilization of commands as well as data flow. In this paper, a multimodal platform is proposed in which, out of many natural modalities such as eye gaze, speech, voice and face, human gestures are combined with human voice, which minimizes the mean square error. This loosens the strict environment needed for accurate and robust interaction with a single mode. Gestures complement speech: gestures are ideal for direct object manipulation, while natural language is suited to descriptive tasks. Human computer interaction basically requires two broad stages: recognition and interpretation. Recognition and interpretation of natural modalities in complex binary instructions is a tough task, as it integrates the real world with a virtual environment. The main idea of the paper is to develop an efficient model for fusing data coming from heterogeneous sensors: camera and microphone. Through this paper we show that efficiency is increased if heterogeneous data (image & voice) are combined at the feature level using artificial intelligence. The long-term goal of this paper is to design a robust system for people who are physically disabled or have little technical knowledge.
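
    Feature-level fusion, as advocated above, concatenates the modalities' feature vectors before a single classifier sees them, in contrast to fusing two classifiers' decisions. The sketch below illustrates this with made-up image and voice feature sizes and a nearest-centroid classifier; these are assumptions for illustration, not the paper's model.

```python
import numpy as np

# Feature-level fusion: normalize each modality's feature vector and
# concatenate them into one vector before classification.
def zscore(x, eps=1e-8):
    return (x - x.mean()) / (x.std() + eps)

def fuse(image_feat, voice_feat):
    return np.concatenate([zscore(image_feat), zscore(voice_feat)])

rng = np.random.default_rng(1)
# toy training set: two commands, ten fused samples each (64-d image
# features plus 13-d voice features, both placeholders)
centroids = {}
for label in ("open", "close"):
    samples = [fuse(rng.normal(size=64), rng.normal(size=13))
               for _ in range(10)]
    centroids[label] = np.mean(samples, axis=0)

def classify(image_feat, voice_feat):
    f = fuse(image_feat, voice_feat)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

print(classify(rng.normal(size=64), rng.normal(size=13)))
```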

  4. Human-Computer Interaction Handbook Fundamentals, Evolving Technologies, and Emerging Applications

    CERN Document Server

    Jacko, Julie A

    2012-01-01

    The third edition of a groundbreaking reference, The Human--Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications raises the bar for handbooks in this field. It is the largest, most complete compilation of HCI theories, principles, advances, case studies, and more that exist within a single volume. The book captures the current and emerging sub-disciplines within HCI related to research, development, and practice that continue to advance at an astonishing rate. It features cutting-edge advances to the scientific knowledge base as well as visionary perspe

  5. Multimodal Imaging of Human Brain Activity: Rational, Biophysical Aspects and Modes of Integration

    Science.gov (United States)

    Blinowska, Katarzyna; Müller-Putz, Gernot; Kaiser, Vera; Astolfi, Laura; Vanderperren, Katrien; Van Huffel, Sabine; Lemieux, Louis

    2009-01-01

    Until relatively recently the vast majority of imaging and electrophysiological studies of human brain activity have relied on single-modality measurements usually correlated with readily observable or experimentally modified behavioural or brain state patterns. Multi-modal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims of multi-modal imaging and the ways in which it can be accomplished using representative applications. Given the importance of haemodynamic and electrophysiological signals in current multi-modal imaging applications, we also review some of the basic physiology relevant to understanding their relationship. PMID:19547657

  6. Interactive multimodal ambulatory monitoring to investigate the association between physical activity and affect

    Directory of Open Access Journals (Sweden)

    Ulrich W. Ebner-Priemer

    2013-01-01

    Full Text Available Although there is a wealth of evidence that physical activity has positive effects on psychological health, a large proportion of people are inactive. Data regarding counts, steps, and movement patterns are limited in their ability to explain why people remain inactive. We propose that multimodal ambulatory monitoring, which combines the assessment of physical activity with the assessment of psychological variables, helps to elucidate real-world physical activity. Whereas physical activity can be monitored continuously, psychological variables can only be assessed at discrete intervals, such as every hour. Moreover, the assessment of psychological variables must be linked to the activity of interest. For example, if an inactive and overweight person is physically active once a week, psychological variables should be assessed during this episode. Linking the assessment of psychological variables to episodes of an activity of interest can be achieved with interactive monitoring. The primary aim of our interactive multimodal ambulatory monitoring approach was to intentionally increase the number of e-diary assessments during active episodes. We developed and tested an interactive monitoring algorithm that continuously monitors physical activity in everyday life. When predefined thresholds are surpassed, the algorithm triggers a signal for participants to answer questions in their electronic diary. Using data from 70 participants each wearing an accelerometric device for 24 hours, we found that our algorithm quadrupled the frequency of e-diary assessments during the activity episodes of interest compared to random sampling. Multimodal interactive ambulatory monitoring appears to be a promising approach to enhancing our understanding of real-world physical activity and movement.
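
    The triggering logic described above can be sketched as a sliding-window threshold detector with a refractory period, so that a burst of movement prompts an e-diary entry without repeatedly interrupting the participant. All constants below are illustrative assumptions, not the study's calibrated values.

```python
# Activity-triggered sampling sketch: accumulate movement intensity over a
# sliding window and fire an e-diary prompt when a threshold is exceeded,
# with a refractory period between prompts.

def triggered_prompts(activity, threshold=120.0, window=60, refractory=1800):
    """activity: per-second movement counts; returns prompt times (s)."""
    prompts, last_prompt = [], -refractory
    running = 0.0
    for t, a in enumerate(activity):
        running += a
        if t >= window:
            running -= activity[t - window]   # slide the 60 s window
        if running >= threshold and t - last_prompt >= refractory:
            prompts.append(t)
            last_prompt = t
    return prompts

# ten quiet minutes, then a brisk ten-minute walk
trace = [0.1] * 600 + [3.0] * 600
print(triggered_prompts(trace))   # one prompt shortly after walking starts
```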

  7. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots.

    Science.gov (United States)

    Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro

    2018-01-01

    In this paper, we propose a hierarchical spatial concept formation method based on a Bayesian generative model with multimodal information, e.g., vision, position and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using a convolutional neural network (CNN), the hierarchical k-means clustering result of the self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.

  8. Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots

    Directory of Open Access Journals (Sweden)

    Yoshinobu Hagiwara

    2018-03-01

    Full Text Available In this paper, we propose a hierarchical spatial concept formation method based on a Bayesian generative model with multimodal information, e.g., vision, position and word information. Since humans have the ability to select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., "I am in my home" and "I am in front of the table," a hierarchical structure of spatial concepts is necessary in order for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results using a convolutional neural network (CNN), the hierarchical k-means clustering result of the self-position estimated by Monte Carlo localization (MCL), and a set of location names are used, respectively, as features in vision, position, and word information. Experiments in forming hierarchical spatial concepts and evaluating how the proposed method can predict unobserved location names and position categories are performed using a robot in the real world. Results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to predictions made by humans. As an application example of the proposed method in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved based on the formed hierarchical spatial concept.
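
    One ingredient of the pipeline above, the hierarchical clustering of estimated self-positions, can be illustrated with two-level k-means: coarse clusters stand for places such as rooms, and each is refined into finer position categories. The sketch below uses plain k-means as a stand-in for hMLDA, on made-up 2-D positions.

```python
import numpy as np

# Two-level (hierarchical) k-means over 2-D self-position estimates:
# a coarse "room" level, then finer "spot" clusters within one room.
def kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = np.argmin(d, axis=1)
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

positions = np.random.default_rng(1).normal(
    loc=[[0, 0]] * 50 + [[5, 5]] * 50, scale=0.5)   # two rooms, toy data
coarse, _ = kmeans(positions, k=2)                   # room level
fine_labels, _ = kmeans(positions[coarse == 0], k=3) # spots within one room
print(np.bincount(coarse), np.bincount(fine_labels))
```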

  9. Development of Multimodal Human Interface Technology

    Science.gov (United States)

    Hirose, Michitaka

    About 20 years have passed since the term "Virtual Reality" became popular. During these two decades, a novel human interface technology, so-called "multimodal interface technology", has formed. In this paper, recent progress in realtime CG, BCI and five-senses IT is first quickly reviewed. Since the life cycle of an information technology is said to be 20 years or so, novel directions and paradigms of VR technology can be found in conjunction with the aforementioned technologies. At the end of the paper, these futuristic directions, such as ultra-realistic media, are briefly introduced.

  10. Recent developments in multimodality fluorescence imaging probes

    Directory of Open Access Journals (Sweden)

    Jianhong Zhao

    2018-05-01

    Full Text Available Multimodality optical imaging probes have emerged as powerful tools that improve detection sensitivity and accuracy, which are important in disease diagnosis and treatment. In this review, we focus on recent developments in optical fluorescence imaging (OFI) probe integration with other imaging modalities such as X-ray computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), single-photon emission computed tomography (SPECT), and photoacoustic imaging (PAI). The imaging technologies are briefly described in order to introduce the strengths and limitations of each technique and the need for further multimodality optical imaging probe development. The emphasis of this account is placed on how design strategies are currently implemented to afford physicochemically and biologically compatible multimodality optical fluorescence imaging probes. We also present studies that overcame the intrinsic disadvantages of each imaging technique by a multimodality approach with improved detection sensitivity and accuracy. KEY WORDS: Optical imaging, Fluorescence, Multimodality, Near-infrared fluorescence, Nanoprobe, Computed tomography, Magnetic resonance imaging, Positron emission tomography, Single-photon emission computed tomography, Photoacoustic imaging

  11. Guest Editorial Special Issue on Human Computing

    NARCIS (Netherlands)

    Pantic, Maja; Santos, E.; Pentland, A.; Nijholt, Antinus

    2009-01-01

    The seven articles in this special issue focus on human computing. Most focus on two challenging issues in human computing, namely, machine analysis of human behavior in group interactions and context-sensitive modeling.

  12. Visual exploration and analysis of human-robot interaction rules

    Science.gov (United States)

    Zhang, Hui; Boyles, Michael J.

    2013-01-01

    We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming

  13. 3D hierarchical spatial representation and memory of multimodal sensory data

    Science.gov (United States)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for processing and remembering multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory). The multimodal representation and memory is based on a biologically inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space, as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) a simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinates, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, locations of auditory sources) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of them from a spatial perspective (e.g., where the sensory information is coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory) can map onto the hierarchy at different levels. When controlling various machine
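
    Converting between levels of such a hierarchy (e.g., from head-centered to body-centered coordinates) is typically a rigid-body transform. A minimal sketch follows, assuming an illustrative head pose; this is a generic homogeneous-transform example, not the paper's implementation.

```python
import numpy as np

# Convert a sensed 3-D location from a head-centered frame to a
# body-centered frame via a homogeneous transform (rotation + offset).
def transform(yaw_deg, offset):
    """Homogeneous transform for a frame rotated by yaw and translated."""
    y = np.radians(yaw_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(y), -np.sin(y), 0.0],
                 [np.sin(y),  np.cos(y), 0.0],
                 [0.0,        0.0,       1.0]]
    T[:3, 3] = offset
    return T

# assumed head pose: rotated 30 degrees and 0.25 m above the body origin
head_to_body = transform(yaw_deg=30, offset=[0.0, 0.0, 0.25])

p_head = np.array([1.0, 0.2, 0.0, 1.0])   # object 1 m ahead of the head
p_body = head_to_body @ p_head            # same point, body-centered
print(p_body[:3])
```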

  14. Using multiple metaphors and multimodalities as a semiotic resource when teaching year 2 students computational strategies

    Science.gov (United States)

    Mildenhall, Paula; Sherriff, Barbara

    2017-06-01

    Recent research indicates that using multimodal learning experiences can be effective in teaching mathematics. Using a social semiotic lens within a participationist framework, this paper reports on a professional learning collaboration with a primary school teacher designed to explore the use of metaphors and modalities in mathematics instruction. This video case study was conducted in a year 2 classroom over two terms, with the focus on building children's understanding of computational strategies. The findings revealed that the teacher was able to successfully plan both multimodal and multiple metaphor learning experiences that acted as semiotic resources to support the children's understanding of abstract mathematics. The study also led to implications for teaching when using multiple metaphors and multimodalities.

  15. Rhythmic walking interactions with auditory feedback

    DEFF Research Database (Denmark)

    Jylhä, Antti; Serafin, Stefania; Erkut, Cumhur

    2012-01-01

    Walking is a natural rhythmic activity that has become of interest as a means of interacting with software systems such as computer games. Therefore, designing multimodal walking interactions calls for further examination. This exploratory study presents a system capable of different kinds of interactions based on varying the temporal characteristics of the output, using the sound of human walking as the input. The system either provides a direct synthesis of a walking sound based on the detected amplitude envelope of the user's footstep sounds, or provides a continuous synthetic walking sound as a stimulus for the walking human, either with a fixed tempo or a tempo adapting to the human gait. In a pilot experiment, the different interaction modes are studied with respect to their effect on the walking tempo and the experience of the subjects. The results tentatively outline different user profiles...

  16. Dual-Modality Imaging of the Human Finger Joint Systems by Using Combined Multispectral Photoacoustic Computed Tomography and Ultrasound Computed Tomography

    Directory of Open Access Journals (Sweden)

    Yubin Liu

    2016-01-01

    Full Text Available We developed a homemade dual-modality imaging system that combines multispectral photoacoustic computed tomography and ultrasound computed tomography for reconstructing the structural and functional information of human finger joint systems. The fused multispectral photoacoustic-ultrasound computed tomography (MPAUCT) system was examined by phantom and in vivo experimental tests. The imaging results indicate that hard tissues such as the bones, and soft tissues including the blood vessels, tendons, skin, and subcutaneous tissues of the finger joint systems, can be effectively recovered using our multimodality MPAUCT system. The developed MPAUCT system is able to provide us with more comprehensive information about the human finger joints, which shows its potential for the characterization and diagnosis of bone and joint diseases.

  17. Contradictory Explorative Assessment. Multimodal Teacher/Student Interaction in Scandinavian Digital Learning Environments

    Science.gov (United States)

    Kjällander, Susanne

    2018-01-01

    Assessment in the much-discussed digital divide in technologically advanced Scandinavian schools is the study object of this article. Interaction is studied to understand assessment and to see how assessment can be didactically designed to recognise students' learning. With a multimodal, design-theoretical perspective on learning, teachers' and…

  18. Advancements in Violin-Related Human-Computer Interaction

    DEFF Research Database (Denmark)

    Overholt, Daniel

    2014-01-01

    of human intelligence and emotion is at the core of the Musical Interface Technology Design Space, MITDS. This is a framework that endeavors to retain and enhance such traits of traditional instruments in the design of interactive live performance interfaces. Utilizing the MITDS, advanced Human...

  19. Real-time non-invasive eyetracking and gaze-point determination for human-computer interaction and biomedicine

    Science.gov (United States)

    Talukder, Ashit; Morookian, John-Michael; Monacos, S.; Lam, R.; Lebaw, C.; Bond, A.

    2004-01-01

    Eyetracking is one of the latest technologies that has shown potential in several areas including human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals.

  20. Computerized Cognitive Rehabilitation: Comparing Different Human-Computer Interactions.

    Science.gov (United States)

    Quaglini, Silvana; Alloni, Anna; Cattani, Barbara; Panzarasa, Silvia; Pistarini, Caterina

    2017-01-01

    In this work we describe an experiment involving aphasic patients, where the same speech rehabilitation exercise was administered in three different modalities, two of which are computer-based. In particular, one modality exploits the "Makey Makey", an electronic board which allows interacting with the computer using physical objects.

  1. Dreaming Machines : On multimodal fusion and information retrieval using neural-symbolic cognitive agents

    NARCIS (Netherlands)

    Penning, H.L.H. de; Avila Garcez, A. d; Meyer, J.J.C.

    2013-01-01

    Deep Boltzmann Machines (DBM) have been used as a computational cognitive model in various AI-related research and applications, notably in computational vision and multimodal fusion. Being regarded as a biologically plausible model of the human brain, the DBM is also becoming a popular instrument to

  2. Animal-Computer Interaction (ACI) : An analysis, a perspective, and guidelines

    NARCIS (Netherlands)

    van den Broek, E.L.

    2016-01-01

    Animal-Computer Interaction (ACI)’s founding elements are discussed in relation to its overarching discipline Human-Computer Interaction (HCI). Its basic dimensions are identified: agent, computing machinery, and interaction, and their levels of processing: perceptual, cognitive, and affective.

  3. Feasibility Study of Increasing Multimodal Interaction between Private and Public Transport Based on the Use of Intellectual Transport Systems and Services

    Directory of Open Access Journals (Sweden)

    Ulrich Weidmann

    2011-04-01

    Full Text Available The introduction of intellectual transport systems and services (ITS) into the public and private transport sectors is closely connected with the development of multimodality in transport systems (particularly in towns and their suburbs). Taking into consideration the problems of traffic jams, the need to increase the efficiency of power consumption, and the harmful effects of exhaust gases and noise, the multimodal transport concept has recently been gaining ground in most cities. It embraces a system of integrated tickets; infrastructure allowing a passenger to leave a car or a bike near a public transport station and to continue his/her travel by public transport (referred to as 'Park&Ride' and 'Bike&Ride'); as well as real-time information systems, universal design, and computer-aided traffic control. These concepts seem to be even more effective when multimodal intellectual transport systems and services (ITS) are introduced. In Lithuania, ITS is not widely used in passenger transportation, though its potential is great, particularly considering the critical state of the capacity of public transport infrastructure. The paper considers the possibilities of increasing the effectiveness of the public transport system's ITS by increasing its interaction with private transport in the context of multimodal concept realization. Article in Lithuanian

  4. Multimodal imaging analysis of single-photon emission computed tomography and magnetic resonance tomography for improving diagnosis of Parkinson's disease

    International Nuclear Information System (INIS)

    Barthel, H.; Georgi, P.; Slomka, P.; Dannenberg, C.; Kahn, T.

    2000-01-01

    Parkinson's disease (PD) is characterized by a degeneration of nigrostriatal dopaminergic neurons, which can be imaged with 123I-labeled 2β-carbomethoxy-3β-(4-iodophenyl)tropane ([123I]β-CIT) and single-photon emission computed tomography (SPECT). However, the quality of the region of interest (ROI) technique used for quantitative analysis of SPECT data is compromised by the limited anatomical information in the images. We investigated whether the diagnosis of PD can be improved by combining SPECT images with morphological image data from magnetic resonance imaging (MRI)/computed tomography (CT). We examined 27 patients (8 men, 19 women; aged 55±13 years) with PD (Hoehn and Yahr stage 2.1±0.8) by high-resolution [123I]β-CIT SPECT (185-200 MBq, Ceraspect camera). SPECT images were analyzed both by a unimodal technique (ROIs defined directly within the SPECT studies) and a multimodal technique (ROIs defined within individual MRI/CT studies and transferred to the corresponding interactively coregistered SPECT studies). [123I]β-CIT binding ratios (cerebellum as reference), which were obtained for the heads of the caudate nuclei (CA), the putamina (PU), and global striatal structures, were compared with clinical parameters. Differences between contra- and ipsilateral (related to symptom dominance) striatal [123I]β-CIT binding ratios proved to be larger with the multimodal ROI technique than with the unimodal approach (e.g., for PU: 1.2*** vs. 0.7**). Binding ratios obtained by the unimodal ROI technique were significantly correlated with those of the multimodal technique (e.g., for CA: y=0.97x+2.8; r=0.70; P com subscore (r=-0.49* vs. -0.32). These results show that the impact of [123I]β-CIT SPECT for diagnosing PD is affected by the method used to analyze the SPECT images. The described multimodal approach, which is based on coregistration of SPECT and morphological imaging data, leads to improved determination of the degree of this dopaminergic disorder
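
    The binding ratio used above is conventionally computed with the cerebellum as a nondisplaceable reference region: ratio = (ROI counts - cerebellum counts) / cerebellum counts. A minimal sketch with made-up count values follows.

```python
# Specific binding ratio with the cerebellum as reference region.
# All count values below are made-up placeholders, not the study's data.

def binding_ratio(roi_mean, cerebellum_mean):
    return (roi_mean - cerebellum_mean) / cerebellum_mean

caudate = binding_ratio(roi_mean=182.0, cerebellum_mean=40.0)   # ~3.6
putamen = binding_ratio(roi_mean=110.0, cerebellum_mean=40.0)   # ~1.8
# contra- vs. ipsilateral asymmetry, as compared in the study
asymmetry = abs(putamen - binding_ratio(95.0, 40.0))
print(caudate, putamen, asymmetry)
```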

  5. Soft Electronics Enabled Ergonomic Human-Computer Interaction for Swallowing Training

    Science.gov (United States)

    Lee, Yongkuk; Nicholls, Benjamin; Sup Lee, Dong; Chen, Yanfei; Chun, Youngjae; Siang Ang, Chee; Yeo, Woon-Hong

    2017-04-01

    We introduce a skin-friendly electronic system that enables human-computer interaction (HCI) for swallowing training in dysphagia rehabilitation. For an ergonomic HCI, we utilize a soft, highly compliant ("skin-like") electrode, which addresses critical issues of existing rigid, planar electrodes combined with problematic conductive electrolytes and adhesive pads. The skin-like electrode offers highly conformal, user-comfortable contact with the skin for long-term wearable, high-fidelity recording of swallowing electromyograms on the chin. Mechanics modeling and experimental quantification capture the ultra-elastic mechanical characteristics of an open-mesh microstructured sensor conjugated with an elastomeric membrane. Systematic in vivo studies investigate the functionality of the soft electronics for HCI-enabled swallowing training, including the application of a biofeedback system to detect swallowing behavior. The collection of results demonstrates the clinical feasibility of ergonomic electronics in HCI-driven rehabilitation for patients with swallowing disorders.

  6. Ghost-in-the-Machine reveals human social signals for human–robot interaction

    Science.gov (United States)

    Loth, Sebastian; Jettka, Katharina; Giuliani, Manuel; de Ruiter, Jan P.

    2015-01-01

    We used a new method called “Ghost-in-the-Machine” (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer’s requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human–robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience. PMID:26582998

  7. Investigation of protein selectivity in multimodal chromatography using in silico designed Fab fragment variants.

    Science.gov (United States)

    Karkov, Hanne Sophie; Krogh, Berit Olsen; Woo, James; Parimal, Siddharth; Ahmadian, Haleh; Cramer, Steven M

    2015-11-01

    In this study, a unique set of antibody Fab fragments was designed in silico and produced to examine the relationship between protein surface properties and selectivity in multimodal chromatographic systems. We hypothesized that multimodal ligands containing both hydrophobic and charged moieties would interact strongly with protein surface regions where charged groups and hydrophobic patches were in close spatial proximity. Protein surface property characterization tools were employed to identify the potential multimodal ligand binding regions on the Fab fragment of a humanized antibody and to evaluate the impact of mutations on surface charge and hydrophobicity. Twenty Fab variants were generated by site-directed mutagenesis, recombinant expression, and affinity purification. Column gradient experiments were carried out with the Fab variants in multimodal, cation-exchange, and hydrophobic interaction chromatographic systems. The results clearly indicated that selectivity in the multimodal system was different from the other chromatographic modes examined. Column retention data for the reduced charge Fab variants identified a binding site comprising light chain CDR1 as the main electrostatic interaction site for the multimodal and cation-exchange ligands. Furthermore, the multimodal ligand binding was enhanced by additional hydrophobic contributions as evident from the results obtained with hydrophobic Fab variants. The use of in silico protein surface property analyses combined with molecular biology techniques, protein expression, and chromatographic evaluations represents a previously undescribed and powerful approach for investigating multimodal selectivity with complex biomolecules. © 2015 Wiley Periodicals, Inc.

  8. Adaptive multimodal interaction in mobile augmented reality: A conceptual framework

    Science.gov (United States)

    Abidin, Rimaniza Zainal; Arshad, Haslina; Shukri, Saidatul A'isyah Ahmad

    2017-10-01

    Augmented Reality (AR) is an emerging technology in many mobile applications. Mobile AR is defined as a medium for displaying information merged with the real-world environment, mapped to the augmented reality surroundings in a single view. There are four main types of mobile augmented reality interfaces, one of which is the multimodal interface. A multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture, gaze, and head and body movements) in a coordinated manner with multimedia system output. Many frameworks have been proposed to guide designers in developing multimodal applications, including in augmented reality environments, but there has been little work reviewing frameworks for adaptive multimodal interfaces in mobile augmented reality. The main goal of this study is to propose a conceptual framework that illustrates the adaptive multimodal interface in mobile augmented reality. We reviewed several frameworks that have been proposed in the fields of multimodal interfaces, adaptive interfaces, and augmented reality. We analyzed the components of the previous frameworks and assessed which can be applied to mobile devices. Our framework can be used as a guide for designers and developers building mobile AR applications with adaptive multimodal interfaces.

  9. Using Noninvasive Brain Measurement to Explore the Psychological Effects of Computer Malfunctions on Users during Human-Computer Interactions

    Directory of Open Access Journals (Sweden)

    Leanne M. Hirshfield

    2014-01-01

    Full Text Available In today’s technologically driven world, there is a need to better understand the ways that common computer malfunctions affect computer users. These malfunctions may have measurable influences on computer users’ cognitive, emotional, and behavioral responses. An experiment was conducted in which participants carried out a series of web search tasks while wearing functional near-infrared spectroscopy (fNIRS) and galvanic skin response sensors. Two computer malfunctions were introduced during the sessions, which had the potential to influence correlates of user trust and suspicion. Surveys were given after each session to measure users’ perceived emotional state, cognitive load, and perceived trust. Results suggest that fNIRS can be used to measure the different cognitive and emotional responses associated with computer malfunctions. These cognitive and emotional changes were correlated with users’ self-reported levels of suspicion and trust, and they in turn suggest future work that further explores the capability of fNIRS for the measurement of user experience during human-computer interactions.

  10. Multimodality

    DEFF Research Database (Denmark)

    Buhl, Mie

    2010-01-01

    In this paper, I address an ongoing discussion in Danish E-learning research about how to take advantage of the fact that digital media facilitate other communication forms than text, so-called ‘multimodal' communication, which should not be confused with the term ‘multimedia'. While multimedia...... on their teaching and learning situations. The choices they make involve e-learning resources like videos, social platforms and mobile devices, not just as digital artefacts we interact with, but the entire practice of using digital media. In a life-long learning perspective, multimodality is potentially very...

  11. A multimodal interface for real-time soldier-robot teaming

    Science.gov (United States)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools toward robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with those of human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and the robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smartphones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture-recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g., response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.
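
    At the fusion level, an interface like this must pair a recognized utterance with a temporally adjacent gesture before language understanding resolves the command. The sketch below is deliberately simple and hypothetical: the event types, time window, and nearest-gesture rule are assumptions, not the prototype's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str   # "speech" or "gesture"
    payload: str    # e.g. a transcript or a recognized gesture label
    t: float        # timestamp in seconds

def fuse(events, window=1.5):
    """Pair each speech event with the nearest gesture within `window` s."""
    gestures = [e for e in events if e.modality == "gesture"]
    commands = []
    for s in (e for e in events if e.modality == "speech"):
        near = [g for g in gestures if abs(g.t - s.t) <= window]
        ref = min(near, key=lambda g: abs(g.t - s.t)).payload if near else None
        commands.append((s.payload, ref))
    return commands

log = [Event("gesture", "point:north-east", 10.2),
       Event("speech", "scout that building", 10.9)]
print(fuse(log))  # [('scout that building', 'point:north-east')]
```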

  12. Multimodal imaging of the human knee down to the cellular level

    Science.gov (United States)

    Schulz, G.; Götz, C.; Müller-Gerbl, M.; Zanette, I.; Zdora, M.-C.; Khimchenko, A.; Deyhle, H.; Thalmann, P.; Müller, B.

    2017-06-01

    Computed tomography reaches the best spatial resolution among the available nondestructive clinical imaging techniques for the three-dimensional visualization of human tissues. Nowadays, sub-millimeter voxel sizes are regularly obtained. For investigations at the true micrometer level, lab-based micro-CT (μCT) has become the gold standard. The aim of the present study is, first, the hierarchical investigation of a human knee post mortem using hard X-ray μCT and, second, multimodal imaging using absorption and phase-contrast modes in order to investigate hard (bone) and soft (cartilage) tissues at the cellular level. After visualization of the entire knee using a clinical CT, a hierarchical imaging study was performed using the lab system nanotom® m. First, the entire knee was measured with a pixel length of 65 μm. The highest resolution, with a pixel length of 3 μm, was achieved after extracting cylindrically shaped plugs from the femoral bones. For the visualization of the cartilage, grating-based phase-contrast μCT (I13-2, Diamond Light Source) was performed. With an effective voxel size of 2.3 μm, it was possible to visualize individual chondrocytes within the cartilage.

  13. Accident sequence analysis of human-computer interface design

    International Nuclear Information System (INIS)

    Fan, C.-F.; Chen, W.-H.

    2000-01-01

    It is important to predict potential accident sequences of human-computer interaction in a safety-critical computing system so that vulnerable points can be disclosed and removed. We address this issue by proposing a Multi-Context human-computer interaction Model along with its analysis techniques: an Augmented Fault Tree Analysis and a Concurrent Event Tree Analysis. The proposed augmented fault tree can identify the potential weak points in software design that may induce unintended software functions or erroneous human procedures. The concurrent event tree can enumerate possible accident sequences due to these weak points.
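
    The enumeration itself is mechanical: every pivotal event branches into outcomes, and each branch containing a failure is a candidate accident sequence. A toy sketch with hypothetical pivotal events (the paper's Multi-Context model and tree contents are richer than this):

```python
from itertools import product

# Hypothetical pivotal events in a human-computer interaction sequence:
pivotal_events = {
    "alarm_displayed":   ("success", "failure"),
    "operator_notices":  ("success", "failure"),
    "correct_procedure": ("success", "failure"),
}

def accident_sequences():
    """Enumerate event-tree branches that end in an unsafe outcome
    (here: any branch containing at least one failure)."""
    names = list(pivotal_events)
    for outcome in product(*pivotal_events.values()):
        if "failure" in outcome:
            yield dict(zip(names, outcome))

for seq in accident_sequences():
    print(seq)   # 7 of the 2**3 branches are candidate accident sequences
```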

  14. Brain-Computer Interfaces Revolutionizing Human-Computer Interaction

    CERN Document Server

    Graimann, Bernhard; Allison, Brendan

    2010-01-01

    A brain-computer interface (BCI) establishes a direct output channel between the human brain and external devices. BCIs infer user intent via direct measures of brain activity and thus enable communication and control without movement. This book, authored by experts in the field, provides an accessible introduction to the neurophysiological and signal-processing background required for BCI, presents state-of-the-art non-invasive and invasive approaches, gives an overview of current hardware and software solutions, and reviews the most interesting as well as new, emerging BCI applications. The book is intended not only for students and young researchers, but also for newcomers and other readers from diverse backgrounds keen to learn about this vital scientific endeavour.

  15. Human motion sensing and recognition a fuzzy qualitative approach

    CERN Document Server

    Liu, Honghai; Ji, Xiaofei; Chan, Chee Seng; Khoury, Mehdi

    2017-01-01

    This book introduces readers to the latest exciting advances in human motion sensing and recognition, from the theoretical development of fuzzy approaches to their applications. The topics covered include human motion recognition in 2D and 3D, hand motion analysis with contact sensors, and vision-based view-invariant motion recognition, especially from the perspective of Fuzzy Qualitative techniques. With the rapid development of technologies in microelectronics, computers, networks, and robotics over the last decade, increasing attention has been focused on human motion sensing and recognition in many emerging and active disciplines where human motions need to be automatically tracked, analyzed or understood, such as smart surveillance, intelligent human-computer interaction, robot motion learning, and interactive gaming. Current challenges mainly stem from the dynamic environment, data multi-modality, uncertain sensory information, and real-time issues. These techniques are shown to effectively address the ...

  16. Advances in Human-Computer Interaction: Graphics and Animation Components for Interface Design

    Science.gov (United States)

    Cipolla Ficarra, Francisco V.; Nicol, Emma; Cipolla-Ficarra, Miguel; Richardson, Lucy

    We present an analysis of a communicability methodology for graphics and animation components in interface design, called CAN (Communicability, Acceptability and Novelty). This methodology was developed between 2005 and 2010, obtaining excellent results in cultural heritage, education and microcomputing contexts, in studies where there is a bi-directional interrelation between ergonomics, usability, user-centered design, software quality and human-computer interaction. We also present heuristic results on iconography and layout design in blogs and websites from the following countries: Spain, Italy, Portugal and France.

  17. Multimodality Registration without a Dedicated Multimodality Scanner

    Directory of Open Access Journals (Sweden)

    Bradley J. Beattie

    2007-03-01

    Full Text Available Multimodality scanners that allow the acquisition of both functional and structural image sets on a single system have recently become available for animal research use. Although the resultant registered functional/structural image sets can greatly enhance the interpretability of the functional data, the cost of multimodality systems can be prohibitive, and they are often limited to two modalities, which generally do not include magnetic resonance imaging. Using a thin plastic wrap to immobilize and fix a mouse or other small animal atop a removable bed, we are able to calculate registrations between all combinations of four different small animal imaging scanners (positron emission tomography, single-photon emission computed tomography, magnetic resonance, and computed tomography [CT]) at our disposal, effectively equivalent to a quadruple-modality scanner. A comparison of serially acquired CT images, with intervening acquisitions on other scanners, demonstrates the ability of the proposed procedures to maintain the rigidity of an anesthetized mouse during transport between scanners. Movement of the bony structures of the mouse was estimated to be 0.62 mm. Soft tissue movement was predominantly the result of the filling (or emptying) of the urinary bladder and thus largely constrained to this region. Phantom studies estimate the registration errors for all registration types to be less than 0.5 mm. Functional images using tracers targeted to known structures verify the accuracy of the functional to structural registrations. The procedures are easy to perform and produce robust and accurate results that rival those of dedicated multimodality scanners, but with more flexible registration combinations and while avoiding the expense and redundancy of multimodality systems.
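
    Given matched fiducials on the fixed bed, the inter-scanner transform has a closed-form least-squares solution, and transforms compose, so CT-to-PET and PET-to-MR registrations also yield CT-to-MR. Below is a generic Kabsch/Procrustes sketch; marker-based estimation is an assumption here, as the abstract does not detail the study's own fitting procedure.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform with dst ~ R @ src + t (Kabsch)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

rng = np.random.default_rng(0)
pts_ct = rng.uniform(0, 40, size=(6, 3))       # fiducials in CT coordinates
theta = np.deg2rad(7)                          # toy ground-truth rotation
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0.0, 0.0, 1.0]])
pts_mr = pts_ct @ R_true.T + np.array([5.0, -2.0, 1.5])
R, t = rigid_transform(pts_ct, pts_mr)
print(np.allclose(R, R_true), np.round(t, 3))  # True [ 5. -2.  1.5]
```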

  18. User involvement in the design of human-computer interactions: some similarities and differences between design approaches

    NARCIS (Netherlands)

    Bekker, M.M.; Long, J.B.

    1998-01-01

    This paper presents a general review of user involvement in the design of human-computer interactions, as advocated by a selection of different approaches to design. The selection comprises User-Centred Design, Participatory Design, Socio-Technical Design, Soft Systems Methodology, and Joint Application Design.

  19. Multimodal emotional state recognition using sequence-dependent deep hierarchical features.

    Science.gov (United States)

    Barros, Pablo; Jirak, Doreen; Weber, Cornelius; Wermter, Stefan

    2015-12-01

    Emotional state recognition has become an important topic for human-robot interaction in the past years. By determining emotion expressions, robots can identify important variables of human behavior and use these to communicate in a more human-like fashion, thereby extending the interaction possibilities. Human emotions are multimodal and spontaneous, which makes them hard for robots to recognize. Each modality has its own restrictions and constraints which, together with the non-structured behavior of spontaneous expressions, create several difficulties for the approaches present in the literature, which are based on several explicit feature extraction techniques and manual modality fusion. Our model uses a hierarchical feature representation to deal with spontaneous emotions, and learns how to integrate multiple modalities for non-verbal emotion recognition, making it suitable for use in an HRI scenario. Our experiments show that a significant improvement of recognition accuracy is achieved when we use hierarchical features and multimodal information, and our model improves the accuracy of state-of-the-art approaches from 82.5% reported in the literature to 91.3% for a benchmark dataset on spontaneous emotion expressions. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
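
    For orientation, the simplest decision-level baseline against which such models are compared is a fixed weighted average of per-modality class posteriors. The sketch below shows only that baseline; the paper's point is to learn the cross-modal integration inside a hierarchical network rather than fix it by hand.

```python
import numpy as np

def late_fusion(probs_by_modality, weights=None):
    """Combine per-modality class posteriors by weighted averaging."""
    P = np.vstack(probs_by_modality)           # (n_modalities, n_classes)
    w = (np.ones(len(P)) / len(P)) if weights is None else np.asarray(weights)
    fused = w @ P
    return fused / fused.sum()                 # renormalize to a distribution

face = np.array([0.70, 0.20, 0.10])    # e.g. happy / neutral / angry
audio = np.array([0.40, 0.45, 0.15])
print(late_fusion([face, audio], weights=[0.6, 0.4]))
```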

  20. A System for Multimodal Interaction with Kinect-Enabled Virtual Windows

    Directory of Open Access Journals (Sweden)

    Ana M. Bernardos

    2015-11-01

    Full Text Available Commercial off-the-shelf gaming devices (e.g., Kinect) are demonstrating great potential beyond their initial service purpose. In particular, when integrated within the environment or as part of smart objects, peripheral COTS gaming devices may facilitate the definition of novel interaction methods, particularly applicable to smart-space service concepts. In this direction, this paper describes a system prototype that makes it possible to deliver multimodal interaction with the media contents in a Virtual Window. Using a Kinect device, the Interactive Window adjusts the video clipping to the real-time perspective of the user, who can freely move within the sensor coverage area. On the clipped video, the user is able to select objects by pointing at meaningful image sections and to initiate actions related to them. Voice orders may also complete the interaction when necessary. Although implemented for smart spaces, the service concept can also be applied to learning, remote control processes or teleconferencing.

  1. Multimodality Inferring of Human Cognitive States Based on Integration of Neuro-Fuzzy Network and Information Fusion Techniques

    Directory of Open Access Journals (Sweden)

    P. Bhattacharya

    2007-11-01

    Full Text Available To achieve effective and safe operation of a machine system in which the human and the machine interact, the machine needs to understand the human state, especially the cognitive state, when the human's operation task demands intensive cognitive activity. Because human cognitive states and behaviors, as well as their expressions or cues, are highly uncertain, the recent trend in inferring the human state is to consider multimodal features of the human operator. In this paper, we present a method for multimodal inference of human cognitive states by integrating neuro-fuzzy network and information fusion techniques. To demonstrate the effectiveness of this method, we take driver fatigue detection as an example. The proposed method has, in particular, the following new features. First, human expressions are classified into four categories: (i) casual or contextual features, (ii) contact features, (iii) contactless features, and (iv) performance features. Second, the fuzzy neural network technique, in particular the Takagi-Sugeno-Kang (TSK) model, is employed to cope with uncertain behaviors. Third, the sensor fusion technique, in particular ordered weighted aggregation (OWA), is integrated with the TSK model in such a way that cues are taken as inputs to the TSK model, and the outputs of the TSK are then fused by the OWA, which gives outputs corresponding to the particular cognitive states of interest (e.g., fatigue). We call this method TSK-OWA. Validation of the TSK-OWA, performed in the Northeastern University vehicle drive simulator, has shown that the proposed method is promising as a general tool for human cognitive state inference and a special tool for driver fatigue detection.
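
    The TSK-OWA combination can be illustrated compactly: Gaussian-membership TSK rules map each cue to a score, and OWA then weights the sorted scores, so weights attach to rank rather than to a particular cue. All rule parameters and weights below are illustrative placeholders, not the trained values from the paper.

```python
import numpy as np

def tsk_output(x, centers, widths, coeffs):
    """First-order Takagi-Sugeno-Kang inference for one scalar cue:
    Gaussian memberships weight linear consequents y_i = a_i * x + b_i."""
    mu = np.exp(-((x - centers) ** 2) / (2 * widths ** 2))
    y = coeffs[:, 0] * x + coeffs[:, 1]
    return (mu @ y) / mu.sum()

def owa(scores, weights):
    """Ordered weighted aggregation over the descending-sorted scores."""
    return np.sort(scores)[::-1] @ weights

# One TSK score per cue category (contextual, contact, contactless,
# performance), fused by OWA into a single fatigue level:
cues = np.array([0.8, 0.6, 0.7, 0.5])
scores = [tsk_output(c, centers=np.array([0.2, 0.8]),
                     widths=np.array([0.25, 0.25]),
                     coeffs=np.array([[0.2, 0.0], [0.9, 0.1]]))
          for c in cues]
print("fatigue score:", owa(np.array(scores), np.array([0.4, 0.3, 0.2, 0.1])))
```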

  2. Multimodal Learning Analytics and Education Data Mining: Using Computational Technologies to Measure Complex Learning Tasks

    Science.gov (United States)

    Blikstein, Paulo; Worsley, Marcelo

    2016-01-01

    New high-frequency multimodal data collection technologies and machine learning analysis techniques could offer new insights into learning, especially when students have the opportunity to generate unique, personalized artifacts, such as computer programs, robots, and solutions to engineering challenges. To date most of the work on learning analytics…

  3. The Emotiv EPOC interface paradigm in Human-Computer Interaction

    OpenAIRE

    Ancău Dorina; Roman Nicolae-Marius; Ancău Mircea

    2017-01-01

    Numerous studies have suggested the use of decoded error potentials in the brain to improve human-computer communication. Together with state-of-the-art scientific equipment, experiments have also tested instruments with more limited performance for the time being, such as the Emotiv EPOC. This study presents a review of these trials and a summary of the results obtained. The level of these results indicates a promising prospect for using this headset as a human-computer interface for error decoding.

  4. Multimodal system for the planning and guidance of bronchoscopy

    Science.gov (United States)

    Higgins, William E.; Cheirsilp, Ronnarit; Zang, Xiaonan; Byrnes, Patrick

    2015-03-01

    Many technical innovations in multimodal radiologic imaging and bronchoscopy have emerged recently in the effort against lung cancer. Modern X-ray computed-tomography (CT) scanners provide three-dimensional (3D) high-resolution chest images, positron emission tomography (PET) scanners give complementary molecular imaging data, and new integrated PET/CT scanners combine the strengths of both modalities. State-of-the-art bronchoscopes permit minimally invasive tissue sampling, with vivid endobronchial video enabling navigation deep into the airway-tree periphery, while complementary endobronchial ultrasound (EBUS) reveals local views of anatomical structures outside the airways. In addition, image-guided intervention (IGI) systems have proven their utility for CT-based planning and guidance of bronchoscopy. Unfortunately, no IGI system exists that integrates all sources effectively through the complete lung-cancer staging work flow. This paper presents a prototype of a computer-based multimodal IGI system that strives to fill this need. The system combines a wide range of automatic and semi-automatic image-processing tools for multimodal data fusion and procedure planning. It also provides a flexible graphical user interface for follow-on guidance of bronchoscopy/EBUS. Human-study results demonstrate the system's potential.

  5. The Emotiv EPOC interface paradigm in Human-Computer Interaction

    Directory of Open Access Journals (Sweden)

    Ancău Dorina

    2017-01-01

    Full Text Available Numerous studies have suggested the use of decoded error potentials in the brain to improve human-computer communication. Together with state-of-the-art scientific equipment, experiments have also tested instruments with more limited performance for the time being, such as Emotiv EPOC. This study presents a review of these trials and a summary of the results obtained. However, the level of these results indicates a promising prospect for using this headset as a human-computer interface for error decoding.

  6. Audio Technology and Mobile Human Computer Interaction

    DEFF Research Database (Denmark)

    Chamberlain, Alan; Bødker, Mads; Hazzard, Adrian

    2017-01-01

    Audio-based mobile technology is opening up a range of new interactive possibilities. This paper brings some of those possibilities to light by offering a range of perspectives based in this area. It is not only the technical systems that are developing, but novel approaches to the design...... and understanding of audio-based mobile systems are evolving to offer new perspectives on interaction and design and support such systems to be applied in areas, such as the humanities....

  7. A multimodal interface to resolve the Midas-Touch problem in gaze controlled wheelchair.

    Science.gov (United States)

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, KongFatt; Prasad, Girijesh

    2017-07-01

    Human-computer interaction (HCI) research has been playing an essential role in the field of rehabilitation. The usability of gaze-controlled powered wheelchairs is limited due to the Midas-Touch problem. In this work, we propose a multimodal graphical user interface (GUI) to control a powered wheelchair, aimed at helping people with upper-limb mobility impairments in daily living activities. The GUI was designed around a portable, low-cost eye-tracker and a soft-switch, wherein the wheelchair can be controlled in three different ways: 1) with a touchpad, 2) with an eye-tracker only, and 3) with the eye-tracker plus the soft-switch. The interface includes nine different commands (eight directions and stop) and is integrated within a powered wheelchair system. We evaluated the performance of the multimodal interface in terms of lap-completion time, number of commands, and information transfer rate (ITR) with eight healthy participants. The analysis of the results showed that the eye-tracker with soft-switch provided the best performance among the three conditions, with an ITR of 37.77 bits/min.
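
    ITR figures like the one quoted are conventionally computed with the Wolpaw formula, which combines the number of selectable commands, the selection accuracy, and the selection rate. Below is a sketch using the interface's nine commands; the accuracy and rate are illustrative, not the study's raw data.

```python
import numpy as np

def itr_bits_per_min(n_targets, accuracy, selections_per_min):
    """Wolpaw ITR: bits per selection times selections per minute."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = np.log2(n)
    else:
        bits = (np.log2(n) + p * np.log2(p)
                + (1 - p) * np.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# Nine commands (eight directions + stop); error-free selection at
# 12 commands/min would give roughly the reported magnitude:
print(f"{itr_bits_per_min(9, 1.0, 12):.2f} bits/min")   # ~38 bits/min
```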

  8. Ergonomic guidelines for using notebook personal computers. Technical Committee on Human-Computer Interaction, International Ergonomics Association.

    Science.gov (United States)

    Saito, S; Piccoli, B; Smith, M J; Sotoyama, M; Sweitzer, G; Villanueva, M B; Yoshitake, R

    2000-10-01

    In the 1980s, the visual display terminal (VDT) was introduced in workplaces in many countries. Soon thereafter, an upsurge in reported cases of related health problems, such as musculoskeletal disorders and eyestrain, was seen. Recently, the flat-panel display or notebook personal computer (PC) has become the most remarkable feature of modern workplaces with VDTs, and even of homes. A proactive approach must be taken to avert foreseeable ergonomic and occupational health problems arising from the use of this new technology. Because of its distinct physical and optical characteristics, the ergonomic requirements for notebook PCs in terms of machine layout, workstation design, and lighting conditions, among others, should differ from those for CRT-based computers. The Japan Ergonomics Society (JES) technical committee drew up a set of guidelines for notebook PC use following exploratory discussions on its ergonomic aspects. To keep in stride with this development, the Technical Committee on Human-Computer Interaction under the auspices of the International Ergonomics Association worked towards the international issuance of the guidelines. This paper unveils the result of this collaborative effort.

  9. Human-Computer Systems Interaction: Backgrounds and Applications 2, Part 2

    CERN Document Server

    Kulikowski, Juliusz; Mroczek, Teresa

    2012-01-01

    This volume of the book contains a collection of chapters selected from the papers which originally (in shortened form) were presented at the 3rd International Conference on Human-Systems Interaction held in Rzeszow, Poland, in 2010. The chapters are divided into four sections: IV. Environment monitoring and robotic systems, V. Diagnostic systems, VI. Educational systems, and VII. General problems. The novel concepts and realizations of humanoid robots, talking robots and orthopedic surgical robots, as well as those of direct brain-computer interfaces, are examples of particularly interesting topics presented in Sec. IV. In Sec. V the problems of skin cancer recognition, colonoscopy diagnosis, and brain stroke diagnosis, as well as the more general problem of ontology design for medical diagnostic knowledge, are presented. An example of an industrial diagnostic system and a concept of a new algorithm for edge detection in computer-analyzed images are also presented in this section. Among the edu...

  10. Time-dependent, multimode interaction analysis of the gyroklystron amplifier

    Energy Technology Data Exchange (ETDEWEB)

    Swati, M. V., E-mail: swati.mv.ece10@iitbhu.ac.in; Chauhan, M. S.; Jain, P. K. [Department of Electronics Engineering, Indian Institute of Technology, Banaras Hindu University, Varanasi 221005 (India)

    2016-08-15

    In this paper, a time-dependent multimode nonlinear analysis for the gyroklystron amplifier has been developed by extending the analysis of gyrotron oscillators using the self-consistent approach. The nonlinear analysis developed here has been validated against the reported experimental results for a 32.3 GHz, three-cavity, second-harmonic gyroklystron operating in the TE02 mode. The analysis has been used to estimate the temporal RF growth in the operating mode as well as in nearby competing modes. Device gain and bandwidth have been computed for different drive powers and frequencies. The effect of various beam parameters, such as beam voltage, beam current, and pitch factor, has also been studied. The computational results estimate a saturated RF power of ~319 kW at 32.3 GHz with ~23% efficiency and ~26.3 dB gain, with a device bandwidth of ~0.027% (8 MHz) for a 70 kV, 20 A electron beam. The computed results are found to agree with the experimental values within 10%.
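
    The quoted operating point can be sanity-checked with two one-line calculations: electronic efficiency is RF output power over DC beam power, and the implied drive power follows from the gain. The check below uses only the figures quoted above; no new data.

```python
# Consistency check of the quoted 32.3 GHz gyroklystron operating point:
P_out = 319e3            # saturated RF output power, W
V, I = 70e3, 20.0        # beam voltage (V) and beam current (A)
gain_db = 26.3

efficiency = P_out / (V * I)           # RF power over beam power
P_in = P_out / 10 ** (gain_db / 10)    # drive power implied by the gain

print(f"efficiency  = {efficiency:.1%}")   # ~22.8%, matching the ~23% quoted
print(f"drive power = {P_in:.0f} W")       # ~748 W implied input drive
```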

  11. The integration of emotional and symbolic components in multimodal communication

    Directory of Open Access Journals (Sweden)

Marc Mehu

    2015-07-01

    Full Text Available Human multimodal communication can be said to serve two main purposes: information transfer and social influence. In this paper, I argue that different components of multimodal signals play different roles in the processes of information transfer and social influence. Although the symbolic components of communication (e.g., verbal and denotative signals) are well suited to transfer conceptual information, emotional components (e.g., nonverbal signals that are difficult to manipulate voluntarily) likely take a function that is closer to social influence. I suggest that emotion should be considered a property of communicative signals, rather than an entity that is transferred as content by nonverbal signals. In this view, the effects of emotional processes on communication serve to change the quality of social signals to make them more efficient at producing responses in perceivers, whereas symbolic components increase the signals’ efficiency at interacting with the cognitive processes dedicated to the assessment of relevance. The interaction between symbolic and emotional components will be discussed in relation to the need for perceivers to evaluate the reliability of multimodal signals.

  12. The integration of emotional and symbolic components in multimodal communication

    Science.gov (United States)

    Mehu, Marc

    2015-01-01

    Human multimodal communication can be said to serve two main purposes: information transfer and social influence. In this paper, I argue that different components of multimodal signals play different roles in the processes of information transfer and social influence. Although the symbolic components of communication (e.g., verbal and denotative signals) are well suited to transfer conceptual information, emotional components (e.g., non-verbal signals that are difficult to manipulate voluntarily) likely take a function that is closer to social influence. I suggest that emotion should be considered a property of communicative signals, rather than an entity that is transferred as content by non-verbal signals. In this view, the effect of emotional processes on communication serve to change the quality of social signals to make them more efficient at producing responses in perceivers, whereas symbolic components increase the signals’ efficiency at interacting with the cognitive processes dedicated to the assessment of relevance. The interaction between symbolic and emotional components will be discussed in relation to the need for perceivers to evaluate the reliability of multimodal signals. PMID:26217280

  13. A truly human interface: Interacting face-to-face with someone whose words are determined by a computer program

    Directory of Open Access Journals (Sweden)

Kevin Corti

    2015-05-01

    Full Text Available We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (echoborgs) capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot’s chance of passing but did increase interrogators’ ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg neither sensed nor suspected a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human-computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence.

  14. Virtual microscopy: Merging of computer mediated collaboration and intuitive interfacing

    OpenAIRE

    De Ridder, H.; De Ridder-Sluiter, J.G.; Kluin, P.M.; Christiaans, H.H.C.M.

    2009-01-01

    Ubiquitous computing (or Ambient Intelligence) is an upcoming technology that is usually associated with futuristic smart environments in which information is available anytime, anywhere, and with which humans can interact in a natural, multimodal way. However spectacular the corresponding scenarios may be, it is equally challenging to consider how this technology may enhance existing situations. This is illustrated by a case study from the Dutch medical field: central quality reviewing for pathology.

  15. Multimodal Counseling Interventions: Effect on Human Papilloma Virus Vaccination Acceptance

    OpenAIRE

    Oroma Nwanodi; Helen Salisbury; Curtis Bay

    2017-01-01

    Human papilloma virus (HPV) vaccine was developed to reduce HPV-attributable cancers, external genital warts (EGW), and recurrent respiratory papillomatosis. Adolescent HPV vaccination series completion rates are less than 40% in the United States of America, but up to 80% in Australia and the United Kingdom. Population-based herd immunity requires 80% or greater vaccination series completion rates. Pro-vaccination counseling facilitates increased vaccination rates. Multimodal counseling interventions…

  16. Brain-Computer Interfaces. Applying our Minds to Human-Computer Interaction

    NARCIS (Netherlands)

    Tan, Desney S.; Nijholt, Antinus

    2010-01-01

    For generations, humans have fantasized about the ability to create devices that can see into a person’s mind and thoughts, or to communicate and interact with machines through thought alone. Such ideas have long captured the imagination of humankind in the form of ancient myths and modern science fiction stories.

  17. A Framework for Agent-based Human Interaction Support

    Directory of Open Access Journals (Sweden)

    Axel Bürkle

    2008-10-01

    Full Text Available In this paper we describe an agent-based infrastructure for multimodal perceptual systems which aims at developing and realizing computer services that are delivered to humans in an implicit and unobtrusive way. The framework presented here supports the implementation of human-centric, context-aware applications providing non-obtrusive assistance to participants in events such as meetings, lectures, conferences and presentations taking place in indoor "smart spaces". We emphasize the design and implementation of an agent-based framework that supports "pluggable" service logic, in the sense that the service developer can concentrate on coding the service logic independently of the underlying middleware. Furthermore, we give an example of the architecture's ability to support the cooperation of multiple services in a meeting scenario using an intelligent connector service and a semantic web oriented travel service.

  18. Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents

    Science.gov (United States)

    2016-07-27

    Brain Computer Interfaces (BCIs) show great potential in allowing humans to interact with computational environments in a… Final Report: Brain Computer Interfaces for Enhanced Interactions with Mobile Robot Agents; reporting period 17-Sep-2013 to 16-Sep-2014; distribution unlimited.

  19. Human and Virtual Agents Interacting in the Virtuality Continuum

    NARCIS (Netherlands)

    Nijholt, Antinus; Miyares Bermúdez, E.; Ruiz Miyares, L.

    2006-01-01

    We introduce several of our projects in an overview that makes it possible to compare research approaches on interaction modeling in virtual, mixed-reality and real (physical) environments. For these environments, interaction modeling means multimodal (verbal and nonverbal) interaction modeling.

  20. Brain-Computer Interfaces Applying Our Minds to Human-computer Interaction

    CERN Document Server

    Tan, Desney S

    2010-01-01

    For generations, humans have fantasized about the ability to create devices that can see into a person's mind and thoughts, or to communicate and interact with machines through thought alone. Such ideas have long captured the imagination of humankind in the form of ancient myths and modern science fiction stories. Recent advances in cognitive neuroscience and brain imaging technologies have started to turn these myths into a reality, and are providing us with the ability to interface directly with the human brain. This ability is made possible through the use of sensors that monitor physical processes…

  1. Metawidgets in the multimodal interface

    Energy Technology Data Exchange (ETDEWEB)

    Blattner, M.M. (Lawrence Livermore National Lab., CA (United States) Anderson (M.D.) Cancer Center, Houston, TX (United States)); Glinert, E.P.; Jorge, J.A.; Ormsby, G.R. (Rensselaer Polytechnic Inst., Troy, NY (United States). Dept. of Computer Science)

    1991-01-01

    We analyze two intertwined and fundamental issues concerning computer-to-human communication in multimodal interfaces: the interplay between sound and graphics, and the role of object persistence. Our observations lead us to introduce metawidgets as abstract entities capable of manifesting themselves to users as image, as sound, or as various combinations and/or sequences of the two media. We show examples of metawidgets in action and discuss mechanisms for choosing among alternative media for metawidget instantiation. Finally, we describe a couple of experimental microworlds we have implemented to test some of our ideas. 17 refs., 7 figs.

  2. Multimodal Sensing Interface for Haptic Interaction

    Directory of Open Access Journals (Sweden)

    Carlos Diaz

    2017-01-01

    Full Text Available This paper investigates the integration of a multimodal sensing system for exploring the limits of vibrotactile haptic feedback when interacting with 3D representations of real objects. In this study, the spatial locations of the objects are mapped to the work volume of the user using a Kinect sensor. The position of the user’s hand is obtained using marker-based visual processing. The depth information is used to build a vibrotactile map on a haptic glove enhanced with vibration motors. The users can perceive the location and dimensions of remote objects by moving their hand inside a scanning region. A marker-detection camera provides the location and orientation of the user’s hand (glove) to map the corresponding tactile message. A preliminary study was conducted to explore how different users perceive such haptic experiences. Factors such as the total number of objects detected, object-separation resolution, and dimension-based and shape-based discrimination were evaluated. The preliminary results showed that the localization and counting of objects can be attained with a high degree of success. The users were able to classify groups of objects of different dimensions based on the perceived haptic feedback.
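
    The core mapping, from a depth patch to per-motor vibration intensity, can be sketched directly: divide the patch into one cell per motor and drive each motor harder the closer the nearest surface in its cell. The 3x3 motor grid and depth range below are assumptions for illustration, not the glove's actual geometry.

```python
import numpy as np

def vibration_levels(depth_patch, d_near=0.5, d_far=2.0, grid=(3, 3)):
    """Map a depth patch (meters) to motor duty cycles in [0, 1]."""
    h, w = depth_patch.shape
    gh, gw = grid
    levels = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            cell = depth_patch[i * h // gh:(i + 1) * h // gh,
                               j * w // gw:(j + 1) * w // gw]
            d = np.nanmin(cell)                  # closest surface in the cell
            levels[i, j] = np.clip((d_far - d) / (d_far - d_near), 0.0, 1.0)
    return levels

depth = np.full((120, 120), 2.5)       # empty scene beyond the far limit
depth[40:80, 40:80] = 0.8              # an object ahead of the hand
print(np.round(vibration_levels(depth), 2))   # center motors vibrate hardest
```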

  3. Multimodal integration of anatomy and physiology classes: How instructors utilize multimodal teaching in their classrooms

    Science.gov (United States)

    McGraw, Gerald M., Jr.

    Multimodality is the theory of communication as it applies to social and educational semiotics (making meaning through the use of multiple signs and symbols). The term multimodality describes a communication methodology that includes multiple textual, aural, and visual applications (modes) that are woven together to create what is referred to as an artifact. Multimodal teaching methodology attempts to create deeper meaning for course content by activating the higher cognitive areas of the student's brain, creating more sustained retention of the information (Murray, 2009). The introduction of multimodal educational methodologies as a means to more optimally engage students has been documented in the educational literature. However, studies analyzing their distribution and penetration into the basic sciences, more specifically anatomy and physiology, have not been forthcoming. This study used a quantitative survey design to determine the degree to which instructors integrated multimodal teaching practices into their course curricula. The instrument used for the study was designed by the researcher based on evidence found in the literature and sent to members of three associations/societies for anatomy and physiology instructors: the Human Anatomy and Physiology Society, the iTeach Anatomy & Physiology Collaborate, and the American Physiology Society. Respondents totaled 182 instructor members of two- and four-year, private and public institutions of higher learning, drawn from the three organizations, which collectively have over 13,500 members in over 925 higher-learning institutions nationwide. The study concluded that the expansion of multimodal methodologies into anatomy and physiology classrooms is at the beginning of the process and that there is ample opportunity for expansion. Instructors continue to use lecture as their primary means of interaction with students. Email is still the major form of out-of-class communication for full-time instructors. Instructors with…

  4. Human computer confluence applied in healthcare and rehabilitation.

    Science.gov (United States)

    Viaud-Delmon, Isabelle; Gaggioli, Andrea; Ferscha, Alois; Dunne, Stephen

    2012-01-01

    Human computer confluence (HCC) is an ambitious research program studying how the emerging symbiotic relation between humans and computing devices can enable radically new forms of sensing, perception, interaction, and understanding. It is an interdisciplinary field, bringing together researchers from areas as varied as pervasive computing, bio-signal processing, neuroscience, electronics, robotics, and virtual & augmented reality, and it offers great potential for applications in medicine and rehabilitation.

  5. Institutionalizing human-computer interaction for global health.

    Science.gov (United States)

    Gulliksen, Jan

    2017-06-01

    Digitalization is the societal change process in which new ICT-based solutions bring forward completely new ways of doing things, new businesses and new movements in society. Digitalization also provides completely new ways of addressing issues related to global health. This paper provides an overview of the field of human-computer interaction (HCI) and the ways in which the field has contributed to international development in different regions of the world. Additionally, it outlines the United Nations' new sustainability goals from December 2015, what these could contribute to the development of global health, and their relationship to digitalization. Finally, it argues why and how HCI could be adopted and adapted to fit contextual needs, the need for localization, and the development of new digital innovations. The research methodology is mostly qualitative, following an action research paradigm in which the actual change process that digitalization evokes is equally as important as the scientific conclusions that can be drawn. In conclusion, the paper argues that digitalization is fundamentally changing society through the development and use of digital technologies and may have a profound effect on the digital development of every country in the world, but it needs to be developed based on local practices, it needs international support, and it must not be limited by technological constraints. Digitalization to support global health in particular requires a profound understanding of the users and their context, arguing for user-centred systems design methodologies as particularly suitable.

  6. Connecting multimodality in human communication.

    Science.gov (United States)

    Regenbogen, Christina; Habel, Ute; Kellermann, Thilo

    2013-01-01

    A successful reciprocal evaluation of social signals serves as a prerequisite for social coherence and empathy. In a previous fMRI study we investigated naturalistic communication situations by presenting video clips to our participants and recording their behavioral responses regarding empathy and its components. In two conditions, all three channels carried congruent emotional or neutral information, respectively. Three conditions selectively presented two emotional channels and one neutral channel and were thus bimodally emotional. We reported channel-specific emotional contributions in modality-related areas, elicited by dynamic video clips with varying combinations of emotionality in facial expressions, prosody, and speech content. However, to better understand the underlying mechanisms accompanying a naturalistically displayed human social interaction in key regions that presumably serve as specific processing hubs for facial expressions, prosody, and speech content, we pursued a reanalysis of the data. Here, we focused on two different descriptions of temporal characteristics within three modality-related regions [right fusiform gyrus (FFG), left auditory cortex (AC), left angular gyrus (AG)] and the left dorsomedial prefrontal cortex (dmPFC). By means of a finite impulse response (FIR) analysis within each of the three regions we examined the post-stimulus time courses as a description of the temporal characteristics of the BOLD response during the video clips. Second, effective connectivity between these areas and the left dmPFC was analyzed using dynamic causal modeling (DCM) in order to describe condition-related modulatory influences on the coupling between these regions. The FIR analysis showed initially diminished activation in bimodally emotional conditions but stronger activation than that observed in neutral videos toward the end of the stimuli, possibly reflecting bottom-up processes that compensate for a lack of emotional information.

  7. Computations and interaction

    NARCIS (Netherlands)

    Baeten, J.C.M.; Luttik, S.P.; Tilburg, van P.J.A.; Natarajan, R.; Ojo, A.

    2011-01-01

    We enhance the notion of a computation from the classical theory of computing with the notion of interaction. In this way, we extend the Turing machine as a model of computation to the Reactive Turing Machine, an abstract model of a computer as it is used nowadays: always interacting with the user.

  8. Tunable-Range, Photon-Mediated Atomic Interactions in Multimode Cavity QED

    Directory of Open Access Journals (Sweden)

    Varun D. Vaidya

    2018-01-01

    Full Text Available Optical cavity QED provides a platform with which to explore quantum many-body physics in driven-dissipative systems. Single-mode cavities provide strong, infinite-range photon-mediated interactions among intracavity atoms. However, these global all-to-all couplings are limiting from the perspective of exploring quantum many-body physics beyond the mean-field approximation. The present work demonstrates that local couplings can be created using multimode cavity QED. This is established through measurements of the threshold of a superradiant, self-organization phase transition versus atomic position. Specifically, we experimentally show that the interference of near-degenerate cavity modes leads to both a strong and tunable-range interaction between Bose-Einstein condensates (BECs) trapped within the cavity. We exploit the symmetry of a confocal cavity to measure the interaction between real BECs and their virtual images without unwanted contributions arising from the merger of real BECs. Atom-atom coupling may be tuned from short range to long range. This capability paves the way toward future explorations of exotic, strongly correlated systems such as quantum liquid crystals and driven-dissipative spin glasses.

  9. A Cognitive Pragmatic Approach to Human/Computer Interaction (Une approche pragmatique cognitive de l'interaction personne/système informatisé)

    Directory of Open Access Journals (Sweden)

    Madeleine Saint-Pierre

    1998-06-01

    Full Text Available This article proposes an inferential approach to the study of human/computer interaction. It is by taking into account the user's cognitive activity while working at a computer that we propose to understand this interaction. This approach leads to a real user/interface evaluation and, hopefully, will serve as a guideline for the design of new interfaces. Our analyses describe the inferential process involved in the dynamics of task performance. The cognitive activity of the user is captured by means of a "thinking aloud" method, through which the user is asked to verbalize while working at the computer. Tools developed by our research team for the analysis and categorization of the verbal protocols are presented. The results are interpreted within the framework of Sperber and Wilson's (1995) relevance theory, in terms of the cognitive effort required to process the objects (linguistic, iconic, graphic...) appearing on screen and the cognitive effects these produce. This approach can be generalized to any other human/computer interaction context, such as, for example, distance learning.

  10. Multimodal label-free microscopy

    Directory of Open Access Journals (Sweden)

    Nicolas Pavillon

    2014-09-01

    Full Text Available This paper reviews multimodal applications based on a broad range of label-free imaging modalities, from linear to nonlinear optics, while also including spectroscopic measurements. We put specific emphasis on multimodal measurements that cross the usual boundaries between imaging modalities, whereas most multimodal platforms combine techniques based on similar light interactions or similar hardware implementations. In this review, we limit the scope to applications in biology, such as live cells or tissues: since such samples are alive or fragile, we are often not free to take liberties with image acquisition times and are forced to gather the maximum amount of information possible at one time. For such samples, imaging by a given label-free method usually presents a challenge in obtaining sufficient optical signal or is limited in terms of the types of observable targets. Multimodal imaging is then particularly attractive for these samples in order to maximize the amount of measured information. While multimodal imaging is always useful in the sense of acquiring additional information from additional modes, at times it is possible to attain information that could not be discovered using any single mode alone, which is the essence of the progress that is possible using a multimodal approach.

  11. Learning to Detect Human-Object Interactions

    KAUST Repository

    Chao, Yu-Wei; Liu, Yunfan; Liu, Xieyang; Zeng, Huayi; Deng, Jia

    2017-01-01

    In this paper we study the problem of detecting human-object interactions (HOI) in static images, defined as predicting a human and an object bounding box with an interaction class label that connects them. HOI detection is a fundamental problem in computer vision as it provides semantic information about the interactions among the detected objects. We introduce HICO-DET, a new large benchmark for HOI detection, by augmenting the current HICO classification benchmark with instance annotations. We propose Human-Object Region-based Convolutional Neural Networks (HO-RCNN), a novel DNN-based framework for HOI detection. At the core of our HO-RCNN is the Interaction Pattern, a novel DNN input that characterizes the spatial relations between two bounding boxes. We validate the effectiveness of our HO-RCNN using HICO-DET. Experiments demonstrate that our HO-RCNN, by exploiting human-object spatial relations through Interaction Patterns, significantly improves the performance of HOI detection over baseline approaches.

  12. Learning to Detect Human-Object Interactions

    KAUST Repository

    Chao, Yu-Wei

    2017-02-17

    In this paper we study the problem of detecting human-object interactions (HOI) in static images, defined as predicting a human and an object bounding box with an interaction class label that connects them. HOI detection is a fundamental problem in computer vision as it provides semantic information about the interactions among the detected objects. We introduce HICO-DET, a new large benchmark for HOI detection, by augmenting the current HICO classification benchmark with instance annotations. We propose Human-Object Region-based Convolutional Neural Networks (HO-RCNN), a novel DNN-based framework for HOI detection. At the core of our HO-RCNN is the Interaction Pattern, a novel DNN input that characterizes the spatial relations between two bounding boxes. We validate the effectiveness of our HO-RCNN using HICO-DET. Experiments demonstrate that our HO-RCNN, by exploiting human-object spatial relations through Interaction Patterns, significantly improves the performance of HOI detection over baseline approaches.
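
    The Interaction Pattern can be pictured as rasterizing the two boxes into a two-channel binary mask within their joint bounding window, which the network consumes alongside appearance features. Below is a simplified sketch of that encoding; the paper's exact normalization and resolution may differ.

```python
import numpy as np

def interaction_pattern(h_box, o_box, size=64):
    """Two-channel mask encoding the spatial relation of two boxes.

    Boxes are (x1, y1, x2, y2); both are rasterized inside their joint
    bounding window, scaled isotropically to `size` pixels.
    """
    boxes = np.array([h_box, o_box], dtype=float)
    x1, y1 = boxes[:, 0].min(), boxes[:, 1].min()
    x2, y2 = boxes[:, 2].max(), boxes[:, 3].max()
    scale = size / max(x2 - x1, y2 - y1)
    pattern = np.zeros((2, size, size), dtype=np.float32)
    for c, (bx1, by1, bx2, by2) in enumerate(boxes):
        u1, v1 = int((bx1 - x1) * scale), int((by1 - y1) * scale)
        u2, v2 = int((bx2 - x1) * scale), int((by2 - y1) * scale)
        pattern[c, v1:max(v2, v1 + 1), u1:max(u2, u1 + 1)] = 1.0
    return pattern

p = interaction_pattern((10, 20, 60, 120), (50, 80, 90, 130))
print(p.shape, p.sum(axis=(1, 2)))   # (2, 64, 64) and per-channel areas
```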

  13. An Egocentric Approach Towards Ubiquitous Multimodal Interaction

    DEFF Research Database (Denmark)

    Pederson, Thomas; Jalaliniya, Shahram

    2015-01-01

    In this position paper we present our take on the possibilities that emerge from a mix of recent ideas in interaction design, wearable computers, and context-aware systems which, taken together, could allow us to get closer to Mark Weiser's vision of calm computing. Multisensory user experience plays…

  14. Cognition beyond the brain: computation, interactivity and human artifice

    CERN Document Server

    Cowley, Stephen J

    2013-01-01

    Arguing that a collective dimension has given cognitive flexibility to human intelligence, this book shows that traditional cognitive psychology underplays the role of bodies, dialogue, diagrams, tools, talk, customs, habits, computers and cultural practices.

  15. Human and Virtual Agents Interacting in the Virtuality Continuum

    NARCIS (Netherlands)

    Nijholt, Antinus

    2004-01-01

    In this paper we take a multi-party interaction point of view on our research on multimodal interactions between agents in various virtual environments: an educational environment, a meeting environment, and a storytelling environment. These environments are quite different. All these environments

  16. Human agency beliefs influence behaviour during virtual social interactions.

    Science.gov (United States)

    Caruana, Nathan; Spirou, Dean; Brock, Jon

    2017-01-01

    In recent years, with the emergence of relatively inexpensive and accessible virtual reality technologies, it is now possible to deliver compelling and realistic simulations of human-to-human interaction. Neuroimaging studies have shown that, when participants believe they are interacting via a virtual interface with another human agent, they show different patterns of brain activity compared to when they know that their virtual partner is computer-controlled. The suggestion is that users adopt an "intentional stance" by attributing mental states to their virtual partner. However, it remains unclear how beliefs in the agency of a virtual partner influence participants' behaviour and subjective experience of the interaction. We investigated this issue in the context of a cooperative "joint attention" game in which participants interacted via an eye tracker with a virtual onscreen partner, directing each other's eye gaze to different screen locations. Half of the participants were correctly informed that their partner was controlled by a computer algorithm ("Computer" condition). The other half were misled into believing that the virtual character was controlled by a second participant in another room ("Human" condition). Those in the "Human" condition were slower to make eye contact with their partner and more likely to try and guide their partner before they had established mutual eye contact than participants in the "Computer" condition. They also responded more rapidly when their partner was guiding them, although the same effect was also found for a control condition in which they responded to an arrow cue. Results confirm the influence of human agency beliefs on behaviour in this virtual social interaction context. They further suggest that researchers and developers attempting to simulate social interactions should consider the impact of agency beliefs on user experience in other social contexts, and their effect on the achievement of the application's goals.

  17. Overview Electrotactile Feedback for Enhancing Human Computer Interface

    Science.gov (United States)

    Pamungkas, Daniel S.; Caesarendra, Wahyu

    2018-04-01

    To achieve effective interaction between a human and a computing device or machine, adequate feedback from the computing device or machine is required. Recently, haptic feedback has increasingly been utilised to improve the interactivity of the Human Computer Interface (HCI). Most existing haptic feedback enhancements aim at producing forces or vibrations to enrich the user's interactive experience. However, these force- and/or vibration-actuated haptic feedback systems can be bulky and uncomfortable to wear, and are only capable of delivering a limited amount of information to the user, which limits both their effectiveness and the applications they can be applied to. To address this deficiency, electrotactile feedback is used. This involves delivering haptic sensations to the user by electrically stimulating nerves in the skin via electrodes placed on the surface of the skin. This paper presents a review and explores the capability of electrotactile feedback for HCI applications. In addition, it describes the sensory receptors within the skin that sense tactile stimuli and electric currents, and explains several factors that influence how electric signals are transmitted to the brain via the human skin.

  18. Releasing the constraints on aphasia therapy: the positive impact of gesture and multimodality treatments.

    Science.gov (United States)

    Rose, Miranda L

    2013-05-01

    There is a 40-year history of interest in the use of arm and hand gestures in treatments that target the reduction of aphasic linguistic impairment and compensatory methods of communication (Rose, 2006). Arguments for constraining aphasia treatment to the verbal modality have arisen from proponents of constraint-induced aphasia therapy (Pulvermüller et al., 2001). Confusion exists concerning the role of nonverbal treatments in treating people with aphasia. The central argument of this paper is that given the state of the empirical evidence and the strong theoretical accounts of modality interactions in human communication, gesture-based and multimodality aphasia treatments are at least as legitimate an option as constraint-based aphasia treatment. Theoretical accounts of modality interactions in human communication and the gesture production abilities of individuals with aphasia that are harnessed in treatments are reviewed. The negative effects on word retrieval of restricting gesture production are also reviewed, and an overview of the neurological architecture subserving language processing is provided as rationale for multimodality treatments. The evidence for constrained and unconstrained treatments is critically reviewed. Together, these data suggest that constraint treatments and multimodality treatments are equally efficacious, and there is limited support for constraining client responses to the spoken modality.

  19. The Next Wave: Humans, Computers, and Redefining Reality

    Science.gov (United States)

    Little, William

    2018-01-01

    The Augmented/Virtual Reality (AVR) Lab at KSC is dedicated to "exploration into the growing computer fields of Extended Reality and the Natural User Interface (it is) a proving ground for new technologies that can be integrated into future NASA projects and programs." The topics of Human Computer Interface, Human Computer Interaction, Augmented Reality, Virtual Reality, and Mixed Reality are defined; examples of work being done in these fields in the AVR Lab are given. Current and future work in Computer Vision, Speech Recognition, and Artificial Intelligence is also outlined.

  20. Situated dialog in speech-based human-computer interaction

    CERN Document Server

    Raux, Antoine; Lane, Ian; Misu, Teruhisa

    2016-01-01

    This book provides a survey of the state-of-the-art in the practical implementation of Spoken Dialog Systems for applications in everyday settings. It includes contributions on key topics in situated dialog interaction from a number of leading researchers and offers a broad spectrum of perspectives on research and development in the area. In particular, it presents applications in robotics, knowledge access and communication, and covers the following topics: dialog for interacting with robots; language understanding and generation; dialog architectures and modeling; core technologies; and the analysis of human discourse and interaction. The contributions are adapted and expanded versions of papers presented at the 2014 International Workshop on Spoken Dialog Systems (IWSDS 2014), where researchers and developers from industry and academia alike met to discuss and compare their implementation experiences, analyses and empirical findings.

  1. Potential of Cognitive Computing and Cognitive Systems

    Science.gov (United States)

    Noor, Ahmed K.

    2015-01-01

    Cognitive computing and cognitive technologies are game changers for future engineering systems, as well as for engineering practice and training. They are major drivers for knowledge automation work and for the creation of cognitive products with higher levels of intelligence than current smart products. This paper gives a brief review of cognitive computing and some of the cognitive engineering systems activities. The potential of cognitive technologies is outlined, along with a brief description of future cognitive environments incorporating cognitive assistants - specialized proactive intelligent software agents designed to follow and interact with humans and other cognitive assistants across the environments. The cognitive assistants engage, individually or collectively, with humans through a combination of adaptive multimodal interfaces and advanced visualization and navigation techniques. The realization of future cognitive environments requires the development of a cognitive innovation ecosystem for the engineering workforce. The continuously expanding major components of the ecosystem include integrated knowledge discovery and exploitation facilities (incorporating predictive and prescriptive big data analytics); novel cognitive modeling and visual simulation facilities; cognitive multimodal interfaces; and cognitive mobile and wearable devices. The ecosystem will provide timely, engaging, personalized/collaborative learning and effective decision making. It will stimulate creativity and innovation, and prepare the participants to work in future cognitive enterprises and develop new cognitive products of increasing complexity. http://www.aee.odu.edu/cognitivecomp

  2. Translator-computer interaction in action

    DEFF Research Database (Denmark)

    Bundgaard, Kristine; Christensen, Tina Paulsen; Schjoldager, Anne

    2016-01-01

    Though we lack empirically-based knowledge of the impact of computer-aided translation (CAT) tools on translation processes, it is generally agreed that all professional translators are now involved in some kind of translator-computer interaction (TCI), using O’Brien’s (2012) term. Taking a TCI perspective, this paper investigates the relationship between machines and humans in the field of translation, analysing a CAT process in which machine-translation (MT) technology was integrated into a translation-memory (TM) suite. After a review of empirical research into the impact of CAT tools …, the study indicates that the tool helps the translator conform to project and customer requirements.

  3. USING OLFACTORY DISPLAYS AS A NONTRADITIONAL INTERFACE IN HUMAN COMPUTER INTERACTION

    Directory of Open Access Journals (Sweden)

    Alper Efe

    2017-07-01

    Full Text Available Smell has its limitations and disadvantages as a display medium, but it also has its strengths, and many have recognized its potential. At present, in communications and virtual technologies, smell is either forgotten or improperly stimulated, because uncontrolled odorants are present in the physical space surrounding the user. Nonetheless, a controlled presentation of olfactory information can offer advantages in various application fields. Therefore, two enabling technologies, electronic noses and especially olfactory displays, are reviewed. Scenarios of usage are discussed together with relevant psycho-physiological issues. End-to-end systems including olfactory interfaces are quantitatively characterised in many respects. Recent work done by the authors in the field is reported. The article touches briefly on the control of scent emissions, an important factor to consider when building scented computer systems. As a sample application, the SUBSMELL system is investigated. A look at areas of human-computer interaction where olfactory output may prove useful is presented. The article finishes with some brief conclusions and discusses some shortcomings and gaps in the topic. In particular, the addition of olfactory cues to a virtual environment increased the user's sense of presence and memory of the environment. The article also discusses the educational aspect of SUBSMELL systems.

  4. Pollution going multimodal: the complex impact of the human-altered sensory environment on animal perception and performance.

    Science.gov (United States)

    Halfwerk, Wouter; Slabbekoorn, Hans

    2015-04-01

    Anthropogenic sensory pollution is affecting ecosystems worldwide. Human actions generate acoustic noise, emanate artificial light and emit chemical substances. All of these pollutants are known to affect animals. Most studies on anthropogenic pollution address the impact of pollutants in unimodal sensory domains. High levels of anthropogenic noise, for example, have been shown to interfere with acoustic signals and cues. However, animals rely on multiple senses, and pollutants often co-occur. Thus, a full ecological assessment of the impact of anthropogenic activities requires a multimodal approach. We describe how sensory pollutants can co-occur and how covariance among pollutants may differ from natural situations. We review how animals combine information that arrives at their sensory systems through different modalities and outline how sensory conditions can interfere with multimodal perception. Finally, we describe how sensory pollutants can affect the perception, behaviour and endocrinology of animals within and across sensory modalities. We conclude that sensory pollution can affect animals in complex ways due to interactions among sensory stimuli, neural processing and behavioural and endocrinal feedback. We call for more empirical data on covariance among sensory conditions, for instance, data on correlated levels of noise and light pollution. Furthermore, we encourage researchers to test animal responses to a full-factorial set of sensory pollutants in the presence or the absence of ecologically important signals and cues. We realize that such an approach is often time- and energy-consuming, but we think this is the only way to fully understand the multimodal impact of sensory pollution on animal performance and perception. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  5. Multimodal neural correlates of cognitive control in the Human Connectome Project.

    Science.gov (United States)

    Lerman-Sinkoff, Dov B; Sui, Jing; Rachakonda, Srinivas; Kandala, Sridhar; Calhoun, Vince D; Barch, Deanna M

    2017-12-01

    Cognitive control is a construct that refers to the set of functions that enable decision-making and task performance through the representation of task states, goals, and rules. The neural correlates of cognitive control have been studied in humans using a wide variety of neuroimaging modalities, including structural MRI, resting-state fMRI, and task-based fMRI. The results from each of these modalities independently have implicated the involvement of a number of brain regions in cognitive control, including dorsal prefrontal cortex, and frontal parietal and cingulo-opercular brain networks. However, it is not clear how the results from a single modality relate to results in other modalities. Recent developments in multimodal image analysis methods provide an avenue for answering such questions and could yield more integrated models of the neural correlates of cognitive control. In this study, we used multiset canonical correlation analysis with joint independent component analysis (mCCA + jICA) to identify multimodal patterns of variation related to cognitive control. We used two independent cohorts of participants from the Human Connectome Project, each of which had data from four imaging modalities. We replicated the findings from the first cohort in the second cohort using both independent and predictive analyses. The independent analyses identified a component in each cohort that was highly similar to the other and significantly correlated with cognitive control performance. The replication by prediction analyses identified two independent components that were significantly correlated with cognitive control performance in the first cohort and significantly predictive of performance in the second cohort. These components identified positive relationships across the modalities in neural regions related to both dynamic and stable aspects of task control, including regions in both the frontal-parietal and cingulo-opercular networks, as well as regions
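
    mCCA + jICA itself lives in specialized neuroimaging toolboxes rather than standard libraries, but the underlying idea, finding linked patterns of variation across two modalities measured on the same subjects, can be illustrated with plain two-set CCA. A toy sketch on synthetic data, using sklearn's CCA as a much-simplified stand-in for the authors' pipeline:

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)
        n_subjects = 100

        # Toy data: one shared 'cognitive control' factor drives both modalities.
        shared = rng.normal(size=(n_subjects, 1))
        structural = shared @ rng.normal(size=(1, 50)) + 0.5 * rng.normal(size=(n_subjects, 50))
        functional = shared @ rng.normal(size=(1, 80)) + 0.5 * rng.normal(size=(n_subjects, 80))

        cca = CCA(n_components=1)
        s_scores, f_scores = cca.fit_transform(structural, functional)

        # The paired canonical variates recover the linked across-modality factor;
        # such scores could then be correlated with task performance, as in the study.
        print(np.corrcoef(s_scores[:, 0], f_scores[:, 0])[0, 1])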

  6. Enhanced Learning through Multimodal Training: Evidence from a Comprehensive Cognitive, Physical Fitness, and Neuroscience Intervention.

    Science.gov (United States)

    Ward, N; Paul, E; Watson, P; Cooke, G E; Hillman, C H; Cohen, N J; Kramer, A F; Barbey, A K

    2017-07-19

    The potential impact of brain training methods for enhancing human cognition in healthy and clinical populations has motivated increasing public interest and scientific scrutiny. At issue is the merits of intervention modalities, such as computer-based cognitive training, physical exercise training, and non-invasive brain stimulation, and whether such interventions synergistically enhance cognition. To investigate this issue, we conducted a comprehensive 4-month randomized controlled trial in which 318 healthy, young adults were enrolled in one of five interventions: (1) Computer-based cognitive training on six adaptive tests of executive function; (2) Cognitive and physical exercise training; (3) Cognitive training combined with non-invasive brain stimulation and physical exercise training; (4) Active control training in adaptive visual search and change detection tasks; and (5) Passive control. Our findings demonstrate that multimodal training significantly enhanced learning (relative to computer-based cognitive training alone) and provided an effective method to promote skill learning across multiple cognitive domains, spanning executive functions, working memory, and planning and problem solving. These results help to establish the beneficial effects of multimodal intervention and identify key areas for future research in the continued effort to improve human cognition.

  7. Human Computation

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...

  8. Human Behavior Analysis by Means of Multimodal Context Mining

    Directory of Open Access Journals (Sweden)

    Oresti Banos

    2016-08-01

    Full Text Available There is sufficient evidence proving the impact that negative lifestyle choices have on people’s health and wellness. Changing unhealthy behaviours requires raising people’s self-awareness and also providing healthcare experts with a thorough and continuous description of the user’s conduct. Several monitoring techniques have been proposed in the past to track users’ behaviour; however, these approaches are either subjective and prone to misreporting, such as questionnaires, or only focus on a specific component of context, such as activity counters. This work presents an innovative multimodal context mining framework to inspect and infer human behaviour in a more holistic fashion. The proposed approach extends beyond the state-of-the-art, since it not only explores a sole type of context, but also combines diverse levels of context in an integral manner. Namely, low-level contexts, including activities, emotions and locations, are identified from heterogeneous sensory data through machine learning techniques. Low-level contexts are combined using ontological mechanisms to derive a more abstract representation of the user’s context, here referred to as high-level context. An initial implementation of the proposed framework supporting real-time context identification is also presented. The developed system is evaluated for various realistic scenarios making use of a novel multimodal context open dataset and data on-the-go, demonstrating prominent context-aware capabilities at both low and high levels.
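
    The ontological fusion of low-level contexts described above can be caricatured as a mapping from (activity, emotion, location) triples to an abstract behaviour label. A deliberately simple sketch, with hypothetical labels and rules standing in for the framework's ontology-based reasoner:

        from dataclasses import dataclass

        @dataclass
        class LowLevelContext:
            activity: str   # e.g. inferred from inertial sensors
            emotion: str    # e.g. inferred from audio/video
            location: str   # e.g. inferred from GPS/Wi-Fi

        # Hypothetical rules standing in for the ontology-based reasoner.
        RULES = {
            ("sitting", "bored", "home"): "sedentary leisure",
            ("walking", "neutral", "outdoors"): "active commuting",
            ("sitting", "stressed", "office"): "high-strain work",
        }

        def high_level_context(ctx: LowLevelContext) -> str:
            return RULES.get((ctx.activity, ctx.emotion, ctx.location), "unclassified")

        print(high_level_context(LowLevelContext("sitting", "stressed", "office")))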

  9. Evaluation of Binocular Eye Trackers and Algorithms for 3D Gaze Interaction in Virtual Reality Environments

    OpenAIRE

    Thies Pfeiffer; Ipke Wachsmuth; Marc E. Latoschik

    2009-01-01

    Tracking a user's visual attention is a fundamental aspect in novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communications with virtual and real agents greatly benefit from the analysis of the user's visual attention as a vital source for deictic references or turn-taking signals. Current approaches to determine visual attention rely primarily on monocular eye trackers. Hence they are restricted to the interpretation of...
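
    Binocular trackers enable 3D gaze estimation because the two gaze rays converge near the fixated point; a common geometric estimate is the midpoint of the shortest segment between the rays. A minimal sketch, assuming both rays are already expressed in a common world frame:

        import numpy as np

        def gaze_point_3d(o_l, d_l, o_r, d_r):
            """Midpoint of the shortest segment between two gaze rays.

            o_*: 3D eye positions; d_*: (unnormalized) gaze directions.
            """
            d_l = d_l / np.linalg.norm(d_l)
            d_r = d_r / np.linalg.norm(d_r)
            w0 = o_l - o_r
            a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
            d, e = d_l @ w0, d_r @ w0
            denom = a * c - b * b          # ~0 when rays are (near-)parallel
            if abs(denom) < 1e-9:
                return None                # parallel/diverging gaze: no fixation
            s = (b * e - c * d) / denom
            t = (a * e - b * d) / denom
            return 0.5 * ((o_l + s * d_l) + (o_r + t * d_r))

        # Eyes 6.4 cm apart, both looking at a point 50 cm ahead.
        target = np.array([0.0, 0.0, 0.5])
        o_l, o_r = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
        print(gaze_point_3d(o_l, target - o_l, o_r, target - o_r))  # ~ [0, 0, 0.5]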

  10. The Catchment Feature Model: A Device for Multimodal Fusion and a Bridge between Signal and Sense

    Science.gov (United States)

    Quek, Francis

    2004-12-01

    The catchment feature model addresses two questions in the field of multimodal interaction: how we bridge video and audio processing with the realities of human multimodal communication, and how information from the different modes may be fused. We argue from a detailed literature review that gestural research has clustered around manipulative and semaphoric use of the hands, motivate the catchment feature model from psycholinguistic research, and present the model. In contrast to "whole gesture" recognition, the catchment feature model applies a feature decomposition approach that facilitates cross-modal fusion at the level of discourse planning and conceptualization. We present our experimental framework for catchment feature-based research, cite three concrete examples of catchment features, and propose new directions of multimodal research based on the model.

  11. Social signal processing for studying parent-infant interaction

    Directory of Open Access Journals (Sweden)

    Marie eAvril

    2014-12-01

    Full Text Available Studying early interactions is a core issue of infant development and psychopathology. Automatic social signal processing theoretically offers the possibility to extract and analyse communication by taking an integrative perspective, considering the multimodal nature and dynamics of behaviours (including synchrony). This paper proposes an explorative method to acquire and extract relevant social signals from a naturalistic early parent-infant interaction. An experimental setup is proposed based on both clinical and technical requirements. We extracted various cues from body postures and speech productions of partners using the IMI2S (Interaction, Multimodal Integration, and Social Signal) Framework. Preliminary clinical and computational results are reported for two dyads (one pathological, in a situation of severe emotional neglect, and one normal control) as an illustration of our cross-disciplinary protocol. The results from both clinical and computational analyses highlight similar differences: the pathological dyad shows dyssynchronic interaction led by the infant whereas the control dyad shows synchronic interaction and a smooth interactive dialog. The results suggest that the current method might be promising for future studies.
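
    Synchrony in such dyadic recordings is often quantified with lagged cross-correlation between the two partners' cue time series, the lag of the peak correlation indicating who leads. A toy sketch of that computation, with synthetic series standing in for the extracted cues:

        import numpy as np

        rng = np.random.default_rng(3)
        parent = rng.normal(size=500).cumsum()                         # movement-energy series
        infant = np.roll(parent, 5) + rng.normal(scale=2.0, size=500)  # follows 5 samples later

        def best_lag(a, b, max_lag=20):
            """Lag (in samples) at which b best correlates with a; > 0 means b lags a."""
            lags = list(range(-max_lag, max_lag + 1))
            corrs = [np.corrcoef(a[max_lag:-max_lag],
                                 np.roll(b, -lag)[max_lag:-max_lag])[0, 1]
                     for lag in lags]
            return lags[int(np.argmax(corrs))], max(corrs)

        print(best_lag(parent, infant))  # ~ (5, high r): the infant follows the parent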

  12. The Stability of Multi-modal Traffic Network

    International Nuclear Information System (INIS)

    Han Linghui; Sun Huijun; Zhu Chengjuan; Jia Bin; Wu Jianjun

    2013-01-01

    There is an explicit and implicit assumption in multimodal traffic equilibrium models, namely that if the equilibrium exists, then it will also occur. This assumption is very idealized; in fact, quite the contrary can happen, because in a multimodal traffic network, especially in mixed traffic conditions, the interaction among traffic modes is asymmetric, and this asymmetric interaction may destabilize the traffic system. In this paper, to study the stability of a multimodal traffic system, we present the travel cost function in mixed traffic conditions and in a traffic network with dedicated bus lanes, respectively. Based on a day-to-day dynamical model, we study the evolution of the daily route choices of travelers in a multimodal traffic network using 10000 random initial values for different cases. From the simulation results, it can be concluded that the asymmetric interaction between cars and buses in mixed traffic conditions can drive the traffic system to instability when traffic demand is large. We also study the effect of travelers' perception error on the stability of the multimodal traffic network. Although a larger perception error can alleviate the effect of the interaction between cars and buses and improve the stability of the traffic system in mixed traffic conditions, the traffic system still becomes unstable when the traffic demand exceeds a certain level. For all cases simulated in this study, with the same parameters, the traffic system with a dedicated bus lane has better stability with respect to traffic demand than the one in mixed traffic conditions. We also find that the network with a dedicated bus lane has a higher share of travelers by bus than the mixed traffic network. It can thus be concluded that building dedicated bus lanes can improve the stability of the traffic system and attract more travelers to the bus, reducing traffic congestion. (general)
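
    A day-to-day model of the kind used here updates each traveler's choice from the costs experienced the previous day, for example with a logit split. A toy two-route sketch (all parameters illustrative) that reproduces the qualitative finding that higher demand destabilizes the dynamics:

        import numpy as np

        def day_to_day(days=200, demand=1000.0, theta=0.1):
            """Toy day-to-day dynamics: logit route choice over two congested routes."""
            flow = np.array([0.5, 0.5]) * demand                 # day-0 split
            for _ in range(days):
                # BPR-style experienced costs: free-flow times 10 and 12,
                # identical capacities of 600 on both routes.
                cost = np.array([10.0, 12.0]) * (1 + 0.15 * (flow / 600.0) ** 4)
                share = np.exp(-theta * cost)
                flow = demand * share / share.sum()              # tomorrow's flows
            return flow.round(1), cost.round(2)

        print(day_to_day())               # settles for moderate demand...
        print(day_to_day(demand=2500.0))  # ...but can fail to settle as demand grows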

  13. Analyzing Multimode Wireless Sensor Networks Using the Network Calculus

    Directory of Open Access Journals (Sweden)

    Xi Jin

    2015-01-01

    Full Text Available The network calculus is a powerful tool to analyze the performance of wireless sensor networks. But the original network calculus can only model single-mode wireless sensor networks. In this paper, we combine the original network calculus with the multimode model to analyze the maximum delay bound of the flow of interest in a multimode wireless sensor network. There are two combined methods, A-MM and N-MM. The method A-MM models the whole network as a multimode component, while the method N-MM models each node as a multimode component. We prove that the maximum delay bound computed by the method A-MM is tighter than or equal to that computed by the method N-MM. Experiments show that our proposed methods can significantly decrease the analytical delay bound compared with the separate flow analysis method. For a large-scale wireless sensor network with 32 thousand sensor nodes, our proposed methods can decrease the analytical delay bound by about 70%.
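
    The single-node result underlying such analyses is worth recalling: for token-bucket arrivals bounded by the curve b + r*t served by a rate-latency curve R*max(t - T, 0), the worst-case delay is T + b/R whenever r <= R. A small sketch with illustrative numbers:

        def delay_bound(b, r, R, T):
            """Worst-case delay of token-bucket traffic (burst b, rate r) through
            a rate-latency server (rate R, latency T); valid when r <= R."""
            if r > R:
                raise ValueError("unstable: arrival rate exceeds service rate")
            return T + b / R

        # A sensor flow: 4 kbit bursts at 20 kbit/s through a 100 kbit/s, 5 ms hop.
        print(delay_bound(b=4000, r=20000, R=100000, T=0.005))  # 0.045 s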

  14. Rethinking Human-Centered Computing: Finding the Customer and Negotiated Interactions at the Airport

    Science.gov (United States)

    Wales, Roxana; O'Neill, John; Mirmalek, Zara

    2003-01-01

    The breakdown in the air transportation system over the past several years raises an interesting question for researchers: How can we help improve the reliability of airline operations? In offering some answers to this question, we make a statement about Human-Centered Computing (HCC). First we offer the definition that HCC is a multi-disciplinary research and design methodology focused on supporting humans as they use technology by including cognitive and social systems, computational tools and the physical environment in the analysis of organizational systems. We suggest that a key element in understanding organizational systems is that there are external cognitive and social systems (customers) as well as internal cognitive and social systems (employees) and that they interact dynamically to impact the organization and its work. The design of human-centered intelligent systems must take this outside-inside dynamic into account. In the past, the design of intelligent systems has focused on supporting the work and improvisation requirements of employees but has often assumed that customer requirements are implicitly satisfied by employee requirements. Taking a customer-centric perspective provides a different lens for understanding this outside-inside dynamic, the work of the organization and the requirements of both customers and employees. In this article we will: 1) Demonstrate how the use of ethnographic methods revealed the important outside-inside dynamic in an airline, specifically the consequential relationship between external customer requirements and perspectives and internal organizational processes and perspectives as they came together in a changing environment; 2) Describe how taking a customer-centric perspective identifies places where the impact of the outside-inside dynamic is most critical and requires technology that can be adaptive; 3) Define and discuss the place of negotiated interactions in airline operations, identifying how these

  15. Low-Loss Photonic Reservoir Computing with Multimode Photonic Integrated Circuits.

    Science.gov (United States)

    Katumba, Andrew; Heyvaert, Jelle; Schneider, Bendix; Uvin, Sarah; Dambre, Joni; Bienstman, Peter

    2018-02-08

    We present a numerical study of a passive integrated photonics reservoir computing platform based on multimodal Y-junctions. We propose a novel design of this junction where the level of adiabaticity is carefully tailored to capture the radiation loss in higher-order modes, while at the same time providing additional mode mixing that increases the richness of the reservoir dynamics. With this design, we report an overall average combination efficiency of 61% compared to the standard 50% for the single-mode case. We demonstrate that with this design, much more power is able to reach the distant nodes of the reservoir, leading to increased scaling prospects. We use the example of a header recognition task to confirm that such a reservoir can be used for bit-level processing tasks. The design itself is CMOS-compatible and can be fabricated through the known standard fabrication procedures.
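
    Whatever the physical reservoir, the trained part of a reservoir computer is only a linear readout fitted on recorded node states. A hardware-agnostic toy version of the header-recognition task, with a random software echo-state reservoir standing in for the photonic chip:

        import numpy as np

        rng = np.random.default_rng(1)
        n_nodes, T = 50, 2000

        # Random input and recurrent weights; spectral radius < 1 keeps the
        # (software) reservoir stable, standing in for the passive photonic circuit.
        w_in = rng.normal(scale=0.5, size=n_nodes)
        w = rng.normal(size=(n_nodes, n_nodes))
        w *= 0.9 / max(abs(np.linalg.eigvals(w)))

        bits = rng.integers(0, 2, size=T).astype(float)  # input bit stream
        x = np.zeros(n_nodes)
        states = np.empty((T, n_nodes))
        for t in range(T):
            x = np.tanh(w @ x + w_in * bits[t])
            states[t] = x

        # Target: flag the 3-bit header '101' wherever it ends in the stream.
        y = np.array([float(t >= 2 and tuple(bits[t-2:t+1]) == (1.0, 0.0, 1.0))
                      for t in range(T)])

        # Ridge-regression readout: the only trained part of a reservoir computer.
        lam = 1e-2
        w_out = np.linalg.solve(states.T @ states + lam * np.eye(n_nodes), states.T @ y)
        print("training accuracy:", (((states @ w_out) > 0.5) == (y > 0.5)).mean())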

  16. Patient-tailored multimodal neuroimaging, visualization and quantification of human intra-cerebral hemorrhage

    Science.gov (United States)

    Goh, Sheng-Yang M.; Irimia, Andrei; Vespa, Paul M.; Van Horn, John D.

    2016-03-01

    In traumatic brain injury (TBI) and intracerebral hemorrhage (ICH), the heterogeneity of lesion sizes and types necessitates a variety of imaging modalities to acquire a comprehensive perspective on injury extent. Although it is advantageous to combine imaging modalities and to leverage their complementary benefits, there are difficulties in integrating information across imaging types. Thus, it is important that efforts be dedicated to the creation and sustained refinement of resources for multimodal data integration. Here, we propose a novel approach to the integration of neuroimaging data acquired from human patients with TBI/ICH using various modalities; we also demonstrate the integrated use of multimodal magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) data for TBI analysis based on both visual observations and quantitative metrics. 3D models of healthy-appearing tissues and TBI-related pathology are generated, both of which are derived from multimodal imaging data. MRI volumes acquired using FLAIR, SWI, and T2 GRE are used to segment pathology. Healthy tissues are segmented using user-supervised tools, and results are visualized using a novel graphical approach called a "connectogram", where brain connectivity information is depicted within a circle of radially aligned elements. Inter-region connectivity and its strength are represented by links of variable opacities drawn between regions, where opacity reflects the percentage longitudinal change in brain connectivity density. Our method for integrating, analyzing and visualizing structural brain changes due to TBI and ICH can promote knowledge extraction and enhance the understanding of mechanisms underlying recovery.

  17. Applications of Elpasolites as a Multimode Radiation Sensor

    Science.gov (United States)

    Guckes, Amber

    This study consists of both computational and experimental investigations. The computational results enabled detector design selections and confirmed experimental results. The experimental results determined that the CLYC scintillation detector can be applied as a functional and field-deployable multimode radiation sensor. The computational study utilized MCNP6 code to investigate the response of CLYC to various incident radiations and to determine the feasibility of its application as a handheld multimode sensor and as a single-scintillator collimated directional detection system. These simulations include: • Characterization of the response of the CLYC scintillator to gamma-rays and neutrons; • Study of the isotopic enrichment of 7Li versus 6Li in the CLYC for optimal detection of both thermal neutrons and fast neutrons; • Analysis of collimator designs to determine the optimal collimator for the single CLYC sensor directional detection system to assay gamma rays and neutrons; • Simulations of a handheld CLYC multimode sensor and a single CLYC scintillator collimated directional detection system with the optimized collimator to determine the feasibility of detecting nuclear materials that could be encountered during field operations. These nuclear materials include depleted uranium, natural uranium, low-enriched uranium, highly-enriched uranium, reactor-grade plutonium, and weapons-grade plutonium. The experimental study includes the design, construction, and testing of both a handheld CLYC multimode sensor and a single CLYC scintillator collimated directional detection system. Both were designed in the Inventor CAD software and based on results of the computational study to optimize their performance. The handheld CLYC multimode sensor is modular, scalable, low-power, and optimized for high count rates. Commercial-off-the-shelf components were used where possible in order to optimize size, increase robustness, and minimize cost. The handheld CLYC multimode

  18. Validation of a multimodal travel simulator with travel information provision

    NARCIS (Netherlands)

    Chorus, C.G.; Molin, E.J.E.; Arentze, T.A.; Hoogendoorn, S.P.; Timmermans, H.J.P.; Wee, van G.P.

    2007-01-01

    This paper presents a computer-based travel simulator for collecting data concerning the use of next-generation ATIS and their effects on traveler decision making in a multimodal travel environment. The tool distinguishes itself by presenting a completely abstract multimodal transport network, where

  19. Multimodal follow-up questions to multimodal answers in a QA system

    NARCIS (Netherlands)

    van Schooten, B.W.; op den Akker, Hendrikus J.A.

    2007-01-01

    We are developing a dialogue manager (DM) for a multimodal interactive Question Answering (QA) system. Our QA system presents answers using text and pictures, and the user may pose follow-up questions using text or speech, while indicating screen elements with the mouse. We developed a corpus of

  20. Multimodality imaging techniques.

    Science.gov (United States)

    Martí-Bonmatí, Luis; Sopena, Ramón; Bartumeus, Paula; Sopena, Pablo

    2010-01-01

    In multimodality imaging, the need to combine morphofunctional information can be addressed either by acquiring images at different times (asynchronously) and fusing them through digital image manipulation techniques, or by acquiring images simultaneously (synchronously) and merging them automatically. The asynchronous post-processing solution presents various constraints, mainly conditioned by the different positioning of the patient in the two scans acquired at different times on separate machines. The best solution to achieve consistency in time and space is synchronous image acquisition. There are many multimodal technologies in molecular imaging. In this review we focus on the multimodality imaging techniques most commonly used in the field of diagnostic imaging (SPECT-CT, PET-CT) and on new developments (such as PET-MR). Technological innovations and the development of new tracers and smart probes are the key points that will condition the future of multimodality imaging and of diagnostic imaging professionals. Although SPECT-CT and PET-CT are standard in most clinical scenarios, MR imaging has some advantages, providing excellent soft-tissue contrast and multidimensional functional, structural and morphological information. The next frontier is to develop efficient detectors and electronics systems capable of detecting two modality signals at the same time. Not only PET-MR but also MR-US or optic-PET will be introduced in clinical scenarios. Moreover, MR diffusion-weighted imaging, pharmacokinetic imaging, spectroscopy and functional BOLD imaging will merge with PET tracers, further establishing molecular imaging as a relevant medical discipline. Multimodality imaging techniques will play a leading role in relevant clinical applications. The development of new diagnostic imaging research areas, mainly in the fields of oncology, cardiology and neuropsychiatry, will impact the way medicine is performed today. Both clinical and experimental multimodality studies, in

  1. A 3D character animation engine for multimodal interaction on mobile devices

    Science.gov (United States)

    Sandali, Enrico; Lavagetto, Fabio; Pisano, Paolo

    2005-03-01

    Talking virtual characters are graphical simulations of real or imaginary persons that enable natural and pleasant multimodal interaction with the user, by means of voice, eye gaze, facial expression and gestures. This paper presents an implementation of a 3D virtual character animation and rendering engine, compliant with the MPEG-4 standard, running on Symbian-based SmartPhones. Real-time animation of virtual characters on mobile devices represents a challenging task, since many limitations must be taken into account with respect to processing power, graphics capabilities, disk space and execution memory size. The proposed optimization techniques allow to overcome these issues, guaranteeing a smooth and synchronous animation of facial expressions and lip movements on mobile phones such as Sony-Ericsson's P800 and Nokia's 6600. The animation engine is specifically targeted to the development of new "Over The Air" services, based on embodied conversational agents, with applications in entertainment (interactive story tellers), navigation aid (virtual guides to web sites and mobile services), news casting (virtual newscasters) and education (interactive virtual teachers).

  2. Multimodal integration in statistical learning

    DEFF Research Database (Denmark)

    Mitchell, Aaron; Christiansen, Morten Hyllekvist; Weiss, Dan

    2014-01-01

    Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker’s face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally … facilitated participants’ ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
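
    The regularity tracking at issue is usually formalized as transitional probabilities between adjacent syllables, P(B|A) = count(AB) / count(A), with dips marking likely word boundaries. A minimal unimodal sketch of the computation, on a Saffran-style synthetic syllable stream:

        import random
        from collections import Counter

        random.seed(0)
        words = [["go", "la", "bu"], ["pa", "bi", "ku"], ["tu", "ti", "bu"]]
        stream = [syl for _ in range(100) for syl in random.choice(words)]

        pairs = Counter(zip(stream, stream[1:]))
        firsts = Counter(stream[:-1])
        tp = {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

        print(tp[("go", "la")])  # ~1.0: word-internal transition
        print(tp[("bu", "pa")])  # ~0.33: transition across a word boundary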

  3. Neural correlate of human reciprocity in social interactions.

    Science.gov (United States)

    Sakaiya, Shiro; Shiraito, Yuki; Kato, Junko; Ide, Hiroko; Okada, Kensuke; Takano, Kouji; Kansaku, Kenji

    2013-01-01

    Reciprocity plays a key role in maintaining cooperation in society. However, little is known about the neural process that underpins human reciprocity during social interactions. Our neuroimaging study manipulated partner identity (computer, human) and strategy (random, tit-for-tat) in repeated prisoner's dilemma games and investigated the neural correlate of reciprocal interaction with humans. Reciprocal cooperation with humans but exploitation of computers by defection was associated with activation in the left amygdala. Amygdala activation was also positively and negatively correlated with a preference change for human partners following tit-for-tat and random strategies, respectively. The correlated activation represented the intensity of positive feeling toward reciprocal and negative feeling toward non-reciprocal partners, and so reflected reciprocity in social interaction. Reciprocity in social interaction, however, might plausibly be misinterpreted and so we also examined the neural coding of insight into the reciprocity of partners. Those with and without insight revealed differential brain activation across the reward-related circuitry (i.e., the right middle dorsolateral prefrontal cortex and dorsal caudate) and theory of mind (ToM) regions [i.e., ventromedial prefrontal cortex (VMPFC) and precuneus]. Among differential activations, activation in the precuneus, which accompanied deactivation of the VMPFC, was specific to those without insight into human partners who were engaged in a tit-for-tat strategy. This asymmetric (de)activation might involve specific contributions of ToM regions to the human search for reciprocity. Consequently, the intensity of emotion attached to human reciprocity was represented in the amygdala, whereas insight into the reciprocity of others was reflected in activation across the reward-related and ToM regions. This suggests the critical role of mentalizing, which was not equated with reward expectation during social interactions.

  6. Interactive Computer Graphics

    Science.gov (United States)

    Kenwright, David

    2000-01-01

    Aerospace data analysis tools that significantly reduce the time and effort needed to analyze large-scale computational fluid dynamics simulations have emerged this year. The current approach for most postprocessing and visualization work is to explore the 3D flow simulations with one of a dozen or so interactive tools. While effective for analyzing small data sets, this approach becomes extremely time consuming when working with data sets larger than one gigabyte. An active area of research this year has been the development of data mining tools that automatically search through gigabyte data sets and extract the salient features with little or no human intervention. With these so-called feature extraction tools, engineers are spared the tedious task of manually exploring huge amounts of data to find the important flow phenomena. The software tools identify features such as vortex cores, shocks, separation and attachment lines, recirculation bubbles, and boundary layers. Some of these features can be extracted in a few seconds; others take minutes to hours on extremely large data sets. The analysis can be performed off-line in a batch process, either during or following the supercomputer simulations. These computations have to be performed only once, because the feature extraction programs search the entire data set and find every occurrence of the phenomena being sought. Because the important questions about the data are being answered automatically, interactivity is less critical than it is with traditional approaches.

  7. Multimodal training between agents

    DEFF Research Database (Denmark)

    Rehm, Matthias

    2003-01-01

    In the system Locator1, agents are treated as individual and autonomous subjects that are able to adapt to heterogeneous user groups. Applying multimodal information from their surroundings (visual and linguistic), they acquire the necessary concepts for a successful interaction. This approach has...

  8. Interaction Design in a Context of Rehabilitation

    DEFF Research Database (Denmark)

    Høgn, Pia; Lykke, Marianne; Missel, Pernille

    2016-01-01

    Information and communication technology (ICT) mediated learning processes for people suffering from aphasia after an acquired brain injury are a relatively uncovered area of research. Helmer-Nielsen et al. (2014)[1] report about projects that study the effect of ICT-mediated speech therapists … suffering from aphasia, with the purpose of increasing their possibilities for living independent and active lives. In the project, we design a digital environment for communication and learning for people with aphasia who require special learning processes for rebuilding language after a brain injury … of a combination of learning theory, theories on brain function and human-computer interaction. The user study showed the need for collaborative learning processes with communication and social interaction as their focus, tools to support multimodal expressions and customised teaching materials …

  9. Cognitive engineering in the design of human-computer interaction and expert systems

    International Nuclear Information System (INIS)

    Salvendy, G.

    1987-01-01

    The 68 papers contributing to this book cover the following areas: Theories of Interface Design; Methodologies of Interface Design; Applications of Interface Design; Software Design; Human Factors in Speech Technology and Telecommunications; Design of Graphic Dialogues; Knowledge Acquisition for Knowledge-Based Systems; and Design, Evaluation and Use of Expert Systems. This demonstrates the dual role of cognitive engineering. On the one hand, cognitive engineering is utilized to design computing systems which are compatible with human cognition and can be effectively and easily utilized by all individuals. On the other hand, cognitive engineering is utilized to transfer human cognition into the computer for the purpose of building expert systems. Two papers are of interest to INIS

  10. Unsupervised multimodal image clustering tool

    OpenAIRE

    Pérez Hernando, Jesús

    2013-01-01

    Master's thesis implementing a multimodal method for classifying unlabelled images without human intervention. Master thesis for the Computer Science Engineering program.

  11. Disappearing computers, social actors and embodied agents

    NARCIS (Netherlands)

    Nijholt, Antinus; Kunii, T.L.; Hock Soon, S.; Sourin, A.

    2003-01-01

    Presently, there are user interfaces that allow multimodal interactions. Many existing research and prototype systems introduced embodied agents, assuming that they allow a more natural conversation or dialogue between user and computer. Here we will first take a look at how in general people react

  12. MIDA: A Multimodal Imaging-Based Detailed Anatomical Model of the Human Head and Neck.

    Directory of Open Access Journals (Sweden)

    Maria Ida Iacono

    Full Text Available Computational modeling and simulations are increasingly being used to complement experimental testing for analysis of safety and efficacy of medical devices. Multiple voxel- and surface-based whole- and partial-body models have been proposed in the literature, typically with spatial resolution in the range of 1-2 mm and with 10-50 different tissue types resolved. We have developed a multimodal imaging-based detailed anatomical model of the human head and neck, named "MIDA". The model was obtained by integrating three different magnetic resonance imaging (MRI) modalities, the parameters of which were tailored to enhance the signals of specific tissues: (i) structural T1- and T2-weighted MRIs, as well as a specific heavily T2-weighted MRI slab with high nerve contrast optimized to enhance the structures of the ear and eye; (ii) magnetic resonance angiography (MRA) data to image the vasculature; and (iii) diffusion tensor imaging (DTI) to obtain information on anisotropy and fiber orientation. The unique multimodal high-resolution approach allowed resolving 153 structures, including several distinct muscles, bones and skull layers, arteries and veins, and nerves, as well as salivary glands. The model also offers a detailed characterization of eyes, ears, and deep brain structures. A special automatic atlas-based segmentation procedure was adopted to include a detailed map of the nuclei of the thalamus and midbrain in the head model. The suitability of the model for simulations involving different numerical methods, discretization approaches, and DTI-based tensorial electrical conductivity was examined in a case study, in which the electric field was generated by transcranial alternating current stimulation. The voxel- and surface-based versions of the model are freely available to the scientific community.

  13. Multimodal surveillance sensors, algorithms, and systems

    CERN Document Server

    Zhu, Zhigang

    2007-01-01

    From front-end sensors to systems and environmental issues, this practical resource guides you through the many facets of multimodal surveillance. The book examines thermal, vibration, video, and audio sensors in a broad context of civilian and military applications. This cutting-edge volume provides an in-depth treatment of data fusion algorithms that takes you to the core of multimodal surveillance, biometrics, and sentient computing. The book discusses such people and activity topics as tracking people and vehicles and identifying individuals by their speech.Systems designers benefit from d

  14. The collision of multimode dromions and a firewall in the two-component long-wave-short-wave resonance interaction equation

    International Nuclear Information System (INIS)

    Radha, R; Kumar, C Senthil; Lakshmanan, M; Gilson, C R

    2009-01-01

    In this communication, we investigate the two-component long-wave-short-wave resonance interaction equation and show that it admits the Painlevé property. We then suitably exploit the recently developed truncated Painlevé approach to generate exponentially localized solutions for the short-wave components S(1) and S(2), while the long wave L admits a line soliton only. The exponentially localized solutions driving the short waves S(1) and S(2) in the y-direction are endowed with different energies (intensities) and are called 'multimode dromions'. We also observe that the multimode dromions suffer from intramodal inelastic collision while the existence of a firewall across the modes prevents the switching of energy between the modes. (fast track communication)

  15. Computer Aided Analysis of TM-Multimode Planar Graded-index Optical Waveguides

    International Nuclear Information System (INIS)

    Ashry, M.; Nasr, A.S.; Abou El-Fadl, A.A.

    2000-01-01

    An algorithm is developed for the analysis of TM-multimode planar graded-index optical waveguides. A Modified Impedance Boundary Method of Moments (MIBMOM) for the analysis of planar graded-index optical waveguide structures is presented. The algorithm is used to calculate the dispersion characteristics and the field distribution of TM-multimode planar graded-index optical waveguides. The technique is based on Galerkin's procedure and the exact boundary condition at the interfaces between the graded-index region and the step-index cladding. Legendre polynomials are used as basis functions. The efficiency of this algorithm is examined with waveguides having various index profiles, such as exponential, Gaussian and complementary error functions. The advantage of the MIBMOM is that it provides a complete solution of the TM multimodes, which is very difficult to obtain with other methods. The algorithm uses a minimum number of basis functions to give accurate results. The obtained results show good agreement with the experimental results

  16. Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.

    Science.gov (United States)

    Vicente, Natalin S; Halloy, Monique

    2017-12-01

    Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays following exposure to signals in each modality. The number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when the stimuli were combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that produced by the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.

  17. Open-Box Muscle-Computer Interface: Introduction to Human-Computer Interactions in Bioengineering, Physiology, and Neuroscience Courses

    Science.gov (United States)

    Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.

    2016-01-01

    A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…

  18. Drum-mate: interaction dynamics and gestures in human-humanoid drumming experiments

    Science.gov (United States)

    Kose-Bagci, Hatice; Dautenhahn, Kerstin; Syrdal, Dag S.; Nehaniv, Chrystopher L.

    2010-06-01

    This article investigates the role of interaction kinesics in human-robot interaction (HRI). We adopted a bottom-up, synthetic approach towards interactive competencies in robots using simple, minimal computational models underlying the robot's interaction dynamics. We present two empirical, exploratory studies investigating a drumming experience with a humanoid robot (KASPAR) and a human. In the first experiment, the turn-taking behaviour of the humanoid is deterministic and the non-verbal gestures of the robot accompany its drumming to assess the impact of non-verbal gestures on the interaction. The second experiment studies a computational framework that facilitates emergent turn-taking dynamics, whereby the particular dynamics of turn-taking emerge from the social interaction between the human and the humanoid. The results from the HRI experiments are presented and analysed qualitatively (in terms of the participants' subjective experiences) and quantitatively (concerning the drumming performance of the human-robot pair). The results point to a trade-off between the subjective evaluation of the drumming experience from the perspective of the participants and the objective evaluation of the drumming performance. A certain number of gestures was preferred as a motivational factor in the interaction. The participants preferred the models underlying the robot's turn-taking which enable the robot and human to interact more and provide turn-taking closer to 'natural' human-human conversations, despite differences in objective measures of drumming behaviour. The results are consistent with the temporal behaviour-matching hypothesis previously proposed in the literature, which holds that participants adapt their own interaction dynamics to the robot's.

  19. A Multimodal Search Engine for Medical Imaging Studies.

    Science.gov (United States)

    Pinho, Eduardo; Godinho, Tiago; Valente, Frederico; Costa, Carlos

    2017-02-01

    The use of digital medical imaging systems in healthcare institutions has increased significantly, and the large amounts of data in these systems have led to the conception of powerful support tools: recent studies on content-based image retrieval (CBIR) and multimodal information retrieval in the field hold great potential in decision support, as well as for addressing multiple challenges in healthcare systems, such as computer-aided diagnosis (CAD). However, the subject is still under heavy research, and very few solutions have become part of Picture Archiving and Communication Systems (PACS) in hospitals and clinics. This paper proposes an extensible platform for multimodal medical image retrieval, integrated in an open-source PACS software with profile-based CBIR capabilities. In this article, we detail a technical approach to the problem by describing its main architecture and each sub-component, as well as the available web interfaces and the multimodal query techniques applied. Finally, we assess our implementation of the engine with computational performance benchmarks.
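
    As a toy illustration of the retrieval core described here (not the authors' engine), content-based retrieval over precomputed feature vectors reduces to a nearest-neighbour query; the feature dimension, database size, and identifiers below are stand-ins.

```python
# Toy CBIR core: cosine-similarity ranking over precomputed feature vectors
# (illustrative sketch; the feature extractor and database are stand-ins).
import numpy as np

rng = np.random.default_rng(0)
db_features = rng.normal(size=(1000, 128))          # 1000 indexed studies
db_ids = [f"study-{i:04d}" for i in range(1000)]    # hypothetical identifiers

def query(qvec, k=5):
    """Return the k database entries most similar to the query vector."""
    db = db_features / np.linalg.norm(db_features, axis=1, keepdims=True)
    q = qvec / np.linalg.norm(qvec)
    scores = db @ q                                  # cosine similarities
    top = np.argsort(scores)[::-1][:k]
    return [(db_ids[i], float(scores[i])) for i in top]

print(query(rng.normal(size=128)))
```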

  20. Are Children with Autism More Responsive to Animated Characters? A Study of Interactions with Humans and Human-Controlled Avatars

    Science.gov (United States)

    Carter, Elizabeth J.; Williams, Diane L.; Hodgins, Jessica K.; Lehman, Jill F.

    2014-01-01

    Few direct comparisons have been made between the responsiveness of children with autism to computer-generated or animated characters and their responsiveness to humans. Twelve 4- to 8-year-old children with autism interacted with a human therapist; a human-controlled, interactive avatar in a theme park; a human actor speaking like the avatar; and…

  1. Fiber-Optic Vibration Sensor Based on Multimode Fiber

    Directory of Open Access Journals (Sweden)

    I. Lujo

    2008-06-01

    Full Text Available The purpose of this paper is to present a fiber-optic vibration sensor based on monitoring the mode distribution in a multimode optical fiber. Detection of vibrations and their parameters is possible through observation of the output speckle pattern from the multimode optical fiber. A working experimental model has been built in which all components used are widely available and cheap: a CCD camera (a simple web-cam), a multimode laser in the visible range as a light source, a length of multimode optical fiber, and a computer for signal processing. Measurements have shown good agreement with the actual frequency of vibrations, and promising results were achieved with the amplitude measurements, although these require some adaptation of the experimental model. The proposed sensor is cheap and lightweight and therefore presents an interesting alternative for monitoring large smart structures.
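
    The signal-processing idea can be sketched roughly (the record includes no code, so this is an assumed reconstruction): consecutive camera frames of the speckle pattern are collapsed to a scalar time series, whose spectrum then reveals the vibration frequency. The frame rate, frame size, and synthetic 25 Hz test vibration below are all invented.

```python
# Speckle-based vibration readout sketch: collapse frames to a scalar
# series, then locate the vibration line in its spectrum. The camera
# model and the 25 Hz test vibration are synthetic stand-ins.
import numpy as np

fps, n = 200, 1024                     # assumed frame rate and frame count
t = np.arange(n) / fps
rng = np.random.default_rng(1)

# Toy stand-in for CCD frames: a static speckle field plus a component
# modulated by the vibration (real frames would come from the web-cam).
speckle = rng.random((64, 64))
mode = rng.random((64, 64))
frames = speckle + 0.2 * np.sin(2 * np.pi * 25 * t)[:, None, None] * mode

sig = frames.mean(axis=(1, 2))         # one number per frame
sig -= sig.mean()
spec = np.abs(np.fft.rfft(sig * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fps)
print(f"dominant vibration component: {freqs[spec.argmax()]:.1f} Hz")
```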

  2. Quality of human-computer interaction - results of a national usability survey of hospital-IT in Germany

    Directory of Open Access Journals (Sweden)

    Bundschuh Bettina B

    2011-11-01

    Full Text Available Abstract Background Due to the increasing functionality of medical information systems, it is hard to imagine day-to-day work in hospitals without IT support. Therefore, the design of dialogues between humans and information systems is one of the most important issues to be addressed in health care. This survey presents an analysis of the current quality level of human-computer interaction of healthcare IT in German hospitals, focused on the users' point of view. Methods To evaluate the usability of clinical IT according to the design principles of EN ISO 9241-10, the IsoMetrics Inventory, an assessment tool, was used. The focus of this paper has been put on suitability for the task, training effort and conformity with user expectations, differentiated by information systems. Effectiveness has been evaluated with a focus on interoperability and functionality of different IT systems. Results 4521 persons from 371 hospitals visited the start page of the study, and 1003 persons from 158 hospitals completed the questionnaire. The results show relevant variations between different information systems. Conclusions Specialised information systems with defined functionality received better assessments than clinical information systems in general. This could be attributed to the improved customisation of these specialised systems for specific working environments. The results can be used as reference data for evaluation and benchmarking of human-computer engineering in the clinical health IT context in future studies.

  3. Human-centered Computing: Toward a Human Revolution

    OpenAIRE

    Jaimes, Alejandro; Gatica-Perez, Daniel; Sebe, Nicu; Huang, Thomas S.

    2007-01-01

    Human-centered computing studies the design, development, and deployment of mixed-initiative human-computer systems. HCC is emerging from the convergence of multiple disciplines that are concerned both with understanding human beings and with the design of computational artifacts.

  4. Multimodality

    DEFF Research Database (Denmark)

    Buhl, Mie

    In this paper, I address an ongoing discussion in Danish E-learning research about how to take advantage of the fact that digital media facilitate communication forms other than text, so-called 'multimodal' communication, which should not be confused with the term 'multimedia'. While multimedia… and learning situations. The choices they make involve E-learning resources like videos, social platforms and mobile devices, not just as digital artefacts we interact with, but the entire practice of using digital media. In a life-long learning perspective, multimodality is potentially very useful…

  5. A Perspective on Computational Human Performance Models as Design Tools

    Science.gov (United States)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  6. Analysis of psychological factors for quality assessment of interactive multimodal service

    Science.gov (United States)

    Yamagishi, Kazuhisa; Hayashi, Takanori

    2005-03-01

    We proposed a subjective quality assessment model for interactive multimodal services. First, psychological factors of an audiovisual communication service were extracted by using the semantic differential (SD) technique and factor analysis. Forty subjects participated in subjective tests and performed point-to-point conversational tasks on a PC-based TV phone that exhibits various network qualities. The subjects assessed those qualities on the basis of 25 pairs of adjectives. Two psychological factors, i.e., an aesthetic feeling and a feeling of activity, were extracted from the results. Then, quality impairment factors affecting these two psychological factors were analyzed. We found that the aesthetic feeling is mainly affected by IP packet loss and video coding bit rate, and the feeling of activity depends on delay time and video frame rate. We then proposed an opinion model derived from the relationships among quality impairment factors, psychological factors, and overall quality. The results indicated that the estimation error of the proposed model is almost equivalent to the statistical reliability of the subjective score. Finally, using the proposed model, we discuss guidelines for quality design of interactive audiovisual communication services.
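
    The analysis pipeline described (semantic-differential ratings, then factor extraction, then an opinion model regressed on the factors) is standard enough to sketch. The snippet below uses synthetic ratings and sklearn's FactorAnalysis and LinearRegression as stand-ins for the authors' exact procedure.

```python
# Two-step sketch: extract latent psychological factors from semantic-
# differential ratings, then regress overall quality on the factors.
# Ratings and scores are synthetic stand-ins for the paper's data.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
ratings = rng.normal(size=(40, 25))      # 40 subjects x 25 adjective pairs
overall_quality = rng.normal(size=40)    # overall opinion scores (stand-in)

fa = FactorAnalysis(n_components=2, random_state=0)
factors = fa.fit_transform(ratings)      # e.g. aesthetic feeling, activity

opinion_model = LinearRegression().fit(factors, overall_quality)
print("factor loadings shape:", fa.components_.shape)    # (2, 25)
print("opinion-model weights:", opinion_model.coef_)
```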

  7. Interactive computer-enhanced remote viewing system

    Energy Technology Data Exchange (ETDEWEB)

    Tourtellott, J.A.; Wagner, J.F. [Mechanical Technology Incorporated, Latham, NY (United States)

    1995-10-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This report describes the development of an Interactive Computer-Enhanced Remote Viewing System (ICERVS), a software system to provide a reliable geometric description of a robotic task space, and enable robotic remediation to be conducted more effectively and more economically.

  8. Multimodality image registration with software: state-of-the-art

    International Nuclear Information System (INIS)

    Slomka, Piotr J.; Baum, Richard P.

    2009-01-01

    Multimodality image integration of functional and anatomical data can be performed by means of dedicated hybrid imaging systems or by software image co-registration techniques. Hybrid positron emission tomography (PET)/computed tomography (CT) systems have found wide acceptance in oncological imaging, while software registration techniques have a significant role in patient-specific, cost-effective, and radiation dose-effective application of integrated imaging. Software techniques allow accurate (2-3 mm) rigid image registration of brain PET with CT and MRI. Nonlinear techniques are used in whole-body image registration, and recent developments allow for significantly accelerated computing times. Nonlinear software registration of PET with CT or MRI is required for multimodality radiation planning. Difficulties remain in the validation of nonlinear registration of soft tissue organs. The utilization of software-based multimodality image integration in a clinical environment is sometimes hindered by the lack of appropriate picture archiving and communication systems (PACS) infrastructure needed to efficiently and automatically integrate all available images into one common database. In cardiology applications, multimodality PET/single photon emission computed tomography and coronary CT angiography imaging is typically not required unless the results of one of the tests are equivocal. Software image registration is likely to be used in a complementary fashion with hybrid PET/CT or PET/magnetic resonance imaging systems. Software registration of stand-alone scans "paved the way" for the clinical application of hybrid scanners, demonstrating practical benefits of image integration before the hybrid dual-modality devices were available. (orig.)
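
    Software co-registration of this kind typically works by optimizing a similarity measure, commonly mutual information, over the transform parameters. A minimal sketch of the measure itself (a generic joint-histogram construction, not any particular package's implementation):

```python
# Mutual information between two images from their joint histogram
# (the similarity measure typically maximized in rigid registration;
# generic sketch, not a specific package's implementation).
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# A registration loop would maximize this over translation/rotation parameters.
rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = 2 * a + 0.1 * rng.random((128, 128))  # same anatomy, different "modality"
print(f"MI(a, b) = {mutual_information(a, b):.3f} nats")
```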

  9. Twenty Years of Creativity Research in Human-Computer Interaction: Current State and Future Directions

    DEFF Research Database (Denmark)

    Frich Pedersen, Jonas; Biskjaer, Michael Mose; Dalsgaard, Peter

    2018-01-01

    Creativity has been a growing topic within the ACM community since the 1990s. However, no clear overview of this trend has been offered. We present a thorough survey of 998 creativity-related publications in the ACM Digital Library collected using keyword search to determine prevailing approaches, topics, and characteristics of creativity-oriented Human-Computer Interaction (HCI) research. A selected sample based on yearly citations yielded 221 publications, which were analyzed using constant comparison analysis. We found that HCI is almost exclusively responsible for creativity-oriented publications; they focus on collaborative creativity rather than individual creativity; there is a general lack of definition of the term 'creativity'; empirically based contributions are prevalent; and many publications focus on new tools, often developed by researchers. On this basis, we present three…

  10. Adhesion of multimode adhesives to enamel and dentin after one year of water storage.

    Science.gov (United States)

    Vermelho, Paulo Moreira; Reis, André Figueiredo; Ambrosano, Glaucia Maria Bovi; Giannini, Marcelo

    2017-06-01

    This study aimed to evaluate the ultramorphological characteristics of tooth-resin interfaces and the bond strength (BS) of multimode adhesive systems to enamel and dentin. Multimode adhesives (Scotchbond Universal (SBU) and All-Bond Universal) were tested in both self-etch and etch-and-rinse modes and compared to control groups (Optibond FL and Clearfil SE Bond (CSB)). Adhesives were applied to human molars and composite blocks were incrementally built up. Teeth were sectioned to obtain specimens for microtensile BS and TEM analysis. Specimens were tested after storage for either 24 h or 1 year. SEM analyses were performed to classify the failure pattern of beam specimens after BS testing. Etching increased the enamel BS of multimode adhesives; however, BS decreased after storage for 1 year. No significant differences in dentin BS were noted between multimode and control in either evaluation period. Storage for 1 year only reduced the dentin BS for SBU in self-etch mode. TEM analysis identified hybridization and interaction zones in dentin and enamel for all adhesives. Silver impregnation was detected on dentin-resin interfaces after storage of specimens for 1 year only with the SBU and CSB. Storage for 1 year reduced enamel BS when adhesives are applied on etched surface; however, BS of multimode adhesives did not differ from those of the control group. In dentin, no significant difference was noted between the multimode and control group adhesives, regardless of etching mode. In general, multimode adhesives showed similar behavior when compared to traditional adhesive techniques. Multimode adhesives are one-step self-etching adhesives that can also be used after enamel/dentin phosphoric acid etching, but each product may work better in specific conditions.

  11. How should Fitts' Law be applied to human-computer interaction?

    Science.gov (United States)

    Gillan, D. J.; Holden, K.; Adam, S.; Rudisill, M.; Magee, L.

    1992-01-01

    The paper challenges the notion that any Fitts' Law model can be applied generally to human-computer interaction, and proposes instead that applying Fitts' Law requires knowledge of the users' sequence of movements, direction of movement, and typical movement amplitudes as well as target sizes. Two experiments examined a text selection task with sequences of controlled movements (point-click and point-drag). For the point-click sequence, a Fitts' Law model that used the diagonal across the text object in the direction of pointing (rather than the horizontal extent of the text object) as the target size provided the best fit for the pointing time data, whereas for the point-drag sequence, a Fitts' Law model that used the vertical size of the text object as the target size gave the best fit. Dragging times were fitted well by Fitts' Law models that used either the vertical or horizontal size of the terminal character in the text object. Additional results of note were that pointing in the point-click sequence was consistently faster than in the point-drag sequence, and that pointing in either sequence was consistently faster than dragging. The discussion centres around the need to define task characteristics before applying Fitts' Law to an interface design or analysis, analyses of pointing and of dragging, and implications for interface design.
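
    For readers unfamiliar with the model being contested: Fitts' Law predicts movement time as MT = a + b*log2(2A/W), where A is the movement amplitude and W the target size, and the paper's point is that the appropriate W (horizontal extent, diagonal, or vertical size) depends on the movement sequence. A minimal fitting sketch with invented timing data:

```python
# Fitting Fitts' Law  MT = a + b * log2(2A / W)  to pointing-time data
# (data are invented; the paper argues the right W depends on the task).
import numpy as np

A = np.array([64, 128, 256, 512], dtype=float)    # movement amplitudes (px)
W = np.array([16, 16, 32, 32], dtype=float)       # candidate target sizes (px)
MT = np.array([420, 510, 540, 630], dtype=float)  # measured times (ms), invented

ID = np.log2(2 * A / W)                           # index of difficulty (bits)
b, a = np.polyfit(ID, MT, 1)                      # least-squares line
pred = a + b * ID
r2 = 1 - np.sum((MT - pred) ** 2) / np.sum((MT - MT.mean()) ** 2)
print(f"MT = {a:.0f} + {b:.0f} * ID   (R^2 = {r2:.3f})")
```

    Comparing R^2 across different choices of W (diagonal, horizontal, vertical) is exactly the kind of model comparison the experiments report.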

  12. Multimodality image analysis work station

    International Nuclear Information System (INIS)

    Ratib, O.; Huang, H.K.

    1989-01-01

    The goal of this project is to design and implement a PACS (picture archiving and communication system) workstation for quantitative analysis of multimodality images. The Macintosh II personal computer was selected for its friendly user interface, its popularity among the academic and medical community, and its low cost. The Macintosh operates as a stand alone workstation where images are imported from a central PACS server through a standard Ethernet network and saved on a local magnetic or optical disk. A video digitizer board allows for direct acquisition of images from sonograms or from digitized cine angiograms. The authors have focused their project on the exploration of new means of communicating quantitative data and information through the use of an interactive and symbolic user interface. The software developed includes a variety of image analysis, algorithms for digitized angiograms, sonograms, scintigraphic images, MR images, and CT scans

  13. Transnational HCI: Humans, Computers and Interactions in Global Contexts

    DEFF Research Database (Denmark)

    Vertesi, Janet; Lindtner, Silvia; Shklovski, Irina

    2011-01-01

    …but as evolving in relation to global processes, boundary crossings, frictions and hybrid practices. In doing so, we expand upon existing research in HCI to consider the effects, implications for individuals and communities, and design opportunities in times of increased transnational interactions. We hope to broaden the conversation around the impact of technology in global processes by bringing together scholars from HCI and from related humanities, media arts and social sciences disciplines…

  14. Shared periodic performer movements coordinate interactions in duo improvisations

    Science.gov (United States)

    Jakubowski, Kelly; Moran, Nikki; Keller, Peter E.

    2018-01-01

    Human interaction involves the exchange of temporally coordinated, multimodal cues. Our work focused on interaction in the visual domain, using music performance as a case for analysis due to its temporally diverse and hierarchical structures. We made use of two improvising duo datasets—(i) performances of a jazz standard with a regular pulse and (ii) non-pulsed, free improvizations—to investigate whether human judgements of moments of interaction between co-performers are influenced by body movement coordination at multiple timescales. Bouts of interaction in the performances were manually annotated by experts and the performers’ movements were quantified using computer vision techniques. The annotated interaction bouts were then predicted using several quantitative movement and audio features. Over 80% of the interaction bouts were successfully predicted by a broadband measure of the energy of the cross-wavelet transform of the co-performers’ movements in non-pulsed duos. A more complex model, with multiple predictors that captured more specific, interacting features of the movements, was needed to explain a significant amount of variance in the pulsed duos. The methods developed here have key implications for future work on measuring visual coordination in musical ensemble performances, and can be easily adapted to other musical contexts, ensemble types and traditions. PMID:29515867
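
    The paper's key movement feature, the broadband energy of the cross-wavelet transform of the two performers' movement series, can be sketched compactly. The snippet below hand-rolls a small complex-Morlet transform rather than relying on a specific wavelet package; the signals, sampling rate, and scales are invented.

```python
# Broadband cross-wavelet energy of two movement series (hand-rolled
# complex-Morlet CWT; signals, rates, and scales are invented).
import numpy as np

fs, n = 25.0, 512                        # sampling rate (Hz), series length
t = np.arange(n) / fs
rng = np.random.default_rng(2)
# Two "performers" sharing a ~0.5 Hz periodic movement component
m1 = np.sin(2 * np.pi * 0.5 * t) + 0.5 * rng.normal(size=n)
m2 = np.sin(2 * np.pi * 0.5 * t + 0.8) + 0.5 * rng.normal(size=n)

def cwt_morlet(x, scales, w0=6.0):
    """CWT with a complex Morlet wavelet; scales given in samples."""
    out = np.empty((len(scales), x.size), dtype=complex)
    for i, s in enumerate(scales):
        k = np.arange(-4 * int(s), 4 * int(s) + 1) / s
        psi = np.exp(1j * w0 * k - k ** 2 / 2) / np.sqrt(s)
        out[i] = np.convolve(x, np.conj(psi[::-1]), mode="same")
    return out

scales = np.geomspace(8, 60, 16)         # roughly 0.4-3 Hz for w0 = 6
xwt = cwt_morlet(m1, scales) * np.conj(cwt_morlet(m2, scales))
energy = np.abs(xwt).sum(axis=0)         # broadband energy over time
print("mean cross-wavelet energy:", round(float(energy.mean()), 3))
```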

  15. Multimodal interaction in the perception of impact events displayed via a multichannel audio and simulated structure-borne vibration

    Science.gov (United States)

    Martens, William L.; Woszczyk, Wieslaw

    2005-09-01

    For multimodal display systems in which realistic reproduction of impact events is desired, presenting structure-borne vibration along with multichannel audio recordings has been observed to create a greater sense of immersion in a virtual acoustic environment. Furthermore, there is an increased proportion of reports that the impact event took place within the observer's local area (this is termed "presence with" the event, in contrast to "presence in" the environment in which the event occurred). While holding the audio reproduction constant, varying the intermodal arrival time and level of mechanically displayed, synthetic whole-body vibration revealed a number of other subjective attributes that depend upon multimodal interaction in the perception of a representative impact event. For example, when the structure-borne component of the displayed impact event arrived 10 to 20 ms later than the airborne component, the intermodal delay was not only tolerated, but gave rise to an increase in the proportion of reports that the impact event had greater power. These results have enabled the refinement of a multimodal simulation in which the manipulation of synthetic whole-body vibration can be used to control perceptual attributes of impact events heard within an acoustic environment reproduced via a multichannel loudspeaker array.

  16. Computer aided composition by means of interactive GP

    DEFF Research Database (Denmark)

    Ando, Daichi; Dahlstedt, Palle; Nordahl, Mats G.

    2006-01-01

    Research on the application of Interactive Evolutionary Computation (IEC) to musical composition has advanced in recent years, marking an interesting parallel to the current trend of applying human characteristics or sensitivities to computer systems. However, past techniques developed for IEC-based composition have not necessarily proven effective for professional use, owing to the large difference between the data representations used by IEC and authored classical music composition. To address these difficulties, we propose a new IEC approach to music composition based on classical music theory. In this paper, we describe a system built on this idea and detail the successful composition of a piece.

  17. Bacteria Hunt: Evaluating multi-paradigm BCI interaction

    NARCIS (Netherlands)

    Mühl, C.; Gürkök, Hayrettin; Plass - Oude Bos, D.; Thurlings, Marieke E.; Scherffig, Lasse; Duvinage, Matthieu; Elbakyan, Alexandra A.; Kang, SungWook; Poel, Mannes; Heylen, Dirk K.J.

    The multimodal, multi-paradigm brain-computer interfacing (BCI) game Bacteria Hunt was used to evaluate two aspects of BCI interaction in a gaming context. One goal was to examine the effect of feedback on the ability of the user to manipulate his mental state of relaxation. This was done by having

  18. Congruency versus strategic effects in multimodal affective picture categorization

    NARCIS (Netherlands)

    Lemmens, P.M.C.

    2005-01-01

    In communication between humans, emotion is an important aspect having a powerful influence on the structure as well as content of a conversation. In human-factors research, the interaction between humans is often used as a guide to improve the quality of human-computer interaction. Despite its

  19. Interactive computer-enhanced remote viewing system

    International Nuclear Information System (INIS)

    Tourtellott, J.A.; Wagner, J.F.

    1995-01-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This report describes the development of an Interactive Computer-Enhanced Remote Viewing System (ICERVS), a software system to provide a reliable geometric description of a robotic task space, and enable robotic remediation to be conducted more effectively and more economically.

  20. Human Computer Music Performance

    OpenAIRE

    Dannenberg, Roger B.

    2012-01-01

    Human Computer Music Performance (HCMP) is the study of music performance by live human performers and real-time computer-based performers. One goal of HCMP is to create a highly autonomous artificial performer that can fill the role of a human, especially in a popular music setting. This will require advances in automated music listening and understanding, new representations for music, techniques for music synchronization, real-time human-computer communication, music generation, sound synt...

  1. Granular computing and decision-making interactive and iterative approaches

    CERN Document Server

    Chen, Shyi-Ming

    2015-01-01

    This volume is devoted to interactive and iterative processes of decision-making – I2 Fuzzy Decision Making, in brief. Decision-making is inherently interactive. Fuzzy sets help realize human-machine communication in an efficient way by facilitating a two-way interaction in a friendly and transparent manner. Human-centric interaction is of paramount relevance as a leading guiding design principle of decision support systems. The volume provides the reader with updated and in-depth material on the conceptually appealing and practically sound methodology and practice of I2 Fuzzy Decision Making. The book engages a wealth of methods from fuzzy sets and Granular Computing, and brings new concepts, architectures and practices of fuzzy decision-making, providing the reader with various application studies. The book is aimed at a broad audience of researchers and practitioners in numerous disciplines in which decision-making processes play a pivotal role and serve as a vehicle to produce solutions to existing prob...

  2. Human-machine interaction in nuclear power plants

    International Nuclear Information System (INIS)

    Yoshikawa, Hidekazu

    2005-01-01

    Advanced nuclear power plants are generally large complex systems automated by computers. Whenever a plant emergency occurs, the operators must cope with it under severe mental stress without committing any fatal errors. Furthermore, the operators must train to improve and maintain their ability to cope with every conceivable situation, though it is almost impossible to be fully prepared for an infinite variety of situations. In view of the limited capability of operators in emergency situations, a new approach has emerged to prevent the human error caused by improper human-machine interaction. The new approach has been triggered by the introduction of advanced information systems that help operators recognize and counteract plant emergencies. In this paper, the adverse effect of automation in human-machine systems is explained. The discussion then focuses on how to configure a joint human-machine system for ideal human-machine interaction. Finally, there is a new proposal on how to organize technologies that recognize the different states of such a joint human-machine system.

  3. Multimodality image registration with software: state-of-the-art

    Energy Technology Data Exchange (ETDEWEB)

    Slomka, Piotr J. [Cedars-Sinai Medical Center, AIM Program/Department of Imaging, Los Angeles, CA (United States); University of California, David Geffen School of Medicine, Los Angeles, CA (United States); Baum, Richard P. [Center for PET, Department of Nuclear Medicine, Bad Berka (Germany)

    2009-03-15

    Multimodality image integration of functional and anatomical data can be performed by means of dedicated hybrid imaging systems or by software image co-registration techniques. Hybrid positron emission tomography (PET)/computed tomography (CT) systems have found wide acceptance in oncological imaging, while software registration techniques have a significant role in patient-specific, cost-effective, and radiation dose-effective application of integrated imaging. Software techniques allow accurate (2-3 mm) rigid image registration of brain PET with CT and MRI. Nonlinear techniques are used in whole-body image registration, and recent developments allow for significantly accelerated computing times. Nonlinear software registration of PET with CT or MRI is required for multimodality radiation planning. Difficulties remain in the validation of nonlinear registration of soft tissue organs. The utilization of software-based multimodality image integration in a clinical environment is sometimes hindered by the lack of appropriate picture archiving and communication systems (PACS) infrastructure needed to efficiently and automatically integrate all available images into one common database. In cardiology applications, multimodality PET/single photon emission computed tomography and coronary CT angiography imaging is typically not required unless the results of one of the tests are equivocal. Software image registration is likely to be used in a complementary fashion with hybrid PET/CT or PET/magnetic resonance imaging systems. Software registration of stand-alone scans "paved the way" for the clinical application of hybrid scanners, demonstrating practical benefits of image integration before the hybrid dual-modality devices were available. (orig.)

  4. Human computer interaction and communication aids for hearing-impaired, deaf and deaf-blind people: Introduction to the special thematic session

    DEFF Research Database (Denmark)

    Bothe, Hans-Heinrich

    2008-01-01

    This paper gives an overview of and extends the Special Thematic Session (STS) on research and development of technologies for hearing-impaired, deaf, and deaf-blind people. The topics of the session focus on special equipment or services to improve communication and human-computer interaction. The papers are related to visual communication using captions, sign language, and speech-reading, to vibro-tactile stimulation, or to general services for hearing-impaired persons.

  5. Computer assisted radiology

    International Nuclear Information System (INIS)

    Lemke, H.U.; Jaffe, C.C.; Felix, R.

    1993-01-01

    The proceedings of the CAR'93 symposium present the 126 oral papers and the 58 posters contributed to the four Technical Sessions entitled: (1) Image Management, (2) Medical Workstations, (3) Digital Image Generation - DIG, and (4) Application Systems - AS. Topics discussed in Session (1) are: picture archiving and communication systems, teleradiology, hospital information systems and radiological information systems, technology assessment and implications, standards, and databases. Session (2) deals with computer vision, computer graphics, design and application, and man-computer interaction. Session (3) goes into the details of diagnostic examination methods such as digital radiography, MRI, CT, nuclear medicine, ultrasound, digital angiography, and multimodality imaging. Session (4) is devoted to computer-assisted techniques, such as computer-assisted radiological diagnosis, knowledge-based systems, computer-assisted radiation therapy, and computer-assisted surgical planning. (UWA). 266 figs

  6. Interactive computing in BASIC an introduction to interactive computing and a practical course in the BASIC language

    CERN Document Server

    Sanderson, Peter C

    1973-01-01

    Interactive Computing in BASIC: An Introduction to Interactive Computing and a Practical Course in the BASIC Language provides a general introduction to the principles of interactive computing and a comprehensive practical guide to the programming language Beginners All-purpose Symbolic Instruction Code (BASIC). The book starts by providing an introduction to computers and discussing the aspects of terminal usage, programming languages, and the stages in writing and testing a program. The text then discusses BASIC with regard to methods in writing simple arithmetical programs, control stateme

  7. Seven Years after the Manifesto: Literature Review and Research Directions for Technologies in Animal Computer Interaction

    Directory of Open Access Journals (Sweden)

    Ilyena Hirskyj-Douglas

    2018-06-01

    Full Text Available As technologies diversify and become embedded in everyday lives, the technologies we expose to animals, and the new technologies being developed for animals within the field of Animal Computer Interaction (ACI), are increasing. As we approach seven years since the ACI manifesto, which grounded the field within Human Computer Interaction and Computer Science, this thematic literature review looks at the technologies developed for (non-human) animals. Technologies that are analysed include tangible and physical, haptic and wearable, olfactory, screen technology and tracking systems. The conversation explores what exactly ACI is whilst questioning what it means to be animal by considering the impact and loop between machine and animal interactivity. The findings of this review are expected to form the first grounding foundation of ACI technologies, informing future research in animal computing as well as suggesting future areas for exploration.

  8. Reflection effects in multimode fiber systems utilizing laser transmitters

    Science.gov (United States)

    Bates, Harry E.

    1991-11-01

    A number of optical communication lines are now in use at NASA-Kennedy for the transmission of voice, computer data, and video signals. Currently, all of these channels use a single carrier wavelength centered near 1300 or 1550 nm. Engineering tests in the past have given indications of the growth of systematic and random noise in the RF spectrum of a fiber network as the number of connector pairs is increased. This noise seems to occur when a laser transmitter is used instead of an LED. It has been suggested that the noise is caused by back reflections created at connector fiber interfaces. Experiments were performed to explore the effect of reflection on the transmitting laser under conditions of reflective feedback. This effort included computer integration of some of the instrumentation in the fiber optic lab using the LabVIEW software recently acquired by the lab group. The main goal was to interface the Anritsu optical and RF spectrum analyzers to the Macintosh II computer so that laser spectra and network RF spectra could be simultaneously and rapidly acquired in a form convenient for analysis. Both single-mode and multimode fiber is installed at Kennedy. Since most of it is multimode, this effort concentrated on multimode systems.

  9. Human-machine interactions

    Science.gov (United States)

    Forsythe, J Chris [Sandia Park, NM; Xavier, Patrick G [Albuquerque, NM; Abbott, Robert G [Albuquerque, NM; Brannon, Nathan G [Albuquerque, NM; Bernard, Michael L [Tijeras, NM; Speed, Ann E [Albuquerque, NM

    2009-04-28

    Digital technology utilizing a cognitive model based on human naturalistic decision-making processes, including pattern recognition and episodic memory, can reduce the dependency of human-machine interactions on the abilities of a human user and can enable a machine to more closely emulate human-like responses. Such a cognitive model can enable digital technology to use cognitive capacities fundamental to human-like communication and cooperation to interact with humans.

  10. Computer aided systems human engineering: A hypermedia tool

    Science.gov (United States)

    Boff, Kenneth R.; Monk, Donald L.; Cody, William J.

    1992-01-01

    The Computer Aided Systems Human Engineering (CASHE) system, Version 1.0, is a multimedia ergonomics database on CD-ROM for the Apple Macintosh II computer, being developed for use by human system designers, educators, and researchers. It will initially be available on CD-ROM and will allow users to access ergonomics data and models stored electronically as text, graphics, and audio. The CASHE CD-ROM, Version 1.0 will contain the Boff and Lincoln (1988) Engineering Data Compendium, MIL-STD-1472D and a unique, interactive simulation capability, the Perception and Performance Prototyper. Its features also include a specialized data retrieval, scaling, and analysis capability and the state of the art in information retrieval, browsing, and navigation.

  11. The human-bacterial pathogen protein interaction networks of Bacillus anthracis, Francisella tularensis, and Yersinia pestis.

    Directory of Open Access Journals (Sweden)

    Matthew D Dyer

    2010-08-01

    Full Text Available Bacillus anthracis, Francisella tularensis, and Yersinia pestis are bacterial pathogens that can cause anthrax, lethal acute pneumonic disease, and bubonic plague, respectively, and are listed as NIAID Category A priority pathogens for possible use as biological weapons. However, the interactions between human proteins and proteins in these bacteria remain poorly characterized, leading to an incomplete understanding of their pathogenesis and mechanisms of immune evasion. In this study, we used a high-throughput yeast two-hybrid assay to identify physical interactions between human proteins and proteins from each of these three pathogens. From more than 250,000 screens performed, we identified 3,073 human-B. anthracis, 1,383 human-F. tularensis, and 4,059 human-Y. pestis protein-protein interactions, including interactions involving 304 B. anthracis, 52 F. tularensis, and 330 Y. pestis proteins that are uncharacterized. Computational analysis revealed that pathogen proteins preferentially interact with human proteins that are hubs and bottlenecks in the human PPI network. In addition, we computed modules of human-pathogen PPIs that are conserved amongst the three networks. Functionally, such conserved modules reveal commonalities between how the different pathogens interact with crucial host pathways involved in inflammation and immunity. These data constitute the first extensive protein interaction networks constructed for bacterial pathogens and their human hosts. This study provides novel insights into host-pathogen interactions.
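
    The centrality claim (pathogen proteins preferentially target hubs and bottlenecks of the human PPI network) maps onto two standard graph measures. A minimal sketch with networkx on a toy scale-free graph, not the authors' pipeline:

```python
# Hubs (high degree) and bottlenecks (high betweenness) in a PPI network
# (toy random graph as a stand-in for the human interactome).
import networkx as nx

G = nx.barabasi_albert_graph(n=500, m=2, seed=0)   # scale-free stand-in
degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)

hubs = sorted(degree, key=degree.get, reverse=True)[:10]
bottlenecks = sorted(betweenness, key=betweenness.get, reverse=True)[:10]
print("top hubs:", hubs)
print("top bottlenecks:", bottlenecks)
# The study's test: are pathogen-targeted proteins over-represented
# among such high-centrality nodes compared to chance?
```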

  12. Multimodality imaging of the postoperative shoulder

    Energy Technology Data Exchange (ETDEWEB)

    Woertler, Klaus [Technische Universitaet Muenchen, Department of Radiology, Munich (Germany)

    2007-12-15

    Multimodality imaging of the postoperative shoulder includes radiography, magnetic resonance (MR) imaging, MR arthrography, computed tomography (CT), CT arthrography, and ultrasound. Target-oriented evaluation of the postoperative shoulder necessitates familiarity with surgical techniques, their typical complications and sources of failure, knowledge of normal and abnormal postoperative findings, awareness of the advantages and weaknesses with the different radiologic techniques, and clinical information on current symptoms and function. This article reviews the most commonly used surgical procedures for treatment of anterior glenohumeral instability, lesions of the labral-bicipital complex, subacromial impingement, and rotator cuff lesions and highlights the significance of imaging findings with a view to detection of recurrent lesions and postoperative complications in a multimodality approach. (orig.)

  13. Can Computers Foster Human Users’ Creativity? Theory and Praxis of Mixed-Initiative Co-Creativity

    Directory of Open Access Journals (Sweden)

    Antonios Liapis

    2016-07-01

    Full Text Available This article discusses the impact of artificially intelligent computers on the processes of design, play and educational activities. A computational process which has the necessary intelligence and creativity to take a proactive role in such activities can not only support human creativity but also foster it and prompt lateral thinking. The argument is made both from the perspective of human creativity, where the computational input is treated as an external stimulus which triggers re-framing of humans' routines and mental associations, and from the perspective of computational creativity, where human input and initiative constrain the search space of the algorithm, enabling it to focus on specific possible solutions to a problem rather than searching globally for the optimum. The article reviews four mixed-initiative tools (for design and educational play) based on how they contribute to human-machine co-creativity. These paradigms serve different purposes, afford different human interaction methods and incorporate different computationally creative processes. Assessing how co-creativity is facilitated on a per-paradigm basis strengthens the theoretical argument and provides an initial seed for future work in the burgeoning domain of mixed-initiative interaction.

  14. MOBILTEL - Mobile Multimodal Telecommunications dialogue system based on VoIP telephony

    Directory of Open Access Journals (Sweden)

    Anton Čižmár

    2009-10-01

    Full Text Available In this paper the MobilTel project is presented. Communication itself is becoming a multimodal interactive process. The MobilTel project provides research and development activities in the multimodal interfaces area. The result is a functional architecture for a mobile multimodal telecommunication system running on a handheld device. The MobilTel communicator is a multimodal Slovak speech and graphical interface with an integrated VoIP client. The other possible modalities are pen (touch-screen) interaction, keyboard, and the display, on which information is presented in a more user-friendly way (icons, emoticons, etc.) with hyperlink and scrolling-menu availability. We describe the method of interaction between a mobile terminal (PDA) and the MobilTel multimodal PC communicator over a VoIP WLAN connection based on the SIP protocol. We also present graphical examples of services that enable users to obtain information about the weather or about train connections between two train stations.

  15. Human agency beliefs influence behaviour during virtual social interactions

    Directory of Open Access Journals (Sweden)

    Nathan Caruana

    2017-09-01

    Full Text Available In recent years, with the emergence of relatively inexpensive and accessible virtual reality technologies, it is now possible to deliver compelling and realistic simulations of human-to-human interaction. Neuroimaging studies have shown that, when participants believe they are interacting via a virtual interface with another human agent, they show different patterns of brain activity compared to when they know that their virtual partner is computer-controlled. The suggestion is that users adopt an “intentional stance” by attributing mental states to their virtual partner. However, it remains unclear how beliefs in the agency of a virtual partner influence participants’ behaviour and subjective experience of the interaction. We investigated this issue in the context of a cooperative “joint attention” game in which participants interacted via an eye tracker with a virtual onscreen partner, directing each other’s eye gaze to different screen locations. Half of the participants were correctly informed that their partner was controlled by a computer algorithm (“Computer” condition). The other half were misled into believing that the virtual character was controlled by a second participant in another room (“Human” condition). Those in the “Human” condition were slower to make eye contact with their partner and more likely to try and guide their partner before they had established mutual eye contact than participants in the “Computer” condition. They also responded more rapidly when their partner was guiding them, although the same effect was also found for a control condition in which they responded to an arrow cue. Results confirm the influence of human agency beliefs on behaviour in this virtual social interaction context. They further suggest that researchers and developers attempting to simulate social interactions should consider the impact of agency beliefs on user experience in other social contexts, and their effect

  16. Diffusion Maps for Multimodal Registration

    Directory of Open Access Journals (Sweden)

    Gemma Piella

    2014-06-01

    Full Text Available Multimodal image registration is a difficult task, due to the significant intensity variations between the images. A common approach is to use sophisticated similarity measures, such as mutual information, that are robust to those intensity variations. However, these similarity measures are computationally expensive and, moreover, often fail to capture the geometry and the associated dynamics linked with the images. Another approach is the transformation of the images into a common space where modalities can be directly compared. Within this approach, we propose to register multimodal images by using diffusion maps to describe the geometric and spectral properties of the data. Through diffusion maps, the multimodal data is transformed into a new set of canonical coordinates that reflect its geometry uniformly across modalities, so that meaningful correspondences can be established between them. Images in this new representation can then be registered using a simple Euclidean distance as a similarity measure. Registration accuracy was evaluated on both real and simulated brain images with known ground-truth for both rigid and non-rigid registration. Results showed that the proposed approach achieved higher accuracy than the conventional approach using mutual information.
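
    The core construction can be sketched compactly (this is a generic diffusion-map embedding, not the paper's full registration pipeline): build a Gaussian affinity matrix over samples, row-normalize it into a Markov matrix, and take its leading non-trivial eigenvectors, scaled by powers of the eigenvalues, as the new coordinates in which modalities can be compared with plain Euclidean distance. The data and parameters below are invented.

```python
# Diffusion-map embedding sketch: data -> Gaussian affinities -> Markov
# matrix -> spectral coordinates (generic construction, invented data).
import numpy as np

def diffusion_map(X, n_coords=2, t=1, eps=None):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    if eps is None:
        eps = np.median(d2)               # common bandwidth heuristic
    K = np.exp(-d2 / eps)                 # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)  # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)        # eigenvalue 1 is the trivial one
    vals, vecs = vals.real[order], vecs.real[:, order]
    return vecs[:, 1:n_coords + 1] * vals[1:n_coords + 1] ** t

rng = np.random.default_rng(3)
samples = rng.normal(size=(200, 16))      # stand-in for image patches
coords = diffusion_map(samples)
print("diffusion coordinates:", coords.shape)   # (200, 2)
# Registration would then compare modalities by Euclidean distance here.
```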

  17. Improving treatment planning accuracy through multimodality imaging

    International Nuclear Information System (INIS)

    Sailer, Scott L.; Rosenman, Julian G.; Soltys, Mitchel; Cullip, Tim J.; Chen, Jun

    1996-01-01

    Purpose: In clinical practice, physicians are constantly comparing multiple images taken at various times during the patient's treatment course. One goal of such a comparison is to accurately define the gross tumor volume (GTV). The introduction of three-dimensional treatment planning has greatly enhanced the ability to define the GTV, but there are times when the GTV is not visible on the treatment-planning computed tomography (CT) scan. We have modified our treatment-planning software to allow for interactive display of multiple, registered images that enhance the physician's ability to accurately determine the GTV. Methods and Materials: Images are registered using interactive tools developed at the University of North Carolina at Chapel Hill (UNC). Automated methods are also available. Images registered with the treatment-planning CT scan are digitized from film. After a physician has approved the registration, the registered images are made available to the treatment-planning software. Structures and volumes of interest are contoured on all images. In the beam's eye view, wire loop representations of these structures can be visualized from all image types simultaneously. Each registered image can be seamlessly viewed during the treatment-planning process, and all contours from all image types can be seen on any registered image. A beam may, therefore, be designed based on any contour. Results: Nineteen patients have been planned and treated using multimodality imaging from November 1993 through August 1994. All registered images were digitized from film, and many were from outside institutions. Brain has been the most common site (12), but the techniques of registration and image display have also been used for the thorax (4), abdomen (2), and extremity (1). The registered image has been an magnetic resonance (MR) scan in 15 cases and a diagnostic CT scan in 5 cases. In one case, sequential MRs, one before treatment and another after 30 Gy, were used to plan

  18. Movement coordination in applied human-human and human-robot interaction

    DEFF Research Database (Denmark)

    Schubö, Anna; Vesper, Cordula; Wiesbeck, Mathey

    2007-01-01

    The present paper describes a scenario for examining mechanisms of movement coordination in humans and robots. It is assumed that coordination can best be achieved when behavioral rules that shape movement execution in humans are also considered for human-robot interaction. Investigating and describing human-human interaction in terms of goal-oriented movement coordination is considered an important and necessary step for designing and describing human-robot interaction. In the present scenario, trajectories of hand and finger movements were recorded while two human participants performed… …coordination were affected. Implications for human-robot interaction are discussed.

  19. Proceedings of the 5th Danish Human-Computer Interaction Research Symposium

    DEFF Research Database (Denmark)

    Clemmensen, Torkil; Nielsen, Lene

    2005-01-01

    Lene Nielsen DEALING WITH REALITY - IN THEORY Gitte Skou Petersen A NEW IFIP WORKING GROUP - HUMAN WORK INTERACTION DESIGN Rikke Ørngreen, Torkil Clemmensen & Annelise Mark-Pejtersen CLASSIFICATION OF DESCRIPTIONS USED IN SOFTWARE AND INTERACTION DESIGN Georg Strøm OBSTACLES TO DESIGN IN VOLUNTEER BASED… for the symposium, of which 14 were presented orally in four panel sessions. Previously the symposium has been held at the University of Aarhus 2001, University of Copenhagen 2002, Roskilde University Center 2003, and Aalborg University 2004. Torkil Clemmensen & Lene Nielsen Copenhagen, November 2005 CONTENT INTRODUCTION…

  20. Affective Computing and Intelligent Interaction

    CERN Document Server

    2012-01-01

    2012 International Conference on Affective Computing and Intelligent Interaction (ICACII 2012) was the most comprehensive conference focused on the various aspects of advances in Affective Computing and Intelligent Interaction. The conference provided a rare opportunity to bring together worldwide academic researchers and practitioners for exchanging the latest developments and applications in this field such as Intelligent Computing, Affective Computing, Machine Learning, Business Intelligence and HCI.   This volume is a collection of 119 papers selected from 410 submissions from universities and industries all over the world, based on their quality and relevancy to the conference. All of the papers have been peer-reviewed by selected experts.  

  1. Cortical inter-hemispheric circuits for multimodal vocal learning in songbirds.

    Science.gov (United States)

    Paterson, Amy K; Bottjer, Sarah W

    2017-10-15

    Vocal learning in songbirds and humans is strongly influenced by social interactions based on sensory inputs from several modalities. Songbird vocal learning is mediated by cortico-basal ganglia circuits that include the SHELL region of the lateral magnocellular nucleus of the anterior nidopallium (LMAN), but little is known concerning neural pathways that could integrate multimodal sensory information with SHELL circuitry. In addition, cortical pathways that mediate the precise coordination between hemispheres required for song production have been little studied. In order to identify candidate mechanisms for multimodal sensory integration and bilateral coordination for vocal learning in zebra finches, we investigated the anatomical organization of two regions that receive input from SHELL: the dorsal caudolateral nidopallium (dNCL-SHELL) and a region within the ventral arcopallium (Av). Anterograde and retrograde tracing experiments revealed a topographically organized inter-hemispheric circuit: SHELL and dNCL-SHELL, as well as adjacent nidopallial areas, send axonal projections to ipsilateral Av; Av in turn projects to contralateral SHELL, dNCL-SHELL, and regions of nidopallium adjacent to each. Av on each side also projects directly to contralateral Av. dNCL-SHELL and Av each integrate inputs from ipsilateral SHELL with inputs from sensory regions in surrounding nidopallium, suggesting that they function to integrate multimodal sensory information with song-related responses within LMAN-SHELL during vocal learning. Av projections share this integrated information from the ipsilateral hemisphere with contralateral sensory and song-learning regions. Our results suggest that the inter-hemispheric pathway through Av may function to integrate multimodal sensory feedback with vocal-learning circuitry and coordinate bilateral vocal behavior. © 2017 Wiley Periodicals, Inc.

  2. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification

    Directory of Open Access Journals (Sweden)

    Gayathri Rajagopal

    2015-01-01

    Full Text Available This research proposes a multimodal multifeature biometric system for human recognition using two traits, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature-level fusion. The features fused at the feature level are raw biometric data, which contain richer information than fusion at the matching-score or decision level. Hence information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the feature sets, as they are high-dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm was tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
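
    A minimal sketch of the described pipeline (feature-level fusion, PCA to tame the dimensionality, then KNN classification) using random stand-ins for the palmprint and iris feature sets; the real feature extraction is not reproduced here.

```python
# Feature-level fusion of two biometric modalities, PCA for dimensionality
# reduction, and KNN classification (stand-in features, not the real system).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_subjects, samples_each = 40, 10
labels = np.repeat(np.arange(n_subjects), samples_each)
palm = rng.normal(size=(labels.size, 300)) + labels[:, None] * 0.05
iris = rng.normal(size=(labels.size, 200)) + labels[:, None] * 0.05

fused = np.hstack([palm, iris])               # feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.25,
                                          stratify=labels, random_state=0)

pca = PCA(n_components=50).fit(X_tr)          # reduce the fused dimension
knn = KNeighborsClassifier(n_neighbors=3).fit(pca.transform(X_tr), y_tr)
print("rank-1 accuracy:", knn.score(pca.transform(X_te), y_te))
```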

  3. Adaptive interaction a utility maximization approach to understanding human interaction with technology

    CERN Document Server

    Payne, Stephen J

    2013-01-01

    This lecture describes a theoretical framework for the behavioural sciences that holds high promise for theory-driven research and design in Human-Computer Interaction. The framework is designed to tackle the adaptive, ecological, and bounded nature of human behaviour. It is designed to help scientists and practitioners reason about why people choose to behave as they do and to explain which strategies people choose in response to utility, ecology, and cognitive information processing mechanisms. A key idea is that people choose strategies so as to maximise utility given constraints. The frame

  4. Increasing trend of wearables and multimodal interface for human activity monitoring: A review.

    Science.gov (United States)

    Kumari, Preeti; Mathew, Lini; Syal, Poonam

    2017-04-15

    Activity recognition technology is one of the most important technologies for life-logging and for the care of elderly persons. Elderly people prefer to live in their own homes, within their own locality; if they are capable of doing so, several social and economic benefits can follow. However, living alone can carry high risks. Wearable sensors have been developed to mitigate these risks and are expected to be ready for medical use. They can help monitor the wellness of elderly persons living alone by unobtrusively monitoring their daily activities. The study aims to review the increasing trend of wearable devices and the need for multimodal recognition for continuous or discontinuous monitoring of human activity and biological signals such as the electroencephalogram (EEG), electrooculogram (EOG), electromyogram (EMG) and electrocardiogram (ECG), along with other parameters and symptoms. This can provide necessary assistance in times of dire need, which is crucial for advancing disease diagnosis and treatment. A shared-control architecture with a multimodal interface can be used in more complex environments, where a larger set of commands must be issued and better control performance is required. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Joint sparse representation for robust multimodal biometrics recognition.

    Science.gov (United States)

    Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama

    2014-01-01

    Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weight each modality as it gets fused. Furthermore, we kernelize the algorithm to handle nonlinearity in the data. The optimization problem is solved using an efficient alternating direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.
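
    As a rough illustration of the sparse-representation idea (a classic single-modality sparse representation classifier, not the authors' joint multimodal optimization), the sketch below codes a test sample over a dictionary of training samples with orthogonal matching pursuit and assigns the class with the smallest reconstruction residual. All dimensions and data are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)

# Dictionary: columns are training samples, grouped by class.
n_classes, per_class, dim = 5, 8, 60
centers = rng.normal(0, 5, size=(n_classes, dim))
D = np.hstack([(centers[c] + rng.normal(size=(per_class, dim))).T
               for c in range(n_classes)])
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms

# A test sample drawn from class 2.
y = centers[2] + rng.normal(size=dim)

# Sparse code of the test sample over the whole training dictionary.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
x = omp.fit(D, y).coef_

# Classify by per-class reconstruction residual.
residuals = []
for c in range(n_classes):
    xc = np.zeros_like(x)
    cols = slice(c * per_class, (c + 1) * per_class)
    xc[cols] = x[cols]                  # keep only this class's coefficients
    residuals.append(np.linalg.norm(y - D @ xc))
print("predicted class:", int(np.argmin(residuals)))
```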

  6. Haptic-Multimodal Flight Control System Update

    Science.gov (United States)

    Goodrich, Kenneth H.; Schutte, Paul C.; Williams, Ralph A.

    2011-01-01

    The rapidly advancing capabilities of autonomous aircraft suggest a future where many of the responsibilities of today's pilot transition to the vehicle, transforming the pilot's job into something akin to driving a car or simply being a passenger. Notionally, this transition will reduce the specialized skills, training, and attention required of the human user while improving safety and performance. However, our experience with highly automated aircraft highlights many challenges to this transition including: lack of automation resilience; adverse human-automation interaction under stress; and the difficulty of developing certification standards and methods of compliance for complex systems performing critical functions traditionally performed by the pilot (e.g., sense and avoid vs. see and avoid). Recognizing these opportunities and realities, researchers at NASA Langley are developing a haptic-multimodal flight control (HFC) system concept that can serve as a bridge from today's state-of-the-art aircraft, which are highly automated but have little autonomy and can only be operated safely by highly trained experts (i.e., pilots), to a future in which non-experts (e.g., drivers) can safely and reliably use autonomous aircraft to perform a variety of missions. This paper reviews the motivation and theoretical basis of the HFC system, describes its current state of development, and presents results from two pilot-in-the-loop simulation studies. These preliminary studies suggest the HFC reshapes human-automation interaction in a way well-suited to revolutionary ease-of-use.

  7. Interactive inverse kinematics for human motion estimation

    DEFF Research Database (Denmark)

    Engell-Nørregård, Morten Pol; Hauberg, Søren; Lapuyade, Jerome

    2009-01-01

    We present an application of a fast interactive inverse kinematics method as a dimensionality reduction for monocular human motion estimation. The inverse kinematics solver deals efficiently and robustly with box constraints and does not suffer from shaking artifacts. The presented motion...... to significantly speed up the particle filtering. It should be stressed that the observation part of the system has not been our focus, and as such is described only for the sake of completeness. With our approach it is possible to construct a robust and computationally efficient system for human motion estimation.
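
    A toy version of inverse kinematics with box constraints, in the spirit of (though far simpler than) the solver described above: projected gradient descent on a two-link planar arm, clamping the joint angles to their bounds at every step. Link lengths, bounds, gain and target are illustrative assumptions.

```python
import numpy as np

L1, L2 = 1.0, 0.8                      # link lengths (assumed)
LO = np.array([-np.pi, -2.5])          # joint lower bounds (box constraints)
HI = np.array([np.pi, 2.5])            # joint upper bounds

def fk(q):
    """Forward kinematics: end-effector position of a 2-link planar arm."""
    return np.array([
        L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
        L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1]),
    ])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik(target, q=None, gain=0.2, iters=500):
    """Projected gradient descent on ||fk(q) - target||^2 within the box."""
    q = np.zeros(2) if q is None else q.copy()
    for _ in range(iters):
        err = fk(q) - target
        q = q - gain * jacobian(q).T @ err   # gradient step
        q = np.clip(q, LO, HI)               # project onto the box constraints
    return q

q = ik(np.array([1.2, 0.6]))
print("joint angles:", np.round(q, 3), "reached:", np.round(fk(q), 3))
```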

  8. Object recognition through a multi-mode fiber

    Science.gov (United States)

    Takagi, Ryosuke; Horisaki, Ryoichi; Tanida, Jun

    2017-04-01

    We present a method of recognizing an object through a multi-mode fiber. A number of speckle patterns transmitted through a multi-mode fiber are provided to a classifier based on machine learning. We experimentally demonstrated binary classification of face and non-face targets with this method. The measurement process of the experimental setup was random and nonlinear, because a multi-mode fiber is a typical strongly scattering medium and no reference light was used in our setup. Comparisons between three supervised learning methods, support vector machine, adaptive boosting, and neural network, are also provided. All of these learning methods achieved classification accuracies of about 90%. The approach presented here can realize a compact and smart optical sensor. It is practically useful for medical applications, such as endoscopy. Our study also indicates a promising use of artificial intelligence, which has progressed rapidly, for reducing optical and computational costs in optical sensing systems.
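
    A minimal sketch of the idea of classifying objects from fiber speckle: a fixed random complex transmission matrix stands in for the multi-mode fiber, intensity-only speckle patterns are the features, and a support vector machine is trained on them. The transmission-matrix model and all sizes are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(2)

n_in, n_out = 100, 256   # input pixels, output speckle pixels (assumed)
# Fixed random complex transmission matrix standing in for the fiber.
T = rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))

def speckle(obj):
    """Intensity-only speckle from propagating an object through T
    (nonlinear in the object because only |.|^2 is measured)."""
    return np.abs(T @ obj) ** 2

# Two synthetic object classes ("face" vs "non-face" stand-ins).
proto = rng.random((2, n_in))
X, y = [], []
for label in (0, 1):
    for _ in range(200):
        obj = proto[label] + 0.3 * rng.random(n_in)  # prototype + noise
        X.append(speckle(obj))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("binary classification accuracy:", clf.score(X_te, y_te))
```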

  9. Study of internalization and viability of multimodal nanoparticles for labeling of human umbilical cord mesenchymal stem cells

    International Nuclear Information System (INIS)

    Miyaki, Liza Aya Mabuchi; Sibov, Tatiana Tais; Pavon, Lorena Favaro; Mamani, Javier Bustamante; Gamarra, Lionel Fernel

    2012-01-01

    Objective: To analyze multimodal magnetic nanoparticles-Rhodamine B in culture media for cell labeling, and to study the detection of multimodal magnetic nanoparticles-Rhodamine B in labeled cells, evaluating their viability at concentrations of 10 μg Fe/mL and 100 μg Fe/mL. Methods: We performed an analysis of the stability of multimodal magnetic nanoparticles-Rhodamine B in different culture media; the labeling of mesenchymal stem cells with multimodal magnetic nanoparticles-Rhodamine B; the intracellular detection of multimodal magnetic nanoparticles-Rhodamine B in mesenchymal stem cells; and an assessment of the viability of labeled cells by proliferation kinetics. Results: The stability analysis showed that multimodal magnetic nanoparticles-Rhodamine B had good stability in cultured Dulbecco's Modified Eagle's-Low Glucose medium and RPMI 1640 medium. Labeled mesenchymal stem cells showed intracellular nanoparticles as blue granules co-localized with fluorescent clusters, characterizing the magnetic and fluorescent properties of multimodal magnetic nanoparticles-Rhodamine B. Conclusion: The stability of multimodal magnetic nanoparticles-Rhodamine B found in cultured Dulbecco's Modified Eagle's-Low Glucose medium and RPMI 1640 medium ensured intracellular labeling of mesenchymal stem cells. This labeling did not affect the viability of the labeled mesenchymal stem cells, since they continued to proliferate for five days. (author)

  10. Creating Standardized Video Recordings of Multimodal Interactions across Cultures

    DEFF Research Database (Denmark)

    Rehm, Matthias; André, Elisabeth; Bee, Nikolaus

    2009-01-01

    the literature is often too anecdotal to serve as the basis for modeling a system’s behavior, making it necessary to collect multimodal corpora in a standardized fashion in different cultures. In this chapter, the challenges of such an endeavor are introduced and solutions are presented by examples from a German......-Japanese project that aims at modeling culture-specific behaviors for Embodied Conversational Agents....

  11. Gestures and multimodal input

    OpenAIRE

    Keates, Simeon; Robinson, Peter

    1999-01-01

    For users with motion impairments, the standard keyboard and mouse arrangement for computer access often presents problems. Other approaches have to be adopted to overcome this. In this paper, we will describe the development of a prototype multimodal input system based on two gestural input channels. Results from extensive user trials of this system are presented. These trials showed that the physical and cognitive loads on the user can quickly become excessive and detrimental to the interac...

  12. An object-oriented computational model to study cardiopulmonary hemodynamic interactions in humans.

    Science.gov (United States)

    Ngo, Chuong; Dahlmanns, Stephan; Vollmer, Thomas; Misgeld, Berno; Leonhardt, Steffen

    2018-06-01

    This work introduces an object-oriented computational model to study cardiopulmonary interactions in humans. Modeling was performed in the object-oriented language MATLAB Simscape, where model components are connected with each other through physical connections. Constitutive and phenomenological equations of model elements are implemented based on their non-linear pressure-volume or pressure-flow relationships. The model includes more than 30 physiological compartments, which belong either to the cardiovascular or the respiratory system. The model considers non-linear behaviors of veins, pulmonary capillaries, collapsible airways, alveoli, and the chest wall. Model parameters were derived from literature values. Model validation was performed by comparing simulation results with clinical and animal data reported in the literature. The model is able to provide quantitative values of alveolar, pleural, interstitial, aortic and ventricular pressures, as well as heart and lung volumes during spontaneous breathing and mechanical ventilation. Results of the baseline simulation demonstrate the consistency of the assigned parameters. Simulation results during mechanical ventilation with PEEP trials can be directly compared with animal and clinical data given in the literature. Object-oriented programming languages can be used to model interconnected systems including model non-linearities. The model provides a useful tool to investigate cardiopulmonary activity during spontaneous breathing and mechanical ventilation. Copyright © 2018 Elsevier B.V. All rights reserved.
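
    To illustrate the compartmental modeling style (not the authors' 30-compartment Simscape implementation), the sketch below connects two elastic compartments through a flow resistance and integrates the resulting volume exchange with a simple Euler loop; all parameter values are arbitrary assumptions.

```python
class Compartment:
    """Elastic compartment with a linear pressure-volume relationship."""
    def __init__(self, compliance, volume):
        self.C, self.V = compliance, volume

    @property
    def pressure(self):
        return self.V / self.C  # P = V / C

class Resistance:
    """Linear flow resistance between two compartments: Q = dP / R."""
    def __init__(self, R, src, dst):
        self.R, self.src, self.dst = R, src, dst

    def flow(self):
        return (self.src.pressure - self.dst.pressure) / self.R

# Two-compartment toy system (arbitrary units).
art = Compartment(compliance=1.5, volume=120.0)
ven = Compartment(compliance=50.0, volume=300.0)
link = Resistance(R=1.0, src=art, dst=ven)

dt, t_end = 0.01, 5.0
for _ in range(int(t_end / dt)):   # explicit Euler integration
    q = link.flow()
    art.V -= q * dt                # volume leaves the upstream side
    ven.V += q * dt                # and enters the downstream side

print(f"equilibrated pressures: {art.pressure:.2f} vs {ven.pressure:.2f}")
```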

  13. Multi-Modal, Multi-Touch Interaction with Maps in Disaster Management Applications

    Directory of Open Access Journals (Sweden)

    V. Paelke

    2012-07-01

    Full Text Available Multi-touch interaction has become popular in recent years and impressive advances in technology have been demonstrated, with the presentation of digital maps as a common presentation scenario. However, most existing systems are really technology demonstrators and have not been designed with real applications in mind. A critical factor in the management of disaster situations is access to current and reliable data. New sensors and data acquisition platforms (e.g., satellites, UAVs, mobile sensor networks) have improved the supply of spatial data tremendously. However, in many cases this data is not well integrated into current crisis management systems and the capabilities to analyze and use it lag behind sensor capabilities. Therefore, it is essential to develop techniques that allow the effective organization, use and management of heterogeneous data from a wide variety of data sources. Standard user interfaces are not well suited to provide this information to crisis managers. Especially in dynamic situations, conventional cartographic displays and mouse-based interaction techniques fail to address the need to review a situation rapidly and act on it as a team. The development of novel interaction techniques like multi-touch and tangible interaction in combination with large displays provides a promising base technology to provide crisis managers with an adequate overview of the situation and to share relevant information with other stakeholders in a collaborative setting. However, design expertise on the use of such techniques in interfaces for real-world applications is still very sparse. In this paper we report on interdisciplinary research with a user and application centric focus to establish real-world requirements, to design new multi-modal mapping interfaces, and to validate them in disaster management applications. Initial results show that tangible and pen-based interaction are well suited to provide an intuitive and visible way to

  14. Designing Multimodal Mobile Interaction for a Text Messaging Application for Visually Impaired Users

    Directory of Open Access Journals (Sweden)

    Carlos Duarte

    2017-12-01

    Full Text Available While mobile devices have seen important accessibility advances in recent years, people with visual impairments still face important barriers, especially in contexts where their hands are not free to hold the mobile device, such as when walking outside. By resorting to a multimodal combination of body-based gestures and voice, we aim to achieve fully hands- and vision-free interaction with mobile devices. In this article, we describe this vision and present the design of a prototype text messaging application inspired by it. The article also presents a user study in which the suitability of the proposed approach was assessed and a performance comparison between our prototype and existing SMS applications was conducted. Study participants received the prototype positively, and it also supported better performance in tasks that involved text editing.

  15. Human-Computer Systems Interaction: Backgrounds and Applications 2, Part 1

    CERN Document Server

    Kulikowski, Juliusz; Mroczek, Teresa

    2012-01-01

    The main contemporary human-system interaction (H-SI) problems consist in the design and/or improvement of tools for the effective exchange of information between individual humans or human groups and technical systems created to aid humans in reaching their vital goals. This book is the second issue in a series devoted to novel H-SI results and contributions achieved in recent years by many research groups in European and non-European countries. Preliminary (usually shortened) versions of the chapters were presented as conference papers at the 3rd International Conference on H-SI held in Rzeszow, Poland, in 2010. The large number of valuable papers selected for publication made it necessary to publish the book in two volumes. This first volume consists of sections devoted to: I. Decision Supporting Systems, II. Distributed Knowledge Bases and WEB Systems and III. Impaired Persons Aiding Systems. The decision supporting systems concern various application areas, like enterprises mana...

  16. Multifuel multimodal network design; Projeto de redes multicombustiveis multimodal

    Energy Technology Data Exchange (ETDEWEB)

    Lage, Carolina; Dias, Gustavo; Bahiense, Laura; Ferreira Filho, Virgilio J.M. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Programa de Engenharia de Producao

    2008-07-01

    The objective of the Multicommodity Multimodal Network Project is the development of modeling tools and methodologies for the optimal sizing of production networks and the multimodal distribution of multiple fuels and their inputs, considering investment and transportation costs. Given the inherently non-linear, combinatorial nature of the problem, solving real instances exactly with the complete model becomes computationally intractable. Thus, the solution strategy should combine exact and heuristic methods applied to subdivisions of the original problem. This paper deals with one of these subdivisions, tackling the problem of modeling a network of pipelines to carry ethanol production away from the producing plants. The objective consists in defining the best network topology, minimizing investment and operational costs while meeting total demand. To that end, the network is modeled as a tree whose nodes are the centers of producing regions and whose edges are the pipelines through which the ethanol produced by the plants must flow. The main objective also includes deciding the optimal diameter of each pipeline and the optimal sizing of the pumps, in order to minimize pumping costs. (author)
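
    As a toy version of the diameter/pumping trade-off mentioned above, the sketch below enumerates candidate pipe diameters for a single edge and picks the one minimizing investment plus pumping cost under a simplified Darcy-Weisbach pressure-drop model. Every coefficient here is an illustrative assumption, not data from the project.

```python
import numpy as np

Q = 0.15              # ethanol flow rate, m^3/s (assumed)
L = 50_000.0          # pipeline length, m
RHO, F = 790.0, 0.02  # ethanol density (kg/m^3) and friction factor
CAPEX_COEF = 800.0    # investment cost per m of length per D^2 (made up)
ENERGY_COST = 3e-8    # cost per joule, roughly 0.1 per kWh (made up)
HORIZON = 10 * 365 * 24 * 3600  # 10-year operating horizon, s

def total_cost(D):
    v = 4 * Q / (np.pi * D**2)            # mean flow velocity
    dP = F * (L / D) * RHO * v**2 / 2     # Darcy-Weisbach pressure drop
    pump_power = dP * Q                   # hydraulic pumping power, W
    capex = CAPEX_COEF * L * D**2         # crude investment model ~ D^2
    opex = ENERGY_COST * pump_power * HORIZON
    return capex + opex

diameters = np.arange(0.15, 0.80, 0.05)   # candidate diameters, m
costs = np.array([total_cost(D) for D in diameters])
best = diameters[np.argmin(costs)]
print(f"best diameter: {best:.2f} m, total cost: {costs.min():.3e}")
```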

  17. BUILD-IT : a computer vision-based interaction technique for a planning tool

    NARCIS (Netherlands)

    Rauterberg, G.W.M.; Fjeld, M.; Krueger, H.; Bichsel, M.; Leonhardt, U.; Meier, M.; Thimbleby, H.; O'Conaill, B.; Thomas, P.J.

    1997-01-01

    We show a method that goes beyond the established approaches to human-computer interaction. We first present a serious critique of traditional interface types, showing their major drawbacks and limitations. Promising alternatives are offered by virtual (or immersive) reality (VR) and by augmented

  18. Language and Identity in Multimodal Text: Case Study of Thailand’s Bank Pamphlet

    Directory of Open Access Journals (Sweden)

    Korapat Pruekchaikul

    2017-12-01

    Full Text Available With the main objective of presenting a linguistic model for the analysis of identity construction in multimodal texts, particularly in advertising, this article attempts to integrate three theoretical frameworks, namely the types of discourse of Socio-Discursive Interactionism, Greimas' actantial roles, and the symbolic processes of the Grammar of Visual Design proposed by Kress and van Leeuwen. The first two theories are used to analyze verbal language forms whereas the third is applied exclusively to images in advertising. The data sample is a Thai bank pamphlet of Siam Commercial Bank, collected in Bangkok, Thailand, in June 2015. According to the data analysis, the theoretical frameworks employed here prove that identity, a psychological product, exists in the human mind and can be indexed by language in interaction. The analysis also found that identity can be projected multimodally through language manifestations whose forms are not only verbal but also pictorial.

  19. Vision-based interaction

    CERN Document Server

    Turk, Matthew

    2013-01-01

    In its early years, the field of computer vision was largely motivated by researchers seeking computational models of biological vision and solutions to practical problems in manufacturing, defense, and medicine. For the past two decades or so, there has been an increasing interest in computer vision as an input modality in the context of human-computer interaction. Such vision-based interaction can endow interactive systems with visual capabilities similar to those important to human-human interaction, in order to perceive non-verbal cues and incorporate this information in applications such

  20. Linking variability in brain chemistry and circuit function through multimodal human neuroimaging

    DEFF Research Database (Denmark)

    Fisher, Patrick M; Hariri, A R

    2012-01-01

    and dopamine system and its effects on threat- and reward-related brain function, we review evidence for how such a multimodal neuroimaging strategy can be successfully implemented. Furthermore, we discuss how multimodal PET-fMRI can be integrated with techniques such as imaging genetics, pharmacological......Identifying neurobiological mechanisms mediating the emergence of individual differences in behavior is critical for advancing our understanding of relative risk for psychopathology. Neuroreceptor positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) can be used...

  1. Human Pacman: A Mobile Augmented Reality Entertainment System Based on Physical, Social, and Ubiquitous Computing

    Science.gov (United States)

    Cheok, Adrian David

    This chapter details the Human Pacman system to illuminate entertainment computing, which ventures to embed the natural physical world seamlessly within a fantasy virtual playground by capitalizing on infrastructure provided by mobile computing, wireless LAN, and ubiquitous computing. With Human Pacman, we have a physical role-playing computer fantasy together with real human-social and mobile gaming that emphasizes collaboration and competition between players in a wide outdoor physical area allowing natural wide-area human physical movement. Pacmen and Ghosts are now real human players in the real world experiencing the mixed computer-graphics fantasy-reality provided by the wearable computers they carry. Virtual cookies and actual tangible physical objects are incorporated into the gameplay to provide novel experiences of seamless transitions between the real and virtual worlds. This is an example of a new form of gaming that is anchored in physicality, mobility, social interaction, and ubiquitous computing.

  2. Human-technology interaction for standoff IED detection

    Science.gov (United States)

    Zhang, Evan; Zou, Yiyang; Zachrich, Liping; Fulton, Jack

    2011-03-01

    IEDs kill our soldiers and innocent people every day. Lessons learned from Iraq and Afghanistan clearly indicate that IEDs cannot be detected or defeated by technology alone; human-technology interaction must be engaged. In most cases, the eye is the best detector and the brain the best computer; technologies are tools that achieve full functionality only when used properly by human beings. In this paper, a three-sensor fusion system (UV Raman/fluorescence, CCD and LWIR) for standoff IED detection and a handheld fusion system for close-range IED detection are developed and demonstrated. Soldiers must be trained to use their eyes or the CCD/LWIR cameras for wide-area search while on the move, finding small suspect areas first and only then using the spectrometer: because the laser spot is so small, scanning a one-mile-long, 2-meter-wide road would take 185 days, even though our fusion system can detect an IED at 30 m with a 1 s interrogation time. Even when a small suspect area (e.g., 0.5 m x 0.5 m) is found, human eyes still cannot detect the IED; soldiers must interact with the technology, using the laser-based spectrometer to scan the area, and can then detect and identify the IED in 10 minutes rather than 185 days. Therefore, the human-technology interaction approach will be the best solution for IED detection.

  3. "Look at what I am saying": Multimodal science teaching

    Science.gov (United States)

    Pozzer-Ardenghi, Lilian

    Language constitutes the dominant representational mode in science teaching, and lectures are still the most prevalent teaching method in school science. In this dissertation, I investigate lectures from a multimodal and communicative perspective to better understand how teaching as a cultural-historical and social activity unfolds; that is, I am concerned with teaching as a communicative event, where a variety of signs (or semiotic resources), expressed in diverse modalities (or modes of communication), are produced and reproduced while the teacher articulates very specific conceptual meanings for the students. Within a trans-disciplinary approach that merges theoretical and methodological frameworks from social and cultural studies of human activity and interaction, communication and gesture studies, linguistics, semiotics, pragmatics, and studies on teaching and learning science, I investigate teaching as a communicative, dynamic, multimodal, and social activity. My research questions include: What are the resources produced and reproduced in the classroom when the teacher is lecturing? How do these resources interact with each other? What meanings do they carry and how are these associated to achieve the coherence necessary to accomplish the communication of complex and abstract scientific concepts, not only within one lecture, but also within an entire unit of the curriculum encompassing various lectures? My results show that, when lecturing, the communication of scientific concepts occurs along trajectories driven by the dialectical relation among the various semiotic resources a lecturer makes available, which together constitute a unit---the idea. Speech, gestures, and other nonverbal resources are but one-sided expressions of a higher-order communicative meaning unit. The iterable nature of the signs produced and reproduced during science lectures permits, supports, and encourages the repetition, variation, and translation of ideas, themes, and languages and

  4. Spatial computing in interactive architecture

    NARCIS (Netherlands)

    S.O. Dulman (Stefan); M. Krezer; L. Hovestad

    2014-01-01

    Distributed computing is the theoretical foundation for applications and technologies like interactive architecture, wearable computing, and smart materials. It evolves continuously, following needs rising from scientific developments, novel uses of technology, or simply the curiosity to

  5. Child-Computer Interaction SIG

    DEFF Research Database (Denmark)

    Hourcade, Juan Pablo; Revelle, Glenda; Zeising, Anja

    2016-01-01

    This SIG will provide child-computer interaction researchers and practitioners an opportunity to discuss four topics that represent new challenges and opportunities for the community. The four areas are: interactive technologies for children under the age of five, technology for inclusion, privacy...... and information security in the age of the quantified self, and the maker movement....

  6. The impact of using computer decision-support software in primary care nurse-led telephone triage: interactional dilemmas and conversational consequences.

    Science.gov (United States)

    Murdoch, Jamie; Barnes, Rebecca; Pooler, Jillian; Lattimer, Valerie; Fletcher, Emily; Campbell, John L

    2015-02-01

    Telephone triage represents one strategy to manage demand for face-to-face GP appointments in primary care. Although computer decision-support software (CDSS) is increasingly used by nurses to triage patients, little is understood about how interaction is organized in this setting; specifically, about the interactional dilemmas this computer-mediated setting invokes, and how these may be consequential for communication with patients. Using conversation analytic methods we undertook a multi-modal analysis of 22 audio-recorded telephone triage nurse-caller interactions from one GP practice in England, including 10 video-recordings of nurses' use of CDSS during triage. We draw on Goffman's theoretical notion of participation frameworks to make sense of these interactions, presenting 'telling cases' of the interactional dilemmas nurses faced in meeting patients' needs and accurately documenting the patient's condition within the CDSS. Our findings highlight troubles in the 'interactional workability' of telephone triage, exposing difficulties faced in aligning the proximal and wider distal context that structures CDSS-mediated interactions. Patients present with diverse symptoms, understandings of triage consultations, and communication skills, which nurses need to negotiate turn-by-turn against CDSS requirements. Nurses therefore need sophisticated communication, technological and clinical skills to ensure patients' presenting problems are accurately captured within the CDSS to determine safe triage outcomes. Dilemmas around how nurses manage and record information, and the issues of professional accountability that may ensue, raise questions about the impact of CDSS and its use in supporting nurses to deliver safe and effective patient care. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Evaluation of multimodal ground cues

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Lecuyer, Anatole; Serafin, Stefania

    2012-01-01

    This chapter presents an array of results on the perception of ground surfaces via multiple sensory modalities, with special attention to non-visual perceptual cues, notably those arising from audition and haptics, as well as interactions between them. It also reviews approaches to combining...... synthetic multimodal cues, from vision, haptics, and audition, in order to realize virtual experiences of walking on simulated ground surfaces or other features.

  8. Virtual microscopy: merging of computer mediated communication and intuitive interfacing

    Science.gov (United States)

    de Ridder, Huib; de Ridder-Sluiter, Johanna G.; Kluin, Philip M.; Christiaans, Henri H. C. M.

    2009-02-01

    Ubiquitous computing (or Ambient Intelligence) is an upcoming technology that is usually associated with futuristic smart environments in which information is available anytime anywhere and with which humans can interact in a natural, multimodal way. However spectacular the corresponding scenarios may be, it is equally challenging to consider how this technology may enhance existing situations. This is illustrated by a case study from the Dutch medical field: central quality reviewing for pathology in child oncology. The main goal of the review is to assess the quality of the diagnosis based on patient material. The sharing of knowledge in social, face-to-face interaction during such a meeting is an important advantage. At the same time there is the disadvantage that the experts from the seven Dutch academic medical centers have to travel to the review meeting, and that the logistics required to collect and bring patient material and data to the meeting are cumbersome and time-consuming. This paper focuses on how this time-consuming, inefficient way of reviewing can be replaced by a virtual collaboration system that merges technology supporting Computer Mediated Collaboration with intuitive interfacing. This requires insight into the preferred way of communication and collaboration as well as knowledge about the preferred interaction style with a virtual shared workspace.

  9. Introducing the Interactive Model for the Training of Audiovisual Translators and Analysis of Multimodal Texts

    Directory of Open Access Journals (Sweden)

    Pietro Luigi Iaia

    2015-07-01

    Full Text Available This paper introduces the 'Interactive Model' of audiovisual translation developed in the context of my PhD research on the cognitive-semantic, functional and socio-cultural features of the Italian-dubbing translation of a corpus of humorous texts. The Model is based on two interactive macro-phases - 'Multimodal Critical Analysis of Scripts' (MuCrAS) and 'Multimodal Re-Textualization of Scripts' (MuReTS). Its construction and application are justified by a multidisciplinary approach to the analysis and translation of audiovisual texts, so as to focus on the linguistic and extralinguistic dimensions affecting both the reception of source texts and the production of target ones (Chaume 2004; Díaz Cintas 2004). By resorting to Critical Discourse Analysis (Fairclough 1995, 2001), to a process-based approach to translation and to a socio-semiotic analysis of multimodal texts (van Leeuwen 2004; Kress and van Leeuwen 2006), the Model is meant to be applied to the training of audiovisual translators and discourse analysts in order to help them enquire into the levels of pragmalinguistic equivalence between the source and the target versions. Finally, a practical application is discussed, detailing the Italian rendering of a comic sketch from the American late-night talk show Conan.

  10. A Human-Centred Tangible approach to learning Computational Thinking

    Directory of Open Access Journals (Sweden)

    Tommaso Turchi

    2016-08-01

    Full Text Available Computational Thinking has recently become a focus of many teaching and research domains; it encapsulates those thinking skills integral to solving complex problems using a computer, thus being widely applicable in our society. It is influencing research across many disciplines and also coming into the limelight of education, mostly thanks to public initiatives such as the Hour of Code. In this paper we present our arguments for promoting Computational Thinking in education through the Human-centred paradigm of Tangible End-User Development, namely by exploiting objects whose interactions with the physical environment are mapped to digital actions performed on the system.

  11. Multimodal communication in animals, humans and robots: an introduction to perspectives in brain-inspired informatics.

    Science.gov (United States)

    Wermter, S; Page, M; Knowles, M; Gallese, V; Pulvermüller, F; Taylor, J

    2009-03-01

    Recent years have seen convergence in research on brain mechanisms and neurocomputational approaches, culminating in the creation of a new generation of robots whose artificial "brains" respect neuroscience principles and whose "cognitive" systems venture into higher cognitive domains such as planning and action sequencing, complex object and concept processing, and language. The present article gives an overview of selected projects in this general multidisciplinary field. The work reviewed centres on research funded by the EU in the context of the New and Emergent Science and Technology, NEST, funding scheme highlighting the topic "What it means to be human". Examples of such projects include learning by imitation (Edici project), examining the origin of human rule-based reasoning (Far), studying the neural origins of language (Neurocom), exploring the evolutionary origins of the human mind (Pkb140404), researching into verbal and non-verbal communication (Refcom), using and interpreting signs (Sedsu), characterising human language by structural complexity (Chlasc), and representing abstract concepts (Abstract). Each of the communication-centred research projects revealed individual insights; however, there had been little overall analysis of results and hypotheses. In the Specific Support Action Nestcom, we proposed to analyse some NEST projects focusing on the central question "What it means to communicate" and to review, understand and integrate the results of previous communication-related research, in order to develop and communicate multimodal experimental hypotheses for investigation by future projects. The present special issue includes a range of papers on the interplay between neuroinformatics, brain science and robotics in the general area of higher cognitive functions and multimodal communication. These papers extend talks given at the NESTCOM workshops, at ICANN (http://www.his.sunderland.ac.uk/nestcom/workshop/icann.html) in Porto and at the first

  12. Closed-loop EMG-informed model-based analysis of human musculoskeletal mechanics on rough terrains

    NARCIS (Netherlands)

    Varotto, C.; Sawacha, Z.; Gizzi, L; Farina, D.; Sartori, M.

    2017-01-01

    This work aims at estimating the musculoskeletal forces acting in the human lower extremity during locomotion on rough terrains. We employ computational models of the human neuro-musculoskeletal system that are informed by multi-modal movement data including foot-ground reaction forces, 3D marker

  13. Human-Robot Interaction and Human Self-Realization

    DEFF Research Database (Denmark)

    Nørskov, Marco

    2014-01-01

    is to test the basis for this type of discrimination when it comes to human-robot interaction. Furthermore, the paper will take Heidegger's warning concerning technology as a vantage point and explore the possibility of human-robot interaction forming a praxis that might help humans to be with robots beyond...

  14. Utilizing Multi-Modal Literacies in Middle Grades Science

    Science.gov (United States)

    Saurino, Dan; Ogletree, Tamra; Saurino, Penelope

    2010-01-01

    The nature of literacy is changing. Increased student use of computer-mediated, digital, and visual communication expands our understanding of adolescent multi-modal capabilities that reach beyond the traditional conventions of linear speech and written text in the science curriculum. Advancing technology opens doors to learning that involve…

  15. Interaction of promethazine and adiphenine to human hemoglobin: A comparative spectroscopic and computational analysis

    Science.gov (United States)

    Maurya, Neha; ud din Parray, Mehraj; Maurya, Jitendra Kumar; Kumar, Amit; Patel, Rajan

    2018-06-01

    The binding of the amphiphilic drugs promethazine hydrochloride (PMT) and adiphenine hydrochloride (ADP) to human hemoglobin (Hb) was unraveled by fluorescence, absorbance, time-resolved fluorescence, fluorescence resonance energy transfer (FRET) and circular dichroism (CD) spectral techniques in combination with molecular docking and molecular dynamics simulation methods. The steady-state fluorescence spectra indicated that both PMT and ADP quench the fluorescence of Hb through a static quenching mechanism, which was further confirmed by time-resolved fluorescence spectra. UV-Vis spectroscopy suggested ground-state complex formation. The activation energy (Ea) was higher for the Hb-ADP than for the Hb-PMT interaction system. The FRET result indicates a high probability of energy transfer from the β-Trp37 residue of Hb to PMT (r = 2.02 nm) and ADP (r = 2.33 nm). The thermodynamic data reveal that the binding of PMT to Hb is exothermic in nature, involving hydrogen bonding and van der Waals interactions, whereas for ADP hydrophobic forces play the major role and the binding process is endothermic in nature. The CD results show that both PMT and ADP induce secondary structural changes in Hb and unfold the protein, with a large loss of helical content; the effect is more pronounced with ADP. Additionally, we utilized computational approaches for deeper insight into the binding of these drugs to Hb, and the results match our experimental findings well.
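
    For context, such quenching and thermodynamic analyses conventionally rest on two standard relations (the usual textbook forms, not equations reproduced from this paper): the Stern-Volmer equation for fluorescence quenching, and the van't Hoff equation, from which the signs of ΔH and ΔS indicate the dominant binding forces.

```latex
% Stern-Volmer relation for fluorescence quenching by a quencher at [Q]
\frac{F_0}{F} = 1 + K_{SV}[Q] = 1 + k_q \tau_0 [Q]

% van't Hoff analysis of the binding constant K over temperature T
\ln K = -\frac{\Delta H}{RT} + \frac{\Delta S}{R},
\qquad \Delta G = \Delta H - T\,\Delta S
```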

  16. Multimodality and Ambient Intelligence

    NARCIS (Netherlands)

    Nijholt, Antinus; Verhaegh, W.; Aarts, E.; Korst, J.

    2004-01-01

    In this chapter we discuss multimodal interface technology. We present examples of multimodal interfaces and show problems and opportunities. Fusion of modalities is discussed and some roadmap discussions on research in multimodality are summarized. This chapter also discusses future developments

  17. Experiencia de enseñanza multimodal en una clase de idiomas [Experience of multimodal teaching in a language classroom

    Directory of Open Access Journals (Sweden)

    María Martínez Lirola

    2013-12-01

    Full Text Available Our society is becoming increasingly technological and multimodal, so teaching needs to adapt to the times. This article analyses the way in which the subject English Language IV of the degree in English Studies at the University of Alicante combines the development of the five skills (listening, speaking, reading, writing and interacting), evaluated through a portfolio, with multimodality in teaching practices and in each of the activities that make up the portfolio. The results of a survey administered at the end of the 2011-2012 academic year point out the main competences that university students develop thanks to multimodal teaching, and the importance of tutorials in this kind of teaching.

  18. Visual tracking for multi-modality computer-assisted image guidance

    Science.gov (United States)

    Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp

    2017-03-01

    With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.

  19. Fourier domain asymmetric cryptosystem for privacy protected multimodal biometric security

    Science.gov (United States)

    Choudhury, Debesh

    2016-04-01

    We propose a Fourier domain asymmetric cryptosystem for multimodal biometric security. One modality of biometrics (such as face) is used as the plaintext, which is encrypted by another modality of biometrics (such as fingerprint). A private key is synthesized from the encrypted biometric signature by complex spatial Fourier processing. The encrypted biometric signature is further encrypted by other biometric modalities, and the corresponding private keys are synthesized. The resulting biometric signature is privacy protected, since the encryption keys are provided by the human and hence are private keys. Moreover, the decryption keys are synthesized using those private encryption keys. The encrypted signatures are decrypted using the synthesized private keys and inverse complex spatial Fourier processing. Computer simulations demonstrate the feasibility of the proposed technique.
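
    A heavily simplified sketch of Fourier-domain encryption with a biometric-derived phase key (a symmetric toy inspired by the description above, not the paper's asymmetric private-key synthesis): the plaintext image's spectrum is modulated by a phase mask derived from the key biometric, and decryption applies the conjugate phase. Image sizes and the key derivation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for two biometric images (face as plaintext, fingerprint as key).
face = rng.random((64, 64))
fingerprint = rng.random((64, 64))

# Derive a phase-only mask from the key biometric's Fourier phase.
key_phase = np.exp(1j * np.angle(np.fft.fft2(fingerprint)))

# Encrypt: modulate the plaintext spectrum with the key phase.
cipher = np.fft.ifft2(np.fft.fft2(face) * key_phase)

# Decrypt: apply the conjugate phase mask and invert the transform.
recovered = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(key_phase)).real

print("max reconstruction error:", np.max(np.abs(recovered - face)))
```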

  20. Multimodality Imaging of Heart Valve Disease

    International Nuclear Information System (INIS)

    Rajani, Ronak; Khattar, Rajdeep; Chiribiri, Amedeo; Victor, Kelly; Chambers, John

    2014-01-01

    Unidentified heart valve disease is associated with a significant morbidity and mortality. It has therefore become important to accurately identify, assess and monitor patients with this condition in order that appropriate and timely intervention can occur. Although echocardiography has emerged as the predominant imaging modality for this purpose, recent advances in cardiac magnetic resonance and cardiac computed tomography indicate that they may have an important contribution to make. The current review describes the assessment of regurgitant and stenotic heart valves by multimodality imaging (echocardiography, cardiac computed tomography and cardiac magnetic resonance) and discusses their relative strengths and weaknesses

  1. Multimodality Imaging of Heart Valve Disease

    Energy Technology Data Exchange (ETDEWEB)

    Rajani, Ronak, E-mail: Dr.R.Rajani@gmail.com [Department of Cardiology, St. Thomas’ Hospital, London (United Kingdom); Khattar, Rajdeep [Department of Cardiology, Royal Brompton Hospital, London (United Kingdom); Chiribiri, Amedeo [Divisions of Imaging Sciences, The Rayne Institute, St. Thomas' Hospital, London (United Kingdom); Victor, Kelly; Chambers, John [Department of Cardiology, St. Thomas’ Hospital, London (United Kingdom)

    2014-09-15

    Unidentified heart valve disease is associated with a significant morbidity and mortality. It has therefore become important to accurately identify, assess and monitor patients with this condition in order that appropriate and timely intervention can occur. Although echocardiography has emerged as the predominant imaging modality for this purpose, recent advances in cardiac magnetic resonance and cardiac computed tomography indicate that they may have an important contribution to make. The current review describes the assessment of regurgitant and stenotic heart valves by multimodality imaging (echocardiography, cardiac computed tomography and cardiac magnetic resonance) and discusses their relative strengths and weaknesses.

  2. BILINGUAL MULTIMODAL SYSTEM FOR TEXT-TO-AUDIOVISUAL SPEECH AND SIGN LANGUAGE SYNTHESIS

    Directory of Open Access Journals (Sweden)

    A. A. Karpov

    2014-09-01

    Full Text Available We present a conceptual model, architecture and software of a multimodal system for audio-visual speech and sign language synthesis from input text. The main components of the developed multimodal synthesis system (signing avatar) are: an automatic text processor for input text analysis; a simulated 3D model of the human head; a computer text-to-speech synthesizer; a system for audio-visual speech synthesis; a simulated 3D model of the human hands and upper body; and a multimodal user interface integrating all the components for the generation of audio, visual and signed speech. The proposed system performs automatic translation of input textual information into speech (audio information) and gestures (video information), fuses this information and outputs it as multimedia. A user can input any grammatically correct text in Russian or Czech to the system; it is analyzed by the text processor to detect sentences, words and characters. This textual information is then converted into symbols of the sign language notation. We apply the international Hamburg Notation System (HamNoSys), which describes the main differential features of each manual sign: hand shape, hand orientation, place and type of movement. On their basis the 3D signing avatar displays the elements of the sign language. The virtual 3D model of the human head and upper body has been created using the VRML virtual reality modeling language, and it is controlled by software based on the OpenGL graphics library. The developed multimodal synthesis system is universal, since it is oriented towards both regular users and disabled people (in particular, the hard-of-hearing and visually impaired), and it serves for multimedia output (by audio and visual modalities) of input textual information.

  3. Analysis and synthesis of multi-qubit, multi-mode quantum devices

    Energy Technology Data Exchange (ETDEWEB)

    Solgun, Firat

    2015-03-27

    In this thesis we propose new methods in multi-qubit multi-mode circuit quantum electrodynamics (circuit-QED) architectures. First we describe a direct parity measurement method for three qubits, which can be realized in 2D circuit-QED with a possible extension to four qubits in a 3D circuit-QED setup for the implementation of the surface code. In Chapter 3 we show how to derive Hamiltonians and compute relaxation rates of the multi-mode superconducting microwave circuits consisting of single Josephson junctions using an exact impedance synthesis technique (the Brune synthesis) and applying previous formalisms for lumped element circuit quantization. In the rest of the thesis we extend our method to multi-junction (multi-qubit) multi-mode circuits through the use of state-space descriptions which allows us to quantize any multiport microwave superconducting circuit with a reciprocal lossy impedance response.

  4. A new piezoelectric energy harvesting design concept: multimodal energy harvesting skin.

    Science.gov (United States)

    Lee, Soobum; Youn, Byeng D

    2011-03-01

    This paper presents an advanced design concept for piezoelectric energy harvesting (EH), referred to as a multimodal EH skin. This EH design facilitates the use of multimodal vibration and enhances power harvesting efficiency. The multimodal EH skin is an extension of our previous work, EH skin, which was an innovative design paradigm for a piezoelectric energy harvester: a vibrating skin structure and an additional thin piezoelectric layer in one device. A computational (finite element) model of the multilayered assembly - the vibrating skin structure and piezoelectric layer - is constructed, and the optimal topology and/or shape of the piezoelectric layer is found for maximum power generation from multiple vibration modes. A design rationale for the multimodal EH skin is proposed, covering the piezoelectric material distribution and the external resistors. In the material design step, the piezoelectric material is segmented along inflection lines of the vibration modes of interest, to minimize voltage cancellation; the inflection lines are detected using the voltage phase. In the external resistor design step, the resistor values are found for each segment to maximize power output. The presented design concept, which can be applied to any engineering system with multimodal harmonically vibrating skins, was applied to two case studies: an aircraft skin and a power transformer panel. The excellent performance of the multimodal EH skin was demonstrated, showing larger power generation than an EH skin without segmentation or a unimodal EH skin.
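
    The segmentation rule named above (split the piezoelectric layer at inflection lines of the mode shapes, where the strain changes sign and voltages would otherwise cancel) can be illustrated in one dimension: find the zero crossings of a mode shape's curvature. The deflection shape below is a generic stand-in, not the paper's skin structure.

```python
import numpy as np

# 1D stand-in for a vibrating skin: superpose two harmonic mode shapes.
x = np.linspace(0.0, 1.0, 1001)
w = np.sin(2 * np.pi * x) + 0.4 * np.sin(3 * np.pi * x)  # deflection shape

# Strain in a surface-bonded piezo layer is proportional to curvature w''(x).
curvature = np.gradient(np.gradient(w, x), x)

# Inflection points: sign changes of curvature -> segment boundaries,
# where separate electrodes avoid voltage cancellation.
sign_flips = np.where(np.diff(np.sign(curvature)) != 0)[0]
boundaries = x[sign_flips]
print("segment the piezo layer at x =", np.round(boundaries, 3))
```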

  5. The Properties of Intelligent Human-Machine Interface

    Directory of Open Access Journals (Sweden)

    Alexander Alfimtsev

    2012-04-01

    Full Text Available Intelligent human-machine interfaces based on multimodal interaction have been developed separately in different application areas. No unified opinion exists on what properties these interfaces should have to provide intuitive and natural interaction. Based on an analytical survey of papers dealing with intelligent interfaces, a set of properties is presented that an intelligent interface between an information system and a human should possess: absolute response, justification, training, personification, adaptiveness, collectivity, security, hidden persistence, portability, filtering.

  6. Interactive visualization and analysis of multimodal datasets for surgical applications.

    Science.gov (United States)

    Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James

    2012-12-01

    Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.

  7. When computers were human

    CERN Document Server

    Grier, David Alan

    2013-01-01

    Before Palm Pilots and iPods, PCs and laptops, the term "computer" referred to the people who did scientific calculations by hand. These workers were neither calculating geniuses nor idiot savants but knowledgeable people who, in other circumstances, might have become scientists in their own right. When Computers Were Human represents the first in-depth account of this little-known, 200-year epoch in the history of science and technology. Beginning with the story of his own grandmother, who was trained as a human computer, David Alan Grier provides a poignant introduction to the wider wo

  8. Safety Metrics for Human-Computer Controlled Systems

    Science.gov (United States)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis proposes a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model is developed, based upon modern systems theory and human cognitive processes, to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  9. Collaborative interactive visualization: exploratory concept

    Science.gov (United States)

    Mokhtari, Marielle; Lavigne, Valérie; Drolet, Frédéric

    2015-05-01

    Dealing with an ever increasing amount of data is a challenge that military intelligence analysts or teams of analysts face day to day. Increased individual and collective comprehension comes through collaboration between people: the better the collaboration, the better the comprehension. Nowadays, various technologies support and enhance collaboration by allowing people to connect and collaborate in settings as varied as mobile devices, networked computers, display walls, and tabletop surfaces, to name just a few. A powerful collaboration system includes traditional and multimodal visualization features to achieve effective human communication. Interactive visualization strengthens collaboration because this approach is conducive to incrementally building a mental assessment of the data's meaning. The purpose of this paper is to present an overview of the envisioned collaboration architecture and the interactive visualization concepts underlying the Sensemaking Support System prototype developed to support analysts in the context of the Joint Intelligence Collection and Analysis Capability project at DRDC Valcartier. It presents the current version of the architecture, discusses future capabilities to help analysts in the accomplishment of their tasks, and finally recommends collaboration and visualization technologies that allow going a step further, both as individuals and as a team.

  10. AirDraw: Leveraging Smart Watch Motion Sensors for Mobile Human Computer Interactions

    OpenAIRE

    Sajjadi, Seyed A; Moazen, Danial; Nahapetian, Ani

    2017-01-01

    Wearable computing is one of the fastest growing technologies today. Smart watches are poised to take over at least half of the wearable devices market in the near future. Smart watch screen size, however, is a limiting factor for growth, as it restricts practical text input. On the other hand, wearable devices have some features, such as consistent user interaction and hands-free, heads-up operations, which pave the way for gesture recognition methods of text entry. This paper proposes a new...

  11. Computer-based personality judgments are more accurate than those made by humans.

    Science.gov (United States)

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-27

    Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy.
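
    For readers who want to reproduce the accuracy measure itself: it is the Pearson correlation between a judge's ratings and the criterion, averaged across traits. A minimal sketch on synthetic stand-in data (all arrays, noise levels, and the helper judgment_accuracy below are illustrative assumptions, not the study's data or code):

        import numpy as np

        def judgment_accuracy(judge, criterion):
            # Mean Pearson r across the five trait columns.
            return float(np.mean([np.corrcoef(judge[:, t], criterion[:, t])[0, 1]
                                  for t in range(criterion.shape[1])]))

        rng = np.random.default_rng(0)
        truth = rng.normal(size=(1000, 5))                          # self-rated Big Five
        computer = truth + rng.normal(scale=0.8, size=truth.shape)  # less noisy judge
        friend = truth + rng.normal(scale=1.2, size=truth.shape)    # noisier judge
        print(judgment_accuracy(computer, truth))                   # higher r
        print(judgment_accuracy(friend, truth))                     # lower r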

  12. Computation as Medium

    DEFF Research Database (Denmark)

    Jochum, Elizabeth Ann; Putnam, Lance

    2017-01-01

    Artists increasingly utilize computational tools to generate art works. Computational approaches to art making open up new ways of thinking about agency in interactive art because they invite participation and allow for unpredictable outcomes. Computational art is closely linked to the participatory turn in visual art, wherein spectators physically participate in visual art works. Unlike purely physical methods of interaction, computer assisted interactivity affords artists and spectators more nuanced control of artistic outcomes. Interactive art brings together human bodies, computer code, and nonliving objects to create emergent art works. Computation is more than just a tool for artists; it is a medium for investigating new aesthetic possibilities for choreography and composition. We illustrate this potential through two artistic projects: an improvisational dance performance between a human...

  13. Ubiquitous human computing.

    Science.gov (United States)

    Zittrain, Jonathan

    2008-10-28

    Ubiquitous computing means network connectivity everywhere, linking devices and systems as small as a drawing pin and as large as a worldwide product distribution chain. What could happen when people are so readily networked? This paper explores issues arising from two possible emerging models of ubiquitous human computing: fungible networked brainpower and collective personal vital sign monitoring.

  14. Hi-Jack: a novel computational framework for pathway-based inference of host–pathogen interactions

    KAUST Repository

    Kleftogiannis, Dimitrios A.

    2015-03-09

    Motivation: Pathogens infect their host and hijack the host machinery to produce more progeny pathogens. Obligate intracellular pathogens, in particular, require resources of the host to replicate. Therefore, infections by these pathogens lead to alterations in the metabolism of the host, shifting in favor of pathogen protein production. Some computational methods for identifying mechanisms of host-pathogen interactions have been proposed, but the problem has yet to be approached from the metabolite-hijacking angle. Results: We propose a novel computational framework, Hi-Jack, for inferring pathway-based interactions between a host and a pathogen that relies on the idea of metabolite hijacking. Hi-Jack searches metabolic network data from hosts and pathogens, and identifies candidate reactions where hijacking occurs. A novel scoring function ranks candidate hijacked reactions and identifies pathways in the host that interact with pathways in the pathogen, as well as the associated frequently hijacked metabolites. We also describe host-pathogen interaction principles that can be used in the future for subsequent studies. Our case study on Mycobacterium tuberculosis (Mtb) revealed pathways in human (e.g. carbohydrate metabolism, lipid metabolism and pathways related to amino acid metabolism) that are likely to be hijacked by the pathogen. In addition, we report interesting potential pathway interconnections between human and Mtb, such as linkage of human fatty acid biosynthesis with Mtb biosynthesis of unsaturated fatty acids, or linkage of the human pentose phosphate pathway with lipopolysaccharide biosynthesis in Mtb. © The Author 2015. Published by Oxford University Press. All rights reserved.
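
    To make the metabolite-hijacking idea concrete, here is a toy, set-based sketch (not Hi-Jack's actual search or scoring function; all reaction and metabolite names are hypothetical): flag pathogen reactions that consume metabolites produced by host reactions.

        # Toy illustration of the hijacking idea on hypothetical reaction data.
        host_reactions = {
            "R_h1": {"produces": {"glucose-6-P"}, "consumes": {"glucose"}},
            "R_h2": {"produces": {"ribose-5-P"}, "consumes": {"glucose-6-P"}},
        }
        pathogen_reactions = {
            "R_p1": {"produces": {"LPS-precursor"}, "consumes": {"ribose-5-P"}},
        }

        def candidate_hijacks(host, pathogen):
            """Pairs (host reaction, pathogen reaction, shared metabolites)."""
            out = []
            for hr, h in host.items():
                for pr, p in pathogen.items():
                    shared = h["produces"] & p["consumes"]
                    if shared:
                        out.append((hr, pr, shared))
            return out

        print(candidate_hijacks(host_reactions, pathogen_reactions))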

  15. Redesign of a computerized clinical reminder for colorectal cancer screening: a human-computer interaction evaluation

    Directory of Open Access Journals (Sweden)

    Saleem Jason J

    2011-11-01

    Background: Based on barriers to the use of computerized clinical decision support (CDS) learned in an earlier field study, we prototyped design enhancements to the Veterans Health Administration's (VHA's) colorectal cancer (CRC) screening clinical reminder to compare against the VHA's current CRC reminder. Methods: In a controlled simulation experiment, 12 primary care providers (PCPs) used prototypes of the current and redesigned CRC screening reminder in a within-subject comparison. Quantitative measurements were based on a usability survey, workload assessment instrument, and workflow integration survey. We also collected qualitative data on both designs. Results: Design enhancements to the VHA's existing CRC screening clinical reminder positively impacted aspects of usability and workflow integration but not workload. The qualitative analysis revealed broad support across participants for the design enhancements, with specific suggestions for improving the reminder further. Conclusions: This study demonstrates the value of a human-computer interaction evaluation in informing the redesign of information tools to foster uptake, integration into workflow, and use in clinical practice.

  16. 3D Data Acquisition Platform for Human Activity Understanding

    Science.gov (United States)

    2016-03-02

    In this project, we incorporated motion capture devices, 3D vision sensors, and EMG sensors to cross-validate multimodality data acquisition, and addressed fundamental research problems of representation and invariant description of 3D data, human motion modeling, applications of human activity analysis, and computational optimization of large-scale 3D data.

  17. Nanocrystal core high-density lipoproteins: a multimodality contrast agent platform

    NARCIS (Netherlands)

    Cormode, David P.; Skajaa, Torjus; van Schooneveld, Matti M.; Koole, Rolf; Jarzyna, Peter; Lobatto, Mark E.; Calcagno, Claudia; Barazza, Alessandra; Gordon, Ronald E.; Zanzonico, Pat; Fisher, Edward A.; Fayad, Zahi A.; Mulder, Willem J. M.

    2008-01-01

    High density lipoprotein (HDL) is an important natural nanoparticle that may be modified for biomedical imaging purposes. Here we developed a novel technique to create unique multimodality HDL mimicking nanoparticles by incorporation of gold, iron oxide, or quantum dot nanocrystals for computed tomography, magnetic resonance imaging, and fluorescence imaging, respectively.

  18. Reading Multimodal Texts for Learning – a Model for Cultivating Multimodal Literacy

    Directory of Open Access Journals (Sweden)

    Kristina Danielsson

    2016-08-01

    The re-conceptualisation of texts over the last 20 years, as well as the development of a multimodal understanding of communication and representation of knowledge, has profound consequences for the reading and understanding of multimodal texts, not least in educational contexts. However, if teachers and students are given tools to "unwrap" multimodal texts, they can develop a deeper understanding of texts, information structures, and the textual organisation of knowledge. This article presents a model for working with multimodal texts in education, with the intention to highlight mutual multimodal text analysis in relation to the subject content. Examples are taken from a Singaporean science textbook as well as a Chilean science textbook, in order to demonstrate that the framework is versatile and applicable across different cultural contexts. The model takes into account the following aspects of texts: the general structure, how different semiotic resources operate, the ways in which different resources are combined (including coherence), the use of figurative language, and explicit/implicit values. Since learning operates on different dimensions – such as social and affective dimensions besides the cognitive ones – our inclusion of figurative language and values as components for textual analysis is a contribution to multimodal text analysis for learning.

  19. Frequency tripling with multimode-lasers

    International Nuclear Information System (INIS)

    Langer, H.; Roehr, H.; Wrobel, W.G.

    1978-10-01

    The presence of different modes with random phases in a laser beam leads to fluctuations in nonlinear optical interactions. This paper describes the influence of the linewidth of a dye laser on the generation of intensive Lyman-alpha radiation by frequency tripling. Using this Lyman-alpha source for resonance scattering on strongly Doppler-broadened lines in fusion plasmas, the detection limit of neutral hydrogen is nearly two orders of magnitude higher with the multimode than with the single-mode dye laser. (orig.) [de]

  20. Ontology for assessment studies of human-computer-interaction in surgery.

    Science.gov (United States)

    Machno, Andrej; Jannin, Pierre; Dameron, Olivier; Korb, Werner; Scheuermann, Gerik; Meixensberger, Jürgen

    2015-02-01

    New technologies improve modern medicine, but may result in unwanted consequences. Some occur due to inadequate human-computer interactions (HCI). To assess these consequences, an investigation model was developed to facilitate the planning, implementation and documentation of studies for HCI in surgery. The investigation model was formalized in Unified Modeling Language and implemented as an ontology. Four different top-level ontologies were compared according to the three major requirements of the investigation model (the domain-specific view, the experimental scenario and the representation of fundamental relations): Object-Centered High-level Reference, Basic Formal Ontology, General Formal Ontology (GFO) and Descriptive Ontology for Linguistic and Cognitive Engineering. Furthermore, this article emphasizes the distinction between an "information model" and a "model of meaning" and shows the advantages of implementing the model in an ontology rather than in a database. The results of the comparison show that GFO fits the defined requirements adequately: the domain-specific view and the fundamental relations can be implemented directly; only the representation of the experimental scenario requires minor extensions. The other candidates require wide-ranging extensions concerning at least one of the major implementation requirements. Therefore, GFO was selected to realize an appropriate implementation of the developed investigation model. The ensuing development considered the concrete implementation of further model aspects and entities: sub-domains, space and time, processes, properties, relations and functions. The investigation model and its ontological implementation provide a modular guideline for study planning, implementation and documentation within the area of HCI research in surgery. This guideline helps to navigate through the whole study process as a kind of standard or good clinical practice, based on the involved foundational frameworks.

  1. Development of a hardware-based registration system for the multimodal medical images by USB cameras

    International Nuclear Information System (INIS)

    Iwata, Michiaki; Minato, Kotaro; Watabe, Hiroshi; Koshino, Kazuhiro; Yamamoto, Akihide; Iida, Hidehiro

    2009-01-01

    There are several medical imaging scanners, and each modality offers a different view of the inside of the human body. By combining these images, diagnostic accuracy can be improved, and therefore several attempts at multimodal image registration have been implemented. One popular approach is to use hybrid image scanners such as positron emission tomography (PET)/CT and single photon emission computed tomography (SPECT)/CT. However, these hybrid scanners are expensive and not widely available. We developed a multimodal image registration system with universal serial bus (USB) cameras, which is inexpensive and applicable to any combination of existing conventional imaging scanners. The multiple USB cameras determine the three-dimensional position of a patient while scanning. Using these positions and a rigid body transformation, the acquired image is registered to a common coordinate system shared with the other scanner. For each scanner, a reference marker is attached to the gantry. Because the USB cameras observe the reference marker's position, the cameras themselves can be placed arbitrarily. In order to validate the system, we scanned a cardiac phantom in different positions with PET and MRI scanners. Using this system, images from PET and MRI were visually aligned, and good correlations between PET and MRI images were obtained after the registration. The results suggest this system can be used inexpensively for multimodal image registration. (author)
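
    The rigid-body step of such a system can be sketched with the Kabsch algorithm: given corresponding 3D marker positions expressed in each scanner's frame, recover the rotation R and translation t that map one frame onto the other. The marker coordinates below are illustrative, and the abstract does not state which estimator the authors used:

        import numpy as np

        def rigid_transform(P, Q):
            """Least-squares R, t such that R @ P_i + t ~= Q_i (Kabsch)."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, cQ - R @ cP

        P = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])  # markers, frame A
        theta = np.deg2rad(30)
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                           [np.sin(theta),  np.cos(theta), 0], [0, 0, 1]])
        Q = P @ R_true.T + np.array([10., -2, 5])     # same markers seen in frame B
        R, t = rigid_transform(P, Q)
        print(np.allclose(R @ P.T + t[:, None], Q.T))  # -> True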

  2. A Human-Robot Interaction Perspective on Assistive and Rehabilitation Robotics.

    Science.gov (United States)

    Beckerle, Philipp; Salvietti, Gionata; Unal, Ramazan; Prattichizzo, Domenico; Rossi, Simone; Castellini, Claudio; Hirche, Sandra; Endo, Satoshi; Amor, Heni Ben; Ciocarlie, Matei; Mastrogiovanni, Fulvio; Argall, Brenna D; Bianchi, Matteo

    2017-01-01

    Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of the devices become more sophisticated due to a human in the loop. This paper gives a human-robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options to provide sensory user feedback that are currently missing from robotic devices are outlined. Parallels between device acceptance and affective computing are made. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges related to the close interaction between the human and robotic device. This paper discusses the aforementioned aspects in order to open up new perspectives for future robotic solutions.

  3. Multimodal Hyper-connectivity Networks for MCI Classification.

    Science.gov (United States)

    Li, Yang; Gao, Xinqiang; Jie, Biao; Yap, Pew-Thian; Kim, Min-Jeong; Wee, Chong-Yaw; Shen, Dinggang

    2017-09-01

    A hyper-connectivity network is a network in which an edge can connect more than two nodes, and so can be naturally denoted by a hyper-graph. Hyper-connectivity brain networks, based on either structural or functional interactions among brain regions, have been used for brain disease diagnosis. However, the conventional hyper-connectivity network is constructed solely from single modality data, ignoring potential complementary information conveyed by other modalities. The integration of complementary information from multiple modalities has been shown to provide a more comprehensive representation of brain disruptions. In this paper, a novel multimodal hyper-network modelling method is proposed for improving the diagnostic accuracy of mild cognitive impairment (MCI). Specifically, we first constructed a multimodal hyper-connectivity network by simultaneously considering information from diffusion tensor imaging and resting-state functional magnetic resonance imaging data. We then extracted different types of network features from the hyper-connectivity network, and further exploited a manifold-regularized multi-task feature selection method to jointly select the most discriminative features. Our proposed multimodal hyper-connectivity network demonstrated better MCI classification performance than conventional single-modality hyper-connectivity networks.
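
    A common recipe in this literature for building hyperedges (a sketch of the general approach, not necessarily the authors' exact pipeline) is sparse regression: each region's time series is regressed on all other regions with a lasso, and the seed region plus the regions with nonzero weights form one hyperedge. Synthetic data stand in for real fMRI:

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 10))                 # 120 time points, 10 ROIs
        X[:, 0] = 0.7 * X[:, 1] + 0.7 * X[:, 2] + 0.1 * X[:, 0]  # plant a dependency

        def hyperedges(X, alpha=0.1):
            edges = []
            for j in range(X.shape[1]):
                others = np.delete(np.arange(X.shape[1]), j)
                w = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
                edges.append({j, *others[np.abs(w) > 1e-8]})  # seed + selected ROIs
            return edges

        print(hyperedges(X))                           # edge for ROI 0 includes 1 and 2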

  4. The human interactome knowledge base (hint-kb): An integrative human protein interaction database enriched with predicted protein–protein interaction scores using a novel hybrid technique

    KAUST Repository

    Theofilatos, Konstantinos A.

    2013-07-12

    Proteins are the functional components of many cellular processes and the identification of their physical protein–protein interactions (PPIs) is an area of mature academic research. Various databases have been developed containing information about experimentally and computationally detected human PPIs as well as their corresponding annotation data. However, these databases contain many false positive interactions, are partial, and only a few of them incorporate data from various sources. To overcome these limitations, we have developed HINT-KB (http://biotools.ceid.upatras.gr/hint-kb/), a knowledge base that integrates data from various sources, provides a user-friendly interface for their retrieval, calculates a set of features of interest, and computes a confidence score for every candidate protein interaction. This confidence score is essential for filtering the false positive interactions which are present in existing databases, predicting new protein interactions, and measuring the frequency of each true protein interaction. For this reason, a novel machine learning hybrid methodology, called EvoKalMaModel (Evolutionary Kalman Mathematical Modelling), was used to achieve an accurate and interpretable scoring methodology. The experimental results indicated that the proposed scoring scheme outperforms existing computational methods for the prediction of PPIs.

  5. Extending NGOMSL Model for Human-Humanoid Robot Interaction in the Soccer Robotics Domain

    Directory of Open Access Journals (Sweden)

    Rajesh Elara Mohan

    2008-01-01

    In the field of human-computer interaction, the Natural Goals, Operators, Methods, and Selection rules Language (NGOMSL) model is one of the most popular methods for modelling knowledge and cognitive processes for rapid usability evaluation. The NGOMSL model is a description of the knowledge that a user must possess to operate the system, represented as elementary actions for effective usability evaluations. In the last few years, mobile robots have been exhibiting a stronger presence in commercial markets, and very little work has been done with NGOMSL modelling for usability evaluations in the human-robot interaction discipline. This paper focuses on extending the NGOMSL model for usability evaluation of human-humanoid robot interaction in the soccer robotics domain. The NGOMSL-modelled human-humanoid interaction design of Robo-Erectus Junior was evaluated, and the results of the experiments showed that the interaction design was able to find faults in an average time of 23.84 s. Also, the interaction design was able to detect the fault within 60 s in 100% of the cases. The evaluated interaction design was adopted by our Robo-Erectus Junior version of humanoid robots in the RoboCup 2007 humanoid soccer league.

  6. After-effects of human-computer interaction indicated by P300 of the event-related brain potential.

    Science.gov (United States)

    Trimmel, M; Huber, R

    1998-05-01

    After-effects of human-computer interaction (HCI) were investigated by using the P300 component of the event-related brain potential (ERP). Forty-nine subjects (naive non-users, beginners, experienced users, programmers) completed three paper/pencil tasks (text editing, solving intelligence test items, filling out a questionnaire on sensation seeking) and three HCI tasks (text editing, executing a tutor program or programming, playing Tetris). The sequence of 7-min tasks was randomized between subjects and balanced between groups. After each experimental condition ERPs were recorded during an acoustic discrimination task at F3, F4, Cz, P3 and P4. Data indicate that: (1) mental after-effects of HCI can be detected by P300 of the ERP; (2) HCI showed in general a reduced amplitude; (3) P300 amplitude varied also with type of task, mainly at F4 where it was smaller after cognitive tasks (intelligence test/programming) and larger after emotion-based tasks (sensation seeking/Tetris); (4) cognitive tasks showed shorter latencies; (5) latencies were widely location-independent (within the range of 356-358 ms at F3, F4, P3 and P4) after executing the tutor program or programming; and (6) all observed after-effects were independent of the user's experience in operating computers and may therefore reflect short-term after-effects only and no structural changes of information processing caused by HCI.
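
    For readers unfamiliar with the measures: P300 amplitude and latency are conventionally read off as the maximum of the trial-averaged ERP within a post-stimulus window. A minimal sketch on synthetic epochs (the 250-500 ms window, sampling rate, and signal parameters are generic assumptions, not the paper's exact analysis):

        import numpy as np

        fs = 250                                    # sampling rate, Hz (assumed)
        t = np.arange(-0.2, 0.8, 1 / fs)            # epoch time axis, s
        epochs = 5e-6 * np.random.default_rng(2).normal(size=(40, t.size))
        epochs += 10e-6 * np.exp(-((t - 0.35) ** 2) / 0.005)  # injected P300-like bump

        erp = epochs.mean(axis=0)                   # average over trials
        win = (t >= 0.25) & (t <= 0.50)             # P300 search window
        i = np.argmax(erp[win])
        print("amplitude (uV):", erp[win][i] * 1e6, "latency (ms):", t[win][i] * 1e3)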

  7. Human Computing and Machine Understanding of Human Behavior: A Survey

    NARCIS (Netherlands)

    Pantic, Maja; Pentland, Alex; Nijholt, Antinus; Huang, Thomas; Quek, F.; Yang, Yie

    2006-01-01

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next generation computing, which we will call human computing, should

  8. Simulating human behavior for national security human interactions.

    Energy Technology Data Exchange (ETDEWEB)

    Bernard, Michael Lewis; Hart, Dereck H.; Verzi, Stephen J.; Glickman, Matthew R.; Wolfenbarger, Paul R.; Xavier, Patrick Gordon

    2007-01-01

    This 3-year research and development effort focused on what we believe is a significant technical gap in existing modeling and simulation capabilities: the representation of plausible human cognition and behaviors within a dynamic, simulated environment. Specifically, the intent of the ''Simulating Human Behavior for National Security Human Interactions'' project was to demonstrate an initial simulated human modeling capability that realistically represents intra- and inter-group interaction behaviors between simulated humans and human-controlled avatars as they respond to their environment. Significant progress was made towards simulating human behaviors through the development of a framework that produces realistic characteristics and movement. The simulated humans were created from models designed to be psychologically plausible by being based on robust psychological research and theory. Progress was also made towards enhancing Sandia National Laboratories' existing cognitive models to support culturally plausible behaviors that are important in representing group interactions. These models were implemented in the modular, interoperable, and commercially supported Umbra® simulation framework.

  9. Multimodal imaging of lung cancer and its microenvironment (Conference Presentation)

    Science.gov (United States)

    Hariri, Lida P.; Niederst, Matthew J.; Mulvey, Hillary; Adams, David C.; Hu, Haichuan; Chico Calero, Isabel; Szabari, Margit V.; Vakoc, Benjamin J.; Hasan, Tayyaba; Bouma, Brett E.; Engelman, Jeffrey A.; Suter, Melissa J.

    2016-03-01

    Despite significant advances in targeted therapies for lung cancer, nearly all patients develop drug resistance within 6-12 months and prognosis remains poor. Developing drug resistance is a progressive process that involves tumor cells and their microenvironment. We hypothesize that microenvironment factors alter tumor growth and response to targeted therapy. We conducted in vitro studies in human EGFR-mutant lung carcinoma cells, and demonstrated that factors secreted from lung fibroblasts result in increased tumor cell survival during targeted therapy with the EGFR inhibitor gefitinib. We also demonstrated that increased environmental stiffness results in increased tumor survival during gefitinib therapy. In order to test our hypothesis in vivo, we developed a multimodal optical imaging protocol for preclinical intravital imaging in mouse models to assess the tumor and its microenvironment over time. We have successfully conducted multimodal imaging of dorsal skinfold chamber (DSC) window mice implanted with GFP-labeled human EGFR-mutant lung carcinoma cells and visualized changes in tumor development and microenvironment facets over time. Multimodal imaging included structural OCT to assess tumor viability and necrosis, polarization-sensitive OCT to measure tissue birefringence for collagen/fibroblast detection, and Doppler OCT to assess tumor vasculature. Confocal imaging was also performed for high-resolution visualization of EGFR-mutant lung cancer cells labeled with GFP, and was coregistered with OCT. Our results demonstrated that stromal support and vascular growth are essential to tumor progression. Multimodal imaging is a useful tool to assess the tumor and its microenvironment over time.

  10. Multimodal Counseling Interventions: Effect on Human Papilloma Virus Vaccination Acceptance.

    Science.gov (United States)

    Nwanodi, Oroma; Salisbury, Helen; Bay, Curtis

    2017-11-06

    Human papilloma virus (HPV) vaccine was developed to reduce HPV-attributable cancers, external genital warts (EGW), and recurrent respiratory papillomatosis. Adolescent HPV vaccination series completion rates are less than 40% in the United States of America, but up to 80% in Australia and the United Kingdom. Population-based herd immunity requires 80% or greater vaccination series completion rates. Pro-vaccination counseling facilitates increased vaccination rates. Multimodal counseling interventions may increase HPV vaccination series non-completers' HPV-attributable disease knowledge and HPV-attributable disease prophylaxis (vaccination) acceptance over a brief 14-sentence counseling intervention. An online, 4-group, randomized controlled trial, with 260 or more participants per group, found that parents were more likely to accept HPV vaccination offers for their children than were childless young adults for themselves (68.2% and 52.9%). A combined audiovisual and patient health education handout (PHEH) intervention raised knowledge of HPV vaccination purpose (p = 0.02) and HPV vaccination acceptance for seven items, while the PHEH intervention alone raised HPV vaccination acceptance for five items. That HPV causes EGW, and that HPV vaccination prevents HPV-attributable diseases, were better conveyed by the combined audiovisual and PHEH intervention than by the control 14-sentence counseling intervention alone.

  11. Multimode optical dermoscopy (SkinSpect) analysis for skin with melanocytic nevus

    Science.gov (United States)

    Vasefi, Fartash; MacKinnon, Nicholas; Saager, Rolf; Kelly, Kristen M.; Maly, Tyler; Chave, Robert; Booth, Nicholas; Durkin, Anthony J.; Farkas, Daniel L.

    2016-04-01

    We have developed a multimode dermoscope (SkinSpect™) capable of illuminating human skin samples in-vivo with spectrally-programmable linearly-polarized light at 33 wavelengths between 468nm and 857 nm. Diffusely reflected photons are separated into collinear and cross-polarized image paths and images captured for each illumination wavelength. In vivo human skin nevi (N = 20) were evaluated with the multimode dermoscope and melanin and hemoglobin concentrations were compared with Spatially Modulated Quantitative Spectroscopy (SMoQS) measurements. Both systems show low correlation between their melanin and hemoglobin concentrations, demonstrating the ability of the SkinSpect™ to separate these molecular signatures and thus act as a biologically plausible device capable of early onset melanoma detection.

  12. AN INTELLIGENT CONVERSATION AGENT FOR HEALTH CARE DOMAIN

    Directory of Open Access Journals (Sweden)

    K. Karpagam

    2014-04-01

    Human-computer interaction is one of the pervasive application areas of computer science, increasingly developed with multimodal interaction for information sharing. The conversation agent acts as the core component for developing interfaces between a system and its user, applying AI to generate appropriate responses. In this paper, the interactive system plays a vital role in improving knowledge in the health domain through an intelligent interface between machine and human, using text and speech. The primary aim is to enrich the user's knowledge and help them in the health domain, using a conversation agent that offers immediate responses with the feel of a human companion.

  13. Multiscale climate emulator of multimodal wave spectra: MUSCLE-spectra

    Science.gov (United States)

    Rueda, Ana; Hegermiller, Christie A.; Antolinez, Jose A. A.; Camus, Paula; Vitousek, Sean; Ruggiero, Peter; Barnard, Patrick L.; Erikson, Li H.; Tomás, Antonio; Mendez, Fernando J.

    2017-02-01

    Characterization of multimodal directional wave spectra is important for many offshore and coastal applications, such as marine forecasting, coastal hazard assessment, and design of offshore wave energy farms and coastal structures. However, the multivariate and multiscale nature of wave climate variability makes this complex problem tractable only with computationally expensive numerical models. So far, the skill of statistical downscaling models based on parametric (unimodal) wave conditions is limited in large ocean basins such as the Pacific. The recent availability of long-term directional spectral data from buoys and wave hindcast models allows for development of stochastic models that include multimodal sea-state parameters. This work introduces a statistical downscaling framework based on weather types to predict multimodal wave spectra (e.g., significant wave height, mean wave period, and mean wave direction from different storm systems, including seas and swells) from large-scale atmospheric pressure fields. For each weather type, variables of interest are modeled using the categorical distribution for the sea-state type, the Generalized Extreme Value (GEV) distribution for wave height and wave period, a multivariate Gaussian copula for the interdependence between variables, and a Markov chain model for the chronology of daily weather types. We apply the model to the southern California coast, where local seas and swells from both the Northern and Southern Hemispheres contribute to the multimodal wave spectrum. This work allows attribution of particular extreme multimodal wave events to specific atmospheric conditions, expanding knowledge of time-dependent, climate-driven offshore and coastal sea-state conditions that have a significant influence on local nearshore processes, coastal morphology, and flood hazards.
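
    As an illustration of one ingredient of the framework, the GEV fit for a sea-state variable within a single weather type can be sketched with scipy on synthetic data (note that scipy's genextreme uses a shape parameter c = -ξ relative to the common GEV convention; all values below are illustrative):

        import numpy as np
        from scipy.stats import genextreme

        # Synthetic significant-wave-height sample for one weather type.
        hs = genextreme.rvs(c=-0.1, loc=2.0, scale=0.5, size=500, random_state=3)
        c, loc, scale = genextreme.fit(hs)          # maximum-likelihood GEV fit
        hs_q = genextreme.ppf(1 - 1 / 100, c, loc, scale)  # 1-in-100-event quantile
        print("fitted shape/loc/scale:", c, loc, scale)
        print("100-event quantile:", hs_q)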

  14. Critical Analysis of Multimodal Discourse

    DEFF Research Database (Denmark)

    van Leeuwen, Theo

    2013-01-01

    This is an encyclopaedia article which defines the fields of critical discourse analysis and multimodality studies, argues that within critical discourse analysis more attention should be paid to multimodality, and within multimodality to critical analysis, and ends by reviewing a few examples of re...

  15. A Single Camera Motion Capture System for Human-Computer Interaction

    Science.gov (United States)

    Okada, Ryuzo; Stenger, Björn

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e. the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real-time on a Cell Broadband Engine™: a computer game and a virtual clothing application.
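
    The tree-based filtering can be sketched as a best-first descent: at each level only the best-scoring cluster representatives are expanded, so the likelihood is evaluated for far fewer poses than an exhaustive search. The stub likelihood, beam width, and two-level tree below are illustrative assumptions, not the paper's implementation:

        import numpy as np

        def likelihood(pose, silhouette):
            return -np.sum((pose - silhouette) ** 2)   # stand-in for silhouette matching

        def tree_search(node, silhouette, beam=2):
            """node = (representative_pose, children); returns (best pose, score)."""
            pose, children = node
            if not children:
                return pose, likelihood(pose, silhouette)
            scored = sorted(children, key=lambda n: -likelihood(n[0], silhouette))
            return max((tree_search(c, silhouette, beam) for c in scored[:beam]),
                       key=lambda r: r[1])

        leaves = [(np.array([x, x + 1.0]), []) for x in (0.0, 1.0, 2.0, 3.0)]
        root = (np.array([1.5, 2.5]), [(np.array([0.5, 1.5]), leaves[:2]),
                                       (np.array([2.5, 3.5]), leaves[2:])])
        print(tree_search(root, np.array([2.1, 3.0]))[0])   # nearest leaf pose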

  16. 2012 International Conference on Human-centric Computing

    CERN Document Server

    Jin, Qun; Yeo, Martin; Hu, Bin; Human Centric Technology and Service in Smart Space, HumanCom 2012

    2012-01-01

    The theme of HumanCom is focused on the various aspects of human-centric computing for advances in computer science and its applications and provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of human-centric computing. In addition, the conference will publish high quality papers which are closely related to the various theories and practical applications in human-centric computing. Furthermore, we expect that the conference and its publications will be a trigger for further related research and technology improvements in this important subject.

  17. The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    Directory of Open Access Journals (Sweden)

    Wojtek James eGoscinski

    2014-03-01

    The Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) is a national imaging and visualisation facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organisation (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research.

  18. Why E-Business Must Evolve beyond Market Orientation: Applying Human Interaction Models to Computer-Mediated Corporate Communications.

    Science.gov (United States)

    Johnston, Kevin McCullough

    2001-01-01

    Considers the design of corporate communications for electronic business and discusses the increasing importance of corporate interaction as companies work in virtual environments. Compares sociological and psychological theories of human interaction and relationship formation with organizational interaction theories of corporate relationship…

  19. Toward an infrastructure for data-driven multimodal communication research

    DEFF Research Database (Denmark)

    Steen, Francis F.; Hougaard, Anders; Joo, Jungseock

    2018-01-01

    Research into the multimodal dimensions of human communication faces a set of distinctive methodological challenges. Collecting the datasets is resource-intensive, analysis often lacks peer validation, and the absence of shared datasets makes it difficult to develop standards. External validity...

  20. Quantifying human-environment interactions using videography in the context of infectious disease transmission.

    Science.gov (United States)

    Julian, Timothy R; Bustos, Carla; Kwong, Laura H; Badilla, Alejandro D; Lee, Julia; Bischel, Heather N; Canales, Robert A

    2018-05-08

    Quantitative data on human-environment interactions are needed to fully understand infectious disease transmission processes and conduct accurate risk assessments. Interaction events occur during an individual's movement through, and contact with, the environment, and can be quantified using diverse methodologies. Methods that utilize videography, coupled with specialized software, can provide a permanent record of events, collect detailed interactions in high resolution, be reviewed for accuracy, capture events difficult to observe in real-time, and gather multiple concurrent phenomena. In the accompanying video, the use of specialized software to capture human-environment interactions for human exposure and disease transmission is highlighted. Use of videography, combined with specialized software, allows for the collection of accurate quantitative representations of human-environment interactions in high resolution. Two specialized programs include the Virtual Timing Device for the Personal Computer, which collects sequential microlevel activity time series of contact events and interactions, and LiveTrak, which is optimized to facilitate annotation of events in real-time. Opportunities to annotate behaviors at high resolution using these tools are promising, permitting detailed records that can be summarized to gain information on infectious disease transmission and incorporated into more complex models of human exposure and risk.

  1. The Self-Organization of Human Interaction

    DEFF Research Database (Denmark)

    Dale, Rick; Fusaroli, Riccardo; Duran, Nicholas

    2013-01-01

    We describe a “centipede’s dilemma” that faces the sciences of human interaction. Research on human interaction has been involved in extensive theoretical debate, although the vast majority of research tends to focus on a small set of human behaviors, cognitive processes, and interactive contexts...

  2. Facilitating Multiple Intelligences Through Multimodal Learning Analytics

    Directory of Open Access Journals (Sweden)

    Ayesha PERVEEN

    2018-01-01

    This paper develops a theoretical framework for employing learning analytics in online education to trace multiple learning variations of online students by considering their potential as multiple intelligences, based on Howard Gardner's 1983 theory of multiple intelligences. The study first emphasizes the need for online education systems to facilitate students as multiple intelligences and then suggests a framework for the advanced form of learning analytics, i.e., multimodal learning analytics, for tracing and facilitating multiple intelligences while students are engaged in online ubiquitous learning. As multimodal learning analytics is still an evolving area, it poses many challenges for technologists, educationists, and organizational managers. Learning analytics make machines meet humans; therefore, educationists with expertise in learning theories can help technologists devise the latest technological methods for multimodal learning analytics, and organizational managers can implement them for the improvement of online education. Therefore, a careful instructional design, based on a deep understanding of students' learning abilities, is required to develop teaching plans and technological possibilities for monitoring students' learning paths. This is how learning analytics can help create an adaptive instructional design based on a quick analysis of the data gathered. Based on that analysis, academicians can critically reflect upon the quick or delayed implementation of the existing instructional design based on students' cognitive abilities, or even about single or double loop learning design. The researcher concludes that online education is multimodal in nature, has the capacity to endorse multiliteracies and, therefore, multiple intelligences can be tracked and facilitated through multimodal learning analytics in an online mode. However, online teachers' training both in technological implementations and

  4. Computing the influences of different Intraocular Pressures on the human eye components using computational fluid-structure interaction model.

    Science.gov (United States)

    Karimi, Alireza; Razaghi, Reza; Navidbakhsh, Mahdi; Sera, Toshihiro; Kudo, Susumu

    2017-01-01

    Intraocular Pressure (IOP) is defined as the pressure of the aqueous humor in the eye. It has been reported that the normal range of IOP is 10-20 mmHg, with an average of 15.50 mmHg, as reported among ophthalmologists. Keratoconus is a non-inflammatory eye disorder in which the weakened cornea is unable to preserve its normal structure against the IOP in the eye. Consequently, the cornea bulges outward and assumes a conical shape, followed by distorted vision. In addition, it is known that any alterations in the structure and composition of the lens and cornea induce changes in the eyeball as well as in the mechanical and optical properties of the eye. Understanding the precise alteration of the eye components' stresses and deformations due to different IOPs could help elucidate etiology and pathogenesis and so develop treatments not only for keratoconus but also for other diseases of the eye. In this study, at three different IOPs, namely 10, 20, and 30 mmHg, the stresses and deformations of the human eye components were quantified using a three-dimensional (3D) computational Fluid-Structure Interaction (FSI) model of the human eye. The results revealed the highest von Mises stress in the bulged region of the cornea, 245 kPa at the IOP of 30 mmHg. The lens also showed a von Mises stress of 19.38 kPa at the IOP of 30 mmHg. In addition, by increasing the IOP from 10 to 30 mmHg, the radius of curvature of the cornea and lens increased accordingly. In contrast, the sclera showed its highest stress at the IOP of 10 mmHg due to an overpressure phenomenon. The variation of IOP had little influence on the stress and resultant displacement of the optic nerve. These results can be used for understanding the stresses and deformations in the human eye components due to different IOPs, as well as for clarifying the significant role of IOP in the radius of curvature of the cornea and the lens.
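
    A thin-shell Laplace-law estimate offers a back-of-envelope cross-check (this is not the paper's FSI model; the corneal radius and thickness are typical textbook values): average wall stress scales linearly with IOP, while the far higher 245 kPa peak reflects local stress concentration in the bulged region.

        # Hoop stress in a pressurized thin spherical shell: sigma = P * r / (2 * t).
        def corneal_hoop_stress(iop_mmhg, radius_m=7.8e-3, thickness_m=0.55e-3):
            p = iop_mmhg * 133.322                  # mmHg -> Pa
            return p * radius_m / (2 * thickness_m)

        for iop in (10, 20, 30):
            print(iop, "mmHg ->", round(corneal_hoop_stress(iop) / 1e3, 1), "kPa")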

  5. Study of internalization and viability of multimodal nanoparticles for labeling of human umbilical cord mesenchymal stem cells; Estudo de internalizacao e viabilidade de nanoparticulas multimodal para marcacao de celulas-tronco mesenquimais de cordao umbilical humano

    Energy Technology Data Exchange (ETDEWEB)

    Miyaki, Liza Aya Mabuchi [Faculdade de Enfermagem, Hospital Israelita Albert Einstein - HIAE, Sao Paulo, SP (Brazil); Sibov, Tatiana Tais; Pavon, Lorena Favaro; Mamani, Javier Bustamante; Gamarra, Lionel Fernel, E-mail: tatianats@einstein.br [Instituto do Cerebro - InCe, Hospital Israelita Albert Einstein - HIAE, Sao Paulo, SP (Brazil)

    2012-04-15

    Objective: To analyze multimodal magnetic nanoparticles-Rhodamine B in culture media for cell labeling, and to establish a study of multimodal magnetic nanoparticles-Rhodamine B detection in labeled cells, evaluating their viability at concentrations of 10 µg Fe/mL and 100 µg Fe/mL. Methods: We performed the analysis of the stability of multimodal magnetic nanoparticles-Rhodamine B in different culture media; the labeling of mesenchymal stem cells with multimodal magnetic nanoparticles-Rhodamine B; the intracellular detection of multimodal magnetic nanoparticles-Rhodamine B in mesenchymal stem cells; and the assessment of the viability of labeled cells by proliferation kinetics. Results: The stability analysis showed that multimodal magnetic nanoparticles-Rhodamine B had good stability in cultured Dulbecco's Modified Eagle's-Low Glucose medium and RPMI 1640 medium. Labeling of mesenchymal stem cells with multimodal magnetic nanoparticles-Rhodamine B revealed the location of the intracellular nanoparticles, which were shown as blue granules co-localized with fluorescent clusters, thus characterizing the magnetic and fluorescent properties of multimodal magnetic nanoparticles-Rhodamine B. Conclusion: The stability of multimodal magnetic nanoparticles-Rhodamine B found in cultured Dulbecco's Modified Eagle's-Low Glucose medium and RPMI 1640 medium ensured intracellular labeling of mesenchymal stem cells. This labeling did not affect the viability of the labeled mesenchymal stem cells, since they continued to proliferate for five days. (author)

  6. The Development of an Interactive Computer-Based Training Program for Timely and Humane On-Farm Pig Euthanasia.

    Science.gov (United States)

    Mullins, Caitlyn R; Pairis-Garcia, Monique D; Campler, Magnus R; Anthony, Raymond; Johnson, Anna K; Coleman, Grahame J; Rault, Jean-Loup

    2018-02-05

    With extensive knowledge and training in the prevention, management, and treatment of disease conditions in animals, veterinarians play a critical role in ensuring good welfare on swine farms by training caretakers on the importance of timely euthanasia. To assist veterinarians and other industry professionals in training new and seasoned caretakers, an interactive computer-based training program was created. It consists of three modules, each containing five case studies, which cover three distinct production stages (breeding stock, piglets, and wean to grower-finisher pigs). Case study development was derived from five specific euthanasia criteria defined in the 2015 Common Swine Industry Audit, a nationally recognized auditing program used in the US. Case studies provide information regarding treatment history, clinical signs, and condition severity of the pig and prompt learners to make management decisions regarding pig treatment and care. Once a decision is made, feedback is provided so learners understand the appropriateness of their decision compared to current industry guidelines. In addition to training farm personnel, this program may also be a valuable resource if incorporated into veterinary, graduate, and continuing education curricula. This innovative tool represents the first interactive euthanasia-specific training program in the US swine industry and offers the potential to improve timely and humane on-farm pig euthanasia.

  7. Audio-Visual Tibetan Speech Recognition Based on a Deep Dynamic Bayesian Network for Natural Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Yue Zhao

    2012-12-01

    Audio-visual speech recognition is a natural and robust approach to improving human-robot interaction in noisy environments. Although multi-stream Dynamic Bayesian Networks and coupled HMMs are widely used for audio-visual speech recognition, they fail to learn the shared features between modalities and ignore the dependency of features among the frames within each discrete state. In this paper, we propose a Deep Dynamic Bayesian Network (DDBN) to perform unsupervised extraction of spatial-temporal multimodal features from Tibetan audio-visual speech data and build an accurate audio-visual speech recognition model without a frame-independence assumption. The experimental results on Tibetan speech data from real-world environments showed that the proposed DDBN outperforms state-of-the-art methods in word recognition accuracy.

  8. Methodological issues in analyzing human communication – the complexities of multimodality

    DEFF Research Database (Denmark)

    Høegh, Tina

    2017-01-01

    This chapter develops a multimodal method for transcribing speech, communication, and performance. The chapter discusses the methodological solutions to the complex translation of speech, language rhythm and gesture in time and space into the two-dimensional format of a piece of paper. The focus...

  9. Eyewear Computing – Augmenting the Human with Head-mounted Wearable Assistants (Dagstuhl Seminar 16042)

    OpenAIRE

    Bulling, Andreas; Cakmakci, Ozan; Kunze, Kai; Rehg, James M.

    2016-01-01

    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with a diverse background, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to...

  10. International workshop on multimodal virtual and augmented reality (workshop summary)

    NARCIS (Netherlands)

    Hürst, W.O.; Iwai, Daisuke; Balakrishnan, Prabhakaran

    2016-01-01

    Virtual reality (VR) and augmented reality (AR) are expected by many to become the next wave of computing with significant impacts on our daily lives. Motivated by this, we organized a workshop on “Multimodal Virtual and Augmented Reality (MVAR)” at the 18th ACM International Conference on

  11. LATTICE: an interactive lattice computer code

    International Nuclear Information System (INIS)

    Staples, J.

    1976-10-01

    LATTICE is a computer code which enables an interactive user to calculate the functions of a synchrotron lattice. This program satisfies the requirements at LBL for a simple interactive lattice program by borrowing ideas from both TRANSPORT and SYNCH. A fitting routine is included.
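
    The core of such a lattice code reduces to products of 2x2 transfer matrices. A minimal sketch for a thin-lens FODO cell, recovering the betatron phase advance per cell from the trace of the one-cell matrix (the drift length and focal length are illustrative; stability requires f > L/2):

        import numpy as np

        def drift(L):  return np.array([[1.0, L], [0.0, 1.0]])
        def quad(f):   return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])  # thin lens

        L, f = 5.0, 7.0                        # metres; illustrative values
        # Symmetric cell: half F quad, drift, D quad, drift, half F quad.
        M = quad(2 * f) @ drift(L) @ quad(-f) @ drift(L) @ quad(2 * f)
        mu = np.arccos(np.trace(M) / 2.0)      # stable cell when |trace| < 2
        print("phase advance per cell:", np.degrees(mu), "deg")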

  12. Designing an Automated Assessment of Public Speaking Skills Using Multimodal Cues

    Science.gov (United States)

    Chen, Lei; Feng, Gary; Leong, Chee Wee; Joe, Jilliam; Kitchen, Christopher; Lee, Chong Min

    2016-01-01

    Traditional assessments of public speaking skills rely on human scoring. We report an initial study on the development of an automated scoring model for public speaking performances using multimodal technologies. Task design, rubric development, and human rating were conducted according to standards in educational assessment. An initial corpus of…

  13. [The value of multimodal imaging by single photon emission computed tomography associated to X ray computed tomography (SPECT-CT) in the management of differentiated thyroid carcinoma: about 156 cases].

    Science.gov (United States)

    Mhiri, Aida; El Bez, Intidhar; Slim, Ihsen; Meddeb, Imène; Yeddes, Imene; Ghezaiel, Mohamed; Gritli, Saïd; Ben Slimène, Mohamed Faouzi

    2013-10-01

    Single photon emission computed tomography combined with low-dose computed tomography (SPECT-CT) is a hybrid imaging technique integrating functional and anatomical data. The purpose of our study was to evaluate the contribution of SPECT-CT over traditional planar imaging in patients with differentiated thyroid carcinoma (DTC). Post-therapy 131I whole-body scans followed by SPECT-CT of the neck and thorax were performed in 156 patients with DTC. Among these 156 patients, followed predominantly for papillary carcinoma, the use of fusion SPECT-CT imaging compared to conventional planar imaging allowed us to correct our therapeutic approach in 26.9% (42/156 patients), according to the protocols of therapeutic management of our institute. SPECT-CT is a multimodal imaging technique providing better identification and more accurate anatomic localization of the foci of radioiodine uptake, with an impact on therapeutic management.

  14. Emotion based human-robot interaction

    Directory of Open Access Journals (Sweden)

    Berns Karsten

    2018-01-01

    Human-machine interaction is a major challenge in the development of complex humanoid robots. In addition to verbal communication, the use of non-verbal cues such as hand, arm and body gestures or mimics can improve understanding of the robot's intention. On the other hand, by perceiving such mechanisms in a human in a typical interaction scenario, the humanoid robot can better adapt its interaction skills. In this work, the perception systems of two social robots, ROMAN and ROBIN of the RRLAB of the TU Kaiserslautern, are presented in the context of human-robot interaction.

  15. The Multimodal Possibilities of Online Instructions

    DEFF Research Database (Denmark)

    Kampf, Constance

    2006-01-01

    The WWW simplifies the process of delivering online instructions through multimodal channels because of the ease of use for voice, video, pictures, and text modes of communication built into it. Given that instructions are being produced in multimodal format for the WWW, how do multi-modal analy...

  16. Compressive multi-mode superresolution display

    KAUST Repository

    Heide, Felix

    2014-01-01

    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image. © 2014 Optical Society of America.
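
    In simplified form, the splitting step can be viewed as a nonnegative factorization: a target image T is approximated by K time-multiplexed rank-1 frames shown on two crossed layers. The sketch below uses generic Lee-Seung multiplicative updates rather than the paper's actual solver, and the image and K are synthetic placeholders:

        import numpy as np

        rng = np.random.default_rng(4)
        T = rng.random((32, 32))                   # target image (synthetic)
        K = 4                                      # time-multiplexed frames
        A, B = rng.random((32, K)), rng.random((K, 32))
        for _ in range(200):                       # Lee-Seung multiplicative updates
            A *= (T @ B.T) / (A @ B @ B.T + 1e-9)
            B *= (A.T @ T) / (A.T @ A @ B + 1e-9)
        print("relative error:", np.linalg.norm(T - A @ B) / np.linalg.norm(T))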

  17. Interaction debugging : an integral approach to analyze human-robot interaction

    NARCIS (Netherlands)

    Kooijmans, T.; Kanda, T.; Bartneck, C.; Ishiguro, H.; Hagita, N.

    2006-01-01

    Along with the development of interactive robots, controlled experiments and field trials are regularly conducted to stage human-robot interaction. Experience in this field has shown that analyzing human-robot interaction for evaluation purposes fosters the development of improved systems and the

  18. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.

    Science.gov (United States)

    Scharfe, Michael; Pielot, Rainer; Schreiber, Falk

    2010-01-11

    Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.
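
    As a rough, single-threaded illustration of the registration objective only (not the authors' Cell-optimized code), the Python sketch below scores every slice of a 3D volume against a 2D cross-section using normalized mutual information and returns the best match; the helper names nmi and register_slice are ours, and a full solution would also search over in-plane rotations and translations.

        import numpy as np

        def nmi(a, b, bins=32):
            # Normalized mutual information between two same-sized 2D images.
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = hist / hist.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
            hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
            hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
            return (hx + hy) / hxy

        def register_slice(section, volume):
            # Exhaustive search for the volume slice most similar to the section.
            scores = [nmi(section, volume[z]) for z in range(volume.shape[0])]
            best_z = int(np.argmax(scores))
            return best_z, scores[best_z]

        # Toy data: a random volume and a noisy copy of one of its slices.
        rng = np.random.default_rng(0)
        vol = rng.random((64, 128, 128))
        section = vol[40] + 0.05 * rng.standard_normal((128, 128))
        print(register_slice(section, vol))  # expected best slice: 40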

  19. The Particle Beam Optics Interactive Computer Laboratory

    International Nuclear Information System (INIS)

    Gillespie, George H.; Hill, Barrey W.; Brown, Nathan A.; Babcock, R. Chris; Martono, Hendy; Carey, David C.

    1997-01-01

    The Particle Beam Optics Interactive Computer Laboratory (PBO Lab) is an educational software concept to aid students and professionals in learning about charged particle beams and particle beam optical systems. The PBO Lab is being developed as a cross-platform application and includes four key elements. The first is a graphic user interface shell that provides for a highly interactive learning session. The second is a knowledge database containing information on electric and magnetic optics transport elements. The knowledge database provides interactive tutorials on the fundamental physics of charged particle optics and on the technology used in particle optics hardware. The third element is a graphical construction kit that provides tools for students to interactively and visually construct optical beamlines. The final element is a set of charged particle optics computational engines that compute trajectories, transport beam envelopes, fit parameters to optical constraints and carry out similar calculations for the student designed beamlines. The primary computational engine is provided by the third-order TRANSPORT code. Augmenting TRANSPORT is the multiple ray tracing program TURTLE and a first-order matrix program that includes a space charge model and support for calculating single particle trajectories in the presence of the beam space charge. This paper describes progress on the development of the PBO Lab

  20. VoxelStats: A MATLAB Package for Multi-Modal Voxel-Wise Brain Image Analysis.

    Science.gov (United States)

    Mathotaarachchi, Sulantha; Wang, Seqian; Shin, Monica; Pascoal, Tharick A; Benedet, Andrea L; Kang, Min Su; Beaudry, Thomas; Fonov, Vladimir S; Gauthier, Serge; Labbe, Aurélie; Rosa-Neto, Pedro

    2016-01-01

    In healthy individuals, behavioral outcomes are highly associated with the variability in brain regional structure or neurochemical phenotypes. Similarly, in the context of neurodegenerative conditions, neuroimaging reveals that cognitive decline is linked to the magnitude of atrophy, neurochemical declines, or concentrations of abnormal protein aggregates across brain regions. However, modeling the effects of multiple regional abnormalities as determinants of cognitive decline at the voxel level remains largely unexplored by multimodal imaging research, given the high computational cost of estimating regression models for every single voxel from various imaging modalities. VoxelStats is a voxel-wise computational framework to overcome these computational limitations and to perform statistical operations on multiple scalar variables and imaging modalities at the voxel level. The VoxelStats package has been developed in Matlab® and supports imaging formats such as Nifti-1, ANALYZE, and MINC v2. Prebuilt functions in VoxelStats enable the user to perform voxel-wise general and generalized linear models and mixed effect models with multiple volumetric covariates. Importantly, VoxelStats can recognize scalar values or image volumes as response variables and can accommodate volumetric statistical covariates as well as their interaction effects with other variables. Furthermore, this package includes built-in functionality to perform voxel-wise receiver operating characteristic analysis and paired and unpaired group contrast analysis. Validation of VoxelStats was conducted by comparing the linear regression functionality with existing toolboxes such as glim_image and RMINC. The validation results were identical to existing methods and the additional functionality was demonstrated by generating feature case assessments (t-statistics, odds ratio, and true positive rate maps). In summary, VoxelStats expands the current methods for multimodal imaging analysis by allowing the
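
    VoxelStats itself is a Matlab package; purely as a hedged NumPy illustration of the voxel-wise modeling idea, the sketch below fits an independent linear model at every voxel and returns beta and t-statistic maps. The function name, design matrix and simulated data are ours, not part of the package.

        import numpy as np

        def voxelwise_glm(y, X):
            # Fit y_v = X @ beta_v independently at every voxel v.
            # y: (n_subjects, n_voxels) responses; X: (n_subjects, p) design matrix.
            beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)     # (p, n_voxels)
            resid = y - X @ beta
            dof = X.shape[0] - X.shape[1]
            sigma2 = (resid ** 2).sum(axis=0) / dof               # (n_voxels,)
            xtx_inv_diag = np.diag(np.linalg.inv(X.T @ X))
            se = np.sqrt(np.outer(xtx_inv_diag, sigma2))          # (p, n_voxels)
            return beta, beta / se                                # betas, t-maps

        # Toy example: 20 subjects, 1000 voxels, age as the covariate of interest.
        rng = np.random.default_rng(1)
        n_subj, n_vox = 20, 1000
        age = rng.uniform(60, 85, n_subj)
        X = np.column_stack([np.ones(n_subj), age])
        y = 0.1 * age[:, None] + rng.standard_normal((n_subj, n_vox))
        betas, tmaps = voxelwise_glm(y, X)
        print(tmaps.shape)  # (2, 1000): intercept and age t-statistic maps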

  1. Multimodal Observation and Classification of People Engaged in Problem Solving: Application to Chess Players

    Directory of Open Access Journals (Sweden)

    Thomas Guntz

    2018-03-01

    Full Text Available In this paper we present the first results of a pilot experiment in the interpretation of multimodal observations of human experts engaged in solving challenging chess problems. Our goal is to investigate the extent to which observations of eye-gaze, posture, emotion and other physiological signals can be used to model the cognitive state of subjects, and to explore the integration of multiple sensor modalities to improve the reliability of detection of human displays of awareness and emotion. Domains of application for such cognitive-model-based systems are, for instance, healthy autonomous ageing or automated training systems. The ability to observe cognitive states and emotional reactions can allow artificial systems to provide appropriate assistance in such contexts. We observed chess players engaged in problems of increasing difficulty while recording their behavior. Such recordings can be used to estimate a participant's awareness of the current situation and to predict the ability to respond effectively to challenging situations. Feature selection has been performed to construct a multimodal classifier relying on the most relevant features from each modality. Initial results indicate that eye-gaze, body posture and emotion are good features to capture such awareness. This experiment also validates the use of our equipment as a general and reproducible tool for the study of participants engaged in screen-based interaction and/or problem solving.
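
    A minimal scikit-learn sketch of the step described above, selecting the most relevant features per modality before training one fused classifier. The modality names, feature counts and labels are synthetic placeholders, not the study's actual corpus or pipeline.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        n = 200
        modalities = {                      # per-modality feature matrices
            "gaze":    rng.standard_normal((n, 30)),
            "posture": rng.standard_normal((n, 20)),
            "emotion": rng.standard_normal((n, 10)),
        }
        y = rng.integers(0, 2, n)           # e.g. coping vs. overwhelmed

        selected = [SelectKBest(f_classif, k=5).fit(X, y).transform(X)
                    for X in modalities.values()]   # top 5 features per modality
        X_fused = np.hstack(selected)

        clf = LogisticRegression(max_iter=1000).fit(X_fused, y)
        print(f"training accuracy: {clf.score(X_fused, y):.2f}")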

  2. Discrimination of skin diseases using the multimodal imaging approach

    Science.gov (United States)

    Vogler, N.; Heuke, S.; Akimov, D.; Latka, I.; Kluschke, F.; Röwert-Huber, H.-J.; Lademann, J.; Dietzek, B.; Popp, J.

    2012-06-01

    Optical microspectroscopic tools reveal great potential for dermatologic diagnostics in the clinical day-to-day routine. To enhance the diagnostic value of individual nonlinear optical imaging modalities such as coherent anti-Stokes Raman scattering (CARS), second harmonic generation (SHG) or two-photon excited fluorescence (TPF), the approach of multimodal imaging has recently been developed. Here, we present an application of nonlinear optical multimodal imaging with Raman-scattering microscopy to study sizable human-tissue cross-sections. The samples investigated contain both healthy tissue and various skin tumors. This contribution details the rich information content that can be obtained from the multimodal approach: While CARS microscopy, which - in contrast to spontaneous Raman-scattering microscopy - is not hampered by single-photon excited fluorescence, is used to monitor the lipid and protein distribution in the samples, SHG imaging selectively highlights the distribution of collagen structures within the tissue. This is because SHG is generated only in structures that lack inversion symmetry. Finally, TPF reveals the distribution of autofluorophores in tissue. The combination of these techniques, i.e. multimodal imaging, allows for recording chemical images of large-area samples and is - as this contribution will highlight - of high clinical diagnostic value.

  3. Polarization-Sensitive Hyperspectral Imaging in vivo: A Multimode Dermoscope for Skin Analysis

    Science.gov (United States)

    Vasefi, Fartash; MacKinnon, Nicholas; Saager, Rolf B.; Durkin, Anthony J.; Chave, Robert; Lindsley, Erik H.; Farkas, Daniel L.

    2014-05-01

    Attempts to understand the changes in the structure and physiology of human skin abnormalities by non-invasive optical imaging are aided by spectroscopic methods that quantify, at the molecular level, variations in tissue oxygenation and melanin distribution. However, current commercial and research systems to map hemoglobin and melanin do not correlate well with pathology for pigmented lesions or darker skin. We developed a multimode dermoscope that combines polarization and hyperspectral imaging with an efficient analytical model to map the distribution of specific skin bio-molecules. This corrects for the melanin-hemoglobin misestimation common to other systems, without resorting to complex and computationally intensive tissue optical models. For this system's proof of concept, human skin measurements on melanocytic nevus, vitiligo, and venous occlusion conditions were performed in volunteers. The resulting molecular distribution maps matched physiological and anatomical expectations, confirming a technologic approach that can be applied to next generation dermoscopes and having biological plausibility that is likely to appeal to dermatologists.

  4. project SENSE : multimodal simulation with full-body real-time verbal and nonverbal interactions

    NARCIS (Netherlands)

    Miri, Hossein; Kolkmeier, Jan; Taylor, Paul Jonathon; Poppe, Ronald; Heylen, Dirk; Poppe, Ronald; Meyer, John-Jules; Veltkamp, Remco; Dastani, Mehdi

    2016-01-01

    This paper presents a multimodal simulation system, project-SENSE, that combines virtual reality and full-body motion capture technologies with real-time verbal and nonverbal communication. We introduce the technical setup and employed hardware and software of a first prototype. We discuss the

  5. Multimodal distribution of human cold pain thresholds.

    Science.gov (United States)

    Lötsch, Jörn; Dimova, Violeta; Lieb, Isabel; Zimmermann, Michael; Oertel, Bruno G; Ultsch, Alfred

    2015-01-01

    It is assumed that different pain phenotypes are based on varying molecular pathomechanisms. Distinct ion channels seem to be associated with the perception of cold pain; in particular, TRPM8 and TRPA1 have been highlighted previously. The present study analyzed the distribution of cold pain thresholds with a focus on describing its multimodality, based on the hypothesis that it reflects the contribution of distinct ion channels. Cold pain thresholds (CPT) were available from 329 healthy volunteers (aged 18-37 years; 159 men) enrolled in previous studies. The distribution of the pooled and log-transformed threshold data was described using kernel density estimation (Pareto Density Estimation, PDE) and subsequently, the log data were modeled as a mixture of Gaussian distributions using the expectation maximization (EM) algorithm to optimize the fit. CPTs were clearly multimodally distributed. Fitting a Gaussian Mixture Model (GMM) to the log-transformed threshold data revealed that the best fit is obtained when applying a three-component distribution pattern. The modes of the identified three Gaussian distributions, retransformed from the log domain to the mean stimulation temperatures at which the subjects had indicated pain thresholds, were obtained at 23.7 °C, 13.2 °C and 1.5 °C for Gaussians #1, #2 and #3, respectively. The localization of the first and second Gaussians was interpreted as reflecting the contribution of two different cold sensors. From the calculated localization of the modes of the first two Gaussians, the hypothesis of an involvement of TRPM8, sensing temperatures from 25 to 24 °C, and TRPA1, sensing cold from 17 °C, can be derived. In that case, subjects belonging to either Gaussian would possess a dominance of one or the other receptor at the skin area where the cold stimuli had been applied. The findings therefore support the suitability of complex analytical approaches for detecting mechanistically determined patterns in pain phenotype data.
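
    As a compact sketch of the modeling step: the study used Pareto Density Estimation plus a purpose-built EM fit, whereas here scikit-learn's EM-based GaussianMixture stands in, applied to simulated thresholds whose modes loosely mimic the reported ones; all numbers below are illustrative, not the study's data.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(3)
        # Simulate three subgroups of cold pain thresholds (degrees Celsius).
        temps = np.concatenate([rng.normal(23.7, 1.5, 150),
                                rng.normal(13.2, 2.0, 120),
                                rng.normal(1.5, 1.0, 59)])
        x = np.log(temps - temps.min() + 1.0).reshape(-1, 1)   # log-transform

        # Fit 1..4 component mixtures and pick the best by BIC.
        fits = {k: GaussianMixture(k, random_state=0).fit(x) for k in (1, 2, 3, 4)}
        best_k = min(fits, key=lambda k: fits[k].bic(x))
        print("number of modes:", best_k)                      # 3 expected here
        print("component means (log scale):", fits[best_k].means_.ravel())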

  6. Toward in vivo diagnosis of skin cancer using multimode imaging dermoscopy: (II) molecular mapping of highly pigmented lesions

    Science.gov (United States)

    Vasefi, Fartash; MacKinnon, Nicholas; Farkas, Daniel L.

    2014-03-01

    We have developed a multimode imaging dermoscope that combines polarization and hyperspectral imaging with a computationally rapid analytical model. This approach employs specific spectral ranges of visible and near infrared wavelengths for mapping the distribution of specific skin bio-molecules. This corrects for the melanin-hemoglobin misestimation common to other systems, without resorting to complex and computationally intensive tissue optical models that are prone to inaccuracies due to over-modeling. Various human skin measurements including a melanocytic nevus, and venous occlusion conditions were investigated and compared with other ratiometric spectral imaging approaches. Access to the broad range of hyperspectral data in the visible and near-infrared range allows our algorithm to flexibly use different wavelength ranges for chromophore estimation while minimizing melanin-hemoglobin optical signature cross-talk.

  7. Computer Assistance for Writing Interactive Programs: TICS.

    Science.gov (United States)

    Kaplow, Roy; And Others

    1973-01-01

    Investigators developed an on-line, interactive programing system--the Teacher-Interactive Computer System (TICS)--to provide assistance to those who were not programers, but nevertheless wished to write interactive instructional programs. TICS had two components: an author system and a delivery system. Underlying assumptions were that…

  8. Intelligent Interaction for Human-Friendly Service Robot in Smart House Environment

    Directory of Open Access Journals (Sweden)

    Z. Zenn Bien

    2008-01-01

    Full Text Available The smart house under consideration is a service-integrated complex system to assist older persons and/or people with disabilities. The primary goal of the system is to achieve independent living through various robotic devices and systems. Such a system is treated as a human-in-the-loop system in which human-robot interaction takes place intensely and frequently. Based on our experience of having designed and implemented a smart house environment, called Intelligent Sweet Home (ISH), we present a framework for realizing a human-friendly HRI (human-robot interaction) module with various effective techniques of computational intelligence. More specifically, we partition the robotic tasks of the HRI module into three groups according to the level of specificity, fuzziness or uncertainty of the system context, and present an effective interaction method for each case. We first show a task planning algorithm and its architecture to deal with well-structured tasks autonomously through a simplified set of user commands instead of inconvenient manual operations. To provide the capability of interacting in a human-friendly way in a fuzzy context, we propose that the robot make use of human bio-signals as input to the HRI module, as shown in a hand-gesture recognition system called a soft remote control system. Finally we discuss a probabilistic fuzzy rule-based life-long learning system, equipped with intention-reading capability through learning human behavioral patterns, which is introduced as a solution for uncertain and time-varying situations.

  9. General aviation design synthesis utilizing interactive computer graphics

    Science.gov (United States)

    Galloway, T. L.; Smith, M. R.

    1976-01-01

    Interactive computer graphics is a fast growing area of computer application, due to such factors as substantial cost reductions in hardware, general availability of software, and expanded data communication networks. In addition to allowing faster and more meaningful input/output, computer graphics permits the use of data in graphic form to carry out parametric studies for configuration selection and for assessing the impact of advanced technologies on general aviation designs. The incorporation of interactive computer graphics into a NASA developed general aviation synthesis program is described, and the potential uses of the synthesis program in preliminary design are demonstrated.

  10. A Robust Multimodal Biometric Authentication Scheme with Voice and Face Recognition

    International Nuclear Information System (INIS)

    Kasban, H.

    2017-01-01

    This paper proposes a multimodal biometric scheme for human authentication based on the fusion of voice and face recognition. For voice recognition, three categories of features (statistical coefficients, cepstral coefficients and voice timbre) are used and compared. The voice identification modality is carried out using a Gaussian Mixture Model (GMM). For face recognition, three recognition methods (Eigenface, Linear Discriminant Analysis (LDA), and Gabor filter) are used and compared. The combination of the voice and face biometric systems into a single multimodal biometric system is performed using feature fusion and score fusion. This study shows that the best results are obtained using all the features (cepstral coefficients, statistical coefficients and voice timbre features) for voice recognition, the LDA face recognition method, and score fusion for the multimodal biometric system.
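
    As an illustration of the score-fusion stage only (the paper compares feature-level and score-level fusion rules), this sketch min-max normalizes each matcher's similarity scores and combines them with a weighted sum; the weights and scores are invented for the example.

        import numpy as np

        def fuse_scores(voice_scores, face_scores, w_voice=0.5):
            # Weighted-sum fusion of two matchers' normalized similarity scores.
            def minmax(s):
                s = np.asarray(s, dtype=float)
                return (s - s.min()) / (s.max() - s.min())
            return w_voice * minmax(voice_scores) + (1 - w_voice) * minmax(face_scores)

        # Scores for five probe-gallery comparisons from each modality.
        voice = [0.62, 0.10, 0.75, 0.33, 0.90]
        face = [0.55, 0.20, 0.80, 0.25, 0.85]
        fused = fuse_scores(voice, face, w_voice=0.6)
        print((fused > 0.5).astype(int))   # accept/reject at a 0.5 threshold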

  11. Introducing the Geneva Multimodal expression corpus for experimental research on emotion perception.

    Science.gov (United States)

    Bänziger, Tanja; Mortillaro, Marcello; Scherer, Klaus R

    2012-10-01

    Research on the perception of emotional expressions in faces and voices is exploding in psychology, the neurosciences, and affective computing. This article provides an overview of some of the major emotion expression (EE) corpora currently available for empirical research and introduces a new, dynamic, multimodal corpus of emotion expressions, the Geneva Multimodal Emotion Portrayals Core Set (GEMEP-CS). The design features of the corpus are outlined and justified, and detailed validation data for the core set selection are presented and discussed. Finally, an associated database with microcoded facial, vocal, and body action elements, as well as observer ratings, is introduced.

  12. Multimodal Counseling Interventions: Effect on Human Papilloma Virus Vaccination Acceptance

    Directory of Open Access Journals (Sweden)

    Oroma Nwanodi

    2017-11-01

    Full Text Available Human papilloma virus (HPV) vaccine was developed to reduce HPV-attributable cancers, external genital warts (EGW), and recurrent respiratory papillomatosis. Adolescent HPV vaccination series completion rates are less than 40% in the United States of America, but up to 80% in Australia and the United Kingdom. Population-based herd immunity requires 80% or greater vaccination series completion rates. Pro-vaccination counseling facilitates increased vaccination rates. Multimodal counseling interventions may increase HPV vaccination series non-completers' HPV-attributable disease knowledge and HPV-attributable disease prophylaxis (vaccination) acceptance over a brief 14-sentence counseling intervention. An online, 4-group, randomized controlled trial, with 260 or more participants per group, found that parents were more likely to accept HPV vaccination offers for their children than were childless young adults for themselves (68.2% and 52.9%, respectively). A combined audiovisual and patient health education handout (PHEH) intervention raised knowledge of HPV vaccination purpose, p = 0.02, and HPV vaccination acceptance for seven items, p < 0.001 to p = 0.023. The audiovisual intervention increased HPV vaccination acceptance for five items, p < 0.001 to p = 0.006. That HPV causes EGW, and that HPV vaccination prevents HPV-attributable diseases, were better conveyed by the combined audiovisual and PHEH than the control 14-sentence counseling intervention alone.

  13. Synchronous Computer-Mediated Communication and Interaction

    Science.gov (United States)

    Ziegler, Nicole

    2016-01-01

    The current study reports on a meta-analysis of the relative effectiveness of interaction in synchronous computer-mediated communication (SCMC) and face-to-face (FTF) contexts. The primary studies included in the analysis were journal articles and dissertations completed between 1990 and 2012 (k = 14). Results demonstrate that interaction in SCMC…

  14. Tangible interactive system for document browsing and visualisation of multimedia data

    Science.gov (United States)

    Rytsar, Yuriy; Voloshynovskiy, Sviatoslav; Koval, Oleksiy; Deguillaume, Frederic; Topak, Emre; Startchik, Sergei; Pun, Thierry

    2006-01-01

    In this paper we introduce and develop a framework for interactive document navigation in multimodal databases. First, we analyze the main open issues of existing multimodal interfaces and then discuss two applications that include interaction with documents in several human environments, i.e., the so-called smart rooms. Second, we propose a system set-up dedicated to efficient navigation in printed documents. This set-up is based on the fusion of data from several modalities that include images and text. Both modalities can be used as cover data for hidden indexes using data-hiding technologies as well as source data for robust visual hashing. The particularities of the proposed robust visual hashing are described in the paper. Finally, we address two practical applications of smart rooms for tourism and education and demonstrate the advantages of the proposed solution.

  15. The Interactive Origin and the Aesthetic Modelling of Image-Schemas and Primary Metaphors.

    Science.gov (United States)

    Martínez, Isabel C; Español, Silvia A; Pérez, Diana I

    2018-06-02

    According to the theory of conceptual metaphor, image-schemas and primary metaphors are preconceptual structures configured in human cognition, based on sensory-motor environmental activity. Focusing on the way both non-conceptual structures are embedded in early social interaction, we provide empirical evidence for the interactive and intersubjective ontogenesis of image-schemas and primary metaphors. We present the results of a multimodal image-schematic microanalysis of three interactive infant-directed performances (the composition of movement, touch, speech, and vocalization that adults produce for-and-with the infants). The microanalyses show that adults aesthetically highlight the image-schematic structures embedded in the multimodal composition of the performance, and that primary metaphors are also lived as embedded in these inter-enactive experiences. The findings corroborate that the psychological domains of cognition and affection are not in rivalry or conflict but rather intertwined in meaningful experiences.

  16. Quantifying human-environment interactions using videography in the context of infectious disease transmission

    Directory of Open Access Journals (Sweden)

    Timothy R. Julian

    2018-05-01

    Full Text Available Quantitative data on human-environment interactions are needed to fully understand infectious disease transmission processes and conduct accurate risk assessments. Interaction events occur during an individual’s movement through, and contact with, the environment, and can be quantified using diverse methodologies. Methods that utilize videography, coupled with specialized software, can provide a permanent record of events, collect detailed interactions in high resolution, be reviewed for accuracy, capture events difficult to observe in real-time, and gather multiple concurrent phenomena. In the accompanying video, the use of specialized software to capture human-environment interactions for human exposure and disease transmission is highlighted. Use of videography, combined with specialized software, allows for the collection of accurate quantitative representations of human-environment interactions in high resolution. Two specialized programs include the Virtual Timing Device for the Personal Computer, which collects sequential microlevel activity time series of contact events and interactions, and LiveTrak, which is optimized to facilitate annotation of events in real-time. Opportunities to annotate behaviors at high resolution using these tools are promising, permitting detailed records that can be summarized to gain information on infectious disease transmission and incorporated into more complex models of human exposure and risk.

  17. Learning new skills in Multimodal Enactive Environments

    Directory of Open Access Journals (Sweden)

    Bardy Benoît G.

    2011-12-01

    Full Text Available A European consortium of researchers in movement and cognitive sciences, robotics, and interaction design developed multimodal technologies to accelerate and transfer the (re)learning of complex skills from virtual to real environments. The decomposition of skill into functional elements — the subskills — and the enactment of informational variables used as accelerators are described here. One illustration of an accelerator using virtual reality in team rowing is described.

  18. MANIFESTATION OF MANIPULATION IN POLITICAL TALK-SHOWS: COGNITIVE AND MULTIMODAL ASPECTS

    Directory of Open Access Journals (Sweden)

    Petrova Anna Aleksandrovna

    2014-11-01

    Full Text Available The article deals with the manifestation of manipulation in political television talk-shows. The suggestive processes of interaction in the analyzed genre of media political discourse are studied in two aspects: (a) monomodal, as speech manipulation by verbal means at the level of emotional suggestion; (b) multimodal, as counter-suggestion that restricts the effect of suggestion with visual and kinetic resources. The foundation of the cognitive analysis is a modeling method with a linguistic model which contains components of the cognitive and emotional processing of meaning, conclusions and reasoning. According to this three-component model, speech manipulation consists in activating the dominant scripts of an addressee and is ensured by the verbal resources of suggestion associated with these scripts. The foundation of the multimodal study of situations with counter-suggestion in mass-media discourse is an ethnomethodological method with a reconstruction device. With this scientific approach, the authors divided the resources of protection from activating manipulation into two groups: (1) passive interactive communication of a suggestee in a verbal pause; (2) active interactive communication of a suggestee aimed at changing status and role domination. The empirical study of two isolated modalities and their correlations in specific situations of political talk-shows allowed the authors to develop the hypothesis that a fourth visual and kinetic component exists, representing spatial and corporal constellations with other modes (or modalities) of communication and their configurations. This study emphasizes the need to extend the research frame for complex interactive processes of communication by studying them in the multimodal aspect.

  19. Appearance-Based Multimodal Human Tracking and Identification for Healthcare in the Digital Home

    Directory of Open Access Journals (Sweden)

    Mau-Tsuen Yang

    2014-08-01

    Full Text Available There is an urgent need for intelligent home surveillance systems to provide home security, monitor health conditions, and detect emergencies of family members. One of the fundamental problems to realize the power of these intelligent services is how to detect, track, and identify people at home. Compared to RFID tags that need to be worn all the time, vision-based sensors provide a natural and nonintrusive solution. Observing that body appearance and body build, as well as face, provide valuable cues for human identification, we model and record multi-view faces, full-body colors and shapes of family members in an appearance database by using two Kinects located at a home’s entrance. Then the Kinects and another set of color cameras installed in other parts of the house are used to detect, track, and identify people by matching the captured color images with the registered templates in the appearance database. People are detected and tracked by multisensor fusion (Kinects and color cameras) using a Kalman filter that can handle duplicate or partial measurements. People are identified by multimodal fusion (face, body appearance, and silhouette) using a track-based majority voting. Moreover, the appearance-based human detection, tracking, and identification modules can cooperate seamlessly and benefit from each other. Experimental results show the effectiveness of the human tracking across multiple sensors and human identification considering the information of multi-view faces, full-body clothes, and silhouettes. The proposed home surveillance system can be applied to domestic applications in digital home security and intelligent healthcare.
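
    A toy sketch of the track-based majority voting described above: each frame of a track contributes per-modality identity votes (face, body appearance, silhouette), which are pooled so the most frequent identity labels the track. The data structures are our simplification of the system.

        from collections import Counter

        def identify_track(frame_votes):
            # frame_votes: list of dicts, one per frame, modality -> identity.
            pool = Counter(v for frame in frame_votes for v in frame.values())
            identity, count = pool.most_common(1)[0]
            return identity, count / sum(pool.values())   # label and confidence

        track = [
            {"face": "alice", "body": "alice", "silhouette": "bob"},
            {"face": "alice", "body": "carol", "silhouette": "alice"},
            {"face": "alice", "body": "alice", "silhouette": "alice"},
        ]
        print(identify_track(track))   # ('alice', 0.777...)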

  20. The Particle Beam Optics Interactive Computer Laboratory

    International Nuclear Information System (INIS)

    Gillespie, G.H.; Hill, B.W.; Brown, N.A.; Babcock, R.C.; Martono, H.; Carey, D.C.

    1997-01-01

    The Particle Beam Optics Interactive Computer Laboratory (PBO Lab) is an educational software concept to aid students and professionals in learning about charged particle beams and particle beam optical systems. The PBO Lab is being developed as a cross-platform application and includes four key elements. The first is a graphic user interface shell that provides for a highly interactive learning session. The second is a knowledge database containing information on electric and magnetic optics transport elements. The knowledge database provides interactive tutorials on the fundamental physics of charged particle optics and on the technology used in particle optics hardware. The third element is a graphical construction kit that provides tools for students to interactively and visually construct optical beamlines. The final element is a set of charged particle optics computational engines that compute trajectories, transport beam envelopes, fit parameters to optical constraints and carry out similar calculations for the student designed beamlines. The primary computational engine is provided by the third-order TRANSPORT code. Augmenting TRANSPORT is the multiple ray tracing program TURTLE and a first-order matrix program that includes a space charge model and support for calculating single particle trajectories in the presence of the beam space charge. This paper describes progress on the development of the PBO Lab. copyright 1997 American Institute of Physics

  1. Multimodal fluorescence imaging spectroscopy

    NARCIS (Netherlands)

    Stopel, Martijn H W; Blum, Christian; Subramaniam, Vinod; Engelborghs, Yves; Visser, Anthonie J.W.G.

    2014-01-01

    Multimodal fluorescence imaging is a versatile method that has a wide application range, from biological studies to materials science. Typical observables in multimodal fluorescence imaging are intensity, lifetime, excitation, and emission spectra, which are recorded at chosen locations on the sample.

  2. A mobile Nursing Information System based on human-computer interaction design for improving quality of nursing.

    Science.gov (United States)

    Su, Kuo-Wei; Liu, Cheng-Li

    2012-06-01

    A conventional Nursing Information System (NIS), which supports the role of nurses in some areas, is typically deployed as an immobile system. However, a traditional information system can't respond to patients' conditions in real-time, causing delays in the availability of this information. With the advances of information technology, mobile devices are increasingly being used to extend the human mind's limited capacity to recall and process large numbers of relevant variables and to support information management, general administration, and clinical practice. Unfortunately, there have been few studies about combining a well-designed small-screen interface with a personal digital assistant (PDA) in clinical nursing. Some researchers found that user interface design is an important factor in determining the usability and potential use of a mobile system. Therefore, this study proposed a systematic approach to the development of a mobile nursing information system (MNIS) based on Mobile Human-Computer Interaction (M-HCI) for use in clinical nursing. The system combines principles of small-screen interface design with user-specified requirements. In addition, the iconic functions were designed with a metaphor concept that helps users learn the system more quickly with less working-memory load. An experiment involving learnability testing, thinking aloud and a questionnaire investigation was conducted to evaluate the effect of the MNIS on a PDA. The results show that the proposed MNIS is easy to learn and achieves high satisfaction with respect to symbols, terminology and system information.

  3. Towards quantifying dynamic human-human physical interactions for robot assisted stroke therapy.

    Science.gov (United States)

    Mohan, Mayumi; Mendonca, Rochelle; Johnson, Michelle J

    2017-07-01

    Human-Robot Interaction is a prominent field of robotics today. Knowledge of human-human physical interaction can prove vital in creating dynamic physical interactions between humans and robots. Most of the current work in studying this interaction has been from a haptic perspective. Through this paper, we present metrics that can be used to identify whether a physical interaction occurred between two people, using kinematics. We present a simple Activity of Daily Living (ADL) task which involves a basic interaction. We show that we can use these metrics to successfully identify interactions.

  4. A novel balance training system using multimodal biofeedback.

    Science.gov (United States)

    Afzal, Muhammad Raheel; Oh, Min-Kyun; Choi, Hye Young; Yoon, Jungwon

    2016-04-22

    A biofeedback-based balance training system can be used to provide compromised sensory information to subjects in order to retrain their sensorimotor function. In this study, the design and evaluation of the low-cost, intuitive biofeedback system developed at Gyeongsang National University is extended to provide multimodal biofeedback for balance training by utilizing visual and haptic modalities. The system consists of a smartphone attached to the waist of the subject to provide information about the tilt of the torso, a personal computer running purpose-built software to process the smartphone data and provide visual biofeedback to the subject by means of a dedicated monitor, and a dedicated Phantom Omni® device for haptic biofeedback. For experimental verification of the system, eleven healthy young participants performed balance tasks assuming two distinct postures for 30 s each while torso tilt was acquired. The postures used were the one-foot stance and the tandem Romberg stance. For both postures, the subjects stood on a foam platform which provided a certain amount of ground instability. Post-experiment data analysis was performed using MATLAB® to analyze the reduction in body sway. Analysis parameters based on the projection of trunk tilt information were calculated in order to ascertain the reduction in body sway and improvements in postural control. Two-way analysis of variance (ANOVA) showed no statistically significant interactions between postures and biofeedback. Post-hoc analysis revealed a statistically significant reduction in body sway on provision of biofeedback. Subjects exhibited maximum body sway during the no-biofeedback trial, followed by either haptic or visual biofeedback, and in most of the trials the combined visual and haptic multimodal biofeedback minimized body sway, indicating that the multimodal biofeedback system worked well to provide a significant reduction in body sway; such a system can offer more customized training
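
    For illustration, one possible post-hoc sway analysis in Python rather than the authors' MATLAB code: project trunk tilt onto anterior-posterior (AP) and medio-lateral (ML) axes and summarize sway with RMS amplitude and path length. The metric choices and simulated signal are assumptions, not the paper's exact parameters.

        import numpy as np

        def sway_metrics(ap_tilt, ml_tilt):
            # ap_tilt, ml_tilt: trunk tilt angle time series (degrees).
            ap, ml = np.asarray(ap_tilt), np.asarray(ml_tilt)
            rms = np.sqrt(np.mean(ap ** 2 + ml ** 2))            # sway magnitude
            path = np.sum(np.hypot(np.diff(ap), np.diff(ml)))    # total sway path
            return {"rms_deg": rms, "path_deg": path}

        # Simulated 30 s trial at 50 Hz with random-walk-like sway.
        rng = np.random.default_rng(4)
        n_samples = 30 * 50
        ap = np.cumsum(rng.normal(0, 0.02, n_samples))
        ml = np.cumsum(rng.normal(0, 0.02, n_samples))
        print(sway_metrics(ap, ml))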

  5. Inorganic Nanoparticles for Multimodal Molecular Imaging

    Directory of Open Access Journals (Sweden)

    Magdalena Swierczewska

    2011-01-01

    Full Text Available Multimodal molecular imaging can offer a synergistic improvement of diagnostic ability over a single imaging modality. Recent development of hybrid imaging systems has profoundly impacted the pool of available multimodal imaging probes. In particular, much interest has been focused on biocompatible, inorganic nanoparticle-based multimodal probes. Inorganic nanoparticles offer exceptional advantages to the field of multimodal imaging owing to their unique characteristics, such as nanometer dimensions, tunable imaging properties, and multifunctionality. Nanoparticles mainly based on iron oxide, quantum dots, gold, and silica have been applied to various imaging modalities to characterize and image specific biologic processes on a molecular level. A combination of nanoparticles and other materials such as biomolecules, polymers, and radiometals continues to increase functionality for in vivo multimodal imaging and therapeutic agents. In this review, we discuss the unique concepts, characteristics, and applications of the various multimodal imaging probes based on inorganic nanoparticles.

  6. Multimodality in organization studies

    DEFF Research Database (Denmark)

    Van Leeuwen, Theo

    2017-01-01

    This afterword reviews the chapters in this volume and reflects on the synergies between organization and management studies and multimodality studies that emerge from the volume. These include the combination of strong sociological theorizing and detailed multimodal analysis, a focus on material...

  7. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets

    Directory of Open Access Journals (Sweden)

    Pielot Rainer

    2010-01-01

    Full Text Available Background: Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results: We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions: The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.

  8. Pseudo-interactive monitoring in distributed computing

    International Nuclear Information System (INIS)

    Sfiligoi, I.; Bradley, D.; Livny, M.

    2009-01-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  9. Pseudo-interactive monitoring in distributed computing

    International Nuclear Information System (INIS)

    Sfiligoi, I; Bradley, D; Livny, M

    2010-01-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  10. Pseudo-interactive monitoring in distributed computing

    Energy Technology Data Exchange (ETDEWEB)

    Sfiligoi, I.; /Fermilab; Bradley, D.; Livny, M.; /Wisconsin U., Madison

    2009-05-01

    Distributed computing, and in particular Grid computing, enables physicists to use thousands of CPU days worth of computing every day, by submitting thousands of compute jobs. Unfortunately, a small fraction of such jobs regularly fail; the reasons vary from disk and network problems to bugs in the user code. A subset of these failures result in jobs being stuck for long periods of time. In order to debug such failures, interactive monitoring is highly desirable; users need to browse through the job log files and check the status of the running processes. Batch systems typically don't provide such services; at best, users get job logs at job termination, and even this may not be possible if the job is stuck in an infinite loop. In this paper we present a novel approach of using regular batch system capabilities of Condor to enable users to access the logs and processes of any running job. This does not provide true interactive access, so commands like vi are not viable, but it does allow operations like ls, cat, top, ps, lsof, netstat and dumping the stack of any process owned by the user; we call this pseudo-interactive monitoring. It is worth noting that the same method can be used to monitor Grid jobs in a glidein-based environment. We further believe that the same mechanism could be applied to many other batch systems.

  11. Practical multimodal care for cancer cachexia.

    Science.gov (United States)

    Maddocks, Matthew; Hopkinson, Jane; Conibear, John; Reeves, Annie; Shaw, Clare; Fearon, Ken C H

    2016-12-01

    Cancer cachexia is common and reduces function, treatment tolerability and quality of life. Given its multifaceted pathophysiology a multimodal approach to cachexia management is advocated for, but can be difficult to realise in practice. We use a case-based approach to highlight practical approaches to the multimodal management of cachexia for patients across the cancer trajectory. Four cases with lung cancer spanning surgical resection, radical chemoradiotherapy, palliative chemotherapy and no anticancer treatment are presented. We propose multimodal care approaches that incorporate nutritional support, exercise, and anti-inflammatory agents, on a background of personalized oncology care and family-centred education. Collectively, the cases reveal that multimodal care is part of everyone's remit, often focuses on supported self-management, and demands buy-in from the patient and their family. Once operationalized, multimodal care approaches can be tested pragmatically, including alongside emerging pharmacological cachexia treatments. We demonstrate that multimodal care for cancer cachexia can be achieved using simple treatments and without a dedicated team of specialists. The sharing of advice between health professionals can help build collective confidence and expertise, moving towards a position in which every team member feels they can contribute towards multimodal care.

  12. FIIND: Ferret Interactive Integrated Neurodevelopment Atlas

    Directory of Open Access Journals (Sweden)

    Roberto Toro

    2018-03-01

    acquired with multi-dimensional cell-scale information. Brains will be sectioned at 25 μm, stained, scanned at 0.25 μm resolution, and processed for real-time multi-scale visualisation. We will extend our current web platform to integrate an interactive multi-scale visualisation of the data. Using our combined expertise in computational neuroanatomy, multi-modal neuroimaging, neuroinformatics, and the development of inter-species atlases, we propose to build an open-source web platform to allow the collaborative, online creation of atlases of the development of the ferret brain. The web platform will allow researchers to access and interactively visualise the MRI and histology data. It will also allow researchers to create collaborative, human-curated 3D segmentations of brain structures, as well as vectorial atlases. Our work will provide a first integrated atlas of ferret brain development, and the basis for an open platform for the creation of collaborative multi-modal, multi-scale, multi-species atlases.

  13. Developing human technology curriculum

    Directory of Open Access Journals (Sweden)

    Teija Vainio

    2012-10-01

    Full Text Available During the past ten years expertise in human-computer interaction has shifted from humans interacting with desktop computers to individual human beings or groups of human beings interacting with embedded or mobile technology. Thus, humans are not only interacting with computers but with technology. Obviously, this shift should be reflected in how we educate human-technology interaction (HTI) experts today and in the future. We tackle this educational challenge first by analysing current Master's-level education in collaboration with two universities and second by discussing postgraduate education in the international context. As a result, we identified core studies that should be included in the HTI curriculum. Furthermore, we discuss some practical challenges and new directions for international HTI education.

  14. The Sweet-Home speech and multimodal corpus for home automation interaction

    OpenAIRE

    Vacher , Michel; Lecouteux , Benjamin; Chahuara , Pedro; Portet , François; Meillon , Brigitte; Bonnefond , Nicolas

    2014-01-01

    International audience; Ambient Assisted Living aims at enhancing the quality of life of older and disabled people at home thanks to Smart Homes and Home Automation. However, many studies do not include tests in real settings, because data collection in this domain is very expensive and challenging and because of the few available data sets. The SWEET-HOME multimodal corpus is a dataset recorded in realistic conditions in DOMUS, a fully equipped Smart Home with microphones and home automati...

  15. Making IBM's Computer, Watson, Human

    Science.gov (United States)

    Rachlin, Howard

    2012-01-01

    This essay uses the recent victory of an IBM computer (Watson) in the TV game, "Jeopardy," to speculate on the abilities Watson would need, in addition to those it has, to be human. The essay's basic premise is that to be human is to behave as humans behave and to function in society as humans function. Alternatives to this premise are considered…

  16. Studying the neurobiology of human social interaction: Making the case for ecological validity.

    Science.gov (United States)

    Hogenelst, Koen; Schoevers, Robert A; aan het Rot, Marije

    2015-01-01

    With this commentary we make the case for an increased focus on the ecological validity of the measures used to assess aspects of human social functioning. Impairments in social functioning are seen in many types of psychopathology, negatively affecting the lives of psychiatric patients and those around them. Yet the neurobiology underlying abnormal social interaction remains unclear. As an example of human social neuroscience research with relevance to biological psychiatry and clinical psychopharmacology, this commentary discusses published experimental studies involving manipulation of the human brain serotonin system that included assessments of social behavior. To date, these studies have mostly been laboratory-based and included computer tasks, observations by others, or single-administration self-report measures. Most laboratory measures used so far inform about the role of serotonin in aspects of social interaction, but the relevance for real-life interaction is often unclear. Few studies have used naturalistic assessments in real life. We suggest several laboratory methods with high ecological validity as well as ecological momentary assessment, which involves intensive repeated measures in naturalistic settings. In sum, this commentary intends to stimulate experimental research on the neurobiology of human social interaction as it occurs in real life.

  17. Multimodal Embodied Mimicry in Interaction

    NARCIS (Netherlands)

    Sun, X.; Esposito, Anna; Vinciarelli, Alessandro; Vicsi, Klára; Pelachaud, Catherine; Nijholt, Antinus

    2011-01-01

    Nonverbal behaviors play an important role in communicating with others. One particular kind of nonverbal interaction behavior is mimicry. It has been argued that behavioral mimicry supports harmonious relationships in social interaction through creating affiliation, rapport, and liking between

  18. Fusion in computer vision understanding complex visual content

    CERN Document Server

    Ionescu, Bogdan; Piatrik, Tomas

    2014-01-01

    This book presents a thorough overview of fusion in computer vision, from an interdisciplinary and multi-application viewpoint, describing successful approaches, evaluated in the context of international benchmarks that model realistic use cases. Features: examines late fusion approaches for concept recognition in images and videos; describes the interpretation of visual content by incorporating models of the human visual system with content understanding methods; investigates the fusion of multi-modal features of different semantic levels, as well as results of semantic concept detections, fo

  19. Advanced Technologies, Embedded and Multimedia for Human-Centric Computing

    CERN Document Server

    Chao, Han-Chieh; Deng, Der-Jiunn; Park, James; HumanCom and EMC 2013

    2014-01-01

    The themes of HumanCom and EMC are focused on the various aspects of human-centric computing for advances in computer science and its applications, and on embedded and multimedia computing, and they provide an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of human-centric computing. The theme of EMC (Advances in Embedded and Multimedia Computing) is focused on the various aspects of embedded systems, smart grid, cloud and multimedia computing, and it provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of embedded and multimedia computing. Therefore this book includes the various theories and practical applications in human-centric computing and embedded and multimedia computing.

  20. Empowering Prospective Teachers to Become Active Sense-Makers: Multimodal Modeling of the Seasons

    Science.gov (United States)

    Kim, Mi Song

    2015-10-01

    Situating science concepts in concrete and authentic contexts, using information and communications technologies, including multimodal modeling tools, is important for promoting the development of higher-order thinking skills in learners. However, teachers often struggle to integrate emergent multimodal models into a technology-rich informal learning environment. Our design-based research co-designs and develops engaging, immersive, and interactive informal learning activities called "Embodied Modeling-Mediated Activities" (EMMA) to support not only Singaporean learners' deep learning of astronomy but also the capacity of teachers. As part of the research on EMMA, this case study describes two prospective teachers' co-design processes involving multimodal models for teaching and learning the concept of the seasons in a technology-rich informal learning setting. Our study uncovers four prominent themes emerging from our data concerning the contextualized nature of learning and teaching involving multimodal models in informal learning contexts: (1) promoting communication and emerging questions, (2) offering affordances through limitations, (3) explaining one concept involving multiple concepts, and (4) integrating teaching and learning experiences. This study has an implication for the development of a pedagogical framework for teaching and learning in technology-enhanced learning environments—that is, empowering teachers to become active sense-makers using multimodal models.

  1. A Learning Algorithm for Multimodal Grammar Inference.

    Science.gov (United States)

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

    The high costs of development and maintenance of multimodal grammars for integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions for automating grammar generation and update processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive sample sentences and afterward makes use of two learning operators and the minimum description length metric to improve the grammar description and to avoid the over-generalization problem. The experimental results highlight the acceptable performance of the proposed algorithm, since it has a very high probability of parsing valid sentences.
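
    A toy rendering of the minimum description length idea named above, for a plain context-free grammar whose terminals could stand for multimodal events: the score is the bits needed to encode the grammar plus the bits needed to encode each sample's derivation under it. The encoding scheme and grammar are crude stand-ins for the paper's metric.

        import math

        def grammar_bits(grammar):
            # Cost of writing down every production, one code per symbol.
            symbols = {s for lhs, rhss in grammar.items()
                       for rhs in rhss for s in (lhs, *rhs)}
            per_symbol = max(1.0, math.log2(len(symbols)))
            return sum((1 + len(rhs)) * per_symbol
                       for rhss in grammar.values() for rhs in rhss)

        def derivation_bits(grammar, sentence, start="S"):
            # Bits to encode one leftmost derivation of `sentence` from `start`.
            def derive(stack, rest, bits):
                if not stack:
                    return bits if not rest else None
                top, tail = stack[0], stack[1:]
                if top not in grammar:           # terminal: must match the input
                    ok = rest and rest[0] == top
                    return derive(tail, rest[1:], bits) if ok else None
                choice = math.log2(len(grammar[top])) if len(grammar[top]) > 1 else 0.0
                for rhs in grammar[top]:         # backtracking over productions
                    result = derive(list(rhs) + tail, rest, bits + choice)
                    if result is not None:
                        return result
                return None
            return derive([start], list(sentence), 0.0)

        g = {"S": [("point", "NP"), ("speak", "NP")], "NP": [("this",), ("that",)]}
        data = [("point", "this"), ("speak", "that")]
        total = grammar_bits(g) + sum(derivation_bits(g, s) for s in data)
        print(f"description length: {total:.1f} bits")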

  2. Modelling of human-machine interaction in equipment design of manufacturing cells

    Science.gov (United States)

    Cochran, David S.; Arinez, Jorge F.; Collins, Micah T.; Bi, Zhuming

    2017-08-01

    This paper proposes a systematic approach to model human-machine interactions (HMIs) in supervisory control of machining operations; it characterises the coexistence of machines and humans for an enterprise to balance the goals of automation/productivity and flexibility/agility. In the proposed HMI model, an operator is associated with a set of behavioural roles as a supervisor for multiple, semi-automated manufacturing processes. The model is innovative in the sense that (1) it represents an HMI based on its functions for process control but provides the flexibility for ongoing improvements in the execution of manufacturing processes; (2) it provides a computational tool to define functional requirements for an operator in HMIs. The proposed model can be used to design production systems at different levels of an enterprise architecture, particularly at the machine level in a production system where operators interact with semi-automation to accomplish the goal of 'autonomation' - automation that augments the capabilities of human beings.

  3. Symbolic computation of nonlinear wave interactions on MACSYMA

    International Nuclear Information System (INIS)

    Bers, A.; Kulp, J.L.; Karney, C.F.F.

    1976-01-01

    In this paper the use of a large symbolic computation system, MACSYMA, to determine approximate analytic expressions for the nonlinear coupling of waves in an anisotropic plasma is described. MACSYMA was used to solve the nonlinear partial differential equations of a fluid plasma model by perturbation expansions and subsequent iterative analytic computations. By interacting with the details of the symbolic computation, the physical processes responsible for particular nonlinear wave interactions could be uncovered and appropriate approximations introduced so as to simplify the final analytic result. Details of the MACSYMA system and its use are discussed and illustrated. (Auth.)
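    MACSYMA itself is long superseded, but the workflow the abstract describes survives in modern computer algebra systems. The SymPy fragment below reproduces the basic step (substituting a perturbation expansion and collecting terms order by order), with a toy advection equation u_t + u u_x = 0 standing in for the plasma fluid model.

      # Perturbation expansion collected order by order in eps (SymPy).
      import sympy as sp

      x, t, eps = sp.symbols("x t epsilon")
      u1 = sp.Function("u1")(x, t)
      u2 = sp.Function("u2")(x, t)

      u = eps * u1 + eps**2 * u2                    # two-term expansion
      residual = sp.expand(sp.diff(u, t) + u * sp.diff(u, x))

      for order in (1, 2):
          print(f"O(eps^{order}):", residual.coeff(eps, order))
      # O(eps^1) gives the linear problem for u1; O(eps^2) couples u1 to u2,
      # which is where the nonlinear wave interaction terms appear.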

  4. COMPUTER-AIDED PSYCHOTHERAPY BASED ON MULTIMODAL ELICITATION, ESTIMATION AND REGULATION OF EMOTION

    OpenAIRE

    Ćosić, Krešimir; Popović, Siniša; Horvat, Marko; Kukolja, Davor; Dropuljić, Branimir; Kovač, Bernard; Jakovljević, Miro

    2013-01-01

    Contemporary psychiatry is looking to the affective sciences to understand human behavior, cognition and the mind in health and disease. Since it has been recognized that emotions have a pivotal role in the human mind, an ever increasing number of laboratories and research centers are interested in affective sciences, affective neuroscience, affective psychology and affective psychopathology. Therefore, this paper presents multidisciplinary research results of the Laboratory for Interactive...

  5. Personality and social skills in human-dog interaction

    DEFF Research Database (Denmark)

    Meyer, Iben Helene Coakley

    developing a social tool set that makes it very successful in interacting and communicating with humans. Human evolution has similarly resulted in the development of complex social cognition in humans. This enables humans to form bonded relationships, besides pair-bonding, and it seems that humans are also...... of this thesis was to attain a better understanding of some of the factors related to the interaction between humans and dogs. This aim was addressed by focusing on dog personality and human social skills in relation to human-dog interaction. Two studies investigated dog personality and how it a) affects...... the relationship with the owner, and b) is affected by human breeding goals. Two studies investigated how human social skills affect the communication and interaction between human and dog. As part of these studies it was also investigated how experience with dogs interacts with human social skills, perception...

  6. Aspects of computer control from the human engineering standpoint

    International Nuclear Information System (INIS)

    Huang, T.V.

    1979-03-01

    A Computer Control System includes data acquisition, information display and output control signals. In order to design such a system effectively we must first determine the required operational mode: automatic control (closed loop), computer assisted (open loop), or hybrid control. The choice of operating mode will depend on the nature of the plant, the complexity of the operation, the funds available, and the technical expertise of the operating staff, among many other factors. Once the mode has been selected, consideration must be given to the method (man/machine interface) by which the operator interacts with the system. The human engineering factors are of prime importance to achieving high operating efficiency, and very careful attention must be given to this aspect of the work if full operator acceptance is to be achieved. This paper discusses these topics and draws on experience gained in setting up the computer control system in the Main Control Center of Stanford University's Accelerator Center (a high energy physics research facility).

  7. Multimodal Human Identification for Computer Security

    National Research Council Canada - National Science Library

    Nadimi, Sohail; Hong, Edward; Bhanu, Bir

    2005-01-01

    (A) A cooperative coevolutionary approach for object detection is developed. It fuses scene contextual information with the statistical and prediction information available from color and infrared sensors...

  8. How Do Humans Perceive Emotion?

    Institute of Scientific and Technical Information of China (English)

    LI Wen

    2017-01-01

    Emotion carries crucial qualities of the human condition, representing one of the major challenges in artificial intelligence. Research in psychology and neuroscience in the past two to three decades has generated rich insights into the processes underlying human emotion. Cognition and emotion represent the two main pillars of the human psyche and human intelligence. While the human cognitive system and cognitive brain have inspired and informed computer science and artificial intelligence, the future is ripe for the human emotion system to be integrated into artificial intelligence and robotic systems. Here, we review behavioral and neural findings in human emotion perception, including facial emotion perception, olfactory emotion perception, multimodal emotion perception, and the time course of emotion perception. It is our hope that knowledge of how humans perceive emotion will help bring artificial intelligence strides closer to human intelligence.

  9. Contextual Interaction Design Research: Enabling HCI

    OpenAIRE

    Murer, Martin; Meschtscherjakov, Alexander; Fuchsberger, Verena; Giuliani, Manuel; Neureiter, Katja; Moser, Christiane; Aslan, Ilhan; Tscheligi, Manfred

    2015-01-01

    Human-Computer Interaction (HCI) has always been about humans, their needs and desires. Contemporary HCI thinking investigates interactions in everyday life and puts an emphasis on the emotional and experiential qualities of interactions. At the Center for Human-Computer Interaction we seek to bridge meandering strands in the field by following a guiding metaphor that shifts focus to what has always been the core quality of our research field: Enabling HCI, as a leitmo...

  10. Simulation-based computation of dose to humans in radiological environments

    International Nuclear Information System (INIS)

    Breazeal, N.L.; Davis, K.R.; Watson, R.A.; Vickers, D.S.; Ford, M.S.

    1996-03-01

    The Radiological Environment Modeling System (REMS) quantifies dose to humans working in radiological environments using the IGRIP (Interactive Graphical Robot Instruction Program) and Deneb/ERGO simulation software. These commercially available products are augmented with custom C code to provide radiation exposure information to, and collect radiation dose information from, workcell simulations. Through the use of any radiation transport code or measured data, a radiation exposure input database may be formulated. User-specified IGRIP simulations utilize these databases to compute and accumulate dose to programmable human models operating around radiation sources. Timing, distances, shielding, and human activity may be modeled accurately in the simulations. The accumulated dose is recorded in output files, and the user is able to process and view this output. The entire REMS capability can be operated from a single graphical user interface.
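    The dose bookkeeping REMS performs can be pictured with a few lines of Python. The point-source inverse-square model and all constants below are invented for illustration; REMS itself draws dose rates from a radiation-transport code or measured-data database.

      # Toy dose accumulation along a simulated worker path (not REMS code).
      SOURCE = (0.0, 0.0, 1.0)      # source position, metres
      DOSE_RATE_1M = 0.5            # hypothetical dose rate at 1 m, mSv/h
      TRANSMISSION = 0.2            # hypothetical shielding factor

      def dose_rate(pos):
          """Inverse-square dose rate (mSv/h) at a worker position."""
          r2 = sum((p - s) ** 2 for p, s in zip(pos, SOURCE))
          return DOSE_RATE_1M * TRANSMISSION / max(r2, 1e-6)

      # (position, seconds spent there), as a workcell simulation reports.
      path = [((3.0, 0.0, 1.0), 120), ((1.5, 0.5, 1.0), 60),
              ((4.0, 2.0, 1.0), 300)]
      total = sum(dose_rate(pos) * dt / 3600.0 for pos, dt in path)
      print(f"accumulated dose: {total:.4f} mSv")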

  11. Simulation-based computation of dose to humans in radiological environments

    Energy Technology Data Exchange (ETDEWEB)

    Breazeal, N.L. [Sandia National Labs., Livermore, CA (United States); Davis, K.R.; Watson, R.A. [Sandia National Labs., Albuquerque, NM (United States); Vickers, D.S. [Brigham Young Univ., Provo, UT (United States). Dept. of Electrical and Computer Engineering; Ford, M.S. [Battelle Pantex, Amarillo, TX (United States). Dept. of Radiation Safety

    1996-03-01

    The Radiological Environment Modeling System (REMS) quantifies dose to humans working in radiological environments using the IGRIP (Interactive Graphical Robot Instruction Program) and Deneb/ERGO simulation software. These commercially available products are augmented with custom C code to provide radiation exposure information to, and collect radiation dose information from, workcell simulations. Through the use of any radiation transport code or measured data, a radiation exposure input database may be formulated. User-specified IGRIP simulations utilize these databases to compute and accumulate dose to programmable human models operating around radiation sources. Timing, distances, shielding, and human activity may be modeled accurately in the simulations. The accumulated dose is recorded in output files, and the user is able to process and view this output. The entire REMS capability can be operated from a single graphical user interface.

  12. Activity-based computing: computational management of activities reflecting human intention

    DEFF Research Database (Denmark)

    Bardram, Jakob E; Jeuris, Steven; Houben, Steven

    2015-01-01

    Activity-based computing (ABC) is a paradigm that has been applied in personal information management applications as well as in ubiquitous, multidevice, and interactive surface computing. ABC has emerged as a response to the traditional application- and file-centered computing paradigm, which is oblivious to a notion of a user’s activity...

  13. Human likeness: cognitive and affective factors affecting adoption of robot-assisted learning systems

    Science.gov (United States)

    Yoo, Hosun; Kwon, Ohbyung; Lee, Namyeon

    2016-07-01

    With advances in robot technology, interest in robotic e-learning systems has increased. In some laboratories, experiments are being conducted with humanoid robots as artificial tutors because of their likeness to humans, the rich possibilities of using this type of media, and the multimodal interaction capabilities of these robots. The robot-assisted learning system, a special type of e-learning system, aims to increase the learner's concentration, pleasure, and learning performance dramatically. However, very few empirical studies have examined the effect on learning performance of incorporating humanoid robot technology into e-learning systems or people's willingness to accept or adopt robot-assisted learning systems. In particular, human likeness, the essential characteristic of humanoid robots as compared with conventional e-learning systems, has not been discussed in a theoretical context. Hence, the purpose of this study is to propose a theoretical model to explain the process of adoption of robot-assisted learning systems. In the proposed model, human likeness is conceptualized as a combination of media richness, multimodal interaction capabilities, and para-social relationships; these factors are considered as possible determinants of the degree to which human cognition and affection are related to the adoption of robot-assisted learning systems.

  14. Collaborative filtering for brain-computer interaction using transfer learning and active class selection.

    Science.gov (United States)

    Wu, Dongrui; Lance, Brent J; Parsons, Thomas D

    2013-01-01

    Brain-computer interaction (BCI) and physiological computing are terms that refer to using processed neural or physiological signals to influence human interaction with computers, environment, and each other. A major challenge in developing these systems arises from the large individual differences typically seen in the neural/physiological responses. As a result, many researchers use individually-trained recognition algorithms to process this data. In order to minimize time, cost, and barriers to use, there is a need to minimize the amount of individual training data required, or equivalently, to increase the recognition accuracy without increasing the number of user-specific training samples. One promising method for achieving this is collaborative filtering, which combines training data from the individual subject with additional training data from other, similar subjects. This paper describes a successful application of a collaborative filtering approach intended for a BCI system. This approach is based on transfer learning (TL), active class selection (ACS), and a mean squared difference user-similarity heuristic. The resulting BCI system uses neural and physiological signals for automatic task difficulty recognition. TL improves the learning performance by combining a small number of user-specific training samples with a large number of auxiliary training samples from other similar subjects. ACS optimally selects the classes to generate user-specific training samples. Experimental results on 18 subjects, using both k nearest neighbors and support vector machine classifiers, demonstrate that the proposed approach can significantly reduce the number of user-specific training data samples. This collaborative filtering approach will also be generalizable to handling individual differences in many other applications that involve human neural or physiological data, such as affective computing.
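    A minimal sketch of the collaborative-filtering idea follows. The similarity measure is a simplified reading of the paper's mean-squared-difference heuristic, the threshold is invented, and random arrays stand in for real neural/physiological features.

      # Augment a user's few training samples with data from similar subjects.
      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      X_user = rng.normal(0.0, 1.0, (10, 8)); y_user = rng.integers(0, 2, 10)
      others = [(rng.normal(mu, 1.0, (100, 8)), rng.integers(0, 2, 100))
                for mu in (0.1, 0.5, 2.0)]   # three auxiliary subjects

      def similarity(Xa, Xb):
          """Inverse mean squared difference between feature means."""
          return 1.0 / (1.0 + np.mean((Xa.mean(0) - Xb.mean(0)) ** 2))

      # Keep auxiliary subjects whose signals resemble the target user's.
      pool_X, pool_y = [X_user], [y_user]
      for Xo, yo in others:
          if similarity(X_user, Xo) > 0.8:       # hypothetical threshold
              pool_X.append(Xo); pool_y.append(yo)

      clf = KNeighborsClassifier(n_neighbors=5)
      clf.fit(np.vstack(pool_X), np.concatenate(pool_y))
      print(clf.predict(X_user[:3]))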

  15. Collaborative filtering for brain-computer interaction using transfer learning and active class selection.

    Directory of Open Access Journals (Sweden)

    Dongrui Wu

    Full Text Available Brain-computer interaction (BCI) and physiological computing are terms that refer to using processed neural or physiological signals to influence human interaction with computers, environment, and each other. A major challenge in developing these systems arises from the large individual differences typically seen in the neural/physiological responses. As a result, many researchers use individually-trained recognition algorithms to process this data. In order to minimize time, cost, and barriers to use, there is a need to minimize the amount of individual training data required, or equivalently, to increase the recognition accuracy without increasing the number of user-specific training samples. One promising method for achieving this is collaborative filtering, which combines training data from the individual subject with additional training data from other, similar subjects. This paper describes a successful application of a collaborative filtering approach intended for a BCI system. This approach is based on transfer learning (TL), active class selection (ACS), and a mean squared difference user-similarity heuristic. The resulting BCI system uses neural and physiological signals for automatic task difficulty recognition. TL improves the learning performance by combining a small number of user-specific training samples with a large number of auxiliary training samples from other similar subjects. ACS optimally selects the classes to generate user-specific training samples. Experimental results on 18 subjects, using both k nearest neighbors and support vector machine classifiers, demonstrate that the proposed approach can significantly reduce the number of user-specific training data samples. This collaborative filtering approach will also be generalizable to handling individual differences in many other applications that involve human neural or physiological data, such as affective computing.

  16. Design of a Multi-mode Flight Deck Decision Support System for Airborne Conflict Management

    Science.gov (United States)

    Barhydt, Richard; Krishnamurthy, Karthik

    2004-01-01

    NASA Langley has developed a multi-mode decision support system for pilots operating in a Distributed Air-Ground Traffic Management (DAG-TM) environment. An Autonomous Operations Planner (AOP) assists pilots in performing separation assurance functions, including conflict detection, prevention, and resolution. Ongoing AOP design has been based on a comprehensive human factors analysis and evaluation results from previous human-in-the-loop experiments with airline pilot test subjects. AOP considers complex flight mode interactions and provides flight guidance to pilots consistent with the current aircraft control state. Pilots communicate goals to AOP by setting system preferences and actively probing potential trajectories for conflicts. To minimize training requirements and improve operational use, AOP design leverages existing alerting philosophies, displays, and crew interfaces common on commercial aircraft. Future work will consider trajectory prediction uncertainties, integration with the TCAS collision avoidance system, and will incorporate enhancements based on an upcoming air-ground coordination experiment.
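    At its core, a conflict probe of the kind AOP performs compares predicted trajectories against a separation minimum. The sketch below reduces this to straight-line 2-D trajectories and the common 5 NM en-route standard; AOP's actual trajectory prediction and alerting logic are far richer.

      # Bare-bones pairwise conflict probe (illustrative, not AOP's algorithm).
      import numpy as np

      def positions(p0, velocity, times):
          """Straight-line predicted positions (NM) at the sample times (h)."""
          return p0[None, :] + times[:, None] * velocity[None, :]

      times = np.linspace(0.0, 0.5, 61)                 # half hour, 30 s steps
      own = positions(np.array([0.0, 0.0]), np.array([420.0, 0.0]), times)
      intruder = positions(np.array([100.0, -60.0]), np.array([-60.0, 300.0]),
                           times)

      separation = np.hypot(*(own - intruder).T)        # NM
      conflict = separation < 5.0
      if conflict.any():
          t = times[conflict.argmax()]                  # first predicted loss
          print(f"predicted loss of separation at t = {60 * t:.1f} min")
      else:
          print("trajectories are conflict-free")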

  17. Computer-Based Interaction Analysis with DEGREE Revisited

    Science.gov (United States)

    Barros, B.; Verdejo, M. F.

    2016-01-01

    We review our research with "DEGREE" and analyse how our work has impacted the collaborative learning community since 2000. Our research is framed within the context of computer-based interaction analysis and the development of computer-supported collaborative learning (CSCL) tools. We identify some aspects of our work which have been…

  18. Multimodal Aspects of Corporate Social Responsibility Communication

    Directory of Open Access Journals (Sweden)

    Carmen Daniela Maier

    2014-12-01

    Full Text Available This article addresses how the multimodal persuasive strategies of corporate social responsibility communication can highlight a company’s commitment to gender empowerment and environmental protection while simultaneously advertising its products. Drawing on an interdisciplinary methodological framework related to CSR communication, multimodal discourse analysis and gender theory, the article proposes a multimodal analysis model through which it is possible to map and explain the multimodal persuasive strategies employed by the Coca-Cola Company in their community-related films. By examining the semiotic modes’ interconnectivity and functional differentiation, this analytical endeavour expands the existing research work, as the usual textual focus is extended to a multimodal one.

  19. Multimodal sequence learning.

    Science.gov (United States)

    Kemény, Ferenc; Meier, Beat

    2016-02-01

    While sequence learning research models complex phenomena, previous studies have mostly focused on unimodal sequences. The goal of the current experiment is to put implicit sequence learning into a multimodal context: to test whether it can operate across different modalities. We used the Task Sequence Learning paradigm to test whether sequence learning varies across modalities, and whether participants are able to learn multimodal sequences. Our results show that implicit sequence learning is very similar regardless of the source modality. However, the presence of correlated task and response sequences was required for learning to take place. The experiment provides new evidence for implicit sequence learning of abstract conceptual representations. In general, the results suggest that correlated sequences are necessary for implicit sequence learning to occur. Moreover, they show that elements from different modalities can be automatically integrated into one unitary multimodal sequence. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Multimodality instrument for tissue characterization

    Science.gov (United States)

    Mah, Robert W. (Inventor); Andrews, Russell J. (Inventor)

    2004-01-01

    A system with multimodality instrument for tissue identification includes a computer-controlled motor driven heuristic probe with a multisensory tip. For neurosurgical applications, the instrument is mounted on a stereotactic frame for the probe to penetrate the brain in a precisely controlled fashion. The resistance of the brain tissue being penetrated is continually monitored by a miniaturized strain gauge attached to the probe tip. Other modality sensors may be mounted near the probe tip to provide real-time tissue characterizations and the ability to detect the proximity of blood vessels, thus eliminating errors normally associated with registration of pre-operative scans, tissue swelling, elastic tissue deformation, human judgement, etc., and rendering surgical procedures safer, more accurate, and efficient. A neural network program adaptively learns the information on resistance and other characteristic features of normal brain tissue during the surgery and provides near real-time modeling. A fuzzy logic interface to the neural network program incorporates expert medical knowledge in the learning process. Identification of abnormal brain tissue is determined by the detection of change and comparison with previously learned models of abnormal brain tissues. The operation of the instrument is controlled through a user friendly graphical interface. Patient data is presented in a 3D stereographics display. Acoustic feedback of selected information may optionally be provided. Upon detection of the close proximity to blood vessels or abnormal brain tissue, the computer-controlled motor immediately stops probe penetration. The use of this system will make surgical procedures safer, more accurate, and more efficient. Other applications of this system include the detection, prognosis and treatment of breast cancer, prostate cancer, spinal diseases, and use in general exploratory surgery.

  1. Applying Connectivist Principles and the Task-Based Approach to the Design of a Multimodal Didactic Unit

    Directory of Open Access Journals (Sweden)

    Yeraldine Aldana Gutiérrez

    2012-12-01

    Full Text Available This article describes the pedagogical intervention developed in a public school as part of the research “Exploring Communications Practices through Facebook as a Mediatic Device”, framed within the computer-mediated communications field. Twelve ninth graders’ communications practices were explored and addressed by means of multimodal technological resources and tasks based on the connectivist learning view. As a result, a didactic unit was designed in the form of the digital book Diverface, which displays information through different media channels and semiotic elements to support its multimodal features. Teachers and students might thus need to reconstruct an alternative multimodal literacy so that they can produce and interpret texts of the same nature in online environments.

  2. Multimodal location estimation of videos and images

    CERN Document Server

    Friedland, Gerald

    2015-01-01

    This book presents an overview of the field of multimodal location estimation, i.e. using acoustic, visual, and/or textual cues to estimate the shown location of a video recording. The authors present sample research results in this field in a unified way, integrating research work that focuses on different modalities, viewpoints, and applications. The book describes fundamental methods of acoustic, visual, textual, social graph, and metadata processing, as well as multimodal integration methods used for location estimation. In addition, the text covers benchmark metrics and explores the limits of the technology based on a human baseline. It discusses localization of multimedia data; examines fundamental methods of establishing location metadata for images and videos (other than GPS tagging); and covers data-driven as well as semantic location estimation.

  3. Interactive machine learning for health informatics: when do we need the human-in-the-loop?

    Science.gov (United States)

    Holzinger, Andreas

    2016-06-01

    Machine learning (ML) is the fastest growing field in computer science, and health informatics is among its greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain, sometimes we are confronted with a small number of data sets or rare events, where aML approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet in wide use, so we define it as "algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human." This "human-in-the-loop" can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem reduces greatly in complexity through the input and the assistance of a human agent involved in the learning phase.
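    One common concrete form of the iML loop is uncertainty sampling, sketched below: the learner repeatedly asks the agent to label the sample it is least certain about. The oracle function stands in for the human expert, and all data are synthetic.

      # Human-in-the-loop learning as an uncertainty-sampling loop (sketch).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      X = rng.normal(size=(500, 5))
      true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
      oracle = lambda idx: (X[idx] @ true_w > 0).astype(int)  # human stand-in

      labeled = list(range(10))                  # tiny initial training set
      model = LogisticRegression().fit(X[labeled], oracle(labeled))

      for _ in range(20):                        # 20 rounds of interaction
          proba = model.predict_proba(X)[:, 1]
          uncertainty = np.abs(proba - 0.5)
          uncertainty[labeled] = np.inf          # don't re-ask labeled samples
          query = int(np.argmin(uncertainty))    # most ambiguous sample
          labeled.append(query)                  # the "expert" labels it
          model.fit(X[labeled], oracle(labeled))

      print("accuracy:", model.score(X, oracle(range(500))))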

  4. Supporting collaborative computing and interaction

    International Nuclear Information System (INIS)

    Agarwal, Deborah; McParland, Charles; Perry, Marcia

    2002-01-01

    To enable collaboration on the daily tasks involved in scientific research, collaborative frameworks should provide lightweight and ubiquitous components that support a wide variety of interaction modes. We envision a collaborative environment as one that provides a persistent space within which participants can locate each other, exchange synchronous and asynchronous messages, share documents and applications, share workflow, and hold videoconferences. We are developing the Pervasive Collaborative Computing Environment (PCCE) as such an environment. The PCCE will provide integrated tools to support shared computing and task control and monitoring. This paper describes the PCCE and the rationale for its design.

  5. Human-computer interface incorporating personal and application domains

    Science.gov (United States)

    Anderson, Thomas G [Albuquerque, NM

    2011-03-29

    The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.

  6. The Human-Robot Interaction Operating System

    Science.gov (United States)

    Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda

    2006-01-01

    In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.

  7. Sensor-based assessment of the in-situ quality of human computer interaction in the cars : final research report.

    Science.gov (United States)

    2016-01-01

    Human attention is a finite resource. When interrupted while performing a task, this resource is split between two interactive tasks. People have to decide whether the benefits from the interruptive interaction will be enough to offset the loss o...

  8. Modification of species-based differential evolution for multimodal optimization

    Science.gov (United States)

    Idrus, Said Iskandar Al; Syahputra, Hermawan; Firdaus, Muliawan

    2015-12-01

    Optimization currently plays an important role in many fields, among them operational research, industry, finance, and management. An optimization problem is the problem of maximizing or minimizing a function of one or many variables; such functions may be unimodal or multimodal. Differential Evolution (DE) is a random search technique that uses vectors as alternative solutions in the search for the optimum. To localize all local maxima and minima of a multimodal function, the function can be divided into several fitness domains using a niching method. The species-based niching method is one such method: it builds sub-populations, or species, in the function domains. This paper describes a modification of the earlier species-based method that reduces computational complexity and runs more efficiently. The results on the test functions show that the modified species-based method is able to locate all the local optima in a single run of the program.
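    For reference, a compact sketch of the unmodified species-based DE scheme follows (seed selection by fitness, a fixed species radius, and DE mutation inside each species). The paper's contribution modifies this scheme for efficiency; the parameter values here are generic textbook choices.

      # Species-based differential evolution on a 1-D multimodal function.
      import numpy as np

      def f(x):                                  # multimodal test function
          return np.sin(5 * np.pi * x) ** 2      # maxima at x = 0.1, 0.3, ...

      rng = np.random.default_rng(2)
      pop = rng.uniform(0.0, 1.0, 60)
      F, CR, RADIUS = 0.5, 0.9, 0.05

      for gen in range(200):
          # Species formation: greedy seed selection in fitness order.
          order = np.argsort(-f(pop))
          seeds, species = [], {}
          for i in order:
              s = next((j for j in seeds if abs(pop[i] - pop[j]) < RADIUS), None)
              if s is None:
                  seeds.append(i); species[i] = [i]
              else:
                  species[s].append(i)
          # DE/rand/1 mutation and greedy selection inside each species.
          for members in species.values():
              for i in members:
                  a, b, c = (rng.choice(members, 3) if len(members) >= 3
                             else rng.choice(len(pop), 3))
                  trial = np.clip(pop[a] + F * (pop[b] - pop[c]), 0.0, 1.0)
                  # Crude 1-D stand-in for binomial crossover at rate CR.
                  if rng.random() < CR and f(trial) > f(pop[i]):
                      pop[i] = trial

      print("species seeds found near:", np.round(np.sort(pop[seeds]), 3)[:6])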

  9. Interactive Computer-Assisted Instruction in Acid-Base Physiology for Mobile Computer Platforms

    Science.gov (United States)

    Longmuir, Kenneth J.

    2014-01-01

    In this project, the traditional lecture hall presentation of acid-base physiology in the first-year medical school curriculum was replaced by interactive, computer-assisted instruction designed primarily for the iPad and other mobile computer platforms. Three learning modules were developed, each with ~20 screens of information, on the subjects…

  10. Multimodal Processes Rescheduling

    DEFF Research Database (Denmark)

    Bocewicz, Grzegorz; Banaszak, Zbigniew A.; Nielsen, Peter

    2013-01-01

    Cyclic scheduling problems concerning multimodal processes are usually observed in FMSs producing multi-type parts, where the Automated Guided Vehicles System (AGVS) plays the role of a material handling system. Schedulability analysis of concurrently flowing cyclic processes (SCCP) executed in the...

  11. Interactive computer-enhanced remote viewing system

    International Nuclear Information System (INIS)

    Tourtellott, J.A.; Wagner, J.F.

    1995-01-01

    Remediation activities such as decontamination and decommissioning (D&D) typically involve materials and activities hazardous to humans. Robots are an attractive way to conduct such remediation, but for efficiency they need a good three-dimensional (3-D) computer model of the task space where they are to function. This model can be created from engineering plans and architectural drawings and from empirical data gathered by various sensors at the site. The model is used to plan robotic tasks and verify that selected paths are clear of obstacles. This need for a task space model is most pronounced in the remediation of obsolete production facilities and underground storage tanks. Production facilities at many sites contain compact process machinery and systems that were used to produce weapons-grade material. For many such systems, a complex maze of pipes (with potentially dangerous contents) must be removed, and this represents a significant D&D challenge. In an analogous way, the underground storage tanks at sites such as Hanford represent a challenge because of their limited entry and the tumbled profusion of in-tank hardware. In response to this need, the Interactive Computer-Enhanced Remote Viewing System (ICERVS) is being designed as a software system to: (1) provide a reliable geometric description of a robotic task space, and (2) enable robotic remediation to be conducted more effectively and more economically than with available techniques. A system such as ICERVS is needed because of the problems discussed below.

  12. Multimodal Resources in Transnational Adoption

    DEFF Research Database (Denmark)

    Raudaskoski, Pirkko Liisa

    The paper discusses an empirical analysis which highlights the multimodal nature of identity construction. A documentary on transnational adoption provides real-life incidents as research material. The incidents involve, or give rise to, various kinds of multimodal resources and participants...

  13. Human performance models for computer-aided engineering

    Science.gov (United States)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  14. IPython: components for interactive and parallel computing across disciplines. (Invited)

    Science.gov (United States)

    Perez, F.; Bussonnier, M.; Frederic, J. D.; Froehle, B. M.; Granger, B. E.; Ivanov, P.; Kluyver, T.; Patterson, E.; Ragan-Kelley, B.; Sailer, Z.

    2013-12-01

    Scientific computing is an inherently exploratory activity that requires constantly cycling between code, data and results, each time adjusting the computations as new insights and questions arise. To support such a workflow, good interactive environments are critical. The IPython project (http://ipython.org) provides a rich architecture for interactive computing with: 1. Terminal-based and graphical interactive consoles. 2. A web-based Notebook system with support for code, text, mathematical expressions, inline plots and other rich media. 3. Easy to use, high performance tools for parallel computing. Despite its roots in Python, the IPython architecture is designed in a language-agnostic way to facilitate interactive computing in any language. This allows users to mix Python with Julia, R, Octave, Ruby, Perl, Bash and more, as well as to develop native clients in other languages that reuse the IPython clients. In this talk, I will show how IPython supports all stages in the lifecycle of a scientific idea: 1. Individual exploration. 2. Collaborative development. 3. Production runs with parallel resources. 4. Publication. 5. Education. In particular, the IPython Notebook provides an environment for "literate computing" with a tight integration of narrative and computation (including parallel computing). These Notebooks are stored in a JSON-based document format that provides an "executable paper": notebooks can be version controlled, exported to HTML or PDF for publication, and used for teaching.
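    The notebook document format mentioned above is worth a concrete glance: a version-4 .ipynb file is plain JSON, which is what makes version control and HTML/PDF export straightforward. The sketch below lists the code cells of a notebook; the filename is a placeholder for any notebook produced by the system.

      # Walk a (version 4) notebook file and print its code cells.
      import json

      with open("analysis.ipynb") as fh:       # hypothetical notebook file
          notebook = json.load(fh)

      for i, cell in enumerate(notebook["cells"]):
          if cell["cell_type"] == "code":
              print(f"--- code cell {i} ---")
              print("".join(cell["source"]))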

  15. Multimodal Classification of Violent Online Political Extremism Content with Graph Convolutional Networks

    NARCIS (Netherlands)

    Rudinac, S.; Gornishka, I.; Worring, M.

    2017-01-01

    In this paper we present a multimodal approach to categorizing user posts based on their discussion topic. To integrate heterogeneous information extracted from the posts, i.e. text, visual content and the information about user interactions with the online platform, we deploy graph convolutional networks.
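    The building block behind such an approach is the graph-convolution layer of Kipf and Welling, H' = ReLU(A_hat H W), where A_hat is the degree-normalized adjacency matrix with self-loops. A minimal NumPy version follows, with random features standing in for fused multimodal post features.

      # One graph-convolution layer over a small post-interaction graph.
      import numpy as np

      rng = np.random.default_rng(3)
      A = np.array([[0, 1, 1, 0],               # post-to-post interaction graph
                    [1, 0, 0, 1],
                    [1, 0, 0, 0],
                    [0, 1, 0, 0]], dtype=float)
      X = rng.normal(size=(4, 16))              # per-post multimodal features

      A_tilde = A + np.eye(4)                   # add self-loops
      d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(1))
      A_hat = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]

      W = rng.normal(size=(16, 8), scale=0.1)   # learnable weights (random here)
      H = np.maximum(A_hat @ X @ W, 0.0)        # ReLU(A_hat X W)
      print(H.shape)                            # (4, 8): new post embeddings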

  16. Human Factors Principles in Design of Computer-Mediated Visualization for Robot Missions

    Energy Technology Data Exchange (ETDEWEB)

    David I Gertman; David J Bruemmer

    2008-12-01

    With increased use of robots as a resource in missions supporting countermine operations, improvised explosive devices (IEDs), and chemical, biological, radiological, nuclear and conventional explosives (CBRNE), fully understanding the best means by which to complement the human operator’s underlying perceptual and cognitive processes could not be more important. Consistent with control and display integration practices in many other high-technology computer-supported applications, current robotic design practices rely heavily upon static guidelines and design heuristics that reflect the expertise and experience of the individual designer. In order to use what we know about human factors (HF) to drive human-robot interaction (HRI) design, this paper reviews underlying human perception and cognition principles and shows how they were applied to a threat detection domain.

  17. Multimodal Diversity of Postmodernist Fiction Text

    Directory of Open Access Journals (Sweden)

    U. I. Tykha

    2016-12-01

    Full Text Available The article is devoted to the analysis of structural and functional manifestations of multimodal diversity in postmodernist fiction texts. Multimodality is defined as the coexistence of more than one semiotic mode within a certain context. Multimodal texts feature a diversity of semiotic modes in the communication and development of their narrative. Such experimental texts subvert conventional patterns by introducing various semiotic resources – verbal or non-verbal.

  18. Experiments in Multimodal Information Presentation

    NARCIS (Netherlands)

    van Hooijdonk, Charlotte; Bosma, W.E.; Krahmer, Emiel; Maes, Alfons; Theune, Mariet; van den Bosch, Antal; Bouma, Gosse

    In this chapter we describe three experiments investigating multimodal information presentation in the context of a medical QA system. In Experiment 1, we wanted to know how non-experts design (multimodal) answers to medical questions, distinguishing between what questions and how questions. In

  19. Multimodal Discourse Analysis of the Movie "Argo"

    Science.gov (United States)

    Bo, Xu

    2018-01-01

    Based on multimodal discourse theory, this paper makes a multimodal discourse analysis of some shots in the movie "Argo" from the perspective of context of culture, context of situation and meaning of image. Results show that this movie constructs multimodal discourse through particular context, language and image, and successfully…

  20. Interactive computer-assisted instruction in acid-base physiology for mobile computer platforms.

    Science.gov (United States)

    Longmuir, Kenneth J

    2014-03-01

    In this project, the traditional lecture hall presentation of acid-base physiology in the first-year medical school curriculum was replaced by interactive, computer-assisted instruction designed primarily for the iPad and other mobile computer platforms. Three learning modules were developed, each with ∼20 screens of information, on the subjects of the CO2-bicarbonate buffer system, other body buffer systems, and acid-base disorders. Five clinical case modules were also developed. For the learning modules, the interactive, active learning activities were primarily step-by-step learner control of explanations of complex physiological concepts, usually presented graphically. For the clinical cases, the active learning activities were primarily question-and-answer exercises that related clinical findings to the relevant basic science concepts. The student response was remarkably positive, with the interactive, active learning aspect of the instruction cited as the most important feature. Also, students cited the self-paced instruction, extensive use of interactive graphics, and side-by-side presentation of text and graphics as positive features. Most students reported that it took less time to study the subject matter with this online instruction compared with subject matter presented in the lecture hall. However, the approach to learning was highly examination driven, with most students delaying the study of the subject matter until a few days before the scheduled examination. Wider implementation of active learning computer-assisted instruction will require that instructors present subject matter interactively, that students fully embrace the responsibilities of independent learning, and that institutional administrations measure instructional effort by criteria other than scheduled hours of instruction.

  1. Fluctuating hyperfine interactions: computational implementation

    International Nuclear Information System (INIS)

    Zacate, M. O.; Evenson, W. E.

    2010-01-01

    A library of computational routines has been created to assist in the analysis of stochastic models of hyperfine interactions. We call this library the stochastic hyperfine interactions modeling library (SHIML). It provides routines written in the C programming language that (1) read a text description of a model for fluctuating hyperfine fields, (2) set up the Blume matrix, upon which the evolution operator of the system depends, and (3) find the eigenvalues and eigenvectors of the Blume matrix so that theoretical spectra of experimental hyperfine interaction measurements can be calculated. Example model calculations are included in the SHIML package to illustrate its use and to generate perturbed angular correlation spectra for the special case of polycrystalline samples when anisotropy terms of higher order than A22 can be neglected.
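    SHIML itself is written in C; the NumPy fragment below mirrors its third step for the simplest toy case, a hyperfine field hopping between two states at rate w, where the Blume matrix reduces to a 2x2 complex matrix built from the two interaction frequencies. All numbers are illustrative only.

      # Eigen-decomposition of a toy two-state Blume matrix (illustrative).
      import numpy as np

      w = 10.0                          # hopping rate, hypothetical units
      omega = np.array([50.0, -50.0])   # hyperfine frequencies of the states

      blume = 1j * np.diag(omega) + np.array([[-w,  w],
                                              [ w, -w]])
      evals, evecs = np.linalg.eig(blume)
      print("relaxation rates:", -evals.real)          # damping of the signal
      print("oscillation frequencies:", evals.imag)    # observed precession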

  2. Effects of interactions between humans and domesticated animals

    NARCIS (Netherlands)

    Bokkers, E.A.M.

    2006-01-01

    Humans have many kinds of relationships with domesticated animals. To maintain relationships, interactions are needed. Interactions with animals may be beneficial for humans but may also be risky. Scientific literature on effects of human-animal relationships and interactions in a workplace,

  3. Interaction as Negotiation

    DEFF Research Database (Denmark)

    Kristensen, Jannie Friis; Nielsen, Christina

    In this paper we discuss recent developments in interaction design principles for ubiquitous computing environments, specifically implications related to situated and mobile aspects of work. We present 'Interaction through Negotiation' as a general Human-Computer Interaction (HCI) paradigm, aimed...... at ubiquitous/pervasive technology and environments, with focus on facilitating negotiation in and between webs of different artifacts, humans and places. This approach is concerned with the way technology presents itself to us, both as physical entities and as conceptual entities, as well as the relations...... on several extensive empirical case studies, as well as co-operative design-sessions, we present a reflective analysis providing insights into results of the "Interaction through Negotiation" design approach in action. A very promising area of application is exception handling in pervasive computing...

  4. Computers and conversation

    CERN Document Server

    Luff, Paul; Gilbert, Nigel G

    1986-01-01

    In the past few years a branch of sociology, conversation analysis, has begun to have a significant impact on the design of human-computer interaction (HCI). The investigation of human-human dialogue has emerged as a fruitful foundation for interactive system design. This book includes eleven original chapters by leading researchers who are applying conversation analysis to HCI. The fundamentals of conversation analysis are outlined, a number of systems are described, and a critical view of their value for HCI is offered. Computers and Conversation will be of interest to all concerne

  5. Impact of familiarity on information complexity in human-computer interfaces

    Directory of Open Access Journals (Sweden)

    Bakaev Maxim

    2016-01-01

    Full Text Available A quantitative measure of information complexity remains very much desirable in the HCI field, since it may aid in the optimization of user interfaces, especially in human-computer systems for controlling complex objects. Our paper is dedicated to exploration of the subjective (subject-dependent) aspect of the complexity, conceptualized as information familiarity. Although research on familiarity in human cognition and behaviour has been done in several fields, the accepted models in HCI, such as the Human Processor or the Hick-Hyman law, do not generally consider this issue. In our experimental study the subjects performed search and selection of digits and letters, whose familiarity was conceptualized as frequency of occurrence in numbers and texts. The analysis showed a significant effect of information familiarity on selection time and throughput in regression models, although the R2 values were somewhat low. Still, we hope that our results might aid in the quantification of information complexity and its further application for optimizing interaction in human-machine systems.
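    For context, the Hick-Hyman law referenced above predicts selection time as RT = a + b * log2(n) for n equiprobable alternatives. The sketch below fits a and b to fabricated data and shows one way a familiarity regressor could be added, in the spirit of the study's regression models (the frequencies are invented).

      # Fit the Hick-Hyman law, then extend it with a familiarity regressor.
      import numpy as np

      n = np.array([2, 4, 8, 16, 32])                 # number of alternatives
      rt = np.array([0.42, 0.58, 0.73, 0.91, 1.08])   # fabricated times, s

      H = np.log2(n)
      b, a = np.polyfit(H, rt, 1)                     # slope, intercept
      print(f"RT = {a:.3f} + {b:.3f} * log2(n)")

      # Multiple regression: RT ~ a + b*log2(n) + c*(-log2(p_familiar)),
      # with p_familiar a hypothetical frequency-of-occurrence measure.
      p = np.array([0.30, 0.20, 0.12, 0.08, 0.05])
      design = np.column_stack([np.ones_like(H), H, -np.log2(p)])
      coef, *_ = np.linalg.lstsq(design, rt, rcond=None)
      print("a, b, c =", np.round(coef, 3))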

  6. Portable computing - A fielded interactive scientific application in a small off-the-shelf package

    Science.gov (United States)

    Groleau, Nicolas; Hazelton, Lyman; Frainier, Rich; Compton, Michael; Colombano, Silvano; Szolovits, Peter

    1993-01-01

    Experience with the design and implementation of a portable computing system for STS crew-conducted science is discussed. Principal-Investigator-in-a-Box (PI) will help the SLS-2 astronauts perform vestibular (human orientation system) experiments in flight. PI is an interactive system that provides data acquisition and analysis, experiment step rescheduling, and various other forms of reasoning to astronaut users. The hardware architecture of PI consists of a computer and an analog interface box. 'Off-the-shelf' equipment is employed in the system wherever possible in an effort to use widely available tools and then to add custom functionality and application codes to them. Other projects which can help prospective teams to learn more about portable computing in space are also discussed.

  7. Systemic multimodal approach to speech therapy treatment in autistic children.

    Science.gov (United States)

    Tamas, Daniela; Marković, Slavica; Milankov, Vesela

    2013-01-01

    Conditions in which speech therapy treatment is applied in autistic children are often not in accordance with the characteristic ways in which people with autism think and learn. A systemic multimodal approach means motivating autistic people to develop their language and speech skills through a procedure that allows them to relive personal experience of the content presented, in their natural social environment. This research was aimed at evaluating the efficiency of speech treatment based on the systemic multimodal approach to working with autistic children. The study sample consisted of 34 children, aged from 8 to 16 years, diagnosed with different autistic disorders, whose results showed a moderate and severe clinical picture of autism on the Childhood Autism Rating Scale. The applied instruments for the evaluation of ability were the Childhood Autism Rating Scale and the Ganzberg II test. The study subjects were divided into two groups according to the type of treatment: children who were covered by the continuing treatment and systemic multimodal approach in the treatment, and children who were covered by classical speech treatment. It is shown that the systemic multimodal approach in teaching autistic children stimulates communication, socialization, self-care and work, and that the progress achieved in these areas of functioning was retained over the long term. By applying the systemic multimodal approach when dealing with autistic children and by comparing their achievements on tests applied before, during and after the treatment, it was concluded that improvement was achieved within the diagnosed category. The results point to a possible direction in the creation of new methods, plans and programs for working with autistic children based on empirical and interactive learning.

  8. Multi-modality molecular imaging: pre-clinical laboratory configuration

    Science.gov (United States)

    Wu, Yanjun; Wellen, Jeremy W.; Sarkar, Susanta K.

    2006-02-01

    In recent years, the prevalence of in vivo molecular imaging applications has rapidly increased. Here we report on the construction of a multi-modality imaging facility in a pharmaceutical setting that is expected to further advance existing capabilities for in vivo imaging of drug distribution and the interaction with their target. The imaging instrumentation in our facility includes a microPET scanner, a four wavelength time-domain optical imaging scanner, a 9.4T/30cm MRI scanner and a SPECT/X-ray CT scanner. An electronics shop and a computer room dedicated to image analysis are additional features of the facility. The layout of the facility was designed with a central animal preparation room surrounded by separate laboratory rooms for each of the major imaging modalities to accommodate the work-flow of simultaneous in vivo imaging experiments. This report will focus on the design of and anticipated applications for our microPET and optical imaging laboratory spaces. Additionally, we will discuss efforts to maximize the daily throughput of animal scans through development of efficient experimental work-flows and the use of multiple animals in a single scanning session.

  9. Investigation and evaluation into the usability of human-computer interfaces using a typical CAD system

    Energy Technology Data Exchange (ETDEWEB)

    Rickett, J D

    1987-01-01

    This research program covers three topics relating to the human-computer interface namely, voice recognition, tools and techniques for evaluation, and user and interface modeling. An investigation into the implementation of voice-recognition technologies examines how voice recognizers may be evaluated in commercial software. A prototype system was developed with the collaboration of FEMVIEW Ltd. (marketing a CAD package). A theoretical approach to evaluation leads to the hypothesis that human-computer interaction is affected by personality, influencing types of dialogue, preferred methods for providing helps, etc. A user model based on personality traits, or habitual-behavior patterns (HBP) is presented. Finally, a practical framework is provided for the evaluation of human-computer interfaces. It suggests that evaluation is an integral part of design and that the iterative use of evaluation techniques throughout the conceptualization, design, implementation and post-implementation stages will ensure systems that satisfy the needs of the users and fulfill the goal of usability.

  10. Multimodal exemplification: The expansion of meaning in electronic ...

    African Journals Online (AJOL)

    Functional Multimodal Discourse Analysis (SF-MDA) and argues for improving their exemplification multimodally. Multimodal devices, if well coordinated, can help optimize e-dictionary examples in informativity, diversity, dynamicity and ...

  11. A robo-pigeon based on an innovative multi-mode telestimulation system.

    Science.gov (United States)

    Yang, Junqing; Huai, Ruituo; Wang, Hui; Lv, Changzhi; Su, Xuecheng

    2015-01-01

    In this paper, we describe a new multi-mode telestimulation system for brain microstimulation used to navigate a robo-pigeon, a new type of bio-robot based on Brain-Computer Interface (BCI) techniques. The multi-mode telestimulation system overcomes neuron adaptation, a key shortcoming of the previous single-mode stimulation, by using non-steady TTL biphasic pulses produced by randomly alternating pulse modes. To improve efficiency, a new behavior model ("virtual fear") is proposed and applied to the robo-pigeon. Unlike the previous "virtual reward" model, the "virtual fear" behavior model does not require special training. The performance and effectiveness of the system in alleviating neuron adaptation were verified by a robo-pigeon navigation test, simultaneously confirming the practicality of the "virtual fear" behavioral model.
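    The anti-adaptation scheme can be pictured as redrawing stimulation parameters from a pool on every command instead of repeating one fixed pulse train. All pulse parameters below are invented placeholders, not the paper's stimulation settings.

      # Randomly alternating pulse modes so no single pattern repeats long
      # enough for the stimulated neurons to adapt (illustrative sketch).
      import random

      random.seed(4)
      PULSE_MODES = [
          {"amplitude_uA": 60, "width_us": 200, "freq_hz": 80},
          {"amplitude_uA": 80, "width_us": 150, "freq_hz": 100},
          {"amplitude_uA": 50, "width_us": 300, "freq_hz": 60},
      ]

      def next_burst(n_pulses=20):
          """One burst: (onset_us, amplitude_uA, width_us) per biphasic pulse."""
          mode = random.choice(PULSE_MODES)
          period_us = 1_000_000 / mode["freq_hz"]
          return [(k * period_us, mode["amplitude_uA"], mode["width_us"])
                  for k in range(n_pulses)]

      for cmd in ("turn_left", "turn_right", "turn_left"):
          burst = next_burst()
          print(cmd, "->", len(burst), "pulses, first:", burst[0])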

  12. Multimodal Discourse Strategies of Factuality and Subjectivity in Educational Digital Storytelling

    Directory of Open Access Journals (Sweden)

    Patricia Bou-Franch

    2012-12-01

    Full Text Available As new technologies continue to emerge, students and lecturers are provided with new educational tools. One such tool, which is increasingly used in higher education, is digital storytelling, i.e. multi-media digital narratives. Despite the increasing attention that education and media scholars have paid to digital storytelling, there is scant research examining digital narratives from a discourse-analytic perspective. This paper addresses this gap in the literature and, in line with the belief that individuals make meaning through a range of semiotic devices, including, among others, language, sound, graphics and text, it aims to examine discourse strategies of factuality and subjectivity in historical-cultural digital narratives and their multimodal realisations (Kress & Van Leeuwen 2001; Patrona 2005). To carry out this study a corpus of 16 digital stories was compiled and analysed from a multidisciplinary framework which draws from studies on digital storytelling, computer-mediated communication, media studies, and multimodal discourse analysis. Results show that students/digital storytellers resort to a number of varied multimodal discursive strategies which are constitutive of their identity as capable students in an educational setting.

  13. From 'automation' to 'autonomy': the importance of trust repair in human-machine interaction.

    Science.gov (United States)

    de Visser, Ewart J; Pak, Richard; Shaw, Tyler H

    2018-04-09

    Modern interactions with technology are increasingly moving away from simple human use of computers as tools to the establishment of human relationships with autonomous entities that carry out actions on our behalf. In a recent commentary, Peter Hancock issued a stark warning to the field of human factors that attention must be focused on the appropriate design of a new class of technology: highly autonomous systems. In this article, we heed the warning and propose a human-centred approach directly aimed at ensuring that future human-autonomy interactions remain focused on the user's needs and preferences. By adapting literature from industrial psychology, we propose a framework to infuse a unique human-like ability, building and actively repairing trust, into autonomous systems. We conclude by proposing a model to guide the design of future autonomy and a research agenda to explore current challenges in repairing trust between humans and autonomous systems. Practitioner Summary: This paper is a call to practitioners to re-cast our connection to technology as akin to a relationship between two humans rather than between a human and their tools. To that end, designing autonomy with trust repair abilities will ensure future technology maintains and repairs relationships with their human partners.

  14. Human Work Interaction Design for Pervasive and Smart Workplaces

    DEFF Research Database (Denmark)

    Campos, Pedro F.; Lopes, Arminda; Clemmensen, Torkil

    2014-01-01

    Pervasive and smart technologies have pushed workplace configuration beyond linear logic and physical boundaries. As a result, workers' experience of and access to technology is increasingly pervasive, and their agency constantly reconfigured. While in certain areas of work this is not new (e.g., technology mediation and decision support in air traffic control), more recent developments in other domains such as healthcare (e.g., Augmented Reality in Computer Aided Surgery) have raised challenging issues for HCI researchers and practitioners. The question now is: how to improve the quality of workers' experience and outputs? This workshop focuses on answering this question to support professionals, academia, national labs, and industry engaged in human work analysis and interaction design for the workplace. Conversely, tools, procedures, and professional competences for designing human...

  15. Prediction of Human Drug Targets and Their Interactions Using Machine Learning Methods: Current and Future Perspectives.

    Science.gov (United States)

    Nath, Abhigyan; Kumari, Priyanka; Chaube, Radha

    2018-01-01

    Identifying drug targets and identifying drug-target interactions are important steps in the drug-discovery pipeline. Successful computational prediction methods can reduce the cost and time demanded by experimental methods. Knowledge of putative drug targets and their interactions can be very useful for drug repurposing. Supervised machine learning methods have been very useful in drug-target prediction and in the prediction of drug-target interactions. Here, we describe in detail how to develop prediction models for human drug targets and their interactions using supervised learning techniques.
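    The generic recipe surveyed here can be sketched as pair classification: concatenate a drug descriptor vector with a target descriptor vector and train a supervised classifier on known interacting and non-interacting pairs. The scikit-learn sketch below uses random placeholder features; the feature dimensions and data are illustrative assumptions, not the authors' setup.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        drug_features = rng.random((200, 64))     # e.g., fingerprint-derived descriptors
        target_features = rng.random((200, 128))  # e.g., sequence-composition descriptors
        X = np.hstack([drug_features, target_features])  # one row per drug-target pair
        y = rng.integers(0, 2, 200)               # 1 = interacting, 0 = non-interacting

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())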

  16. An ontology for human-like interaction systems

    OpenAIRE

    Albacete García, Esperanza

    2016-01-01

    This report proposes and describes the development of a Ph.D. thesis aimed at building an ontological knowledge model to support human-like interaction systems. The main function of such a knowledge model in a human-like interaction system is to unify the representation of each concept, relating it to the appropriate terms as well as to other concepts with which it shares semantic relations. When developing human-like interactive systems, the inclusion of an ontological module can be valuab...
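    A minimal sketch of the kind of unified concept node described might look as follows; the field names and example entries are assumptions for illustration, not the thesis's actual model.

        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class Concept:
            """A unified concept: its surface terms plus semantic relations
            to other concepts, keyed by relation type."""
            name: str
            terms: List[str] = field(default_factory=list)
            relations: Dict[str, List[str]] = field(default_factory=dict)

        greeting = Concept("greeting",
                           terms=["hello", "hi", "good morning"],
                           relations={"part_of": ["dialogue_opening"],
                                      "expressed_by": ["speech", "gesture"]})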

  17. Simulation of the dynamics of a multimode bipolarisation class B laser with intracavity frequency doubling

    International Nuclear Information System (INIS)

    Khandokhin, Pavel A

    2006-01-01

    A model of a multimode bipolarisation solid-state laser with intracavity frequency doubling is developed. The interaction of different longitudinal modes is described within the framework of the rate-equation approximation, while the interaction of each pair of orthogonally polarised modes with identical longitudinal indices is described taking into account the phase-sensitive interaction of these modes. Comparison with the experimental data is performed. (dynamic processes in lasers)
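    For orientation, multimode class B lasers with intracavity doubling are commonly modelled with Baer-type rate equations; the sketch below is that generic model (not Khandokhin's bipolarisation model, which additionally tracks the phase-sensitive coupling of orthogonally polarised mode pairs):

        \tau_c \frac{dI_k}{dt} = \Big( G_k - \alpha - g \varepsilon I_k
                                 - 2 \varepsilon \sum_{j \neq k} I_j \Big) I_k,
        \qquad
        \tau_f \frac{dG_k}{dt} = \gamma - G_k \Big( 1 + I_k + \beta \sum_{j \neq k} I_j \Big),

    where I_k and G_k are the intensity and gain of longitudinal mode k, \tau_c and \tau_f the cavity and fluorescence lifetimes, \alpha the linear loss, \varepsilon the second-harmonic conversion coefficient, g a geometric factor weighting self-doubling against sum-frequency generation, and \beta the cross-saturation parameter.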

  18. Toward a tactile language for human-robot interaction: two studies of tacton learning and performance.

    Science.gov (United States)

    Barber, Daniel J; Reinerman-Jones, Lauren E; Matthews, Gerald

    2015-05-01

    Two experiments were performed to investigate the feasibility for robot-to-human communication of a tactile language using a lexicon of standardized tactons (tactile icons) within a sentence. Improvements in autonomous systems technology and a growing demand within military operations are spurring interest in communication via vibrotactile displays. Tactile communication may become an important element of human-robot interaction (HRI), but it requires the development of messaging capabilities approaching the communication power of the speech and visual signals used in the military. In Experiment 1 (N = 38), we trained participants to identify sets of directional, dynamic, and static tactons and tested performance and workload following training. In Experiment 2 (N = 76), we introduced an extended training procedure and tested participants' ability to correctly identify two-tacton phrases. We also investigated the impact of multitasking on performance and workload. Individual difference factors were assessed. Experiment 1 showed that participants found dynamic and static tactons difficult to learn, but the enhanced training procedure in Experiment 2 produced competency in performance for all tacton categories. Participants in the latter study also performed well on two-tacton phrases and when multitasking. However, some deficits in performance and elevation of workload were observed. Spatial ability predicted some aspects of performance in both studies. Participants may be trained to identify both single tactons and tacton phrases, demonstrating the feasibility of developing a tactile language for HRI. Tactile communication may be incorporated into multi-modal communication systems for HRI. It also has potential for human-human communication in challenging environments. © 2014, Human Factors and Ergonomics Society.
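    As a data-structure sketch, such a lexicon can be modelled as named vibration patterns grouped into the three categories studied; the names, categories, and patterns below are hypothetical stand-ins, not the experiments' actual stimuli.

        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class Tacton:
            """A standardized tactile icon: a named vibration pattern, where
            each segment is (intensity 0..1, duration in ms)."""
            name: str
            category: str                     # "directional", "dynamic", or "static"
            pattern: List[Tuple[float, int]]

        # Hypothetical lexicon entries.
        LEXICON = {
            "move_left": Tacton("move_left", "directional",
                                [(1.0, 150), (0.0, 50), (1.0, 150)]),
            "halt":      Tacton("halt", "static", [(1.0, 400)]),
        }

        def phrase(*names):
            """Sequence lexicon entries into a multi-tacton phrase."""
            return [LEXICON[n] for n in names]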

  19. Cloud computing for radiologists

    OpenAIRE

    Amit T Kharat; Amjad Safvi; S S Thind; Amarjit Singh

    2012-01-01

    Cloud computing is a concept wherein a computer grid is created over the Internet with the sole purpose of utilizing shared resources, such as software and hardware, on a pay-per-use model. Using cloud computing, radiology users can efficiently manage multimodality imaging units by using the latest software and hardware without paying huge upfront costs. Cloud computing systems usually work on public, private, hybrid, or community models. Using the various components of a Cloud, such as...

  20. Double-Wavelet Approach to Studying the Modulation Properties of Nonstationary Multimode Dynamics

    DEFF Research Database (Denmark)

    Sosnovtseva, Olga; Mosekilde, Erik; Pavlov, A.N.

    2005-01-01

    On the basis of double-wavelet analysis, the paper proposes a method to study interactions in the form of frequency and amplitude modulation in nonstationary multimode data series. Special emphasis is given to the problem of quantifying the strength of modulation for a fast signal by a coexisting...
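    The double-wavelet idea proceeds in two passes: a first continuous wavelet transform extracts the instantaneous frequency and amplitude of the fast mode along its ridge, and a second transform of those ridge series exposes the slow modulation. Below is a hedged Python sketch using PyWavelets; the scales, wavelet choice, and the simple argmax ridge extraction are illustrative assumptions, not the authors' procedure.

        import numpy as np
        import pywt

        def double_wavelet(signal, fs, fast_scales, slow_scales, wavelet="cmor1.5-1.0"):
            # First pass: time-frequency decomposition of the raw signal.
            coefs, freqs = pywt.cwt(signal, fast_scales, wavelet,
                                    sampling_period=1.0 / fs)
            # Naive ridge extraction: the dominant fast-mode frequency and
            # amplitude at each instant.
            ridge = np.abs(coefs).argmax(axis=0)
            t = np.arange(signal.size)
            inst_freq = freqs[ridge]
            inst_amp = np.abs(coefs)[ridge, t]
            # Second pass: transforming the ridge series reveals slow frequency
            # and amplitude modulation imposed by a coexisting slow mode.
            fm = pywt.cwt(inst_freq - inst_freq.mean(), slow_scales, wavelet,
                          sampling_period=1.0 / fs)
            am = pywt.cwt(inst_amp - inst_amp.mean(), slow_scales, wavelet,
                          sampling_period=1.0 / fs)
            return fm, am  # each is a (coefficients, frequencies) pair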