WorldWideScience

Sample records for human computer qhc

  1. The Quantum Human Computer (QHC) Hypothesis

    Science.gov (United States)

    Salmani-Nodoushan, Mohammad Ali

    2008-01-01

    This article attempts to suggest the existence of a human computer called Quantum Human Computer (QHC) on the basis of an analogy between human beings and computers. To date, there are two types of computers: Binary and Quantum. The former operates on the basis of binary logic where an object is said to exist in either of the two states of 1 and…

  2. The mathematics of a quantum Hamiltonian computing half adder Boolean logic gate.

    Science.gov (United States)

    Dridi, G; Julien, R; Hliwa, M; Joachim, C

    2015-08-28

    The mathematics behind the quantum Hamiltonian computing (QHC) approach of designing Boolean logic gates with a quantum system is given. Using the quantum eigenvalue repulsion effect, the QHC AND, NAND, OR, NOR, XOR, and NXOR Hamiltonian Boolean matrices are constructed. This is applied to the construction of a QHC half adder Hamiltonian matrix requiring only six quantum states to fulfil a half-adder Boolean logic truth table. The QHC design rules open a nano-architectronic way of constructing Boolean logic gates inside a single molecule or atom by atom at the surface of a passivated semiconductor.
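
    A minimal numerical sketch of the eigenvalue-repulsion principle this abstract describes: logical inputs modulate the coupling of a small Hamiltonian, and the output is read from the resulting eigenvalue splitting. This toy 2x2 AND gate is an illustration only; the energy e and coupling g are arbitrary made-up values, and it is not the six-state half-adder matrix of the paper.

        import numpy as np

        def qhc_style_and(a: int, b: int, e: float = 1.0, g: float = 0.5) -> int:
            """Toy AND gate: inputs a, b switch the off-diagonal coupling of a
            2x2 Hamiltonian; the output is read from the eigenvalue splitting."""
            c = g * a * b                        # coupling present only when a = b = 1
            H = np.array([[e, c],
                          [c, e]])
            gap = np.ptp(np.linalg.eigvalsh(H))  # eigenvalue repulsion widens the gap
            return int(gap > g)                  # threshold the spectral response

        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "->", qhc_style_and(a, b))  # prints the AND truth table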

  3. Human Computation

    CERN Document Server

    CERN. Geneva

    2008-01-01

    What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...

  4. Prediction of CO concentrations from road traffic at signalized intersections using CAL3QHC model: the Khon Kaen case study

    Directory of Open Access Journals (Sweden)

    Prungchan Wongwises

    2005-11-01

    Based on the US EPA air pollution model CAL3QHC version 2.0, carbon monoxide (CO) concentrations from road traffic were predicted at three signalized intersections in Khon Kaen province. The four data groups required by the model, namely site parameters, traffic parameters, meteorological parameters and emission parameters, were collected at each intersection and used as inputs to the model. The predictions were compared with measurements. The predicted CO concentration variations corresponded closely to the measurements except at some hours, where poor agreement was attributable to an extreme upwind receptor location, low wind speed, periods of rain, or outside sources of CO such as a nearby intersection and a parking lot. Overall, this study shows that the CAL3QHC model can be applied to predict CO concentrations under the environmental conditions of Thailand quite well. Moreover, the model might be used as a tool for assessing traffic air pollution at roadway intersections as well as for air quality management.

  5. Ubiquitous Human Computing

    OpenAIRE

    Zittrain, Jonathan L.

    2008-01-01

    Ubiquitous computing means network connectivity everywhere, linking devices and systems as small as a thumb tack and as large as a worldwide product distribution chain. What could happen when people are so readily networked? This short essay explores issues arising from two possible emerging models of ubiquitous human computing: fungible networked brainpower and collective personal vital sign monitoring.

  6. When computers were human

    CERN Document Server

    Grier, David Alan

    2013-01-01

    Before Palm Pilots and iPods, PCs and laptops, the term "computer" referred to the people who did scientific calculations by hand. These workers were neither calculating geniuses nor idiot savants but knowledgeable people who, in other circumstances, might have become scientists in their own right. When Computers Were Human represents the first in-depth account of this little-known, 200-year epoch in the history of science and technology. Beginning with the story of his own grandmother, who was trained as a human computer, David Alan Grier provides a poignant introduction to the wider world…

  7. Computational human body models

    NARCIS (Netherlands)

    Wismans, J.S.H.M.; Happee, R.; Dommelen, J.A.W. van

    2005-01-01

    Computational human body models are widely used for automotive crash-safety research and design and as such have significantly contributed to a reduction of traffic injuries and fatalities. Currently, crash simulations are mainly performed using models based on crash dummies. However, crash dummies differ…

  9. Handbook of human computation

    CERN Document Server

    Michelucci, Pietro

    2013-01-01

    This volume addresses the emerging area of human computation. The chapters, written by leading international researchers, explore existing and future opportunities to combine the respective strengths of both humans and machines in order to create powerful problem-solving capabilities. The book bridges scientific communities, capturing and integrating the unique perspective and achievements of each. It coalesces contributions from industry and across related disciplines in order to motivate, define, and anticipate the future of this exciting new frontier in science and cultural evolution. Reade…

  10. Visualizing Humans by Computer.

    Science.gov (United States)

    Magnenat-Thalmann, Nadia

    1992-01-01

    Presents an overview of the problems and techniques involved in visualizing humans in a three-dimensional scene. Topics discussed include human shape modeling, including shape creation and deformation; human motion control, including facial animation and interaction with synthetic actors; and human rendering and clothing, including textures and…

  11. Minimal mobile human computer interaction

    NARCIS (Netherlands)

    el Ali, A.

    2013-01-01

    In the last 20 years, the widespread adoption of personal, mobile computing devices in everyday life has allowed entry into a new technological era in Human Computer Interaction (HCI). The constant change of the physical and social context in a user's situation made possible by the portability of mobile…

  12. Making IBM's Computer, Watson, Human

    Science.gov (United States)

    Rachlin, Howard

    2012-01-01

    This essay uses the recent victory of an IBM computer (Watson) in the TV game, Jeopardy, to speculate on the abilities Watson would need, in addition to those it has, to be human. The essay's basic premise is that to be human is to behave as humans behave and to function in society as humans function. Alternatives to this premise are considered and rejected. The viewpoint of the essay is that of teleological behaviorism. Mental states are defined as temporally extended patterns of overt behavior. From this viewpoint (although Watson does not currently have them), essential human attributes such as consciousness, the ability to love, to feel pain, to sense, to perceive, and to imagine may all be possessed by a computer. Most crucially, a computer may possess self-control and may act altruistically. However, the computer's appearance, its ability to make specific movements, its possession of particular internal structures (e.g., whether those structures are organic or inorganic), and the presence of any nonmaterial “self,” are all incidental to its humanity. PMID:22942530

  13. Humans, computers and wizards human (simulated) computer interaction

    CERN Document Server

    Fraser, Norman; McGlashan, Scott; Wooffitt, Robin

    2013-01-01

    Using data taken from a major European Union funded project on speech understanding, the SunDial project, this book considers current perspectives on human computer interaction and argues for the value of an approach taken from sociology which is based on conversation analysis.

  14. Human-computer interface design

    Energy Technology Data Exchange (ETDEWEB)

    Bowser, S.E.

    1995-04-01

    Modern military forces assume that computer-based information is reliable, timely, available, usable, and shared. The importance of computer-based information is based on the assumption that "shared situation awareness, coupled with the ability to conduct continuous operations, will allow information age armies to observe, decide, and act faster, more correctly and more precisely than their enemies" (Sullivan and Dubik 1994). Human-Computer Interface (HCI) design standardization is critical to the realization of the previously stated assumptions. Given that a key factor of a high-performance, high-reliability system is an easy-to-use, effective design of the interface between the hardware, software, and the user, it follows logically that the interface between the computer and the military user is critical to the success of the information-age military. The proliferation of computer technology has resulted in the development of an extensive variety of computer-based systems and the implementation of varying HCI styles on these systems. To accommodate the continued growth in computer-based systems, minimize HCI diversity, and improve system performance and reliability, the U.S. Department of Defense (DoD) is continuing to adopt interface standards for developing computer-based systems.

  15. Human ear recognition by computer

    CERN Document Server

    Bhanu, Bir; Chen, Hui

    2010-01-01

    Biometrics deals with recognition of individuals based on their physiological or behavioral characteristics. The human ear is a new feature in biometrics that has several merits over the more common face, fingerprint and iris biometrics. Unlike the fingerprint and iris, it can be easily captured from a distance without a fully cooperative subject, although sometimes it may be hidden with hair, scarf and jewellery. Also, unlike a face, the ear is a relatively stable structure that does not change much with age and facial expressions. "Human Ear Recognition by Computer" is the first book o…

  16. Human-centered Computing: Toward a Human Revolution

    OpenAIRE

    Jaimes, Alejandro; Gatica-Perez, Daniel; Sebe, Nicu; Huang, Thomas S.

    2007-01-01

    Human-centered computing studies the design, development, and deployment of mixed-initiative human-computer systems. HCC is emerging from the convergence of multiple disciplines that are concerned both with understanding human beings and with the design of computational artifacts.

  17. Computational Techniques of Electromagnetic Dosimetry for Humans

    Science.gov (United States)

    Hirata, Akimasa; Fujiwara, Osamu

    There has been increasing public concern about the adverse health effects of human exposure to electromagnetic fields. This paper reviews the rationale of international safety guidelines for human protection against electromagnetic fields, and then presents computational techniques for conducting dosimetry in anatomically based human body models. Computational examples and remaining problems are also described briefly.
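
    The abstract does not name a specific numerical method, but a standard quantity in such anatomically based dosimetry is the specific absorption rate, SAR = sigma * |E|^2 / rho (W/kg), evaluated per tissue voxel. The sketch below shows the bookkeeping only; the grid size, conductivity, density, voxel size, and field values are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        shape = (16, 16, 16)                   # hypothetical voxelized body model
        sigma = np.full(shape, 0.5)            # tissue conductivity, S/m (assumed)
        rho = np.full(shape, 1000.0)           # tissue density, kg/m^3 (assumed)
        E_rms = rng.uniform(0.0, 10.0, shape)  # placeholder RMS E-field, V/m

        sar = sigma * E_rms**2 / rho           # local SAR per voxel, W/kg
        voxel_mass = rho * (2e-3)**3           # 2 mm cubic voxels (assumed)
        whole_body_sar = (sar * voxel_mass).sum() / voxel_mass.sum()
        print(f"peak local SAR: {sar.max():.2e} W/kg; "
              f"whole-body average: {whole_body_sar:.2e} W/kg")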

  18. The Social Computer: Combining Machine and Human Computation

    OpenAIRE

    Giunchiglia, Fausto; Robertson, Dave

    2010-01-01

    The social computer is a future computational system that harnesses the innate problem solving, action and information gathering powers of humans and the environments in which they live in order to tackle large scale social problems that are beyond our current capabilities. The hardware of a social computer is supplied by people’s brains and bodies, the environment where they live, including artifacts, e.g., buildings and roads, sensors into the environment, networks and computers; while the ...

  19. Cooperation in human-computer communication

    OpenAIRE

    Kronenberg, Susanne

    2000-01-01

    The goal of this thesis is to simulate cooperation in human-computer communication, modeling the communicative interaction of agents in natural dialogs so as to provide advanced human-computer interaction in which coherence is maintained between the contributions of both agents, i.e. the human user and the computer. This thesis contributes to certain aspects of understanding and generation and their interaction in the German language. In spontaneous dialogs, agents cooperate by the pro...

  20. Language evolution and human-computer interaction

    Science.gov (United States)

    Grudin, Jonathan; Norman, Donald A.

    1991-01-01

    Many of the issues that confront designers of interactive computer systems also appear in natural language evolution. Natural languages and human-computer interfaces share as their primary mission the support of extended 'dialogues' between responsive entities. Because in each case one participant is a human being, some of the pressures operating on natural languages, causing them to evolve in order to better support such dialogue, also operate on human-computer 'languages' or interfaces. This does not necessarily push interfaces in the direction of natural language - since one entity in this dialogue is not a human, this is not to be expected. Nonetheless, by discerning where the pressures that guide natural language evolution also appear in human-computer interaction, we can contribute to the design of computer systems and obtain a new perspective on natural languages.

  2. Computer Vision Method in Human Motion Detection

    Institute of Scientific and Technical Information of China (English)

    FU Li; FANG Shuai; XU Xin-he

    2007-01-01

    Human motion detection based on computer vision is a frontier research topic that is attracting increasing attention in the field of computer vision research. The wavelet transform is used to sharpen the ambiguous edges in human motion images. The effect of shadows on the image processing is also removed. The edge extraction can be successfully realized. This is an effective method for research on human motion analysis systems.
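
    As a rough sketch of the wavelet-based edge sharpening this abstract describes, one can amplify the detail (edge) subbands of a 2-D discrete wavelet transform and invert it. This assumes the PyWavelets package; the Haar wavelet and the gain of 2.0 are arbitrary choices, not the authors' settings.

        import numpy as np
        import pywt

        def sharpen_edges(frame: np.ndarray, gain: float = 2.0) -> np.ndarray:
            # Decompose into approximation (cA) and detail/edge subbands (cH, cV, cD)
            cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), "haar")
            # Amplify the detail subbands, then reconstruct the sharpened frame
            return pywt.idwt2((cA, (gain * cH, gain * cV, gain * cD)), "haar")

        frame = np.zeros((64, 64))
        frame[:, 32:] = 255.0              # synthetic vertical edge
        print(sharpen_edges(frame).shape)  # (64, 64), with the edge emphasized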

  3. Object categorization: computer and human vision perspectives

    National Research Council Canada - National Science Library

    Dickinson, Sven J

    2009-01-01

    The result of a series of four highly successful workshops on the topic, the book gathers many of the most distinguished researchers from both computer and human vision to reflect on their experience...

  4. Handling emotions in human-computer dialogues

    CERN Document Server

    Pittermann, Johannes; Minker, Wolfgang

    2010-01-01

    This book presents novel methods to perform robust speech-based emotion recognition at low complexity. It describes a flexible dialogue model to conveniently integrate emotions and other dialogue-influencing parameters in human-computer interaction.

  5. Microscopic computation in human brain evolution.

    Science.gov (United States)

    Wallace, R

    1995-04-01

    When human psychological performance is viewed in terms of cognitive modules, our species displays remarkable differences in computational power. Algorithmically simple computations are generally difficult to perform, whereas optimal routing or "Traveling Salesman" Problems (TSP) of far greater complexity are solved on an everyday basis. It is argued that even "simple" instances of TSP are not purely Euclidean problems in human computations, but involve emotional, autonomic, and cognitive constraints. They therefore require a level of parallel processing not possible in a macroscopic system to complete the algorithm within a brief period of time. A microscopic neurobiological model emphasizing the computational power of excited atoms within the neuronal membrane is presented as an alternative to classical connectionist approaches. The evolution of the system is viewed in terms of specific natural selection pressures driving satisficing computations toward global optimization. The relationship of microscopic computation to the nature of consciousness is examined, and possible mathematical models as a basis for simulation studies are briefly discussed.

  6. Human-Computer Interactions and Decision Behavior

    Science.gov (United States)

    1984-01-01

    …software interfaces. The major components of the research program included the Dialogue Management System (DMS) operating environment, the role of... specification; and new methods for modeling, designing, and developing human-computer interfaces based on syntactic and semantic specification. The DMS... achieving communication is language. Accordingly, the transaction model employs a linguistic model consisting of parts that relate computer responses…

  7. Human Adaptation to the Computer.

    Science.gov (United States)

    1986-09-01

    Keywords: resistance to change; stress; adaptation to computers. This thesis is a study of... of resistance to change... overcoming resistance to change... specific recommendations to overcome resistance... the greater his bewilderment, and the greater his bewilderment, the greater his resistance will be [Ref. 7:p. 539]. Overcoming man's resistance to change…

  8. Fundamentals of human-computer interaction

    CERN Document Server

    Monk, Andrew F

    1985-01-01

    Fundamentals of Human-Computer Interaction aims to sensitize the systems designer to the problems faced by the user of an interactive system. The book grew out of a course entitled "The User Interface: Human Factors for Computer-based Systems" which has been run annually at the University of York since 1981. This course has been attended primarily by systems managers from the computer industry. The book is organized into three parts. Part One focuses on the user as processor of information with studies on visual perception; extracting information from printed and electronically presented…

  9. Deep architectures for Human Computer Interaction

    NARCIS (Netherlands)

    Noulas, A.K.; Kröse, B.J.A.

    2008-01-01

    In this work we present the application of Conditional Restricted Boltzmann Machines in Human-Computer Interaction. These provide a well-suited framework to model the complex temporal patterns produced by humans in the audio and video modalities. They can be trained in a semi-supervised fashion and…

  10. Exploring human inactivity in computer power consumption

    Science.gov (United States)

    Candrawati, Ria; Hashim, Nor Laily Binti

    2016-08-01

    Managing computer power consumption has become an important challenge in the computing community, consistent with a trend in which computer systems are ever more important to modern life while demand for computing power and functionality increases continuously. Unfortunately, previous approaches are still inadequately designed to handle the power consumption problem, because a system's workload is unpredictable, driven by unpredictable human behavior. This stems from a lack of knowledge in the software system, and software self-adaptation is one approach to dealing with this source of uncertainty: human inactivity is handled by adapting to the behavioral changes of the users. This paper observes human inactivity in computer usage and finds that computer power usage can be reduced if idle periods can be intelligently sensed from user activities. The study introduces a Control, Learn and Knowledge model that adapts the Monitor, Analyze, Plan, Execute control loop, integrated with a Q-learning algorithm, to learn human inactivity periods and minimize computer power consumption. An experiment to evaluate this model was conducted using three case studies with the same activities. The results show that the proposed model reduced power consumption in 5 of the 12 activities compared to the others.
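
    A minimal sketch of the learning step described above, reduced to a one-step (bandit-style) Q-learning update: states are bucketed inactivity durations, actions are "stay awake" or "sleep", and the reward trades power savings against interrupting an active user. All states, constants, and rewards here are illustrative assumptions, not the paper's Control, Learn and Knowledge model.

        import random

        STATES = range(4)   # 0..3: bucketed minutes of observed user inactivity
        ACTIONS = (0, 1)    # 0 = stay awake, 1 = sleep
        ALPHA, EPS = 0.1, 0.1
        Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

        def reward(state, action):
            if action == 1:                         # sleeping saves power...
                return 1.0 if state >= 2 else -2.0  # ...but annoys an active user
            return -0.1                             # staying awake keeps burning power

        random.seed(0)
        for _ in range(5000):
            s = random.choice(list(STATES))
            if random.random() < EPS:               # epsilon-greedy exploration
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            Q[(s, a)] += ALPHA * (reward(s, a) - Q[(s, a)])  # one-step update

        policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
        print(policy)   # expect: sleep (1) only for the long-inactivity buckets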

  11. Human Computer Interaction: An intellectual approach

    Directory of Open Access Journals (Sweden)

    Kuntal Saroha

    2011-08-01

    This paper discusses the research that has been done in the field of Human Computer Interaction (HCI) relating to human psychology. Human-computer interaction (HCI) is the study of how people design, implement, and use interactive computer systems and how computers affect individuals, organizations, and society. This encompasses not only ease of use but also new interaction techniques for supporting user tasks, providing better access to information, and creating more powerful forms of communication. It involves input and output devices and the interaction techniques that use them; how information is presented and requested; how the computer's actions are controlled and monitored; all forms of help, documentation, and training; the tools used to design, build, test, and evaluate user interfaces; and the processes that developers follow when creating interfaces.

  12. Human/computer control of undersea teleoperators

    Science.gov (United States)

    Sheridan, T. B.; Verplank, W. L.; Brooks, T. L.

    1978-01-01

    The potential of supervisory controlled teleoperators for accomplishment of manipulation and sensory tasks in deep ocean environments is discussed. Teleoperators and supervisory control are defined, the current problems of human divers are reviewed, and some assertions are made about why supervisory control has potential use to replace and extend human diver capabilities. The relative roles of man and computer and the variables involved in man-computer interaction are next discussed. Finally, a detailed description of a supervisory controlled teleoperator system, SUPERMAN, is presented.

  13. Applying Human Computation Methods to Information Science

    Science.gov (United States)

    Harris, Christopher Glenn

    2013-01-01

    Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…

  14. Soft Computing in Humanities and Social Sciences

    CERN Document Server

    González, Veronica

    2012-01-01

    The field of Soft Computing in Humanities and Social Sciences is at a turning point. The strong distinction between “science” and “humanities” has been criticized from many fronts and, at the same time, an increasing cooperation between the so-called “hard sciences” and “soft sciences” is taking place in a wide range of scientific projects dealing with very complex and interdisciplinary topics. In the last fifteen years the area of Soft Computing has also experienced a gradual rapprochement with disciplines in the Humanities and Social Sciences, and also in the fields of Medicine, Biology and even the Arts, a phenomenon that did not occur much in previous years. This collection presents a generous sampling of the new and burgeoning field of Soft Computing in Humanities and Social Sciences, bringing together a wide array of authors and subject matters from different disciplines. Some of the contributors of the book belong to the scientific and technical areas of Soft Computing w...

  15. Introduction to human-computer interaction

    CERN Document Server

    Booth, Paul

    2014-01-01

    Originally published in 1989 this title provided a comprehensive and authoritative introduction to the burgeoning discipline of human-computer interaction for students, academics, and those from industry who wished to know more about the subject. Assuming very little knowledge, the book provides an overview of the diverse research areas that were at the time only gradually building into a coherent and well-structured field. It aims to explain the underlying causes of the cognitive, social and organizational problems typically encountered when computer systems are introduced. It is clear and co…

  16. Human brain mapping: Experimental and computational approaches

    Energy Technology Data Exchange (ETDEWEB)

    Wood, C.C.; George, J.S.; Schmidt, D.M.; Aine, C.J. [Los Alamos National Lab., NM (US); Sanders, J. [Albuquerque VA Medical Center, NM (US); Belliveau, J. [Massachusetts General Hospital, Boston, MA (US)

    1998-11-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project combined Los Alamos' and collaborators' strengths in noninvasive brain imaging and high-performance computing to develop potential contributions to the multi-agency Human Brain Project led by the National Institute of Mental Health. The experimental component of the project emphasized the optimization of spatial and temporal resolution of functional brain imaging by combining: (a) structural MRI measurements of brain anatomy; (b) functional MRI measurements of blood flow and oxygenation; and (c) MEG measurements of time-resolved neuronal population currents. The computational component of the project emphasized development of a high-resolution 3-D volumetric model of the brain based on anatomical MRI, in which structural and functional information from multiple imaging modalities can be integrated into a single computational framework for modeling, visualization, and database representation.

  18. Brain-Computer Interfaces Revolutionizing Human-Computer Interaction

    CERN Document Server

    Graimann, Bernhard; Allison, Brendan

    2010-01-01

    A brain-computer interface (BCI) establishes a direct output channel between the human brain and external devices. BCIs infer user intent via direct measures of brain activity and thus enable communication and control without movement. This book, authored by experts in the field, provides an accessible introduction to the neurophysiological and signal-processing background required for BCI, presents state-of-the-art non-invasive and invasive approaches, gives an overview of current hardware and software solutions, and reviews the most interesting as well as new, emerging BCI applications. The book is intended not only for students and young researchers, but also for newcomers and other readers from diverse backgrounds keen to learn about this vital scientific endeavour.

  19. Computer Simulation of the Beating Human Heart

    Science.gov (United States)

    Peskin, Charles S.; McQueen, David M.

    2001-06-01

    The mechanical function of the human heart couples together the fluid mechanics of blood and the soft tissue mechanics of the muscular heart walls and flexible heart valve leaflets. We discuss a unified mathematical formulation of this problem in which the soft tissue looks like a specialized part of the fluid in which additional forces are applied. This leads to a computational scheme known as the Immersed Boundary (IB) method for solving the coupled equations of motion of the whole system. The IB method is used to construct a three-dimensional Virtual Heart, including representations of all four chambers of the heart and all four valves, in addition to the large arteries and veins that connect the heart to the rest of the circulation. The chambers, valves, and vessels are all modeled as collections of elastic (and where appropriate, actively contractile) fibers immersed in viscous incompressible fluid. Results are shown as a computer-generated video animation of the beating heart.
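
    The core IB operation, spreading Lagrangian fiber forces onto the Eulerian fluid grid through a smoothed delta function, can be sketched in a few lines. The grid size, fiber shape, and stiffness below are arbitrary assumptions; a real solver would couple this step to an incompressible Navier-Stokes solve.

        import numpy as np

        N, h, k = 64, 1.0 / 64, 1.0    # grid size, spacing, fiber stiffness (assumed)
        theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
        X = 0.5 + 0.25 * np.c_[np.cos(theta), np.sin(theta)]  # closed elastic fiber

        # Discrete elastic (tension) force on each fiber point from its neighbours
        F = k * (np.roll(X, -1, axis=0) - 2 * X + np.roll(X, 1, axis=0))

        def delta4(r):
            """Peskin's 4-point smoothed delta function (one dimension)."""
            r = np.abs(r)
            inner = np.sqrt(np.maximum(1 + 4 * r - 4 * r**2, 0.0))
            outer = np.sqrt(np.maximum(-7 + 12 * r - 4 * r**2, 0.0))
            return np.where(r < 1, (3 - 2 * r + inner) / 8,
                   np.where(r < 2, (5 - 2 * r - outer) / 8, 0.0))

        f = np.zeros((N, N, 2))        # Eulerian force density on the fluid grid
        grid = np.arange(N) * h
        for Xk, Fk in zip(X, F):
            wx = delta4((grid - Xk[0]) / h) / h
            wy = delta4((grid - Xk[1]) / h) / h
            f += wx[:, None, None] * wy[None, :, None] * Fk

        print(f.shape, np.abs(f.sum(axis=(0, 1))))  # total spread force ~ 0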

  20. The epistemology and ontology of human-computer interaction

    NARCIS (Netherlands)

    Brey, Philip

    2005-01-01

    This paper analyzes epistemological and ontological dimensions of Human-Computer Interaction (HCI) through an analysis of the functions of computer systems in relation to their users. It is argued that the primary relation between humans and computer systems has historically been epistemic: computer…

  1. Human-Computer Interaction in Smart Environments

    Directory of Open Access Journals (Sweden)

    Gianluca Paravati

    2015-08-01

    Here, we provide an overview of the content of the Special Issue on “Human-computer interaction in smart environments”. The aim of this Special Issue is to highlight technologies and solutions encompassing the use of mass-market sensors in current and emerging applications for interacting with Smart Environments. Selected papers address this topic by analyzing different interaction modalities, including hand/body gestures, face recognition, gaze/eye tracking, biosignal analysis, speech and activity recognition, and related issues.

  2. Computer Aided Design in Digital Human Modeling for Human Computer Interaction in Ergonomic Assessment: A Review

    Directory of Open Access Journals (Sweden)

    Suman Mukhopadhyay, Sanjib Kumar Das and Tania Chakraborty

    2012-12-01

    Research in Human-Computer Interaction (HCI) has been enormously successful in the area of computer-aided ergonomics or human-centric designs. Perfect fit for people has always been a target for product design. Designers traditionally used anthropometric dimensions for 3D product design, which created a lot of fitting problems when dealing with the complexities of the human body shapes. Computer-aided design (CAD), also known as computer-aided design and drafting (CADD), is the computer technology used for the design processing and design documentation. CAD has now been used extensively in many applications such as automotive, shipbuilding, aerospace industries, architectural and industrial designs, prosthetics, computer animation for special effects in movies, advertising and technical manuals. As a technology, digital human modeling (DHM) has rapidly emerged as a technology that creates, manipulates and controls human representations and human-machine system scenes on computers for interactive ergonomic design problem solving. DHM promises to profoundly change how products or systems are designed, how ergonomics analysis is performed, how disorders and impairments are assessed and how therapies and surgeries are conducted. The imperative and emerging need for DHM appears to be consistent with the fact that the past decade has witnessed significant growth in both the software systems offering DHM capabilities and the corporations adopting the technology. The authors dwell at length on how research in DHM has finally brought about an enhanced HCI, in the context of computer-aided ergonomics or human-centric design, and discuss future trends in this context.

  3. Computers vs. Humans in Galaxy Classification

    Science.gov (United States)

    Kohler, Susanna

    2016-04-01

    In this age of large astronomical surveys, one major scientific bottleneck is the analysis of enormous data sets. Traditionally, this task requires human input, but could computers eventually take over? A pair of scientists explore this question by testing whether computers can classify galaxies as well as humans. [Figure caption: examples of disagreement, galaxies that Galaxy Zoo humans classified as spirals with 95% agreement but the computer algorithm classified as ellipticals with 70% certainty; most are cases where the computer got it wrong, but not all of them. Adapted from Kuminski et al. 2016.] Limits of Citizen Science: Galaxy Zoo is an internet-based citizen science project that uses non-astronomer volunteers to classify galaxy images. This is an innovative way to provide more manpower, but it is still only practical for limited catalog sizes. How do we handle the data from upcoming surveys like the Large Synoptic Survey Telescope (LSST), which will produce billions of galaxy images when it comes online? In a recent study, Evan Kuminski and Lior Shamir, two computer scientists at Lawrence Technological University in Michigan, used a machine learning algorithm known as Wndchrm to classify a dataset of Sloan Digital Sky Survey (SDSS) galaxies into ellipticals and spirals. The authors' goal is to determine whether their algorithm can classify galaxies as accurately as the human volunteers for Galaxy Zoo. Automatic Classification: After training their classifier on a small set of spiral and elliptical galaxies, Kuminski and Shamir set it loose on a catalog of ~3 million SDSS galaxies. The classifier first computes a set of 2,885 numerical descriptors (like textures, edges, and shapes) for each galaxy image, and then uses these descriptors to categorize the galaxy as spiral or elliptical. [Figure caption: rate of agreement of the computer classification with human classification (for the Galaxy Zoo superclean subset) for different ranges of computed classification certainties.] For certainties above…
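
    A toy version of the descriptor-then-classify pipeline the study describes: compute a handful of numeric image descriptors and assign the class whose training mean is nearest. Wndchrm itself computes 2,885 descriptors; the three used here, and the synthetic "galaxies", are illustrative stand-ins only.

        import numpy as np

        def descriptors(img: np.ndarray) -> np.ndarray:
            gy, gx = np.gradient(img.astype(float))
            edge = np.hypot(gx, gy).mean()              # edge/texture content
            c, r = img.shape[0] // 2, img.shape[0] // 4
            inner = img[c - r:c + r, c - r:c + r].sum()
            concentration = inner / (img.sum() + 1e-9)  # central light concentration
            return np.array([edge, concentration, img.std()])

        def classify(img, class_means):
            d = descriptors(img)
            return min(class_means, key=lambda c: np.linalg.norm(d - class_means[c]))

        rng = np.random.default_rng(1)
        yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
        elliptical = np.exp(-8 * (xx**2 + yy**2))          # smooth synthetic blob
        spiral = elliptical + 0.3 * rng.random((64, 64))   # noisier, more texture
        means = {"E": descriptors(elliptical), "S": descriptors(spiral)}
        print(classify(elliptical, means), classify(spiral, means))  # -> E S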

  4. Human-Computer Interaction The Agency Perspective

    CERN Document Server

    Oliveira, José

    2012-01-01

    Agent-centric theories, approaches and technologies are contributing to enrich interactions between users and computers. This book aims at highlighting the influence of the agency perspective in Human-Computer Interaction through a careful selection of research contributions. Split into five sections (Users as Agents, Agents and Accessibility, Agents and Interactions, Agent-centric Paradigms and Approaches, and Collective Agents), the book covers a wealth of novel, original and fully updated material, aiming: to provide coherent, in-depth, and timely material on the agency perspective in HCI; to offer an authoritative treatment of the subject matter presented by carefully selected authors; to offer balanced and broad coverage of the subject area, including human, organizational, and social, as well as technological, concerns; and to offer a hands-on experience by covering representative case studies and offering essential design guidelines. The book will appeal to a broad audience of resea…

  5. Human computer interaction using hand gestures

    CERN Document Server

    Premaratne, Prashan

    2014-01-01

    Human computer interaction (HCI) plays a vital role in bridging the 'Digital Divide', bringing people closer to consumer electronics control in the 'lounge'. Keyboards, mice and remotes alienate old and new generations alike from control interfaces. Hand gesture recognition systems bring hope of connecting people with machines in a natural way. This will lead to consumers being able to use their hands naturally to communicate with any electronic equipment in their 'lounge'. This monograph covers the state-of-the-art hand gesture recognition approaches and how they evolved from their inception. The author also details his research in this area over the past 8 years, and how the future of HCI might turn out. This monograph will serve as a valuable guide for researchers venturing into the world of HCI.

  6. Human Computation An Integrated Approach to Learning from the Crowd

    CERN Document Server

    Law, Edith

    2011-01-01

    Human computation is a new and evolving research area that centers around harnessing human intelligence to solve computational problems that are beyond the scope of existing Artificial Intelligence (AI) algorithms. With the growth of the Web, human computation systems can now leverage the abilities of an unprecedented number of people via the Web to perform complex computation. There are various genres of human computation applications that exist today. Games with a purpose (e.g., the ESP Game) specifically target online gamers who generate useful data (e.g., image tags) while playing an enjoyable…

  7. Advanced Technologies, Embedded and Multimedia for Human-Centric Computing

    CERN Document Server

    Chao, Han-Chieh; Deng, Der-Jiunn; Park, James; HumanCom and EMC 2013

    2014-01-01

    The theme of HumanCom is focused on the various aspects of human-centric computing for advances in computer science and its applications, and provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of human-centric computing. The theme of EMC (Advances in Embedded and Multimedia Computing) is focused on the various aspects of embedded systems, smart grid, cloud and multimedia computing, and provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of embedded and multimedia computing. This book therefore includes various theories and practical applications in human-centric computing and embedded and multimedia computing.

  8. An Interdisciplinary Bibliography for Computers and the Humanities Courses.

    Science.gov (United States)

    Ehrlich, Heyward

    1991-01-01

    Presents an annotated bibliography of works related to the subject of computers and the humanities. Groups items into textbooks and overviews; introductions; human and computer languages; literary and linguistic analysis; artificial intelligence and robotics; social issue debates; computers' image in fiction; anthologies; writing and the…

  9. Computational Models to Synthesize Human Walking

    Institute of Scientific and Technical Information of China (English)

    Lei Ren; David Howard; Laurence Kenney

    2006-01-01

    The synthesis of human walking is of great interest in biomechanics and biomimetic engineering due to its predictive capabilities and potential applications in clinical biomechanics, rehabilitation engineering and biomimetic robotics. In this paper, the various methods that have been used to synthesize human walking are reviewed from an engineering viewpoint. This involves a wide spectrum of approaches, from simple passive walking theories to large-scale computational models integrating the nervous, muscular and skeletal systems. These methods are roughly categorized under four headings: models inspired by the concept of a CPG (Central Pattern Generator), methods based on the principles of control engineering, predictive gait simulation using optimisation, and models inspired by passive walking theory. The shortcomings and advantages of these methods are examined, and future directions are discussed in the context of providing insights into the neural control objectives driving gait and improving the stability of the predicted gaits. Future advancements are likely to be motivated by improved understanding of neural control strategies and the subtle complexities of the musculoskeletal system during human locomotion. It is only a matter of time before predictive gait models become a practical and valuable tool in clinical diagnosis, rehabilitation engineering and robotics.
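
    Of the four categories the review names, the CPG-inspired models are the simplest to sketch: two mutually coupled phase oscillators whose stable state is the anti-phase pattern of left and right legs. The stride frequency and coupling gain below are arbitrary illustrative values, not taken from any of the reviewed models.

        import numpy as np

        w = 2 * np.pi * 1.0     # ~1 Hz stride frequency (assumed)
        k = 2.0                 # coupling gain (assumed)
        dt, steps = 0.001, 5000
        phi = np.array([0.0, 0.5])   # left/right leg phases (rad), arbitrary start

        for _ in range(steps):
            # Kuramoto-style coupling that drives the legs toward anti-phase
            d0 = w + k * np.sin(phi[1] - phi[0] - np.pi)
            d1 = w + k * np.sin(phi[0] - phi[1] - np.pi)
            phi += dt * np.array([d0, d1])

        print((phi[1] - phi[0]) % (2 * np.pi))  # ~pi: alternating (walking) gait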

  10. 2012 International Conference on Human-centric Computing

    CERN Document Server

    Jin, Qun; Yeo, Martin; Hu, Bin; Human Centric Technology and Service in Smart Space, HumanCom 2012

    2012-01-01

    The theme of HumanCom is focused on the various aspects of human-centric computing for advances in computer science and its applications and provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of human-centric computing. In addition, the conference will publish high quality papers which are closely related to the various theories and practical applications in human-centric computing. Furthermore, we expect that the conference and its publications will be a trigger for further related research and technology improvements in this important subject.

  11. Human-Computer Interaction and Information Management Research Needs

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — In a visionary future, Human-Computer Interaction HCI and Information Management IM have the potential to enable humans to better manage their lives through the use...

  12. Computational insight into nitration of human myoglobin.

    Science.gov (United States)

    Lin, Ying-Wu; Shu, Xiao-Gang; Du, Ke-Jie; Nie, Chang-Ming; Wen, Ge-Bo

    2014-10-01

    Protein nitration is an important post-translational modification regulating protein structure and function, especially for heme proteins. Myoglobin (Mb) is an ideal protein model for investigating the structure and function relationship of heme proteins. With limited structural information available for nitrated heme proteins from experiments, we herein performed a molecular dynamics study of human Mb with successive nitration of Tyr103, Tyr146, Trp7 and Trp14. We made a detailed comparison of protein motions, intramolecular contacts and internal cavities of nitrated Mbs with that of native Mb. It showed that although nitration of both Tyr103 and Tyr146 slightly alters the local conformation of heme active site, further nitration of both Trp7 and Trp14 shifts helix A apart from the rest of protein, which results in altered internal cavities and forms a water channel, representing an initial stage of Mb unfolding. The computational study provides an insight into the nitration of heme proteins at an atomic level, which is valuable for understanding the structure and function relationship of heme proteins in non-native states by nitration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Lightness computation by the human visual system

    Science.gov (United States)

    Rudd, Michael E.

    2017-05-01

    A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and spatial integration (attentional) windowing. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2) while spatial integration windowing occurs in cortical area V4 or beyond.
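
    A one-dimensional sketch of the edge-integration computation this family of models uses: lightness is a running sum of weighted log-luminance edge steps, with a different inherent gain for incremental and decremental steps. The two gain values are illustrative assumptions, and the contrast gain control and top-down edge weighting of the full model are omitted.

        import numpy as np

        luminance = np.array([20.0, 20.0, 80.0, 80.0, 40.0, 40.0])  # patch row (cd/m^2)
        steps = np.diff(np.log(luminance))       # log-luminance steps at each edge
        W_INC, W_DEC = 1.0, 1.5                  # decrements weighted more (assumed)
        weights = np.where(steps >= 0, W_INC, W_DEC)
        lightness = np.cumsum(np.concatenate(([0.0], weights * steps)))
        print(lightness)  # relative achromatic values, anchored at the first patch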

  14. Computational Analysis of Human Blood Flow

    Science.gov (United States)

    Panta, Yogendra; Marie, Hazel; Harvey, Mark

    2009-11-01

    Fluid flow modeling with commercially available computational fluid dynamics (CFD) software is widely used to visualize and predict physical phenomena related to various biological systems. In this presentation, a typical human aorta model was analyzed, treating the blood flow as laminar with compliant cardiac muscle wall boundaries. FLUENT, a commercially available finite volume software package, coupled with SolidWorks, a modeling software, was employed for the preprocessing, simulation and postprocessing of all the models. The analysis mainly consists of a fluid-dynamics analysis, including a calculation of the velocity field and pressure distribution in the blood, and a mechanical analysis of the deformation of the tissue and artery in terms of wall shear stress. A number of other models, e.g. T-branches and angle-shaped vessels, were previously analyzed and their results compared for consistency under similar boundary conditions. The velocities, pressures and wall shear stress distributions achieved in all models were as expected given the similar boundary conditions. A three-dimensional, time-dependent analysis of blood flow accounting for the effect of body forces with a compliant boundary was also performed.
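
    As a plausibility check to pair with such simulations, the wall shear stress of steady Poiseuille flow in a straight tube, tau = 4*mu*Q/(pi*R^3), gives an order-of-magnitude baseline. The viscosity, flow rate, and radius below are typical literature values used purely as illustrative inputs, not quantities from this presentation.

        import math

        mu = 3.5e-3       # blood dynamic viscosity, Pa*s (assumed)
        Q = 5.0e-3 / 60   # flow of ~5 L/min converted to m^3/s (assumed)
        R = 0.0125        # aortic radius ~12.5 mm (assumed)

        tau = 4 * mu * Q / (math.pi * R**3)   # Poiseuille wall shear stress
        print(f"wall shear stress ~ {tau:.2f} Pa")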

  15. On the Rhetorical Contract in Human-Computer Interaction.

    Science.gov (United States)

    Wenger, Michael J.

    1991-01-01

    An exploration of the rhetorical contract--i.e., the expectations for appropriate interaction--as it develops in human-computer interaction revealed that direct manipulation interfaces were more likely to establish social expectations. Study results suggest that the social nature of human-computer interactions can be examined with reference to the…

  16. Computer Modeling of Human Delta Opioid Receptor

    Directory of Open Access Journals (Sweden)

    Tatyana Dzimbova

    2013-04-01

    The development of selective agonists of the δ-opioid receptor, as well as models of the interaction of ligands with this receptor, is a subject of increased interest. In the absence of crystal structures of opioid receptors, 3D homology models with different templates have been reported in the literature. The problem is that these models are not available for widespread use. The aims of our study are: (1) to choose, within recently published crystallographic structures, templates for homology modeling of the human δ-opioid receptor (DOR); (2) to evaluate the models with different computational tools; and (3) to identify the most reliable model based on the correlation between docking data and in vitro bioassay results. The enkephalin analogues used as ligands in this study were previously synthesized by our group and their biological activity was evaluated. Several models of DOR were generated using different templates. All these models were evaluated by PROCHECK and MolProbity, and the relationship between docking data and in vitro results was determined. The best correlations received for the tested models of DOR were found between efficacy (erel) of the compounds, calculated from in vitro experiments, and the Fitness scoring function from docking studies. A new model of DOR was generated and evaluated by different approaches. This model has a good GA341 value (0.99, from MODELLER) and good values from PROCHECK (92.6% in most favored regions) and MolProbity (99.5% in favored regions). The scoring function correlates (Pearson r = -0.7368, p-value = 0.0097) with erel of a series of enkephalin analogues calculated from in vitro experiments. This investigation thus suggests a reliable model of DOR. The newly generated model of the DOR receptor could be used further for in silico experiments, enabling faster and more accurate design of selective and effective ligands for the δ-opioid receptor.
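
    The validation step described above, correlating a docking scoring function with measured relative efficacy, reduces to a Pearson correlation. The two arrays below are hypothetical placeholders, not the paper's data; the paper itself reports r = -0.7368, p = 0.0097 for its enkephalin-analogue series.

        import numpy as np
        from scipy.stats import pearsonr

        # Placeholder values for illustration only (not the study's measurements)
        fitness = np.array([42.1, 38.7, 45.3, 30.2, 35.8])  # docking scores
        erel = np.array([0.55, 0.70, 0.40, 0.95, 0.80])     # in vitro efficacy

        r, p = pearsonr(fitness, erel)
        print(f"Pearson r = {r:.3f}, p = {p:.3f}")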

  17. The inhuman computer/the too-human psychotherapist.

    Science.gov (United States)

    Nadelson, T

    1987-10-01

    There has been an understandable rejection by psychotherapists of any natural language processing (computer/human interaction by means of usual language exchange) that is intended to embrace aspects of psychotherapy. For at least twenty years therapists have experimented with computer programs for specific and general purposes, with reported success. This paper describes some of the aspects of artificial intelligence used in computer-mediated or computer-assisted therapy and the utility of such efforts in a general reevaluation of human-to-human psychotherapy.

  18. Brain-Computer Interfaces and Human-Computer Interaction

    NARCIS (Netherlands)

    Tan, Desney S.; Nijholt, Anton

    2010-01-01

    Advances in cognitive neuroscience and brain imaging technologies have started to provide us with the ability to interface directly with the human brain. This ability is made possible through the use of sensors that can monitor some of the physical processes that occur within the brain that correspond…

  20. Simulating Human Cognitive Using Computational Verb Theory

    Institute of Scientific and Technical Information of China (English)

    YANG Tao

    2004-01-01

    Modeling and simulation of a life system is closely connected to the modeling of cognition, especially for advanced life systems. The primary difference between an advanced life system and a digital computer is that the advanced life system consists of a body with mind while a digital computer is only a mind in a formal sense. To model an advanced life system one needs to ground symbols into a body in which a digital computer is embedded. In this paper, a computational verb theory is proposed as a new paradigm for grounding symbols into the outputs of sensors. On one hand, a computational verb can preserve the physical "meanings" of the dynamics of sensor data such that a symbolic system can be used to manipulate physical meanings instead of abstract tokens in the digital computer. On the other hand, the physical meanings of an abstract symbol/token, which is usually an output of a reasoning process in the digital computer, can be restored and fed back to the actuators. Therefore, the computational verb theory bridges the gap between symbols and physical reality from the dynamic cognition perspective.

  1. Human-Computer Interaction (HCI) in Educational Environments: Implications of Understanding Computers as Media.

    Science.gov (United States)

    Berg, Gary A.

    2000-01-01

    Reviews literature in the field of human-computer interaction (HCI) as it applies to educational environments. Topics include the origin of HCI; human factors; usability; computer interface design; goals, operations, methods, and selection (GOMS) models; command language versus direct manipulation; hypertext; visual perception; interface…

  2. Human-Computer Etiquette Cultural Expectations and the Design Implications They Place on Computers and Technology

    CERN Document Server

    Hayes, Caroline C

    2010-01-01

    Written by experts from various fields, this edited collection explores a wide range of issues pertaining to how computers evoke human social expectations. The book illustrates how socially acceptable conventions can strongly impact the effectiveness of human-computer interactions and how to consider such norms in the design of human-computer interfaces. Providing a complete introduction to the design of social responses to computers, the text emphasizes the value of social norms in the development of usable and enjoyable technology. It also describes the role of socially correct behavior in the…

  3. Safety Metrics for Human-Computer Controlled Systems

    Science.gov (United States)

    Leveson, Nancy G; Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  4. Rationale awareness for quality assurance in iterative human computation processes

    CERN Document Server

    Xiao, Lu

    2012-01-01

    Human computation refers to the outsourcing of computation tasks to human workers. It offers a new direction for solving a variety of problems and calls for innovative ways of managing human computation processes. The majority of human computation tasks take a parallel approach, whereas the potential of an iterative approach, i.e., having workers iteratively build on each other's work, has not been sufficiently explored. This study investigates whether and how human workers' awareness of previous workers' rationales affects the performance of the iterative approach in a brainstorming task and a rating task. Rather than viewing this work as a conclusive piece, the author believes that this research endeavor is just the beginning of a new research focus that examines and supports meta-cognitive processes in crowdsourcing activities.

  5. Pedagogical Strategies for Human and Computer Tutoring.

    Science.gov (United States)

    Reiser, Brian J.

    The pedagogical strategies of human tutors in problem solving domains are described and the possibility of incorporating these techniques into computerized tutors is examined. GIL (Graphical Instruction in LISP), an intelligent tutoring system for LISP programming, is compared to human tutors teaching the same material in order to identify how the…

  6. Shared resource control between human and computer

    Science.gov (United States)

    Hendler, James; Wilson, Reid

    1989-01-01

    The advantages of an AI system of actively monitoring human control of a shared resource (such as a telerobotic manipulator) are presented. A system is described in which a simple AI planning program gains efficiency by monitoring human actions and recognizing when the actions cause a change in the system's assumed state of the world. This enables the planner to recognize when an interaction occurs between human actions and system goals, and allows maintenance of an up-to-date knowledge of the state of the world and thus informs the operator when human action would undo a goal achieved by the system, when an action would render a system goal unachievable, and efficiently replans the establishment of goals after human intervention.

  7. Computational Intelligence in a Human Brain Model

    Directory of Open Access Journals (Sweden)

    Viorel Gaftea

    2016-06-01

    Full Text Available This paper focuses on the current trends in brain research domain and the current stage of development of research for software and hardware solutions, communication capabilities between: human beings and machines, new technologies, nano-science and Internet of Things (IoT devices. The proposed model for Human Brain assumes main similitude between human intelligence and the chess game thinking process. Tactical & strategic reasoning and the need to follow the rules of the chess game, all are very similar with the activities of the human brain. The main objective for a living being and the chess game player are the same: securing a position, surviving and eliminating the adversaries. The brain resolves these goals, and more, the being movement, actions and speech are sustained by the vital five senses and equilibrium. The chess game strategy helps us understand the human brain better and easier replicate in the proposed ‘Software and Hardware’ SAH Model.

  8. Computational Intelligence in a Human Brain Model

    Directory of Open Access Journals (Sweden)

    Viorel Gaftea

    2016-06-01

    Full Text Available This paper focuses on the current trends in brain research domain and the current stage of development of research for software and hardware solutions, communication capabilities between: human beings and machines, new technologies, nano-science and Internet of Things (IoT devices. The proposed model for Human Brain assumes main similitude between human intelligence and the chess game thinking process. Tactical & strategic reasoning and the need to follow the rules of the chess game, all are very similar with the activities of the human brain. The main objective for a living being and the chess game player are the same: securing a position, surviving and eliminating the adversaries. The brain resolves these goals, and more, the being movement, actions and speech are sustained by the vital five senses and equilibrium. The chess game strategy helps us understand the human brain better and easier replicate in the proposed ‘Software and Hardware’ SAH Model.

  9. A Glance into the Future of Human Computer Interactions

    CERN Document Server

    Farooq, Umer; Nazir, Sohail

    2011-01-01

    Computers have a direct impact on our lives nowadays. Human's interaction with the computer has modified with the passage of time as improvement in technology occurred the better the human computer interaction became. Today we are facilitated by the operating system that has reduced all the complexity of hardware and we undergo our computation in a very convenient way irrespective of the process occurring at the hardware level. Though the human computer interaction has improved but it's not done yet. If we come to the future the computer's role in our lives would be a lot more rather our life would be of the artificial intelligence. In our future the biggest resource would be component of time and wasting time for a key board entry or a mouse input would be unbearable so the need would be of the computer interaction environment that along with the complexity reduction also minimizes the time wastage in the human computer interaction. Accordingly in our future the computation would also be increased it would n...

  10. A Glance into the Future of Human Computer Interaction

    CERN Document Server

    Farooq, Umer; Nazir, Sohail

    2011-01-01

    Computers have a direct impact on our lives nowadays. Human's interaction with the computer has modified with the passage of time as improvement in technology occurred the better the human computer interaction became. Today we are facilitated by the operating system that has reduced all the complexity of hardware and we undergo our computation in a very convenient way irrespective of the process occurring at the hardware level. Though the human computer interaction has improved but it's not done yet. If we come to the future the computer's role in our lives would be a lot more rather our life would be of the artificial intelligence. In our future the biggest resource would be component of time and wasting time for a key board entry or a mouse input would be unbearable so the need would be of the computer interaction environment that along with the complexity reduction also minimizes the time wastage in the human computer interaction. Accordingly in our future the computation would also be increased it would n...

  11. Can the human brain do quantum computing?

    Science.gov (United States)

    Rocha, A F; Massad, E; Coutinho, F A B

    2004-01-01

    The electrical membrane properties have been the key issues in the understanding of the cerebral physiology for more than almost two centuries. But, molecular neurobiology has now discovered that biochemical transactions play an important role in neuronal computations. Quantum computing (QC) is becoming a reality both from the theoretical point of view as well as from practical applications. Quantum mechanics is the most accurate description at atomic level and it lies behind all chemistry that provides the basis for biology ... maybe the magic of entanglement is also crucial for life. The purpose of the present paper is to discuss the dendrite spine as a quantum computing device, taking into account what is known about the physiology of the glutamate receptors and the cascade of biochemical transactions triggered by the glutamate binding to these receptors.

  12. Human-computer interaction and management information systems

    CERN Document Server

    Galletta, Dennis F

    2014-01-01

    ""Human-Computer Interaction and Management Information Systems: Applications"" offers state-of-the-art research by a distinguished set of authors who span the MIS and HCI fields. The original chapters provide authoritative commentaries and in-depth descriptions of research programs that will guide 21st century scholars, graduate students, and industry professionals. Human-Computer Interaction (or Human Factors) in MIS is concerned with the ways humans interact with information, technologies, and tasks, especially in business, managerial, organizational, and cultural contexts. It is distinctiv

  13. STUDY ON HUMAN-COMPUTER SYSTEM FOR STABLE VIRTUAL DISASSEMBLY

    Institute of Scientific and Technical Information of China (English)

    Guan Qiang; Zhang Shensheng; Liu Jihong; Cao Pengbing; Zhong Yifang

    2003-01-01

    The cooperative work between human being and computer based on virtual reality (VR) is investigated to plan the disassembly sequences more efficiently. A three-layer model of human-computer cooperative virtual disassembly is built, and the corresponding human-computer system for stable virtual disassembly is developed. In this system, an immersive and interactive virtual disassembly environment has been created to provide planners with a more visual working scene. For cooperative disassembly, an intelligent module of stability analysis of disassembly operations is embedded into the human-computer system to assist planners to implement disassembly tasks better. The supporting matrix for stability analysis of disassembly operations is defined and the method of stability analysis is detailed. Based on the approach, the stability of any disassembly operation can be analyzed to instruct the manual virtual disassembly. At last, a disassembly case in the virtual environment is given to prove the validity of above ideas.

  14. Cognition beyond the brain computation, interactivity and human artifice

    CERN Document Server

    Cowley, Stephen J

    2013-01-01

    Arguing that a collective dimension has given cognitive flexibility to human intelligence, this book shows that traditional cognitive psychology underplays the role of bodies, dialogue, diagrams, tools, talk, customs, habits, computers and cultural practices.

  15. Computer games as a new ontological reality of human existence

    Directory of Open Access Journals (Sweden)

    Maksim Shymeiko

    2015-05-01

    Full Text Available The article considers the ontological dimension of the phenomenon of computer games and their role in the perception of modern man in the world and himself. Describes the characteristic features of the ontological computer game as a virtual world that has an intangible character. Reveals the positive and negative features of computer games in the formation of the meaning of human life.

  16. Use of Computers in Human Factors Engineering

    Science.gov (United States)

    1974-11-01

    SENSES (PHYSIOLOGY), THERMOPLASTIC RESINS, VISUAL ACUITY (U)R RESEARCH CONCERNS DETERMINATION OF THE INFORMATION PRESENTATION REQUIREMENTS OF HUMAN DATA...THE GEOMETRY OF THE wORK STATION, IS CURRENTLY BEING DEVELOPED. IT IS CALLED COMBIMAN, AN ACRONYM FOR COMPUTERIZED BIOMECHANICAL MAN- MODELo COMBIMAN

  17. [Affective computing--a mysterious tool to explore human emotions].

    Science.gov (United States)

    Li, Xin; Li, Honghong; Dou, Yi; Hou, Yongjie; Li, Changwu

    2013-12-01

    Perception, affection and consciousness are basic psychological functions of human being. Affection is the subjective reflection of different kinds of objects. The foundation of human being's thinking is constituted by the three basic functions. Affective computing is an effective tool of revealing the affectiveness of human being in order to understand the world. Our research of affective computing focused on the relation, the generation and the influent factors among different affections. In this paper, the affective mechanism, the basic theory of affective computing, is studied, the method of acquiring and recognition of affective information is discussed, and the application of affective computing is summarized as well, in order to attract more researchers into this working area.

  18. Proactive human-computer collaboration for information discovery

    Science.gov (United States)

    DiBona, Phil; Shilliday, Andrew; Barry, Kevin

    2016-05-01

    Lockheed Martin Advanced Technology Laboratories (LM ATL) is researching methods, representations, and processes for human/autonomy collaboration to scale analysis and hypotheses substantiation for intelligence analysts. This research establishes a machinereadable hypothesis representation that is commonsensical to the human analyst. The representation unifies context between the human and computer, enabling autonomy in the form of analytic software, to support the analyst through proactively acquiring, assessing, and organizing high-value information that is needed to inform and substantiate hypotheses.

  19. Unmanned Surface Vehicle Human-Computer Interface for Amphibious Operations

    Science.gov (United States)

    2013-08-01

    FIGURES Figure 1. MOCU Baseline HCI using Both Aerial Photo and Digital Nautical Chart ( DNC ) Maps to Control and Monitor Land, Sea, and Air...Action DNC Digital Nautical Chart FNC Future Naval Capability HCI Human-Computer Interface HRI Human-Robot Interface HSI Human-Systems Integration...Digital Nautical Chart ( DNC ) Maps to Control and Monitor Land, Sea, and Air Vehicles. 3.2 BASELINE MOCU HCI The Baseline MOCU interface is a tiled

  20. Studying Collective Human Decision Making and Creativity with Evolutionary Computation.

    Science.gov (United States)

    Sayama, Hiroki; Dionne, Shelley D

    2015-01-01

    We report a summary of our interdisciplinary research project "Evolutionary Perspective on Collective Decision Making" that was conducted through close collaboration between computational, organizational, and social scientists at Binghamton University. We redefined collective human decision making and creativity as evolution of ecologies of ideas, where populations of ideas evolve via continual applications of evolutionary operators such as reproduction, recombination, mutation, selection, and migration of ideas, each conducted by participating humans. Based on this evolutionary perspective, we generated hypotheses about collective human decision making, using agent-based computer simulations. The hypotheses were then tested through several experiments with real human subjects. Throughout this project, we utilized evolutionary computation (EC) in non-traditional ways-(1) as a theoretical framework for reinterpreting the dynamics of idea generation and selection, (2) as a computational simulation model of collective human decision-making processes, and (3) as a research tool for collecting high-resolution experimental data on actual collaborative design and decision making from human subjects. We believe our work demonstrates untapped potential of EC for interdisciplinary research involving human and social dynamics.

  1. HOME COMPUTER USE AND THE DEVELOPMENT OF HUMAN CAPITAL*

    Science.gov (United States)

    Malamud, Ofer; Pop-Eleches, Cristian

    2012-01-01

    This paper uses a regression discontinuity design to estimate the effect of home computers on child and adolescent outcomes by exploiting a voucher program in Romania. Our main results indicate that home computers have both positive and negative effects on the development of human capital. Children who won a voucher to purchase a computer had significantly lower school grades but show improved computer skills. There is also some evidence that winning a voucher increased cognitive skills, as measured by Raven’s Progressive Matrices. We do not find much evidence for an effect on non-cognitive outcomes. Parental rules regarding homework and computer use attenuate the effects of computer ownership, suggesting that parental monitoring and supervision may be important mediating factors. PMID:22719135

  2. HOME COMPUTER USE AND THE DEVELOPMENT OF HUMAN CAPITAL.

    Science.gov (United States)

    Malamud, Ofer; Pop-Eleches, Cristian

    2011-05-01

    This paper uses a regression discontinuity design to estimate the effect of home computers on child and adolescent outcomes by exploiting a voucher program in Romania. Our main results indicate that home computers have both positive and negative effects on the development of human capital. Children who won a voucher to purchase a computer had significantly lower school grades but show improved computer skills. There is also some evidence that winning a voucher increased cognitive skills, as measured by Raven's Progressive Matrices. We do not find much evidence for an effect on non-cognitive outcomes. Parental rules regarding homework and computer use attenuate the effects of computer ownership, suggesting that parental monitoring and supervision may be important mediating factors.

  3. Speech Dialogue with Facial Displays Multimodal Human-Computer Conversation

    CERN Document Server

    Nagao, K; Nagao, Katashi; Takeuchi, Akikazu

    1994-01-01

    Human face-to-face conversation is an ideal model for human-computer dialogue. One of the major features of face-to-face communication is its multiplicity of communication channels that act on multiple modalities. To realize a natural multimodal dialogue, it is necessary to study how humans perceive information and determine the information to which humans are sensitive. A face is an independent communication channel that conveys emotional and conversational signals, encoded as facial expressions. We have developed an experimental system that integrates speech dialogue and facial animation, to investigate the effect of introducing communicative facial expressions as a new modality in human-computer conversation. Our experiments have shown that facial expressions are helpful, especially upon first contact with the system. We have also discovered that featuring facial expressions at an early stage improves subsequent interaction.

  4. The UK Human Genome Mapping Project online computing service.

    Science.gov (United States)

    Rysavy, F R; Bishop, M J; Gibbs, G P; Williams, G W

    1992-04-01

    This paper presents an overview of computing and networking facilities developed by the Medical Research Council to provide online computing support to the Human Genome Mapping Project (HGMP) in the UK. The facility is connected to a number of other computing facilities in various centres of genetics and molecular biology research excellence, either directly via high-speed links or through national and international wide-area networks. The paper describes the design and implementation of the current system, a 'client/server' network of Sun, IBM, DEC and Apple servers, gateways and workstations. A short outline of online computing services currently delivered by this system to the UK human genetics research community is also provided. More information about the services and their availability could be obtained by a direct approach to the UK HGMP-RC.

  5. Linguistics in the digital humanities: (computational corpus linguistics

    Directory of Open Access Journals (Sweden)

    Kim Ebensgaard Jensen

    2014-12-01

    Full Text Available Corpus linguistics has been closely intertwined with digital technology since the introduction of university computer mainframes in the 1960s. Making use of both digitized data in the form of the language corpus and computational methods of analysis involving concordancers and statistics software, corpus linguistics arguably has a place in the digital humanities. Still, it remains obscure and fi gures only sporadically in the literature on the digital humanities. Th is article provides an overview of the main principles of corpus linguistics and the role of computer technology in relation to data and method and also off ers a bird's-eye view of the history of corpus linguistics with a focus on its intimate relationship with digital technology and how digital technology has impacted the very core of corpus linguistics and shaped the identity of the corpus linguist. Ultimately, the article is oriented towards an acknowledgment of corpus linguistics' alignment with the digital humanities.

  6. A Research Roadmap for Computation-Based Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermalhydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  7. Supporting Negotiation Behavior with Haptics-Enabled Human-Computer Interfaces

    OpenAIRE

    Küçükyılmaz, Ayşe; Sezgin, Tevfik Metin; Başdoğan, Çağatay

    2012-01-01

    An active research goal for human-computer interaction is to allow humans to communicate with computers in an intuitive and natural fashion, especially in real-life interaction scenarios. One approach that has been advocated to achieve this has been to build computer systems with human-like qualities and capabilities. In this paper, we present insight on how human-computer interaction can be enriched by employing the computers with behavioral patterns that naturally appear in human-human nego...

  8. From humans to computers cognition through visual perception

    CERN Document Server

    Alexandrov, Viktor Vasilievitch

    1991-01-01

    This book considers computer vision to be an integral part of the artificial intelligence system. The core of the book is an analysis of possible approaches to the creation of artificial vision systems, which simulate human visual perception. Much attention is paid to the latest achievements in visual psychology and physiology, the description of the functional and structural organization of the human perception mechanism, the peculiarities of artistic perception and the expression of reality. Computer vision models based on these data are investigated. They include the processes of external d

  9. Human computer interaction issues in Clinical Trials Management Systems.

    Science.gov (United States)

    Starren, Justin B; Payne, Philip R O; Kaufman, David R

    2006-01-01

    Clinical trials increasingly rely upon web-based Clinical Trials Management Systems (CTMS). As with clinical care systems, Human Computer Interaction (HCI) issues can greatly affect the usefulness of such systems. Evaluation of the user interface of one web-based CTMS revealed a number of potential human-computer interaction problems, in particular, increased workflow complexity associated with a web application delivery model and potential usability problems resulting from the use of ambiguous icons. Because these design features are shared by a large fraction of current CTMS, the implications extend beyond this individual system.

  10. A Human-Centred Tangible approach to learning Computational Thinking

    Directory of Open Access Journals (Sweden)

    Tommaso Turchi

    2016-08-01

    Full Text Available Computational Thinking has recently become a focus of many teaching and research domains; it encapsulates those thinking skills integral to solving complex problems using a computer, thus being widely applicable in our society. It is influencing research across many disciplines and also coming into the limelight of education, mostly thanks to public initiatives such as the Hour of Code. In this paper we present our arguments for promoting Computational Thinking in education through the Human-centred paradigm of Tangible End-User Development, namely by exploiting objects whose interactions with the physical environment are mapped to digital actions performed on the system.

  11. A computational model of the human hand 93-ERI-053

    Energy Technology Data Exchange (ETDEWEB)

    Hollerbach, K.; Axelrod, T.

    1996-03-01

    The objectives of the Computational Hand Modeling project were to prove the feasibility of the Laboratory`s NIKE3D finite element code to orthopaedic problems. Because of the great complexity of anatomical structures and the nonlinearity of their behavior, we have focused on a subset of joints of the hand and lower extremity and have developed algorithms to model their behavior. The algorithms developed here solve fundamental problems in computational biomechanics and can be expanded to describe any other joints of the human body. This kind of computational modeling has never successfully been attempted before, due in part to a lack of biomaterials data and a lack of computational resources. With the computational resources available at the National Laboratories and the collaborative relationships we have established with experimental and other modeling laboratories, we have been in a position to pursue our innovative approach to biomechanical and orthopedic modeling.

  12. Interactive Evolutionary Computation for Analyzing Human Awareness Mechanisms

    Directory of Open Access Journals (Sweden)

    Hideyuki Takagi

    2012-01-01

    Full Text Available We discuss the importance of establishing awareness science and show the idea of using interactive evolutionary computation (IEC as a tool for analyzing awareness mechanism and making awareness models. First, we describe the importance of human factors in computational intelligence and that IEC is one of approaches for the so-called humanized computational intelligence. Second, we show examples that IEC is used as an analysis tool for human science. As analyzing human awareness mechanism is in this kind of analyzing human characteristics and capabilities, IEC may be able to be used for this purpose. Based on this expectation, we express one idea for analyzing the awareness mechanism. This idea is to make an equivalent model of an IEC user using a learning model and find latent variables that connect inputs and outputs of the user model and that help to understand or explain the inputs-outputs relationship. Although there must be several definitions of awareness, this idea is based on one definition that awareness is to find out unknown variables that helps our understanding. If we establish a method for finding the latent variables automatically, we can realize an awareness model in computer.

  13. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    Science.gov (United States)

    Bejczy, Antal K.

    1995-01-01

    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

  14. A Software Framework for Multimodal Human-Computer Interaction Systems

    NARCIS (Netherlands)

    Shen, Jie; Pantic, Maja

    2009-01-01

    This paper describes a software framework we designed and implemented for the development and research in the area of multimodal human-computer interface. The proposed framework is based on publish / subscribe architecture, which allows developers and researchers to conveniently configure, test and

  15. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    Haan, de G.; Veer, van der G.C.; Vliet, van J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in hum

  16. Computed tomography of the human developing anterior skull base

    NARCIS (Netherlands)

    J. van Loosen (J.); A.I.J. Klooswijk (A. I J); D. van Velzen (D.); C.D.A. Verwoerd (Carel)

    1990-01-01

    markdownabstractAbstract The ossification of the anterior skull base, especially the lamina cribrosa, has been studied by computed tomography and histopathology. Sixteen human fetuses, (referred to our laboratory for pathological examination after spontaneous abortion between 18 and 32 weeks of ge

  17. CHI '13 Extended Abstracts on Human Factors in Computing Systems

    DEFF Research Database (Denmark)

    The CHI Papers and Notes program is continuing to grow along with many of our sister conferences. We are pleased that CHI is still the leading venue for research in human-computer interaction. CHI 2013 continued the use of subcommittees to manage the review process. Authors selected the subcommit...

  18. A Software Framework for Multimodal Human-Computer Interaction Systems

    NARCIS (Netherlands)

    Shen, Jie; Pantic, Maja

    2009-01-01

    This paper describes a software framework we designed and implemented for the development and research in the area of multimodal human-computer interface. The proposed framework is based on publish / subscribe architecture, which allows developers and researchers to conveniently configure, test and

  19. Studying Collective Human Decision Making and Creativity with Evolutionary Computation

    OpenAIRE

    Sayama, Hiroki; Dionne, Shelley D.

    2014-01-01

    We report a summary of our interdisciplinary research project "Evolutionary Perspective on Collective Decision Making" that was conducted through close collaboration between computational, organizational and social scientists at Binghamton University. We redefined collective human decision making and creativity as evolution of ecologies of ideas, where populations of ideas evolve via continual applications of evolutionary operators such as reproduction, recombination, mutation, selection, and...

  20. Homo ludens in the loop playful human computation systems

    CERN Document Server

    Krause, Markus

    2014-01-01

    The human mind is incredible. It solves problems with ease that will elude machines even for the next decades. This book explores what happens when humans and machines work together to solve problems machines cannot yet solve alone. It explains how machines and computers can work together and how humans can have fun helping to face some of the most challenging problems of artificial intelligence. In this book, you will find designs for games that are entertaining and yet able to collect data to train machines for complex tasks such as natural language processing or image understanding. You wil

  1. Computational Fluid and Particle Dynamics in the Human Respiratory System

    CERN Document Server

    Tu, Jiyuan; Ahmadi, Goodarz

    2013-01-01

    Traditional research methodologies in the human respiratory system have always been challenging due to their invasive nature. Recent advances in medical imaging and computational fluid dynamics (CFD) have accelerated this research. This book compiles and details recent advances in the modelling of the respiratory system for researchers, engineers, scientists, and health practitioners. It breaks down the complexities of this field and provides both students and scientists with an introduction and starting point to the physiology of the respiratory system, fluid dynamics and advanced CFD modeling tools. In addition to a brief introduction to the physics of the respiratory system and an overview of computational methods, the book contains best-practice guidelines for establishing high-quality computational models and simulations. Inspiration for new simulations can be gained through innovative case studies as well as hands-on practice using pre-made computational code. Last but not least, students and researcher...

  2. A novel polar-based human face recognition computational model

    Directory of Open Access Journals (Sweden)

    Y. Zana

    2009-07-01

    Full Text Available Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB spatial patterns. We measured human recognition performance of FB filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1- type neurons and later analyzed globally for their content of FB components. In general, there was a higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency band width and a narrower response range. The response pattern of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.

  3. Neuromolecular computing: a new approach to human brain evolution.

    Science.gov (United States)

    Wallace, R; Price, H

    1999-09-01

    Evolutionary approaches in human cognitive neurobiology traditionally emphasize macroscopic structures. It may soon be possible to supplement these studies with models of human information-processing of the molecular level. Thin-film, simulation, fluorescence microscopy, and high-resolution X-ray crystallographic studies provide evidence for transiently organized neural membrane molecular systems with possible computational properties. This review article examines evidence for hydrophobic-mismatch molecular interactions within phospholipid microdomains of a neural membrane bilayer. It is proposed that these interactions are a massively parallel algorithm which can rapidly compute near-optimal solutions to complex cognitive and physiological problems. Coupling of microdomain activity to permenant ion movements at ligand-gated and voltage-gated channels permits the conversion of molecular computations into neuron frequency codes. Evidence for microdomain transport of proteins to specific locations within the bilayer suggests that neuromolecular computation may be under some genetic control and thus modifiable by natural selection. A possible experimental approach for examining evolutionary changes in neuromolecular computation is briefly discussed.

  4. Hand Gesture and Neural Network Based Human Computer Interface

    Directory of Open Access Journals (Sweden)

    Aekta Patel

    2014-06-01

    Full Text Available Computer is used by every people either at their work or at home. Our aim is to make computers that can understand human language and can develop a user friendly human computer interfaces (HCI. Human gestures are perceived by vision. The research is for determining human gestures to create an HCI. Coding of these gestures into machine language demands a complex programming algorithm. In this project, We have first detected, recognized and pre-processing the hand gestures by using General Method of recognition. Then We have found the recognized image’s properties and using this, mouse movement, click and VLC Media player controlling are done. After that we have done all these functions thing using neural network technique and compared with General recognition method. From this we can conclude that neural network technique is better than General Method of recognition. In this, I have shown the results based on neural network technique and comparison between neural network method & general method.

  5. Human -Computer Interface using Gestures based on Neural Network

    Directory of Open Access Journals (Sweden)

    Aarti Malik

    2014-10-01

    Full Text Available - Gestures are powerful tools for non-verbal communication. Human computer interface (HCI is a growing field which reduces the complexity of interaction between human and machine in which gestures are used for conveying information or controlling the machine. In the present paper, static hand gestures are utilized for this purpose. The paper presents a novel technique of recognizing hand gestures i.e. A-Z alphabets, 0-9 numbers and 6 additional control signals (for keyboard and mouse control by extracting various features of hand ,creating a feature vector table and training a neural network. The proposed work has a recognition rate of 99%. .

  6. Human-Computer Interaction, Tourism and Cultural Heritage

    Science.gov (United States)

    Cipolla Ficarra, Francisco V.

    We present a state of the art of the human-computer interaction aimed at tourism and cultural heritage in some cities of the European Mediterranean. In the work an analysis is made of the main problems deriving from training understood as business and which can derail the continuous growth of the HCI, the new technologies and tourism industry. Through a semiotic and epistemological study the current mistakes in the context of the interrelations of the formal and factual sciences will be detected and also the human factors that have an influence on the professionals devoted to the development of interactive systems in order to safeguard and boost cultural heritage.

  7. A computer simulation approach to measurement of human control strategy

    Science.gov (United States)

    Green, J.; Davenport, E. L.; Engler, H. F.; Sears, W. E., III

    1982-01-01

    Human control strategy is measured through use of a psychologically-based computer simulation which reflects a broader theory of control behavior. The simulation is called the human operator performance emulator, or HOPE. HOPE was designed to emulate control learning in a one-dimensional preview tracking task and to measure control strategy in that setting. When given a numerical representation of a track and information about current position in relation to that track, HOPE generates positions for a stick controlling the cursor to be moved along the track. In other words, HOPE generates control stick behavior corresponding to that which might be used by a person learning preview tracking.

  8. Visual Interpretation Of Hand Gestures For Human Computer Interaction

    Directory of Open Access Journals (Sweden)

    M.S.Sahane

    2014-01-01

    Full Text Available The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI. In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This discussion is organized on the basis of the method used for modeling, analyzing, and recognizing gestures. We propose pointing gesture-based large display interaction using a depth camera. A user interacts with applications for large display by using pointing gestures with the barehand. The calibration between large display and depth camera can be automatically performed by using RGB-D camera.. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition. We discuss directions of future research in gesture recognition, including its integration with other natural modes of human computer interaction.

  9. Computer aided systems human engineering: A hypermedia tool

    Science.gov (United States)

    Boff, Kenneth R.; Monk, Donald L.; Cody, William J.

    1992-01-01

    The Computer Aided Systems Human Engineering (CASHE) system, Version 1.0, is a multimedia ergonomics database on CD-ROM for the Apple Macintosh II computer, being developed for use by human system designers, educators, and researchers. It will initially be available on CD-ROM and will allow users to access ergonomics data and models stored electronically as text, graphics, and audio. The CASHE CD-ROM, Version 1.0 will contain the Boff and Lincoln (1988) Engineering Data Compendium, MIL-STD-1472D and a unique, interactive simulation capability, the Perception and Performance Prototyper. Its features also include a specialized data retrieval, scaling, and analysis capability and the state of the art in information retrieval, browsing, and navigation.

  10. The Human-Computer Domain Relation in UX Models

    DEFF Research Database (Denmark)

    Clemmensen, Torkil

    This paper argues that the conceptualization of the human, the computer and the domain of use in competing lines of UX research have problematic similarities and superficial differences. The paper qualitatively analyses concepts and models in five research papers that together represent two...... influential lines of UX research: aesthetics and temporal UX, and two use situations: using a website and starting to use a smartphone. The results suggest that the two lines of UX research share a focus on users’ evaluative judgments of technology, both focuses on product qualities rather than activity...... domains, give little details about users, and treat human-computer interaction as perception. The conclusion gives similarities and differences between the approaches to UX. The implications for theory building are indicated....

  11. Developing a computational model of human hand kinetics using AVS

    Energy Technology Data Exchange (ETDEWEB)

    Abramowitz, Mark S. [State Univ. of New York, Binghamton, NY (United States)

    1996-05-01

    As part of an ongoing effort to develop a finite element model of the human hand at the Institute for Scientific Computing Research (ISCR), this project extended existing computational tools for analyzing and visualizing hand kinetics. These tools employ a commercial, scientific visualization package called AVS. FORTRAN and C code, originally written by David Giurintano of the Gillis W. Long Hansen`s Disease Center, was ported to a different computing platform, debugged, and documented. Usability features were added and the code was made more modular and readable. When the code is used to visualize bone movement and tendon paths for the thumb, graphical output is consistent with expected results. However, numerical values for forces and moments at the thumb joints do not yet appear to be accurate enough to be included in ISCR`s finite element model. Future work includes debugging the parts of the code that calculate forces and moments and verifying the correctness of these values.

  12. Human-computer interaction: psychology as a science of design.

    Science.gov (United States)

    Carroll, J M

    1997-01-01

    Human-computer interaction (HCI) study is the region of intersection between psychology and the social sciences, on the one hand, and computer science and technology, on the other. HCI researchers analyze and design specific user interface technologies (e.g. pointing devices). They study and improve the processes of technology development (e.g. task analysis, design rationale). They develop and evaluate new applications of technology (e.g. word processors, digital libraries). Throughout the past two decades, HCI has progressively integrated its scientific concerns with the engineering goal of improving the usability of computer systems and applications, which has resulted in a body of technical knowledge and methodology. HCI continues to provide a challenging test domain for applying and developing psychological and social theory in the context of technology development and use.

  13. Human Computer Interface Design Criteria. Volume 1. User Interface Requirements

    Science.gov (United States)

    2010-03-19

    2 entitled Human Computer Interface ( HCI )Design Criteria Volume 1: User Interlace Requirements which contains the following major changes from...MISSILE SYSTEMS CENTER Air Force Space Command 483 N. Aviation Blvd. El Segundo, CA 90245 4. This standard has been approved for use on all Space and...and efficient model of how the system works and can generalize this knowledge to other systems. According to Mayhew in Principles and Guidelines in

  14. Human-computer systems interaction backgrounds and applications 3

    CERN Document Server

    Kulikowski, Juliusz; Mroczek, Teresa; Wtorek, Jerzy

    2014-01-01

    This book contains an interesting and state-of the art collection of papers on the recent progress in Human-Computer System Interaction (H-CSI). It contributes the profound description of the actual status of the H-CSI field and also provides a solid base for further development and research in the discussed area. The contents of the book are divided into the following parts: I. General human-system interaction problems; II. Health monitoring and disabled people helping systems; and III. Various information processing systems. This book is intended for a wide audience of readers who are not necessarily experts in computer science, machine learning or knowledge engineering, but are interested in Human-Computer Systems Interaction. The level of particular papers and specific spreading-out into particular parts is a reason why this volume makes fascinating reading. This gives the reader a much deeper insight than he/she might glean from research papers or talks at conferences. It touches on all deep issues that ...

  15. Computational Hemodynamic Simulation of Human Circulatory System under Altered Gravity

    Science.gov (United States)

    Kim. Chang Sung; Kiris, Cetin; Kwak, Dochan

    2003-01-01

    A computational hemodynamics approach is presented to simulate the blood flow through the human circulatory system under altered gravity conditions. Numerical techniques relevant to hemodynamics issues are introduced to non-Newtonian modeling for flow characteristics governed by red blood cells, distensible wall motion due to the heart pulse, and capillary bed modeling for outflow boundary conditions. Gravitational body force terms are added to the Navier-Stokes equations to study the effects of gravity on internal flows. Six-type gravity benchmark problems are originally presented to provide the fundamental understanding of gravitational effects on the human circulatory system. For code validation, computed results are compared with steady and unsteady experimental data for non-Newtonian flows in a carotid bifurcation model and a curved circular tube, respectively. This computational approach is then applied to the blood circulation in the human brain as a target problem. A three-dimensional, idealized Circle of Willis configuration is developed with minor arteries truncated based on anatomical data. Demonstrated is not only the mechanism of the collateral circulation but also the effects of gravity on the distensible wall motion and resultant flow patterns.

  16. Criteria of Human-computer Interface Design for Computer Assisted Surgery Systems

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jian-guo; LIN Yan-ping; WANG Cheng-tao; LIU Zhi-hong; YANG Qing-ming

    2008-01-01

    In recent years, computer assisted surgery (CAS) systems become more and more common in clinical practices, but few specific design criteria have been proposed for human-computer interface (HCI) in CAS systems. This paper tried to give universal criteria of HCI design for CAS systems through introduction of demonstration application, which is total knee replacement (TKR) with a nonimage-based navigation system.A typical computer assisted process can be divided into four phases: the preoperative planning phase, the intraoperative registration phase, the intraoperative navigation phase and finally the postoperative assessment phase. The interface design for four steps is described respectively in the demonstration application. These criteria this paper summarized can be useful to software developers to achieve reliable and effective interfaces for new CAS systems more easily.

  17. Issues in human/computer control of dexterous remote hands

    Science.gov (United States)

    Salisbury, K.

    1987-01-01

    Much research on dexterous robot hands has been aimed at the design and control problems associated with their autonomous operation, while relatively little research has addressed the problem of direct human control. It is likely that these two modes can be combined in a complementary manner yielding more capability than either alone could provide. While many of the issues in mixed computer/human control of dexterous hands parallel those found in supervisory control of traditional remote manipulators, the unique geometry and capabilities of dexterous hands pose many new problems. Among these are the control of redundant degrees of freedom, grasp stabilization and specification of non-anthropomorphic behavior. An overview is given of progress made at the MIT AI Laboratory in control of the Salisbury 3 finger hand, including experiments in grasp planning and manipulation via controlled slip. It is also suggested how we might introduce human control into the process at a variety of functional levels.

  18. Advancements in Violin-Related Human-Computer Interaction

    DEFF Research Database (Denmark)

    Overholt, Daniel

    2014-01-01

    Finesse is required while performing with many traditional musical instruments, as they are extremely responsive to human inputs. The violin is specifically examined here, as it excels at translating a performer’s gestures into sound in manners that evoke a wide range of affective qualities...... of human intelligence and emotion is at the core of the Musical Interface Technology Design Space, MITDS. This is a framework that endeavors to retain and enhance such traits of traditional instruments in the design of interactive live performance interfaces. Utilizing the MITDS, advanced Human......-Computer Interaction technologies for the violin are developed in order to allow musicians to explore new methods of creating music. Through this process, the aim is to provide musicians with control systems that let them transcend the interface itself, and focus on musically compelling performances....

  19. Computed tomography of human joints and radioactive waste drums

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, E; Bernardi, R; Hollerbach, K; Logan, C; Martz, H; Roberson, G P

    1999-06-01

    X- and gamma-ray imaging techniques in nondestructive evaluation (NDE) and assay (NDA) have been increasing use in an array of industrial, environmental, military, and medical applications. Much of this growth in recent years is attributed to the rapid development of computed tomography (CT) and the use of NDE throughout the life-cycle of a product. Two diverse examples of CT are discussed. (1) The computational approach to normal joint kinematics and prosthetic joint analysis offers an opportunity to evaluate and improve prosthetic human joint replacements before they are manufactured or surgically implanted. Computed tomography data from scanned joints are segmented, resulting in the identification of bone and other tissues of interest, with emphasis on the articular surfaces. (2) They are developing NDE and NDE techniques to analyze closed waste drums accurately and quantitatively. Active and passive computed tomography (A and PCT) is a comprehensive and accurate gamma-ray NDA method that can identify all detectable radioisotopes present in a container and measure their radioactivity.

  20. Gesture controlled human-computer interface for the disabled.

    Science.gov (United States)

    Szczepaniak, Oskar M; Sawicki, Dariusz J

    2017-02-28

    The possibility of using a computer by a disabled person is one of the difficult problems of the human-computer interaction (HCI), while the professional activity (employment) is one of the most important factors affecting the quality of life, especially for disabled people. The aim of the project has been to propose a new HCI system that would allow for resuming employment for people who have lost the possibility of a standard computer operation. The basic requirement was to replace all functions of a standard mouse without the need of performing precise hand movements and using fingers. The Microsoft's Kinect motion controller had been selected as a device which would recognize hand movements. Several tests were made in order to create optimal working environment with the new device. The new communication system consisted of the Kinect device and the proper software had been built. The proposed system was tested by means of the standard subjective evaluations and objective metrics according to the standard ISO 9241-411:2012. The overall rating of the new HCI system shows the acceptance of the solution. The objective tests show that although the new system is a bit slower, it may effectively replace the computer mouse. The new HCI system fulfilled its task for a specific disabled person. This resulted in the ability to return to work. Additionally, the project confirmed the possibility of effective but nonstandard use of the Kinect device. Med Pr 2017;68(1):1-21.

  1. Patient-Specific Computational Modeling of Human Phonation

    Science.gov (United States)

    Xue, Qian; Zheng, Xudong; University of Maine Team

    2013-11-01

    Phonation is a common biological process resulted from the complex nonlinear coupling between glottal aerodynamics and vocal fold vibrations. In the past, the simplified symmetric straight geometric models were commonly employed for experimental and computational studies. The shape of larynx lumen and vocal folds are highly three-dimensional indeed and the complex realistic geometry produces profound impacts on both glottal flow and vocal fold vibrations. To elucidate the effect of geometric complexity on voice production and improve the fundamental understanding of human phonation, a full flow-structure interaction simulation is carried out on a patient-specific larynx model. To the best of our knowledge, this is the first patient-specific flow-structure interaction study of human phonation. The simulation results are well compared to the established human data. The effects of realistic geometry on glottal flow and vocal fold dynamics are investigated. It is found that both glottal flow and vocal fold dynamics present a high level of difference from the previous simplified model. This study also paved the important step toward the development of computer model for voice disease diagnosis and surgical planning. The project described was supported by Grant Number ROlDC007125 from the National Institute on Deafness and Other Communication Disorders (NIDCD).

  2. Uso do modelo de dispersão CAL3QHC na estimação da dispersão de CO na região central de Maringá, Estado do Paraná - doi: 10.4025/actascitechnol.v32i3.4853

    Directory of Open Access Journals (Sweden)

    Ed Pinheiro Lima

    2010-11-01

    Full Text Available Este trabalho descreve a aplicação do modelo CAL3QHC para estimar a dispersão das emissões de CO originadas de veículos leves na região central da cidade de Maringá, Estado do Paraná. Para a estimativa, foram adotados parâmetros locais do tráfego e meteorológicos e os fatores de emissão foram estimados por meio do modelo de emissão CMEM. A concentração máxima de CO foi estimada para oito sentidos de vento, considerando as emissões de veículos em movimento e em filas de semáforos, de forma separada e em conjunto. O maior valor de concentração foi 4,80 ppm para o sentido de vento N-S, a predominante para o horário analisado. A maioria dos valores de pico de concentração se localizou em uma avenida de grande movimento. Os mapas de concentração permitiram visualizar a concentração de CO, principalmente nesta avenida e nas imediações dos semáforos.

  3. Identification of Enhancers In Human: Advances In Computational Studies

    KAUST Repository

    Kleftogiannis, Dimitrios A.

    2016-03-24

    Roughly 50% of the human genome consists of noncoding sequences serving as regulatory elements responsible for the diverse gene expression of the cells in the body. One very well studied category of regulatory elements is the category of enhancers. Enhancers increase the transcriptional output in cells through chromatin remodeling or recruitment of complexes of binding proteins. Identification of enhancers using computational techniques is an interesting area of research and up to now several approaches have been proposed. However, the current state-of-the-art methods face limitations since the function of enhancers is clarified, but their mechanism of function is not well understood. This PhD thesis presents a bioinformatics/computer science study that focuses on the problem of identifying enhancers in different human cells using computational techniques. The dissertation is decomposed into four main tasks that we present in different chapters. First, since many of the enhancers' functions are not well understood, we study the basic biological models by which enhancers trigger transcriptional functions and we survey comprehensively over 30 bioinformatics approaches for identifying enhancers. Next, we elaborate on the availability of enhancer data as produced by different enhancer identification methods and experimental procedures. In particular, we analyze the advantages and disadvantages of existing solutions and we report obstacles that require further consideration. To mitigate these problems we developed the Database of Integrated Human Enhancers (DENdb), a centralized online repository that archives enhancer data from 16 ENCODE cell-lines. The integrated enhancer data are also combined with many other experimental data that can be used to interpret the enhancers' content and generate a novel enhancer annotation that complements the existing integrative annotation proposed by the ENCODE consortium. Next, we propose the first deep-learning computational

  4. Shape perception in human and computer vision an interdisciplinary perspective

    CERN Document Server

    Dickinson, Sven J

    2013-01-01

    This comprehensive and authoritative text/reference presents a unique, multidisciplinary perspective on Shape Perception in Human and Computer Vision. Rather than focusing purely on the state of the art, the book provides viewpoints from world-class researchers reflecting broadly on the issues that have shaped the field. Drawing upon many years of experience, each contributor discusses the trends followed and the progress made, in addition to identifying the major challenges that still lie ahead. Topics and features: examines each topic from a range of viewpoints, rather than promoting a speci

  5. Computer simulations of human interferon gamma mutated forms

    Science.gov (United States)

    Lilkova, E.; Litov, L.; Petkov, P.; Petkov, P.; Markov, S.; Ilieva, N.

    2010-01-01

    In the general framework of the computer-aided drug design, the method of molecular-dynamics simulations is applied for investigation of the human interferon-gamma (hIFN-γ) binding to its two known ligands (its extracellular receptor and the heparin-derived oligosaccharides). A study of 100 mutated hIFN-γ forms is presented, the mutations encompassing residues 86-88. The structural changes are investigated by comparing the lengths of the α-helices, in which these residues are included, in the native hIFN-γ molecule and in the mutated forms. The most intriguing cases are examined in detail.

  6. Study on Human-Computer Interaction in Immersive Virtual Environment

    Institute of Scientific and Technical Information of China (English)

    段红; 黄柯棣

    2002-01-01

    Human-computer interaction is one of the most important issues in virtual environment research. This paper introduces interaction software developed for a virtual operating environment for space experiments. The core components of the interaction software are: an object-oriented database for behavior management of virtual objects, a software agent called the virtual eye for viewpoint control, and a software agent called the virtual hand for object manipulation. Based on the above components, some instance programs for object manipulation have been developed. The user can observe the virtual environment through a head-mounted display system, control the viewpoint by head tracker and/or keyboard, and select and manipulate virtual objects with a 3D mouse.

  7. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass...
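
    As a rough illustration of two of the stages listed above, the sketch below half-wave rectifies one peripheral channel (a crude hair-cell transduction) and applies a 150-Hz lowpass filter to extract the envelope; the real model's filterbanks, adaptation loops and parameters are not reproduced here.

        # Simplified sketch of hair-cell transduction + 150-Hz lowpass stage.
        import numpy as np
        from scipy.signal import butter, lfilter

        fs = 16000                                   # sampling rate (Hz)
        t = np.arange(0, 0.1, 1 / fs)
        channel = np.sin(2 * np.pi * 1000 * t)       # 1-kHz tone as a stand-in
                                                     # for basilar-membrane output

        rectified = np.maximum(channel, 0.0)         # half-wave rectification
        b, a = butter(2, 150, btype="low", fs=fs)    # 150-Hz lowpass (2nd order)
        envelope = lfilter(b, a, rectified)

        print(envelope.mean())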

  8. Atoms of recognition in human and computer vision.

    Science.gov (United States)

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-01

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  9. Computational modeling of hypertensive growth in the human carotid artery

    Science.gov (United States)

    Sáez, Pablo; Peña, Estefania; Martínez, Miguel Angel; Kuhl, Ellen

    2014-06-01

    Arterial hypertension is a chronic medical condition associated with an elevated blood pressure. Chronic arterial hypertension initiates a series of events which are known to collectively drive arterial wall thickening. However, the correlation between macrostructural mechanical loading, microstructural cellular changes, and macrostructural adaptation remains unclear. Here, we present a microstructurally motivated computational model for chronic arterial hypertension through smooth muscle cell growth. To model growth, we adopt a classical concept based on the multiplicative decomposition of the deformation gradient into an elastic part and a growth part. Motivated by clinical observations, we assume that the driving force for growth is the stretch sensed by the smooth muscle cells. We embed our model into a finite element framework, where growth is stored locally as an internal variable. First, to demonstrate the features of our model, we investigate the effects of hypertensive growth in a real human carotid artery. Our results agree nicely with experimental data reported in the literature both qualitatively and quantitatively.
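
    A one-dimensional sketch of the stretch-driven growth law implied above is shown here: the total stretch is split multiplicatively into elastic and growth parts, and the growth variable evolves whenever the elastic stretch sensed by the smooth muscle cells exceeds a homeostatic value. The rate constant, threshold and loading are assumed, not the paper's fitted values.

        # 1-D sketch of stretch-driven growth via lambda = lambda_e * lambda_g.
        K_GROWTH = 0.05      # growth rate constant (1/day), assumed
        LAMBDA_CRIT = 1.10   # homeostatic elastic stretch, assumed
        DT = 0.1             # time step (days)

        lam_total = 1.30     # total stretch imposed by hypertensive loading
        lam_g = 1.0          # growth stretch (internal variable, stored locally)

        for step in range(1000):
            lam_e = lam_total / lam_g               # elastic part of the split
            rate = K_GROWTH * max(lam_e - LAMBDA_CRIT, 0.0)
            lam_g += rate * DT                      # explicit growth update
        print(f"lambda_g = {lam_g:.3f}, elastic stretch = {lam_total / lam_g:.3f}")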

  10. Human-computer interface glove using flexible piezoelectric sensors

    Science.gov (United States)

    Cha, Youngsu; Seo, Jeonggyu; Kim, Jun-Sik; Park, Jung-Min

    2017-05-01

    In this note, we propose a human-computer interface glove based on flexible piezoelectric sensors. We select polyvinylidene fluoride as the piezoelectric material for the sensors because of advantages such as a steady piezoelectric characteristic and good flexibility. The sensors are installed in a fabric glove by means of pockets and Velcro bands. We detect changes in the angles of the finger joints from the outputs of the sensors, and use them for controlling a virtual hand that is utilized in virtual object manipulation. To assess the sensing ability of the piezoelectric sensors, we compare the processed angles from the sensor outputs with the real angles from a camera recording. With good agreement between the processed and real angles, we successfully demonstrate the user interaction system with the virtual hand and interface glove based on the flexible piezoelectric sensors, for four hand motions: fist clenching, pinching, touching, and grasping.
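
    Because a piezoelectric film responds to the rate of bending rather than to static posture, a joint angle can be recovered (up to drift) by scaled numerical integration of the sensor voltage. The sketch below illustrates this idea with an assumed gain and a synthetic voltage trace; the published processing chain may differ.

        # Sketch: integrate piezo voltage (proportional to bending rate)
        # into a joint angle. Gain and samples are assumptions.
        def angles_from_piezo(voltages, dt, gain):
            """Integrate sensor voltage (V) into joint angle (degrees)."""
            angle, out = 0.0, []
            for v in voltages:
                angle += gain * v * dt   # rectangular-rule integration
                out.append(angle)
            return out

        # 100 samples at 1 kHz of a constant 0.2 V flexion signal
        trace = angles_from_piezo([0.2] * 100, dt=1e-3, gain=450.0)
        print(f"final angle: {trace[-1]:.1f} deg")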

  11. Human-computer interface incorporating personal and application domains

    Science.gov (United States)

    Anderson, Thomas G.

    2011-03-29

    The present invention provides a human-computer interface. The interface includes provision of an application domain, for example corresponding to a three-dimensional application. The user is allowed to navigate and interact with the application domain. The interface also includes a personal domain, offering the user controls and interaction distinct from the application domain. The separation into two domains allows the most suitable interface methods in each: for example, three-dimensional navigation in the application domain, and two- or three-dimensional controls in the personal domain. Transitions between the application domain and the personal domain are under control of the user, and the transition method is substantially independent of the navigation in the application domain. For example, the user can fly through a three-dimensional application domain, and always move to the personal domain by moving a cursor near one extreme of the display.

  12. Combining Natural Human-Computer Interaction and Wireless Communication

    Directory of Open Access Journals (Sweden)

    Ştefan Gheorghe PENTIUC

    2011-01-01

    In this paper we present how human-computer interaction can be improved by using wireless communication between devices. Devices that offer natural user interaction, like the Microsoft Surface table and tablet PCs, can work together to enhance the experience of an application. Users can, on the one hand, use physical objects as a more natural way of handling the virtual world and, on the other, interact with other wirelessly connected users. Physical objects that interact with the surface table have a tag attached to them, allowing us to identify them and take the required action. The TCP/IP protocol was used to handle the communication over the wireless network. A server and a client application were developed for the devices used. To target a wide range of mobile devices, different frameworks for developing cross-platform applications were analyzed.

  13. Wearable joystick for gloves-on human/computer interaction

    Science.gov (United States)

    Bae, Jaewook; Voyles, Richard M.

    2006-05-01

    In this paper, we present preliminary work on a novel wearable joystick for gloves-on human/computer interaction in hazardous environments. Interacting with traditional input devices can be clumsy and inconvenient for the operator in hazardous environments due to the bulkiness of multiple system components and troublesome wires. During a collapsed structure search, for example, protective clothing, uneven footing, and "snag" points in the environment can render traditional input devices impractical. Wearable computing has been studied by various researchers to increase the portability of devices and to improve the proprioceptive sense of the wearer's intentions. Specifically, glove-like input devices to recognize hand gestures have been developed for general-purpose applications. But, regardless of their performance, prior gloves have been fragile and cumbersome to use in rough environments. In this paper, we present a new wearable joystick to remove the wires from a simple, two-degree of freedom glove interface. Thus, we develop a wearable joystick that is low cost, durable and robust, and wire-free at the glove. In order to evaluate the wearable joystick, we take into consideration two metrics during operator tests of a commercial robot: task completion time and path tortuosity. We employ fractal analysis to measure path tortuosity. Preliminary user test results are presented that compare the performance of both a wearable joystick and a traditional joystick.
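
    The abstract does not specify which fractal estimator was used, so the sketch below illustrates one common choice, a box-counting estimate of path tortuosity: occupied grid boxes are counted at several scales, and the dimension is the slope of log(count) versus log(1/size).

        # Box-counting dimension of a 2-D operator path (illustrative estimator).
        import numpy as np

        def box_counting_dimension(points, sizes=(0.5, 0.25, 0.125, 0.0625)):
            pts = np.asarray(points, dtype=float)
            pts = (pts - pts.min(axis=0)) / (np.ptp(pts, axis=0) + 1e-12)
            pts = np.clip(pts, 0.0, 1.0 - 1e-9)      # keep edge points in-grid
            counts = []
            for s in sizes:
                boxes = {tuple(ix) for ix in np.floor(pts / s).astype(int).tolist()}
                counts.append(len(boxes))
            slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
            return slope

        # A straight path gives D ~ 1; a meandering path drifts higher.
        straight = [(x, 0.0) for x in np.linspace(0.0, 1.0, 500)]
        print(f"D ~ {box_counting_dimension(straight):.2f}")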

  14. A computational model for dynamic analysis of the human gait.

    Science.gov (United States)

    Vimieiro, Claysson; Andrada, Emanuel; Witte, Hartmut; Pinotti, Marcos

    2015-01-01

    Biomechanical models are important tools in the study of human motion. This work proposes a computational model to analyse the dynamics of lower limb motion using a kinematic chain to represent the body segments and rotational joints linked by viscoelastic elements. The model uses anthropometric parameters, ground reaction forces and joint Cardan angles from subjects to analyse lower limb motion during the gait. The model allows evaluating these data in each body plane. Six healthy subjects walked on a treadmill to record the kinematic and kinetic data. In addition, anthropometric parameters were recorded to construct the model. The viscoelastic parameter values were fitted for the model joints (hip, knee and ankle). The proposed model demonstrated that manipulating the viscoelastic parameters between the body segments could fit the amplitudes and frequencies of motion. The data collected in this work have viscoelastic parameter values that follow a normal distribution, indicating that these values are directly related to the gait pattern. To validate the model, we used the values of the joint angles to perform a comparison between the model results and previously published data. The model results show the same pattern and range of values found in the literature for human gait motion.
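
    The viscoelastic joint element such a model builds on can be written as a torsional spring in parallel with a damper; a minimal sketch follows, with placeholder stiffness and damping values rather than the subject-fitted parameters of the study.

        # Torsional spring-damper joint moment (placeholder parameters).
        def joint_moment(theta, theta_rest, omega, k, c):
            """Moment (N*m) for joint angle theta (rad), resting angle
            theta_rest (rad), angular velocity omega (rad/s), stiffness
            k (N*m/rad) and damping c (N*m*s/rad)."""
            return -k * (theta - theta_rest) - c * omega

        # Knee slightly flexed beyond rest and still extending:
        print(joint_moment(theta=0.35, theta_rest=0.20, omega=1.5, k=80.0, c=2.5))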

  15. A multisegment computer simulation of normal human gait.

    Science.gov (United States)

    Gilchrist, L A; Winter, D A

    1997-12-01

    The goal of this project was to develop a computer simulation of normal human walking that would use as driving moments resultant joint moments from a gait analysis. The system description, initial conditions and driving moments were taken from an inverse dynamics analysis of a normal walking trial. A nine-segment three-dimensional (3-D) model, including a two-part foot, was used. Torsional, linear springs and dampers were used at the hip joints to keep the trunk vertical and at the knee and ankle joints to prevent nonphysiological motion. Dampers at other joints were required to ensure a smooth and realistic motion. The simulated human successfully completed one step (550 ms), including both single and double support phases. The model proved to be sensitive to changes in the spring stiffness values of the trunk controllers. Similar sensitivity was found with the springs used to prevent hyperextension of the knee at heel contact and of the metatarsal-phalangeal joint at push-off. In general, there was much less sensitivity to the damping coefficients. This simulation improves on previous efforts because it incorporates some features necessary in simulations designed to answer clinical science questions. Other control algorithms are required, however, to ensure that the model can be realistically adapted to different subjects.

  16. Hybrid Human-Computing Distributed Sense-Making: Extending the SOA Paradigm for Dynamic Adjudication and Optimization of Human and Computer Roles

    Science.gov (United States)

    Rimland, Jeffrey C.

    2013-01-01

    In many evolving systems, inputs can be derived from both human observations and physical sensors. Additionally, many computation and analysis tasks can be performed by either human beings or artificial intelligence (AI) applications. For example, weather prediction, emergency event response, assistive technology for various human sensory and…

  17. Computational modeling and analysis of the hydrodynamics of human swimming

    Science.gov (United States)

    von Loebbecke, Alfred

    Computational modeling and simulations are used to investigate the hydrodynamics of competitive human swimming. The simulations employ an immersed boundary (IB) solver that allows us to simulate viscous, incompressible, unsteady flow past complex, moving/deforming three-dimensional bodies on stationary Cartesian grids. This study focuses on the hydrodynamics of the "dolphin kick". Three female and two male Olympic level swimmers are used to develop kinematically accurate models of this stroke for the simulations. A simulation of a dolphin undergoing its natural swimming motion is also presented for comparison. CFD enables the calculation of flow variables throughout the domain and over the swimmer's body surface during the entire kick cycle. The feet are responsible for all thrust generation in the dolphin kick. Moreover, it is found that the down-kick (ventral position) produces more thrust than the up-kick. A quantity of interest to the swimming community is the drag of a swimmer in motion (active drag). Accurate estimates of this quantity have been difficult to obtain in experiments but are easily calculated with CFD simulations. Propulsive efficiencies of the human swimmers are found to be in the range of 11% to 30%. The dolphin simulation case has a much higher efficiency of 55%. Investigation of vortex structures in the wake indicate that the down-kick can produce a vortex ring with a jet of accelerated fluid flowing through its center. This vortex ring and the accompanying jet are the primary thrust generating mechanisms in the human dolphin kick. In an attempt to understand the propulsive mechanisms of surface strokes, we have also conducted a computational analysis of two different styles of arm-pulls in the backstroke and the front crawl. These simulations involve only the arm and no air-water interface is included. Two of the four strokes are specifically designed to take advantage of lift-based propulsion by undergoing lateral motions of the hand

  18. Open-Box Muscle-Computer Interface: Introduction to Human-Computer Interactions in Bioengineering, Physiology, and Neuroscience Courses

    Science.gov (United States)

    Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.

    2016-01-01

    A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…

  19. The Human-Computer Interface and Information Literacy: Some Basics and Beyond.

    Science.gov (United States)

    Church, Gary M.

    1999-01-01

    Discusses human/computer interaction research, human/computer interface, and their relationships to information literacy. Highlights include communication models; cognitive perspectives; task analysis; theory of action; problem solving; instructional design considerations; and a suggestion that human/information interface may be a more appropriate…

  1. Mutations that Cause Human Disease: A Computational/Experimental Approach

    Energy Technology Data Exchange (ETDEWEB)

    Beernink, P; Barsky, D; Pesavento, B

    2006-01-11

    International genome sequencing projects have produced billions of nucleotides (letters) of DNA sequence data, including the complete genome sequences of 74 organisms. These genome sequences have created many new scientific opportunities, including the ability to identify sequence variations among individuals within a species. These genetic differences, which are known as single nucleotide polymorphisms (SNPs), are particularly important in understanding the genetic basis for disease susceptibility. Since the report of the complete human genome sequence, over two million human SNPs have been identified, including a large-scale comparison of an entire chromosome from twenty individuals. Of the protein coding SNPs (cSNPs), approximately half lead to a single amino acid change in the encoded protein (non-synonymous coding SNPs). Most of these changes are functionally silent, while the remainder negatively impact the protein and sometimes cause human disease. To date, over 550 SNPs have been found to cause single locus (monogenic) diseases and many others have been associated with polygenic diseases. SNPs have been linked to specific human diseases, including late-onset Parkinson disease, autism, rheumatoid arthritis and cancer. The ability to predict accurately the effects of these SNPs on protein function would represent a major advance toward understanding these diseases. To date several attempts have been made toward predicting the effects of such mutations. The most successful of these is a computational approach called "Sorting Intolerant From Tolerant" (SIFT). This method uses sequence conservation among many similar proteins to predict which residues in a protein are functionally important. However, this method suffers from several limitations. First, a query sequence must have a sufficient number of relatives to infer sequence conservation. Second, this method does not make use of or provide any information on protein structure, which
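
    A toy version of the conservation idea behind SIFT is sketched below: each alignment column is scored by normalized Shannon entropy, so near-invariant positions score as intolerant to substitution. SIFT's actual scoring uses position-specific scaled probabilities with pseudocounts, not this exact measure.

        # Toy conservation score per alignment column (not SIFT's real score).
        import math
        from collections import Counter

        def column_conservation(column):
            """Return 1 - normalized entropy: 1.0 = fully conserved column."""
            counts = Counter(column)
            n = len(column)
            entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
            return 1.0 - entropy / math.log2(20)   # 20 amino acids

        alignment = ["MKVL", "MKIL", "MRVL", "MKVL"]   # toy homologous sequences
        for i, col in enumerate(zip(*alignment)):
            print(f"position {i + 1}: conservation {column_conservation(col):.2f}")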

  2. Human Pacman: A Mobile Augmented Reality Entertainment System Based on Physical, Social, and Ubiquitous Computing

    Science.gov (United States)

    Cheok, Adrian David

    This chapter details the Human Pacman system to illuminate entertainment computing which ventures to embed the natural physical world seamlessly with a fantasy virtual playground by capitalizing on infrastructure provided by mobile computing, wireless LAN, and ubiquitous computing. With Human Pacman, we have a physical role-playing computer fantasy together with real human-social and mobile-gaming that emphasizes collaboration and competition between players in a wide outdoor physical area that allows natural wide-area human-physical movements. Pacmen and Ghosts are now real human players in the real world experiencing mixed computer graphics fantasy-reality provided by using the wearable computers on them. Virtual cookies and actual tangible physical objects are incorporated into the game play to provide novel experiences of seamless transitions between the real and virtual worlds. This is an example of a new form of gaming that anchors on physicality, mobility, social interaction, and ubiquitous computing.

  3. Effective Use of Human Computer Interaction in Digital Academic Supportive Devices

    OpenAIRE

    Thuseethan, S.; Kuhanesan, S.

    2015-01-01

    In this research, the literature on human-computer interaction is reviewed and the technology aspect of human-computer interaction related to digital academic supportive devices is analyzed. Based on these concerns, recommendations for designing good human-computer digital academic supportive devices are analyzed and proposed. Due to improvements in both hardware and software, digital devices have unveiled continuous advances in efficiency and processing capacity. However, many of th...

  4. Computational lipidology: predicting lipoprotein density profiles in human blood plasma.

    Directory of Open Access Journals (Sweden)

    Katrin Hübner

    2008-05-01

    Monitoring cholesterol levels is strongly recommended to identify patients at risk for myocardial infarction. However, clinical markers beyond "bad" and "good" cholesterol are needed to precisely predict individual lipid disorders. Our work contributes to this aim by bringing together experiment and theory. We developed a novel computer-based model of the human plasma lipoprotein metabolism in order to simulate the blood lipid levels in high resolution. Instead of focusing on a few conventionally used predefined lipoprotein density classes (LDL, HDL), we consider the entire protein and lipid composition spectrum of individual lipoprotein complexes. Subsequently, their distribution over density (which equals the lipoprotein profile) is calculated. As our main results, we (i) successfully reproduced clinically measured lipoprotein profiles of healthy subjects; (ii) assigned lipoproteins to narrow density classes, named high-resolution density sub-fractions (hrDS), revealing heterogeneous lipoprotein distributions within the major lipoprotein classes; and (iii) present model-based predictions of changes in the lipoprotein distribution elicited by disorders in underlying molecular processes. In its present state, the model offers a platform for many future applications aimed at understanding the reasons for inter-individual variability, identifying new sub-fractions of potential clinical relevance and a patient-oriented diagnosis of the potential molecular causes for individual dyslipidemia.
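
    The elementary step behind such a composition-resolved profile is that a particle's buoyant density follows from the masses and partial densities of its constituents; a sketch with approximate, assumed component densities:

        # Buoyant density of one lipoprotein particle from its composition.
        COMPONENT_DENSITY = {          # g/mL, approximate textbook-range values
            "protein": 1.35,
            "phospholipid": 1.03,
            "cholesterol": 1.05,
            "cholesteryl_ester": 0.99,
            "triglyceride": 0.92,
        }

        def particle_density(composition):
            """composition: mapping component -> mass (arbitrary units)."""
            mass = sum(composition.values())
            volume = sum(m / COMPONENT_DENSITY[c] for c, m in composition.items())
            return mass / volume

        hdl_like = {"protein": 50, "phospholipid": 25, "cholesterol": 5,
                    "cholesteryl_ester": 15, "triglyceride": 5}
        print(f"{particle_density(hdl_like):.3f} g/mL")   # lands in the HDL range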

  5. Brain computer interface to enhance episodic memory in human participants

    Directory of Open Access Journals (Sweden)

    John F Burke

    2015-01-01

    Recent research has revealed that neural oscillations in the theta (4-8 Hz) and alpha (9-14 Hz) bands are predictive of future success in memory encoding. Because these signals occur before the presentation of an upcoming stimulus, they are considered stimulus-independent in that they correlate with enhanced memory encoding independent of the item being encoded. Thus, such stimulus-independent activity has important implications for the neural mechanisms underlying episodic memory as well as the development of cognitive neural prosthetics. Here, we developed a brain computer interface (BCI) to test the ability of such pre-stimulus activity to modulate subsequent memory encoding. We recorded intracranial electroencephalography (iEEG) in neurosurgical patients as they performed a free recall memory task, and detected iEEG theta and alpha oscillations that correlated with optimal memory encoding. We then used these detected oscillatory changes to trigger the presentation of items in the free recall task. We found that item presentation contingent upon the presence of prestimulus theta and alpha oscillations modulated memory performance in more sessions than expected by chance. Our results suggest that an electrophysiological signal may be causally linked to a specific behavioral condition, and contingent stimulus presentation has the potential to modulate human memory encoding.
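
    A minimal sketch of the triggering idea: estimate theta and alpha band power in a pre-stimulus iEEG window and present the next study item when the power crosses a threshold. The signal, the threshold and the band-power estimator (Welch periodogram) are stand-ins here; the study's actual detector was fit per patient.

        # Band-power trigger sketch for pre-stimulus theta/alpha detection.
        import numpy as np
        from scipy.signal import welch

        def band_power(x, fs, lo, hi):
            f, pxx = welch(x, fs=fs, nperseg=fs)     # 1-s segments, 1-Hz bins
            mask = (f >= lo) & (f <= hi)
            return pxx[mask].sum() * (f[1] - f[0])   # rectangle-rule integral

        fs = 500
        t = np.arange(0, 2, 1 / fs)
        ieeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)  # 6-Hz burst

        theta = band_power(ieeg, fs, 4, 8)
        alpha = band_power(ieeg, fs, 9, 14)
        THRESHOLD = 0.1   # assumed; in practice fit per subject and band
        if theta + alpha > THRESHOLD:
            print("favorable pre-stimulus state: present the next study item")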

  6. Psychosocial and Cultural Modeling in Human Computation Systems: A Gamification Approach

    Energy Technology Data Exchange (ETDEWEB)

    Sanfilippo, Antonio P.; Riensche, Roderick M.; Haack, Jereme N.; Butner, R. Scott

    2013-11-20

    “Gamification”, the application of gameplay to real-world problems, enables the development of human computation systems that support decision-making through the integration of social and machine intelligence. One of gamification’s major benefits includes the creation of a problem solving environment where the influence of cognitive and cultural biases on human judgment can be curtailed through collaborative and competitive reasoning. By reducing biases on human judgment, gamification allows human computation systems to exploit human creativity relatively unhindered by human error. Operationally, gamification uses simulation to harvest human behavioral data that provide valuable insights for the solution of real-world problems.

  7. Human-Centered Software Engineering: Software Engineering Architectures, Patterns, and Models for Human Computer Interaction

    Science.gov (United States)

    Seffah, Ahmed; Vanderdonckt, Jean; Desmarais, Michel C.

    The Computer-Human Interaction and Software Engineering (CHISE) series of edited volumes originated from a number of workshops and discussions over the latest research and developments in the field of Human Computer Interaction (HCI) and Software Engineering (SE) integration, convergence and cross-pollination. A first volume in this series (CHISE Volume I - Human-Centered Software Engineering: Integrating Usability in the Development Lifecycle) aims at bridging the gap between the field of SE and HCI, and addresses specifically the concerns of integrating usability and user-centered systems design methods and tools into the software development lifecycle and practices. This has been done by defining techniques, tools and practices that can fit into the entire software engineering lifecycle as well as by defining ways of addressing the knowledge and skills needed, and the attitudes and basic values that a user-centered development methodology requires. The first volume has been edited as Vol. 8 in the Springer HCI Series (Seffah, Gulliksen and Desmarais, 2005).

  8. Human-Centered Design of Human-Computer-Human Dialogs in Aerospace Systems

    Science.gov (United States)

    Mitchell, Christine M.

    1998-01-01

    A series of ongoing research programs at Georgia Tech established a need for a simulation support tool for aircraft computer-based aids. This led to the design and development of the Georgia Tech Electronic Flight Instrument Research Tool (GT-EFIRT). GT-EFIRT is a part-task flight simulator specifically designed to study aircraft display design and single pilot interaction. The simulator, using commercially available graphics and Unix workstations, replicates to a high level of fidelity the Electronic Flight Instrument Systems (EFIS), Flight Management Computer (FMC) and Auto Flight Director System (AFDS) of the Boeing 757/767 aircraft. The simulator can be configured to present information using conventional looking B757/767 displays or next generation Primary Flight Displays (PFD) such as found on the Beech Starship and MD-11.

  9. The Changing Face of Human-Computer Interaction in the Age of Ubiquitous Computing

    Science.gov (United States)

    Rogers, Yvonne

    HCI is reinventing itself. No longer only about being user-centered, it has set its sights on pastures new, embracing a much broader and far-reaching set of interests. From emotional, eco-friendly, embodied experiences to context, constructivism and culture, HCI research is changing apace: from what it looks at, the lenses it uses and what it has to offer. Part of this is as a reaction to what is happening in the world; ubiquitous technologies are proliferating and transforming how we live our lives. We are becoming more connected and more dependent on technology. The home, the crèche, outdoors, public places and even the human body are now being experimented with as potential places to embed computational devices, even to the extent of invading previously private and taboo aspects of our lives. In this paper, I examine the diversity of lifestyle and technological transformations in our midst and outline some 'difficult' questions these raise together with alternative directions for HCI research and practice.

  10. Computer science security research and human subjects: emerging considerations for research ethics boards.

    Science.gov (United States)

    Buchanan, Elizabeth; Aycock, John; Dexter, Scott; Dittrich, David; Hvizdak, Erin

    2011-06-01

    This paper explores the growing concerns with computer science research, and in particular, computer security research and its relationship with the committees that review human subjects research. It offers cases that review boards are likely to confront, and provides a context for appropriate consideration of such research, as issues of bots, clouds, and worms enter the discourse of human subjects review.

  11. Human Computer Interaction Approach in Developing Customer Relationship Management

    Directory of Open Access Journals (Sweden)

    Mohd H.N.M. Nasir

    2008-01-01

    Problem statement: Many published studies have found that more than 50% of Customer Relationship Management (CRM) system implementations have failed due to poor system usability and failure to fulfill user expectations. This study presents the issues that contributed to the failures of CRM systems and proposes a prototype of a CRM system developed using Human Computer Interaction approaches in order to resolve the identified issues. Approach: In order to capture the users' requirements, a single in-depth case study of a multinational company was chosen in this research, in which the background, current conditions and environmental interactions were observed, recorded and analyzed for patterns in relation to internal and external influences. Blended data-gathering techniques, namely interviews, naturalistic observation and the study of user documentation, were employed, and a prototype of the CRM system was then developed incorporating the User-Centered Design (UCD) approach, Hierarchical Task Analysis (HTA), metaphor, and identification of users' behaviors and characteristics. The implementation of these techniques was then measured in terms of usability. Results: Based on the usability testing conducted, the results showed that most of the users agreed that the system is comfortable to work with, taking the quality attributes of learnability, memorability, utility, sortability, font, visualization, user metaphor, easy information view and color as measurement parameters. Conclusions/Recommendations: By combining all these techniques, a comfort level that leads to user satisfaction and a higher degree of usability can be achieved in the proposed CRM system. Thus, it is important that companies take the usability quality attribute into consideration before developing or procuring a CRM system to ensure its successful implementation.

  12. A Real-Time Model-Based Human Motion Tracking and Analysis for Human-Computer Interface Systems

    Directory of Open Access Journals (Sweden)

    Chung-Lin Huang

    2004-09-01

    This paper introduces a real-time model-based human motion tracking and analysis method for human computer interfaces (HCI). This method tracks and analyzes the human motion from two orthogonal views without using any markers. The motion parameters are estimated by pattern matching between the extracted human silhouette and the human model. First, the human silhouette is extracted and then the body definition parameters (BDPs) can be obtained. Second, the body animation parameters (BAPs) are estimated by a hierarchical tritree overlapping searching algorithm. To verify the performance of our method, we demonstrate different human posture sequences and use a hidden Markov model (HMM) for posture recognition testing.

  13. Eliciting Children's Recall of Events: How Do Computers Compare with Humans?

    Science.gov (United States)

    Powell, Martine B.; Wilson, J. Clare; Thomson, Donald M.

    2002-01-01

    Describes a study that investigated the usefulness of an interactive computer program in eliciting children's reports about an event. Compares the results of computer-administered interviews with human interviews of children aged five through eight, which showed little benefit of computers over face-to-face interviews. (Author/LRW)

  14. Developing Educational Computer Animation Based on Human Personality Types

    Science.gov (United States)

    Musa, Sajid; Ziatdinov, Rushan; Sozcu, Omer Faruk; Griffiths, Carol

    2015-01-01

    Computer animation in the past decade has become one of the most noticeable features of technology-based learning environments. By its definition, it refers to simulated motion pictures showing movement of drawn objects, and is often defined as the art in movement. Its educational application known as educational computer animation is considered…

  15. A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems

    Science.gov (United States)

    Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  16. Activity-based computing: computational management of activities reflecting human intention

    DEFF Research Database (Denmark)

    Bardram, Jakob E; Jeuris, Steven; Houben, Steven

    2015-01-01

    An important research topic in artificial intelligence is automatic sensing and inferencing of contextual information, which is used to build computer models of the user’s activity. One approach to build such activity-aware systems is the notion of activity-based computing (ABC). ABC is a computing paradigm that has been applied in personal information management applications as well as in ubiquitous, multidevice, and interactive surface computing. ABC has emerged as a response to the traditional application- and file-centered computing paradigm, which is oblivious to a notion of a user’s activity context spanning heterogeneous devices, multiple applications, services, and information sources. In this article, we present ABC as an approach to contextualize information, and present our research into designing activity-centric computing technologies.

  17. Appearance-based human gesture recognition using multimodal features for human computer interaction

    Science.gov (United States)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a vitally important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous works have focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expression, including neutral, negative and positive meanings, from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level, where weighted decisions from the single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
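
    The feature-level fusion step described above amounts to weighting and concatenating the two feature groups and then projecting with LDA; a sketch with random stand-in features (the weights and dimensionalities are assumptions):

        # Feature-level fusion + LDA projection (stand-in data and weights).
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        n = 120
        face_feats = rng.normal(size=(n, 40))      # facial-expression features
        hand_feats = rng.normal(size=(n, 60))      # hand-motion features
        labels = np.repeat(np.arange(12), 10)      # 12 gesture classes

        W_FACE, W_HAND = 0.4, 0.6                  # fusion weights (assumed)
        fused = np.hstack([W_FACE * face_feats, W_HAND * hand_feats])

        lda = LinearDiscriminantAnalysis(n_components=11)  # at most classes-1
        projected = lda.fit_transform(fused, labels)
        print(projected.shape)                     # (120, 11)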

  18. Operational characteristics optimization of human-computer system

    OpenAIRE

    Zulquernain Mallick; Irfan Anjum Badruddin magami; Khaleed Hussain Tandur

    2010-01-01

    Computer operational parameters have a vital influence on operator efficiency from a readability viewpoint. Four parameters, namely font, text/background color, viewing angle and viewing distance, are analyzed. The text reading task, in the form of English text, was presented on the computer screen to the participating subjects and their performance, measured in terms of the number of words read per minute (NWRPM), was recorded. For the purpose of optimization, the Taguchi method is u...
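
    For a "larger is better" response such as NWRPM, the Taguchi method ranks parameter settings by the signal-to-noise ratio S/N = -10 log10(mean(1/y^2)); a sketch with invented readings:

        # Taguchi larger-is-better S/N ratio for reading-speed trials.
        import math

        def sn_larger_is_better(readings):
            """S/N (dB) = -10 log10( mean(1 / y^2) )."""
            n = len(readings)
            return -10 * math.log10(sum(1 / y**2 for y in readings) / n)

        trial_a = [182, 190, 176]   # NWRPM under one font/colour combination
        trial_b = [151, 149, 160]
        print(f"A: {sn_larger_is_better(trial_a):.2f} dB")
        print(f"B: {sn_larger_is_better(trial_b):.2f} dB")  # lower S/N -> worse setting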

  19. Simulation of Human Episodic Memory by Using a Computational Model of the Hippocampus

    Directory of Open Access Journals (Sweden)

    Naoyuki Sato

    2010-01-01

    Episodic memory, the memory of personal events and history, is essential for understanding the mechanism of human intelligence. Neuroscience evidence has shown that the hippocampus, a part of the limbic system, plays an important role in the encoding and retrieval of episodic memory. This paper reviews computational models of the hippocampus and introduces our own computational model of human episodic memory based on neural synchronization. Results from computer simulations demonstrate that our model provides an advantage for instantaneous memory formation and selective retrieval enabling memory search. Moreover, this model was found to be able to predict human memory recall by integrating human eye movement data recorded during encoding. Such a combined approach between computational models and experiment is effective for theorizing about human episodic memory.

  1. Applying systemic-structural activity theory to design of human-computer interaction systems

    CERN Document Server

    Bedny, Gregory Z; Bedny, Inna

    2015-01-01

    Human-Computer Interaction (HCI) is an interdisciplinary field that has gained recognition as an important field in ergonomics. HCI draws on ideas and theoretical concepts from computer science, psychology, industrial design, and other fields. Human-Computer Interaction is no longer limited to trained software users. Today people interact with various devices such as mobile phones, tablets, and laptops. How can you make such interaction user friendly, even when user proficiency levels vary? This book explores methods for assessing the psychological complexity of computer-based tasks. It also p

  2. Comparison of human face matching behavior and computational image similarity measure

    Institute of Scientific and Technical Information of China (English)

    CHEN WenFeng; LIU ChangHong; LANDER Karen; FU XiaoLan

    2009-01-01

    Computational similarity measures have been evaluated in a variety of ways, but few of the validated computational measures are based on a high-level, cognitive criterion of objective similarity. In this paper, we evaluate two popular objective similarity measures by comparing them with face matching performance in human observers. The results suggest that these measures are still limited in predicting human behavior, especially in rejection behavior, but an objective measure taking advantage of global and local face characteristics may improve the prediction. It is also suggested that humans may set different criteria for "hit" and "rejection", and this may provide implications for biologically-inspired computational systems.

  3. Aiding human reliance decision making using computational models of trust

    NARCIS (Netherlands)

    Maanen, P.P. van; Klos, T.; Dongen, C.J. van

    2007-01-01

    This paper involves a human-agent system in which there is an operator charged with a pattern recognition task, using an automated decision aid. The objective is to make this human-agent system operate as effectively as possible. Effectiveness is gained by an increase of appropriate reliance on the

  4. Adapting the human-computer interface for reading literacy and computer skill to facilitate collection of information directly from patients.

    Science.gov (United States)

    Lobach, David F; Arbanas, Jennifer M; Mishra, Dharani D; Campbell, Marci; Wildemuth, Barbara M

    2004-01-01

    Clinical information collected directly from patients is critical to the practice of medicine. Past efforts to collect this information using computers have had limited utility because these efforts required users to be facile with the computerized information collecting system. In this paper we describe the design, development, and function of a computer system that uses recent technology to overcome the limitations of previous computer-based data collection tools by adapting the human-computer interface to the native language, reading literacy, and computer skills of the user. Specifically, our system uses a numerical representation of question content, multimedia, and touch screen technology to adapt the computer interface to the native language, reading literacy, and computer literacy of the user. In addition, the system supports health literacy needs throughout the data collection session and provides contextually relevant disease-specific education to users based on their responses to the questions. The system has been successfully used in an academically affiliated family medicine clinic and in an indigent adult medicine clinic.

  5. Supporting Negotiation Behavior with Haptics-Enabled Human-Computer Interfaces.

    Science.gov (United States)

    Oguz, S O; Kucukyilmaz, A; Sezgin, Tevfik Metin; Basdogan, C

    2012-01-01

    An active research goal for human-computer interaction is to allow humans to communicate with computers in an intuitive and natural fashion, especially in real-life interaction scenarios. One approach that has been advocated to achieve this has been to build computer systems with human-like qualities and capabilities. In this paper, we present insight on how human-computer interaction can be enriched by equipping computers with behavioral patterns that naturally appear in human-human negotiation scenarios. For this purpose, we introduce a two-party negotiation game specifically built for studying the effectiveness of haptic and audio-visual cues in conveying negotiation related behaviors. The game is centered around a real-time continuous two-party negotiation scenario based on the existing game-theory and negotiation literature. During the game, humans are confronted with a computer opponent, which can display different behaviors, such as concession, competition, and negotiation. Through a user study, we show that the behaviors that are associated with human negotiation can be incorporated into human-computer interaction, and the addition of haptic cues provides a statistically significant increase in the human-recognition accuracy of machine-displayed behaviors. In addition to aspects of conveying these negotiation-related behaviors, we also focus on and report game-theoretical aspects of the overall interaction experience. In particular, we show that, as reported in the game-theory literature, certain negotiation strategies such as tit-for-tat may generate maximum combined utility for the negotiating parties, providing an excellent balance between the energy spent by the user and the combined utility of the negotiating parties.

  6. Human-Computer Interaction Software: Lessons Learned, Challenges Ahead

    Science.gov (United States)

    1989-01-01

  7. [Attempt at computer modeling of evolution of human society].

    Science.gov (United States)

    Levchenko, V F; Menshutkin, V V

    2009-01-01

    A model of the evolution of human society and the biosphere, based on V. I. Vernadskii's concept of the noosphere and L. N. Gumilev's concept of ethnogenesis, is developed and studied. The mathematical apparatus of the model is a composition of finite stochastic automata. Using this model, the possibility of a global ecological crisis is demonstrated in the case that the current tendencies in the interaction of the biosphere and human civilization are preserved.

  8. Investigating Students’ Achievements in Computing Science Using Human Metric

    Directory of Open Access Journals (Sweden)

    Ezekiel U. Okike

    2014-05-01

    This study investigates the role of personality traits, motivation for career choice and study habits in students' academic achievements in the computing sciences. A quantitative research method was employed. Data was collected from 60 computing science students using the Myers-Briggs Type Indicator (MBTI) with additional questionnaires. A model of the form y_ij = β_0 + β_1 x_1j + β_2 x_2j + β_3 x_3j + β_4 x_4j + … + β_n x_nj was used, where y_ij represents the dependent variable and x_1j, …, x_nj the independent variables. Data analysis was performed using the Statistical Package for the Social Sciences (SPSS). Linear regression was done in order to fit the model and establish its significance or non-significance at the 0.05 level of significance. The results of the regression model were also used to determine the impact of the independent variables on students' performance. Results from this study suggest that the strongest motivator for a choice of career in the computing sciences is the desire to become a computing professional. Students' achievements, especially in the computing sciences, depend not only on students' temperament or personality traits, motivation for choice of course of study and reading habits, but also on the use of Internet-based sources more than going to the university library to read book materials available in all areas
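
    The stated model is ordinary multiple linear regression; a sketch of fitting it by least squares follows, with synthetic data standing in for the MBTI/questionnaire predictors.

        # Ordinary least squares fit of y = b0 + b1*x1 + ... + bn*xn (synthetic data).
        import numpy as np

        rng = np.random.default_rng(1)
        n_students, n_predictors = 60, 4
        X = rng.normal(size=(n_students, n_predictors))
        y = 2.0 + X @ np.array([0.8, -0.3, 0.5, 0.1]) \
            + rng.normal(scale=0.5, size=n_students)

        design = np.column_stack([np.ones(n_students), X])   # prepend intercept
        beta, residuals, rank, _ = np.linalg.lstsq(design, y, rcond=None)
        print("estimated coefficients:", np.round(beta, 2))  # b0, b1, ..., b4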

  9. Deep Space Network (DSN), Network Operations Control Center (NOCC) computer-human interfaces

    Science.gov (United States)

    Ellman, Alvin; Carlton, Magdi

    1993-01-01

    The Network Operations Control Center (NOCC) of the DSN is responsible for scheduling the resources of DSN, and monitoring all multi-mission spacecraft tracking activities in real-time. Operations performs this job with computer systems at JPL connected to over 100 computers at Goldstone, Australia and Spain. The old computer system became obsolete, and the first version of the new system was installed in 1991. Significant improvements for the computer-human interfaces became the dominant theme for the replacement project. Major issues required innovating problem solving. Among these issues were: How to present several thousand data elements on displays without overloading the operator? What is the best graphical representation of DSN end-to-end data flow? How to operate the system without memorizing mnemonics of hundreds of operator directives? Which computing environment will meet the competing performance requirements? This paper presents the technical challenges, engineering solutions, and results of the NOCC computer-human interface design.

  10. Secure Human-Computer Identification against Peeping Attacks (SecHCI): A Survey

    OpenAIRE

    Li, SJ; Shum, HY

    2003-01-01

    This paper focuses on human-computer identification systems against peeping attacks, in which adversaries can observe (and even control) interactions between humans (provers) and computers (verifiers). Real cases of peeping attacks were reported by Ross J. Anderson ten years earlier. Fixed passwords are insecure against peeping attacks since adversaries can simply replay the observed passwords. Some identification techniques can be used to defeat peeping attacks, but auxiliary devices must be used ...

  11. Design of Food Management Information System Based on Human-computer Interaction

    Directory of Open Access Journals (Sweden)

    Xingkai Cui

    2015-07-01

    Food safety is directly related to public health. This study takes the necessity of establishing a food management information system as its point of departure; through an interpretation of the overview of human-computer interaction technology, as well as the conceptual framework of human-computer interaction, it discusses the construction of a food management information system, with the expectation of advancing China's food safety management so as to safeguard public health.

  12. The human-computer interaction design of self-operated mobile telemedicine devices

    OpenAIRE

    Zheng, Shaoqing

    2015-01-01

    Human-computer interaction (HCI) is an important issue in the area of medicine, for example, the operation of surgical simulators, virtual rehabilitation systems, telemedicine treatments, and so on. In this thesis, the human-computer interaction of a self-operated mobile telemedicine device is designed. The mobile telemedicine device (i.e. intelligent Medication Box or iMedBox) is used for remotely monitoring patient health and activity information such as ECG (electrocardiogram) signals, hom...

  13. Developing Educational Computer Animation Based on Human Personality Types

    Directory of Open Access Journals (Sweden)

    Sajid Musa

    2015-03-01

    Computer animation in the past decade has become one of the most noticeable features of technology-based learning environments. By definition, it refers to simulated motion pictures showing movement of drawn objects, and is often described as the art of movement. Its educational application, known as educational computer animation, is considered to be one of the most elegant ways of preparing teaching materials, and its importance in assisting learners to process, understand and remember information efficiently has grown vastly since the advent of powerful graphics-oriented computers. Based on theories and facts of psychology, colour science, computer animation, geometric modelling and technical aesthetics, this study intends to establish an interdisciplinary area of research towards greater educational effectiveness. With today's high educational demands as well as the lack of time provided for certain courses, classical educational methods have shown deficiencies in keeping up with the drastic changes observed in the digital era. Generally speaking, without taking into account significant factors such as gender, age, level of interest and memory level, educational animations may turn out to be insufficient for learners or fail to meet their needs. We have noticed, however, that the application of animation in education has been given inadequate attention, and students' personality types or temperaments (sanguine, choleric, melancholic, phlegmatic, etc.) have never been taken into account. We suggest there is an interesting relationship here, and propose essential factors in creating educational animations based on students' personality types. In particular, we study how information in computer animation may be presented in a more preferable way based on font types and their families, colours and colour schemes, emphasizing texts, shapes of characters designed by planar quadratic Bernstein-Bézier curves

  14. Design Science in Human-Computer Interaction: A Model and Three Examples

    Science.gov (United States)

    Prestopnik, Nathan R.

    2013-01-01

    Humanity has entered an era where computing technology is virtually ubiquitous. From websites and mobile devices to computers embedded in appliances on our kitchen counters and automobiles parked in our driveways, information and communication technologies (ICTs) and IT artifacts are fundamentally changing the ways we interact with our world.…

  15. Using Noninvasive Brain Measurement to Explore the Psychological Effects of Computer Malfunctions on Users during Human-Computer Interactions

    Directory of Open Access Journals (Sweden)

    Leanne M. Hirshfield

    2014-01-01

    Full Text Available In today’s technologically driven world, there is a need to better understand the ways that common computer malfunctions affect computer users. These malfunctions may have measurable influences on computer user’s cognitive, emotional, and behavioral responses. An experiment was conducted where participants conducted a series of web search tasks while wearing functional near-infrared spectroscopy (fNIRS and galvanic skin response sensors. Two computer malfunctions were introduced during the sessions which had the potential to influence correlates of user trust and suspicion. Surveys were given after each session to measure user’s perceived emotional state, cognitive load, and perceived trust. Results suggest that fNIRS can be used to measure the different cognitive and emotional responses associated with computer malfunctions. These cognitive and emotional changes were correlated with users’ self-report levels of suspicion and trust, and they in turn suggest future work that further explores the capability of fNIRS for the measurement of user experience during human-computer interactions.

  16. Computer models of the human immunoglobulins shape and segmental flexibility.

    Science.gov (United States)

    Pumphrey, R

    1986-06-01

    At present there is interest in the design and deployment of engineered biosensor molecules. Antibodies are the most versatile of the naturally occurring biosensors and it is important to understand their mechanical properties and the ways in which they can interact with their natural ligands. Two dimensional representations are clearly inadequate, and three dimensional representations are too complicated to manipulate except as numerical abstractions in computers. Recent improvements in computer graphics allow these coordinate matrices to be seen and more easily comprehended, and interactive programs permit the modification and reassembly of molecular fragments. The models which result have distinct advantages both over those of lower resolution, and those showing every atom, which are limited to the few fragments(2-5) or mutant molecules for which the X-ray crystallographic coordinates are known. In this review Richard Pumphrey describes the shape and flexibility of immunoglobulin molecules in relation to the three dimensional structure. Copyright © 1986. Published by Elsevier B.V.

  17. Parallel computing-based sclera recognition for human identification

    Science.gov (United States)

    Lin, Yong; Du, Eliza Y.; Zhou, Zhi

    2012-06-01

    Compared to iris recognition, sclera recognition which uses line descriptor can achieve comparable recognition accuracy in visible wavelengths. However, this method is too time-consuming to be implemented in a real-time system. In this paper, we propose a GPU-based parallel computing approach to reduce the sclera recognition time. We define a new descriptor in which the information of KD tree structure and sclera edge are added. Registration and matching task is divided into subtasks in various sizes according to their computation complexities. Every affine transform parameters are generated by searching on KD tree. Texture memory, constant memory, and shared memory are used to store templates and transform matrixes. The experiment results show that the proposed method executed on GPU can dramatically improve the sclera matching speed in hundreds of times without accuracy decreasing.

  18. Social effects of an anthropomorphic help agent: humans versus computers.

    Science.gov (United States)

    David, Prabu; Lu, Tingting; Kline, Susan; Cai, Li

    2007-06-01

    The purpose of this study was to examine perceptions of fairness of a computer-administered quiz as a function of the anthropomorphic features of the help agent offered within the quiz environment. The addition of simple anthropomorphic cues to a computer help agent reduced the perceived friendliness of the agent, perceived intelligence of the agent, and the perceived fairness of the quiz. These differences were observed only for male anthropomorphic cues, but not for female anthropomorphic cues. The results were not explained by the social attraction of the anthropomorphic agents used in the quiz or by gender identification with the agents. Priming of visual cues provides the best account of the data. Practical implications of the study are discussed.

  19. Human cardiac systems electrophysiology and arrhythmogenesis: iteration of experiment and computation.

    Science.gov (United States)

    Holzem, Katherine M; Madden, Eli J; Efimov, Igor R

    2014-11-01

    Human cardiac electrophysiology (EP) is a unique system for computational modelling at multiple scales. Due to the complexity of the cardiac excitation sequence, coordinated activity must occur from the single channel to the entire myocardial syncytium. Thus, sophisticated computational algorithms have been developed to investigate cardiac EP at the level of ion channels, cardiomyocytes, multicellular tissues, and the whole heart. Although understanding of each functional level will ultimately be important to thoroughly understand mechanisms of physiology and disease, cardiac arrhythmias are expressly the product of cardiac tissue-containing enough cardiomyocytes to sustain a reentrant loop of activation. In addition, several properties of cardiac cellular EP, that are critical for arrhythmogenesis, are significantly altered by cell-to-cell coupling. However, relevant human cardiac EP data, upon which to develop or validate models at all scales, has been lacking. Thus, over several years, we have developed a paradigm for multiscale human heart physiology investigation and have recovered and studied over 300 human hearts. We have generated a rich experimental dataset, from which we better understand mechanisms of arrhythmia in human and can improve models of human cardiac EP. In addition, in collaboration with computational physiologists, we are developing a database for the deposition of human heart experimental data, including thorough experimental documentation. We anticipate that accessibility to this human heart dataset will further human EP computational investigations, as well as encourage greater data transparency within the field of cardiac EP.

  20. Supporting Human Activities - Exploring Activity-Centered Computing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Bardram, Jakob

    2002-01-01

    In this paper we explore an activity-centered computing paradigm that is aimed at supporting work processes that are radically different from the ones known from office work. Our main inspiration is healthcare work that is characterized by an extreme degree of mobility, many interruptions, ad...... objects. We also present an exploratory prototype design and first implementation and present some initial results from evaluations in a healthcare environment....

  1. The Human Dimension of Computer-Mediated Communications: Implications for International Educational Computer Conferences.

    Science.gov (United States)

    Scott, Douglass J.

    This article presents a conceptual framework for the research and practice of educational computer conferences that shifts the focus from the on-line messages being exchanged to the participants' engagement with the conference. This framework, known as the "Iceberg Metaphor" or the "Michigan Model of educational…

  2. Preface (to: Brain-Computer Interfaces. Applying our Minds to Human-Computer Interaction)

    NARCIS (Netherlands)

    Tan, Desney; Tan, Desney S.; Nijholt, Antinus

    2010-01-01

    The advances in cognitive neuroscience and brain imaging technologies provide us with the increasing ability to interface directly with activity in the brain. Researchers have begun to use these technologies to build brain-computer interfaces. Originally, these interfaces were meant to allow

  3. Data Bases and Other Computer Tools in the Humanities.

    Science.gov (United States)

    Collegiate Microcomputer, 1990

    1990-01-01

    Describes 38 database projects sponsored by the National Endowment for the Humanities (NEH). Information on hardware, software, and access and dissemination is given for projects in the areas of art and architectural history; folklore; history; medicinal plants; interdisciplinary topics; language and linguistics; literature; and music and music…

  4. The Human Genome Project: Biology, Computers, and Privacy.

    Science.gov (United States)

    Cutter, Mary Ann G.; Drexler, Edward; Gottesman, Kay S.; Goulding, Philip G.; McCullough, Laurence B.; McInerney, Joseph D.; Micikas, Lynda B.; Mural, Richard J.; Murray, Jeffrey C.; Zola, John

    This module, for high school teachers, is the second of two modules about the Human Genome Project (HGP) produced by the Biological Sciences Curriculum Study (BSCS). The first section of this module provides background information for teachers about the structure and objectives of the HGP, aspects of the science and technology that underlie the…

  5. Computational biology in human aging : an omics data integration approach

    NARCIS (Netherlands)

    Akker, Erik Ben van den

    2015-01-01

    Throughout this thesis, human aging and its relation to health are studied in the context of two parallel though complementary lines of research: biomarkers and genetics. The search for informative biomarkers of aging focuses on easy accessible and quantifiable substances of the body that can be u

  6. Recent Advances in Computational Mechanics of the Human Knee Joint

    Directory of Open Access Journals (Sweden)

    M. Kazemi

    2013-01-01

    Full Text Available Computational mechanics has been advanced in every area of orthopedic biomechanics. The objective of this paper is to provide a general review of the computational models used in the analysis of the mechanical function of the knee joint in different loading and pathological conditions. Major review articles published in related areas are summarized first. The constitutive models for soft tissues of the knee are briefly discussed to facilitate understanding the joint modeling. A detailed review of the tibiofemoral joint models is presented thereafter. The geometry reconstruction procedures as well as some critical issues in finite element modeling are also discussed. Computational modeling can be a reliable and effective method for the study of mechanical behavior of the knee joint, if the model is constructed correctly. Single-phase material models have been used to predict the instantaneous load response for the healthy knees and repaired joints, such as total and partial meniscectomies, ACL and PCL reconstructions, and joint replacements. Recently, poromechanical models accounting for fluid pressurization in soft tissues have been proposed to study the viscoelastic response of the healthy and impaired knee joints. While the constitutive modeling has been considerably advanced at the tissue level, many challenges still exist in applying a good material model to three-dimensional joint simulations. A complete model validation at the joint level seems impossible presently, because only simple data can be obtained experimentally. Therefore, model validation may be concentrated on the constitutive laws using multiple mechanical tests of the tissues. Extensive model verifications at the joint level are still crucial for the accuracy of the modeling.

  7. Recent advances in computational mechanics of the human knee joint.

    Science.gov (United States)

    Kazemi, M; Dabiri, Y; Li, L P

    2013-01-01

    Computational mechanics has been advanced in every area of orthopedic biomechanics. The objective of this paper is to provide a general review of the computational models used in the analysis of the mechanical function of the knee joint in different loading and pathological conditions. Major review articles published in related areas are summarized first. The constitutive models for soft tissues of the knee are briefly discussed to facilitate understanding the joint modeling. A detailed review of the tibiofemoral joint models is presented thereafter. The geometry reconstruction procedures as well as some critical issues in finite element modeling are also discussed. Computational modeling can be a reliable and effective method for the study of mechanical behavior of the knee joint, if the model is constructed correctly. Single-phase material models have been used to predict the instantaneous load response for the healthy knees and repaired joints, such as total and partial meniscectomies, ACL and PCL reconstructions, and joint replacements. Recently, poromechanical models accounting for fluid pressurization in soft tissues have been proposed to study the viscoelastic response of the healthy and impaired knee joints. While the constitutive modeling has been considerably advanced at the tissue level, many challenges still exist in applying a good material model to three-dimensional joint simulations. A complete model validation at the joint level seems impossible presently, because only simple data can be obtained experimentally. Therefore, model validation may be concentrated on the constitutive laws using multiple mechanical tests of the tissues. Extensive model verifications at the joint level are still crucial for the accuracy of the modeling.

  8. Individual Difference Effects in Human-Computer Interaction

    Science.gov (United States)

    1991-10-01

    evaluated in terns of the amount of sales revenue af -er deducting production costs. nhe time variable was measured in terms of the amount of time a subject...subject acted as an inventory/ production manage:r of a hypothetical firm which was simulated by a computer program. The cubject’s task was to obtain the...34search list" will be examined. Thus, the u3ar w.ll probably match "apple pie" but not "apple cider " or "appl-? butter’ because these items would not

  9. APPLYING ARTIFICIAL INTELLIGENCE TECHNIQUES TO HUMAN-COMPUTER INTERFACES

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.

    1988-01-01

    A description is given of UIMS (User Interface Management System), a system using a variety of artificial intelligence techniques to build knowledge-based user interfaces combining functionality and information from a variety of computer systems that maintain, test, and configure customer telephone...... and data networks. Three artificial intelligence (AI) techniques used in UIMS are discussed, namely, frame representation, object-oriented programming languages, and rule-based systems. The UIMS architecture is presented, and the structure of the UIMS is explained in terms of the AI techniques....

  10. APPLYING ARTIFICIAL INTELLIGENCE TECHNIQUES TO HUMAN-COMPUTER INTERFACES

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.

    1988-01-01

    A description is given of UIMS (User Interface Management System), a system using a variety of artificial intelligence techniques to build knowledge-based user interfaces combining functionality and information from a variety of computer systems that maintain, test, and configure customer telephone...... and data networks. Three artificial intelligence (AI) techniques used in UIMS are discussed, namely, frame representation, object-oriented programming languages, and rule-based systems. The UIMS architecture is presented, and the structure of the UIMS is explained in terms of the AI techniques....

  11. Advanced approaches to characterize the human intestinal microbiota by computational meta-analysis

    NARCIS (Netherlands)

    Nikkilä, J.; Vos, de W.M.

    2010-01-01

    GOALS: We describe advanced approaches for the computational meta-analysis of a collection of independent studies, including over 1000 phylogenetic array datasets, as a means to characterize the variability of human intestinal microbiota. BACKGROUND: The human intestinal microbiota is a complex micr

  12. Human Inspired Self-developmental Model of Neural Network (HIM): Introducing Content/Form Computing

    Science.gov (United States)

    Krajíček, Jiří

    This paper presents cross-disciplinary research between medical/psychological evidence on human abilities and informatics needs to update current models in computer science to support alternative methods for computation and communication. In [10] we have already proposed hypothesis introducing concept of human information model (HIM) as cooperative system. Here we continue on HIM design in detail. In our design, first we introduce Content/Form computing system which is new principle of present methods in evolutionary computing (genetic algorithms, genetic programming). Then we apply this system on HIM (type of artificial neural network) model as basic network self-developmental paradigm. Main inspiration of our natural/human design comes from well known concept of artificial neural networks, medical/psychological evidence and Sheldrake theory of "Nature as Alive" [22].

  13. Operational characteristics optimization of human-computer system

    Directory of Open Access Journals (Sweden)

    Zulquernain Mallick

    2010-09-01

    Full Text Available Computer operational parameters are having vital influence on the operators efficiency from readability viewpoint. Four parameters namely font, text/background color, viewing angle and viewing distance are analyzed. The text reading task, in the form of English text, was presented on the computer screen to the participating subjects and their performance, measured in terms of number of words read per minute (NWRPM, was recorded. For the purpose of optimization, the Taguchi method is used to find the optimal parameters to maximize operators’ efficiency for performing readability task. Two levels of each parameter have been considered in this study. An orthogonal array, the signal-to-noise (S/N ratio and the analysis of variance (ANOVA were employed to investigate the operators’ performance/efficiency. Results showed that Times Roman font, black text on white background, 40 degree viewing angle and 60 cm viewing distance, the subjects were quite comfortable, efficient and read maximum number of words per minute. Text/background color was dominant parameter with a percentage contribution of 76.18% towards the laid down objective followed by font type at 18.17%, viewing distance 7.04% and viewing angle 0.58%. Experimental results are provided to confirm the effectiveness of this approach.

  14. Measuring Human Performance within Computer Security Incident Response Teams

    Energy Technology Data Exchange (ETDEWEB)

    McClain, Jonathan T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva, Austin Ray [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Avina, Glory Emmanuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Forsythe, James C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Human performance has become a pertinen t issue within cyber security. However, this research has been stymied by the limited availability of expert cyber security professionals. This is partly attributable to the ongoing workload faced by cyber security professionals, which is compound ed by the limited number of qualified personnel and turnover of p ersonnel across organizations. Additionally, it is difficult to conduct research, and particularly, openly published research, due to the sensitivity inherent to cyber ope rations at most orga nizations. As an alternative, the current research has focused on data collection during cyb er security training exercises. These events draw individuals with a range of knowledge and experience extending from seasoned professionals to recent college gradu ates to college students. The current paper describes research involving data collection at two separate cyber security exercises. This data collection involved multiple measures which included behavioral performance based on human - machine transactions and questionnaire - based assessments of cyber security experience.

  15. Computational Human Performance Modeling For Alarm System Design

    Energy Technology Data Exchange (ETDEWEB)

    Jacques Hugo

    2012-07-01

    The introduction of new technologies like adaptive automation systems and advanced alarms processing and presentation techniques in nuclear power plants is already having an impact on the safety and effectiveness of plant operations and also the role of the control room operator. This impact is expected to escalate dramatically as more and more nuclear power utilities embark on upgrade projects in order to extend the lifetime of their plants. One of the most visible impacts in control rooms will be the need to replace aging alarm systems. Because most of these alarm systems use obsolete technologies, the methods, techniques and tools that were used to design the previous generation of alarm system designs are no longer effective and need to be updated. The same applies to the need to analyze and redefine operators’ alarm handling tasks. In the past, methods for analyzing human tasks and workload have relied on crude, paper-based methods that often lacked traceability. New approaches are needed to allow analysts to model and represent the new concepts of alarm operation and human-system interaction. State-of-the-art task simulation tools are now available that offer a cost-effective and efficient method for examining the effect of operator performance in different conditions and operational scenarios. A discrete event simulation system was used by human factors researchers at the Idaho National Laboratory to develop a generic alarm handling model to examine the effect of operator performance with simulated modern alarm system. It allowed analysts to evaluate alarm generation patterns as well as critical task times and human workload predicted by the system.

  16. Behind Human Error: Cognitive Systems, Computers and Hindsight

    Science.gov (United States)

    1994-12-01

    squeeze became on the powers of the operator.... And as Norbert Wiener noted some years later (1964, p. 63): The gadget-minded people often have the...for one exception see Woods and Elias , 1988). This failure to develop representations that reveal change and highlight events in the monitored...Woods, D. D., and Elias , G. (1988). Significance messages: An inte- gral display concept. In Proceedings of the 32nd Annual Meeting of the Human

  17. Collection of Information Directly from Patients through an Adaptive Human-computer Interface

    Science.gov (United States)

    Lobach, David F.; Arbanas, Jennifer M.; Mishra, Dharani D.; Wildemuth, Barbara; Campbell, Marci

    2002-01-01

    Clinical information collected directly from patients is critical to the practice of medicine. Past efforts to collect this information using computers have had limited utility because these efforts required users to be facile with the information collecting system. This poster describes the development and function of a computer system that uses technology to overcome the limitations of previous computer-based data collection tools by adapting the human-computer interface to fit the skills of the user. The system has been successfully used at two diverse clinical sites.

  18. Brain-Computer Interfaces Applying Our Minds to Human-computer Interaction

    CERN Document Server

    Tan, Desney S

    2010-01-01

    For generations, humans have fantasized about the ability to create devices that can see into a person's mind and thoughts, or to communicate and interact with machines through thought alone. Such ideas have long captured the imagination of humankind in the form of ancient myths and modern science fiction stories. Recent advances in cognitive neuroscience and brain imaging technologies have started to turn these myths into a reality, and are providing us with the ability to interface directly with the human brain. This ability is made possible through the use of sensors that monitor physical p

  19. Digging into data using new collaborative infrastructures supporting humanities-based computer science research

    OpenAIRE

    2011-01-01

    This paper explores infrastructure supporting humanities–computer science research in large–scale image data by asking: Why is collaboration a requirement for work within digital humanities projects? What is required for fruitful interdisciplinary collaboration? What are the technical and intellectual approaches to constructing such an infrastructure? What are the challenges associated with digital humanities collaborative work? We reveal that digital humanities collaboration requ...

  20. Computational analysis of expression of human embryonic stem cell-associated signatures in tumors

    OpenAIRE

    Wang, Xiaosheng

    2011-01-01

    Background The cancer stem cell model has been proposed based on the linkage between human embryonic stem cells and human cancer cells. However, the evidences supporting the cancer stem cell model remain to be collected. In this study, we extensively examined the expression of human embryonic stem cell-associated signatures including core genes, transcription factors, pathways and microRNAs in various cancers using the computational biology approach. Results We used the class comparison analy...

  1. Computational analysis of expression of human embryonic stem cell-associated signatures in tumors

    OpenAIRE

    Wang Xiaosheng

    2011-01-01

    Abstract Background The cancer stem cell model has been proposed based on the linkage between human embryonic stem cells and human cancer cells. However, the evidences supporting the cancer stem cell model remain to be collected. In this study, we extensively examined the expression of human embryonic stem cell-associated signatures including core genes, transcription factors, pathways and microRNAs in various cancers using the computational biology approach. Results We used the class compari...

  2. Brain-Computer Interfaces. Applying our Minds to Human-Computer Interaction

    NARCIS (Netherlands)

    Tan, Desney S.; Nijholt, Antinus

    2010-01-01

    For generations, humans have fantasized about the ability to create devices that can see into a person’s mind and thoughts, or to communicate and interact with machines through thought alone. Such ideas have long captured the imagination of humankind in the form of ancient myths and modern science

  3. Brain-Computer Interfaces: Applying our Minds to Human-Computer Interaction

    NARCIS (Netherlands)

    Tan, Desney S.; Nijholt, Anton

    2010-01-01

    For generations, humans have fantasized about the ability to create devices that can see into a person’s mind and thoughts, or to communicate and interact with machines through thought alone. Such ideas have long captured the imagination of humankind in the form of ancient myths and modern science f

  4. Evolution of Neural Computations: Mantis Shrimp and Human Color Decoding

    Directory of Open Access Journals (Sweden)

    Qasim Zaidi

    2014-10-01

    Full Text Available Mantis shrimp and primates both possess good color vision, but the neural implementation in the two species is very different, a reflection of the largely unrelated evolutionary lineages of these creatures. Mantis shrimp have scanning compound eyes with 12 classes of photoreceptors, and have evolved a system to decode color information at the front-end of the sensory stream. Primates have image-focusing eyes with three classes of cones, and decode color further along the visual-processing hierarchy. Despite these differences, we report a fascinating parallel between the computational strategies at the color-decoding stage in the brains of stomatopods and primates. Both species appear to use narrowly tuned cells that support interval decoding color identification.

  5. Evolution of neural computations: Mantis shrimp and human color decoding.

    Science.gov (United States)

    Zaidi, Qasim; Marshall, Justin; Thoen, Hanne; Conway, Bevil R

    2014-01-01

    Mantis shrimp and primates both possess good color vision, but the neural implementation in the two species is very different, a reflection of the largely unrelated evolutionary lineages of these creatures. Mantis shrimp have scanning compound eyes with 12 classes of photoreceptors, and have evolved a system to decode color information at the front-end of the sensory stream. Primates have image-focusing eyes with three classes of cones, and decode color further along the visual-processing hierarchy. Despite these differences, we report a fascinating parallel between the computational strategies at the color-decoding stage in the brains of stomatopods and primates. Both species appear to use narrowly tuned cells that support interval decoding color identification.

  6. A Study of Electromyogram Based on Human-Computer Interface

    Institute of Scientific and Technical Information of China (English)

    Jun-Ru Ren; Tie-Jun Liu; Yu Huang; De-Zhong Yao

    2009-01-01

    In this paper,a new control system based on forearm electromyogram (EMG) is proposed for computer peripheral control and artificial prosthesis control.This control system intends to realize the commands of six pre-defined hand poses:up,down,left,right,yes,and no.In order to research the possibility of using a unified amplifier for both electro-encephalogram (EEG) and EMG,the surface forearm EMG data is acquired by a 4-channel EEG measure-ment system.The Bayesian classifier is used to classify the power spectral density (PSD) of the signal.The experiment result verifies that this control system can supply a high command recognition rate (average 48%) even the EMG data is collected with an EEG system just with single electrode measurement.

  7. Human Computation in Visualization: Using Purpose Driven Games for Robust Evaluation of Visualization Algorithms.

    Science.gov (United States)

    Ahmed, N; Zheng, Ziyi; Mueller, K

    2012-12-01

    Due to the inherent characteristics of the visualization process, most of the problems in this field have strong ties with human cognition and perception. This makes the human brain and sensory system the only truly appropriate evaluation platform for evaluating and fine-tuning a new visualization method or paradigm. However, getting humans to volunteer for these purposes has always been a significant obstacle, and thus this phase of the development process has traditionally formed a bottleneck, slowing down progress in visualization research. We propose to take advantage of the newly emerging field of Human Computation (HC) to overcome these challenges. HC promotes the idea that rather than considering humans as users of the computational system, they can be made part of a hybrid computational loop consisting of traditional computation resources and the human brain and sensory system. This approach is particularly successful in cases where part of the computational problem is considered intractable using known computer algorithms but is trivial to common sense human knowledge. In this paper, we focus on HC from the perspective of solving visualization problems and also outline a framework by which humans can be easily seduced to volunteer their HC resources. We introduce a purpose-driven game titled "Disguise" which serves as a prototypical example for how the evaluation of visualization algorithms can be mapped into a fun and addicting activity, allowing this task to be accomplished in an extensive yet cost effective way. Finally, we sketch out a framework that transcends from the pure evaluation of existing visualization methods to the design of a new one.

  8. Impact of Cognitive Architectures on Human-Computer Interaction

    Science.gov (United States)

    2014-09-01

    simulation. In this work they were preparing for the Synthetic Theatre of War-1997 exercise where between 10,000 and 50,000 automated agents would...work with up to 1,000 humans.27 The results of this exercise are documented by Laird et al.28 5. Conclusions and Future Work To assess whether cognitive...RW, MacKenzie IS. Towards a standard for pointing device evaluation, perspectives on 27 years of Fitts’ law research in HCI. International Journal of

  9. Glove-Enabled Computer Operations (GECO): Design and Testing of an Extravehicular Activity Glove Adapted for Human-Computer Interface

    Science.gov (United States)

    Adams, Richard J.; Olowin, Aaron; Krepkovich, Eileen; Hannaford, Blake; Lindsay, Jack I. C.; Homer, Peter; Patrie, James T.; Sands, O. Scott

    2013-01-01

    The Glove-Enabled Computer Operations (GECO) system enables an extravehicular activity (EVA) glove to be dual-purposed as a human-computer interface device. This paper describes the design and human participant testing of a right-handed GECO glove in a pressurized glove box. As part of an investigation into the usability of the GECO system for EVA data entry, twenty participants were asked to complete activities including (1) a Simon Says Games in which they attempted to duplicate random sequences of targeted finger strikes and (2) a Text Entry activity in which they used the GECO glove to enter target phrases in two different virtual keyboard modes. In a within-subjects design, both activities were performed both with and without vibrotactile feedback. Participants mean accuracies in correctly generating finger strikes with the pressurized glove were surprisingly high, both with and without the benefit of tactile feedback. Five of the subjects achieved mean accuracies exceeding 99 in both conditions. In Text Entry, tactile feedback provided a statistically significant performance benefit, quantified by characters entered per minute, as well as reduction in error rate. Secondary analyses of responses to a NASA Task Loader Index (TLX) subjective workload assessments reveal a benefit for tactile feedback in GECO glove use for data entry. This first-ever investigation of employment of a pressurized EVA glove for human-computer interface opens up a wide range of future applications, including text chat communications, manipulation of procedureschecklists, cataloguingannotating images, scientific note taking, human-robot interaction, and control of suit andor other EVA systems.

  10. Metaphors for the Nature of Human-Computer Interaction in an Empowering Environment: Interaction Style Influences the Manner of Human Accomplishment.

    Science.gov (United States)

    Weller, Herman G.; Hartson, H. Rex

    1992-01-01

    Describes human-computer interface needs for empowering environments in computer usage in which the machine handles the routine mechanics of problem solving while the user concentrates on its higher order meanings. A closed-loop model of interaction is described, interface as illusion is discussed, and metaphors for human-computer interaction are…

  11. A conceptual and computational model of moral decision making in human and artificial agents.

    Science.gov (United States)

    Wallach, Wendell; Franklin, Stan; Allen, Colin

    2010-07-01

    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model, we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global workspace theory, proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin, Baars, Ramamurthy, & Ventura, 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent's selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation. We will describe how the LIDA model helps integrate emotions into the human decision-making process, and we

  12. Can Computers Foster Human Users’ Creativity? Theory and Praxis of Mixed-Initiative Co-Creativity

    Directory of Open Access Journals (Sweden)

    Antonios Liapis

    2016-07-01

    Full Text Available This article discusses the impact of artificially intelligent computers to the process of design, play and educational activities. A computational process which has the necessary intelligence and creativity to take a proactive role in such activities can not only support human creativity but also foster it and prompt lateral thinking. The argument is made both from the perspective of human creativity, where the computational input is treated as an external stimulus which triggers re-framing of humans’ routines and mental associations, but also from the perspective of computational creativity where human input and initiative constrains the search space of the algorithm, enabling it to focus on specific possible solutions to a problem rather than globally search for the optimal. The article reviews four mixed-initiative tools (for design and educational play based on how they contribute to human-machine co-creativity. These paradigms serve different purposes, afford different human interaction methods and incorporate different computationally creative processes. Assessing how co-creativity is facilitated on a per-paradigm basis strengthens the theoretical argument and provides an initial seed for future work in the burgeoning domain of mixed-initiative interaction.

  13. Computation of particle detachment from floors due to human walking

    Science.gov (United States)

    Elhadidi, Basman; Khalifa, Ezzat

    2005-11-01

    A computational model for detachment of fine particles due to the unsteady flow under a foot is developed. As the foot approaches the floor, fluid volume is displaced laterally as a wall jet from the perimeter of the contact area at high velocity and acceleration. Unsteady aerodynamic forces on particles attached to the floor are considered. Results show that the jet velocity is ˜40 m/s for a foot idealized as a 15 cm circular disk approaching the floor at 1 m/s with a final gap of 0.8 mm. This velocity is sufficient to detach small particles (1˜μm). The flow accelerates at ˜400 m/s^2 which affects the detachment of larger sized particles (˜100 μm). As the disk is brought to rest, the unsteady jet expands outwards, advecting a vortex ring closely attached to it. At the disk edge, a counter rotating vortex is generated by the sudden deceleration of the disk. Both vortices can play a role in entrainment of the suspended particles in the flowfield. Numerical studies also show that the maximum jet velocity is ˜20 m/s for a simplified foot immediately after heel contact in the stance phase of the gait.

  14. Computer-assisted learning in human and dental medicine.

    Science.gov (United States)

    Höhne, S; Schumann, R R

    2004-04-01

    This article describes the development and application of new didactic methods for use in computer-assisted teaching and learning systems for training doctors and dentists. Taking the Meducase project as an example, didactic models and their technological implementation are explained, together with the limitations of imparting knowledge with the "new media". In addition, legal concepts for a progressive, pragmatic, and innovative distribution of knowledge to undergraduate students are presented. In conclusion, potential and visions for the wide use of electronic learning in the German and European universities in the future are discussed. Self-directed learning (SDL) is a key component in both undergraduate education and lifelong learning for medical practitioners. E-learning can already be used to promote SDL at undergraduate level. The Meducase project uses self-directed, constructive, case- and problem-oriented learning within a learning platform for medical and dental students. In the long run, e-learning programs can only be successful in education if there is consistent analysis and implementation of value-added factors and the development and use of media-didactic concepts matched to electronic learning. The use of innovative forms of licensing - open source licenses for software and similar licenses for content - facilitates continuous, free access to these programs for all students and teachers. These legal concepts offer the possibility of innovative knowledge distribution, quality assurance and standardization across specializations, university departments, and possibly even national borders.

  15. Computational model of soft tissues in the human upper airway.

    Science.gov (United States)

    Pelteret, J-P V; Reddy, B D

    2012-01-01

    This paper presents a three-dimensional finite element model of the tongue and surrounding soft tissues with potential application to the study of sleep apnoea and of linguistics and speech therapy. The anatomical data was obtained from the Visible Human Project, and the underlying histological data was also extracted and incorporated into the model. Hyperelastic constitutive models were used to describe the material behaviour, and material incompressibility was accounted for. An active Hill three-element muscle model was used to represent the muscular tissue of the tongue. The neural stimulus for each muscle group was determined through the use of a genetic algorithm-based neural control model. The fundamental behaviour of the tongue under gravitational and breathing-induced loading is investigated. It is demonstrated that, when a time-dependent loading is applied to the tongue, the neural model is able to control the position of the tongue and produce a physiologically realistic response for the genioglossus.

  16. Interactive 3D computer model of the human corneolimbal region

    DEFF Research Database (Denmark)

    Molvaer, Rikke Kongshaug; Andreasen, Arne; Heegaard, Steffen;

    2013-01-01

    in the superior limbal region and one LEC, six LCs and 12 FSPs in the inferior limbal region. Only few LECs, LCs and FSPs were localized nasally and temporally. CONCLUSION: Interactive 3D models are a powerful tool that may help to shed more light on the existence and spatial localization of the different stem......PURPOSE: This study aims to clarify the existence of and to map the localization of different proposed stem cell niches in the corneal limbal region. MATERIALS AND METHODS: One human eye was cut into 2200 consecutive sections. Every other section was stained with haematoxylin and eosin, digitized...... in the limbal region: limbal epithelial crypts (LECs), limbal crypts (LCs) and focal stromal projections (FSPs). In all, eight LECs, 25 LCs and 105 FSPs were identified in the limbal region. The LECs, LCs and FSPs were predominantly located in the superior limbal region with seven LECs, 19 LCs and 93 FSPs...

  17. Situated dialog in speech-based human-computer interaction

    CERN Document Server

    Raux, Antoine; Lane, Ian; Misu, Teruhisa

    2016-01-01

    This book provides a survey of the state-of-the-art in the practical implementation of Spoken Dialog Systems for applications in everyday settings. It includes contributions on key topics in situated dialog interaction from a number of leading researchers and offers a broad spectrum of perspectives on research and development in the area. In particular, it presents applications in robotics, knowledge access and communication and covers the following topics: dialog for interacting with robots; language understanding and generation; dialog architectures and modeling; core technologies; and the analysis of human discourse and interaction. The contributions are adapted and expanded contributions from the 2014 International Workshop on Spoken Dialog Systems (IWSDS 2014), where researchers and developers from industry and academia alike met to discuss and compare their implementation experiences, analyses and empirical findings.

  18. When a Talking-Face Computer Agent Is Half-Human and Half-Humanoid: Human Identity and Consistency Preference

    Science.gov (United States)

    Gong, Li; Nass, Clifford

    2007-01-01

    Computer-generated anthropomorphic characters are a growing type of communicator that is deployed in digital communication environments. An essential theoretical question is how people identify humanlike but clearly artificial, hence humanoid, entities in comparison to natural human ones. This identity categorization inquiry was approached under…

  19. Computational model of sustained acceleration effects on human cognitive performance.

    Science.gov (United States)

    McKinlly, Richard A; Gallimore, Jennie J

    2013-08-01

    Extreme acceleration maneuvers encountered in modern agile fighter aircraft can wreak havoc on human physiology, thereby significantly influencing cognitive task performance. As oxygen content declines under acceleration stress, the activity of high order cortical tissue reduces to ensure sufficient metabolic resources are available for critical life-sustaining autonomic functions. Consequently, cognitive abilities reliant on these affected areas suffer significant performance degradations. The goal was to develop and validate a model capable of predicting human cognitive performance under acceleration stress. Development began with creation of a proportional control cardiovascular model that produced predictions of several hemodynamic parameters, including eye-level blood pressure and regional cerebral oxygen saturation (rSo2). An algorithm was derived to relate changes in rSo2 within specific brain structures to performance on cognitive tasks that require engagement of different brain areas. Data from the "precision timing" experiment were then used to validate the model predicting cognitive performance as a function of G(z) profile. The following are value ranges. Results showed high agreement between the measured and predicted values for the rSo2 (correlation coefficient: 0.7483-0.8687; linear best-fit slope: 0.5760-0.9484; mean percent error: 0.75-3.33) and cognitive performance models (motion inference task--correlation coefficient: 0.7103-0.9451; linear best-fit slope: 0.7416-0.9144; mean percent error: 6.35-38.21; precision timing task--correlation coefficient: 0.6856-0.9726; linear best-fit slope: 0.5795-1.027; mean percent error: 6.30-17.28). The evidence suggests that the model is capable of accurately predicting cognitive performance of simplistic tasks under high acceleration stress.

  20. Computational analysis of splicing errors and mutations in human transcripts

    Directory of Open Access Journals (Sweden)

    Gelfand Mikhail S

    2008-01-01

    Full Text Available Abstract Background Most retained introns found in human cDNAs generated by high-throughput sequencing projects seem to result from underspliced transcripts, and thus they capture intermediate steps of pre-mRNA splicing. On the other hand, mutations in splice sites cause exon skipping of the respective exon or activation of pre-existing cryptic sites. Both types of events reflect properties of the splicing mechanism. Results The retained introns were significantly shorter than constitutive ones, and skipped exons are shorter than exons with cryptic sites. Both donor and acceptor splice sites of retained introns were weaker than splice sites of constitutive introns. The authentic acceptor sites affected by mutations were significantly weaker in exons with activated cryptic sites than in skipped exons. The distance from a mutated splice site to the nearest equivalent site is significantly shorter in cases of activated cryptic sites compared to exon skipping events. The prevalence of retained introns within genes monotonically increased in the 5'-to-3' direction (more retained introns close to the 3'-end, consistent with the model of co-transcriptional splicing. The density of exonic splicing enhancers was higher, and the density of exonic splicing silencers lower in retained introns compared to constitutive ones and in exons with cryptic sites compared to skipped exons. Conclusion Thus the analysis of retained introns in human cDNA, exons skipped due to mutations in splice sites and exons with cryptic sites produced results consistent with the intron definition mechanism of splicing of short introns, co-transcriptional splicing, dependence of splicing efficiency on the splice site strength and the density of candidate exonic splicing enhancers and silencers. These results are consistent with other, recently published analyses.

  1. Rugoscopy: Human identification by computer-assisted photographic superimposition technique

    Directory of Open Access Journals (Sweden)

    Rezwana Begum Mohammed

    2013-01-01

    Full Text Available Background: Human identification has been studied since fourteenth century and it has gradually advanced for forensic purposes. Traditional methods such as dental, fingerprint, and DNA comparisons are probably the most common techniques used in this context, allowing fast and secure identification processes. But, in circumstances where identification of an individual by fingerprint or dental record comparison is difficult, palatal rugae may be considered as an alternative source of material. Aim: The present study was done to evaluate the individualistic nature and use of palatal rugae patterns for personal identification and also to test the efficiency of computerized software for forensic identification by photographic superimposition of palatal photographs obtained from casts. Materials and Methods: Two sets of Alginate impressions were made from the upper arches of 100 individuals (50 males and 50 females with one month interval in between and the casts were poured. All the teeth except the incisors were removed to ensure that only the palate could be used in identification process. In one set of the casts, the palatal rugae were highlighted with a graphite pencil. All the 200 casts were randomly numbered, and then, they were photographed with a 10.1 Mega Pixel Kodak digital camera using standardized method. Using computerized software, the digital photographs of the models without highlighting the palatal rugae were overlapped over the images (transparent of the palatal rugae with highlighted palatal rugae, in order to identify the pairs by superimposition technique. Incisors were remained and used as landmarks to determine the magnification required to bring the two set of photographs to the same size, in order to make perfect superimposition of images. Results: The result of the overlapping of the digital photographs of highlighted palatal rugae over normal set of models without highlighted palatal rugae resulted in 100% positive

  2. Proceedings of the Third International Conference on Intelligent Human Computer Interaction

    CERN Document Server

    Pokorný, Jaroslav; Snášel, Václav; Abraham, Ajith

    2013-01-01

    The Third International Conference on Intelligent Human Computer Interaction 2011 (IHCI 2011) was held at Charles University, Prague, Czech Republic from August 29 - August 31, 2011. This conference was third in the series, following IHCI 2009 and IHCI 2010 held in January at IIIT Allahabad, India. Human computer interaction is a fast growing research area and an attractive subject of interest for both academia and industry. There are many interesting and challenging topics that need to be researched and discussed. This book aims to provide excellent opportunities for the dissemination of interesting new research and discussion about presented topics. It can be useful for researchers working on various aspects of human computer interaction. Topics covered in this book include user interface and interaction, theoretical background and applications of HCI and also data mining and knowledge discovery as a support of HCI applications.

  3. Real Time Multiple Hand Gesture Recognition System for Human Computer Interaction

    Directory of Open Access Journals (Sweden)

    Siddharth S. Rautaray

    2012-05-01

    Full Text Available With the increasing use of computing devices in day to day life, the need of user friendly interfaces has lead towards the evolution of different types of interfaces for human computer interaction. Real time vision based hand gesture recognition affords users the ability to interact with computers in more natural and intuitive ways. Direct use of hands as an input device is an attractive method which can communicate much more information by itself in comparison to mice, joysticks etc allowing a greater number of recognition system that can be used in a variety of human computer interaction applications. The gesture recognition system consist of three main modules like hand segmentation, hand tracking and gesture recognition from hand features. The designed system further integrated with different applications like image browser, virtual game etc. possibilities for human computer interaction. Computer Vision based systems has the potential to provide more natural, non-contact solutions. The present research work focuses on to design and develops a practical framework for real time hand gesture.

  4. Human-computer interaction handbook fundamentals, evolving technologies and emerging applications

    CERN Document Server

    Sears, Andrew

    2007-01-01

    This second edition of The Human-Computer Interaction Handbook provides an updated, comprehensive overview of the most important research in the field, including insights that are directly applicable throughout the process of developing effective interactive information technologies. It features cutting-edge advances to the scientific knowledge base, as well as visionary perspectives and developments that fundamentally transform the way in which researchers and practitioners view the discipline. As the seminal volume of HCI research and practice, The Human-Computer Interaction Handbook feature

  5. Application of next generation sequencing to human gene fusion detection: computational tools, features and perspectives.

    Science.gov (United States)

    Wang, Qingguo; Xia, Junfeng; Jia, Peilin; Pao, William; Zhao, Zhongming

    2013-07-01

    Gene fusions are important genomic events in human cancer because their fusion gene products can drive the development of cancer and thus are potential prognostic tools or therapeutic targets in anti-cancer treatment. Major advancements have been made in computational approaches for fusion gene discovery over the past 3 years due to improvements and widespread applications of high-throughput next generation sequencing (NGS) technologies. To identify fusions from NGS data, existing methods typically leverage the strengths of both sequencing technologies and computational strategies. In this article, we review the NGS and computational features of existing methods for fusion gene detection and suggest directions for future development.

  6. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I Introduction to Digital Image Processing and AnalysisDigital Image Processing and AnalysisOverviewImage Analysis and Computer VisionImage Processing and Human VisionKey PointsExercisesReferencesFurther ReadingComputer Imaging SystemsImaging Systems OverviewImage Formation and SensingCVIPtools SoftwareImage RepresentationKey PointsExercisesSupplementary ExercisesReferencesFurther ReadingSection II Digital Image Analysis and Computer VisionIntroduction to Digital Image AnalysisIntroductionPreprocessingBinary Image AnalysisKey PointsExercisesSupplementary ExercisesReferencesFurther Read

  7. Signatures of a Statistical Computation in the Human Sense of Confidence.

    Science.gov (United States)

    Sanders, Joshua I; Hangya, Balázs; Kepecs, Adam

    2016-05-04

    Human confidence judgments are thought to originate from metacognitive processes that provide a subjective assessment about one's beliefs. Alternatively, confidence is framed in mathematics as an objective statistical quantity: the probability that a chosen hypothesis is correct. Despite similar terminology, it remains unclear whether the subjective feeling of confidence is related to the objective, statistical computation of confidence. To address this, we collected confidence reports from humans performing perceptual and knowledge-based psychometric decision tasks. We observed two counterintuitive patterns relating confidence to choice and evidence: apparent overconfidence in choices based on uninformative evidence, and decreasing confidence with increasing evidence strength for erroneous choices. We show that these patterns lawfully arise from statistical confidence, and therefore occur even for perfectly calibrated confidence measures. Furthermore, statistical confidence quantitatively accounted for human confidence in our tasks without necessitating heuristic operations. Accordingly, we suggest that the human feeling of confidence originates from a mental computation of statistical confidence.

  8. Signatures of a statistical computation in the human sense of confidence

    Science.gov (United States)

    Sanders, Joshua I.; Hangya, Balázs; Kepecs, Adam

    2017-01-01

    Summary Human confidence judgments are thought to originate from metacognitive processes that provide a subjective assessment about one’s beliefs. Alternatively, confidence is framed in mathematics as an objective statistical quantity: the estimated probability that a chosen hypothesis is correct. Despite similar terminology, it remains unclear whether the subjective feeling of confidence is related to the objective, statistical computation of confidence. To address this, we collected confidence reports from humans performing perceptual and knowledge-based psychometric decision tasks. We observed two counterintuitive patterns relating confidence to choice and evidence: apparent overconfidence in choices based on uninformative evidence, and for erroneous choices, that confidence decreased with increasing evidence strength. We show that these patterns lawfully arise when statistical confidence qualifies a decision. Furthermore, statistical confidence quantitatively accounted for human confidence in our tasks without necessitating heuristic operations. Accordingly, we suggest that the human feeling of confidence originates from a mental computation of statistical confidence. PMID:27151640

  9. A Computational Approach for Automated Posturing of a Human Finite Element Model

    Science.gov (United States)

    2016-07-01

    following: obtaining source geometries in the posture being tested, a so- called posturing “by hand” where geometries are moved to what “looks correct ...ARL-MR-0934• JULY 2016 US Army Research Laboratory A Computational Approach for Automated Posturing of a Human Finite ElementModel by Justin McKee...Automated Posturing of a Human Finite ElementModel by Justin McKee Bennett Aerospace, Inc., Cary, NC Adam Sokolow Weapons and Materials Research

  10. Human vs. Computer Diagnosis of Students' Natural Selection Knowledge: Testing the Efficacy of Text Analytic Software

    Science.gov (United States)

    Nehm, Ross H.; Haertig, Hendrik

    2012-01-01

    Our study examines the efficacy of Computer Assisted Scoring (CAS) of open-response text relative to expert human scoring within the complex domain of evolutionary biology. Specifically, we explored whether CAS can diagnose the explanatory elements (or Key Concepts) that comprise undergraduate students' explanatory models of natural selection with…

  11. A Project-Based Learning Setting to Human-Computer Interaction for Teenagers

    Science.gov (United States)

    Geyer, Cornelia; Geisler, Stefan

    2012-01-01

    Knowledge of fundamentals of human-computer interaction resp. usability engineering is getting more and more important in technical domains. However this interdisciplinary field of work and corresponding degree programs are not broadly known. Therefore at the Hochschule Ruhr West, University of Applied Sciences, a program was developed to give…

  12. Human-competitive evolution of quantum computing artefacts by Genetic Programming.

    Science.gov (United States)

    Massey, Paul; Clark, John A; Stepney, Susan

    2006-01-01

    We show how Genetic Programming (GP) can be used to evolve useful quantum computing artefacts of increasing sophistication and usefulness: firstly specific quantum circuits, then quantum programs, and finally system-independent quantum algorithms. We conclude the paper by presenting a human-competitive Quantum Fourier Transform (QFT) algorithm evolved by GP.

  13. Characteristics of an Intelligent Computer Assisted Instruction Shell with an Example in Human Physiology.

    Science.gov (United States)

    Dori, Yehudit J.; Yochim, Jerome M.

    1992-01-01

    Discusses exemplary teacher and student characteristics that can provide the base to generate an Intelligent Computer Assisted Instruction (ICAI) shell. Outlines the expertise, learning, student-model, and inference modules of an ICAI shell. Describes the development of an ICAI shell for an undergraduate course in human physiology. (33 references)…

  14. Human Computer Collaboration at the Edge: Enhancing Collective Situation Understanding with Controlled Natural Language

    Science.gov (United States)

    2016-09-06

    Braines, D. Pizzocaro, and C. Parizas, “Human-machine conversations to support multi-agency missions,” ACM SIGMOBILE Mobile Computing and Communications...management,” Commu- nications of the ACM , vol. 50, no. 3, pp. 44–49, 2007. [27] D. Braines, J. Ibbotson, D. Shaw, and A. Preece, “Building a living database

  15. Enhancing Human-Computer Interaction Design Education: Teaching Affordance Design for Emerging Mobile Devices

    Science.gov (United States)

    Faiola, Anthony; Matei, Sorin Adam

    2010-01-01

    The evolution of human-computer interaction design (HCID) over the last 20 years suggests that there is a growing need for educational scholars to consider new and more applicable theoretical models of interactive product design. The authors suggest that such paradigms would call for an approach that would equip HCID students with a better…

  16. The Human-Computer Interaction of Cross-Cultural Gaming Strategy

    Science.gov (United States)

    Chakraborty, Joyram; Norcio, Anthony F.; Van Der Veer, Jacob J.; Andre, Charles F.; Miller, Zachary; Regelsberger, Alexander

    2015-01-01

    This article explores the cultural dimensions of the human-computer interaction that underlies gaming strategies. The article is a desktop study of existing literature and is organized into five sections. The first examines the cultural aspects of knowledge processing. The social constructs technology interaction is discussed. Following this, the…

  17. Study on Speciation of Pr(III) in Human Blood Plasma by Computer Simulation

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Speciation of Pr(III) in human blood plasma has been investigated by computer simulation. The speciation and distribution of Pr(III) has been obtained. It has been found that most of Pr(III) is bound to phosphate and to form precipitate. The results obtained are in accord with experimental observations.

  18. HCI^2 Workbench: A Development Tool for Multimodal Human-Computer Interaction Systems

    NARCIS (Netherlands)

    Shen, Jie; Wenzhe, Shi; Pantic, Maja

    2011-01-01

    In this paper, we present a novel software tool designed and implemented to simplify the development process of Multimodal Human-Computer Interaction (MHCI) systems. This tool, which is called the HCI^2 Workbench, exploits a Publish / Subscribe (P/S) architecture [13] [14] to facilitate efficient an

  19. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems

    DEFF Research Database (Denmark)

    The CHI Papers and Notes program is continuing to grow along with many of our sister conferences. We are pleased that CHI is still the leading venue for research in human-computer interaction. CHI 2013 continued the use of subcommittees to manage the review process. Authors selected the subcommit...

  20. Use of computational modeling approaches in studying the binding interactions of compounds with human estrogen receptors.

    Science.gov (United States)

    Wang, Pan; Dang, Li; Zhu, Bao-Ting

    2016-01-01

    Estrogens have a whole host of physiological functions in many human organs and systems, including the reproductive, cardiovascular, and central nervous systems. Many naturally-occurring compounds with estrogenic or antiestrogenic activity are present in our environment and food sources. Synthetic estrogens and antiestrogens are also important therapeutic agents. At the molecular level, estrogen receptors (ERs) mediate most of the well-known actions of estrogens. Given recent advances in computational modeling tools, it is now highly practical to use these tools to study the interaction of human ERs with various types of ligands. There are two common categories of modeling techniques: one is the quantitative structure activity relationship (QSAR) analysis, which uses the structural information of the interacting ligands to predict the binding site properties of a macromolecule, and the other one is molecular docking-based computational analysis, which uses the 3-dimensional structural information of both the ligands and the receptor to predict the binding interaction. In this review, we discuss recent results that employed these and other related computational modeling approaches to characterize the binding interaction of various estrogens and antiestrogens with the human ERs. These examples clearly demonstrate that the computational modeling approaches, when used in combination with other experimental methods, are powerful tools that can precisely predict the binding interaction of various estrogenic ligands and their derivatives with the human ERs.

  1. A Framework and Implementation of User Interface and Human-Computer Interaction Instruction

    Science.gov (United States)

    Peslak, Alan

    2005-01-01

    Researchers have suggested that up to 50 % of the effort in development of information systems is devoted to user interface development (Douglas, Tremaine, Leventhal, Wills, & Manaris, 2002; Myers & Rosson, 1992). Yet little study has been performed on the inclusion of important interface and human-computer interaction topics into a current…

  2. Computer-based personality judgments are more accurate than those made by humans

    Science.gov (United States)

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-01

    Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507

  3. Computer-based personality judgments are more accurate than those made by humans.

    Science.gov (United States)

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-27

    Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy.

  4. MoCog1: A computer simulation of recognition-primed human decision making, considering emotions

    Science.gov (United States)

    Gevarter, William B.

    1992-01-01

    The successful results of the first stage of a research effort to develop a versatile computer model of motivated human cognitive behavior are reported. Most human decision making appears to be an experience-based, relatively straightforward, largely automatic response to situations, utilizing cues and opportunities perceived from the current environment. The development, considering emotions, of the architecture and computer program associated with such 'recognition-primed' decision-making is described. The resultant computer program (MoCog1) was successfully utilized as a vehicle to simulate earlier findings that relate how an individual's implicit theories orient the individual toward particular goals, with resultant cognitions, affects, and behavior in response to their environment.

  5. Exploring Effective Decision Making through Human-Centered and Computational Intelligence Methods

    Energy Technology Data Exchange (ETDEWEB)

    Han, Kyungsik; Cook, Kristin A.; Shih, Patrick C.

    2016-06-13

    Decision-making has long been studied to understand a psychological, cognitive, and social process of selecting an effective choice from alternative options. Its studies have been extended from a personal level to a group and collaborative level, and many computer-aided decision-making systems have been developed to help people make right decisions. There has been significant research growth in computational aspects of decision-making systems, yet comparatively little effort has existed in identifying and articulating user needs and requirements in assessing system outputs and the extent to which human judgments could be utilized for making accurate and reliable decisions. Our research focus is decision-making through human-centered and computational intelligence methods in a collaborative environment, and the objectives of this position paper are to bring our research ideas to the workshop, and share and discuss ideas.

  6. Human-Computer Interaction and Operators' Performance Optimizing Work Design with Activity Theory

    CERN Document Server

    Bedny, Gregory Z

    2010-01-01

    Directed to a broad and interdisciplinary audience, this book provides a complete account of what has been accomplished in applied and systemic-structural activity theory. It presents a new approach to applied psychology and the study of human work that has derived from activity theory. The selected articles demonstrate the basic principles of studying human work and particularly computer-based work in complex sociotechnical systems. The book includes examples of applied and systemic-structural activity theory to HCI and man-machine-systems, aviation, safety, design and optimization of human p

  7. [The human body and the computer as pedagogic tools for anatomy: review of the literature].

    Science.gov (United States)

    Captier, G; Canovas, F; Bonnel, F

    2005-09-01

    Since the first dissections, the human body has been the main tool for the teaching of anatomy in medical courses. For the last 30 years, university anatomy laboratory dissection has been brought into question and the total hours of anatomy teaching have decreased. In parallel, new technologies have progressed and become more competitive and more attractive than dissection. The aim of this review of the literature was to evaluate the use of the human body as a pedagogic tool compared to today's computer tools. Twenty comparative studies were reviewed. Their analysis showed that the human body remains the main tool in anatomy teaching even if anatomic demonstration (prosection) can replace dissection, and that the computer tools were complementary but not a substitute to dissection.

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  9. HEADING RECOVERY FROM OPTIC FLOW: COMPARING PERFORMANCE OF HUMANS AND COMPUTATIONAL MODELS

    Directory of Open Access Journals (Sweden)

    Andrew John Foulkes

    2013-06-01

    Full Text Available Human observers can perceive their direction of heading with a precision of about a degree. Several computational models of the processes underpinning the perception of heading have been proposed. In the present study we set out to assess which of four candidate models best captured human performance; the four models we selected reflected key differences in terms of approach and methods to modelling optic flow processing to recover movement parameters. We first generated a performance profile for human observers by measuring how performance changed as we systematically manipulated both the quantity (number of dots in the stimulus per frame and quality (amount of 2D directional noise of the flow field information. We then generated comparable performance profiles for the four candidate models. Models varied markedly in terms of both their performance and similarity to human data. To formally assess the match between the models and human performance we regressed the output of each of the four models against human performance data. We were able to rule out two models that produced very different performance profiles to human observers. The remaining two shared some similarities with human performance profiles in terms of the magnitude and pattern of thresholds. However none of the models tested could capture all aspect of the human data.

  10. Computation of electrostatic fields in anisotropic human tissues using the Finite Integration Technique (FIT)

    Science.gov (United States)

    Motresc, V. C.; van Rienen, U.

    2004-05-01

    The exposure of human body to electromagnetic fields has in the recent years become a matter of great interest for scientists working in the area of biology and biomedicine. Due to the difficulty of performing measurements, accurate models of the human body, in the form of a computer data set, are used for computations of the fields inside the body by employing numerical methods such as the method used for our calculations, namely the Finite Integration Technique (FIT). A fact that has to be taken into account when computing electromagnetic fields in the human body is that some tissue classes, i.e. cardiac and skeletal muscles, have higher electrical conductivity and permittivity along fibers rather than across them. This property leads to diagonal conductivity and permittivity tensors only when expressing them in a local coordinate system while in a global coordinate system they become full tensors. The Finite Integration Technique (FIT) in its classical form can handle diagonally anisotropic materials quite effectively but it needed an extension for handling fully anisotropic materials. New electric voltages were placed on the grid and a new averaging method of conductivity and permittivity on the grid was found. In this paper, we present results from electrostatic computations performed with the extended version of FIT for fully anisotropic materials.

  11. Computation of electrostatic fields in anisotropic human tissues using the Finite Integration Technique (FIT

    Directory of Open Access Journals (Sweden)

    V. C. Motresc

    2004-01-01

    Full Text Available The exposure of human body to electromagnetic fields has in the recent years become a matter of great interest for scientists working in the area of biology and biomedicine. Due to the difficulty of performing measurements, accurate models of the human body, in the form of a computer data set, are used for computations of the fields inside the body by employing numerical methods such as the method used for our calculations, namely the Finite Integration Technique (FIT. A fact that has to be taken into account when computing electromagnetic fields in the human body is that some tissue classes, i.e. cardiac and skeletal muscles, have higher electrical conductivity and permittivity along fibers rather than across them. This property leads to diagonal conductivity and permittivity tensors only when expressing them in a local coordinate system while in a global coordinate system they become full tensors. The Finite Integration Technique (FIT in its classical form can handle diagonally anisotropic materials quite effectively but it needed an extension for handling fully anisotropic materials. New electric voltages were placed on the grid and a new averaging method of conductivity and permittivity on the grid was found. In this paper, we present results from electrostatic computations performed with the extended version of FIT for fully anisotropic materials.

  12. Towards passive brain-computer interfaces: applying brain-computer interface technology to human-machine systems in general

    Science.gov (United States)

    Zander, Thorsten O.; Kothe, Christian

    2011-04-01

    Cognitive monitoring is an approach utilizing realtime brain signal decoding (RBSD) for gaining information on the ongoing cognitive user state. In recent decades this approach has brought valuable insight into the cognition of an interacting human. Automated RBSD can be used to set up a brain-computer interface (BCI) providing a novel input modality for technical systems solely based on brain activity. In BCIs the user usually sends voluntary and directed commands to control the connected computer system or to communicate through it. In this paper we propose an extension of this approach by fusing BCI technology with cognitive monitoring, providing valuable information about the users' intentions, situational interpretations and emotional states to the technical system. We call this approach passive BCI. In the following we give an overview of studies which utilize passive BCI, as well as other novel types of applications resulting from BCI technology. We especially focus on applications for healthy users, and the specific requirements and demands of this user group. Since the presented approach of combining cognitive monitoring with BCI technology is very similar to the concept of BCIs itself we propose a unifying categorization of BCI-based applications, including the novel approach of passive BCI.

  13. The use of analytical models in human-computer interface design

    Science.gov (United States)

    Gugerty, Leo

    1993-01-01

    Recently, a large number of human-computer interface (HCI) researchers have investigated building analytical models of the user, which are often implemented as computer models. These models simulate the cognitive processes and task knowledge of the user in ways that allow a researcher or designer to estimate various aspects of an interface's usability, such as when user errors are likely to occur. This information can lead to design improvements. Analytical models can supplement design guidelines by providing designers rigorous ways of analyzing the information-processing requirements of specific tasks (i.e., task analysis). These models offer the potential of improving early designs and replacing some of the early phases of usability testing, thus reducing the cost of interface design. This paper describes some of the many analytical models that are currently being developed and evaluates the usefulness of analytical models for human-computer interface design. This paper will focus on computational, analytical models, such as the GOMS model, rather than less formal, verbal models, because the more exact predictions and task descriptions of computational models may be useful to designers. The paper also discusses some of the practical requirements for using analytical models in complex design organizations such as NASA.

  14. Application of high-performance computing to numerical simulation of human movement

    Science.gov (United States)

    Anderson, F. C.; Ziegler, J. M.; Pandy, M. G.; Whalen, R. T.

    1995-01-01

    We have examined the feasibility of using massively-parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU on the Cray and with about 88 hours of CPU on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.

  15. A Review on the Computational Methods for Emotional State Estimation from the Human EEG

    Science.gov (United States)

    Kim, Min-Ki; Kim, Miyoung; Oh, Eunmi

    2013-01-01

    A growing number of affective computing researches recently developed a computer system that can recognize an emotional state of the human user to establish affective human-computer interactions. Various measures have been used to estimate emotional states, including self-report, startle response, behavioral response, autonomic measurement, and neurophysiologic measurement. Among them, inferring emotional states from electroencephalography (EEG) has received considerable attention as EEG could directly reflect emotional states with relatively low costs and simplicity. Yet, EEG-based emotional state estimation requires well-designed computational methods to extract information from complex and noisy multichannel EEG data. In this paper, we review the computational methods that have been developed to deduct EEG indices of emotion, to extract emotion-related features, or to classify EEG signals into one of many emotional states. We also propose using sequential Bayesian inference to estimate the continuous emotional state in real time. We present current challenges for building an EEG-based emotion recognition system and suggest some future directions.   PMID:23634176

  16. Automatic procedure for realistic 3D finite element modelling of human brain for bioelectromagnetic computations

    Energy Technology Data Exchange (ETDEWEB)

    Aristovich, K Y; Khan, S H, E-mail: kirill.aristovich.1@city.ac.u [School of Engineering and Mathematical Sciences, City University London, Northampton Square, London EC1V 0HB (United Kingdom)

    2010-07-01

    Realistic computer modelling of biological objects requires building of very accurate and realistic computer models based on geometric and material data, type, and accuracy of numerical analyses. This paper presents some of the automatic tools and algorithms that were used to build accurate and realistic 3D finite element (FE) model of whole-brain. These models were used to solve the forward problem in magnetic field tomography (MFT) based on Magnetoencephalography (MEG). The forward problem involves modelling and computation of magnetic fields produced by human brain during cognitive processing. The geometric parameters of the model were obtained from accurate Magnetic Resonance Imaging (MRI) data and the material properties - from those obtained from Diffusion Tensor MRI (DTMRI). The 3D FE models of the brain built using this approach has been shown to be very accurate in terms of both geometric and material properties. The model is stored on the computer in Computer-Aided Parametrical Design (CAD) format. This allows the model to be used in a wide a range of methods of analysis, such as finite element method (FEM), Boundary Element Method (BEM), Monte-Carlo Simulations, etc. The generic model building approach presented here could be used for accurate and realistic modelling of human brain and many other biological objects.

  17. A computational predictor of human episodic memory based on a theta phase precession network.

    Directory of Open Access Journals (Sweden)

    Naoyuki Sato

    Full Text Available In the rodent hippocampus, a phase precession phenomena of place cell firing with the local field potential (LFP theta is called "theta phase precession" and is considered to contribute to memory formation with spike time dependent plasticity (STDP. On the other hand, in the primate hippocampus, the existence of theta phase precession is unclear. Our computational studies have demonstrated that theta phase precession dynamics could contribute to primate-hippocampal dependent memory formation, such as object-place association memory. In this paper, we evaluate human theta phase precession by using a theory-experiment combined analysis. Human memory recall of object-place associations was analyzed by an individual hippocampal network simulated by theta phase precession dynamics of human eye movement and EEG data during memory encoding. It was found that the computational recall of the resultant network is significantly correlated with human memory recall performance, while other computational predictors without theta phase precession are not significantly correlated with subsequent memory recall. Moreover the correlation is larger than the correlation between human recall and traditional experimental predictors. These results indicate that theta phase precession dynamics are necessary for the better prediction of human recall performance with eye movement and EEG data. In this analysis, theta phase precession dynamics appear useful for the extraction of memory-dependent components from the spatio-temporal pattern of eye movement and EEG data as an associative network. Theta phase precession may be a common neural dynamic between rodents and humans for the formation of environmental memories.

  18. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar in ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity discussing the impact and for addressing issues and solutions to the main challenges facing CMS computing. The lack of manpower is particul...

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  20. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  1. Cross-cultural human-computer interaction and user experience design a semiotic perspective

    CERN Document Server

    Brejcha, Jan

    2015-01-01

    This book describes patterns of language and culture in human-computer interaction (HCI). Through numerous examples, it shows why these patterns matter and how to exploit them to design a better user experience (UX) with computer systems. It provides scientific information on the theoretical and practical areas of the interaction and communication design for research experts and industry practitioners and covers the latest research in semiotics and cultural studies, bringing a set of tools and methods to benefit the process of designing with the cultural background in mind.

  2. Portable tongue-supported human computer interaction system design and implementation.

    Science.gov (United States)

    Quain, Rohan; Khan, Masood Mehmood

    2014-01-01

    Tongue supported human-computer interaction (TSHCI) systems can help critically ill patients interact with both computers and people. These systems can be particularly useful for patients suffering injuries above C7 on their spinal vertebrae. Despite recent successes in their application, several limitations restrict performance of existing TSHCI systems and discourage their use in real life situations. This paper proposes a low-cost, less-intrusive, portable and easy to use design for implementing a TSHCI system. Two applications of the proposed system are reported. Design considerations and performance of the proposed system are also presented.

  3. Is the corticomedullary index valid to distinguish human from nonhuman bones: a multislice computed tomography study.

    Science.gov (United States)

    Rérolle, Camille; Saint-Martin, Pauline; Dedouit, Fabrice; Rousseau, Hervé; Telmon, Norbert

    2013-09-10

    The first step in the identification process of bone remains is to determine whether they are of human or nonhuman origin. This issue may arise when only a fragment of bone is available, as the species of origin is usually easily determined on a complete bone. The present study aims to assess the validity of a morphometric method used by French forensic anthropologists to determine the species of origin: the corticomedullary index (CMI), defined by the ratio of the diameter of the medullary cavity to the total diameter of the bone. We studied the constancy of the CMI from measurements made on computed tomography images (CT scans) of different human bones, and compared our measurements with reference values selected in the literature. The measurements obtained on CT scans at three different sites of 30 human femurs, 24 tibias, and 24 fibulas were compared between themselves and with the CMI reference values for humans, pigs, dogs and sheep. Our results differed significantly from these reference values, with three exceptions: the proximal quarter of the femur and mid-fibular measurements for the human CMI, and the proximal quarter of the tibia for the sheep CMI. Mid-tibial, mid-femoral, and mid-fibular measurements also differed significantly between themselves. Only 22.6% of CT scans of human bones were correctly identified as human. We concluded that the CMI is not an effective method for determining the human origin of bone remains.

  4. Human Factors Principles in Design of Computer-Mediated Visualization for Robot Missions

    Energy Technology Data Exchange (ETDEWEB)

    David I Gertman; David J Bruemmer

    2008-12-01

    With increased use of robots as a resource in missions supporting countermine, improvised explosive devices (IEDs), and chemical, biological, radiological nuclear and conventional explosives (CBRNE), fully understanding the best means by which to complement the human operator’s underlying perceptual and cognitive processes could not be more important. Consistent with control and display integration practices in many other high technology computer-supported applications, current robotic design practices rely highly upon static guidelines and design heuristics that reflect the expertise and experience of the individual designer. In order to use what we know about human factors (HF) to drive human robot interaction (HRI) design, this paper reviews underlying human perception and cognition principles and shows how they were applied to a threat detection domain.

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, not only by using opportunistic resources like the San Diego Supercomputer Center which was accessible, to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  6. Computational Characterization of Exogenous MicroRNAs that Can Be Transferred into Human Circulation.

    Directory of Open Access Journals (Sweden)

    Jiang Shu

    Full Text Available MicroRNAs have been long considered synthesized endogenously until very recent discoveries showing that human can absorb dietary microRNAs from animal and plant origins while the mechanism remains unknown. Compelling evidences of microRNAs from rice, milk, and honeysuckle transported to human blood and tissues have created a high volume of interests in the fundamental questions that which and how exogenous microRNAs can be transferred into human circulation and possibly exert functions in humans. Here we present an integrated genomics and computational analysis to study the potential deciding features of transportable microRNAs. Specifically, we analyzed all publicly available microRNAs, a total of 34,612 from 194 species, with 1,102 features derived from the microRNA sequence and structure. Through in-depth bioinformatics analysis, 8 groups of discriminative features have been used to characterize human circulating microRNAs and infer the likelihood that a microRNA will get transferred into human circulation. For example, 345 dietary microRNAs have been predicted as highly transportable candidates where 117 of them have identical sequences with their homologs in human and 73 are known to be associated with exosomes. Through a milk feeding experiment, we have validated 9 cow-milk microRNAs in human plasma using microRNA-sequencing analysis, including the top ranked microRNAs such as bta-miR-487b, miR-181b, and miR-421. The implications in health-related processes have been illustrated in the functional analysis. This work demonstrates the data-driven computational analysis is highly promising to study novel molecular characteristics of transportable microRNAs while bypassing the complex mechanistic details.

  7. Advancements in remote physiological measurement and applications in human-computer interaction

    Science.gov (United States)

    McDuff, Daniel

    2017-04-01

    Physiological signals are important for tracking health and emotional states. Imaging photoplethysmography (iPPG) is a set of techniques for remotely recovering cardio-pulmonary signals from video of the human body. Advances in iPPG methods over the past decade combined with the ubiquity of digital cameras presents the possibility for many new, lowcost applications of physiological monitoring. This talk will highlight methods for recovering physiological signals, work characterizing the impact of video parameters and hardware on these measurements, and applications of this technology in human-computer interfaces.

  8. Toward Scalable Trustworthy Computing Using the Human-Physiology-Immunity Metaphor

    Energy Technology Data Exchange (ETDEWEB)

    Hively, Lee M [ORNL; Sheldon, Frederick T [ORNL

    2011-01-01

    The cybersecurity landscape consists of an ad hoc patchwork of solutions. Optimal cybersecurity is difficult for various reasons: complexity, immense data and processing requirements, resource-agnostic cloud computing, practical time-space-energy constraints, inherent flaws in 'Maginot Line' defenses, and the growing number and sophistication of cyberattacks. This article defines the high-priority problems and examines the potential solution space. In that space, achieving scalable trustworthy computing and communications is possible through real-time knowledge-based decisions about cyber trust. This vision is based on the human-physiology-immunity metaphor and the human brain's ability to extract knowledge from data and information. The article outlines future steps toward scalable trustworthy systems requiring a long-term commitment to solve the well-known challenges.

  9. Human perceptual deficits as factors in computer interface test and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Bowser, S.E.

    1992-06-01

    Issues related to testing and evaluating human computer interfaces are usually based on the machine rather than on the human portion of the computer interface. Perceptual characteristics of the expected user are rarely investigated, and interface designers ignore known population perceptual limitations. For these reasons, environmental impacts on the equipment will more likely be defined than will user perceptual characteristics. The investigation of user population characteristics is most often directed toward intellectual abilities and anthropometry. This problem is compounded by the fact that some deficits capabilities tend to be found in higher-than-overall population distribution in some user groups. The test and evaluation community can address the issue from two primary aspects. First, assessing user characteristics should be extended to include tests of perceptual capability. Secondly, interface designs should use multimode information coding.

  10. Interactions among human behavior, social networks, and societal infrastructures: A Case Study in Computational Epidemiology

    Science.gov (United States)

    Barrett, Christopher L.; Bisset, Keith; Chen, Jiangzhuo; Eubank, Stephen; Lewis, Bryan; Kumar, V. S. Anil; Marathe, Madhav V.; Mortveit, Henning S.

    Human behavior, social networks, and the civil infrastructures are closely intertwined. Understanding their co-evolution is critical for designing public policies and decision support for disaster planning. For example, human behaviors and day to day activities of individuals create dense social interactions that are characteristic of modern urban societies. These dense social networks provide a perfect fabric for fast, uncontrolled disease propagation. Conversely, people’s behavior in response to public policies and their perception of how the crisis is unfolding as a result of disease outbreak can dramatically alter the normally stable social interactions. Effective planning and response strategies must take these complicated interactions into account. In this chapter, we describe a computer simulation based approach to study these issues using public health and computational epidemiology as an illustrative example. We also formulate game-theoretic and stochastic optimization problems that capture many of the problems that we study empirically.

  11. AFFECTIVE AND EMOTIONAL ASPECTS OF HUMAN-COMPUTER INTERACTION: Game-Based and Innovative Learning Approaches

    Directory of Open Access Journals (Sweden)

    A. Askim GULUMBAY, Anadolu University, TURKEY

    2006-07-01

    Full Text Available This book was edited by, Maja Pivec, an educator at the University of Applied Sciences, and published by IOS Pres in 2006. The learning process can be seen as an emotional and personal experience that is addictive and leads learners to proactive behavior. New research methods in this field are related to affective and emotional approaches to computersupported learning and human-computer interactions.Bringing together scientists and research aspects from psychology, educational sciences, cognitive sciences, various aspects of communication and human computer interaction, interface design andcomputer science on one hand and educators and game industry on the other, this should open gates to evolutionary changes of the learning industry. The major topics discussed are emotions, motivation, games and game-experience.

  12. Computer

    CERN Document Server

    Atkinson, Paul

    2011-01-01

    The pixelated rectangle we spend most of our day staring at in silence is not the television as many long feared, but the computer-the ubiquitous portal of work and personal lives. At this point, the computer is almost so common we don't notice it in our view. It's difficult to envision that not that long ago it was a gigantic, room-sized structure only to be accessed by a few inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little-more noted than a toaster. These dramati

  13. Conformational effects on the circular dichroism of Human Carbonic Anhydrase II: a multilevel computational study.

    Directory of Open Access Journals (Sweden)

    Tatyana G Karabencheva-Christova

    Full Text Available Circular Dichroism (CD spectroscopy is a powerful method for investigating conformational changes in proteins and therefore has numerous applications in structural and molecular biology. Here a computational investigation of the CD spectrum of the Human Carbonic Anhydrase II (HCAII, with main focus on the near-UV CD spectra of the wild-type enzyme and it seven tryptophan mutant forms, is presented and compared to experimental studies. Multilevel computational methods (Molecular Dynamics, Semiempirical Quantum Mechanics, Time-Dependent Density Functional Theory were applied in order to gain insight into the mechanisms of interaction between the aromatic chromophores within the protein environment and understand how the conformational flexibility of the protein influences these mechanisms. The analysis suggests that combining CD semi empirical calculations, crystal structures and molecular dynamics (MD could help in achieving a better agreement between the computed and experimental protein spectra and provide some unique insight into the dynamic nature of the mechanisms of chromophore interactions.

  14. Building HAL: computers that sense, recognize, and respond to human emotion

    Science.gov (United States)

    Picard, Rosalind W.

    2001-06-01

    The HAL 9000 computer, the inimitable star of the classic Kubrick and Clarke film '2001: A Space Odyssey,' displayed image understanding capabilities vastly beyond today's computer systems. HAL could not only instantly recognize who he was interacting with, but also he could lip read, judge aesthetics of visual sketches, recognize emotions subtly expressed by scientists on board the ship, and respond to these emotions in an adaptive personalized way. Of course, HAL also had capabilities that we might not want to give to machines, like the ability to terminate life support or otherwise take lives of people. This presentation highlights recent research in giving machines certain affective abilities that aim to make them ore intelligent, shows examples of some of these systems, and describes the role that affective abilities may play in future human-computer interaction.

  15. Measuring human emotions with modular neural networks and computer vision based applications

    Directory of Open Access Journals (Sweden)

    Veaceslav Albu

    2015-05-01

    Full Text Available This paper describes a neural network architecture for emotion recognition for human-computer interfaces and applied systems. In the current research, we propose a combination of the most recent biometric techniques with the neural networks (NN approach for real-time emotion and behavioral analysis. The system will be tested in real-time applications of customers' behavior for distributed on-land systems, such as kiosks and ATMs.

  16. Rapid Human-Computer Interactive Conceptual Design of Mobile and Manipulative Robot Systems

    Science.gov (United States)

    2015-05-19

    Learning Comparative User Models for Accelerating Human-Computer Collaborative Search, Evolutionary and Biologically Inspired Music , Sound, Art and...has been investigated theoretically to some extent ([12]) and successfully applied to artistic tasks ([11, 5]). Our hypothesis is that it is possible...model’s prediction to the sign of the original entry. If the signs coincide for all entries, the network is considered to be successfully trained

  17. Computers in a human perspective: an alternative way of teaching informatics to health professionals.

    Science.gov (United States)

    Schneider, W

    1989-11-01

    An alternative way of teaching informatics, especially health informatics, to health professionals of different categories has been developed and practiced. The essentials of human competence and skill in handling and processing information are presented parallel with the essentials of computer-assisted methodologies and technologies of formal language-based informatics. Requirements on how eventually useful computer-based tools will have to be designed in order to be well adapted to genuine human skill and competence in handling tools in various work contexts are established. On the basis of such a balanced knowledge methods for work analysis are introduced. These include how the existing problems at a workplace can be identified and analyzed in relation to the goals to be achieved. Special emphasis is given to new ways of information analysis, i.e. methods which even allow the comprehension and documentation of those parts of the actually practiced 'human' information handling and processing which are normally overlooked, as e.g. non-verbal communication processes and so-called 'tacit knowledge' based information handling and processing activities. Different ways of problem solving are discussed involving in an integrated human perspective--alternative staffing, enhancement of the competence of the staff, optimal planning of premises as well as organizational and technical means. The main result of this alternative way of education has been a considerably improved user competence which in turn has led to very different designs of computer assistance and man-computer interfaces. It is the purpose of this paper to give a brief outline of the teaching material and a short presentation of the above mentioned results.(ABSTRACT TRUNCATED AT 250 WORDS)

  18. OPTIMIZATION DESIGN OF HYDRAU-LIC MANIFOLD BLOCKS BASED ON HUMAN-COMPUTER COOPERATIVE GENETIC ALGORITHM

    Institute of Scientific and Technical Information of China (English)

    Feng Yi; Li Li; Tian Shujun

    2003-01-01

    Optimization design of hydraulic manifold blocks (HMB) is studied as a complex solid spatial layout problem. Based on comprehensive research into structure features and design rules of HMB, an optimal mathematical model for this problem is presented. Using human-computer cooperative genetic algorithm (GA) and its hybrid optimization strategies, integrated layout and connection design schemes of HMB can be automatically optimized. An example is given to testify it.

  19. 08292 Abstracts Collection -- The Study of Visual Aesthetics in Human-Computer Interaction

    OpenAIRE

    Hassenzahl, Marc; Lindgaard, Gitte; Platz, Axel; Tractinsky, Noam

    2008-01-01

    From 13.07. to 16.07.2008, the Dagstuhl Seminar 08292 ``The Study of Visual Aesthetics in Human-Computer Interaction'' was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first secti...

  20. AFFECTIVE AND EMOTIONAL ASPECTS OF HUMAN-COMPUTER INTERACTION: Game-Based and Innovative Learning Approaches

    OpenAIRE

    A. Askim GULUMBAY, Anadolu University, TURKEY

    2006-01-01

    This book was edited by Maja Pivec, an educator at the University of Applied Sciences, and published by IOS Press in 2006. The learning process can be seen as an emotional and personal experience that is addictive and leads learners to proactive behavior. New research methods in this field are related to affective and emotional approaches to computer-supported learning and human-computer interactions. Bringing together scientists and research aspects from psychology, educational sciences, cogni...

  1. Towards a semio-cognitive theory of human-computer interaction

    OpenAIRE

    Scolari, Carlos Alberto

    2001-01-01

    The research presented here is theoretical and introduces a critical analysis of instrumental approaches in Human-Computer Interaction (HCI). From a semiotic point of view, interfaces are not "natural" or "neutral" instruments, but rather complex sense-production devices. Interaction, in other words, is far from being a "transparent" process. In this abstract we present the fundamentals of a theoretical model that combines Semiotics with Cognitive Science approaches.

  2. US Army Weapon Systems Human-Computer Interface (WSHCI) style guide, Version 1

    Energy Technology Data Exchange (ETDEWEB)

    Avery, L.W.; O'Mara, P.A.; Shepard, A.P.

    1996-09-30

    A stated goal of the U.S. Army has been the standardization of the human-computer interfaces (HCIs) of its systems. Some of the tools being used to accomplish this standardization are HCI design guidelines and style guides. Currently, the Army is employing a number of style guides. While these style guides provide good guidance for the command, control, communications, computers, and intelligence (C4I) domain, they do not necessarily represent the more unique requirements of the Army's real-time and near-real-time (RT/NRT) weapon systems. The Office of the Director of Information for Command, Control, Communications, and Computers (DISC4), in conjunction with the Weapon Systems Technical Architecture Working Group (WSTAWG), recognized this need as part of their activities to revise the Army Technical Architecture (ATA). To address this need, DISC4 tasked the Pacific Northwest National Laboratory (PNNL) to develop an HCI style guide unique to Army weapon systems. This document, the U.S. Army Weapon Systems Human-Computer Interface (WSHCI) Style Guide, represents the first version of that style guide. The purpose of this document is to provide HCI design guidance for RT/NRT Army systems across the weapon system domains of ground, aviation, missile, and soldier systems. Each domain should customize and extend this guidance by developing domain-specific style guides, which will be used to guide the development of future systems within their domains.

  3. The experience of agency in human-computer interactions: a review.

    Science.gov (United States)

    Limerick, Hannah; Coyle, David; Moore, James W

    2014-01-01

    The sense of agency is the experience of controlling both one's body and the external environment. Although the sense of agency has been studied extensively, there is a paucity of studies in applied "real-life" situations. One applied domain that seems highly relevant is human-computer interaction (HCI), as an increasing number of our everyday agentive interactions involve technology. Indeed, HCI has long recognized the feeling of control as a key factor in how people experience interactions with technology. The aim of this review is to summarize and examine the possible links between the sense of agency and understanding control in HCI. We explore the overlap between HCI and the sense of agency for computer input modalities and system feedback, computer assistance, and joint actions between humans and computers. An overarching consideration is how agency research can inform HCI and vice versa. Finally, we discuss the potential ethical implications of personal responsibility in an ever-increasing society of technology users and intelligent machine interfaces.

  4. An Efficient and Secure m-IPS Scheme of Mobile Devices for Human-Centric Computing

    Directory of Open Access Journals (Sweden)

    Young-Sik Jeong

    2014-01-01

    Full Text Available Recent rapid developments in wireless and mobile IT technologies have led to their application in many real-life areas, such as disasters, home networks, mobile social networks, medical services, industry, schools, and the military. Business and work environments have become integrated with wired and wireless networks. Although the increasing use of mobile devices on wireless networks improves work efficiency and provides greater convenience, wireless access to networks represents a security threat. Currently, wireless intrusion prevention systems (IPSs) are used to prevent wireless security threats. However, these are not an ideal security measure for businesses that utilize mobile devices, because they do not take account of temporal-spatial and role information factors. Therefore, in this paper, an efficient and secure mobile-IPS (m-IPS) is proposed for businesses utilizing mobile devices in mobile environments for human-centric computing. The m-IPS system incorporates temporal-spatial awareness in human-centric computing with various mobile devices and checks users' temporal-spatial information, profiles, and role information to provide precise access control. The m-IPS can also be extended to the Internet of Things (IoT), one of the key technologies for fully supporting human-centric computing environments, bringing it to truly ubiquitous settings with mobile devices.
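
    The core decision the abstract describes, combining role, location, and time of day into one access check, can be pictured as follows. The policy table, zone names, and time windows below are invented for illustration; the paper's actual rules differ.

        from dataclasses import dataclass
        from datetime import time

        @dataclass
        class AccessRequest:
            user_role: str
            location: str      # zone identifier from indoor positioning
            request_time: time

        # Hypothetical policy: role -> (allowed zones, allowed time window).
        POLICY = {
            "engineer": ({"lab", "office"}, (time(8, 0), time(20, 0))),
            "visitor":  ({"lobby"},         (time(9, 0), time(18, 0))),
        }

        def m_ips_allow(req):
            # Grant access only when role, current zone, and time of day
            # all satisfy the policy entry.
            entry = POLICY.get(req.user_role)
            if entry is None:
                return False
            zones, (start, end) = entry
            return req.location in zones and start <= req.request_time <= end

    Under this toy policy, m_ips_allow(AccessRequest("visitor", "lobby", time(10, 30))) returns True, while the same request from the "lab" zone is denied.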

  5. Adaptation of hybrid human-computer interaction systems using EEG error-related potentials.

    Science.gov (United States)

    Chavarriaga, Ricardo; Biasiucci, Andrea; Forster, Killian; Roggen, Daniel; Troster, Gerhard; Millan, Jose Del R

    2010-01-01

    Performance improvement in both humans and artificial systems strongly relies on the ability to recognize erroneous behavior or decisions. This paper, which builds upon previous studies on EEG error-related signals, presents a hybrid approach for human-computer interaction that uses human gestures to send commands to a computer and exploits brain activity to provide implicit feedback about the recognition of such commands. Using a simple computer game as a case study, we show that EEG activity evoked by erroneous gesture recognition can be classified in single trials above random levels. Automatic artifact rejection techniques are used, taking into account that subjects are allowed to move during the experiment. Moreover, we present a simple adaptation mechanism that uses the EEG signal to label newly acquired samples and can be used to re-calibrate the gesture recognition system in a supervised manner. Offline analyses show that, although the achieved EEG decoding accuracy is far from perfect, these signals convey sufficient information to significantly improve the overall system performance.
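
    A minimal sketch of the supervised re-calibration loop described above, assuming a binary per-trial error-potential flag from the EEG decoder and any classifier object with a scikit-learn-style fit method; the names and the exact relabeling policy are our assumptions, not the study's pipeline.

        import numpy as np

        def recalibrate(gesture_clf, features, predicted_labels, errp_flags):
            # Trials the EEG decoder flags as erroneous recognitions are
            # discarded; the remaining trials are labeled with the gesture
            # recognizer's own (EEG-confirmed) predictions and used to
            # re-train it in a supervised manner.
            keep = ~np.asarray(errp_flags, dtype=bool)
            X = np.asarray(features)[keep]
            y = np.asarray(predicted_labels)[keep]
            gesture_clf.fit(X, y)
            return gesture_clf

    The point of the design is that no explicit ground-truth labels are needed: the brain signal itself supplies the supervision.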

  6. Computational analysis of expression of human embryonic stem cell-associated signatures in tumors

    Directory of Open Access Journals (Sweden)

    Wang Xiaosheng

    2011-10-01

    Full Text Available Abstract Background The cancer stem cell model has been proposed based on the linkage between human embryonic stem cells and human cancer cells. However, evidence supporting the cancer stem cell model remains to be collected. In this study, we extensively examined the expression of human embryonic stem cell-associated signatures, including core genes, transcription factors, pathways and microRNAs, in various cancers using a computational biology approach. Results We used class comparison analysis and survival analysis algorithms to identify differentially expressed genes and their associated transcription factors, pathways and microRNAs between normal vs. tumor or good prognosis vs. poor prognosis phenotype classes, based on numerous human cancer gene expression datasets. We found that most of the human embryonic stem cell-associated signatures were frequently identified in the analysis, suggesting a strong linkage between human embryonic stem cells and cancer cells. Conclusions The present study revealed a close linkage between human embryonic stem cell-associated gene expression profiles and cancer-associated gene expression profiles, and therefore offers indirect support for the cancer stem cell theory. However, many issues of interest remain to be addressed further.

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  8. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  9. Human Computer Confluence in Rehabilitation: Digital Media Plasticity and Human Performance Plasticity

    DEFF Research Database (Denmark)

    Brooks, Anthony Lewis

    2013-01-01

    Digital media plasticity evocative to embodied interaction is presented as a utilitarian tool when mixed and matched to target human performance potentials specific to nuances of development for those with impairment. A distinct intervention strategy trains via alternative channeling of external... approaches promoting mindsets and activities commonly considered enduring, mundane and boring. The concept focuses on sensor-based interfaces mapped to control tailored content that acts as direct and immediate feedback mirroring input. These flexible, adaptive, and 'plastic' options offer facilitators new...

  10. Development of human reliability analysis methodology and its computer code during low power/shutdown operation

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Chang Hyun; You, Young Woo; Huh, Chang Wook; Kim, Ju Yeul; Kim, Do Hyung; Kim, Yoon Ik; Yang, Hui Chang [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hansung University, Seoul (Korea, Republic of)

    1997-07-01

    The objective of this study is to develop an appropriate procedure that can evaluate human error during LP/S (low power/shutdown) operation, and a computer code that calculates human error probabilities (HEPs) using this framework. The applicability of typical HRA methodologies to LP/S is assessed, and a new HRA procedure, SEPLOT (Systematic Evaluation Procedure for LP/S Operation Tasks), which reflects the characteristics of LP/S, is developed by selecting and categorizing human actions through a review of existing studies. This procedure is applied to evaluate the LOOP (Loss of Off-site Power) sequence, and the HEPs obtained using SEPLOT are used for the quantitative evaluation of the core uncovery frequency. In this evaluation, one of the dynamic reliability computer codes, DYLAM-3, which has advantages over the ET/FT approach, is used. The SEPLOT procedure developed in this study can provide a basis and framework for human error evaluation. It also makes it possible to assess the dynamic aspects of accidents leading to core uncovery by applying the HEPs obtained using SEPLOT as input data to the DYLAM-3 code. Eventually, it is expected that the results of this study will contribute to improved safety in LP/S and reduced uncertainties in risk. 57 refs., 17 tabs., 33 figs. (author)

  11. Human Computation: Object Recognition for Mobile Games Based on Single Player

    Directory of Open Access Journals (Sweden)

    Mohamed Sakr

    2014-07-01

    Full Text Available Smart phones and their applications have gained a lot of popularity nowadays. Many people depend on them for tasks such as banking, social networking, and entertainment. Games with a purpose (GWAPs) and microtask crowdsourcing are considered two techniques of human computation. GWAPs depend on humans to accomplish their tasks, so porting GWAPs to smart phones can greatly increase the number of human participants. One such human-computation system is the ESP Game, a game with a purpose and a good candidate for porting to smart phones. This paper presents a new mobile game called MemoryLabel. It is a single-player mobile game that helps in labeling images and gives descriptions for them. In addition, the game gives descriptions for objects in the image, not only the whole image. We deployed our game at the University of Menoufia for evaluation; the game was also published on the Google Play market for Android applications. In this trial, we first focused on measuring the total number of labels generated by our game and the number of objects that have been labeled. The results reveal that the proposed game has promising results in describing images and objects.
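
    Since MemoryLabel is single-player, labels from independent sessions have to be aggregated before they can be trusted, in the spirit of ESP-style agreement. A toy sketch of such aggregation; the threshold and data layout are our assumptions, not the paper's:

        from collections import Counter

        def confirmed_labels(label_log, min_agreement=3):
            # label_log: iterable of (object_id, label) pairs collected from
            # many players; a pair counts as confirmed once min_agreement
            # independent players have entered it.
            counts = Counter(label_log)
            return {pair for pair, n in counts.items() if n >= min_agreement}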

  12. CaPSID: A bioinformatics platform for computational pathogen sequence identification in human genomes and transcriptomes

    Directory of Open Access Journals (Sweden)

    Borozan Ivan

    2012-08-01

    Full Text Available Abstract Background It is now well established that nearly 20% of human cancers are caused by infectious agents, and the list of human oncogenic pathogens will grow in the future for a variety of cancer types. Whole tumor transcriptome and genome sequencing by next-generation sequencing technologies presents an unparalleled opportunity for pathogen detection and discovery in human tissues, but requires the development of new genome-wide bioinformatics tools. Results Here we present CaPSID (Computational Pathogen Sequence IDentification), a comprehensive bioinformatics platform for identifying, querying and visualizing both exogenous and endogenous pathogen nucleotide sequences in tumor genomes and transcriptomes. CaPSID includes a scalable, high-performance database for data storage and a web application that integrates the genome browser JBrowse. CaPSID also provides useful metrics for sequence analysis of pre-aligned BAM files, such as gene and genome coverage, and is optimized to run efficiently on multiprocessor computers with low memory usage. Conclusions To demonstrate the usefulness and efficiency of CaPSID, we carried out a comprehensive analysis of both a simulated dataset and transcriptome samples from ovarian cancer. CaPSID correctly identified all of the human and pathogen sequences in the simulated dataset, while in the ovarian dataset CaPSID's predictions were successfully validated in vitro.

  13. Evaluating the microstructure of human brain tissues using synchrotron radiation-based micro-computed tomography

    Science.gov (United States)

    Schulz, Georg; Morel, Anne; Imholz, Martha S.; Deyhle, Hans; Weitkamp, Timm; Zanette, Irene; Pfeiffer, Franz; David, Christian; Müller-Gerbl, Magdalena; Müller, Bert

    2010-09-01

    Minimally invasive deep brain neurosurgical interventions require a profound knowledge of the morphology of the human brain. Generic brain atlases are based on histology, which includes multiple preparation steps during sectioning and staining. In order to correct the distortions induced in the anisotropic, inhomogeneous soft matter, and therefore improve the accuracy of brain atlases, a non-destructive 3D imaging technique with the required spatial and density resolution is of great significance. Micro computed tomography provides true micrometer resolution. Its application to post mortem human brain, however, is questionable because the differences in X-ray absorption between the tissue components are weak. Therefore, magnetic resonance tomography has become the method of choice for three-dimensional imaging of the human brain. Because the spatial resolution of this method is limited, an alternative has to be found for the three-dimensional imaging of cellular microstructures within the brain. Therefore, the present study relies on synchrotron radiation-based micro computed tomography in the recently developed grating-based phase contrast mode. Using data acquired at beamline ID 19 (ESRF, Grenoble, France), we demonstrate that grating-based tomography yields premium images of the human thalamus, which can be used for the correction of histological distortions by 3D non-rigid registration.

  14. Cognitive engineering models: A prerequisite to the design of human-computer interaction in complex dynamic systems

    Science.gov (United States)

    Mitchell, Christine M.

    1993-01-01

    This chapter examines a class of human-computer interaction applications, specifically the design of human-computer interaction for the operators of complex systems. Such systems include space systems (e.g., manned systems such as the Shuttle or space station, and unmanned systems such as NASA scientific satellites), aviation systems (e.g., the flight deck of 'glass cockpit' airplanes or air traffic control) and industrial systems (e.g., power plants, telephone networks, and sophisticated, e.g., 'lights out,' manufacturing facilities). The main body of human-computer interaction (HCI) research complements but does not directly address the primary issues involved in human-computer interaction design for operators of complex systems. Interfaces to complex systems are somewhat special. The 'user' in such systems - i.e., the human operator responsible for safe and effective system operation - is highly skilled, someone who in human-machine systems engineering is sometimes characterized as 'well trained, well motivated'. The 'job' or task context is paramount and, thus, human-computer interaction is subordinate to human job interaction. The design of human interaction with complex systems, i.e., the design of human job interaction, is sometimes called cognitive engineering.

  15. Study of movement coordination in human ensembles via a novel computer-based set-up

    CERN Document Server

    Alderisio, Francesco; Fiore, Gianfranco; di Bernardo, Mario

    2016-01-01

    Movement coordination in human ensembles has received little attention in the literature. Existing experimental work has investigated situations where all subjects are connected with each other through direct visual and auditory coupling, so that social interaction affects their coordination. Here, we study coordination in human ensembles via a novel computer-based set-up that enables individuals to coordinate each other's motion from a distance, so as to minimize the influence of social interaction. The proposed platform makes it possible to implement different visual interaction patterns among the players, so that participants take into consideration the motion of a designated subset of the others. This allows the evaluation of the exclusive effects on coordination of the structure of interconnections among the players and their own dynamics. Our set-up also enables the deployment of virtual players to investigate dyadic interaction between a human and a virtual agent, as well as group synchron...

  16. [Geomagnetic storm decreases coherence of electric oscillations of human brain while working at the computer].

    Science.gov (United States)

    Novik, O B; Smirnov, F A

    2013-01-01

    The effect of geomagnetic storms at the latitude of Moscow on the electric oscillations of the human cerebral cortex was studied. In the course of electroencephalogram measurements it was shown that when volunteers aged 18-23 performed tasks using a computer during a moderate magnetic storm, or no later than 24 hrs after it, the value of the coherence function of electric oscillations of the human brain in the frontal and occipital areas in the range of 4.0-7.9 Hz (the so-called theta rhythm of the human brain) decreased by a factor of two or more, sometimes reaching zero, although arterial blood pressure, respiratory rate and the electrocardiogram registered during the electroencephalogram measurements remained within standard values.

  17. Simulation-based computation of dose to humans in radiological environments

    Energy Technology Data Exchange (ETDEWEB)

    Breazeal, N.L. [Sandia National Labs., Livermore, CA (United States); Davis, K.R.; Watson, R.A. [Sandia National Labs., Albuquerque, NM (United States); Vickers, D.S. [Brigham Young Univ., Provo, UT (United States). Dept. of Electrical and Computer Engineering; Ford, M.S. [Battelle Pantex, Amarillo, TX (United States). Dept. of Radiation Safety

    1996-03-01

    The Radiological Environment Modeling System (REMS) quantifies dose to humans working in radiological environments using the IGRIP (Interactive Graphical Robot Instruction Program) and Deneb/ERGO simulation software. These commercially available products are augmented with custom C code to provide radiation exposure information to, and collect radiation dose information from, workcell simulations. Through the use of any radiation transport code or measured data, a radiation exposure input database may be formulated. User-specified IGRIP simulations utilize these databases to compute and accumulate dose to programmable human models operating around radiation sources. Timing, distances, shielding, and human activity may be modeled accurately in the simulations. The accumulated dose is recorded in output files, and the user is able to process and view this output. The entire REMS capability can be operated from a single graphical user interface.
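
    The dose bookkeeping REMS performs can be pictured as integrating exposure rate along the simulated worker's trajectory. The stand-in below uses a bare inverse-square point-source falloff with no shielding; REMS itself draws exposure rates from transport-code or measured databases, so every detail here is an illustrative assumption.

        def accumulate_dose(path, sources, dt=1.0):
            # path:    sequence of (x, y, z) worker positions, one per dt seconds
            # sources: list of ((x, y, z), dose_rate_at_1m) point sources
            def dist2(a, b):
                return max(sum((ai - bi) ** 2 for ai, bi in zip(a, b)), 1e-6)
            dose = 0.0
            for pos in path:
                for src_pos, rate_1m in sources:
                    # Inverse-square falloff from each source, integrated
                    # over the time step (no shielding term in this sketch).
                    dose += rate_1m / dist2(pos, src_pos) * dt
            return dose

    Timing, distance, and shielding effects enter such a model through the trajectory sampling, the distance term, and attenuation factors on the rate, respectively.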

  18. Human Factors and Human-Computer Considerations in Teleradiology and Telepathology

    Directory of Open Access Journals (Sweden)

    Elizabeth A. Krupinski

    2014-02-01

    Full Text Available Radiology and pathology are unique among clinical specialties that incorporate telemedicine technologies into clinical practice in that, for the most part in traditional practice, there are few or no direct patient encounters. The majority of teleradiology and telepathology involves viewing images, which is exactly what occurs without the "tele" component. The images used are generally quite large, require dedicated displays and software for viewing, and present challenges to the clinician who must navigate through the presented data to render a diagnostic decision or interpretation. This digital viewing environment is very different from the more traditional reading environment (i.e., film and microscopy), necessitating a new look at how to optimize reading environments and address human factors issues. This paper will review some of the key components that need to be optimized for effective and efficient practice of teleradiology and telepathology using traditional workstations as well as some of the newer mobile viewing applications.

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  20. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  1. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  2. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  5. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  6. The use of computers to teach human anatomy and physiology to allied health and nursing students

    Science.gov (United States)

    Bergeron, Valerie J.

    Educational institutions are under tremendous pressure to adopt the newest technologies in order to prepare their students to meet the challenges of the twenty-first century. For the last twenty years huge amounts of money have been spent on computers, printers, software, multimedia projection equipment, and so forth. A reasonable question is, "Has it worked?" Has this infusion of resources, financial as well as human, resulted in improved learning? Are the students meeting the intended learning goals? Any attempt to develop answers to these questions should include examining the intended goals and exploring the effects of the changes on students and faculty. This project investigated the impact of a specific application of a computer program in a community college setting on students' attitudes and understanding of human anatomy and physiology. In this investigation two sites of the same community college, seven miles apart and with seemingly similar student populations, used different laboratory activities to teach human anatomy and physiology. At one site nursing students were taught using traditional dissections and laboratory activities; at the other site two of the dissections, specifically cat and sheep pluck, were replaced with the A.D.A.M.RTM (Animated Dissection of Anatomy for Medicine) computer program. Analysis of the attitude data indicated that students at both sites were extremely positive about their laboratory experiences. Analysis of the content data indicated a statistically significant difference in performance between the two sites in two of the eight content areas that were studied. For both topics the students using the computer program scored higher. A detailed analysis of the surveys, interviews with faculty and students, examination of laboratory materials, observations of laboratory facilities at both sites, and a cost-benefit analysis led to the development of seven recommendations. The recommendations call for action at the level of the

  7. Experimental verification of a computational technique for determining ground reactions in human bipedal stance.

    Science.gov (United States)

    Audu, Musa L; Kirsch, Robert F; Triolo, Ronald J

    2007-01-01

    We have developed a three-dimensional (3D) biomechanical model of human standing that enables us to study the mechanisms of posture and balance simultaneously in various directions in space. Since the two feet are on the ground, the system defines a kinematically closed chain, which has redundancy problems that cannot be resolved using the laws of mechanics alone. We have developed a computational (optimization) technique that avoids the problems with the closed-chain formulation, thus giving users of such models the ability to make predictions of joint moments, and potentially, muscle activations using more sophisticated musculoskeletal models. This paper describes the experimental verification of the computational technique that is used to estimate the ground reaction vector acting on an unconstrained foot while the other foot is attached to the ground, thus allowing human bipedal standing to be analyzed as an open-chain system. The computational approach was verified in terms of its ability to predict lower extremity joint moments derived from inverse dynamic simulations performed on data acquired from four able-bodied volunteers standing in various postures on force platforms. Sensitivity analyses performed with model simulations indicated which ground reaction force (GRF) and center of pressure (COP) components were most critical for providing better estimates of the joint moments. Overall, the joint moments predicted by the optimization approach are strongly correlated with the joint moments computed using the experimentally measured GRF and COP (correlations of 0.78 or higher, with near-unity slope between experimental and computational results) for the postures of the four subjects examined. These results indicate that this model-based technique can be relied upon to predict reasonable and consistent estimates of the joint moments using the predicted GRF and COP for most standing postures.

  8. Inhibitory surround and grouping effects in human and computational multiple object tracking

    Science.gov (United States)

    Yilmaz, Ozgur; Guler, Sadiye; Ogmen, Haluk

    2008-02-01

    Multiple Object Tracking (MOT) experiments show that human observers can track, over several seconds, up to five moving targets among several moving distractors. We extended these studies by designing modified MOT experiments to investigate the spatio-temporal characteristics of human visuo-cognitive mechanisms for tracking, and applied the findings and insights obtained from these experiments in designing computational multiple object tracking algorithms. Recent studies indicate that attention both enhances the neural activity of relevant information and suppresses irrelevant visual information in the surround. Results of our experiments suggest that the suppressive surround of attention extends up to 4 deg from the target stimulus, and that it takes at least 100 ms to build. We suggest that when the attentional windows corresponding to separate target regions are spatially close, they can be grouped to form a single attentional window to avoid interference originating from the suppressive surrounds. The grouping experiment results indicate that attentional windows are grouped into a single one when the distance between them is less than 1.5 deg. A preliminary implementation of the suppressive surround concept in our computational video object tracker resulted in fewer unnecessary object merges in video tracking experiments.
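
    The reported 1.5 deg grouping distance suggests a simple merging rule for a tracker's attentional windows. A greedy sketch of such grouping (the authors' actual implementation is not given in the abstract):

        import math

        def group_windows(centers, merge_deg=1.5):
            # Fuse window centers (in degrees of visual angle) whose distance
            # to an existing group's centroid falls below merge_deg.
            groups = []
            for c in centers:
                for g in groups:
                    gx = sum(p[0] for p in g) / len(g)
                    gy = sum(p[1] for p in g) / len(g)
                    if math.hypot(c[0] - gx, c[1] - gy) < merge_deg:
                        g.append(c)
                        break
                else:
                    groups.append([c])
            return groups

    Each returned group would then receive a single attentional window, keeping nearby targets out of one another's suppressive surrounds.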

  9. Kansei Colour Concepts to Improve Effective Colour Selection in Designing Human Computer Interfaces

    Directory of Open Access Journals (Sweden)

    Tharangie K G D

    2010-05-01

    Full Text Available Colours have a major impact on human-computer interaction. Although there is a very thin line between appropriate and inappropriate use of colours, if used properly, colours can be a powerful tool to improve the usefulness of a computer interface in a wide variety of areas. Many designers consider mostly the physical aspect of colour and tend to forget that a psychological aspect of colour exists. However, the findings of this study confirm that the psychological aspect, or affective dimension, of colour also plays an important role in colour interface design towards user satisfaction. Using Kansei Engineering principles, the study explores the affective variability of colours and how it can be manipulated to provide better design guidance and solutions. A group of twenty adults from Sri Lanka, ages ranging from 30 to 40, took part in the study. The survey was conducted using a Kansei colour questionnaire in normal atmospheric conditions. The results reveal that the affective variability of colours plays an important role in human-computer interaction as an influential factor in drawing the user towards, or withdrawing the user from, the interface, thereby improving or degrading user satisfaction.

  10. Delays and user performance in human-computer-network interaction tasks.

    Science.gov (United States)

    Caldwell, Barrett S; Wang, Enlie

    2009-12-01

    This article describes a series of studies conducted to examine factors affecting user perceptions, responses, and tolerance for network-based computer delays affecting distributed human-computer-network interaction (HCNI) tasks. HCNI tasks, even with increasing computing and network bandwidth capabilities, are still affected by human perceptions of delay and appropriate waiting times for information flow latencies. Six laboratory studies were conducted with university participants in China (Preliminary Experiments 1 through 3) and the United States (Experiments 4 through 6) to examine users' perceptions of elapsed time, the effect of perceived network task performance partners on delay tolerance, and expectations of appropriate delays based on task, situation, and network conditions. Results across the six experiments indicate that users' delay tolerance and estimated delay were affected by multiple task and expectation factors, including task complexity and importance, situation urgency and time availability, file size, and network bandwidth capacity. Results also suggest a range of user strategies for incorporating delay tolerance in task planning and performance. HCNI user experience is influenced by combinations of task requirements, constraints, and understandings of system performance; tolerance is a nonlinear function of time constraint ratios or decay. Appropriate user interface tools providing delay feedback information can help modify user expectations and delay tolerance. These tools are especially valuable when delay conditions exceed a few seconds or when task constraints and system demands are high. Interface designs for HCNI tasks should consider assistant-style presentations of delay feedback, information freshness, and network characteristics. Assistants should also maintain awareness of user time constraints.

  11. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns lead by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period into the preparation and monitoring of the February tests of Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identifying possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transfer¬ring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  13. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  14. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  15. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  16. COMPUTING

    CERN Document Server

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  18. Computational fluid dynamics modeling of Bacillus anthracis spore deposition in rabbit and human respiratory airways

    Energy Technology Data Exchange (ETDEWEB)

    Kabilan, S.; Suffield, S. R.; Recknagle, K. P.; Jacob, R. E.; Einstein, D. R.; Kuprat, A. P.; Carson, J. P.; Colby, S. M.; Saunders, J. H.; Hines, S. A.; Teeguarden, J. G.; Straub, T. M.; Moe, M.; Taft, S. C.; Corley, R. A.

    2016-09-01

    Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived respectively from computed tomography (CT) and µCT images. Both models encompassed airways that extended from the external nose to the lung with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation–exhalation breathing conditions using average species-specific minute volumes. Two different exposure scenarios were modeled in the rabbit based upon experimental inhalation studies. For comparison, human simulations were conducted at the highest exposure concentration used during the rabbit experimental exposures. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Due to the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the nasal sinus compared to the human at the same air concentration of anthrax spores. In contrast, higher spore deposition was predicted in the lower conducting airways of the human compared to the rabbit lung due to differences in airway branching pattern. This information can be used to refine published and ongoing biokinetic models of inhalation anthrax spore exposures, which currently estimate deposited spore concentrations based solely upon exposure concentrations and inhaled doses that do not factor in species-specific anatomy and physiology for deposition.
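
    The Lagrangian particle side of such models advances each spore through the airflow field one time step at a time. A generic forward step with Stokes drag and gravitational settling is sketched below; the study's solver, drag law, and parameter values are not reproduced here.

        def step_particle(pos, vel, air_vel, dt, tau, g=9.81):
            # tau: particle relaxation time; drag relaxes the particle
            # velocity toward the local air velocity while gravity pulls
            # it downward (z-axis), producing settling and deposition.
            ax = (air_vel[0] - vel[0]) / tau
            ay = (air_vel[1] - vel[1]) / tau
            az = (air_vel[2] - vel[2]) / tau - g
            new_vel = (vel[0] + ax * dt, vel[1] + ay * dt, vel[2] + az * dt)
            new_pos = tuple(p + v * dt for p, v in zip(pos, new_vel))
            return new_pos, new_vel

    Deposition is then recorded whenever an updated position crosses an airway wall, which is how regional deposition patterns accumulate over a breathing cycle.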

  19. Computational Fluid Dynamics Modeling of Bacillus anthracis Spore Deposition in Rabbit and Human Respiratory Airways

    Energy Technology Data Exchange (ETDEWEB)

    Kabilan, Senthil; Suffield, Sarah R.; Recknagle, Kurtis P.; Jacob, Rick E.; Einstein, Daniel R.; Kuprat, Andrew P.; Carson, James P.; Colby, Sean M.; Saunders, James H.; Hines, Stephanie; Teeguarden, Justin G.; Straub, Tim M.; Moe, M.; Taft, Sarah; Corley, Richard A.

    2016-09-30

    Three-dimensional computational fluid dynamics and Lagrangian particle deposition models were developed to compare the deposition of aerosolized Bacillus anthracis spores in the respiratory airways of a human with that of the rabbit, a species commonly used in the study of anthrax disease. The respiratory airway geometries for each species were derived from computed tomography (CT) or µCT images. Both models encompassed airways that extended from the external nose to the lung with a total of 272 outlets in the human model and 2878 outlets in the rabbit model. All simulations of spore deposition were conducted under transient, inhalation-exhalation breathing conditions using average species-specific minute volumes. The highest exposure concentration was modeled in the rabbit based upon prior acute inhalation studies. For comparison, human simulation was also conducted at the same concentration. Results demonstrated that regional spore deposition patterns were sensitive to airway geometry and ventilation profiles. Due to the complex airway geometries in the rabbit nose, higher spore deposition efficiency was predicted in the upper conducting airways compared to the human at the same air concentration of anthrax spores. As a result, higher particle deposition was predicted in the conducting airways and deep lung of the human compared to the rabbit lung due to differences in airway branching pattern. This information can be used to refine published and ongoing biokinetic models of inhalation anthrax spore exposures, which currently estimate deposited spore concentrations based solely upon exposure concentrations and inhaled doses that do not factor in species-specific anatomy and physiology.

  20. Dynamic management of multi-channel interfaces for human interactions with computer-based intelligent assistants

    Energy Technology Data Exchange (ETDEWEB)

    Strickland, T.D. Jr.

    1989-01-01

    For complex man-machine tasks where multi-media interaction with computer-based assistants is appropriate, a portion of the assistant's intelligence must be devoted to managing its communication processes with the user. Since people often serve the role of assistants, the conventions of human communication provide a basis for designing the communication processes of the computer-based assistant. Human decision making for communication requires knowledge of the user's style, the task demands, communication practices, and the current situation. The decisions necessary for effective communication, when, how, and what to communicate, can be expressed using these knowledge sources. A system based on human communication rules was developed to manage the communication decisions of an intelligent assistant. The Dynamic Communication Management (DCM) system consists of four components: three models and a manager. The model of the user describes the user's communication preferences for different task situations. The model of the task is used to establish the user's current activity and to describe how communication should be conducted for this activity. The communication model provides the rules needed to make the decisions: when to communicate the message, how to present the message to the user, and what information should be communicated. The Communication Manager controls and coordinates these models to conduct all communication with the user. Performance with DCM as the interface to a simulated Flexible Manufacturing System (FMS) control task was evaluated to learn about the potential benefits of the concept.
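
    A toy version of the three communication decisions DCM coordinates: when to deliver, how to present, and what to include. The rule contents and field names below are invented for illustration; only the three-decision structure comes from the abstract.

        def decide_communication(message, user_model, task_state):
            # WHEN: defer non-critical messages while the operator is busy.
            if task_state["workload"] == "high" and message["priority"] < 2:
                return {"deliver": False}
            # HOW: choose the channel the user prefers for this activity.
            channel = user_model["preferred_channel"].get(
                task_state["activity"], "text")
            # WHAT: abbreviate details the user already knows about.
            body = (message["summary"] if message["known_context"]
                    else message["full"])
            return {"deliver": True, "channel": channel, "body": body}

    In DCM terms, the first rule draws on the task model, the second on the user model, and the third on the communication model.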

  1. Evidence for model-based computations in the human amygdala during Pavlovian conditioning.

    Science.gov (United States)

    Prévost, Charlotte; McNamee, Daniel; Jessup, Ryan K; Bossaerts, Peter; O'Doherty, John P

    2013-01-01

    Contemporary computational accounts of instrumental conditioning have emphasized a role for a model-based system in which values are computed with reference to a rich model of the structure of the world, and a model-free system in which values are updated without encoding such structure. Much less studied is the possibility of a similar distinction operating at the level of Pavlovian conditioning. In the present study, we scanned human participants while they participated in a Pavlovian conditioning task with a simple structure while measuring activity in the human amygdala using a high-resolution fMRI protocol. After fitting a model-based algorithm and a variety of model-free algorithms to the fMRI data, we found evidence for the superiority of a model-based algorithm in accounting for activity in the amygdala compared to the model-free counterparts. These findings support an important role for model-based algorithms in describing the processes underpinning Pavlovian conditioning, as well as providing evidence of a role for the human amygdala in model-based inference.
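
    The model-free/model-based contrast in the abstract can be made concrete with two tiny value computations: a cached value nudged by prediction errors versus a value computed from an explicit one-step model of the task. A schematic sketch, not the authors' fitted algorithms:

        def model_free_update(value, reward, alpha=0.1):
            # Cached value moves toward the received reward by a learning
            # rate; no structure of the task is encoded.
            return value + alpha * (reward - value)

        def model_based_value(transitions, rewards, state):
            # Value computed on demand from a one-step model of the task:
            # transition probabilities times outcome values.
            return sum(p * rewards[nxt]
                       for nxt, p in transitions[state].items())

    The study's analysis amounts to asking which of these two kinds of value signal better explains trial-by-trial amygdala activity.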

  2. Evidence for model-based computations in the human amygdala during Pavlovian conditioning.

    Directory of Open Access Journals (Sweden)

    Charlotte Prévost

    Full Text Available Contemporary computational accounts of instrumental conditioning have emphasized a role for a model-based system in which values are computed with reference to a rich model of the structure of the world, and a model-free system in which values are updated without encoding such structure. Much less studied is the possibility of a similar distinction operating at the level of Pavlovian conditioning. In the present study, we scanned human participants while they participated in a Pavlovian conditioning task with a simple structure while measuring activity in the human amygdala using a high-resolution fMRI protocol. After fitting a model-based algorithm and a variety of model-free algorithms to the fMRI data, we found evidence for the superiority of a model-based algorithm in accounting for activity in the amygdala compared to the model-free counterparts. These findings support an important role for model-based algorithms in describing the processes underpinning Pavlovian conditioning, as well as providing evidence of a role for the human amygdala in model-based inference.

  3. Wearable Computing System with Input-Output Devices Based on Eye-Based Human Computer Interaction Allowing Location Based Web Services

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-08-01

    Full Text Available Wearable computing with Input-Output devices Base on Eye-Based Human Computer Interaction: EBHCI which allows location based web services including navigation, location/attitude/health condition monitoring is proposed. Through implementation of the proposed wearable computing system, all the functionality is confirmed. It is also found that the system does work well. It can be used easily and also is not expensive. Experimental results for EBHCI show excellent performance in terms of key-in accuracy as well as input speed. It is accessible to internet, obviously, and has search engine capability.

  4. Study of human performance in computer-aided architectural design: methods and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cuomo, D.L.

    1988-01-01

    The goal of this study was to develop a performance methodology which will be useful for evaluating human performance for different types of tasks on a given system and across different levels of complexity within a single task. To meet the above goals, performance measures that reflect meaningful changes in humans behavior during CAAD tasks were developed. These measures were based on models of human information processing. Two cognitively different architectural tasks formulated differed in terms of the stimulus-central processing component-response compatibility and the structuredness of their problem spaces. Methods of varying task complexity within each of these tasks were also developed to test the sensitivity of the performance measures across levels of complexity and to introduce variability into the humans design behavior. From the developed performance measures task complexity, type of task, and subjective effects on performance could be seen. It was also shown that some measures more directly reflected the computer-interaction aspects of the task while other measures reflected the cognitive design activity of the human.

  5. Single-photon emission computed tomography in human immunodeficiency virus encephalopathy: A preliminary report

    Energy Technology Data Exchange (ETDEWEB)

    Masdeu, J.C.; Yudd, A.; Van Heertum, R.L.; Grundman, M.; Hriso, E.; O' Connell, R.A.; Luck, D.; Camli, U.; King, L.N. (St. Vincent' s Medical Center, New York, NY (USA))

    1991-08-01

    Depression or psychosis in a previously asymptomatic individual infected with the human immunodeficiency virus (HIV) may be psychogenic, related to brain involvement by the HIV or both. Although prognosis and treatment differ depending on etiology, computed tomography (CT) and magnetic resonance imaging (MRI) are usually unrevealing in early HIV encephalopathy and therefore cannot differentiate it from psychogenic conditions. Thirty of 32 patients (94%) with HIV encephalopathy had single-photon emission computed tomography (SPECT) findings that differed from the findings in 15 patients with non-HIV psychoses and 6 controls. SPECT showed multifocal cortical and subcortical areas of hypoperfusion. In 4 cases, cognitive improvement after 6-8 weeks of zidovudine (AZT) therapy was reflected in amelioration of SPECT findings. CT remained unchanged. SPECT may be a useful technique for the evaluation of HIV encephalopathy.

  6. Using Noninvasive Wearable Computers to Recognize Human Emotions from Physiological Signals

    Directory of Open Access Journals (Sweden)

    Nasoz Fatma

    2004-01-01

    Full Text Available We discuss the strong relationship between affect and cognition and the importance of emotions in multimodal human computer interaction (HCI and user modeling. We introduce the overall paradigm for our multimodal system that aims at recognizing its users' emotions and at responding to them accordingly depending upon the current context or application. We then describe the design of the emotion elicitation experiment we conducted by collecting, via wearable computers, physiological signals from the autonomic nervous system (galvanic skin response, heart rate, temperature and mapping them to certain emotions (sadness, anger, fear, surprise, frustration, and amusement. We show the results of three different supervised learning algorithms that categorize these collected signals in terms of emotions, and generalize their learning to recognize emotions from new collections of signals. We finally discuss possible broader impact and potential applications of emotion recognition for multimodal intelligent systems.

  7. Large Metasurface Aperture for Millimeter Wave Computational Imaging at the Human-Scale

    Science.gov (United States)

    Gollub, J. N.; Yurduseven, O.; Trofatter, K. P.; Arnitz, D.; F. Imani, M.; Sleasman, T.; Boyarsky, M.; Rose, A.; Pedross-Engel, A.; Odabasi, H.; Zvolensky, T.; Lipworth, G.; Brady, D.; Marks, D. L.; Reynolds, M. S.; Smith, D. R.

    2017-02-01

    We demonstrate a low-profile holographic imaging system at millimeter wavelengths based on an aperture composed of frequency-diverse metasurfaces. Utilizing measurements of spatially-diverse field patterns, diffraction-limited images of human-sized subjects are reconstructed. The system is driven by a single microwave source swept over a band of frequencies (17.5–26.5 GHz) and switched between a collection of transmit and receive metasurface panels. High fidelity image reconstruction requires a precise model for each field pattern generated by the aperture, as well as the manner in which the field scatters from objects in the scene. This constraint makes scaling of computational imaging systems inherently challenging for electrically large, coherent apertures. To meet the demanding requirements, we introduce computational methods and calibration approaches that enable rapid and accurate imaging performance.

  8. Using Noninvasive Wearable Computers to Recognize Human Emotions from Physiological Signals

    Science.gov (United States)

    Lisetti, Christine Lætitia; Nasoz, Fatma

    2004-12-01

    We discuss the strong relationship between affect and cognition and the importance of emotions in multimodal human computer interaction (HCI) and user modeling. We introduce the overall paradigm for our multimodal system that aims at recognizing its users' emotions and at responding to them accordingly depending upon the current context or application. We then describe the design of the emotion elicitation experiment we conducted by collecting, via wearable computers, physiological signals from the autonomic nervous system (galvanic skin response, heart rate, temperature) and mapping them to certain emotions (sadness, anger, fear, surprise, frustration, and amusement). We show the results of three different supervised learning algorithms that categorize these collected signals in terms of emotions, and generalize their learning to recognize emotions from new collections of signals. We finally discuss possible broader impact and potential applications of emotion recognition for multimodal intelligent systems.

  9. Computational high-resolution optical imaging of the living human retina

    Science.gov (United States)

    Shemonski, Nathan D.; South, Fredrick A.; Liu, Yuan-Zhi; Adie, Steven G.; Scott Carney, P.; Boppart, Stephen A.

    2015-07-01

    High-resolution in vivo imaging is of great importance for the fields of biology and medicine. The introduction of hardware-based adaptive optics (HAO) has pushed the limits of optical imaging, enabling high-resolution near diffraction-limited imaging of previously unresolvable structures. In ophthalmology, when combined with optical coherence tomography, HAO has enabled a detailed three-dimensional visualization of photoreceptor distributions and individual nerve fibre bundles in the living human retina. However, the introduction of HAO hardware and supporting software adds considerable complexity and cost to an imaging system, limiting the number of researchers and medical professionals who could benefit from the technology. Here we demonstrate a fully automated computational approach that enables high-resolution in vivo ophthalmic imaging without the need for HAO. The results demonstrate that computational methods in coherent microscopy are applicable in highly dynamic living systems.

  10. Computational methods to extract meaning from text and advance theories of human cognition.

    Science.gov (United States)

    McNamara, Danielle S

    2011-01-01

    Over the past two decades, researchers have made great advances in the area of computational methods for extracting meaning from text. This research has to a large extent been spurred by the development of latent semantic analysis (LSA), a method for extracting and representing the meaning of words using statistical computations applied to large corpora of text. Since the advent of LSA, researchers have developed and tested alternative statistical methods designed to detect and analyze meaning in text corpora. This research exemplifies how statistical models of semantics play an important role in our understanding of cognition and contribute to the field of cognitive science. Importantly, these models afford large-scale representations of human knowledge and allow researchers to explore various questions regarding knowledge, discourse processing, text comprehension, and language. This topic includes the latest progress by the leading researchers in the endeavor to go beyond LSA.

  11. Computational approaches towards understanding human long non-coding RNA biology.

    Science.gov (United States)

    Jalali, Saakshi; Kapoor, Shruti; Sivadas, Ambily; Bhartiya, Deeksha; Scaria, Vinod

    2015-07-15

    Long non-coding RNAs (lncRNAs) form the largest class of non-protein coding genes in the human genome. While a small subset of well-characterized lncRNAs has demonstrated their significant role in diverse biological functions like chromatin modifications, post-transcriptional regulation, imprinting etc., the functional significance of a vast majority of them still remains an enigma. Increasing evidence of the implications of lncRNAs in various diseases including cancer and major developmental processes has further enhanced the need to gain mechanistic insights into the lncRNA functions. Here, we present a comprehensive review of the various computational approaches and tools available for the identification and annotation of long non-coding RNAs. We also discuss a conceptual roadmap to systematically explore the functional properties of the lncRNAs using computational approaches.

  12. Computational drug design strategies applied to the modelling of human immunodeficiency virus-1 reverse transcriptase inhibitors

    Directory of Open Access Journals (Sweden)

    Lucianna Helene Santos

    2015-11-01

    Full Text Available Reverse transcriptase (RT is a multifunctional enzyme in the human immunodeficiency virus (HIV-1 life cycle and represents a primary target for drug discovery efforts against HIV-1 infection. Two classes of RT inhibitors, the nucleoside RT inhibitors (NRTIs and the nonnucleoside transcriptase inhibitors are prominently used in the highly active antiretroviral therapy in combination with other anti-HIV drugs. However, the rapid emergence of drug-resistant viral strains has limited the successful rate of the anti-HIV agents. Computational methods are a significant part of the drug design process and indispensable to study drug resistance. In this review, recent advances in computer-aided drug design for the rational design of new compounds against HIV-1 RT using methods such as molecular docking, molecular dynamics, free energy calculations, quantitative structure-activity relationships, pharmacophore modelling and absorption, distribution, metabolism, excretion and toxicity prediction are discussed. Successful applications of these methodologies are also highlighted.

  13. Computational drug design strategies applied to the modelling of human immunodeficiency virus-1 reverse transcriptase inhibitors.

    Science.gov (United States)

    Santos, Lucianna Helene; Ferreira, Rafaela Salgado; Caffarena, Ernesto Raúl

    2015-11-01

    Reverse transcriptase (RT) is a multifunctional enzyme in the human immunodeficiency virus (HIV)-1 life cycle and represents a primary target for drug discovery efforts against HIV-1 infection. Two classes of RT inhibitors, the nucleoside RT inhibitors (NRTIs) and the nonnucleoside transcriptase inhibitors are prominently used in the highly active antiretroviral therapy in combination with other anti-HIV drugs. However, the rapid emergence of drug-resistant viral strains has limited the successful rate of the anti-HIV agents. Computational methods are a significant part of the drug design process and indispensable to study drug resistance. In this review, recent advances in computer-aided drug design for the rational design of new compounds against HIV-1 RT using methods such as molecular docking, molecular dynamics, free energy calculations, quantitative structure-activity relationships, pharmacophore modelling and absorption, distribution, metabolism, excretion and toxicity prediction are discussed. Successful applications of these methodologies are also highlighted.

  14. 3D virtual human atria: A computational platform for studying clinical atrial fibrillation.

    Science.gov (United States)

    Aslanidi, Oleg V; Colman, Michael A; Stott, Jonathan; Dobrzynski, Halina; Boyett, Mark R; Holden, Arun V; Zhang, Henggui

    2011-10-01

    Despite a vast amount of experimental and clinical data on the underlying ionic, cellular and tissue substrates, the mechanisms of common atrial arrhythmias (such as atrial fibrillation, AF) arising from the functional interactions at the whole atria level remain unclear. Computational modelling provides a quantitative framework for integrating such multi-scale data and understanding the arrhythmogenic behaviour that emerges from the collective spatio-temporal dynamics in all parts of the heart. In this study, we have developed a multi-scale hierarchy of biophysically detailed computational models for the human atria--the 3D virtual human atria. Primarily, diffusion tensor MRI reconstruction of the tissue geometry and fibre orientation in the human sinoatrial node (SAN) and surrounding atrial muscle was integrated into the 3D model of the whole atria dissected from the Visible Human dataset. The anatomical models were combined with the heterogeneous atrial action potential (AP) models, and used to simulate the AP conduction in the human atria under various conditions: SAN pacemaking and atrial activation in the normal rhythm, break-down of regular AP wave-fronts during rapid atrial pacing, and the genesis of multiple re-entrant wavelets characteristic of AF. Contributions of different properties of the tissue to mechanisms of the normal rhythm and arrhythmogenesis were investigated. Primarily, the simulations showed that tissue heterogeneity caused the break-down of the normal AP wave-fronts at rapid pacing rates, which initiated a pair of re-entrant spiral waves; and tissue anisotropy resulted in a further break-down of the spiral waves into multiple meandering wavelets characteristic of AF. The 3D virtual atria model itself was incorporated into the torso model to simulate the body surface ECG patterns in the normal and arrhythmic conditions. Therefore, a state-of-the-art computational platform has been developed, which can be used for studying multi

  15. Real-time non-invasive eyetracking and gaze-point determination for human-computer interaction and biomedicine

    Science.gov (United States)

    Talukder, Ashit; Morookian, John-Michael; Monacos, S.; Lam, R.; Lebaw, C.; Bond, A.

    2004-01-01

    Eyetracking is one of the latest technologies that has shown potential in several areas including human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals.

  16. Computational Strategy for Quantifying Human Pesticide Exposure based upon a Saliva Measurement

    Directory of Open Access Journals (Sweden)

    Charles eTimchalk

    2015-05-01

    Full Text Available Quantitative exposure data is important for evaluating toxicity risk and biomonitoring is a critical tool for evaluating human exposure. Direct personal monitoring provides the most accurate estimation of a subject’s true dose, and non-invasive methods are advocated for quantifying exposure to xenobiotics. In this regard, there is a need to identify chemicals that are cleared in saliva at concentrations that can be quantified to support the implementation of this approach. This manuscript reviews the computational modeling approaches that are coupled to in vivo and in vitro experiments to predict salivary uptake and clearance of xenobiotics and provides additional insight on species-dependent differences in partitioning that are of key importance for extrapolation. The primary mechanism by which xenobiotics leave the blood and enter saliva involves paracellular transport, passive transcellular diffusion, or trancellular active transport with the majority of xenobiotics transferred by passive diffusion. The transcellular or paracellular diffusion of unbound chemicals in plasma to saliva has been computationally modeled using compartmental and physiologically based approaches. Of key importance for determining the plasma:saliva partitioning was the utilization of the Schmitt algorithm that calculates partitioning based upon the tissue composition, pH, chemical pKa and plasma protein-binding. Sensitivity analysis identified that both protein-binding and pKa (for weak acids and bases have significant impact on determining partitioning and species dependent differences based upon physiological variance. Future strategies are focused on an in vitro salivary acinar cell based system to experimentally determine and computationally predict salivary gland uptake and clearance for xenobiotics. It is envisioned that a combination of salivary biomonitoring and computational modeling will enable the non-invasive measurement of chemical exposures in human

  17. Computational strategy for quantifying human pesticide exposure based upon a saliva measurement.

    Science.gov (United States)

    Timchalk, Charles; Weber, Thomas J; Smith, Jordan N

    2015-01-01

    Quantitative exposure data is important for evaluating toxicity risk and biomonitoring is a critical tool for evaluating human exposure. Direct personal monitoring provides the most accurate estimation of a subject's true dose, and non-invasive methods are advocated for quantifying exposure to xenobiotics. In this regard, there is a need to identify chemicals that are cleared in saliva at concentrations that can be quantified to support the implementation of this approach. This manuscript reviews the computational modeling approaches that are coupled to in vivo and in vitro experiments to predict salivary uptake and clearance of xenobiotics and provides additional insight on species-dependent differences in partitioning that are of key importance for extrapolation. The primary mechanism by which xenobiotics leave the blood and enter saliva involves paracellular transport, passive transcellular diffusion, or transcellular active transport with the majority of xenobiotics transferred by passive diffusion. The transcellular or paracellular diffusion of unbound chemicals in plasma to saliva has been computationally modeled using compartmental and physiologically based approaches. Of key importance for determining the plasma:saliva partitioning was the utilization of the Schmitt algorithm that calculates partitioning based upon the tissue composition, pH, chemical pKa, and plasma protein-binding. Sensitivity analysis identified that both protein-binding and pKa (for weak acids and bases) have significant impact on determining partitioning and species dependent differences based upon physiological variance. Future strategies are focused on an in vitro salivary acinar cell based system to experimentally determine and computationally predict salivary gland uptake and clearance for xenobiotics. It is envisioned that a combination of salivary biomonitoring and computational modeling will enable the non-invasive measurement of chemical exposures in human populations.

  18. The Human Toxome Collaboratorium: a shared environment for multi-omic computational collaboration within a consortium

    Directory of Open Access Journals (Sweden)

    Rick A Fasani

    2016-02-01

    Full Text Available The Human Toxome Project is part of a long-term vision to modernize toxicity testing for the 21st century. In the initial phase of the project, a consortium of six academic, commercial, and government organizations has partnered to map pathways of toxicity, using endocrine disruption as a model hazard. Experimental data is generated at multiple sites, and analyzed using a range of computational tools. While effectively gathering, managing, and analyzing the data for high-content experiments is a challenge in its own right, doing so for a growing number of -omics technologies, with larger data sets, across multiple institutions complicates the process. Interestingly, one of the most difficult, ongoing challenges has been the computational collaboration between the geographically separate institutions. Existing solutions cannot handle the growing heterogeneous data, provide a computational environment for consistent analysis, accommodate different workflows, and adapt to the constantly evolving methods and goals of a research project. To meet the needs of the project, we have created and managed The Human Toxome Collaboratorium, a shared computational environment hosted on third-party cloud services. The Collaboratorium provides a familiar virtual desktop, with a mix of commercial, open-source, and custom-built applications. It shares some of the challenges of traditional information technology, but with unique and unexpected constraints that emerge from the cloud. Here we describe the problems we faced, the current architecture of the solution, an example of its use, the major lessons we learned, and the future potential of the concept. In particular, the Collaboratorium represents a novel distribution method that could increase the reproducibility and reusability of results from similar large, multi-omic studies.

  19. U.S. Army weapon systems human-computer interface style guide. Version 2

    Energy Technology Data Exchange (ETDEWEB)

    Avery, L.W.; O`Mara, P.A.; Shepard, A.P.; Donohoo, D.T.

    1997-12-31

    A stated goal of the US Army has been the standardization of the human computer interfaces (HCIs) of its system. Some of the tools being used to accomplish this standardization are HCI design guidelines and style guides. Currently, the Army is employing a number of HCI design guidance documents. While these style guides provide good guidance for the command, control, communications, computers, and intelligence (C4I) domain, they do not necessarily represent the more unique requirements of the Army`s real time and near-real time (RT/NRT) weapon systems. The Office of the Director of Information for Command, Control, Communications, and Computers (DISC4), in conjunction with the Weapon Systems Technical Architecture Working Group (WSTAWG), recognized this need as part of their activities to revise the Army Technical Architecture (ATA), now termed the Joint Technical Architecture-Army (JTA-A). To address this need, DISC4 tasked the Pacific Northwest National Laboratory (PNNL) to develop an Army weapon systems unique HCI style guide, which resulted in the US Army Weapon Systems Human-Computer Interface (WSHCI) Style Guide Version 1. Based on feedback from the user community, DISC4 further tasked PNNL to revise Version 1 and publish Version 2. The intent was to update some of the research and incorporate some enhancements. This document provides that revision. The purpose of this document is to provide HCI design guidance for the RT/NRT Army system domain across the weapon systems subdomains of ground, aviation, missile, and soldier systems. Each subdomain should customize and extend this guidance by developing their domain-specific style guides, which will be used to guide the development of future systems within their subdomains.

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each others time zones by monitoring/debugging pilot jobs sent from the facto...

  1. Computational prediction of vaccine strains for human influenza A (H3N2) viruses.

    Science.gov (United States)

    Steinbrück, L; Klingen, T R; McHardy, A C

    2014-10-01

    Human influenza A viruses are rapidly evolving pathogens that cause substantial morbidity and mortality in seasonal epidemics around the globe. To ensure continued protection, the strains used for the production of the seasonal influenza vaccine have to be regularly updated, which involves data collection and analysis by numerous experts worldwide. Computer-guided analysis is becoming increasingly important in this problem due to the vast amounts of generated data. We here describe a computational method for selecting a suitable strain for production of the human influenza A virus vaccine. It interprets available antigenic and genomic sequence data based on measures of antigenic novelty and rate of propagation of the viral strains throughout the population. For viral isolates sampled between 2002 and 2007, we used this method to predict the antigenic evolution of the H3N2 viruses in retrospective testing scenarios. When seasons were scored as true or false predictions, our method returned six true positives, three false negatives, eight true negatives, and one false positive, or 78% accuracy overall. In comparison to the recommendations by the WHO, we identified the correct antigenic variant once at the same time and twice one season ahead. Even though it cannot be ruled out that practical reasons such as lack of a sufficiently well-growing candidate strain may in some cases have prevented recommendation of the best-matching strain by the WHO, our computational decision procedure allows quantitative interpretation of the growing amounts of data and may help to match the vaccine better to predominating strains in seasonal influenza epidemics. Importance: Human influenza A viruses continuously change antigenically to circumvent the immune protection evoked by vaccination or previously circulating viral strains. To maintain vaccine protection and thereby reduce the mortality and morbidity caused by infections, regular updates of the vaccine strains are required. We

  2. Computer Simulation of Gd(III) Speciation in Human Interstitial Fluid

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The speciation and distribution of Gd(III) in human interstitial fluid was studied by computer simulation. Meantime artificial neural network was applied to the estimation of log β values of complexes. The results show that the precipitate species, GdPO4 and Gd2(CO3)3, are the predominant species. Among soluble species, the free Gd(III), [Gd(HSA)] , [Gd(Ox)] and then the ternary complexes of Gd(III) with citrate are main species and [Gd3(OH)4] becomes the predominant species at the Gd(III) total concentration of 2.2×10-2mol/L.

  3. Human-Computer Interaction Handbook Fundamentals, Evolving Technologies, and Emerging Applications

    CERN Document Server

    Jacko, Julie A

    2012-01-01

    The third edition of a groundbreaking reference, The Human--Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications raises the bar for handbooks in this field. It is the largest, most complete compilation of HCI theories, principles, advances, case studies, and more that exist within a single volume. The book captures the current and emerging sub-disciplines within HCI related to research, development, and practice that continue to advance at an astonishing rate. It features cutting-edge advances to the scientific knowledge base as well as visionary perspe

  4. Machine takeover the growing threat to human freedom in a computer-controlled society

    CERN Document Server

    George, Frank Honywill

    1977-01-01

    Machine Takeover: The Growing Threat to Human Freedom in a Computer-Controlled Society discusses the implications of technological advancement. The title identifies the changes in society that no one is aware of, along with what this changes entails. The text first covers the information science, particularly the aspect of an automated system for information processing. Next, the selection deals with social implications of information science, such as information pollution. The text also tackles the concerns in the utilization of technology in order to manipulate the lives of people without th

  5. Advances in Human-Computer Interaction: Graphics and Animation Components for Interface Design

    Science.gov (United States)

    Cipolla Ficarra, Francisco V.; Nicol, Emma; Cipolla-Ficarra, Miguel; Richardson, Lucy

    We present an analysis of communicability methodology in graphics and animation components for interface design, called CAN (Communicability, Acceptability and Novelty). This methodology has been under development between 2005 and 2010, obtaining excellent results in cultural heritage, education and microcomputing contexts. In studies where there is a bi-directional interrelation between ergonomics, usability, user-centered design, software quality and the human-computer interaction. We also present the heuristic results about iconography and layout design in blogs and websites of the following countries: Spain, Italy, Portugal and France.

  6. A multi-tissue segmentation of the human head for detailed computational models.

    Science.gov (United States)

    Hannula, Markus; Narra, Nathaniel; Onnela, Niina; Dastidar, Prasun; Hyttinen, Jari

    2014-01-01

    This paper describes the creation of an anatomically detailed high resolution model of the human head based on the Visible Human Female data from the National Library of Medicine archives. Automatic and semi-automatic segmentation algorithms were applied over the 3 image volumes – CT, MRI and anatomical cryo-sections of the cadaver – to label a total of 23 tissues. The results were combined to create a labeled volume of the head with voxel dimensions of 0.33×0.33×0.33 mm. The individual label matrices and their corresponding surface meshes are made available to be used freely. The detailed blood vessel network and ocular tissues will be of interest in computational modelling and simulation studies.

  7. The impact of scaled boundary conditions on wall shear stress computations in atherosclerotic human coronary bifurcations.

    Science.gov (United States)

    Schrauwen, Jelle T C; Schwarz, Janina C V; Wentzel, Jolanda J; van der Steen, Antonius F W; Siebes, Maria; Gijsen, Frank J H

    2016-05-15

    The aim of this study was to determine if reliable patient-specific wall shear stress (WSS) can be computed when diameter-based scaling laws are used to impose the boundary conditions for computational fluid dynamics. This study focused on mildly diseased human coronary bifurcations since they are predilection sites for atherosclerosis. Eight patients scheduled for percutaneous coronary intervention were imaged with angiography. The velocity proximal and distal of a bifurcation was acquired with intravascular Doppler measurements. These measurements were used for inflow and outflow boundary conditions for the first set of WSS computations. For the second set of computations, absolute inflow and outflow ratios were derived from geometry-based scaling laws based on angiography data. Normalized WSS maps per segment were obtained by dividing the absolute WSS by the mean WSS value. Absolute and normalized WSS maps from the measured-approach and the scaled-approach were compared. A reasonable agreement was found between the measured and scaled inflows, with a median difference of 0.08 ml/s [-0.01; 0.20]. The measured and the scaled outflow ratios showed a good agreement: 1.5 percentage points [-19.0; 4.5]. Absolute WSS maps were sensitive to the inflow and outflow variations, and relatively large differences between the two approaches were observed. For normalized WSS maps, the results for the two approaches were equivalent. This study showed that normalized WSS can be obtained from angiography data alone by applying diameter-based scaling laws to define the boundary conditions. Caution should be taken when absolute WSS is assessed from computations using scaled boundary conditions.

  8. Human errors and violations in computer and information security: the viewpoint of network administrators and security specialists.

    Science.gov (United States)

    Kraemer, Sara; Carayon, Pascale

    2007-03-01

    This paper describes human errors and violations of end users and network administration in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e. to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audio taped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, while errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.

  9. Computational promoter analysis of mouse, rat and human antimicrobial peptide-coding genes

    Directory of Open Access Journals (Sweden)

    Kai Chikatoshi

    2006-12-01

    Full Text Available Abstract Background Mammalian antimicrobial peptides (AMPs are effectors of the innate immune response. A multitude of signals coming from pathways of mammalian pathogen/pattern recognition receptors and other proteins affect the expression of AMP-coding genes (AMPcgs. For many AMPcgs the promoter elements and transcription factors that control their tissue cell-specific expression have yet to be fully identified and characterized. Results Based upon the RIKEN full-length cDNA and public sequence data derived from human, mouse and rat, we identified 178 candidate AMP transcripts derived from 61 genes belonging to 29 AMP families. However, only for 31 mouse genes belonging to 22 AMP families we were able to determine true orthologous relationships with 30 human and 15 rat sequences. We screened the promoter regions of AMPcgs in the three species for motifs by an ab initio motif finding method and analyzed the derived promoter characteristics. Promoter models were developed for alpha-defensins, penk and zap AMP families. The results suggest a core set of transcription factors (TFs that regulate the transcription of AMPcg families in mouse, rat and human. The three most frequent core TFs groups include liver-, nervous system-specific and nuclear hormone receptors (NHRs. Out of 440 motifs analyzed, we found that three represent potentially novel TF-binding motifs enriched in promoters of AMPcgs, while the other four motifs appear to be species-specific. Conclusion Our large-scale computational analysis of promoters of 22 families of AMPcgs across three mammalian species suggests that their key transcriptional regulators are likely to be TFs of the liver-, nervous system-specific and NHR groups. The computationally inferred promoter elements and potential TF binding motifs provide a rich resource for targeted experimental validation of TF binding and signaling studies that aim at the regulation of mouse, rat or human AMPcgs.

  10. Computational assessment of mammography accreditation phantom images and correlation with human observer analysis

    Science.gov (United States)

    Barufaldi, Bruno; Lau, Kristen C.; Schiabel, Homero; Maidment, D. A.

    2015-03-01

    Routine performance of basic test procedures and dose measurements are essential for assuring high quality of mammograms. International guidelines recommend that breast care providers ascertain that mammography systems produce a constant high quality image, using as low a radiation dose as is reasonably achievable. The main purpose of this research is to develop a framework to monitor radiation dose and image quality in a mixed breast screening and diagnostic imaging environment using an automated tracking system. This study presents a module of this framework, consisting of a computerized system to measure the image quality of the American College of Radiology mammography accreditation phantom. The methods developed combine correlation approaches, matched filters, and data mining techniques. These methods have been used to analyze radiological images of the accreditation phantom. The classification of structures of interest is based upon reports produced by four trained readers. As previously reported, human observers demonstrate great variation in their analysis due to the subjectivity of human visual inspection. The software tool was trained with three sets of 60 phantom images in order to generate decision trees using the software WEKA (Waikato Environment for Knowledge Analysis). When tested with 240 images during the classification step, the tool correctly classified 88%, 99%, and 98%, of fibers, speck groups and masses, respectively. The variation between the computer classification and human reading was comparable to the variation between human readers. This computerized system not only automates the quality control procedure in mammography, but also decreases the subjectivity in the expert evaluation of the phantom images.

  11. Impact of familiarity on information complexity in human-computer interfaces

    Directory of Open Access Journals (Sweden)

    Bakaev Maxim

    2016-01-01

    Full Text Available A quantitative measure of information complexity remains very much desirable in HCI field, since it may aid in optimization of user interfaces, especially in human-computer systems for controlling complex objects. Our paper is dedicated to exploration of subjective (subject-depended aspect of the complexity, conceptualized as information familiarity. Although research of familiarity in human cognition and behaviour is done in several fields, the accepted models in HCI, such as Human Processor or Hick-Hyman’s law do not generally consider this issue. In our experimental study the subjects performed search and selection of digits and letters, whose familiarity was conceptualized as frequency of occurrence in numbers and texts. The analysis showed significant effect of information familiarity on selection time and throughput in regression models, although the R2 values were somehow low. Still, we hope that our results might aid in quantification of information complexity and its further application for optimizing interaction in human-machine systems.

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned, it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhE-DEx topology. Since mid-February, a transfer volume of about 12 P...

  13. Dual-Modality Imaging of the Human Finger Joint Systems by Using Combined Multispectral Photoacoustic Computed Tomography and Ultrasound Computed Tomography

    Science.gov (United States)

    Liu, Yubin; Wang, Yating

    2016-01-01

    We developed a homemade dual-modality imaging system that combines multispectral photoacoustic computed tomography and ultrasound computed tomography for reconstructing the structural and functional information of human finger joint systems. The fused multispectral photoacoustic-ultrasound computed tomography (MPAUCT) system was examined by the phantom and in vivo experimental tests. The imaging results indicate that the hard tissues such as the bones and the soft tissues including the blood vessels, the tendon, the skins, and the subcutaneous tissues in the finger joints systems can be effectively recovered by using our multimodality MPAUCT system. The developed MPAUCT system is able to provide us with more comprehensive information of the human finger joints, which shows its potential for characterization and diagnosis of bone or joint diseases. PMID:27774453

  14. Dual-Modality Imaging of the Human Finger Joint Systems by Using Combined Multispectral Photoacoustic Computed Tomography and Ultrasound Computed Tomography

    Directory of Open Access Journals (Sweden)

    Yubin Liu

    2016-01-01

    Full Text Available We developed a homemade dual-modality imaging system that combines multispectral photoacoustic computed tomography and ultrasound computed tomography for reconstructing the structural and functional information of human finger joint systems. The fused multispectral photoacoustic-ultrasound computed tomography (MPAUCT system was examined by the phantom and in vivo experimental tests. The imaging results indicate that the hard tissues such as the bones and the soft tissues including the blood vessels, the tendon, the skins, and the subcutaneous tissues in the finger joints systems can be effectively recovered by using our multimodality MPAUCT system. The developed MPAUCT system is able to provide us with more comprehensive information of the human finger joints, which shows its potential for characterization and diagnosis of bone or joint diseases.

  15. Hand gesture recognition based on motion history images for a simple human-computer interaction system

    Science.gov (United States)

    Timotius, Ivanna K.; Setyawan, Iwan

    2013-03-01

    A human-computer interaction can be developed using several kind of tools. One choice is using images captured using a camera. This paper proposed a simple human-computer interaction system based on hand movement captured by a web camera. The system aims to classify the captured movement into one of three classes. The first two classes contain hand movements to the left and right, respectively. The third class contains non-hand movements or hand movements to other directions. The method used in this paper is based on Motion History Images (MHIs) and nearest neighbor classifier. The resulting MHIs are processed in two manners, namely by summing the pixel values along the vertical axis and reshaping into vectors. We also use two distance criteria in this paper, respectively the Euclidian distance and cross correlation. This paper compared the performance of the combinations of different MHI data processing and distance criteria using 10 runs of 2-fold cross validation. Our experiments show that reshaping the MHI data into vectors combined with a Euclidean distance criterion gives the highest average accuracy, namely 55.67%.

  16. Neural mechanisms of transient neocortical beta rhythms: Converging evidence from humans, computational modeling, monkeys, and mice

    Science.gov (United States)

    Sherman, Maxwell A.; Lee, Shane; Law, Robert; Haegens, Saskia; Thorn, Catherine A.; Hämäläinen, Matti S.; Moore, Christopher I.; Jones, Stephanie R.

    2016-01-01

    Human neocortical 15–29-Hz beta oscillations are strong predictors of perceptual and motor performance. However, the mechanistic origin of beta in vivo is unknown, hindering understanding of its functional role. Combining human magnetoencephalography (MEG), computational modeling, and laminar recordings in animals, we present a new theory that accounts for the origin of spontaneous neocortical beta. In our MEG data, spontaneous beta activity from somatosensory and frontal cortex emerged as noncontinuous beta events typically lasting <150 ms with a stereotypical waveform. Computational modeling uniquely designed to infer the electrical currents underlying these signals showed that beta events could emerge from the integration of nearly synchronous bursts of excitatory synaptic drive targeting proximal and distal dendrites of pyramidal neurons, where the defining feature of a beta event was a strong distal drive that lasted one beta period (∼50 ms). This beta mechanism rigorously accounted for the beta event profiles; several other mechanisms did not. The spatial location of synaptic drive in the model to supragranular and infragranular layers was critical to the emergence of beta events and led to the prediction that beta events should be associated with a specific laminar current profile. Laminar recordings in somatosensory neocortex from anesthetized mice and awake monkeys supported these predictions, suggesting this beta mechanism is conserved across species and recording modalities. These findings make several predictions about optimal states for perceptual and motor performance and guide causal interventions to modulate beta for optimal function. PMID:27469163

  17. Effects of LED-backlit computer screen and emotional selfregulation on human melatonin production.

    Science.gov (United States)

    Sroykham, Watchara; Wongsawat, Yodchanan

    2013-01-01

    Melatonin is a circadian hormone transmitted via suprachiasmatic nucleus (SCN) in the hypothalamus and sympathetic nervous system to the pineal gland. It is a hormone necessary to many human functions such as immune, cardiovascular, neuron and sleep/awake functions. Since melatonin enhancement or suppression is reported to be closely related to the photic information from retina, in this paper, we aim further to study both the lighting condition and the emotional self-regulation in different lighting conditions together with their effects on the production of human melatonin. In this experiment, five participants are in three light exposure conditions by LED backlit computer screen (No light, Red light (∼650nm) and Blue light (∼470nm)) for 30 minute (8-8:30pm), then they are collected saliva both before and after the experiments. After the experiment, the participants are also asked to answer the emotional self-regulation questionnaire of PANAS and BRUMS regarding each light exposure condition. These results show that positive mood mean difference of PANAS between no light and red light is significant with p=0.001. Tension, depression, fatigue, confusion and vigor from BRUMS are not significantly changed while we can observe the significant change in anger mood. Finally, we can also report that the blue light of LED-backlit computer screen significantly suppress melatonin production (91%) more than red light (78%) and no light (44%).

  18. Computational Model of Human and System Dynamics in Free Flight: Studies in Distributed Control Technologies

    Science.gov (United States)

    Corker, Kevin M.; Pisanich, Gregory; Lebacqz, J. Victor (Technical Monitor)

    1998-01-01

    This paper presents a set of studies in full mission simulation and the development of a predictive computational model of human performance in control of complex airspace operations. NASA and the FAA have initiated programs of research and development to provide flight crew, airline operations and air traffic managers with automation aids to increase capacity in en route and terminal area to support the goals of safe, flexible, predictable and efficient operations. In support of these developments, we present a computational model to aid design that includes representation of multiple cognitive agents (both human operators and intelligent aiding systems). The demands of air traffic management require representation of many intelligent agents sharing world-models, coordinating action/intention, and scheduling goals and actions in a potentially unpredictable world of operations. The operator-model structure includes attention functions, action priority, and situation assessment. The cognitive model has been expanded to include working memory operations including retrieval from long-term store, and interference. The operator's activity structures have been developed to provide for anticipation (knowledge of the intention and action of remote operators), and to respond to failures of the system and other operators in the system in situation-specific paradigms. System stability and operator actions can be predicted by using the model. The model's predictive accuracy was verified using the full-mission simulation data of commercial flight deck operations with advanced air traffic management techniques.

  19. Using the electrocorticographic speech network to control a brain-computer interface in humans

    Science.gov (United States)

    Leuthardt, Eric C.; Gaona, Charles; Sharma, Mohit; Szrama, Nicholas; Roland, Jarod; Freudenberg, Zac; Solis, Jamie; Breshears, Jonathan; Schalk, Gerwin

    2011-06-01

    Electrocorticography (ECoG) has emerged as a new signal platform for brain-computer interface (BCI) systems. Classically, the cortical physiology that has been commonly investigated and utilized for device control in humans has been brain signals from the sensorimotor cortex. Hence, it was unknown whether other neurophysiological substrates, such as the speech network, could be used to further improve on or complement existing motor-based control paradigms. We demonstrate here for the first time that ECoG signals associated with different overt and imagined phoneme articulation can enable invasively monitored human patients to control a one-dimensional computer cursor rapidly and accurately. This phonetic content was distinguishable within higher gamma frequency oscillations and enabled users to achieve final target accuracies between 68% and 91% within 15 min. Additionally, one of the patients achieved robust control using recordings from a microarray consisting of 1 mm spaced microwires. These findings suggest that the cortical network associated with speech could provide an additional cognitive and physiologic substrate for BCI operation and that these signals can be acquired from a cortical array that is small and minimally invasive.

  20. Effects of muscle fatigue on the usability of a myoelectric human-computer interface.

    Science.gov (United States)

    Barszap, Alexander G; Skavhaug, Ida-Maria; Joshi, Sanjay S

    2016-10-01

    Electromyography-based human-computer interface development is an active field of research. However, knowledge on the effects of muscle fatigue for specific devices is limited. We have developed a novel myoelectric human-computer interface in which subjects continuously navigate a cursor to targets by manipulating a single surface electromyography (sEMG) signal. Two-dimensional control is achieved through simultaneous adjustments of power in two frequency bands through a series of dynamic low-level muscle contractions. Here, we investigate the potential effects of muscle fatigue during the use of our interface. In the first session, eight subjects completed 300 cursor-to-target trials without breaks; four using a wrist muscle and four using a head muscle. The wrist subjects returned for a second session in which a static fatiguing exercise took place at regular intervals in-between cursor-to-target trials. In the first session we observed no declines in performance as a function of use, even after the long period of use. In the second session, we observed clear changes in cursor trajectories, paired with a target-specific decrease in hit rates.

  1. Computational strategy for quantifying human pesticide exposure based upon a saliva measurement

    Energy Technology Data Exchange (ETDEWEB)

    Timchalk, Charles; Weber, Thomas J.; Smith, Jordan N.

    2015-05-27

    The National Research Council of the National Academies report, Toxicity Testing in the 21st Century: A Vision and Strategy, highlighted the importance of quantitative exposure data for evaluating human toxicity risk and noted that biomonitoring is a critical tool for quantitatively evaluating exposure from both environmental and occupational settings. Direct measurement of chemical exposures using personal monitoring provides the most accurate estimation of a subject’s true exposure, and non-invasive methods have also been advocated for quantifying the pharmacokinetics and bioavailability of drugs and xenobiotics. In this regard, there is a need to identify chemicals that are readily cleared in saliva at concentrations that can be quantified to support the implementation of this approach.. The current manuscript describes the use of computational modeling approaches that are closely coupled to in vivo and in vitro experiments to predict salivary uptake and clearance of xenobiotics. The primary mechanism by which xenobiotics leave the blood and enter saliva is thought to involve paracellular transport, passive transcellular diffusion, or trancellular active transport with the majority of drugs and xenobiotics cleared from plasma into saliva by passive diffusion. The transcellular or paracellular diffusion of unbound chemicals in plasma to saliva has been computational modeled using a combination of compartmental and physiologically based approaches. Of key importance for determining the plasma:saliva partitioning was the utilization of a modified Schmitt algorithm that calculates partitioning based upon the tissue composition, pH, chemical pKa and plasma protein-binding. Sensitivity analysis of key model parameters specifically identified that both protein-binding and pKa (for weak acids and bases) had the most significant impact on the determination of partitioning and that there were clear species dependent differences based upon physiological variance between

  2. FDTD Computation of Human Eye Exposure to Ultra-wideband Electromagnetic Pulses

    CERN Document Server

    Simicevic, Neven

    2007-01-01

    With an increase in the application of ultra-wideband (UWB) electromagnetic pulses in the communications industry, radar, biotechnology and medicine, comes an interest in UWB exposure safety standards. Despite an increase of the scientific research on bioeffects of exposure to non-ionizing UWB pulses, characterization of those effects is far from complete. A numerical computational approach, such as a finite-difference time domain (FDTD) method, is required to visualize and understand the complexity of broadband electromagnetic interactions. The FDTD method has almost no limits in the description of the geometrical and dispersive properties of the simulated material, it is numerically robust and appropriate for current computer technology. In this paper, a complete calculation of exposure of the human eye to UWB electromagnetic pulses in the frequency range of 3.1-10.6, 22-29, and 57-64 GHz is performed. Computation in this frequency range required a geometrical resolution of the eye of $\\rm 0.1 \\: mm$ and an...

  3. The discovery of antidepressant drugs by computer-analyzed human cerebral bio-electrical potentials (CEEG).

    Science.gov (United States)

    Itil, T M

    1983-01-01

    Antidepressant properties of six compounds were predicted based on their computer-analyzed human electroencephalographical (CEEG) profiles. The clinical investigations with mianserin (GB-94) confirmed the CEEG prediction. This compound has now been marketed as the first antidepressant of which the clinical effects were discovered solely by the quantitative pharmaco-EEG method. As predicted by the CEEG, clinical antidepressant properties of GC-46, mesterolone, and estradiol valerate were observed in preliminary investigations. No extensive studies with definite statistical results were yet carried out with these compounds. No systematic large studies could be conducted with cyclozocine and cyproterone acetate because of the intolerable side effects with these compounds. The optical isomers of mianserin, GF-59 and GF-60, both predicted as antidepressant by the computer EEG data base, have not yet been tested in depressive patients. None of these compounds possess the "typical" pharmacological and/or biochemical profiles of marketed antidepressants. Thus, the discovery of the established antidepressant properties of mianserin (GB-94) by computer analyzed EEG method challenges the well-known biochemical hypotheses of depression and the "classical" development of antidepressant drugs.

  4. The UF family of hybrid phantoms of the developing human fetus for computational radiation dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Maynard, Matthew R; Geyer, John W; Bolch, Wesley [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL (United States); Aris, John P [Department of Anatomy and Cell Biology, University of Florida, Gainesville, FL (United States); Shifrin, Roger Y, E-mail: wbolch@ufl.edu [Department of Radiology, University of Florida, Gainesville, FL (United States)

    2011-08-07

    Historically, the development of computational phantoms for radiation dosimetry has primarily been directed at capturing and representing adult and pediatric anatomy, with less emphasis devoted to models of the human fetus. As concern grows over possible radiation-induced cancers from medical and non-medical exposures of the pregnant female, the need to better quantify fetal radiation doses, particularly at the organ-level, also increases. Studies such as the European Union's SOLO (Epidemiological Studies of Exposed Southern Urals Populations) hope to improve our understanding of cancer risks following chronic in utero radiation exposure. For projects such as SOLO, currently available fetal anatomic models do not provide sufficient anatomical detail for organ-level dose assessment. To address this need, two fetal hybrid computational phantoms were constructed using high-quality magnetic resonance imaging and computed tomography image sets obtained for two well-preserved fetal specimens aged 11.5 and 21 weeks post-conception. Individual soft tissue organs, bone sites and outer body contours were segmented from these images using 3D-DOCTOR(TM) and then imported to the 3D modeling software package Rhinoceros(TM) for further modeling and conversion of soft tissue organs, certain bone sites and outer body contours to deformable non-uniform rational B-spline surfaces. The two specimen-specific phantoms, along with a modified version of the 38 week UF hybrid newborn phantom, comprised a set of base phantoms from which a series of hybrid computational phantoms was derived for fetal ages 8, 10, 15, 20, 25, 30, 35 and 38 weeks post-conception. The methodology used to construct the series of phantoms accounted for the following age-dependent parameters: (1) variations in skeletal size and proportion, (2) bone-dependent variations in relative levels of bone growth, (3) variations in individual organ masses and total fetal masses and (4) statistical percentile variations

  5. The UF family of hybrid phantoms of the developing human fetus for computational radiation dosimetry

    Science.gov (United States)

    Maynard, Matthew R.; Geyer, John W.; Aris, John P.; Shifrin, Roger Y.; Bolch, Wesley

    2011-08-01

    Historically, the development of computational phantoms for radiation dosimetry has primarily been directed at capturing and representing adult and pediatric anatomy, with less emphasis devoted to models of the human fetus. As concern grows over possible radiation-induced cancers from medical and non-medical exposures of the pregnant female, the need to better quantify fetal radiation doses, particularly at the organ-level, also increases. Studies such as the European Union's SOLO (Epidemiological Studies of Exposed Southern Urals Populations) hope to improve our understanding of cancer risks following chronic in utero radiation exposure. For projects such as SOLO, currently available fetal anatomic models do not provide sufficient anatomical detail for organ-level dose assessment. To address this need, two fetal hybrid computational phantoms were constructed using high-quality magnetic resonance imaging and computed tomography image sets obtained for two well-preserved fetal specimens aged 11.5 and 21 weeks post-conception. Individual soft tissue organs, bone sites and outer body contours were segmented from these images using 3D-DOCTOR™ and then imported to the 3D modeling software package Rhinoceros™ for further modeling and conversion of soft tissue organs, certain bone sites and outer body contours to deformable non-uniform rational B-spline surfaces. The two specimen-specific phantoms, along with a modified version of the 38 week UF hybrid newborn phantom, comprised a set of base phantoms from which a series of hybrid computational phantoms was derived for fetal ages 8, 10, 15, 20, 25, 30, 35 and 38 weeks post-conception. The methodology used to construct the series of phantoms accounted for the following age-dependent parameters: (1) variations in skeletal size and proportion, (2) bone-dependent variations in relative levels of bone growth, (3) variations in individual organ masses and total fetal masses and (4) statistical percentile variations in

  6. POLYAR, a new computer program for prediction of poly(A sites in human sequences

    Directory of Open Access Journals (Sweden)

    Qamar Raheel

    2010-11-01

    Full Text Available Abstract Background mRNA polyadenylation is an essential step of pre-mRNA processing in eukaryotes. Accurate prediction of the pre-mRNA 3'-end cleavage/polyadenylation sites is important for defining the gene boundaries and understanding gene expression mechanisms. Results 28761 human mapped poly(A sites have been classified into three classes containing different known forms of polyadenylation signal (PAS or none of them (PAS-strong, PAS-weak and PAS-less, respectively and a new computer program POLYAR for the prediction of poly(A sites of each class was developed. In comparison with polya_svm (till date the most accurate computer program for prediction of poly(A sites while searching for PAS-strong poly(A sites in human sequences, POLYAR had a significantly higher prediction sensitivity (80.8% versus 65.7% and specificity (66.4% versus 51.7% However, when a similar sort of search was conducted for PAS-weak and PAS-less poly(A sites, both programs had a very low prediction accuracy, which indicates that our knowledge about factors involved in the determination of the poly(A sites is not sufficient to identify such polyadenylation regions. Conclusions We present a new classification of polyadenylation sites into three classes and a novel computer program POLYAR for prediction of poly(A sites/regions of each of the class. In tests, POLYAR shows high accuracy of prediction of the PAS-strong poly(A sites, though this program's efficiency in searching for PAS-weak and PAS-less poly(A sites is not very high but is comparable to other available programs. These findings suggest that additional characteristics of such poly(A sites remain to be elucidated. POLYAR program with a stand-alone version for downloading is available at http://cub.comsats.edu.pk/polyapredict.htm.

  7. Irrigation of human prepared root canal – ex vivo based computational fluid dynamics analysis

    Science.gov (United States)

    Šnjarić, Damir; Čarija, Zoran; Braut, Alen; Halaji, Adelaida; Kovačević, Maja; Kuiš, Davor

    2012-01-01

    Aim To analyze the influence of the needle type, insertion depth, and irrigant flow rate on irrigant flow pattern, flow velocity, and apical pressure by ex-vivo based endodontic irrigation computational fluid dynamics (CFD) analysis. Methods Human upper canine root canal was prepared using rotary files. Contrast fluid was introduced in the root canal and scanned by computed tomography (CT) providing a three-dimensional object that was exported to the computer-assisted design (CAD) software. Two probe points were established in the apical portion of the root canal model for flow velocity and pressure measurement. Three different CAD models of 27G irrigation needles (closed-end side-vented, notched open-end, and bevel open-end) were created and placed at 25, 50, 75, and 95% of the working length (WL). Flow rates of 0.05, 0.1, 0.2, 0.3, and 0.4 mL/s were simulated. A total of 60 irrigation simulations were performed by CFD fluid flow solver. Results Closed-end side-vented needle required insertion depth closer to WL, regarding efficient irrigant replacement, compared to open-end irrigation needle types, which besides increased velocity produced increased irrigant apical pressure. For all irrigation needle types and needle insertion depths, the increase of flow rate was followed by an increased irrigant apical pressure. Conclusions The human root canal shape obtained by CT is applicable in the CFD analysis of endodontic irrigation. All the analyzed values –irrigant flow pattern, velocity, and pressure – were influenced by irrigation needle type, as well as needle insertion depth and irrigant flow rate. PMID:23100209

  8. High School Students' Written Argumentation Qualities with Problem-Based Computer-Aided Material (PBCAM) Designed about Human Endocrine System

    Science.gov (United States)

    Vekli, Gülsah Sezen; Çimer, Atilla

    2017-01-01

    This study investigated development of students' scientific argumentation levels in the applications made with Problem-Based Computer-Aided Material (PBCAM) designed about Human Endocrine System. The case study method was used: The study group was formed of 43 students in the 11th grade of the science high school in Rize. Human Endocrine System…

  9. Reciprocity in computer-human interaction: source-based, norm-based, and affect-based explanations.

    Science.gov (United States)

    Lee, Seungcheol Austin; Liang, Yuhua Jake

    2015-04-01

    Individuals often apply social rules when they interact with computers, and this is known as the Computers Are Social Actors (CASA) effect. Following previous work, one approach to understand the mechanism responsible for CASA is to utilize computer agents and have the agents attempt to gain human compliance (e.g., completing a pattern recognition task). The current study focuses on three key factors frequently cited to influence traditional notions of compliance: evaluations toward the source (competence and warmth), normative influence (reciprocity), and affective influence (mood). Structural equation modeling assessed the effects of these factors on human compliance with computer request. The final model shows that norm-based influence (reciprocity) increased the likelihood of compliance, while evaluations toward the computer agent did not significantly influence compliance.

  10. A Computational Protein Phenotype Prediction Approach to Analyze the Deleterious Mutations of Human MED12 Gene.

    Science.gov (United States)

    Banaganapalli, Babajan; Mohammed, Kaleemuddin; Khan, Imran Ali; Al-Aama, Jumana Y; Elango, Ramu; Shaik, Noor Ahmad

    2016-09-01

    Genetic mutations in MED12, a subunit of Mediator complex are seen in a broad spectrum of human diseases. However, the underlying basis of how these pathogenic mutations elicit protein phenotype changes in terms of 3D structure, stability and protein binding sites remains unknown. Therefore, we aimed to investigate the structural and functional impacts of MED12 mutations, using computational methods as an alternate to traditional in vivo and in vitro approaches. The MED12 gene mutations details and their corresponding clinical associations were collected from different databases and by text-mining. Initially, diverse computational approaches were applied to categorize the different classes of mutations based on their deleterious impact to MED12. Then, protein structures for wild and mutant types built by integrative modeling were analyzed for structural divergence, solvent accessibility, stability, and functional interaction deformities. Finally, this study was able to identify that genetic mutations mapped to exon-2 region, highly conserved LCEWAV and Catenin domains induce biochemically severe amino acid changes which alters the protein phenotype as well as the stability of MED12-CYCC interactions. To better understand the deleterious nature of FS-IDs and Indels, this study asserts the utility of computational screening based on their propensity towards non-sense mediated decay. Current study findings may help to narrow down the number of MED12 mutations to be screened for mediator complex dysfunction associated genetic diseases. This study supports computational methods as a primary filter to verify the plausible impact of pathogenic mutations based on the perspective of evolution, expression and phenotype of proteins. J. Cell. Biochem. 117: 2023-2035, 2016. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.

  11. Adaptive intermittent control: A computational model explaining motor intermittency observed in human behavior.

    Science.gov (United States)

    Sakaguchi, Yutaka; Tanaka, Masato; Inoue, Yasuyuki

    2015-07-01

    It is a fundamental question how our brain performs a given motor task in a real-time fashion with the slow sensorimotor system. Computational theory proposed an influential idea of feed-forward control, but it has mainly treated the case that the movement is ballistic (such as reaching) because the motor commands should be calculated in advance of movement execution. As a possible mechanism for operating feed-forward control in continuous motor tasks (such as target tracking), we propose a control model called "adaptive intermittent control" or "segmented control," that brain adaptively divides the continuous time axis into discrete segments and executes feed-forward control in each segment. The idea of intermittent control has been proposed in the fields of control theory, biological modeling and nonlinear dynamical system. Compared with these previous models, the key of the proposed model is that the system speculatively determines the segmentation based on the future prediction and its uncertainty. The result of computer simulation showed that the proposed model realized faithful visuo-manual tracking with realistic sensorimotor delays and with less computational costs (i.e., with fewer number of segments). Furthermore, it replicated "motor intermittency", that is, intermittent discontinuities commonly observed in human movement trajectories. We discuss that the temporally segmented control is an inevitable strategy for brain which has to achieve a given task with small computational (or cognitive) cost, using a slow control system in an uncertain variable environment, and the motor intermittency is the side-effect of this strategy. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Distributed human intelligence for colonic polyp classification in computer-aided detection for CT colonography.

    Science.gov (United States)

    Nguyen, Tan B; Wang, Shijun; Anugu, Vishal; Rose, Natalie; McKenna, Matthew; Petrick, Nicholas; Burns, Joseph E; Summers, Ronald M

    2012-03-01

    To assess the diagnostic performance of distributed human intelligence for the classification of polyp candidates identified with computer-aided detection (CAD) for computed tomographic (CT) colonography. This study was approved by the institutional Office of Human Subjects Research. The requirement for informed consent was waived for this HIPAA-compliant study. CT images from 24 patients, each with at least one polyp of 6 mm or larger, were analyzed by using CAD software to identify 268 polyp candidates. Twenty knowledge workers (KWs) from a crowdsourcing platform labeled each polyp candidate as a true or false polyp. Two trials involving 228 KWs were conducted to assess reproducibility. Performance was assessed by comparing the area under the receiver operating characteristic curve (AUC) of KWs with the AUC of CAD for polyp classification. The detection-level AUC for KWs was 0.845 ± 0.045 (standard error) in trial 1 and 0.855 ± 0.044 in trial 2. These were not significantly different from the AUC for CAD, which was 0.859 ± 0.043. When polyp candidates were stratified by difficulty, KWs performed better than CAD on easy detections; AUCs were 0.951 ± 0.032 in trial 1, 0.966 ± 0.027 in trial 2, and 0.877 ± 0.048 for CAD (P = .039 for trial 2). KWs who participated in both trials showed a significant improvement in performance going from trial 1 to trial 2; AUCs were 0.759 ± 0.052 in trial 1 and 0.839 ± 0.046 in trial 2 (P = .041). The performance of distributed human intelligence is not significantly different from that of CAD for colonic polyp classification. © RSNA.

  13. Strategies for improved interpretation of computer-aided detections for CT colonography utilizing distributed human intelligence.

    Science.gov (United States)

    McKenna, Matthew T; Wang, Shijun; Nguyen, Tan B; Burns, Joseph E; Petrick, Nicholas; Summers, Ronald M

    2012-08-01

    Computer-aided detection (CAD) systems have been shown to improve the diagnostic performance of CT colonography (CTC) in the detection of premalignant colorectal polyps. Despite the improvement, the overall system is not optimal. CAD annotations on true lesions are incorrectly dismissed, and false positives are misinterpreted as true polyps. Here, we conduct an observer performance study utilizing distributed human intelligence in the form of anonymous knowledge workers (KWs) to investigate human performance in classifying polyp candidates under different presentation strategies. We evaluated 600 polyp candidates from 50 patients, each case having at least one polyp ≥6 mm, from a large database of CTC studies. Each polyp candidate was labeled independently as a true or false polyp by 20 KWs and an expert radiologist. We asked each labeler to determine whether the candidate was a true polyp after looking at a single 3D-rendered image of the candidate and after watching a video fly-around of the candidate. We found that distributed human intelligence improved significantly when presented with the additional information in the video fly-around. We noted that performance degraded with increasing interpretation time and increasing difficulty, but distributed human intelligence performed better than our CAD classifier for "easy" and "moderate" polyp candidates. Further, we observed numerous parallels between the expert radiologist and the KWs. Both showed similar improvement in classification moving from single-image to video interpretation. Additionally, difficulty estimates obtained from the KWs using an expectation maximization algorithm correlated well with the difficulty rating assigned by the expert radiologist. Our results suggest that distributed human intelligence is a powerful tool that will aid in the development of CAD for CTC. Copyright © 2012. Published by Elsevier B.V.

  14. Computer-based or human patient simulation-based case analysis: which works better for teaching diagnostic reasoning skills?

    Science.gov (United States)

    Wilson, Rebecca D; Klein, James D; Hagler, Debra

    2014-01-01

    The purpose of this study was to determine whether a difference exists in learner performance and the type and frequency of diagnostic reasoning skills used, based on the method of case presentation. Faculty can select from a variety of methods for presenting cases when teaching diagnostic reasoning, but little evidence exists with regard to how students use these skills while interacting with the cases. A total of 54 nursing students participated in two case analyses using human patient and computer-based simulations. Participant performance and diagnostic reasoning skills were analyzed. Performance was significantly better with the human patient simulation case. All diagnostic reasoning skills were used during both methods of case presentation, with greater performance variation in the computer-based simulation. Both human patient and computer-based simulations are beneficial for practicing diagnostic reasoning skills; however, these findings support the use of human patient simulations for improving student performance in case synthesis.

  15. Three-dimensional computer-aided human factors engineering analysis of a grafting robot.

    Science.gov (United States)

    Chiu, Y C; Chen, S; Wu, G J; Lin, Y H

    2012-07-01

    The objective of this research was to conduct a human factors engineering analysis of a grafting robot design using computer-aided 3D simulation technology. A prototype tubing-type grafting robot for fruits and vegetables was the subject of a series of case studies. To facilitate the incorporation of human models into the operating environment of the grafting robot, I-DEAS graphic software was applied to establish individual models of the grafting robot in line with Jack ergonomic analysis. Six human models (95th percentile, 50th percentile, and 5th percentile by height for both males and females) were employed to simulate the operating conditions and working postures in a real operating environment. The lower back and upper limb stresses of the operators were analyzed using the lower back analysis (LBA) and rapid upper limb assessment (RULA) functions in Jack. The experimental results showed that if a leg space is introduced under the robot, the operator can sit closer to the robot, which reduces the operator's level of lower back and upper limbs stress. The proper environmental layout for Taiwanese operators for minimum levels of lower back and upper limb stress are to set the grafting operation at 23.2 cm away from the operator at a height of 85 cm and with 45 cm between the rootstock and scion units.

  16. Task analysis and computer aid development for human reliability analysis in nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, W. C.; Kim, H.; Park, H. S.; Choi, H. H.; Moon, J. M.; Heo, J. Y.; Ham, D. H.; Lee, K. K.; Han, B. T. [Korea Advanced Institute of Science and Technology, Taejeon (Korea)

    2001-04-01

    Importance of human reliability analysis (HRA) that predicts the error's occurrence possibility in a quantitative and qualitative manners is gradually increased by human errors' effects on the system's safety. HRA needs a task analysis as a virtue step, but extant task analysis techniques have the problem that a collection of information about the situation, which the human error occurs, depends entirely on HRA analyzers. The problem makes results of the task analysis inconsistent and unreliable. To complement such problem, KAERI developed the structural information analysis (SIA) that helps to analyze task's structure and situations systematically. In this study, the SIA method was evaluated by HRA experts, and a prototype computerized supporting system named CASIA (Computer Aid for SIA) was developed for the purpose of supporting to perform HRA using the SIA method. Additionally, through applying the SIA method to emergency operating procedures, we derived generic task types used in emergency and accumulated the analysis results in the database of the CASIA. The CASIA is expected to help HRA analyzers perform the analysis more easily and consistently. If more analyses will be performed and more data will be accumulated to the CASIA's database, HRA analyzers can share freely and spread smoothly his or her analysis experiences, and there by the quality of the HRA analysis will be improved. 35 refs., 38 figs., 25 tabs. (Author)

  17. Creating Communications, Computing, and Networking Technology Development Road Maps for Future NASA Human and Robotic Missions

    Science.gov (United States)

    Bhasin, Kul; Hayden, Jeffrey L.

    2005-01-01

    For human and robotic exploration missions in the Vision for Exploration, roadmaps are needed for capability development and investments based on advanced technology developments. A roadmap development process was undertaken for the needed communications, and networking capabilities and technologies for the future human and robotics missions. The underlying processes are derived from work carried out during development of the future space communications architecture, an d NASA's Space Architect Office (SAO) defined formats and structures for accumulating data. Interrelationships were established among emerging requirements, the capability analysis and technology status, and performance data. After developing an architectural communications and networking framework structured around the assumed needs for human and robotic exploration, in the vicinity of Earth, Moon, along the path to Mars, and in the vicinity of Mars, information was gathered from expert participants. This information was used to identify the capabilities expected from the new infrastructure and the technological gaps in the way of obtaining them. We define realistic, long-term space communication architectures based on emerging needs and translate the needs into interfaces, functions, and computer processing that will be required. In developing our roadmapping process, we defined requirements for achieving end-to-end activities that will be carried out by future NASA human and robotic missions. This paper describes: 10 the architectural framework developed for analysis; 2) our approach to gathering and analyzing data from NASA, industry, and academia; 3) an outline of the technology research to be done, including milestones for technology research and demonstrations with timelines; and 4) the technology roadmaps themselves.

  18. Standardized Procedure Content And Data Structure Based On Human Factors Requirements For Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L

    2015-02-01

    Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that provides further enhancement of safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs to have the ability to perform logical operations in order to adjust to the inputs received from either users or real time data from plant status databases. Without the ability for logical operations the procedure is just an electronic copy of the paper-based procedure. In order to provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver all data and instructions to create the steps. The procedure should be broken down into basic elements and formatted in a standard method for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, which utilizes basic elements to build each step in the smart procedure. The attributes of each step will determine the type of functionality that the system will generate for that step. The CBPS will provide the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the need for the procedure writer to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers as well as the

  19. The Dimensions of the Orbital Cavity Based on High-Resolution Computed Tomography of Human Cadavers

    DEFF Research Database (Denmark)

    Felding, Ulrik Ascanius; Bloch, Sune Land; Buchwald, Christian von

    2016-01-01

    for surface area. To authors' knowledge, this study is the first to have measured the entire surface area of the orbital cavity.The volume and surface area of the orbital cavity were estimated in computed tomography scans of 11 human cadavers using unbiased stereological sampling techniques. The mean (± SD......Blow-out fractures affect the volume and surface area of the orbital cavity. Estimation of these values after the trauma may help in deciding whether or not a patient is a candidate for surgery. Recent studies have provided estimates of orbital volume and area of bone defect, and correlated them...... with the degree of enophthalmos. However, a large degree of biological variation between individuals may preclude such absolute values from being successful indicators for surgery.Stereological methods have been used to estimate orbital cavity volume in a few studies, but to date these have not been used...

  20. A combined computational and structural model of the full-length human prolactin receptor

    DEFF Research Database (Denmark)

    Bugge, Katrine Østergaard; Papaleo, Elena; Haxholm, Gitte Wolfsberg;

    2016-01-01

    The prolactin receptor is an archetype member of the class I cytokine receptor family, comprising receptors with fundamental functions in biology as well as key drug targets. Structurally, each of these receptors represent an intriguing diversity, providing an exceptionally challenging target...... for structural biology. Here, we access the molecular architecture of the monomeric human prolactin receptor by combining experimental and computational efforts. We solve the NMR structure of its transmembrane domain in micelles and collect structural data on overlapping fragments of the receptor with small......-angle X-ray scattering, native mass spectrometry and NMR spectroscopy. Along with previously published data, these are integrated by molecular modelling to generate a full receptor structure. The result provides the first full view of a class I cytokine receptor, exemplifying the architecture of more than...

  1. Categorisation of visualisation methods to support the design of Human-Computer Interaction Systems.

    Science.gov (United States)

    Li, Katie; Tiwari, Ashutosh; Alcock, Jeffrey; Bermell-Garcia, Pablo

    2016-07-01

    During the design of Human-Computer Interaction (HCI) systems, the creation of visual artefacts forms an important part of design. On one hand producing a visual artefact has a number of advantages: it helps designers to externalise their thought and acts as a common language between different stakeholders. On the other hand, if an inappropriate visualisation method is employed it could hinder the design process. To support the design of HCI systems, this paper reviews the categorisation of visualisation methods used in HCI. A keyword search is conducted to identify a) current HCI design methods, b) approaches of selecting these methods. The resulting design methods are filtered to create a list of just visualisation methods. These are then categorised using the approaches identified in (b). As a result 23 HCI visualisation methods are identified and categorised in 5 selection approaches (The Recipient, Primary Purpose, Visual Archetype, Interaction Type, and The Design Process).

  2. A combined computational and structural model of the full-length human prolactin receptor

    Science.gov (United States)

    Bugge, Katrine; Papaleo, Elena; Haxholm, Gitte W.; Hopper, Jonathan T. S.; Robinson, Carol V.; Olsen, Johan G.; Lindorff-Larsen, Kresten; Kragelund, Birthe B.

    2016-05-01

    The prolactin receptor is an archetype member of the class I cytokine receptor family, comprising receptors with fundamental functions in biology as well as key drug targets. Structurally, each of these receptors represent an intriguing diversity, providing an exceptionally challenging target for structural biology. Here, we access the molecular architecture of the monomeric human prolactin receptor by combining experimental and computational efforts. We solve the NMR structure of its transmembrane domain in micelles and collect structural data on overlapping fragments of the receptor with small-angle X-ray scattering, native mass spectrometry and NMR spectroscopy. Along with previously published data, these are integrated by molecular modelling to generate a full receptor structure. The result provides the first full view of a class I cytokine receptor, exemplifying the architecture of more than 40 different receptor chains, and reveals that the extracellular domain is merely the tip of a molecular iceberg.

  3. An Overview of a Decade of Journal Publications about Culture and Human-Computer Interaction (HCI)

    Science.gov (United States)

    Clemmensen, Torkil; Roese, Kerstin

    In this paper, we analyze the concept of human-computer interaction in cultural and national contexts. Building and extending upon the framework for understanding research in usability and culture by Honold [3], we give an overview of publications in culture and HCI between 1998 and 2008, with a narrow focus on high-level journal publications only. The purpose is to review current practice in how cultural HCI issues are studied, and to analyse problems with the measures and interpretation of this studies. We find that Hofstede's cultural dimensions has been the dominating model of culture, participants have been picked because they could speak English, and most studies have been large scale quantitative studies. In order to balance this situation, we recommend that more researchers and practitioners do qualitative, empirical work studies.

  4. Computer Simulation for Effect of Tb3+ on Ca2+Speciation in Human Plasma

    Institute of Scientific and Technical Information of China (English)

    卢兴; 王悦; 张海元; 王进平; 牛春吉; 倪嘉缵

    2002-01-01

    Effect of Tb3+ on Ca2+ speciation in human plasma was studied by means of the computer program of MINTEQA2. When Tb3+ ions are not added into the system, Ca2+ ions mostly distribute in free Ca2+ (74.7%) and the surplus distributes in Ca2+ complexes, such as [CaHCO3]+(7.9%), [Ca(Lac)]+(6.4%), CaHPO4 (1.3%), [CaHistidinateThreoninateH3]3+(2.4%), [CaCitrateHistidinateH2] (2.3%) and CaCO3(1.1%). Tb3+ can compete with Ca2+ for inorganic as well as biological ligands. An increase of concentration of Tb3+ in the system results in an increase of content of free Ca2+ and a decrease of contents of Ca2+ complexes.

  5. Soft Electronics Enabled Ergonomic Human-Computer Interaction for Swallowing Training

    Science.gov (United States)

    Lee, Yongkuk; Nicholls, Benjamin; Sup Lee, Dong; Chen, Yanfei; Chun, Youngjae; Siang Ang, Chee; Yeo, Woon-Hong

    2017-04-01

    We introduce a skin-friendly electronic system that enables human-computer interaction (HCI) for swallowing training in dysphagia rehabilitation. For an ergonomic HCI, we utilize a soft, highly compliant (“skin-like”) electrode, which addresses critical issues of an existing rigid and planar electrode combined with a problematic conductive electrolyte and adhesive pad. The skin-like electrode offers a highly conformal, user-comfortable interaction with the skin for long-term wearable, high-fidelity recording of swallowing electromyograms on the chin. Mechanics modeling and experimental quantification captures the ultra-elastic mechanical characteristics of an open mesh microstructured sensor, conjugated with an elastomeric membrane. Systematic in vivo studies investigate the functionality of the soft electronics for HCI-enabled swallowing training, which includes the application of a biofeedback system to detect swallowing behavior. The collection of results demonstrates clinical feasibility of the ergonomic electronics in HCI-driven rehabilitation for patients with swallowing disorders.

  6. Computer-aided geometric modeling of the human eye and orbit.

    Science.gov (United States)

    Parshall, R F

    1991-01-01

    The author advocates, as a long-term development agenda for the profession, a shift in the working methods of medical illustrators from a two-dimensional image processing mode to a computer-aided design and drafting (CADD) mode. Existing CADD technology, which can make short work of the complex graphic construction problems of anatomical visualization, performs virtually all of its manipulations through systematic exercise of graphic geometry which illustrators tend to reduce to an intuitive, almost vestigial supplement to 2D image processing methods. The primary barrier to the immediate use of CADD is a lack of geometric database materials on anatomical component systems of the body. An on-going experimental project in modeling the human eye and orbit, utilizing a Silicon Graphics Iris workstation and Control Data Corporation's Integrated Computerized Engineering and Manufacturing (ICEM) software, exemplifies the preparatory work needed to create such database materials.

  7. Interaction of cyclodextrins with human and bovine serum albumins: A combined spectroscopic and computational investigation

    Indian Academy of Sciences (India)

    Saptarshi Ghosh; Bijan Kumar Paul; Nitin Chattopadhyay

    2014-07-01

    Interaction of cyclodextrins (CDs) with the two most abundant proteins, namely human serum albumin (HSA) and bovine serum albumin (BSA), has been investigated using steady-state and time-resolved fluorometric techniques, circular dichroism measurements and molecular docking simulation. The study reveals that the three CDs interact differently on the fluorescence and fluorescence lifetimes of the serum albumins. However, fluorescence anisotropy and circular dichroism are not affected. Depending on their size, different CDs bind to the serum albumins in different positions, resulting in changes in the spectral behaviour of the proteins. Docking study suggests the probable binding sites of the three CDs with the proteins. Combined experimental and computational studies imply that sufficiently high concentration of CDs causes loosening of the rigid structures of these transport proteins, although their secondary structures remain intact. Thus, CDs are found to be safe for the serum proteins from the structural point of view.

  8. An Human-Computer Interactive Augmented Reality System for Coronary Artery Diagnosis Planning and Training.

    Science.gov (United States)

    Li, Qiming; Huang, Chen; Lv, Shengqing; Li, Zeyu; Chen, Yimin; Ma, Lizhuang

    2017-09-02

    In order to let the doctor carry on the coronary artery diagnosis and preoperative planning in a more intuitive and more natural way, and to improve the training effect for interns, an augmented reality system for coronary artery diagnosis planning and training (ARS-CADPT) is designed and realized in this paper. At first, a 3D reconstruction algorithm based on computed tomographic (CT) images is proposed to model the coronary artery vessels (CAV). Secondly, the algorithms of static gesture recognition and dynamic gesture spotting and recognition are presented to realize the real-time and friendly human-computer interaction (HCI), which is the characteristic of ARS-CADPT. Thirdly, a Sort-First parallel rendering and splicing display subsystem is developed, which greatly expands the capacity of student users. The experimental results show that, with the use of ARS-CADPT, the reconstruction accuracy of CAV model is high, the HCI is natural and fluent, and the visual effect is good. In a word, the system fully meets the application requirement.

  9. HumanComputer Systems Interaction Backgrounds and Applications 2 Part 2

    CERN Document Server

    Kulikowski, Juliusz; Mroczek, Teresa

    2012-01-01

    This volume of the book contains a collection of chapters selected from the papers which originally (in shortened form) have been presented at the 3rd International Conference on Human-Systems Interaction held in Rzeszow, Poland, in 2010. The chapters are divided into five sections concerning: IV. Environment monitoring and robotic systems, V. Diagnostic systems, VI. Educational Systems, and VII. General Problems. The novel concepts and realizations of humanoid robots, talking robots and orthopedic surgical robots, as well as those of direct brain-computer interface  are examples of particularly interesting topics presented in Sec. VI. In Sec. V the problems of  skin cancer recognition, colonoscopy diagnosis, and brain strokes diagnosis as well as more general problems of ontology design for  medical diagnostic knowledge are presented. Example of an industrial diagnostic system and a concept of new algorithm for edges detection in computer-analyzed images  are also presented in this Section. Among the edu...

  10. Parallel computing simulation of electrical excitation and conduction in the 3D human heart.

    Science.gov (United States)

    Di Yu; Dongping Du; Hui Yang; Yicheng Tu

    2014-01-01

    A correctly beating heart is important to ensure adequate circulation of blood throughout the body. Normal heart rhythm is produced by the orchestrated conduction of electrical signals throughout the heart. Cardiac electrical activity is the resulted function of a series of complex biochemical-mechanical reactions, which involves transportation and bio-distribution of ionic flows through a variety of biological ion channels. Cardiac arrhythmias are caused by the direct alteration of ion channel activity that results in changes in the AP waveform. In this work, we developed a whole-heart simulation model with the use of massive parallel computing with GPGPU and OpenGL. The simulation algorithm was implemented under several different versions for the purpose of comparisons, including one conventional CPU version and two GPU versions based on Nvidia CUDA platform. OpenGL was utilized for the visualization / interaction platform because it is open source, light weight and universally supported by various operating systems. The experimental results show that the GPU-based simulation outperforms the conventional CPU-based approach and significantly improves the speed of simulation. By adopting modern computer architecture, this present investigation enables real-time simulation and visualization of electrical excitation and conduction in the large and complicated 3D geometry of a real-world human heart.

  11. Computational dissection of human episodic memory reveals mental process-specific genetic profiles.

    Science.gov (United States)

    Luksys, Gediminas; Fastenrath, Matthias; Coynel, David; Freytag, Virginie; Gschwind, Leo; Heck, Angela; Jessen, Frank; Maier, Wolfgang; Milnik, Annette; Riedel-Heller, Steffi G; Scherer, Martin; Spalek, Klara; Vogler, Christian; Wagner, Michael; Wolfsgruber, Steffen; Papassotiropoulos, Andreas; de Quervain, Dominique J-F

    2015-09-01

    Episodic memory performance is the result of distinct mental processes, such as learning, memory maintenance, and emotional modulation of memory strength. Such processes can be effectively dissociated using computational models. Here we performed gene set enrichment analyses of model parameters estimated from the episodic memory performance of 1,765 healthy young adults. We report robust and replicated associations of the amine compound SLC (solute-carrier) transporters gene set with the learning rate, of the collagen formation and transmembrane receptor protein tyrosine kinase activity gene sets with the modulation of memory strength by negative emotional arousal, and of the L1 cell adhesion molecule (L1CAM) interactions gene set with the repetition-based memory improvement. Furthermore, in a large functional MRI sample of 795 subjects we found that the association between L1CAM interactions and memory maintenance revealed large clusters of differences in brain activity in frontal cortical areas. Our findings provide converging evidence that distinct genetic profiles underlie specific mental processes of human episodic memory. They also provide empirical support to previous theoretical and neurobiological studies linking specific neuromodulators to the learning rate and linking neural cell adhesion molecules to memory maintenance. Furthermore, our study suggests additional memory-related genetic pathways, which may contribute to a better understanding of the neurobiology of human memory.

  12. Neural and cortisol responses during play with human and computer partners in children with autism.

    Science.gov (United States)

    Edmiston, Elliot Kale; Merkle, Kristen; Corbett, Blythe A

    2015-08-01

    Children with autism spectrum disorder (ASD) exhibit impairment in reciprocal social interactions, including play, which can manifest as failure to show social preference or discrimination between social and nonsocial stimuli. To explore mechanisms underlying these deficits, we collected salivary cortisol from 42 children 8-12 years with ASD or typical development during a playground interaction with a confederate child. Participants underwent functional MRI during a prisoner's dilemma game requiring cooperation or defection with a human (confederate) or computer partner. Search region of interest analyses were based on previous research (e.g. insula, amygdala, temporal parietal junction-TPJ). There were significant group differences in neural activation based on partner and response pattern. When playing with a human partner, children with ASD showed limited engagement of a social salience brain circuit during defection. Reduced insula activation during defection in the ASD children relative to TD children, regardless of partner type, was also a prominent finding. Insula and TPJ BOLD during defection was also associated with stress responsivity and behavior in the ASD group under playground conditions. Children with ASD engage social salience networks less than TD children during conditions of social salience, supporting a fundamental disturbance of social engagement. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  13. Comparison between a Computational Seated Human Model and Experimental Verification Data

    Directory of Open Access Journals (Sweden)

    Christian G. Olesen

    2014-01-01

    Full Text Available Sitting-acquired deep tissue injuries (SADTI are the most serious type of pressure ulcers. In order to investigate the aetiology of SADTI a new approach is under development: a musculo-skeletal model which can predict forces between the chair and the human body at different seated postures. This study focuses on comparing results from a model developed in the AnyBody Modeling System, with data collected from an experimental setup. A chair with force-measuring equipment was developed, an experiment was conducted with three subjects, and the experimental results were compared with the predictions of the computational model. The results show that the model predicted the reaction forces for different chair postures well. The correlation coefficients of how well the experiment and model correlate for the seat angle, backrest angle and footrest height was 0.93, 0.96, and 0.95. The study show a good agreement between experimental data and model prediction of forces between a human body and a chair. The model can in the future be used in designing wheelchairs or automotive seats.

  14. In-vivo measurement of the human soft tissues constitutive laws. Applications to Computer Aided Surgery

    CERN Document Server

    Schiavone, Patrick; Ohayon, J; Payan, Y

    2007-01-01

    In the 80's, biomechanicians were asked to work on Computer Aided Surgery applications since orthopaedic surgeons were looking for numerical tools able to predict risks of fractures. More recently, biomechanicians started to address soft tissues arguing that most of the human body is made of such tissues that can move as well as deform during surgical gestures [1]. An intra-operative use of a continuous Finite Element (FE) Model of a given tissue mainly faces two problems: (1) the numerical simulations have to be "interactive", i.e. sufficiently fast to provide results during surgery (which can be a strong issue in the context of hyperelastic models for example) and (2) during the intervention, the surgeon needs a device that can be used to provide to the model an estimation of the patient-specific constitutive behaviour of the soft tissues. This work proposes an answer to the second point, with the design of a new aspiration device aiming at characterizing the in vivo constitutive laws of human soft tissues....

  15. X-ray micro computed tomography for the visualization of an atherosclerotic human coronary artery

    Science.gov (United States)

    Matviykiv, Sofiya; Buscema, Marzia; Deyhle, Hans; Pfohl, Thomas; Zumbuehl, Andreas; Saxer, Till; Müller, Bert

    2017-06-01

    Atherosclerosis refers to narrowing or blocking of blood vessels that can lead to a heart attack, chest pain or stroke. Constricted segments of diseased arteries exhibit considerably increased wall shear stress, compared to the healthy ones. One of the possibilities to improve patient’s treatment is the application of nano-therapeutic approaches, based on shear stress sensitive nano-containers. In order to tailor the chemical composition and subsequent physical properties of such liposomes, one has to know precisely the morphology of critically stenosed arteries at micrometre resolution. It is often obtained by means of histology, which has the drawback of offering only two-dimensional information. Additionally, it requires the artery to be decalcified before sectioning, which might lead to deformations within the tissue. Micro computed tomography (μCT) enables the three-dimensional (3D) visualization of soft and hard tissues at micrometre level. μCT allows lumen segmentation that is crucial for subsequent flow simulation analysis. In this communication, tomographic images of a human coronary artery before and after decalcification are qualitatively and quantitatively compared. We analyse the cross section of the diseased human coronary artery before and after decalcification, and calculate the lumen area of both samples.

  16. Cloud computing-based TagSNP selection algorithm for human genome data.

    Science.gov (United States)

    Hung, Che-Lun; Chen, Wen-Pei; Hua, Guan-Jie; Zheng, Huiru; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2015-01-05

    Single nucleotide polymorphisms (SNPs) play a fundamental role in human genetic variation and are used in medical diagnostics, phylogeny construction, and drug design. They provide the highest-resolution genetic fingerprint for identifying disease associations and human features. Haplotypes are regions of linked genetic variants that are closely spaced on the genome and tend to be inherited together. Genetics research has revealed SNPs within certain haplotype blocks that introduce few distinct common haplotypes into most of the population. Haplotype block structures are used in association-based methods to map disease genes. In this paper, we propose an efficient algorithm for identifying haplotype blocks in the genome. In chromosomal haplotype data retrieved from the HapMap project website, the proposed algorithm identified longer haplotype blocks than an existing algorithm. To enhance its performance, we extended the proposed algorithm into a parallel algorithm that copies data in parallel via the Hadoop MapReduce framework. The proposed MapReduce-paralleled combinatorial algorithm performed well on real-world data obtained from the HapMap dataset; the improvement in computational efficiency was proportional to the number of processors used.

  17. Cloud Computing-Based TagSNP Selection Algorithm for Human Genome Data

    Science.gov (United States)

    Hung, Che-Lun; Chen, Wen-Pei; Hua, Guan-Jie; Zheng, Huiru; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2015-01-01

    Single nucleotide polymorphisms (SNPs) play a fundamental role in human genetic variation and are used in medical diagnostics, phylogeny construction, and drug design. They provide the highest-resolution genetic fingerprint for identifying disease associations and human features. Haplotypes are regions of linked genetic variants that are closely spaced on the genome and tend to be inherited together. Genetics research has revealed SNPs within certain haplotype blocks that introduce few distinct common haplotypes into most of the population. Haplotype block structures are used in association-based methods to map disease genes. In this paper, we propose an efficient algorithm for identifying haplotype blocks in the genome. In chromosomal haplotype data retrieved from the HapMap project website, the proposed algorithm identified longer haplotype blocks than an existing algorithm. To enhance its performance, we extended the proposed algorithm into a parallel algorithm that copies data in parallel via the Hadoop MapReduce framework. The proposed MapReduce-paralleled combinatorial algorithm performed well on real-world data obtained from the HapMap dataset; the improvement in computational efficiency was proportional to the number of processors used. PMID:25569088

  18. Identifying human disease genes: advances in molecular genetics and computational approaches.

    Science.gov (United States)

    Bakhtiar, S M; Ali, A; Baig, S M; Barh, D; Miyoshi, A; Azevedo, V

    2014-07-04

    The human genome project is one of the significant achievements that have provided detailed insight into our genetic legacy. During the last two decades, biomedical investigations have gathered a considerable body of evidence by detecting more than 2000 disease genes. Despite the imperative advances in the genetic understanding of various diseases, the pathogenesis of many others remains obscure. With recent advances, the laborious methodologies used to identify DNA variations are replaced by direct sequencing of genomic DNA to detect genetic changes. The ability to perform such studies depends equally on the development of high-throughput and economical genotyping methods. Currently, basically for every disease whose origen is still unknown, genetic approaches are available which could be pedigree-dependent or -independent with the capacity to elucidate fundamental disease mechanisms. Computer algorithms and programs for linkage analysis have formed the foundation for many disease gene detection projects, similarly databases of clinical findings have been widely used to support diagnostic decisions in dysmorphology and general human disease. For every disease type, genome sequence variations, particularly single nucleotide polymorphisms are mapped by comparing the genetic makeup of case and control groups. Methods that predict the effects of polymorphisms on protein stability are useful for the identification of possible disease associations, whereas structural effects can be assessed using methods to predict stability changes in proteins using sequence and/or structural information.

  19. A computational study of influence of helmet padding materials on the human brain under ballistic impacts.

    Science.gov (United States)

    Salimi Jazi, Mehdi; Rezaei, Asghar; Karami, Ghodrat; Azarmi, Fardad; Ziejewski, Mariusz

    2014-01-01

    The results of a computational study of a helmeted human head are presented in this paper. The focus of the work is to study the effects of helmet pad materials on the level of acceleration, inflicted pressure and shear stress in a human brain model subjected to a ballistic impact. Four different closed cell foam materials, made of expanded polystyrene and expanded polypropylene, are examined for the padding material. It is assumed that bullets cannot penetrate the helmet shell. Finite element modelling of the helmet, padding system, head and head components is used for this dynamic nonlinear analysis. Appropriate contacts and conditions are applied between the different components of the head, as well as between the head and the pads, and the pads and the helmet. Based on the results of simulations in this work, it is concluded that the stiffness of the foam has a prominent role in reducing the level of the transferred load to the brain. A pad that is less stiff is more efficient in absorbing the impact energy and reducing the sudden acceleration of the head and consequently lowers the brain injury level. Using the pad with the least stiffness, the influence of the angle of impacts as well as the locations of the ballistic strike is studied.

  20. Computational Fluid Dynamics Ventilation Study for the Human Powered Centrifuge at the International Space Station

    Science.gov (United States)

    Son, Chang H.

    2012-01-01

    The Human Powered Centrifuge (HPC) is a facility that is planned to be installed on board the International Space Station (ISS) to enable crew exercises under the artificial gravity conditions. The HPC equipment includes a "bicycle" for long-term exercises of a crewmember that provides power for rotation of HPC at a speed of 30 rpm. The crewmember exercising vigorously on the centrifuge generates the amount of carbon dioxide of about two times higher than a crewmember in ordinary conditions. The goal of the study is to analyze the airflow and carbon dioxide distribution within Pressurized Multipurpose Module (PMM) cabin when HPC is operating. A full unsteady formulation is used for airflow and CO2 transport CFD-based modeling with the so-called sliding mesh concept when the HPC equipment with the adjacent Bay 4 cabin volume is considered in the rotating reference frame while the rest of the cabin volume is considered in the stationary reference frame. The rotating part of the computational domain includes also a human body model. Localized effects of carbon dioxide dispersion are examined. Strong influence of the rotating HPC equipment on the CO2 distribution detected is discussed.

  1. Resistance to change and resurgence in humans engaging in a computer task.

    Science.gov (United States)

    Kuroda, Toshikazu; Cançado, Carlos R X; Podlesnik, Christopher A

    2016-04-01

    The relation between persistence, as measured by resistance to change, and resurgence has been examined with nonhuman animals but not systematically with humans. The present study examined persistence and resurgence with undergraduate students engaging in a computer task for points exchangeable for money. In Phase 1, a target response was maintained on a multiple variable-interval (VI) 15-s (Rich) VI 60-s (Lean) schedule of reinforcement. In Phase 2, the target response was extinguished while an alternative response was reinforced at equal rates in both schedule components. In Phase 3, the target and the alternative responses were extinguished. In an additional test of persistence (Phase 4), target responding was reestablished as in Phase 1 and then disrupted by access to videos in both schedule components. In Phases 2 and 4, target responding was more persistent in the Rich than in the Lean component. Also, resurgence generally was greater in the Rich than in the Lean component in Phase 3. The present findings with humans extend the generality of those obtained with nonhuman animals showing that higher reinforcement rates produce both greater persistence and resurgence, and suggest that common processes underlie response persistence and relapse. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. REVIEW: Affective and Emotional Aspects of Human-Computer Interaction: Game-Based and Innovative Learning Approaches

    OpenAIRE

    GULUMBAY, Reviewed By Dr. A. Askim

    2006-01-01

    This book was edited by, Maja Pivec, an educator at the University of Applied Sciences, and published by IOS Pres in 2006. The learning process can be seen as an emotional and personal experience that is addictive and leads learners to proactive behavior. New research methods in this field are related to affective and emotional approaches to computer-supported learning and human-computer interactions. Bringing together scientists and research aspects from psychology, educational sciences, cog...

  3. Population of 224 realistic human subject-based computational breast phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Erickson, David W. [Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, Durham, North Carolina 27705 and Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Wells, Jered R., E-mail: jered.wells@duke.edu [Clinical Imaging Physics Group and Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, Durham, North Carolina 27705 and Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Sturgeon, Gregory M. [Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, Durham, North Carolina 27705 (United States); Samei, Ehsan [Department of Radiology and Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, Durham, North Carolina 27705 and Departments of Physics, Electrical and Computer Engineering, and Biomedical Engineering, and Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Dobbins, James T. [Department of Radiology and Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, Durham, North Carolina 27705 and Departments of Physics and Biomedical Engineering and Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Segars, W. Paul [Department of Radiology and Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, Durham, North Carolina 27705 and Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Lo, Joseph Y. [Department of Radiology and Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, Durham, North Carolina 27705 and Departments of Electrical and Computer Engineering and Biomedical Engineering and Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States)

    2016-01-15

    Purpose: To create a database of highly realistic and anatomically variable 3D virtual breast phantoms based on dedicated breast computed tomography (bCT) data. Methods: A tissue classification and segmentation algorithm was used to create realistic and detailed 3D computational breast phantoms based on 230 + dedicated bCT datasets from normal human subjects. The breast volume was identified using a coarse three-class fuzzy C-means segmentation algorithm which accounted for and removed motion blur at the breast periphery. Noise in the bCT data was reduced through application of a postreconstruction 3D bilateral filter. A 3D adipose nonuniformity (bias field) correction was then applied followed by glandular segmentation using a 3D bias-corrected fuzzy C-means algorithm. Multiple tissue classes were defined including skin, adipose, and several fractional glandular densities. Following segmentation, a skin mask was produced which preserved the interdigitated skin, adipose, and glandular boundaries of the skin interior. Finally, surface modeling was used to produce digital phantoms with methods complementary to the XCAT suite of digital human phantoms. Results: After rejecting some datasets due to artifacts, 224 virtual breast phantoms were created which emulate the complex breast parenchyma of actual human subjects. The volume breast density (with skin) ranged from 5.5% to 66.3% with a mean value of 25.3% ± 13.2%. Breast volumes ranged from 25.0 to 2099.6 ml with a mean value of 716.3 ± 386.5 ml. Three breast phantoms were selected for imaging with digital compression (using finite element modeling) and simple ray-tracing, and the results show promise in their potential to produce realistic simulated mammograms. Conclusions: This work provides a new population of 224 breast phantoms based on in vivo bCT data for imaging research. Compared to previous studies based on only a few prototype cases, this dataset provides a rich source of new cases spanning a wide range

  4. Computational dosimetry for grounded and ungrounded human models due to contact current

    Science.gov (United States)

    Chan, Kwok Hung; Hattori, Junya; Laakso, Ilkka; Hirata, Akimasa; Taki, Masao

    2013-08-01

    This study presents the computational dosimetry of contact currents for grounded and ungrounded human models. The uncertainty of the quasi-static (QS) approximation of the in situ electric field induced in a grounded/ungrounded human body due to the contact current is first estimated. Different scenarios of cylindrical and anatomical human body models are considered, and the results are compared with the full-wave analysis. In the QS analysis, the induced field in the grounded cylindrical model is calculated by the QS finite-difference time-domain (QS-FDTD) method, and compared with the analytical solution. Because no analytical solution is available for the grounded/ungrounded anatomical human body model, the results of the QS-FDTD method are then compared with those of the conventional FDTD method. The upper frequency limit for the QS approximation in the contact current dosimetry is found to be 3 MHz, with a relative local error of less than 10%. The error increases above this frequency, which can be attributed to the neglect of the displacement current. The QS or conventional FDTD method is used for the dosimetry of induced electric field and/or specific absorption rate (SAR) for a contact current injected into the index finger of a human body model in the frequency range from 10 Hz to 100 MHz. The in situ electric fields or SAR are compared with the basic restrictions in the international guidelines/standards. The maximum electric field or the 99th percentile value of the electric fields appear not only in the fat and muscle tissues of the finger, but also around the wrist, forearm, and the upper arm. Some discrepancies are observed between the basic restrictions for the electric field and SAR and the reference levels for the contact current, especially in the extremities. These discrepancies are shown by an equation that relates the current density, tissue conductivity, and induced electric field in the finger with a cross-sectional area of 1 cm2.

  5. Human Perception, SBS Sympsoms and Performance of Office Work during Exposure to Air Polluted by Building Materials and Personal Computers

    DEFF Research Database (Denmark)

    Bako-Biro, Zsolt

    The present thesis deals with the impact of polluted air from building materials and personal computers on human perception, Sick Building Syndrome (SBS) symptoms and performance of office work. These effects have been studies in a series of experiments that are described in two different chapter......, each of them with one type of pollution source.......The present thesis deals with the impact of polluted air from building materials and personal computers on human perception, Sick Building Syndrome (SBS) symptoms and performance of office work. These effects have been studies in a series of experiments that are described in two different chapters...

  6. Conversion of IVA Human Computer Model to EVA Use and Evaluation and Comparison of the Result to Existing EVA Models

    Science.gov (United States)

    Hamilton, George S.; Williams, Jermaine C.

    1998-01-01

    This paper describes the methods, rationale, and comparative results of the conversion of an intravehicular (IVA) 3D human computer model (HCM) to extravehicular (EVA) use and compares the converted model to an existing model on another computer platform. The task of accurately modeling a spacesuited human figure in software is daunting: the suit restricts the human's joint range of motion (ROM) and does not have joints collocated with human joints. The modeling of the variety of materials needed to construct a space suit (e. g. metal bearings, rigid fiberglass torso, flexible cloth limbs and rubber coated gloves) attached to a human figure is currently out of reach of desktop computer hardware and software. Therefore a simplified approach was taken. The HCM's body parts were enlarged and the joint ROM was restricted to match the existing spacesuit model. This basic approach could be used to model other restrictive environments in industry such as chemical or fire protective clothing. In summary, the approach provides a moderate fidelity, usable tool which will run on current notebook computers.

  7. Supervisory Control: Problems, Theory and Experiment for Application to Human-Computer Interaction in Undersea Remote Systems

    Science.gov (United States)

    1982-03-01

    very difficult for human workers . For example, if a known mix of products is coming down the assembly line in a known order, the computer can then treat...each product according to its appropriate (different) program without any forgetting or confusion. A human worker would become very confused. In...An operator who formerly found his dignity in being an expert at some marn.a] or visual skill may may become " deskilled ". He may become a supervisor

  8. Towards human-computer synergetic analysis of large-scale biological data.

    Science.gov (United States)

    Singh, Rahul; Yang, Hui; Dalziel, Ben; Asarnow, Daniel; Murad, William; Foote, David; Gormley, Matthew; Stillman, Jonathan; Fisher, Susan

    2013-01-01

    Advances in technology have led to the generation of massive amounts of complex and multifarious biological data in areas ranging from genomics to structural biology. The volume and complexity of such data leads to significant challenges in terms of its analysis, especially when one seeks to generate hypotheses or explore the underlying biological processes. At the state-of-the-art, the application of automated algorithms followed by perusal and analysis of the results by an expert continues to be the predominant paradigm for analyzing biological data. This paradigm works well in many problem domains. However, it also is limiting, since domain experts are forced to apply their instincts and expertise such as contextual reasoning, hypothesis formulation, and exploratory analysis after the algorithm has produced its results. In many areas where the organization and interaction of the biological processes is poorly understood and exploratory analysis is crucial, what is needed is to integrate domain expertise during the data analysis process and use it to drive the analysis itself. In context of the aforementioned background, the results presented in this paper describe advancements along two methodological directions. First, given the context of biological data, we utilize and extend a design approach called experiential computing from multimedia information system design. This paradigm combines information visualization and human-computer interaction with algorithms for exploratory analysis of large-scale and complex data. In the proposed approach, emphasis is laid on: (1) allowing users to directly visualize, interact, experience, and explore the data through interoperable visualization-based and algorithmic components, (2) supporting unified query and presentation spaces to facilitate experimentation and exploration, (3) providing external contextual information by assimilating relevant supplementary data, and (4) encouraging user-directed information

  9. The Dimensions of the Orbital Cavity Based on High-Resolution Computed Tomography of Human Cadavers.

    Science.gov (United States)

    Felding, Ulrik Ascanius; Bloch, Sune Land; Buchwald, Christian von

    2016-06-01

    Blow-out fractures affect the volume and surface area of the orbital cavity. Estimation of these values after the trauma may help in deciding whether or not a patient is a candidate for surgery. Recent studies have provided estimates of orbital volume and area of bone defect, and correlated them with the degree of enophthalmos. However, a large degree of biological variation between individuals may preclude such absolute values from being successful indicators for surgery.Stereological methods have been used to estimate orbital cavity volume in a few studies, but to date these have not been used for surface area. To authors' knowledge, this study is the first to have measured the entire surface area of the orbital cavity.The volume and surface area of the orbital cavity were estimated in computed tomography scans of 11 human cadavers using unbiased stereological sampling techniques. The mean (± SD) total volume and total surface area of the orbital cavities was 24.27 ± 3.88 cm and 32.47 ± 2.96 cm, respectively. There was no significant difference in volume (P = 0.315) or surface area (P = 0.566) between the 2 orbital cavities.The stereological technique proved to be a robust and unbiased method that may be used as a gold standard for comparison with automated computer software. Future imaging studies in blow-out fracture patients may be based on individual and relative calculation involving both herniated volume and fractured surface area in relation to the total volume and surface area of the uninjured orbital cavity.

  10. Development of a Three Dimensional Multiscale Computational Model of the Human Epidermis

    Science.gov (United States)

    Adra, Salem; Sun, Tao; MacNeil, Sheila; Holcombe, Mike; Smallwood, Rod

    2010-01-01

    Transforming Growth Factor (TGF-β1) is a member of the TGF-beta superfamily ligand-receptor network. and plays a crucial role in tissue regeneration. The extensive in vitro and in vivo experimental literature describing its actions nevertheless describe an apparent paradox in that during re-epithelialisation it acts as proliferation inhibitor for keratinocytes. The majority of biological models focus on certain aspects of TGF-β1 behaviour and no one model provides a comprehensive story of this regulatory factor's action. Accordingly our aim was to develop a computational model to act as a complementary approach to improve our understanding of TGF-β1. In our previous study, an agent-based model of keratinocyte colony formation in 2D culture was developed. In this study this model was extensively developed into a three dimensional multiscale model of the human epidermis which is comprised of three interacting and integrated layers: (1) an agent-based model which captures the biological rules governing the cells in the human epidermis at the cellular level and includes the rules for injury induced emergent behaviours, (2) a COmplex PAthway SImulator (COPASI) model which simulates the expression and signalling of TGF-β1 at the sub-cellular level and (3) a mechanical layer embodied by a numerical physical solver responsible for resolving the forces exerted between cells at the multi-cellular level. The integrated model was initially validated by using it to grow a piece of virtual epidermis in 3D and comparing the in virtuo simulations of keratinocyte behaviour and of TGF-β1 signalling with the extensive research literature describing this key regulatory protein. This research reinforces the idea that computational modelling can be an effective additional tool to aid our understanding of complex systems. In the accompanying paper the model is used to explore hypotheses of the functions of TGF-β1 at the cellular and subcellular level on different keratinocyte

  11. Development of a three dimensional multiscale computational model of the human epidermis.

    Directory of Open Access Journals (Sweden)

    Salem Adra

    Full Text Available Transforming Growth Factor (TGF-beta1 is a member of the TGF-beta superfamily ligand-receptor network. and plays a crucial role in tissue regeneration. The extensive in vitro and in vivo experimental literature describing its actions nevertheless describe an apparent paradox in that during re-epithelialisation it acts as proliferation inhibitor for keratinocytes. The majority of biological models focus on certain aspects of TGF-beta1 behaviour and no one model provides a comprehensive story of this regulatory factor's action. Accordingly our aim was to develop a computational model to act as a complementary approach to improve our understanding of TGF-beta1. In our previous study, an agent-based model of keratinocyte colony formation in 2D culture was developed. In this study this model was extensively developed into a three dimensional multiscale model of the human epidermis which is comprised of three interacting and integrated layers: (1 an agent-based model which captures the biological rules governing the cells in the human epidermis at the cellular level and includes the rules for injury induced emergent behaviours, (2 a COmplex PAthway SImulator (COPASI model which simulates the expression and signalling of TGF-beta1 at the sub-cellular level and (3 a mechanical layer embodied by a numerical physical solver responsible for resolving the forces exerted between cells at the multi-cellular level. The integrated model was initially validated by using it to grow a piece of virtual epidermis in 3D and comparing the in virtuo simulations of keratinocyte behaviour and of TGF-beta1 signalling with the extensive research literature describing this key regulatory protein. This research reinforces the idea that computational modelling can be an effective additional tool to aid our understanding of complex systems. In the accompanying paper the model is used to explore hypotheses of the functions of TGF-beta1 at the cellular and subcellular level on

  12. Rethinking Human-Centered Computing: Finding the Customer and Negotiated Interactions at the Airport

    Science.gov (United States)

    Wales, Roxana; O'Neill, John; Mirmalek, Zara

    2003-01-01

    The breakdown in the air transportation system over the past several years raises an interesting question for researchers: How can we help improve the reliability of airline operations? In offering some answers to this question, we make a statement about Huuman-Centered Computing (HCC). First we offer the definition that HCC is a multi-disciplinary research and design methodology focused on supporting humans as they use technology by including cognitive and social systems, computational tools and the physical environment in the analysis of organizational systems. We suggest that a key element in understanding organizational systems is that there are external cognitive and social systems (customers) as well as internal cognitive and social systems (employees) and that they interact dynamically to impact the organization and its work. The design of human-centered intelligent systems must take this outside-inside dynamic into account. In the past, the design of intelligent systems has focused on supporting the work and improvisation requirements of employees but has often assumed that customer requirements are implicitly satisfied by employee requirements. Taking a customer-centric perspective provides a different lens for understanding this outside-inside dynamic, the work of the organization and the requirements of both customers and employees In this article we will: 1) Demonstrate how the use of ethnographic methods revealed the important outside-inside dynamic in an airline, specifically the consequential relationship between external customer requirements and perspectives and internal organizational processes and perspectives as they came together in a changing environment; 2) Describe how taking a customer centric perspective identifies places where the impact of the outside-inside dynamic is most critical and requires technology that can be adaptive; 3) Define and discuss the place of negotiated interactions in airline operations, identifying how these

  13. Sex determination of human mandible using metrical parameters by computed tomography: A prospective radiographic short study

    Directory of Open Access Journals (Sweden)

    Basavaraj N Kallalli

    2016-01-01

    Full Text Available Introduction: Sex determination of unidentified human remains is very important in forensic medicine, medicolegal cases, and forensic anthropology. The mandible is the largest and hardest facial bone that commonly resists postmortem damage and forms an important source of personal identification. Additional studies have demonstrated the applicability of facial reconstruction using three-dimensional computed tomography scan (3D-CT for the purpose of individual identification. Aim: To determine the sex of human mandible using metrical parameters by CT. Materials and Methods: The study included thirty subjects (15 males and 15 females, with age group ranging between 10 and 60 years obtained from the outpatient department of Oral Medicine and Radiology, Narsinhbhai Patel Dental College and Hospital. CT scan was performed on all the subjects, and the data obtained were reconstructed for 3D viewing. After obtaining 3D-CT scan, a total of seven mandibular measurements, i.e., gonial angle (G-angle, ramus length (Ramus-L, minimum ramus breadth and gonion-gnathion length (G-G-L, bigonial breadth, bicondylar breadth (BIC-Br, and coronoid length (CO-L were measured; collected data were analyzed using SPSS statistical analysis program by Student's t-test. Results: The result of the study showed that out of seven parameters, G-angle, Ramus-L, G-G-L, BIC-Br, and CO-L showed a significant statistical difference (P < 0.05, with overall accuracy of 86% for males and 82% for females. Conclusion: Personal identification using mandible by conventional methods has already been proved but with variable efficacies. Advanced imaging modalities can aid in personal identification with much higher accuracy than conventional methods.

  14. Enhancement of Student Learning through the Use of a Hinting Computer E-Learning System and Comparison with Human Teachers

    Science.gov (United States)

    Munoz-Merino, P. J.; Kloos, C. D.; Munoz-Organero, M.

    2011-01-01

    This paper reports the results of an experiment in a Computer Architecture Laboratory course classroom session, in which students were divided into two groups for interaction both with a hinting e-learning system and with human teachers generating hints. The results show that there were high learning gains for both groups, demonstrating the…

  15. The mind-writing pupil : A human-computer interface based on decoding of covert attention through pupillometry

    NARCIS (Netherlands)

    Mathôt, Sebastiaan; Melmi, Jean Baptiste; Van Der Linden, Lotje; Van Der Stigchel, Stefan

    2016-01-01

    We present a new human-computer interface that is based on decoding of attention through pupillometry. Our method builds on the recent finding that covert visual attention affects the pupillary light response: Your pupil constricts when you covertly (without looking at it) attend to a bright, compar

  16. The Mind-Writing Pupil : A Human-Computer Interface Based on Decoding of Covert Attention through Pupillometry

    NARCIS (Netherlands)

    Mathot, Sebastiaan; Melmi, Jean-Baptiste; van der Linden, Lotje; van der Stigchel, Stefan

    2016-01-01

    We present a new human-computer interface that is based on decoding of attention through pupillometry. Our method builds on the recent finding that covert visual attention affects the pupillary light response: Your pupil constricts when you covertly (without looking at it) attend to a bright, compar

  17. 3D Computer Simulations of Pulsatile Human Blood Flows in Vessels and in the Aortic Arch: Investigation of Non-Newtonian Characteristics of Human Blood

    CERN Document Server

    Sultanov, Renat A; Engelbrekt, Brent; Blankenbecler, Richard

    2008-01-01

    Methods of Computational Fluid Dynamics are applied to simulate pulsatile blood flow in human vessels and in the aortic arch. The non-Newtonian behaviour of the human blood is investigated in simple vessels of actual size. A detailed time-dependent mathematical convergence test has been carried out. The realistic pulsatile flow is used in all simulations. Results of computer simulations of the blood flow in vessels of two different geometries are presented. For pressure, strain rate and velocity component distributions we found significant disagreements between our results obtained with realistic non-Newtonian treatment of human blood and widely used method in literature: a simple Newtonian approximation. A significant increase of the strain rate and, as a result, wall sear stress distribution, is found in the region of the aortic arch. We consider this result as theoretical evidence that supports existing clinical observations and those models not using non-Newtonian treatment underestimate the risk of disru...

  18. Using minimal human-computer interfaces for studying the interactive development of social awareness

    Directory of Open Access Journals (Sweden)

    Tom eFroese

    2014-09-01

    Full Text Available According to the enactive approach to cognitive science, perception is essentially a skillful engagement with the world. Learning how to engage via a human-computer interface (HCI can therefore be taken as an instance of developing a new mode of experiencing. Similarly, social perception is theorized to be primarily constituted by skillful engagement between people, which implies that it is possible to investigate the origins and development of social awareness using multi-user HCIs. We analyzed the trial-by-trial objective and subjective changes in sociality that took place during a perceptual crossing experiment in which embodied interaction between pairs of adults was mediated over a minimalist haptic HCI. Since that study required participants to implicitly relearn how to mutually engage so as to perceive each other’s presence, we hypothesized that there would be indications that the initial developmental stages of social awareness were recapitulated. Preliminary results reveal that, despite the lack of explicit feedback about task performance, there was a trend for the clarity of social awareness to increase over time. We discuss the methodological challenges involved in evaluating whether this trend was characterized by distinct developmental stages of objective behavior and subjective experience.

  19. Using minimal human-computer interfaces for studying the interactive development of social awareness.

    Science.gov (United States)

    Froese, Tom; Iizuka, Hiroyuki; Ikegami, Takashi

    2014-01-01

    According to the enactive approach to cognitive science, perception is essentially a skillful engagement with the world. Learning how to engage via a human-computer interface (HCI) can therefore be taken as an instance of developing a new mode of experiencing. Similarly, social perception is theorized to be primarily constituted by skillful engagement between people, which implies that it is possible to investigate the origins and development of social awareness using multi-user HCIs. We analyzed the trial-by-trial objective and subjective changes in sociality that took place during a perceptual crossing experiment in which embodied interaction between pairs of adults was mediated over a minimalist haptic HCI. Since that study required participants to implicitly relearn how to mutually engage so as to perceive each other's presence, we hypothesized that there would be indications that the initial developmental stages of social awareness were recapitulated. Preliminary results reveal that, despite the lack of explicit feedback about task performance, there was a trend for the clarity of social awareness to increase over time. We discuss the methodological challenges involved in evaluating whether this trend was characterized by distinct developmental stages of objective behavior and subjective experience.

  20. A Language/Action Model of Human-Computer Communication in a Psychiatric Hospital

    Science.gov (United States)

    Morelli, R. A.; Goethe, J. W.; Bronzino, J. D.

    1990-01-01

    When a staff physician says to an intern he is supervising “I think you should try medication X,” this statement may differ in meaning from the same string of words spoken between colleagues. In the first case, the statement may have the force of an order (“Do this!”), while in the latter it is merely a suggestion. In either case, the utterance sets up important expectations which constrain the future actions of the parties involved. This paper lays out an analytic framework, based on speech act theory, for representing such “conversations for action” so that they may be used to inform the design of human-computer interaction. The language/action design perspective views the information system -- in this case an expert system that monitors drug treatment -- as one of many “agents” within a broad communicative network. Speech act theory is used to model a typical psychiatric hospital unit as a system of communicative action. In addition to identifying and characterizing the primary communicative agents and speech acts, the model presents a taxonomy of key conversational patterns and shows how they may be applied to the design of a clinical monitoring system. In the final section, the advantages and implications of this design approach are discussed.

  1. Role of cranial computed tomography in human immunodeficiency virus-positive patients with generalised seizures

    Directory of Open Access Journals (Sweden)

    Chris van Zyl

    2016-03-01

    Full Text Available Background: Emergency neuroimaging of human immunodeficiency virus (HIV-positive patients with generalised new onset seizures (NOS and a normal post-ictal neurological examination remains controversial, with the general impression being that emergency imaging is necessary because immunosuppression may blur clinical indicators of acute intracranial pathology. The objectives of our study were to establish whether cranial computed tomography (CT affects the emergency management of HIV-positive patients with generalised NOS and a normal post-ictal neurological examination.Method: We conducted a prospective descriptive observational study. Consecutive HIVpositive patients of 18 years and older, who presented to the Kimberley Hospital Complex’s Emergency Department within 24 hours of their first generalised seizures and who had undergone normal post-ictal neurological examinations, were included. Emergency CT results as well as CD4-count levels were evaluated.Results: A total of 25 HIV-positive patients were included in the study. The results of cranial CT brought about a change in emergency care management in 12% of patients, all of them with CD4 counts below 200 cells/mm3 .Conclusion: We suggest that emergency cranial CT be performed on all HIV-positive patients presenting with generalised NOS and a normal post-ictal neurological examination, particularly if the CD4 count is below 200 cells/mm3.Keywords: HIV; Seizures; CT Brain

  2. The design of an intelligent human-computer interface for the test, control and monitor system

    Science.gov (United States)

    Shoaff, William D.

    1988-01-01

    The graphical intelligence and assistance capabilities of a human-computer interface for the Test, Control, and Monitor System at Kennedy Space Center are explored. The report focuses on how a particular commercial off-the-shelf graphical software package, Data Views, can be used to produce tools that build widgets such as menus, text panels, graphs, icons, windows, and ultimately complete interfaces for monitoring data from an application; controlling an application by providing input data to it; and testing an application by both monitoring and controlling it. A complete set of tools for building interfaces is described in a manual for the TCMS toolkit. Simple tools create primitive widgets such as lines, rectangles and text strings. Intermediate level tools create pictographs from primitive widgets, and connect processes to either text strings or pictographs. Other tools create input objects; Data Views supports output objects directly, thus output objects are not considered. Finally, a set of utilities for executing, monitoring use, editing, and displaying the content of interfaces is included in the toolkit.

  3. The Human Factors and Ergonomics of P300-Based Brain-Computer Interfaces

    Directory of Open Access Journals (Sweden)

    J. Clark Powers

    2015-08-01

    Full Text Available Individuals with severe neuromuscular impairments face many challenges in communication and manipulation of the environment. Brain-computer interfaces (BCIs show promise in presenting real-world applications that can provide such individuals with the means to interact with the world using only brain waves. Although there has been a growing body of research in recent years, much relates only to technology, and not to technology in use—i.e., real-world assistive technology employed by users. This review examined the literature to highlight studies that implicate the human factors and ergonomics (HFE of P300-based BCIs. We assessed 21 studies on three topics to speak directly to improving the HFE of these systems: (1 alternative signal evocation methods within the oddball paradigm; (2 environmental interventions to improve user performance and satisfaction within the constraints of current BCI systems; and (3 measures and methods of measuring user acceptance. We found that HFE is central to the performance of P300-based BCI systems, although researchers do not often make explicit this connection. Incorporation of measures of user acceptance and rigorous usability evaluations, increased engagement of disabled users as test participants, and greater realism in testing will help progress the advancement of P300-based BCI systems in assistive applications.

  4. Computer simulation of leadership, consensus decision making and collective behaviour in humans.

    Science.gov (United States)

    Wu, Song; Sun, Quanbin

    2014-01-01

    The aim of this study is to evaluate the reliability of a crowd simulation model developed by the authors by reproducing Dyer et al.'s experiments (published in Philosophical Transactions in 2009) on human leadership and consensus decision making in a computer-based environment. The theoretical crowd model of the simulation environment is presented, and its results are compared and analysed against Dyer et al.'s original experiments. It is concluded that the simulation results are largely consistent with the experiments, which demonstrates the reliability of the crowd model. Furthermore, the simulation data also reveals several additional new findings, namely: 1) the phenomena of sacrificing accuracy to reach a quicker consensus decision found in ants colonies was also discovered in the simulation; 2) the ability of reaching consensus in groups has a direct impact on the time and accuracy of arriving at the target position; 3) the positions of the informed individuals or leaders in the crowd could have significant impact on the overall crowd movement; and 4) the simulation also confirmed Dyer et al.'s anecdotal evidence of the proportion of the leadership in large crowds and its effect on crowd movement. The potential applications of these findings are highlighted in the final discussion of this paper.

  5. Three-dimensional microstructure of human alveolar trabecular bone: a micro-computed tomography study

    Science.gov (United States)

    2017-01-01

    Purpose The microstructural characteristics of trabecular bone were identified using micro-computed tomography (micro-CT), in order to develop a potential strategy for implant surface improvement to facilitate osseointegration. Methods Alveolar bone specimens from the cadavers of 30 humans were scanned by high-resolution micro-CT and reconstructed. Volumes of interest chosen within the jaw were classified according to Hounsfield units into 4 bone quality categories. Several structural parameters were measured and statistically analyzed. Results Alveolar bone specimens with D1 bone quality had significantly higher values for all structural parameters than the other bone quality categories, except for trabecular thickness (Tb.Th). The percentage of bone volume, trabecular separation (Tb.Sp), and trabecular number (Tb.N) varied significantly among bone quality categories. Tb.Sp varied markedly across the bone quality categories (D1: 0.59±0.22 mm, D4: 1.20±0.48 mm), whereas Tb.Th had similar values (D1: 0.30±0.08 mm, D4: 0.22±0.05 mm). Conclusions Bone quality depended on Tb.Sp and number—that is, endosteal space architecture—rather than bone surface and Tb.Th. Regardless of bone quality, Tb.Th showed little variation. These factors should be taken into account when developing individualized implant surface topographies. PMID:28261521

  6. Design of a compact low-power human-computer interaction equipment for hand motion

    Science.gov (United States)

    Wu, Xianwei; Jin, Wenguang

    2017-01-01

    Human-Computer Interaction (HCI) raises demand of convenience, endurance, responsiveness and naturalness. This paper describes a design of a compact wearable low-power HCI equipment applied to gesture recognition. System combines multi-mode sense signals: the vision sense signal and the motion sense signal, and the equipment is equipped with the depth camera and the motion sensor. The dimension (40 mm × 30 mm) and structure is compact and portable after tight integration. System is built on a module layered framework, which contributes to real-time collection (60 fps), process and transmission via synchronous confusion with asynchronous concurrent collection and wireless Blue 4.0 transmission. To minimize equipment's energy consumption, system makes use of low-power components, managing peripheral state dynamically, switching into idle mode intelligently, pulse-width modulation (PWM) of the NIR LEDs of the depth camera and algorithm optimization by the motion sensor. To test this equipment's function and performance, a gesture recognition algorithm is applied to system. As the result presents, general energy consumption could be as low as 0.5 W.

  7. The right to a human in the loop: Political constructions of computer automation and personhood.

    Science.gov (United States)

    Jones, Meg Leta

    2017-04-01

    Contributing to recent scholarship on the governance of algorithms, this article explores the role of dignity in data protection law addressing automated decision-making. Delving into the historical roots of contemporary disputes between information societies, notably European Union and Council of Europe countries and the United States, reveals that the regulation of algorithms has a rich, culturally entrenched, politically relevant backstory. The article compares the making of law concerning data protection and privacy, focusing on the role automation has played in the two regimes. By situating diverse policy treatments within the cultural contexts from which they emerged, the article uncovers and examines two different legal constructions of automated data processing, one that has furnished a right to a human in the loop that is intended to protect the dignity of the data subject and the other that promotes and fosters full automation to establish and celebrate the fairness and objectivity of computers. The existence of a subtle right across European countries and its absence in the US will no doubt continue to be relevant to international technology policy as smart technologies are introduced in more and more areas of society.

  8. Volumetric characterization of human patellar cartilage matrix on phase contrast x-ray computed tomography

    Science.gov (United States)

    Abidin, Anas Z.; Nagarajan, Mahesh B.; Checefsky, Walter A.; Coan, Paola; Diemoz, Paul C.; Hobbs, Susan K.; Huber, Markus B.; Wismüller, Axel

    2015-03-01

    Phase contrast X-ray computed tomography (PCI-CT) has recently emerged as a novel imaging technique that allows visualization of cartilage soft tissue, subsequent examination of chondrocyte patterns, and their correlation to osteoarthritis. Previous studies have shown that 2D texture features are effective at distinguishing between healthy and osteoarthritic regions of interest annotated in the radial zone of cartilage matrix on PCI-CT images. In this study, we further extend the texture analysis to 3D and investigate the ability of volumetric texture features at characterizing chondrocyte patterns in the cartilage matrix for purposes of classification. Here, we extracted volumetric texture features derived from Minkowski Functionals and gray-level co-occurrence matrices (GLCM) from 496 volumes of interest (VOI) annotated on PCI-CT images of human patellar cartilage specimens. The extracted features were then used in a machine-learning task involving support vector regression to classify ROIs as healthy or osteoarthritic. Classification performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC). The best classification performance was observed with GLCM features correlation (AUC = 0.83 +/- 0.06) and homogeneity (AUC = 0.82 +/- 0.07), which significantly outperformed all Minkowski Functionals (p GLCM-derived statistical features can distinguish between healthy and osteoarthritic tissue with high accuracy.

  9. Computational Modelling of Gas-Particle Flows with Different Particle Morphology in the Human Nasal Cavity

    Directory of Open Access Journals (Sweden)

    Kiao Inthavong

    2009-01-01

    Full Text Available This paper summarises current studies related to numerical gas-particle flows in the human nasal cavity. Of interest are the numerical modelling requirements to consider the effects of particle morphology for a variety of particle shapes and sizes such as very small particles sizes (nanoparticles, elongated shapes (asbestos fibres, rough shapes (pollen, and porous light density particles (drug particles are considered. It was shown that important physical phenomena needed to be addressed for different particle characteristics. This included the Brownian diffusion for submicron particles. Computational results for the nasal capture efficiency for nano-particles and various breathing rates in the laminar regime were found to correlate well with the ratio of particle diffusivity to the breathing rate. For micron particles, particle inertia is the most significant property and the need to use sufficient drag laws is important. Drag correlations for fibrous and rough surfaced particles were investigated to enable particle tracking. Based on the simulated results, semi-empirical correlations for particle deposition were fitted in terms of Peclet number and inertial parameter for nanoparticles and micron particles respectively.

  10. Redesign of a computerized clinical reminder for colorectal cancer screening: a human-computer interaction evaluation

    Directory of Open Access Journals (Sweden)

    Saleem Jason J

    2011-11-01

    Full Text Available Abstract Background Based on barriers to the use of computerized clinical decision support (CDS learned in an earlier field study, we prototyped design enhancements to the Veterans Health Administration's (VHA's colorectal cancer (CRC screening clinical reminder to compare against the VHA's current CRC reminder. Methods In a controlled simulation experiment, 12 primary care providers (PCPs used prototypes of the current and redesigned CRC screening reminder in a within-subject comparison. Quantitative measurements were based on a usability survey, workload assessment instrument, and workflow integration survey. We also collected qualitative data on both designs. Results Design enhancements to the VHA's existing CRC screening clinical reminder positively impacted aspects of usability and workflow integration but not workload. The qualitative analysis revealed broad support across participants for the design enhancements with specific suggestions for improving the reminder further. Conclusions This study demonstrates the value of a human-computer interaction evaluation in informing the redesign of information tools to foster uptake, integration into workflow, and use in clinical practice.

  11. Computational study of Wolff's law with trabecular architecture in the human proximal femur using topology optimization.

    Science.gov (United States)

    Jang, In Gwun; Kim, Il Yong

    2008-08-07

    In the field of bone adaptation, it is believed that the morphology of bone is affected by its mechanical loads, and bone has self-optimizing capability; this phenomenon is well known as Wolff's law of the transformation of bone. In this paper, we simulated trabecular bone adaptation in the human proximal femur using topology optimization and quantitatively investigated the validity of Wolff's law. Topology optimization iteratively distributes material in a design domain producing optimal layout or configuration, and it has been widely and successfully used in many engineering fields. We used a two-dimensional micro-FE model with 50 microm pixel resolution to represent the full trabecular architecture in the proximal femur, and performed topology optimization to study the trabecular morphological changes under three loading cases in daily activities. The simulation results were compared to the actual trabecular architecture in previous experimental studies. We discovered that there are strong similarities in trabecular patterns between the computational results and observed data in the literature. The results showed that the strain energy distribution of the trabecular architecture became more uniform during the optimization; from the viewpoint of structural topology optimization, this bone morphology may be considered as an optimal structure. We also showed that the non-orthogonal intersections were constructed to support daily activity loadings in the sense of optimization, as opposed to Wolff's drawing.

  12. Using minimal human-computer interfaces for studying the interactive development of social awareness

    Science.gov (United States)

    Froese, Tom; Iizuka, Hiroyuki; Ikegami, Takashi

    2014-01-01

    According to the enactive approach to cognitive science, perception is essentially a skillful engagement with the world. Learning how to engage via a human-computer interface (HCI) can therefore be taken as an instance of developing a new mode of experiencing. Similarly, social perception is theorized to be primarily constituted by skillful engagement between people, which implies that it is possible to investigate the origins and development of social awareness using multi-user HCIs. We analyzed the trial-by-trial objective and subjective changes in sociality that took place during a perceptual crossing experiment in which embodied interaction between pairs of adults was mediated over a minimalist haptic HCI. Since that study required participants to implicitly relearn how to mutually engage so as to perceive each other's presence, we hypothesized that there would be indications that the initial developmental stages of social awareness were recapitulated. Preliminary results reveal that, despite the lack of explicit feedback about task performance, there was a trend for the clarity of social awareness to increase over time. We discuss the methodological challenges involved in evaluating whether this trend was characterized by distinct developmental stages of objective behavior and subjective experience. PMID:25309490

  13. User participation in the development of the human/computer interface for control centers

    Science.gov (United States)

    Broome, Richard; Quick-Campbell, Marlene; Creegan, James; Dutilly, Robert

    1996-01-01

    Technological advances coupled with the requirements to reduce operations staffing costs led to the demand for efficient, technologically-sophisticated mission operations control centers. The control center under development for the earth observing system (EOS) is considered. The users are involved in the development of a control center in order to ensure that it is cost-efficient and flexible. A number of measures were implemented in the EOS program in order to encourage user involvement in the area of human-computer interface development. The following user participation exercises carried out in relation to the system analysis and design are described: the shadow participation of the programmers during a day of operations; the flight operations personnel interviews; and the analysis of the flight operations team tasks. The user participation in the interface prototype development, the prototype evaluation, and the system implementation are reported on. The involvement of the users early in the development process enables the requirements to be better understood and the cost to be reduced.

  14. Computer simulation of leadership, consensus decision making and collective behaviour in humans.

    Directory of Open Access Journals (Sweden)

    Song Wu

    Full Text Available The aim of this study is to evaluate the reliability of a crowd simulation model developed by the authors by reproducing Dyer et al.'s experiments (published in Philosophical Transactions in 2009 on human leadership and consensus decision making in a computer-based environment. The theoretical crowd model of the simulation environment is presented, and its results are compared and analysed against Dyer et al.'s original experiments. It is concluded that the simulation results are largely consistent with the experiments, which demonstrates the reliability of the crowd model. Furthermore, the simulation data also reveals several additional new findings, namely: 1 the phenomena of sacrificing accuracy to reach a quicker consensus decision found in ants colonies was also discovered in the simulation; 2 the ability of reaching consensus in groups has a direct impact on the time and accuracy of arriving at the target position; 3 the positions of the informed individuals or leaders in the crowd could have significant impact on the overall crowd movement; and 4 the simulation also confirmed Dyer et al.'s anecdotal evidence of the proportion of the leadership in large crowds and its effect on crowd movement. The potential applications of these findings are highlighted in the final discussion of this paper.

  15. Study of atrial arrhythmias in a computer model based on magnetic resonance images of human atria

    Science.gov (United States)

    Virag, N.; Jacquemet, V.; Henriquez, C. S.; Zozor, S.; Blanc, O.; Vesin, J.-M.; Pruvot, E.; Kappenberger, L.

    2002-09-01

    The maintenance of multiple wavelets appears to be a consistent feature of atrial fibrillation (AF). In this paper, we investigate possible mechanisms of initiation and perpetuation of multiple wavelets in a computer model of AF. We developed a simplified model of human atria that uses an ionic-based membrane model and whose geometry is derived from a segmented magnetic resonance imaging data set. The three-dimensional surface has a realistic size and includes obstacles corresponding to the location of major vessels and valves, but it does not take into account anisotropy. The main advantage of this approach is its ability to simulate long duration arrhythmias (up to 40 s). Clinically relevant initiation protocols, such as single-site burst pacing, were used. The dynamics of simulated AF were investigated in models with different action potential durations and restitution properties, controlled by the conductance of the slow inward current in a modified Luo-Rudy model. The simulation studies show that (1) single-site burst pacing protocol can be used to induce wave breaks even in tissue with uniform membrane properties, (2) the restitution-based wave breaks in an atrial model with realistic size and conduction velocities are transient, and (3) a significant reduction in action potential duration (even with apparently flat restitution) increases the duration of AF.

  16. Elucidating Mechanisms of Molecular Recognition Between Human Argonaute and miRNA Using Computational Approaches

    KAUST Repository

    Jiang, Hanlun

    2016-12-06

    MicroRNA (miRNA) and Argonaute (AGO) protein together form the RNA-induced silencing complex (RISC) that plays an essential role in the regulation of gene expression. Elucidating the underlying mechanism of AGO-miRNA recognition is thus of great importance not only for the in-depth understanding of miRNA function but also for inspiring new drugs targeting miRNAs. In this chapter we introduce a combined computational approach of molecular dynamics (MD) simulations, Markov state models (MSMs), and protein-RNA docking to investigate AGO-miRNA recognition. Constructed from MD simulations, MSMs can elucidate the conformational dynamics of AGO at biologically relevant timescales. Protein-RNA docking can then efficiently identify the AGO conformations that are geometrically accessible to miRNA. Using our recent work on human AGO2 as an example, we explain the rationale and the workflow of our method in detail. This combined approach holds great promise to complement experiments in unraveling the mechanisms of molecular recognition between large, flexible, and complex biomolecules.
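    The MSM step of such a pipeline reduces to a few lines once the trajectory has been discretized: count transitions at a lag time, row-normalize into a transition matrix, and read relaxation timescales off the eigenvalues. A generic sketch under these simplifying assumptions (not the authors' exact pipeline, which would start from clustered MD conformations):

        import numpy as np

        def msm_timescales(dtraj, n_states, lag, dt):
            """Maximum-likelihood MSM from a discretized trajectory.
            dtraj: integer state sequence; lag: lag time in steps; dt: time per step."""
            C = np.zeros((n_states, n_states))
            for a, b in zip(dtraj[:-lag], dtraj[lag:]):
                C[a, b] += 1.0                        # transition counts at the lag
            T = C / C.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
            ev = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
            return -lag * dt / np.log(ev[1:])         # implied timescales (skip lambda = 1)

        # toy metastable trajectory: two states, 1% switching probability per step
        rng = np.random.default_rng(0)
        s, dtraj = 0, []
        for _ in range(20000):
            dtraj.append(s)
            if rng.random() < 0.01:
                s = 1 - s
        print(msm_timescales(np.array(dtraj), 2, lag=10, dt=0.1))  # ~5 time units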

  17. Product wastage from modern human growth hormone administration devices: a laboratory and computer simulation analysis

    Directory of Open Access Journals (Sweden)

    Pollock RF

    2013-08-01

    Omnitrope, corresponding to 7–8 additional pens per patient annually. Conclusion: Overall, Norditropin pens resulted in significantly less wastage than the Omnitrope Pen-5. The study suggests that GH devices of the same nominal volume exhibit differences that may affect the frequency of GH prescription refills required to remain adherent to therapy. Keywords: human growth hormone, administration, dosage, injections, subcutaneous, computer models

  18. Radiotherapy infrastructure and human resources in Switzerland. Present status and projected computations for 2020

    Energy Technology Data Exchange (ETDEWEB)

    Datta, Niloy Ranjan; Khan, Shaka; Marder, Dietmar [KSA-KSB, Kantonsspital Aarau, RadioOnkologieZentrum, Aarau (Switzerland); Zwahlen, Daniel [Kantonsspital Graubuenden, Department of Radiotherapy, Chur (Switzerland); Bodis, Stephan [KSA-KSB, Kantonsspital Aarau, RadioOnkologieZentrum, Aarau (Switzerland); University Hospital Zurich, Department of Radiation Oncology, Zurich (Switzerland)

    2016-09-15

    The purpose of this study was to evaluate the present status of radiotherapy infrastructure and human resources in Switzerland and compute projections for 2020. The European Society of Therapeutic Radiation Oncology "Quantification of Radiation Therapy Infrastructure and Staffing" guidelines (ESTRO-QUARTS) and those of the International Atomic Energy Agency (IAEA) were applied to estimate the requirements for teleradiotherapy (TRT) units, radiation oncologists (RO), medical physicists (MP) and radiotherapy technologists (RTT). The databases used for computation of the present gap and additional requirements are (a) Global Cancer Incidence, Mortality and Prevalence (GLOBOCAN) for cancer incidence, (b) the Directory of Radiotherapy Centres (DIRAC) of the IAEA for existing TRT units, (c) human resources from the recent ESTRO "Health Economics in Radiation Oncology" (HERO) survey and (d) radiotherapy utilization (RTU) rates for each tumour site, published by the Ingham Institute for Applied Medical Research (IIAMR). In 2015, 30,999 of 45,903 cancer patients would have required radiotherapy. By 2020, this will have increased to 34,041 of 50,427 cancer patients. Switzerland presently has an adequate number of TRTs, but a deficit of 57 ROs, 14 MPs and 36 RTTs. By 2020, an additional 7 TRTs, 72 ROs, 22 MPs and 66 RTTs will be required. In addition, a realistic dynamic model for calculating staff requirements due to anticipated changes in future radiotherapy practice is proposed; this model can be tailored to any individual radiotherapy centre. A 9.8% increase in radiotherapy requirements is expected for cancer patients over the next 5 years. The present study should assist stakeholders and health planners in designing an appropriate strategy for meeting Switzerland's future radiotherapy needs. (orig.)
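    The gap arithmetic behind these figures follows directly from the quantities named: patients requiring radiotherapy = incidence x site-weighted RTU rate, and staff and equipment needs scale with treated patients through benchmark ratios. A sketch with placeholder inputs (the QUARTS-style benchmark ratios and the `current` resource counts below are illustrative assumptions, so the output will not reproduce the study's exact deficits):

        import math

        def staffing_gap(patients_needing_rt, current, patients_per=None):
            """current: available units/staff per role; patients_per: benchmark
            ratios (illustrative QUARTS-style values, not the study's figures)."""
            patients_per = patients_per or {"TRT": 450, "RO": 200, "MP": 450, "RTT": 110}
            return {role: math.ceil(patients_needing_rt / ratio) - current.get(role, 0)
                    for role, ratio in patients_per.items()}   # positive = deficit

        # 2015 figure from the abstract: 30,999 patients needed radiotherapy;
        # the current counts here are placeholders for demonstration only
        print(staffing_gap(30999, {"TRT": 70, "RO": 100, "MP": 55, "RTT": 245}))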

  19. Contextual Computing

    CERN Document Server

    Porzel, Robert

    2011-01-01

    This book uses the latest in knowledge representation and human-computer interaction to address the problem of contextual computing in artificial intelligence. It uses high-level context to solve some challenging problems in natural language understanding.

  20. Three dimensional imaging of paraffin embedded human lung tissue samples by micro-computed tomography.

    Directory of Open Access Journals (Sweden)

    Anna E Scott

    Full Text Available Understanding the three-dimensional (3-D) micro-architecture of lung tissue can provide insights into the pathology of lung disease. Micro computed tomography (µCT) has previously been used to elucidate lung 3D histology and morphometry in fixed samples that have been stained with contrast agents or air inflated and dried. However, non-destructive microstructural 3D imaging of formalin-fixed paraffin embedded (FFPE) tissues would facilitate retrospective analysis of extensive archives of FFPE lung samples with linked clinical data. FFPE human lung tissue samples (n = 4) were scanned using a Nikon metrology µCT scanner. Semi-automatic techniques were used to segment the 3D structure of airways and blood vessels. Airspace size (mean linear intercept, Lm) was measured on µCT images and on matched histological sections from the same FFPE samples imaged by light microscopy to validate µCT imaging. The µCT imaging protocol provided contrast between tissue and paraffin in FFPE samples (15 mm × 7 mm). Resolution (voxel size 6.7 µm) in the reconstructed images was sufficient for semi-automatic image segmentation of airways and blood vessels as well as quantitative airspace analysis. The scans were also used to scout for regions of interest, enabling time-efficient preparation of conventional histological sections. The Lm measurements from µCT images were not significantly different from those from matched histological sections. We demonstrated how non-destructive imaging of routinely prepared FFPE samples by laboratory µCT can be used to visualize and assess the 3D morphology of the lung, including by morphometric analysis.
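    Mean linear intercept is a simple stereological measure: cast test lines across the binarized slice and divide the total line length falling in airspace by the number of airspace chords. A minimal sketch on a 2-D binary µCT slice (array conventions assumed):

        import numpy as np

        def mean_linear_intercept(airspace, pixel_size_um):
            """Lm from a 2-D boolean array (True = airspace, False = tissue),
            using horizontal test lines: total airspace length divided by the
            number of chords (two airspace/tissue transitions per chord)."""
            transitions = np.count_nonzero(np.diff(airspace.astype(np.int8), axis=1))
            airspace_len = airspace.sum() * pixel_size_um
            return airspace_len / max(transitions / 2, 1)

        # toy image: 10-px airspace / 5-px tissue stripes at 6.7 um voxels
        img = np.tile(np.array([True] * 10 + [False] * 5), (4, 20))
        print(mean_linear_intercept(img, pixel_size_um=6.7))  # ~67 um chords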

  1. Spectral and computational features of the binding between riparins and human serum albumin.

    Science.gov (United States)

    Camargo, Cintia Ramos; Caruso, Ícaro Putinhon; Gutierrez, Stanley Juan Chavez; Fossey, Marcelo Andres; Filho, José Maria Barbosa; Cornélio, Marinônio Lopes

    2017-09-08

    The green Brazilian bay leaf, a spice much prized in local cuisine (Aniba riparia, Lauraceae), contains chemical compounds presenting benzoyl-derivatives named riparins, which have anti-inflammatory, antimicrobial and anxiolytic properties. However, it is unclear what kind of interaction riparins perform with any molecular target. As a profitable target, human serum albumin (HSA) is one of the principal extracellular proteins, with an exceptional capacity to interact with several molecules, and it also plays a crucial role in the transport, distribution, and metabolism of a wide variety of endogenous and exogenous ligands. To outline the HSA-riparin interaction mechanism, spectroscopy and computational methods were synergistically applied. An evaluation through fluorescence spectroscopy showed that the emission, attributed to Trp 214, at 346 nm decreased with titrations of riparins. A static quenching mechanism was observed in the binding of riparins to HSA. Fluorescence experiments performed at 298, 308 and 318 K made it possible to conduct thermodynamic analysis indicating a spontaneous reaction in the complex formation (ΔG < 0). Hill's approach was utilized to distinguish the index of affinity and the binding constant. A correspondence between the molecular structures of riparins, due to the presence of the hydroxyl group in the B-ring, and the thermodynamic parameters and index of affinity was observed. Riparin III forms an intramolecular hydrogen bond, which affects the Hill coefficient and the binding constant. Therefore, the presence of hydroxyl groups is capable of modulating the interaction between riparins and HSA. Site marker competitive experiments indicated Site I as the most suitable binding site, and molecular modeling tools reinforced the experimental results, detailing the participation of residues.
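    The quenching analysis described here conventionally starts from a Stern-Volmer plot, F0/F = 1 + K_SV[Q]: a linear fit of F0/F against quencher concentration gives the quenching constant, and a K_SV that falls with rising temperature is the usual signature of static quenching. A sketch with hypothetical titration numbers (not the paper's data):

        import numpy as np

        # hypothetical titration: riparin concentration (M) vs HSA emission at 346 nm
        q = np.array([0.0, 2e-6, 4e-6, 6e-6, 8e-6])
        f = np.array([100.0, 91.0, 83.5, 77.2, 71.8])

        ksv, intercept = np.polyfit(q, f[0] / f, 1)   # slope of the Stern-Volmer plot
        print(f"K_SV = {ksv:.3e} M^-1")
        # repeating the fit at 298, 308 and 318 K and checking that K_SV decreases
        # with temperature distinguishes static from dynamic quenching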

  2. VX hydrolysis by human serum paraoxonase 1: a comparison of experimental and computational results.

    Directory of Open Access Journals (Sweden)

    Matthew W Peterson

    Full Text Available Human Serum paraoxonase 1 (HuPON1) is an enzyme that has been shown to hydrolyze a variety of chemicals including the nerve agent VX. While wildtype HuPON1 does not exhibit sufficient activity against VX to be used as an in vivo countermeasure, it has been suggested that increasing HuPON1's organophosphorous hydrolase activity by one or two orders of magnitude would make the enzyme suitable for this purpose. The binding interaction between HuPON1 and VX has recently been modeled, but the mechanism for VX hydrolysis is still unknown. In this study, we created a transition state model for VX hydrolysis (VX(ts)) in water using quantum mechanical/molecular mechanical simulations, and docked the transition state model to 22 experimentally characterized HuPON1 variants using AutoDock Vina. The HuPON1-VX(ts) complexes were grouped by reaction mechanism using a novel clustering procedure. The average Vina interaction energies for different clusters were compared to the experimentally determined activities of HuPON1 variants to determine which computational procedures best predict how well HuPON1 variants will hydrolyze VX. The analysis showed that only conformations which have the attacking hydroxyl group of VX(ts) coordinated by the sidechain oxygen of D269 have a significant correlation with experimental results. The results from this study can be used for further characterization of how HuPON1 hydrolyzes VX and design of HuPON1 variants with increased activity against VX.

  3. VX hydrolysis by human serum paraoxonase 1: a comparison of experimental and computational results.

    Science.gov (United States)

    Peterson, Matthew W; Fairchild, Steven Z; Otto, Tamara C; Mohtashemi, Mojdeh; Cerasoli, Douglas M; Chang, Wenling E

    2011-01-01

    Human Serum paraoxonase 1 (HuPON1) is an enzyme that has been shown to hydrolyze a variety of chemicals including the nerve agent VX. While wildtype HuPON1 does not exhibit sufficient activity against VX to be used as an in vivo countermeasure, it has been suggested that increasing HuPON1's organophosphorous hydrolase activity by one or two orders of magnitude would make the enzyme suitable for this purpose. The binding interaction between HuPON1 and VX has recently been modeled, but the mechanism for VX hydrolysis is still unknown. In this study, we created a transition state model for VX hydrolysis (VX(ts)) in water using quantum mechanical/molecular mechanical simulations, and docked the transition state model to 22 experimentally characterized HuPON1 variants using AutoDock Vina. The HuPON1-VX(ts) complexes were grouped by reaction mechanism using a novel clustering procedure. The average Vina interaction energies for different clusters were compared to the experimentally determined activities of HuPON1 variants to determine which computational procedures best predict how well HuPON1 variants will hydrolyze VX. The analysis showed that only conformations which have the attacking hydroxyl group of VX(ts) coordinated by the sidechain oxygen of D269 have a significant correlation with experimental results. The results from this study can be used for further characterization of how HuPON1 hydrolyzes VX and design of HuPON1 variants with increased activity against VX.

  4. Computational characterization of how the VX nerve agent binds human serum paraoxonase 1.

    Science.gov (United States)

    Fairchild, Steven Z; Peterson, Matthew W; Hamza, Adel; Zhan, Chang-Guo; Cerasoli, Douglas M; Chang, Wenling E

    2011-01-01

    Human serum paraoxonase 1 (HuPON1) is an enzyme that can hydrolyze various chemical warfare nerve agents including VX. A previous study has suggested that increasing HuPON1's VX hydrolysis activity one to two orders of magnitude would make the enzyme an effective countermeasure for in vivo use against VX. This study helps facilitate further engineering of HuPON1 for enhanced VX-hydrolase activity by computationally characterizing HuPON1's tertiary structure and how HuPON1 binds VX. HuPON1's structure is first predicted through two homology modeling procedures. Docking is then performed using four separate methods, and the stability of each bound conformation is analyzed through molecular dynamics and solvated interaction energy calculations. The results show that VX's lone oxygen atom has a strong preference for forming a direct electrostatic interaction with HuPON1's active site calcium ion. Various HuPON1 residues are also detected that are in close proximity to VX and are therefore potential targets for future mutagenesis studies. These include E53, H115, N168, F222, N224, L240, D269, I291, F292, and V346. Additionally, D183 was found to have a predicted pKa near physiological pH. Given D183's location in HuPON1's active site, this residue could potentially act as a proton donor or acceptor during hydrolysis. The results from the binding simulations also indicate that steered molecular dynamics can potentially be used to obtain accurate binding predictions even when starting with a closed conformation of a protein's binding or active site.
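    The analysis step shared by these VX studies — filtering docked poses by a geometric criterion for the reaction mechanism, then asking whether mean interaction energies track measured activity — can be sketched as follows (the pose representation, 3.5 Å cutoff and toy numbers are illustrative assumptions):

        import numpy as np
        from scipy.stats import pearsonr

        def correlate_cluster_energy(poses, activities, cutoff=3.5):
            """poses: per-variant lists of (vina_energy, d) tuples, d being the
            distance (Angstrom) from the attacking hydroxyl of VX(ts) to the D269
            sidechain oxygen. Average the energies of mechanism-consistent poses
            per variant and correlate with experimental hydrolysis activities."""
            means = np.array([np.mean([e for e, d in vp if d < cutoff])
                              if any(d < cutoff for _, d in vp) else np.nan
                              for vp in poses])
            ok = ~np.isnan(means)
            return pearsonr(means[ok], np.asarray(activities)[ok])

        # toy data: three variants, two of them with multiple docked poses
        poses = [[(-8.2, 3.1), (-7.9, 5.0)], [(-6.5, 3.3)], [(-9.1, 2.9), (-8.8, 3.4)]]
        print(correlate_cluster_energy(poses, [1.0, 0.2, 2.5]))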

  5. Validation of a Computational Platform for the Analysis of the Physiologic Mechanisms of a Human Experimental Model of Hemorrhage

    Science.gov (United States)

    2009-12-01

    The physiologic functioning of the virtual subject is a special adaptation of an established computer model of human physiology (the Guyton/Coleman/Summers model).

  6. Digital scene simulation: the synergy of computer technology and human creativity

    Energy Technology Data Exchange (ETDEWEB)

    Demos, G.; Brown, M.D.; Weinberg, R.A.

    1984-01-01

    The computer graphics capabilities at Digital Productions, based on a Cray X-MP supercomputer for computer-created imagery, are presented. The authors describe the hardware, software for image rendering and simulation and the use of these by artists. The computing requirements for high-resolution graphics are given. Particular emphasis is placed on the system's ease of use and speed compared to traditional animation techniques. The direction of current research is summarized. 38 references.

  7. Digital scene simulation(sm): The synergy of computer technology and human creativity

    Energy Technology Data Exchange (ETDEWEB)

    Demos, G.; Brown, M.D.; Weinberg, R.A.

    1984-01-01

    Digital Scene Simulation is Digital Productions' philosophy for creating visual excellence in computer-generated imagery and simulation. The approach it advocates requires the use of powerful hardware, sophisticated software, and top creative talent. With a CRAY supercomputer at the heart of its computer network and its own proprietary image rendering and simulation software, Digital Productions is revolutionizing state-of-the-art computer graphics. At the forefront of computer graphics technology, Digital Productions is redefining traditional methods of visual communications and creating new forms of self-expression, instruction, and entertainment.

  8. The Electronic Mirror: Human-Computer Interaction and Change in Self-Appraisals.

    Science.gov (United States)

    De Laere, Kevin H.; Lundgren, David C.; Howe, Steven R.

    1998-01-01

    Compares humanlike versus machinelike interactional styles of computer interfaces, testing hypotheses that evaluative feedback conveyed through a humanlike interface will have greater impact on individuals' self-appraisals. Reflected appraisals were more influenced by computer feedback than were self-appraisals. Humanlike and machinelike interface…

  9. University Students and Ethics of Computer Technology Usage: Human Resource Development

    Science.gov (United States)

    Iyadat, Waleed; Iyadat, Yousef; Ashour, Rateb; Khasawneh, Samer

    2012-01-01

    The primary purpose of this study was to determine the level of students' awareness about computer technology ethics at the Hashemite University in Jordan. A total of 180 university students participated in the study by completing the questionnaire designed by the researchers, named the Computer Technology Ethics Questionnaire (CTEQ). Results…

  10. Using Tablet PCs in Classroom for Teaching Human-Computer Interaction: An Experience in High Education

    Science.gov (United States)

    da Silva, André Constantino; Marques, Daniela; de Oliveira, Rodolfo Francisco; Noda, Edgar

    2014-01-01

    The use of computers in the teaching and learning process has been investigated by many researchers and, nowadays, given the available diversity of computing devices, tablets are becoming popular in the classroom too. So what are the advantages and disadvantages of using tablets in the classroom? How can we shape the teaching and learning activities to get the best of…

  11. University Students and Ethics of Computer Technology Usage: Human Resource Development

    Science.gov (United States)

    Iyadat, Waleed; Iyadat, Yousef; Ashour, Rateb; Khasawneh, Samer

    2012-01-01

    The primary purpose of this study was to determine the level of students' awareness about computer technology ethics at the Hashemite University in Jordan. A total of 180 university students participated in the study by completing the questionnaire designed by the researchers, named the Computer Technology Ethics Questionnaire (CTEQ). Results…

  12. A user-friendly wearable single-channel EOG-based human-computer interface for cursor control

    OpenAIRE

    2015-01-01

    This paper presents a novel wearable single-channel electrooculography (EOG) based human-computer interface (HCI) with a simple system design and robust performance. In the proposed system, EOG signals for control are generated from double eye blinks, collected by a commercial wearable device (the NeuroSky MindWave headset), and then converted into a sequence of commands that can control cursor navigations and actions. The EOG-based cursor control system was tested on 8 subjects in indoor or ...
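    The control scheme described reduces to blink-peak detection on the EOG channel plus a timing rule for the double blink; a minimal sketch (the amplitude threshold, refractory distance and pairing window are illustrative, device-dependent values):

        import numpy as np
        from scipy.signal import find_peaks

        def detect_double_blinks(eog, fs, height=150.0, max_gap_s=0.5):
            """Return sample indices where two blink peaks fall within max_gap_s.
            eog: single-channel signal (uV); height: assumed blink amplitude
            threshold; peaks closer than 100 ms are treated as one blink."""
            peaks, _ = find_peaks(eog, height=height, distance=int(0.1 * fs))
            gaps = np.diff(peaks) / fs
            return [int(peaks[i + 1]) for i, g in enumerate(gaps) if g <= max_gap_s]

    Each detected double blink would then be mapped to the next cursor command in the sequence.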

  13. Study on Effect of Gd (III) Speciation on Ca (II) Speciation in Human Blood Plasma by Computer Simulation

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Ca (II) speciation and the effect of Gd (III) speciation on Ca (II) speciation in human blood plasma were studied by computer simulation. [CaHCO3]+ is the predominant compound species of Ca (II). Gd (III) can compete with Ca (II) for biological molecules. The presence of Gd (III) results in an increase in the concentration of free Ca (II) and a decrease in the concentration of Ca (II) compounds.

  14. A Conceptual Architecture for Adaptive Human-Computer Interface of a PT Operation Platform Based on Context-Awareness

    Directory of Open Access Journals (Sweden)

    Qing Xue

    2014-01-01

    Full Text Available We present a conceptual architecture for an adaptive human-computer interface of a PT operation platform based on context-awareness. This architecture will form the basis of design for such an interface. This paper describes the components, key technologies, and working principles of the architecture. The critical content covers context information modeling and processing, establishing relationships between contexts and interface design knowledge through adaptive knowledge reasoning, and implementing the visualization of the adaptive interface with the aid of interface tool technology.

  15. Proof-of-Concept Demonstrations for Computation-Based Human Reliability Analysis. Modeling Operator Performance During Flooding Scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Joe, Jeffrey Clark [Idaho National Lab. (INL), Idaho Falls, ID (United States); Boring, Ronald Laurids [Idaho National Lab. (INL), Idaho Falls, ID (United States); Herberger, Sarah Elizabeth Marie [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.

  16. Computation of currents induced by ELF electric fields in anisotropic human tissues using the Finite Integration Technique (FIT

    Directory of Open Access Journals (Sweden)

    V. C. Motrescu

    2005-01-01

    Full Text Available In recent years, the task of estimating the currents induced within the human body by environmental electromagnetic fields has received increased attention from scientists around the world. While important progress has been made in this direction, the unpredictable behaviour of living biological tissue makes it difficult to quantify its reaction to electromagnetic fields and has kept the problem open. A successful alternative to the very difficult task of performing measurements is computing the fields within a human body model using numerical methods implemented in a software code. One of the difficulties is that some tissue types exhibit an anisotropic character with respect to their dielectric properties. Our work consists of computing the currents induced by extremely low frequency (ELF) electric fields in anisotropic muscle tissues, using a human body model extended with muscle fibre orientations as well as an extended version of the Finite Integration Technique (FIT) able to handle fully anisotropic dielectric properties.
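    The anisotropy enters through the tissue's conductivity tensor, which can be built from the local fibre orientation as sigma = sigma_t I + (sigma_l - sigma_t) n n^T, with longitudinal conductivity along the fibre and transverse conductivity across it. A small sketch (the conductivity values are rough literature figures, not the paper's):

        import numpy as np

        def muscle_conductivity_tensor(fibre_dir, sigma_l=0.4, sigma_t=0.1):
            """3x3 low-frequency conductivity tensor (S/m) for muscle with fibre
            direction n: sigma = sigma_t * I + (sigma_l - sigma_t) * n n^T.
            Default conductivities are assumed, typical literature values."""
            n = np.asarray(fibre_dir, dtype=float)
            n /= np.linalg.norm(n)
            return sigma_t * np.eye(3) + (sigma_l - sigma_t) * np.outer(n, n)

        sigma = muscle_conductivity_tensor([1.0, 1.0, 0.0])
        print(sigma)   # the induced current density follows J = sigma @ E locally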

  17. About possibility of temperature trace observing on a human skin through clothes by using computer processing of IR image

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Shestakov, Ivan L.; Blednov, Roman G.

    2017-05-01

    One urgent security problem is the detection of objects placed inside the human body. Obviously, for safety reasons X-rays cannot be used widely and often for such object detection. For this purpose, we propose to use a THz camera and an IR camera. Below we continue to explore the possibility of using an IR camera to detect a temperature trace on a human body. In contrast to a passive THz camera, the IR camera does not allow an object under clothing to be seen very distinctly. Of course, this is a big disadvantage for a security solution based on the IR camera. To find possible ways of overcoming this disadvantage, we perform experiments with an IR camera produced by FLIR Company and develop a novel approach for computer processing of the images it captures. This approach allows us to increase the effective temperature resolution of the IR camera as well as to enhance the effective sensitivity of the human eye. As a consequence, it becomes possible to see a change in human body temperature through clothing. We analyze IR images of a person who drinks water and eats chocolate, and follow the temperature trace on the skin caused by temperature changes inside the body. Experiments were also made observing the temperature trace from objects placed behind a thick overall. The demonstrated results are very important for the detection of forbidden objects concealed inside the human body by non-destructive control without X-rays.

  18. Ontology for assessment studies of human-computer-interaction in surgery.

    Science.gov (United States)

    Machno, Andrej; Jannin, Pierre; Dameron, Olivier; Korb, Werner; Scheuermann, Gerik; Meixensberger, Jürgen

    2015-02-01

    New technologies improve modern medicine, but may result in unwanted consequences. Some occur due to inadequate human-computer-interactions (HCI). To assess these consequences, an investigation model was developed to facilitate the planning, implementation and documentation of studies for HCI in surgery. The investigation model was formalized in Unified Modeling Language and implemented as an ontology. Four different top-level ontologies were compared: Object-Centered High-level Reference, Basic Formal Ontology, General Formal Ontology (GFO) and Descriptive Ontology for Linguistic and Cognitive Engineering, according to the three major requirements of the investigation model: the domain-specific view, the experimental scenario and the representation of fundamental relations. Furthermore, this article emphasizes the distinction of "information model" and "model of meaning" and shows the advantages of implementing the model in an ontology rather than in a database. The results of the comparison show that GFO fits the defined requirements adequately: the domain-specific view and the fundamental relations can be implemented directly, only the representation of the experimental scenario requires minor extensions. The other candidates require wide-ranging extensions, concerning at least one of the major implementation requirements. Therefore, the GFO was selected to realize an appropriate implementation of the developed investigation model. The ensuing development considered the concrete implementation of further model aspects and entities: sub-domains, space and time, processes, properties, relations and functions. The investigation model and its ontological implementation provide a modular guideline for study planning, implementation and documentation within the area of HCI research in surgery. This guideline helps to navigate through the whole study process in the form of a kind of standard or good clinical practice, based on the involved foundational frameworks.

  19. Brain Control: Human-computer Integration Control Based on Brain-computer Interface

    Institute of Scientific and Technical Information of China (English)

    王行愚; 金晶; 张宇; 王蓓

    2013-01-01

    Recently, a new class of systems called brain control systems has developed rapidly. A brain control system is a human-computer integration control system based on a brain-computer interface (BCI), which relies on human ideas and thinking. Brain control systems have been successfully applied in wide-ranging fields: assisting the daily life of disabled patients, rehabilitation training of patients with stroke or limb injury, real-time monitoring of operator status, entertainment, and smart homes. This paper first briefly introduces the research background, basic principles, system structure and development of brain control. It then reviews and analyzes in detail the current research on electroencephalogram (EEG) signal patterns, control-signal conversion algorithms and application systems, and discusses directions and ideas for further research. Finally, the future development and application prospects of brain control are analyzed.

  20. A truly human interface: Interacting face-to-face with someone whose words are determined by a computer program

    Directory of Open Access Journals (Sweden)

    Kevin Corti

    2015-05-01

    Full Text Available We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real-time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (echoborgs) capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot’s chance of passing but did increase interrogators’ ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg neither sensed nor suspected a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human-computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence.

  1. Human and Robotic Space Mission Use Cases for High-Performance Spaceflight Computing

    Science.gov (United States)

    Doyle, Richard; Bergman, Larry; Some, Raphael; Whitaker, William; Powell, Wesley; Johnson, Michael; Goforth, Montgomery; Lowry, Michael

    2013-01-01

    Spaceflight computing is a key resource in NASA space missions and a core determining factor of spacecraft capability, with ripple effects throughout the spacecraft, the end-to-end system, and the mission. It can aptly be viewed as a "technology multiplier": advances in onboard computing provide dramatic improvements in flight functions and capabilities across the NASA mission classes, and will enable new flight capabilities and mission scenarios, increasing science and exploration return per mission dollar.

  2. Genetic crossovers are predicted accurately by the computed human recombination map.

    Directory of Open Access Journals (Sweden)

    Pavel P Khil

    2010-01-01

    Full Text Available Hotspots of meiotic recombination can change rapidly over time. This instability and the reported high level of inter-individual variation in meiotic recombination call into question the accuracy of the calculated hotspot map, which is based on the summation of past genetic crossovers. To estimate the accuracy of the computed recombination rate map, we have mapped genetic crossovers to a median resolution of 70 Kb in 10 CEPH pedigrees. We then compared the positions of crossovers with the hotspots computed from HapMap data and performed extensive computer simulations to compare the observed distributions of crossovers with the distributions expected from the calculated recombination rate maps. Here we show that a population-averaged hotspot map computed from linkage disequilibrium data predicts present-day genetic crossovers well. We find that computed hotspot maps accurately estimate both the strength and the position of meiotic hotspots. An in-depth examination of crossovers that were not predicted shows that they are preferentially located in regions where hotspots are found in other populations. In summary, we find that by combining several computed population-specific maps we can capture the variation in individual hotspots to generate a hotspot map that can predict almost all present-day genetic crossovers.

  3. Detection of differentially methylated gene promoters in failing and nonfailing human left ventricle myocardium using computation analysis.

    Science.gov (United States)

    Koczor, Christopher A; Lee, Eva K; Torres, Rebecca A; Boyd, Amy; Vega, J David; Uppal, Karan; Yuan, Fan; Fields, Earl J; Samarel, Allen M; Lewis, William

    2013-07-15

    Human dilated cardiomyopathy (DCM) is characterized by congestive heart failure and altered myocardial gene expression. Epigenetic changes, including DNA methylation, are implicated in the development of DCM but have not been studied extensively. Clinical human DCM and nonfailing control left ventricle samples were individually analyzed for DNA methylation and expressional changes. Expression microarrays were used to identify 393 overexpressed and 349 underexpressed genes in DCM (GEO accession number: GSE43435). Gene promoter microarrays were utilized for DNA methylation analysis, and the resulting data were analyzed by two different computational methods. In the first method, we utilized subtractive analysis of DNA methylation peak data to identify 158 gene promoters exhibiting DNA methylation changes that correlated with expression changes. In the second method, a two-stage approach combined a particle swarm optimization feature selection algorithm and a discriminant analysis via mixed integer programming classifier to identify differentially methylated gene promoters. This analysis identified 51 hypermethylated promoters and six hypomethylated promoters in DCM with 100% cross-validation accuracy in the group assignment. Generation of a composite list of genes identified by subtractive analysis and two-stage computation analysis revealed four genes that exhibited differential DNA methylation by both methods in addition to altered gene expression. Computationally identified genes (AURKB, BTNL9, CLDN5, and TK1) define a central set of differentially methylated gene promoters that are important in classifying DCM. These genes have no previously reported role in DCM. This study documents that rigorous computational analysis applied to microarray analysis of healthy and diseased human heart samples helps to define clinically relevant DNA methylation and expressional changes in DCM.
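    The subtractive step — keeping promoters whose methylation change is mirrored by an opposite expression change — reduces to a sign comparison across the two datasets. A schematic of the idea (column names and numbers are invented, not the study's data):

        import pandas as pd

        def subtractive_candidates(meth, expr):
            """meth, expr: DataFrames indexed by gene, each with a 'delta' column
            giving the DCM-minus-control change. Hypermethylation paired with
            reduced expression (and vice versa) flags a candidate promoter."""
            m = meth.join(expr, lsuffix="_meth", rsuffix="_expr", how="inner")
            return m[m["delta_meth"] * m["delta_expr"] < 0].index.tolist()

        meth = pd.DataFrame({"delta": [0.8, -0.5, 0.3]}, index=["AURKB", "BTNL9", "TK1"])
        expr = pd.DataFrame({"delta": [-1.2, 0.9, 0.4]}, index=["AURKB", "BTNL9", "TK1"])
        print(subtractive_candidates(meth, expr))   # ['AURKB', 'BTNL9']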

  4. A truly human interface: interacting face-to-face with someone whose words are determined by a computer program

    Science.gov (United States)

    Corti, Kevin; Gillespie, Alex

    2015-01-01

    We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real-time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (“echoborgs”) capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot’s chance of passing but did increase interrogators’ ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg did not sense a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human–computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence. PMID:26042066

  5. [Design and trial of computer test system for experiment courses of human parasitology].

    Science.gov (United States)

    Liao, Hua; Ling, Jin; Su, Shui-Lian; Zeng, Jie; Xie, Qiong-Jun

    2011-06-01

    Based on the traditional experimental test of human parasitology, a reform was conducted to avoid shortages of specimens and disclosure of test questions. An experimental test system for human parasitology based on a client/server (C/S) architecture was therefore developed. This practicable system can increase the efficiency and fairness of examinations and reduce costs.

  6. Computer-aided diagnosis in phase contrast imaging X-ray computed tomography for quantitative characterization of ex vivo human patellar cartilage.

    Science.gov (United States)

    Nagarajan, Mahesh B; Coan, Paola; Huber, Markus B; Diemoz, Paul C; Glaser, Christian; Wismuller, Axel

    2013-10-01

    Visualization of the ex vivo human patellar cartilage matrix through phase contrast imaging X-ray computed tomography (PCI-CT) has been previously demonstrated. Such studies revealed osteoarthritis-induced changes to chondrocyte organization in the radial zone. This study investigates the application of texture analysis to characterizing such chondrocyte patterns in the presence and absence of osteoarthritic damage. Texture features derived from Minkowski functionals (MF) and gray-level co-occurrence matrices (GLCM) were extracted from 842 regions of interest (ROI) annotated on PCI-CT images of ex vivo human patellar cartilage specimens. These texture features were subsequently used in a machine learning task with support vector regression to classify ROIs as healthy or osteoarthritic; classification performance was evaluated using the area under the receiver operating characteristic curve (AUC). The best classification performance was observed with the MF features "perimeter" (AUC: 0.94 ± 0.08) and "Euler characteristic" (AUC: 0.94 ± 0.07), and the GLCM-derived feature "Correlation" (AUC: 0.93 ± 0.07). These results suggest that such texture features can provide a detailed characterization of the chondrocyte organization in the cartilage matrix, enabling classification of cartilage as healthy or osteoarthritic with high accuracy.
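    The GLCM half of this pipeline is straightforward with standard tooling. A minimal sketch extracting the "Correlation" feature from ROIs and scoring by AUC (scikit-image/scikit-learn; a generic probabilistic SVM stands in for the paper's support vector regression, and the random toy ROIs will score near chance):

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import roc_auc_score

        def glcm_correlation(roi, levels=32):
            """Mean GLCM 'correlation' over two offsets for an 8-bit grayscale ROI."""
            roi = (roi / 256.0 * levels).astype(np.uint8)
            g = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                             levels=levels, symmetric=True, normed=True)
            return graycoprops(g, "correlation").mean()

        rng = np.random.default_rng(1)
        rois = [rng.integers(0, 256, (64, 64)) for _ in range(40)]
        X = np.array([[glcm_correlation(r)] for r in rois])
        y = np.array([0] * 20 + [1] * 20)           # 0 = healthy, 1 = osteoarthritic
        p = cross_val_predict(SVC(probability=True), X, y, cv=5,
                              method="predict_proba")[:, 1]
        print("AUC:", roc_auc_score(y, p))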

  7. The application of computed tomography and magnetic resonance imaging at diagnostics of the human maxillofacial system

    Science.gov (United States)

    Nikitin, V.; Karavaeva, E.; Cherepennikov, Yu; Miloichikova, I.

    2016-06-01

    The application of computed tomography and magnetic resonance imaging has entered wide practice in diagnosis of the maxillofacial system. Computed tomography provides information only about bone structures, whereas magnetic resonance imaging gives information about both bone and soft tissue structures of the maxillofacial system. Both sagittal and coronal projections are needed for a complete diagnosis of the temporomandibular joint, because the articular disc is a very mobile structure. We suggest that the temporomandibular joint can influence the internal carotid artery when the articular disc is displaced medially. From an analysis of the literature and our own studies, we conclude that changes in the TMJ affect the internal carotid artery.

  8. Computer-aided prediction of xenobiotic metabolism in the human body

    Science.gov (United States)

    Bezhentsev, V. M.; Tarasova, O. A.; Dmitriev, A. V.; Rudik, A. V.; Lagunin, A. A.; Filimonov, D. A.; Poroikov, V. V.

    2016-08-01

    The review describes the major databases containing information about the metabolism of xenobiotics, including data on drug metabolism, metabolic enzymes, schemes of biotransformation and the structures of some substrates and metabolites. Computational approaches used to predict the interaction of xenobiotics with metabolic enzymes, prediction of metabolic sites in the molecule, generation of structures of potential metabolites for subsequent evaluation of their properties are considered. The advantages and limitations of various computational methods for metabolism prediction and the prospects for their applications to improve the safety and efficacy of new drugs are discussed. Bibliography — 165 references.

  9. Computational modeling of blast wave interaction with a human body and assessment of traumatic brain injury

    Science.gov (United States)

    Tan, X. G.; Przekwas, A. J.; Gupta, R. K.

    2017-07-01

    The modeling of human body biomechanics resulting from blast exposure poses great challenges because of the complex geometry and the substantial material heterogeneity. We developed a detailed human body finite element model representing both the geometry and the materials realistically. The model includes a detailed head (face, skull, brain and spinal cord), the neck, the skeleton, air cavities (lungs) and the tissues. Hence, it can be used to properly model stress wave propagation in the human body subjected to blast loading. The blast loading on the human body was generated from a simulated C4 explosion. We used the highly scalable solvers in the multi-physics code CoBi for both the blast simulation and the human body biomechanics. The meshes generated for these simulations are of good quality, so that relatively large time-step sizes can be used without resorting to artificial time scaling treatments. The coupled gas dynamics and biomechanics solutions were validated against shock tube test data. The human body models were used to conduct parametric simulations to find the biomechanical response and the brain injury mechanism due to blasts impacting the human body. Under the same blast loading conditions, we showed the importance of including the whole body in the model.

  10. Brain-Computer Interface Controlled Cyborg: Establishing a Functional Information Transfer Pathway from Human Brain to Cockroach Brain.

    Science.gov (United States)

    Li, Guangye; Zhang, Dingguo

    2016-01-01

    An all-chain-wireless brain-to-brain system (BTBS), which enabled motion control of a cyborg cockroach via the human brain, was developed in this work. A steady-state visual evoked potential (SSVEP) based brain-computer interface (BCI) was used in this system for recognizing human motion intention, and an optimization algorithm was proposed in SSVEP to improve the online performance of the BCI. The cyborg cockroach was developed by surgically integrating a portable microstimulator that could generate invasive electrical nerve stimulation. Through Bluetooth communication, specific electrical pulse trains could be triggered from the microstimulator by BCI commands and were sent through the antenna nerve to stimulate the brain of the cockroach. Serial experiments were designed and conducted to test the overall performance of the BTBS with six human subjects and three cockroaches. The experimental results showed that the online classification accuracy of the three-mode BCI increased from 72.86% to 78.56% (by 5.70 percentage points) using the optimization algorithm, and the mean response accuracy of the cyborgs using this system reached 89.5%. Moreover, the results also showed that the cyborg could be navigated by the human brain to complete walking along an S-shaped track with a success rate of about 20%, suggesting the proposed BTBS established a feasible functional information transfer pathway from the human brain to the cockroach brain.
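    SSVEP recognition of the kind used here is commonly implemented with canonical correlation analysis against sine-cosine reference templates at each stimulus frequency; the command is the frequency with the highest canonical correlation. A minimal sketch of that standard recipe (not necessarily this paper's optimized algorithm):

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def ssvep_classify(eeg, fs, freqs, n_harm=2):
            """eeg: (samples, channels) segment; return the stimulus frequency whose
            sine/cosine reference set correlates best with the EEG under CCA."""
            t = np.arange(eeg.shape[0]) / fs
            best, best_rho = None, -1.0
            for f in freqs:
                ref = np.column_stack([fn(2 * np.pi * f * h * t)
                                       for h in range(1, n_harm + 1)
                                       for fn in (np.sin, np.cos)])
                u, v = CCA(n_components=1).fit(eeg, ref).transform(eeg, ref)
                rho = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
                if rho > best_rho:
                    best, best_rho = f, rho
            return best, best_rho

        # toy 4-channel segment with a noisy 10 Hz response
        fs, t = 250, np.arange(500) / 250
        eeg = np.column_stack([np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.random.randn(500)
                               for _ in range(4)])
        print(ssvep_classify(eeg, fs, [8.57, 10.0, 12.0]))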

  11. COMPARATIVE COMPUTATIONAL MODELING OF AIRFLOWS AND VAPOR DOSIMETY IN THE RESPIRATORY TRACTS OF RAT, MONKEY, AND HUMAN

    Energy Technology Data Exchange (ETDEWEB)

    Corley, Richard A.; Kabilan, Senthil; Kuprat, Andrew P.; Carson, James P.; Minard, Kevin R.; Jacob, Rick E.; Timchalk, Charles; Glenny, Robb W.; Pipavath, Sudhaker; Cox, Timothy C.; Wallis, Chris; Larson, Richard; Fanucchi, M.; Postlewait, Ed; Einstein, Daniel R.

    2012-07-01

    Coupling computational fluid dynamics (CFD) with physiologically based pharmacokinetic (PBPK) models is useful for predicting site-specific dosimetry of airborne materials in the respiratory tract and elucidating the importance of species differences in anatomy, physiology, and breathing patterns. Historically, these models were limited to discrete regions of the respiratory system. CFD/PBPK models have now been developed for the rat, monkey, and human that encompass airways from the nose or mouth to the lung. A PBPK model previously developed to describe acrolein uptake in nasal tissues was adapted to the extended airway models as an example application. Model parameters for each anatomic region were obtained from the literature, measured directly, or estimated from published data. Airflow and site-specific acrolein uptake patterns were determined under steady-state inhalation conditions to provide direct comparisons with prior data and nasal-only simulations. Results confirmed that regional uptake was dependent upon airflow rates and acrolein concentrations with nasal extraction efficiencies predicted to be greatest in the rat, followed by the monkey, then the human. For human oral-breathing simulations, acrolein uptake rates in oropharyngeal and laryngeal tissues were comparable to nasal tissues following nasal breathing under the same exposure conditions. For both breathing modes, higher uptake rates were predicted for lower tracheo-bronchial tissues of humans than either the rat or monkey. These extended airway models provide a unique foundation for comparing dosimetry across a significantly more extensive range of conducting airways in the rat, monkey, and human than prior CFD models.

  12. Brain-Computer Interface Controlled Cyborg: Establishing a Functional Information Transfer Pathway from Human Brain to Cockroach Brain.

    Directory of Open Access Journals (Sweden)

    Guangye Li

    Full Text Available An all-chain-wireless brain-to-brain system (BTBS), which enabled motion control of a cyborg cockroach via the human brain, was developed in this work. A steady-state visual evoked potential (SSVEP) based brain-computer interface (BCI) was used in this system for recognizing human motion intention, and an optimization algorithm was proposed in SSVEP to improve the online performance of the BCI. The cyborg cockroach was developed by surgically integrating a portable microstimulator that could generate invasive electrical nerve stimulation. Through Bluetooth communication, specific electrical pulse trains could be triggered from the microstimulator by BCI commands and were sent through the antenna nerve to stimulate the brain of the cockroach. Serial experiments were designed and conducted to test the overall performance of the BTBS with six human subjects and three cockroaches. The experimental results showed that the online classification accuracy of the three-mode BCI increased from 72.86% to 78.56% (by 5.70 percentage points) using the optimization algorithm, and the mean response accuracy of the cyborgs using this system reached 89.5%. Moreover, the results also showed that the cyborg could be navigated by the human brain to complete walking along an S-shaped track with a success rate of about 20%, suggesting the proposed BTBS established a feasible functional information transfer pathway from the human brain to the cockroach brain.

  13. A computer vision system for rapid search inspired by surface-based attention mechanisms from human perception.

    Science.gov (United States)

    Mohr, Johannes; Park, Jong-Han; Obermayer, Klaus

    2014-12-01

    Humans are highly efficient at visual search tasks by focusing selective attention on a small but relevant region of a visual scene. Recent results from biological vision suggest that surfaces of distinct physical objects form the basic units of this attentional process. The aim of this paper is to demonstrate how such surface-based attention mechanisms can speed up a computer vision system for visual search. The system uses fast perceptual grouping of depth cues to represent the visual world at the level of surfaces. This representation is stored in short-term memory and updated over time. A top-down guided attention mechanism sequentially selects one of the surfaces for detailed inspection by a recognition module. We show that the proposed attention framework requires little computational overhead (about 11 ms), but enables the system to operate in real-time and leads to a substantial increase in search efficiency.
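    The control loop described — group depth cues into surfaces, hold them in short-term memory, then inspect them best-first under a top-down score — can be sketched as follows (the depth quantization and the relevance score are illustrative stand-ins for the system's grouping and guidance modules):

        import numpy as np
        from scipy import ndimage

        def segment_surfaces(depth, depth_step=0.1):
            """Crude perceptual grouping: quantize depth and take connected
            components of each depth level as candidate surface masks."""
            q = np.round(depth / depth_step).astype(int)
            surfaces = []
            for level in np.unique(q):
                labels, n = ndimage.label(q == level)
                surfaces.extend(labels == k for k in range(1, n + 1))
            return surfaces

        def attend(surfaces, score):
            """Top-down guided attention: yield surfaces best-first."""
            yield from sorted(surfaces, key=score, reverse=True)

        depth = np.zeros((8, 8)); depth[2:5, 2:5] = 1.0; depth[6:, 6:] = 2.0
        for mask in attend(segment_surfaces(depth), score=lambda m: m.sum()):
            pass   # hand each surface mask to the recognition module in turn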

  14. Computer Simulation Study of Human Locomotion with a Three-Dimensional Entire-Body Neuro-Musculo-Skeletal Model

    Science.gov (United States)

    Hase, Kazunori; Obinata, Goro

    It is essential for the biomechanical study of human walking motion to consider not only in vivo mechanical load and energy efficiency but also aspects of motor control such as walking stability. In this study, walking stability was investigated using a three-dimensional entire-body neuro-musculo-skeletal model in the computer simulation. In the computational experiments, imaginary constraints, such as no muscular system, were set in the neuro-musculo-skeletal model to investigate their influence on walking stability. The neuronal parameters were adjusted using numerical search techniques in order to adapt walking patterns to constraints on the neuro-musculo-skeletal system. Simulation results revealed that the model of the normal neuro-musculo-skeletal system yielded a higher stability than the imaginary models. Unstable walking by a model with a time delay in the neuronal system suggested significant unknown mechanisms which stabilized walking patterns that have been neglected in previous studies.

  15. Human Computing in the Life Sciences: What does the future hold?

    NARCIS (Netherlands)

    Fikkert, F.W.

    2007-01-01

    In future computing environments you will be surrounded and supported by all kinds of technologies. What is characteristic is that you can interact with them in a natural way: you can speak to, point at, or even frown about some piece of presented information: the environment understands your intent. Natural…

  16. Human Computer Interaction (HCI) and Internet Residency: Implications for Both Personal Life and Teaching/Learning

    Science.gov (United States)

    Crearie, Linda

    2016-01-01

    Technological advances over the last decade have had a significant impact on the teaching and learning experiences students encounter today. We now take technologies such as Web 2.0, mobile devices, cloud computing, podcasts, social networking, super-fast broadband, and connectedness for granted. So what about the student use of these types of…

  17. Implications of recent research on human factors for teachers of computing skills

    Energy Technology Data Exchange (ETDEWEB)

    Gabriel, J.R.

    1983-01-01

    Recent studies of people at work have produced a classification of their activity applicable to the history of education and to teaching itself. These insights suggest ways to teach computing and to build an innovative program to integrate schools and colleges with local industry to the benefit of all three.

  18. Solving Human Performance Problems with Computers. A Case Study: Building an Electronic Performance Support System.

    Science.gov (United States)

    Raybould, Barry

    1990-01-01

    Describes the design of an electronic performance support system (PSS) that was developed to help sales and support personnel access relevant information needed for good job performance. Highlights include expert systems, databases, interactive video discs, formatting information online, information retrieval techniques, HyperCard, computer-based…

  19. South African sign language human-computer interface in the context of the national accessibility portal

    CSIR Research Space (South Africa)

    Olivrin, GJ

    2006-02-01


  20. Review: Human Intracortical recording and neural decoding for brain-computer interfaces.

    Science.gov (United States)

    Brandman, David M; Cash, Sydney S; Hochberg, Leigh R

    2017-03-02

    Brain Computer Interfaces (BCIs) use neural information recorded from the brain for voluntary control of external devices. The development of BCI systems has largely focused on improving functional independence for individuals with severe motor impairments, including providing tools for communication and mobility. In this review, we describe recent advances in intracortical BCI technology and provide potential directions for further research.

  1. Human-Computer Interaction for BCI Games: Usability and User Experience

    NARCIS (Netherlands)

    Plass-Oude Bos, Danny; Reuderink, Boris; Laar, van de Bram; Gürkök, Hayrettin; Mühl, Christian; Poel, Mannes; Heylen, Dirk; Nijholt, Anton; Sourin, A.

    2010-01-01

    Brain-computer interfaces (BCI) come with a lot of issues, such as delays, bad recognition, long training times, and cumbersome hardware. Gamers are a large potential target group for this new interaction modality, but why would healthy subjects want to use it? BCI provides a combination of information…

  2. Perspectives on the Design of Human-Computer Interactions: Issues and Implications.

    Science.gov (United States)

    Gavora, Mark J.; Hannafin, Michael

    1994-01-01

    Considers several perspectives on interaction strategies for computer-aided learning; examines dimensions of interaction; and presents a model for the design of interaction strategies. Topics include pacing; navigation; mental processes; cognitive and physical responses; the role of quality and quantity; a conceptual approach; and suggestions for…

  3. Non-Speech Sound in Human-Computer Interaction: A Review and Design Guidelines.

    Science.gov (United States)

    Hereford, James; Winn, William

    1994-01-01

    Reviews research on uses of computer sound and suggests how sound might be used effectively by instructional and interface designers. Topics include principles of interface design; the perception of sound; earcons, both symbolic and iconic; sound in data analysis; sound in virtual environments; and guidelines for using sound. (70 references) (LRW)

  4. Toward affective brain-computer interfaces : exploring the neurophysiology of affect during human media interaction

    NARCIS (Netherlands)

    Mühl, Christian

    2012-01-01

    Affective Brain-Computer Interfaces (aBCI), the sensing of emotions from brain activity, seems a fantasy from the realm of science fiction. But unlike faster-than-light travel or teleportation, aBCI seems almost within reach due to novel sensor technologies, the advancement of neuroscience, and the

  5. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan

    2007-01-01

    Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer’s point of view, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture.

  6. Deep Neural Networks as a Computational Model for Human Shape Sensitivity

    Science.gov (United States)

    Op de Beeck, Hans P.

    2016-01-01

    Theories of object recognition agree that shape is of primordial importance, but there is no consensus about how shape might be represented, and so far attempts to implement a model of shape perception that would work with realistic stimuli have largely failed. Recent studies suggest that state-of-the-art convolutional ‘deep’ neural networks (DNNs) capture important aspects of human object perception. We hypothesized that these successes might be partially related to a human-like representation of object shape. Here we demonstrate that sensitivity for shape features, characteristic to human and primate vision, emerges in DNNs when trained for generic object recognition from natural photographs. We show that these models explain human shape judgments for several benchmark behavioral and neural stimulus sets on which earlier models mostly failed. In particular, although never explicitly trained for such stimuli, DNNs develop acute sensitivity to minute variations in shape and to non-accidental properties that have long been implicated to form the basis for object recognition. Even more strikingly, when tested with a challenging stimulus set in which shape and category membership are dissociated, the most complex model architectures capture human shape sensitivity as well as some aspects of the category structure that emerges from human judgments. As a whole, these results indicate that convolutional neural networks not only learn physically correct representations of object categories but also develop perceptually accurate representational spaces of shapes. An even more complete model of human object representations might be in sight by training deep architectures for multiple tasks, which is so characteristic in human development. PMID:27124699
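
    As a hedged illustration of the kind of model-versus-human comparison described here, the sketch below runs a minimal representational similarity analysis: pairwise distances between (stand-in) DNN layer activations are correlated with (stand-in) human shape-dissimilarity judgments. The arrays are random placeholders, not the paper's stimuli or networks.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Stand-ins: activations of one DNN layer for 40 stimuli, and human
# pairwise shape-dissimilarity judgments for the same stimuli.
n_stimuli = 40
dnn_features = rng.normal(size=(n_stimuli, 512))
human_rdm = pdist(rng.normal(size=(n_stimuli, 5)))  # condensed dissimilarity vector

# Model RDM: pairwise (correlation) distances between DNN feature vectors.
model_rdm = pdist(dnn_features, metric="correlation")

# Compare model and human representational geometries with Spearman's rho.
rho, p = spearmanr(model_rdm, human_rdm)
print(f"model-human RDM correlation: rho={rho:.2f}, p={p:.3f}")
```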

  7. Computational Comparison of Human Genomic Sequence Assemblies for a Region of Chromosome 4

    OpenAIRE

    Semple, Colin; Stewart W. Morris; Porteous, David J.; Evans, Kathryn L.

    2002-01-01

    Much of the available human genomic sequence data exist in a fragmentary draft state following the completion of the initial high-volume sequencing performed by the International Human Genome Sequencing Consortium (IHGSC) and Celera Genomics (CG). We compared six draft genome assemblies over a region of chromosome 4p (D4S394–D4S403), two consecutive releases by the IHGSC at University of California, Santa Cruz (UCSC), two consecutive releases from the National Centre for Biotechnology Informa...
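
    A minimal sketch of one way such assemblies can be compared (not the authors' pipeline): check whether two assemblies place shared markers from the D4S394–D4S403 interval in the same order. The positions and the intermediate marker names below are hypothetical.

```python
from scipy.stats import kendalltau

# Hypothetical positions (kb) of STS markers in two draft assemblies; the
# intermediate marker names are placeholders for illustration only.
ucsc = {"D4S394": 100, "D4S395": 420, "D4S398": 900, "D4S403": 1500}
ncbi = {"D4S394": 95,  "D4S395": 455, "D4S398": 870, "D4S403": 1620}

# Rank-order agreement of shared markers between the two assemblies.
shared = sorted(set(ucsc) & set(ncbi))
tau, p = kendalltau([ucsc[m] for m in shared], [ncbi[m] for m in shared])
print(f"marker-order agreement (Kendall tau): {tau:.2f}")
```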

  8. Beauty of Human-Computer Interface Interaction

    Institute of Scientific and Technical Information of China (English)

    高超; 王坤茜

    2014-01-01

    From the angle of applying aesthetic principles to the computer human-machine interface, this paper explores the application of aesthetics in human-computer interfaces and concludes that enhancing the beauty of the human-computer interface can improve users' efficiency and experience during human-computer interaction.

  9. Robotic lower limb prosthesis design through simultaneous computer optimizations of human and prosthesis costs

    Science.gov (United States)

    Handford, Matthew L.; Srinivasan, Manoj

    2016-02-01

    Robotic lower limb prostheses can improve the quality of life for amputees. Development of such devices, currently dominated by long prototyping periods, could be sped up by predictive simulations. In contrast to some amputee simulations which track experimentally determined non-amputee walking kinematics, here, we explicitly model the human-prosthesis interaction to produce a prediction of the user’s walking kinematics. We obtain simulations of an amputee using an ankle-foot prosthesis by simultaneously optimizing human movements and prosthesis actuation, minimizing a weighted sum of human metabolic and prosthesis costs. The resulting Pareto optimal solutions predict that increasing prosthesis energy cost, decreasing prosthesis mass, and allowing asymmetric gaits all decrease human metabolic rate for a given speed and alter human kinematics. The metabolic rates increase monotonically with speed. Remarkably, by performing an analogous optimization for a non-amputee human, we predict that an amputee walking with an appropriately optimized robotic prosthesis can have a lower metabolic cost – even lower than assuming that the non-amputee’s ankle torques are cost-free.
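
    The weighted-sum scalarization behind the Pareto analysis can be sketched in a few lines. The cost functions below are toy stand-ins for the paper's metabolic and prosthesis cost models; only the optimization pattern (sweep the weight, minimize the scalarized objective) reflects the abstract.

```python
import numpy as np
from scipy.optimize import minimize

# Toy cost models (hypothetical, not the paper's): one design variable x,
# e.g. a prosthesis actuation level. Metabolic cost falls as actuation rises
# toward the user's preference; prosthesis cost grows with actuation.
def metabolic_cost(x):
    return (x - 2.0) ** 2 + 1.0

def prosthesis_cost(x):
    return 0.5 * x ** 2

# Weighted-sum scalarization: minimize metabolic + w * prosthesis over a sweep
# of weights w to trace out Pareto-optimal designs.
pareto = []
for w in np.linspace(0.0, 5.0, 11):
    res = minimize(lambda x, w=w: metabolic_cost(x[0]) + w * prosthesis_cost(x[0]),
                   x0=[1.0])
    x_opt = res.x[0]
    pareto.append((w, metabolic_cost(x_opt), prosthesis_cost(x_opt)))

for w, m, p in pareto:
    print(f"w={w:.1f}: metabolic={m:.2f}, prosthesis={p:.2f}")
```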

  10. Influence of consonantal context on the pronunciation of vowels: a comparison of human readers and computational models.

    Science.gov (United States)

    Treiman, Rebecca; Kessler, Brett; Bick, Suzanne

    2003-05-01

    In two experiments, we found that college students' pronunciations of vowels in nonwords are influenced both by preceding and following consonants. The predominance of rimes in previous studies of reading does not appear to arise because readers are unable to pick up associations that cross the onset-rime boundary, but rather because English has relatively few such associations. Comparisons between people's vowel pronunciations and those produced by various computational models of reading showed that no model provided a good account of human performance on nonwords for which the vowel shows contextual conditioning. Possible directions for improved models are suggested.
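
    To make the idea of contextual conditioning concrete, the sketch below tallies conditional pronunciation probabilities for a vowel spelling given the following consonant. The tiny corpus is invented for illustration; it is not the experiments' stimulus set.

```python
from collections import Counter, defaultdict

# Hypothetical observations of (vowel spelling, following consonant, phoneme):
# "-ea-" tends toward /E/ before "d" (head, dead) but /i/ before "m" (team).
observations = [
    ("ea", "d", "E"), ("ea", "d", "E"), ("ea", "d", "i"),
    ("ea", "m", "i"), ("ea", "m", "i"), ("ea", "m", "i"),
]

counts = defaultdict(Counter)
for spelling, coda, phoneme in observations:
    counts[(spelling, coda)][phoneme] += 1

# P(pronunciation | vowel spelling, following consonant)
for context, ctr in counts.items():
    total = sum(ctr.values())
    probs = {ph: n / total for ph, n in ctr.items()}
    print(context, probs)
```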

  11. Some Key Techniques on Human-Computer Interaction

    Institute of Scientific and Technical Information of China (English)

    王红兵; 瞿裕忠; 徐冬梅; 王尧

    2001-01-01

    Human-computer interaction (HCI) is the study of people, computers, and the ways they influence each other. Human-computer integration with the human in the leading role will be a defining feature of future computer systems, and achieving efficient human-machine cooperation will be the main goal of the next generation of user interfaces. Multimodal user interfaces, computer-supported cooperative work, and three-dimensional human-computer interaction are among the key techniques for achieving efficient and natural human-computer interaction.

  12. Scenario Development in Human-Computer Interaction

    Institute of Scientific and Technical Information of China (English)

    张向波; 邢朝伟

    2003-01-01

    Scenarios are an important technique in human-computer interaction (HCI). Addressing problems that commonly arise in the design of interactive systems, this article analyzes the model-based human-computer interaction process in depth and examines the role, application, and content of scenarios in task analysis. The results show that scenario development is one of the key steps in the thorough study and successful development of interactive systems.

  13. Comparison of Cone Beam Computed Tomography and Multi Slice Computed Tomography Image Quality of Human Dried Mandible using 10 Anatomical Landmarks

    Science.gov (United States)

    Saati, Samira; Kaveh, Fatemeh

    2017-01-01

    Introduction: Cone beam computed tomography (CBCT) has gained broad acceptance in dentomaxillofacial imaging. Computed tomography (CT) is another imaging modality for diagnosis and preoperative assessment of the head and neck region. Aim: Considering the increased radiation exposure and high cost of CT, this study sought to subjectively assess the image quality of CBCT and multi-slice CT (MSCT). Materials and Methods: A dry human mandible was scanned by five CBCT systems (NewTom 3G, Scanora, CRANEX 3D, Promax, and Galileos) and one MSCT system. Three independent oral and maxillofacial radiologists reviewed the CBCT and MSCT scans for the quality of 10 landmarks, namely the mental foramen, trabecular bone, periodontal ligament (PDL), dentin, incisive canal, mandibular canal, dental pulp, enamel, lamina dura, and cortical bone, using a five-point scale. Results: Significant differences were found between MSCT and CBCT and among the five CBCT systems (p<0.05) in the visualization of different anatomical structures. A fine structure such as the incisive canal was significantly less visible and more variable among the systems in comparison with other anatomical landmarks such as the mental foramen, mandibular canal, cortical bone, dental pulp, enamel, and dentin (p<0.05). The CRANEX 3D and Promax systems were superior to MSCT and all other CBCT systems in visualizing anatomical structures. Conclusion: The CBCT image quality was superior to that of MSCT, even though some variability existed among different CBCT systems in visualizing fine structures. Considering the low radiation dose and high resolution, CBCT may be beneficial for dentomaxillofacial imaging. PMID:28384972
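
    Not part of the study, but as an illustration of how such five-point scores might be compared across systems, the sketch below applies a Kruskal-Wallis test to fabricated rating data (the record does not specify the study's actual statistics).

```python
import numpy as np
from scipy.stats import kruskal

# Fabricated five-point visibility scores (3 raters x 10 landmarks = 30 per
# system); only three of the six systems are shown for brevity.
rng = np.random.default_rng(2)
scores = {
    "MSCT":      rng.integers(2, 5, size=30),
    "CRANEX 3D": rng.integers(3, 6, size=30),
    "Promax":    rng.integers(3, 6, size=30),
}

# Kruskal-Wallis: do the score distributions differ across systems?
h, p = kruskal(*scores.values())
print(f"H={h:.2f}, p={p:.4f}")
```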

  14. Computational Analysis of Transcriptional Circuitries in Human Embryonic Stem Cells Reveals Multiple and Independent Networks

    Directory of Open Access Journals (Sweden)

    Xiaosheng Wang

    2014-01-01

    Full Text Available It has been known that three core transcription factors (TFs), NANOG, OCT4, and SOX2, collaborate to form a transcriptional circuitry that regulates pluripotency and self-renewal of human embryonic stem (ES) cells. Similarly, MYC also plays an important role in regulating pluripotency and self-renewal of human ES cells. However, the precise mechanism by which the transcriptional regulatory networks control the activity of ES cells remains unclear. In this study, we reanalyzed an extended core network, which includes the set of genes that are cobound by the three core TFs and additional TFs that also bind to these cobound genes. Our results show that, beyond the core transcriptional network, additional transcriptional networks are potentially important in regulating the fate of human ES cells. Several gene families that encode TFs play a key role in the transcriptional circuitry of ES cells. We also demonstrate that MYC acts independently of the core module in regulating the fate of human ES cells, consistent with the established argument. We find that TP53 is a key connecting molecule between the core-centered and MYC-centered modules. This study provides additional insights into the underlying regulatory mechanisms involved in the fate determination of human ES cells.
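
    A schematic of the "cobound genes" notion used in this analysis: genes bound by all three core TFs are the intersection of the factors' target sets, and an extended network adds factors that also bind those genes. The gene lists below are placeholders, not real binding data.

```python
# Placeholder target-gene sets for the three core TFs (not real binding data).
targets = {
    "NANOG": {"GENE_A", "GENE_B", "GENE_C", "GENE_D"},
    "OCT4":  {"GENE_B", "GENE_C", "GENE_E"},
    "SOX2":  {"GENE_B", "GENE_C", "GENE_F"},
}

# Genes cobound by all three core factors: the intersection of the target sets.
cobound = set.intersection(*targets.values())
print("cobound genes:", sorted(cobound))

# Additional TFs that also bind cobound genes would extend the core network;
# here we just check one hypothetical factor's overlap with the cobound set.
myc_targets = {"GENE_C", "GENE_E", "GENE_G"}
print("MYC overlap with cobound set:", sorted(cobound & myc_targets))
```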

  15. Trends in Human-Computer Interaction to Support Future Intelligence Analysis Capabilities

    Science.gov (United States)

    2011-06-01

    …strategies including (DARPA, 2011a): intelligent interruption to improve limited working memory; attention management to improve focus during complex tasks; cued memory retrieval to improve situational awareness and context recovery; and modality switching (i.e., audio, visual) to increase… Cited device examples include Fujitsu vein-pattern palm reading and augmented cognition / brain-computer interface products such as the NeuroSky MindSet and OCZ.

  16. Designing Computer Agents With Facial Personality To Improve Human-Machine Collaboration

    Science.gov (United States)

    2006-05-25

    …variability of 60 personality traits. Since then, a number of researchers (Borgatta (1964), Digman and Takemoto-Chock (1981), and McCrae and Costa (1985)) have… characteristics in a computer game. Results showed that different personalities were perceived, but more importantly they documented the actions…

  17. Computer-Based Procedure Systems: Technical Basis and Human Factors Review Guidance

    Science.gov (United States)

    2000-03-01

    …computer-assisted system for fuel reloading while at power was designed for CANDU NPPs (Gertman et al., 1994). AECL has several aids under… for visualizing the interior of a reactor fuel channel to support the removal of stuck fuel bundles. This system is envisioned as a training aid… and vendor documents and correspondence; NRC correspondence and internal memoranda; bulletins and information notices; inspection and…

  18. The computer as referee in the analysis of human small bowel motility.

    Science.gov (United States)

    Benson, M J; Castillo, F D; Wingate, D L; Demetrakopoulos, J; Spyrou, N M

    1993-04-01

    The aim of this study was to determine whether visual analysis of graphic records of small bowel motility is a reliable method of discriminating pressure events caused by bowel wall contraction from those of extraenteric origin and to compare this method with computerized analysis. Each of six independent observers was supplied with the same pair of records of 1 h of fasting diurnal duodenojejunal motility, acquired with a 3-channel ambulant data-logging system; one record included many artifacts due to body movement while the other did not. The observers were asked to identify and classify pressure events and to measure the duration and amplitude of "true" contractions. A computer program for on-line analysis is described; the algorithm was designed to overcome the problems of a variable baseline and sudden changes in pressure due to body movements that are unavoidable in prolonged recording from the small bowel of ambulant subjects. For regular contractions (phase III of the migrating motor complex) there was good agreement between observers, but not for irregular contractions, particularly when movement artifacts were abundant. When the observers were asked to repeat the analysis 6 mo later, there was poor agreement with their original identification of irregular contractions and artifacts. There was, however, good agreement between the computer analysis, which was totally reproducible, and the median decisions of the observer group; this agreement supports the validity of our computer algorithm. We conclude that computer analysis is not merely a valuable ergonomic aid for the analysis of large quantities of data acquired in prolonged ambulatory monitoring, but also that, even for brief recordings, it provides a standard of reproducibility unmatched by "expert" inspection. Visual analysis is unreliable and thus susceptible to subjective bias; this may, in part, account for conflicting reports of small bowel motility under similar conditions reported by different workers in…
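
    The record does not give the algorithm's details, but the two problems it names, a variable baseline and movement-induced pressure changes, suggest a simple sketch: estimate each channel's baseline with a rolling median, then flag threshold crossings that occur on all channels at once as artifacts. Everything below (thresholds, window, synthetic signals) is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 3-channel pressure recording: slowly drifting baseline plus noise.
n = 2000
t = np.arange(n)
signals = 5 * np.sin(t / 400.0) + rng.normal(scale=0.5, size=(3, n))
signals[1, 500:520] += 20      # a "true" contraction on one channel only
signals[:, 1200:1215] += 25    # a movement artifact hits all channels at once

def rolling_median(x, w=101):
    # Simple rolling-median baseline estimate (edges use a shorter window).
    return np.array([np.median(x[max(0, i - w // 2): i + w // 2 + 1])
                     for i in range(len(x))])

baselines = np.array([rolling_median(ch) for ch in signals])
events = (signals - baselines) > 10.0   # threshold over the local baseline

# An event present on all channels simultaneously is treated as an artifact;
# the rest are candidate contractions.
artifact = events.all(axis=0)
true_contraction = events & ~artifact
print("artifact samples:", np.flatnonzero(artifact)[:5], "...")
print("channel-1 contraction samples:", np.flatnonzero(true_contraction[1])[:5], "...")
```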

  19. Modeling Goal-Directed User Exploration in Human-Computer Interaction

    Science.gov (United States)

    2011-02-01

    …is implemented as a LISP program outside the confines of a cognitive architecture. The normalization assumption is implemented by simply normalizing… invoke a LISP function to compute the infoscent of the link with respect to the goal. The LISP function will then update the utilities of the three competing productions (see Section 4.2.1.1) based on the link’s infoscent. This LISP function is an example of a black-box implementation of the…
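
    A schematic of the mechanism described in the fragment above, rendered in Python rather than the report's LISP (the scent measure, utility values, and learning rate are invented): a link's information scent with respect to the goal nudges the utilities of competing productions before one is selected.

```python
# Schematic of the described mechanism (not the report's LISP implementation):
# a link's information scent with respect to the goal updates the utilities
# of three competing productions before one is selected.

def infoscent(link_words: set[str], goal_words: set[str]) -> float:
    # Crude scent measure: word overlap between link text and goal description.
    return len(link_words & goal_words) / max(len(goal_words), 1)

utilities = {"attend-next-link": 0.3, "choose-link": 0.2, "go-back": 0.3}

def update_utilities(scent: float, rate: float = 0.4) -> None:
    # High scent favors choosing the current link; low scent favors backing out.
    utilities["choose-link"] += rate * scent
    utilities["go-back"] += rate * (1.0 - scent)

goal = {"cheap", "laptop", "reviews"}
link = {"budget", "laptop", "reviews", "2011"}
update_utilities(infoscent(link, goal))
print(max(utilities, key=utilities.get), utilities)
```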

  20. Maximal thickness of the normal human pericardium assessed by electron-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Delille, J.P.; Hernigou, A.; Sene, V.; Chatellier, G.; Boudeville, J.C.; Challande, P.; Plainfosse, M.C. [Service de Radiologie Centrale, Hopital Broussais, Paris (France)

    1999-08-01

    The purpose of this study was to determine the maximal value of normal pericardial thickness with an electron-beam computed tomography unit allowing fast scan times of 100 ms to reduce cardiac motion artifacts. Electron-beam computed tomography was performed in 260 patients with hypercholesterolemia and/or hypertension, as these pathologies have no effect on pericardial thickness. The pixel size was 0.5 mm. Measurements could be performed in front of the right ventricle, the right atrioventricular groove, the right atrium, the left ventricle, and the interventricular groove. Maximal thickness of normal pericardium was defined at the 95th percentile. Inter-observer and intra-observer reproducibility studies were assessed from additional CT scans by the Bland and Altman method [24]. The maximal thickness of the normal pericardium was 2 mm for 95% of cases. For the reproducibility studies, there was no significant relationship between the inter-observer and intra-observer measurements, but all pericardial thickness measurements were ≤ 1.6 mm. Using electron-beam computed tomography, which substantially decreases cardiac motion artifacts, the threshold for detection of thickened pericardium is statistically established as 2 mm for 95% of patients with hypercholesterolemia and/or hypertension. However, the available spatial resolution prevents a reproducible measurement of the real thickness of thin pericardium. (orig.) With 6 figs., 1 tab., 31 refs.
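
    Not from the study: the two statistics it relies on, a 95th-percentile cutoff and Bland-Altman limits of agreement, are easy to sketch on fabricated thickness data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Fabricated pericardial thickness measurements (mm) in a normal cohort.
thickness = rng.gamma(shape=8.0, scale=0.15, size=260)

# Upper limit of "normal" defined as the 95th percentile of the sample.
print(f"95th percentile: {np.percentile(thickness, 95):.2f} mm")

# Bland-Altman agreement between two observers measuring the same scans.
obs1 = thickness + rng.normal(scale=0.1, size=260)
obs2 = thickness + rng.normal(scale=0.1, size=260)
diff = obs1 - obs2
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias: {bias:.3f} mm, limits of agreement: ({bias - loa:.3f}, {bias + loa:.3f}) mm")
```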