WorldWideScience

Sample records for recognition time suggests

  1. Gender differences in recognition of toy faces suggest a contribution of experience.

    Science.gov (United States)

    Ryan, Kaitlin F; Gauthier, Isabel

    2016-12-01

    When there is a gender effect, women perform better than men in face recognition tasks. Prior work has not documented a male advantage on a face recognition task, suggesting that women may outperform men at face recognition generally, either for evolutionary reasons or because of the influence of social roles. Here, we question the idea that women excel at all face recognition and provide a proof of concept based on a face category for which men outperform women. We developed a test of face learning to measure individual differences with face categories for which men and women may differ in experience, using the faces of Barbie dolls and of Transformers. The results show a crossover interaction between subject gender and category, where men outperform women with Transformers' faces. We demonstrate that men can outperform women with some categories of faces, suggesting that explanations for a general face recognition advantage for women are in fact not needed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. An individual differences approach to the suggestibility of memory over time.

    Science.gov (United States)

    Frost, Peter; Nussbaum, Gregory; Loconto, Taylor; Syke, Richard; Warren, Casey; Muise, Christina

    2013-04-01

    We examined how certain personality traits might relate to the formation of suggestive memory over time. We hypothesised that compliance and trust relate to initial acceptance of misinformation as memory, whereas fantasy proneness might relate to integration of misinformation into memory after later intervals (relative to the time of exposure to misinformation). Participants watched an excerpt from a movie--the simulated eyewitness event. They next answered a recall test that included embedded misinformation about the movie. Participants then answered a yes/no recognition test. A week later, participants answered a second yes/no recognition test about the movie (each yes/no recognition test included different questions). Before both recognition tests, participants were warned about the misinformation shown during recall and were asked to base their answer on the movie excerpt only. After completing the second recognition test, participants answered questions from the Neuroticism Extroversion Openness Personality Inventory-3 (McCrae, Costa, & Martin, 2005) and Creative Experiences Questionnaire (Merckelbach, Horselenberg, & Muris, 2001). While compliance correlated with misinformation effects immediately after exposure to misinformation, fantasy-prone personality accounted for more of the variability in false recognition rates than compliance after a 1-week interval.

  3. Timely loss recognition and termination of unprofitable projects

    Directory of Open Access Journals (Sweden)

    Anup Srivastava

    2015-09-01

    Full Text Available Ideally, firms should discontinue projects that become unprofitable. Managers, however, continue to operate such projects because of their limited employment horizons and empire-building motivations (Jensen, 1986; Ball, 2001). Prior studies suggest that timely loss recognition in accounting earnings enables lenders, shareholders, and boards of directors to identify unprofitable projects, thereby enabling them to force managers to discontinue such projects before large value erosion occurs. However, this conjecture has not been tested empirically. Consistent with this notion, we find that timely loss recognition increases the likelihood of timely closures of unprofitable projects. Moreover, managers, by announcing late discontinuations of such projects, reveal their inability to select good projects and/or to contain losses when projects turn unprofitable. Accordingly, thereafter, the fund providers and board of directors are likely to demand improved timeliness of loss recognition and stringent scrutiny of firms’ capital expenditure plans. Consistently, we find that firms that announce large discontinuation losses reduce capital expenditures and improve timeliness of loss recognition in subsequent years. Our study provides evidence that timely loss reporting affects “real” economic decisions and creates economic benefits.

  4. Gender differences in recognition of toy faces suggest a contribution of experience

    OpenAIRE

    Ryan, Kaitlin F.; Gauthier, Isabel

    2016-01-01

    When there is a gender effect, women perform better than men in face recognition tasks. Prior work has not documented a male advantage on a face recognition task, suggesting that women may outperform men at face recognition generally either due to evolutionary reasons or the influence of social roles. Here, we question the idea that women excel at all face recognition and provide a proof of concept based on a face category for which men outperform women. We developed a test of face learning t...

  5. Activity Recognition for Personal Time Management

    Science.gov (United States)

    Prekopcsák, Zoltán; Soha, Sugárka; Henk, Tamás; Gáspár-Papanek, Csaba

    We describe an accelerometer based activity recognition system for mobile phones with a special focus on personal time management. We compare several data mining algorithms for the automatic recognition task in the case of single user and multiuser scenario, and improve accuracy with heuristics and advanced data mining methods. The results show that daily activities can be recognized with high accuracy and the integration with the RescueTime software can give good insights for personal time management.

  6. Changes in recognition memory over time: an ERP investigation into vocabulary learning.

    Directory of Open Access Journals (Sweden)

    Shekeila D Palmer

    Full Text Available Although it seems intuitive to assume that recognition memory fades over time when information is not reinforced, some aspects of word learning may benefit from a period of consolidation. In the present study, event-related potentials (ERPs) were used to examine changes in recognition memory responses to familiar and newly learned (novel) words over time. Native English speakers were taught novel words associated with English translations, and subsequently performed a Recognition Memory task in which they made old/new decisions in response to both words (trained word vs. untrained word) and novel words (trained novel word vs. untrained novel word). The Recognition task was performed 45 minutes after training (Day 1) and then repeated the following day (Day 2), with no additional training session in between. For familiar words, the late parietal old/new effect (LPC) distinguished old from new items on both Day 1 and Day 2, although the response to trained items was significantly weaker on Day 2. For novel words, the LPC again distinguished old from new items on both days, but the effect became significantly larger on Day 2. These data suggest that while recognition memory for familiar items may fade over time, recognition of novel items, and conscious recollection in particular, may benefit from a period of consolidation.

  7. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    Science.gov (United States)

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  8. Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.

    Science.gov (United States)

    Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M

    2017-01-01

    Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye-gaze. Methods: In a double-blind, within-subjects, randomized control experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT. This effect was marginally significant. OXT did not increase attention to the eye-region of faces, and eye-gaze was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhanced facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest that the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23-33).

  9. Environmental Sound Recognition Using Time-Frequency Intersection Patterns

    Directory of Open Access Journals (Sweden)

    Xuan Guo

    2012-01-01

    Full Text Available Environmental sound recognition is an important function of robots and intelligent computer systems. In this research, we use a multistage perceptron neural network system for environmental sound recognition. The input data is a combination of time-variance pattern of instantaneous powers and frequency-variance pattern with instantaneous spectrum at the power peak, referred to as a time-frequency intersection pattern. Spectra of many environmental sounds change more slowly than those of speech or voice, so the intersectional time-frequency pattern will preserve the major features of environmental sounds but with drastically reduced data requirements. Two experiments were conducted using an original database and an open database created by the RWCP project. The recognition rate for 20 kinds of environmental sounds was 92%. The recognition rate of the new method was about 12% higher than methods using only an instantaneous spectrum. The results are also comparable with HMM-based methods, although those methods need to treat the time variance of an input vector series with more complicated computations.
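    The feature construction described in this record can be sketched as follows. This is only a rough illustration of the idea (frame length, hop size, and the fixed output lengths are our own arbitrary choices, not the paper's parameters): the frame-power envelope over time is concatenated with the magnitude spectrum taken at the frame of peak power, giving a compact fixed-size input vector for a perceptron classifier.

    ```python
    import numpy as np

    def tf_intersection_pattern(signal, frame_len=256, hop=128, n_time=32, n_freq=32):
        """Time-frequency intersection feature (illustrative parameters):
        power-vs-time curve + spectrum at the power peak."""
        frames = [signal[i:i + frame_len]
                  for i in range(0, len(signal) - frame_len + 1, hop)]
        powers = np.array([np.mean(f ** 2) for f in frames])  # time-variance pattern
        peak = int(np.argmax(powers))
        spectrum = np.abs(np.fft.rfft(frames[peak]))          # spectrum at the power peak
        # Resample both curves to fixed lengths so every sound yields
        # an equal-sized input vector for the neural network.
        t = np.interp(np.linspace(0, len(powers) - 1, n_time),
                      np.arange(len(powers)), powers)
        f = np.interp(np.linspace(0, len(spectrum) - 1, n_freq),
                      np.arange(len(spectrum)), spectrum)
        return np.concatenate([t, f])
    ```

    Because only one spectrum slice is kept, the feature is far smaller than a full spectrogram, which is the data-reduction point the abstract makes about slowly varying environmental sounds.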

  10. Impact of a voice recognition system on report cycle time and radiologist reading time

    Science.gov (United States)

    Melson, David L.; Brophy, Robert; Blaine, G. James; Jost, R. Gilbert; Brink, Gary S.

    1998-07-01

    Because of its exciting potential to improve clinical service, as well as reduce costs, a voice recognition system for radiological dictation was recently installed at our institution. This system will be clinically successful if it dramatically reduces radiology report turnaround time without substantially affecting radiologist dictation and editing time. This report summarizes an observer study currently under way in which radiologist reporting times using the traditional transcription system and the voice recognition system are compared. Four radiologists are observed interpreting portable intensive care unit (ICU) chest examinations at a workstation in the chest reading area. Data are recorded with the radiologists using the transcription system and using the voice recognition system. The measurements distinguish between time spent performing clerical tasks and time spent actually dictating the report. Editing time and the number of corrections made are recorded. Additionally, statistics are gathered to assess the voice recognition system's impact on the report cycle time -- the time from report dictation to availability of an edited and finalized report -- and the length of reports.

  11. Real-Time Hand Posture Recognition Using a Range Camera

    Science.gov (United States)

    Lahamy, Herve

    The basic goal of human computer interaction is to improve the interaction between users and computers by making computers more usable and receptive to the user's needs. Within this context, the use of hand postures in replacement of traditional devices such as keyboards, mice and joysticks is being explored by many researchers. The goal is to interpret human postures via mathematical algorithms. Hand posture recognition has gained popularity in recent years, and could become the future tool for humans to interact with computers or virtual environments. An exhaustive description of the frequently used methods available in the literature for hand posture recognition is provided. It focuses on the different types of sensors and data used, the segmentation and tracking methods, the features used to represent the hand postures as well as the classifiers considered in the recognition process. Those methods are usually presented as highly robust with a recognition rate close to 100%. However, a couple of critical points necessary for a successful real-time hand posture recognition system require major improvement. Those points include the features used to represent the hand segment, the number of postures simultaneously recognizable, the invariance of the features with respect to rotation, translation and scale, and also the behavior of the classifiers against non-perfect hand segments, for example, segments including part of the arm or missing part of the palm. A 3D time-of-flight camera named SR4000 has been chosen to develop a new methodology because of its capability to provide in real-time and at high frame rate 3D information on the scene imaged. This sensor has been described and evaluated for its capability for capturing in real-time a moving hand. A new recognition method that uses the 3D information provided by the range camera to recognize hand postures has been proposed. The different steps of this methodology including the segmentation, the tracking, the hand…

  12. Real-time embedded face recognition for smart home

    NARCIS (Netherlands)

    Zuo, F.; With, de P.H.N.

    2005-01-01

    We propose a near real-time face recognition system for embedding in consumer applications. The system is embedded in a networked home environment and enables personalized services by automatic identification of users. The aim of our research is to design and build a face recognition system that is…

  13. New technique for real-time distortion-invariant multiobject recognition and classification

    Science.gov (United States)

    Hong, Rutong; Li, Xiaoshun; Hong, En; Wang, Zuyi; Wei, Hongan

    2001-04-01

    A real-time hybrid distortion-invariant OPR (optical pattern recognition) system was established to perform 3D multiobject distortion-invariant automatic pattern recognition. A wavelet transform technique was used for digital preprocessing of the input scene, to suppress the noisy background and enhance the recognized object. A three-layer backpropagation artificial neural network was used in correlation signal post-processing to perform multiobject distortion-invariant recognition and classification. The C-80 and NOA real-time processing capability and multithread programming technology were used to perform high-speed parallel multitask processing and speed up the post-processing rate for ROIs. The reference filter library was constructed for the distortion versions of 3D object model images based on distortion parameter tolerance measures such as rotation, azimuth and scale. Real-time optical correlation recognition testing of this OPR system demonstrates that, using the preprocessing, post-processing, the nonlinear algorithm of optimum filtering, the RFL construction technique and multithread programming technology, a high recognition probability and recognition rate were obtained for the real-time multiobject distortion-invariant OPR system. The recognition reliability and rate were improved greatly. These techniques are very useful for automatic target recognition.

  14. Experience moderates overlap between object and face recognition, suggesting a common ability.

    Science.gov (United States)

    Gauthier, Isabel; McGugin, Rankin W; Richler, Jennifer J; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E

    2014-07-03

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. © 2014 ARVO.

  15. Towards Real-Time Speech Emotion Recognition for Affective E-Learning

    Science.gov (United States)

    Bahreini, Kiavash; Nadolski, Rob; Westera, Wim

    2016-01-01

    This paper presents the voice emotion recognition part of the FILTWAM framework for real-time emotion recognition in affective e-learning settings. FILTWAM (Framework for Improving Learning Through Webcams And Microphones) intends to offer timely and appropriate online feedback based upon learner's vocal intonations and facial expressions in order…

  16. Time course of Chinese monosyllabic spoken word recognition: evidence from ERP analyses.

    Science.gov (United States)

    Zhao, Jingjing; Guo, Jingjing; Zhou, Fengying; Shu, Hua

    2011-06-01

    Evidence from event-related potential (ERP) analyses of English spoken words suggests that the time course of English word recognition in monosyllables is cumulative. Different types of phonological competitors (i.e., rhymes and cohorts) modulate the temporal grain of ERP components differentially (Desroches, Newman, & Joanisse, 2009). The time course of Chinese monosyllabic spoken word recognition could be different from that of English due to the differences in syllable structure between the two languages (e.g., lexical tones). The present study investigated the time course of Chinese monosyllabic spoken word recognition using ERPs to record brain responses online while subjects listened to spoken words. During the experiment, participants were asked to compare a target picture with a subsequent picture by judging whether or not these two pictures belonged to the same semantic category. The spoken word was presented between the two pictures, and participants were not required to respond during its presentation. We manipulated phonological competition by presenting spoken words that either matched or mismatched the target picture in one of the following four ways: onset mismatch, rime mismatch, tone mismatch, or syllable mismatch. In contrast to the English findings, our findings showed that the three partial mismatches (onset, rime, and tone mismatches) equally modulated the amplitudes and time courses of the N400 (a negative component that peaks about 400ms after the spoken word), whereas, the syllable mismatched words elicited an earlier and stronger N400 than the three partial mismatched words. The results shed light on the important role of syllable-level awareness in Chinese spoken word recognition and also imply that the recognition of Chinese monosyllabic words might rely more on global similarity of the whole syllable structure or syllable-based holistic processing rather than phonemic segment-based processing. 
We interpret the differences in spoken word…

  17. Cough Recognition Based on Mel Frequency Cepstral Coefficients and Dynamic Time Warping

    Science.gov (United States)

    Zhu, Chunmei; Liu, Baojun; Li, Ping

    Cough recognition provides important clinical information for the treatment of many respiratory diseases, but the assessment of cough frequency over a long period of time remains unsatisfied for either clinical or research purpose. In this paper, according to the advantage of dynamic time warping (DTW) and the characteristic of cough recognition, an attempt is made to adapt DTW as the recognition algorithm for cough recognition. The process of cough recognition based on mel frequency cepstral coefficients (MFCC) and DTW is introduced. Experiment results of testing samples from 3 subjects show that acceptable performances of cough recognition are obtained by DTW with a small training set.
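    The DTW matching step used in this record can be sketched generically as below. This is textbook dynamic time warping with a Euclidean frame distance and a nearest-template classifier, not the authors' exact implementation; the `classify` helper and its template format are illustrative. In practice the frame vectors would be MFCC vectors per audio frame.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic time warping distance between two feature sequences
        (each a sequence of frame vectors, e.g. MFCC frames)."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
                # extend the cheapest of the three allowed warping steps
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def classify(query, templates):
        """Label the query with the nearest training template under DTW.
        templates: list of (label, frame_sequence) pairs."""
        return min(templates, key=lambda lt: dtw_distance(query, lt[1]))[0]
    ```

    The appeal of DTW here, as the abstract notes, is that it needs only a small training set of labeled templates, unlike statistical models that must be trained on large corpora.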

  18. Haar-like Features for Robust Real-Time Face Recognition

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2013-01-01

    Face recognition is still a very challenging task when the input face image is noisy, occluded by some obstacles, of very low resolution, not facing the camera, and not properly illuminated. These problems make the feature extraction and consequently the face recognition system unstable. … The proposed system in this paper introduces the novel idea of using Haar-like features, which have commonly been used for object detection, along with a probabilistic classifier for face recognition. The proposed system is simple, real-time, effective and robust against most of the mentioned problems. … Experimental results on public databases show that the proposed system indeed outperforms the state-of-the-art face recognition systems. …

  19. Conducting spoken word recognition research online: Validation and a new timing method.

    Science.gov (United States)

    Slote, Joseph; Strand, Julia F

    2016-06-01

    Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.

  20. Gliding and Saccadic Gaze Gesture Recognition in Real Time

    DEFF Research Database (Denmark)

    Rozado, David; San Agustin, Javier; Rodriguez, Francisco

    2012-01-01

    … and their corresponding real-time recognition algorithms: Hierarchical Temporal Memory networks and the Needleman-Wunsch algorithm for sequence alignment. Our results show how a specific combination of gaze gesture modality, namely saccadic gaze gestures, and recognition algorithm, Needleman-Wunsch, allows for reliable … usage of intentional gaze gestures to interact with a computer with accuracy rates of up to 98% and acceptable completion speed. Furthermore, the gesture recognition engine does not interfere with otherwise standard human-machine gaze interaction, therefore generating very low false positive rates. …
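    Needleman-Wunsch sequence alignment, the algorithm this record pairs with saccadic gestures, can be sketched as follows. The token alphabet (quantized saccade directions), scoring values, and acceptance threshold are illustrative assumptions, not the paper's parameters.

    ```python
    def needleman_wunsch(seq1, seq2, match=1, mismatch=-1, gap=-1):
        """Global alignment score between two token sequences, e.g.
        quantized saccade directions such as "U", "D", "L", "R"."""
        n, m = len(seq1), len(seq2)
        # score[i][j]: best alignment score of seq1[:i] vs seq2[:j]
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap
        for j in range(1, m + 1):
            score[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i - 1][j - 1] + (match if seq1[i - 1] == seq2[j - 1] else mismatch)
                score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
        return score[n][m]

    def best_gesture(observed, templates, threshold=2):
        """Return the template name with the highest alignment score,
        or None (reject) when nothing scores above the threshold --
        rejection is what keeps false positives low during normal gaze."""
        name, s = max(((k, needleman_wunsch(observed, v)) for k, v in templates.items()),
                      key=lambda kv: kv[1])
        return name if s >= threshold else None
    ```

    Because alignment tolerates insertions and deletions, a gesture is still recognized when the tracker drops a saccade or inserts a spurious one, which plausibly underlies the robustness reported above.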

  1. Novel methods for real-time 3D facial recognition

    OpenAIRE

    Rodrigues, Marcos; Robinson, Alan

    2010-01-01

    In this paper we discuss our approach to real-time 3D face recognition. We argue the need for real time operation in a realistic scenario and highlight the required pre- and post-processing operations for effective 3D facial recognition. We focus attention to some operations including face and eye detection, and fast post-processing operations such as hole filling, mesh smoothing and noise removal. We consider strategies for hole filling such as bilinear and polynomial interpolation and Lapla...

  2. Recognition of Action as a Bayesian Parameter Estimation Problem over Time

    DEFF Research Database (Denmark)

    Krüger, Volker

    2007-01-01

    In this paper we will discuss two problems related to action recognition: The first problem is that of identifying, in a surveillance scenario, whether a person is walking or running and in what rough direction. The second problem is concerned with the recovery of action primitives from observed … complex actions. Both problems will be discussed within a statistical framework. Bayesian propagation over time offers a framework to treat likelihood observations at each time step and the dynamics between the time steps in a unified manner. The first problem will be approached as a pattern recognition … of the Bayesian framework for action recognition and round up our discussion. …

  3. On the Time Course of Vocal Emotion Recognition

    Science.gov (United States)

    Pell, Marc D.; Kotz, Sonja A.

    2011-01-01

    How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically-anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval, in a successive, blocked presentation format. Analyses looked at how recognition of each emotion evolves as an utterance unfolds and estimated the “identification point” for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing. PMID:22087275

  4. Assessment of Time-Lapse in Visible and Thermal Face Recognition

    Czech Academy of Sciences Publication Activity Database

    Farokhi, Sajad; Shamsuddin, Siti Mariyam; Flusser, Jan; Sheikh, Usman Ullah

    2012-01-01

    Vol. 6, No. 1 (2012), pp. 181-186. R&D Projects: GA ČR GAP103/11/1552. Institutional support: RVO:67985556. Keywords: face recognition * moment invariants * Zernike moments. Subject RIV: JD - Computer Applications, Robotics. http://library.utia.cas.cz/separaty/2012/ZOI/flusser-assessment of time-lapse in visible and thermal face recognition -j.pdf

  5. Real-Time (Vision-Based) Road Sign Recognition Using an Artificial Neural Network

    Science.gov (United States)

    Islam, Kh Tohidul; Raj, Ram Gopal

    2017-01-01

    Road sign recognition is a driver support function that can be used to notify and warn the driver by showing the restrictions that may be effective on the current stretch of road. Examples for such regulations are ‘traffic light ahead’ or ‘pedestrian crossing’ indications. The present investigation targets the recognition of Malaysian road and traffic signs in real-time. Real-time video is taken by a digital camera from a moving vehicle and real-world road signs are then extracted using vision-only information. The system is based on two stages: one performs the detection and the other the recognition. In the first stage, a hybrid color segmentation algorithm has been developed and tested. In the second stage, a robust custom feature extraction method, introduced here, is used for the first time in a road sign recognition approach. Finally, a multilayer artificial neural network (ANN) has been created to recognize and interpret various road signs. It is robust because it has been tested on both standard and non-standard road signs with significant recognition accuracy. This proposed system achieved an average of 99.90% accuracy with 99.90% sensitivity, 99.90% specificity, 99.90% f-measure, and a 0.001 false positive rate (FPR) with 0.3 s computational time. This low FPR can increase the system stability and dependability in real-time applications. PMID:28406471

  6. Real-Time (Vision-Based) Road Sign Recognition Using an Artificial Neural Network.

    Science.gov (United States)

    Islam, Kh Tohidul; Raj, Ram Gopal

    2017-04-13

    Road sign recognition is a driver support function that can be used to notify and warn the driver by showing the restrictions that may be effective on the current stretch of road. Examples for such regulations are 'traffic light ahead' or 'pedestrian crossing' indications. The present investigation targets the recognition of Malaysian road and traffic signs in real-time. Real-time video is taken by a digital camera from a moving vehicle and real-world road signs are then extracted using vision-only information. The system is based on two stages: one performs the detection and the other the recognition. In the first stage, a hybrid color segmentation algorithm has been developed and tested. In the second stage, a robust custom feature extraction method, introduced here, is used for the first time in a road sign recognition approach. Finally, a multilayer artificial neural network (ANN) has been created to recognize and interpret various road signs. It is robust because it has been tested on both standard and non-standard road signs with significant recognition accuracy. This proposed system achieved an average of 99.90% accuracy with 99.90% sensitivity, 99.90% specificity, 99.90% f-measure, and a 0.001 false positive rate (FPR) with 0.3 s computational time. This low FPR can increase the system stability and dependability in real-time applications.
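    The figures reported in this record follow from standard confusion-matrix definitions; a minimal sketch (the function name and return layout are our own):

    ```python
    def binary_metrics(tp, fp, tn, fn):
        """Sensitivity (recall), specificity, precision, F-measure, and
        false positive rate from a detector's confusion-matrix counts."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        precision = tp / (tp + fp)
        f_measure = 2 * precision * sensitivity / (precision + sensitivity)
        fpr = fp / (fp + tn)  # note: FPR = 1 - specificity
        return {"sensitivity": sensitivity, "specificity": specificity,
                "precision": precision, "f_measure": f_measure, "fpr": fpr}
    ```

    The abstract's numbers are internally consistent under these definitions: an FPR of 0.001 is exactly 1 minus the reported 99.90% specificity.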

  7. Effects of study time and meaningfulness on environmental context-dependent recognition.

    Science.gov (United States)

    Isarida, Takeo; Isarida, Toshiko K; Sakai, Tetsuya

    2012-11-01

    In two experiments, we examined whether the size of place-context-dependent recognition decreased with study time and with the meaningfulness of the to-be-remembered materials. A group of 80 undergraduates intentionally studied a list of words in a short (1.5 s per item) or a long (4.0 s per item) study-time condition (Exp. 1). Another 40 undergraduates studied lists consisting of words and nonwords in the long-study-time condition (Exp. 2). After a short retention interval, recognition for the targets was tested in the same or in a different context. Context was manipulated by means of the combination of place, subsidiary task, and experimenter. Significant context-dependent recognition discrimination was found for words in the short-study-time condition (Exp. 1), but not in the long-study-time condition (Exps. 1 and 2). Significant effects were found as well for nonwords, even in the long-study-time condition (Exp. 2). These results are explained well by an outshining account: that is, by principles of outshining and encoding specificity.

  8. Effect of Time Delay on Recognition Memory for Pictures: The Modulatory Role of Emotion

    Science.gov (United States)

    Wang, Bo

    2014-01-01

    This study investigated the modulatory role of emotion in the effect of time delay on recognition memory for pictures. Participants viewed neutral, positive and negative pictures, and took a recognition memory test 5 minutes, 24 hours, or 1 week after learning. The findings are: 1) For neutral, positive and negative pictures, overall recognition accuracy in the 5-min delay did not significantly differ from that in the 24-h delay. For neutral and positive pictures, overall recognition accuracy in the 1-week delay was lower than in the 24-h delay; for negative pictures, overall recognition in the 24-h and 1-week delay did not significantly differ. Therefore, negative emotion modulates the effect of time delay on recognition memory, maintaining retention of overall recognition accuracy only within a certain time frame. 2) For the three types of pictures, recollection and familiarity in the 5-min delay did not significantly differ from that in the 24-h and the 1-week delay. Thus emotion does not appear to modulate the effect of time delay on recollection and familiarity. However, recollection in the 24-h delay was higher than in the 1-week delay, whereas familiarity in the 24-h delay was lower than in the 1-week delay. PMID:24971457

  9. Investigation of Time Series Representations and Similarity Measures for Structural Damage Pattern Recognition

    Science.gov (United States)

    Swartz, R. Andrew

    2013-01-01

    This paper investigates the time series representation methods and similarity measures for sensor data feature extraction and structural damage pattern recognition. Both model-based time series representation and dimensionality reduction methods are studied to compare the effectiveness of feature extraction for damage pattern recognition. The evaluation of feature extraction methods is performed by examining the separation of feature vectors among different damage patterns and the pattern recognition success rate. In addition, the impact of similarity measures on the pattern recognition success rate and the metrics for damage localization are also investigated. The test data used in this study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case datasets and damage test data with different damage modalities are used. The simulation results show that both time series representation methods and similarity measures have significant impact on the pattern recognition success rate. PMID:24191136

  10. Investigation of Time Series Representations and Similarity Measures for Structural Damage Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Wenjia Liu

    2013-01-01

    This paper investigates the time series representation methods and similarity measures for sensor data feature extraction and structural damage pattern recognition. Both model-based time series representation and dimensionality reduction methods are studied to compare the effectiveness of feature extraction for damage pattern recognition. The evaluation of feature extraction methods is performed by examining the separation of feature vectors among different damage patterns and the pattern recognition success rate. In addition, the impact of similarity measures on the pattern recognition success rate and the metrics for damage localization are also investigated. The test data used in this study are from the System Identification to Monitor Civil Engineering Structures (SIMCES) Z24 Bridge damage detection tests, a rigorous instrumentation campaign that recorded the dynamic performance of a concrete box-girder bridge under progressively increasing damage scenarios. A number of progressive damage test case datasets and damage test data with different damage modalities are used. The simulation results show that both time series representation methods and similarity measures have significant impact on the pattern recognition success rate.
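
    The abstract's central point, that the choice of similarity measure affects pattern recognition over feature vectors, can be made concrete with a toy 1-nearest-neighbour classifier. The two distance functions below are standard choices; the template labels and feature values are invented for illustration, not taken from the SIMCES data.

```python
import math

def euclidean(a, b):
    """Plain Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def correlation_distance(a, b):
    """1 - Pearson correlation: invariant to scale and offset of the
    features (assumes neither vector is constant)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return 1 - cov / (sa * sb)

def nearest_pattern(query, templates, dist):
    """1-nearest-neighbour damage pattern recognition:
    return the label of the closest stored feature vector."""
    return min(templates, key=lambda label: dist(query, templates[label]))

# Hypothetical damage-pattern templates (labels and values invented).
templates = {"undamaged": [1.0, 2.0, 3.0, 4.0],
             "pier_settlement": [4.0, 3.0, 2.0, 1.0]}
query = [2.0, 4.0, 6.0, 8.0]  # a scaled version of the undamaged pattern
print(nearest_pattern(query, templates, euclidean))
print(nearest_pattern(query, templates, correlation_distance))
```

Both measures agree on this easy query, but correlation distance is exactly zero for any rescaled copy of a template, which is why the measure chosen can change success rates when sensor gains vary.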

  11. Using Constant Time Delay to Teach Braille Word Recognition

    Science.gov (United States)

    Hooper, Jonathan; Ivy, Sarah; Hatton, Deborah

    2014-01-01

    Introduction: Constant time delay has been identified as an evidence-based practice to teach print sight words and picture recognition (Browder, Ahlbrim-Delzell, Spooner, Mims, & Baker, 2009). For the study presented here, we tested the effectiveness of constant time delay to teach new braille words. Methods: A single-subject multiple baseline…

  12. The time course of spoken word recognition in Mandarin Chinese: a unimodal ERP study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen; Zhang, Qin; Guo, Chunyan

    2014-10-01

    In the present study, two experiments were carried out to investigate the time course of spoken word recognition in Mandarin Chinese using both event-related potentials (ERPs) and behavioral measures. To address the hypothesis that there is an early phonological processing stage independent of semantics during spoken word recognition, a unimodal word-matching paradigm was employed, in which both prime and target words were presented auditorily. Experiment 1 manipulated the phonological relations between disyllabic primes and targets, and found an enhanced P2 (200-270 ms post-target onset) as well as a smaller early N400 to word-initial phonological mismatches over fronto-central scalp sites. Experiment 2 manipulated both phonological and semantic relations between monosyllabic primes and targets, and replicated the phonological mismatch-associated P2, which was not modulated by semantic relations. Overall, these results suggest that P2 is a sensitive electrophysiological index of early phonological processing independent of semantics in Mandarin Chinese spoken word recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. An HMM-Like Dynamic Time Warping Scheme for Automatic Speech Recognition

    Directory of Open Access Journals (Sweden)

    Ing-Jr Ding

    2014-01-01

    In the past, the kernel of automatic speech recognition (ASR) was dynamic time warping (DTW), which is feature-based template matching and belongs to the category of dynamic programming (DP) techniques. Although DTW is an early developed ASR technique, it has remained popular in many applications, and it now plays an important role in the well-known Kinect-based gesture recognition application. This paper proposes an intelligent speech recognition system using an improved DTW approach for multimedia and home automation services. The improved DTW presented in this work, called HMM-like DTW, is essentially a hidden Markov model (HMM)-like method in which the concept of the typical HMM statistical model is brought into the design of DTW. The developed HMM-like DTW method, transforming feature-based DTW recognition into model-based DTW recognition, is able to behave like the HMM recognition technique, and therefore the proposed HMM-like DTW with its HMM-like recognition model has the capability to further perform model adaptation (also known as speaker adaptation). A series of experimental results in home automation-based multimedia access service environments demonstrated the superiority and effectiveness of the developed smart speech recognition system by HMM-like DTW.
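
    The feature-based DTW core that the HMM-like extension builds on is standard dynamic programming. A minimal sketch follows (the paper's HMM-like model and adaptation step are not reproduced; the cost function here is a simple absolute difference on scalar features, an assumption for illustration):

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic time warping distance between two sequences,
    computed by dynamic programming over an (n+1) x (m+1) cost grid."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j],      # insertion
                D[i][j - 1],      # deletion
                D[i - 1][j - 1])  # match
    return D[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

Template matching then labels an utterance with the template of minimum DTW distance; the HMM-like variant replaces the fixed templates with a statistical model that can be adapted per speaker.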

  14. 4D Unconstrained Real-time Face Recognition Using a Commodity Depth Camera

    NARCIS (Netherlands)

    Schimbinschi, Florin; Wiering, Marco; Mohan, R.E.; Sheba, J.K.

    2012-01-01

    Robust unconstrained real-time face recognition still remains a challenge today. The recent addition to the market of lightweight commodity depth sensors brings new possibilities for human-machine interaction and therefore face recognition. This article accompanies the reader through a succinct

  15. A Dynamic Time Warping Approach to Real-Time Activity Recognition for Food Preparation

    Science.gov (United States)

    Pham, Cuong; Plötz, Thomas; Olivier, Patrick

    We present a dynamic time warping based activity recognition system for the analysis of low-level food preparation activities. Accelerometers embedded into kitchen utensils provide continuous sensor data streams while people are using them for cooking. The recognition framework analyzes frames of contiguous sensor readings in real-time with low latency. It thereby adapts to the idiosyncrasies of utensil use by automatically maintaining a template database. We demonstrate the effectiveness of the classification approach by a number of real-world practical experiments on a publicly available dataset. The adaptive system shows superior performance compared to a static recognizer. Furthermore, we demonstrate the generalization capabilities of the system by gradually reducing the amount of training samples. The system achieves excellent classification results even if only a small number of training samples is available, which is especially relevant for real-world scenarios.

  16. Electrophysiological assessment of the time course of bilingual visual word recognition: Early access to language membership.

    Science.gov (United States)

    Yiu, Loretta K; Pitts, Michael A; Canseco-Gonzalez, Enriqueta

    2015-08-01

    Previous research examining the time course of lexical access during word recognition suggests that phonological processing precedes access to semantic information, which in turn precedes access to syntactic information. Bilingual word recognition likely requires an additional level: knowledge of which language a specific word belongs to. Using the recording of event-related potentials, we investigated the time course of access to language membership information relative to semantic (Experiment 1) and syntactic (Experiment 2) encoding during visual word recognition. In Experiment 1, Spanish-English bilinguals viewed a series of printed words while making dual-choice go/nogo and left/right hand decisions based on semantic (whether the word referred to an animal or an object) and language membership information (whether the word was in English or in Spanish). Experiment 2 used a similar paradigm but with syntactic information (whether the word was a noun or a verb) as one of the response contingencies. The onset and peak latency of the N200, a component related to response inhibition, indicated that language information is accessed earlier than semantic information. Similarly, language information was also accessed earlier than syntactic information (but only based on peak latency). We discuss these findings with respect to models of bilingual word recognition and language comprehension in general. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Adaptive pattern recognition in real-time video-based soccer analysis

    DEFF Research Database (Denmark)

    Schlipsing, Marc; Salmen, Jan; Tschentscher, Marc

    2017-01-01

    Computer-aided sports analysis is demanded by coaches and the media. Image processing and machine learning techniques that allow for "live" recognition and tracking of players exist. But these methods are far from collecting and analyzing event data fully autonomously. To generate accurate results, human interaction is required at different stages including system setup, calibration, supervision of classifier training, and resolution of tracking conflicts. Furthermore, the real-time constraints are challenging: in contrast to other object recognition and tracking applications, we cannot treat data … are taken into account. Our contribution is twofold: (1) the deliberate use of machine learning and pattern recognition techniques allows us to achieve high classification accuracy in varying environments. We systematically evaluate combinations of image features and learning machines in the given online …

  18. Real-time Multiresolution Crosswalk Detection with Walk Light Recognition for the Blind

    Directory of Open Access Journals (Sweden)

    ROMIC, K.

    2018-02-01

    Real-time image processing and object detection techniques have great potential to be applied in digital assistive tools for blind and visually impaired persons. In this paper, an algorithm for crosswalk detection and walk light recognition is proposed, with the main aim of helping a blind person when crossing the road. The proposed algorithm is optimized to work in real time on portable devices using standard cameras. Images captured by the camera are processed while the person is moving, and a decision about a detected crosswalk is provided as output, along with information about the walk light if one is present. The crosswalk detection method is based on multiresolution morphological image processing, while the walk light recognition is performed by a proposed 6-stage algorithm. The main contributions of this paper are accurate crosswalk detection with small processing time due to multiresolution processing, and the recognition of walk lights covering only a small number of pixels in the image. The experiment is conducted using images from video sequences captured in realistic situations on crossings. The results show 98.3% correct crosswalk detections and 89.5% correct walk light recognitions, with an average processing speed of about 16 frames per second.

  19. Unsupervised Learning of Digit Recognition Using Spike-Timing-Dependent Plasticity

    Directory of Open Access Journals (Sweden)

    Peter U. Diehl

    2015-08-01

    In order to understand how the mammalian neocortex performs computations, two things are necessary: we need a good understanding of the available neuronal processing units and mechanisms, and we need to gain a better understanding of how those mechanisms are combined to build functioning systems. Therefore, in recent years there has been increasing interest in how spiking neural networks (SNNs) can be used to perform complex computations or solve pattern recognition tasks. However, it remains a challenging task to design SNNs that use biologically plausible mechanisms (especially for learning new patterns), since most such SNN architectures rely on training in a rate-based network and subsequent conversion to an SNN. We present an SNN for digit recognition which is based on mechanisms with increased biological plausibility, i.e., conductance-based instead of current-based synapses, spike-timing-dependent plasticity with time-dependent weight change, lateral inhibition, and an adaptive spiking threshold. Unlike most other systems, we do not use a teaching signal and do not present any class labels to the network. Using this unsupervised learning scheme, our architecture achieves 95% accuracy on the MNIST benchmark, which is better than previous SNN implementations without supervision. The fact that we used no domain-specific knowledge points toward the general applicability of our network design. Also, the performance of our network scales well with the number of neurons used and shows similar performance for four different learning rules, indicating robustness of the full combination of mechanisms, which suggests applicability in heterogeneous biological neural networks.
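
    The spike-timing-dependent plasticity at the heart of such networks is conventionally sketched as a pair-based exponential learning window. The constants below are illustrative defaults, not the paper's (which additionally uses conductance-based synapses, lateral inhibition, and an adaptive threshold):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for one spike pair, with
    dt = t_post - t_pre in milliseconds. Pre-before-post (dt > 0)
    potentiates; post-before-pre depresses. Both effects decay
    exponentially with |dt|. Constants are illustrative assumptions."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # potentiation
    return -a_minus * math.exp(dt / tau)       # depression
```

Repeatedly applying such updates to the synapses of the most active (winning) neuron lets each neuron's weights drift toward a frequently presented input pattern without any teaching signal.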

  20. Access Control System Based on Real-Time Face Recognition and Gender Information

    Directory of Open Access Journals (Sweden)

    Putri Nurmala

    2015-06-01

    Face recognition with gender information is a computer application for automatically identifying or verifying a person's face captured by a camera. It is usually used in access control systems, and it can be compared to other biometrics such as fingerprint or iris identification systems. Many face recognition algorithms have been developed in recent years. The face recognition and gender information system described here is based on the Principal Component Analysis (PCA) method. This computational method is simple and fast compared with methods that require extensive learning, such as artificial neural networks. In this access control system, a relay and an Arduino controller are used. This essay focuses on face recognition and gender-based information in real time using the Principal Component Analysis (PCA) method. The result achieved from the application design is the identification of a person's face and gender using PCA. The face recognition system using PCA achieved good results, with an 85% success rate on face images tested with several people and a fairly high degree of accuracy.
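
    The PCA step the abstract relies on is the classic eigenfaces construction: project mean-centred face vectors onto the top principal components, then recognise by comparing weight vectors. The sketch below uses random toy data; the array sizes, the SVD route to the components, and all names are our illustrative choices, not details from the paper.

```python
import numpy as np

def pca_eigenfaces(faces, k):
    """Top-k principal components ("eigenfaces") of a stack of
    flattened face images, via SVD of the mean-centred data matrix."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]          # rows of vt are orthonormal axes

def project(face, mean, components):
    """Weights of one face in eigenface space; recognition is then a
    nearest-neighbour search over these low-dimensional weights."""
    return components @ (face - mean)

rng = np.random.default_rng(0)
faces = rng.normal(size=(10, 64))   # 10 toy "faces" of 64 pixels each
mean, comps = pca_eigenfaces(faces, k=3)
weights = project(faces[0], mean, comps)
print(weights.shape)  # (3,)
```

A gallery of enrolled faces is stored as weight vectors; an access decision compares the probe's weights against the gallery under a distance threshold.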

  1. A Study on Efficient Robust Speech Recognition with Stochastic Dynamic Time Warping

    OpenAIRE

    孫, 喜浩

    2014-01-01

    In recent years, great progress has been made in automatic speech recognition (ASR) systems. The hidden Markov model (HMM) and dynamic time warping (DTW) are the two main algorithms that have been widely applied to ASR systems. Although the HMM technique achieves higher recognition accuracy in both clear and noisy speech environments, it needs a large set of words and its algorithm is more complex to realize. Thus, more and more researchers have focused on DTW-based ASR systems. Dynamic time warpin...

  2. Role of short-time acoustic temporal fine structure cues in sentence recognition for normal-hearing listeners.

    Science.gov (United States)

    Hou, Limin; Xu, Li

    2018-02-01

    Short-time processing was employed to manipulate the amplitude, bandwidth, and temporal fine structure (TFS) in sentences. Fifty-two native-English-speaking, normal-hearing listeners participated in four sentence-recognition experiments. Results showed that recovered envelope (E) played an important role in speech recognition when the bandwidth was > 1 equivalent rectangular bandwidth. Removing TFS drastically reduced sentence recognition. Preserving TFS greatly improved sentence recognition when amplitude information was available at a rate ≥ 10 Hz (i.e., time segment ≤ 100 ms). Therefore, the short-time TFS facilitates speech perception together with the recovered E and works with the coarse amplitude cues to provide useful information for speech recognition.
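
    The envelope (E) and temporal fine structure (TFS) manipulated in these experiments are conventionally obtained from the analytic signal. Below is a sketch using an FFT-based Hilbert transform in plain NumPy (equivalent in effect to `scipy.signal.hilbert`), applied to a toy amplitude-modulated tone; the signal parameters are illustrative, not the study's stimuli.

```python
import numpy as np

def envelope_and_tfs(x):
    """Split a real signal into its Hilbert envelope (slowly varying
    amplitude) and temporal fine structure (unit-amplitude carrier)."""
    n = len(x)
    X = np.fft.fft(x)
    # Build the analytic signal: zero negative frequencies, double positives.
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    analytic = np.fft.ifft(X * h)
    env = np.abs(analytic)              # envelope E
    tfs = np.cos(np.angle(analytic))    # temporal fine structure
    return env, tfs

t = np.linspace(0, 1, 1000, endpoint=False)        # 1 s at 1 kHz
x = (1 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.cos(2 * np.pi * 100 * t)
env, tfs = envelope_and_tfs(x)
```

By construction `env * tfs` reconstructs the original signal, so TFS-only conditions amount to flattening `env`, and E-only conditions amount to discarding `tfs`.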

  3. Real-time traffic sign recognition based on a general purpose GPU and deep-learning.

    Science.gov (United States)

    Lim, Kwangyong; Hong, Yongwon; Choi, Yeongwoo; Byun, Hyeran

    2017-01-01

    We present a General Purpose Graphics Processing Unit (GPGPU) based real-time traffic sign detection and recognition method that is robust against illumination changes. There have been many approaches to traffic sign recognition in various research fields; however, previous approaches faced several limitations when under low illumination or wide variance of light conditions. To overcome these drawbacks and improve processing speeds, we propose a method that 1) is robust against illumination changes, 2) uses GPGPU-based real-time traffic sign detection, and 3) performs region detecting and recognition using a hierarchical model. This method produces stable results in low illumination environments. Both detection and hierarchical recognition are performed in real-time, and the proposed method achieves 0.97 F1-score on our collective dataset, which uses the Vienna convention traffic rules (Germany and South Korea).

  4. An Investigation of a New Social Networks Contact Suggestion Based on Face Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Ivan Zelinka

    2016-01-01

    Automated comparison of faces in photographs is a well-established discipline. The main aim of this paper is to describe an approach whereby face recognition can be used in the suggestion of new contacts. New contact suggestion is a common technique used across all main social networks. Our approach uses a freely available face comparison service called "Betaface" together with our automated processing of the user's Facebook profile. The research's main points of interest are the comparison of friends' facial images in the social network itself, how to process such a great amount of photos, and what additional sources of data should be used. In this approach we applied our automated processing with Betaface to the Facebook social network, and the Flickr social network was used for additional data. The results and their quality are discussed at the end.

  5. Real time biometric surveillance with gait recognition

    Science.gov (United States)

    Mohapatra, Subasish; Swain, Anisha; Das, Manaswini; Mohanty, Subhadarshini

    2018-04-01

    Biometric surveillance has become indispensable for every system in recent years. Biometric authentication, identification, and screening are widely used in various domains for preventing unauthorized access. A large amount of data needs to be updated, segregated, and safeguarded from malicious software and misuse. Biometrics are the intrinsic characteristics of each individual. Currently, fingerprints, irises, passwords, unique keys, and cards are commonly used for authentication purposes. These methods have various issues related to security and confidentiality, and such systems are not yet automated enough to provide safety and security. The gait recognition system is an alternative for overcoming the drawbacks of recent biometric-based authentication systems. Gait recognition is newer, as it has not yet been implemented in real-world scenarios. It is an unintrusive system that requires no knowledge or cooperation of the subject. Gait is a unique behavioral characteristic of every human being that is hard to imitate. The walking style of an individual, teamed with the orientation of joints in the skeletal structure and the inclinations between them, imparts this unique characteristic. A person can alter their external appearance but not their skeletal structure. These are real-time, automatic systems that can even process low-resolution images and video frames. In this paper, we propose a gait recognition system and compare its performance with conventional biometric identification systems.

  6. Obstetric vesico-vaginal fistula is preventable by timely recognition ...

    African Journals Online (AJOL)

    Prevention of obstetric fistula should include universal access to maternity care, recognition and timely correction of abnormal progress of labour and punctilious attention to bladder care to avoid post-partum urinary retention. Key words: Obstetric fistula, Risk factors, Pathophysiology, Post-partum urinary retention ...

  7. Speech Silicon: An FPGA Architecture for Real-Time Hidden Markov-Model-Based Speech Recognition

    Directory of Open Access Journals (Sweden)

    Schuster Jeffrey

    2006-01-01

    This paper examines the design of an FPGA-based system-on-a-chip capable of performing continuous speech recognition on medium sized vocabularies in real time. Through the creation of three dedicated pipelines, one for each of the major operations in the system, we were able to maximize the throughput of the system while simultaneously minimizing the number of pipeline stalls in the system. Further, by implementing a token-passing scheme between the later stages of the system, the complexity of the control was greatly reduced and the amount of active data present in the system at any time was minimized. Additionally, through in-depth analysis of the SPHINX 3 large vocabulary continuous speech recognition engine, we were able to design models that could be efficiently benchmarked against a known software platform. These results, combined with the ability to reprogram the system for different recognition tasks, serve to create a system capable of performing real-time speech recognition in a vast array of environments.

  8. Speech Silicon: An FPGA Architecture for Real-Time Hidden Markov-Model-Based Speech Recognition

    Directory of Open Access Journals (Sweden)

    Alex K. Jones

    2006-11-01

    This paper examines the design of an FPGA-based system-on-a-chip capable of performing continuous speech recognition on medium sized vocabularies in real time. Through the creation of three dedicated pipelines, one for each of the major operations in the system, we were able to maximize the throughput of the system while simultaneously minimizing the number of pipeline stalls in the system. Further, by implementing a token-passing scheme between the later stages of the system, the complexity of the control was greatly reduced and the amount of active data present in the system at any time was minimized. Additionally, through in-depth analysis of the SPHINX 3 large vocabulary continuous speech recognition engine, we were able to design models that could be efficiently benchmarked against a known software platform. These results, combined with the ability to reprogram the system for different recognition tasks, serve to create a system capable of performing real-time speech recognition in a vast array of environments.

  9. Brief report: accuracy and response time for the recognition of facial emotions in a large sample of children with autism spectrum disorders.

    Science.gov (United States)

    Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M; Begeer, Sander

    2014-09-01

    The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to the focus on research with relatively small sample sizes. Additionally, it is proposed that although children with ASD may correctly identify emotion expression they rely on more deliberate, more time-consuming strategies in order to accurately recognize emotion expressions when compared to typically developing children. In the current study, we examine both emotion recognition accuracy and response time in a large sample of children, and explore the moderating influence of verbal ability on these findings. The sample consisted of 86 children with ASD (M age = 10.65) and 114 typically developing children (M age = 10.32) between 7 and 13 years of age. All children completed a pre-test (emotion word-word matching), and test phase consisting of basic emotion recognition, whereby they were required to match a target emotion expression to the correct emotion word; accuracy and response time were recorded. Verbal IQ was controlled for in the analyses. We found no evidence of a systematic deficit in emotion recognition accuracy or response time for children with ASD, controlling for verbal ability. However, when controlling for children's accuracy in word-word matching, children with ASD had significantly lower emotion recognition accuracy when compared to typically developing children. The findings suggest that the social impairments observed in children with ASD are not the result of marked deficits in basic emotion recognition accuracy or longer response times. However, children with ASD may be relying on other perceptual skills (such as advanced word-word matching) to complete emotion recognition tasks at a similar level as typically developing children.

  10. Bidirectional Modulation of Recognition Memory.

    Science.gov (United States)

    Ho, Jonathan W; Poeta, Devon L; Jacobson, Tara K; Zolnik, Timothy A; Neske, Garrett T; Connors, Barry W; Burwell, Rebecca D

    2015-09-30

    Perirhinal cortex (PER) has a well established role in the familiarity-based recognition of individual items and objects. For example, animals and humans with perirhinal damage are unable to distinguish familiar from novel objects in recognition memory tasks. In the normal brain, perirhinal neurons respond to novelty and familiarity by increasing or decreasing firing rates. Recent work also implicates oscillatory activity in the low-beta and low-gamma frequency bands in sensory detection, perception, and recognition. Using optogenetic methods in a spontaneous object exploration (SOR) task, we altered recognition memory performance in rats. In the SOR task, normal rats preferentially explore novel images over familiar ones. We modulated exploratory behavior in this task by optically stimulating channelrhodopsin-expressing perirhinal neurons at various frequencies while rats looked at novel or familiar 2D images. Stimulation at 30-40 Hz during looking caused rats to treat a familiar image as if it were novel by increasing time looking at the image. Stimulation at 30-40 Hz was not effective in increasing exploration of novel images. Stimulation at 10-15 Hz caused animals to treat a novel image as familiar by decreasing time looking at the image, but did not affect looking times for images that were already familiar. We conclude that optical stimulation of PER at different frequencies can alter visual recognition memory bidirectionally. Significance statement: Recognition of novelty and familiarity are important for learning, memory, and decision making. Perirhinal cortex (PER) has a well established role in the familiarity-based recognition of individual items and objects, but how novelty and familiarity are encoded and transmitted in the brain is not known. Perirhinal neurons respond to novelty and familiarity by changing firing rates, but recent work suggests that brain oscillations may also be important for recognition. In this study, we showed that stimulation of

  11. False recall and recognition of brand names increases over time.

    Science.gov (United States)

    Sherman, Susan M

    2013-01-01

    Using the Deese-Roediger-McDermott (DRM) paradigm, participants are presented with lists of associated words (e.g., bed, awake, night). Subsequently, they reliably have false memories for related but nonpresented words (e.g., SLEEP). Previous research has found that false memories can be created for brand names (e.g., Morrisons, Sainsbury's, Waitrose, and TESCO). The present study investigates the effect of a week's delay on false memories for brand names. Participants were presented with lists of brand names followed by a distractor task. In two between-subjects experiments, participants completed a free recall task or a recognition task either immediately or a week later. In two within-subjects experiments, participants completed a free recall task or a recognition task both immediately and a week later. Correct recall for presented list items decreased over time, whereas false recall for nonpresented lure items increased. For recognition, raw scores revealed an increase in false memory across time reflected in an increase in Remember responses. Analysis of Pr scores revealed that false memory for lures stayed constant over a week, but with an increase in Remember responses in the between-subjects experiment and a trend in the same direction in the within-subjects experiment. Implications for theories of false memory are discussed.
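
    The Pr analysis mentioned above is the two-high-threshold discrimination index, Pr = hit rate − false-alarm rate. The rates below are made-up numbers that merely mirror the reported pattern (raw false recognition of lures rises over the week while the corrected Pr stays constant):

```python
def pr_score(hits, false_alarms):
    """Two-high-threshold discrimination index: Pr = H - FA."""
    return hits - false_alarms

# Hypothetical rates for lure items: endorsements of lures ("hits" on the
# false memory) versus baseline false alarms to unrelated new items.
immediate_pr = pr_score(0.40, 0.10)
week_pr = pr_score(0.55, 0.25)
```

Both raw endorsement rates rise across the delay, yet the Pr values are equal, which is exactly how a constant corrected false memory can coexist with an apparent raw increase.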

  12. Real-time image restoration for iris recognition systems.

    Science.gov (United States)

    Kang, Byung Jun; Park, Kang Ryoung

    2007-12-01

    In the field of biometrics, it has been reported that iris recognition techniques have shown high levels of accuracy because unique patterns of the human iris, which has very many degrees of freedom, are used. However, because conventional iris cameras have small depth-of-field (DOF) areas, input iris images can easily be blurred, which can lead to lower recognition performance, since iris patterns are transformed by the blurring caused by optical defocusing. To overcome these problems, an autofocusing camera can be used. However, this inevitably increases the cost, size, and complexity of the system. Therefore, we propose a new real-time iris image-restoration method, which can increase the camera's DOF without requiring any additional hardware. This paper presents five novelties as compared to previous works: 1) by excluding eyelash and eyelid regions, it is possible to obtain more accurate focus scores from input iris images; 2) the parameter of the point spread function (PSF) can be estimated in terms of camera optics and measured focus scores; therefore, parameter estimation is more accurate than it has been in previous research; 3) because the PSF parameter can be obtained by using a predetermined equation, iris image restoration can be done in real-time; 4) by using a constrained least square (CLS) restoration filter that considers noise, performance can be greatly enhanced; and 5) restoration accuracy can also be enhanced by estimating the weight value of the noise-regularization term of the CLS filter according to the amount of image blurring. Experimental results showed that iris recognition errors when using the proposed restoration method were greatly reduced as compared to those results achieved without restoration or those achieved using previous iris-restoration methods.
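The constrained least squares (CLS) restoration step can be sketched in the frequency domain. What follows is a generic CLS deconvolution with a Laplacian smoothness constraint, not the paper's exact implementation: the PSF and the noise-regularization weight `gamma` (which the paper estimates from focus scores and the amount of blur) are assumed to be given.

```python
import numpy as np

def pad_to(kernel, shape):
    """Zero-pad a small kernel to `shape`, centred at (shape//2, shape//2)."""
    out = np.zeros(shape)
    kr, kc = kernel.shape
    r0 = shape[0] // 2 - kr // 2
    c0 = shape[1] // 2 - kc // 2
    out[r0:r0 + kr, c0:c0 + kc] = kernel
    return out

def cls_restore(blurred, psf, gamma):
    """CLS deconvolution sketch: F_hat = conj(H) / (|H|^2 + gamma*|P|^2) * G,
    where P is the Laplacian smoothness constraint and gamma weights the
    noise-regularisation term."""
    shape = blurred.shape
    # Centre the kernels so H and P carry no phase shift.
    H = np.fft.fft2(np.fft.ifftshift(pad_to(psf, shape)))
    lap = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    P = np.fft.fft2(np.fft.ifftshift(pad_to(lap, shape)))
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2) * G
    return np.real(np.fft.ifft2(F_hat))
```

With a delta PSF and `gamma = 0` the filter reduces to the identity, which makes the behaviour easy to sanity-check; in the paper, `gamma` grows with the estimated blur so that noise amplification is suppressed.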

  13. The time course of individual face recognition: A pattern analysis of ERP signals.

    Science.gov (United States)

    Nemrodov, Dan; Niemeier, Matthias; Mok, Jenkin Ngo Yin; Nestor, Adrian

    2016-05-15

    An extensive body of work documents the time course of neural face processing in the human visual cortex. However, the majority of this work has focused on specific temporal landmarks, such as N170 and N250 components, derived through univariate analyses of EEG data. Here, we take on a broader evaluation of ERP signals related to individual face recognition as we attempt to move beyond the leading theoretical and methodological framework through the application of pattern analysis to ERP data. Specifically, we investigate the spatiotemporal profile of identity recognition across variation in emotional expression. To this end, we apply pattern classification to ERP signals both in time, for any single electrode, and in space, across multiple electrodes. Our results confirm the significance of traditional ERP components in face processing. At the same time though, they support the idea that the temporal profile of face recognition is incompletely described by such components. First, we show that signals associated with different facial identities can be discriminated from each other outside the scope of these components, as early as 70ms following stimulus presentation. Next, electrodes associated with traditional ERP components as well as, critically, those not associated with such components are shown to contribute information to stimulus discriminability. And last, the levels of ERP-based pattern discrimination are found to correlate with recognition accuracy across subjects confirming the relevance of these methods for bridging brain and behavior data. Altogether, the current results shed new light on the fine-grained time course of neural face processing and showcase the value of novel methods for pattern analysis to investigating fundamental aspects of visual recognition. Copyright © 2016 Elsevier Inc. All rights reserved.
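Time-resolved pattern classification of the kind described can be sketched as training a classifier on the spatial pattern across electrodes at each timepoint separately, yielding a decoding-accuracy time course. The data layout and the nearest-centroid classifier below are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def timepoint_decoding(X, y, train_idx, test_idx):
    """X: array of shape (trials, electrodes, timepoints); y: identity
    label per trial. At each timepoint, a nearest-centroid classifier is
    trained on the spatial pattern across electrodes and evaluated on
    held-out trials."""
    n_time = X.shape[2]
    accuracy = np.zeros(n_time)
    classes = np.unique(y)
    for t in range(n_time):
        train, test = X[train_idx, :, t], X[test_idx, :, t]
        # One centroid per identity, from the training trials only.
        centroids = np.stack([train[y[train_idx] == c].mean(axis=0) for c in classes])
        # Assign each test trial to the nearest class centroid.
        d = np.linalg.norm(test[:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[np.argmin(d, axis=1)]
        accuracy[t] = np.mean(pred == y[test_idx])
    return accuracy
```

Plotting the returned accuracy against time is what reveals above-chance identity discrimination outside the classical N170/N250 windows, e.g. as early as 70 ms.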

  14. Sudden Event Recognition: A Survey

    Directory of Open Access Journals (Sweden)

    Mohd Asyraf Zulkifley

    2013-08-01

    Full Text Available Event recognition is one of the most active research areas in video surveillance fields. Advancement in event recognition systems mainly aims to provide convenience, safety and an efficient lifestyle for humanity. A precise, accurate and robust approach is necessary to enable event recognition systems to respond to sudden changes in various uncontrolled environments, such as the case of an emergency, physical threat and a fire or bomb alert. The performance of sudden event recognition systems depends heavily on the accuracy of low level processing, like detection, recognition, tracking and machine learning algorithms. This survey aims to detect and characterize a sudden event, which is a subset of an abnormal event in several video surveillance applications. This paper discusses the following in detail: (1) the importance of a sudden event over a general anomalous event; (2) frameworks used in sudden event recognition; (3) the requirements and comparative studies of a sudden event recognition system; and (4) various decision-making approaches for sudden event recognition. The advantages and drawbacks of using 3D images from multiple cameras for real-time application are also discussed. The paper concludes with suggestions for future research directions in sudden event recognition.

  15. Real Time Recognition Of Speakers From Internet Audio Stream

    Directory of Open Access Journals (Sweden)

    Weychan Radoslaw

    2015-09-01

    Full Text Available In this paper we present an automatic speaker recognition technique with the use of the Internet radio lossy (encoded) speech signal streams. We show an influence of the audio encoder (e.g., bitrate) on the speaker model quality. The model of each speaker was calculated with the use of the Gaussian mixture model (GMM) approach. Both the speaker recognition and the further analysis were realized with the use of short utterances to facilitate real-time processing. The neighborhoods of the speaker models were analyzed with the use of the ISOMAP algorithm. The experiments were based on four 1-hour public debates with 7–8 speakers (including the moderator), acquired from the Polish radio Internet services. The presented software was developed with the MATLAB environment.
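GMM-based speaker identification is typically scored by the average log-likelihood of the observed feature frames under each speaker's model, picking the highest-scoring speaker. A minimal diagonal-covariance sketch (the one-dimensional features and two-speaker models are hypothetical, not from the paper):

```python
import math

def gmm_loglik(frame, weights, means, variances):
    """Log-likelihood of one feature frame (e.g. an MFCC vector) under a
    diagonal-covariance Gaussian mixture model."""
    total = 0.0
    for w, mu, var in zip(weights, means, variances):
        log_comp = math.log(w)
        for x, m, v in zip(frame, mu, var):
            log_comp += -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
        total += math.exp(log_comp)
    return math.log(total)

def identify(frames, speaker_models):
    """Return the speaker whose GMM gives the highest average log-likelihood."""
    def score(model):
        return sum(gmm_loglik(f, *model) for f in frames) / len(frames)
    return max(speaker_models, key=lambda name: score(speaker_models[name]))
```

Short utterances, as used in the paper, simply mean averaging over few frames, which keeps the scoring loop cheap enough for real-time use at the cost of noisier decisions.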

  16. Recognition Errors Suggest Fast Familiarity and Slow Recollection in Rhesus Monkeys

    Science.gov (United States)

    Basile, Benjamin M.; Hampton, Robert R.

    2013-01-01

    One influential model of recognition posits two underlying memory processes: recollection, which is detailed but relatively slow, and familiarity, which is quick but lacks detail. Most of the evidence for this dual-process model in nonhumans has come from analyses of receiver operating characteristic (ROC) curves in rats, but whether ROC analyses…

  17. Energy-Efficient Real-Time Human Activity Recognition on Smart Mobile Devices

    Directory of Open Access Journals (Sweden)

    Jin Lee

    2016-01-01

    Full Text Available Nowadays, human activity recognition (HAR) plays an important role in wellness-care and context-aware systems. Human activities can be recognized in real-time by using sensory data collected from various sensors built into smart mobile devices. Recent studies have focused on HAR that is solely based on triaxial accelerometers, which is the most energy-efficient approach. However, such HAR approaches are still energy-inefficient because the accelerometer is required to run without stopping so that the physical activity of a user can be recognized in real-time. In this paper, we propose a novel HAR approach that controls the activity recognition duration for energy-efficient HAR. We investigated the impact of varying the acceleration-sampling frequency and window size for HAR by using the variable activity recognition duration (VARD) strategy. We implemented our approach on an Android platform and evaluated its performance in terms of energy efficiency and accuracy. The experimental results showed that our approach reduced energy consumption by a minimum of about 44.23% and a maximum of about 78.85% compared to conventional HAR, without sacrificing accuracy.
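The idea of varying the recognition duration can be illustrated with a simple back-off policy: while the recognized activity stays stable, the interval between recognition runs grows (the accelerometer rests longer); a change in activity resets it. The specific rule below (exponential back-off with a cap, intervals in seconds) is an assumption for illustration, not the paper's exact VARD algorithm:

```python
def next_interval(prev_interval, changed, base=2.0, factor=2.0, cap=16.0):
    """Illustrative duty-cycling rule: back off while the recognized
    activity is stable, reset to `base` when it changes."""
    if changed:
        return base  # activity changed: recognize again soon
    return min(prev_interval * factor, cap)  # stable: back off, save energy

# Simulated run: three stable recognitions, one change, one stable.
intervals, iv = [], 2.0
for changed in [False, False, False, True, False]:
    iv = next_interval(iv, changed)
    intervals.append(iv)
# intervals -> [4.0, 8.0, 16.0, 2.0, 4.0]
```

The energy saving comes from the long stable stretches: a user who sits still for an hour triggers far fewer sampling windows than a fixed-duty-cycle recognizer would.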

  18. Timing of presentation and nature of stimuli determine retroactive interference with social recognition memory in mice.

    Science.gov (United States)

    Perna, Judith Camats; Wotjak, Carsten T; Stork, Oliver; Engelmann, Mario

    2015-05-01

    The present study was designed to further investigate the nature of stimuli and the timing of their presentation, which can induce retroactive interference with social recognition memory in mice. In accordance with our previous observations, confrontation with an unfamiliar conspecific juvenile 3 h and 6 h, but not 22 h, after the initial learning session resulted in retroactive interference. The same effect was observed with the exposure to both enantiomers of the monomolecular odour carvone, and with a novel object. Exposure to a loud tone (12 kHz, 90 dB) caused retroactive interference at 6 h, but not 3 h and 22 h, after sampling. Our data show that retroactive interference of social recognition memory can be induced by exposing the experimental subjects to the defined stimuli presented <22 h after learning in their home cage. The distinct interference triggered by the tone presentation at 6 h after sampling may be linked to the intrinsic aversiveness of the loud tone and suggests that at this time point memory consolidation is particularly sensitive to stress. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Real-time Human Activity Recognition using a Body Sensor Network

    DEFF Research Database (Denmark)

    Wang, Liang; Gu, Tao; Chen, Hanhua

    2010-01-01

    Real-time activity recognition using body sensor networks is an important and challenging task and it has many potential applications. In this paper, we propose a real-time, hierarchical model to recognize both simple gestures and complex activities using a wireless body sensor network. In this mo

  20. Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition.

    Science.gov (United States)

    Munoz-Organero, Mario; Ruiz-Blazquez, Ramona

    2017-02-08

    Body-worn sensors in general and accelerometers in particular have been widely used in order to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement related patterns can be assessed. Several machine learning algorithms have been used over windowed segments of sensed data in order to detect such patterns in activity recognition based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This could imply large amounts of data and a complex and time-consuming training phase, which has been shown to be even more relevant when automatically learning the optimal features to be used. In this paper, we present a novel generative model that is able to generate sequences of time series for characterizing a particular movement based on the time elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information on five users performing six different movements are presented. The generalization of results using an existing database is also presented in the paper. The results show that the proposed mechanism is able to obtain acceptable recognition rates ( F = 0.77) even in the case of using different people executing a different sequence of movements and using different hardware.
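The time-elasticity property can be illustrated by resampling a sensed series to a different duration, a minimal stand-in for the elastic deformations the generative model produces when synthesizing training sequences (the `time_warp` helper and its parameters are illustrative, not the paper's model):

```python
def time_warp(series, stretch):
    """Resample a 1-D acceleration series to `stretch` times its original
    duration using linear interpolation, preserving the endpoints."""
    n = len(series)
    m = max(2, round(n * stretch))
    out = []
    for j in range(m):
        pos = j * (n - 1) / (m - 1)   # map the new index onto the old axis
        i = min(int(pos), n - 2)
        frac = pos - i
        out.append(series[i] * (1 - frac) + series[i + 1] * frac)
    return out
```

Generating many such stretched and compressed variants of one recorded movement is one way to give the auto-encoder stack examples of the "same" movement executed at different speeds, without collecting more sensor data.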

  1. Human Activity Recognition in Real-Times Environments using Skeleton Joints

    Directory of Open Access Journals (Sweden)

    Ajay Kumar

    2016-06-01

    Full Text Available In this research work, we propose an effective novel approach for human activity recognition in real-time environments. We recognize several distinct dynamic human activity actions using Kinect. 3D skeleton data is processed from real-time video gestures into sequences of frames, and skeleton joint information (energy joints, orientation, and rotations of joint angles) is extracted from a selected set of frames. Because we use joint angle, orientation and rotation information from Kinect, less computation is required. After extracting the set of frames, we implemented several classification techniques: Principal Component Analysis (PCA) with several distance-based classifiers and Artificial Neural Networks (ANN), with some variants, to classify all our different gesture models. We conclude that a very small number of frames (10–15% of the entire set of gesture frames) suffices to train our system efficiently. After successful completion of our classification methods, we achieve excellent overall accuracies of 94%, 96% and 98%, respectively. We finally observe that our proposed system is more useful than other existing systems; therefore, our model is well suited for real-time applications such as player action/gesture recognition in video games.
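A typical skeleton-joint feature of the kind described is the angle at a joint formed by three 3D points, e.g. shoulder-elbow-wrist from a Kinect skeleton. The helper below is an illustrative sketch of that computation, not the paper's exact feature set:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint `b` (in degrees) formed by 3-D skeleton points
    a-b-c, computed from the dot product of the two limb vectors."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
```

Because angles are invariant to where the user stands relative to the camera, a small set of such per-frame angles can feed the PCA or ANN classifier directly.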

  2. Impaired recognition of happy facial expressions in bipolar disorder.

    Science.gov (United States)

    Lawlor-Savage, Linette; Sponheim, Scott R; Goghari, Vina M

    2014-08-01

    The ability to accurately judge facial expressions is important in social interactions. Individuals with bipolar disorder have been found to be impaired in emotion recognition; however, the specifics of the impairment are unclear. This study investigated whether facial emotion recognition difficulties in bipolar disorder reflect general cognitive, or emotion-specific, impairments. Impairment in the recognition of particular emotions and the role of processing speed in facial emotion recognition were also investigated. Clinically stable bipolar patients (n = 17) and healthy controls (n = 50) judged five facial expressions in two presentation types, time-limited and self-paced. An age recognition condition was used as an experimental control. Bipolar patients' overall facial recognition ability was unimpaired. However, patients' specific ability to judge happy expressions under time constraints was impaired. Findings suggest a deficit in happy emotion recognition impacted by processing speed. Given the limited sample size, further investigation with a larger patient sample is warranted.

  3. LPI Radar Waveform Recognition Based on Time-Frequency Distribution

    Directory of Open Access Journals (Sweden)

    Ming Zhang

    2016-10-01

    Full Text Available In this paper, an automatic radar waveform recognition system in a high-noise environment is proposed. Signal waveform recognition techniques are widely applied in the fields of cognitive radio, spectrum management and radar applications, etc. We devise a system to classify the modulating signals widely used in low probability of intercept (LPI) radar detection systems. The radar signals are divided into eight types of classifications, including linear frequency modulation (LFM), BPSK (Barker code modulation), Costas codes and polyphase codes (comprising Frank, P1, P2, P3 and P4). The classifier is an Elman neural network (ENN), and it performs supervised classification based on features extracted by the system. Through the techniques of image filtering, image opening operation, skeleton extraction, principal component analysis (PCA), image binarization and Pseudo-Zernike moments, etc., the features are extracted from the Choi–Williams time-frequency distribution (CWD) image of the received data. In order to reduce redundant features and simplify calculation, a feature selection algorithm based on the mutual information between classes and feature vectors is applied. The superiority of the proposed classification system is demonstrated by the simulations and analysis. Simulation results show that the overall ratio of successful recognition (RSR) is 94.7% at a signal-to-noise ratio (SNR) of −2 dB.
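Mutual-information-based feature selection of the kind mentioned scores each feature by I(F; C) against the class labels and keeps the highest-scoring, least redundant ones. A minimal sketch for a single discretised feature (binning of continuous feature values is assumed to have been done beforehand):

```python
import math
from collections import Counter

def mutual_information(feature_bins, labels):
    """Mutual information I(F; C), in bits, between a discretised
    feature and the class labels, estimated from empirical counts."""
    n = len(labels)
    pf = Counter(feature_bins)
    pc = Counter(labels)
    joint = Counter(zip(feature_bins, labels))
    mi = 0.0
    for (f, c), nfc in joint.items():
        p_fc = nfc / n
        # p_fc * log2(p_fc / (p_f * p_c)), with probabilities as counts/n.
        mi += p_fc * math.log2(p_fc * n * n / (pf[f] * pc[c]))
    return mi
```

A feature whose bins perfectly track the class gets the full entropy of the labels (1 bit for two balanced classes); an independent feature gets 0 and can be pruned before training the ENN.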

  4. Modeling Fan Effects on the Time Course of Associative Recognition

    Science.gov (United States)

    Schneider, Darryl W.; Anderson, John R.

    2012-01-01

    We investigated the time course of associative recognition using the response signal procedure, whereby a stimulus is presented and followed after a variable lag by a signal indicating that an immediate response is required. More specifically, we examined the effects of associative fan (the number of associations that an item has with other items…

  5. A Hierarchical Approach to Real-time Activity Recognition in Body Sensor Networks

    DEFF Research Database (Denmark)

    Wang, Liang; Gu, Tao; Tao, Xianping

    2012-01-01

    Real-time activity recognition in body sensor networks is an important and challenging task. In this paper, we propose a real-time, hierarchical model to recognize both simple gestures and complex activities using a wireless body sensor network. In this model, we rst use a fast and lightweight al...

  6. Clarification of the memory artefact in the assessment of suggestibility.

    Science.gov (United States)

    Willner, P

    2008-04-01

    The Gudjonsson Suggestibility Scale (GSS) assesses suggestibility by asking respondents to recall a short story, followed by exposure to leading questions and pressure to change their responses. Suggestibility, as assessed by the GSS, appears to be elevated in people with intellectual disabilities (ID). This has been shown to reflect to some extent the fact that people with ID have poor recall of the story; however, there are discrepancies in this relationship. The aim of the present study was to investigate whether a closer match between memory and suggestibility would be found using a measure of recognition memory rather than free recall. Three modifications to the procedure were presented to users of a learning disabilities day service. In all three experiments, a measure of forced-choice recognition memory was built into the suggestibility test. In experiments 1 and 2, the GSS was presented using either divided presentation (splitting the story into two halves, with memory and suggestibility tests after each half) or multiple presentation (the story was presented three times before presentation of the memory and suggestibility tests). Participants were tested twice, once with the standard version of the test and once with one of the modified versions. In experiment 3, an alternative suggestibility scale (ASS3) was created, based on real events in a learning disabilities day service. The ASS3 was presented to one group of participants who had been present at the events, and a second group who attended a different day service, to whom the events were unfamiliar. As observed previously, suggestibility was not closely related to free recall performance: recall was increased equally by all three manipulations, but they produced, respectively, no effect, a modest effect and a large effect on suggestibility. 
However, the effects on suggestibility were closely related to performance on the forced-choice recognition memory task: divided presentation of the GSS2 had no

  7. Real-time intelligent pattern recognition algorithm for surface EMG signals

    Directory of Open Access Journals (Sweden)

    Jahed Mehran

    2007-12-01

    Full Text Available Abstract Background Electromyography (EMG) is the study of muscle function through the inquiry of electrical signals that the muscles emanate. EMG signals collected from the surface of the skin (surface electromyogram: sEMG) can be used in different applications, such as recognizing musculoskeletal neural-based patterns intercepted for hand prosthesis movements. Current systems designed for controlling prosthetic hands either have limited functions, can only be used to perform simple movements, or use an excessive number of electrodes in order to achieve acceptable results. In an attempt to overcome these problems we have proposed an intelligent system to recognize hand movements and have provided a user assessment routine to evaluate the correctness of executed movements. Methods We propose to use an intelligent approach based on an adaptive neuro-fuzzy inference system (ANFIS) integrated with a real-time learning scheme to identify hand motion commands. For this purpose, and to consider the effect of user evaluation on recognizing hand movements, vision feedback is applied to increase the capability of our system. By using this scheme the user may assess the correctness of the performed hand movement. In this work a hybrid method for training the fuzzy system, consisting of back-propagation (BP) and least mean squares (LMS), is utilized. Also, in order to optimize the number of fuzzy rules, a subtractive clustering algorithm has been developed. To design an effective system, we consider a conventional scheme of EMG pattern recognition. To design this system we propose to use two different sets of EMG features, namely time domain (TD) and time-frequency representation (TFR). Also, in order to decrease the undesirable effects of the dimension of these feature sets, principal component analysis (PCA) is utilized. Results In this study, the myoelectric signals considered for classification consist of six unique hand movements.
Features chosen for EMG signal

  8. A SIMD-VLIW Smart Camera Architecture for Real-Time Face Recognition

    NARCIS (Netherlands)

    Kleihorst, R.P.; Broers, H.A.T.; Abbo, A.A.; Ebrahimmalek, H.; Fatemi, H.; Corporaal, H.; Jonker, P.P.

    2003-01-01

    There is a rapidly growing demand for using smart cameras for various applications in surveillance and identification. Although having a small form-factor, most of these applications demand huge processing performance for real-time processing. Face recognition is one of those applications. In this

  9. Invariant Face recognition Using Infrared Images

    International Nuclear Information System (INIS)

    Zahran, E.G.

    2012-01-01

    Over the past few decades, face recognition has become a rapidly growing research topic due to the increasing demands in many applications of our daily life such as airport surveillance, personal identification in law enforcement, surveillance systems, information safety, securing financial transactions, and computer security. The objective of this thesis is to develop a face recognition system capable of recognizing persons with a high recognition capability, low processing time, and under different illumination conditions, and different facial expressions. The thesis presents a study for the performance of the face recognition system using two techniques; the Principal Component Analysis (PCA), and the Zernike Moments (ZM). The performance of the recognition system is evaluated according to several aspects including the recognition rate, and the processing time. Face recognition systems that use visual images are sensitive to variations in the lighting conditions and facial expressions. The performance of these systems may be degraded under poor illumination conditions or for subjects of various skin colors. Several solutions have been proposed to overcome these limitations. One of these solutions is to work in the Infrared (IR) spectrum. IR images have been suggested as an alternative source of information for detection and recognition of faces, when there is little or no control over lighting conditions. This arises from the fact that these images are formed due to thermal emissions from skin, which is an intrinsic property because these emissions depend on the distribution of blood vessels under the skin. On the other hand IR face recognition systems still have limitations with temperature variations and recognition of persons wearing eye glasses. In this thesis we will fuse IR images with visible images to enhance the performance of face recognition systems. Images are fused using the wavelet transform. 
Simulation results show that the fusion of visible and

  10. Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition

    Directory of Open Access Journals (Sweden)

    Mario Munoz-Organero

    2017-02-01

    Full Text Available Body-worn sensors in general and accelerometers in particular have been widely used in order to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement related patterns can be assessed. Several machine learning algorithms have been used over windowed segments of sensed data in order to detect such patterns in activity recognition based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This could imply large amounts of data and a complex and time-consuming training phase, which has been shown to be even more relevant when automatically learning the optimal features to be used. In this paper, we present a novel generative model that is able to generate sequences of time series for characterizing a particular movement based on the time elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information on five users performing six different movements are presented. The generalization of results using an existing database is also presented in the paper. The results show that the proposed mechanism is able to obtain acceptable recognition rates (F = 0.77) even in the case of using different people executing a different sequence of movements and using different

  11. Real Time Facial Expression Recognition Using Webcam and SDK Affectiva

    Directory of Open Access Journals (Sweden)

    Martin Magdin

    2018-06-01

    Full Text Available Facial expression is an essential part of communication. For this reason, the issue of evaluating human emotions using a computer is a very interesting topic, which has gained more and more attention in recent years. It is mainly related to the possibility of applying facial expression recognition in many fields such as HCI, video games, virtual reality, and analysing customer satisfaction, etc. The emotion determination (recognition) process is often performed in 3 basic phases: face detection, facial feature extraction, and, as the last stage, expression classification. Most often one meets the so-called Ekman classification of 6 emotional expressions (or 7, with the neutral expression), as well as other types of classification: the Russell circular model, which contains up to 24, or Plutchik's wheel of emotions. The methods used in the three phases of the recognition process have not only improved over the last 60 years; new methods and algorithms have also emerged that can outperform the Viola–Jones detector in accuracy and computational demands. Therefore, various solutions are currently available in the form of a Software Development Kit (SDK). In this publication, we present the design and creation of our system for real-time emotion classification. Our intention was to create a system that would use all three phases of the recognition process and work fast and stably in real time. That is why we decided to take advantage of the existing Affectiva SDK. Using a classic webcam, we can detect facial landmarks in the image automatically using the SDK from Affectiva. A geometric feature-based approach is used for feature extraction. The distance between landmarks is used as a feature, and for selecting an optimal set of features, the brute-force method is used. The proposed system uses a neural network algorithm for classification. The proposed system recognizes 6 (respectively 7) facial expressions

  12. NUI framework based on real-time head pose estimation and hand gesture recognition

    Directory of Open Access Journals (Sweden)

    Kim Hyunduk

    2016-01-01

    Full Text Available The natural user interface (NUI) enables natural motion interaction without devices or tools such as mice, keyboards, pens and markers. In this paper, we develop a natural user interface framework based on two recognition modules. The first is a real-time head pose estimation module using random forests, and the second is a hand gesture recognition module named the Hand gesture Key Emulation Toolkit (HandGKET). Using the head pose estimation module, we can know where the user is looking and what the user's focus of attention is. Moreover, using the hand gesture recognition module, we can control the computer with the user's hand gestures, without a mouse or keyboard. In the proposed framework, the user's head direction and hand gestures are mapped into mouse and keyboard events, respectively.

  13. Physiological arousal in processing recognition information

    Directory of Open Access Journals (Sweden)

    Guy Hochman

    2010-07-01

    Full Text Available The recognition heuristic (RH; Goldstein and Gigerenzer, 2002) suggests that, when applicable, probabilistic inferences are based on a noncompensatory examination of whether an object is recognized or not. The overall findings on the processes that underlie this fast and frugal heuristic are somewhat mixed, and many studies have expressed the need for considering a more compensatory integration of recognition information. Regardless of the mechanism involved, it is clear that recognition has a strong influence on choices, and this finding might be explained by the fact that recognition cues arouse affect and thus receive more attention than cognitive cues. To test this assumption, we investigated whether recognition results in a direct affective signal by measuring physiological arousal (i.e., peripheral arterial tone) in the established city-size task. We found that recognition of cities does not directly result in increased physiological arousal. Moreover, the results show that physiological arousal increased with increasing inconsistency between recognition information and additional cue information. These findings support predictions derived from a compensatory Parallel Constraint Satisfaction model rather than predictions of noncompensatory models. Additional results concerning confidence ratings, response times, and choice proportions further demonstrated that recognition information and other cognitive cues are integrated in a compensatory manner.
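The noncompensatory RH decision rule itself is simple to state: if exactly one of two objects is recognized, infer that the recognized one scores higher on the criterion; otherwise the heuristic does not apply and another strategy (cue integration, guessing) must decide. A sketch for the city-size task (the city names are illustrative):

```python
def recognition_heuristic(city_a, city_b, recognized):
    """Goldstein & Gigerenzer's (2002) recognition heuristic: when
    exactly one city is recognized, infer that it is the larger one.
    Returns None when the heuristic does not discriminate."""
    a, b = city_a in recognized, city_b in recognized
    if a and not b:
        return city_a
    if b and not a:
        return city_b
    return None  # both or neither recognized: heuristic not applicable

choice = recognition_heuristic("Munich", "Herne", {"Munich", "Berlin"})
```

The study's point is precisely about the `None`-free cases: even when recognition discriminates, arousal and choice data suggest the recognition cue is weighed against other cues rather than used in this all-or-nothing fashion.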

  14. Applications of PCA and SVM-PSO Based Real-Time Face Recognition System

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Shieh

    2014-01-01

    Full Text Available This paper incorporates principal component analysis (PCA) with support vector machine–particle swarm optimization (SVM-PSO) for developing real-time face recognition systems. The integrated scheme aims to adopt the SVM-PSO method to improve the validity of PCA-based image recognition systems for dynamic visual perception. Face recognition for most human-robot interaction applications is accomplished by PCA-based methods because of their dimensionality reduction. However, PCA-based systems are only suitable for processing faces with the same facial expressions and/or under the same view directions. Since the facial feature selection process can be considered a problem of global combinatorial optimization in machine learning, SVM-PSO is used as an optimal classifier for the system. In this paper, the PSO is used to implement feature selection, and the SVMs serve as fitness functions of the PSO for classification problems. Experimental results demonstrate that the proposed method simplifies features effectively and obtains higher classification accuracy.

  15. Marginalised Stacked Denoising Autoencoders for Robust Representation of Real-Time Multi-View Action Recognition

    Directory of Open Access Journals (Sweden)

    Feng Gu

    2015-07-01

    Full Text Available Multi-view action recognition has gained great interest in video surveillance, human-computer interaction, and multimedia retrieval, where multiple cameras of different types are deployed to provide complementary fields of view. Fusion of multiple camera views evidently leads to more robust decisions on both tracking multiple targets and analysing complex human activities, especially where there are occlusions. In this paper, we incorporate the marginalised stacked denoising autoencoders (mSDA) algorithm to further improve the bag-of-words (BoW) representation in terms of robustness and usefulness for multi-view action recognition. The resulting representations are fed into three simple fusion strategies, as well as a multiple kernel learning algorithm, at the classification stage. Based on the internal evaluation, the codebook size of the BoW and the number of layers of the mSDA may not significantly affect recognition performance. According to results on three multi-view benchmark datasets, the proposed framework improves recognition performance across all three datasets and achieves record recognition performance, beating the state-of-the-art algorithms in the literature. It is also capable of performing real-time action recognition at a frame rate ranging from 33 to 45 frames per second, which could be further improved by using more powerful machines in future applications.
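The mSDA representation named above admits a closed-form layer solution (following Chen et al., 2012, on marginalized denoising autoencoders); a minimal NumPy sketch, with random stand-ins for the BoW feature vectors and illustrative parameter values:

```python
import numpy as np

def msda_layer(X, p=0.5, reg=1e-5):
    """One closed-form marginalised denoising autoencoder layer.
    X holds features in rows and samples in columns; p is the
    feature corruption probability marginalised out analytically."""
    d = X.shape[0]
    S = X @ X.T                      # scatter matrix
    q = np.full(d, 1.0 - p)          # survival probability per feature
    Q = S * np.outer(q, q)           # expected corrupted scatter (off-diagonal)
    np.fill_diagonal(Q, q * np.diag(S))
    P = S * q[np.newaxis, :]         # expected clean-corrupted cross term
    W = P @ np.linalg.inv(Q + reg * np.eye(d))  # reconstruction weights
    return np.tanh(W @ X)            # nonlinear hidden representation

# Stacking: feed each layer's output into the next.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 40))     # stand-in for BoW feature vectors
H = msda_layer(msda_layer(X))
```

Because the corruption is marginalised out in closed form, no iterative training over corrupted copies is needed, which is what makes stacking several layers cheap.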

  16. Memristive Computational Architecture of an Echo State Network for Real-Time Speech Emotion Recognition

    Science.gov (United States)

    2015-05-28

    Only fragmentary indexing excerpts of this report are available: speech-based emotion recognition is simpler and requires less computational resources than other inputs such as facial expressions, and the Berlin Database of Emotional Speech is cited.
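The echo state network named in the title keeps a fixed random recurrent reservoir and trains only a linear readout. A minimal sketch, with random stand-in features and targets rather than the report's actual acoustic data, and illustrative sizes and parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_out = 3, 100, 2   # e.g. acoustic features in, emotion scores out
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state property)

def run_reservoir(U):
    """Drive the fixed random reservoir with the input sequence U (T x n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x)
    return np.array(states)

# Only the linear readout is trained, via ridge regression on the states.
U = rng.standard_normal((200, n_in))
Y = rng.standard_normal((200, n_out))  # stand-in emotion targets
S = run_reservoir(U)
W_out = np.linalg.solve(S.T @ S + 1e-3 * np.eye(n_res), S.T @ Y)
pred = S @ W_out
```

Training only the readout is what makes such networks attractive for memristive hardware: the random reservoir weights never need to be updated in place.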

  17. A real time mobile-based face recognition with fisherface methods

    Science.gov (United States)

    Arisandi, D.; Syahputra, M. F.; Putri, I. L.; Purnamawati, S.; Rahmat, R. F.; Sari, P. P.

    2018-03-01

    Face recognition is a research field in computer vision that studies how to learn faces and determine the identity of a face from a picture sent to the system. By utilizing face recognition technology, learning the identities of fellow students at a university becomes simpler: a student no longer needs to browse the student directory on the university's server site looking for a person with certain facial traits. To this end, the face recognition application uses image processing methods consisting of two phases, a pre-processing phase and a recognition phase. In the pre-processing phase, the system converts the input image into the best possible image for the recognition phase; the purpose is to reduce noise and strengthen the signal in the image. For the recognition phase we use the Fisherface method, chosen because it copes well with the system's limited data. In our experiments, the accuracy of face recognition using Fisherface is 90%.
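The Fisherface method combines PCA (so the within-class scatter matrix is not singular) with linear discriminant analysis. A minimal scikit-learn sketch on random stand-in data; the dataset, dimensions, and parameters below are illustrative, not the paper's:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# Stand-in data: rows would be flattened, pre-processed face images.
rng = np.random.RandomState(1)
X = rng.rand(120, 256)
y = np.repeat(np.arange(6), 20)   # 6 hypothetical identities, 20 images each

# Fisherface: PCA first to reduce dimensionality, then LDA for a
# class-discriminative projection.
fisherface = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
fisherface.fit(X, y)
```

Identifying a new image then amounts to `fisherface.predict` on its flattened pixel vector after the same pre-processing.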

  18. Ignorance- versus evidence-based decision making: a decision time analysis of the recognition heuristic.

    Science.gov (United States)

    Hilbig, Benjamin E; Pohl, Rüdiger F

    2009-09-01

    According to the part of the adaptive toolbox notion of decision making known as the recognition heuristic (RH), the decision process in comparative judgments, and its duration, is determined by whether recognition discriminates between objects. By contrast, some recently proposed alternative models predict that choices largely depend on the amount of evidence speaking for each of the objects and that decision times thus depend on the evidential difference between objects, or the degree of conflict between options. This article presents 3 experiments that tested predictions derived from the RH against those from alternative models. All experiments used naturally recognized objects without teaching participants any information and thus provided optimal conditions for application of the RH. However, results supported the alternative, evidence-based models and often conflicted with the RH. Recognition was not the key determinant of decision times, whereas differences between objects with respect to (both positive and negative) evidence predicted effects well. In sum, alternative models that allow for the integration of different pieces of information may well provide a better account of comparative judgments. (c) 2009 APA, all rights reserved.

  19. Object recognition memory in zebrafish.

    Science.gov (United States)

    May, Zacnicte; Morrill, Adam; Holcombe, Adam; Johnston, Travis; Gallup, Joshua; Fouad, Karim; Schalomon, Melike; Hamilton, Trevor James

    2016-01-01

    The novel object recognition, or novel-object preference (NOP), test is employed to assess recognition memory in a variety of organisms. The subject is exposed to two identical objects; after a delay, it is placed back in the original environment containing one of the original objects and a novel object. If the subject spends more time exploring one object, this can be interpreted as memory retention. To date, this test has not been fully explored in zebrafish (Danio rerio). Zebrafish possess recognition memory for simple 2- and 3-dimensional geometrical shapes, yet it is unknown whether this translates to complex 3-dimensional objects. In this study we evaluated recognition memory in zebrafish using complex objects of different sizes. Contrary to rodents, zebrafish preferentially explored familiar over novel objects. The familiarity preference disappeared after delays of 5 min. Leopard danios, another strain of D. rerio, also preferred the familiar object after a 1-min delay. Object preference could be re-established in zebra danios by administration of nicotine tartrate salt (50 mg/L) prior to stimulus presentation, suggesting a memory-enhancing effect of nicotine. Additionally, exploration biases were present only when the objects were of intermediate size (2 × 5 cm). Our results demonstrate that zebra and leopard danios have recognition memory, and that low nicotine doses can improve this memory type in zebra danios. However, the exploration biases from which memory is inferred depend on object size. These findings suggest that zebrafish ecology might influence object preference, as zebrafish neophobia could reflect natural anti-predatory behaviour. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Real-Time Multiview Recognition of Human Gestures by Distributed Image Processing

    Directory of Open Access Journals (Sweden)

    Sato Kosuke

    2010-01-01

    Full Text Available Since a gesture involves dynamic and complex motion, multiview observation and recognition are desirable. For better representation of gestures, one needs to know, in the first place, from which views a gesture should be observed. Furthermore, it becomes increasingly important how the recognition results are integrated when larger numbers of camera views are considered. To investigate these problems, we propose a framework under which multiview recognition is carried out, and an integration scheme by which the recognition results are integrated online and in real time. For performance evaluation, we use the ViHASi (Virtual Human Action Silhouette) public image database as a benchmark together with our Japanese Sign Language (JSL) image database, which contains 18 kinds of hand signs. By examining the recognition rates of each gesture for each view, we found gestures that exhibit view dependency and gestures that do not. We also found that the view dependency itself can vary depending on the target gesture set. By integrating the recognition results of different views, our swarm-based integration provides more robust and better recognition performance than individual fixed-view recognition agents.

  1. Knowledge fusion: An approach to time series model selection followed by pattern recognition

    International Nuclear Information System (INIS)

    Bleasdale, S.A.; Burr, T.L.; Scovel, J.C.; Strittmatter, R.B.

    1996-03-01

    This report describes work done during FY 95 that was sponsored by the Department of Energy, Office of Nonproliferation and National Security, Knowledge Fusion Project. The project team selected satellite sensor data as the main example for the application of its analysis algorithms. The specific sensor-fusion problem has many generic features, which make it worthwhile to attempt to solve in a general way. The generic problem is to recognize events of interest from multiple time series that define a possibly noisy background. By implementing a suite of time series modeling and forecasting methods and using well-chosen alarm criteria, we reduce the number of false alarms. We then further reduce the number of false alarms by analyzing all suspicious sections of data, as judged by the alarm criteria, with pattern recognition methods. An accompanying report (Ref. 1) describes the implementation and application of this two-step process for separating events from unusual background, applying a suite of forecasting methods followed by a suite of pattern recognition methods. This report goes into more detail about one of the forecasting methods and one of the pattern recognition methods, applied to the same kind of satellite-sensor data described in Ref. 1.
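The two-step scheme of forecasting plus alarm criteria can be illustrated with a toy residual-threshold detector; the forecaster, threshold, and data below are our own simplifications, not the report's methods. Segments it flags would then be passed to the pattern recognition stage.

```python
import numpy as np

def flag_alarms(series, window=20, k=3.0):
    """Step 1 sketch: forecast each point from a trailing moving average
    and flag residuals larger than k sample standard deviations
    (illustrative forecaster and threshold)."""
    alarms = []
    for t in range(window, len(series)):
        history = series[t - window:t]
        resid = series[t] - history.mean()
        if abs(resid) > k * history.std(ddof=1):
            alarms.append(t)   # candidate event for step-2 pattern recognition
    return alarms

rng = np.random.default_rng(2)
background = rng.normal(0.0, 1.0, 300)
background[150] += 10.0            # injected event in noisy background
alarms = flag_alarms(background)
```

A better forecaster (e.g. an autoregressive model) shrinks the residual variance of the background, which is exactly how the first step reduces false alarms before any pattern recognition runs.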

  2. Additive and Interactive Effects on Response Time Distributions in Visual Word Recognition

    Science.gov (United States)

    Yap, Melvin J.; Balota, David A.

    2007-01-01

    Across 3 different word recognition tasks, distributional analyses were used to examine the joint effects of stimulus quality and word frequency on underlying response time distributions. Consistent with the extant literature, stimulus quality and word frequency produced additive effects in lexical decision, not only in the means but also in the…

  3. Eye-movement strategies in developmental prosopagnosia and "super" face recognition.

    Science.gov (United States)

    Bobak, Anna K; Parris, Benjamin A; Gregory, Nicola J; Bennetts, Rachel J; Bate, Sarah

    2017-02-01

    Developmental prosopagnosia (DP) is a cognitive condition characterized by a severe deficit in face recognition. Few investigations have examined whether impairments at the early stages of processing may underpin the condition, and it is also unknown whether DP is simply the "bottom end" of the typical face-processing spectrum. To address these issues, we monitored the eye movements of DPs, typical perceivers, and "super recognizers" (SRs) while they viewed a set of static images displaying people engaged in naturalistic social scenarios. Three key findings emerged: (a) individuals with more severe prosopagnosia spent less time examining the internal facial region; (b) as observed in acquired prosopagnosia, some DPs spent less time examining the eyes and more time examining the mouth than controls; and (c) SRs spent more time examining the nose, a measure that also correlated with face recognition ability in controls. These findings support previous suggestions that DP is a heterogeneous condition, but suggest that at least the most severe cases represent a group of individuals who qualitatively differ from the typical population. While SRs seem merely to be those at the "top end" of normal, this work identifies the nose as a critical region for successful face recognition.

  4. Left is where the L is right. Significantly delayed reaction time in limb laterality recognition in both CRPS and phantom limb pain patients.

    Science.gov (United States)

    Reinersmann, Annika; Haarmeyer, Golo Sung; Blankenburg, Markus; Frettlöh, Jule; Krumova, Elena K; Ocklenburg, Sebastian; Maier, Christoph

    2010-12-17

    The body schema depends on an intact cortical body representation. Its disruption is indicated by delayed reaction times (RTs) and high error rates when deciding on the laterality of a pictured hand in a limb laterality recognition task. Similar cortical reorganisation and disruption of the body schema have been found in two different unilateral pain syndromes, one with deafferentation (phantom limb pain, PLP) and one with pain-induced dysfunction (complex regional pain syndrome, CRPS). This study compares the extent of impaired laterality recognition in these two groups. Performance on a test battery for attentional performance (TAP 2.0) and on a limb laterality recognition task was evaluated in CRPS (n=12), PLP (n=12) and healthy subjects (n=38). Differences between recognising affected and unaffected hands were analysed. CRPS patients and healthy subjects additionally completed a four-day training of limb laterality recognition. Reaction time was significantly delayed in both CRPS (2278±735.7 ms) and PLP (2301.3±809.3 ms) compared to healthy subjects (1826.5±517.0 ms), despite normal TAP values in all groups. There were no differences between recognition of affected and unaffected hands in either patient group. Both healthy subjects and CRPS patients improved during training, but the RTs of CRPS patients (1874.5±613.3 ms) remained slower. Thus, impaired limb laterality recognition in CRPS patients is uninfluenced by attention and pain and cannot be fully reversed by training alone, suggesting the involvement of complex central nervous system mechanisms in the disruption of the body schema. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  5. Effect of voice recognition on radiologist reporting time

    International Nuclear Information System (INIS)

    Bhan, S.N.; Coblentz, C.L.; Norman, G.R.; Ali, S.H.

    2008-01-01

    To study the effect of voice recognition (VR) on radiologist reporting efficiency in a clinical setting and to identify variables associated with faster reporting time. Five radiologists were observed during the routine reporting of 402 plain radiograph studies using either VR (n = 217) or conventional dictation (CD) (n = 185). Two radiologists were observed reporting 66 computed tomography (CT) studies using either VR (n = 39) or CD (n = 27). The time spent per reporting cycle, defined as the radiologist's time spent on a study from one report finalization to the subsequent report finalization, was compared. Characteristics of the radiologists and their reporting styles were also collected and correlated against reporting time. For plain radiographs, radiologists took 134% (P = 0.048) more time to produce reports using VR, but there was significant variability between radiologists. Variables significantly associated with faster reporting times using VR included English as a first language (r = -0.24), use of a template (r = -0.34), use of a headset microphone (r = -0.46), and increased experience with VR (r = -0.43). Experience as a staff radiologist and having a previous study for comparison did not correlate with reporting time. For CT, there was no significant difference in reporting time between VR and CD (P = 0.61). Overall, VR slightly decreases the reporting efficiency of radiologists. However, efficiency may be improved if English is the radiologist's first language and a headset microphone, macros, and templates are used. (author)

  6. Changing predictions, stable recognition: Children's representations of downward incline motion.

    Science.gov (United States)

    Hast, Michael; Howe, Christine

    2017-11-01

    Various studies to date have demonstrated that children hold ill-conceived expressed beliefs about the physical world, such as that one ball will fall faster than another because it is heavier. At the same time, they also demonstrate accurate recognition of dynamic events. How these representations relate is still unresolved. This study examined 5- to 11-year-olds' (N = 130) predictions and recognition of motion down inclines. Predictions were typically in error, matching previous work, but children largely recognized correct events as correct and rejected incorrect ones. The results also demonstrate that while predictions change with increasing age, recognition shows signs of stability. The findings provide further support for a hybrid model of object representations and argue in favour of stable core cognition existing alongside developmental changes. Statement of contribution. What is already known on this subject? Children's predictions of physical events show limitations in accuracy. Their recognition of such events suggests children may use different knowledge sources in their reasoning. What does the present study add? Predictions fluctuate more strongly than recognition, suggesting stable core cognition. But recognition also shows some fluctuation, arguing for a hybrid model of knowledge representation. © 2017 The British Psychological Society.

  7. Syllabic Length Effect in Visual Word Recognition

    Directory of Open Access Journals (Sweden)

    Roya Ranjbar Mohammadi

    2014-07-01

    Full Text Available Studies on visual word recognition have resulted in different and sometimes contradictory proposals, such as the Multi-Trace Memory model (MTM), the Dual-Route Cascaded model (DRC), and the Parallel Distributed Processing model (PDP). The role of the number of syllables in word recognition was examined using five groups of English words and non-words. Participants' reaction times to these words were measured with reaction-time software. The results indicated a syllabic effect on the recognition of both high- and low-frequency words. The pattern was incremental in terms of syllable number, and it prevailed in high- and low-frequency words and non-words except for one-syllable words. In general, the results are in line with the PDP model, which claims that a single processing mechanism underlies the recognition of both words and non-words. In other words, the findings suggest that lexical items are mainly processed via a lexical route. A pedagogical implication of the findings is that reading in English as a foreign language involves analytical processing of the syllables of words.

  8. How fast is famous face recognition?

    Directory of Open Access Journals (Sweden)

    Gladys Barragan-Jason

    2012-10-01

    Full Text Available The rapid recognition of familiar faces is crucial for social interactions. However, the actual speed with which recognition can be achieved remains largely unknown, as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, its speed has not been directly compared to that of fast visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks: a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail.

  9. Time course analyses of orthographic and phonological priming effects during word recognition in a transparent orthography.

    Science.gov (United States)

    Zeguers, M H T; Snellings, P; Huizenga, H M; van der Molen, M W

    2014-10-01

    In opaque orthographies, the activation of orthographic and phonological codes follows distinct time courses during visual word recognition. However, it is unclear how orthography and phonology are accessed in more transparent orthographies. Therefore, we conducted time course analyses of masked priming effects in the transparent Dutch orthography. The first study used targets with small phonological differences between phonological and orthographic primes, which are typical in transparent orthographies. Results showed consistent orthographic priming effects, yet phonological priming effects were absent. The second study explicitly manipulated the strength of the phonological difference and revealed that both orthographic and phonological priming effects became identifiable when phonological differences were strong enough. This suggests that, similar to opaque orthographies, strong phonological differences are a prerequisite to separate orthographic and phonological priming effects in transparent orthographies. Orthographic and phonological priming appeared to follow distinct time courses, with orthographic codes being quickly translated into phonological codes and phonology dominating the remainder of the lexical access phase.

  10. Impact of a PACS/RIS-integrated speech recognition system on radiology reporting time and report availability

    International Nuclear Information System (INIS)

    Trumm, C.G.; Glaser, C.; Paasche, V.; Kuettner, B.; Francke, M.; Nissen-Meyer, S.; Reiser, M.; Crispin, A.; Popp, P.

    2006-01-01

    Purpose: Quantification of the impact of a PACS/RIS-integrated speech recognition system (SRS) on the time expenditure for radiology reporting and on hospital-wide report availability (RA) in a university institution. Material and Methods: In a prospective pilot study, the following parameters were assessed for 669 radiographic examinations (CR): 1. the time requirement per report dictation (TED: dictation time (s)/number of images [examination] x number of words [report]) with either a combination of PACS and tape-based dictation (TD: analog dictation device/minicassette/transcription) or PACS/RIS/speech recognition system (RR: remote recognition/transcription; OR: online recognition/self-correction by the radiologist), and 2. the report turnaround time (RTT) as the time interval from the entry of the first image into the PACS to the available RIS/HIS report. Two equal time periods were chosen retrospectively from the RIS database: 11/2002-2/2003 (only TD) and 11/2003-2/2004 (only RR or OR with the SRS). The midterm (≥24 h, 24-h intervals) and short-term (<24 h, 1-h intervals) RA after examination completion were calculated for all modalities together and for CR, CT, MR and XA/DS separately. The relative increase in the mid-term RA (RIMRA: related to the total number of examinations in each time period) and the increase in the short-term RA (ISRA: ratio of available reports during the 1st to 24th hour) were calculated. Results: Prospectively, there was a significant difference between TD/RR/OR (n = 151/257/261) regarding mean TED (0.44/0.54/0.62 s per word and image) and mean RTT (10.47/6.65/1.27 h), respectively. Retrospectively, 37,898/39,680 reports were retrieved from the RIS database for the time periods 11/2002-2/2003 and 11/2003-2/2004. For CR/CT there was a shift of the short-term RA to the first 6 hours after examination completion (mean cumulative RA 20% higher) with a more than three-fold increase in the total number of available

  11. Participation, Recognition and the Democratic Doxa

    DEFF Research Database (Denmark)

    Harrits, Gitte Sommer

    2006-01-01

    to the exclusionary effects of norms of citizenship, i.e. the exclusion from within, and suggest the recognition of group differences. This paper tries to suggest how a Bourdieu perspective can help bridge the gap between dichotomies such as individual/group, universalism/particularism and rights/recognition. The paper… suggests that a democratisation of the political doxa, involving the recognition of differences in political habitus and (most importantly) practices, is necessary to oppose the tendencies of exclusion and to further a widespread empowerment of citizens in late modern societies, without this turning…

  12. Food-Induced Emotional Resonance Improves Emotion Recognition.

    Science.gov (United States)

    Pandolfi, Elisa; Sacripante, Riccardo; Cardini, Flavia

    2016-01-01

    The effect of food substances on emotional states has been widely investigated, showing, for example, that eating chocolate is able to reduce negative mood. Here, for the first time, we have shown that the consumption of specific food substances is not only able to induce particular emotional states, but more importantly, to facilitate recognition of corresponding emotional facial expressions in others. Participants were asked to perform an emotion recognition task before and after eating either a piece of chocolate or a small amount of fish sauce-which we expected to induce happiness or disgust, respectively. Our results showed that being in a specific emotional state improves recognition of the corresponding emotional facial expression. Indeed, eating chocolate improved recognition of happy faces, while disgusted expressions were more readily recognized after eating fish sauce. In line with the embodied account of emotion understanding, we suggest that people are better at inferring the emotional state of others when their own emotional state resonates with the observed one.

  14. The time course of morphological processing during spoken word recognition in Chinese.

    Science.gov (United States)

    Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan

    2017-12-01

    We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the auditory word is incremental and morphological processing (i.e., semantic access to the first constituent) that occurs at an early processing stage before access to the representation of the whole word in Chinese.

  15. Putting It All Together: A Unified Account of Word Recognition and Reaction-Time Distributions

    Science.gov (United States)

    Norris, Dennis

    2009-01-01

    R. Ratcliff, P. Gomez, and G. McKoon (2004) suggested that much of what goes on in lexical decision is attributable to decision processes and may not be particularly informative about word recognition. They proposed that lexical decision should be characterized by a decision process, taking the form of a drift-diffusion model (R. Ratcliff, 1978), that…
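The drift-diffusion decision process referred to here can be sketched as a noisy random walk between two response boundaries; all parameter values below are illustrative:

```python
import numpy as np

def diffusion_trial(drift, threshold=1.0, noise=1.0, dt=0.001, rng=None):
    """Simulate one drift-diffusion trial: evidence accumulates with the
    given drift rate plus Gaussian noise until it hits +threshold
    ("word") or -threshold ("nonword"). Returns the choice and its RT."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("word" if x > 0 else "nonword"), t

rng = np.random.default_rng(3)
decisions = [diffusion_trial(drift=1.5, rng=rng) for _ in range(200)]
accuracy = np.mean([d == "word" for d, _ in decisions])
```

The positively skewed distribution of the simulated times `t` is the kind of reaction-time distribution the model is meant to account for.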

  16. Recognition of Time Stamps on Full-Disk Hα Images Using Machine Learning Methods

    Science.gov (United States)

    Xu, Y.; Huang, N.; Jing, J.; Liu, C.; Wang, H.; Fu, G.

    2016-12-01

    Observation and understanding of the physics of the 11-year solar activity cycle and 22-year magnetic cycle are among the most important research topics in solar physics. The solar cycle is responsible for magnetic field and particle fluctuations in the near-earth environment that have been found increasingly important in affecting the lives of human beings in the modern era. A systematic study of large-scale solar activities, as made possible by our rich data archive, will further help us to understand the global-scale magnetic fields that are closely related to solar cycles. The long-time-span data archive includes both full-disk and high-resolution Hα images. Prior to the wide use of CCD cameras in the 1990s, 35-mm film was the major medium for storing images. The research group at NJIT recently finished digitizing the film data obtained by the National Solar Observatory (NSO) and Big Bear Solar Observatory (BBSO) covering the period 1953 to 2000. The total volume of data exceeds 60 TB. To make this huge database scientifically valuable, some processing and calibration are required. One of the most important steps is to read the time stamps on all 14 million images, which is almost impossible to do manually. We implemented three different methods to recognize the time stamps automatically: Optical Character Recognition (OCR), classification trees, and TensorFlow. The latter two are machine learning approaches that are very popular nowadays in pattern recognition. We will present sample images and the clock-recognition results from all three methods.
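A classification tree for digit recognition, one of the three methods mentioned, can be sketched with scikit-learn's bundled 8x8 digit images standing in for the film-scan time stamps (this is our illustration, not the project's actual classifier or data):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Bundled 8x8 grayscale digits stand in for cropped time-stamp digits.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A single classification tree on raw pixel intensities.
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
```

In the full pipeline, each time stamp would first be segmented into individual digit images before a classifier like this (or an OCR engine, or a TensorFlow model) reads them.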

  17. Pupil dilation during recognition memory: Isolating unexpected recognition from judgment uncertainty.

    Science.gov (United States)

    Mill, Ravi D; O'Connor, Akira R; Dobbins, Ian G

    2016-09-01

    Optimally discriminating familiar from novel stimuli demands a decision-making process informed by prior expectations. Here we demonstrate that pupillary dilation (PD) responses during recognition memory decisions are modulated by expectations, and more specifically, that pupil dilation increases for unexpected compared to expected recognition. Furthermore, multi-level modeling demonstrated that the time course of the dilation during each individual trial contains separable early and late dilation components, with the early amplitude capturing unexpected recognition, and the later trailing slope reflecting general judgment uncertainty or effort. This is the first demonstration that the early dilation response during recognition is dependent upon observer expectations and that separate recognition expectation and judgment uncertainty components are present in the dilation time course of every trial. The findings provide novel insights into adaptive memory-linked orienting mechanisms as well as the general cognitive underpinnings of the pupillary index of autonomic nervous system activity. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. ALBEDO PATTERN RECOGNITION AND TIME-SERIES ANALYSES IN MALAYSIA

    Directory of Open Access Journals (Sweden)

    S. A. Salleh

    2012-07-01

    Full Text Available Pattern recognition and time-series analyses enable one to evaluate and generate predictions of specific phenomena. Albedo pattern and time-series analyses are especially useful in relation to climate-condition monitoring. This study was conducted to identify changes in Malaysia's albedo pattern. The recognized patterns and changes will be useful for a variety of environmental and climate monitoring studies such as carbon budgeting and aerosol mapping. Ten years (2000–2009) of MODIS satellite images were used for the analyses and interpretation. The images were processed using ERDAS Imagine remote sensing software, ArcGIS 9.3, the 6S code for atmospheric calibration, and several MODIS tools (MRT, HDF2GIS, albedo tools). Several methods of time-series analysis were explored; this paper demonstrates trend and seasonal time-series analyses using the converted HDF-format MODIS MCD43A3 albedo land product. The results revealed significant changes in albedo percentages over the past 10 years, and in the pattern with regard to Malaysia's nebulosity index (NI) and aerosol optical depth (AOD). A noticeable trend can be identified with regard to the maximum and minimum values of the albedo. The rises and falls of the line graph show a trend similar to the daily observations, differing in the magnitude of the rises and falls of albedo. Thus, it can be concluded that the temporal behaviour of land-surface albedo in Malaysia is uniform with respect to the local monsoons. 
However, although the average albedo shows a linear trend with the nebulosity index, the pattern changes of albedo with respect to the nebulosity index indicate that external factors also influence the albedo values, as the plotted sky conditions and diffusion do not show a uniform trend over the years, especially when the trend at 5-year intervals is examined; 2000 shows high
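As a rough illustration of the trend and seasonal time-series analyses mentioned above (not the authors' ERDAS/MODIS workflow), a monthly albedo-like series can be split into a moving-average trend and month-of-year seasonal means. The series and all parameter values below are synthetic.

```python
import numpy as np

# Synthetic 10-year monthly albedo-like series: slow drift + annual cycle.
months = np.arange(120)
trend = 0.20 + 0.0001 * months
seasonal = 0.02 * np.sin(2 * np.pi * months / 12)
series = trend + seasonal

# Trend estimate: 12-month moving average (valid part only).
kernel = np.ones(12) / 12
est_trend = np.convolve(series, kernel, mode="valid")      # length 109

# Seasonal estimate: month-of-year means of the detrended residual,
# aligned to the moving-average window centres.
resid = series[6:6 + est_trend.size] - est_trend
seasonal_means = np.array([resid[m::12].mean() for m in range(12)])

print(round(seasonal_means.max(), 3))  # -> 0.02  (seasonal amplitude recovered)
```

The same decomposition applied to a real MCD43A3 series would expose the monsoon-linked seasonal component separately from any long-term drift.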

  19. Role of syllable segmentation processes in peripheral word recognition.

    Science.gov (United States)

    Bernard, Jean-Baptiste; Calabrèse, Aurélie; Castet, Eric

    2014-12-01

    Previous studies of foveal visual word recognition provide evidence for a low-level syllable decomposition mechanism occurring during the recognition of a word. We investigated whether such a decomposition mechanism also exists in peripheral word recognition. Single words were visually presented to subjects in the peripheral field using a 6° square gaze-contingent simulated central scotoma. In the first experiment, words were either unicolor or had their adjacent syllables segmented with two different colors (color/syllable congruent condition). Reaction times for correct word identification were measured for the two conditions and for two different print sizes. Results show a significant decrease in reaction time for the color/syllable congruent condition compared with the unicolor condition. A second experiment suggests that this effect is specific to syllable decomposition and results from strategic control, presumably involving attentional factors, rather than stimulus-driven processing.

  20. Sensory, Cognitive, and Sensorimotor Learning Effects in Recognition Memory for Music.

    Science.gov (United States)

    Mathias, Brian; Tillmann, Barbara; Palmer, Caroline

    2016-08-01

    Recent research suggests that perception and action are strongly interrelated and that motor experience may aid memory recognition. We investigated the role of motor experience in auditory memory recognition processes by musicians using behavioral, ERP, and neural source current density measures. Skilled pianists learned one set of novel melodies by producing them and another set by perception only. Pianists then completed an auditory memory recognition test during which the previously learned melodies were presented with or without an out-of-key pitch alteration while the EEG was recorded. Pianists indicated whether each melody was altered from or identical to one of the original melodies. Altered pitches elicited a larger N2 ERP component than original pitches, and pitches within previously produced melodies elicited a larger N2 than pitches in previously perceived melodies. Cortical motor planning regions were more strongly activated within the time frame of the N2 following altered pitches in previously produced melodies compared with previously perceived melodies, and larger N2 amplitudes were associated with greater detection accuracy following production learning than perception learning. Early sensory (N1) and later cognitive (P3a) components elicited by pitch alterations correlated with predictions of sensory echoic and schematic tonality models, respectively, but only for the perception learning condition, suggesting that production experience alters the extent to which performers rely on sensory and tonal recognition cues. These findings provide evidence for distinct time courses of sensory, schematic, and motoric influences within the same recognition task and suggest that learned auditory-motor associations influence responses to out-of-key pitches.

  1. Optical Pattern Recognition

    Science.gov (United States)

    Yu, Francis T. S.; Jutamulia, Suganda

    2008-10-01

    Contributors; Preface; 1. Pattern recognition with optics Francis T. S. Yu and Don A. Gregory; 2. Hybrid neural networks for nonlinear pattern recognition Taiwei Lu; 3. Wavelets, optics, and pattern recognition Yao Li and Yunglong Sheng; 4. Applications of the fractional Fourier transform to optical pattern recognition David Mendlovic, Zeev Zalesky and Haldum M. Oxaktas; 5. Optical implementation of mathematical morphology Tien-Hsin Chao; 6. Nonlinear optical correlators with improved discrimination capability for object location and recognition Leonid P. Yaroslavsky; 7. Distortion-invariant quadratic filters Gregory Gheen; 8. Composite filter synthesis as applied to pattern recognition Shizhou Yin and Guowen Lu; 9. Iterative procedures in electro-optical pattern recognition Joseph Shamir; 10. Optoelectronic hybrid system for three-dimensional object pattern recognition Guoguang Mu, Mingzhe Lu and Ying Sun; 11. Applications of photorefractive devices in optical pattern recognition Ziangyang Yang; 12. Optical pattern recognition with microlasers Eung-Gi Paek; 13. Optical properties and applications of bacteriorhodopsin Q. Wang Song and Yu-He Zhang; 14. Liquid-crystal spatial light modulators Aris Tanone and Suganda Jutamulia; 15. Representations of fully complex functions on real-time spatial light modulators Robert W. Cohn and Laurence G. Hassbrook; Index.

  2. Real-time billboard trademark detection and recognition in sports video

    Science.gov (United States)

    Bu, Jiang; Lao, Song-Yan; Bai, Liang

    2013-03-01

    Nowadays, applications such as automatic video indexing, keyword-based video search and TV commercial monitoring can be developed by detecting and recognizing billboard trademarks. We propose a hierarchical solution for real-time billboard trademark recognition in various sports videos. In the first level, billboard frames are detected; a fuzzy decision tree with easily computed features is employed to accelerate the process. In the second level, color and regional SIFT features are combined for the first time to describe the appearance of trademarks, and shared nearest neighbor (SNN) clustering with the χ² distance is utilized instead of traditional K-means clustering to construct the SIFT vocabulary. Finally, Latent Semantic Analysis (LSA) based SIFT vocabulary matching is performed on the template trademark and the candidate regions in the billboard frame. Preliminary experiments demonstrate the effectiveness of the hierarchical solution, and real-time constraints are also met by our solution.
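The χ² distance that replaces the Euclidean distance in the vocabulary construction and matching step can be sketched as follows; the histograms below stand in for SIFT visual-word counts and are invented for illustration.

```python
import numpy as np

# Chi-squared distance between two (non-negative) histograms, the metric the
# abstract substitutes for Euclidean distance in SNN clustering / matching.
def chi2_distance(h1, h2, eps=1e-10):
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

template = np.array([4, 0, 2, 1])   # visual-word counts of a template trademark
candidate = np.array([3, 1, 2, 1])  # counts from a candidate billboard region
unrelated = np.array([0, 5, 0, 3])

# The candidate matches the template better (smaller chi-squared distance).
print(chi2_distance(template, candidate) < chi2_distance(template, unrelated))  # -> True
```

Unlike the plain squared distance, each bin's contribution is normalized by the bin mass, so heavily populated visual words do not dominate the comparison.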

  3. Visual Scan Paths and Recognition of Facial Identity in Autism Spectrum Disorder and Typical Development

    Science.gov (United States)

    Wilson, C. Ellie; Palermo, Romina; Brock, Jon

    2012-01-01

    Background Previous research suggests that many individuals with autism spectrum disorder (ASD) have impaired facial identity recognition, and also exhibit abnormal visual scanning of faces. Here, two hypotheses accounting for an association between these observations were tested: i) better facial identity recognition is associated with increased gaze time on the Eye region; ii) better facial identity recognition is associated with increased eye-movements around the face. Methodology and Principal Findings Eye-movements of 11 children with ASD and 11 age-matched typically developing (TD) controls were recorded whilst they viewed a series of faces, and then completed a two alternative forced-choice recognition memory test for the faces. Scores on the memory task were standardized according to age. In both groups, there was no evidence of an association between the proportion of time spent looking at the Eye region of faces and age-standardized recognition performance, thus the first hypothesis was rejected. However, the ‘Dynamic Scanning Index’ – which was incremented each time the participant saccaded into and out of one of the core-feature interest areas – was strongly associated with age-standardized face recognition scores in both groups, even after controlling for various other potential predictors of performance. Conclusions and Significance In support of the second hypothesis, results suggested that increased saccading between core-features was associated with more accurate face recognition ability, both in typical development and ASD. Causal directions of this relationship remain undetermined. PMID:22666378

  4. Anisomycin administered in the olfactory bulb and dorsal hippocampus impaired social recognition memory consolidation in different time-points.

    Science.gov (United States)

    Pena, R R; Pereira-Caixeta, A R; Moraes, M F D; Pereira, G S

    2014-10-01

    To identify an individual as familiar, rodents form a specific type of memory named social recognition memory. The olfactory bulb (OB) is an important structure for social recognition memory, while the recruitment of the hippocampus is still controversial. The present study was designed to elucidate the contributions of the OB and the dorsal hippocampus to the consolidation of social memory. For that purpose, we tested the effect of anisomycin (ANI), one of whose effects is the inhibition of protein synthesis, on the consolidation of social recognition memory. Swiss adult mice with cannulae implanted into the CA1 region of the dorsal hippocampus or into the OB were exposed to a juvenile for 5 min (training session; TR), and once again 1.5 h or 24 h later to test social short-term memory (S-STM) or social long-term memory (S-LTM), respectively. To study S-LTM consolidation, mice received intra-OB or intra-CA1 infusions of saline or ANI immediately, 3, 6 or 18 h after TR. ANI impaired S-LTM consolidation in the OB when administered immediately or 6 h after TR. In the dorsal hippocampus, ANI was amnesic only if administered 3 h after TR. Furthermore, the infusion of ANI into either the OB or CA1 immediately after training did not affect S-STM. Moreover, ANI administered into the OB did not alter the animals' performance in the buried food-finding task. Altogether, our results suggest that the consolidation of S-LTM requires the participation of both the OB and the hippocampus, although at different time points. This study may help shed light on the specific roles of the OB and dorsal hippocampus in social recognition memory. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. A Real-time Face/Hand Tracking Method for Chinese Sign Language Recognition

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper introduces a new Chinese Sign Language recognition (CSLR) system and a method for real-time tracking of the face and hands applied in the system. In the method, an improved agent algorithm is used to extract the regions of the face and hands and to track them. A Kalman filter is introduced to forecast the position and the search rectangle, and self-adaptation of the target color is designed to counteract the effect of illumination.
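A Kalman filter of the kind used here to forecast the next search position can be sketched with a constant-velocity state model; the model and noise values below are illustrative assumptions, not those of the paper.

```python
import numpy as np

# Constant-velocity Kalman filter sketch for forecasting the face/hand search
# window centre (parameter values are illustrative).
dt = 1.0
F = np.array([[1, 0, dt, 0],    # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)
Q = np.eye(4) * 1e-4            # process noise
R = np.eye(2) * 1e-2            # measurement noise

x, P = np.zeros(4), np.eye(4)
for t in range(1, 6):           # target moving at (2, 1) pixels per frame
    z = np.array([2.0 * t, 1.0 * t])
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)                        # update with measurement
    P = (np.eye(4) - K @ H) @ P

forecast = (F @ x)[:2]          # predicted search-window centre for next frame
print(np.round(forecast))
```

After a few frames the velocity estimate converges, so the forecast lands close to the target's true next position, (12, 6) here, which is exactly what narrows the search rectangle.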

  6. Effects of the Maximum Luminance in a Medical-grade Liquid-crystal Display on the Recognition Time of a Test Pattern: Observer Performance Using Landolt Rings.

    Science.gov (United States)

    Doi, Yasuhiro; Matsuyama, Michinobu; Ikeda, Ryuji; Hashida, Masahiro

    2016-07-01

    This study was conducted to measure the recognition time of the test pattern and to investigate the effects of the maximum luminance in a medical-grade liquid-crystal display (LCD) on the recognition time. Landolt rings as signals of the test pattern were used with four random orientations, one on each of the eight gray-scale steps. Ten observers input the orientation of the gap on the Landolt rings using cursor keys on the keyboard. The recognition times were automatically measured from the display of the test pattern on the medical-grade LCD to the input of the orientation of the gap in the Landolt rings. The maximum luminance in this study was set to one of four values (100, 170, 250, and 400 cd/m(2)), for which the corresponding recognition times were measured. As a result, the average recognition times for each observer with maximum luminances of 100, 170, 250, and 400 cd/m(2) were found to be 3.96 to 7.12 s, 3.72 to 6.35 s, 3.53 to 5.97 s, and 3.37 to 5.98 s, respectively. The results indicate that the observer's recognition time is directly proportional to the luminance of the medical-grade LCD. Therefore, it is evident that the maximum luminance of the medical-grade LCD affects the test pattern recognition time.

  7. Sex influence on face recognition memory moderated by presentation duration and reencoding.

    Science.gov (United States)

    Weirich, Sebastian; Hoffmann, Ferdinand; Meissner, Lucia; Heinz, Andreas; Bengner, Thomas

    2011-11-01

    It has been suggested that women have a better face recognition memory than men. Here we analyzed whether this advantage depends on a better encoding or consolidation of information and if the advantage is visible during short-term memory (STM), only, or whether it also remains evident in long-term memory (LTM). We tested short- and long-term face recognition memory in 36 nonclinical participants (19 women). We varied the duration of item presentation (1, 5, and 10 s), the time of testing (immediately after the study phase, 1 hr, and 24 hr later), and the possibility to reencode items (none, immediately after the study phase, after 1 hr). Women showed better overall face recognition memory than men (ηp² = .15, p face recognition was visible mainly if participants had the possibility to reencode faces during former test trials. Our results suggest women do not have a better face recognition memory than men per se, but may profit more than men from longer durations of presentation during encoding or the possibility for reencoding. Future research on sex differences in face recognition memory should explicate possible causes for the better encoding of face information in women.

  8. Repetition priming of face recognition in a serial choice reaction-time task.

    Science.gov (United States)

    Roberts, T; Bruce, V

    1989-05-01

    Marshall & Walker (1987) found that pictorial stimuli yield visual priming that is disrupted by an unpredictable visual event in the response-stimulus interval. They argue that visual stimuli are represented in memory in the form of distinct visual and object codes. Bruce & Young (1986) propose similar pictorial, structural and semantic codes which mediate the recognition of faces, yet repetition priming results obtained with faces as stimuli (Bruce & Valentine, 1985), and with objects (Warren & Morton, 1982) are quite different from those of Marshall & Walker (1987), in the sense that recognition is facilitated by pictures presented 20 minutes earlier. The experiment reported here used different views of familiar and unfamiliar faces as stimuli in a serial choice reaction-time task and found that, with identical pictures, repetition priming survives and intervening item requiring a response, with both familiar and unfamiliar faces. Furthermore, with familiar faces such priming was present even when the view of the prime was different from the target. The theoretical implications of these results are discussed.

  9. The active blind spot camera: hard real-time recognition of moving objects from a moving camera

    OpenAIRE

    Van Beeck, Kristof; Goedemé, Toon; Tuytelaars, Tinne

    2014-01-01

    This PhD research focuses on visual object recognition under specific demanding conditions. The object to be recognized as well as the camera move, and the time available for the recognition task is extremely short. This generic problem is applied here on a specific problem: the active blind spot camera. Statistics show a large number of accidents with trucks are related to the so-called blind spot, the area around the vehicle in which vulnerable road users are hard to perceive by the truck d...

  10. Automatic data-driven real-time segmentation and recognition of surgical workflow.

    Science.gov (United States)

    Dergachyova, Olga; Bouget, David; Huaulmé, Arnaud; Morandi, Xavier; Jannin, Pierre

    2016-06-01

    With the intention of extending the perception and action of surgical staff inside the operating room, the medical community has expressed a growing interest towards context-aware systems. Requiring an accurate identification of the surgical workflow, such systems make use of data from a diverse set of available sensors. In this paper, we propose a fully data-driven and real-time method for segmentation and recognition of surgical phases using a combination of video data and instrument usage signals, exploiting no prior knowledge. We also introduce new validation metrics for assessment of workflow detection. The segmentation and recognition are based on a four-stage process. Firstly, during the learning time, a Surgical Process Model is automatically constructed from data annotations to guide the following process. Secondly, data samples are described using a combination of low-level visual cues and instrument information. Then, in the third stage, these descriptions are employed to train a set of AdaBoost classifiers capable of distinguishing one surgical phase from others. Finally, AdaBoost responses are used as input to a Hidden semi-Markov Model in order to obtain a final decision. On the MICCAI EndoVis challenge laparoscopic dataset we achieved a precision and a recall of 91 % in classification of 7 phases. Compared to the analysis based on one data type only, a combination of visual features and instrument signals allows better segmentation, reduction of the detection delay and discovery of the correct phase order.
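The final decision stage, in which per-frame classifier responses are decoded into a coherent phase sequence, can be illustrated with plain Viterbi decoding over invented scores. The paper uses a Hidden semi-Markov Model, so this is a simplified stand-in with a hand-set self-transition probability.

```python
import numpy as np

# Viterbi smoothing of per-frame phase scores under a transition model that
# discourages jumping between phases (a simplified stand-in for the HSMM).
def viterbi(scores, stay=0.9):
    n_frames, n_phases = scores.shape
    switch = (1 - stay) / (n_phases - 1)
    log_trans = np.log(np.full((n_phases, n_phases), switch)
                       + np.eye(n_phases) * (stay - switch))
    log_obs = np.log(scores)
    path = np.zeros((n_frames, n_phases), int)
    v = log_obs[0].copy()
    for t in range(1, n_frames):
        cand = v[:, None] + log_trans       # cand[i, j]: come from i, go to j
        path[t] = cand.argmax(axis=0)
        v = cand.max(axis=0) + log_obs[t]
    states = [int(v.argmax())]
    for t in range(n_frames - 1, 0, -1):    # backtrack the best path
        states.append(int(path[t][states[-1]]))
    return states[::-1]

# Made-up per-frame scores for two phases; frame 2 is a spurious blip.
scores = np.array([[0.8, 0.2], [0.7, 0.3], [0.4, 0.6],
                   [0.8, 0.2], [0.3, 0.7], [0.2, 0.8]])
print(viterbi(scores))  # -> [0, 0, 0, 0, 1, 1]
```

Frame-wise argmax would output [0, 0, 1, 0, 1, 1]; the decoder removes the isolated blip while still honouring the genuine phase change at the end, which is the delay-reduction effect the abstract reports.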

  11. REAL-TIME FACE RECOGNITION BASED ON OPTICAL FLOW AND HISTOGRAM EQUALIZATION

    Directory of Open Access Journals (Sweden)

    D. Sathish Kumar

    2013-05-01

    Full Text Available Face recognition is one of the intensive areas of research in computer vision and pattern recognition, much of it focused on recognition of faces under varying facial expressions and pose variation. The constrained optical flow algorithm discussed in this paper recognizes facial images involving various expressions based on motion-vector computation. We propose an optical flow computation algorithm which processes frames of varying facial gestures and integrates them with a synthesized image in a probabilistic environment. A histogram equalization technique has been used to overcome the effect of illumination while capturing the input data with camera devices; it also enhances the contrast of the image for better processing. The experimental results confirm that the proposed face recognition system is robust and recognizes facial images under varying expressions and pose variations more accurately.
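Histogram equalization itself is a standard preprocessing step and can be sketched in a few lines; this is a generic implementation for 8-bit grayscale images, not the authors' code.

```python
import numpy as np

# Histogram equalization: map gray levels through the normalised cumulative
# histogram so the output uses the full 0..255 range (illumination normalisation).
def equalize(img):
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)[0][0]]          # first occupied gray level
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast image (values 100..131) is stretched to the full range.
img = np.arange(100, 132, dtype=np.uint8).reshape(4, 8)
out = equalize(img)
print(out.min(), out.max())  # -> 0 255
```

Because the mapping is monotone, the relative ordering of pixel intensities is preserved while global illumination differences between captures are largely removed.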

  12. Looking for myself: current multisensory input alters self-face recognition.

    Science.gov (United States)

    Tsakiris, Manos

    2008-01-01

    How do I know the person I see in the mirror is really me? Is it because I know the person simply looks like me, or is it because the mirror reflection moves when I move, and I see it being touched when I feel the touch myself? Studies of face recognition suggest that visual recognition of stored visual features informs self-face recognition. In contrast, body-recognition studies conclude that multisensory integration is the main cue to selfhood. The present study investigates for the first time the specific contribution of current multisensory input to self-face recognition. Participants were stroked on their face while they were looking at a morphed face being touched in synchrony or asynchrony. Before and after the visuo-tactile stimulation participants performed a self-recognition task. The results show that multisensory signals have a significant effect on self-face recognition. Synchronous tactile stimulation while watching another person's face being similarly touched produced a bias in recognizing one's own face, in the direction of the other person being included in the representation of one's own face. Multisensory integration can update cognitive representations of one's body, such as the sense of ownership. The present study extends this converging evidence by showing that the correlation of synchronous multisensory signals also updates the representation of one's face. The face is a key feature of our identity, but at the same time it is a source of rich multisensory experiences used to maintain or update self-representations.

  13. Two-dimensional statistical linear discriminant analysis for real-time robust vehicle-type recognition

    Science.gov (United States)

    Zafar, I.; Edirisinghe, E. A.; Acar, S.; Bez, H. E.

    2007-02-01

    Automatic vehicle Make and Model Recognition (MMR) systems provide useful performance enhancements to vehicle recognition systems that are based solely on Automatic License Plate Recognition (ALPR). Several car MMR systems have been proposed in the literature; however, these approaches are based on feature detection algorithms that can perform sub-optimally under adverse lighting and/or occlusion conditions. In this paper we propose a real-time, appearance-based car MMR approach using Two-Dimensional Linear Discriminant Analysis (2D-LDA) that is capable of addressing this limitation. We provide experimental results analysing the proposed algorithm's robustness under varying illumination and occlusion conditions. We show that the best performance with the proposed 2D-LDA based car MMR approach is obtained when the eigenvectors of lower significance are ignored. For the given database of 200 car images with 25 different make-model classifications, a best accuracy of 91% was obtained with the 2D-LDA approach. We use a direct Principal Component Analysis (PCA) based approach as a benchmark to compare and contrast the performance of the proposed 2D-LDA approach to car MMR. We conclude that in general the 2D-LDA based algorithm surpasses the performance of the PCA based approach.

  14. One process is not enough! A speed-accuracy tradeoff study of recognition memory.

    Science.gov (United States)

    Boldini, Angela; Russo, Riccardo; Avons, S E

    2004-04-01

    Speed-accuracy tradeoff (SAT) methods have been used to contrast single- and dual-process accounts of recognition memory. In these procedures, subjects are presented with individual test items and are required to make recognition decisions under various time constraints. In this experiment, we presented word lists under incidental learning conditions, varying the modality of presentation and the level of processing. At test, we manipulated the interval between each visually presented test item and a response signal, thus controlling the amount of time available to retrieve target information. Study-test modality match had a beneficial effect on recognition accuracy at short response-signal delays, whereas recognition accuracy benefited more from deep than from shallow processing at study only at relatively long response-signal delays (≥300 msec). The results are congruent with views suggesting that both a fast familiarity process and a slower recollection process contribute to recognition memory.

  15. The optimal viewing position in face recognition.

    Science.gov (United States)

    Hsiao, Janet H; Liu, Tina T

    2012-02-28

    In English word recognition, the best recognition performance is usually obtained when the initial fixation is directed to the left of the center (optimal viewing position, OVP). This effect has been argued to involve an interplay of left hemisphere lateralization for language processing and the perceptual experience of fixating at word beginnings most often. While both factors predict a left-biased OVP in visual word recognition, in face recognition they predict contrasting biases: People prefer to fixate the left half-face, suggesting that the OVP should be to the left of the center; nevertheless, the right hemisphere lateralization in face processing suggests that the OVP should be to the right of the center in order to project most of the face to the right hemisphere. Here, we show that the OVP in face recognition was to the left of the center, suggesting greater influence from the perceptual experience than hemispheric asymmetry in central vision. In contrast, hemispheric lateralization effects emerged when faces were presented away from the center; there was an interaction between presented visual field and location (center vs. periphery), suggesting differential influence from perceptual experience and hemispheric asymmetry in central and peripheral vision.

  16. Real-Time Gait Cycle Parameter Recognition Using a Wearable Accelerometry System

    Directory of Open Access Journals (Sweden)

    Jun-Ming Lu

    2011-07-01

    Full Text Available This paper presents the development of a wearable accelerometry system for real-time gait cycle parameter recognition. Using a tri-axial accelerometer, the wearable motion detector is a single waist-mounted device that measures trunk accelerations during walking. Several gait cycle parameters, including cadence, step regularity, stride regularity and step symmetry, can be estimated in real-time using an autocorrelation procedure. For validation purposes, five Parkinson’s disease (PD) patients and five young healthy adults were recruited in an experiment. The gait cycle parameters of the two subject groups of different mobility could be quantified and distinguished by the system. Practical considerations and limitations of implementing the autocorrelation procedure in such a real-time system are also discussed. This study can be extended to future attempts at real-time detection of disabling gaits, such as festinating or freezing of gait in PD patients. Ambulatory rehabilitation, gait assessment and personal telecare for people with gait disorders are also possible applications.
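The autocorrelation procedure for cadence estimation can be sketched on a synthetic trunk-acceleration signal: the lag of the first autocorrelation peak gives the step time, and the peak height indexes step regularity. The signal and the minimum-lag guard below are illustrative, not taken from the paper.

```python
import numpy as np

# Synthetic trunk acceleration: a clean 2 Hz oscillation (~2 steps per second).
fs = 100                                   # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
accel = np.sin(2 * np.pi * 2.0 * t)

# Normalised (biased) autocorrelation of the zero-mean signal.
x = accel - accel.mean()
ac = np.correlate(x, x, mode="full")[x.size - 1:]
ac /= ac[0]

# First peak beyond a short guard interval gives the step time.
min_lag = int(0.25 * fs)                   # ignore lags shorter than 0.25 s
step_lag = min_lag + int(ac[min_lag:].argmax())
cadence = 60 * fs / step_lag               # steps per minute
print(cadence)  # -> 120.0
```

On real data the peak at the stride lag (two steps) is compared with the step-lag peak to quantify step symmetry, and the peak heights fall below 1 in proportion to how irregular the gait is.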

  17. Iris unwrapping using the Bresenham circle algorithm for real-time iris recognition

    Science.gov (United States)

    Carothers, Matthew T.; Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.

    2015-02-01

    An efficient parallel architecture design for the iris unwrapping process in a real-time iris recognition system using the Bresenham Circle Algorithm is presented in this paper. Based on the characteristics of the model parameters, this algorithm was chosen over the widely used polar conversion technique as the iris unwrapping model. The architecture design is parallelized to increase the throughput of the system and is suitable for processing an input image of 320 × 240 pixels in real-time using Field Programmable Gate Array (FPGA) technology. Quartus software is used to implement, verify, and analyze the design's performance using the VHSIC Hardware Description Language. The system's predicted processing time is faster than that of modern iris unwrapping techniques in use today.
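The Bresenham (midpoint) circle algorithm at the core of the unwrapping model generates circle pixels with integer arithmetic only, which is what makes it attractive for FPGA implementation compared with trigonometric polar conversion. A software sketch:

```python
import math

# Bresenham/midpoint circle: all pixel coordinates on a circle of radius r
# centred at (cx, cy), computed with integer add/shift operations only.
def bresenham_circle(cx, cy, r):
    pts = set()
    x, y, d = 0, r, 3 - 2 * r
    while x <= y:
        # mirror the first octant into all eight octants
        for dx, dy in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pts.add((cx + dx, cy + dy))
        if d < 0:
            d += 4 * x + 6
        else:
            d += 4 * (x - y) + 10
            y -= 1
        x += 1
    return pts

# One iris "ring" around an assumed pupil centre (coordinates are illustrative).
ring = bresenham_circle(160, 120, 40)
err = max(abs(math.hypot(px - 160, py - 120) - 40) for px, py in ring)
print(err < 1)  # every generated pixel is within a pixel of the ideal circle
```

Unwrapping then amounts to reading the image intensities along such rings for increasing radii, one integer-arithmetic traversal per ring, with no sin/cos evaluations.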

  18. The Onset and Time Course of Semantic Priming during Rapid Recognition of Visual Words

    Science.gov (United States)

    Hoedemaker, Renske S.; Gordon, Peter C.

    2016-01-01

    In two experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (Ocular Lexical Decision Task), participants performed a lexical decision task using eye-movement responses on a sequence of four words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a meta-linguistic judgment. For both tasks, survival analyses showed that the earliest-observable effect (Divergence Point or DP) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective rather than a prospective priming mechanism and are consistent with compound-cue models of semantic priming. PMID:28230394
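The ex-Gaussian model behind the distribution fits treats each response time as a Gaussian component (mu, sigma) plus an independent exponential component (tau); priming effects that grow with response time show up mainly in tau. A small simulation with illustrative parameter values shows how tau can be recovered from the moments:

```python
import numpy as np

# Ex-Gaussian RT model: Gaussian(mu, sigma) + Exponential(tau).
# Parameter values are illustrative, not the study's estimates.
rng = np.random.default_rng(0)
mu, sigma, tau = 300.0, 40.0, 120.0          # milliseconds
n = 200_000
rts = rng.normal(mu, sigma, n) + rng.exponential(tau, n)

# Moment identities: mean = mu + tau, variance = sigma^2 + tau^2.
tau_hat = np.sqrt(max(rts.var() - sigma**2, 0.0))
print(round(rts.mean()), round(tau_hat))
```

Fitting mu, sigma, and tau separately for primed and unprimed targets (e.g. by maximum likelihood rather than the moment shortcut above) is what lets such analyses localise a priming effect in the slow tail rather than as a uniform shift.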

  19. L2 Word Recognition: Influence of L1 Orthography on Multi-Syllabic Word Recognition

    Science.gov (United States)

    Hamada, Megumi

    2017-01-01

    L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on…

  20. Combining high-speed SVM learning with CNN feature encoding for real-time target recognition in high-definition video for ISR missions

    Science.gov (United States)

    Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus

    2017-05-01

    For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data which has to be exploited with respect to relevant ground targets in real-time by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in Deep Convolutional Neural Networks (CNN) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system, with impressive results on relevant high-definition video scenes compared to conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows for a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) and also efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, yields a highly invariant feature extraction. This allows for significantly reduced adaptation and training of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using the established precision-recall diagrams, average precision and runtime figures on representative test data. A comparison to legacy target recognition approaches shows the impressive performance increase achieved by the proposed CNN+SVM machine-learning approach and the capability of real-time high
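The proprietary frequency-domain SVM is not described in detail, but the general identity such methods exploit, namely that spatial correlation becomes element-wise multiplication after an FFT, can be sketched as follows; signal and template are toy one-dimensional data.

```python
import numpy as np

# Circular cross-correlation via the FFT: corr = IFFT(FFT(s) * conj(FFT(t))).
# This O(N log N) identity is the generic trick behind frequency-domain
# training/evaluation; it is NOT the Airbus method itself.
signal = np.zeros(256)
signal[100:105] = [1.0, 2.0, 3.0, 2.0, 1.0]   # target pattern at position 100
templ = np.array([1.0, 2.0, 3.0, 2.0, 1.0])

t_padded = np.zeros_like(signal)
t_padded[:templ.size] = templ
corr = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(t_padded))).real
print(int(corr.argmax()))  # -> 100, the template's true location
```

In two dimensions the same identity evaluates a learned template against every position of a video frame with two FFTs and one element-wise product, which maps well onto GPGPU hardware.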

  1. Self-Recognition in Autistic Children.

    Science.gov (United States)

    Dawson, Geraldine; McKissick, Fawn Celeste

    1984-01-01

    Fifteen autistic children (four to six years old) were assessed for visual self-recognition ability, as well as for object permanence and gestural imitation. It was found that 13 of the 15 autistic children showed evidence of self-recognition. Consistent relationships were suggested between self-recognition and object permanence but not between…

  2. Hemispheric lateralization of linguistic prosody recognition in comparison to speech and speaker recognition.

    Science.gov (United States)

    Kreitewolf, Jens; Friederici, Angela D; von Kriegstein, Katharina

    2014-11-15

    Hemispheric specialization for linguistic prosody is a controversial issue. While it is commonly assumed that linguistic prosody and emotional prosody are preferentially processed in the right hemisphere, neuropsychological work directly comparing processes of linguistic prosody and emotional prosody suggests a predominant role of the left hemisphere for linguistic prosody processing. Here, we used two functional magnetic resonance imaging (fMRI) experiments to clarify the role of left and right hemispheres in the neural processing of linguistic prosody. In the first experiment, we sought to confirm previous findings showing that linguistic prosody processing compared to other speech-related processes predominantly involves the right hemisphere. Unlike previous studies, we controlled for stimulus influences by employing a prosody and speech task using the same speech material. The second experiment was designed to investigate whether a left-hemispheric involvement in linguistic prosody processing is specific to contrasts between linguistic prosody and emotional prosody or whether it also occurs when linguistic prosody is contrasted against other non-linguistic processes (i.e., speaker recognition). Prosody and speaker tasks were performed on the same stimulus material. In both experiments, linguistic prosody processing was associated with activity in temporal, frontal, parietal and cerebellar regions. Activation in temporo-frontal regions showed differential lateralization depending on whether the control task required recognition of speech or speaker: recognition of linguistic prosody predominantly involved right temporo-frontal areas when it was contrasted against speech recognition; when contrasted against speaker recognition, recognition of linguistic prosody predominantly involved left temporo-frontal areas. 
The results show that linguistic prosody processing involves functions of both hemispheres and suggest that recognition of linguistic prosody is based on

  3. Independent Component Analysis and Time-Frequency Masking for Speech Recognition in Multitalker Conditions

    Directory of Open Access Journals (Sweden)

    Reinhold Orglmeister

    2010-01-01

    Full Text Available When a number of speakers are simultaneously active, for example in meetings or noisy public places, the sources of interest need to be separated from interfering speakers and from each other in order to be robustly recognized. Independent component analysis (ICA has proven a valuable tool for this purpose. However, ICA outputs can still contain strong residual components of the interfering speakers whenever noise or reverberation is high. In such cases, nonlinear postprocessing can be applied to the ICA outputs, for the purpose of reducing remaining interferences. In order to improve robustness to the artefacts and loss of information caused by this process, recognition can be greatly enhanced by considering the processed speech feature vector as a random variable with time-varying uncertainty, rather than as deterministic. The aim of this paper is to show the potential to improve recognition of multiple overlapping speech signals through nonlinear postprocessing together with uncertainty-based decoding techniques.
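
    The ICA separation stage described above can be sketched with scikit-learn's FastICA on synthetic sources; the two "speakers" and the mixing matrix are invented for illustration, and the nonlinear postprocessing and uncertainty-based decoding discussed in the paper are not shown.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic "speakers": a sine and a sawtooth, mixed by an unknown matrix.
t = np.linspace(0, 8, 4000)
s1 = np.sin(2 * np.pi * 3 * t)
s2 = 2 * (2 * t - np.floor(2 * t)) - 1        # sawtooth wave
S = np.c_[s1, s2]
A = np.array([[1.0, 0.6],                     # mixing matrix: in a meeting,
              [0.4, 1.0]])                    # each mic hears both speakers
X = S @ A.T                                   # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                  # estimated sources

# Each recovered component should correlate strongly with one true source
# (up to sign and permutation, which ICA cannot resolve).
corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
print(corr.max(axis=1))
```

The residual interference that survives this step, mentioned in the abstract, is exactly what the subsequent masking and uncertainty-aware recognition are meant to absorb.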

  4. The time course of lexical competition during spoken word recognition in Mandarin Chinese: an event-related potential study.

    Science.gov (United States)

    Huang, Xianjun; Yang, Jin-Chen

    2016-01-20

    The present study investigated the effect of lexical competition on the time course of spoken word recognition in Mandarin Chinese using a unimodal auditory priming paradigm. Two kinds of competitive environments were designed. In one session (session 1), only unrelated and identical primes were presented before the target words. In the other session (session 2), besides the two conditions in session 1, the target words were also preceded by cohort primes that have the same initial syllables as the targets. Behavioral results showed an inhibitory effect of the cohort competitors (primes) on target word recognition. The event-related potential results showed that spoken word recognition processing in the middle and late latency windows is modulated by whether the phonologically related competitors are presented or not. Specifically, preceding activation of the competitors can induce direct competition between multiple candidate words and lead to increased processing difficulty, primarily at the word disambiguation and selection stage during Mandarin Chinese spoken word recognition. The current study provided both behavioral and electrophysiological evidence for the lexical competition effect among candidate words during spoken word recognition.

  5. Functional Connectivity of Multiple Brain Regions Required for the Consolidation of Social Recognition Memory.

    Science.gov (United States)

    Tanimizu, Toshiyuki; Kenney, Justin W; Okano, Emiko; Kadoma, Kazune; Frankland, Paul W; Kida, Satoshi

    2017-04-12

    found that social recognition memory is consolidated through CREB-mediated gene expression in the hippocampus, medial prefrontal cortex, anterior cingulate cortex (ACC), and amygdala. Importantly, network analyses based on c-fos expression suggest that functional connectivity of these four brain regions with other brain regions increases with time spent in social investigation, toward the generation of brain networks that consolidate social recognition memory. Furthermore, our findings suggest that the hippocampus functions as a hub to integrate brain networks and generate social recognition memory, whereas the ACC and amygdala are important for coordinating brain activity when social interaction is initiated by connecting with other brain regions. Copyright © 2017 the authors.

  6. [Comparative studies of face recognition].

    Science.gov (United States)

    Kawai, Nobuyuki

    2012-07-01

    Every human being is proficient in face recognition. However, the reason for and the manner in which humans have attained such an ability remain unknown. These questions can best be answered through comparative studies of face recognition in non-human animals. Studies in both primates and non-primates show that not only primates, but also non-primates possess the ability to extract information from their conspecifics and from human experimenters. Neural specialization for face recognition is shared with mammals in distant taxa, suggesting that face recognition evolved earlier than the emergence of mammals. A recent study indicated that a social insect, the golden paper wasp, can distinguish their conspecifics' faces, whereas a closely related species, which has a less complex social lifestyle with just one queen ruling a nest of underlings, did not show strong face recognition for their conspecifics. Social complexity and the need to differentiate between one another likely led humans to evolve their face recognition abilities.

  7. Social power and recognition of emotional prosody: High power is associated with lower recognition accuracy than low power.

    Science.gov (United States)

    Uskul, Ayse K; Paulmann, Silke; Weick, Mario

    2016-02-01

    Listeners have to pay close attention to a speaker's tone of voice (prosody) during daily conversations. This is particularly important when trying to infer the emotional state of the speaker. Although a growing body of research has explored how emotions are processed from speech in general, little is known about how psychosocial factors such as social power can shape the perception of vocal emotional attributes. Thus, the present studies explored how social power affects emotional prosody recognition. In a correlational study (Study 1) and an experimental study (Study 2), we show that high power is associated with lower accuracy in emotional prosody recognition than low power. These results, for the first time, suggest that individuals experiencing high or low power perceive emotional tone of voice differently. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. Robust parameterization of time-frequency characteristics for recognition of musical genres of Mexican culture

    Science.gov (United States)

    Pérez Rosas, Osvaldo G.; Rivera Martínez, José L.; Maldonado Cano, Luis A.; López Rodríguez, Mario; Amaya Reyes, Laura M.; Cano Martínez, Elizabeth; García Vázquez, Mireya S.; Ramírez Acosta, Alejandro A.

    2017-09-01

    The automatic identification and classification of musical genres based on sound similarities that form musical textures is a very active research area. In this context, recognition systems for musical genres have been built from time-frequency feature extraction methods combined with classification methods. The selection of these methods is important for a well-performing recognition system. In this article we propose the Mel-Frequency Cepstral Coefficients (MFCC) method as the feature extractor and Support Vector Machines (SVM) as the classifier for our system. The MFCC parameters established in the system through our time-frequency analysis represent the range of musical genres of Mexican culture considered in this article. For a genre classification system to be precise, the descriptors must represent the correct spectrum of each genre; to achieve this, a correct parameterization of the MFCC, like the one presented in this article, is required. With the developed system we obtain satisfactory detection results: the lowest identification percentage across musical genres was 66.67% and the highest precision was 100%.
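
    A minimal MFCC extractor along the lines described (framing, windowing, power spectrum, mel filterbank, log, DCT) can be sketched in NumPy/SciPy. All parameter values below are common defaults, not the parameterization the authors derived for Mexican genres, and a real system would feed these frame-level features to an SVM classifier.

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595 * np.log10(1 + f / 700)

def mel_to_hz(m):
    return 700 * (10 ** (m / 2595) - 1)

def mfcc(signal, sr=22050, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Minimal MFCC: frame -> window -> power spectrum -> mel filterbank
    -> log -> DCT. Production extractors add pre-emphasis, liftering, etc."""
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    frames = frames * np.hamming(n_fft)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank spanning 0 .. sr/2.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, lo:mid] = (np.arange(lo, mid) - lo) / max(mid - lo, 1)
        fbank[i, mid:hi] = (hi - np.arange(mid, hi)) / max(hi - mid, 1)
    logmel = np.log(power @ fbank.T + 1e-10)
    return dct(logmel, type=2, axis=1, norm='ortho')[:, :n_ceps]

# One 13-dimensional MFCC vector per frame for a 1-second 440 Hz tone.
sr = 22050
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
feats = mfcc(tone, sr=sr)
print(feats.shape)
```

Genre classification then amounts to summarizing these per-frame vectors over a track (e.g. mean and variance) and training an SVM on the summaries.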

  9. SSVEP recognition using common feature analysis in brain-computer interface.

    Science.gov (United States)

    Zhang, Yu; Zhou, Guoxu; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2015-04-15

    Canonical correlation analysis (CCA) has been successfully applied to steady-state visual evoked potential (SSVEP) recognition for brain-computer interface (BCI) applications. Although the CCA method outperforms traditional power spectral density analysis through multi-channel detection, it additionally requires pre-constructed reference signals of sine-cosine waves. It is likely to encounter overfitting when using a short time window, since the reference signals include no features from the training data. We consider that a group of electroencephalogram (EEG) data trials recorded at a certain stimulus frequency from the same subject should share common features that bear the real SSVEP characteristics. This study therefore proposes a common feature analysis (CFA)-based method that exploits these latent common features as natural reference signals in the correlation analysis for SSVEP recognition. Good performance of the CFA method for SSVEP recognition is validated with EEG data recorded from ten healthy subjects, in contrast to CCA and a multiway extension of CCA (MCCA). Experimental results indicate that the CFA method significantly outperformed the CCA and MCCA methods for SSVEP recognition when using a short time window (i.e., less than 1 s). The superiority of the proposed CFA method suggests it is promising for the development of real-time SSVEP-based BCIs. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Face recognition increases during saccade preparation.

    Science.gov (United States)

    Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

    Face perception is integral to the human perception system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features of an object, such as its orientation, improves at the saccade landing point. Interestingly, there is also evidence indicating that faces are processed in early visual processing stages similarly to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed that mapped the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of the crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.

  11. Bilingual Language Switching: Production vs. Recognition

    Science.gov (United States)

    Mosca, Michela; de Bot, Kees

    2017-01-01

    This study aims at assessing how bilinguals select words in the appropriate language in production and recognition while minimizing interference from the non-appropriate language. Two prominent models are considered, both of which assume that when one language is in use, the other is suppressed. The Inhibitory Control (IC) model suggests that, in both production and recognition, the amount of inhibition on the non-target language is greater for the stronger than for the weaker language. In contrast, the Bilingual Interactive Activation (BIA) model proposes that, in language recognition, the amount of inhibition on the weaker language is greater than that on the stronger language. To investigate whether bilingual language production and recognition can be accounted for by a single model of bilingual processing, we tested a group of native speakers of Dutch (L1) who were advanced speakers of English (L2) in a bilingual recognition and production task. Specifically, language switching costs were measured while participants performed a lexical decision (recognition) and a picture naming (production) task involving language switching. Results suggest that while in language recognition the amount of inhibition applied to the non-appropriate language increases along with its dominance, as predicted by the IC model, in production the amount of inhibition applied to the non-relevant language is not related to language dominance, but rather may be modulated by speakers' unconscious strategies to foster the weaker language. This difference indicates that bilingual language recognition and production might rely on different processing mechanisms and cannot be accounted for within one of the existing models of bilingual language processing. PMID:28638361

  12. Pattern recognition techniques and neo-deterministic seismic hazard: Time dependent scenarios for North-Eastern Italy

    International Nuclear Information System (INIS)

    Peresan, A.; Vaccari, F.; Panza, G.F.; Zuccolo, E.; Gorshkov, A.

    2009-05-01

    An integrated neo-deterministic approach to seismic hazard assessment has been developed that combines different pattern recognition techniques, designed for the space-time identification of strong earthquakes, with algorithms for the realistic modeling of seismic ground motion. The integrated approach allows for a time dependent definition of the seismic input, through the routine updating of earthquake predictions. The scenarios of expected ground motion, associated with the alarmed areas, are defined by means of full waveform modeling. A set of neo-deterministic scenarios of ground motion is defined at regional and local scale, thus providing a prioritization tool for timely prevention and mitigation actions. Constraints on the space and time of occurrence of impending strong earthquakes are provided by three formally defined and globally tested algorithms, which have been developed according to a pattern recognition scheme. Two algorithms, namely CN and M8, are routinely used for intermediate-term middle-range earthquake predictions, while a third algorithm allows for the identification of areas prone to large events. These independent procedures have been combined to better constrain the alarmed area. The pattern recognition of earthquake-prone areas does not belong to the family of earthquake prediction algorithms, since it does not provide any information about the time of occurrence of the expected earthquakes. Nevertheless, it can be considered the term-less zero-approximation, which restricts the alerted areas (e.g. defined by CN or M8) to the more precise location of large events. Italy is the only region of moderate seismic activity where the two different prediction algorithms CN and M8S (i.e. a spatially stabilized variant of M8) are applied simultaneously, and a real-time test of predictions, for earthquakes with magnitude larger than 5.4, has been ongoing since 2003. The application of the CN to the Adriatic region (s.l.), which is relevant

  13. Real-Time Control of an Exoskeleton Hand Robot with Myoelectric Pattern Recognition.

    Science.gov (United States)

    Lu, Zhiyuan; Chen, Xiang; Zhang, Xu; Tong, Kay-Yu; Zhou, Ping

    2017-08-01

    Robot-assisted training provides an effective approach to neurological injury rehabilitation. To meet the challenge of hand rehabilitation after neurological injuries, this study presents an advanced myoelectric pattern recognition scheme for real-time intention-driven control of a hand exoskeleton. The developed scheme detects and recognizes the user's intention for six different hand motions using four channels of surface electromyography (EMG) signals acquired from the forearm and hand muscles, and then drives the exoskeleton to assist the user in accomplishing the intended motion. The system was tested with eight neurologically intact subjects and two individuals with spinal cord injury (SCI). The overall control accuracy was [Formula: see text] for the neurologically intact subjects and [Formula: see text] for the SCI subjects. The total lag of the system was approximately 250[Formula: see text]ms, including data acquisition, transmission and processing. One SCI subject also participated in training sessions in his second and third visits; both control accuracy and efficiency tended to improve. These results show great potential for applying advanced myoelectric pattern recognition control of the wearable robotic hand system toward improving hand function after neurological injuries.
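
    A bare-bones version of such a myoelectric pattern recognition loop — windowed time-domain EMG features feeding a classifier — might look like the sketch below. The feature set (a simplified Hudgins set), the synthetic "EMG" windows, and the LDA classifier are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

def td_features(window):
    """Simplified Hudgins time-domain features for one EMG channel:
    mean absolute value, zero crossings, waveform length."""
    mav = np.mean(np.abs(window))
    zc = np.sum(np.diff(np.signbit(window).astype(int)) != 0)
    wl = np.sum(np.abs(np.diff(window)))
    return [mav, zc, wl]

rng = np.random.default_rng(0)

# Two simulated "motions", each a 4-channel window of amplitude-modulated
# noise with a different per-channel activation pattern (invented data).
X, y = [], []
for label, gains in enumerate([[1.0, 0.2, 0.8, 0.1],
                               [0.1, 0.9, 0.2, 1.0]]):
    for _ in range(60):
        w = rng.normal(scale=gains, size=(200, 4))
        X.append(np.concatenate([td_features(w[:, ch]) for ch in range(4)]))
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))
```

In a real-time controller, each incoming EMG window would be featurized the same way and the predicted label mapped to an exoskeleton motion command within the system's latency budget.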

  14. Modeling Confidence and Response Time in Recognition Memory

    Science.gov (United States)

    Ratcliff, Roger; Starns, Jeffrey J.

    2009-01-01

    A new model for confidence judgments in recognition memory is presented. In the model, the match between a single test item and memory produces a distribution of evidence, with better matches corresponding to distributions with higher means. On this match dimension, confidence criteria are placed, and the areas between the criteria under the…

  15. Multitasking During Degraded Speech Recognition in School-Age Children.

    Science.gov (United States)

    Grieco-Calub, Tina M; Ward, Kristina M; Brehm, Laurel

    2017-01-01

    Multitasking requires individuals to allocate their cognitive resources across different tasks. The purpose of the current study was to assess school-age children's multitasking abilities during degraded speech recognition. Children (8 to 12 years old) completed a dual-task paradigm including a sentence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded with 8, 6, or 4 spectral channels, and a visual monitoring (secondary) task. Children's accuracy and reaction time on the visual monitoring task were quantified during the dual-task paradigm in each condition of the primary task and compared with single-task performance. Children experienced dual-task costs in the 6- and 4-channel conditions of the primary speech recognition task, with decreased accuracy on the visual monitoring task relative to baseline performance. In all conditions, children's dual-task performance on the visual monitoring task was strongly predicted by their single-task (baseline) performance on the task. Results suggest that children's proficiency with the secondary task contributes to the magnitude of dual-task costs while multitasking during degraded speech recognition.

  16. Toward fast feature adaptation and localization for real-time face recognition systems

    NARCIS (Netherlands)

    Zuo, F.; With, de P.H.N.; Ebrahimi, T.; Sikora, T.

    2003-01-01

    In a home environment, video surveillance employing face detection and recognition is attractive for new applications. Facial feature (e.g. eyes and mouth) localization in the face is an essential task for face recognition because it constitutes an indispensable step for face geometry normalization.

  17. Oxytocin improves facial emotion recognition in young adults with antisocial personality disorder.

    Science.gov (United States)

    Timmermann, Marion; Jeung, Haang; Schmitt, Ruth; Boll, Sabrina; Freitag, Christine M; Bertsch, Katja; Herpertz, Sabine C

    2017-11-01

    Deficient facial emotion recognition has been suggested to underlie aggression in individuals with antisocial personality disorder (ASPD). As the neuropeptide oxytocin (OT) has been shown to improve facial emotion recognition, it might also exert beneficial effects in this population, whose behavior can cause substantial harm to society. In a double-blind, randomized, placebo-controlled crossover trial, 22 individuals with ASPD and 29 healthy control (HC) subjects (matched for age, sex, intelligence, and education) were intranasally administered either OT (24 IU) or a placebo 45 min before participating in an emotion classification paradigm with fearful, angry, and happy faces. We assessed the number of correct classifications and reaction times as indicators of emotion recognition ability. Significant group × substance × emotion interactions were found in correct classifications and reaction times. Compared to HC, individuals with ASPD showed deficits in recognizing fearful and happy faces; these group differences were no longer observable under OT. Additionally, reaction times for angry faces differed significantly between the ASPD and HC groups in the placebo condition. This effect was mainly driven by longer reaction times in HC subjects after placebo administration compared to OT administration, while individuals with ASPD descriptively showed the contrary response pattern. Our data indicate an improvement in the recognition of fearful and happy facial expressions by OT in young adults with ASPD. The increased recognition of facial fear is of particular importance, since the correct perception of distress signals in others is thought to inhibit aggression. Beneficial effects of OT might be further mediated by improved recognition of facial happiness, probably reflecting increased social reward responsiveness. Copyright © 2017. Published by Elsevier Ltd.

  18. Two processes support visual recognition memory in rhesus monkeys.

    Science.gov (United States)

    Guderian, Sebastian; Brigham, Danielle; Mishkin, Mortimer

    2011-11-29

    A large body of evidence in humans suggests that recognition memory can be supported by both recollection and familiarity. Recollection-based recognition is characterized by the retrieval of contextual information about the episode in which an item was previously encountered, whereas familiarity-based recognition is characterized instead by knowledge only that the item had been encountered previously in the absence of any context. To date, it is unknown whether monkeys rely on similar mnemonic processes to perform recognition memory tasks. Here, we present evidence from the analysis of receiver operating characteristics, suggesting that visual recognition memory in rhesus monkeys also can be supported by two separate processes and that these processes have features considered to be characteristic of recollection and familiarity. Thus, the present study provides converging evidence across species for a dual process model of recognition memory and opens up the possibility of studying the neural mechanisms of recognition memory in nonhuman primates on tasks that are highly similar to the ones used in humans.

  19. Face and body recognition show similar improvement during childhood.

    Science.gov (United States)

    Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda

    2015-09-01

    Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Radar Waveform Recognition Based on Time-Frequency Analysis and Artificial Bee Colony-Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Lutao Liu

    2018-04-01

    Full Text Available In this paper, a system for identifying eight kinds of radar waveforms is explored. The waveforms are binary phase shift keying (BPSK), Costas codes, linear frequency modulation (LFM) and polyphase codes (including P1, P2, P3, P4 and Frank codes). Features based on power spectral density (PSD), moments and cumulants, instantaneous properties and time-frequency analysis are extracted from the waveforms, and three new features are proposed. The classifier is a support vector machine (SVM), optimized by the artificial bee colony (ABC) algorithm. The system shows good robustness, low computational complexity and a high recognition rate in low signal-to-noise ratio (SNR) situations. The simulation results indicate that the overall recognition rate is 92% when the SNR is −4 dB.
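
    A toy version of this feature-extraction-plus-SVM pipeline, distinguishing just LFM from BPSK pulses at low SNR using two simple PSD descriptors, can be sketched as follows. The waveform parameters and the two features are illustrative choices, and the ABC optimization of the SVM hyperparameters is omitted.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N = 1024                                     # samples per pulse

def add_noise(s, snr_db):
    p = np.mean(np.abs(s) ** 2)
    sigma = np.sqrt(p / (2 * 10 ** (snr_db / 10)))
    return s + sigma * (rng.normal(size=N) + 1j * rng.normal(size=N))

def lfm(snr_db):
    t = np.arange(N)
    return add_noise(np.exp(1j * np.pi * 0.0004 * t ** 2), snr_db)

def bpsk(snr_db):
    code = rng.choice([-1.0, 1.0], size=8)   # random 8-chip phase code
    carrier = np.exp(1j * 2 * np.pi * 0.1 * np.arange(N))
    return add_noise(np.repeat(code, N // 8) * carrier, snr_db)

def features(s):
    psd = np.abs(np.fft.fft(s)) ** 2
    psd /= psd.sum()
    # Two crude PSD shape descriptors: spectral flatness and peak height.
    flatness = np.exp(np.mean(np.log(psd + 1e-12))) / np.mean(psd)
    return [flatness, psd.max()]

X = [features(lfm(-4)) for _ in range(100)] + \
    [features(bpsk(-4)) for _ in range(100)]
y = [0] * 100 + [1] * 100
X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0)).fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))
```

The full system extends this idea to eight classes with a much richer feature set (moments, cumulants, instantaneous and time-frequency features) and tunes the SVM's C and kernel parameters with the ABC search instead of fixing them by hand.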

  1. Reading in developmental prosopagnosia: Evidence for a dissociation between word and face recognition.

    Science.gov (United States)

    Starrfelt, Randi; Klargaard, Solja K; Petersen, Anders; Gerlach, Christian

    2018-02-01

    Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception Test, and a face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: (a) single word reading with words of varying length, (b) vocal response times in single letter and short word naming, (c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and (d) text reading. Participants with developmental prosopagnosia performed strikingly similarly to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition, as the difference in performance with faces and words was significantly greater for participants with developmental prosopagnosia than for controls. Adult developmental prosopagnosics read as quickly and fluently as controls, while they are seemingly unable to learn efficient strategies for recognizing faces. We suggest that this is due to the differing demands that face and word recognition place on the perceptual system. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Pattern recognition in probability spaces for visualization and identification of plasma confinement regimes and confinement time scaling

    International Nuclear Information System (INIS)

    Verdoolaege, G; Karagounis, G; Oost, G Van; Tendler, M

    2012-01-01

    Pattern recognition is becoming an increasingly important tool for making inferences from the massive amounts of data produced in fusion experiments. The purpose is to contribute to physics studies and plasma control. In this work, we address the visualization of plasma confinement data, the (real-time) identification of confinement regimes and the establishment of a scaling law for the energy confinement time. We take an intrinsically probabilistic approach, modeling data from the International Global H-mode Confinement Database with Gaussian distributions. We show that pattern recognition operations working in the associated probability space are considerably more powerful than their counterparts in a Euclidean data space. This opens up new possibilities for analyzing confinement data and for fusion data processing in general. We hence advocate the essential role played by measurement uncertainty for data interpretation in fusion experiments. (paper)
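
    The core idea above — comparing measurements as probability distributions rather than as points in a Euclidean data space — can be illustrated with univariate Gaussians and a closed-form divergence. The symmetrized KL divergence used here is a simple stand-in for the geodesic-distance machinery of the probabilistic approach, and the "confinement" numbers are invented.

```python
import numpy as np

def kl_gauss(m1, s1, m2, s2):
    """Closed-form KL divergence KL(N(m1,s1) || N(m2,s2)), univariate."""
    return np.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

def sym_kl(a, b):
    """Symmetrized KL: a simple 'distance' between measurements modeled
    as Gaussians (mean, standard deviation) rather than as raw values."""
    return kl_gauss(*a, *b) + kl_gauss(*b, *a)

# Invented confinement measurements as (value, uncertainty) pairs.
low_mode = (1.0, 0.10)     # an L-mode-like operating point
high_mode = (2.0, 0.25)    # an H-mode-like operating point
sample = (1.9, 0.20)       # new measurement to classify

d_low = sym_kl(sample, low_mode)
d_high = sym_kl(sample, high_mode)
print('H-mode-like' if d_high < d_low else 'L-mode-like')  # → H-mode-like
```

Because the distance accounts for the measurement uncertainty as well as the value, a noisy point near a regime boundary is judged differently than a precise one, which is exactly the leverage the abstract claims over Euclidean pattern recognition.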

  3. Familiar Person Recognition: Is Autonoetic Consciousness More Likely to Accompany Face Recognition Than Voice Recognition?

    Science.gov (United States)

    Barsics, Catherine; Brédart, Serge

    2010-11-01

    Autonoetic consciousness is a fundamental property of human memory, enabling us to experience mental time travel, to recollect past events with a feeling of self-involvement, and to project ourselves into the future. Autonoetic consciousness is a characteristic of episodic memory. By contrast, awareness of the past associated with a mere feeling of familiarity or knowing relies on noetic consciousness, depending on semantic memory integrity. The present research evaluated whether conscious recollection of episodic memories is more likely to occur following the recognition of a familiar face than following the recognition of a familiar voice. Recall of semantic information (biographical information) was also assessed. Previous studies that investigated the recall of biographical information following person recognition used faces and voices of famous people as stimuli. In this study, the participants were presented with personally familiar people's voices and faces, thus avoiding the presence of identity cues in the spoken extracts and allowing stricter control of frequency of exposure for both types of stimuli (voices and faces). In the present study, the rate of retrieved episodic memories, associated with autonoetic awareness, was significantly higher for familiar faces than familiar voices, even though the level of overall recognition was similar for both stimulus domains. The same pattern was observed for semantic information retrieval. These results and their implications for current Interactive Activation and Competition person recognition models are discussed.

  4. An analog VLSI real time optical character recognition system based on a neural architecture

    International Nuclear Information System (INIS)

    Bo, G.; Caviglia, D.; Valle, M.

    1999-01-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system

  5. An analog VLSI real time optical character recognition system based on a neural architecture

    Energy Technology Data Exchange (ETDEWEB)

    Bo, G.; Caviglia, D.; Valle, M. [Genoa Univ. (Italy). Dip. of Biophysical and Electronic Engineering

    1999-03-01

    In this paper a real time Optical Character Recognition system is presented: it is based on a feature extraction module and a neural network classifier which have been designed and fabricated in analog VLSI technology. Experimental results validate the circuit functionality. The results obtained from a validation based on a mixed approach (i.e., an approach based on both experimental and simulation results) confirm the soundness and reliability of the system.

  6. L2 Word Recognition: Influence of L1 Orthography on Multi-syllabic Word Recognition.

    Science.gov (United States)

    Hamada, Megumi

    2017-10-01

    L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, findings on multi-syllabic words in English are still limited, despite the fact that the vast majority of words are multi-syllabic. This study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on the position of an embedded word. The participants were Arabic ESL learners, Chinese ESL learners, and native speakers of English. The task was a word search task, in which the participants identified a target word embedded in a pseudoword at the initial, middle, or final position. The search accuracy and speed indicated that all groups showed a strong preference for the initial position. The accuracy data further indicated group differences: the Arabic group showed higher accuracy in the final position than in the middle, the Chinese group showed the opposite pattern, and the native speakers showed no difference between the two positions. The findings suggest that L2 multi-syllabic word recognition involves unique processes.

  7. Progesterone impairs social recognition in male rats.

    Science.gov (United States)

    Bychowski, Meaghan E; Auger, Catherine J

    2012-04-01

    The influence of progesterone in the brain and on the behavior of females is fairly well understood. However, less is known about the effect of progesterone in the male system. In male rats, receptors for progesterone are present in virtually all vasopressin (AVP) immunoreactive cells in the bed nucleus of the stria terminalis (BST) and the medial amygdala (MeA). This colocalization functions to regulate AVP expression, as progesterone and/or progestin receptors (PRs) suppress AVP expression in these same extrahypothalamic regions in the brain. These data suggest that progesterone may influence AVP-dependent behavior. While AVP is implicated in numerous behavioral and physiological functions in rodents, AVP appears essential for social recognition of conspecifics. Therefore, we examined the effects of progesterone on social recognition. We report that progesterone plays an important role in modulating social recognition in the male brain, as progesterone treatment leads to a significant impairment of social recognition in male rats. Moreover, progesterone appears to act on PRs to impair social recognition, as progesterone impairment of social recognition is blocked by a PR antagonist, RU-486. Social recognition is also impaired by a specific progestin agonist, R5020. Interestingly, we show that progesterone does not interfere with either general memory or olfactory processes, suggesting that progesterone is critically important to social recognition memory. These data provide strong evidence that physiological levels of progesterone can have an important impact on social behavior in male rats. Copyright © 2012 Elsevier Inc. All rights reserved.

  8. Gender differences in the recognition of emotional faces: are men less efficient?

    Directory of Open Access Journals (Sweden)

    Ana Ruiz-Ibáñez

    2017-06-01

    Full Text Available As research on the recollection of stimuli with emotional valence indicates, emotions influence memory. Many studies of face and emotional facial expression recognition have focused on differences associated with age (young vs. old people) and gender (men vs. women). Nevertheless, such studies have produced contradictory results, so gender involvement needs to be studied in greater depth. The main objective of our research is to analyze differences in the recognition of images of faces with emotional facial expressions between two groups of university students aged 18-30, one composed of men and the other of women. The results showed statistically significant differences in corrected face recognition (hit rate minus false alarm rate): the women demonstrated better recognition than the men. However, other variables analyzed, such as time and efficiency, did not yield conclusive results. Furthermore, a significant negative correlation between time on task and efficiency was found in the male group. This information reinforces not only the hypothesis of a gender difference in face recognition, in favor of women, but also those hypotheses that suggest different cognitive processing of facial stimuli in the two sexes. Finally, we argue the need for further research on variables such as age and sociocultural level.
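
    The corrected recognition score used above (hit rate minus false alarm rate) is straightforward to compute; a minimal sketch with hypothetical trial counts:

```python
def corrected_recognition(hits, misses, false_alarms, correct_rejections):
    """Corrected recognition score: hit rate minus false-alarm rate."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return hit_rate - fa_rate

# Hypothetical counts: 18/20 studied faces recognized, 4/20 new faces
# falsely called old -> hit rate 0.9, false-alarm rate 0.2.
score = corrected_recognition(18, 2, 4, 16)
```

    Subtracting the false-alarm rate penalizes a liberal response bias, so a participant who says "old" to everything scores zero rather than perfectly.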

  9. Cortical Networks for Visual Self-Recognition

    Science.gov (United States)

    Sugiura, Motoaki

    This paper briefly reviews recent developments regarding the brain mechanisms of visual self-recognition. A special cognitive mechanism for visual self-recognition has been postulated based on behavioral and neuropsychological evidence, but its neural substrate remains controversial. Recent functional imaging studies suggest that multiple cortical mechanisms play self-specific roles during visual self-recognition, reconciling the existing controversy. Respective roles for the left occipitotemporal, right parietal, and frontal cortices in symbolic, visuospatial, and conceptual aspects of self-representation have been proposed.

  10. Cortical networks for visual self-recognition

    International Nuclear Information System (INIS)

    Sugiura, Motoaki

    2007-01-01

    This paper briefly reviews recent developments regarding the brain mechanisms of visual self-recognition. A special cognitive mechanism for visual self-recognition has been postulated based on behavioral and neuropsychological evidence, but its neural substrate remains controversial. Recent functional imaging studies suggest that multiple cortical mechanisms play self-specific roles during visual self-recognition, reconciling the existing controversy. Respective roles for the left occipitotemporal, right parietal, and frontal cortices in symbolic, visuospatial, and conceptual aspects of self-representation have been proposed.

  11. Adults' strategies for simple addition and multiplication: verbal self-reports and the operand recognition paradigm.

    Science.gov (United States)

    Metcalfe, Arron W S; Campbell, Jamie I D

    2011-05-01

    Accurate measurement of cognitive strategies is important in diverse areas of psychological research. Strategy self-reports are a common measure, but C. Thevenot, M. Fanget, and M. Fayol (2007) proposed a more objective method to distinguish different strategies in the context of mental arithmetic. In their operand recognition paradigm, speed of recognition memory for problem operands after solving a problem indexes strategy (e.g., direct memory retrieval vs. a procedural strategy). Here, in 2 experiments, operand recognition time was the same following simple addition or multiplication, but, consistent with a wide variety of previous research, strategy reports indicated much greater use of procedures (e.g., counting) for addition than multiplication. Operation, problem size (e.g., 2 + 3 vs. 8 + 9), and operand format (digits vs. words) had interactive effects on reported procedure use that were not reflected in recognition performance. Regression analyses suggested that recognition time was influenced at least as much by the relative difficulty of the preceding problem as by the strategy used. The findings indicate that the operand recognition paradigm is not a reliable substitute for strategy reports and highlight the potential impact of difficulty-related carryover effects in sequential cognitive tasks.

  12. Improving the Robustness of Real-Time Myoelectric Pattern Recognition against Arm Position Changes in Transradial Amputees

    Directory of Open Access Journals (Sweden)

    Yanjuan Geng

    2017-01-01

    Previous studies have shown that arm position variations significantly degrade the classification performance of myoelectric pattern-recognition-based prosthetic control, and the cascade classifier (CC) and multiposition classifier (MPC) have been proposed to minimize such degradation in offline scenarios. However, it remains unknown whether these approaches also perform well in the clinical use of multifunctional prosthesis control. In this study, the online effect of arm position variation on motion identification was evaluated using a motion-test environment (MTE) developed to mimic the real-time control of myoelectric prostheses. The performance of different classifier configurations in reducing the impact of arm position variation was investigated using four real-time metrics based on datasets obtained from transradial amputees. The results showed that, compared to the commonly used motion classification method, the CC and MPC configurations improved real-time performance across seven classes of movements in five different arm positions (8.7% and 12.7% increases in motion completion rate, respectively). The results also indicated that high offline classification accuracy does not ensure good real-time performance under variable arm positions, which necessitates investigating real-time control performance to gain proper insight into the clinical implementation of EMG-pattern-recognition-based controllers for limb amputees.

  13. Super-recognition in development: A case study of an adolescent with extraordinary face recognition skills.

    Science.gov (United States)

    Bennetts, Rachel J; Mole, Joseph; Bate, Sarah

    2017-09-01

    Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a "super-recognizer" (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence.

  14. Emotion recognition in borderline personality disorder: effects of emotional information on negative bias.

    Science.gov (United States)

    Fenske, Sabrina; Lis, Stefanie; Liebke, Lisa; Niedtfeld, Inga; Kirsch, Peter; Mier, Daniela

    2015-01-01

    Borderline Personality Disorder (BPD) is characterized by severe deficits in social interactions, which might be linked to deficits in emotion recognition. Research on emotion recognition abilities in BPD revealed heterogeneous results, ranging from deficits to heightened sensitivity. The most stable findings point to an impairment in the evaluation of neutral facial expressions as neutral, as well as to a negative bias in emotion recognition; that is, the tendency to attribute negative emotions to neutral expressions, or in a broader sense to report a more negative emotion category than depicted. However, it remains unclear which contextual factors influence the occurrence of this negative bias. Previous studies suggest that priming by preceding emotional information and also constrained processing time might augment the emotion recognition deficit in BPD. To test these assumptions, 32 female BPD patients and 31 healthy females, matched for age and education, participated in an emotion recognition study, in which every facial expression was preceded by either a positive, neutral or negative scene. Furthermore, time constraints for processing were varied by presenting the facial expressions with short (100 ms) or long duration (up to 3000 ms) in two separate blocks. BPD patients showed a significant deficit in emotion recognition for neutral and positive facial expressions, associated with a significant negative bias. In BPD patients, this emotion recognition deficit was differentially affected by preceding emotional information and time constraints, with a greater influence of emotional information during long face presentations and a greater influence of neutral information during short face presentations. Our results are in line with previous findings supporting the existence of a negative bias in emotion recognition in BPD patients, and provide further insights into biased social perceptions in BPD patients.

  15. Iris Recognition for Partially Occluded Images: Methodology and Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Poursaberi A

    2007-01-01

    Accurate iris detection is a crucial part of an iris recognition system. One of the main issues in iris segmentation is coping with the occlusion caused by eyelids and eyelashes. In the literature, various methods have been suggested to solve the occlusion problem. In this paper, two different segmentations of the iris are presented. In the first algorithm, a circle with an appropriate diameter is located around the pupil, and the iris area encircled by this boundary is then used for recognition. In the second method, a circle with a larger diameter is again located around the pupil; this time, however, only the lower part of the encircled iris area is utilized for individual recognition. Wavelet-based texture features are used in the process. Hamming and harmonic mean distance classifiers are exploited as a mixed classifier in the suggested algorithm. It is observed that relying on a smaller but more reliable part of the iris, though reducing the net amount of information, improves the overall performance. Experimental results on the CASIA database show that our method has a promising performance with an accuracy of 99.31%. The sensitivity of the proposed method is also analyzed versus contrast, illumination, and noise, where lower sensitivity to all factors is observed when the lower half of the iris is used for recognition.
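
    The abstract names a Hamming distance classifier over wavelet features; a toy sketch of a fractional Hamming distance restricted to an occlusion-free region via a bit mask (the 8-bit codes and the mask layout are illustrative, not the paper's actual feature lengths):

```python
def hamming_distance(code_a, code_b, mask):
    """Fractional Hamming distance over the unmasked (usable) bits."""
    usable = [i for i, m in enumerate(mask) if m]
    if not usable:
        return 1.0  # no usable bits: treat as a complete mismatch
    disagreements = sum(code_a[i] != code_b[i] for i in usable)
    return disagreements / len(usable)

# Toy 8-bit codes; the mask keeps only the last four bits, mimicking
# the restriction to the occlusion-free lower half of the iris.
a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [0, 1, 0, 1, 0, 0, 1, 1]
mask = [0, 0, 0, 0, 1, 1, 1, 1]
d = hamming_distance(a, b, mask)  # 1 disagreement over 4 usable bits
```

    Masking fewer but more reliable bits mirrors the paper's finding that a smaller, occlusion-free iris region can improve overall accuracy.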

  16. Congruent bodily arousal promotes the constructive recognition of emotional words.

    Science.gov (United States)

    Kever, Anne; Grynberg, Delphine; Vermeulen, Nicolas

    2017-08-01

    Considerable research has shown that bodily states shape affect and cognition. Here, we examined whether transient states of bodily arousal influence the categorization speed of high arousal, low arousal, and neutral words. Participants completed two blocks of a constructive recognition task, once after a cycling session (increased arousal) and once after a relaxation session (reduced arousal). Results revealed overall faster response times for high arousal compared to low arousal words, and for positive compared to negative words. Importantly, low arousal words were categorized significantly faster after the relaxation than after the cycling session, suggesting that a decrease in bodily arousal promotes the recognition of stimuli matching one's current arousal state. These findings highlight the importance of the arousal dimension in emotional processing and suggest the presence of arousal-congruency effects. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Automatic speech recognition (zero crossing method). Automatic recognition of isolated vowels

    International Nuclear Information System (INIS)

    Dupeyrat, Benoit

    1975-01-01

    This note describes a method for recognizing isolated vowels using preprocessing of the vocal signal. The preprocessing extracts the extrema of the vocal signal and the time intervals separating them (zero-crossing distances of the first derivative of the signal). Vowel recognition uses normalized histograms of the values of these intervals. The program computes a distance between the histogram of the sound to be recognized and model histograms built during a learning phase. The results, processed in real time by a minicomputer, are relatively independent of the speaker, provided the fundamental frequency does not vary too much (i.e., speakers of the same sex).
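
    The pipeline the note describes (extrema extraction, interval histograms, histogram distance) can be sketched compactly; the note does not specify its distance measure, so a city-block distance is assumed here, and all names are illustrative:

```python
def extremum_intervals(signal):
    """Sample distances between successive local extrema of the signal,
    i.e. zero crossings of its first derivative."""
    extrema = [i for i in range(1, len(signal) - 1)
               if (signal[i] - signal[i - 1]) * (signal[i + 1] - signal[i]) < 0]
    return [b - a for a, b in zip(extrema, extrema[1:])]

def normalized_histogram(intervals, n_bins):
    """Histogram of interval lengths, normalized to sum to 1."""
    hist = [0.0] * n_bins
    for v in intervals:
        hist[min(v, n_bins - 1)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def histogram_distance(h1, h2):
    """City-block distance between two normalized histograms; recognition
    would pick the vowel model with the smallest distance."""
    return sum(abs(x - y) for x, y in zip(h1, h2))
```

    A recognizer would keep one normalized histogram per vowel from the learning phase and classify a new sound by nearest histogram.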

  18. GENDER DIFFERENCES IN THE RECOGNITION OF FACIAL EXPRESSIONS OF EMOTION

    Directory of Open Access Journals (Sweden)

    CARLOS FELIPE PARDO-VÉLEZ

    2003-07-01

    Gender differences in the recognition of facial expressions of anger, happiness and sadness were researched in students 18-25 years of age. A reaction time procedure was used, and the percentage of correct answers when recognizing was also measured. Though the working hypothesis expected gender differences in facial expression recognition, results suggest that these differences are not significant at a level of 0.05. Statistical analysis shows a greater easiness (at a non-significant level) for women to recognize happiness expressions, and for men to recognize anger expressions. The implications of these data are discussed, along with possible extensions of this investigation in terms of sample size and college major of the participants.

  19. Flexible Piezoelectric Sensor-Based Gait Recognition

    Directory of Open Access Journals (Sweden)

    Youngsu Cha

    2018-02-01

    Most motion recognition research has required tight-fitting suits for precise sensing. However, tight-suit systems have difficulty adapting to real applications, because people normally wear loose clothes. In this paper, we propose a gait recognition system that uses flexible piezoelectric sensors in loose clothing. The system does not directly sense lower-body angles; instead, it detects the transition between standing and walking. Specifically, we use the signals from flexible sensors attached to the knee and hip parts of loose pants. We detect the periodic motion component using the discrete-time Fourier series of the signal during walking. We adapt the gait detection method to a real-time patient motion and posture monitoring system, in which the gait recognition operates well. Finally, we test the gait recognition system with 10 subjects, for which the proposed system successfully detects walking with a success rate of over 93%.
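
    Detecting a periodic motion component from a Fourier series coefficient, as the abstract describes, can be sketched as follows; the gait harmonic index and threshold here are illustrative assumptions, not values from the paper:

```python
import cmath
import math

def periodic_power(window, k):
    """Normalized magnitude of the k-th discrete Fourier series
    coefficient of a signal window: a measure of periodic content."""
    n = len(window)
    coeff = sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(window)) / n
    return abs(coeff)

def is_walking(window, k=2, threshold=0.1):
    """Toy detector: flag walking when the gait-frequency coefficient
    dominates the window (k and threshold are illustrative)."""
    return periodic_power(window, k) > threshold

# A window containing two gait cycles versus a near-constant standing window.
walking = [math.sin(2 * math.pi * 2 * t / 64) for t in range(64)]
standing = [0.01] * 64
```

    In a real system the window would slide over the live sensor stream, and the transition between standing and walking would be read off the threshold crossings.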

  20. Stages of processing in associative recognition: evidence from behavior, EEG, and classification.

    Science.gov (United States)

    Borst, Jelmer P; Schneider, Darryl W; Walsh, Matthew M; Anderson, John R

    2013-12-01

    In this study, we investigated the stages of information processing in associative recognition. We recorded EEG data while participants performed an associative recognition task that involved manipulations of word length, associative fan, and probe type, which were hypothesized to affect the perceptual encoding, retrieval, and decision stages of the recognition task, respectively. Analyses of the behavioral and EEG data, supplemented with classification of the EEG data using machine-learning techniques, provided evidence that generally supported the sequence of stages assumed by a computational model developed in the Adaptive Control of Thought-Rational cognitive architecture. However, the results suggested a more complex relationship between memory retrieval and decision-making than assumed by the model. Implications of the results for modeling associative recognition are discussed. The study illustrates how a classifier approach, in combination with focused manipulations, can be used to investigate the timing of processing stages.

  1. One-against-all weighted dynamic time warping for language-independent and speaker-dependent speech recognition in adverse conditions.

    Directory of Open Access Journals (Sweden)

    Xianglilan Zhang

    Considering personal privacy and the difficulty of obtaining training material for many seldom-used English words and (often non-English) names, language-independent (LI) with lightweight speaker-dependent (SD) automatic speech recognition (ASR) is a promising option to solve the problem. The dynamic time warping (DTW) algorithm is the state-of-the-art algorithm for small-footprint SD ASR applications with limited storage space and small vocabulary, such as voice dialing on mobile devices, menu-driven recognition, and voice control on vehicles and robotics. Even though we have successfully developed two fast and accurate DTW variations for clean speech data, speech recognition for adverse conditions is still a big challenge. In order to improve recognition accuracy in noisy environments and bad recording conditions such as too high or low volume, we introduce a novel one-against-all weighted DTW (OAWDTW). This method defines a one-against-all index (OAI) for each time frame of training data and applies the OAIs to the core DTW process. Given two speech signals, OAWDTW tunes their final alignment score by using the OAI in the DTW process. Our method achieves better accuracies than DTW and merge-weighted DTW (MWDTW), with a 6.97% relative reduction in error rate (RRER) compared with DTW and a 15.91% RRER compared with MWDTW observed in our extensive experiments on a representative SD dataset of four speakers' recordings. To the best of our knowledge, the OAWDTW approach is the first weighted DTW specially designed for speech data in adverse conditions.
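
    The OAWDTW weighting itself is defined in the paper; as background, a minimal sketch of the standard, unweighted DTW alignment cost that it re-scores frame by frame (1-D frames here for brevity, whereas real ASR frames are feature vectors):

```python
def dtw_distance(a, b):
    """Classic dynamic time warping cost between two 1-D sequences,
    the unweighted baseline that OAWDTW builds on."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])  # local frame distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Warping absorbs the stretched middle frame at no extra cost.
score = dtw_distance([1, 2, 3], [1, 2, 2, 3])
```

    A weighted variant such as OAWDTW would multiply each local distance `d` by a per-frame weight (the OAI) before the accumulation step.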

  2. Neural Mechanism for Mirrored Self-face Recognition.

    Science.gov (United States)

    Sugiura, Motoaki; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Akimoto, Yoritaka; Nozawa, Takayuki; Yomogida, Yukihito; Hanawa, Sugiko; Yamamoto, Yuki; Sakuma, Atsushi; Nakagawa, Seishu; Kawashima, Ryuta

    2015-09-01

    Self-face recognition in the mirror is considered to involve multiple processes that integrate 2 perceptual cues: temporal contingency of the visual feedback on one's action (contingency cue) and matching with self-face representation in long-term memory (figurative cue). The aim of this study was to examine the neural bases of these processes by manipulating 2 perceptual cues using a "virtual mirror" system. This system allowed online dynamic presentations of real-time and delayed self- or other facial actions. Perception-level processes were identified as responses to only a single perceptual cue. The effect of the contingency cue was identified in the cuneus. The regions sensitive to the figurative cue were subdivided by the response to a static self-face, which was identified in the right temporal, parietal, and frontal regions, but not in the bilateral occipitoparietal regions. Semantic- or integration-level processes, including amodal self-representation and belief validation, which allow modality-independent self-recognition and the resolution of potential conflicts between perceptual cues, respectively, were identified in distinct regions in the right frontal and insular cortices. The results are supportive of the multicomponent notion of self-recognition and suggest a critical role for contingency detection in the co-emergence of self-recognition and empathy in infants. © The Author 2014. Published by Oxford University Press.

  3. Seismic amplitude measurements suggest foreshocks have different focal mechanisms than aftershocks

    Science.gov (United States)

    Lindh, A.; Fuis, G.; Mantis, C.

    1978-01-01

    The ratio of the amplitudes of P and S waves from the foreshocks and aftershocks to three recent California earthquakes show a characteristic change at the time of the main events. As this ratio is extremely sensitive to small changes in the orientation of the fault plane, a small systematic change in stress or fault configuration in the source region may be inferred. These results suggest an approach to the recognition of foreshocks based on simple measurements of the amplitudes of seismic waves. Copyright © 1978 AAAS.

  4. Placebo-mediated, Naloxone-sensitive suggestibility of short-term memory performance.

    Science.gov (United States)

    Stern, Jair; Candia, Victor; Porchet, Roseline I; Krummenacher, Peter; Folkers, Gerd; Schedlowski, Manfred; Ettlin, Dominik A; Schönbächler, Georg

    2011-03-01

    Physiological studies of placebo-mediated suggestion have been recently performed beyond their traditional clinical context of pain and analgesia. Various neurotransmitter systems and immunological modulators have been used in successful placebo suggestions, including Dopamine, Cholecystokinin and, most extensively, opioids. We adhered to an established conceptual framework of placebo research and used the μ-opioid-antagonist Naloxone to test the applicability of this framework within a cognitive domain (e.g. memory) in healthy volunteers. Healthy men (n=62, age 29, SD=9) were required to perform a task-battery, including standardized and custom-designed memory tasks, to test short-term recall and delayed recognition. Tasks were performed twice, before and after intravenous injection of either NaCl (0.9%) or Naloxone (both 0.15 mg/kg), in a double-blind setting. While one group was given neutral information (S-), the other was told that it might receive a drug with suspected memory-boosting properties (S+). Objective and subjective indexes of memory performance and salivary cortisol (as a stress marker) were recorded during both runs and differences between groups were assessed. Short-term memory recall, but not delayed recognition, was objectively increased after placebo-mediated suggestion in the NaCl-group. Naloxone specifically blocked the suggestion effect without interfering with memory performance. These results were not affected when changes in salivary cortisol levels were considered. No reaction time changes, recorded to uncover unspecific attentional impairment, were seen. Placebo-mediated suggestion produced a training-independent, objective and Naloxone-sensitive increase in memory performance. These results indicate an opioid-mediated placebo effect within a circumscribed cognitive domain in healthy volunteers. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Indian Issues: More Consistent and Timely Tribal Recognition Process Needed

    National Research Council Canada - National Science Library

    Hill, Barry

    2002-01-01

    ...) regulatory process for federally recognizing Indian tribes. Federal recognition of an Indian tribe can have a tremendous effect on the tribe, surrounding communities, and the nation as a whole...

  6. Acute effects of triazolam on false recognition.

    Science.gov (United States)

    Mintzer, M Z; Griffiths, R R

    2000-12-01

    Neuropsychological, neuroimaging, and electrophysiological techniques have been applied to the study of false recognition; however, psychopharmacological techniques have not been applied. Benzodiazepine sedative/anxiolytic drugs produce memory deficits similar to those observed in organic amnesia and may be useful tools for studying normal and abnormal memory mechanisms. The present double-blind, placebo-controlled repeated measures study examined the acute effects of orally administered triazolam (Halcion; 0.125 and 0.25 mg/70 kg), a benzodiazepine hypnotic, on performance in the Deese (1959)/Roediger-McDermott (1995) false recognition paradigm in 24 healthy volunteers. Paralleling previous demonstrations in amnesic patients, triazolam produced significant dose-related reductions in false recognition rates to nonstudied words associatively related to studied words, suggesting that false recognition relies on normal memory mechanisms impaired in benzodiazepine-induced amnesia. The results also suggested that relative to placebo, triazolam reduced participants' reliance on memory for item-specific versus list-common semantic information and reduced participants' use of remember versus know responses.

  7. Assessment of Self-Recognition in Young Children with Handicaps.

    Science.gov (United States)

    Kelley, Michael F.; And Others

    1988-01-01

    Thirty young children with handicaps were assessed on five self-recognition mirror tasks. The set of tasks formed a reproducible scale, indicating that these tasks are an appropriate measure of self-recognition in this population. Data analysis suggested that stage of self-recognition is positively and significantly related to cognitive…

  8. Emotion recognition in girls with conduct problems.

    Science.gov (United States)

    Schwenck, Christina; Gensthaler, Angelika; Romanos, Marcel; Freitag, Christine M; Schneider, Wolfgang; Taurines, Regina

    2014-01-01

    A deficit in emotion recognition has been suggested to underlie conduct problems. Although several studies have been conducted on this topic so far, most concentrated on male participants. The aim of the current study was to compare recognition of morphed emotional faces in girls with conduct problems (CP) with elevated or low callous-unemotional (CU+ vs. CU-) traits and a matched healthy developing control group (CG). Sixteen girls with CP-CU+, 16 girls with CP-CU- and 32 controls (mean age: 13.23 years, SD=2.33 years) were included. Video clips with morphed faces were presented in two runs to assess emotion recognition. Multivariate analysis of variance with the factors group and run was performed. Girls with CP-CU- needed more time than the CG to encode sad, fearful, and happy faces and they correctly identified sadness less often. Girls with CP-CU+ outperformed the other groups in the identification of fear. Learning effects throughout runs were the same for all groups except that girls with CP-CU- correctly identified fear less often in the second run compared to the first run. Results need to be replicated with comparable tasks, which might result in subgroup-specific therapeutic recommendations.

  9. Association with the origin recognition complex suggests a novel role for histone acetyltransferase Hat1p/Hat2p

    Directory of Open Access Journals (Sweden)

    Greenblatt Jack F

    2007-09-01

    Background: Histone modifications have been implicated in the regulation of transcription and, more recently, in DNA replication and repair. In yeast, a major conserved histone acetyltransferase, Hat1p, preferentially acetylates lysine residues 5 and 12 on histone H4. Results: Here, we report that a nuclear sub-complex consisting of Hat1p and its partner Hat2p interacts physically and functionally with the origin recognition complex (ORC). While mutational inactivation of the histone acetyltransferase (HAT) gene HAT1 alone does not compromise origin firing or initiation of DNA replication, a deletion in HAT1 (or HAT2) exacerbates the growth defects of conditional orc-ts mutants. Thus, the ORC-associated Hat1p-dependent histone acetyltransferase activity suggests a novel linkage between histone modification and DNA replication. Additional genetic and biochemical evidence points to the existence of partly overlapping histone H3 acetyltransferase activities in addition to Hat1p/Hat2p for proper DNA replication efficiency. Furthermore, we demonstrated a dynamic association of Hat1p with chromatin during S-phase that suggests a role for this enzyme at the replication fork. Conclusion: We have found an intriguing new association of the Hat1p-dependent histone acetyltransferase in addition to its previously known role in nuclear chromatin assembly (Hat1p/Hat2p-Hif1p). The participation of a distinct Hat1p/Hat2p sub-complex suggests a linkage of histone H4 modification with ORC-dependent DNA replication.

  10. Star pattern recognition algorithm aided by inertial information

    Science.gov (United States)

    Liu, Bao; Wang, Ke-dong; Zhang, Chao

    2011-08-01

    Star pattern recognition is one of the key problems of celestial navigation. Traditional approaches, such as the triangle algorithm and the star angular distance algorithm, are all-sky matching methods whose recognition speed is slow and whose success rate is limited. As a result, the real-time performance and reliability of a CNS (Celestial Navigation System) are reduced to some extent, especially for a maneuvering spacecraft. However, if the direction of the camera optical axis can be estimated by another navigation system such as an INS (Inertial Navigation System), star pattern recognition can be carried out in the vicinity of the estimated direction of the optical axis. The benefits of the INS-aided star pattern recognition algorithm include at least improved matching speed and an improved success rate. In this paper, the direction of the camera optical axis, the local matching sky, and the projection of stars on the image plane are first estimated with the aid of the INS. Then, a local star catalog for star pattern recognition is established dynamically in real time. The star images extracted in the camera plane are matched against this local sky. Compared to traditional all-sky star pattern recognition algorithms, the memory required to store the star catalog is reduced significantly. Finally, the INS-aided star pattern recognition algorithm is validated by simulations. The simulation results show that the algorithm's computation time is reduced sharply and its matching success rate is improved greatly.
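The local-matching idea can be illustrated with a minimal sketch (not the authors' implementation; the catalog layout, the field-of-view half-angle, and the nearest-neighbour matching step are simplifying assumptions): given an INS attitude estimate, the catalog is pruned to the estimated field of view and matching is restricted to that local sky.

```python
import math

def angle(u, v):
    # Angle (radians) between two unit vectors, clamped for numerical safety.
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.acos(dot)

def local_catalog(catalog, boresight, half_fov):
    # Keep only the catalog stars inside the estimated field of view,
    # instead of searching the whole sky.
    return [s for s in catalog if angle(s["dir"], boresight) <= half_fov]

def match_stars(observed, local, tol):
    # With a good attitude estimate, each observed direction (rotated into
    # the inertial frame) is matched to its nearest local-catalog neighbour
    # within a tolerance.
    matches = {}
    for i, obs in enumerate(observed):
        best = min(local, key=lambda s: angle(s["dir"], obs))
        if angle(best["dir"], obs) <= tol:
            matches[i] = best["id"]
    return matches
```

Pruning the catalog to the local sky is what cuts both the search time and the catalog memory footprint relative to all-sky matching.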

  11. Real-time object recognition in multidimensional images based on joined extended structural tensor and higher-order tensor decomposition methods

    Science.gov (United States)

    Cyganek, Boguslaw; Smolka, Bogdan

    2015-02-01

    In this paper a system for real-time recognition of objects in multidimensional video signals is proposed. Object recognition is done by projecting patterns into tensor subspaces obtained from the factorization of the signal tensors representing the input signal. However, instead of taking only the intensity signal, the novelty of this paper is to first build the Extended Structural Tensor representation from the intensity signal, which conveys information on signal intensities as well as on higher-order statistics of the input signals. In this way the higher-order input pattern tensors are built from the training samples. Then, the tensor subspaces are constructed from the Higher-Order Singular Value Decomposition of the prototype pattern tensors. Finally, recognition relies on measuring the distance of a test pattern projected into the tensor subspaces obtained from the training tensors. Due to the high dimensionality of the input data, tensor-based methods require substantial memory and computational resources. However, recent advances in multi-core microprocessors and graphics cards allow real-time operation of these multidimensional methods, as is shown and analyzed in this paper on real examples of object detection in digital images.
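The projection-distance classification step can be illustrated with a much-simplified vector analogue (hypothetical data; the paper's actual subspaces come from the HOSVD of Extended Structural Tensors, which is beyond a short sketch): each class is represented by an orthonormal basis, and a test pattern is assigned to the class whose subspace reconstructs it with the smallest residual.

```python
import math

def residual(x, basis):
    # Distance from x to the subspace spanned by orthonormal basis vectors:
    # project onto the basis, then measure what is left over.
    coeffs = [sum(a * b for a, b in zip(x, u)) for u in basis]
    proj = [sum(c * u[i] for c, u in zip(coeffs, basis)) for i in range(len(x))]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, proj)))

def classify(x, subspaces):
    # Pick the class whose subspace best reconstructs the test pattern.
    return min(subspaces, key=lambda name: residual(x, subspaces[name]))
```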

  12. Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1.

    Science.gov (United States)

    Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M

    2017-05-01

    This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. An audiovisual emotion recognition system

    Science.gov (United States)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many biological signals; speech and facial expression are two of them. Both are regarded as emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time operation, which is guaranteed by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier, and rough-set-based feature selection is a good method for dimension reduction. Accordingly, 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected, owing to synchronization, when speech and video are fused together. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also suggest that multimodule fused recognition will become the trend of emotion recognition in the future.

  14. Is having similar eye movement patterns during face learning and recognition beneficial for recognition performance? Evidence from hidden Markov modeling.

    Science.gov (United States)

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2017-12-01

    The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
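The classification idea behind the HMM-based approach can be sketched as follows (a toy illustration with made-up parameters, not the authors' model, which learns person-specific HMMs over fixation locations and their temporal order): a scan path, coded here as a sequence of region-of-interest indices, is assigned to whichever representative HMM gives it the higher likelihood.

```python
import math

def loglik(obs, pi, A, B):
    # Forward algorithm for a discrete HMM: log P(obs | model).
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [B[t][o] * sum(alpha[s] * A[s][t] for s in range(len(pi)))
                 for t in range(len(pi))]
    return math.log(sum(alpha))

# Toy single-state models over ROIs coded 0 = face centre, 1/2 = the two eyes.
holistic = ([1.0], [[1.0]], [[0.8, 0.1, 0.1]])   # mostly fixates the centre
analytic = ([1.0], [[1.0]], [[0.4, 0.3, 0.3]])   # also fixates the eyes

def classify_scanpath(obs):
    # Assign the scan path to the higher-likelihood representative pattern.
    return "holistic" if loglik(obs, *holistic) > loglik(obs, *analytic) else "analytic"
```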

  15. Semantic Segmentation of Real-time Sensor Data Stream for Complex Activity Recognition

    OpenAIRE

    Triboan, Darpan; Chen, Liming; Chen, Feng; Wang, Zumin

    2016-01-01

    Department of Information Engineering, Dalian University, China

    Data segmentation plays a critical role in performing human activity recognition (HAR) in ambient assisted living (AAL) systems. It is particularly important for complex activity recognition when the events occur in short bursts with attributes of multiple sub-tasks. Althou...

  16. Relaxing decision criteria does not improve recognition memory in amnesic patients.

    Science.gov (United States)

    Reber, P J; Squire, L R

    1999-05-01

    An important question about the organization of memory is whether information available in non-declarative memory can contribute to performance on tasks of declarative memory. Dorfman, Kihlstrom, Cork, and Misiaszek (1995) described a circumstance in which the phenomenon of priming might benefit recognition memory performance. They reported that patients receiving electroconvulsive therapy improved their recognition performance when they were encouraged to relax their criteria for endorsing test items as familiar. It was suggested that priming improved recognition by making information available about the familiarity of test items. In three experiments, we sought unsuccessfully to reproduce this phenomenon in amnesic patients. In Experiment 3, we reproduced the methods and procedure used by Dorfman et al. but still found no evidence for improved recognition memory following the manipulation of decision criteria. Although negative findings have their own limitations, our findings suggest that the phenomenon reported by Dorfman et al. does not generalize well. Our results agree with several recent findings that suggest that priming is independent of recognition memory and does not contribute to recognition memory scores.
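The criterion manipulation at issue can be made concrete with equal-variance signal detection theory (a standard textbook formulation offered for illustration, not the analysis used in the paper): relaxing the decision criterion raises both the hit rate and the false-alarm rate while leaving sensitivity (d') unchanged, which is why a criterion shift alone does not amount to improved recognition.

```python
import math

def phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rates(dprime, c):
    # Hit and false-alarm rates with criterion c measured from the midpoint
    # between the lure and target strength distributions.
    return phi(dprime / 2 - c), phi(-dprime / 2 - c)
```

For d' = 1, moving c from 0 to -0.5 (a laxer criterion) raises hits and false alarms together.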

  17. [Clinical analysis of real-time iris recognition guided LASIK with femtosecond laser flap creation for myopic astigmatism].

    Science.gov (United States)

    Jie, Li-ming; Wang, Qian; Zheng, Lin

    2013-08-01

    To assess the safety, efficacy, stability, and changes in cylindrical degree and axis after real-time iris recognition guided LASIK with femtosecond laser flap creation for the correction of myopic astigmatism. Retrospective case series. This observational case study comprised 136 patients (249 eyes) with myopic astigmatism in a 6-month trial. Patients were divided into three groups according to the pre-operative cylindrical degree: Group 1, -0.75 to -1.25 D, 106 eyes; Group 2, -1.50 to -2.25 D, 89 eyes; and Group 3, -2.50 to -5.00 D, 54 eyes. They were also grouped by pre-operative astigmatism axis: Group A, with-the-rule astigmatism (WTRA), 156 eyes; Group B, against-the-rule astigmatism (ATRA), 64 eyes; Group C, oblique axis astigmatism, 29 eyes. After the femtosecond laser flap was created, real-time iris-recognition-guided excimer ablation was performed. Uncorrected visual acuity, best-corrected visual acuity, and the degree and axis of astigmatism were analyzed and compared at 1, 3, and 6 months postoperatively. Static iris recognition detected an eye cyclotorsional misalignment of 2.37° ± 2.16°; dynamic iris recognition detected an intraoperative cyclotorsional misalignment range of 0-4.3°. Six months after operation, uncorrected visual acuity was 0.5 or better in 100% of cases. No eye lost ≥ 1 line of best spectacle-corrected visual acuity (BSCVA). Six months after operation, the uncorrected vision of 227 eyes surpassed the BSCVA, and 87 eyes gained 1 line of BSCVA. The degree of astigmatism decreased from (-1.72 ± 0.77) D pre-operation to (-0.29 ± 0.25) D post-operation. Six months after operation, WTRA decreased from 157 eyes (pre-operation) to 43 eyes (post-operation), ATRA decreased from 63 eyes (pre-operation) to 28 eyes (post-operation), oblique astigmatism increased from 29 eyes to 34 eyes, and 144 eyes became non-astigmatic. Real-time iris recognition guided LASIK with femtosecond laser flap creation can compensate for deviation from eye cyclotorsion, decrease

  18. Voice congruency facilitates word recognition.

    Science.gov (United States)

    Campeanu, Sandra; Craik, Fergus I M; Alain, Claude

    2013-01-01

    Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.

  19. Voice congruency facilitates word recognition.

    Directory of Open Access Journals (Sweden)

    Sandra Campeanu

    Full Text Available Behavioral studies of spoken word memory have shown that context congruency facilitates both word and source recognition, though the level at which context exerts its influence remains equivocal. We measured event-related potentials (ERPs) while participants performed both types of recognition task with words spoken in four voices. Two voice parameters (i.e., gender and accent) varied between speakers, with the possibility that none, one or two of these parameters was congruent between study and test. Results indicated that reinstating the study voice at test facilitated both word and source recognition, compared to similar or no context congruency at test. Behavioral effects were paralleled by two ERP modulations. First, in the word recognition test, the left parietal old/new effect showed a positive deflection reflective of context congruency between study and test words. Namely, the same speaker condition provided the most positive deflection of all correctly identified old words. In the source recognition test, a right frontal positivity was found for the same speaker condition compared to the different speaker conditions, regardless of response success. Taken together, the results of this study suggest that the benefit of context congruency is reflected behaviorally and in ERP modulations traditionally associated with recognition memory.

  20. Reversing the picture superiority effect: a speed-accuracy trade-off study of recognition memory.

    Science.gov (United States)

    Boldini, Angela; Russo, Riccardo; Punia, Sahiba; Avons, S E

    2007-01-01

    Speed-accuracy trade-off methods have been used to contrast single- and dual-process accounts of recognition memory. With these procedures, subjects are presented with individual test items and required to make recognition decisions under various time constraints. In three experiments, we presented words and pictures to be intentionally learned; test stimuli were always visually presented words. At test, we manipulated the interval between the presentation of each test stimulus and that of a response signal, thus controlling the amount of time available to retrieve target information. The standard picture superiority effect was significant in long response deadline conditions (i.e., ≥ 2,000 msec). Conversely, a significant reverse picture superiority effect emerged at short response-signal deadlines (< 200 msec). The results are congruent with views suggesting that both fast familiarity and slower recollection processes contribute to recognition memory. Alternative accounts are also discussed.

  1. 8 CFR 1292.2 - Organizations qualified for recognition; requests for recognition; withdrawal of recognition...

    Science.gov (United States)

    2010-01-01

    ...; requests for recognition; withdrawal of recognition; accreditation of representatives; roster. 1292.2...; requests for recognition; withdrawal of recognition; accreditation of representatives; roster. (a) Qualifications of organizations. A non-profit religious, charitable, social service, or similar organization...

  2. Reaction Time of Facial Affect Recognition in Asperger's Disorder for Cartoon and Real, Static and Moving Faces

    Science.gov (United States)

    Miyahara, Motohide; Bray, Anne; Tsujii, Masatsugu; Fujita, Chikako; Sugiyama, Toshiro

    2007-01-01

    This study used a choice reaction-time paradigm to test the perceived impairment of facial affect recognition in Asperger's disorder. Twenty teenagers with Asperger's disorder and 20 controls were compared with respect to the latency and accuracy of response to happy or disgusted facial expressions, presented in cartoon or real images and in…

  3. Recognition of disturbances with specified morphology in time series. Part 1: Spikes on magnetograms of the worldwide INTERMAGNET network

    Science.gov (United States)

    Bogoutdinov, Sh. R.; Gvishiani, A. D.; Agayan, S. M.; Solovyev, A. A.; Kin, E.

    2010-11-01

    The International Real-time Magnetic Observatory Network (INTERMAGNET) is the world's biggest international network of ground-based observatories, providing geomagnetic data almost in real time (within 72 hours of collection) [Kerridge, 2001]. The observation data are rapidly transferred by the participating observatories to regional Geomagnetic Information Nodes (GINs), which carry out a global exchange of data and process the results. Observation and study of the Earth's main (core) magnetic field is one of the key problems of geophysics. The INTERMAGNET system is the basis for monitoring the state of the Earth's magnetic field; the information it provides is therefore required to be very reliable. Despite the rigid high-quality standard of the recording devices, they are subject to external effects that degrade the quality of the records. Objective, formalized recognition, and subsequent remedy, of the anomalies (artifacts) that occur on the records is therefore an important task. Expanding on the ideas of Agayan [Agayan et al., 2005] and Gvishiani [Gvishiani et al., 2008a; 2008b], this paper suggests a new algorithm for the automatic recognition of anomalies with specified morphology, capable of identifying both physically and anthropogenically derived spikes on magnetograms. The algorithm is built on fuzzy logic and, as such, is highly adaptive and universal. The developed algorithmic system formalizes the work of the expert-interpreter in terms of artificial intelligence, ensuring identical processing of large data arrays that is almost unattainable manually. Besides the algorithm, the paper also reports on the application of the developed algorithmic system to identifying spikes at the INTERMAGNET observatories. The main achievement of the work is the creation of an algorithm permitting the almost unmanned extraction of spike-free (definitive) magnetograms from preliminary records. This automated
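The spike-recognition task can be illustrated with a deliberately simple detector (a median-deviation sketch with a soft, fuzzy-style membership score; the published algorithm is a far more elaborate fuzzy-logic construction, and the window and scale parameters here are assumptions): each sample is scored by how far it departs from the median of its neighbours, and scores near 1 indicate spike-like behaviour.

```python
def spike_scores(series, window=2, scale=5.0):
    # For each sample, compare it to the median of its surrounding window
    # (excluding the sample itself) and map the deviation to a soft score
    # in [0, 1): dev / (dev + scale).
    n = len(series)
    scores = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        neigh = sorted(series[lo:i] + series[i + 1:hi])
        med = neigh[len(neigh) // 2]
        dev = abs(series[i] - med)
        scores.append(dev / (dev + scale))
    return scores

def find_spikes(series, threshold=0.5):
    # Indices whose membership score exceeds the threshold.
    return [i for i, s in enumerate(spike_scores(series)) if s > threshold]
```

The soft score plays the role of a fuzzy membership: rather than a hard yes/no, each sample gets a degree of "spikiness" that a downstream rule can threshold or combine with other criteria.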

  4. Threshold models of recognition and the recognition heuristic

    Directory of Open Access Journals (Sweden)

    Edgar Erdfelder

    2011-02-01

    Full Text Available According to the recognition heuristic (RH) theory, decisions follow the recognition principle: Given a high validity of the recognition cue, people should prefer recognized choice options compared to unrecognized ones. Assuming that the memory strength of choice options is strongly correlated with both the choice criterion and recognition judgments, the RH is a reasonable strategy that approximates optimal decisions with a minimum of cognitive effort (Davis-Stober, Dana, and Budescu, 2010). However, theories of recognition memory are not generally compatible with this assumption. For example, some threshold models of recognition presume that recognition judgments can arise from two types of cognitive states: (1) certainty states in which judgments are almost perfectly correlated with memory strength and (2) uncertainty states in which recognition judgments reflect guessing rather than differences in memory strength. We report an experiment designed to test the prediction that the RH applies to certainty states only. Our results show that memory states rather than recognition judgments affect use of recognition information in binary decisions.
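The two-state idea can be written down directly (the standard two-high-threshold measurement model, given as a generic illustration rather than the authors' specific analysis; r, d, and g are the usual detection and guessing parameters):

```python
def two_high_threshold(r, d, g):
    # r: probability a target is detected (certainty state, respond "old")
    # d: probability a lure is detected as new (certainty state, respond "new")
    # g: probability of guessing "old" in the uncertaintyty-free residual case
    #    (uncertainty state), where responses carry no memory information
    hit = r + (1 - r) * g
    false_alarm = (1 - d) * g
    return hit, false_alarm
```

With r = d = 0.5 and g = 0.4 the model predicts a hit rate of 0.7 and a false-alarm rate of 0.2; because uncertainty-state judgments are pure guesses, they carry no memory-strength information, which is the property the reported experiment exploits.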

  5. Implicit recognition based on lateralized perceptual fluency.

    Science.gov (United States)

    Vargas, Iliana M; Voss, Joel L; Paller, Ken A

    2012-02-06

    In some circumstances, accurate recognition of repeated images in an explicit memory test is driven by implicit memory. We propose that this "implicit recognition" results from perceptual fluency that influences responding without awareness of memory retrieval. Here we examined whether recognition would vary if images appeared in the same or different visual hemifield during learning and testing. Kaleidoscope images were briefly presented left or right of fixation during divided-attention encoding. Presentation in the same visual hemifield at test produced higher recognition accuracy than presentation in the opposite visual hemifield, but only for guess responses. These correct guesses likely reflect a contribution from implicit recognition, given that when the stimulated visual hemifield was the same at study and test, recognition accuracy was higher for guess responses than for responses with any level of confidence. The dramatic difference in guessing accuracy as a function of lateralized perceptual overlap between study and test suggests that implicit recognition arises from memory storage in visual cortical networks that mediate repetition-induced fluency increments.

  6. Effects of varying presentation time on long-term recognition memory for scenes: Verbatim and gist representations.

    Science.gov (United States)

    Ahmad, Fahad N; Moscovitch, Morris; Hockley, William E

    2017-04-01

    Konkle, Brady, Alvarez and Oliva (Psychological Science, 21, 1551-1556, 2010) showed that participants have an exceptional long-term memory (LTM) for photographs of scenes. We examined to what extent participants' exceptional LTM for scenes is determined by presentation time during encoding. In addition, at retrieval, we varied the nature of the lures in a forced-choice recognition task so that they resembled the target in gist (i.e., global or categorical) information, but were distinct in verbatim information (e.g., an "old" beach scene and a similar "new" beach scene; exemplar condition) or vice versa (e.g., a beach scene and a new scene from a novel category; novel condition). In Experiment 1, half of the list of scenes was presented for 1 s, whereas the other half was presented for 4 s. We found lower performance for shorter study presentation time in the exemplar test condition and similar performance for both study presentation times in the novel test condition. In Experiment 2, participants showed similar performance in an exemplar test for which the lure was of a different category but a category that was used at study. In Experiment 3, when presentation time was lowered to 500 ms, recognition accuracy was reduced in both novel and exemplar test conditions. A less detailed memorial representation of the studied scene containing more gist (i.e., meaning) than verbatim (i.e., surface or perceptual details) information is retrieved from LTM after a short compared to a long study presentation time. We conclude that our findings support fuzzy-trace theory.

  7. Fine-grained recognition of plants from images.

    Science.gov (United States)

    Šulc, Milan; Matas, Jiří

    2017-01-01

    Fine-grained recognition of plants from images is a challenging computer vision task, due to the diverse appearance and complex structure of plants, high intra-class variability, and small inter-class differences. We review the state-of-the-art and discuss plant recognition tasks, from identification of plants from specific plant organs to general plant recognition "in the wild". We propose texture analysis and deep learning methods for different plant recognition tasks and evaluate them against the state-of-the-art. Texture analysis is applied only to images with unambiguous segmentation (bark and leaf recognition), whereas CNNs are applied only when sufficiently large datasets are available. The results provide insight into the complexity of different plant recognition tasks. The proposed methods outperform the state-of-the-art in leaf and bark classification and achieve very competitive results in plant recognition "in the wild". The results suggest that recognition of segmented leaves is practically a solved problem when high volumes of training data are available. The generality and higher capacity of state-of-the-art CNNs make them suitable for plant recognition "in the wild", where the views on plant organs or plants vary significantly and the difficulty is increased by occlusions and background clutter.
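The texture-analysis side can be illustrated with a basic local binary pattern descriptor (one common texture feature, shown here only as an example; the paper's actual descriptors and parameters are not reproduced): each interior pixel is encoded by thresholding its 8 neighbours against it, and the histogram of codes serves as a texture feature for tasks such as bark or leaf classification.

```python
def lbp_codes(img):
    # 8-neighbour local binary pattern code for each interior pixel of a
    # 2-D grayscale image given as a list of lists.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offsets):
                if img[y + dy][x + dx] >= centre:
                    code |= 1 << bit
            codes.append(code)
    return codes

def lbp_histogram(img):
    # Normalised 256-bin histogram of LBP codes: the texture descriptor
    # that a classifier would consume.
    codes = lbp_codes(img)
    hist = [0.0] * 256
    for c in codes:
        hist[c] += 1.0 / len(codes)
    return hist
```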

  8. Recognition memory: a review of the critical findings and an integrated theory for relating them.

    Science.gov (United States)

    Malmberg, Kenneth J

    2008-12-01

    The development of formal models has aided theoretical progress in recognition memory research. Here, I review the findings that are critical for testing them, including behavioral and brain imaging results of single-item recognition, plurality discrimination, and associative recognition experiments under a variety of testing conditions. I also review the major approaches to measurement and process modeling of recognition. The review indicates that several extant dual-process measures of recollection are unreliable, and thus they are unsuitable as a basis for forming strong conclusions. At the process level, however, the retrieval dynamics of recognition memory and the effect of strengthening operations suggest that a recall-to-reject process plays an important role in plurality discrimination and associative recognition, but not necessarily in single-item recognition. A new theoretical framework proposes that the contribution of recollection to recognition depends on whether the retrieval of episodic details improves accuracy, and it organizes the models around the construct of efficiency. Accordingly, subjects adopt strategies that they believe will produce a desired level of accuracy in the shortest amount of time. Several models derived from this framework are shown to account for the accuracy, latency, and confidence with which the various recognition tasks are performed.

  9. Contemporary deep recurrent learning for recognition

    Science.gov (United States)

    Iftekharuddin, K. M.; Alam, M.; Vidyaratne, L.

    2017-05-01

    Large-scale feed-forward neural networks have seen intense application in many computer vision problems. However, these networks can become hefty and computationally intensive as the complexity of the task increases. Our work, for the first time in the literature, introduces a Cellular Simultaneous Recurrent Network (CSRN) based hierarchical neural network for object detection. The CSRN has been shown to be more effective at solving complex tasks such as maze traversal and image processing than generic feed-forward networks. While deep neural networks (DNNs) have exhibited excellent performance in object detection and recognition, such hierarchical structure has largely been absent from neural networks with recurrency. Further, our work introduces deep hierarchy in the SRN for object recognition. The simultaneous recurrency results in an unfolding of the SRN through time, potentially enabling the design of an arbitrarily deep network. This paper presents experiments on face, facial expression, and character recognition tasks using the novel deep recurrent model and compares its recognition performance with that of a generic deep feed-forward model. Finally, we demonstrate the flexibility of incorporating our proposed deep SRN based recognition framework into a humanoid robotic platform called NAO.

  10. Improved RGB-D-T based Face Recognition

    DEFF Research Database (Denmark)

    Oliu Simon, Marc; Corneanu, Ciprian; Nasrollahi, Kamal

    2016-01-01

    Reliable facial recognition systems are of crucial importance in various applications from entertainment to security. Thanks to the deep-learning concepts introduced in the field, a significant improvement in the performance of unimodal facial recognition systems has been observed in recent years. At the same time, multimodal facial recognition is a promising approach. This paper combines the latest successes in both directions by applying deep learning Convolutional Neural Networks (CNN) to the multimodal RGB-D-T based facial recognition problem, outperforming previously published results.

  11. Get rich quick: the signal to respond procedure reveals the time course of semantic richness effects during visual word recognition.

    Science.gov (United States)

    Hargreaves, Ian S; Pexman, Penny M

    2014-05-01

    According to several current frameworks, semantic processing involves an early influence of language-based information followed by later influences of object-based information (e.g., situated simulations; Santos, Chaigneau, Simmons, & Barsalou, 2011). In the present study we examined whether these predictions extend to the influence of semantic variables in visual word recognition. We investigated the time course of semantic richness effects in visual word recognition using a signal-to-respond (STR) paradigm fitted to a lexical decision (LDT) and a semantic categorization (SCT) task. We used linear mixed effects to examine the relative contributions of language-based (number of senses, ARC) and object-based (imageability, number of features, body-object interaction ratings) descriptions of semantic richness at four STR durations (75, 100, 200, and 400ms). Results showed an early influence of number of senses and ARC in the SCT. In both LDT and SCT, object-based effects were the last to influence participants' decision latencies. We interpret our results within a framework in which semantic processes are available to influence word recognition as a function of their availability over time, and of their relevance to task-specific demands. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Kin-informative recognition cues in ants

    DEFF Research Database (Denmark)

    Nehring, Volker; Evison, Sophie E F; Santorelli, Lorenzo A

    2011-01-01

    Nepotistic behaviour is thought to be rare in one of the classic examples of cooperation, social insect colonies, because the colony-level costs of individual selfishness select against cues that would allow workers to recognize their closest relatives. In accord with this, previous studies of wasps and ants have found little or no kin information in recognition cues. Here, we test the hypothesis that social insects do not have kin-informative recognition cues by investigating the recognition cues and relatedness of workers from four colonies of the ant Acromyrmex octospinosus. Contrary to the theoretical prediction, we show that the cuticular hydrocarbons of ant workers in all four colonies are informative enough to allow full-sisters to be distinguished from half-sisters with a high accuracy. These results contradict the hypothesis of non-heritable recognition cues and suggest that there is more potential…

  13. Effects of modality and repetition in a continuous recognition memory task: Repetition has no effect on auditory recognition memory.

    Science.gov (United States)

    Amir Kassim, Azlina; Rehman, Rehan; Price, Jessica M

    2018-04-01

    Previous research has shown that auditory recognition memory is poorer compared to visual and cross-modal (visual and auditory) recognition memory. The effect of repetition on memory has been robust in showing improved performance. It is not clear, however, how auditory recognition memory compares to visual and cross-modal recognition memory following repetition. Participants performed a recognition memory task, making old/new discriminations to new stimuli, stimuli repeated for the first time after 4-7 intervening items (R1), or repeated for the second time after 36-39 intervening items (R2). Depending on the condition, participants were either exposed to visual stimuli (2D line drawings), auditory stimuli (spoken words), or cross-modal stimuli (pairs of images and associated spoken words). Results showed that unlike participants in the visual and cross-modal conditions, participants in the auditory recognition did not show improvements in performance on R2 trials compared to R1 trials. These findings have implications for pedagogical techniques in education, as well as for interventions and exercises aimed at boosting memory performance. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Weighted Feature Gaussian Kernel SVM for Emotion Recognition.

    Science.gov (United States)

    Wei, Wei; Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expression is a challenging research topic that has attracted great attention in the past few years. This paper presents a novel method that utilizes subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight for each. Then, we obtain a weighted-feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function performs well in terms of emotion-recognition accuracy. Experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods.
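
    The weighting idea described above can be sketched as follows: each facial subregion contributes to the kernel's squared distance in proportion to a weight derived from that subregion's standalone recognition rate. This is a minimal numpy sketch; the subregion rates, region size, and σ are invented for illustration, and the SVM training step is omitted.

```python
import numpy as np

def weighted_gaussian_kernel(X, Y, weights, region_size, sigma=1.0):
    """Gaussian (RBF) kernel whose squared distance is a weighted sum over
    facial subregions; weights come from each subregion's standalone
    recognition rate (hypothetical values below)."""
    n_regions = len(weights)
    sq = np.zeros((X.shape[0], Y.shape[0]))
    for r in range(n_regions):
        sl = slice(r * region_size, (r + 1) * region_size)
        diff = X[:, None, sl] - Y[None, :, sl]        # pairwise differences
        sq += weights[r] * np.sum(diff ** 2, axis=-1) # weighted squared distance
    return np.exp(-sq / (2 * sigma ** 2))

# toy example: 4 subregions of 8 features each, weights normalised to sum to 1
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 32))
rates = np.array([0.70, 0.85, 0.60, 0.90])  # hypothetical subregion recognition rates
w = rates / rates.sum()
K = weighted_gaussian_kernel(X, X, w, region_size=8)
```

    The resulting Gram matrix is symmetric with ones on the diagonal, so it can be fed to any kernel-based classifier that accepts a precomputed kernel.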

  15. Dynamic Programming Algorithms in Speech Recognition

    Directory of Open Access Journals (Sweden)

    Titus Felix FURTUNA

    2008-01-01

    Full Text Available In a word-based speech recognition system, recognition requires comparing the input signal of a word against the various words of the dictionary. The problem can be solved efficiently by a dynamic comparison algorithm whose goal is to put the temporal scales of the two words into optimal correspondence. An algorithm of this type is Dynamic Time Warping. This paper presents two implementation alternatives of the algorithm, designed for recognition of isolated words.
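
    The dynamic comparison the abstract describes can be illustrated with the standard DTW recurrence, in which each cell holds the minimal cumulative distance over all monotonic alignments of the two time scales. This is a generic textbook sketch, not the paper's specific implementation.

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic Time Warping: minimal cumulative distance over all
    monotonic alignments of two sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            # extend the cheapest of the three allowed predecessor paths
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

    Warping lets a stretched utterance still match its template: `dtw_distance([1, 2, 3], [1, 2, 2, 3])` is 0, because the repeated `2` is absorbed by the alignment.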

  16. A recognition method research based on the heart sound texture map

    Directory of Open Access Journals (Sweden)

    Huizhong Cheng

    2016-06-01

    Full Text Available In order to improve the heart sound recognition rate and reduce the recognition time, this paper introduces a new method for heart sound pattern recognition using the Heart Sound Texture Map. Based on the heart sound model, we define the heart sound time-frequency diagram and the Heart Sound Texture Map, study the principle and realization of the Heart Sound Window Function, and then discuss how to use the Heart Sound Window Function and the Short-Time Fourier Transform to obtain a two-dimensional heart sound time-frequency diagram. We propose a corner-correlation recognition algorithm based on the Heart Sound Texture Map, according to the characteristics of heart sounds. The simulation results show that, compared with traditional window functions, the Heart Sound Window Function makes the textures of the first (S1) and second (S2) heart sounds clearer, and that the corner-correlation recognition algorithm based on the Heart Sound Texture Map can significantly improve the recognition rate and reduce the computational expense, making it an effective heart sound recognition method.
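
    The time-frequency diagram underlying the texture map is a Short-Time Fourier Transform magnitude map. The sketch below uses a Hann window as a stand-in for the paper's Heart Sound Window Function, which is not specified here, and a synthetic two-burst signal in place of a real heart sound.

```python
import numpy as np

def stft_magnitude(signal, win, hop):
    """Magnitude time-frequency map via the Short-Time Fourier Transform.
    A Hann window stands in for the paper's Heart Sound Window Function."""
    window = np.hanning(win)
    n_frames = 1 + (len(signal) - win) // hop
    frames = np.stack([signal[i * hop: i * hop + win] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (frames, freq bins)

# toy heart-sound-like signal: two short bursts at different frequencies
fs = 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 40 * t) * (t < 0.2) + np.sin(2 * np.pi * 120 * t) * (t > 0.5)
tf_map = stft_magnitude(x, win=128, hop=64)
```

    Each row of `tf_map` is one time frame; the early frames peak near the 40 Hz bin and the late frames near the 120 Hz bin, which is exactly the kind of structure a texture-based recognizer operates on.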

  17. Pattern Recognition Methods and Features Selection for Speech Emotion Recognition System.

    Science.gov (United States)

    Partila, Pavol; Voznak, Miroslav; Tovarek, Jaromir

    2015-01-01

    The impact of the classification method and feature selection on speech emotion recognition accuracy is discussed in this paper. Selecting the correct parameters in combination with the classifier is an important part of reducing the computational complexity of the system. This step is necessary especially for systems that will be deployed in real-time applications. Speech emotion recognition systems are being developed and improved because of their wide usability in today's automatic voice-controlled systems. The Berlin database of emotional recordings was used in this experiment. Classification accuracy of artificial neural networks, k-nearest neighbours, and Gaussian mixture models was measured considering the selection of prosodic, spectral, and voice-quality features. The purpose was to find an optimal combination of methods and group of features for stress detection in human speech. The research contribution lies in the design of the speech emotion recognition system with respect to its accuracy and efficiency.
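
    One of the compared classifiers, k-nearest neighbours over prosodic features, is simple enough to sketch in full. The feature vectors (pitch mean, energy, speaking rate) and labels below are invented toy values; the Berlin database is not used here.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """k-nearest-neighbours vote over labelled prosodic feature vectors."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# toy (pitch mean in Hz, energy, speaking rate) -> emotion label
train = [
    ((110.0, 0.20, 3.1), "neutral"), ((115.0, 0.25, 3.0), "neutral"),
    ((180.0, 0.80, 5.2), "anger"),   ((175.0, 0.75, 5.0), "anger"),
    ((150.0, 0.50, 4.9), "fear"),    ((155.0, 0.55, 5.1), "fear"),
]
label = knn_predict(train, (178.0, 0.78, 5.1))
```

    In a real system the feature values would come from an acoustic front end, and `k` and the feature subset would be tuned exactly as the paper describes, jointly with the classifier choice.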

  18. The influence of suggestibility on memory.

    Science.gov (United States)

    Nicolas, Serge; Collins, Thérèse; Gounden, Yannick; Roediger, Henry L

    2011-06-01

    We provide a translation of Binet and Henri's pioneering 1894 paper on the influence of suggestibility on memory. Alfred Binet (1857-1911) is famous as the author who created the IQ test that bears his name, but he is almost unknown as the psychological investigator who generated numerous original experiments and fascinating results in the study of memory. His experiments published in 1894 manipulated suggestibility in several ways to determine effects on remembering. Three particular modes of suggestion were employed to induce false recognitions: (1) indirect suggestion by a preconceived idea; (2) direct suggestion; and (3) collective suggestion. In the commentary we suggest that Binet and Henri's (1894) paper written over 115 years ago is still highly relevant even today. In particular, Binet's legacy lives on in modern research on misinformation effects in memory, in studies of conformity, and in experiments on the social contagion of memory. Copyright © 2010 Elsevier Inc. All rights reserved.

  19. Automatic speech recognition for report generation in computed tomography

    International Nuclear Information System (INIS)

    Teichgraeber, U.K.M.; Ehrenstein, T.; Lemke, M.; Liebig, T.; Stobbe, H.; Hosten, N.; Keske, U.; Felix, R.

    1999-01-01

    Purpose: A study was performed to compare the performance of automatic speech recognition (ASR) with conventional transcription. Materials and Methods: 100 CT reports were generated by using ASR and 100 CT reports were dictated and written by medical transcriptionists. The time for dictation and correction of errors by the radiologist was assessed and the type of mistakes was analysed. The text recognition rate was calculated in both groups and the average time between completion of the imaging study by the technologist and generation of the written report was assessed. A commercially available speech recognition technology (ASKA Software, IBM Via Voice) running on a personal computer was used. Results: The time for dictation using digital voice recognition was 9.4±2.3 min compared to 4.5±3.6 min with an ordinary Dictaphone. The text recognition rate was 97% with digital voice recognition and 99% with medical transcriptionists. The average time from imaging completion to written report finalisation was reduced from 47.3 hours with medical transcriptionists to 12.7 hours with ASR. The analysis of misspellings demonstrated (ASR vs. medical transcriptionists): 3 vs. 4 syntax errors, 0 vs. 37 orthographic mistakes, 16 vs. 22 mistakes in substance, and 47 vs. erroneously applied terms. Conclusions: The use of digital voice recognition as a replacement for medical transcription is recommendable when immediate availability of written reports is necessary. (orig.) [de

  20. Pattern recognition

    CERN Document Server

    Theodoridis, Sergios

    2003-01-01

    Pattern recognition is a scientific discipline that is becoming increasingly important in the age of automation and information handling and retrieval. Pattern Recognition, 2e covers the entire spectrum of pattern recognition applications, from image analysis to speech recognition and communications. This book presents cutting-edge material on neural networks (a set of linked microprocessors that can form associations and uses pattern recognition to "learn") and enhances student motivation by approaching pattern recognition from the designer's point of view. A direct result of more than 10…

  1. Emotion recognition in Chinese people with schizophrenia.

    Science.gov (United States)

    Chan, Chetwyn C H; Wong, Raymond; Wang, Kai; Lee, Tatia M C

    2008-01-15

    This study examined whether people with paranoid or nonparanoid schizophrenia would show emotion-recognition deficits, both facial and prosodic. Furthermore, this study examined the neuropsychological predictors of emotion-recognition ability in people with schizophrenia. Participants comprised 86 people: 43 diagnosed with schizophrenia and 43 controls. The 43 clinical participants were placed in either the paranoid group (n=19) or the nonparanoid group (n=24). Each participant was administered the Facial Emotion Recognition task and the Prosodic Recognition task, together with other neuropsychological measures of attention and visual perception. People suffering from nonparanoid schizophrenia were found to have deficits in both facial and prosodic emotion recognition, after correction for the differences in the intelligence and depression scores between the two groups. Furthermore, spatial perception was observed to be the best predictor of facial emotion identification in individuals with nonparanoid schizophrenia, whereas attentional processing control predicted both prosodic emotion identification and discrimination in nonparanoid schizophrenia patients. Our findings suggest that patients with schizophrenia in remission may still suffer from impairment of certain aspects of emotion recognition.

  2. Implicit Recognition Based on Lateralized Perceptual Fluency

    Directory of Open Access Journals (Sweden)

    Iliana M. Vargas

    2012-02-01

    Full Text Available In some circumstances, accurate recognition of repeated images in an explicit memory test is driven by implicit memory. We propose that this “implicit recognition” results from perceptual fluency that influences responding without awareness of memory retrieval. Here we examined whether recognition would vary if images appeared in the same or different visual hemifield during learning and testing. Kaleidoscope images were briefly presented left or right of fixation during divided-attention encoding. Presentation in the same visual hemifield at test produced higher recognition accuracy than presentation in the opposite visual hemifield, but only for guess responses. These correct guesses likely reflect a contribution from implicit recognition, given that when the stimulated visual hemifield was the same at study and test, recognition accuracy was higher for guess responses than for responses with any level of confidence. The dramatic difference in guessing accuracy as a function of lateralized perceptual overlap between study and test suggests that implicit recognition arises from memory storage in visual cortical networks that mediate repetition-induced fluency increments.

  3. Control of Target Molecular Recognition in a Small Pore Space with Biomolecule-Recognition Gating Membrane.

    Science.gov (United States)

    Okuyama, Hiroto; Oshiba, Yuhei; Ohashi, Hidenori; Yamaguchi, Takeo

    2018-05-01

    A biomolecule-recognition gating membrane, which introduces a thermosensitive graft polymer including a molecular-recognition receptor into a porous membrane substrate, can close its pores upon recognizing a target biomolecule. The present study reports strategies for improving both the versatility and the sensitivity of the gating membrane. First, the membrane is fabricated by introducing the receptor via a selectively reactive click reaction, improving versatility. Second, the sensitivity of the membrane is enhanced via a method that actively delivers the target molecules into the pores. In this method, the tiny signal of the target biomolecule is amplified into an obvious pressure change. Furthermore, this offers 15 times higher sensitivity than the previously reported passive delivery method (membrane immersion in the sample solution), with a significantly shorter recognition time. These improvements will aid in applying the gating membrane to membrane sensors in medical fields. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. [Effect of opioid receptors on acute stress-induced changes in recognition memory].

    Science.gov (United States)

    Liu, Ying; Wu, Yu-Wei; Qian, Zhao-Qiang; Yan, Cai-Fang; Fan, Ka-Min; Xu, Jin-Hui; Li, Xiao; Liu, Zhi-Qiang

    2016-12-25

    Although ample evidence has shown that acute stress impairs memory, the influences of acute stress on different phases of memory, such as acquisition, consolidation, and retrieval, are different. Experimental data from both humans and animals support that the endogenous opioid system plays a role in stress, as endogenous opioid release is increased and opioid receptors are activated during the stress experience. On the other hand, the endogenous opioid system mediates learning and memory. The aim of the present study was to investigate the effect of acute forced-swimming stress on the recognition memory of C57 mice and the role of opioid receptors in this process, using a three-day novel object recognition task. The results showed that 15 min of acute forced swimming impaired the retrieval of recognition memory, but had no effect on the acquisition and consolidation of recognition memory. No significant change in object recognition memory was found in mice given naloxone, an opioid receptor antagonist, by intraperitoneal injection. However, intraperitoneal injection of naloxone before forced-swimming stress inhibited the impairment of recognition-memory retrieval caused by the stress. Real-time PCR showed that acute forced swimming decreased μ opioid receptor mRNA levels in the whole brain and hippocampus, while injection of naloxone before stress reversed this change. These results suggest that acute stress may impair recognition memory retrieval via opioid receptors.

  5. Individual differences in forced-choice recognition memory: partitioning contributions of recollection and familiarity.

    Science.gov (United States)

    Migo, Ellen M; Quamme, Joel R; Holmes, Selina; Bendell, Andrew; Norman, Kenneth A; Mayes, Andrew R; Montaldi, Daniela

    2014-01-01

    In forced-choice recognition memory, two different testing formats are possible under conditions of high target-foil similarity: Each target can be presented alongside foils similar to itself (forced-choice corresponding; FCC), or alongside foils similar to other targets (forced-choice noncorresponding; FCNC). Recent behavioural and neuropsychological studies suggest that FCC performance can be supported by familiarity whereas FCNC performance is supported primarily by recollection. In this paper, we corroborate this finding from an individual differences perspective. A group of older adults were given a test of FCC and FCNC recognition for object pictures, as well as standardized tests of recall, recognition, and IQ. Recall measures were found to predict FCNC, but not FCC performance, consistent with a critical role for recollection in FCNC only. After the common influence of recall was removed, standardized tests of recognition predicted FCC, but not FCNC performance. This is consistent with a contribution of only familiarity in FCC. Simulations show that a two-process model, where familiarity and recollection make separate contributions to recognition, is 10 times more likely to give these results than a single-process model. This evidence highlights the importance of recognition memory test design when examining the involvement of recollection and familiarity.

  6. Surviving Blind Decomposition: A Distributional Analysis of the Time-Course of Complex Word Recognition

    Science.gov (United States)

    Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor

    2017-01-01

    The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. "Form-then-meaning" accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings,…

  7. Culture/Religion and Identity: Social Justice versus Recognition

    Science.gov (United States)

    Bekerman, Zvi

    2012-01-01

    Recognition is the main word attached to multicultural perspectives. The multicultural call for recognition, the one calling for the recognition of cultural minorities and identities, now voiced by liberal states everywhere, including Israel, was a more difficult one. It took the author some time to realize that calling for the recognition…

  8. Fast neuromimetic object recognition using FPGA outperforms GPU implementations.

    Science.gov (United States)

    Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph

    2013-08-01

    Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial Field-Programmable Gate Array, specifically the Xilinx Virtex 6 ML605 evaluation board with the XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.
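
    The first two HMAX stages mentioned above, Gabor filtering (S1) followed by local max pooling (C1), can be sketched in numpy. Real HMAX uses multiple scales plus S2 patch matching and C2 global pooling, all omitted here; the filter parameters and pooling size are illustrative, not the paper's.

```python
import numpy as np

def gabor(size, theta, wavelength=4.0, sigma=2.0, gamma=0.5):
    """One Gabor filter from HMAX's S1 (simple-cell) layer."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()  # zero mean, so flat regions give no response

def s1_c1(image, thetas, filt_size=7, pool=4):
    """S1: rectified Gabor responses per orientation; C1: local max pooling."""
    h, w = image.shape
    maps = []
    for theta in thetas:
        f = gabor(filt_size, theta)
        resp = np.abs(np.array(
            [[np.sum(image[i:i + filt_size, j:j + filt_size] * f)
              for j in range(w - filt_size + 1)]
             for i in range(h - filt_size + 1)]))
        rh = (resp.shape[0] // pool) * pool
        rw = (resp.shape[1] // pool) * pool
        c1 = resp[:rh, :rw].reshape(rh // pool, pool,
                                    rw // pool, pool).max(axis=(1, 3))
        maps.append(c1)
    return np.stack(maps)

img = np.zeros((32, 32))
img[:, 16] = 1.0  # a single vertical edge
feats = s1_c1(img, thetas=[0.0, np.pi / 2])
```

    The vertically oriented filter responds far more strongly to the vertical edge than the horizontal one, which is the orientation selectivity the FPGA pipeline parallelizes across many filters at once.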

  9. A dynamic approach to recognition memory.

    Science.gov (United States)

    Cox, Gregory E; Shiffrin, Richard M

    2017-11-01

    We present a dynamic model of memory that integrates the processes of perception, retrieval from knowledge, retrieval of events, and decision making as these evolve from 1 moment to the next. The core of the model is that recognition depends on tracking changes in familiarity over time from an initial baseline generally determined by context, with these changes depending on the availability of different kinds of information at different times. A mathematical implementation of this model leads to precise, accurate predictions of accuracy, response time, and speed-accuracy trade-off in episodic recognition at the levels of both groups and individuals across a variety of paradigms. Our approach leads to novel insights regarding word frequency, speeded responding, context reinstatement, short-term priming, similarity, source memory, and associative recognition, revealing how the same set of core dynamic principles can help unify otherwise disparate phenomena in the study of memory. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
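
    The core idea, familiarity drifting from a context-set baseline until it crosses an "old" or "new" decision bound, yielding both a choice and a response time, can be caricatured with a simple sequential-sampling sketch. All parameter values below are illustrative; this is not the authors' actual model.

```python
import random

def recognition_decision(is_old, baseline=0.0, drift_step=0.25, noise=0.5,
                         upper=3.0, lower=-3.0, max_steps=1000, seed=1):
    """Track familiarity moment to moment from a baseline; respond when it
    crosses the 'old' (upper) or 'new' (lower) bound. Returns (choice, RT)."""
    rng = random.Random(seed)
    f = baseline
    drift = drift_step if is_old else -drift_step  # studied items drift up
    for t in range(1, max_steps + 1):
        f += drift + rng.gauss(0, noise)
        if f >= upper:
            return "old", t
        if f <= lower:
            return "new", t
    return "no decision", max_steps

choice, rt = recognition_decision(is_old=True)
```

    Narrowing the bounds trades accuracy for speed, which is the speed-accuracy trade-off behaviour this class of dynamic model is built to predict.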

  10. Syntactic and semantic errors in radiology reports associated with speech recognition software.

    Science.gov (United States)

    Ringler, Michael D; Goss, Brian C; Bartholmai, Brian J

    2017-03-01

    Speech recognition software can increase the frequency of errors in radiology reports, which may affect patient care. We retrieved 213,977 speech recognition software-generated reports from 147 different radiologists and proofread them for errors. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialties, and time periods. In all, 20,759 reports (9.7%) contained errors, of which 3992 (1.9%) contained material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (p < .001). Errors were more common in longer reports, reports reinterpreting results of outside examinations, and procedural studies (all p < .001). Error rate decreased over time (p < .001), which suggests that a quality control program with regular feedback may reduce errors.
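
    The reported percentages follow directly from the report counts; a quick arithmetic check:

```python
# Sanity-check the reported error proportions from the raw counts.
total_reports = 213_977
reports_with_errors = 20_759
material_errors = 3_992

error_rate = reports_with_errors / total_reports  # about 0.097, i.e. 9.7%
material_rate = material_errors / total_reports   # about 0.019, i.e. 1.9%
```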

  11. The recognition heuristic: a review of theory and tests.

    Science.gov (United States)

    Pachur, Thorsten; Todd, Peter M; Gigerenzer, Gerd; Schooler, Lael J; Goldstein, Daniel G

    2011-01-01

    The recognition heuristic is a prime example of how, by exploiting a match between mind and environment, a simple mental strategy can lead to efficient decision making. The proposal of the heuristic initiated a debate about the processes underlying the use of recognition in decision making. We review research addressing four key aspects of the recognition heuristic: (a) that recognition is often an ecologically valid cue; (b) that people often follow recognition when making inferences; (c) that recognition supersedes further cue knowledge; (d) that its use can produce the less-is-more effect - the phenomenon that lesser states of recognition knowledge can lead to more accurate inferences than more complete states. After we contrast the recognition heuristic to other related concepts, including availability and fluency, we carve out, from the existing findings, some boundary conditions of the use of the recognition heuristic as well as key questions for future research. Moreover, we summarize developments concerning the connection of the recognition heuristic with memory models. We suggest that the recognition heuristic is used adaptively and that, compared to other cues, recognition seems to have a special status in decision making. Finally, we discuss how systematic ignorance is exploited in other cognitive mechanisms (e.g., estimation and preference).
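
    The heuristic's decision rule itself is simple enough to state as code: if exactly one of two objects is recognized, infer that it has the higher criterion value; otherwise the heuristic does not apply. The city names and recognition set below are illustrative only.

```python
def recognition_heuristic(obj_a, obj_b, recognized):
    """Return the recognized object when exactly one is recognized,
    else None (the heuristic is not applicable)."""
    a, b = obj_a in recognized, obj_b in recognized
    if a and not b:
        return obj_a
    if b and not a:
        return obj_b
    return None  # both or neither recognized: fall back on other knowledge

recognized = {"Munich", "Hamburg"}
```

    With more complete recognition knowledge the rule applies to fewer pairs, which is the mechanism behind the less-is-more effect discussed above.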

  14. Tracking the time course of word-frequency effects in auditory word recognition with event-related potentials.

    Science.gov (United States)

    Dufour, Sophie; Brunellière, Angèle; Frauenfelder, Ulrich H

    2013-04-01

    Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to reflect mechanisms involved in word identification, was also examined. The ERP data showed a clear frequency effect as early as 350 ms from word onset on the P350, followed by a later effect at word offset on the late N400. A neighborhood density effect was also found at an early stage of spoken-word processing on the PMN, and at word offset on the late N400. Overall, our ERP differences for word frequency suggest that frequency affects the core processes of word identification starting from the initial phase of lexical activation and including target word selection. They thus rule out any interpretation of the word frequency effect that is limited to a purely decisional locus after word identification has been completed. Copyright © 2012 Cognitive Science Society, Inc.

  15. Exploiting Three-Dimensional Gaze Tracking for Action Recognition During Bimanual Manipulation to Enhance Human–Robot Collaboration

    Directory of Open Access Journals (Sweden)

    Alireza Haji Fathaliyan

    2018-04-01

    Full Text Available Human–robot collaboration could be advanced by facilitating the intuitive, gaze-based control of robots, and enabling robots to recognize human actions, infer human intent, and plan actions that support human goals. Traditionally, gaze tracking approaches to action recognition have relied upon computer vision-based analyses of two-dimensional egocentric camera videos. The objective of this study was to identify useful features that can be extracted from three-dimensional (3D) gaze behavior and used as inputs to machine learning algorithms for human action recognition. We investigated human gaze behavior and gaze–object interactions in 3D during the performance of a bimanual, instrumental activity of daily living: the preparation of a powdered drink. A marker-based motion capture system and binocular eye tracker were used to reconstruct 3D gaze vectors and their intersection with 3D point clouds of objects being manipulated. Statistical analyses of gaze fixation duration and saccade size suggested that some actions (pouring and stirring) may require more visual attention than other actions (reach, pick up, set down, and move). 3D gaze saliency maps, generated with high spatial resolution for six subtasks, appeared to encode action-relevant information. The "gaze object sequence" was used to capture information about the identity of objects in concert with the temporal sequence in which the objects were visually regarded. Dynamic time warping barycentric averaging was used to create a population-based set of characteristic gaze object sequences that accounted for intra- and inter-subject variability. The gaze object sequence was used to demonstrate the feasibility of a simple action recognition algorithm that utilized a dynamic time warping Euclidean distance metric. Averaged over the six subtasks, the action recognition algorithm yielded an accuracy of 96.4%, precision of 89.5%, and recall of 89.2%. This level of performance suggests that…
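
    The gaze-object-sequence classifier can be sketched as DTW over symbol sequences with a 0/1 substitution cost, followed by nearest-template assignment. The object labels and template sequences below are invented, and the paper's barycentric averaging step for building templates is omitted.

```python
def dtw_seq(a, b):
    """DTW over symbolic gaze-object sequences with 0/1 match cost."""
    INF = float("inf")
    D = [[INF] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[-1][-1]

def classify(seq, templates):
    """Assign the label of the nearest characteristic sequence under DTW."""
    return min(templates, key=lambda lbl: dtw_seq(seq, templates[lbl]))

# hypothetical characteristic gaze-object sequences, one per subtask
templates = {
    "pour": ["pitcher", "pitcher", "glass", "glass"],
    "stir": ["spoon", "glass", "spoon", "glass"],
}
label = classify(["pitcher", "glass", "glass"], templates)
```

    Because DTW absorbs repeats and stretches, an observed sequence that dwells on objects for different durations than the template can still match it exactly.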

  16. Speech recognition technology: an outlook for human-to-machine interaction.

    Science.gov (United States)

    Erdel, T; Crooks, S

    2000-01-01

    Speech recognition, as an enabling technology in healthcare-systems computing, is a topic that has been discussed for quite some time but is just now coming to fruition. Traditionally, speech-recognition software has been constrained by hardware, but improved processors and increased memory capacities are starting to remove some of these limitations. With these barriers removed, companies that create software for the healthcare setting have the opportunity to write more successful applications. Among the criticisms of speech-recognition applications are high error rates and steep training curves. However, even in the face of such negative perceptions, there remain significant opportunities for speech recognition to allow healthcare providers and, more specifically, physicians to work more efficiently and ultimately spend more time with their patients and less time completing necessary documentation. This article will identify opportunities for inclusion of speech-recognition technology in the healthcare setting and examine major categories of speech-recognition software: continuous speech recognition, command and control, and text-to-speech. We will discuss the advantages and disadvantages of each area, the limitations of the software today, and how future trends might affect them.

  17. Generalized Hough transform based time invariant action recognition with 3D pose information

    Science.gov (United States)

    Muench, David; Huebner, Wolfgang; Arens, Michael

    2014-10-01

    Human action recognition has emerged as an important field in the computer vision community due to its large number of applications, such as automatic video surveillance, content-based video search, and human–robot interaction. In order to cope with the challenges that this large variety of applications presents, recent research has focused more on developing classifiers able to detect several actions in more natural and unconstrained video sequences. The invariance–discrimination tradeoff in action recognition has been addressed by utilizing a Generalized Hough Transform. As a basis for action representation we transform 3D poses into a robust feature space, referred to as pose descriptors. For each action class a one-dimensional temporal voting space is constructed. Votes are generated by associating pose descriptors with their position in time relative to the end of an action sequence. Training data consist of manually segmented action sequences. In the detection phase valid human 3D poses are assumed as input, e.g. originating from 3D sensors or monocular pose reconstruction methods. The human 3D poses are normalized to gain view-independence and transformed into (i) relative limb-angle space to ensure independence of non-adjacent joints or (ii) geometric features. In (i) an action descriptor consists of the relative angles between limbs and their temporal derivatives. In (ii) the action descriptor consists of different geometric features. In order to circumvent the problem of time-warping we propose to use a codebook of prototypical 3D poses which is generated from sample sequences of 3D motion capture data. This idea is in accordance with the concept of equivalence classes in action space. Results of the codebook method are presented using the Kinect sensor and the CMU Motion Capture Database.
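    The one-dimensional temporal voting space described above can be sketched as a toy accumulator (a hypothetical illustration, not the paper's implementation): each observed pose prototype casts votes for candidate action end-frames using temporal offsets learned in training, and a peak in the accumulator marks a detected action end. The codebook entries and offsets below are invented.

    ```python
    # Hypothetical sketch of Hough-style temporal voting for action detection.
    from collections import defaultdict

    # Codebook: pose prototype -> offsets ("frames before the action ends")
    # learned from segmented training sequences (values invented).
    codebook = {"arm_up": [2, 3], "arm_down": [0, 1]}

    def vote(observed):
        """observed: list of (frame_index, pose_prototype) detections."""
        space = defaultdict(int)  # candidate end frame -> accumulated votes
        for t, pose in observed:
            for offset in codebook.get(pose, []):
                space[t + offset] += 1
        return space

    obs = [(0, "arm_up"), (1, "arm_up"), (2, "arm_down"), (3, "arm_down")]
    space = vote(obs)
    end = max(space, key=space.get)  # peak in the voting space = action end
    ```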

  18. Pattern Recognition Methods and Features Selection for Speech Emotion Recognition System

    Directory of Open Access Journals (Sweden)

    Pavol Partila

    2015-01-01

    Full Text Available The impact of the classification method and feature selection on speech emotion recognition accuracy is discussed in this paper. Selecting the correct parameters in combination with the classifier is an important part of reducing the complexity of system computation. This step is necessary especially for systems that will be deployed in real-time applications. The motivation for developing and improving speech emotion recognition systems is their wide applicability in today's automatic voice-controlled systems. The Berlin database of emotional recordings was used in this experiment. Classification accuracy of artificial neural networks, k-nearest neighbours, and Gaussian mixture models is measured considering the selection of prosodic, spectral, and voice quality features. The purpose was to find an optimal combination of methods and group of features for stress detection in human speech. The research contribution lies in the design of an accurate and efficient speech emotion recognition system.
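    As a toy illustration of the classifier/feature pairing this kind of study evaluates (not the paper's actual Berlin-database pipeline; the feature values, labels, and k are invented), a k-nearest-neighbour classifier over two prosodic features might look like:

    ```python
    # Hypothetical sketch: kNN emotion classification on invented
    # two-dimensional "prosodic" feature vectors.
    import math

    train = [  # (mean_pitch_norm, energy_norm) -> emotion label (invented)
        ((0.9, 0.8), "anger"), ((0.8, 0.9), "anger"),
        ((0.2, 0.1), "sadness"), ((0.1, 0.2), "sadness"),
    ]

    def knn_predict(x, k=3):
        """Majority vote among the k training samples closest to x."""
        nearest = sorted(train, key=lambda s: math.dist(x, s[0]))[:k]
        labels = [label for _, label in nearest]
        return max(set(labels), key=labels.count)

    print(knn_predict((0.85, 0.85)))  # anger
    ```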

  19. 8 CFR 292.2 - Organizations qualified for recognition; requests for recognition; withdrawal of recognition...

    Science.gov (United States)

    2010-01-01

    ...; requests for recognition; withdrawal of recognition; accreditation of representatives; roster. 292.2...; withdrawal of recognition; accreditation of representatives; roster. (a) Qualifications of organizations. A non-profit religious, charitable, social service, or similar organization established in the United...

  20. Listening for recollection: a multi-voxel pattern analysis of recognition memory retrieval strategies

    Directory of Open Access Journals (Sweden)

    Joel R Quamme

    2010-08-01

    Full Text Available Recent studies of recognition memory indicate that subjects can strategically vary how much they rely on recollection of specific details vs. feelings of familiarity when making recognition judgments. One possible explanation of these results is that subjects can establish an internally-directed attentional state (listening for recollection) that enhances retrieval of studied details; fluctuations in this attentional state over time should be associated with fluctuations in subjects' recognition behavior. In this study, we used multi-voxel pattern analysis of fMRI data to identify brain regions that are involved in listening for recollection. Specifically, we looked for brain regions that met the following criteria: (1) Distinct neural patterns should be present when subjects are instructed to rely on recollection vs. familiarity, and (2) fluctuations in these neural patterns should be related to recognition behavior in the manner predicted by dual-process theories of recognition: Specifically, the presence of the recollection pattern during the pre-stimulus interval (indicating that subjects are listening for recollection at that moment) should be associated with a selective decrease in false alarms to related lures. We found that pre-stimulus activity in the right supramarginal gyrus met all of these criteria, suggesting that this region proactively establishes an internally-directed attentional state that fosters recollection. We also found other regions (e.g., left middle temporal gyrus) where the pattern of neural activity was related to subjects’ responding to related lures after stimulus onset (but not before), suggesting that these regions implement processes that are engaged in a reactive fashion to boost recollection.

  1. Probabilistic recognition of human faces from video

    DEFF Research Database (Denmark)

    Zhou, Saohua; Krüger, Volker; Chellappa, Rama

    2003-01-01

    Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal information in a probe video, which simultaneously characterizes the kinematics and identity using a motion vector and an identity variable, respectively. The joint posterior distribution of the motion vector and the identity variable is estimated at each time instant and then propagated to the next time ... of the identity variable produces the recognition result. The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using videos collected by NIST/USF and CMU illustrate the effectiveness of this approach for both still-to-video and video-to-video ...

  2. Culture moderates the relationship between interdependence and face recognition

    Directory of Open Access Journals (Sweden)

    Andy H Ng

    2015-10-01

    Full Text Available Recent theory suggests that face recognition accuracy is affected by people’s motivations, with people being particularly motivated to remember ingroup versus outgroup faces. In the current research we suggest that those higher in interdependence should have a greater motivation to remember ingroup faces, but this should depend on how ingroups are defined. To examine this possibility, we used a joint individual difference and cultural approach to test (a) whether individual differences in interdependence would predict face recognition accuracy, and (b) whether this effect would be moderated by culture. In Study 1 European Canadians higher in interdependence demonstrated greater recognition for same-race (White), but not cross-race (East Asian), faces. In Study 2 we found that culture moderated this effect. Interdependence again predicted greater recognition for same-race (White), but not cross-race (East Asian), faces among European Canadians; however, interdependence predicted worse recognition for both same-race (East Asian) and cross-race (White) faces among first-generation East Asians. The results provide insight into the role of motivation in face perception as well as cultural differences in the conception of ingroups.

  3. Approach to recognition of flexible form for credit card expiration date recognition as example

    Science.gov (United States)

    Sheshkus, Alexander; Nikolaev, Dmitry P.; Ingacheva, Anastasia; Skoryukina, Natalya

    2015-12-01

    In this paper we consider the task of finding information fields within a document with a flexible form, using the credit card expiration date field as an example. We discuss the main difficulties and suggest possible solutions. In our case this task is to be solved on mobile devices, therefore the computational complexity has to be as low as possible. We provide results of the analysis of the suggested algorithm. The error distribution of the recognition system shows that the suggested algorithm solves the task with the required accuracy.

  4. Colour agnosia impairs the recognition of natural but not of non-natural scenes.

    Science.gov (United States)

    Nijboer, Tanja C W; Van Der Smagt, Maarten J; Van Zandvoort, Martine J E; De Haan, Edward H F

    2007-03-01

    Scene recognition can be enhanced by appropriate colour information, yet the level of visual processing at which colour exerts its effects is still unclear. It has been suggested that colour supports low-level sensory processing, while others have claimed that colour information aids semantic categorization and recognition of objects and scenes. We investigated the effect of colour on scene recognition in a case of colour agnosia, M.A.H. In a scene identification task, participants had to name images of natural or non-natural scenes in six different formats. Irrespective of scene format, M.A.H. was much slower on the natural than on the non-natural scenes. As expected, neither M.A.H. nor control participants showed any difference in performance for the non-natural scenes. However, for the natural scenes, appropriate colour facilitated scene recognition in control participants (i.e., shorter reaction times), whereas M.A.H.'s performance did not differ across formats. Our data thus support the hypothesis that the effect of colour occurs at the level of learned associations.

  5. Can corrective feedback improve recognition memory?

    Science.gov (United States)

    Kantner, Justin; Lindsay, D Stephen

    2010-06-01

    An understanding of the effects of corrective feedback on recognition memory can inform both recognition theory and memory training programs, but few published studies have investigated the issue. Although the evidence to date suggests that feedback does not improve recognition accuracy, few studies have directly examined its effect on sensitivity, and fewer have created conditions that facilitate a feedback advantage by encouraging controlled processing at test. In Experiment 1, null effects of feedback were observed following both deep and shallow encoding of categorized study lists. In Experiment 2, feedback robustly influenced response bias by allowing participants to discern highly uneven base rates of old and new items, but sensitivity remained unaffected. In Experiment 3, a false-memory procedure, feedback failed to attenuate false recognition of critical lures. In Experiment 4, participants were unable to use feedback to learn a simple category rule separating old items from new items, despite the fact that feedback was of substantial benefit in a nearly identical categorization task. The recognition system, despite a documented ability to utilize controlled strategic or inferential decision-making processes, appears largely impenetrable to a benefit of corrective feedback.

  6. Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex

    Science.gov (United States)

    Liu, Hesheng; Agam, Yigal; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms post-stimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feed-forward theories and provides strong constraints for computational models of human vision. PMID:19409272
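    The time-resolved decoding idea can be caricatured with synthetic data (a hypothetical sketch, not the study's analysis; signal values are invented): a leave-one-out nearest-centroid classifier is applied independently at each time bin, and decoding accuracy rises once category-selective signals emerge.

    ```python
    # Hypothetical sketch: per-time-bin decoding of object category from
    # single-trial "field potential" amplitudes (synthetic data).

    def decode_at(t, trials):
        """Leave-one-out nearest-centroid decoding accuracy at time bin t."""
        correct = 0
        for i, (label, sig) in enumerate(trials):
            rest = [tr for j, tr in enumerate(trials) if j != i]
            cents = {}
            for lab in {l for l, _ in rest}:
                vals = [s[t] for l, s in rest if l == lab]
                cents[lab] = sum(vals) / len(vals)
            pred = min(cents, key=lambda lab: abs(sig[t] - cents[lab]))
            correct += pred == label
        return correct / len(trials)

    # Synthetic trials: the two categories diverge only at the later bin.
    trials = [("face", [0.0, 1.0]), ("face", [0.1, 1.1]),
              ("house", [0.06, -1.0]), ("house", [-0.02, -1.1])]
    early, late = decode_at(0, trials), decode_at(1, trials)
    ```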

  7. Further insight into self-face recognition in schizophrenia patients: Why ambiguity matters.

    Science.gov (United States)

    Bortolon, Catherine; Capdevielle, Delphine; Salesse, Robin N; Raffard, Stephane

    2016-03-01

    Although some studies reported specifically self-face processing deficits in patients with schizophrenia disorder (SZ), it remains unclear whether these deficits reflect a more global face processing deficit. Contradictory results are probably due to the different methodologies employed and the lack of control of other confounding factors. Moreover, no study has so far evaluated possible daily life self-face recognition difficulties in SZ. Therefore, our primary objective was to investigate self-face recognition in patients suffering from SZ compared to healthy controls (HC) using an "objective measure" (reaction time and accuracy) and a "subjective measure" (self-report of daily self-face recognition difficulties). Twenty-four patients with SZ and 23 HC performed a self-face recognition task and completed a questionnaire evaluating daily difficulties in self-face recognition. The recognition task material consisted of three different faces (the participant's own, a famous face, and an unknown face) morphed in steps of 20%. Results showed that SZ were overall slower than HC regardless of face identity, but less accurate only for faces containing 60%-40% morphing. Moreover, SZ and HC reported a similar number of daily problems with self/other face recognition. No significant correlations were found between objective and subjective measures (p > 0.05). The small sample size and relatively mild severity of psychopathology do not allow us to generalize our results. These results suggest that: (1) patients with SZ are as capable of recognizing their own face as HC, although they are susceptible to ambiguity; (2) there are far fewer self-recognition deficits in schizophrenia patients than previously postulated. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. A Case Study of the Recognition of the Foundation Degree Qualification for Pharmacy Technicians

    Science.gov (United States)

    Herrera, Helena; Brown, David; Portlock, Jane

    2013-01-01

    The Foundation Degree (FD) is a work-related, intermediate-level higher education qualification. Issues around its recognition can affect success in attracting students where the literature suggests that uptake should be adequate. This research represented a case study which explored, for the first time, whether the above applied to the FD for…

  9. Robust speaker recognition in noisy environments

    CERN Document Server

    Rao, K Sreenivasa

    2014-01-01

    This book discusses speaker recognition methods that deal with realistic, variable noisy environments. The text covers authentication systems that are robust to noisy background environments, function in real time, and can be incorporated into mobile devices. The book focuses on different approaches to enhance the accuracy of speaker recognition in the presence of varying background environments. The authors examine: (a) feature compensation using multiple background models, (b) feature mapping using data-driven stochastic models, (c) design of a supervector-based GMM-SVM framework for robust speaker recognition, (d) total variability modeling (i-vectors) in a discriminative framework, and (e) a boosting method to fuse evidence from multiple SVM models.

  10. Interplay of oxytocin, vasopressin, and sex hormones in the regulation of social recognition.

    Science.gov (United States)

    Gabor, Christopher S; Phan, Anna; Clipperton-Allen, Amy E; Kavaliers, Martin; Choleris, Elena

    2012-02-01

    Social recognition is a fundamental skill that forms the basis of behaviors essential to the proper functioning of pair or group living in most social species. We review here various neurobiological and genetic studies that point to an interplay of oxytocin (OT), arginine-vasopressin (AVP), and the gonadal hormones, estrogens and testosterone, in the mediation of social recognition. Results of a number of studies have shown that OT and its actions at the medial amygdala seem to be essential for social recognition in both sexes. Estrogens facilitate social recognition, possibly by regulating OT production in the hypothalamus and the OT receptors at the medial amygdala. Estrogens also affect social recognition on a rapid time scale, likely through nongenomic actions. The mechanisms of these rapid effects are currently unknown, but available evidence points at the hippocampus as the possible site of action. Male rodents seem to be more dependent than female rodents on AVP acting at the level of the lateral septum for social recognition. Results of various studies suggest that testosterone and its metabolites (including estradiol) influence social recognition in males primarily through the AVP V1a receptor. Overall, it appears that gonadal hormone modulation of OT and AVP regulates and fine-tunes social recognition and those behaviors that depend upon it (e.g., social bonds, social hierarchies) in a sex-specific manner. This points at an important role for these neuroendocrine systems in the regulation of the sex differences that are evident in social behavior and of sociality as a whole.

  11. Semantic relations differentially impact associative recognition memory: electrophysiological evidence.

    Science.gov (United States)

    Kriukova, Olga; Bridger, Emma; Mecklinger, Axel

    2013-10-01

    Though associative recognition memory is thought to rely primarily on recollection, recent research indicates that familiarity might also make a substantial contribution when to-be-learned items are integrated into a coherent structure by means of an existing semantic relation. It remains unclear how different types of semantic relations, such as categorical (e.g., dancer-singer) and thematic (e.g., dancer-stage) relations might affect associative recognition, however. Using event-related potentials (ERPs), we addressed this question by manipulating the type of semantic link between paired words in an associative recognition memory experiment. An early midfrontal old/new effect, typically linked to familiarity, was observed across the relation types. In contrast, a robust left parietal old/new effect was found in the categorical condition only, suggesting a clear contribution of recollection to associative recognition for this kind of pairs. One interpretation of this pattern is that familiarity was sufficiently diagnostic for associative recognition of thematic relations, which could result from the integrative nature of the thematic relatedness compared to the similarity-based nature of categorical pairs. The present study suggests that the extent to which recollection and familiarity are involved in associative recognition is at least in part determined by the properties of semantic relations between the paired associates. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Optogenetic Stimulation of Prefrontal Glutamatergic Neurons Enhances Recognition Memory.

    Science.gov (United States)

    Benn, Abigail; Barker, Gareth R I; Stuart, Sarah A; Roloff, Eva V L; Teschemacher, Anja G; Warburton, E Clea; Robinson, Emma S J

    2016-05-04

    Finding effective cognitive enhancers is a major health challenge; however, modulating glutamatergic neurotransmission has the potential to enhance performance in recognition memory tasks. Previous studies using glutamate receptor antagonists have revealed that the medial prefrontal cortex (mPFC) plays a central role in associative recognition memory. The present study investigates short-term recognition memory using optogenetics to target glutamatergic neurons within the rodent mPFC specifically. Selective stimulation of glutamatergic neurons during the online maintenance of information enhanced associative recognition memory in normal animals. This cognitive enhancing effect was replicated by local infusions of the AMPAkine CX516, but not CX546, which differ in their effects on EPSPs. This suggests that enhancing the amplitude, but not the duration, of excitatory synaptic currents improves memory performance. Increasing glutamate release through infusions of the mGluR7 presynaptic receptor antagonist MMPIP had no effect on performance. These results provide new mechanistic information that could guide the targeting of future cognitive enhancers. Our work suggests that improved associative-recognition memory can be achieved by enhancing endogenous glutamatergic neuronal activity selectively using an optogenetic approach. We build on these observations to recapitulate this effect using drug treatments that enhance the amplitude of EPSPs; however, drugs that alter the duration of the EPSP or increase glutamate release lack efficacy. This suggests that both neural and temporal specificity are needed to achieve cognitive enhancement. Copyright © 2016 Benn et al.

  13. Mobile Visual Recognition on Smartphones

    Directory of Open Access Journals (Sweden)

    Zhenwen Gui

    2013-01-01

    Full Text Available This paper addresses the recognition of large-scale outdoor scenes on smartphones by fusing outputs of inertial sensors and computer vision techniques. The main contributions can be summarized as follows. Firstly, we propose an ORD (overlap region divide) method to plot the image position area, which is fast enough to find the nearest visiting area and can also reduce the search range compared with the traditional approaches. Secondly, the vocabulary tree-based approach is improved by introducing a GAGCC (gravity-aligned geometric consistency constraint). Our method involves no operation in the high-dimensional feature space and does not assume a global transform between a pair of images. Thus, it substantially reduces the computational complexity and memory usage, which makes city-scale image recognition feasible on the smartphone. Experiments on a collected database including 0.16 million images show that the proposed method demonstrates excellent recognition performance, while maintaining an average recognition time of about 1 s.

  14. The processing of auditory and visual recognition of self-stimuli.

    Science.gov (United States)

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

    This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  15. Deletion of the GluA1 AMPA receptor subunit impairs recency-dependent object recognition memory

    Science.gov (United States)

    Sanderson, David J.; Hindley, Emma; Smeaton, Emily; Denny, Nick; Taylor, Amy; Barkus, Chris; Sprengel, Rolf; Seeburg, Peter H.; Bannerman, David M.

    2011-01-01

    Deletion of the GluA1 AMPA receptor subunit impairs short-term spatial recognition memory. It has been suggested that short-term recognition depends upon memory caused by the recent presentation of a stimulus that is independent of contextual–retrieval processes. The aim of the present set of experiments was to test whether the role of GluA1 extends to nonspatial recognition memory. Wild-type and GluA1 knockout mice were tested on the standard object recognition task and a context-independent recognition task that required recency-dependent memory. In a first set of experiments it was found that GluA1 deletion failed to impair performance on either of the object recognition or recency-dependent tasks. However, GluA1 knockout mice displayed increased levels of exploration of the objects in both the sample and test phases compared to controls. In contrast, when the time that GluA1 knockout mice spent exploring the objects was yoked to control mice during the sample phase, it was found that GluA1 deletion now impaired performance on both the object recognition and the recency-dependent tasks. GluA1 deletion failed to impair performance on a context-dependent recognition task regardless of whether object exposure in knockout mice was yoked to controls or not. These results demonstrate that GluA1 is necessary for nonspatial as well as spatial recognition memory and plays an important role in recency-dependent memory processes. PMID:21378100

  17. Misattribution, false recognition and the sins of memory.

    Science.gov (United States)

    Schacter, D L; Dodson, C S

    2001-09-29

    Memory is sometimes a troublemaker. Schacter has classified memory's transgressions into seven fundamental 'sins': transience, absent-mindedness, blocking, misattribution, suggestibility, bias and persistence. This paper focuses on one memory sin, misattribution, that is implicated in false or illusory recognition of episodes that never occurred. We present data from cognitive, neuropsychological and neuroimaging studies that illuminate aspects of misattribution and false recognition. We first discuss cognitive research examining possible mechanisms of misattribution associated with false recognition. We also consider ways in which false recognition can be reduced or avoided, focusing in particular on the role of distinctive information. We next turn to neuropsychological research concerning patients with amnesia and Alzheimer's disease that reveals conditions under which such patients are less susceptible to false recognition than are healthy controls, thus providing clues about the brain mechanisms that drive false recognition. We then consider neuroimaging studies concerned with the neural correlates of true and false recognition, examining when the two forms of recognition can and cannot be distinguished on the basis of brain activity. Finally, we argue that even though misattribution and other memory sins are annoying and even dangerous, they can also be viewed as by-products of adaptive features of memory.

  18. The Role of Verbal Instruction and Visual Guidance in Training Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Jamie S. North

    2017-09-01

    Full Text Available We used a novel approach to examine whether it is possible to improve the perceptual–cognitive skill of pattern recognition using a video-based training intervention. Moreover, we investigated whether any improvements in pattern recognition transfer to an improved ability to make anticipation judgments. Finally, we compared the relative effectiveness of verbal and visual guidance interventions compared to a group that merely viewed the same sequences without any intervention and a control group that only completed pre- and post-tests. We found a significant effect for time of testing. Participants were more sensitive in their ability to perceive patterns and distinguish between novel and familiar sequences at post- compared to pre-test. However, this improvement was not influenced by the nature of the intervention, despite some trends in the data. An analysis of anticipation accuracy showed no change from pre- to post-test following the pattern recognition training intervention, suggesting that the link between pattern perception and anticipation may not be strong. We present a series of recommendations for scientists and practitioners when employing training methods to improve pattern recognition and anticipation.

  19. Taxon and trait recognition from digitized herbarium specimens using deep convolutional neural networks

    KAUST Repository

    Younis, Sohaib; Weiland, Claus; Hoehndorf, Robert; Dressler, Stefan; Hickler, Thomas; Seeger, Bernhard; Schmidt, Marco

    2018-01-01

    Herbaria worldwide are housing a treasure of hundreds of millions of herbarium specimens, which are increasingly being digitized and thereby more accessible to the scientific community. At the same time, deep-learning algorithms are rapidly improving pattern recognition from images and these techniques are more and more being applied to biological objects. In this study, we are using digital images of herbarium specimens in order to identify taxa and traits of these collection objects by applying convolutional neural networks (CNN). Images of the 1000 species most frequently documented by herbarium specimens on GBIF have been downloaded and combined with morphological trait data, preprocessed and divided into training and test datasets for species and trait recognition. Good performance in both domains suggests substantial potential of this approach for supporting taxonomy and natural history collection management. Trait recognition is also promising for applications in functional ecology.
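    The dataset-preparation step the abstract mentions (dividing labelled specimen images into training and test sets per species) can be sketched as follows; this is a hypothetical illustration, not the authors' pipeline, and the image identifiers and species names are invented.

    ```python
    # Hypothetical sketch: stratified train/test split of specimen images,
    # keeping each species represented in both sets.
    import random

    def stratified_split(items, test_frac=0.2, seed=0):
        """items: list of (image_id, species). Returns (train, test) lists."""
        rng = random.Random(seed)
        by_species = {}
        for img, sp in items:
            by_species.setdefault(sp, []).append(img)
        train, test = [], []
        for sp, imgs in sorted(by_species.items()):
            rng.shuffle(imgs)
            k = max(1, int(len(imgs) * test_frac))  # at least one test image
            test += [(i, sp) for i in imgs[:k]]
            train += [(i, sp) for i in imgs[k:]]
        return train, test
    ```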

  1. [Face recognition in patients with autism spectrum disorders].

    Science.gov (United States)

    Kita, Yosuke; Inagaki, Masumi

    2012-07-01

    The present study aimed to review previous research on face recognition in patients with autism spectrum disorders (ASD). Face recognition is a key question in the ASD research field because it can provide clues for elucidating the neural substrates responsible for the social impairment of these patients. Historically, behavioral studies have reported low performance and/or unique strategies of face recognition among ASD patients. However, depending on the experimental situation or developmental stage, the performance and strategies of ASD patients can be comparable to those of control groups, suggesting that face recognition in ASD is not entirely impaired. Recent brain function studies, including event-related potential and functional magnetic resonance imaging studies, have investigated the cognitive process of face recognition in ASD patients, and revealed impaired function in the brain's neural network comprising the fusiform gyrus and amygdala. This impaired function is potentially involved in the diminished preference for faces and in the atypical development of face recognition, eliciting the unstable behavioral characteristics observed in these patients. Additionally, face recognition in ASD patients has been examined from other perspectives, namely self-face recognition and facial emotion recognition. While the former topic is intimately linked to basic social abilities such as self-other discrimination, the latter is closely associated with mentalizing. Further research on face recognition in ASD patients should investigate the connection between behavioral and neurological specifics in these patients, by considering developmental changes and the clinical spectrum of ASD.

  2. Greater epitope recognition of shrimp allergens by children than by adults suggests that shrimp sensitization decreases with age.

    Science.gov (United States)

    Ayuso, Rosalía; Sánchez-Garcia, Silvia; Lin, Jing; Fu, Zhiyan; Ibáñez, María Dolores; Carrillo, Teresa; Blanco, Carlos; Goldis, Marina; Bardina, Ludmila; Sastre, Joaquín; Sampson, Hugh A

    2010-06-01

    Shellfish allergy is a long-lasting disorder typically affecting adults. Despite its high prevalence, there is limited information about allergenic shrimp proteins and the epitopes implicated in such allergic reactions. We sought to identify the IgE-binding epitopes of the 4 shrimp allergens and to characterize epitope recognition profiles of children and adults with shrimp allergy. Fifty-three subjects, 34 children and 19 adults, were selected with immediate allergic reactions to shrimp, increased shrimp-specific serum IgE levels, and positive immunoblot binding to shrimp. Study subjects and 7 nonatopic control subjects were tested by means of peptide microarray for IgE binding with synthetic overlapping peptides spanning the sequences of Litopenaeus vannamei shrimp tropomyosin, arginine kinase (AK), myosin light chain (MLC), and sarcoplasmic calcium-binding protein (SCP). The Wilcoxon test was used to determine significant differences in z scores between patients and control subjects. The median shrimp IgE level was 4-fold higher in children than in adults (47 vs 12.5 kU(A)/L). The frequency of allergen recognition was higher in children (tropomyosin, 81% [94% for children and 61% for adults]; MLC, 57% [70% for children and 31% for adults]; AK, 51% [67% for children and 21% for adults]; and SCP, 45% [59% for children and 21% for adults]), whereas control subjects showed negligible binding. Seven IgE-binding regions were identified in tropomyosin by means of peptide microarray, confirming previously identified shrimp epitopes. In addition, 3 new epitopes were identified in tropomyosin (epitopes 1, 3, and 5b-c), 5 epitopes were identified in MLC, 3 epitopes were identified in SCP, and 7 epitopes were identified in AK. Interestingly, frequency of individual epitope recognition, as well as intensity of IgE binding, was significantly greater in children than in adults for all 4 proteins. Children with shrimp allergy have greater shrimp-specific IgE antibody levels and

  3. Cross-cultural differences in the neural correlates of specific and general recognition.

    Science.gov (United States)

    Paige, Laura E; Ksander, John C; Johndro, Hunter A; Gutchess, Angela H

    2017-06-01

    Research suggests that culture influences how people perceive the world, which extends to memory specificity, or how much perceptual detail is remembered. The present study investigated cross-cultural differences (Americans vs East Asians) at the time of encoding in the neural correlates of specific versus general memory formation. Participants encoded photos of everyday items in the scanner and 48 h later completed a surprise recognition test. The recognition test consisted of same (i.e., previously seen in scanner), similar (i.e., same name, different features), or new photos (i.e., items not previously seen in scanner). For Americans compared to East Asians, we predicted greater activation in the hippocampus and right fusiform for specific memory at recognition, as these regions were implicated previously in encoding perceptual details. Results revealed that East Asians activated the left fusiform and left hippocampus more than Americans for specific versus general memory. Follow-up analyses ruled out alternative explanations of retrieval difficulty and familiarity for this pattern of cross-cultural differences at encoding. Results overall suggest that culture should be considered as another individual difference that affects memory specificity and modulates neural regions underlying these processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Episodic Short-Term Recognition Requires Encoding into Visual Working Memory: Evidence from Probe Recognition after Letter Report.

    Science.gov (United States)

    Poth, Christian H; Schneider, Werner X

    2016-01-01

    Human vision is organized in discrete processing episodes (e.g., eye fixations or task-steps). Object information must be transmitted across episodes to enable episodic short-term recognition: recognizing whether a current object has been seen in a previous episode. We ask whether episodic short-term recognition presupposes that objects have been encoded into capacity-limited visual working memory (VWM), which retains visual information for report. Alternatively, it could rely on the activation of visual features or categories that occurs before encoding into VWM. We assessed the dependence of episodic short-term recognition on VWM by a new paradigm combining letter report and probe recognition. Participants viewed displays of 10 letters and reported as many as possible after a retention interval (whole report). Next, participants viewed a probe letter and indicated whether it had been one of the 10 letters (probe recognition). In Experiment 1, probe recognition was more accurate for letters that had been encoded into VWM (reported letters) compared with non-encoded letters (non-reported letters). Interestingly, those letters that participants reported in their whole report had been near to one another within the letter displays. This suggests that the encoding into VWM proceeded in a spatially clustered manner. In Experiment 2, participants reported only one of 10 letters (partial report) and probes either referred to this letter, to letters that had been near to it, or far from it. Probe recognition was more accurate for near than for far letters, although none of these letters had to be reported. These findings indicate that episodic short-term recognition is constrained to a small number of simultaneously presented objects that have been encoded into VWM.

  5. Episodic Short-Term Recognition Requires Encoding into Visual Working Memory: Evidence from Probe Recognition after Letter Report

    Directory of Open Access Journals (Sweden)

    Christian H. Poth

    2016-09-01

    Human vision is organized in discrete processing episodes (e.g., eye fixations or task-steps). Object information must be transmitted across episodes to enable episodic short-term recognition: recognizing whether a current object has been seen in a previous episode. We ask whether episodic short-term recognition presupposes that objects have been encoded into capacity-limited visual working memory (VWM), which retains visual information for report. Alternatively, it could rely on the activation of visual features or categories that occurs before encoding into VWM. We assessed the dependence of episodic short-term recognition on VWM by a new paradigm combining letter report and probe recognition. Participants viewed displays of ten letters and reported as many as possible after a retention interval (whole report). Next, participants viewed a probe letter and indicated whether it had been one of the ten letters (probe recognition). In Experiment 1, probe recognition was more accurate for letters that had been encoded into VWM (reported letters) compared with non-encoded letters (non-reported letters). Interestingly, those letters that participants reported in their whole report had been near to one another within the letter displays. This suggests that the encoding into VWM proceeded in a spatially clustered manner. In Experiment 2, participants reported only one of ten letters (partial report) and probes either referred to this letter, to letters that had been near to it, or far from it. Probe recognition was more accurate for near than for far letters, although none of these letters had to be reported. These findings indicate that episodic short-term recognition is constrained to a small number of simultaneously presented objects that have been encoded into VWM.

  6. Cross-modal individual recognition in wild African lions.

    Science.gov (United States)

    Gilfillan, Geoffrey; Vitale, Jessica; McNutt, John Weldon; McComb, Karen

    2016-08-01

    Individual recognition is considered to have been fundamental in the evolution of complex social systems and is thought to be a widespread ability throughout the animal kingdom. Although robust evidence for individual recognition remains limited, recent experimental paradigms that examine cross-modal processing have demonstrated individual recognition in a range of captive non-human animals. It is now highly relevant to test whether cross-modal individual recognition exists within wild populations and thus examine how it is employed during natural social interactions. We address this question by testing audio-visual cross-modal individual recognition in wild African lions (Panthera leo) using an expectancy-violation paradigm. When presented with a scenario where the playback of a loud-call (roaring) broadcast from behind a visual block is incongruent with the conspecific previously seen there, subjects responded more strongly than during the congruent scenario where the call and individual matched. These findings suggest that lions are capable of audio-visual cross-modal individual recognition and provide a useful method for studying this ability in wild populations. © 2016 The Author(s).

  7. An adaptive deep Q-learning strategy for handwritten digit recognition.

    Science.gov (United States)

    Qiao, Junfei; Wang, Gongming; Li, Wenjing; Chen, Min

    2018-02-22

    Handwritten digit recognition has been a challenging problem in recent years. Although many deep learning-based classification algorithms have been studied for handwritten digit recognition, recognition accuracy and running time still need to be further improved. In this paper, an adaptive deep Q-learning strategy is proposed to improve accuracy and shorten running time for handwritten digit recognition. The adaptive deep Q-learning strategy combines the feature-extracting capability of deep learning and the decision-making of reinforcement learning to form an adaptive Q-learning deep belief network (Q-ADBN). First, Q-ADBN extracts the features of original images using an adaptive deep auto-encoder (ADAE), and the extracted features are considered as the current states of the Q-learning algorithm. Second, Q-ADBN receives a Q-function (reward signal) during recognition of the current states, and the final handwritten digit recognition is implemented by maximizing the Q-function using the Q-learning algorithm. Finally, experimental results on the well-known MNIST dataset show that the proposed Q-ADBN is superior to other similar methods in terms of accuracy and running time. Copyright © 2018 Elsevier Ltd. All rights reserved.
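The decision-making component can be illustrated with a minimal tabular Q-learning step that frames recognition as a one-step episode: the state is an extracted (hashable) feature vector, the actions are digit labels, and the reward signals correctness. This is a generic textbook sketch, not the Q-ADBN architecture; `reward_fn` and the state encoding are assumptions for illustration. The discount factor is omitted because each recognition episode is terminal after one decision.

```python
import random
from collections import defaultdict

def q_learning_classifier_step(Q, state, n_classes, reward_fn,
                               alpha=0.1, epsilon=0.1):
    """One Q-learning step for classification as a one-step decision.
    Q: defaultdict(float) mapping (state, action) -> estimated value."""
    if random.random() < epsilon:
        action = random.randrange(n_classes)                      # explore
    else:
        action = max(range(n_classes), key=lambda a: Q[(state, a)])  # exploit
    r = reward_fn(state, action)
    # terminal one-step update: move Q toward the observed reward
    Q[(state, action)] += alpha * (r - Q[(state, action)])
    return action, r
```

Repeating this step over labeled examples drives `Q[(state, correct_label)]` upward, so greedy action selection converges to the correct label for seen states.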

  8. Bihippocampal damage with emotional dysfunction: impaired auditory recognition of fear.

    Science.gov (United States)

    Ghika-Schmid, F; Ghika, J; Vuilleumier, P; Assal, G; Vuadens, P; Scherer, K; Maeder, P; Uske, A; Bogousslavsky, J

    1997-01-01

    A right-handed man developed a sudden, transient amnestic syndrome associated with bilateral hemorrhage of the hippocampi, probably due to Urbach-Wiethe disease. In the 3rd month, despite significant hippocampal structural damage on imaging, only a milder degree of retrograde and anterograde amnesia persisted on detailed neuropsychological examination. On systematic testing of recognition of facial and vocal expressions of emotion, we found an impairment of the vocal perception of fear, but not of other emotions such as joy, sadness, and anger. This selective impairment of fear perception was not present in the recognition of facial expressions of emotion. Thus emotional perception varies according to the different aspects of emotions and the modality of presentation (faces versus voices). This is consistent with the idea that there may be multiple emotion systems. The study of emotional perception in this unique case of bilateral involvement of the hippocampus suggests that this structure may play a critical role in the recognition of fear in vocal expression, possibly dissociated from that of other emotions and from that of fear in facial expression. In light of recent data suggesting that the amygdala plays a role in the recognition of fear in both the auditory and the visual modality, this could indicate that the hippocampus is part of the auditory pathway of fear recognition.

  9. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias.

    Science.gov (United States)

    Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina

    2017-01-01

    Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 event-related potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main finding of the study was that musicians' behavioral responses and N170 components were more affected by the emotional value of the music administered in the emotional go/no-go task, and this bias was also apparent in responses to the non-target emotional faces. This suggests that emotional information coming from multiple sensory channels activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.

  10. Brain dynamics of upstream perceptual processes leading to visual object recognition: a high density ERP topographic mapping study.

    Science.gov (United States)

    Schettino, Antonio; Loeys, Tom; Delplanque, Sylvain; Pourtois, Gilles

    2011-04-01

    Recent studies suggest that visual object recognition is a proactive process through which perceptual evidence accumulates over time before a decision can be made about the object. However, the exact electrophysiological correlates and time-course of this complex process remain unclear. In addition, the potential influence of emotion on this process has not been investigated yet. We recorded high density EEG in healthy adult participants performing a novel perceptual recognition task. For each trial, an initial blurred visual scene was first shown, before the actual content of the stimulus was gradually revealed by progressively adding diagnostic high spatial frequency information. Participants were asked to stop this stimulus sequence as soon as they could correctly perform an animacy judgment task. Behavioral results showed that participants reliably gathered perceptual evidence before recognition. Furthermore, prolonged exploration times were observed for pleasant, relative to either neutral or unpleasant scenes. ERP results showed distinct effects starting at 280 ms post-stimulus onset in distant brain regions during stimulus processing, mainly characterized by: (i) a monotonic accumulation of evidence, involving regions of the posterior cingulate cortex/parahippocampal gyrus, and (ii) true categorical recognition effects in medial frontal regions, including the dorsal anterior cingulate cortex. These findings provide evidence for the early involvement, following stimulus onset, of non-overlapping brain networks during proactive processes eventually leading to visual object recognition. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Hybrid methodological approach to context-dependent speech recognition

    Directory of Open Access Journals (Sweden)

    Dragiša Mišković

    2017-01-01

    Although the importance of contextual information in speech recognition has been acknowledged for a long time now, it has remained clearly underutilized even in state-of-the-art speech recognition systems. This article introduces a novel, methodologically hybrid approach to the research question of context-dependent speech recognition in human–machine interaction. To the extent that it is hybrid, the approach integrates aspects of both statistical and representational paradigms. We extend the standard statistical pattern-matching approach with a cognitively inspired and analytically tractable model with explanatory power. This methodological extension allows for accounting for contextual information which is otherwise unavailable in speech recognition systems, and using it to improve post-processing of recognition hypotheses. The article introduces an algorithm for evaluation of recognition hypotheses, illustrates it for concrete interaction domains, and discusses its implementation within two prototype conversational agents.
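The post-processing idea, using contextual information to re-evaluate recognition hypotheses, can be sketched as a weighted re-ranking of an N-best list. This is a generic illustration under assumed names (`context_score`, `weight`), not the article's actual algorithm:

```python
def rerank_hypotheses(nbest, context_score, weight=0.5):
    """nbest: list of (hypothesis, acoustic_score) pairs, higher is better.
    context_score: maps a hypothesis string to a score in [0, 1] from a
    (hypothetical) context model. Returns hypotheses re-ranked by a
    weighted combination of acoustic and contextual evidence."""
    combined = [(h, (1 - weight) * s + weight * context_score(h))
                for h, s in nbest]
    combined.sort(key=lambda x: x[1], reverse=True)
    return [h for h, _ in combined]
```

For example, a context model aware that the dialogue concerns home lighting can promote "turn on the lights" above an acoustically higher-scoring but contextually implausible competitor.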

  12. Action recognition using mined hierarchical compound features.

    Science.gov (United States)

    Gilbert, Andrew; Illingworth, John; Bowden, Richard

    2011-05-01

    The field of Action Recognition has seen a large increase in activity in recent years. Much of the progress has been through incorporating ideas from single-frame object recognition and adapting them for temporal-based action recognition. Inspired by the success of interest points in the 2D spatial domain, their 3D (space-time) counterparts typically form the basic components used to describe actions, and in action recognition the features used are often engineered to fire sparsely. This is to ensure that the problem is tractable; however, this can sacrifice recognition accuracy as it cannot be assumed that the optimum features in terms of class discrimination are obtained from this approach. In contrast, we propose to initially use an overcomplete set of simple 2D corners in both space and time. These are grouped spatially and temporally using a hierarchical process, with an increasing search area. At each stage of the hierarchy, the most distinctive and descriptive features are learned efficiently through data mining. This allows large amounts of data to be searched for frequently reoccurring patterns of features. At each level of the hierarchy, the mined compound features become more complex, discriminative, and sparse. This results in fast, accurate recognition with real-time performance on high-resolution video. As the compound features are constructed and selected based upon their ability to discriminate, their speed and accuracy increase at each level of the hierarchy. The approach is tested on four state-of-the-art data sets, the popular KTH data set to provide a comparison with other state-of-the-art approaches, the Multi-KTH data set to illustrate performance at simultaneous multiaction classification, despite no explicit localization information provided during training. Finally, the recent Hollywood and Hollywood2 data sets provide challenging complex actions taken from commercial movie sequences. For all four data sets, the proposed hierarchical

  13. Action Recognition by Joint Spatial-Temporal Motion Feature

    Directory of Open Access Journals (Sweden)

    Weihua Zhang

    2013-01-01

    This paper introduces a method for human action recognition based on optical flow motion feature extraction. Automatic spatial and temporal alignments are combined in order to encourage temporal consistence of each action by an enhanced dynamic time warping (DTW) algorithm. At the same time, a fast method based on a coarse-to-fine DTW constraint is introduced to improve computational performance without reducing accuracy. The main contributions of this study include (1) a joint spatial-temporal multiresolution optical flow computation method which can encode more informative motion information than recently proposed methods, (2) an enhanced DTW method to improve the temporal consistence of motion in action recognition, and (3) a coarse-to-fine DTW constraint on motion feature pyramids to speed up recognition. Using this method, high recognition accuracy is achieved on different action databases such as the Weizmann and KTH databases.
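As background, the core dynamic programming recurrence of standard DTW, on which enhanced and coarse-to-fine variants build, looks like this. It is a plain textbook sketch for 1-D sequences, not the paper's multiresolution implementation:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences:
    minimal cumulative |a[i]-b[j]| cost over all monotone alignments."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]
```

A coarse-to-fine constraint limits the `(i, j)` cells visited to a band around the path found at a lower resolution, cutting the quadratic cost of this table.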

  14. Stein and Honneth on Empathy and Emotional Recognition

    DEFF Research Database (Denmark)

    Jardine, James Alexander

    2015-01-01

    My aim in this paper is to make use of Edith Stein’s phenomenological analyses of empathy, emotion, and personhood to clarify and critically assess the recent suggestion by Axel Honneth that a basic form of recognition is affective in nature. I will begin by considering Honneth’s own presentation of this claim in his discussion of the role of affect in recognitive gestures, as well as in his notion of ‘elementary recognition,’ arguing that while his account contains much of value it also generates problems. On the basis of this analysis, I will try to show that Stein’s account of empathy demarcates an elementary form of recognition in a less problematic fashion than does Honneth’s own treatment of this issue. I will then spell out the consequences of this move for the emotional recognition thesis, arguing that Stein’s treatment lends it further credence, before ending with some remarks on the connection...

  15. Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements

    Science.gov (United States)

    Hadar, Britt; Skrzypek, Joshua E.; Wingfield, Arthur; Ben-David, Boaz M.

    2016-01-01

    In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the “visual world” eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., “point at the candle”). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. PMID:27242424

  16. Graded Mirror Self-Recognition by Clark's Nutcrackers.

    Science.gov (United States)

    Clary, Dawson; Kelly, Debbie M

    2016-11-04

    The traditional 'mark test' has shown some large-brained species are capable of mirror self-recognition. During this test a mark is inconspicuously placed on an animal's body where it can only be seen with the aid of a mirror. If the animal increases the number of actions directed to the mark region when presented with a mirror, the animal is presumed to have recognized the mirror image as its reflection. However, the pass/fail nature of the mark test presupposes self-recognition exists in entirety or not at all. We developed a novel mirror-recognition task, to supplement the mark test, which revealed gradation in the self-recognition of Clark's nutcrackers, a large-brained corvid. To do so, nutcrackers cached food alone, observed by another nutcracker, or with a regular or blurry mirror. The nutcrackers suppressed caching with a regular mirror, a behavioural response to prevent cache theft by conspecifics, but did not suppress caching with a blurry mirror. Likewise, during the mark test, most nutcrackers made more self-directed actions to the mark with a blurry mirror than a regular mirror. Both results suggest self-recognition was more readily achieved with the blurry mirror and that self-recognition may be more broadly present among animals than currently thought.

  17. Oxytocin improves emotion recognition for older males.

    Science.gov (United States)

    Campbell, Anna; Ruffman, Ted; Murray, Janice E; Glue, Paul

    2014-10-01

    Older adults (≥60 years) perform worse than young adults (18-30 years) when recognizing facial expressions of emotion. The hypothesized cause of these changes might be declines in neurotransmitters that could affect information processing within the brain. In the present study, we examined the neuropeptide oxytocin that functions to increase neurotransmission. Research suggests that oxytocin benefits the emotion recognition of less socially able individuals. Men tend to have lower levels of oxytocin and older men tend to have worse emotion recognition than older women; therefore, there is reason to think that older men will be particularly likely to benefit from oxytocin. We examined this idea using a double-blind design, testing 68 older and 68 young adults randomly allocated to receive oxytocin nasal spray (20 international units) or placebo. Forty-five minutes afterward they completed an emotion recognition task assessing labeling accuracy for angry, disgusted, fearful, happy, neutral, and sad faces. Older males receiving oxytocin showed improved emotion recognition relative to those taking placebo. No differences were found for older females or young adults. We hypothesize that oxytocin facilitates emotion recognition by improving neurotransmission in the group with the worst emotion recognition. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Forecasting elections with mere recognition from small, lousy samples: A comparison of collective recognition, wisdom of crowds, and representative polls

    Directory of Open Access Journals (Sweden)

    Wolfgang Gaissmeier

    2011-02-01

    We investigated the extent to which the human capacity for recognition helps to forecast political elections: We compared naive recognition-based election forecasts computed from convenience samples of citizens' recognition of party names to (i) standard polling forecasts computed from representative samples of citizens' voting intentions, and to (ii) simple, and typically very accurate, wisdom-of-crowds forecasts computed from the same convenience samples of citizens' aggregated hunches about election results. Results from four major German elections show that mere recognition of party names forecast the parties' electoral success fairly well. Recognition-based forecasts were most competitive with the other models when forecasting the smaller parties' success and for small sample sizes. However, wisdom-of-crowds forecasts outperformed recognition-based forecasts in most cases. It seems that wisdom-of-crowds forecasts are able to draw on the benefits of recognition while at the same time avoiding its downsides, such as lack of discrimination among very famous parties or recognition caused by factors unrelated to electoral success. Yet it seems that a simple extension of the recognition-based forecasts, asking people what proportion of the population would recognize a party instead of whether they themselves recognize it, is also able to eliminate these downsides.
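A collective-recognition forecast of the kind described amounts to ranking parties by the share of a convenience sample that recognizes each name. A minimal sketch, with the input format assumed for illustration:

```python
def recognition_forecast(recognition):
    """recognition: dict mapping party name -> list of 0/1 recognition
    judgments from a convenience sample. Returns party names ranked by
    the share of respondents who recognize each name."""
    share = {party: sum(votes) / len(votes)
             for party, votes in recognition.items()}
    return sorted(share, key=share.get, reverse=True)
```

Note the limitation the abstract mentions: once every respondent recognizes the major parties, their shares saturate at 1.0 and the ranking can no longer discriminate among them.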

  19. USE OF IMAGE ENHANCEMENT TECHNIQUES FOR IMPROVING REAL TIME FACE RECOGNITION EFFICIENCY ON WEARABLE GADGETS

    Directory of Open Access Journals (Sweden)

    MUHAMMAD EHSAN RANA

    2017-01-01

    The objective of this research is to study the effects of image enhancement techniques on the face recognition performance of wearable gadgets, with an emphasis on recognition rate. In this research, a number of image enhancement techniques are selected, including brightness normalization, contrast normalization, sharpening, smoothing, and various combinations of these. Subsequently, test images obtained from the AT&T database and the Yale Face Database B are used to investigate the effect of these image enhancement techniques under various conditions, such as changes of illumination and of face orientation and expression. The evaluation of the data collected during this research revealed that the effect of image pre-processing techniques on face recognition depends highly on the illumination conditions under which the images are taken. The benefit of applying image enhancement techniques to face images is greatest when there is high variation of illumination among images. Results also indicate that the highest recognition rate is achieved when images are taken under low-light conditions, image contrast is enhanced using the histogram equalization technique, and image noise is then reduced using a median smoothing filter. Additionally, the combination of contrast normalization and a mean smoothing filter shows good results in all scenarios. Results obtained from the test cases illustrate up to 75% improvement in face recognition rate when image enhancement is applied in the given scenarios.
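The histogram equalization step credited with the best results can be sketched in pure Python for a flat list of grayscale values. This is the standard CDF-based mapping, not the authors' exact implementation:

```python
def equalize_histogram(pixels, levels=256):
    """Contrast-normalize a flat list of grayscale values in
    [0, levels-1] by mapping each value through the scaled CDF."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution over intensity levels
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-empty bin
    # standard equalization lookup table, clamped to valid range
    lut = [max(0, round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1)))
           for c in cdf]
    return [lut[p] for p in pixels]
```

A narrow band of intensities (e.g., a dimly lit face image) is stretched to cover the full range, which is why this step helps most under the low-light condition reported above.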

  20. The role of the hippocampus in recognition memory.

    Science.gov (United States)

    Bird, Chris M

    2017-08-01

    Many theories of declarative memory propose that it is supported by partially separable processes underpinned by different brain structures. The hippocampus plays a critical role in binding item and contextual information together and in processing the relationships between individual items. By contrast, the processing of individual items and their later recognition can be supported by extrahippocampal regions of the medial temporal lobes (MTL), particularly when recognition is based on feelings of familiarity without the retrieval of any associated information. These theories are domain-general in that "items" might be words, faces, objects, scenes, etc. However, there is mixed evidence that item recognition does not require the hippocampus, or that familiarity-based recognition can be supported by extrahippocampal regions. By contrast, there is compelling evidence that in humans, hippocampal damage does not affect recognition memory for unfamiliar faces, whilst recognition memory for several other stimulus classes is impaired. I propose that regions outside of the hippocampus can support recognition of unfamiliar faces because they are perceived as discrete items and have no prior conceptual associations. Conversely, extrahippocampal processes are inadequate for recognition of items which (a) have been previously experienced, (b) are conceptually meaningful, or (c) are perceived as being comprised of individual elements. This account reconciles findings from primate and human studies of recognition memory. Furthermore, it suggests that while the hippocampus is critical for binding and relational processing, these processes are required for item recognition memory in most situations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Exposure to childhood adversity and deficits in emotion recognition: results from a large, population-based sample.

    Science.gov (United States)

    Dunn, Erin C; Crawford, Katherine M; Soare, Thomas W; Button, Katherine S; Raffeld, Miriam R; Smith, Andrew D A C; Penton-Voak, Ian S; Munafò, Marcus R

    2018-03-07

    Emotion recognition skills are essential for social communication. Deficits in these skills have been implicated in mental disorders. Prior studies of clinical and high-risk samples have consistently shown that children exposed to adversity are more likely than their unexposed peers to have emotion recognition skills deficits. However, only one population-based study has examined this association. We analyzed data from children participating in the Avon Longitudinal Study of Parents and Children, a prospective birth cohort (n = 6,506). We examined the association between eight adversities, assessed repeatedly from birth to age 8 (caregiver physical or emotional abuse; sexual or physical abuse; maternal psychopathology; one adult in the household; family instability; financial stress; parent legal problems; neighborhood disadvantage) and the ability to recognize facial displays of emotion measured using the faces subtest of the Diagnostic Assessment of Non-Verbal Accuracy (DANVA) at age 8.5 years. In addition to examining the role of exposure (vs. nonexposure) to each type of adversity, we also evaluated the role of the timing, duration, and recency of each adversity using a Least Angle Regression variable selection procedure. Over three-quarters of the sample experienced at least one adversity. We found no evidence to support an association between emotion recognition deficits and previous exposure to adversity, either in terms of total lifetime exposure, timing, duration, or recency, or when stratifying by sex. Results from the largest population-based sample suggest that even extreme forms of adversity are unrelated to emotion recognition deficits as measured by the DANVA, suggesting the possible immutability of emotion recognition in the general population. These findings emphasize the importance of population-based studies to generate generalizable results. © 2018 Association for Child and Adolescent Mental Health.

  2. Intact suppression of increased false recognition in schizophrenia.

    Science.gov (United States)

    Weiss, Anthony P; Dodson, Chad S; Goff, Donald C; Schacter, Daniel L; Heckers, Stephan

    2002-09-01

    Recognition memory is impaired in patients with schizophrenia, as they rely largely on item familiarity, rather than conscious recollection, to make mnemonic decisions. False recognition of novel items (foils) is increased in schizophrenia and may relate to this deficit in conscious recollection. By studying pictures of the target word during encoding, healthy adults can suppress false recognition. This study examined the effect of pictorial encoding on subsequent recognition of repeated foils in patients with schizophrenia. The study included 40 patients with schizophrenia and 32 healthy comparison subjects. After incidental encoding of 60 words or pictures, subjects were tested for recognition of target items intermixed with 60 new foils. These new foils were subsequently repeated following either a two- or 24-word delay. Subjects were instructed to label these repeated foils as new and not to mistake them for old target words. Schizophrenic patients showed greater overall false recognition of repeated foils. The rate of false recognition of repeated foils was lower after picture encoding than after word encoding. Despite higher levels of false recognition of repeated new items, patients and comparison subjects demonstrated a similar degree of false recognition suppression after picture, as compared to word, encoding. Patients with schizophrenia displayed greater false recognition of repeated foils than comparison subjects, suggesting both a decrement of item- (or source-) specific recollection and a consequent reliance on familiarity in schizophrenia. Despite these deficits, presenting pictorial information at encoding allowed schizophrenic subjects to suppress false recognition to a similar degree as the comparison group, implying the intact use of a high-level cognitive strategy in this population.

  3. Development of visuo-haptic transfer for object recognition in typical preschool and school-aged children.

    Science.gov (United States)

    Purpura, Giulia; Cioni, Giovanni; Tinelli, Francesca

    2018-07-01

    Object recognition is a long and complex adaptive process and its full maturation requires combination of many different sensory experiences as well as cognitive abilities to manipulate previous experiences in order to develop new percepts and subsequently to learn from the environment. It is well recognized that the transfer of visual and haptic information facilitates object recognition in adults, but less is known about development of this ability. In this study, we explored the developmental course of object recognition capacity in children using unimodal visual information, unimodal haptic information, and visuo-haptic information transfer in children from 4 years to 10 years and 11 months of age. Participants were tested through a clinical protocol, involving visual exploration of black-and-white photographs of common objects, haptic exploration of real objects, and visuo-haptic transfer of these two types of information. Results show an age-dependent development of object recognition abilities for visual, haptic, and visuo-haptic modalities. A significant effect of time on development of unimodal and crossmodal recognition skills was found. Moreover, our data suggest that multisensory processes for common object recognition are active at 4 years of age. They facilitate recognition of common objects, and, although not fully mature, are significant in adaptive behavior from the first years of age. The study of typical development of visuo-haptic processes in childhood is a starting point for future studies regarding object recognition in impaired populations.

  4. Exploring Cultural Differences in the Recognition of the Self-Conscious Emotions.

    Directory of Open Access Journals (Sweden)

    Joanne M Chung

    Full Text Available Recent research suggests that the self-conscious emotions of embarrassment, shame, and pride have distinct, nonverbal expressions that can be recognized in the United States at above-chance levels. However, few studies have examined the recognition of these emotions in other cultures, and little research has been conducted in Asia. Consequently the cross-cultural generalizability of self-conscious emotions has not been firmly established. Additionally, there is no research that examines cultural variability in the recognition of the self-conscious emotions. Cultural values and exposure to Western culture have been identified as contributors to variability in recognition rates for the basic emotions; we sought to examine this for the self-conscious emotions using the University of California, Davis Set of Emotion Expressions (UCDSEE). The present research examined recognition of the self-conscious emotion expressions in South Korean college students and found that recognition rates were very high for pride, low but above chance for shame, and near zero for embarrassment. To examine what might be underlying the recognition rates we found in South Korea, recognition of self-conscious emotions and several cultural values were examined in a U.S. college student sample of European Americans, Asian Americans, and Asian-born individuals. Emotion recognition rates were generally similar between the European Americans and Asian Americans, and higher than emotion recognition rates for Asian-born individuals. These differences were not explained by cultural values in an interpretable manner, suggesting that exposure to Western culture is a more important mediator than values.

  6. Exploring Cultural Differences in the Recognition of the Self-Conscious Emotions

    Science.gov (United States)

    Chung, Joanne M.; Robins, Richard W.

    2015-01-01

    Recent research suggests that the self-conscious emotions of embarrassment, shame, and pride have distinct, nonverbal expressions that can be recognized in the United States at above-chance levels. However, few studies have examined the recognition of these emotions in other cultures, and little research has been conducted in Asia. Consequently the cross-cultural generalizability of self-conscious emotions has not been firmly established. Additionally, there is no research that examines cultural variability in the recognition of the self-conscious emotions. Cultural values and exposure to Western culture have been identified as contributors to variability in recognition rates for the basic emotions; we sought to examine this for the self-conscious emotions using the University of California, Davis Set of Emotion Expressions (UCDSEE). The present research examined recognition of the self-conscious emotion expressions in South Korean college students and found that recognition rates were very high for pride, low but above chance for shame, and near zero for embarrassment. To examine what might be underlying the recognition rates we found in South Korea, recognition of self-conscious emotions and several cultural values were examined in a U.S. college student sample of European Americans, Asian Americans, and Asian-born individuals. Emotion recognition rates were generally similar between the European Americans and Asian Americans, and higher than emotion recognition rates for Asian-born individuals. These differences were not explained by cultural values in an interpretable manner, suggesting that exposure to Western culture is a more important mediator than values. PMID:26309215

  7. Compact holographic optical neural network system for real-time pattern recognition

    Science.gov (United States)

    Lu, Taiwei; Mintzer, David T.; Kostrzewski, Andrew A.; Lin, Freddie S.

    1996-08-01

    One of the important characteristics of artificial neural networks is their capability for massive interconnection and parallel processing. Recently, specialized electronic neural network processors and VLSI neural chips have been introduced in the commercial market. The number of parallel channels they can handle is limited because of the limited parallel interconnections that can be implemented with 1D electronic wires. High-resolution pattern recognition problems can require a large number of neurons for parallel processing of an image. This paper describes a holographic optical neural network (HONN) that is based on high-resolution volume holographic materials and is capable of performing massive 3D parallel interconnection of tens of thousands of neurons. A HONN with more than 16,000 neurons packaged in an attaché case has been developed. Rotation-shift-scale-invariant pattern recognition operations have been demonstrated with this system. System parameters such as the signal-to-noise ratio, dynamic range, and processing speed are discussed.

  8. Tree-based indexing for real-time ConvNet landmark-based visual place recognition

    Directory of Open Access Journals (Sweden)

    Yi Hou

    2017-01-01

    Full Text Available Recent impressive studies on using ConvNet landmarks for visual place recognition take an approach that involves three steps: (a) detection of landmarks, (b) description of the landmarks by ConvNet features using a convolutional neural network, and (c) matching of the landmarks in the current view with those in the database views. Such an approach has been shown to achieve the state-of-the-art accuracy even under significant viewpoint and environmental changes. However, the computational burden in step (c) significantly prevents this approach from being applied in practice, due to the complexity of linear search in high-dimensional space of the ConvNet features. In this article, we propose two simple and efficient search methods to tackle this issue. Both methods are built upon tree-based indexing. Given a set of ConvNet features of a query image, the first method directly searches the features’ approximate nearest neighbors in a tree structure that is constructed from ConvNet features of database images. The database images are voted on by features in the query image, according to a lookup table which maps each ConvNet feature to its corresponding database image. The database image with the highest vote is considered the solution. Our second method uses a coarse-to-fine procedure: the coarse step uses the first method to coarsely find the top-N database images, and the fine step performs a linear search in Hamming space of the hash codes of the ConvNet features to determine the best match. Experimental results demonstrate that our methods achieve real-time search performance on five data sets with different sizes and various conditions. Most notably, by achieving an average search time of 0.035 seconds/query, our second method improves the matching efficiency by three orders of magnitude over a linear search baseline on a database with 20,688 images, with negligible loss in place recognition accuracy.
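
    The lookup-table-plus-voting logic of the first method can be illustrated with a toy sketch. Here a brute-force linear scan stands in for the article's tree-based approximate nearest-neighbour index, and the names (match_place, feature_to_image) are hypothetical:

```python
from collections import Counter

def match_place(query_features, db_features, feature_to_image):
    """Each query landmark feature votes for the database image that owns its
    nearest database feature; the image with the most votes wins."""
    def nearest(q):
        # A linear scan stands in for the tree-based approximate search.
        return min(range(len(db_features)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(q, db_features[i])))
    votes = Counter(feature_to_image[nearest(q)] for q in query_features)
    return votes.most_common(1)[0][0]
```

    Replacing the nearest() scan with a tree or hashing-based lookup is what turns this quadratic sketch into a real-time method; the voting step itself is unchanged.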

  9. The Neuropsychology of Familiar Person Recognition from Face and Voice

    Directory of Open Access Journals (Sweden)

    Guido Gainotti

    2014-05-01

    Full Text Available Prosopagnosia has been considered for a long period of time as the most important and almost exclusive disorder in the recognition of familiar people. In recent years, however, this conviction has been undermined by the description of patients showing a concomitant defect in the recognition of familiar faces and voices as a consequence of lesions encroaching upon the right anterior temporal lobe (ATL). These new data have obliged researchers to reconsider on one hand the construct of ‘associative prosopagnosia’ and on the other hand current models of people recognition. A systematic review of the patterns of familiar people recognition disorders observed in patients with right and left ATL lesions has shown that in patients with right ATL lesions face familiarity feelings and the retrieval of person-specific semantic information from faces are selectively affected, whereas in patients with left ATL lesions the defect selectively concerns famous people naming. Furthermore, some patients with right ATL lesions and intact face familiarity feelings show a defect in the retrieval of person-specific semantic knowledge greater from face than from name. These data are at variance with current models assuming: (a) that familiarity feelings are generated at the level of person identity nodes (PINs), where information processed by various sensory modalities converges, and (b) that PINs provide a modality-free gateway to a single semantic system, where information about people is stored in an amodal format. They suggest, on the contrary: (a) that familiarity feelings are generated at the level of modality-specific recognition units; (b) that face and voice recognition units are represented more in the right than in the left ATL; and (c) that the right ATL mainly stores person-specific information based on a convergence of perceptual information, whereas the left ATL represents verbally mediated person-specific information.

  10. Sigma A recognition sites in the Bacillus subtilis genome

    DEFF Research Database (Denmark)

    Jarmer, Hanne Østergaard; Larsen, Thomas Schou; Krogh, Anders Stærmose

    2001-01-01

    A hidden Markov model of sigma (A) RNA polymerase cofactor recognition sites in Bacillus subtilis, containing either the common or the extended -10 motifs, has been constructed based on experimentally verified sigma (A) recognition sites. This work suggests that more information exists at the initiation site of transcription in both types of promoters than previously thought. When tested on the entire B. subtilis genome, the model predicts that approximately half of the sigma (A) recognition sites are of the extended type. Some of the response-regulator aspartate phosphatases were among
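
    As a rough illustration of motif scoring (the study itself uses a trained hidden Markov model, which additionally handles the extended motif and variable spacing), a position-weight-matrix scan over a TATAAT-like -10 element might look as follows. All counts here are invented for illustration; a real matrix would be estimated from the experimentally verified sites:

```python
import math

BASES = "ACGT"

# Invented per-position base counts for a 6-bp TATAAT-like motif
# (illustration only, not trained on real promoter data).
SITE_COUNTS = [
    {"A": 2, "C": 1, "G": 1, "T": 16},
    {"A": 16, "C": 1, "G": 1, "T": 2},
    {"A": 2, "C": 1, "G": 1, "T": 16},
    {"A": 16, "C": 1, "G": 1, "T": 2},
    {"A": 16, "C": 1, "G": 1, "T": 2},
    {"A": 2, "C": 1, "G": 1, "T": 16},
]

def log_odds_matrix(counts, background=0.25):
    """Convert counts to per-position log2 odds against a uniform background."""
    return [{b: math.log2((pos[b] / sum(pos.values())) / background) for b in BASES}
            for pos in counts]

def best_site(sequence, matrix):
    """Slide the motif along the sequence; return (best score, best offset)."""
    width = len(matrix)
    return max((sum(matrix[i][sequence[off + i]] for i in range(width)), off)
               for off in range(len(sequence) - width + 1))
```

    A positive total log-odds score means the window resembles the motif more than background sequence does; an HMM generalizes this by chaining such emission probabilities through hidden states.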

  11. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    OpenAIRE

    Francisco Javier Ordóñez; Daniel Roggen

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we pro...

  12. The nature of visual self-recognition.

    Science.gov (United States)

    Suddendorf, Thomas; Butler, David L

    2013-03-01

    Visual self-recognition is often controversially cited as an indicator of self-awareness and assessed with the mirror-mark test. Great apes and humans, unlike small apes and monkeys, have repeatedly passed mirror tests, suggesting that the underlying brain processes are homologous and evolved 14-18 million years ago. However, neuroscientific, developmental, and clinical dissociations show that the medium used for self-recognition (mirror vs photograph vs video) significantly alters behavioral and brain responses, likely due to perceptual differences among the different media and prior experience. On the basis of this evidence and evolutionary considerations, we argue that the visual self-recognition skills evident in humans and great apes are a byproduct of a general capacity to collate representations, and need not index other aspects of self-awareness. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Identity recognition in response to different levels of genetic relatedness in commercial soya bean

    Science.gov (United States)

    Van Acker, Rene; Rajcan, Istvan; Swanton, Clarence J.

    2017-01-01

    Identity recognition systems allow plants to tailor competitive phenotypes in response to the genetic relatedness of neighbours. There is limited evidence for the existence of recognition systems in crop species and whether they operate at a level that would allow for identification of different degrees of relatedness. Here, we test the responses of commercial soya bean cultivars to neighbours of varying genetic relatedness consisting of other commercial cultivars (intraspecific), its wild progenitor Glycine soja, and another leguminous species Phaseolus vulgaris (interspecific). We found, for the first time to our knowledge, that a commercial soya bean cultivar, OAC Wallace, showed identity recognition responses to neighbours at different levels of genetic relatedness. OAC Wallace showed no response when grown with other commercial soya bean cultivars (intra-specific neighbours), showed increased allocation to leaves compared with stems with wild soya beans (highly related wild progenitor species), and increased allocation to leaves compared with stems and roots with white beans (interspecific neighbours). Wild soya bean also responded to identity recognition but these responses involved changes in biomass allocation towards stems instead of leaves suggesting that identity recognition responses are species-specific and consistent with the ecology of the species. In conclusion, elucidating identity recognition in crops may provide further knowledge into mechanisms of crop competition and the relationship between crop density and yield. PMID:28280587

  14. Growing slower and less accurate: adult age differences in time-accuracy functions for recall and recognition from episodic memory.

    Science.gov (United States)

    Verhaeghen, P; Vandenbroucke, A; Dierckx, V

    1998-01-01

    In 2 experiments, time-accuracy curves were derived for recall and recognition from episodic memory for both young and older adults. In Experiment 1, time-accuracy functions were estimated for free list recall and list recall cued by rhyme words or semantic associations for 13 young and 13 older participants. In Experiment 2, time-accuracy functions were estimated for recognition of word lists with or without distractor items and with or without articulatory suppression for 29 young and 30 older participants. In both studies, age differences were found in the asymptote (i.e., the maximum level of performance attainable) and in the rate of approach toward the asymptote (i.e., the steepness of the curve). These two parameters were only modestly correlated. In Experiment 2, it was found that 89% of the age-related variance in the rate of approach and 62% of the age-related variance in the asymptote was explained by perceptual speed. The data point at the existence of 2 distinct effects of aging on episodic memory, namely a dynamic effect (growing slower) and an asymptotic effect (growing less accurate). The absence of Age x Condition interactions in the age-related parameters in either experiment points at the rather general nature of both aging effects.

  15. From Off-line to On-line Handwriting Recognition

    NARCIS (Netherlands)

    Lallican, P.; Viard-Gaudin, C.; Knerr, S.

    2004-01-01

    On-line handwriting includes more information on time order of the writing signal and on the dynamics of the writing process than off-line handwriting. Therefore, on-line recognition systems achieve higher recognition rates. This can be concluded from results reported in the literature, and has been

  16. A new selective developmental deficit: Impaired object recognition with normal face recognition.

    Science.gov (United States)

    Germine, Laura; Cashdollar, Nathan; Düzel, Emrah; Duchaine, Bradley

    2011-05-01

    Studies of developmental deficits in face recognition, or developmental prosopagnosia, have shown that individuals who have not suffered brain damage can show face recognition impairments coupled with normal object recognition (Duchaine and Nakayama, 2005; Duchaine et al., 2006; Nunn et al., 2001). However, no developmental cases with the opposite dissociation - normal face recognition with impaired object recognition - have been reported. The existence of a case of non-face developmental visual agnosia would indicate that the development of normal face recognition mechanisms does not rely on the development of normal object recognition mechanisms. To see whether a developmental variant of non-face visual object agnosia exists, we conducted a series of web-based object and face recognition tests to screen for individuals showing object recognition memory impairments but not face recognition impairments. Through this screening process, we identified AW, an otherwise normal 19-year-old female, who was then tested in the lab on face and object recognition tests. AW's performance was impaired in within-class visual recognition memory across six different visual categories (guns, horses, scenes, tools, doors, and cars). In contrast, she scored normally on seven tests of face recognition, tests of memory for two other object categories (houses and glasses), and tests of recall memory for visual shapes. Testing confirmed that her impairment was not related to a general deficit in lower-level perception, object perception, basic-level recognition, or memory. AW's results provide the first neuropsychological evidence that recognition memory for non-face visual object categories can be selectively impaired in individuals without brain damage or other memory impairment. These results indicate that the development of recognition memory for faces does not depend on intact object recognition memory and provide further evidence for category-specific dissociations in visual

  17. Data-Model Relationship in Text-Independent Speaker Recognition

    Directory of Open Access Journals (Sweden)

    Stapert Robert

    2005-01-01

    Full Text Available Text-independent speaker recognition systems such as those based on Gaussian mixture models (GMMs do not include time sequence information (TSI within the model itself. The level of importance of TSI in speaker recognition is an interesting question and one addressed in this paper. Recent work has shown that the utilisation of higher-level information such as idiolect, pronunciation, and prosodics can be useful in reducing speaker recognition error rates. In accordance with these developments, the aim of this paper is to show that as more data becomes available, the basic GMM can be enhanced by utilising TSI, even in a text-independent mode. This paper presents experimental work incorporating TSI into the conventional GMM. The resulting system, known as the segmental mixture model (SMM, embeds dynamic time warping (DTW into a GMM framework. Results are presented on the 2000-speaker SpeechDat Welsh database which show improved speaker recognition performance with the SMM.
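
    For background, the dynamic time warping component that the SMM embeds can be sketched as the classic DTW recursion. In a real speaker recognition system the per-frame cost would come from GMM log-likelihoods over feature vectors rather than the absolute difference of scalars used in this illustrative version:

```python
def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two 1-D feature sequences."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

    Because the warp can stretch either sequence, a repeated frame costs nothing if its value matches, which is exactly the tolerance to speaking-rate variation that makes DTW useful inside a GMM framework.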

  18. Facial Expression Recognition Based on TensorFlow Platform

    Directory of Open Access Journals (Sweden)

    Xia Xiao-Ling

    2017-01-01

    Full Text Available Facial expression recognition has a wide range of applications in human-machine interaction, pattern recognition, image understanding, machine vision, and other fields. In recent years, it has gradually become a hot research topic. However, different people have different ways of expressing their emotions, and under the influence of brightness, background, and other factors, there are some difficulties in facial expression recognition. In this paper, based on the Inception-v3 model of the TensorFlow platform, we use transfer learning techniques to retrain on a facial expression dataset (the Extended Cohn-Kanade dataset), which maintains recognition accuracy while greatly reducing training time.

  19. A New Profile Shape Matching Stereovision Algorithm for Real-time Human Pose and Hand Gesture Recognition

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2014-02-01

    Full Text Available This paper presents a new profile shape matching stereovision algorithm that is designed to extract 3D information in real time. This algorithm obtains 3D information by matching profile intensity shapes of each corresponding row of the stereo image pair. It detects the corresponding matching patterns of the intensity profile rather than the intensity values of individual pixels or pixels in a small neighbourhood. This approach reduces the effect of the intensity and colour variations caused by lighting differences. As with all real-time vision algorithms, there is always a trade-off between accuracy and processing speed. This algorithm achieves a balance between the two to produce accurate results for real-time applications. To demonstrate its performance, the proposed algorithm is tested for human pose and hand gesture recognition to control a smart phone and an entertainment system.
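
    A drastically simplified version of row-wise matching can illustrate the idea: for each scanline, find the horizontal shift that best aligns the right image's intensity profile with the left one. The sum-of-absolute-differences (SAD) criterion below is a stand-in for the paper's profile shape matching, which compares intensity patterns rather than raw values for robustness to lighting; the function name and parameters are hypothetical:

```python
def row_disparity(left_row, right_row, max_shift=4):
    """Estimate the disparity of one scanline: the shift of the right row's
    intensity profile that best matches the left row's, by SAD."""
    n = len(left_row)
    best = (float("inf"), 0)
    for shift in range(min(max_shift, n - 1) + 1):
        sad = sum(abs(left_row[i] - right_row[i - shift]) for i in range(shift, n))
        best = min(best, (sad / (n - shift), shift))   # normalize by overlap length
    return best[1]
```

    Applied to every row of a rectified stereo pair, the recovered shifts form a disparity map from which depth follows; matching whole-profile shapes instead of individual pixels is what gives the paper's method its tolerance to intensity and colour variation.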

  20. Eye movements during object recognition in visual agnosia.

    Science.gov (United States)

    Charles Leek, E; Patterson, Candy; Paul, Matthew A; Rafal, Robert; Cristino, Filipe

    2012-07-01

    This paper reports the first ever detailed study about eye movement patterns during single object recognition in visual agnosia. Eye movements were recorded in a patient with an integrative agnosic deficit during two recognition tasks: common object naming and novel object recognition memory. The patient showed normal directional biases in saccades and fixation dwell times in both tasks and was as likely as controls to fixate within object bounding contour regardless of recognition accuracy. In contrast, following initial saccades of similar amplitude to controls, the patient showed a bias for short saccades. In object naming, but not in recognition memory, the similarity of the spatial distributions of patient and control fixations was modulated by recognition accuracy. The study provides new evidence about how eye movements can be used to elucidate the functional impairments underlying object recognition deficits. We argue that the results reflect a breakdown in normal functional processes involved in the integration of shape information across object structure during the visual perception of shape. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Stereotypes and prejudice affect the recognition of emotional body postures.

    Science.gov (United States)

    Bijlstra, Gijsbert; Holland, Rob W; Dotsch, Ron; Wigboldus, Daniel H J

    2018-03-26

    Most research on emotion recognition focuses on facial expressions. However, people communicate emotional information through bodily cues as well. Prior research on facial expressions has demonstrated that emotion recognition is modulated by top-down processes. Here, we tested whether this top-down modulation generalizes to the recognition of emotions from body postures. We report three studies demonstrating that stereotypes and prejudice about men and women may affect how fast people classify various emotional body postures. Our results suggest that gender cues activate gender associations, which affect the recognition of emotions from body postures in a top-down fashion. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Facial and prosodic emotion recognition in social anxiety disorder.

    Science.gov (United States)

    Tseng, Huai-Hsuan; Huang, Yu-Lien; Chen, Jian-Ting; Liang, Kuei-Yu; Lin, Chao-Cheng; Chen, Sue-Huei

    2017-07-01

    Patients with social anxiety disorder (SAD) have a cognitive preference to negatively evaluate emotional information. In particular, the preferential biases in prosodic emotion recognition in SAD have been much less explored. The present study aims to investigate whether SAD patients retain negative evaluation biases across visual and auditory modalities when given sufficient response time to recognise emotions. Thirty-one SAD patients and 31 age- and gender-matched healthy participants completed a culturally suitable non-verbal emotion recognition task and received clinical assessments for social anxiety and depressive symptoms. A repeated measures analysis of variance was conducted to examine group differences in emotion recognition. Compared to healthy participants, SAD patients were significantly less accurate at recognising facial and prosodic emotions, and spent more time on emotion recognition. The differences were mainly driven by the lower accuracy and longer reaction times for recognising fearful emotions in SAD patients. Within the SAD patients, lower accuracy of sad face recognition was associated with higher severity of depressive and social anxiety symptoms, particularly with avoidance symptoms. These findings may represent a cross-modality pattern of avoidance in the later stage of identifying negative emotions in SAD. This pattern may be linked to clinical symptom severity.

  3. Recall, Recognition, and the Measurement of Memory for Print Advertisements

    OpenAIRE

    Richard P. Bagozzi; Alvin J. Silk

    1983-01-01

    The recall and recognition of people for 95 print ads were examined with an aim toward investigating memory structure and decay processes. It was found that recall and recognition do not, by themselves, measure a single underlying memory state. Rather, memory is multidimensional, and recall and recognition capture only a portion of memory, while at the same time reflecting other mental states. When interest in the ads was held constant, however, recall and recognition did measure memory as a ...

  4. Efficient Interaction Recognition through Positive Action Representation

    Directory of Open Access Journals (Sweden)

    Tao Hu

    2013-01-01

Full Text Available This paper proposes a novel approach to decompose two-person interaction into a Positive Action and a Negative Action for more efficient behavior recognition. A Positive Action plays the decisive role in a two-person exchange. Thus, interaction recognition can be simplified to Positive Action-based recognition, focusing on an action representation of just one person. Recently, a new depth sensor has become widely available, the Microsoft Kinect camera, which provides RGB-D data with 3D spatial information for quantitative analysis. However, there are few publicly accessible test datasets using this camera with which to assess two-person interaction recognition approaches. Therefore, we created a new dataset with six types of complex human interactions (named K3HI), including kicking, pointing, punching, pushing, exchanging an object, and shaking hands. Three types of features were extracted for each Positive Action: joint, plane, and velocity features. We used continuous Hidden Markov Models (HMMs) to evaluate the Positive Action-based interaction recognition method and the traditional two-person interaction recognition approach with our test dataset. Experimental results showed that the proposed recognition technique is more accurate than the traditional method, shortens the sample training time, and therefore achieves comprehensive superiority.

  5. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias

    Directory of Open Access Journals (Sweden)

    Sara Invitto

    2017-08-01

Full Text Available Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 Event-Related Potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings of the study were that musicians' behavioral responses and N170 are more affected by the emotional value of music administered in the emotional go/no-go task, and this bias is also apparent in responses to the non-target emotional face. This suggests that emotional information, coming from multiple sensory channels, activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.

  6. The hierarchical brain network for face recognition.

    Science.gov (United States)

    Zhen, Zonglei; Fang, Huizhen; Liu, Jia

    2013-01-01

Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched to object recognition from face recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions was significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level.

  7. Molecularly imprinted titania nanoparticles for selective recognition and assay of uric acid

    Science.gov (United States)

    Mujahid, Adnan; Khan, Aimen Idrees; Afzal, Adeel; Hussain, Tajamal; Raza, Muhammad Hamid; Shah, Asma Tufail; uz Zaman, Waheed

    2015-06-01

Molecularly imprinted titania nanoparticles are successfully synthesized by the sol-gel method for the selective recognition of uric acid. Atomic force microscopy is used to study the morphology of the uric acid imprinted titania nanoparticles, which have diameters in the range of 100-150 nm. Scanning electron microscopy images of the thick titania layer indicate the formation of a fine network of titania nanoparticles with uniform distribution. Molecular imprinting of uric acid, as well as its subsequent washing, is confirmed by Fourier transform infrared spectroscopy measurements. Uric acid rebinding studies reveal the recognition capability of the imprinted particles in the range of 0.01-0.095 mmol, which is applicable in monitoring normal to elevated levels of uric acid in human blood. The optical shift (signal) of the imprinted particles is six times higher in comparison with non-imprinted particles for the same concentration of uric acid. Imprinted titania particles have shown substantially reduced binding affinity toward interfering and structurally related substances, e.g. ascorbic acid and guanine. These results suggest the possible application of titania nanoparticles in uric acid recognition and quantification in blood serum.

  8. Color constancy in 3D-2D face recognition

    Science.gov (United States)

    Meyer, Manuel; Riess, Christian; Angelopoulou, Elli; Evangelopoulos, Georgios; Kakadiaris, Ioannis A.

    2013-05-01

    Face is one of the most popular biometric modalities. However, up to now, color is rarely actively used in face recognition. Yet, it is well-known that when a person recognizes a face, color cues can become as important as shape, especially when combined with the ability of people to identify the color of objects independent of illuminant color variations. In this paper, we examine the feasibility and effect of explicitly embedding illuminant color information in face recognition systems. We empirically examine the theoretical maximum gain of including known illuminant color to a 3D-2D face recognition system. We also investigate the impact of using computational color constancy methods for estimating the illuminant color, which is then incorporated into the face recognition framework. Our experiments show that under close-to-ideal illumination estimates, one can improve face recognition rates by 16%. When the illuminant color is algorithmically estimated, the improvement is approximately 5%. These results suggest that color constancy has a positive impact on face recognition, but the accuracy of the illuminant color estimate has a considerable effect on its benefits.

  9. New technique for number-plate recognition

    Science.gov (United States)

    Guo, Jie; Shi, Peng-Fei

    2001-09-01

This paper presents an alternative algorithm for number plate recognition. The algorithm consists of three modules: a number plate location module, a character segmentation module, and a character recognition module. The number plate location module extracts the number plate from the detected car image by analyzing color and texture properties. Unlike most license plate location methods, the algorithm places few limits on the car size, the car position in the image, and the image background. The character segmentation module applies a connected region algorithm both to eliminate noise points and to segment characters; touching characters and broken characters can be processed correctly. The character recognition module recognizes characters with an HHIC (Hierarchical Hybrid Integrated Classifier). The system has been tested with 100 images obtained from crossroads, parking lots, etc., where the cars have different sizes, positions, backgrounds and illumination. The successful recognition rate is about 92%. The average processing time is 1.2 seconds.
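The connected-region step this record describes (removing noise points and segmenting characters) is conventionally done with connected-component labeling. Below is a minimal sketch of a 4-connected flood-fill labeler, not the authors' implementation; the tiny binary image and all names are hypothetical:

```python
from collections import deque

def connected_components(binary):
    """Label 4-connected foreground regions in a binary image (lists of 0/1)."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    n_labels = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                n_labels += 1
                labels[sy][sx] = n_labels
                queue = deque([(sy, sx)])
                while queue:  # breadth-first flood fill of one region
                    y, x = queue.popleft()
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = n_labels
                            queue.append((ny, nx))
    return labels, n_labels

# Tiny example: two character-like blobs plus one isolated noise pixel.
img = [[1, 1, 0, 0, 1],
       [1, 0, 0, 0, 1],
       [0, 0, 1, 0, 0]]
labels, n = connected_components(img)
print(n)  # 3 regions; very small ones could then be discarded as noise
```

In a plate pipeline, regions below a minimum area would be dropped as noise, and the remaining bounding boxes passed to the character classifier.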

  10. Recovery from emotion recognition impairment after temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Francesca eBenuzzi

    2014-06-01

Full Text Available Mesial temporal lobe epilepsy (MTLE) can be associated with emotion recognition impairment that can be particularly severe in patients with early onset seizures (1-3). Whereas there is growing evidence that memory and language can improve in seizure-free patients after anterior temporal lobectomy (ATL) (4), the effects of surgery on emotional processing are still unknown. We used functional magnetic resonance imaging (fMRI) to investigate short-term reorganization of networks engaged in facial emotion recognition in MTLE patients. Behavioral and fMRI data were collected from six patients before and after ATL. During the fMRI scan, patients were asked to make a gender decision on fearful and neutral faces. Behavioral data demonstrated that two patients with early-onset right MTLE were impaired in fear recognition while fMRI results showed they lacked specific activations for fearful faces. Post-ATL behavioral data showed improved emotion recognition ability, while fMRI demonstrated the recruitment of a functional network for fearful face processing. Our results suggest that ATL elicited brain plasticity mechanisms allowing behavioral and fMRI improvement in emotion recognition.

  11. The role of long-term and short-term familiarity in visual and haptic face recognition.

    Science.gov (United States)

    Casey, Sarah J; Newell, Fiona N

    2005-10-01

    Recent studies have suggested that the familiarity of a face leads to more robust recognition, at least within the visual domain. The aim of our study was to investigate whether face familiarity resulted in a representation of faces that was easily shared across the sensory modalities. In Experiment 1, we tested whether haptic recognition of a highly familiar face (one's own face) was as efficient as visual recognition. Our observers were unable to recognise their own face models from tactile memory alone but were able to recognise their faces visually. However, haptic recognition improved when participants were primed by their own live face. In Experiment 2, we found that short-term familiarisation with a set of previously unfamiliar face stimuli improved crossmodal recognition relative to the recognition of unfamiliar faces. Our findings suggest that familiarisation provides a strong representation of faces but that the nature of the information encoded during learning is critical for efficient crossmodal recognition.

  12. An exploratory study on emotion recognition in patients with a clinically isolated syndrome and multiple sclerosis.

    Science.gov (United States)

    Jehna, Margit; Neuper, Christa; Petrovic, Katja; Wallner-Blazek, Mirja; Schmidt, Reinhold; Fuchs, Siegrid; Fazekas, Franz; Enzinger, Christian

    2010-07-01

Multiple sclerosis (MS) is a chronic multifocal CNS disorder which can affect higher order cognitive processes. Whereas cognitive disturbances in MS are increasingly better characterised, emotional facial expression (EFE) has rarely been tested, despite its importance for adequate social behaviour. We tested 20 patients with a clinically isolated syndrome suggestive of MS (CIS) or MS and 23 healthy controls (HC) for the ability to differentiate between emotional facial stimuli, controlling for the influence of depressive mood (ADS-L). We screened for cognitive dysfunction using the Faces Symbol Test (FST). The patients demonstrated significantly decreased reaction times on emotion recognition tests compared to HC. However, the results also suggested worse cognitive abilities in the patients. Emotional and cognitive test results were correlated. This exploratory pilot study suggests that emotion recognition deficits might be prevalent in MS. However, future studies will be needed to overcome the limitations of this study. Copyright 2010 Elsevier B.V. All rights reserved.

  13. Understanding eye movements in face recognition using hidden Markov models.

    Science.gov (United States)

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2014-09-16

We use a hidden Markov model (HMM) based approach to analyze eye movement data in face recognition. HMMs are statistical models that are specialized in handling time-series data. We conducted a face recognition task with Asian participants, and modeled each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition with both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times, but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and wrong recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than locations of the fixations alone. © 2014 ARVO.
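The models this record describes pair regions of interest (hidden states) with transition probabilities, and scoring a fixation sequence under such a discrete HMM uses the standard forward algorithm. A minimal sketch follows; the two-ROI setup and every number in it are invented for illustration and do not come from the study:

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    n = len(pi)
    # Initialize with the first observation.
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    loglik = 0.0
    for t in range(1, len(obs)):
        scale = sum(alpha)           # rescale to avoid underflow
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
        # Propagate through the transition matrix, then emit obs[t].
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][obs[t]]
                 for j in range(n)]
    return loglik + math.log(sum(alpha))

# Toy scan path over two ROIs (0 = eyes, 1 = mouth), with made-up parameters.
pi = [0.8, 0.2]                      # initial ROI probabilities
A = [[0.7, 0.3], [0.4, 0.6]]         # ROI-to-ROI transition matrix
B = [[0.9, 0.1], [0.2, 0.8]]         # emission: fixation symbol given ROI
ll = forward_loglik([0, 0, 1, 1, 0], pi, A, B)
print(ll)
```

In a clustering setup like the one described, each participant's fitted HMM would score sequences, and models with similar parameters would group into holistic or analytic clusters.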

  14. Selective verbal recognition memory impairments are associated with atrophy of the language network in non-semantic variants of primary progressive aphasia.

    Science.gov (United States)

    Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J

    2017-06-01

Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modality to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false alarms for auditory word stimuli compared to visual object stimuli. Moreover, false alarms for auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms for visual object recognition memory were related to cortical thickness within the right temporal pole. This pattern of results suggests that specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles in word and object recognition memory. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Individual differences in the recognition of facial expressions: an event-related potentials study.

    Directory of Open Access Journals (Sweden)

    Yoshiyuki Tamamiya

Full Text Available Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of 3 facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis, where ERP components were set as predictor variables, assessed hits and reaction times in response to the facial expressions as dependent variables. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies were predictive of accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction time. The P2 latencies significantly predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components in visual processing.

  16. Individual differences in the recognition of facial expressions: an event-related potentials study.

    Science.gov (United States)

    Tamamiya, Yoshiyuki; Hiraki, Kazuo

    2013-01-01

Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of 3 facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis, where ERP components were set as predictor variables, assessed hits and reaction times in response to the facial expressions as dependent variables. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies were predictive of accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction time. The P2 latencies significantly predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components in visual processing.

  17. Speech Recognition

    Directory of Open Access Journals (Sweden)

    Adrian Morariu

    2009-01-01

Full Text Available This paper presents a method of speech recognition using pattern recognition techniques. Learning consists of determining the unique characteristics of a word (cepstral coefficients) by eliminating those characteristics that vary from one utterance of the word to another. For learning and recognition, the system builds a dictionary of words by determining the characteristics of each word to be used in the recognition. Determining the characteristics of an audio signal consists of the following steps: removing noise, sampling the signal, applying a Hamming window, switching to the frequency domain through the Fourier transform, calculating the magnitude spectrum, filtering the data, and determining the cepstral coefficients.
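The chain of steps this record lists (window, Fourier transform, magnitude spectrum, cepstral coefficients) can be sketched as a real-cepstrum computation. This is a simplified illustration, not the paper's implementation: a naive O(N²) DFT stands in for an FFT, the noise removal and filtering steps are omitted, and the test frame is a synthetic tone:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (stand-in for an FFT)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def real_cepstrum(frame):
    """Hamming-window a frame, take the log-magnitude spectrum, invert it."""
    N = len(frame)
    hamming = [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]
    windowed = [s * w for s, w in zip(frame, hamming)]
    log_mag = [math.log(abs(X) + 1e-12) for X in dft(windowed)]  # floor avoids log(0)
    return [c.real for c in idft(log_mag)]

# A 64-sample pure tone as a stand-in for one speech frame.
frame = [math.sin(2 * math.pi * 5 * n / 64) for n in range(64)]
ceps = real_cepstrum(frame)
print(ceps[:4])
```

A recognizer as described would keep only the first dozen or so coefficients of each frame as the word's feature vector.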

  18. Conceptual fluency at test shifts recognition response bias in Alzheimer's disease: implications for increased false recognition.

    Science.gov (United States)

    Gold, Carl A; Marchant, Natalie L; Koutstaal, Wilma; Schacter, Daniel L; Budson, Andrew E

    2007-09-20

The presence or absence of conceptual information in pictorial stimuli may explain the mixed findings of previous studies of false recognition in patients with mild Alzheimer's disease (AD). To test this hypothesis, 48 patients with AD were compared to 48 healthy older adults on a recognition task first described by Koutstaal et al. [Koutstaal, W., Reddy, C., Jackson, E. M., Prince, S., Cendan, D. L., & Schacter D. L. (2003). False recognition of abstract versus common objects in older and younger adults: Testing the semantic categorization account. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 499-510]. Participants studied and were tested on their memory for categorized ambiguous pictures of common objects. The presence of conceptual information at study and/or test was manipulated by providing or withholding disambiguating semantic labels. Analyses focused on testing two competing theories. The semantic encoding hypothesis, which posits that the inter-item perceptual details are not encoded by AD patients when conceptual information is present in the stimuli, was not supported by the findings. In contrast, the conceptual fluency hypothesis was supported. Enhanced conceptual fluency at test dramatically shifted AD patients to a more liberal response bias, raising their false recognition. These results suggest that patients with AD rely on the fluency of test items in making recognition memory decisions. We speculate that AD patients' over-reliance on fluency may be attributable to (1) dysfunction of the hippocampus, disrupting recollection, and/or (2) dysfunction of prefrontal cortex, disrupting post-retrieval processes.

  19. Recognition bias and the physical attractiveness stereotype.

    Science.gov (United States)

    Rohner, Jean-Christophe; Rasmussen, Anders

    2012-06-01

    Previous studies have found a recognition bias for information consistent with the physical attractiveness stereotype (PAS), in which participants believe that they remember that attractive individuals have positive qualities and that unattractive individuals have negative qualities, regardless of what information actually occurred. The purpose of this research was to examine whether recognition bias for PAS congruent information is replicable and invariant across a variety of conditions (i.e. generalizable). The effects of nine different moderator variables were examined in two experiments. With a few exceptions, the effect of PAS congruence on recognition bias was independent of the moderator variables. The results suggest that the tendency to believe that one remembers information consistent with the physical attractiveness stereotype is a robust phenomenon. © 2012 The Authors. Scandinavian Journal of Psychology © 2012 The Scandinavian Psychological Associations.

  20. Does Employee Recognition Affect Positive Psychological Functioning and Well-Being?

    Science.gov (United States)

    Merino, M Dolores; Privado, Jesús

    2015-09-14

Employee recognition is one of the typical characteristics of healthy organizations. The majority of research on recognition has studied the consequences of this variable for workers, but few investigations have focused on understanding what mechanisms mediate between recognition and its consequences. This work aims to understand whether psychological resources mediate the relationship between employee recognition and well-being. To answer this question a sample of 1831 workers was used. The variables measured were: employee recognition, subjective well-being, and positive psychological functioning (PPF), which consists of 11 psychological resources. In the analysis of data, structural equation models were applied. The results confirmed our hypothesis and showed that PPF mediates the relationship between recognition and well-being. The effect of recognition on PPF is two times greater with peer-recognition (.39) than with supervisor-recognition (.20), and the effect of PPF on well-being is .59. This study highlights the importance of promoting employee recognition policies in organizations for the impact it has, not only on well-being, but also on the positive psychological functioning of the workers.

  1. Usage of semantic representations in recognition memory.

    Science.gov (United States)

    Nishiyama, Ryoji; Hirano, Tetsuji; Ukita, Jun

    2017-11-01

Meanings of words facilitate false acceptance as well as correct rejection of lures in recognition memory tests, depending on the experimental context. This suggests that semantic representations are both directly and indirectly (i.e., mediated by perceptual representations) used in remembering. Studies using memory conjunction error (MCE) paradigms, in which the lures consist of component parts of studied words, have reported semantic facilitation of rejection of the lures. However, attending to components of the lures could potentially cause this. Therefore, we investigated whether semantic overlap of lures facilitates MCEs using Japanese Kanji words, for which reading relies more on the whole-word image. Experiments demonstrated semantic facilitation of MCEs in a delayed recognition test (Experiment 1), in immediate recognition tests in which participants were prevented from using phonological or orthographic representations (Experiment 2), and a salient effect in individuals with high semantic memory capacities (Experiment 3). Additionally, analysis of the receiver operating characteristic suggested that this effect is attributable to familiarity-based memory judgement and phantom recollection. These findings indicate that semantic representations can be directly used in remembering, even when perceptual representations of studied words are available.

  2. The role of nitric oxide in the object recognition memory.

    Science.gov (United States)

    Pitsikas, Nikolaos

    2015-05-15

The novel object recognition task (NORT) assesses recognition memory in animals. It is a non-rewarded paradigm that is based on spontaneous exploratory behavior in rodents. This procedure is widely used for testing the effects of compounds on recognition memory. Recognition memory is a type of memory severely compromised in schizophrenic and Alzheimer's disease patients. Nitric oxide (NO) is thought to be an intra- and inter-cellular messenger in the central nervous system, and its implication in learning and memory is well documented. Here I critically review the role of NO-related compounds in different aspects of recognition memory. Current analysis shows that both NO donors and NO synthase (NOS) inhibitors are involved in object recognition memory and suggests that NO might be a promising target for cognition impairments. However, the potential neurotoxicity of NO would add a note of caution in this context. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Enhancement of Iris Recognition System Based on Phase Only Correlation

    Directory of Open Access Journals (Sweden)

    Nuriza Pramita

    2011-08-01

Full Text Available Iris recognition is one of the biometric-based recognition/identification systems. Numerous techniques have been implemented to achieve a good recognition rate, including ones based on Phase Only Correlation (POC). Significant, higher correlation peaks suggest that the system recognizes iris images of the same subject (person), while lower, insignificant peaks correspond to recognition of those of different subjects. Current POC methods have not investigated the minimum number of iris points that can be used to achieve higher correlation peaks. This paper proposed a method that used only one-fourth of the full normalized iris size to achieve a higher (or at least the same) recognition rate. Simulation on the CASIA version 1.0 iris image database showed that the averaged recognition rate of the proposed method achieved 67%, higher than that of using one-half (56%) or full (53%) iris points. Furthermore, all (100%) POC peak values of the proposed method were higher than those of the method with full iris points.
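The POC matching this record relies on can be illustrated in one dimension: the cross-power spectrum of two signals is normalized to unit magnitude and inverse-transformed, producing a sharp peak whose position encodes the displacement between matching patterns. The sketch below is a toy 1-D version with a naive DFT, not the paper's 2-D iris implementation; the "iris profile" signal is invented:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (stand-in for an FFT)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def poc(f, g):
    """Phase-only correlation: keep only the phase of the cross-power spectrum."""
    cross = []
    for F, G in zip(dft(f), dft(g)):
        c = F * G.conjugate()
        mag = abs(c)
        cross.append(c / mag if mag > 1e-12 else 0j)
    return [v.real for v in idft(cross)]

# A 1-D "iris profile" and a circularly shifted copy of it.
f = [0.0] * 16
f[3], f[4] = 1.0, 0.5
shift = 5
g = f[-shift:] + f[:-shift]          # g[n] = f[(n - 5) mod 16]

r = poc(f, g)
peak = max(range(len(r)), key=lambda i: r[i])
recovered = (len(r) - peak) % len(r)
print(recovered)  # the peak position recovers the circular shift: 5
```

For genuinely matching inputs the peak approaches 1.0, while non-matching inputs produce a flat, low response, which is the basis of the same-subject/different-subject decision described above.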

  4. Facial Recognition of Happiness Is Impaired in Musicians with High Music Performance Anxiety.

    Science.gov (United States)

    Sabino, Alini Daniéli Viana; Camargo, Cristielli M; Chagas, Marcos Hortes N; Osório, Flávia L

    2018-01-01

Music performance anxiety (MPA) can be defined as a lasting and intense apprehension connected with musical performance in public. Studies suggest that MPA can be regarded as a subtype of social anxiety. Since individuals with social anxiety have deficits in the recognition of facial emotion, we hypothesized that musicians with high levels of MPA would share similar impairments. The aim of this study was to compare parameters of facial emotion recognition (FER) between musicians with high and low MPA. 150 amateur and professional musicians with different musical backgrounds were assessed with respect to their level of MPA and completed a dynamic FER task. The outcomes investigated were accuracy, response time, emotional intensity, and response bias. Musicians with high MPA were less accurate in the recognition of happiness (p = 0.04; d = 0.34), had an increased response bias toward fear (p = 0.03), and had increased response times to facial emotions as a whole (p = 0.02; d = 0.39). Musicians with high MPA displayed FER deficits that were independent of general anxiety levels and possibly of general cognitive capacity. These deficits may favor the maintenance and exacerbation of experiences of anxiety during public performance, since cues of approval, satisfaction, and encouragement are not adequately recognized.

  5. Stage-specific sampling by pattern recognition receptors during Candida albicans phagocytosis.

    Directory of Open Access Journals (Sweden)

    Sigrid E M Heinsbroek

    2008-11-01

    Full Text Available Candida albicans is a medically important pathogen, and recognition by innate immune cells is critical for its clearance. Although a number of pattern recognition receptors have been shown to be involved in recognition and phagocytosis of this fungus, the relative role of these receptors has not been formally examined. In this paper, we have investigated the contribution of the mannose receptor, Dectin-1, and complement receptor 3, and we have demonstrated that Dectin-1 is the main non-opsonic receptor involved in fungal uptake. However, both Dectin-1 and complement receptor 3 were found to accumulate at the site of uptake, while the mannose receptor accumulated on C. albicans phagosomes at later stages. These results suggest a potential role for the mannose receptor (MR) in phagosome sampling; accordingly, MR deficiency led to a reduction in TNF-alpha and MCP-1 production in response to C. albicans uptake. Our data suggest that pattern recognition receptors sample the fungal phagosome in a sequential fashion.

  6. Recognition and Toleration

    DEFF Research Database (Denmark)

    Lægaard, Sune

    2010-01-01

    Recognition and toleration are ways of relating to the diversity characteristic of multicultural societies. The article concerns the possible meanings of toleration and recognition, and the conflict that is often claimed to exist between these two approaches to diversity. Different forms or interpretations of recognition and toleration are considered, confusing and problematic uses of the terms are noted, and the compatibility of toleration and recognition is discussed. The article argues that there is a range of legitimate and importantly different conceptions of both toleration and recognition…

  7. Processing of recognition information and additional cues: A model-based analysis of choice, confidence, and response time

    Directory of Open Access Journals (Sweden)

    Andreas Glockner

    2011-02-01

    Full Text Available Research on the processing of recognition information has focused on testing the recognition heuristic (RH). On the aggregate, the noncompensatory use of recognition information postulated by the RH was rejected in several studies, while the RH could still account for a considerable proportion of choices. These results can be explained if either (a) a part of the subjects used the RH or (b) nobody used it but its choice predictions were accidentally in line with predictions of the strategy used. In the current study, which exemplifies a new approach to model testing, we determined individuals' decision strategies based on a maximum-likelihood classification method, taking into account choices, response times and confidence ratings simultaneously. Unlike most previous studies of the RH, our study tested the RH under conditions in which we provided information about cue values of unrecognized objects (which we argue is fairly common and thus of some interest). For 77.5% of the subjects, overall behavior was best explained by a compensatory parallel constraint satisfaction (PCS) strategy. The proportion of subjects using an enhanced RH heuristic (RHe) was negligible (up to 7.5%); 15% of the subjects seemed to use a take-the-best strategy (TTB). A more fine-grained analysis of the supplemental behavioral parameters conditional on strategy use supports PCS but calls into question process assumptions for apparent users of RH, RHe, and TTB within our experimental context. Our results are consistent with previous literature highlighting the importance of individual strategy classification as compared to aggregated analyses.

  8. Event Recognition Based on Deep Learning in Chinese Texts.

    Directory of Open Access Journals (Sweden)

    Yajun Zhang

    Full Text Available Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word, and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%.

  9. Event Recognition Based on Deep Learning in Chinese Texts.

    Science.gov (United States)

    Zhang, Yajun; Liu, Zongtian; Zhou, Wen

    2016-01-01

    Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%.

  10. Local Feature Learning for Face Recognition under Varying Poses

    DEFF Research Database (Denmark)

    Duan, Xiaodong; Tan, Zheng-Hua

    2015-01-01

    In this paper, we present a local feature learning method for face recognition to deal with varying poses. As opposed to the commonly used approaches of recovering frontal face images from profile views, the proposed method extracts the subject-related part from a local feature by removing the pose-related part in it on the basis of a pose feature. The method has a closed-form solution, hence being time efficient. For performance evaluation, cross-pose face recognition experiments are conducted on two public face recognition databases, FERET and FEI. The proposed method shows a significant recognition improvement under varying poses over general local feature approaches and outperforms or is comparable with related state-of-the-art pose-invariant face recognition approaches. Copyright ©2015 by IEEE.

  11. An Evaluation of PC-Based Optical Character Recognition Systems.

    Science.gov (United States)

    Schreier, E. M.; Uslan, M. M.

    1991-01-01

    The review examines six personal computer-based optical character recognition (OCR) systems designed for use by blind and visually impaired people. Considered are OCR components and terms, documentation, scanning and reading, command structure, conversion, unique features, accuracy of recognition, scanning time, speed, and cost. (DB)

  12. Exploiting core knowledge for visual object recognition.

    Science.gov (United States)

    Schurgin, Mark W; Flombaum, Jonathan I

    2017-03-01

    Humans recognize thousands of objects, and with relative tolerance to variable retinal inputs. The acquisition of this ability is not fully understood, and it remains an area in which artificial systems have yet to surpass people. We sought to investigate the memory process that supports object recognition. Specifically, we investigated the association of inputs that co-occur over short periods of time. We tested the hypothesis that human perception exploits expectations about object kinematics to limit the scope of association to inputs that are likely to have the same token as a source. In several experiments we exposed participants to images of objects, and we then tested recognition sensitivity. Using motion, we manipulated whether successive encounters with an image took place through kinematics that implied the same or a different token as the source of those encounters. Images were injected with noise, or shown at varying orientations, and we included 2 manipulations of motion kinematics. Across all experiments, memory performance was better for images that had been previously encountered with kinematics that implied a single token. A model-based analysis similarly showed greater memory strength when images were shown via kinematics that implied a single token. These results suggest that constraints from physics are built into the mechanisms that support memory about objects. Such constraints-often characterized as 'Core Knowledge'-are known to support perception and cognition broadly, even in young infants. But they have never been considered as a mechanism for memory with respect to recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Optical Pattern Recognition for Missile Guidance.

    Science.gov (United States)

    1982-11-15

    directed to novel pattern recognition algorithms (that allow pattern recognition and object classification in the face of various geometrical and … distortions), both for image pattern recognition and as a preprocessing operation. … The read light intensity (0.33 mW …) … electrodes on its large faces. This SLM is known as the Prom (Pockels real-time optical modulator). The Priz light modulator and the motivation for its development … In Sec …

  14. Emotion recognition pattern in adolescent boys with attention-deficit/hyperactivity disorder.

    Science.gov (United States)

    Aspan, Nikoletta; Bozsik, Csilla; Gadoros, Julia; Nagy, Peter; Inantsy-Pap, Judit; Vida, Peter; Halasz, Jozsef

    2014-01-01

    Social and emotional deficits were recently considered as inherent features of individuals with attention-deficit hyperactivity disorder (ADHD), but only sporadic literature data exist on emotion recognition in adolescents with ADHD. The aim of the present study was to establish the emotion recognition profile in adolescent boys with ADHD in comparison with control adolescents. Forty-four adolescent boys (13-16 years) participated in the study after informed consent; 22 boys had a clinical diagnosis of ADHD, while data were also assessed from 22 adolescent control boys matched for age and Raven IQ. Parent- and self-reported behavioral characteristics were assessed by means of the Strengths and Difficulties Questionnaire. The recognition of six basic emotions was evaluated by the "Facial Expressions of Emotion-Stimuli and Tests." Compared to controls, adolescents with ADHD were more sensitive in the recognition of disgust, worse in the recognition of fear, and showed a tendency for impaired recognition of sadness. Hyperactivity measures showed an inverse correlation with fear recognition. Our data suggest that adolescent boys with ADHD have alterations in the recognition of specific emotions.

  15. Simultaneous tracking and activity recognition

    DEFF Research Database (Denmark)

    Manfredotti, Cristina Elena; Fleet, David J.; Hamilton, Howard J.

    2011-01-01

    Many tracking problems involve several distinct objects interacting with each other. We develop a framework that takes into account interactions between objects allowing the recognition of complex activities. In contrast to classic approaches that consider distinct phases of tracking and activity recognition, … can be used to improve the prediction step of the tracking, while, at the same time, tracking information can be used for online activity recognition. Experimental results in two different settings show that our approach 1) decreases the error rate and improves the identity maintenance of the positional tracking and 2) identifies the correct activity with higher accuracy than standard approaches.

  16. Obligatory and facultative brain regions for voice-identity recognition

    Science.gov (United States)

    Roswandowitz, Claudia; Kappes, Claudia; Obrig, Hellmuth; von Kriegstein, Katharina

    2018-01-01

    Abstract Recognizing the identity of others by their voice is an important skill for social interactions. To date, it remains controversial which parts of the brain are critical structures for this skill. Based on neuroimaging findings, standard models of person-identity recognition suggest that the right temporal lobe is the hub for voice-identity recognition. Neuropsychological case studies, however, reported selective deficits of voice-identity recognition in patients predominantly with right inferior parietal lobe lesions. Here, our aim was to work towards resolving the discrepancy between neuroimaging studies and neuropsychological case studies to find out which brain structures are critical for voice-identity recognition in humans. We performed a voxel-based lesion-behaviour mapping study in a cohort of patients (n = 58) with unilateral focal brain lesions. The study included a comprehensive behavioural test battery on voice-identity recognition of newly learned (voice-name, voice-face association learning) and familiar voices (famous voice recognition) as well as visual (face-identity recognition) and acoustic control tests (vocal-pitch and vocal-timbre discrimination). The study also comprised clinically established tests (neuropsychological assessment, audiometry) and high-resolution structural brain images. The three key findings were: (i) a strong association between voice-identity recognition performance and right posterior/mid temporal and right inferior parietal lobe lesions; (ii) a selective association between right posterior/mid temporal lobe lesions and voice-identity recognition performance when face-identity recognition performance was factored out; and (iii) an association of right inferior parietal lobe lesions with tasks requiring the association between voices and faces but not voices and names. 
The results imply that the right posterior/mid temporal lobe is an obligatory structure for voice-identity recognition, while the inferior parietal

  17. Foreign language learning, hyperlexia, and early word recognition.

    Science.gov (United States)

    Sparks, R L; Artzer, M

    2000-01-01

    Children with hyperlexia read words spontaneously before the age of five, have impaired comprehension on both listening and reading tasks, and have word recognition skill above expectations based on cognitive and linguistic abilities. One student with hyperlexia and another student with higher word recognition than comprehension skills who started to read words at a very early age were followed over several years from the primary grades through high school when both were completing a second-year Spanish course. The purpose of the present study was to examine the foreign language (FL) word recognition, spelling, reading comprehension, writing, speaking, and listening skills of the two students and another high school student without hyperlexia. Results showed that the student without hyperlexia achieved higher scores than the hyperlexic student and the student with above average word recognition skills on most FL proficiency measures. The student with hyperlexia and the student with above average word recognition skills achieved higher scores on the Spanish proficiency tasks that required the exclusive use of phonological (pronunciation) and phonological/orthographic (word recognition, spelling) skills than on Spanish proficiency tasks that required the use of listening comprehension and speaking and writing skills. The findings provide support for the notion that word recognition and spelling in a FL may be modular processes and exist independently of general cognitive and linguistic skills. Results also suggest that students may have stronger FL learning skills in one language component than in other components of language, and that there may be a weak relationship between FL word recognition and oral proficiency in the FL.

  18. Individual recognition based on communication behaviour of male fowl.

    Science.gov (United States)

    Smith, Carolynn L; Taubert, Jessica; Weldon, Kimberly; Evans, Christopher S

    2016-04-01

    Correctly directing social behaviour towards a specific individual requires an ability to discriminate between conspecifics. The mechanisms of individual recognition include phenotype matching and familiarity-based recognition. Communication-based recognition is a subset of familiarity-based recognition wherein the classification is based on behavioural or distinctive signalling properties. Male fowl (Gallus gallus) produce a visual display (tidbitting) upon finding food in the presence of a female. Females typically approach displaying males. However, males may tidbit without food. We used the distinctiveness of the visual display and the unreliability of some males to test for communication-based recognition in female fowl. We manipulated the prior experience of the hens with the males to create two classes of males: S(+), wherein the tidbitting signal was paired with a food reward to the female, and S(-), wherein the tidbitting signal occurred without food reward. We then conducted a sequential discrimination test with hens using a live video feed of a familiar male. The results of the discrimination tests revealed that hens discriminated between categories of males based on their signalling behaviour. These results suggest that fowl possess a communication-based recognition system. This is the first demonstration of live-to-video transfer of recognition in any species of bird. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Recall and recognition hypermnesia for Socratic stimuli.

    Science.gov (United States)

    Kazén, Miguel; Solís-Macías, Víctor M

    2016-01-01

    In two experiments, we investigate hypermnesia, net memory improvements with repeated testing of the same material after a single study trial. In the first experiment, we found hypermnesia across three trials for the recall of word solutions to Socratic stimuli (dictionary-like definitions of concepts) replicating Erdelyi, Buschke, and Finkelstein and, for the first time using these materials, for their recognition. In the second experiment, we had two "yes/no" recognition groups, a Socratic stimuli group presented with concrete and abstract verbal materials and a word-only control group. Using signal detection measures, we found hypermnesia for concrete Socratic stimuli-and stable performance for abstract stimuli across three recognition tests. The control group showed memory decrements across tests. We interpret these findings with the alternative retrieval pathways (ARP) hypothesis, contrasting it with alternative theories of hypermnesia, such as depth of processing, generation and retrieve-recognise. We conclude that recognition hypermnesia for concrete Socratic stimuli is a reliable phenomenon, which we found in two experiments involving both forced-choice and yes/no recognition procedures.

  20. Giant pandas failed to show mirror self-recognition.

    Science.gov (United States)

    Ma, Xiaozan; Jin, Yuan; Luo, Bo; Zhang, Guiquan; Wei, Rongping; Liu, Dingzhen

    2015-05-01

    Mirror self-recognition (MSR), i.e., the ability to recognize oneself in a mirror, is considered a potential index of self-recognition and the foundation of individual development. A wealth of literature on MSR is available for social animals, such as chimpanzees, Asian elephants and dolphins, yet little is known about MSR in solitary mammalian species. We aimed to evaluate whether the giant panda can recognize itself in the mirror, and whether this capacity varies with age. Thirty-four captive giant pandas (F:M = 18:16; juveniles, sub-adults and adults) were subjected to four mirror tests: covered mirror tests, open mirror tests, water mark control tests, and mark tests. The results showed that, though adult, sub-adult and juvenile pandas exposed to mirrors spent similar amounts of time in social mirror-directed behaviors (χ² = 0.719, P = 0.698), none of them used the mirror to touch the mark on their head, a self-directed behavior suggesting MSR. Individuals of all age groups initially displayed attacking, threatening, foot scraping and backwards walking behaviors when exposed to their self-images in the mirror. Our data indicate that, regardless of age, the giant pandas did not recognize their self-image in the mirror, but instead considered the image to be a conspecific. Our results add to the available information on mirror self-recognition in large mammals, provide new information on a solitary species, and will be useful for enclosure design and captive animal management.

  1. Stress reaction process-based hierarchical recognition algorithm for continuous intrusion events in optical fiber prewarning system

    Science.gov (United States)

    Qu, Hongquan; Yuan, Shijiao; Wang, Yanping; Yang, Dan

    2018-04-01

    To improve the recognition performance of optical fiber prewarning system (OFPS), this study proposed a hierarchical recognition algorithm (HRA). Compared with traditional methods, which employ only a complex algorithm that includes multiple extracted features and complex classifiers to increase the recognition rate with a considerable decrease in recognition speed, HRA takes advantage of the continuity of intrusion events, thereby creating a staged recognition flow inspired by stress reaction. HRA is expected to achieve high-level recognition accuracy with less time consumption. First, this work analyzed the continuity of intrusion events and then presented the algorithm based on the mechanism of stress reaction. Finally, it verified the time consumption through theoretical analysis and experiments, and the recognition accuracy was obtained through experiments. Experiment results show that the processing speed of HRA is 3.3 times faster than that of a traditional complicated algorithm and has a similar recognition rate of 98%. The study is of great significance to fast intrusion event recognition in OFPS.

  2. Speech recognition: impact on workflow and report availability

    International Nuclear Information System (INIS)

    Glaser, C.; Trumm, C.; Nissen-Meyer, S.; Francke, M.; Kuettner, B.; Reiser, M.

    2005-01-01

    With ongoing technical refinements speech recognition systems (SRS) are becoming an increasingly attractive alternative to traditional methods of preparing and transcribing medical reports. The two main components of any SRS are the acoustic model and the language model. Features of modern SRS with continuous speech recognition are macros with individually definable texts and report templates as well as the option to navigate in a text or to control SRS or RIS functions by speech recognition. The best benefit from SRS can be obtained if it is integrated into a RIS/RIS-PACS installation. Report availability and time efficiency of the reporting process (related to recognition rate, time expenditure for editing and correcting a report) are the principal determinants of the clinical performance of any SRS. For practical purposes the recognition rate is estimated by the error rate (unit: word). Error rates range from 4 to 28%. Roughly 20% of them are errors in the vocabulary which may result in clinically relevant misinterpretation. It is thus mandatory to thoroughly correct any transcribed text as well as to continuously train and adapt the SRS vocabulary. The implementation of SRS dramatically improves report availability. This is most pronounced for CT and CR. However, the individual time expenditure for (SRS-based) reporting increased by 20-25% (CR) and according to literature data there is an increase by 30% for CT and MRI. The extent to which the transcription staff profits from SRS depends largely on its qualification. Online dictation implies a workload shift from the transcription staff to the reporting radiologist. (orig.) [de
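    The word-level error rate used above to estimate the recognition rate can be computed with a standard edit distance over word tokens. The sketch below is a generic illustration, not the measurement tool used in the article; the sample report phrases are invented.

```python
# Generic word error rate (WER) via dynamic-programming edit distance over
# word tokens: (substitutions + insertions + deletions) / reference length.

def wer(reference, hypothesis):
    """Word error rate between a reference transcript and a hypothesis."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # deleting all reference words
    for j in range(len(h) + 1):
        d[0][j] = j                      # inserting all hypothesis words
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # match or substitution
    return d[len(r)][len(h)] / len(r)

# One substitution and one deletion against a 5-word reference -> 2/5.
rate = wer("no acute intracranial abnormality seen", "no acute intercranial abnormality")
```

    A vocabulary error such as "intercranial" counts like any other substitution, which is why the article stresses that roughly a fifth of errors are vocabulary errors with potential for clinical misinterpretation.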

  3. Source Separation via Spectral Masking for Speech Recognition Systems

    Directory of Open Access Journals (Sweden)

    Gustavo Fernandes Rodrigues

    2012-12-01

    Full Text Available In this paper we present an insight into the use of spectral masking techniques in the time-frequency domain as a preprocessing step for speech signal recognition. Speech recognition systems have their performance negatively affected in noisy environments or in the presence of other speech signals. The limits of these masking techniques for different levels of the signal-to-noise ratio are discussed. We show the robustness of the spectral masking techniques against four types of noise: white, pink, brown and human speech noise (bubble noise). The main contribution of this work is to analyze the performance limits of recognition systems using spectral masking. We obtain an increase of 18% in the speech hit rate when the speech signals were corrupted by other speech signals or bubble noise, at signal-to-noise ratios of approximately 1, 10 and 20 dB. On the other hand, applying the ideal binary masks to mixtures corrupted by white, pink and brown noise results in an average growth of 9% in the speech hit rate at the same signal-to-noise ratios. The experimental results suggest that the spectral masking techniques are more suitable for bubble noise, which is produced by human speech, than for white, pink and brown noise.
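    The ideal binary mask mentioned above keeps only the time-frequency cells in which speech dominates the noise. A minimal sketch, assuming oracle access to the clean speech and noise power spectrograms; the 0 dB local SNR threshold is a common convention, not a value taken from the paper.

```python
# Ideal binary mask in the time-frequency domain: 1 where the local SNR
# exceeds a threshold, 0 elsewhere. Spectrogram values are toy stand-ins.
import numpy as np

def ideal_binary_mask(speech_power, noise_power, threshold_db=0.0):
    """Keep the time-frequency cells whose local SNR exceeds the threshold."""
    local_snr_db = 10.0 * np.log10((speech_power + 1e-12) / (noise_power + 1e-12))
    return (local_snr_db > threshold_db).astype(float)

speech = np.array([[4.0, 0.1], [9.0, 0.2]])   # toy |STFT|^2 of clean speech
noise = np.array([[1.0, 1.0], [1.0, 1.0]])    # toy |STFT|^2 of noise
mask = ideal_binary_mask(speech, noise)       # 1 where speech dominates
masked_mixture = mask * (speech + noise)      # zero out noise-dominated cells
```

    In the oracle experiments the paper describes, the masked mixture is what gets passed on to the recognizer.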

  4. A Survey on Sentiment Classification in Face Recognition

    Science.gov (United States)

    Qian, Jingyu

    2018-01-01

    Face recognition has been an important topic for both industry and academia for a long time. K-means clustering, autoencoders, and convolutional neural networks, each representing a design idea for face recognition methods, are three popular algorithms for face recognition problems. It is worthwhile to summarize and compare these three different algorithms. This paper will focus on one specific face recognition problem: sentiment classification from images. Three different algorithms for sentiment classification problems will be summarized, including k-means clustering, autoencoder, and convolutional neural network. An experiment with the application of these algorithms on a specific dataset of human faces will be conducted to illustrate how these algorithms are applied and their accuracy. Finally, the three algorithms are compared based on the accuracy result.
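    Of the three algorithms the survey compares, k-means clustering is the simplest to sketch. The toy example below runs plain Lloyd's algorithm on random stand-ins for flattened face-image vectors; the dataset, dimensionality and cluster count are illustrative assumptions, not the survey's setup.

```python
# Plain Lloyd's algorithm for k-means: assign points to the nearest centre,
# then recompute the centres, for a fixed number of iterations.
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Cluster rows of X into k groups; return labels and cluster centres."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
# Two well-separated groups standing in for two sentiment classes.
X = np.vstack([rng.normal(-3.0, 1.0, (25, 16)), rng.normal(3.0, 1.0, (25, 16))])
labels, centers = kmeans(X, k=2)
```

    In a sentiment classification pipeline, cluster assignments like these would be mapped to class labels by majority vote over a labeled subset.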

  5. Description and recognition of patterns in stochastic signals. [Electroencephalograms]

    Energy Technology Data Exchange (ETDEWEB)

    Flik, T [Technische Univ. Berlin (F.R. Germany). Informatik-Forschungsgruppe Rechnerorganisation und Schaltwerke

    1975-10-01

    A method is shown for the description and recognition of patterns in stochastic signals such as electroencephalograms. For pattern extraction, the signal is segmented at times of minimum amplitude. The describing features consist of geometric values of the patterns so defined. The classification algorithm is based on regression analysis, which is well known in the field of character recognition. For an economic classification, a method is proposed that reduces the number of features. The quality of this pattern recognition method is demonstrated by the detection of spike-wave complexes in electroencephalograms. The pattern description and recognition are provided for processing on a digital computer. (DE)
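    The segmentation idea described above, cutting the signal at times of minimum amplitude, can be sketched as follows. The function name and toy signal are illustrative assumptions, not taken from the original report.

```python
# Cut a 1-D signal at interior points of locally minimum amplitude and
# return the resulting segments, as in the EEG pattern-extraction step.
import numpy as np

def segment_at_minima(x):
    """Split x at interior local minima of |x| and return the segments."""
    a = np.abs(x)
    cuts = [i for i in range(1, len(a) - 1) if a[i] < a[i - 1] and a[i] < a[i + 1]]
    bounds = [0] + cuts + [len(x)]
    return [x[b:e] for b, e in zip(bounds[:-1], bounds[1:])]

signal = np.array([0.1, 0.8, 0.05, 0.9, 1.2, 0.02, 0.7])
segments = segment_at_minima(signal)   # three segments, cut at the two dips
```

    Geometric features (for example segment length or peak amplitude) could then be computed per segment and fed to the regression-based classifier the abstract mentions.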

  6. Brain Structural Correlates of Emotion Recognition in Psychopaths.

    Directory of Open Access Journals (Sweden)

    Vanessa Pera-Guardiola

    Full Text Available Individuals with psychopathy present deficits in the recognition of facial emotional expressions. However, the nature and extent of these alterations are not fully understood. Furthermore, available data on the functional neural correlates of emotional face recognition deficits in adult psychopaths have provided mixed results. In this context, emotional face morphing tasks may be suitable for clarifying mild and emotion-specific impairments in psychopaths. Likewise, studies exploring corresponding anatomical correlates may be useful for disentangling available neurofunctional evidence based on the alleged neurodevelopmental roots of psychopathic traits. We used Voxel-Based Morphometry and a morphed emotional face expression recognition task to evaluate the relationship between regional gray matter (GM) volumes and facial emotion recognition deficits in male psychopaths. In comparison to male healthy controls, psychopaths showed deficits in the recognition of sad, happy and fearful emotional expressions. In subsequent brain imaging analyses, psychopaths with better recognition of facial emotional expressions showed higher volume in the prefrontal cortex (orbitofrontal, inferior frontal and dorsomedial prefrontal cortices), somatosensory cortex, anterior insula, cingulate cortex and the posterior lobe of the cerebellum. Amygdala and temporal lobe volumes contributed to better emotional face recognition in controls only. These findings provide evidence suggesting that variability in brain morphometry plays a role in accounting for psychopaths' impaired ability to recognize emotional face expressions, and may have implications for comprehensively characterizing the empathy and social cognition dysfunctions typically observed in this population of subjects.

  7. The effects of initial testing on false recall and false recognition in the social contagion of memory paradigm.

    Science.gov (United States)

    Huff, Mark J; Davis, Sara D; Meade, Michelle L

    2013-08-01

    In three experiments, participants studied photographs of common household scenes. Following study, participants completed a category-cued recall test without feedback (Exps. 1 and 3), a category-cued recall test with feedback (Exp. 2), or a filler task (no-test condition). Participants then viewed recall tests from fictitious previous participants that contained erroneous items presented either one or four times, and then completed final recall and source recognition tests. The participants in all conditions reported incorrect items during final testing (a social contagion effect), and across experiments, initial testing had no impact on false recall of erroneous items. However, on the final source-monitoring recognition test, initial testing had a protective effect against false source recognition: Participants who were initially tested with and without feedback on category-cued initial tests attributed fewer incorrect items to the original event on the final source-monitoring recognition test than did participants who were not initially tested. These data demonstrate that initial testing may protect individuals' memories from erroneous suggestions.

  8. Re-thinking employee recognition: understanding employee experiences of recognition

    OpenAIRE

    Smith, Charlotte

    2013-01-01

    Despite widespread acceptance of the importance of employee recognition for both individuals and organisations and evidence of its increasing use in organisations, employee recognition has received relatively little focused attention from academic researchers. Particularly lacking is research exploring the lived experience of employee recognition and the interpretations and meanings which individuals give to these experiences. Drawing on qualitative interviews conducted as part of my PhD rese...

  9. Image preprocessing study on KPCA-based face recognition

    Science.gov (United States)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural and convenient advantages, has attracted increasing attention. This paper investigates a face recognition system comprising face detection, feature extraction and recognition, focusing on the theory and key technology of various preprocessing methods in the face detection stage and, using the KPCA method, on how different preprocessing choices affect recognition results. We choose the YCbCr color space for skin segmentation and integral projection for face location. Face images are preprocessed with erosion and dilation (the morphological opening and closing operations) and an illumination compensation method, and recognition is then performed with kernel principal component analysis; experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel extension of the PCA algorithm makes the extracted features represent the original image information better, owing to its nonlinear feature extraction, and can obtain a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can yield different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the power of the polynomial kernel function can affect the recognition result.
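
    The kernel step behind KPCA can be illustrated with a minimal sketch: compute a polynomial kernel matrix over the training images, centre it in feature space, and project onto the leading eigenvectors. This is a generic illustration of kernel PCA, not the paper's MATLAB pipeline; the toy data and the degree-2 polynomial kernel are assumptions.

```python
import numpy as np

def kernel_pca(X, n_components=2, degree=2):
    """Project rows of X onto the top principal components in a
    polynomial-kernel feature space (illustrative sketch only)."""
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree                 # polynomial kernel matrix
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one    # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]   # pick the largest ones
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                            # projected training samples

# toy "images": 6 samples of 4 pixels each
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
features = kernel_pca(X, n_components=2)
```

    Raising or lowering `degree` changes the kernel's nonlinearity, which is the knob the abstract notes can affect the recognition result.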

  10. Probabilistic Open Set Recognition

    Science.gov (United States)

    Jain, Lalit Prithviraj

    support vector machines. Building from the success of statistical EVT based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between positive training sample and closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test time. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases. We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
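
    The core EVT idea, a class-inclusion probability that decays as a sample moves away from the positive training data, can be sketched with a Weibull survival function. The scale, shape, and threshold values below are illustrative assumptions, not the parameters the EVLB would actually fit:

```python
import math

def evt_inclusion_probability(distance, scale, shape):
    """Weibull-style probability that a test point still belongs to the
    positive class; decays toward 0 as distance grows (sketch, not EVLB)."""
    return math.exp(-((distance / scale) ** shape))

def open_set_decide(distance, scale=1.0, shape=2.0, threshold=0.5):
    """Accept as a known class only if the inclusion probability clears
    the threshold; otherwise label the sample 'unknown'."""
    p = evt_inclusion_probability(distance, scale, shape)
    return ("known", p) if p >= threshold else ("unknown", p)
```

    The threshold is what turns a closed-set scorer into an open-set one: samples far from every known class fall below it and are rejected rather than forced into the nearest class.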

  11. Three-dimensional fingerprint recognition by using convolution neural network

    Science.gov (United States)

    Tian, Qianyu; Gao, Nan; Zhang, Zonghua

    2018-01-01

    With the development of science and technology and the growth of social informatization, fingerprint recognition technology has become a hot research direction and has been widely applied in many fields because of its feasibility and reliability. The traditional two-dimensional (2D) fingerprint recognition method relies on matching feature points; it is not only time-consuming but also discards the three-dimensional (3D) information of the fingerprint, and its robustness declines seriously under fingerprint rotation, scaling, damage and other issues. To solve these problems, 3D fingerprints have been used for recognition. Because this is a new research field, there are still many challenging problems in 3D fingerprint recognition. This paper presents a new 3D fingerprint recognition method using a convolutional neural network (CNN). The 2D fingerprint image and the fingerprint depth map are each fed into a CNN, their features are fused by another CNN, and classification of the fused features completes 3D fingerprint recognition. This method not only preserves the 3D information of the fingerprint, but also solves the problem of the CNN input, and the recognition process is simpler than traditional feature-point matching algorithms. The 3D fingerprint recognition rate obtained with the CNN is compared with that of other fingerprint recognition algorithms. The experimental results show that the proposed 3D fingerprint recognition method has a good recognition rate and robustness.

  12. Use of the recognition heuristic depends on the domain's recognition validity, not on the recognition validity of selected sets of objects.

    Science.gov (United States)

    Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E

    2017-07-01

    According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, the success, and thus the usefulness, of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers are sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity), or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both in the selected set of objects and between domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.
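
    The heuristic and its validity measure are easy to state in code. A minimal sketch, with each paired comparison encoded as an invented (recognized_a, recognized_b, correct_answer) tuple:

```python
def recognition_heuristic(recognized_a, recognized_b):
    """If exactly one object is recognized, infer it has the higher
    criterion value; otherwise the heuristic does not apply."""
    if recognized_a and not recognized_b:
        return "a"
    if recognized_b and not recognized_a:
        return "b"
    return None  # not applicable: both or neither recognized

def recognition_validity(pairs):
    """Share of applicable pairs in which the recognized object really has
    the higher criterion value (the cue's validity in a set or domain)."""
    applicable = [(ra, rb, correct) for ra, rb, correct in pairs if ra != rb]
    if not applicable:
        return None
    hits = sum(1 for ra, rb, correct in applicable
               if (correct == "a") == ra)   # heuristic's pick was correct
    return hits / len(applicable)
```

    Computing `recognition_validity` over a selected set versus over the whole domain is exactly the distinction the experiments manipulate.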

  13. Separating recognition processes of declarative memory via anodal tDCS: boosting old item recognition by temporal and new item detection by parietal stimulation.

    Science.gov (United States)

    Pisoni, Alberto; Turi, Zsolt; Raithel, Almuth; Ambrus, Géza Gergely; Alekseichuk, Ivan; Schacht, Annekathrin; Paulus, Walter; Antal, Andrea

    2015-01-01

    There is emerging evidence from imaging studies that parietal and temporal cortices act together to achieve successful recognition of declarative information; nevertheless, the precise role of these regions remains elusive. To evaluate the role of these brain areas in declarative memory retrieval, we applied bilateral tDCS, with anode over the left and cathode over the right parietal or temporal cortices separately, during the recognition phase of a verbal learning paradigm using a balanced old-new decision task. In a parallel group design, we tested three different groups of healthy adults, matched for demographic and neurocognitive status: two groups received bilateral active stimulation of either the parietal or the temporal cortex, while a third group received sham stimulation. Accuracy, discriminability index (d') and reaction times of recognition memory performance were measurements of interest. The d' sensitivity index and accuracy percentage improved in both active stimulation groups, as compared with the sham one, while reaction times remained unaffected. Moreover, the analysis of accuracy revealed a different effect of tDCS for old and new item recognition. While the temporal group showed enhanced performance for old item recognition, the parietal group was better at correctly recognising new ones. Our results support an active role of both of these areas in memory retrieval, possibly underpinning different stages of the recognition process.

  14. On the relation between face and object recognition in developmental prosopagnosia

    DEFF Research Database (Denmark)

    Gerlach, Christian; Klargaard, Solja K.; Starrfelt, Randi

    2016-01-01

    There is an ongoing debate about whether face recognition and object recognition constitute separate domains. Clarification of this issue can have important theoretical implications as face recognition is often used as a prime example of domain-specificity in mind and brain. An important source of input to this debate comes from studies of individuals with developmental prosopagnosia, suggesting that face recognition can be selectively impaired. We put the selectivity hypothesis to test by assessing the performance of 10 individuals with developmental prosopagnosia on demanding tests of visual object processing involving both regular and degraded drawings. None of the individuals exhibited a clear dissociation between face and object recognition, and as a group they were significantly more affected by degradation of objects than control participants. Importantly, we also find positive...

  15. Fast Traffic Sign Recognition with a Rotation Invariant Binary Pattern Based Feature

    Directory of Open Access Journals (Sweden)

    Shouyi Yin

    2015-01-01

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neural Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results on the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed.
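
    Rotation invariance for binary patterns is typically obtained by taking the minimum over all circular rotations of the bit string, as in rotation-invariant LBP. A minimal sketch of that mapping (the RIBP's affine and Gaussian-space construction is not reproduced here):

```python
def rotation_invariant_pattern(bits):
    """Map an 8-neighbour binary pattern to a rotation-invariant code:
    the minimum integer value over all circular shifts of the bits."""
    n = len(bits)
    rotations = (bits[i:] + bits[:i] for i in range(n))
    return min(int("".join(str(b) for b in r), 2) for r in rotations)
```

    Because every rotation of the same neighbourhood collapses to one code, a sign rotated in the image plane still produces the same feature histogram.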

  16. Food-Induced Emotional Resonance Improves Emotion Recognition

    OpenAIRE

    Pandolfi, Elisa; Sacripante, Riccardo; Cardini, Flavia

    2016-01-01

    The effect of food substances on emotional states has been widely investigated, showing, for example, that eating chocolate is able to reduce negative mood. Here, for the first time, we have shown that the consumption of specific food substances is not only able to induce particular emotional states, but more importantly, to facilitate recognition of corresponding emotional facial expressions in others. Participants were asked to perform an emotion recognition task before and after eating eit...

  17. Impaired face recognition is associated with social inhibition.

    Science.gov (United States)

    Avery, Suzanne N; VanDerKlok, Ross M; Heckers, Stephan; Blackford, Jennifer U

    2016-02-28

    Face recognition is fundamental to successful social interaction. Individuals with deficits in face recognition are likely to have social functioning impairments that may lead to heightened risk for social anxiety. A critical component of social interaction is how quickly a face is learned during initial exposure to a new individual. Here, we used a novel Repeated Faces task to assess how quickly memory for faces is established. Face recognition was measured over multiple exposures in 52 young adults ranging from low to high in social inhibition, a core dimension of social anxiety. High social inhibition was associated with a smaller slope of change in recognition memory over repeated face exposure, indicating participants with higher social inhibition showed smaller improvements in recognition memory after seeing faces multiple times. We propose that impaired face learning is an important mechanism underlying social inhibition and may contribute to, or maintain, social anxiety. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Graphical symbol recognition

    OpenAIRE

    K.C. , Santosh; Wendling , Laurent

    2015-01-01

    The chapter focuses on one of the key issues in document image processing, i.e., graphical symbol recognition. Graphical symbol recognition is a sub-field of a larger research domain: pattern recognition. The chapter covers several approaches (i.e., statistical, structural and syntactic) and specially designed symbol recognition techniques inspired by real-world industrial problems. In general, it covers research problems and state-of-the-art methods that convey basic s...

  19. Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.

    Science.gov (United States)

    Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál

    2014-02-01

    Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding and thus lead to neurocognitive dysfunctions, such as deficits in facial affect recognition. To gain an insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event Related Spectral Perturbation (ERSP) and the Inter Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were found to be significantly weaker in patients compared with healthy controls, which is in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate a less effective functioning in the recognition process of facial features, which may contribute to a less effective social cognition in schizophrenia. PsycINFO Database Record (c) 2014 APA, all rights reserved.
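
    The Inter Trial Coherence (ITC) the abstract relies on is, at a given time-frequency point, just the length of the mean unit phase vector across trials. A hedged sketch with synthetic phase data:

```python
import numpy as np

def inter_trial_coherence(phases):
    """ITC: modulus of the mean unit phase vector across trials.
    1 means perfect phase locking; values near 0 mean random phase."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

# toy data: phase-locked trials vs. uniformly scattered phases
locked = [0.3] * 20
scattered = np.linspace(0, 2 * np.pi, 20, endpoint=False)
```

    Weaker phase-locking to faces, as reported for the patient group, corresponds to lower ITC values in the theta band around the N170 window.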

  20. The Influence of Emotion on Recognition Memory for Scenes

    OpenAIRE

    Pryde, Beatrice

    2012-01-01

    According to dual-process models, recognition memory is supported by two distinct processes: familiarity, a relatively automatic process that involves the retrieval of a previously encountered item, and recollection, a more effortful process that involves the retrieval of information associated with the context in which an item was encoded (Mickes, Wais & Wixted, 2009). There is a wealth of research suggesting that recognition memory performance is affected by the emotional content of stimul...

  1. Recognition of Face and Emotional Facial Expressions in Autism

    Directory of Open Access Journals (Sweden)

    Muhammed Tayyib Kadak

    2013-03-01

    Autism is a genetically transmitted neurodevelopmental disorder characterized by severe and permanent deficits in many areas of interpersonal relations, such as communication, social interaction and emotional responsiveness. Patients with autism show deficits in face recognition, eye contact and the recognition of emotional expressions. Both face recognition and the recognition of facial emotional expressions depend on face processing. Structural and functional impairments in the fusiform gyrus, amygdala, superior temporal sulcus and other brain regions lead to deficits in the recognition of faces and facial emotions. Studies therefore suggest that face processing deficits result in problems in the areas of social interaction and emotion in autism. Studies have revealed that children with autism have problems in recognizing facial expressions and rely on the mouth region more than the eye region. It has also been shown that autistic patients interpret ambiguous expressions as negative emotions. In autism, deficits at various stages of face processing, such as gaze detection, face identity and recognition of emotional expressions, have been determined so far. Social interaction impairments in autistic spectrum disorders originate from face processing deficits during infancy, childhood and adolescence. Recognition of faces and facial emotional expressions may be shaped either automatically, by orienting towards faces after birth, or by “learning” processes in developmental periods, such as identity and emotion processing. This article aims to review the neurobiological basis of face processing and the recognition of emotional facial expressions during normal development and in autism.

  2. Pattern recognition based on time-frequency analysis and convolutional neural networks for vibrational events in φ-OTDR

    Science.gov (United States)

    Xu, Chengjin; Guan, Junjun; Bao, Ming; Lu, Jiangang; Ye, Wei

    2018-01-01

    Based on vibration signals detected by a phase-sensitive optical time-domain reflectometer distributed optical fiber sensing system, this paper presents an implementation of time-frequency analysis and a convolutional neural network (CNN), used to classify different types of vibrational events. First, spectral subtraction and the short-time Fourier transform are used to enhance the time-frequency features of the vibration signals and transform the different types of vibration signals into spectrograms, which are input to the CNN for automatic feature extraction and classification. Finally, by replacing the softmax layer in the CNN with a multiclass support vector machine, the performance of the classifier is enhanced. Experiments show that after using this method to process 4000 vibration signal samples generated by four different vibration events, namely, digging, walking, vehicles passing, and damaging, the recognition rates of vibration events are over 90%. The experimental results prove that this method can automatically make an effective feature selection and greatly improve the classification accuracy of vibrational events in distributed optical fiber sensing systems.
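
    The spectrogram stage can be sketched with a plain short-time FFT. The frame length, hop size, and toy 50 Hz tone below are assumptions, and the spectral-subtraction denoising step is omitted:

```python
import numpy as np

def spectrogram(signal, frame_len=64, hop=32):
    """Magnitude spectrogram via a short-time FFT with a Hann window:
    the kind of time-frequency image fed to a CNN (sketch only)."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T  # (freq, time)

# toy vibration signal: a 50 Hz tone sampled at 1 kHz for one second
t = np.arange(1000) / 1000.0
spec = spectrogram(np.sin(2 * np.pi * 50 * t))
```

    Each column of `spec` is one analysis frame; stacking magnitudes over time yields the 2-D image that the CNN treats like any other picture.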

  3. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    Science.gov (United States)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems, in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. Face recognition means identifying a person from facial images and, in some sense, resembles factor analysis, i.e. the extraction of the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and, in particular, a large computational load in finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analysing the facial images in the spatial and frequency domains together. From the experimental results, it is observed that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
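
    The combination the paper describes can be sketched as: take a one-level Haar approximation of each image to shrink and smooth it, then run PCA on the resulting coefficient vectors. This is a generic numpy sketch with toy data, not the authors' MATLAB code:

```python
import numpy as np

def haar_ll(img):
    """One-level 2-D Haar decomposition, keeping only the low-low
    (approximation) band: a quarter-size, smoothed version of the image."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # average adjacent rows
    return (a[:, 0::2] + a[:, 1::2]) / 2.0    # then adjacent columns

def pca_project(X, n_components):
    """Project mean-centred rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T

rng = np.random.default_rng(1)
faces = rng.normal(size=(8, 16, 16))                    # 8 toy "face" images
coeffs = np.array([haar_ll(f).ravel() for f in faces])  # 8 x 64 features
features = pca_project(coeffs, n_components=3)          # 8 x 3
```

    Because the wavelet step quarters the pixel count before the eigendecomposition, the computational load of PCA drops sharply, which is the efficiency gain the abstract claims.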

  4. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework

    Directory of Open Access Journals (Sweden)

    Shengjing Wei

    2016-04-01

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Especially, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different size of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user’s training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  5. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    Science.gov (United States)

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of five components. Especially, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different size of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
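
    The code matching step can be sketched as follows: each sign word is a five-component code (hand shape, axis, orientation, rotation, trajectory), and an unknown gesture is assigned to the word whose code agrees with the predicted components on the most positions. The code table entries below are invented for illustration, not the paper's actual CSL codes:

```python
def classify_by_code(predicted_components, code_table):
    """Return the sign word whose code agrees with the predicted
    five-component tuple on the largest number of positions."""
    def score(code):
        return sum(p == c for p, c in zip(predicted_components, code))
    return max(code_table, key=lambda word: score(code_table[word]))

# hypothetical code table for three sign words
code_table = {
    "hello":  ("flat", "x", "up",   "none", "wave"),
    "thanks": ("flat", "y", "out",  "none", "arc"),
    "sorry":  ("fist", "x", "down", "cw",   "circle"),
}
word = classify_by_code(("flat", "x", "up", "none", "arc"), code_table)
```

    Extending the vocabulary then only requires adding a row to the code table, not retraining the component classifiers, which is what makes the framework vocabulary-extensible.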

  6. Distinct roles of the hippocampus and perirhinal cortex in GABAA receptor blockade-induced enhancement of object recognition memory.

    Science.gov (United States)

    Kim, Jong Min; Kim, Dong Hyun; Lee, Younghwan; Park, Se Jin; Ryu, Jong Hoon

    2014-03-13

    It is well known that the hippocampus plays a role in spatial and contextual memory, and that spatial information is tightly regulated by the hippocampus. However, it is still highly controversial whether the hippocampus plays a role in object recognition memory. In a pilot study, the administration of bicuculline, a GABAA receptor antagonist, enhanced memory in the passive avoidance task, but not in the novel object recognition task. In the present study, we hypothesized that these different results are related to the characteristics of each task and the different roles of the hippocampus and perirhinal cortex. A region-specific drug-treatment model was employed to clarify the role of the hippocampus and perirhinal cortex in object recognition memory. After a single habituation in the novel object recognition task, intra-perirhinal cortical injection of bicuculline increased, and intra-hippocampal injection decreased, the exploration time ratio for the novel object. In addition, when animals were repeatedly habituated to the context, intra-perirhinal cortical administration of bicuculline still increased the exploration time ratio for the novel object, but the effect of intra-hippocampal administration disappeared. Concurrent increases of c-Fos expression and ERK phosphorylation were observed in the perirhinal cortex of the object-with-context-exposed group after either single or repeated habituation to the context, but no changes were noted in the hippocampus. Altogether, these results suggest that object recognition memory formation requires the perirhinal cortex but not the hippocampus, and that hippocampal activation interferes with object recognition memory through the encoding of information about the unfamiliar environment. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Time-frequency feature analysis and recognition of fission neutrons signal based on support vector machine

    International Nuclear Information System (INIS)

    Jin Jing; Wei Biao; Feng Peng; Tang Yuelin; Zhou Mi

    2010-01-01

    Based on the interdependent relationship between fission neutrons (252Cf) and the fission chain (235U system), this paper presents time-frequency feature analysis and recognition of the fission neutron signal based on a support vector machine (SVM), through analysis of the signal characteristics and the measuring principle of the 252Cf fission neutron signal. The time-frequency characteristics and energy features of the fission neutron signal are extracted using wavelet decomposition and de-noising wavelet packet decomposition, and then applied to training and classification by means of a support vector machine based on statistical learning theory. The results show that it is effective to obtain features of the nuclear signal via wavelet decomposition and de-noising wavelet packet decomposition, and that the latter reflects the internal characteristics of the fission neutron system better. Once trained, the SVM classifier achieves an accuracy rate above 70%, overcoming the lack of training samples and verifying the effectiveness of the algorithm. (authors)
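
    The energy-feature idea can be sketched with a plain multi-level Haar decomposition (a simplification of the wavelet-packet analysis in the paper): the relative energy of each sub-band forms the feature vector handed to the SVM. The toy signal and three-level depth are assumptions:

```python
import numpy as np

def haar_step(x):
    """One level of a 1-D Haar transform: approximation and detail bands."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def wavelet_energy_features(signal, levels=3):
    """Relative energy per sub-band after a multi-level Haar decomposition;
    a simple stand-in for wavelet-packet energy features."""
    energies = []
    approx = signal
    for _ in range(levels):
        approx, detail = haar_step(approx)
        energies.append(float(np.sum(detail ** 2)))
    energies.append(float(np.sum(approx ** 2)))
    total = sum(energies)
    return [e / total for e in energies]

# toy signal: 64-sample sine, decomposed into 3 detail bands + approximation
feats = wavelet_energy_features(np.sin(0.3 * np.arange(64)), levels=3)
```

    Because the Haar transform is orthonormal, the sub-band energies partition the total signal energy, so the relative energies always sum to one regardless of the input.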

  8. Study of the vocal signal in the amplitude-time representation. Speech segmentation and recognition algorithms

    International Nuclear Information System (INIS)

    Baudry, Marc

    1978-01-01

    This dissertation presents an acoustical and phonetical study of the vocal signal. The complex pattern of the signal is segmented into simple sub-patterns, and each of these sub-patterns may in turn be segmented into still simpler, lower-level patterns. Applying pattern recognition techniques facilitates both this segmentation and the definition of the structural relations between the sub-patterns. In particular, we have developed syntactic techniques in which the context-sensitive rewriting rules are controlled by predicates over parameters evaluated on the sub-patterns themselves. This allows a purely syntactic analysis to be generalized by adding semantic information. The system we describe performs pre-classification and partial identification of the phonemes, as well as accurate detection of each pitch period. The voice signal is analysed directly in the amplitude-time representation. The system has been implemented on a mini-computer and works in real time. (author) [fr

  9. Low-Budget, Cost-Effective OCR: Optical Character Recognition for MS-DOS Micros.

    Science.gov (United States)

    Perez, Ernest

    1990-01-01

    Discusses optical character recognition (OCR) for use with MS-DOS microcomputers. Cost effectiveness is considered, three types of software approaches to character recognition are explained, hardware and operation requirements are described, possible library applications are discussed, future OCR developments are suggested, and a list of OCR…

  10. Real-time speech gisting for ATC applications

    Science.gov (United States)

    Dunkelberger, Kirk A.

    1995-06-01

    Command and control within the ATC environment remains primarily voice-based. Hence, automatic real-time, speaker-independent, continuous speech recognition (CSR) has many obvious applications and implied benefits to the ATC community: automated target tagging, aircraft compliance monitoring, controller training, automatic alarm disabling, display management, and many others. However, while current state-of-the-art CSR systems provide upwards of 98% word accuracy in laboratory environments, recent low-intrusion experiments in ATCT environments demonstrated less than 70% word accuracy in spite of significant investments in recognizer tuning. Acoustic channel irregularities and the varieties of controller/pilot grammar impact current CSR algorithms at their weakest points. It will be shown herein, however, that real-time context- and environment-sensitive gisting can provide key command phrase recognition rates of greater than 95% using the same low-intrusion approach. The combination of real-time inexact syntactic pattern recognition techniques and a tight integration of the CSR, gisting, and ATC database accessor system components is the key to these high phrase recognition rates. A system concept for real-time gisting in the ATC context is presented herein. After establishing an application context, the discussion presents a minimal CSR technology context and then focuses on the gisting mechanism, desirable interfaces into the ATCT database environment, and data and control flow within the prototype system. Results of recent tests for a subset of the functionality are presented together with suggestions for further research.
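
    A toy version of gisting: instead of transcribing every word, scan the utterance for key command phrases with pattern-based matching and discard the rest. The phrase patterns below are invented for illustration and are far simpler than real ATC phraseology or the paper's inexact syntactic matcher:

```python
import re

# hypothetical command patterns; a real gister would be derived from the
# actual controller/pilot phraseology grammar
PATTERNS = {
    "altitude_change": re.compile(
        r"\b(climb|descend) and maintain (?:flight level )?(\d+)"),
    "heading_change": re.compile(r"\bturn (left|right) heading (\d{3})\b"),
    "frequency_change": re.compile(r"\bcontact (\w+) on (\d{3}\.\d+)\b"),
}

def gist(utterance):
    """Return (command, captured arguments) for the first key phrase found,
    ignoring the rest of the utterance; None if no pattern matches."""
    text = utterance.lower()
    for name, pattern in PATTERNS.items():
        m = pattern.search(text)
        if m:
            return name, m.groups()
    return None
```

    Because only the key phrase must be matched, recognition errors in the surrounding filler words do not affect the extracted gist, which is why phrase-level rates can exceed word-level accuracy.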

  11. Consequences of temporary inhibition of the medial amygdala on social recognition memory performance in mice

    Directory of Open Access Journals (Sweden)

    Julia eNoack

    2015-04-01

Full Text Available Different lines of investigation suggest that the medial amygdala is causally involved in the processing of information linked to social behaviour in rodents. Here we investigated the consequences of temporary inhibition of the medial amygdala by bilateral injections of lidocaine on long-term social recognition memory as tested in the social discrimination task. Lidocaine or a control NaCl solution was infused immediately before learning or before retrieval. Our data show that lidocaine infusion immediately before learning did not affect long-term memory retrieval. However, intra-amygdalar lidocaine infusions immediately before choice interfered with correct memory retrieval. Analysis of the aggressive behaviour measured simultaneously during all sessions of the social recognition memory task supports the impression that the lidocaine dosage used here was effective, as it at least partially reduced the aggressive behaviour shown by the experimental subjects towards the juveniles. Surprisingly, infusions of NaCl solution also blocked recognition memory at both injection time points. The results are interpreted in the context of the importance of the medial amygdala for the processing of non-volatile odours as a major contributor to the olfactory signature for social recognition memory.

  12. The crystal structure of the Sox4 HMG domain-DNA complex suggests a mechanism for positional interdependence in DNA recognition.

    Science.gov (United States)

    Jauch, Ralf; Ng, Calista K L; Narasimhan, Kamesh; Kolatkar, Prasanna R

    2012-04-01

    It has recently been proposed that the sequence preferences of DNA-binding TFs (transcription factors) can be well described by models that include the positional interdependence of the nucleotides of the target sites. Such binding models allow for multiple motifs to be invoked, such as principal and secondary motifs differing at two or more nucleotide positions. However, the structural mechanisms underlying the accommodation of such variant motifs by TFs remain elusive. In the present study we examine the crystal structure of the HMG (high-mobility group) domain of Sox4 [Sry (sex-determining region on the Y chromosome)-related HMG box 4] bound to DNA. By comparing this structure with previously solved structures of Sox17 and Sox2, we observed subtle conformational differences at the DNA-binding interface. Furthermore, using quantitative electrophoretic mobility-shift assays we validated the positional interdependence of two nucleotides and the presence of a secondary Sox motif in the affinity landscape of Sox4. These results suggest that a concerted rearrangement of two interface amino acids enables Sox4 to accommodate primary and secondary motifs. The structural adaptations lead to altered dinucleotide preferences that mutually reinforce each other. These analyses underline the complexity of the DNA recognition by TFs and provide an experimental validation for the conceptual framework of positional interdependence and secondary binding motifs.

  13. Spoken word recognition without a TRACE

    Science.gov (United States)

    Hannagan, Thomas; Magnuson, James S.; Grainger, Jonathan

    2013-01-01

How do we map the rapid input of spoken language onto phonological and lexical representations over time? Attempts at psychologically tractable computational models of spoken word recognition tend either to ignore time or to transform the temporal input into a spatial representation. TRACE, a connectionist model with broad and deep coverage of speech perception and spoken word recognition phenomena, takes the latter approach, using exclusively time-specific units at every level of representation. TRACE reduplicates featural, phonemic, and lexical inputs at every time step in a large memory trace, with rich interconnections (excitatory forward and backward connections between levels and inhibitory links within levels). As the length of the memory trace is increased, or as the phoneme and lexical inventory of the model is increased to a realistic size, this reduplication of time- (temporal position) specific units leads to a dramatic proliferation of units and connections, raising the question of whether a more efficient approach is possible. Our starting point is the observation that models of visual object recognition—including visual word recognition—have grappled with the problem of spatial invariance, and arrived at solutions other than a fully-reduplicative strategy like that of TRACE. This inspires a new model of spoken word recognition that combines time-specific phoneme representations similar to those in TRACE with higher-level representations based on string kernels: temporally independent (time invariant) diphone and lexical units. This reduces the number of necessary units and connections by several orders of magnitude relative to TRACE. Critically, we compare the new model to TRACE on a set of key phenomena, demonstrating that the new model inherits much of the behavior of TRACE and that the drastic computational savings do not come at the cost of explanatory power. PMID:24058349
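The string-kernel idea can be illustrated with time-invariant diphones. The minimal sketch below builds contiguous diphone sets from phoneme strings and compares words by set overlap; the published model also scores non-adjacent phoneme pairs and uses learned weights, so this is only a simplified stand-in:

```python
def diphones(word):
    """Time-invariant diphone set of a phoneme string, e.g. 'kat' -> {'ka', 'at'}."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

def similarity(w1, w2):
    """Jaccard overlap of diphone sets: a crude stand-in for a string kernel."""
    d1, d2 = diphones(w1), diphones(w2)
    return len(d1 & d2) / len(d1 | d2)

# Similar words share diphones regardless of absolute temporal position:
print(similarity("kat", "katl"))  # shares 'ka', 'at' out of {'ka','at','tl'}
print(similarity("kat", "dog"))   # -> 0.0
```

Because diphone units are position-independent, no copy of the inventory per time step is needed, which is the source of the model's computational savings.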

  14. Speech recognition systems on the Cell Broadband Engine

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Y; Jones, H; Vaidya, S; Perrone, M; Tydlitat, B; Nanda, A

    2007-04-20

In this paper we describe our design, implementation, and first results of a prototype connected-phoneme-based speech recognition system on the Cell Broadband Engine™ (Cell/B.E.). Automatic speech recognition decodes speech samples into plain text (other representations are possible) and must process samples at real-time rates. Fortunately, the computational tasks involved in this pipeline are highly data-parallel and can receive significant hardware acceleration from vector-streaming architectures such as the Cell/B.E. Identifying and exploiting these parallelism opportunities is challenging, but also critical to improving system performance. We observed, from our initial performance timings, that a single Cell/B.E. processor can recognize speech from thousands of simultaneous voice channels in real time, a channel density that is orders of magnitude greater than the capacity of existing software speech recognizers based on CPUs (central processing units). This result emphasizes the potential for Cell/B.E.-based speech recognition and will likely lead to the future development of production speech systems using Cell/B.E. clusters.

  15. The effect of glucose administration on the recollection and familiarity components of recognition memory.

    Science.gov (United States)

    Sünram-Lea, Sandra I; Dewhurst, Stephen A; Foster, Jonathan K

    2008-01-01

Previous research has demonstrated that glucose administration facilitates long-term memory performance. The aim of the present research was to evaluate the effect of glucose administration on different components of long-term recognition memory. Fifty-six healthy young individuals received (a) a drink containing 25 g of glucose or (b) an inert placebo drink. Recollection and familiarity components of recognition memory were measured using the 'remember-know' paradigm. The results revealed that glucose administration led to a significantly increased proportion of recognition responses based on recollection, but had no effect on the proportion of recognition responses made through participants' detection of stimulus familiarity. Consequently, the data suggest that glucose administration facilitates recognition memory that is accompanied by recollection of contextual details and episodic richness. The findings also suggest that memory tasks that result in high levels of hippocampal activity may be more likely to be enhanced by glucose administration than tasks that are less reliant on medial temporal lobe structures.

  16. The recognition of facial emotion expressions in Parkinson's disease.

    Science.gov (United States)

    Assogna, Francesca; Pontieri, Francesco E; Caltagirone, Carlo; Spalletta, Gianfranco

    2008-11-01

A limited number of studies in Parkinson's Disease (PD) suggest a disturbance in the recognition of facial emotion expressions. In particular, impaired disgust recognition has been reported in unmedicated and medicated PD patients. However, the results are rather inconclusive regarding the degree and selectivity of the emotion recognition impairment, and an associated impairment of almost all basic facial emotions in PD has also been described. Few studies have investigated its relationship with neuropsychiatric and neuropsychological symptoms, with mainly negative results. This inconsistency may be due to many different problems, such as emotion assessment, perception deficits, cognitive impairment, behavioral symptoms, illness severity and antiparkinsonian therapy. Here we review the clinical characteristics and neural structures involved in the recognition of specific facial emotion expressions, and the plausible role of dopamine transmission and dopamine replacement therapy in these processes. It is clear that future studies should be directed at clarifying all these issues.

  17. Peptide Pattern Recognition for high-throughput protein sequence analysis and clustering

    DEFF Research Database (Denmark)

    Busk, Peter Kamp

    2017-01-01

    Large collections of protein sequences with divergent sequences are tedious to analyze for understanding their phylogenetic or structure-function relation. Peptide Pattern Recognition is an algorithm that was developed to facilitate this task but the previous version does only allow a limited...... number of sequences as input. I implemented Peptide Pattern Recognition as a multithread software designed to handle large numbers of sequences and perform analysis in a reasonable time frame. Benchmarking showed that the new implementation of Peptide Pattern Recognition is twenty times faster than...... the previous implementation on a small protein collection with 673 MAP kinase sequences. In addition, the new implementation could analyze a large protein collection with 48,570 Glycosyl Transferase family 20 sequences without reaching its upper limit on a desktop computer. Peptide Pattern Recognition...
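A minimal sketch of the multithreaded workload described above, assuming the unit of analysis is short peptide k-mers counted across many sequences (the peptide length, helper names, and toy sequences are illustrative, not the algorithm's actual internals):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

K = 4  # peptide length; illustrative, not the algorithm's actual choice

def kmers(seq):
    """All length-K peptides of a protein sequence."""
    return [seq[i:i + K] for i in range(len(seq) - K + 1)]

def count_kmers(sequences, workers=4):
    """Count peptide k-mers across many sequences using a thread pool,
    so large collections can be processed in a reasonable time frame."""
    total = Counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for counts in pool.map(lambda s: Counter(kmers(s)), sequences):
            total.update(counts)
    return total

seqs = ["MKVLAA", "KVLAAG", "MKVLAG"]
counts = count_kmers(seqs)
print(counts["KVLA"])  # 'KVLA' occurs in all three toy sequences
```

Shared peptides such as the one counted here are the kind of conserved pattern that lets related sequences be clustered together.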

  18. Obligatory and facultative brain regions for voice-identity recognition.

    Science.gov (United States)

    Roswandowitz, Claudia; Kappes, Claudia; Obrig, Hellmuth; von Kriegstein, Katharina

    2018-01-01

    Recognizing the identity of others by their voice is an important skill for social interactions. To date, it remains controversial which parts of the brain are critical structures for this skill. Based on neuroimaging findings, standard models of person-identity recognition suggest that the right temporal lobe is the hub for voice-identity recognition. Neuropsychological case studies, however, reported selective deficits of voice-identity recognition in patients predominantly with right inferior parietal lobe lesions. Here, our aim was to work towards resolving the discrepancy between neuroimaging studies and neuropsychological case studies to find out which brain structures are critical for voice-identity recognition in humans. We performed a voxel-based lesion-behaviour mapping study in a cohort of patients (n = 58) with unilateral focal brain lesions. The study included a comprehensive behavioural test battery on voice-identity recognition of newly learned (voice-name, voice-face association learning) and familiar voices (famous voice recognition) as well as visual (face-identity recognition) and acoustic control tests (vocal-pitch and vocal-timbre discrimination). The study also comprised clinically established tests (neuropsychological assessment, audiometry) and high-resolution structural brain images. The three key findings were: (i) a strong association between voice-identity recognition performance and right posterior/mid temporal and right inferior parietal lobe lesions; (ii) a selective association between right posterior/mid temporal lobe lesions and voice-identity recognition performance when face-identity recognition performance was factored out; and (iii) an association of right inferior parietal lobe lesions with tasks requiring the association between voices and faces but not voices and names. 
The results imply that the right posterior/mid temporal lobe is an obligatory structure for voice-identity recognition, while the inferior parietal lobe is

  19. Speech recognition using articulatory and excitation source features

    CERN Document Server

    Rao, K Sreenivasa

    2017-01-01

    This book discusses the contribution of articulatory and excitation source information in discriminating sound units. The authors focus on excitation source component of speech -- and the dynamics of various articulators during speech production -- for enhancement of speech recognition (SR) performance. Speech recognition is analyzed for read, extempore, and conversation modes of speech. Five groups of articulatory features (AFs) are explored for speech recognition, in addition to conventional spectral features. Each chapter provides the motivation for exploring the specific feature for SR task, discusses the methods to extract those features, and finally suggests appropriate models to capture the sound unit specific knowledge from the proposed features. The authors close by discussing various combinations of spectral, articulatory and source features, and the desired models to enhance the performance of SR systems.

  20. A Versatile Embedded Platform for EMG Acquisition and Gesture Recognition.

    Science.gov (United States)

    Benatti, Simone; Casamassima, Filippo; Milosevic, Bojan; Farella, Elisabetta; Schönle, Philipp; Fateh, Schekeb; Burger, Thomas; Huang, Qiuting; Benini, Luca

    2015-10-01

Wearable devices offer interesting features, such as low cost and user friendliness, but their use for medical applications is an open research topic, given the limited hardware resources they provide. In this paper, we present an embedded solution for real-time EMG-based hand gesture recognition. The work focuses on the multi-level design of the system, integrating the hardware and software components to develop a wearable device capable of acquiring and processing EMG signals for real-time gesture recognition. The system combines the accuracy of a custom analog front end with the flexibility of a low-power, high-performance microcontroller for on-board processing. Our system achieves the same accuracy as high-end and more expensive active EMG sensors used in applications with strict requirements on signal quality. At the same time, due to its flexible configuration, it can be compared to the few wearable platforms designed for EMG gesture recognition available on the market. We demonstrate that we reach similar or better performance while embedding the gesture recognition on board, with the benefit of cost reduction. To validate this approach, we collected a dataset of 7 gestures from 4 users, which was used to evaluate the impact of the number of EMG channels, the number of recognized gestures and the data rate on the recognition accuracy and on the computational demand of the classifier. As a result, we implemented an SVM recognition algorithm capable of real-time performance on the proposed wearable platform, achieving a classification rate of 90%, which is aligned with state-of-the-art off-line results, and a 29.7 mW power consumption, guaranteeing 44 hours of continuous operation with a 400 mAh battery.
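As a hedged sketch of such an EMG processing chain, the code below extracts a standard windowed-RMS feature from a toy signal and classifies it with a nearest-centroid rule as a simple stand-in for the paper's SVM; the signal values and per-gesture centroids are invented for illustration:

```python
import math

def rms_windows(signal, win=4):
    """Root-mean-square over non-overlapping windows: a standard EMG feature."""
    return [math.sqrt(sum(x * x for x in signal[i:i + win]) / win)
            for i in range(0, len(signal) - win + 1, win)]

def classify(features, centroids):
    """Nearest-centroid rule: a simple stand-in for the paper's SVM classifier."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Toy per-gesture centroids (illustrative values, not real EMG data).
centroids = {"rest": [0.1, 0.1], "fist": [0.8, 0.9]}
feats = rms_windows([0.7, -0.9, 0.8, -0.8, 0.9, -0.9, 0.9, -0.8], win=4)
print(classify(feats, centroids))  # high muscle activity -> "fist"
```

Keeping the features this cheap is what makes on-board, real-time classification plausible on a microcontroller-class device.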

  1. Statistical Pattern Recognition

    CERN Document Server

    Webb, Andrew R

    2011-01-01

    Statistical pattern recognition relates to the use of statistical techniques for analysing data measurements in order to extract information and make justified decisions.  It is a very active area of study and research, which has seen many advances in recent years. Applications such as data mining, web searching, multimedia data retrieval, face recognition, and cursive handwriting recognition, all require robust and efficient pattern recognition techniques. This third edition provides an introduction to statistical pattern theory and techniques, with material drawn from a wide range of fields,

  2. Multispectral Palmprint Recognition Using a Quaternion Matrix

    Directory of Open Access Journals (Sweden)

    Yafeng Li

    2012-04-01

Full Text Available Palmprints have been widely studied for biometric recognition for many years. Traditionally, a white light source is used for illumination. Recently, multispectral imaging has drawn attention because of its high recognition accuracy. Multispectral palmprint systems can provide more discriminant information under different illuminations in a short time, and thus can achieve better recognition accuracy. Previously, multispectral palmprint images were treated as a kind of multi-modal biometric, and a fusion scheme at the image level or matching score level was used. However, some spectral information will be lost during image-level or matching-score-level fusion. In this study, we propose a new method for multispectral images based on a quaternion model which can fully utilize the multispectral information. Firstly, multispectral palmprint images captured under red, green, blue and near-infrared (NIR) illuminations were represented by a quaternion matrix; then principal component analysis (PCA) and the discrete wavelet transform (DWT) were applied to the matrix to extract palmprint features. After that, Euclidean distance was used to measure the dissimilarity between different features. Finally, the sum of the two distances and a nearest-neighbour classifier were employed for the recognition decision. Experimental results showed that using the quaternion matrix can achieve a higher recognition rate. Given 3000 test samples from 500 palms, the recognition rate can be as high as 98.83%.
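The quaternion idea, in minimal form: each pixel's four spectral samples become the four components of one quaternion, so the bands are handled jointly instead of being fused afterwards. The sketch below shows only the representation and the Euclidean distance step; the PCA and DWT feature extraction are omitted, and the channel values are invented:

```python
import math

def to_quaternions(r, g, b, nir):
    """Encode four spectral channels as per-pixel quaternion components
    (r, g, b, nir), so all bands are processed jointly rather than fused later."""
    return list(zip(r, g, b, nir))

def distance(q1, q2):
    """Euclidean distance between two quaternion feature vectors."""
    return math.sqrt(sum((a - b) ** 2
                         for p, q in zip(q1, q2)
                         for a, b in zip(p, q)))

# Two toy 3-pixel palmprint patches (channel values are illustrative).
patch_a = to_quaternions([1, 2, 3], [0, 1, 0], [2, 2, 2], [5, 4, 3])
patch_b = to_quaternions([1, 2, 3], [0, 1, 0], [2, 2, 2], [5, 4, 3])
print(distance(patch_a, patch_b))  # -> 0.0 (identical patches)
```

A conventional fusion scheme would reduce the four channels to one score before comparison; here the cross-band structure survives into the distance computation.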

  3. Does the presence of priming hinder subsequent recognition or recall performance?

    Science.gov (United States)

    Stark, Shauna M; Gordon, Barry; Stark, Craig E L

    2008-02-01

Declarative and non-declarative memories are thought to be supported by two distinct memory systems that are often posited not to interact. However, Wagner, Maril, and Schacter (2000a) reported that, at the time priming was assessed, greater behavioural and neural priming was associated with lower levels of subsequent recognition memory, demonstrating an interaction between declarative and non-declarative memory. We examined this finding using a similar paradigm, in which participants made the same or different semantic word judgements following a short or long lag, with a subsequent memory test. We found a similar overall pattern of results, with greater behavioural priming associated with a decrease in recognition and recall performance. However, neither within-participant nor between-participant analyses revealed significant correlations between priming and subsequent memory performance. These data suggest that both lag and task have effects on priming and declarative memory performance, but that these effects are largely independent and occur in parallel.

  4. Facial emotion recognition in paranoid schizophrenia and autism spectrum disorder.

    Science.gov (United States)

    Sachse, Michael; Schlitt, Sabine; Hainz, Daniela; Ciaramidaro, Angela; Walter, Henrik; Poustka, Fritz; Bölte, Sven; Freitag, Christine M

    2014-11-01

Schizophrenia (SZ) and autism spectrum disorder (ASD) share deficits in emotion processing. In order to identify convergent and divergent mechanisms, we investigated facial emotion recognition in SZ, high-functioning ASD (HFASD), and typically developed controls (TD). Different degrees of task difficulty and emotion complexity (face, eyes; basic emotions, complex emotions) were used. Two Benton tests were implemented in order to assess potentially confounding visuo-perceptual functioning and facial processing. Nineteen participants with paranoid SZ, 22 with HFASD and 20 TD were included, aged between 14 and 33 years. Individuals with SZ were comparable to TD in all obtained emotion recognition measures, but showed reduced basic visuo-perceptual abilities. The HFASD group was impaired in the recognition of basic and complex emotions compared to both SZ and TD. When facial identity recognition was adjusted for, group differences remained for the recognition of complex emotions only. Our results suggest that there is a SZ subgroup with predominantly paranoid symptoms that does not show problems in face processing and emotion recognition, but visuo-perceptual impairments. They also confirm the notion of a general facial and emotion recognition deficit in HFASD. No shared emotion recognition deficit was found for paranoid SZ and HFASD, emphasizing the differential cognitive underpinnings of both disorders. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Likelihood ratio sequential sampling models of recognition memory.

    Science.gov (United States)

    Osth, Adam F; Dennis, Simon; Heathcote, Andrew

    2017-02-01

The mirror effect - a phenomenon whereby a manipulation produces opposite effects on hit and false alarm rates - is a benchmark regularity of recognition memory. A likelihood ratio decision process, basing recognition on the relative likelihood that a stimulus is a target or a lure, naturally predicts the mirror effect, and so has been widely adopted in quantitative models of recognition memory. Glanzer, Hilford, and Maloney (2009) demonstrated that likelihood ratio models, assuming Gaussian memory strength, are also capable of explaining regularities observed in receiver-operating characteristics (ROCs), such as greater target than lure variance. Despite its central place in theorising about recognition memory, however, this class of models has not been tested using response time (RT) distributions. In this article, we develop a linear approximation to the likelihood ratio transformation, which we show predicts the same regularities as the exact transformation. This enabled us to develop a tractable model of recognition-memory RT based on the diffusion decision model (DDM), with inputs (drift rates) provided by an approximate likelihood ratio transformation. We compared this "LR-DDM" to a standard DDM where all targets and lures receive their own drift rate parameters. Both were implemented as hierarchical Bayesian models and applied to four datasets. Model selection taking into account parsimony favored the LR-DDM, which requires fewer parameters than the standard DDM but still fits the data well. These results support log-likelihood based models as providing an elegant explanation of the regularities of recognition memory, not only in terms of choices made but also in terms of the times it takes to make them. Copyright © 2016 Elsevier Inc. All rights reserved.
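A minimal sketch of the LR-DDM idea, assuming equal-variance Gaussian strength distributions: the log-likelihood ratio of a memory-strength value serves as the drift rate of a simple diffusion walk to a response boundary. All parameter values are illustrative, not the fitted ones from the article:

```python
import math
import random

def log_lr(x, mu_t=1.0, mu_l=0.0, sigma=1.0):
    """Log-likelihood ratio that strength x came from a target rather than
    a lure, under equal-variance Gaussian strength distributions."""
    return ((x - mu_l) ** 2 - (x - mu_t) ** 2) / (2 * sigma ** 2)

def ddm_trial(drift, a=1.0, dt=0.01, noise=1.0, rng=random.Random(0)):
    """Accumulate noisy evidence until a boundary +/-a is hit;
    return (choice, decision time)."""
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0, 1)
        t += dt
    return ("old" if x > 0 else "new"), t

# A strength near the target mean gives a positive drift, so the walk
# tends to terminate at the "old" boundary.
print(ddm_trial(log_lr(1.2)))
```

Feeding the transformed strength (rather than a free per-condition drift) into the DDM is what cuts the parameter count relative to the standard model.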

  6. Simulation Analysis on Driving Behavior during Traffic Sign Recognition

    Directory of Open Access Journals (Sweden)

    Lishan Sun

    2011-05-01

Full Text Available Traffic signs convey trip information to drivers through vectors such as words, graphs and numbers. A traffic sign with excessive information often leaves drivers no time to read and understand it, leading to risky driving. How to clarify the relationship between traffic sign recognition and risky driving behavior remains an open problem. This paper presents a study reflective of such an effort. Twenty volunteers participated in a dynamic visual recognition experiment in a driving simulator, and data on several key indicators were obtained, including visual cognition time, vehicle acceleration and the offset distance from the middle lane, etc. Correlations among these indicators are discussed in terms of risky driving. The research findings directly show that drivers' behavior changes considerably during traffic sign recognition.

  7. Comparison of Ecological Micro-Expression Recognition in Patients with Depression and Healthy Individuals

    Directory of Open Access Journals (Sweden)

    Chuanlin Zhu

    2017-10-01

Full Text Available Previous studies have focused on the characteristics of ordinary facial expressions in patients with depression, and have not investigated the processing characteristics of ecological micro-expressions (MEs), i.e., MEs presented against different background expressions, in these patients. Based on this, adopting the ecological ME recognition paradigm, this study aimed to comparatively evaluate facial ME recognition in depressed and healthy individuals. The findings of the study are as follows: (1) background expression: the accuracy (ACC) in the neutral background condition tended to be higher than that in the fear background condition, and the reaction time (RT) in the neutral background condition was significantly longer than that in other backgrounds; the type of ME and its interaction with the type of background expression could affect participants' ecological ME recognition ACC and speed. Depression type: there was no significant difference between the ecological ME recognition ACC of patients with depression and healthy individuals, but the patients' RT was significantly longer than that of healthy individuals; and (2) patients with depression judged happy MEs presented against different backgrounds as neutral and judged neutral MEs presented against sad backgrounds as sad. The present study suggested the following: (1) ecological ME recognition was influenced by background expressions; the ACC for happy MEs was the highest, for neutral MEs moderate, and for sad and fearful MEs the lowest, and responses to happy MEs were significantly faster than to other MEs; it is necessary to conduct research on ecological ME recognition; (2) patients with depression were slower to identify ecological MEs than healthy individuals, indicating that the patients' cognitive function was impaired; and (3) the patients with depression showed a negative bias in the ecological ME recognition task, reflecting

  8. Perceptual Plasticity for Auditory Object Recognition

    Science.gov (United States)

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples

  9. Action Recognition Using Motion Primitives and Probabilistic Edit Distance

    DEFF Research Database (Denmark)

    Fihl, Preben; Holte, Michael Boelstoft; Moeslund, Thomas B.

    2006-01-01

In this paper we describe a recognition approach based on the notion of primitives. As opposed to recognizing actions based on temporal trajectories or temporal volumes, primitive-based recognition is based on representing a temporal sequence containing an action by only a few characteristic time...... into a string containing a sequence of symbols, each representing a primitive. After pruning the string, a probabilistic Edit Distance classifier is applied to identify which action best describes the pruned string. The approach is evaluated on five one-arm gestures and the recognition rate is 91...
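The classifier step can be illustrated with the standard unit-cost edit distance over primitive-symbol strings; the paper's probabilistic variant weights the edit operations, which is omitted here, and the primitive symbols and action templates below are invented:

```python
def edit_distance(a, b):
    """Unit-cost Levenshtein distance between two primitive-symbol strings."""
    # prev[j] holds the distance between a[:i-1] and b[:j]; one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

# Each letter stands for one detected motion primitive.
observed  = "ABBD"
templates = {"point": "ABCD", "wave": "XYZ"}
best = min(templates, key=lambda k: edit_distance(observed, templates[k]))
print(best, edit_distance(observed, templates[best]))  # -> point 1
```

Choosing the template with the smallest distance tolerates misdetected, missing, or spurious primitives in the observed string.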

  10. Primitive Based Action Representation and recognition

    DEFF Research Database (Denmark)

    Baby, Sanmohan

The presented work is aimed at designing a system that will model and recognize actions and their interaction with objects. Such a system is aimed at facilitating robot task learning. Activity modeling and recognition is very important for its potential applications in surveillance, human-machine interfaces, entertainment, biomechanics etc. Recent developments in neuroscience suggest that all actions are compositions of smaller units called primitives. Current work based on primitives for action recognition uses a supervised framework for specifying the primitives. We propose a method to extract...... primitives automatically. These primitives are to be used to generate actions based on certain rules for combining. These rules are expressed as a stochastic context-free grammar. A model merging approach is adopted to learn a Hidden Markov Model to fit the observed data sequences. The states of the HMM...

  11. Pattern recognition and modelling of earthquake registrations with interactive computer support

    International Nuclear Information System (INIS)

    Manova, Katarina S.

    2004-01-01

The object of the thesis is pattern recognition. Pattern recognition, i.e. classification, is applied in many fields: speech recognition, hand-printed character recognition, medical analysis, satellite and aerial-photo interpretation, biology, computer vision, information retrieval and so on. This thesis studies its applicability in seismology. Signal classification is an area of great importance in a wide variety of applications. This thesis deals with the problem of (automatic) classification of earthquake signals, which are non-stationary signals. Non-stationary signal classification is an area of active research in the signal and image processing community. The goal of the thesis is the recognition of earthquake signals according to their epicentral zone. Source classification, i.e. recognition, is based on the transformation of seismograms (earthquake registrations) into images via time-frequency transformations, and on applying image processing and pattern recognition techniques for feature extraction, classification and recognition. The tested data include local earthquakes from seismic regions in Macedonia. Using actual seismic data, it is shown that the proposed methods provide satisfactory results for classification and recognition. (Author)
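The seismogram-to-image step can be sketched as a minimal magnitude spectrogram: frame the signal and take the DFT magnitude of each frame. This is a generic time-frequency transform, not the thesis's specific choice, and the frame length and test tone are illustrative:

```python
import cmath
import math

def spectrogram(signal, frame=8):
    """Magnitude DFT of non-overlapping frames: a minimal time-frequency image,
    with one row per frame and one column per (non-redundant) frequency bin."""
    rows = []
    for start in range(0, len(signal) - frame + 1, frame):
        x = signal[start:start + frame]
        row = [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / frame)
                       for n in range(frame)))
               for k in range(frame // 2)]
        rows.append(row)
    return rows

# A pure tone at bin 2 concentrates its energy there in every frame.
tone = [math.cos(2 * math.pi * 2 * n / 8) for n in range(16)]
for row in spectrogram(tone):
    print([round(v, 2) for v in row])  # energy peaks in bin 2
```

Treating the resulting rows-by-bins array as an image is what lets ordinary image processing and pattern recognition techniques be applied to earthquake registrations.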

  12. Children's familiarity preference in self-directed study improves recognition memory

    NARCIS (Netherlands)

    Adams, K.A.; Kachergis, G.E.; Markant, D.; Gunzelmann, G.; Howes, A.; Tenbrink, T.; Davelaar, E.

    2017-01-01

    In both adults and school-age children, volitional control over the presentation of stimuli during study leads to enhanced recognition memory. Yet little is known about how very young learners choose to allocate their time and attention during self-directed study. Using a recognition memory task, we

  13. Rotation-invariant neural pattern recognition system with application to coin recognition.

    Science.gov (United States)

    Fukumi, M; Omatu, S; Takeda, F; Kosaka, T

    1992-01-01

    In pattern recognition it is often necessary to classify patterns that have undergone a transformation. A neural pattern recognition system insensitive to rotation of the input pattern by various degrees is proposed. The system consists of a fixed invariance network with many slabs and a trainable multilayered network. The system was applied to a rotation-invariant coin recognition problem to distinguish between a 500 yen coin and a 500 won coin. The results show that the approach works well for recognizing patterns under variable rotation.
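    One common way to build a rotation-insensitive representation (a hedged sketch, not the paper's slab network): sample the pattern along a ring around its center and keep only the DFT magnitudes of that angular signature. A rotation of the image becomes a cyclic shift of the signature, which leaves the magnitudes unchanged.

```python
import cmath
import math

def angular_invariant(signature):
    """DFT magnitudes of an angular signature sampled on a ring around
    the pattern's center; rotating the pattern cyclically shifts the
    signature, so these magnitudes are rotation-invariant features."""
    n = len(signature)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(signature)))
            for k in range(n)]

# The same ring read from an image rotated by 3 sampling steps.
sig = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
rotated = sig[3:] + sig[:3]
```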

  14. Exhibits Recognition System for Combining Online Services and Offline Services

    Science.gov (United States)

    Ma, He; Liu, Jianbo; Zhang, Yuan; Wu, Xiaoyu

    2017-10-01

    In order to achieve more convenient and accurate digital museum navigation, we have developed a real-time, online-to-offline museum exhibits recognition system using an image recognition method based on deep learning. In this paper, the client and server of the system are separated and connected through HTTP. Firstly, using the client app on an Android mobile phone, the user can take pictures and upload them to the server. Secondly, the features of the picture are extracted using the deep learning network on the server. With the help of these features, the pictures the user uploaded are classified with a well-trained SVM. Finally, the classification results are sent to the client, and the detailed exhibit introduction corresponding to the classification results is shown in the client app. Experimental results demonstrate that the recognition accuracy is close to 100% and the computing time from image upload to display of the exhibit information is less than 1 s. By means of the exhibition image recognition algorithm, our exhibits recognition system can deliver detailed online exhibition information to users in the offline exhibition hall, achieving better digital navigation.
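    The server-side classification step, deep features scored by a trained SVM, can be sketched as a one-vs-rest linear decision rule. The class names, feature dimensions, and weights below are invented for illustration; the paper's deep features and trained SVM are not reproduced here.

```python
def svm_decide(models, features):
    """One-vs-rest linear SVM inference: each class has a trained weight
    vector and bias; the class with the highest decision value
    w . x + b wins. Weights here are illustrative, not trained."""
    best_label, best_score = None, float("-inf")
    for label, (w, b) in models.items():
        s = sum(wi * xi for wi, xi in zip(w, features)) + b
        if s > best_score:
            best_label, best_score = label, s
    return best_label

# Hypothetical two-exhibit model over 3-dimensional features.
models = {
    "bronze_vessel": ([1.0, -0.5, 0.2], 0.1),
    "porcelain_vase": ([-0.8, 0.9, 0.0], -0.2),
}
print(svm_decide(models, [0.9, 0.1, 0.3]))  # → bronze_vessel
```

    In the described system the feature vector would come from the deep network and the response would travel back to the Android client over HTTP.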

  15. Concreteness norms for 1,659 French words: Relationships with other psycholinguistic variables and word recognition times.

    Science.gov (United States)

    Bonin, Patrick; Méot, Alain; Bugaiska, Aurélia

    2018-02-12

    Words that correspond to a potential sensory experience-concrete words-have long been found to possess a processing advantage over abstract words in various lexical tasks. We collected norms of concreteness for a set of 1,659 French words, together with other psycholinguistic norms that were not available for these words-context availability, emotional valence, and arousal-but which are important if we are to achieve a better understanding of the meaning of concreteness effects. We then investigated the relationships of concreteness with these newly collected variables, together with other psycholinguistic variables that were already available for this set of words (e.g., imageability, age of acquisition, and sensory experience ratings). Finally, thanks to the variety of psychological norms available for this set of words, we decided to test further the embodied account of concreteness effects in visual-word recognition, championed by Kousta, Vigliocco, Vinson, Andrews, and Del Campo (Journal of Experimental Psychology: General, 140, 14-34, 2011). Similarly, we investigated the influences of concreteness in three word recognition tasks-lexical decision, progressive demasking, and word naming-using a multiple regression approach, based on the reaction times available in Chronolex (Ferrand, Brysbaert, Keuleers, New, Bonin, Méot, Pallier, Frontiers in Psychology, 2; 306, 2011). The norms can be downloaded as supplementary material provided with this article.

  16. Falling out of time: enhanced memory for scenes presented at behaviorally irrelevant points in time in posttraumatic stress disorder (PTSD).

    Science.gov (United States)

    Levy-Gigi, Einat; Kéri, Szabolcs

    2012-01-01

    Spontaneous encoding of the visual environment depends on the behavioral relevance of the task performed simultaneously. If participants identify target letters or auditory tones while viewing a series of briefly presented natural and urban scenes, they demonstrate effective scene recognition only when a target, but not a behaviorally irrelevant distractor, appears together with the scene. Here, we show that individuals with posttraumatic stress disorder (PTSD), who witnessed the red sludge disaster in Hungary, show the opposite pattern of performance: enhanced recognition of scenes presented together with distractors and deficient recognition of scenes presented with targets. The recognition of trauma-related and neutral scenes was not different in individuals with PTSD. We found a positive correlation between memory for scenes presented with auditory distractors and re-experiencing symptoms (memory intrusions and flashbacks). These results suggest that abnormal encoding of visual scenes at behaviorally irrelevant events might be associated with intrusive experiences by disrupting the flow of time.

  17. Falling out of time: enhanced memory for scenes presented at behaviorally irrelevant points in time in posttraumatic stress disorder (PTSD).

    Directory of Open Access Journals (Sweden)

    Einat Levy-Gigi

    Full Text Available Spontaneous encoding of the visual environment depends on the behavioral relevance of the task performed simultaneously. If participants identify target letters or auditory tones while viewing a series of briefly presented natural and urban scenes, they demonstrate effective scene recognition only when a target, but not a behaviorally irrelevant distractor, appears together with the scene. Here, we show that individuals with posttraumatic stress disorder (PTSD), who witnessed the red sludge disaster in Hungary, show the opposite pattern of performance: enhanced recognition of scenes presented together with distractors and deficient recognition of scenes presented with targets. The recognition of trauma-related and neutral scenes was not different in individuals with PTSD. We found a positive correlation between memory for scenes presented with auditory distractors and re-experiencing symptoms (memory intrusions and flashbacks). These results suggest that abnormal encoding of visual scenes at behaviorally irrelevant events might be associated with intrusive experiences by disrupting the flow of time.

  18. Temporal lobe structures and facial emotion recognition in schizophrenia patients and nonpsychotic relatives.

    Science.gov (United States)

    Goghari, Vina M; Macdonald, Angus W; Sponheim, Scott R

    2011-11-01

    Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions.

  19. Emotional face recognition deficit in amnestic patients with mild cognitive impairment: behavioral and electrophysiological evidence

    Directory of Open Access Journals (Sweden)

    Yang L

    2015-08-01

    Full Text Available Linlin Yang, Xiaochuan Zhao, Lan Wang, Lulu Yu, Mei Song, Xueyi Wang Department of Mental Health, The First Hospital of Hebei Medical University, Hebei Medical University Institute of Mental Health, Shijiazhuang, People’s Republic of China Abstract: Amnestic mild cognitive impairment (MCI) has been conceptualized as a transitional stage between healthy aging and Alzheimer’s disease. Thus, understanding emotional face recognition deficits in patients with amnestic MCI could be useful in determining the progression of amnestic MCI. The purpose of this study was to investigate the features of emotional face processing in amnestic MCI by using event-related potentials (ERPs). Patients with amnestic MCI and healthy controls performed a face recognition task, giving old/new responses to previously studied and novel faces with different emotional messages as the stimulus material. Using the learning-recognition paradigm, the experiments were divided into two steps, i.e., a learning phase and a test phase. ERPs were analyzed from electroencephalographic recordings. The behavioral data indicated high emotion classification accuracy for patients with amnestic MCI and for healthy controls. The mean percentage of correct classifications was 81.19% for patients with amnestic MCI and 96.46% for controls. Our ERP data suggest that patients with amnestic MCI were still able to undertake personalized processing for negative faces, but not for neutral or positive faces, in the early frontal processing stage. In the early time window, no differences in the frontal old/new effect were found between patients with amnestic MCI and normal controls. However, in the late time window, the three types of stimuli did not elicit any parietal old/new effects in patients with amnestic MCI, suggesting their recollection was impaired. This impairment may be closely associated with amnestic MCI disease. We conclude from our data that face recognition processing and emotional memory is

  20. Memory evaluation in mild cognitive impairment using recall and recognition tests.

    Science.gov (United States)

    Bennett, Ilana J; Golob, Edward J; Parker, Elizabeth S; Starr, Arnold

    2006-11-01

    Amnestic mild cognitive impairment (MCI) is a selective episodic memory deficit that often indicates early Alzheimer's disease. Episodic memory function in MCI is typically defined by deficits in free recall, but can also be tested using recognition procedures. To assess both recall and recognition in MCI, MCI (n = 21) and older comparison (n = 30) groups completed the USC-Repeatable Episodic Memory Test. Subjects memorized two verbally presented 15-item lists. One list was used for three free recall trials, immediately followed by yes/no recognition. The second list was used for three-alternative forced-choice recognition. Relative to the comparison group, MCI had significantly fewer hits and more false alarms in yes/no recognition, and were less accurate in forced-choice recognition. Signal detection analysis showed that group differences were not due to response bias. Discriminant function analysis showed that yes/no recognition was a better predictor of group membership than free recall or forced-choice measures. MCI subjects recalled fewer items than comparison subjects, with no group differences in repetitions, intrusions, serial position effects, or measures of recall strategy (subjective organization, recall consistency). Performance deficits on free recall and recognition in MCI suggest a combination of both tests may be useful for defining episodic memory impairment associated with MCI and early Alzheimer's disease.
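    The signal detection analysis mentioned above separates sensitivity from response bias. A minimal sketch of the standard indices (not the authors' code): d' measures discrimination between studied and novel items, and the criterion c measures bias toward answering "yes".

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal-detection indices for a yes/no recognition test:
    d' = z(H) - z(FA) is sensitivity;
    c  = -(z(H) + z(FA)) / 2 is response bias (0 means unbiased).
    Rates must lie strictly between 0 and 1."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2

# A group with 80% hits and 20% false alarms: decent sensitivity, no bias.
d, c = dprime_and_criterion(0.80, 0.20)
```

    Equal d' but different c across groups would indicate a shift in willingness to say "old" rather than a true memory difference, which is the distinction the authors' analysis rules out.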

  1. The effect of mood-context on visual recognition and recall memory.

    Science.gov (United States)

    Robinson, Sarita J; Rollings, Lucy J L

    2011-01-01

    Although it is widely known that memory is enhanced when encoding and retrieval occur in the same state, the impact of elevated stress/arousal is less understood. This study explores mood-dependent memory's effects on visual recognition and recall of material memorized either in a neutral mood or under higher stress/arousal levels. Participants' (N = 60) recognition and recall were assessed while they experienced either the same or a mismatched mood at retrieval. The results suggested that both visual recognition and recall memory were higher when participants experienced the same mood at encoding and retrieval compared with those who experienced a mismatch in mood context between encoding and retrieval. These findings offer support for a mood dependency effect on both the recognition and recall of visual information.

  2. The Role of Binocular Disparity in Rapid Scene and Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Matteo Valsecchi

    2013-04-01

    Full Text Available We investigated the contribution of binocular disparity to the rapid recognition of scenes and simpler spatial patterns using a paradigm combining backward masked stimulus presentation and short-term match-to-sample recognition. First, we showed that binocular disparity did not contribute significantly to the recognition of briefly presented natural and artificial scenes, even when the availability of monocular cues was reduced. Subsequently, using dense random dot stereograms as stimuli, we showed that observers were in principle able to extract spatial patterns defined only by disparity under brief, masked presentations. Comparing our results with the predictions from a cue-summation model, we showed that combining disparity with luminance did not per se disrupt the processing of disparity. Our results suggest that the rapid recognition of scenes is mediated mostly by a monocular comparison of the images, although we can rely on stereo in fast pattern recognition.

  3. The relationship between face recognition ability and socioemotional functioning throughout adulthood.

    Science.gov (United States)

    Turano, Maria Teresa; Viggiano, Maria Pia

    2017-11-01

    The relationship between face recognition ability and socioemotional functioning has been widely explored. However, how aging modulates this association regarding both objective performance and subjective-perception is still neglected. Participants, aged between 18 and 81 years, performed a face memory test and completed subjective face recognition and socioemotional questionnaires. General and social anxiety, and neuroticism traits account for the individual variation in face recognition abilities during adulthood. Aging modulates these relationships because as they age, individuals that present a higher level of these traits also show low-level face recognition ability. Intriguingly, the association between depression and face recognition abilities is evident with increasing age. Overall, the present results emphasize the importance of embedding face metacognition measurement into the context of these studies and suggest that aging is an important factor to be considered, which seems to contribute to the relationship between socioemotional and face-cognitive functioning.

  4. Incorporating Duration Information in Activity Recognition

    Science.gov (United States)

    Chaurasia, Priyanka; Scotney, Bryan; McClean, Sally; Zhang, Shuai; Nugent, Chris

    Activity recognition has become a key issue in smart home environments. The problem involves learning high level activities from low level sensor data. Activity recognition can depend on several variables; one such variable is duration of engagement with sensorised items or duration of intervals between sensor activations that can provide useful information about personal behaviour. In this paper a probabilistic learning algorithm is proposed that incorporates episode, time and duration information to determine inhabitant identity and the activity being undertaken from low level sensor data. Our results verify that incorporating duration information consistently improves the accuracy.
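    One simple way to fold duration into a probabilistic activity classifier (a sketch under a naive-Bayes assumption, not the authors' algorithm) is to treat the discretized duration as an extra feature alongside the sensor identity:

```python
from collections import Counter, defaultdict

def train(observations):
    """observations: (sensor, duration_bucket, activity) triples.
    Returns class priors and per-class feature counts."""
    prior = Counter()
    feats = defaultdict(Counter)
    for sensor, dur, activity in observations:
        prior[activity] += 1
        feats[activity][("sensor", sensor)] += 1
        feats[activity][("dur", dur)] += 1
    return prior, feats

def classify(prior, feats, sensor, dur):
    """Pick the activity maximising P(a) * P(sensor|a) * P(dur|a),
    with add-one smoothing on each conditional."""
    total = sum(prior.values())
    best, best_p = None, -1.0
    for a in prior:
        n = prior[a]
        p = n / total
        p *= (feats[a][("sensor", sensor)] + 1) / (n + 2)
        p *= (feats[a][("dur", dur)] + 1) / (n + 2)
        if p > best_p:
            best, best_p = a, p
    return best

# Toy data: which sensor fired and how long it stayed engaged.
data = [("kettle", "short", "make_tea"), ("kettle", "short", "make_tea"),
        ("fridge", "long", "cook_meal"), ("kettle", "long", "cook_meal")]
prior, feats = train(data)
```

    With the duration bucket included, the same sensor event ("kettle") can support different activities depending on how long the item was engaged, which is the paper's central point.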

  5. Acute adrenal insufficiency: an aide-memoire of the critical importance of its recognition and prevention.

    Science.gov (United States)

    Gargya, A; Chua, E; Hetherington, J; Sommer, K; Cooper, M

    2016-03-01

    Adrenal crisis is a life-threatening emergency that causes significant excess mortality in patients with adrenal insufficiency. Delayed recognition by medical staff of an impending adrenal crisis and failure to give timely hydrocortisone therapy within the emergency department continue to be commonly encountered, even in metropolitan teaching hospitals. Within the authors' institutions, several cases of poorly handled adrenal crises have occurred over the last 2 years. Anecdotal accounts from members of the Addison's support group suggest that these issues are common in Australia. This manuscript is a timely reminder for clinical staff on the critical importance of the recognition, treatment and prevention of adrenal crisis. The manuscript: (i) outlines a case and the clinical outcome of sub-optimally managed adrenal crisis, (ii) summarises the clinical features and acute management of adrenal crisis, (iii) provides recommendations on the prevention of adrenal crisis and (iv) provides guidance on the management of 'sick days' in patients with adrenal insufficiency. © 2016 Royal Australasian College of Physicians.

  6. Emotional recognition in depressed epilepsy patients.

    Science.gov (United States)

    Brand, Jesse G; Burton, Leslie A; Schaffer, Sarah G; Alper, Kenneth R; Devinsky, Orrin; Barr, William B

    2009-07-01

    The current study examined the relationship between emotional recognition and depression using the Minnesota Multiphasic Personality Inventory, Second Edition (MMPI-2), in a population with epilepsy. Participants were a mixture of surgical candidates and patients receiving neuropsychological testing as part of a comprehensive evaluation. Results suggested that patients with epilepsy reporting increased levels of depression (Scale D) performed better on an index of simple facial recognition than patients reporting low levels of depression, and that depression was associated with poor prosody discrimination. Further, it is notable that more than half of the present sample had significantly elevated Scale D scores. The potential effects of a mood-congruent bias and implications for social functioning in depressed patients with epilepsy are discussed.

  7. Facial Expression at Retrieval Affects Recognition of Facial Identity

    Directory of Open Access Journals (Sweden)

    Wenfeng eChen

    2015-06-01

    Full Text Available It is well known that memory can be modulated by emotional stimuli at the time of encoding and consolidation. For example, happy faces create better identity recognition than faces with certain other expressions. However, the influence of facial expression at the time of retrieval remains unknown in the literature. To separate the potential influence of expression at retrieval from its effects at earlier stages, we had participants learn neutral faces but manipulated facial expression at the time of memory retrieval in a standard old/new recognition task. The results showed a clear effect of facial expression, where happy test faces were identified more successfully than angry test faces. This effect is unlikely due to greater image similarity between the neutral learning face and the happy test face, because image analysis showed that the happy test faces are in fact less similar to the neutral learning faces relative to the angry test faces. In the second experiment, we investigated whether this emotional effect is influenced by the expression at the time of learning. We employed angry or happy faces as learning stimuli, and angry, happy, and neutral faces as test stimuli. The results showed that the emotional effect at retrieval is robust across different encoding conditions with happy or angry expressions. These findings indicate that emotional expressions affect the retrieval process in identity recognition, and identity recognition does not rely on emotional association between learning and test faces.

  8. Long Short-Term Memory Projection Recurrent Neural Network Architectures for Piano’s Continuous Note Recognition

    Directory of Open Access Journals (Sweden)

    YuKang Jia

    2017-01-01

    Full Text Available Long Short-Term Memory (LSTM) is a kind of Recurrent Neural Network (RNN) suited to time series, which has achieved good performance in speech recognition and image recognition. Long Short-Term Memory Projection (LSTMP) is a variant of LSTM that further optimizes the speed and performance of LSTM by adding a projection layer. As LSTM and LSTMP have performed well in pattern recognition, in this paper we combine them with Connectionist Temporal Classification (CTC) to study continuous piano note recognition for robotics. Based on the Beijing Forestry University music library, we conduct experiments to show the recognition rates and numbers of iterations of LSTM with a single layer, LSTMP with a single layer, and Deep LSTM (DLSTM; LSTM with multiple layers). As a result, the single-layer LSTMP performs much better than the single-layer LSTM in both time and recognition rate; that is, LSTMP has fewer parameters and therefore reduces the training time, and moreover, benefiting from the projection layer, LSTMP has better performance, too. The best recognition rate of LSTMP is 99.8%. As for DLSTM, the recognition rate can reach 100% because of the effectiveness of the deep structure, but compared with the single-layer LSTMP, DLSTM needs more training time.
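    The claim that LSTMP has fewer parameters than LSTM follows directly from the standard parameter-count formulas. The layer sizes below are illustrative (the paper's actual network sizes are not given here); peephole connections are ignored.

```python
def lstm_params(n_in, n_cell):
    """Weights of a standard LSTM layer: four gates, each with input,
    recurrent, and bias terms (peepholes ignored)."""
    return 4 * (n_in * n_cell + n_cell * n_cell + n_cell)

def lstmp_params(n_in, n_cell, n_proj):
    """LSTMP feeds a projected state of size n_proj back as the recurrent
    input, adding one n_cell x n_proj projection matrix but shrinking
    every recurrent weight matrix from n_cell to n_proj columns."""
    return 4 * (n_in * n_cell + n_proj * n_cell + n_cell) + n_cell * n_proj

# Illustrative sizes: 39-dim input features, 512 cells, 128-dim projection.
full = lstm_params(39, 512)
projected = lstmp_params(39, 512, 128)
```

    With these sizes the projection cuts the layer from about 1.13M to about 0.41M weights, which is why LSTMP trains faster at comparable capacity.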

  9. Artificial Neural Network Application for Partial Discharge Recognition: Survey and Future Directions

    Directory of Open Access Journals (Sweden)

    Abdullahi Abubakar Mas’ud

    2016-07-01

    Full Text Available In order to investigate how artificial neural networks (ANNs) have been applied for partial discharge (PD) pattern recognition, this paper reviews recent progress made on ANN development for PD classification via a literature survey. Contributions from several authors are presented and discussed. High recognition rates have been recorded for several PD faults, but many factors still hinder correct recognition of PD by the ANN, such as high-amplitude noise or the wide spectral content typical of industrial environments, trial-and-error approaches to determining an optimum ANN, multiple PD sources acting simultaneously, the lack of a comprehensive and up-to-date databank of PD faults, and the appropriate selection of the characteristics that allow correct recognition of the type of source; these issues are currently being addressed by researchers. Several suggestions for improvement proposed by the authors include: (1) determining the optimum weights in training the ANN; (2) using PD data captured over long stressing periods in training the ANN; (3) having the ANN recognize different PD degradation levels; (4) using the same resolution sizes of the PD patterns when training and testing the ANN with different PD datasets; (5) understanding the characteristics of multiple concurrent PD faults and effectively recognizing them; and (6) developing techniques to shorten the training time for the ANN as applied to PD recognition. Finally, this paper critically assesses the suitability of ANNs for both online and offline PD detection, outlining the advantages to practitioners in the field. It is possible for ANNs to determine the stage of degradation of the PD, thereby giving an indication of the seriousness of the fault.

  10. Implementing a Real-Time Suggestion Service in a Library Discovery Layer

    Directory of Open Access Journals (Sweden)

    Benjamin Pennell

    2010-06-01

    Full Text Available As part of an effort to improve user interactions with authority data in its online catalog, the UNC Chapel Hill Libraries have developed and implemented a system for providing real-time query suggestions from records found within its catalog. The system takes user input as it is typed to predict likely title, author, or subject matches in a manner functionally similar to the systems found on commercial websites such as google.com or amazon.com. This paper discusses the technologies, decisions and methodologies that went into the implementation of this feature, as well as analysis of its impact on user search behaviors.
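    The core of such a type-ahead service is fast prefix lookup over indexed headings. A minimal sketch follows (binary search over a sorted list; a production discovery layer like the one described would typically use a search index, and the headings below are invented):

```python
import bisect

def suggest(sorted_terms, prefix, limit=5):
    """Return up to `limit` catalog headings beginning with `prefix`,
    found by binary search over a pre-sorted heading list."""
    lo = bisect.bisect_left(sorted_terms, prefix)
    out = []
    for term in sorted_terms[lo:lo + limit]:
        if not term.startswith(prefix):
            break  # sorted order: no later term can match either
        out.append(term)
    return out

headings = sorted(["dickens, charles", "dickinson, emily",
                   "diderot, denis", "dewey, john"])
print(suggest(headings, "dick"))  # → ['dickens, charles', 'dickinson, emily']
```

    Each keystroke in the search box would issue one such lookup, so responses must be fast enough to keep up with typing.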

  11. Oxytocin, vasopressin and estrogen receptor gene expression in relation to social recognition in female mice.

    Science.gov (United States)

    Clipperton-Allen, Amy E; Lee, Anna W; Reyes, Anny; Devidze, Nino; Phan, Anna; Pfaff, Donald W; Choleris, Elena

    2012-02-28

    Inter- and intra-species differences in social behavior and recognition-related hormones and receptors suggest that different distribution and/or expression patterns may relate to social recognition. We used qRT-PCR to investigate naturally occurring differences in expression of estrogen receptor-alpha (ERα), ER-beta (ERβ), progesterone receptor (PR), oxytocin (OT) and receptor, and vasopressin (AVP) and receptors in proestrous female mice. Following four 5 min exposures to the same two conspecifics, one was replaced with a novel mouse in the final trial (T5). Gene expression was examined in mice showing high (85-100%) and low (40-60%) social recognition scores (i.e., preferential novel mouse investigation in T5) in eight socially-relevant brain regions. Results supported OT and AVP involvement in social recognition, and suggest that in the medial preoptic area, increased OT and AVP mRNA, together with ERα and ERβ gene activation, relate to improved social recognition. Initial social investigation correlated with ERs, PR and OTR in the dorsolateral septum, suggesting that these receptors may modulate social interest without affecting social recognition. Finally, increased lateral amygdala gene activation in the LR mice may be associated with general learning impairments, while decreased lateral amygdala activity may indicate more efficient cognitive mechanisms in the HR mice. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. [Measuring impairment of facial affects recognition in schizophrenia. Preliminary study of the facial emotions recognition task (TREF)].

    Science.gov (United States)

    Gaudelus, B; Virgile, J; Peyroux, E; Leleu, A; Baudouin, J-Y; Franck, N

    2015-06-01

    The impairment of social cognition, including facial affects recognition, is a well-established trait in schizophrenia, and specific cognitive remediation programs focusing on facial affects recognition have been developed by different teams worldwide. However, even though social cognitive impairments have been confirmed, previous studies have also shown heterogeneity of results between subjects. Therefore, personal abilities should be measured individually before proposing such programs. Most research teams apply tasks based on facial affects recognition by Ekman et al. or Gur et al. However, these tasks are not easily applicable in clinical practice. Here, we present the Facial Emotions Recognition Test (TREF), which is designed to identify facial affects recognition impairments in clinical practice. The test is composed of 54 photos and evaluates abilities in the recognition of six universal emotions (joy, anger, sadness, fear, disgust and contempt). Each of these emotions is represented with colored photos of 4 different models (two men and two women) at nine intensity levels from 20 to 100%. Each photo is presented for 10 seconds; no time limit for responding is applied. The present study compared scores on the TREF in a sample of healthy controls (64 subjects) and people with stabilized schizophrenia (45 subjects) according to the DSM IV-TR criteria. We analysed global scores for all emotions, as well as subscores for each emotion, between these two groups, taking into account gender differences. Our results were coherent with previous findings. Applying the TREF, we confirmed an impairment in facial affects recognition in schizophrenia by showing significant differences between the two groups in their global results (76.45% for healthy controls versus 61.28% for people with schizophrenia), as well as in subscores for each emotion except joy. Scores for women were significantly higher than for men in the population

  13. The Complete Gabor-Fisher Classifier for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Štruc Vitomir

    2010-01-01

    Full Text Available Abstract This paper develops a novel face recognition technique called the Complete Gabor Fisher Classifier (CGFC). Different from existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed from Gabor phase information as well. It represents one of the few successful attempts found in the literature at combining Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.

  14. Mobile Application Development for Quran Verse Recognition and Interpretations

    Directory of Open Access Journals (Sweden)

    Maha Alqahtani

    2015-01-01

    Full Text Available Mobile learning or “m-learning” is the process of learning when learners are not at a fixed location or time and can exploit learning opportunities using mobile technologies. Nowadays, speech recognition is used in many mobile applications. Speech recognition helps people to interact with the device as if they were talking to another person. This technology helps people to learn anything using computers by promoting self-study over extended periods of time. The objective of this study is designing and developing a mobile application for the Arabic recognition of spoken Quranic verses. The application is suitable for Android-based devices. The application is called Say Quran and is available on the Google Play Store. Moreover, this paper presents the results of a preliminary experiment to gather feedback from students regarding the developed application.

  15. Improving Negative Emotion Recognition in Young Offenders Reduces Subsequent Crime.

    Science.gov (United States)

    Hubble, Kelly; Bowen, Katharine L; Moore, Simon C; van Goozen, Stephanie H M

    2015-01-01

    Children with antisocial behaviour show deficits in the perception of emotional expressions in others that may contribute to the development and persistence of antisocial and aggressive behaviour. Current treatments for antisocial youngsters are limited in effectiveness. It has been argued that more attention should be devoted to interventions that target neuropsychological correlates of antisocial behaviour. This study examined the effect of emotion recognition training on criminal behaviour. Emotion recognition and crime levels were studied in 50 juvenile offenders. Whilst all young offenders received their statutory interventions as the study was conducted, a subgroup of twenty-four offenders also took part in a facial affect training aimed at improving emotion recognition. Offenders in the training and control groups were matched for age, SES, IQ and lifetime crime level. All offenders were tested twice for emotion recognition performance, and recent crime data were collected after the testing had been completed. Before the training, there were no differences between the groups in emotion recognition, with both groups displaying poor fear, sadness and anger recognition. After the training, fear, sadness and anger recognition improved significantly in juvenile offenders in the training group. Although crime rates dropped in all offenders in the 6 months following emotion testing, only the group of offenders who had received the emotion training showed a significant reduction in the severity of the crimes they committed. The study indicates that emotion recognition can be relatively easily improved in youths who engage in serious antisocial and criminal behaviour. The results suggest that improved emotion recognition has the potential to reduce the severity of reoffending.

  16. Integration trumps selection in object recognition

    Science.gov (United States)

    Saarela, Toni P.; Landy, Michael S.

    2015-01-01

    Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several “cues” (color, luminance, texture, etc.), and humans can integrate sensory cues to improve detection and recognition [1–3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue-invariance by responding to a given shape independent of the visual cue defining it [5–8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10,11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11,12], imaging [13–16], and single-cell and neural population recordings [17,18]. Besides single features, attention can select whole objects [19–21]. Objects are among the suggested “units” of attention because attention to a single feature of an object causes the selection of all of its features [19–21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near-optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. PMID:25802154

  17. Integration trumps selection in object recognition.

    Science.gov (United States)

    Saarela, Toni P; Landy, Michael S

    2015-03-30

    Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several "cues" (color, luminance, texture, etc.), and humans can integrate sensory cues to improve detection and recognition [1-3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue invariance by responding to a given shape independent of the visual cue defining it [5-8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10, 11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11, 12], imaging [13-16], and single-cell and neural population recordings [17, 18]. Besides single features, attention can select whole objects [19-21]. Objects are among the suggested "units" of attention because attention to a single feature of an object causes the selection of all of its features [19-21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. Copyright © 2015 Elsevier Ltd. All rights reserved.
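
    The "near-optimal integration" benchmark referred to in this record is usually the ideal-observer rule that weights each cue estimate by its reliability (inverse variance). A minimal sketch of that standard rule (not code from the study; the numbers are made up):

```python
def fuse_cues(estimates, sigmas):
    """Reliability-weighted (inverse-variance) cue combination."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_sigma = (1.0 / total) ** 0.5     # fused noise is below every single cue
    return fused, fused_sigma

# Three cues (say color, texture, luminance) signalling the same shape value,
# each with its own single-cue noise level (hypothetical numbers).
est, sd = fuse_cues([1.0, 1.2, 0.9], [0.5, 1.0, 0.8])
```

    The key property the study tests against is that the fused standard deviation is always smaller than that of the best individual cue.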

  18. The recognition of female voice based on voice registers in singing techniques in real time using the Hankel transform method and Macdonald function

    Science.gov (United States)

    Meiyanti, R.; Subandi, A.; Fuqara, N.; Budiman, M. A.; Siahaan, A. P. U.

    2018-03-01

    A singer does not merely recite the lyrics of a song but also uses particular vocal techniques to make it more beautiful. In singing technique, females have a more diverse set of voice registers than males. There are many registers of the human voice; those used while singing include chest voice, head voice, falsetto, and vocal fry. A speech recognition system based on female voice registers in singing technique was built using Borland Delphi 7.0. The recognition process accepts recorded voice samples as well as real-time input. Voice input yields weighted energy values computed using the Hankel transform method and Macdonald functions. The results showed that the accuracy of the system depends on the accuracy of the vocal technique being trained and tested; the average recognition rate for recorded voice registers reached 48.75 percent, while the average recognition rate for voice registers in real time reached 57 percent.
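
    As background for the method named in this record: the Macdonald function is the modified Bessel function of the second kind, K_ν(x) = ∫₀^∞ e^(−x cosh t) cosh(νt) dt. A minimal numerical sketch of evaluating it (illustrative only; the record does not describe the weight-energy pipeline in enough detail to reproduce it):

```python
import math

def macdonald_k(nu, x, upper=20.0, steps=4000):
    """K_nu(x) from its integral representation, composite trapezoidal rule."""
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        f = math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
        total += f / 2 if i in (0, steps) else f   # trapezoid endpoint halving
    return total * h

k0 = macdonald_k(0.0, 1.0)   # tabulated value of K_0(1) is about 0.42102
```

    The integrand decays like e^(−x cosh t), so a modest truncation of the upper limit already gives several digits of accuracy.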

  19. Hippocampal histone acetylation regulates object recognition and the estradiol-induced enhancement of object recognition.

    Science.gov (United States)

    Zhao, Zaorui; Fan, Lu; Fortress, Ashley M; Boulware, Marissa I; Frick, Karyn M

    2012-02-15

    Histone acetylation has recently been implicated in learning and memory processes, yet necessity of histone acetylation for such processes has not been demonstrated using pharmacological inhibitors of histone acetyltransferases (HATs). As such, the present study tested whether garcinol, a potent HAT inhibitor in vitro, could impair hippocampal memory consolidation and block the memory-enhancing effects of the modulatory hormone 17β-estradiol (E2). We first showed that bilateral infusion of garcinol (0.1, 1, or 10 μg/side) into the dorsal hippocampus (DH) immediately after training impaired object recognition memory consolidation in ovariectomized female mice. A behaviorally effective dose of garcinol (10 μg/side) also significantly decreased DH HAT activity. We next examined whether DH infusion of a behaviorally subeffective dose of garcinol (1 ng/side) could block the effects of DH E2 infusion on object recognition and epigenetic processes. Immediately after training, ovariectomized female mice received bilateral DH infusions of vehicle, E2 (5 μg/side), garcinol (1 ng/side), or E2 plus garcinol. Forty-eight hours later, garcinol blocked the memory-enhancing effects of E2. Garcinol also reversed the E2-induced increase in DH histone H3 acetylation, HAT activity, and levels of the de novo methyltransferase DNMT3B, as well as the E2-induced decrease in levels of the memory repressor protein histone deacetylase 2. Collectively, these findings suggest that histone acetylation is critical for object recognition memory consolidation and the beneficial effects of E2 on object recognition. Importantly, this work demonstrates that the role of histone acetylation in memory processes can be studied using a HAT inhibitor.

  20. The Pandora software development kit for pattern recognition

    Energy Technology Data Exchange (ETDEWEB)

    Marshall, J.S.; Thomson, M.A. [University of Cambridge, Cavendish Laboratory, Cambridge (United Kingdom)

    2015-09-15

    The development of automated solutions to pattern recognition problems is important in many areas of scientific research and human endeavour. This paper describes the implementation of the Pandora software development kit, which aids the process of designing, implementing and running pattern recognition algorithms. The Pandora Application Programming Interfaces ensure simple specification of the building blocks defining a pattern recognition problem. The logic required to solve the problem is implemented in algorithms. The algorithms request operations to create or modify data structures and the operations are performed by the Pandora framework. This design promotes an approach using many decoupled algorithms, each addressing specific topologies. Details of algorithms addressing two pattern recognition problems in High Energy Physics are presented: reconstruction of events at a high-energy e⁺e⁻ linear collider and reconstruction of cosmic ray or neutrino events in a liquid argon time projection chamber. (orig.)
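
    The decoupled design described here, where algorithms request operations and the framework performs them on the data structures it owns, can be sketched as follows. This is an illustrative Python analogue, not the actual Pandora C++ API; the class and function names are invented:

```python
class Framework:
    """Owns the event data structures; algorithms only request operations."""
    def __init__(self, hits):
        self.hits = list(hits)        # e.g. 1-D hit coordinates
        self.clusters = []

    def create_cluster(self, hit_indices):
        """Operation requested by algorithms, performed by the framework."""
        self.clusters.append([self.hits[i] for i in hit_indices])

    def run(self, algorithms):
        for algorithm in algorithms:  # decoupled algorithms, run in sequence
            algorithm(self)

def cluster_by_gap(fw, max_gap=1.5):
    """Toy pattern-recognition algorithm: group hits separated by < max_gap."""
    if not fw.hits:
        return
    order = sorted(range(len(fw.hits)), key=lambda i: fw.hits[i])
    current = [order[0]]
    for prev, nxt in zip(order, order[1:]):
        if fw.hits[nxt] - fw.hits[prev] < max_gap:
            current.append(nxt)
        else:
            fw.create_cluster(current)
            current = [nxt]
    fw.create_cluster(current)

fw = Framework([0.0, 0.4, 1.0, 5.0, 5.3])
fw.run([cluster_by_gap])   # fw.clusters -> [[0.0, 0.4, 1.0], [5.0, 5.3]]
```

    Because each algorithm only sees the framework interface, specialised algorithms for different topologies can be chained without sharing internal state.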

  1. How does susceptibility to proactive interference relate to speech recognition in aided and unaided conditions?

    Science.gov (United States)

    Ellis, Rachel J; Rönnberg, Jerker

    2015-01-01

    Proactive interference (PI) is the capacity to resist interference to the acquisition of new memories from information stored in the long-term memory. Previous research has shown that PI correlates significantly with the speech-in-noise recognition scores of younger adults with normal hearing. In this study, we report the results of an experiment designed to investigate the extent to which tests of visual PI relate to the speech-in-noise recognition scores of older adults with hearing loss, in aided and unaided conditions. The results suggest that measures of PI correlate significantly with speech-in-noise recognition only in the unaided condition. Furthermore, the relation between PI and speech-in-noise recognition differs from that observed in younger listeners without hearing loss. The findings suggest that the relation between PI tests and the speech-in-noise recognition scores of older adults with hearing loss reflects the capability of the test to index cognitive flexibility.

  2. How does susceptibility to proactive interference relate to speech recognition in aided and unaided conditions?

    Directory of Open Access Journals (Sweden)

    Rachel Jane Ellis

    2015-08-01

    Full Text Available Proactive interference (PI) is the capacity to resist interference to the acquisition of new memories from information stored in the long-term memory. Previous research has shown that PI correlates significantly with the speech-in-noise recognition scores of younger adults with normal hearing. In this study, we report the results of an experiment designed to investigate the extent to which tests of visual PI relate to the speech-in-noise recognition scores of older adults with hearing loss, in aided and unaided conditions. The results suggest that measures of PI correlate significantly with speech-in-noise recognition only in the unaided condition. Furthermore, the relation between PI and speech-in-noise recognition differs from that observed in younger listeners without hearing loss. The findings suggest that the relation between PI tests and the speech-in-noise recognition scores of older adults with hearing loss reflects the capability of the test to index cognitive flexibility.

  3. Enhancing Speech Recognition Using Improved Particle Swarm Optimization Based Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Lokesh Selvaraj

    2014-01-01

    Full Text Available Enhancing speech recognition is the primary aim of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). First, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel-frequency cepstral coefficients (MFCC), mean, standard deviation, and the minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are passed to genetic-algorithm-based codebook generation for vector quantization. The initial populations are created by selecting random code vectors from the training set for the codebooks in the genetic algorithm process, and IP-HMM performs the recognition. The novelty is introduced in terms of the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy.
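
    Codebook generation for vector quantization can be sketched with the classical LBG/k-means update. The record describes a genetic-algorithm variant, so the following pure-Python version is only a stand-in illustrating what a codebook is; the 2-D feature vectors are made up:

```python
import random

def train_codebook(vectors, k, iterations=20, seed=0):
    """LBG/k-means-style codebook training (a stand-in for the GA variant)."""
    rng = random.Random(seed)
    codebook = rng.sample(vectors, k)      # random code vectors from the training set
    for _ in range(iterations):
        cells = [[] for _ in range(k)]
        for v in vectors:                  # assign each vector to its nearest codeword
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))
            cells[nearest].append(v)
        for i, cell in enumerate(cells):   # move each codeword to its cell centroid
            if cell:
                dim = len(cell[0])
                codebook[i] = tuple(sum(v[d] for v in cell) / len(cell)
                                    for d in range(dim))
    return codebook

# Made-up 2-D feature vectors forming two clumps.
features = [(0.0, 0.1), (0.1, 0.0), (0.9, 1.0), (1.0, 0.9)]
cb = train_codebook(features, k=2)
```

    In the recognition pipeline described above, the codeword indices of quantized feature vectors would then serve as observation symbols for the HMM.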

  4. Self-organization comprehensive real-time state evaluation model for oil pump unit on the basis of operating condition classification and recognition

    Science.gov (United States)

    Liang, Wei; Yu, Xuchao; Zhang, Laibin; Lu, Wenqing

    2018-05-01

    In an oil transmission station, the operating condition (OC) of an oil pump unit sometimes switches, which leads to changes in operating parameters. If the switching of OCs is not taken into consideration while performing a state evaluation on the pump unit, the accuracy of the evaluation is largely compromised. Hence, in this paper, a self-organization Comprehensive Real-Time State Evaluation Model (self-organization CRTSEM) is proposed based on OC classification and recognition. The underlying CRTSEM is built by incorporating the advantages of the Gaussian Mixture Model (GMM) and the Fuzzy Comprehensive Evaluation Model (FCEM): independent state models are established for every state characteristic parameter according to its distribution type (i.e., Gaussian or logistic regression distribution). Meanwhile, the Analytic Hierarchy Process (AHP) is utilized to calculate the weights of the state characteristic parameters. The OC classification is then determined by the type of oil delivery task, and CRTSEMs for the different standard OCs are built to constitute the CRTSEM matrix. OC recognition, in turn, is realized by a self-organization model built on the Back Propagation (BP) network. After the self-organization CRTSEM is derived through integration, real-time monitoring data can be input for OC recognition. Finally, the current state of the pump unit can be evaluated using the appropriate CRTSEM. The case study shows that the proposed self-organization CRTSEM provides reasonable and accurate state evaluation results for the pump unit, and verifies the assumption that the switching of OCs influences the results of state evaluation.
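
    The AHP weighting step mentioned above derives parameter weights as the normalized principal eigenvector of a pairwise comparison matrix. A minimal sketch via power iteration, with a hypothetical comparison matrix (the record does not give the actual parameters or judgements):

```python
def ahp_weights(pairwise, iterations=50):
    """Priority weights as the normalized principal eigenvector (power iteration)."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]   # renormalize so the weights sum to 1
    return w

# Hypothetical pairwise judgements among three state parameters
# (say vibration, temperature, pressure); not values from the paper.
matrix = [
    [1.0, 3.0, 5.0],
    [1.0 / 3.0, 1.0, 2.0],
    [1.0 / 5.0, 1.0 / 2.0, 1.0],
]
weights = ahp_weights(matrix)
```

    The resulting weights would then scale each parameter's state-model output inside the comprehensive evaluation.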

  5. Global precedence effects account for individual differences in both face and object recognition performance

    DEFF Research Database (Denmark)

    Gerlach, Christian; Starrfelt, Randi

    2018-01-01

    examine whether global precedence effects, measured by means of non-face stimuli in Navon's paradigm, can also account for individual differences in face recognition and, if so, whether the effect is of similar magnitude for faces and objects. We find evidence that global precedence effects facilitate...... both face and object recognition, and to a similar extent. Our results suggest that both face and object recognition are characterized by a coarse-to-fine temporal dynamic, where global shape information is derived prior to local shape information, and that the efficiency of face and object recognition...

  6. Evidence for the role of the rodent hippocampus in non-spatial recognition memory.

    Science.gov (United States)

    Yi, Jee Hyun; Park, Hye Jin; Kim, Byeong C; Kim, Dong Hyun; Ryu, Jong Hoon

    2016-01-15

    The hippocampus is a key region responsible for processing spatial information. However, the role of the hippocampus in non-spatial recognition memory is still controversial. In the present study, we performed hippocampal lesioning to address this controversy. The hippocampi of mice were disrupted with bilateral cytotoxic lesions, and standard object recognition (non-spatial) and object location recognition (spatial) were tested. In the habituation period, mice with hippocampal lesions needed a significantly longer time to fully habituate to the test box. Interestingly, after 4 days of habituation (insufficient habituation), the recognition index was similar in the sham and hippocampal lesion groups. However, exploration time was significantly shorter in mice with hippocampal lesions compared with that in control mice. In contrast, if mice were subjected to a 10-day-long period of habituation (full habituation), the recognition index was significantly lower in mice with hippocampal lesions compared with that in control mice; however, total exploration time was similar in both groups. Furthermore, the object recognition test after full habituation occluded hippocampal long-term potentiation, a cellular model of memory. These results indicate that sufficient habituation is required to observe the effects of hippocampal lesions on object recognition memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Learning during processing: Word learning doesn’t wait for word recognition to finish

    Science.gov (United States)

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082

  8. Arguments Against a Configural Processing Account of Familiar Face Recognition.

    Science.gov (United States)

    Burton, A Mike; Schweinberger, Stefan R; Jenkins, Rob; Kaufmann, Jürgen M

    2015-07-01

    Face recognition is a remarkable human ability, which underlies a great deal of people's social behavior. Individuals can recognize family members, friends, and acquaintances over a very large range of conditions, and yet the processes by which they do this remain poorly understood, despite decades of research. Although a detailed understanding remains elusive, face recognition is widely thought to rely on configural processing, specifically an analysis of spatial relations between facial features (so-called second-order configurations). In this article, we challenge this traditional view, raising four problems: (1) configural theories are underspecified; (2) large configural changes leave recognition unharmed; (3) recognition is harmed by nonconfigural changes; and (4) in separate analyses of face shape and face texture, identification tends to be dominated by texture. We review evidence from a variety of sources and suggest that failure to acknowledge the impact of familiarity on facial representations may have led to an overgeneralization of the configural account. We argue instead that second-order configural information is remarkably unimportant for familiar face recognition. © The Author(s) 2015.

  9. Enriched environment effects on remote object recognition memory.

    Science.gov (United States)

    Melani, Riccardo; Chelini, Gabriele; Cenni, Maria Cristina; Berardi, Nicoletta

    2017-06-03

    Since Ebbinghaus' classical work on oblivion and saving effects, we have known that declarative memories may at first become spontaneously irretrievable and only subsequently completely extinguished. Recently, this time-dependent path toward memory-trace loss has been shown to correlate with different patterns of brain activation. Environmental enrichment (EE) enhances learning and memory and affects system memory consolidation. However, there is no evidence on whether and how EE could affect the time-dependent path toward oblivion. We used the Object Recognition Test (ORT) to assess memory retrieval of familiar objects in adult mice housed in EE for 40 days (EE mice) or left in standard conditions (SC mice), 9 and 21 days after learning, with or without a brief retraining performed the day before. We found that SC mice show preferential exploration of the new object at day 9 only with retraining, while EE mice do so even without it. At day 21, SC mice do not show preferential exploration of the novel object, irrespective of retraining, while EE mice are still capable of benefiting from retraining, even though they are not able to spontaneously recover the trace. Analysis of c-fos expression 20 days after learning shows a different pattern of active brain areas in response to the retraining session in EE and SC mice, with SC mice recruiting the same brain network as naïve SC or EE mice following de novo learning. This suggests that EE promotes the formation of longer-lasting object recognition memory, allowing a longer time window during which saving is present. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  10. Benefits for Voice Learning Caused by Concurrent Faces Develop over Time.

    Science.gov (United States)

    Zäske, Romi; Mühl, Constanze; Schweinberger, Stefan R

    2015-01-01

    Recognition of personally familiar voices benefits from the concurrent presentation of the corresponding speakers' faces. This effect of audiovisual integration is most pronounced for voices combined with dynamic articulating faces. However, it is unclear if learning unfamiliar voices also benefits from audiovisual face-voice integration or, alternatively, is hampered by attentional capture of faces, i.e., "face-overshadowing". In six study-test cycles we compared the recognition of newly-learned voices following unimodal voice learning vs. bimodal face-voice learning with either static (Exp. 1) or dynamic articulating faces (Exp. 2). Voice recognition accuracies significantly increased for bimodal learning across study-test cycles while remaining stable for unimodal learning, as reflected in numerical costs of bimodal relative to unimodal voice learning in the first two study-test cycles and benefits in the last two cycles. This was independent of whether faces were static images (Exp. 1) or dynamic videos (Exp. 2). In both experiments, slower reaction times to voices previously studied with faces compared to voices only may result from visual search for faces during memory retrieval. A general decrease of reaction times across study-test cycles suggests facilitated recognition with more speaker repetitions. Overall, our data suggest two simultaneous and opposing mechanisms during bimodal face-voice learning: while attentional capture of faces may initially impede voice learning, audiovisual integration may facilitate it thereafter.

  11. Autolysis: a plausible finding suggestive of long ESD procedure time.

    Science.gov (United States)

    Hyun, Jong Jin; Chun, Hoon Jai; Keum, Bora; Seo, Yeon Seok; Kim, Yong Sik; Jeen, Yoon Tae; Lee, Hong Sik; Um, Soon Ho; Kim, Chang Duck; Ryu, Ho Sang; Chae, Yang-Seok

    2012-04-01

    Autolysis is the enzymatic digestion of cells by the action of their own enzymes, and it mostly occurs in dying or dead cells. It has previously been suggested that prolonged procedure time could lead to autolytic changes starting from the periphery of endoscopic submucosal dissection specimens. Recently, the authors experienced a case of autolysis; due to the presence of ulcer, fibrosis, and frequent bleeding from the cut surface, it took 6 hours to complete the resection. More than halfway through the resection, bluish-purple discoloration of the part of the dissected flap where the dissection had been initiated was noticed. Histologic examination of this site showed diffuse distortion of the epithelial lining and cellular architecture along with loss of cell components, compatible with autolysis. Because autolysis could theoretically pose a problem for the evaluation of resection margins, endoscopists and pathologists should communicate with each other to reach a reliable pathologic decision.

  12. Use of digital speech recognition in diagnostic radiology

    International Nuclear Information System (INIS)

    Arndt, H.; Stockheim, D.; Mutze, S.; Petersein, J.; Gregor, P.; Hamm, B.

    1999-01-01

    Purpose: Applicability and benefits of digital speech recognition in diagnostic radiology were tested using the speech recognition system SP 6000. Methods: The speech recognition system SP 6000 was integrated into the network of the institute and connected to the existing Radiological Information System (RIS). Three subjects used this system for writing 2305 findings from dictation. After the recognition process, the date, length of dictation, time required for checking/correction, kind of examination, and error rate were recorded for every dictation. With the same subjects, a comparison was performed with 625 conventionally written findings. Results: After a 1-hour initial training, the average error rates were 8.4 to 13.3%. The first adaptation of the speech recognition system (after nine days) decreased the average error rates to 2.4 to 10.7% due to the ability of the program to learn. The 2nd and 3rd adaptations resulted in only small changes of the error rate. An individual comparison of the error rates for the same kind of examination showed that the error rate was relatively independent of the individual user. Conclusion: The results show that the speech recognition system SP 6000 is an advantageous alternative for quickly recording radiological findings. A comparison between manually writing and dictating the findings confirms individual differences in writing speed and shows the advantage of voice recognition over normal keyboard entry. (orig.) [de

  13. Semantic Activity Recognition

    OpenAIRE

    Thonnat , Monique

    2008-01-01

    Extracting semantics automatically from visual data is a real challenge. We describe in this paper how recent work in cognitive vision leads to significant results in activity recognition for visual surveillance and video monitoring. In particular we present work performed in the domain of video understanding in our PULSAR team at INRIA in Sophia Antipolis. Our main objective is to analyse, in real time, video streams captured by static video cameras and to recogniz...

  14. Warmth of familiarity and chill of error: affective consequences of recognition decisions.

    Science.gov (United States)

    Chetverikov, Andrey

    2014-04-01

    The present research aimed to assess the effect of recognition decision on subsequent affective evaluations of recognised and non-recognised objects. Consistent with the proposed account of post-decisional preferences, results showed that the effect of recognition on preferences depends upon objective familiarity. If stimuli are recognised, liking ratings are positively associated with exposure frequency; if stimuli are not recognised, this link is either absent (Experiment 1) or negative (Experiments 2 and 3). This interaction between familiarity and recognition exists even when recognition accuracy is at chance level and the "mere exposure" effect is absent. Finally, data obtained from repeated measurements of preferences and using manipulations of task order confirm that recognition decisions have a causal influence on preferences. The findings suggest that affective evaluation can provide fine-grained access to the efficacy of cognitive processing even in simple cognitive tasks.

  15. Role of Fusiform and Anterior Temporal Cortical Areas in Facial Recognition

    Science.gov (United States)

    Nasr, Shahin; Tootell, Roger BH

    2012-01-01

    Recent fMRI studies suggest that cortical face processing extends well beyond the fusiform face area (FFA), including unspecified portions of the anterior temporal lobe. However, the exact location of such anterior temporal region(s), and their role during active face recognition, remain unclear. Here we demonstrate that (in addition to FFA) a small bilateral site in the anterior tip of the collateral sulcus (‘AT’; the anterior temporal face patch) is selectively activated during recognition of faces but not houses (a non-face object). In contrast to the psychophysical prediction that inverted and contrast-reversed faces are processed like other non-face objects, both FFA and AT (but not other visual areas) were also activated during recognition of inverted and contrast-reversed faces. However, response accuracy was better correlated with recognition-driven activity in AT, compared to FFA. These data support a segregated, hierarchical model of face recognition processing, extending to the anterior temporal cortex. PMID:23034518

  16. Cuticular hydrocarbons as potential kin recognition cues in a subsocial spider

    DEFF Research Database (Denmark)

    Grinsted, Lena; Bilde, Trine; D'Ettorre, Patrizia

    2011-01-01

    of recognition cues in subsocial species can provide insights into evolutionary pathways leading to permanent sociality and kin-selected benefits of cooperation. In subsocial spiders, empirical evidence suggests the existence of both kin recognition and benefits of cooperating with kin, whereas the cues for kin...... recognition have yet to be identified. However, cuticular hydrocarbons have been proposed to be involved in regulation of tolerance and interattraction in spider sociality. Here, we show that subsocial Stegodyphus lineatus spiderlings have cuticular hydrocarbon profiles that are sibling-group specific, making...... be branched alkanes that are influenced very little by rearing conditions and may be genetically determined. This indicates that a specific group of cuticular chemicals, namely branched alkanes, could have evolved to act as social recognition cues in both insects and spiders....

  17. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Directory of Open Access Journals (Sweden)

    Francisco Javier Ordóñez

    2016-01-01

    Full Text Available Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation.
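    A framework of this kind assumes the raw multimodal sensor stream is first cut into fixed-length sliding windows before being fed to the convolutional layers. A minimal sketch of that segmentation step (NumPy only; the window length, overlap, and channel count below are illustrative, not the paper's settings):

```python
import numpy as np

def sliding_windows(signal, window_len, step):
    """Segment a (time, channels) multimodal sensor stream into
    fixed-length overlapping windows, the usual input format for
    convolutional HAR models."""
    n_windows = 1 + (len(signal) - window_len) // step
    return np.stack([signal[i * step : i * step + window_len]
                     for i in range(n_windows)])

# 10 s of 3-channel data at 30 Hz, 1 s windows with 50% overlap
stream = np.random.randn(300, 3)
windows = sliding_windows(stream, window_len=30, step=15)
print(windows.shape)  # (19, 30, 3)
```

    Each resulting (window_len, channels) slab is what a Conv/LSTM stack would consume as one training example.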

  18. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition.

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-18

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters' influence on performance to provide insights about their optimisation.

  19. Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition

    Science.gov (United States)

    Ordóñez, Francisco Javier; Roggen, Daniel

    2016-01-01

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing this temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average; outperforming some of the previous reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation. PMID:26797612

  20. Neural Mechanisms and Information Processing in Recognition Systems

    Directory of Open Access Journals (Sweden)

    Mamiko Ozaki

    2014-10-01

    Full Text Available Nestmate recognition is a hallmark of social insects. It is based on the match/mismatch of an identity signal carried by members of the society with that of the perceiving individual. While the behavioral response, amicable or aggressive, is very clear, the neural systems underlying recognition are not fully understood. Here we contrast two alternative hypotheses for the neural mechanisms that are responsible for the perception and information processing in recognition. We focus on recognition via chemical signals, as the common modality in social insects. The first, classical, hypothesis states that upon perception of recognition cues by the sensory system the information is passed as is to the antennal lobes and to higher brain centers where the information is deciphered and compared to a neural template. Match or mismatch information is then transferred to some behavior-generating centers where the appropriate response is elicited. An alternative hypothesis, that of “pre-filter mechanism”, posits that the decision as to whether to pass on the information to the central nervous system takes place in the peripheral sensory system. We suggest that, through sensory adaptation, only alien signals are passed on to the brain, specifically to an “aggressive-behavior-switching center”, where the response is generated if the signal is above a certain threshold.

  1. Altered emotional recognition and expression in patients with Parkinson’s disease

    Directory of Open Access Journals (Sweden)

    Jin Y

    2017-11-01

    Full Text Available Yazhou Jin,* Zhiqi Mao,* Zhipei Ling, Xin Xu, Zhiyuan Zhang, Xinguang Yu. Department of Neurosurgery, People’s Liberation Army General Hospital, Beijing, People’s Republic of China. *These authors contributed equally to this work. Background: Parkinson’s disease (PD) patients exhibit deficits in emotional recognition and expression abilities, including emotional faces and voices. The aim of this study was to explore emotional processing in pre-deep brain stimulation (pre-DBS) PD patients using two sensory modalities (visual and auditory). Methods: Fifteen PD patients who needed DBS surgery and 15 healthy, age- and gender-matched controls were recruited as participants. All participants were assessed by the Karolinska Directed Emotional Faces database 50 Faces Recognition test. Vocal recognition was evaluated by the Montreal Affective Voices database 50 Voices Recognition test. For emotional facial expression, the participants were asked to imitate five basic emotions (neutral, happiness, anger, fear, and sadness). The subjects were required to express nonverbal vocalizations of the five basic emotions. Fifteen Chinese native speakers were recruited as decoders. We recorded the accuracy of the responses, reaction time, and confidence level. Results: For emotional recognition and expression, the PD group scored lower on both facial and vocal emotional processing than did the healthy control group. There were significant differences between the two groups in both reaction time and confidence level. A significant relationship was also found between emotional recognition and emotional expression when considering all participants between the two groups together. Conclusion: The PD group exhibited poorer performance on both the recognition and expression tasks. Facial emotion deficits and vocal emotion abnormalities were associated with each other. In addition, our data allow us to speculate that emotional recognition and expression may share a common

  2. Object recognition with hierarchical discriminant saliency networks.

    Science.gov (United States)

    Han, Sunhyoung; Vasconcelos, Nuno

    2014-01-01

    computer vision literature. This demonstrates benefits for all the functional enhancements of the HDSN, the class tuning inherent to discriminant saliency, and saliency layers based on templates of increasing target selectivity and invariance. Altogether, these experiments suggest that there are non-trivial benefits in integrating attention and recognition.

  3. Evaluation of Activity Recognition Algorithms for Employee Performance Monitoring

    OpenAIRE

    Mehreen Mumtaz; Hafiz Adnan Habib

    2012-01-01

    Successful Human Resource Management plays a key role in the success of any organization. Traditionally, human resource managers rely on various information technology solutions such as Payroll and Work Time Systems incorporating RFID and biometric technologies. This research evaluates activity recognition algorithms for employee performance monitoring. An activity recognition algorithm has been implemented that categorizes employee activity into the following classes: job activities and...

  4. Cross-sensor iris recognition through kernel learning.

    Science.gov (United States)

    Pillai, Jaishanker K; Puertas, Maria; Chellappa, Rama

    2014-01-01

    Due to the increasing popularity of iris biometrics, new sensors are being developed for acquiring iris images and existing ones are being continuously upgraded. Re-enrolling users every time a new sensor is deployed is expensive and time-consuming, especially in applications with a large number of enrolled users. However, recent studies show that cross-sensor matching, where the test samples are verified using data enrolled with a different sensor, often leads to reduced performance. In this paper, we propose a machine learning technique to mitigate the cross-sensor performance degradation by adapting the iris samples from one sensor to another. We first present a novel optimization framework for learning transformations on iris biometrics. We then utilize this framework for sensor adaptation, by reducing the distance between samples of the same class, and increasing it between samples of different classes, irrespective of the sensors acquiring them. Extensive evaluations on iris data from multiple sensors demonstrate that the proposed method leads to improvement in cross-sensor recognition accuracy. Furthermore, since the proposed technique requires minimal changes to the iris recognition pipeline, it can easily be incorporated into existing iris recognition systems.
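    The paper learns its transformation in a kernel framework; as a hedged illustration of the underlying idea only (pull same-class samples together across sensors), here is a much simpler linear analogue that aligns per-class means between two sensors. The function name and synthetic data are invented for this sketch:

```python
import numpy as np

def align_class_means(src, src_labels, tgt, tgt_labels):
    """Toy sensor adaptation: shift each class of the source sensor's
    samples so its mean coincides with that class's mean under the
    target sensor, reducing cross-sensor within-class distance."""
    adapted = src.copy()
    for c in np.unique(src_labels):
        shift = tgt[tgt_labels == c].mean(axis=0) - src[src_labels == c].mean(axis=0)
        adapted[src_labels == c] += shift
    return adapted

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 50)
tgt = rng.normal(size=(100, 4)) + labels[:, None] * 5.0   # "sensor A" enrolment data
src = tgt + rng.normal(0.0, 0.1, (100, 4)) + 2.0          # "sensor B": systematic offset
after_ = align_class_means(src, labels, tgt, labels)

before_gap = np.linalg.norm(src[labels == 0].mean(0) - tgt[labels == 0].mean(0))
after_gap = np.linalg.norm(after_[labels == 0].mean(0) - tgt[labels == 0].mean(0))
print(after_gap < before_gap)  # True
```

    The learned kernel transformation in the paper plays the same role as `shift` here, but is optimised jointly over all classes rather than computed per class.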

  5. Longitudinal study of fingerprint recognition.

    Science.gov (United States)

    Yoon, Soweon; Jain, Anil K

    2015-07-14

    Human identification by fingerprints is based on the fundamental premise that ridge patterns from distinct fingers are different (uniqueness) and a fingerprint pattern does not change over time (persistence). Although the uniqueness of fingerprints has been investigated by developing statistical models to estimate the probability of error in comparing two random samples of fingerprints, the persistence of fingerprints has remained a general belief based on only a few case studies. In this study, fingerprint match (similarity) scores are analyzed by multilevel statistical models with covariates such as time interval between two fingerprints in comparison, subject's age, and fingerprint image quality. Longitudinal fingerprint records of 15,597 subjects are sampled from an operational fingerprint database such that each individual has at least five 10-print records over a minimum time span of 5 y. In regard to the persistence of fingerprints, the longitudinal analysis on a single (right index) finger demonstrates that (i) genuine match scores tend to significantly decrease when time interval between two fingerprints in comparison increases, whereas the change in impostor match scores is negligible; and (ii) fingerprint recognition accuracy at operational settings, nevertheless, tends to be stable as the time interval increases up to 12 y, the maximum time span in the dataset. However, the uncertainty of temporal stability of fingerprint recognition accuracy becomes substantially large if either of the two fingerprints being compared is of poor quality. The conclusions drawn from 10-finger fusion analysis coincide with the conclusions from single-finger analysis.
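    The study fits multilevel models; as a simplified, single-level illustration of the core covariate effect reported above (genuine match scores declining with time interval), an ordinary least-squares fit on synthetic scores (all numbers invented, not the study's data):

```python
import numpy as np

# Synthetic genuine match scores that decay with the time interval
# (in years) between the two fingerprints being compared.
rng = np.random.default_rng(0)
interval = rng.uniform(0, 12, 500)
score = 100 - 1.5 * interval + rng.normal(0, 3, 500)

# Ordinary least squares: score ~ b0 + b1 * interval
X = np.column_stack([np.ones_like(interval), interval])
b0, b1 = np.linalg.lstsq(X, score, rcond=None)[0]
print(b1 < 0)  # True: scores drop as the interval grows
```

    The multilevel models in the paper additionally nest scores within subjects and add covariates such as age and image quality; this sketch captures only the time-interval slope.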

  6. How similar are recognition memory and inductive reasoning?

    Science.gov (United States)

    Hayes, Brett K; Heit, Evan

    2013-07-01

    Conventionally, memory and reasoning are seen as different types of cognitive activities driven by different processes. In two experiments, we challenged this view by examining the relationship between recognition memory and inductive reasoning involving multiple forms of similarity. A common study set (members of a conjunctive category) was followed by a test set containing old and new category members, as well as items that matched the study set on only one dimension. The study and test sets were presented under recognition or induction instructions. In Experiments 1 and 2, the inductive property being generalized was varied in order to direct attention to different dimensions of similarity. When there was no time pressure on decisions, patterns of positive responding were strongly affected by property type, indicating that different types of similarity were driving recognition and induction. By comparison, speeded judgments showed weaker property effects and could be explained by generalization based on overall similarity. An exemplar model, GEN-EX (GENeralization from EXamples), could account for both the induction and recognition data. These findings show that induction and recognition share core component processes, even when the tasks involve flexible forms of similarity.

  7. Improving Negative Emotion Recognition in Young Offenders Reduces Subsequent Crime.

    Directory of Open Access Journals (Sweden)

    Kelly Hubble

    Full Text Available Children with antisocial behaviour show deficits in the perception of emotional expressions in others that may contribute to the development and persistence of antisocial and aggressive behaviour. Current treatments for antisocial youngsters are limited in effectiveness. It has been argued that more attention should be devoted to interventions that target neuropsychological correlates of antisocial behaviour. This study examined the effect of emotion recognition training on criminal behaviour. Emotion recognition and crime levels were studied in 50 juvenile offenders. Whilst all young offenders received their statutory interventions as the study was conducted, a subgroup of twenty-four offenders also took part in a facial affect training aimed at improving emotion recognition. Offenders in the training and control groups were matched for age, SES, IQ and lifetime crime level. All offenders were tested twice for emotion recognition performance, and recent crime data were collected after the testing had been completed. Before the training there were no differences between the groups in emotion recognition, with both groups displaying poor fear, sadness and anger recognition. After the training fear, sadness and anger recognition improved significantly in juvenile offenders in the training group. Although crime rates dropped in all offenders in the 6 months following emotion testing, only the group of offenders who had received the emotion training showed a significant reduction in the severity of the crimes they committed. The study indicates that emotion recognition can be relatively easily improved in youths who engage in serious antisocial and criminal behaviour. The results suggest that improved emotion recognition has the potential to reduce the severity of reoffending.

  8. The use of the operand-recognition paradigm for the study of mental addition in older adults.

    Science.gov (United States)

    Thevenot, Catherine; Castel, Caroline; Danjon, Juliette; Fanget, Muriel; Fayol, Michel

    2013-01-01

    Determining how individuals solve arithmetic problems is crucial for our understanding of human cognitive architecture. Elderly adults are supposed to use memory retrieval more often than younger ones. However, they might back up their retrieval with reconstructive strategies. In order to investigate this issue, we used the operand-recognition paradigm, which capitalizes on the fact that algorithmic procedures degrade the memory traces of the operands. Twenty-three older adults (M = 70.4) and 23 younger adults (M = 20.0) solved easy, difficult, and medium-difficulty addition and comparison problems and were then presented with a recognition task of the operands. When one-digit numbers with sums larger than 10 were involved (medium-difficulty problem), it was more difficult for younger adults to recognize the operands after addition than comparison. In contrast, in older adults, recognition times of the operands were the same after addition and comparison. Older adults, in contrast with younger adults, are able to retrieve the results of addition problems of medium difficulty. Contrary to what was suggested, older participants do not seem to resort to backup strategies for such problems. Finally, older adults' reliance on the more efficient retrieval strategy allowed them to catch up to younger adults in terms of solution times.

  9. [Prosopagnosia and facial expression recognition].

    Science.gov (United States)

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  10. fMRI characterization of visual working memory recognition.

    Science.gov (United States)

    Rahm, Benjamin; Kaiser, Jochen; Unterrainer, Josef M; Simon, Juliane; Bledowski, Christoph

    2014-04-15

    Encoding and maintenance of information in visual working memory have been extensively studied, highlighting the crucial and capacity-limiting role of fronto-parietal regions. In contrast, the neural basis of recognition in visual working memory has remained largely unspecified. Cognitive models suggest that recognition relies on a matching process that compares sensory information with the mental representations held in memory. To characterize the neural basis of recognition we varied both the need for recognition and the degree of similarity between the probe item and the memory contents, while independently manipulating memory load to produce load-related fronto-parietal activations. fMRI revealed a fractionation of working memory functions across four distributed networks. First, fronto-parietal regions were activated independent of the need for recognition. Second, anterior parts of load-related parietal regions contributed to recognition but their activations were independent of the difficulty of matching in terms of sample-probe similarity. These results argue against a key role of the fronto-parietal attention network in recognition. Rather the third group of regions including bilateral temporo-parietal junction, posterior cingulate cortex and superior frontal sulcus reflected demands on matching both in terms of sample-probe-similarity and the number of items to be compared. Also, fourth, bilateral motor regions and right superior parietal cortex showed higher activation when matching provided clear evidence for a decision. Together, the segregation of the well-known fronto-parietal activations attributed to attentional operations in working memory from those regions involved in matching supports the theoretical view of separable attentional and mnemonic contributions to working memory. Yet, the close theoretical and empirical correspondence to perceptual decision making may call for an explicit consideration of decision making mechanisms in

  11. Infliximab ameliorates AD-associated object recognition memory impairment.

    Science.gov (United States)

    Kim, Dong Hyun; Choi, Seong-Min; Jho, Jihoon; Park, Man-Seok; Kang, Jisu; Park, Se Jin; Ryu, Jong Hoon; Jo, Jihoon; Kim, Hyun Hee; Kim, Byeong C

    2016-09-15

    Dysfunctions in the perirhinal cortex (PRh) are associated with visual recognition memory deficit, which is frequently detected in the early stage of Alzheimer's disease. Muscarinic acetylcholine receptor-dependent long-term depression (mAChR-LTD) of synaptic transmission is known as a key pathway in eliciting this type of memory, and Tg2576 mice expressing enhanced levels of Aβ oligomers are found to have impaired mAChR-LTD in this brain area at as early as 3 months of age. We found that the administration of Aβ oligomers in young normal mice also induced visual recognition memory impairment and perturbed mAChR-LTD in mouse PRh slices. In addition, when mice were treated with infliximab, a monoclonal antibody against TNF-α, visual recognition memory impaired by pre-administered Aβ oligomers dramatically improved and the detrimental Aβ effect on mAChR-LTD was annulled. Taken together, these findings suggest that Aβ-induced inflammation is mediated through TNF-α signaling cascades, disturbing synaptic transmission in the PRh, and leading to visual recognition memory deficits. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Activity Recognition Invariant to Sensor Orientation with Wearable Motion Sensors.

    Science.gov (United States)

    Yurtman, Aras; Barshan, Billur

    2017-08-09

    Most activity recognition studies that employ wearable sensors assume that the sensors are attached at pre-determined positions and orientations that do not change over time. Since this is not the case in practice, it is of interest to develop wearable systems that operate invariantly to sensor position and orientation. We focus on invariance to sensor orientation and develop two alternative transformations to remove the effect of absolute sensor orientation from the raw sensor data. We test the proposed methodology in activity recognition with four state-of-the-art classifiers using five publicly available datasets containing various types of human activities acquired by different sensor configurations. While the ordinary activity recognition system cannot handle incorrectly oriented sensors, the proposed transformations allow the sensors to be worn at any orientation at a given position on the body, and achieve nearly the same activity recognition performance as the ordinary system for which the sensor units are not rotatable. The proposed techniques can be applied to existing wearable systems without much effort, by simply transforming the time-domain sensor data at the pre-processing stage.
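    One simple instance of an orientation-removing transformation (a common baseline, not necessarily the paper's exact method) is to replace raw tri-axial accelerometer samples with their per-sample vector magnitude, which is unchanged by any rotation of the sensor:

```python
import numpy as np

def orientation_invariant(acc):
    """Map raw tri-axial accelerometer samples (n, 3) to a feature
    that does not depend on how the sensor unit was oriented at its
    position on the body: the per-sample vector magnitude."""
    return np.linalg.norm(acc, axis=1)

# A rotated copy of the same motion yields the same invariant feature.
rng = np.random.default_rng(1)
acc = rng.normal(size=(100, 3))
theta = np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
print(np.allclose(orientation_invariant(acc),
                  orientation_invariant(acc @ R.T)))  # True
```

    The magnitude discards directional information; the transformations in the paper aim to remove orientation while retaining more of the signal than this baseline does.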

  13. Target recognition of log-polar ladar range images using moment invariants

    Science.gov (United States)

    Xia, Wenze; Han, Shaokun; Cao, Jie; Yu, Haoyong

    2017-01-01

    The ladar range image has received considerable attention in the automatic target recognition field. However, previous research does not cover target recognition using log-polar ladar range images. Therefore, we construct a target recognition system based on log-polar ladar range images in this paper. In this system, combined moment invariants and a backpropagation neural network are selected as the shape descriptor and shape classifier, respectively. In order to fully analyze the effect of the log-polar sampling pattern on recognition results, several comparative experiments based on simulated and real range images are carried out. Eventually, several important conclusions are drawn: (i) if combined moments are computed directly from log-polar range images, the translation, rotation and scaling invariance of combined moments will be invalid; (ii) when the object is located in the center of the field of view, the recognition rate of log-polar range images is less sensitive to changes in the field of view; (iii) as the object position changes from the center to the edge of the field of view, the recognition performance of log-polar range images declines dramatically; (iv) log-polar range images have better noise robustness than Cartesian range images. Finally, we suggest that it is better to divide the field of view into a recognition area and a searching area in real applications.
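    Moment invariants of the kind used as the shape descriptor above can be computed directly from image moments. A minimal NumPy sketch of the first two Hu moment invariants (the paper uses combined moments; this simpler classical variant illustrates the invariance property):

```python
import numpy as np

def hu_moments(img):
    """First two Hu moment invariants of a 2-D intensity image:
    translation-, scale- and rotation-invariant shape descriptors."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]].astype(float)
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00

    def eta(p, q):  # normalised central moment
        mu = ((x - cx) ** p * (y - cy) ** q * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

# A 90-degree rotation of the shape leaves the invariants unchanged.
img = np.zeros((64, 64))
img[20:40, 10:50] = 1.0
print(np.allclose(hu_moments(img), hu_moments(np.rot90(img))))  # True
```

    As conclusion (i) above notes, these invariance properties hold in Cartesian coordinates; computing the same moments directly on log-polar samples breaks them.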

  14. Emotion recognition abilities across stimulus modalities in schizophrenia and the role of visual attention.

    Science.gov (United States)

    Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J

    2013-12-01

    Emotion can be expressed by both the voice and face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual only and combined conditions, time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio and visual only conditions but did not differ from controls on the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification. © 2013.

  15. Face and Word Recognition Can Be Selectively Affected by Brain Injury or Developmental Disorders.

    Science.gov (United States)

    Robotham, Ro J; Starrfelt, Randi

    2017-01-01

    Face and word recognition have traditionally been thought to rely on highly specialised and relatively independent cognitive processes. Some of the strongest evidence for this has come from patients with seemingly category-specific visual perceptual deficits such as pure prosopagnosia, a selective face recognition deficit, and pure alexia, a selective word recognition deficit. Together, the patterns of impaired reading with preserved face recognition and impaired face recognition with preserved reading constitute a double dissociation. The existence of these selective deficits has been questioned over the past decade. It has been suggested that studies describing patients with these pure deficits have failed to measure the supposedly preserved functions using sensitive enough measures, and that if tested using sensitive measurements, all patients with deficits in one visual category would also have deficits in the other. The implications of this would be immense, with most textbooks in cognitive neuropsychology requiring drastic revisions. In order to evaluate the evidence for dissociations, we review studies that specifically investigate whether face or word recognition can be selectively affected by acquired brain injury or developmental disorders. We only include studies published since 2004, as comprehensive reviews of earlier studies are available. Most of the studies assess the supposedly preserved functions using sensitive measurements. We found convincing evidence that reading can be preserved in acquired and developmental prosopagnosia and also evidence (though weaker) that face recognition can be preserved in acquired or developmental dyslexia, suggesting that face and word recognition are at least in part supported by independent processes.

  16. Damage-recognition proteins as a potential indicator of DNA-damage-mediated sensitivity or resistance of human cells to ultraviolet radiation

    International Nuclear Information System (INIS)

    Chao, C.C.-K.

    1992-01-01

    The authors compared damage-recognition proteins in cells expressing different sensitivities to DNA damage. An increase in damage-recognition proteins and an enhancement of plasmid re-activation were detected in HeLa cells resistant to cisplatin and u.v. However, repair-defective cells derived from xeroderma-pigmentosum (a rare skin disease) patients did not express lower levels of cisplatin damage-recognition proteins than repair-competent cells, suggesting that damage-recognition-protein expression may not be related to DNA repair. By contrast, cells resistant to DNA damage consistently expressed high levels of u.v.-modified-DNA damage-recognition proteins. The results support the notion that u.v. damage-recognition proteins are different from those that bind to cisplatin. Findings also suggest that the damage-recognition proteins identified could be used as potential indicators of the sensitivity or resistance of cells to u.v. (author)

  17. Fast Pedestrian Recognition Based on Multisensor Fusion

    Directory of Open Access Journals (Sweden)

    Hongyu Hu

    2012-01-01

    Full Text Available A fast pedestrian recognition algorithm based on multisensor fusion is presented in this paper. Firstly, potential pedestrian locations are estimated by laser radar scanning in world coordinates, and then their corresponding candidate regions in the image are located by camera calibration and the perspective mapping model. To avoid time-consuming training and recognition caused by the large number of feature vector dimensions, a region-of-interest-based integral histogram of oriented gradients (ROI-IHOG) feature extraction method is then proposed. A support vector machine (SVM) classifier is trained on a novel pedestrian sample dataset adapted to the urban road environment for online recognition. Finally, we test the validity of the proposed approach with several video sequences from realistic urban road scenarios. Reliable and real-time performance is demonstrated with the proposed multisensor fusion method.
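    The perspective mapping step described above turns laser-radar pedestrian locations in world coordinates into candidate regions in the image. A minimal pinhole-camera sketch of that projection (the intrinsic matrix and pose below are invented example values, assumed known from calibration):

```python
import numpy as np

def project(points_world, K, R, t):
    """Project 3-D world points (n, 3) into pixel coordinates with a
    pinhole camera model. K is the 3x3 intrinsic matrix; R, t give the
    world-to-camera rigid transform."""
    cam = points_world @ R.T + t          # world -> camera frame
    pix = cam @ K.T                       # camera -> homogeneous pixels
    return pix[:, :2] / pix[:, 2:3]       # perspective divide

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 10.0],   # on the optical axis, 10 m ahead
                [1.0, 0.0, 10.0]])  # 1 m to the side
print(np.allclose(project(pts, K, R, t)[0], [320.0, 240.0]))  # True
```

    A candidate region for the classifier is then a box around each projected location, sized according to the measured range.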

  18. Recognition Memory for Novel Stimuli: The Structural Regularity Hypothesis

    Science.gov (United States)

    Cleary, Anne M.; Morris, Alison L.; Langley, Moses M.

    2007-01-01

    Early studies of human memory suggest that adherence to a known structural regularity (e.g., orthographic regularity) benefits memory for an otherwise novel stimulus (e.g., G. A. Miller, 1958). However, a more recent study suggests that structural regularity can lead to an increase in false-positive responses on recognition memory tests (B. W. A.…

  19. Does comorbid anxiety counteract emotion recognition deficits in conduct disorder?

    Science.gov (United States)

    Short, Roxanna M L; Sonuga-Barke, Edmund J S; Adams, Wendy J; Fairchild, Graeme

    2016-08-01

    Previous research has reported altered emotion recognition in both conduct disorder (CD) and anxiety disorders (ADs) - but these effects appear to be of different kinds. Adolescents with CD often show a generalised pattern of deficits, while those with ADs show hypersensitivity to specific negative emotions. Although these conditions often co-occur, little is known regarding emotion recognition performance in comorbid CD+ADs. Here, we test the hypothesis that in the comorbid case, anxiety-related emotion hypersensitivity counteracts the emotion recognition deficits typically observed in CD. We compared facial emotion recognition across four groups of adolescents aged 12-18 years: those with CD alone (n = 28), ADs alone (n = 23), co-occurring CD+ADs (n = 20) and typically developing controls (n = 28). The emotion recognition task we used systematically manipulated the emotional intensity of facial expressions as well as fixation location (eye, nose or mouth region). Conduct disorder was associated with a generalised impairment in emotion recognition; however, this may have been modulated by group differences in IQ. AD was associated with increased sensitivity to low-intensity happiness, disgust and sadness. In general, the comorbid CD+ADs group performed similarly to typically developing controls. Although CD alone was associated with emotion recognition impairments, ADs and comorbid CD+ADs were associated with normal or enhanced emotion recognition performance. The presence of comorbid ADs appeared to counteract the effects of CD, suggesting a potentially protective role, although future research should examine the contribution of IQ and gender to these effects. © 2016 Association for Child and Adolescent Mental Health.

  20. Double-Windows-Based Motion Recognition in Multi-Floor Buildings Assisted by a Built-In Barometer.

    Science.gov (United States)

    Liu, Maolin; Li, Huaiyu; Wang, Yuan; Li, Fei; Chen, Xiuwan

    2018-04-01

    Accelerometers, gyroscopes and magnetometers in smartphones are often used to recognize human motions. Since it is difficult to distinguish between vertical motions and horizontal motions in the data provided by these built-in sensors, the vertical motion recognition accuracy is relatively low. The emergence of a built-in barometer in smartphones improves the accuracy of motion recognition in the vertical direction. However, there is a lack of quantitative analysis and modelling of barometer signals, which is the basis for applying barometers to motion recognition, and a problem of imbalanced data also exists. This work focuses on using the barometers inside smartphones for vertical motion recognition in multi-floor buildings through modelling and feature extraction of pressure signals. A novel double-windows pressure feature extraction method, which adopts two sliding time windows of different lengths, is proposed to balance recognition accuracy and response time. Then, a random forest classifier correlation rule is further designed to weaken the impact of imbalanced data on recognition accuracy. The results demonstrate that the recognition accuracy can reach 95.05% when pressure features and the improved random forest classifier are adopted. Specifically, the recognition accuracy of the stair and elevator motions is significantly improved, with improved response time. The proposed approach proves effective and accurate, providing a robust strategy for increasing the accuracy of vertical motion recognition.
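The double-window idea can be sketched as two sliding buffers over the same pressure stream: a short window for fast response and a long window for noise-robust accuracy. The sampling rate, window lengths, thresholds (in hPa/s) and motion labels below are illustrative assumptions, not the paper's values.

```python
from collections import deque

class DoubleWindowBaro:
    """Two sliding windows over one barometer stream: the short window
    reacts quickly, the long window smooths sensor noise. All numeric
    parameters here are illustrative."""

    def __init__(self, hz=10, short_s=1, long_s=5):
        self.hz = hz
        self.short = deque(maxlen=hz * short_s)
        self.long = deque(maxlen=hz * long_s)

    def _rate(self, win):
        # pressure change rate in hPa per second over the window
        if len(win) < 2:
            return 0.0
        return (win[-1] - win[0]) / (len(win) / self.hz)

    def update(self, pressure_hpa):
        self.short.append(pressure_hpa)
        self.long.append(pressure_hpa)
        fast, slow = self._rate(self.short), self._rate(self.long)
        if abs(fast) < 0.02 and abs(slow) < 0.02:
            return "horizontal/stationary"
        # elevators change altitude (hence pressure, roughly 0.12 hPa per
        # metre near sea level) much faster than stair walking
        return "elevator" if abs(fast) > 0.3 else "stairs"

baro = DoubleWindowBaro()
for i in range(50):
    state = baro.update(1013.0 + 0.01 * i)  # slow, steady pressure rise
print(state)  # stairs
```

A real system would feed such window statistics into the random forest classifier rather than hand-set thresholds; the sketch only shows why two window lengths trade off accuracy against response time.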

  1. Towards multimodal emotion recognition in E-learning environments

    NARCIS (Netherlands)

    Bahreini, Kiavash; Nadolski, Rob; Westera, Wim

    2014-01-01

    This paper presents a framework, FILTWAM (Framework for Improving Learning Through Webcams And Microphones), for real-time emotion recognition in e-learning by using webcams. FILTWAM offers timely and relevant feedback based upon learners’ facial expressions and verbalizations. FILTWAM’s facial…

  2. Human motion sensing and recognition a fuzzy qualitative approach

    CERN Document Server

    Liu, Honghai; Ji, Xiaofei; Chan, Chee Seng; Khoury, Mehdi

    2017-01-01

    This book introduces readers to the latest exciting advances in human motion sensing and recognition, from the theoretical development of fuzzy approaches to their applications. The topics covered include human motion recognition in 2D and 3D, hand motion analysis with contact sensors, and vision-based view-invariant motion recognition, especially from the perspective of Fuzzy Qualitative techniques. With the rapid development of technologies in microelectronics, computers, networks, and robotics over the last decade, increasing attention has been focused on human motion sensing and recognition in many emerging and active disciplines where human motions need to be automatically tracked, analyzed or understood, such as smart surveillance, intelligent human-computer interaction, robot motion learning, and interactive gaming. Current challenges mainly stem from the dynamic environment, data multi-modality, uncertain sensory information, and real-time issues. These techniques are shown to effectively address the ...

  3. Active Multimodal Sensor System for Target Recognition and Tracking.

    Science.gov (United States)

    Qu, Yufu; Zhang, Guirong; Zou, Zhaofan; Liu, Ziyue; Mao, Jiansen

    2017-06-28

    High accuracy target recognition and tracking systems using a single sensor or a passive multisensor set are susceptible to external interferences and exhibit environmental dependencies. These difficulties stem mainly from limitations to the available imaging frequency bands, and a general lack of coherent diversity of the available target-related data. This paper proposes an active multimodal sensor system for target recognition and tracking, consisting of a visible, an infrared, and a hyperspectral sensor. The system makes full use of its multisensor information collection abilities; furthermore, it can actively control different sensors to collect additional data, according to the needs of the real-time target recognition and tracking processes. This level of integration between hardware collection control and data processing is experimentally shown to effectively improve the accuracy and robustness of the target recognition and tracking system.

  4. Face Recognition Is Affected by Similarity in Spatial Frequency Range to a Greater Degree Than Within-Category Object Recognition

    Science.gov (United States)

    Collin, Charles A.; Liu, Chang Hong; Troje, Nikolaus F.; McMullen, Patricia A.; Chaudhuri, Avi

    2004-01-01

    Previous studies have suggested that face identification is more sensitive to variations in spatial frequency content than object recognition, but none have compared how sensitive the 2 processes are to variations in spatial frequency overlap (SFO). The authors tested face and object matching accuracy under varying SFO conditions. Their results…

  5. Brand recognition in television advertising: The influence of brand presence and brand introduction

    Directory of Open Access Journals (Sweden)

    Charlene Gerber

    2014-05-01

    Problem investigated: Brand recognition and recall are established advertising effectiveness measurements to assess brand awareness. Of particular interest is whether encoding of brand information as measured by brand recognition is influenced by brand presence and brand introduction. Design/methodology/approach: A meta-analysis was performed on responses to 25 television advertisements, gathered from 50 000 respondents. Findings: The findings indicated a positive linear relationship between brand presence and brand recognition but a negative linear relationship between brand introduction and brand recognition, whilst brand introduction and brand presence predicted variance in brand recognition. Value of research: The researchers concluded that a brand should be present in an advertisement for about two-thirds of the time for optimum brand recognition.

  6. Quality based approach for adaptive face recognition

    Science.gov (United States)

    Abboud, Ali J.; Sellahewa, Harin; Jassim, Sabah A.

    2009-05-01

    Recent advances in biometric technology have pushed towards more robust and reliable systems. We aim to build systems that have low recognition errors and are less affected by variation in recording conditions. Recognition errors are often attributed to the use of low-quality biometric samples. Hence, new intelligent techniques and strategies are needed to automatically measure the quality of biometric image samples and, if necessary, restore image quality according to the needs of the intended application. In this paper, we present no-reference image quality measures in the spatial domain that affect face recognition. The first is called the symmetrical adaptive local quality index (SALQI) and the second the middle halve (MH). An adaptive strategy, called symmetrical adaptive histogram equalization (SAHE), has also been developed to select the best way to restore image quality. The main benefits of using quality measures in an adaptive strategy are: (1) avoidance of unnecessary enhancement procedures that may cause undesired artifacts, and (2) reduced computational complexity, which is essential for real-time applications. We test the success of the proposed measures and adaptive approach on a wavelet-based face recognition system that uses a nearest-neighbor classifier, and demonstrate noticeable improvements in the performance of the adaptive face recognition system over the corresponding non-adaptive scheme.
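As background for the restoration step this abstract mentions, plain global histogram equalization looks as follows; the paper's SAHE adds symmetry-aware, quality-driven adaptations that are not reproduced in this sketch.

```python
def equalize_histogram(pixels, levels=256):
    """Global histogram equalization of a flat list of integer pixel
    values in [0, levels). This is only the base operation, not SAHE."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function of the grey levels
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    # map grey levels so the output CDF is approximately linear
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [lut[p] for p in pixels]

# a low-contrast patch clustered around grey level 100 is stretched
out = equalize_histogram([100] * 5 + [101] * 5)
print(min(out), max(out))  # 0 255
```

An adaptive strategy such as the one described would apply this kind of enhancement only when the quality measures indicate it is needed, avoiding artifacts on already-good samples.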

  7. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. The book discusses advanced kernel learning algorithms and their application to face recognition, focusing on the theoretical derivation, the system framework and experiments involving kernel-based face recognition. Included are algorithms for kernel-based face recognition, along with the feasibility of the kernel-based face recognition method. The book provides researchers in the pattern recognition and machine learning areas with advanced face recognition methods and its new…

  8. Unrealistic optimism and 'nosognosia': illness recognition in the healthy brain.

    Science.gov (United States)

    McKay, Ryan; Buchmann, Andreas; Germann, Nicole; Yu, Shancong; Brugger, Peter

    2014-12-01

    At the centenary of research on anosognosia, the time seems ripe to supplement work in anosognosic patients with empirical studies on nosognosia in healthy participants. To this end, we adopted a signal detection framework to investigate the lateralized recognition of illness words--an operational measure of nosognosia--in healthy participants. As positively biased reports about one's current health status (anosognosia) and future health status (unrealistic optimism) have both been associated with deficient right hemispheric functioning, and conversely with undisturbed left hemispheric functioning, we hypothesised that more optimistic participants would adopt a more conservative response criterion, and/or display less sensitivity, when identifying illnesses in our nosognosia task; especially harmful illnesses presented to the left hemisphere via the right visual field. Thirty-two healthy right-handed men estimated their own relative risk of contracting a series of illnesses in the future, and then completed a novel computer task assessing their recognition of illness names presented to the left or right visual field. To check that effects were specific to the recognition of illness (rather than reflecting recognition of lexical items per se), we also administered a standard lateralized lexical decision task. Highly optimistic participants tended to be more conservative in detecting illnesses, especially harmful illnesses presented to the right visual field. Contrary to expectation, they were also more sensitive to illness names in this half-field. We suggest that, in evolutionary terms, unrealistic optimism may be an adaptive trait that combines a high perceptual sensitivity to threat with a high threshold for acknowledging its presence. The signal detection approach to nosognosia developed here may open up new avenues for the understanding of anosognosia in neurological patients. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. No effect of stress on false recognition.

    Science.gov (United States)

    Beato, María Soledad; Cadavid, Sara; Pulido, Ramón F; Pinho, María Salomé

    2013-02-01

    The present study aimed to analyze the effect of acute stress on false recognition in the Deese/Roediger-McDermott (DRM) paradigm. In this paradigm, lists of words associated with a non-presented critical lure are studied and, in a subsequent memory test, critical lures are often falsely remembered. In two experiments, participants were randomly assigned to either the stress group (Trier Social Stress Test) or the no-stress control group. Because we sought to control the level of processing at encoding, in Experiment 1, participants created a visual mental image for each presented word (deep encoding). In Experiment 2, participants performed a shallow encoding task (deciding whether each word contained the letter "o"). The results indicated that, in both experiments, as predicted, heart rate and STAI-S scores increased only in the stress group. However, false recognition did not differ between the stress and no-stress groups. The results suggest that, although psychosocial stress was successfully induced, acute stress does not increase vulnerability to DRM false recognition, regardless of the level of processing.

  10. Action recognition is sensitive to the identity of the actor.

    Science.gov (United States)

    Ferstl, Ylva; Bülthoff, Heinrich; de la Rosa, Stephan

    2017-09-01

    Recognizing who is carrying out an action is essential for successful human interaction. The cognitive mechanisms underlying this ability are little understood and have been subject of discussions in embodied approaches to action recognition. Here we examine one solution, that visual action recognition processes are at least partly sensitive to the actor's identity. We investigated the dependency between identity information and action related processes by testing the sensitivity of neural action recognition processes to clothing and facial identity information with a behavioral adaptation paradigm. Our results show that action adaptation effects are in fact modulated by both clothing information and the actor's facial identity. The finding demonstrates that neural processes underlying action recognition are sensitive to identity information (including facial identity) and thereby not exclusively tuned to actions. We suggest that such response properties are useful to help humans in knowing who carried out an action. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  11. Normal mere exposure effect with impaired recognition in Alzheimer's disease.

    Science.gov (United States)

    Willems, Sylvie; Adam, Stéphane; Van der Linden, Martial

    2002-02-01

    We investigated the mere exposure effect and explicit memory in Alzheimer's disease (AD) patients and elderly control subjects, using unfamiliar faces. During the exposure phase, the subjects estimated the age of briefly flashed faces. The mere exposure effect was examined by presenting pairs of faces (old and new) and asking participants to select the face they liked. The participants were then presented with a forced-choice explicit recognition task. Control subjects exhibited above-chance preference and recognition scores for old faces. The AD patients also showed the mere exposure effect but no explicit recognition. These results suggest that the processes involved in the mere exposure effect are preserved in AD patients despite their impaired explicit recognition. The results are discussed in terms of Seamon et al.'s (1995) proposal that processes involved in the mere exposure effect are equivalent to those subserving perceptual priming. These processes would depend on extrastriate areas, which are relatively preserved in AD patients.

  12. No one way ticket from orthography to semantics in recognition memory: N400 and P200 effects of associations.

    Science.gov (United States)

    Stuellein, Nicole; Radach, Ralph R; Jacobs, Arthur M; Hofmann, Markus J

    2016-05-15

    Computational models of word recognition already successfully used associative spreading from orthographic to semantic levels to account for false memories. But can they also account for semantic effects on event-related potentials in a recognition memory task? To address this question, target words in the present study had either many or few semantic associates in the stimulus set. We found larger P200 amplitudes and smaller N400 amplitudes for old words in comparison to new words. Words with many semantic associates led to larger P200 amplitudes and a smaller N400 in comparison to words with a smaller number of semantic associations. We also obtained inverted response time and accuracy effects for old and new words: faster response times and fewer errors were found for old words that had many semantic associates, whereas new words with a large number of semantic associates produced slower response times and more errors. Both behavioral and electrophysiological results indicate that semantic associations between words can facilitate top-down driven lexical access and semantic integration in recognition memory. Our results support neurophysiologically plausible predictions of the Associative Read-Out Model, which suggests top-down connections from semantic to orthographic layers. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Iris double recognition based on modified evolutionary neural network

    Science.gov (United States)

    Liu, Shuai; Liu, Yuan-Ning; Zhu, Xiao-Dong; Huo, Guang; Liu, Wen-Tao; Feng, Jia-Kai

    2017-11-01

    Aiming at multicategory iris recognition under illumination and noise interference, this paper proposes an iris double-recognition method based on a modified evolutionary neural network. Histogram equalization and a Laplacian-of-Gaussian operator are used to preprocess the iris image to suppress illumination and noise interference, and a Haar wavelet converts the iris features to a binary feature encoding. The Hamming distance between the test iris and the template iris is calculated and compared with a classification threshold to determine the iris type. If the iris cannot be identified as a distinct type, a secondary recognition is performed. The connection weights of a back-propagation (BP) neural network are trained adaptively by the modified evolutionary neural network, which combines particle swarm optimization with a mutation operator and the BP neural network. Experimental results on different iris libraries under different conditions show that, under illumination and noise interference, this algorithm achieves a higher correct recognition rate, an ROC curve closer to the coordinate axes, shorter training and recognition times, and better stability and robustness.
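The first-stage decision described here, a Hamming-distance comparison against a classification threshold with deferral to the secondary neural-network stage, can be sketched as follows. The threshold value and the toy binary codes are illustrative, not the paper's.

```python
def hamming_distance(code_a, code_b):
    """Fractional Hamming distance between two equal-length binary codes."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

# illustrative threshold; the paper's value is not given in the abstract
THRESHOLD = 0.32

def first_stage(test_code, templates):
    """Accept the closest template iris if it falls within the threshold;
    return None to hand the sample to the secondary recognition stage."""
    label, code = min(templates.items(),
                      key=lambda kv: hamming_distance(test_code, kv[1]))
    return label if hamming_distance(test_code, code) <= THRESHOLD else None
```

A near-exact match is accepted directly by thresholding; only ambiguous samples fall through to the more expensive evolutionary-neural-network stage, which is what keeps the average recognition time short.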

  14. Speech Recognition for Medical Dictation: Overview in Quebec and Systematic Review.

    Science.gov (United States)

    Poder, Thomas G; Fisette, Jean-François; Déry, Véronique

    2018-04-03

    Speech recognition is increasingly used in medical reporting. The aim of this article is to identify in the literature the strengths and weaknesses of this technology, as well as barriers to and facilitators of its implementation. A systematic review of systematic reviews was performed using PubMed, Scopus, the Cochrane Library and the Center for Reviews and Dissemination through August 2017. The gray literature has also been consulted. The quality of systematic reviews has been assessed with the AMSTAR checklist. The main inclusion criterion was use of speech recognition for medical reporting (front-end or back-end). A survey has also been conducted in Quebec, Canada, to identify the dissemination of this technology in this province, as well as the factors leading to the success or failure of its implementation. Five systematic reviews were identified. These reviews indicated a high level of heterogeneity across studies. The quality of the studies reported was generally poor. Speech recognition is not as accurate as human transcription, but it can dramatically reduce turnaround times for reporting. In front-end use, medical doctors need to spend more time on dictation and correction than required with human transcription. With speech recognition, major errors occur up to three times more frequently. In back-end use, a potential increase in productivity of transcriptionists was noted. In conclusion, speech recognition offers several advantages for medical reporting. However, these advantages are countered by an increased burden on medical doctors and by risks of additional errors in medical reports. It is also hard to identify for which medical specialties and which clinical activities the use of speech recognition will be the most beneficial.

  15. Understanding gender bias in face recognition: effects of divided attention at encoding.

    Science.gov (United States)

    Palmer, Matthew A; Brewer, Neil; Horry, Ruth

    2013-03-01

    Prior research has demonstrated a female own-gender bias in face recognition, with females better at recognizing female faces than male faces. We explored the basis for this effect by examining the effect of divided attention during encoding on females' and males' recognition of female and male faces. For female participants, divided attention impaired recognition performance for female faces to a greater extent than male faces in a face recognition paradigm (Study 1; N=113) and an eyewitness identification paradigm (Study 2; N=502). Analysis of remember-know judgments (Study 2) indicated that divided attention at encoding selectively reduced female participants' recollection of female faces at test. For male participants, divided attention selectively reduced recognition performance (and recollection) for male stimuli in Study 2, but had similar effects on recognition of male and female faces in Study 1. Overall, the results suggest that attention at encoding contributes to the female own-gender bias by facilitating the later recollection of female faces. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. The contribution of the body and motion to whole person recognition.

    Science.gov (United States)

    Simhi, Noa; Yovel, Galit

    2016-05-01

    While the importance of faces in person recognition has been the subject of many studies, there are relatively few studies examining recognition of the whole person in motion even though this most closely resembles daily experience. Most studies examining the whole body in motion use point light displays, which have many advantages but are impoverished and unnatural compared to real life. To determine which factors are used when recognizing the whole person in motion we conducted two experiments using naturalistic videos. In Experiment 1 we used a matching task in which the first stimulus in each pair could either be a video or multiple still images from a video of the full body. The second stimulus, on which person recognition was performed, could be an image of either the full body or face alone. We found that the body contributed to person recognition beyond the face, but only after exposure to motion. Since person recognition was performed on still images, the contribution of motion to person recognition was mediated by form-from-motion processes. To assess whether dynamic identity signatures may also contribute to person recognition, in Experiment 2 we presented people in motion and examined person recognition from videos compared to still images. Results show that dynamic identity signatures did not contribute to person recognition beyond form-from-motion processes. We conclude that the face, body and form-from-motion processes all appear to play a role in unfamiliar person recognition, suggesting the importance of considering the whole body and motion when examining person perception. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Parents’ Emotion-Related Beliefs, Behaviors, and Skills Predict Children's Recognition of Emotion

    Science.gov (United States)

    Castro, Vanessa L.; Halberstadt, Amy G.; Lozada, Fantasy T.; Craig, Ashley B.

    2015-01-01

    Children who are able to recognize others’ emotions are successful in a variety of socioemotional domains, yet we know little about how school-aged children's abilities develop, particularly in the family context. We hypothesized that children develop emotion recognition skill as a function of parents’ own emotion-related beliefs, behaviors, and skills. We examined parents’ beliefs about the value of emotion and guidance of children's emotion, parents’ emotion labeling and teaching behaviors, and parents’ skill in recognizing children's emotions in relation to their school-aged children's emotion recognition skills. Sixty-nine parent-child dyads completed questionnaires, participated in dyadic laboratory tasks, and identified their own emotions and emotions felt by the other participant from videotaped segments. Regression analyses indicate that parents’ beliefs, behaviors, and skills together account for 37% of the variance in child emotion recognition ability, even after controlling for parent and child expressive clarity. The findings suggest the importance of the family milieu in the development of children's emotion recognition skill in middle childhood, and add to accumulating evidence suggesting important age-related shifts in the relation between parental emotion socialization and child emotional development. PMID:26005393

  18. Differential effects of spaced vs. massed training in long-term object-identity and object-location recognition memory.

    Science.gov (United States)

    Bello-Medina, Paola C; Sánchez-Carrasco, Livia; González-Ornelas, Nadia R; Jeffery, Kathryn J; Ramírez-Amaya, Víctor

    2013-08-01

    Here we tested whether the well-known superiority of spaced training over massed training is equally evident in both object identity and object location recognition memory. We trained animals with objects placed in a variable or in a fixed location to produce a location-independent object identity memory or a location-dependent object representation. The training consisted of 5 trials that occurred either on one day (Massed) or over the course of 5 consecutive days (Spaced). The memory test was done in independent groups of animals either 24h or 7 days after the last training trial. In each test the animals were exposed to either a novel object, when trained with the objects in variable locations, or to a familiar object in a novel location, when trained with objects in fixed locations. The difference in time spent exploring the changed versus the familiar objects was used as a measure of recognition memory. For the object-identity-trained animals, spaced training produced clear evidence of recognition memory after both 24h and 7 days, but massed-training animals showed it only after 24h. In contrast, for the object-location-trained animals, recognition memory was evident after both retention intervals and with both training procedures. When objects were placed in variable locations for the two types of training and the test was done with a brand-new location, only the spaced-training animals showed recognition at 24h, but surprisingly, after 7 days, animals trained using both procedures were able to recognize the change, suggesting a post-training consolidation process. We suggest that the two training procedures trigger different neural mechanisms that may differ in the two segregated streams that process object information and that may consolidate differently. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. The association between imitation recognition and socio-communicative competencies in chimpanzees (Pan troglodytes).

    Science.gov (United States)

    Pope, Sarah M; Russell, Jamie L; Hopkins, William D

    2015-01-01

    Imitation recognition provides a viable platform from which advanced social cognitive skills may develop. Despite evidence that non-human primates are capable of imitation recognition, how this ability is related to social cognitive skills is unknown. In this study, we compared imitation recognition performance, as indicated by the production of testing behaviors, with performance on a series of tasks that assess social and physical cognition in 49 chimpanzees. In the initial analyses, we found that males were more responsive than females to being imitated and engaged in significantly greater behavior repetitions and testing sequences. We also found that subjects who consistently recognized being imitated performed better on social but not physical cognitive tasks, as measured by the Primate Cognitive Test Battery. These findings suggest that the neural constructs underlying imitation recognition are likely associated with or among those underlying more general socio-communicative abilities in chimpanzees. Implications regarding how imitation recognition may facilitate other social cognitive processes, such as mirror self-recognition, are discussed.

  20. Wavelet-based ground vehicle recognition using acoustic signals

    Science.gov (United States)

    Choe, Howard C.; Karlsen, Robert E.; Gerhart, Grant R.; Meitzler, Thomas J.

    1996-03-01

    We present, in this paper, a wavelet-based acoustic signal analysis to remotely recognize military vehicles using their sound intercepted by acoustic sensors. Since expedited signal recognition is imperative in many military and industrial situations, we developed an algorithm that provides an automated, fast signal recognition once implemented in a real-time hardware system. This algorithm consists of wavelet preprocessing, feature extraction and compact signal representation, and a simple but effective statistical pattern matching. The current status of the algorithm does not require any training. The training is replaced by human selection of reference signals (e.g., squeak or engine exhaust sound) distinctive to each individual vehicle based on human perception. This allows a fast archiving of any new vehicle type in the database once the signal is collected. The wavelet preprocessing provides time-frequency multiresolution analysis using discrete wavelet transform (DWT). Within each resolution level, feature vectors are generated from statistical parameters and energy content of the wavelet coefficients. After applying our algorithm on the intercepted acoustic signals, the resultant feature vectors are compared with the reference vehicle feature vectors in the database using statistical pattern matching to determine the type of vehicle from where the signal originated. Certainly, statistical pattern matching can be replaced by an artificial neural network (ANN); however, the ANN would require training data sets and time to train the net. Unfortunately, this is not always possible for many real world situations, especially collecting data sets from unfriendly ground vehicles to train the ANN. Our methodology using wavelet preprocessing and statistical pattern matching provides robust acoustic signal recognition. We also present an example of vehicle recognition using acoustic signals collected from two different military ground vehicles. 

  1. A modern optical character recognition system in a real world clinical setting: some accuracy and feasibility observations.

    Science.gov (United States)

    Biondich, Paul G; Overhage, J Marc; Dexter, Paul R; Downs, Stephen M; Lemmon, Larry; McDonald, Clement J

    2002-01-01

    Advances in optical character recognition (OCR) software and computer hardware have stimulated a reevaluation of the technology and its ability to capture structured clinical data from preexisting paper forms. In our pilot evaluation, we measured the accuracy and feasibility of capturing vitals data from a pediatric encounter form that has been in use for over twenty years. We found that the software had a digit recognition rate of 92.4% (95% confidence interval: 91.6 to 93.2) overall. More importantly, this system was approximately three times as fast as our existing method of data entry. These preliminary results suggest that with further refinements in the approach and additional development, we may be able to incorporate OCR as another method for capturing structured clinical data.
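The reported interval can be reproduced with a standard normal-approximation (Wald) confidence interval for a proportion. The digit count n ≈ 4215 used below is back-solved from the interval's width for illustration; it is not taken from the paper:

```python
import math

def binomial_ci(p_hat, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# With ~4215 recognised digits (an illustrative, back-solved n), a 92.4%
# digit recognition rate yields roughly the interval quoted in the record.
low, high = binomial_ci(0.924, 4215)
print(round(low * 100, 1), round(high * 100, 1))  # → 91.6 93.2
```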

  2. Stimulus effects and the mediation of recognition memory.

    Science.gov (United States)

    McAdoo, Ryan M; Key, Kylie N; Gronlund, Scott D

    2018-04-19

    Two broad approaches characterize the type of evidence that mediates recognition memory: discrete state and continuous. Discrete-state models posit a thresholded memory process that provides accurate information about an item (it is detected) or, failing that, no mnemonic information about the item. Continuous models, in contrast, posit the existence of graded mnemonic information about an item. Evidence favoring 1 approach over the other has been mixed, suggesting the possibility that the mediation of recognition memory may be adaptable and influenced by other factors. We tested this possibility with 2 experiments that varied the semantic similarity of word targets and fillers. Experiment 1, which used semantically similar fillers, displayed evidence of continuous mediation (contrary to Kellen & Klauer, 2015), whereas Experiment 2, which used semantically dissimilar fillers, displayed evidence of discrete mediation. The results have implications for basic theories of recognition memory, as well as for theories of applied domains like eyewitness identification. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. Hypergraph-Based Recognition Memory Model for Lifelong Experience

    Science.gov (United States)

    2014-01-01

    Cognitive agents are expected to interact with and adapt to a nonstationary dynamic environment. As an initial step of decision making in real-world agent interaction, familiarity judgment guides the subsequent processes of intelligence. Familiarity judgment includes knowing previously encoded data as well as completing original patterns from partial information, which are fundamental functions of recognition memory. Although previous computational memory models have attempted to reflect human behavioral properties of recognition memory, they have focused on static conditions without considering temporal changes in terms of lifelong learning. To provide temporal adaptability to an agent, in this paper, we suggest a computational model for recognition memory that enables lifelong learning. The proposed model is based on a hypergraph structure, and thus it allows a high-order relationship between contextual nodes and enables incremental learning. Through a simulated experiment, we investigate the optimal conditions of the memory model and validate the consistency of memory performance for lifelong learning. PMID:25371665

  4. The review and results of different methods for facial recognition

    Science.gov (United States)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide range of potential applications. As a unique technology in biometric identification, facial recognition represents a significant advance because it can operate without the cooperation of the people under detection. Hence, facial recognition is being adopted in defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method with more accurate localization on specific databases; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and large head poses; (4) three methods for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  5. Physics of Automatic Target Recognition

    CERN Document Server

    Sadjadi, Firooz

    2007-01-01

    Physics of Automatic Target Recognition addresses the fundamental physical bases of sensing, and information extraction in the state-of-the art automatic target recognition field. It explores both passive and active multispectral sensing, polarimetric diversity, complex signature exploitation, sensor and processing adaptation, transformation of electromagnetic and acoustic waves in their interactions with targets, background clutter, transmission media, and sensing elements. The general inverse scattering, and advanced signal processing techniques and scientific evaluation methodologies being used in this multi disciplinary field will be part of this exposition. The issues of modeling of target signatures in various spectral modalities, LADAR, IR, SAR, high resolution radar, acoustic, seismic, visible, hyperspectral, in diverse geometric aspects will be addressed. The methods for signal processing and classification will cover concepts such as sensor adaptive and artificial neural networks, time reversal filt...

  6. Hardware processors for pattern recognition tasks in experiments with wire chambers

    International Nuclear Information System (INIS)

    Verkerk, C.

    1975-01-01

    Hardware processors for pattern recognition tasks in experiments with multiwire proportional chambers or drift chambers are described. They vary from simple ones used for deciding in real time if particle trajectories are straight to complex ones for recognition of curved tracks. Schematics and block-diagrams of different processors are shown

  7. Incremental support vector machines for fast reliable image recognition

    International Nuclear Information System (INIS)

    Makili, L.; Vega, J.; Dormido-Canto, S.

    2013-01-01

    Highlights: ► A conformal predictor using SVM as the underlying algorithm was implemented. ► It was applied to image recognition in the TJ–II's Thomson Scattering Diagnostic. ► To improve time efficiency an approach to incremental SVM training has been used. ► Accuracy is similar to the one reached when standard SVM is used. ► Computational time saving is significant for large training sets. -- Abstract: This paper addresses the reliable classification of images in a 5-class problem. To this end, an automatic recognition system, based on conformal predictors and using Support Vector Machines (SVM) as the underlying algorithm, has been developed and applied to the recognition of images in the Thomson Scattering Diagnostic of the TJ–II fusion device. Using such a conformal-predictor-based classifier is a computationally intensive task, since it implies training several SVM models to classify a single example, and performing this training from scratch takes a significant amount of time. In order to improve the classification time efficiency, an approach to the incremental training of SVM has been used as the underlying algorithm. Experimental results show that the overall performance of the new classifier is high, comparable to the one corresponding to the use of standard SVM as the underlying algorithm, and there is a significant improvement in time efficiency
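The conformal-prediction part can be sketched without the SVM machinery. The toy below uses a 1-nearest-neighbour nonconformity score (a common textbook stand-in for the paper's SVM-based score) on one-dimensional features; the transductive p-value computation is the part that carries over:

```python
def nonconformity(x, label, pool):
    """1-NN nonconformity: distance to the nearest same-label example
    divided by distance to the nearest other-label example."""
    same = min((abs(x - xi) for xi, yi in pool if yi == label), default=float("inf"))
    other = min((abs(x - xi) for xi, yi in pool if yi != label), default=float("inf"))
    return same / other if other > 0 else float("inf")

def conformal_p(x, label, train):
    """Transductive conformal p-value of the hypothesis 'x has this label'."""
    pool = train + [(x, label)]
    scores = []
    for i, (xi, yi) in enumerate(pool):
        rest = pool[:i] + pool[i + 1:]          # leave-one-out
        scores.append(nonconformity(xi, yi, rest))
    test_score = scores[-1]
    return sum(1 for s in scores if s >= test_score) / len(pool)

def prediction_set(x, labels, train, significance=0.1):
    """Labels whose p-value exceeds the significance level."""
    return [lab for lab in labels if conformal_p(x, lab, train) > significance]

train = [(0.0, "a"), (0.1, "a"), (1.0, "b"), (1.1, "b")]
# With only 5 augmented examples the smallest possible p-value is 1/5,
# so a loose significance level is needed for this toy data set.
print(prediction_set(0.05, ["a", "b"], train, significance=0.25))  # → ['a']
```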

  9. Technical Reviews on Pattern Recognition in Process Analytical Technology

    International Nuclear Information System (INIS)

    Kim, Jong Yun; Choi, Yong Suk; Ji, Sun Kyung; Park, Yong Joon; Song, Kyu Seok; Jung, Sung Hee

    2008-12-01

    Pattern recognition is one of the earliest and most widely adopted chemometric tools among the many active research areas in chemometrics, such as design of experiments (DoE), pattern recognition, multivariate calibration, and signal processing. Pattern recognition has been used to identify the origin of a wine and the year in which the vine was grown by chromatography, the cause of a fire by GC/MS chromatography, to detect explosives and land mines, to inspect cargo and luggage in seaports and airports by prompt gamma-ray activation analysis, and to apportion sources of environmental pollutants by stable isotope ratio mass spectrometry. Recently, pattern recognition has come to be regarded as a major chemometric tool in the so-called 'process analytical technology (PAT)', a newly developed concept in process analytics proposed by the US Food and Drug Administration (US FDA). For instance, identification of raw materials by pattern recognition analysis plays an important role in the effective quality control of the production process. Recently, pattern recognition techniques have been used to identify the spatial distribution and uniformity of the active ingredients present in a product such as a tablet by transforming chemical data into visual information

  10. Medial prefrontal cortex role in recognition memory in rodents.

    Science.gov (United States)

    Morici, Juan Facundo; Bekinschtein, Pedro; Weisstaub, Noelia V

    2015-10-01

    The study of the neurobiology of recognition memory, defined by the integration of the different components of experiences that support recollection of past experiences, has been a challenge for memory researchers for many years. In the last twenty years, with the development of the spontaneous novel object recognition task and all its variants, this has started to change. The features of recognition memory include a particular object or person ("what"), the context in which the experience took place, which can be the arena itself or the location within a particular arena ("where"), and the particular time at which the event occurred ("when"). This definition, instead of the historical anthropocentric one, allows the study of this type of episodic memory in animal models. Some forms of recognition memory that require integration of different features recruit the medial prefrontal cortex. Focusing on findings from spontaneous recognition memory tasks performed by rodents, this review concentrates on the description of previous works that have examined the role the medial prefrontal cortex plays in the different steps of recognition memory. We conclude that this structure, independently of the task used, is required at different memory stages when the task cannot be solved by a single-item strategy. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Recognition of knowledge – A step towards optimization of education

    Directory of Open Access Journals (Sweden)

    Vanda Rebolj

    2011-03-01

    In her presentation of the knowledge recognition procedures, the author relies on constructivist theories of knowledge and highlights the importance of the achieved levels of knowledge, paying equal attention to the low levels (skills), the higher levels, and the highest levels (problem-solving), none of which should be omitted in the assessment and recognition procedures. The author then presents the experience in knowledge recognition gained in the last five years by several colleges providing part-time studies, starting with a course in accounting and proceeding with other programmes. It is essential that knowledge recognition should not be pushed into the domain of experts or become an administrative procedure; it must remain part of the regular teaching procedure and under the control of the teacher. This requires implementation of appropriate teacher training. Despite the fact that the recognition procedures developed so far have proved to be valid and have gained credibility, numerous new research issues are being raised in this field.

  12. Stereo vision with distance and gradient recognition

    Science.gov (United States)

    Kim, Soo-Hyun; Kang, Suk-Bum; Yang, Tae-Kyu

    2007-12-01

    Robot vision technology is needed for stable walking, object recognition and movement to a target spot. With sensors that use infrared rays or ultrasound, a robot can cope with urgent or dangerous situations, but stereo vision of three-dimensional space gives a robot far more powerful artificial intelligence. In this paper we consider stereo vision for the stable and correct movement of a biped robot. When a robot confronts an inclined plane or steps, particular algorithms are needed to proceed without failure. This study developed an algorithm that recognizes the distance and gradient of the environment through stereo matching.
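The distance part of such an algorithm rests on the standard pinhole-stereo relation Z = f·B/d, and a gradient estimate follows from two depth samples a known ground distance apart. The focal length, baseline, and disparity values below are illustrative, not from the paper:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo geometry: depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def surface_gradient(z_near, z_far, step_forward_m):
    """Slope (rise over run) between two depth samples taken a known
    ground distance apart, e.g. for detecting an inclined plane."""
    return (z_far - z_near) / step_forward_m

# Illustrative numbers: 700 px focal length, 12 cm baseline, 35 px disparity.
z = depth_from_disparity(700, 0.12, 35)   # ≈ 2.4 m to the matched point
```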

  13. The Costs and Benefits of Testing and Guessing on Recognition Memory

    Science.gov (United States)

    Huff, Mark J.; Balota, David A.; Hutchison, Keith A.

    2016-01-01

    We examined whether two types of interpolated tasks (i.e., retrieval-practice via free recall or guessing a missing critical item) improved final recognition for related and unrelated word lists relative to restudying or completing a filler task. Both retrieval-practice and guessing tasks improved correct recognition relative to restudy and filler tasks, particularly when study lists were semantically related. However, both retrieval practice and guessing also generally inflated false recognition for the non-presented critical words. These patterns were found when final recognition was completed during a short delay within the same experimental session (Experiment 1) and following a 24-hr delay (Experiment 2). In Experiment 3, task instructions were presented randomly after each list to determine whether retrieval-practice and guessing effects were influenced by task-expectancy processes. In contrast to Experiments 1 and 2, final recognition following retrieval practice and guessing was equivalent to restudy, suggesting that the observed retrieval-practice and guessing advantages were in part due to preparatory task-based processing during study. PMID:26950490

  14. Stimulation over primary motor cortex during action observation impairs effector recognition.

    Science.gov (United States)

    Naish, Katherine R; Barnes, Brittany; Obhi, Sukhvinder S

    2016-04-01

    Recent work suggests that motor cortical processing during action observation plays a role in later recognition of the object involved in the action. Here, we investigated whether recognition of the effector making an action is also impaired when transcranial magnetic stimulation (TMS) - thought to interfere with normal cortical activity - is applied over the primary motor cortex (M1) during action observation. In two experiments, single-pulse TMS was delivered over the hand area of M1 while participants watched short clips of hand actions. Participants were then asked whether an image (Experiment 1) or a video (Experiment 2) of a hand presented later in the trial was the same or different to the hand in the preceding video. In Experiment 1, we found that participants' ability to recognise static images of hands was significantly impaired when TMS was delivered over M1 during action observation, compared to when no TMS was delivered, or when stimulation was applied over the vertex. Conversely, stimulation over M1 did not affect recognition of dot configurations, or recognition of hands that were previously presented as static images (rather than action movie clips) with no object. In Experiment 2, we found that effector recognition was impaired when stimulation was applied partway through (300 ms) and at the end (500 ms) of the action observation period, indicating that 200 ms of action-viewing following stimulation was not long enough to form a new representation that could be used for later recognition. The findings of both experiments suggest that interfering with cortical motor activity during action observation impairs subsequent recognition of the effector involved in the action, which complements previous findings of motor system involvement in object memory. This work provides some of the first evidence that motor processing during action observation is involved in forming representations of the effector that are useful beyond the action observation period.

  15. Towards Multimodal Emotion Recognition in E-Learning Environments

    Science.gov (United States)

    Bahreini, Kiavash; Nadolski, Rob; Westera, Wim

    2016-01-01

    This paper presents a framework (FILTWAM (Framework for Improving Learning Through Webcams And Microphones)) for real-time emotion recognition in e-learning by using webcams. FILTWAM offers timely and relevant feedback based upon learner's facial expressions and verbalizations. FILTWAM's facial expression software module has been developed and…

  16. Recognition of emotional facial expressions in adolescents with anorexia nervosa and adolescents with major depression.

    Science.gov (United States)

    Sfärlea, Anca; Greimel, Ellen; Platt, Belinda; Dieler, Alica C; Schulte-Körne, Gerd

    2018-04-01

    Anorexia nervosa (AN) has been suggested to be associated with abnormalities in facial emotion recognition. Most prior studies on facial emotion recognition in AN have investigated adult samples, despite the onset of AN being particularly often during adolescence. In addition, few studies have examined whether impairments in facial emotion recognition are specific to AN or might be explained by frequent comorbid conditions that are also associated with deficits in emotion recognition, such as depression. The present study addressed these gaps by investigating recognition of emotional facial expressions in adolescent girls with AN (n = 26) compared to girls with major depression (MD; n = 26) and healthy girls (HC; n = 37). Participants completed one task requiring identification of emotions (happy, sad, afraid, angry, neutral) in faces and two control tasks. Neither of the clinical groups showed impairments. The AN group was more accurate than the HC group in recognising afraid facial expressions and more accurate than the MD group in recognising happy, sad, and afraid expressions. Misclassification analyses identified subtle group differences in the types of errors made. The results suggest that the deficits in facial emotion recognition found in adult AN samples are not present in adolescent patients. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. A reciprocal model of face recognition and autistic traits: evidence from an individual differences perspective.

    Science.gov (United States)

    Halliday, Drew W R; MacDonald, Stuart W S; Scherf, K Suzanne; Tanaka, James W

    2014-01-01

    Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals.

  18. An Improved Iris Recognition Algorithm Based on Hybrid Feature and ELM

    Science.gov (United States)

    Wang, Juan

    2018-03-01

    The iris image is easily polluted by noise and uneven light. This paper proposes an improved extreme learning machine (ELM) based iris recognition algorithm with a hybrid feature. 2D-Gabor filters and the GLCM are employed to generate a multi-granularity hybrid feature vector: the 2D-Gabor filters and GLCM features capture low-to-intermediate-frequency and high-frequency texture information, respectively. Finally, we utilize an extreme learning machine for iris recognition. Experimental results reveal that our proposed ELM based multi-granularity iris recognition algorithm (ELM-MGIR) achieves a higher accuracy of 99.86% and a lower equal error rate (EER) of 0.12% while maintaining real-time performance. The proposed ELM-MGIR algorithm outperforms other mainstream iris recognition algorithms.
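The GLCM half of the hybrid feature can be sketched directly; the Gabor filter bank and the ELM classifier are omitted here. This minimal version uses a single horizontal (0, 1) pixel offset, one common GLCM configuration among the several the authors could have used:

```python
def glcm(image, levels):
    """Grey-level co-occurrence matrix for a horizontal (0, 1) offset,
    normalised to a joint probability table."""
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):  # each horizontally adjacent pair
            counts[a][b] += 1
            total += 1
    return [[c / total for c in row] for row in counts]

def glcm_features(p):
    """Contrast and energy, two classic GLCM texture descriptors."""
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy = sum(v * v for row in p for v in row)
    return contrast, energy
```

On a real iris image these descriptors would be computed per region and per offset, then concatenated with the Gabor responses into the hybrid vector.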

  19. Synergy of Two Highly Specific Biomolecular Recognition Events

    DEFF Research Database (Denmark)

    Ejlersen, Maria; Christensen, Niels Johan; Sørensen, Kasper K

    2018-01-01

    Two highly specific biomolecular recognition events, nucleic acid duplex hybridization and DNA-peptide recognition in the minor groove, were coalesced in a miniature ensemble for the first time by covalently attaching a natural AT-hook peptide motif to nucleic acid duplexes via a 2'-amino......-LNA scaffold. A combination of molecular dynamics simulations and ultraviolet thermal denaturation studies revealed high sequence-specific affinity of the peptide-oligonucleotide conjugates (POCs) when binding to complementary DNA strands, leveraging the bioinformation encrypted in the minor groove of DNA...

  20. Embedded Face Detection and Recognition

    Directory of Open Access Journals (Sweden)

    Göksel Günlü

    2012-10-01

    Full Text Available The need to increase security in open or public spaces has in turn given rise to the requirement to monitor these spaces and analyse those images on-site and on-time. At this point, the use of smart cameras – of which the popularity has been increasing – is one step ahead. With sensors and Digital Signal Processors (DSPs), smart cameras generate ad hoc results by analysing the numeric images transmitted from the sensor by means of a variety of image-processing algorithms. Since the images are not transmitted to a distant processing unit but rather are processed inside the camera, it does not necessitate high-bandwidth networks or high processor powered systems; it can instantaneously decide on the required access. Nonetheless, on account of restricted memory, processing power and overall power, image processing algorithms need to be developed and optimized for embedded processors. Among these algorithms, one of the most important is for face detection and recognition. A number of face detection and recognition methods have been proposed recently and many of these methods have been tested on general-purpose processors. In smart cameras – which are real-life applications of such methods – the widest use is on DSPs. In the present study, the Viola-Jones face detection method – which was reported to run faster on PCs – was optimized for DSPs; the face recognition method was combined with the developed sub-region and mask-based DCT (Discrete Cosine Transform). As the employed DSP is a fixed-point processor, the processes were performed with integers insofar as it was possible. To enable face recognition, the image was divided into sub-regions and from each sub-region the robust coefficients against disruptive elements – like face expression, illumination, etc. – were selected as the features. The discrimination of the selected features was enhanced via LDA (Linear Discriminant Analysis) and then employed for recognition. Thanks to its
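The sub-region DCT feature extraction this record describes can be sketched as follows. The naive O(n⁴) DCT-II, the 4×4 block size, and keeping the first few coefficients per block are illustrative choices; the robust-coefficient selection, the fixed-point arithmetic, and the LDA step are omitted:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square block (written for clarity, not speed)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                    * math.cos(math.pi * (2 * y + 1) * v / (2 * n))
                    for x in range(n) for y in range(n))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

def subregion_features(image, block=4, keep=3):
    """Split the image into blocks and keep the first few DCT coefficients
    of each block (row-scan order stands in for a zig-zag scan)."""
    feats = []
    for r in range(0, len(image), block):
        for c in range(0, len(image[0]), block):
            tile = [row[c:c + block] for row in image[r:r + block]]
            coeffs = dct2(tile)
            flat = [coeffs[u][v] for u in range(block) for v in range(block)]
            feats.extend(flat[:keep])
    return feats
```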

  1. Sensor-Based Activity Recognition with Dynamically Added Context

    Directory of Open Access Journals (Sweden)

    Jiahui Wen

    2015-08-01

    Full Text Available An activity recognition system essentially processes raw sensor data and maps them into latent activity classes. Most previous systems are built with supervised learning techniques and pre-defined data sources, and result in static models. However, in realistic and dynamic environments, original data sources may fail and new data sources become available; a robust activity recognition system should therefore be able to evolve automatically with dynamic sensor availability in dynamic environments. In this paper, we propose methods that automatically incorporate dynamically available data sources to adapt and refine the recognition system at run-time. The system is built upon ensemble classifiers which can automatically choose the features with the most discriminative power. Extensive experimental results with publicly available datasets demonstrate the effectiveness of our methods.

  2. Imageability and age of acquisition effects in disyllabic word recognition.

    Science.gov (United States)

    Cortese, Michael J; Schock, Jocelyn

    2013-01-01

    Imageability and age of acquisition (AoA) effects, as well as key interactions between these variables and frequency and consistency, were examined via multiple regression analyses for 1,936 disyllabic words, using reaction time and accuracy measures from the English Lexicon Project. Both imageability and AoA accounted for unique variance in lexical decision and naming reaction time performance. In addition, across both tasks, AoA and imageability effects were larger for low-frequency words than high-frequency words, and imageability effects were larger for later acquired than earlier acquired words. In reading aloud, consistency effects in reaction time were larger for later acquired words than earlier acquired words, but consistency did not interact with imageability in the reaction time analysis. These results provide further evidence that multisyllabic word recognition is similar to monosyllabic word recognition and indicate that AoA and imageability are valid predictors of word recognition performance. In addition, the results indicate that meaning exerts a larger influence in the reading aloud of multisyllabic words than monosyllabic words. Finally, parallel-distributed-processing approaches provide a useful theoretical framework to explain the main effects and interactions.

  3. The Improvement of Behavior Recognition Accuracy of Micro Inertial Accelerometer by Secondary Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2014-05-01

    Full Text Available Behaviors of “still”, “walking”, “running”, “jumping”, “upstairs” and “downstairs” can be recognized by a low-cost micro inertial accelerometer. By using the features as inputs to a well-trained BP artificial neural network, selected as the classifier, those behaviors can be recognized. But the experimental results show that the recognition accuracy is not satisfactory. This paper presents a secondary recognition algorithm and combines it with the BP artificial neural network to improve the recognition accuracy. The algorithm is verified on the Android mobile platform, and the recognition accuracy can be improved by more than 8 %. Extensive testing and statistical analysis show that the recognition accuracy can reach 95 % through the BP artificial neural network and the secondary recognition, which is a reasonably good result from a practical point of view.
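A secondary-recognition pass can be illustrated with a toy two-stage classifier. Replacing the BP neural network with a nearest-centroid rule, and disambiguating the stair classes by the sign of a vertical-axis trend, are this sketch's assumptions, not the paper's design; the centroid values are made up for illustration:

```python
# Illustrative mean-acceleration-magnitude centroids (in g); both stair
# gaits share one centroid, so the first pass cannot tell them apart.
CENTROIDS = {"still": 0.1, "walking": 1.0, "upstairs": 1.5}

def primary_classify(feature, centroids=CENTROIDS):
    """First pass: nearest centroid on mean acceleration magnitude."""
    return min(centroids, key=lambda label: abs(feature["magnitude"] - centroids[label]))

def secondary_classify(label, feature):
    """Second pass: re-decide the confusable stair classes using the
    vertical-axis trend, which magnitude alone cannot separate."""
    if label in ("upstairs", "downstairs"):
        return "upstairs" if feature["vertical_trend"] > 0 else "downstairs"
    return label

def recognize(feature):
    return secondary_classify(primary_classify(feature), feature)
```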

  4. Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Chang Liu

    2014-01-01

    Full Text Available To overcome the shortcomings of traditional dimensionality reduction algorithms, an incremental tensor principal component analysis (ITPCA) algorithm based on an updated-SVD technique is proposed in this paper. The paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically and derives the incremental learning procedure for adding single samples and multiple samples in detail. Experiments on handwritten digit recognition demonstrate that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.
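The incremental half of such an algorithm reduces to updating sufficient statistics one sample at a time instead of revisiting old data. The sketch below maintains a running mean and covariance (a Welford-style update); the tensor decomposition / updated-SVD step of ITPCA is not reproduced here:

```python
class IncrementalCovariance:
    """Running mean and covariance, updated one sample at a time —
    the sufficient statistics an incremental PCA would re-decompose."""

    def __init__(self, dim):
        self.n = 0
        self.mean = [0.0] * dim
        # Accumulated sum of outer products of deviations.
        self.m2 = [[0.0] * dim for _ in range(dim)]

    def update(self, x):
        self.n += 1
        delta = [xi - mi for xi, mi in zip(x, self.mean)]       # vs old mean
        self.mean = [mi + d / self.n for mi, d in zip(self.mean, delta)]
        delta2 = [xi - mi for xi, mi in zip(x, self.mean)]      # vs new mean
        for i in range(len(x)):
            for j in range(len(x)):
                self.m2[i][j] += delta[i] * delta2[j]

    def covariance(self):
        """Unbiased sample covariance matrix (requires n >= 2)."""
        return [[v / (self.n - 1) for v in row] for row in self.m2]
```

Adding a sample costs O(dim²) regardless of how many samples came before, which is where the time and space savings over batch recomputation come from.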

  5. How love and sex can influence recognition of faces and words: a processing model account

    NARCIS (Netherlands)

    Förster, J.

    2010-01-01

    A link is suggested between romantic love and face recognition, and between sexual desire and verbal recognition. When in love, people typically focus on a long-term perspective, which enhances global perception, whereas when experiencing sexual encounters they focus on the present, which enhances a perception

  6. Time and nature of the signal for maternal recognition of pregnancy in the pig

    NARCIS (Netherlands)

    Meulen, van der J.

    1989-01-01

    A vital link in a complex of physiological processes occurring during early pregnancy is the so-called maternal recognition of pregnancy: the prolongation of ovarian luteal function for continuation of progesterone secretion by an anti-luteolytic action of the developing embryos.

  7. Associative recognition and the hippocampus: differential effects of hippocampal lesions on object-place, object-context and object-place-context memory.

    Science.gov (United States)

    Langston, Rosamund F; Wood, Emma R

    2010-10-01

    The hippocampus is thought to be required for the associative recognition of objects together with the spatial or temporal contexts in which they occur. However, recent data showing that rats with fornix lesions perform as well as controls in an object-place task, while being impaired on an object-place-context task (Eacott and Norman (2004) J Neurosci 24:1948-1953), suggest that not all forms of context-dependent associative recognition depend on the integrity of the hippocampus. To examine the role of the hippocampus in context-dependent recognition directly, the present study tested the effects of large, selective, bilateral hippocampus lesions in rats on performance of a series of spontaneous recognition memory tasks: object recognition, object-place recognition, object-context recognition and object-place-context recognition. Consistent with the effects of fornix lesions, animals with hippocampus lesions were impaired only on the object-place-context task. These data confirm that not all forms of context-dependent associative recognition are mediated by the hippocampus. Subsequent experiments suggested that the object-place task does not require an allocentric representation of space, which could account for the lack of impairment following hippocampus lesions. Importantly, as the object-place-context task has similar spatial requirements, the selective deficit in object-place-context recognition suggests that this task requires hippocampus-dependent neural processes distinct from those required for allocentric spatial memory, or for object memory, object-place memory or object-context memory. Two possibilities are that object, place, and context information converge only in the hippocampus, or that recognition of integrated object-place-context information requires a hippocampus-dependent mode of retrieval, such as recollection. © 2009 Wiley-Liss, Inc.

  8. Selective attention meets spontaneous recognition memory: Evidence for effects at retrieval.

    Science.gov (United States)

    Moen, Katherine C; Miller, Jeremy K; Lloyd, Marianne E

    2017-03-01

    Previous research on the effects of Divided Attention on recognition memory has shown consistent impairments during encoding but more variable effects at retrieval. The present study explored whether the effects of Selective Attention at retrieval and subsequent testing parallel those of Divided Attention. Participants studied a list of pictures and then took a recognition memory test that included both full-attention and selective-attention trials (the to-be-responded-to object was overlaid atop a blue-outlined object). All participants then completed a second recognition memory test. The results of two experiments suggest that subsequent tests consistently show effects of the status of the ignored stimulus, and that taking an initial test changes performance on a later test. The results are discussed in relation to the effects of attention on memory more generally, as well as to spontaneous recognition memory research. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Knowledge fusion: Time series modeling followed by pattern recognition applied to unusual sections of background data

    International Nuclear Information System (INIS)

    Burr, T.; Doak, J.; Howell, J.A.; Martinez, D.; Strittmatter, R.

    1996-03-01

    This report describes work performed during FY 95 for the Knowledge Fusion Project, which was funded by the Department of Energy, Office of Nonproliferation and National Security. The project team selected satellite sensor data as the main example to which its analysis algorithms would be applied. The specific sensor-fusion problem has many generic features that make it worthwhile to attempt to solve in a general way. The generic problem is to recognize events of interest from multiple time series in a possibly noisy background. By implementing a suite of time series modeling and forecasting methods and using well-chosen alarm criteria, we reduce the number of false alarms. We then further reduce the number of false alarms by analyzing all suspicious sections of data, as judged by the alarm criteria, with pattern recognition methods. This report describes the implementation and application of this two-step process for separating events from unusual background. As a fortunate by-product of this activity, it is possible to gain a better understanding of the natural background.

  10. Knowledge fusion: Time series modeling followed by pattern recognition applied to unusual sections of background data

    Energy Technology Data Exchange (ETDEWEB)

    Burr, T.; Doak, J.; Howell, J.A.; Martinez, D.; Strittmatter, R.

    1996-03-01

    This report describes work performed during FY 95 for the Knowledge Fusion Project, which was funded by the Department of Energy, Office of Nonproliferation and National Security. The project team selected satellite sensor data as the main example to which its analysis algorithms would be applied. The specific sensor-fusion problem has many generic features that make it worthwhile to attempt to solve in a general way. The generic problem is to recognize events of interest from multiple time series in a possibly noisy background. By implementing a suite of time series modeling and forecasting methods and using well-chosen alarm criteria, we reduce the number of false alarms. We then further reduce the number of false alarms by analyzing all suspicious sections of data, as judged by the alarm criteria, with pattern recognition methods. This report describes the implementation and application of this two-step process for separating events from unusual background. As a fortunate by-product of this activity, it is possible to gain a better understanding of the natural background.
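
    The two-step idea in the two records above (forecast-based alarms, then a pattern recognition pass over the suspicious sections) can be sketched on synthetic data; the moving-window forecast, threshold and run-length filter below are illustrative choices, not the report's actual methods.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic noisy background with one injected event (a 20-sample step).
x = rng.normal(0.0, 1.0, 500)
x[300:320] += 6.0

# Step 1: one-step-ahead forecast (mean of the previous window);
# raise an alarm wherever the forecast residual is large.
window, threshold = 20, 3.0
resid = np.zeros_like(x)
for t in range(window, len(x)):
    resid[t] = x[t] - x[t - window:t].mean()
alarms = np.flatnonzero(np.abs(resid) > threshold)

# Step 2: a crude "pattern recognition" pass over the suspicious sections:
# keep only sustained runs of alarms, discarding isolated noise spikes.
groups = np.split(alarms, np.where(np.diff(alarms) > 1)[0] + 1)
events = [g for g in groups if len(g) >= 3]
```

    The second pass is what cuts the false-alarm rate: isolated threshold crossings from background noise are discarded, while the injected step survives as a sustained run near sample 300.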

  11. Oscillation-Driven Spike-Timing Dependent Plasticity Allows Multiple Overlapping Pattern Recognition in Inhibitory Interneuron Networks

    DEFF Research Database (Denmark)

    Garrido, Jesús A.; Luque, Niceto R.; Tolu, Silvia

    2016-01-01

    The majority of operations carried out by the brain require learning complex signal patterns for future recognition, retrieval and reuse. Although learning is thought to depend on multiple forms of long-term synaptic plasticity, the way this latter contributes to pattern recognition is still poorly… and at the inhibitory interneuron-interneuron synapses, the interneurons rapidly learned complex input patterns. Interestingly, induction of plasticity required that the network be entrained into theta-frequency band oscillations, setting the internal phase-reference required to drive STDP. Inhibitory plasticity… effectively distributed multiple patterns among available interneurons, thus allowing the simultaneous detection of multiple overlapping patterns. The addition of plasticity in intrinsic excitability made the system more robust allowing self-adjustment and rescaling in response to a broad range of input…

  12. Recognition memory for low- and high-frequency-filtered emotional faces: Low spatial frequencies drive emotional memory enhancement, whereas high spatial frequencies drive the emotion-induced recognition bias.

    Science.gov (United States)

    Rohr, Michaela; Tröger, Johannes; Michely, Nils; Uhde, Alarith; Wentura, Dirk

    2017-07-01

    This article deals with two well-documented phenomena regarding emotional stimuli: emotional memory enhancement-that is, better long-term memory for emotional than for neutral stimuli-and the emotion-induced recognition bias-that is, a more liberal response criterion for emotional than for neutral stimuli. Studies on visual emotion perception and attention suggest that emotion-related processes can be modulated by means of spatial-frequency filtering of the presented emotional stimuli. Specifically, low spatial frequencies are assumed to play a primary role for the influence of emotion on attention and judgment. Given this theoretical background, we investigated whether spatial-frequency filtering also impacts (1) the memory advantage for emotional faces and (2) the emotion-induced recognition bias, in a series of old/new recognition experiments. Participants completed incidental-learning tasks with high- (HSF) and low- (LSF) spatial-frequency-filtered emotional and neutral faces. The results of the surprise recognition tests showed a clear memory advantage for emotional stimuli. Most importantly, the emotional memory enhancement was significantly larger for face images containing only low-frequency information (LSF faces) than for HSF faces across all experiments, suggesting that LSF information plays a critical role in this effect, whereas the emotion-induced recognition bias was found only for HSF stimuli. We discuss our findings in terms of both the traditional account of different processing pathways for HSF and LSF information and a stimulus features account. The double dissociation in the results favors the latter account-that is, an explanation in terms of differences in the characteristics of HSF and LSF stimuli.
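
    The LSF/HSF stimulus split used in experiments like these amounts to low-pass filtering an image and keeping the residual as the high-frequency version. A minimal numpy sketch, with an arbitrary Gaussian width and a random array standing in for a face photograph (the study's actual filter cutoffs are not reproduced here):

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def low_pass(img, sigma=3.0):
    """Separable Gaussian blur: keeps only low spatial frequencies."""
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

rng = np.random.default_rng(4)
face = rng.random((64, 64))      # stand-in for a grayscale face image

lsf = low_pass(face)             # low-spatial-frequency version
hsf = face - lsf                 # residual high-spatial-frequency version
```

    By construction the two filtered versions sum back to the original image, so the LSF and HSF conditions partition the image's frequency content rather than discarding it.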

  13. The Effect of Mood-Context on Visual Recognition and Recall Memory

    OpenAIRE

    Robinson, Sarita Jane; Rollings, Lucy J. L.

    2010-01-01

    Although it is widely known that memory is enhanced when encoding and retrieval occur in the same state, the impact of elevated stress/arousal is less understood. This study explores mood-dependent memory's effects on visual recognition and recall of material memorized either in a neutral mood or under higher stress/arousal levels. Participants’ (N = 60) recognition and recall were assessed while they experienced either the same or a mismatched mood at retrieval. The results suggested that bo...

  14. Own-Group Face Recognition Bias: The Effects of Location and Reputation

    Directory of Open Access Journals (Sweden)

    Linlin Yan

    2017-10-01

    Full Text Available In the present study, we examined whether social categorization based on university affiliation can induce an advantage in recognizing faces. Moreover, we investigated how the reputation or location of the university affected face recognition performance using an old/new paradigm. We assigned five different university labels to the faces: the participants’ own university and four other universities. Among the four other-university labels, we manipulated the academic reputation and geographical location of the universities relative to the participants’ own university. The results showed that an own-group face recognition bias emerged for faces with own-university labels compared with those bearing other-university labels. Furthermore, we found a robust own-group face recognition bias only when the other university was located in a different city, far from the participants’ own university. Interestingly, we failed to find any influence of university reputation on the own-group face recognition bias. These results suggest that categorizing a face as a member of one’s own university is sufficient to enhance recognition accuracy, and that location plays a more important role than reputation in the effect of social categorization on face recognition. The results provide insight into the role of motivational factors underlying university membership in face perception.

  15. A Method to Integrate GMM, SVM and DTW for Speaker Recognition

    Directory of Open Access Journals (Sweden)

    Ing-Jr Ding

    2014-01-01

    Full Text Available This paper develops an effective and efficient scheme to integrate the Gaussian mixture model (GMM), support vector machine (SVM) and dynamic time warping (DTW) for automatic speaker recognition. GMM and SVM are two popular classifiers for speaker recognition applications. DTW is a fast and simple template matching method frequently seen in speech recognition applications. In this work, DTW is not used to perform speech recognition; instead, it is employed as a verifier to confirm valid speakers. The proposed combination scheme of GMM, SVM and DTW, called SVMGMM-DTW, is a two-phase verification process consisting of GMM-SVM verification in the first phase and DTW verification in the second phase. By providing a double check on a speaker's identity, it becomes difficult for impostors to pass the security protection; therefore, the safety of speaker recognition systems is largely increased. A series of experiments designed around door access control applications demonstrated the superiority of the developed SVMGMM-DTW in speaker recognition accuracy.
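
    The second-phase verifier rests on the standard dynamic-time-warping distance, which aligns two sequences of different lengths. A minimal sketch with synthetic 1-D "utterances" (the paper's actual features and acceptance thresholds are not reproduced here):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of the match, insertion and deletion steps.
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]

# An enrolled template and a time-stretched repetition of the same pattern
# align closely under DTW; a different pattern does not.
template = np.sin(np.linspace(0, 2 * np.pi, 40))
claimant = np.sin(np.linspace(0, 2 * np.pi, 60))   # same shape, warped
imposter = np.cos(np.linspace(0, 2 * np.pi, 60))

d_claimant = dtw_distance(template, claimant)
d_imposter = dtw_distance(template, imposter)
accept = d_claimant < d_imposter   # verification: accept the closer match
```

    In a two-phase scheme, a candidate already accepted by the GMM-SVM stage would additionally need a small DTW distance to an enrolled template before being admitted.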

  16. Arm Motion Recognition and Exercise Coaching System for Remote Interaction

    Directory of Open Access Journals (Sweden)

    Hong Zeng

    2016-01-01

    Full Text Available Arm motion recognition and its related applications have become a promising human-computer interaction modality due to the rapid integration of inertial sensors into modern mobile phones. We implement a mobile-phone-based arm motion recognition and exercise coaching system that can help people carrying mobile phones do body exercises anywhere at any time, especially persons who have very limited spare time and constantly travel across cities. We first design an improved k-means algorithm to cluster the collected 3-axis acceleration and gyroscope data of human actions into basic motions. A learning method based on a Hidden Markov Model is then designed to classify and recognize continuous arm motions of both learners and coaches, and to measure the similarity of actions between persons. We implement the system on a MIUI 2S mobile phone and evaluate its performance and recognition accuracy.
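
    The clustering stage can be illustrated with a plain (not the authors' improved) k-means on synthetic 3-axis accelerometer samples; the motion labels, cluster centers and deterministic seeding below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in 3-axis accelerometer samples for two basic motions; the real
# system clusters windows of sensor data, these Gaussian blobs are toys.
data = np.vstack([rng.normal([0.0, 0.0, 9.8], 0.2, (60, 3)),    # "still"
                  rng.normal([2.0, 1.0, 11.0], 0.4, (60, 3))])  # "arm raise"

def kmeans(X, init_idx, iters=20):
    # Seed one center per expected cluster to keep the sketch deterministic.
    centers = X[np.array(init_idx)].astype(float)
    for _ in range(iters):
        # Assign every sample to its nearest center ...
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        # ... then move each center to the mean of its assigned samples.
        centers = np.array([X[labels == j].mean(0)
                            for j in range(len(centers))])
    return centers, labels

centers, labels = kmeans(data, init_idx=[0, len(data) - 1])
```

    The resulting cluster labels would then serve as the discrete observation symbols fed to a Hidden Markov Model for recognizing continuous motion sequences.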

  17. Social recognition is context dependent in single male prairie voles

    Science.gov (United States)

    Zheng, Da-Jiang; Foley, Lauren; Rehman, Asad; Ophir, Alexander G.

    2013-01-01

    Single males might benefit from knowing the identity of neighbouring males when establishing and defending boundaries. Similarly, males should discriminate between individual females if this leads to more reproductive opportunities. Contextual social cues may alter the value of learning identity. Knowing the identity of competitors that intrude into an animal’s territory may be more salient than knowing the identity of individuals on whose territory an animal is trespassing. Hence, social and environmental context could affect social recognition in many ways. Here we test social recognition of socially monogamous single male prairie voles, Microtus ochrogaster. In experiment 1 we tested recognition of male or female conspecifics and found that males discriminated between different males but not between different females. In experiment 2 we asked whether recognition of males is influenced when males are tested in their own cage (familiar), in a clean cage (neutral) or in the home cage of another male (unfamiliar). Although focal males discriminated between male conspecifics in all three contexts, individual variation in recognition was lower when males were tested in their home cage (in the presence of familiar social cues) compared to when the context lacked social cues (neutral). Experiment 1 indicates that selective pressures may have operated to enhance male territorial behaviour and indiscriminate mate selection. Experiment 2 suggests that the presence of a conspecific cue heightens social recognition and that home-field advantages might extend to social cognition. Taken together, our results indicate social recognition depends on the social and possibly territorial context. PMID:24273328

  18. [Recognition of visual objects under forward masking. Effects of cathegorial similarity of test and masking stimuli].

    Science.gov (United States)

    Gerasimenko, N Iu; Slavutskaia, A V; Kalinin, S A; Kulikov, M A; Mikhaĭlova, E S

    2013-01-01

    In 38 healthy subjects, accuracy and response time were examined during the recognition of two categories of images (animals and nonliving objects) under forward masking. We obtained new data showing that masking effects depend on the categorical similarity of the target and masking stimuli. Recognition accuracy was lowest, and response time slowest, when the target and masking stimuli belonged to the same category, and this was accompanied by high variance in response times. The revealed effects were clearer in the animal recognition task than in the recognition of nonliving objects. We suppose that these effects are connected with interference between the cortical representations of the target and masking stimuli, and we discuss our results in the context of cortical interference and negative priming.

  19. The Legal Recognition of Sign Languages

    Science.gov (United States)

    De Meulder, Maartje

    2015-01-01

    This article provides an analytical overview of the different types of explicit legal recognition of sign languages. Five categories are distinguished: constitutional recognition, recognition by means of general language legislation, recognition by means of a sign language law or act, recognition by means of a sign language law or act including…

  20. Individual differences in language and working memory affect children's speech recognition in noise.

    Science.gov (United States)

    McCreery, Ryan W; Spratford, Meredith; Kirby, Benjamin; Brennan, Marc

    2017-05-01

    We examined how cognitive and linguistic skills affect speech recognition in noise for children with normal hearing. Children with better working memory and language abilities were expected to have better speech recognition in noise than peers with poorer skills in these domains. As part of a prospective, cross-sectional study, children with normal hearing completed speech recognition in noise tasks with three types of stimuli: (1) monosyllabic words, (2) syntactically correct but semantically anomalous sentences and (3) semantically and syntactically anomalous word sequences. Measures of vocabulary, syntax and working memory were used to predict individual differences in speech recognition in noise. Participants were ninety-six children with normal hearing between 5 and 12 years of age. Higher working memory was associated with better speech recognition in noise for all three stimulus types. Higher vocabulary abilities were associated with better recognition in noise for sentences and word sequences, but not for words. Working memory and language both influence children's speech recognition in noise, but the relationships vary across types of stimuli. These findings suggest that clinical assessment of speech recognition is likely to reflect underlying cognitive and linguistic abilities, in addition to a child's auditory skills, consistent with the Ease of Language Understanding model.