WorldWideScience

Sample records for intelligence drive auditory

  1. Auditory interfaces in automated driving: an international survey

    NARCIS (Netherlands)

    Bazilinskyy, P.; de Winter, J.C.F.

    2015-01-01

This study investigated people's opinions on auditory interfaces in contemporary
    cars and their willingness to be exposed to auditory feedback in automated driving. We used an Internet-based survey to collect 1,205 responses from 91 countries. The respondents stated their attitudes towards two

  2. Hidden Hearing Loss and Computational Models of the Auditory Pathway: Predicting Speech Intelligibility Decline

    Science.gov (United States)

    2016-11-28

Title: Hidden Hearing Loss and Computational Models of the Auditory Pathway: Predicting Speech Intelligibility Decline. Christopher J. Smalt. ... representation of speech intelligibility in noise. The auditory-periphery model of Zilany et al. (JASA 2009, 2014) is used to make predictions of ... auditory nerve (AN) responses to speech stimuli under a variety of difficult listening conditions. The resulting cochlear neurogram, a spectrogram

  3. Auditory interfaces in automated driving: an international survey

    Directory of Open Access Journals (Sweden)

    Pavlo Bazilinskyy

    2015-08-01

Full Text Available This study investigated people's opinions on auditory interfaces in contemporary cars and their willingness to be exposed to auditory feedback in automated driving. We used an Internet-based survey to collect 1,205 responses from 91 countries. The respondents stated their attitudes towards two existing auditory driver assistance systems, a parking assistant (PA) and a forward collision warning system (FCWS), as well as towards a futuristic augmented sound system (FS) proposed for fully automated driving. The respondents were positive towards the PA and FCWS, and rated their willingness to have automated versions of these systems as 3.87 and 3.77, respectively (on a scale from 1 = disagree strongly to 5 = agree strongly). The respondents tolerated the FS (the mean willingness to use it was 3.00 on the same scale). The results showed that among the available response options, the female voice was the most preferred feedback type for takeover requests in highly automated driving, regardless of whether the respondents' country was English speaking or not. The present results could be useful for designers of automated vehicles and other stakeholders.

  4. Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.

    Science.gov (United States)

    Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin

    2018-02-21

    In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by the syllables with the rising, dipping or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Reference-Free Assessment of Speech Intelligibility Using Bispectrum of an Auditory Neurogram

    Science.gov (United States)

    Hossain, Mohammad E.; Jassim, Wissam A.; Zilany, Muhammad S. A.

    2016-01-01

Sensorineural hearing loss occurs due to damage to the inner and outer hair cells of the peripheral auditory system. Hearing loss can cause decreases in audibility, dynamic range, frequency and temporal resolution of the auditory system, and all of these effects are known to affect speech intelligibility. In this study, a new reference-free speech intelligibility metric is proposed using 2-D neurograms constructed from the output of a computational model of the auditory periphery. The responses of the auditory-nerve fibers with a wide range of characteristic frequencies were simulated to construct neurograms. The features of the neurograms were extracted using third-order statistics referred to as the bispectrum. The phase coupling of the neurogram bispectrum provides unique insight into the presence (or deficit) of supra-threshold nonlinearities beyond audibility for listeners with normal hearing (or hearing loss). The speech intelligibility scores predicted by the proposed method were compared to the behavioral scores for listeners with normal hearing and hearing loss, both in quiet and under noisy background conditions. The results were also compared to the performance of some existing methods. The predicted results showed a good fit with a small error, suggesting that the subjective scores can be estimated reliably using the proposed neural-response-based metric. The proposed metric also had a wide dynamic range, and the predicted scores were well-separated as a function of hearing loss. The proposed metric successfully captures the effects of hearing loss and supra-threshold nonlinearities on speech intelligibility. This metric could be applied to evaluate the performance of various speech-processing algorithms designed for hearing aids and cochlear implants. PMID:26967160
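The bispectrum used in this record is a third-order spectrum that is non-zero only where frequency components are phase-coupled, which is what lets it flag supra-threshold nonlinearities. The abstract does not give the authors' implementation; the following is a minimal direct-averaging sketch in NumPy (segment length, window, and the test frequencies are illustrative assumptions, not from the paper):

```python
import numpy as np

def bispectrum(x, nfft=128):
    """Direct bispectrum estimate: average X(f1) * X(f2) * conj(X(f1 + f2))
    over non-overlapping Hann-windowed segments of length nfft."""
    nseg = len(x) // nfft
    f = np.arange(nfft // 2)
    f1, f2 = np.meshgrid(f, f)          # f1 varies along columns, f2 along rows
    win = np.hanning(nfft)
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for k in range(nseg):
        X = np.fft.fft(x[k * nfft:(k + 1) * nfft] * win)
        B += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return B / nseg

# Quadratic phase coupling: the 24-cycle component's phase is the sum of the
# 8- and 16-cycle phases, so the bispectrum peaks at bins (8, 16) / (16, 8);
# phases drawn independently per segment make uncoupled triples average out.
rng = np.random.default_rng(0)
nfft, nseg = 128, 200
n = np.arange(nfft)
segs = []
for _ in range(nseg):
    p1, p2 = rng.uniform(0, 2 * np.pi, 2)
    segs.append(np.cos(2 * np.pi * 16 * n / nfft + p1)
                + np.cos(2 * np.pi * 8 * n / nfft + p2)
                + np.cos(2 * np.pi * 24 * n / nfft + p1 + p2))
B = np.abs(bispectrum(np.concatenate(segs), nfft))
i, j = np.unravel_index(np.argmax(B), B.shape)   # lands on the coupled pair
```

A neurogram bispectrum as described in the record would apply this kind of estimate to the 2-D model output rather than to a raw test signal.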

  6. Genetic pleiotropy explains associations between musical auditory discrimination and intelligence.

    Science.gov (United States)

    Mosing, Miriam A; Pedersen, Nancy L; Madison, Guy; Ullén, Fredrik

    2014-01-01

Musical aptitude is commonly measured using tasks that involve discrimination of different types of musical auditory stimuli. Performance on such discrimination tasks correlates positively across tasks and with intelligence. However, no study to date has explored these associations using a genetically informative sample to estimate the underlying genetic and environmental influences. In the present study, a large sample of Swedish twins (N = 10,500) was used to investigate the genetic architecture of the associations between intelligence and performance on three musical auditory discrimination tasks (rhythm, melody, and pitch). Phenotypic correlations between the tasks ranged between 0.23 and 0.42 (Pearson r values). Genetic modelling showed that the covariation between the variables could be explained by shared genetic influences. Neither shared nor non-shared environment had a significant effect on the associations. Good fit was obtained with a two-factor model in which one underlying shared genetic factor explained all the covariation between the musical discrimination tasks and IQ, and a second genetic factor explained variance exclusively shared among the discrimination tasks. The results suggest that positive correlations among musical aptitudes result from both genes with broad effects on cognition and genes with potentially more specific influences on auditory functions.

  7. Speech Auditory Alerts Promote Memory for Alerted Events in a Video-Simulated Self-Driving Car Ride.

    Science.gov (United States)

    Nees, Michael A; Helbein, Benji; Porter, Anna

    2016-05-01

Auditory displays could be essential to helping drivers maintain situation awareness in autonomous vehicles, but to date, few or no studies have examined the effectiveness of different types of auditory displays for this application scenario. Recent advances in the development of autonomous vehicles (i.e., self-driving cars) have suggested that widespread automation of driving may be tenable in the near future. Drivers may be required to monitor the status of automation programs and vehicle conditions as they engage in secondary leisure or work tasks (entertainment, communication, etc.) in autonomous vehicles. An experiment compared memory for alerted events (a component of Level 1 situation awareness) using speech alerts, auditory icons, and a visual control condition during a video-simulated self-driving car ride with a visual secondary task. The alerts gave information about the vehicle's operating status and the driving scenario. Speech alerts resulted in better memory for alerted events. Both auditory display types resulted in less perceived effort devoted toward the study tasks but also greater perceived annoyance with the alerts. Speech auditory displays promoted Level 1 situation awareness during a simulation of a ride in a self-driving vehicle under routine conditions, but annoyance remains a concern with auditory displays. Speech auditory displays showed promise as a means of increasing Level 1 situation awareness of routine scenarios during an autonomous vehicle ride with an unrelated secondary task. © 2016, Human Factors and Ergonomics Society.

  8. Common genetic influences on intelligence and auditory simple reaction time in a large Swedish sample

    NARCIS (Netherlands)

    Madison, G.; Mosing, M.A.; Verweij, K.J.H.; Pedersen, N.L.; Ullén, F.

    2016-01-01

    Intelligence and cognitive ability have long been associated with chronometric performance measures, such as reaction time (RT), but few studies have investigated auditory RT in this context. The nature of this relationship is important for understanding the etiology and structure of intelligence.

  9. CyberTORCS: An Intelligent Vehicles Simulation Platform for Cooperative Driving

    Directory of Open Access Journals (Sweden)

    Ming Yang

    2011-05-01

Full Text Available Simulation platforms play an important role in intelligent vehicle research, especially for cooperative driving, given the high cost and risk of real-world experiments. To make cooperative driving tests easier and more convenient, we introduce an intelligent vehicle simulation platform, called CyberTORCS, for research in cooperative driving. Details of the simulator modules are presented, including vehicle body control, vehicle visualization modeling, and track visualization modeling. Two simulation examples are given to validate the feasibility and effectiveness of the proposed simulation platform.

  10. Emotional Intelligence among Auditory, Reading, and Kinesthetic Learning Styles of Elementary School Students in Ambon-Indonesia

    Science.gov (United States)

    Leasa, Marleny; Corebima, Aloysius D.; Ibrohim; Suwono, Hadi

    2017-01-01

Students have unique ways of managing information in their learning process. VARK learning styles, which are associated with memory, are considered to have an effect on emotional intelligence. This quasi-experimental research was conducted to compare emotional intelligence among students having auditory, reading, and kinesthetic learning styles in…

  11. The Getting of Wisdom: Fluid Intelligence Does Not Drive Knowledge Acquisition

    Science.gov (United States)

    Christensen, Helen; Batterham, Philip J.; Mackinnon, Andrew J.

    2013-01-01

    The investment hypothesis proposes that fluid intelligence drives the accumulation of crystallized intelligence, such that crystallized intelligence increases more substantially in individuals with high rather than low fluid intelligence. However, most investigations have been conducted on adolescent cohorts or in two-wave data sets. There are few…

  12. Comparison on driving fatigue related hemodynamics activated by auditory and visual stimulus

    Science.gov (United States)

    Deng, Zishan; Gao, Yuan; Li, Ting

    2018-02-01

As one of the main causes of traffic accidents, driving fatigue deserves researchers' attention, and its detection and monitoring during long-term driving require new techniques. Since functional near-infrared spectroscopy (fNIRS) can detect cerebral hemodynamic responses, it is a promising candidate for fatigue-level detection. Here, we performed three different kinds of experiments on a driver and recorded his cerebral hemodynamic responses during long hours of driving using our fNIRS-based device. Each experiment lasted 7 hours, and one of three specific tests, measuring the driver's response to sounds, traffic lights, and direction signs, respectively, was administered every hour. The results showed that, in the first few hours, visual stimuli induced fatigue more readily than auditory stimuli, and that visual stimuli from traffic-light scenes induced fatigue more readily than visual stimuli from direction signs. We also found that fatigue-related hemodynamics increased fastest for auditory stimuli, next for traffic-light scenes, and slowest for direction-sign scenes. Our study compared auditory, visual color, and visual character stimuli in their sensitivity to causing driving fatigue, which is meaningful for driving safety management.

  13. Predicting Academic Success: General Intelligence, "Big Five" Personality Traits, and Work Drive

    Science.gov (United States)

    Ridgell, Susan D.; Lounsbury, John W.

    2004-01-01

    General intelligence, Big Five personality traits, and the construct Work Drive were studied in relation to two measures of collegiate academic performance: a single course grade received by undergraduate students in an introductory psychology course, and self-reported GPA. General intelligence and Work Drive were found to be significantly…

  14. Intelligence and P3 Components of the Event-Related Potential Elicited during an Auditory Discrimination Task with Masking

    Science.gov (United States)

    De Pascalis, V.; Varriale, V.; Matteoli, A.

    2008-01-01

    The relationship between fluid intelligence (indexed by scores on Raven Progressive Matrices) and auditory discrimination ability was examined by recording event-related potentials from 48 women during the performance of an auditory oddball task with backward masking. High ability (HA) subjects exhibited shorter response times, greater response…

  15. Relations Between the Intelligibility of Speech in Noise and Psychophysical Measures of Hearing Measured in Four Languages Using the Auditory Profile Test Battery

    Directory of Open Access Journals (Sweden)

    T. E. M. Van Esch

    2015-12-01

Full Text Available The aim of the present study was to determine the relations between the intelligibility of speech in noise and measures of auditory resolution, loudness recruitment, and cognitive function. The analyses were based on data published earlier as part of the presentation of the Auditory Profile, a test battery implemented in four languages. Tests of speech intelligibility, resolution, loudness recruitment, and lexical decision making were administered using headphones in five centers in Germany, the Netherlands, Sweden, and the United Kingdom. Correlations and stepwise linear regression models were calculated. In sum, 72 hearing-impaired listeners aged 22 to 91 years with a broad range of hearing losses were included in the study. Several significant correlations were found with the intelligibility of speech in noise. Stepwise linear regression analyses showed that pure-tone average, age, spectral and temporal resolution, and loudness recruitment were significant predictors of the intelligibility of speech in fluctuating noise. Complex interrelationships between auditory factors and the intelligibility of speech in noise were revealed using the Auditory Profile data set in four languages. After taking into account the effects of pure-tone average and age, spectral and temporal resolution and loudness recruitment had added value in the prediction of variation among listeners with respect to the intelligibility of speech in noise. The results of the lexical decision making test were not related to the intelligibility of speech in noise in the population studied.

  16. Motivation and intelligence drive auditory perceptual learning.

    Science.gov (United States)

    Amitay, Sygal; Halliday, Lorna; Taylor, Jenny; Sohoglu, Ediz; Moore, David R

    2010-03-23

Although feedback on performance is generally thought to promote perceptual learning, the role and necessity of feedback remain unclear. We investigated the effect of providing varying amounts of positive feedback on the learning of frequency discrimination while listeners attempted to discriminate between three identical tones. With this novel procedure, the feedback was meaningless and random with respect to the listeners' responses, but the amount of feedback provided (or the lack thereof) affected learning. We found that a group of listeners who received positive feedback on 10% of the trials improved their performance on the task (learned), while groups provided either with excess (90%) feedback or with no feedback did not learn. Superimposed on these group data, however, individual listeners showed other systematic changes of performance. In particular, those with lower non-verbal IQ who trained in the no-feedback condition performed more poorly after training. This pattern of results cannot be accounted for by learning models that ascribe an external-teacher role to feedback. We suggest, instead, that feedback is used to monitor performance on the task in relation to its perceived difficulty, and that listeners who learn without the benefit of feedback are adept at self-monitoring of performance, a trait that also supports better performance on non-verbal IQ tests. These results show that 'perceptual' learning is strongly influenced by top-down processes of motivation and intelligence.

  17. A Review of Intelligent Driving Style Analysis Systems and Related Artificial Intelligence Algorithms.

    Science.gov (United States)

    Meiring, Gys Albertus Marthinus; Myburgh, Hermanus Carel

    2015-12-04

In this paper the various driving style analysis solutions are investigated. An in-depth investigation is performed to identify the relevant machine learning and artificial intelligence algorithms utilised in current driver behaviour and driving style analysis systems. This review therefore serves as a trove of information, and will inform the specialist and the student regarding the current state of the art in driving style analysis systems, the application of these systems, and the underlying artificial intelligence algorithms applied in these applications. The aim of the investigation is to evaluate the possibilities for unique driver identification utilizing the approaches identified in other driver behaviour studies. It was found that Fuzzy Logic inference systems, Hidden Markov Models, and Support Vector Machines offer promising capabilities for unique driver identification if model complexity can be reduced.
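Of the algorithm families this review names, a Support Vector Machine over per-trip driving-style features is the most direct to prototype. The sketch below is illustrative only: the feature set, the three synthetic "drivers", and all parameter values are assumptions for the demo, not taken from the review; scikit-learn is used for the SVM.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical per-trip feature vectors for 3 drivers:
# [mean speed (km/h), speed variance, mean |longitudinal accel|, mean |jerk|]
centers = np.array([[60.0, 20.0, 0.8, 0.3],
                    [80.0, 45.0, 1.6, 0.9],
                    [50.0, 10.0, 0.5, 0.2]])
X = np.vstack([c + rng.normal(0.0, [3.0, 3.0, 0.1, 0.05], size=(100, 4))
               for c in centers])
y = np.repeat([0, 1, 2], 100)            # driver identity labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Standardize features, then fit an RBF-kernel SVM driver classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)              # held-out identification accuracy
```

On real telemetry the features would come from CAN-bus or smartphone sensors and the classes would rarely be this well separated; the pipeline shape, however, matches the driver-identification task the review evaluates.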

  18. A Review of Intelligent Driving Style Analysis Systems and Related Artificial Intelligence Algorithms

    Directory of Open Access Journals (Sweden)

    Gys Albertus Marthinus Meiring

    2015-12-01

Full Text Available In this paper the various driving style analysis solutions are investigated. An in-depth investigation is performed to identify the relevant machine learning and artificial intelligence algorithms utilised in current driver behaviour and driving style analysis systems. This review therefore serves as a trove of information, and will inform the specialist and the student regarding the current state of the art in driving style analysis systems, the application of these systems, and the underlying artificial intelligence algorithms applied in these applications. The aim of the investigation is to evaluate the possibilities for unique driver identification utilizing the approaches identified in other driver behaviour studies. It was found that Fuzzy Logic inference systems, Hidden Markov Models, and Support Vector Machines offer promising capabilities for unique driver identification if model complexity can be reduced.

  19. Effects of age and auditory and visual dual tasks on closed-road driving performance.

    Science.gov (United States)

    Chaparro, Alex; Wood, Joanne M; Carberry, Trent

    2005-08-01

This study investigated how the driving performance of younger and older participants is affected by visual and auditory secondary tasks on a closed driving course. Twenty-eight participants in two age groups (younger, mean age = 27.3 years; older, mean age = 69.2 years) drove around a 5.1-km closed-road circuit under both single and dual task conditions. Measures of driving performance included detection and identification of road signs, detection and avoidance of large low-contrast road hazards, gap judgment, lane keeping, and time to complete the course. The dual task required participants to verbally report the sums of pairs of single-digit numbers presented through either a computer speaker (auditorily) or a dashboard-mounted monitor (visually) while driving. Participants also completed a vision and cognitive screening battery, including LogMAR visual acuity, Pelli-Robson letter contrast sensitivity, the Trails test, and the Digit Symbol Substitution (DSS) test. Drivers reported significantly fewer signs, hit more road hazards, misjudged more gaps, and took longer to complete the course under the dual task (visual and auditory) conditions compared with the single task condition. The older participants also reported significantly fewer road signs and drove significantly more slowly than the younger participants, and this was exacerbated in the visual dual task condition. Regression analysis revealed that cognitive aging (measured by the DSS and Trails test), rather than chronological age, was a better predictor of the declines in driving performance under dual task conditions. An overall z score was calculated that took into account both driving and secondary task (summing) performance under the two dual task conditions. Performance was significantly worse for the auditory dual task than for the visual dual task, and the older participants performed significantly worse than the younger participants. These findings demonstrate

  20. On the relationship between auditory cognition and speech intelligibility in cochlear implant users: An ERP study.

    Science.gov (United States)

    Finke, Mareike; Büchner, Andreas; Ruigendijk, Esther; Meyer, Martin; Sandmann, Pascale

    2016-07-01

There is a high degree of variability in speech intelligibility outcomes across cochlear-implant (CI) users. To better understand how auditory cognition affects speech intelligibility with the CI, we performed an electroencephalography study in which we examined the relationship between central auditory processing, cognitive abilities, and speech intelligibility. Postlingually deafened CI users (N=13) and matched normal-hearing (NH) listeners (N=13) performed an oddball task with words presented in different background conditions (quiet, stationary noise, modulated noise). Participants had to categorize words as living (targets) or non-living entities (standards). We also assessed participants' working memory (WM) capacity and verbal abilities. For the oddball task, we found lower hit rates and prolonged response times in CI users when compared with NH listeners. Noise-related prolongation of the N1 amplitude was found for all participants. Further, we observed group-specific modulation effects of event-related potentials (ERPs) as a function of background noise. While NH listeners showed stronger noise-related modulation of the N1 latency, CI users revealed enhanced modulation effects of the N2/N4 latency. In general, higher-order processing (N2/N4, P3) was prolonged in CI users in all background conditions when compared with NH listeners. The longer N2/N4 latency in CI users suggests that these individuals have difficulties mapping acoustic-phonetic features onto lexical representations. These difficulties seem to be increased in speech-in-noise conditions compared with speech in a quiet background. Correlation analyses showed that shorter ERP latencies were related to enhanced speech intelligibility (N1, N2/N4), better lexical fluency (N1), and lower ratings of listening effort (N2/N4) in CI users. In sum, our findings suggest that CI users and NH listeners differ with regard to both the sensory and the higher-order processing of speech in quiet as well as in

  1. The role of auditory spectro-temporal modulation filtering and the decision metric for speech intelligibility prediction

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; Jørgensen, Søren; Dau, Torsten

    2014-01-01

Speech intelligibility models typically consist of a preprocessing part that transforms stimuli into some internal (auditory) representation and a decision metric that relates the internal representation to speech intelligibility. The present study analyzed the role of modulation filtering ... in the preprocessing of different speech intelligibility models by comparing predictions from models that either assume a spectro-temporal (i.e., two-dimensional) or a temporal-only (i.e., one-dimensional) modulation filterbank. Furthermore, the role of the decision metric for speech intelligibility was investigated ... subtraction. The results suggested that a decision metric based on the SNRenv may provide a more general basis for predicting speech intelligibility than a metric based on the MTF. Moreover, the one-dimensional modulation filtering process was found to be sufficient to account for the data when combined ...
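The SNRenv metric referred to above compares envelope power of the noisy speech with that of the noise alone inside a modulation filterbank. As a hedged illustration of the one-dimensional (temporal-only) modulation analysis this record contrasts with the 2-D case, the sketch below extracts a Hilbert envelope and measures envelope power in a one-octave modulation band, normalized by the DC envelope power. All signal parameters are invented for the demo; this is not the published sEPSM implementation.

```python
import numpy as np

def hilbert_envelope(x):
    """Magnitude of the analytic signal, computed via the FFT
    (zero the negative frequencies, double the positive ones)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0          # n is even in this demo
    return np.abs(np.fft.ifft(X * h))

def envelope_power(x, fs, fm):
    """Envelope power in a 1-octave modulation band centred on fm (Hz),
    normalized by the DC (mean-envelope) power."""
    env = hilbert_envelope(x)
    E = np.fft.rfft(env)
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    band = (freqs >= fm / np.sqrt(2)) & (freqs < fm * np.sqrt(2))
    return np.sum(np.abs(E[band]) ** 2) / np.abs(E[0]) ** 2

fs = 8000
t = np.arange(fs) / fs                       # 1 s of signal
rng = np.random.default_rng(1)
carrier = rng.normal(size=fs)                # broadband noise carrier
x = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * carrier   # 4 Hz envelope

p4 = envelope_power(x, fs, 4.0)    # band containing the 4 Hz modulation
p32 = envelope_power(x, fs, 32.0)  # off-band: only the envelope noise floor
```

An SNRenv-style metric would compute such band powers for noisy speech and for the noise alone, in each modulation band, and combine the resulting ratios across bands.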

  2. Auditory Perspective Taking

    National Research Council Canada - National Science Library

    Martinson, Eric; Brock, Derek

    2006-01-01

.... From this knowledge of another's auditory perspective, a conversational partner can then adapt his or her auditory output to overcome a variety of environmental challenges and ensure that what is said is intelligible...

  3. The simulation of emergent dispatch of cars for intelligent driving autos

    Science.gov (United States)

    Zheng, Ziao

    2018-03-01

It is widely acknowledged that broad acceptance by car users is important for the development of intelligent cars. While most intelligent cars have systems that monitor whether the car is in good condition to drive, it is also clear that studies are needed on how rescue vehicles should reach intelligent vehicles in emergencies. This study focuses mainly on a separate dispatch system that lets car-care teams arrive as soon as they receive the signal sent out by an intelligent driving auto. The simulation measures the time for the rescue team to arrive, the cost of reaching the site where the car problem occurs, and the length of the queue while the rescue vehicle waits to cross a road. This can be of great use when one car in a fleet of intelligent cars suddenly has a problem that stops it from moving, and it can be helpful in other situations as well. In this way, the interconnection of cars can serve as a safety net for drivers who encounter difficulties at any time.

  4. Age differences in visual-auditory self-motion perception during a simulated driving task

    Directory of Open Access Journals (Sweden)

    Robert eRamkhalawansingh

    2016-04-01

Full Text Available Recent evidence suggests that visual-auditory cue integration may change as a function of age, such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence suggesting that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.

  5. Relations Between the Intelligibility of Speech in Noise and Psychophysical Measures of Hearing Measured in Four Languages Using the Auditory Profile Test Battery

    NARCIS (Netherlands)

    van Esch, T. E. M.; Dreschler, W. A.

    2015-01-01

    The aim of the present study was to determine the relations between the intelligibility of speech in noise and measures of auditory resolution, loudness recruitment, and cognitive function. The analyses were based on data published earlier as part of the presentation of the Auditory Profile, a test

  6. Trading Control Intelligence for Physical Intelligence: Muscle Drives in Evolved Virtual Creatures

    DEFF Research Database (Denmark)

    Lessin, Dan; Fussell, Don; Miikkulainen, Risto

    2014-01-01

Traditional evolved virtual creatures [12] are actuated using unevolved, uniform, invisible drives at joints between rigid segments. In contrast, this paper shows how such conventional actuators can be replaced by evolvable muscle drives that are a part of the creature's physical structure. ... This design is important for two reasons: First, the control intelligence is made visible in the purposeful development of muscle density, orientation, attachment points, and size. Second, the complexity that needs to be evolved for the brain to control the actuators is reduced, and in some cases can ... be essentially eliminated, thus freeing brain power for higher-level functions. Such designs may thus make it possible to create more complex behavior than would otherwise be achievable. ...

  7. A novel 9-class auditory ERP paradigm driving a predictive text entry system

    Directory of Open Access Journals (Sweden)

    Johannes eHöhne

    2011-08-01

Full Text Available Brain-Computer Interfaces (BCIs) based on Event-Related Potentials (ERPs) strive to offer communication pathways that are independent of muscle activity. While most visual ERP-based BCI paradigms require good control of the user's gaze direction, auditory BCI paradigms overcome this restriction. The present work proposes a novel approach using Auditory Evoked Potentials (AEPs) for the example of a multiclass text spelling application. To control the ERP speller, BCI users focus their attention on two-dimensional auditory stimuli that vary in both pitch (high/medium/low) and direction (left/middle/right) and that are presented via headphones. The resulting nine different control signals are exploited to drive a predictive text entry system. It enables the user to spell a letter by a single 9-class decision plus two additional decisions to confirm a spelled word. This paradigm, called PASS2D, was investigated in an online study with twelve healthy participants. Users spelled more than 0.8 characters per minute on average (3.4 bits per minute), which makes PASS2D a competitive method. It could enrich the toolbox of existing ERP paradigms for BCI end users such as late-stage ALS patients.
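The "bits per minute" figure in this record follows from the standard Wolpaw information-transfer-rate formula applied to N-class selections. A small sketch of that formula (the 90% accuracy and the selection rate below are made-up example values, not taken from the study):

```python
import math

def bits_per_selection(n_classes, accuracy):
    """Wolpaw information transfer rate per selection (bits):
    B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1))."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        return math.log2(n)      # error term vanishes at perfect accuracy
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# A perfect 9-class selection carries log2(9) ~= 3.17 bits; classification
# errors and the two confirmation decisions reduce the effective rate.
b = bits_per_selection(9, 0.90)   # hypothetical 90% single-trial accuracy
itr = b * 2.0                     # hypothetical 2 selections per minute
```

The reported 3.4 bits per minute would correspond to the actual selection accuracy and pace observed in the online study.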

  8. Factors Driving Business Intelligence Culture

    Directory of Open Access Journals (Sweden)

    Rimvydas Skyrius

    2016-05-01

    Full Text Available The field of business intelligence (BI), despite rapid technology advances, continues to feature inadequate levels of adoption. The attention of researchers is shifting towards human factors of BI adoption. The wide set of human factors influencing BI adoption contains elements of what we call BI culture – an overarching concept covering key managerial issues that come up in BI implementation. Research sources provide different sets of features pertaining to BI culture or related concepts – decision-making culture, analytical culture and others. The goal of this paper is to perform a review of research and practical sources to examine driving forces of BI – data-driven approaches, BI agility, maturity and acceptance – to point out culture-related issues that support BI adoption and to suggest an emerging set of factors influencing BI culture.

  9. Blocking-out auditory distracters while driving : A cognitive strategy to reduce task-demands on the road

    NARCIS (Netherlands)

    Unal, Ayca Berfu; Platteel, Samantha; Steg, Linda; Epstude, Kai

    The current research examined how drivers handle task-demands induced by listening to the radio while driving. In particular, we explored the traces of a possible cognitive strategy that might be used by drivers to cope with task-demands, namely blocking-out auditory distracters. In Study 1 (N =

  10. Advanced and intelligent control in power electronics and drives

    CERN Document Server

    Blaabjerg, Frede; Rodríguez, José

    2014-01-01

    Power electronics and variable frequency drives are continuously developing multidisciplinary fields in electrical engineering, and it is practically impossible for one individual specialist to write a book covering the entire area, especially taking into account the recent fast development in neighboring fields like control theory, computational intelligence and signal processing, which all strongly influence new solutions in the control of power electronics and drives. Therefore, this book is written by key specialists working on modern advanced control methods which penetrate current implementations of power converters and drives. Although some of the presented methods are still not adopted by industry, they create new solutions with high further research and application potential. The material of the book is presented in the following three parts: Part I: Advanced Power Electronic Control in Renewable Energy Sources (Chapters 1-4), Part II: Predictive Control of Power Converters and D...

  11. Transient and sustained cortical activity elicited by connected speech of varying intelligibility

    Directory of Open Access Journals (Sweden)

    Tiitinen Hannu

    2012-12-01

    Full Text Available Abstract Background The robustness of speech perception in the face of acoustic variation is founded on the ability of the auditory system to integrate the acoustic features of speech and to segregate them from background noise. This auditory scene analysis process is facilitated by top-down mechanisms, such as recognition memory for speech content. However, the cortical processes underlying these facilitatory mechanisms remain unclear. The present magnetoencephalography (MEG) study examined how the activity of auditory cortical areas is modulated by acoustic degradation and intelligibility of connected speech. The experimental design allowed for the comparison of cortical activity patterns elicited by acoustically identical stimuli which were perceived as either intelligible or unintelligible. Results In the experiment, a set of sentences was presented to the subject in distorted, undistorted, and again in distorted form. The intervening exposure to undistorted versions of sentences rendered the initially unintelligible, distorted sentences intelligible, as evidenced by an increase from 30% to 80% in the proportion of sentences reported as intelligible. These perceptual changes were reflected in the activity of the auditory cortex, with the auditory N1m response (~100 ms) being more prominent for the distorted stimuli than for the intact ones. In the time range of the auditory P2m response (>200 ms), auditory cortex as well as regions anterior and posterior to this area generated a stronger response to sentences which were intelligible than unintelligible. During the sustained field (>300 ms), stronger activity was elicited by degraded stimuli in auditory cortex and by intelligible sentences in areas posterior to auditory cortex. Conclusions The current findings suggest that the auditory system comprises bottom-up and top-down processes which are reflected in transient and sustained brain activity. It appears that analysis of acoustic features occurs

  12. Driving the brain towards creativity and intelligence: A network control theory analysis.

    Science.gov (United States)

    Kenett, Yoed N; Medaglia, John D; Beaty, Roger E; Chen, Qunlin; Betzel, Richard F; Thompson-Schill, Sharon L; Qiu, Jiang

    2018-01-04

    High-level cognitive constructs, such as creativity and intelligence, entail complex and multiple processes, including cognitive control processes. Recent neurocognitive research on these constructs highlights the importance of dynamic interaction across neural network systems and the role of cognitive control processes in guiding such a dynamic interaction. How can we quantitatively examine the extent and ways in which cognitive control contributes to creativity and intelligence? To address this question, we apply a computational network control theory (NCT) approach to structural brain imaging data acquired via diffusion tensor imaging in a large sample of participants, to examine how NCT relates to individual differences in distinct measures of creative ability and intelligence. Recent application of this theory at the neural level is built on a model of brain dynamics, which mathematically models patterns of inter-region activity propagated along the structure of an underlying network. The strength of this approach is its ability to characterize the potential role of each brain region in regulating whole-brain network function based on its anatomical fingerprint and a simplified model of node dynamics. We find that intelligence is related to the ability to "drive" the brain system into easy-to-reach neural states by the right inferior parietal lobe and lower integration abilities in the left retrosplenial cortex. We also find that creativity is related to the ability to "drive" the brain system into difficult-to-reach states by the right dorsolateral prefrontal cortex (inferior frontal junction) and higher integration abilities in sensorimotor areas. Furthermore, we found that different facets of creativity (fluency, flexibility, and originality) relate to generally similar but not identical network controllability processes. We relate our findings to general theories on intelligence and creativity. Copyright © 2018 Elsevier Ltd. All rights reserved.
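The notion of "driving" the brain into easy- or difficult-to-reach states is commonly operationalized as average controllability of a linear model x(t+1) = A x(t) + B u(t) built on the structural network: the trace of the controllability Gramian with control injected at one node. The sketch below is a generic illustration of that metric, not the study's exact pipeline; the stabilizing normalization and the toy network are assumptions:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def average_controllability(adjacency: np.ndarray) -> np.ndarray:
    """Per-node average controllability for x(t+1) = A x(t) + B u(t):
    trace of the infinite-horizon controllability Gramian with the
    control input B injected at a single node."""
    # Stabilizing normalization (assumption): scale by 1 + largest
    # singular value so every mode of A decays.
    a = adjacency / (1.0 + np.linalg.svd(adjacency, compute_uv=False)[0])
    n = a.shape[0]
    scores = np.empty(n)
    for i in range(n):
        b = np.zeros((n, 1))
        b[i, 0] = 1.0  # control energy enters only at node i
        # The Gramian W satisfies W = A W A^T + B B^T
        gramian = solve_discrete_lyapunov(a, b @ b.T)
        scores[i] = np.trace(gramian)
    return scores

# Toy 3-node weighted network (hypothetical)
adj = np.array([[0.0, 1.0, 0.5],
                [1.0, 0.0, 0.2],
                [0.5, 0.2, 0.0]])
print(np.round(average_controllability(adj), 3))
```

Higher trace values correspond to nodes from which many states are reachable with little input energy, the sense in which a region can "drive" the system.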

  13. The comparing analysis of simulation of emergent dispatch of cars for intelligent driving autos in crossroads

    Science.gov (United States)

    Zheng, Ziao

    2018-03-01

    It is widely acknowledged that broad acceptance by the majority of car users is important for the development of intelligent cars. While most intelligent cars have a system for monitoring whether the vehicle is in good condition to drive, it is also clear that studies are needed on how rescue vehicles should reach intelligent vehicles in an emergency. In this study, the author focuses mainly on deriving a separate dispatch system that allows car-care teams to arrive as soon as they receive the signal sent out by an intelligent driving auto. The simulation measures the time for the rescue team to arrive, the cost spent on reaching the site where the car problem occurred, and the length of the queue while the rescue vehicle waits to cross a road. This can be of great use when one car in a team of intelligent cars suddenly develops a problem that stops it from moving, and it can be helpful in other situations as well. In this way, the interconnection of cars can serve as a safety net for drivers encountering difficulties at any time.

  14. Auditory and Non-Auditory Contributions for Unaided Speech Recognition in Noise as a Function of Hearing Aid Use.

    Science.gov (United States)

    Gieseler, Anja; Tahden, Maike A S; Thiel, Christiane M; Wagener, Kirsten C; Meis, Markus; Colonius, Hans

    2017-01-01

    Differences in understanding speech in noise among hearing-impaired individuals cannot be explained entirely by hearing thresholds alone, suggesting the contribution of other factors beyond standard auditory ones as derived from the audiogram. This paper reports two analyses addressing individual differences in the explanation of unaided speech-in-noise performance among n = 438 elderly hearing-impaired listeners (mean = 71.1 ± 5.8 years). The main analysis was designed to identify clinically relevant auditory and non-auditory measures for speech-in-noise prediction using auditory (audiogram, categorical loudness scaling) and cognitive tests (verbal-intelligence test, screening test of dementia), as well as questionnaires assessing various self-reported measures (health status, socio-economic status, and subjective hearing problems). Using stepwise linear regression analysis, 62% of the variance in unaided speech-in-noise performance was explained, with Pure-tone average (PTA), Age, and Verbal intelligence emerging as the three most important predictors. In the complementary analysis, those individuals with the same hearing loss profile were separated into hearing aid users (HAU) and non-users (NU), and were then compared regarding potential differences in the test measures and in explaining unaided speech-in-noise recognition. The groupwise comparisons revealed significant differences in auditory measures and self-reported subjective hearing problems, while no differences in the cognitive domain were found. Furthermore, groupwise regression analyses revealed that Verbal intelligence had predictive value in both groups, whereas Age and PTA only emerged as significant in the group of hearing aid non-users.
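The stepwise procedure behind the reported 62% explained variance can be illustrated with a greedy forward-selection loop over candidate predictors. The synthetic data, effect sizes, and predictor names below are invented for the demonstration, not taken from the study:

```python
import numpy as np

def forward_stepwise(X, y, names, max_terms=3):
    """Greedy forward selection: repeatedly add the predictor that most
    increases the R^2 of an ordinary-least-squares fit (a simplified
    stand-in for a full stepwise procedure with entry/exit criteria)."""
    def r_squared(cols):
        design = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        resid = y - design @ beta
        total = y - y.mean()
        return 1.0 - (resid @ resid) / (total @ total)
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_terms:
        best = max(remaining, key=lambda c: r_squared(selected + [c]))
        selected.append(best)
        remaining.remove(best)
    return [names[c] for c in selected], r_squared(selected)

# Invented data: speech-in-noise score dominated by PTA, then age
rng = np.random.default_rng(0)
pta, age, verbal_iq = rng.normal(size=(3, 200))
score = 0.6 * pta + 0.3 * age + 0.1 * verbal_iq + 0.2 * rng.normal(size=200)
order, var_explained = forward_stepwise(
    np.column_stack([pta, age, verbal_iq]), score, ["PTA", "Age", "VerbalIQ"])
print(order, round(var_explained, 2))
```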

  15. Intelligent Adjustment of Printhead Driving Waveform Parameters for 3D Electronic Printing

    Directory of Open Access Journals (Sweden)

    Lin Na

    2017-01-01

    Full Text Available In practical applications of 3D electronic printing, a major challenge is to adjust the printhead for high print resolution and accuracy. However, an exhausting manual selection process inevitably wastes a lot of time. Therefore, in this paper, we propose a new intelligent adjustment method, which adopts the artificial bee colony algorithm to optimize the printhead driving waveform parameters to reach the desired printhead state. Experimental results show that this method can quickly and accurately find a suitable combination of driving waveform parameters to meet application needs.
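The artificial bee colony (ABC) loop can be sketched generically: employed/onlooker bees perturb candidate parameter vectors toward or away from random neighbors, and scouts re-seed sources that stop improving. The waveform parameters (voltage, dwell time, echo time), their bounds, and the quadratic quality objective below are hypothetical stand-ins for a real droplet-quality measurement:

```python
import random

def abc_minimize(objective, bounds, n_bees=20, limit=10, iters=100, seed=1):
    """Minimal artificial bee colony sketch. Employed/onlooker phases are
    merged into one greedy perturbation step per bee; scouts replace food
    sources that fail to improve for `limit` consecutive trials."""
    rng = random.Random(seed)
    dim = len(bounds)
    rand_source = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    sources = [rand_source() for _ in range(n_bees)]
    fits = [objective(s) for s in sources]
    trials = [0] * n_bees
    for _ in range(iters):
        for i in range(n_bees):
            j = rng.randrange(dim)                 # coordinate to perturb
            k = rng.choice([m for m in range(n_bees) if m != i])
            cand = sources[i][:]
            cand[j] += rng.uniform(-1, 1) * (sources[i][j] - sources[k][j])
            lo, hi = bounds[j]
            cand[j] = min(max(cand[j], lo), hi)    # clamp to bounds
            f = objective(cand)
            if f < fits[i]:
                sources[i], fits[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
            if trials[i] > limit:                  # scout: abandon stale source
                sources[i] = rand_source()
                fits[i] = objective(sources[i])
                trials[i] = 0
    best = min(range(n_bees), key=fits.__getitem__)
    return sources[best], fits[best]

# Hypothetical drive-waveform objective: squared deviation of
# (voltage, dwell, echo) from an ideal jetting point.
target = (25.0, 3.0, 4.5)
obj = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
params, err = abc_minimize(obj, [(10, 40), (1, 10), (1, 10)])
print([round(v, 1) for v in params], round(err, 4))
```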

  16. The Contribution of Auditory and Cognitive Factors to Intelligibility of Words and Sentences in Noise.

    Science.gov (United States)

    Heinrich, Antje; Knight, Sarah

    2016-01-01

    Understanding the causes for speech-in-noise (SiN) perception difficulties is complex, and is made even more difficult by the fact that listening situations can vary widely in target and background sounds. While there is general agreement that both auditory and cognitive factors are important, their exact relationship to SiN perception across various listening situations remains unclear. This study manipulated the characteristics of the listening situation in two ways: first, target stimuli were either isolated words, or words heard in the context of low- (LP) and high-predictability (HP) sentences; second, the background sound, speech-modulated noise, was presented at two signal-to-noise ratios. Speech intelligibility was measured for 30 older listeners (aged 62-84) with age-normal hearing and related to individual differences in cognition (working memory, inhibition and linguistic skills) and hearing (PTA(0.25-8 kHz) and temporal processing). The results showed that while the effect of hearing thresholds on intelligibility was rather uniform, the influence of cognitive abilities was more specific to a certain listening situation. By revealing a complex picture of relationships between intelligibility and cognition, these results may help us understand some of the inconsistencies in the literature as regards cognitive contributions to speech perception.

  17. Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina; Parbery-Clark, Alexandra; Ashley, Richard

    2010-03-01

    A growing body of research suggests that cognitive functions, such as attention and memory, drive perception by tuning sensory mechanisms to relevant acoustic features. Long-term musical experience also modulates lower-level auditory function, although the mechanisms by which this occurs remain uncertain. In order to tease apart the mechanisms that drive perceptual enhancements in musicians, we posed the question: do well-developed cognitive abilities fine-tune auditory perception in a top-down fashion? We administered a standardized battery of perceptual and cognitive tests to adult musicians and non-musicians, including tasks either more or less susceptible to cognitive control (e.g., backward versus simultaneous masking) and more or less dependent on auditory or visual processing (e.g., auditory versus visual attention). Outcomes indicate lower perceptual thresholds in musicians specifically for auditory tasks that relate with cognitive abilities, such as backward masking and auditory attention. These enhancements were observed in the absence of group differences for the simultaneous masking and visual attention tasks. Our results suggest that long-term musical practice strengthens cognitive functions and that these functions benefit auditory skills. Musical training bolsters higher-level mechanisms that, when impaired, relate to language and literacy deficits. Thus, musical training may serve to lessen the impact of these deficits by strengthening the corticofugal system for hearing. 2009 Elsevier B.V. All rights reserved.

  18. The role of across-frequency envelope processing for speech intelligibility

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; Jørgensen, Søren; Dau, Torsten

    2013-01-01

    Speech intelligibility models consist of a preprocessing part that transforms the stimuli into some internal (auditory) representation, and a decision metric that quantifies effects of transmission channel, speech interferers, and auditory processing on the speech intelligibility. Here, two recent...... speech intelligibility models, the spectro-temporal modulation index [STMI; Elhilali et al. (2003)] and the speech-based envelope power spectrum model [sEPSM; Jørgensen and Dau (2011)] were evaluated in conditions of noisy speech subjected to reverberation, and to nonlinear distortions through either...
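The decision metric of the sEPSM is the envelope power signal-to-noise ratio (SNRenv), which compares the modulation power of the noisy speech with that of the noise alone. A heavily simplified one-band sketch (no peripheral filterbank, no modulation filterbank, synthetic signals throughout):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_power(x):
    """AC power of the Hilbert envelope, normalized by its mean -- a
    one-band, no-filterbank simplification of the sEPSM front end."""
    env = np.abs(hilbert(x))
    env = env / env.mean() - 1.0  # remove DC before taking power
    return float(np.mean(env ** 2))

def snr_env_db(mixture, noise):
    """Envelope-power SNR (dB): excess envelope power of the noisy
    speech over the noise alone, floored to keep the log defined."""
    p_mix, p_noise = envelope_power(mixture), envelope_power(noise)
    return 10 * np.log10(max(p_mix - p_noise, 1e-3) / p_noise)

# Synthetic demo: a 4-Hz-modulated noise carrier stands in for speech
fs = 16000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
noise = rng.normal(size=fs)
speech = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * rng.normal(size=fs)
print(round(snr_env_db(speech + noise, noise), 1))
```

In the full model the same ratio is computed per audio channel and per modulation channel and then combined before being mapped to intelligibility.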

  20. Intelligent Method for Identifying Driving Risk Based on V2V Multisource Big Data

    Directory of Open Access Journals (Sweden)

    Jinshuan Peng

    2018-01-01

    Full Text Available Risky driving behavior is a major cause of traffic conflicts, which can develop into road traffic accidents, making the timely and accurate identification of such behavior essential to road safety. A platform was therefore established for analyzing the driving behavior of 20 professional drivers in field tests, in which over-close car following and lane departure were used as typical risky driving behaviors. Characterization parameters for identification were screened and used to determine threshold values and an appropriate time window for identification. A neural network-Bayesian filter identification model was established and data samples were selected to identify risky driving behavior and evaluate the identification efficiency of the model. The results obtained indicated a successful identification rate of 83.6% when the neural network model was used alone to identify risky driving behavior, but this could be increased to 92.46% once corrected by the Bayesian filter. This has important theoretical and practical significance in relation to evaluating the efficiency of existing driver assist systems, as well as the development of future intelligent driving systems.
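The Bayesian-filter correction stage can be illustrated as a recursive posterior update over noisy per-window classifier decisions. The hit rate below reuses the paper's 83.6% raw-model figure; the false-alarm rate, prior, and observation sequence are assumptions for illustration:

```python
def bayes_filter(classifier_outputs, p_risky=0.5, hit=0.836, false_alarm=0.1):
    """Smooth frame-by-frame binary classifier decisions with a recursive
    Bayes update. `hit` is the assumed per-window detection rate of the
    network; `false_alarm` is an assumed rate of spurious detections."""
    posterior = p_risky
    trace = []
    for detected in classifier_outputs:
        # Likelihood of this observation under "risky" vs. "safe"
        like_risky = hit if detected else 1 - hit
        like_safe = false_alarm if detected else 1 - false_alarm
        num = like_risky * posterior
        posterior = num / (num + like_safe * (1 - posterior))
        trace.append(posterior)
    return trace

# Mostly-positive detections drive the posterior toward "risky",
# while a single miss only dents it rather than flipping the decision
print([round(p, 3) for p in bayes_filter([1, 1, 0, 1, 1])])
```

Accumulating evidence across windows in this way is one reason a filter stage can lift a raw per-window accuracy toward the corrected figure reported.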

  1. Driver's various information process and multi-ruled decision-making mechanism: a fundamental of intelligent driving shaping model

    Directory of Open Access Journals (Sweden)

    Wuhong Wang

    2011-05-01

    Full Text Available The most difficult but important problem in advanced driver assistance system development is how to measure and model the behavioral response of drivers, focusing on the cognition process. This paper describes driver's deceleration and acceleration behavior based on driving situation awareness in the car-following process, and then presents several driving models for analysis of driver's safety approaching behavior in traffic operation. The emphasis of our work is placed on the research of driver's various information processing and multi-ruled decision-making mechanism by considering the complicated control process of driving; the results will be able to provide a theoretical basis for an intelligent driving shaping model.
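As a concrete stand-in for the deceleration/acceleration rules in car following (the paper's own model is not specified here), the Intelligent Driver Model gives a standard control law that brakes when closing in on a slower leader and accelerates toward a desired speed otherwise:

```python
def idm_acceleration(v, v_lead, gap, v0=30.0, T=1.5, a_max=1.0, b=2.0, s0=2.0):
    """Intelligent Driver Model acceleration (m/s^2). v: own speed,
    v_lead: leader speed (m/s), gap: bumper-to-bumper gap (m).
    Parameter values are typical textbook defaults, not from the paper."""
    dv = v - v_lead  # closing speed (positive when approaching the leader)
    # Desired gap: jam distance + time-headway term + braking term
    s_star = s0 + max(0.0, v * T + v * dv / (2 * (a_max * b) ** 0.5))
    return a_max * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)

# Approaching a slower leader commands braking (negative acceleration)
print(idm_acceleration(v=25.0, v_lead=15.0, gap=30.0) < 0)
```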

  2. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    Science.gov (United States)

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  3. Cognitive factors shape brain networks for auditory skills: spotlight on auditory working memory

    Science.gov (United States)

    Kraus, Nina; Strait, Dana; Parbery-Clark, Alexandra

    2012-01-01

    Musicians benefit from real-life advantages such as a greater ability to hear speech in noise and to remember sounds, although the biological mechanisms driving such advantages remain undetermined. Furthermore, the extent to which these advantages are a consequence of musical training or innate characteristics that predispose a given individual to pursue music training is often debated. Here, we examine biological underpinnings of musicians’ auditory advantages and the mediating role of auditory working memory. Results from our laboratory are presented within a framework that emphasizes auditory working memory as a major factor in the neural processing of sound. Within this framework, we provide evidence for music training as a contributing source of these abilities. PMID:22524346

  4. Auditory-Phonetic Projection and Lexical Structure in the Recognition of Sine-Wave Words

    Science.gov (United States)

    Remez, Robert E.; Dubowski, Kathryn R.; Broder, Robin S.; Davids, Morgana L.; Grossman, Yael S.; Moskalenko, Marina; Pardo, Jennifer S.; Hasbun, Sara Maria

    2011-01-01

    Speech remains intelligible despite the elimination of canonical acoustic correlates of phonemes from the spectrum. A portion of this perceptual flexibility can be attributed to modulation sensitivity in the auditory-to-phonetic projection, although signal-independent properties of lexical neighborhoods also affect intelligibility in utterances…

  5. A loudspeaker-based room auralization system for auditory research

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel

    to systematically study the signal processing of realistic sounds by normal-hearing and hearing-impaired listeners, a flexible, reproducible and fully controllable auditory environment is needed. A loudspeaker-based room auralization (LoRA) system was developed in this thesis to provide virtual auditory...... in reverberant environments. Each part of the early incoming sound to the listener was auralized with either higher-order Ambisonic (HOA) or using a single loudspeaker. The late incoming sound was auralized with a specific algorithm in order to provide a diffuse reverberation with minimal coloration artifacts...... assessed the impact of the auralization technique used for the early incoming sound (HOA or single loudspeaker) on speech intelligibility. A listening test showed that speech intelligibility experiments can be reliably conducted with the LoRA system with both techniques. The second evaluation investigated...

  6. A loudspeaker-based room auralization system for auditory perception research

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Favrot, Sylvain Emmanuel

    2009-01-01

    Most research on basic auditory function has been conducted in anechoic or almost anechoic environments. The knowledge derived from these experiments cannot directly be transferred to reverberant environments. In order to investigate the auditory signal processing of reverberant sounds....... This system provides a flexible research platform for conducting auditory experiments with normal-hearing, hearing-impaired, and aided hearing-impaired listeners in a fully controlled and realistic environment. This includes measures of basic auditory function (e.g., signal detection, distance perception......) and measures of speech intelligibility. A battery of objective tests (e.g., reverberation time, clarity, interaural correlation coefficient) and subjective tests (e.g., speech reception thresholds) is presented that demonstrates the applicability of the LoRA system....

  7. Cognitive mechanisms associated with auditory sensory gating

    Science.gov (United States)

    Jones, L.A.; Hills, P.J.; Dick, K.M.; Jones, S.P.; Bright, P.

    2016-01-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification. PMID:26716891
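The paired-stimulus gating measure itself is simple to compute: the ratio of the P50 peak amplitude to the second stimulus (S2) over that to the first (S1), with smaller ratios indicating stronger gating. A sketch on synthetic ERPs; the 40-80 ms peak-search window is a common convention, not taken from this study:

```python
import numpy as np

def p50_gating_ratio(erp_s1, erp_s2, times):
    """S2/S1 P50 amplitude ratio for a paired-stimulus paradigm.
    Peaks are taken as the maxima in a 40-80 ms post-stimulus window."""
    window = (times >= 0.040) & (times <= 0.080)
    return float(erp_s2[window].max() / erp_s1[window].max())

# Synthetic example: Gaussian-shaped P50 at 50 ms, S2 suppressed to 40%
t = np.linspace(0, 0.2, 201)
s1 = 5.0 * np.exp(-((t - 0.05) ** 2) / (2 * 0.01 ** 2))  # ~5 uV peak
s2 = 0.4 * s1
print(round(p50_gating_ratio(s1, s2, t), 2))
```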

  8. Crowd wisdom drives intelligent manufacturing

    Directory of Open Access Journals (Sweden)

    Jiaqi Lu

    2017-03-01

    Full Text Available Purpose – A fundamental problem for intelligent manufacturing is to equip the agents with the ability to automatically make judgments and decisions. This paper aims to introduce the basic principle for intelligent crowds in an attempt to show that crowd wisdom could help in making accurate judgments and proper decisions. This further shows the positive effects that crowd wisdom could bring to the entire manufacturing process. Design/methodology/approach – Efforts to support the critical role of crowd wisdom in intelligent manufacturing involve theoretical explanation, including a discussion of several prevailing concepts, such as consumer-to-business (C2B, crowdfunding and an interpretation of the contemporary Big Data mania. In addition, an empirical study with three business cases was conducted to prove the conclusion that our ideas could well explain the current business phenomena and guide the future of manufacturing. Findings – This paper shows that crowd wisdom could help make accurate judgments and proper decisions. It further shows the positive effects that crowd wisdom could bring to the entire manufacturing process. Originality/value – The paper highlights the importance of crowd wisdom in manufacturing with sufficient theoretical and empirical analysis, potentially providing a guideline for future industry.

  9. Artificial intelligence-based speed control of DTC induction motor drives - A comparative study

    Energy Technology Data Exchange (ETDEWEB)

    Gadoue, S.M.; Giaouris, D.; Finch, J.W. [School of Electrical, Electronic and Computer Engineering, Newcastle University, Newcastle upon Tyne NE1 7RU (United Kingdom)

    2009-01-15

    The design of the speed controller greatly affects the performance of an electric drive. A common strategy to control an induction machine is to use direct torque control combined with a PI speed controller. These schemes require proper and continuous tuning and therefore adaptive controllers are proposed to replace conventional PI controllers to improve the drive's performance. This paper presents a comparison between four different speed controller design strategies based on artificial intelligence techniques; two are based on tuning of conventional PI controllers, the third makes use of a fuzzy logic controller and the last is based on hybrid fuzzy sliding mode control theory. To provide a numerical comparison between different controllers, a performance index based on speed error is assigned. All methods are applied to the direct torque control scheme and each control strategy has been tested for its robustness and disturbance rejection ability. (author)
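The speed-error performance index used to rank controllers can be illustrated with a discrete PI loop on a crude first-order drive model. All plant constants and gains below are invented for the demonstration; the index here is integral of absolute error (IAE):

```python
def simulate_pi_speed_control(kp, ki, setpoint=100.0, dt=0.001, steps=2000):
    """Discrete PI speed loop on a first-order drive model
    (J * domega/dt = torque - friction * omega). Returns the IAE
    performance index; lower is better. Plant constants are illustrative."""
    J, friction = 0.01, 0.05          # inertia, viscous friction
    omega, integral, iae = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = setpoint - omega
        integral += error * dt
        torque = kp * error + ki * integral   # PI control law
        omega += (torque - friction * omega) / J * dt
        iae += abs(error) * dt
    return iae

# A better-tuned loop accumulates a smaller speed-error index
print(simulate_pi_speed_control(0.5, 5.0) < simulate_pi_speed_control(0.05, 0.5))
```

Adaptive schemes such as the fuzzy and fuzzy sliding-mode controllers compared in the paper aim to keep such an index low without the continuous manual retuning a fixed PI requires.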

  10. Driving Style Analysis Using Primitive Driving Patterns With Bayesian Nonparametric Approaches

    OpenAIRE

    Wang, Wenshuo; Xi, Junqiang; Zhao, Ding

    2017-01-01

    Analysis and recognition of driving styles are profoundly important to intelligent transportation and vehicle calibration. This paper presents a novel driving style analysis framework using the primitive driving patterns learned from naturalistic driving data. In order to achieve this, first, a Bayesian nonparametric learning method based on a hidden semi-Markov model (HSMM) is introduced to extract primitive driving patterns from time series driving data without prior knowledge of the number...

  11. Comprehensive evaluation of a child with an auditory brainstem implant.

    Science.gov (United States)

    Eisenberg, Laurie S; Johnson, Karen C; Martinez, Amy S; DesJardin, Jean L; Stika, Carren J; Dzubak, Danielle; Mahalak, Mandy Lutz; Rector, Emily P

    2008-02-01

    We had an opportunity to evaluate an American child whose family traveled to Italy to receive an auditory brainstem implant (ABI). The goal of this evaluation was to obtain insight into possible benefits derived from the ABI and to begin developing assessment protocols for pediatric clinical trials. Case study. Tertiary referral center. Pediatric ABI Patient 1 was born with auditory nerve agenesis. Auditory brainstem implant surgery was performed in December, 2005, in Verona, Italy. The child was assessed at the House Ear Institute, Los Angeles, in July 2006 at the age of 3 years 11 months. Follow-up assessment has continued at the HEAR Center in Birmingham, Alabama. Auditory brainstem implant. Performance was assessed for the domains of audition, speech and language, intelligence and behavior, quality of life, and parental factors. Patient 1 demonstrated detection of sound, speech pattern perception with visual cues, and inconsistent auditory-only vowel discrimination. Language age with signs was approximately 2 years, and vocalizations were increasing. Of normal intelligence, he exhibited attention deficits with difficulty completing structured tasks. Twelve months later, this child was able to identify speech patterns consistently; closed-set word identification was emerging. These results were within the range of performance for a small sample of similarly aged pediatric cochlear implant users. Pediatric ABI assessment with a group of well-selected children is needed to examine risk versus benefit in this population and to analyze whether open-set speech recognition is achievable.

  12. Plasticity in the Human Speech Motor System Drives Changes in Speech Perception

    Science.gov (United States)

    Lametti, Daniel R.; Rochet-Capellan, Amélie; Neufeld, Emily; Shiller, Douglas M.

    2014-01-01

    Recent studies of human speech motor learning suggest that learning is accompanied by changes in auditory perception. But what drives the perceptual change? Is it a consequence of changes in the motor system? Or is it a result of sensory inflow during learning? Here, subjects participated in a speech motor-learning task involving adaptation to altered auditory feedback and they were subsequently tested for perceptual change. In two separate experiments, involving two different auditory perceptual continua, we show that changes in the speech motor system that accompany learning drive changes in auditory speech perception. Specifically, we obtained changes in speech perception when adaptation to altered auditory feedback led to speech production that fell into the phonetic range of the speech perceptual tests. However, a similar change in perception was not observed when the auditory feedback that subjects received during learning fell into the phonetic range of the perceptual tests. This indicates that the central motor outflow associated with vocal sensorimotor adaptation drives changes to the perceptual classification of speech sounds. PMID:25080594

  13. Competition and convergence between auditory and cross-modal visual inputs to primary auditory cortical areas

    Science.gov (United States)

    Mao, Yu-Ting; Hua, Tian-Miao

    2011-01-01

    Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into

  14. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

    Science.gov (United States)

    Schall, Sonja; von Kriegstein, Katharina

    2014-01-01

    It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training … comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.

  15. Design of an intelligent car

    Science.gov (United States)

    Na, Yongyi

    2017-03-01

    This record describes the design of a simple intelligent car that uses an AT89S52 single-chip microcomputer (SCM) as its detection and control core. A TL-Q5MC metal sensor, which responds to iron, detects the track and feeds its signal back to the SCM, which then drives the car through the course at a predetermined speed according to the scheduled operating mode; selecting a different operating mode also allows the car to follow an S-shaped iron track. An A44E Hall element measures the car's speed, and a 1602 LCD displays the driving time; after the car stops, the display cycles through the driving time, distance travelled, average speed, and instantaneous speed. The design is simple in structure and easy to implement, yet achieves a useful degree of intelligence and user-friendliness.
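    The speed and distance bookkeeping this record describes can be sketched in a few lines. The following Python snippet is a hypothetical illustration (the constants and function names are assumptions, not the authors' firmware, which would run in C on the AT89S52): it converts Hall-element pulse counts into the distance, instantaneous speed, and average speed that the 1602 LCD would display.

    ```python
    # Hypothetical sketch of the speed/distance bookkeeping described above: the
    # A44E Hall element emits one pulse per wheel-magnet pass, and the controller
    # turns pulse counts into the values shown on the 1602 LCD. The constants and
    # function names are illustrative assumptions, not the authors' firmware.

    WHEEL_CIRCUMFERENCE_M = 0.20   # assumed wheel circumference in metres
    PULSES_PER_REVOLUTION = 1      # assumed: one magnet on the wheel

    def distance_m(pulse_count: int) -> float:
        """Total distance travelled, from the cumulative Hall-pulse count."""
        return pulse_count / PULSES_PER_REVOLUTION * WHEEL_CIRCUMFERENCE_M

    def speed_mps(pulses_in_window: int, window_s: float) -> float:
        """Instantaneous speed, estimated over a fixed timing window."""
        return distance_m(pulses_in_window) / window_s

    def average_speed_mps(total_pulses: int, elapsed_s: float) -> float:
        """Average speed over the whole run, shown after the car stops."""
        return distance_m(total_pulses) / elapsed_s

    # Example: 50 pulses over 10 s -> 10.0 m travelled at 1.0 m/s on average.
    ```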

  16. Statistical representation of sound textures in the impaired auditory system

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2015-01-01

    Many challenges exist when it comes to understanding and compensating for hearing impairment. Traditional methods, such as pure tone audiometry and speech intelligibility tests, offer insight into the deficiencies of a hearing-impaired listener, but can only partially reveal the mechanisms that underlie the hearing loss. An alternative approach is to investigate the statistical representation of sounds for hearing-impaired listeners along the auditory pathway. Using models of the auditory periphery and sound synthesis, we aimed to probe hearing-impaired perception for sound textures – temporally

  17. Effect of age at cochlear implantation on auditory and speech development of children with auditory neuropathy spectrum disorder.

    Science.gov (United States)

    Liu, Yuying; Dong, Ruijuan; Li, Yuling; Xu, Tianqiu; Li, Yongxin; Chen, Xueqing; Gong, Shusheng

    2014-12-01

    To evaluate the auditory and speech abilities in children with auditory neuropathy spectrum disorder (ANSD) after cochlear implantation (CI) and determine the role of age at implantation. Ten children participated in this retrospective case series study. All children had evidence of ANSD. All subjects had no cochlear nerve deficiency on magnetic resonance imaging and had used the cochlear implants for a period of 12-84 months. We divided our children into two groups: children who underwent implantation before 24 months of age and children who underwent implantation after 24 months of age. Their auditory and speech abilities were evaluated using the following: behavioral audiometry, the Categories of Auditory Performance (CAP), the Meaningful Auditory Integration Scale (MAIS), the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test (LNT), the Multisyllabic Lexical Neighborhood Test (MLNT), the Speech Intelligibility Rating (SIR) and the Meaningful Use of Speech Scale (MUSS). All children showed progress in their auditory and language abilities. The 4-frequency average hearing level (HL) (500Hz, 1000Hz, 2000Hz and 4000Hz) of aided hearing thresholds ranged from 17.5 to 57.5dB HL. All children developed time-related auditory perception and speech skills. Scores of children with ANSD who received cochlear implants before 24 months tended to be better than those of children who received cochlear implants after 24 months. Seven children completed the Mandarin Lexical Neighborhood Test. Approximately half of the children showed improved open-set speech recognition. Cochlear implantation is helpful for children with ANSD and may be a good optional treatment for many ANSD children. In addition, children with ANSD fitted with cochlear implants before 24 months tended to acquire auditory and speech skills better than children fitted with cochlear implants after 24 months. 

  18. Auditory and Visual Continuous Performance Tests: Relationships with Age, Gender, Cognitive Functioning, and Classroom Behavior

    Science.gov (United States)

    Lehman, Elyse Brauch; Olson, Vanessa A.; Aquilino, Sally A.; Hall, Laura C.

    2006-01-01

    Elementary school children in three grade groups (Grades K/1, 3, and 5/6) completed either the auditory or the visual 1/9 vigilance task from the Gordon Diagnostic System (GDS) as well as subtests from the Wechsler Intelligence Scale for Children--Third Edition and auditory or visual processing subtests from the Woodcock-Johnson Tests of Cognitive…

  19. Application of auditory signals to the operation of an agricultural vehicle: results of pilot testing.

    Science.gov (United States)

    Karimi, D; Mondor, T A; Mann, D D

    2008-01-01

    The operation of agricultural vehicles is a multitask activity that requires proper distribution of attentional resources. Human factors theories suggest that proper utilization of the operator's sensory capacities under such conditions can improve the operator's performance and reduce the operator's workload. Using a tractor driving simulator, this study investigated whether auditory cues can be used to improve performance of the operator of an agricultural vehicle. Steering of a vehicle was simulated in visual mode (where driving error was shown to the subject using a lightbar) and in auditory mode (where a pair of speakers was used to convey the direction and/or magnitude of the driving error). A secondary task was also introduced in order to simulate the monitoring of an attached machine. This task included monitoring of two identical displays, which were placed behind the simulator, and responding to them, when needed, using a joystick. This task was also implemented in auditory mode (in which a beep signaled the subject to push the proper button when a response was needed) and in visual mode (in which there was no beep and visual monitoring of the displays was necessary). Two levels of difficulty of the monitoring task were used. Deviation of the simulated vehicle from a desired straight line was used as the measure of performance in the steering task, and reaction time to the displays was used as the measure of performance in the monitoring task. Results of the experiments showed that steering performance was significantly better when steering was a visual task (driving errors were 40% to 60% of the driving errors in auditory mode), although subjective evaluations showed that auditory steering could be easier, depending on the implementation. Performance in the monitoring task was significantly better for auditory implementation (reaction time was approximately 6 times shorter), and this result was strongly supported by subjective ratings. The majority of the

  20. Influence of unexpected events on driving behaviour at different hierarchical levels: a driving simulator experiment

    NARCIS (Netherlands)

    Schaap, T.W.; Horst, A.R.A. van der; Arem, B. van

    2008-01-01

    Computer based simulation models of human driving behaviour can be used effectively to model driving and behavioural adaptation to Intelligent Transport System (ITS). This can be a useful step in human centered design of ITS. To construct a comprehensive model of driving behaviour, the interaction

  1. Influence of unexpected events on driving behaviour at different hierarchical levels: A driving simulator experiment

    NARCIS (Netherlands)

    Schaap, Nina; van der Horst, A.R.A.; van Arem, Bart; Brusque, Corinne

    2008-01-01

    Computer based simulation models of human driving behaviour can be used effectively to model driving behaviour and behavioural adaptation to Intelligent Transport System (ITS). This can be a useful step in human centered design of ITS. To construct a comprehensive model of driving behaviour, the

  2. H1 antihistamines and driving

    OpenAIRE

    Florin-Dan, Popescu

    2008-01-01

    Driving performances depend on cognitive, psychomotor and perception functions. The CNS adverse effects of some H1 antihistamines can alter the patient ability to drive. Data from studies using standardized objective cognitive and psychomotor tests (Choice Reaction Time, Critical Flicker Fusion, Digital Symbol Substitution Test), functional brain imaging (Positron Emission Tomography, functional Magnetic Resonance Imaging), neurophysiological studies (Multiple Sleep Latency Test, auditory and...

  3. Patterns of language and auditory dysfunction in 6-year-old children with epilepsy.

    Science.gov (United States)

    Selassie, Gunilla Rejnö-Habte; Olsson, Ingrid; Jennische, Margareta

    2009-01-01

    In a previous study we reported difficulty with expressive language and visuoperceptual ability in preschool children with epilepsy and otherwise normal development. The present study analysed speech and language dysfunction for each individual in relation to epilepsy variables, ear preference, and intelligence in these children and described their auditory function. Twenty 6-year-old children with epilepsy (14 females, 6 males; mean age 6:5 y, range 6 y-6 y 11 mo) and 30 reference children without epilepsy (18 females, 12 males; mean age 6:5 y, range 6 y-6 y 11 mo) were assessed for language and auditory ability. Low scores for the children with epilepsy were analysed with respect to speech-language domains, type of epilepsy, site of epileptiform activity, intelligence, and language laterality. Auditory attention, perception, discrimination, and ear preference were measured with a dichotic listening test, and group comparisons were performed. Children with left-sided partial epilepsy had extensive language dysfunction. Most children with partial epilepsy had phonological dysfunction. Language dysfunction was also found in children with generalized and unclassified epilepsies. The children with epilepsy performed significantly worse than the reference children in auditory attention, perception of vowels and discrimination of consonants for the right ear and had more left ear advantage for vowels, indicating undeveloped language laterality.

  4. Auditory cortex involvement in emotional learning and memory.

    Science.gov (United States)

    Grosso, A; Cambiaghi, M; Concina, G; Sacco, T; Sacchetti, B

    2015-07-23

    Emotional memories represent the core of human and animal life and drive future choices and behaviors. Early research involving brain lesion studies in animals led to the idea that the auditory cortex participates in emotional learning by processing the sensory features of auditory stimuli paired with emotional consequences and by transmitting this information to the amygdala. Nevertheless, electrophysiological and imaging studies revealed that, following emotional experiences, the auditory cortex undergoes learning-induced changes that are highly specific, associative and long lasting. These studies suggested that the role played by the auditory cortex goes beyond stimulus elaboration and transmission. Here, we discuss three major perspectives created by these data. In particular, we analyze the possible roles of the auditory cortex in emotional learning, we examine the recruitment of the auditory cortex during early and late memory trace encoding, and finally we consider the functional interplay between the auditory cortex and subcortical nuclei, such as the amygdala, that process affective information. We conclude that, starting from the early phase of memory encoding, the auditory cortex has a more prominent role in emotional learning, through its connections with subcortical nuclei, than is typically acknowledged. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. H1 antihistamines and driving.

    Science.gov (United States)

    Popescu, Florin Dan

    2008-01-01

    Driving performances depend on cognitive, psychomotor and perception functions. The CNS adverse effects of some H1 antihistamines can alter the patient's ability to drive. Data from studies using standardized objective cognitive and psychomotor tests (Choice Reaction Time, Critical Flicker Fusion, Digital Symbol Substitution Test), functional brain imaging (Positron Emission Tomography, functional Magnetic Resonance Imaging), neurophysiological studies (Multiple Sleep Latency Test, auditory and visual evoked potentials), experimental simulated driving (driving simulators) and real driving studies (the Highway Driving Test, with the evaluation of the Standard Deviation of Lateral Position, and the Car Following Test, with the measurement of the Brake Reaction Time) must be discussed in order to classify a H1 antihistamine as a true non-sedating one.

  6. On the relationship between dynamic visual and auditory processing and literacy skills; results from a large primary-school study.

    Science.gov (United States)

    Talcott, Joel B; Witton, Caroline; Hebb, Gillian S; Stoodley, Catherine J; Westwood, Elizabeth A; France, Susan J; Hansen, Peter C; Stein, John F

    2002-01-01

    Three hundred and fifty randomly selected primary school children completed a psychometric and psychophysical test battery to ascertain relationships between reading ability and sensitivity to dynamic visual and auditory stimuli. The first analysis examined whether sensitivity to visual coherent motion and auditory frequency resolution differed between groups of children with different literacy and cognitive skills. For both tasks, a main effect of literacy group was found in the absence of a main effect for intelligence or an interaction between these factors. To assess the potential confounding effects of attention, a second analysis of the frequency discrimination data was conducted with performance on catch trials entered as a covariate. Significant effects for both the covariate and literacy skill were found, but again there was no main effect of intelligence, nor was there an interaction between intelligence and literacy skill. Regression analyses were conducted to determine the magnitude of the relationship between sensory and literacy skills in the entire sample. Both visual motion sensitivity and auditory sensitivity to frequency differences were robust predictors of children's literacy skills and their orthographic and phonological skills.

  7. Multichannel Spatial Auditory Display for Speech Communications

    Science.gov (United States)

    Begault, Durand R.; Erbe, Tom

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degree azimuth positions.
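    The adaptive staircase procedure mentioned in this record can be illustrated with a generic 2-down/1-up rule, which converges on the stimulus level yielding roughly 70.7% correct responses. The Python sketch below is a plain illustration of this class of method, not the exact procedure or parameters used in the NASA study; the `trial` callback, starting level, step size, and trial count are all assumptions.

    ```python
    def two_down_one_up(trial, start_db=0.0, step_db=2.0, n_trials=60):
        """Run a generic 2-down/1-up adaptive staircase and return reversal levels.

        `trial(level_db)` must return True for a correct response. Two correct
        responses in a row make the task harder (level down); one error makes it
        easier (level up). This rule converges on the level giving ~70.7% correct.
        The callback, step size, and trial count are illustrative assumptions.
        """
        level, correct_streak, last_dir = start_db, 0, 0
        reversals = []
        for _ in range(n_trials):
            if trial(level):
                correct_streak += 1
                if correct_streak == 2:       # two correct in a row -> harder
                    correct_streak = 0
                    if last_dir == +1:        # direction change: record reversal
                        reversals.append(level)
                    last_dir = -1
                    level -= step_db
            else:
                correct_streak = 0            # one error -> easier
                if last_dir == -1:
                    reversals.append(level)
                last_dir = +1
                level += step_db
        return reversals

    # Threshold estimate: average the last few reversal levels, e.g.
    # sum(reversals[-4:]) / 4.
    ```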

  8. Cooperative driving in platooning scenarios

    NARCIS (Netherlands)

    van der Linden, M.J.G.M.; Nijmeijer, H.

    2011-01-01

    Cooperative driving enables a more efficient use of existing infrastructure which reduces the expenditures and land use for new roads. Cooperative driving is based on intelligent communication between vehicles and between vehicles and their environment. Vehicles can drive closer to each other due to

  9. Toward New-Generation Intelligent Manufacturing

    Directory of Open Access Journals (Sweden)

    Ji Zhou

    2018-02-01

    Intelligent manufacturing is a general concept that is under continuous development. It can be categorized into three basic paradigms: digital manufacturing, digital-networked manufacturing, and new-generation intelligent manufacturing. New-generation intelligent manufacturing represents an in-depth integration of new-generation artificial intelligence (AI) technology and advanced manufacturing technology. It runs through every link in the full life-cycle of design, production, product, and service. The concept also relates to the optimization and integration of corresponding systems; the continuous improvement of enterprises’ product quality, performance, and service levels; and reduction in resource consumption. New-generation intelligent manufacturing acts as the core driving force of the new industrial revolution and will continue to be the main pathway for the transformation and upgrading of the manufacturing industry in the decades to come. Human-cyber-physical systems (HCPSs) reveal the technological mechanisms of new-generation intelligent manufacturing and can effectively guide related theoretical research and engineering practice. Given the sequential development, cross interaction, and iterative upgrading characteristics of the three basic paradigms of intelligent manufacturing, a technology roadmap for “parallel promotion and integrated development” should be developed in order to drive forward the intelligent transformation of the manufacturing industry in China. Keywords: Advanced manufacturing, New-generation intelligent manufacturing, Human-cyber-physical system, New-generation AI, Basic paradigms, Parallel promotion, Integrated development

  10. Modeling Driving Behavior at Roundabouts: Impact of Roundabout Layout and Surrounding Traffic on Driving Behavior

    OpenAIRE

    Zhao, Min; Käthner, David; Söffker, Dirk; Jipp, Meike; Lemmer, Karsten

    2017-01-01

    Driving behavior prediction at roundabouts is an important challenge for improving driving safety by supporting drivers with intelligent assistance systems. To predict driving behavior efficiently, steering wheel status was proven to have robust predictability based on a Support Vector Machine algorithm. Previous research has not considered potential effects of roundabout layout and surrounding traffic on driving behavior, but that consideration can certainly improve the prediction results…

  11. Comparing the information conveyed by envelope modulation for speech intelligibility, speech quality, and music quality.

    Science.gov (United States)

    Kates, James M; Arehart, Kathryn H

    2015-10-01

    This paper uses mutual information to quantify the relationship between envelope modulation fidelity and perceptual responses. Data from several previous experiments that measured speech intelligibility, speech quality, and music quality are evaluated for normal-hearing and hearing-impaired listeners. A model of the auditory periphery is used to generate envelope signals, and envelope modulation fidelity is calculated using the normalized cross-covariance of the degraded signal envelope with that of a reference signal. Two procedures are used to describe the envelope modulation: (1) modulation within each auditory frequency band and (2) spectro-temporal processing that analyzes the modulation of spectral ripple components fit to successive short-time spectra. The results indicate that low modulation rates provide the highest information for intelligibility, while high modulation rates provide the highest information for speech and music quality. The low-to-mid auditory frequencies are most important for intelligibility, while mid frequencies are most important for speech quality and high frequencies are most important for music quality. Differences between the spectral ripple components used for the spectro-temporal analysis were not significant in five of the six experimental conditions evaluated. The results indicate that different modulation-rate and auditory-frequency weights may be appropriate for indices designed to predict different types of perceptual relationships.
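    The fidelity metric this record names, the normalized cross-covariance of a degraded signal envelope with a reference envelope, can be written compactly. The sketch below is a generic zero-lag implementation over plain Python lists; it illustrates the metric as described, not the authors' actual model code, which first passes the signals through an auditory-periphery model.

    ```python
    import math

    def normalized_cross_covariance(reference, degraded):
        """Normalized cross-covariance of two envelope signals at zero lag.

        Subtract each envelope's mean, then divide the covariance by the product
        of the standard deviations, giving a fidelity value in [-1, 1]. Generic
        sketch of the metric named in the abstract, not the authors' code.
        """
        n = len(reference)
        mr = sum(reference) / n
        md = sum(degraded) / n
        cov = sum((r - mr) * (d - md) for r, d in zip(reference, degraded))
        var_r = sum((r - mr) ** 2 for r in reference)
        var_d = sum((d - md) ** 2 for d in degraded)
        return cov / math.sqrt(var_r * var_d)

    # A distortion-free envelope matches itself perfectly:
    # normalized_cross_covariance(env, env) == 1.0
    ```

    Because the metric is mean- and scale-invariant, a uniformly amplified envelope still scores 1.0; only changes to the modulation shape reduce the fidelity.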

  12. Efficacy of individual computer-based auditory training for people with hearing loss: a systematic review of the evidence.

    Directory of Open Access Journals (Sweden)

    Helen Henshaw

    BACKGROUND: Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss. OBJECTIVE: This systematic review (PROSPERO 2011: CRD42011001406) evaluated the published evidence base for the efficacy of individual computer-based auditory training to improve speech intelligibility, cognition and communication abilities in adults with hearing loss, with or without hearing aids or cochlear implants. METHODS: A systematic search of eight databases and key journals identified 229 articles published since 1996, 13 of which met the inclusion criteria. Data were independently extracted and reviewed by the two authors. Study quality was assessed using ten pre-defined scientific and intervention-specific measures. RESULTS: Auditory training resulted in improved performance for trained tasks in 9/10 articles that reported on-task outcomes. Although significant generalisation of learning was shown to untrained measures of speech intelligibility (11/13 articles), cognition (1/1 article) and self-reported hearing abilities (1/2 articles), improvements were small and not robust. Where reported, compliance with computer-based auditory training was high, and retention of learning was shown at post-training follow-ups. Published evidence was of very-low to moderate study quality. CONCLUSIONS: Our findings demonstrate that published evidence for the efficacy of individual computer-based auditory training for adults with hearing loss is not robust and therefore cannot be reliably used to guide intervention at this time. We identify a need for high-quality evidence to further examine the efficacy of computer-based auditory training for people with hearing loss.

  13. The 15th Annual Intelligent Ground Vehicle Competition: Intelligent Ground Robots Created by Intelligent Students

    National Research Council Canada - National Science Library

    Theisen, Bernard L

    2007-01-01

    ..., and mobile platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities...

  14. Mode of communication and classroom placement impact on speech intelligibility.

    Science.gov (United States)

    Tobey, Emily A; Rekart, Deborah; Buckley, Kristi; Geers, Ann E

    2004-05-01

    To examine the impact of classroom placement and mode of communication on speech intelligibility scores in children aged 8 to 9 years using multichannel cochlear implants. Classroom placement (special education, partial mainstream, and full mainstream) and mode of communication (total communication and auditory-oral) reported via parental rating scales before and 4 times after implantation were the independent variables. Speech intelligibility scores obtained at 8 to 9 years of age were the dependent variables. The study included 131 congenitally deafened children between the ages of 8 and 9 years who received a multichannel cochlear implant before the age of 5 years. Higher speech intelligibility scores at 8 to 9 years of age were significantly associated with enrollment in auditory-oral programs rather than enrollment in total communication programs, regardless of when the mode of communication was used (before or after implantation). Speech intelligibility at 8 to 9 years of age was not significantly influenced by classroom placement before implantation, regardless of mode of communication. After implantation, however, there were significant associations between classroom placement and speech intelligibility scores at 8 to 9 years of age. Higher speech intelligibility scores at 8 to 9 years of age were associated with classroom exposure to normal-hearing peers in full or partial mainstream placements than in self-contained, special education placements. Higher speech intelligibility scores in 8- to 9-year-old congenitally deafened cochlear implant recipients were associated with educational settings that emphasize oral communication development. Educational environments that incorporate exposure to normal-hearing peers were also associated with higher speech intelligibility scores at 8 to 9 years of age.

  15. Driving ability in cancer patients receiving long-term morphine analgesia.

    Science.gov (United States)

    Vainio, A; Ollila, J; Matikainen, E; Rosenberg, P; Kalso, E

    1995-09-09

    When given in single doses to healthy volunteers, opioid analgesics impair reaction time, muscle coordination, attention, and short-term memory sufficiently to affect driving and other skilled activities. Despite the increasing use of daily oral morphine, little is known about the effect of long-term opioid therapy on psychomotor performance. To examine the effects of continuous morphine medication, psychological and neurological tests originally designed for professional motor vehicle drivers were conducted in two groups of cancer patients who were similar apart from their experience of pain: 24 were receiving continuous morphine (mean 209 mg oral morphine daily) for cancer pain, and 25 were pain-free without regular analgesics. Though the results were a little worse in the patients taking morphine, there were no significant differences between the groups in intelligence, vigilance, concentration, fluency of motor reactions, or division of attention. Of the neural function tests, reaction times (auditory, visual, associative), thermal discrimination, and body sway with eyes open were similar in the two groups; only balancing ability with closed eyes was worse in the morphine group. These results indicate that, in cancer patients receiving long-term morphine treatment with stable doses, morphine has only a slight and selective effect on functions related to driving.

  16. Improving Usefulness of Automated Driving by Lowering Primary Task Interference through HMI Design

    Directory of Open Access Journals (Sweden)

    Frederik Naujoks

    2017-01-01

    During conditionally automated driving (CAD), driving time can be used for non-driving-related tasks (NDRTs). To increase safety and comfort of an automated ride, upcoming automated manoeuvres such as lane changes or speed adaptations may be communicated to the driver. However, as the driver’s primary task consists of performing NDRTs, they might prefer to be informed in a nondistracting way. In this paper, the potential of using speech output to improve human-automation interaction is explored. A sample of 17 participants completed different situations which involved communication between the automation and the driver in a motion-based driving simulator. The Human-Machine Interface (HMI) of the automated driving system consisted of a visual-auditory HMI with either generic auditory feedback (i.e., standard information tones) or additional speech output. The drivers were asked to perform a common NDRT during the drive. Compared to generic auditory output, communicating upcoming automated manoeuvres additionally by speech led to a decrease in self-reported visual workload and decreased monitoring of the visual HMI. However, interruptions of the NDRT were not affected by additional speech output. Participants clearly favoured the HMI with additional speech-based output, demonstrating the potential of speech to enhance usefulness and acceptance of automated vehicles.

  17. The influence of masker type on early reflection processing and speech intelligibility (L)

    DEFF Research Database (Denmark)

    Arweiler, Iris; Buchholz, Jörg M.; Dau, Torsten

    2013-01-01

    Arweiler and Buchholz [J. Acoust. Soc. Am. 130, 996-1005 (2011)] showed that, while the energy of early reflections (ERs) in a room improves speech intelligibility, the benefit is smaller than that provided by the energy of the direct sound (DS). In terms of integration of ERs and DS, binaural listening did not provide a benefit from ERs apart from a binaural energy summation, such that monaural auditory processing could account for the data. However, a diffuse speech shaped noise (SSN) was used in the speech intelligibility experiments, which does not provide distinct binaural cues to the auditory system. In the present study, the monaural and binaural benefit from ERs for speech intelligibility was investigated using three directional maskers presented from 90° azimuth: a SSN, a multi-talker babble, and a reversed two-talker masker. For normal-hearing as well as hearing-impaired listeners

  18. Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.

    Science.gov (United States)

    Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva

    2016-01-01

    Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month, 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediately postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the listening in specialized noise test that assesses sentence perception in

  19. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in the early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway, from rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower rates of modulation in the cortex (around 30 Hz). To complete this model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from the representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectro-temporal Modulation Transfer Functions (MTFs) of this model are investigated and shown to be consistent with the salient trends in the human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. Therefore, the model is used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noise and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by perceptually tolerable resynthesized sounds from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional
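The lowpass modulation selectivity described above can be illustrated with a toy computation. The sketch below is an illustration only, not the model from this abstract: it filters a random log-frequency spectrogram in the 2D modulation domain, keeping temporal rates below 16 Hz and spectral scales below 2 cycles/octave. The spectrogram data, frame rate, and channel density are arbitrary choices.

```python
import numpy as np

def spectrotemporal_lowpass(spec, frame_rate, chans_per_oct,
                            rate_cut=16.0, scale_cut=2.0):
    """Lowpass-filter a (channels x frames) spectrogram in the modulation
    domain: rate_cut in Hz (temporal modulations), scale_cut in
    cycles/octave (spectral modulations)."""
    rates = np.fft.fftfreq(spec.shape[1], d=1.0 / frame_rate)      # Hz
    scales = np.fft.fftfreq(spec.shape[0], d=1.0 / chans_per_oct)  # cyc/oct
    S = np.fft.fft2(spec)
    # keep only modulation components inside the lowpass region
    mask = (np.abs(scales)[:, None] <= scale_cut) & (np.abs(rates)[None, :] <= rate_cut)
    return np.real(np.fft.ifft2(S * mask))

# toy spectrogram: 64 channels (8 per octave), 100 frames at 100 frames/s
rng = np.random.default_rng(0)
spec = rng.standard_normal((64, 100))
smooth = spectrotemporal_lowpass(spec, frame_rate=100.0, chans_per_oct=8.0)
print(smooth.shape)
```

Removing the high-rate and high-scale bins leaves a smoothed representation, mirroring the reduced temporal dynamics of cortical responses.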

  20. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  1. Assessment of children with suspected auditory processing disorder: a factor analysis study.

    Science.gov (United States)

    Ahmmed, Ansar U; Ahmmed, Afsara A; Bath, Julie R; Ferguson, Melanie A; Plack, Christopher J; Moore, David R

    2014-01-01

    To identify the factors that may underlie the deficits in children with listening difficulties, despite normal pure-tone audiograms. These children may have auditory processing disorder (APD), but there is no universally agreed consensus as to what constitutes APD. The authors therefore refer to these children as children with suspected APD (susAPD) and aim to clarify the role of attention, cognition, memory, sensorimotor processing speed, speech, and nonspeech auditory processing in susAPD. It was expected that a factor analysis would show how nonauditory and supramodal factors relate to auditory behavioral measures in such children, facilitating a greater understanding of the nature of listening difficulties and thus further helping to characterize APD and to design multimodal test batteries for diagnosing APD. Factor analysis was performed on outcomes from 110 children (68 male, 42 female; aged 6 to 11 years) with susAPD on a widely used clinical test battery (SCAN-C) and a research test battery (MRC Institute of Hearing Research Multi-center Auditory Processing "IMAP"), both of which have age-based normative data. The IMAP included backward masking, simultaneous masking, frequency discrimination, nonverbal intelligence, working memory, reading, alerting attention, and motor reaction times to auditory and visual stimuli. SCAN-C included monaural low-redundancy speech (auditory closure and speech in noise) and dichotic listening tests (competing words and competing sentences) that assess divided auditory attention and hence executive attention. Three factors were extracted: "general auditory processing," "working memory and executive attention," and "processing speed and alerting attention." Frequency discrimination, backward masking, simultaneous masking, and monaural low-redundancy speech tests represented the "general auditory processing" factor. Dichotic listening and the IMAP cognitive tests (apart from nonverbal intelligence) were represented in the "working

  2. Intelligent Intrusion Detection of Grey Hole and Rushing Attacks in Self-Driving Vehicular Networks

    Directory of Open Access Journals (Sweden)

    Khattab M. Ali Alheeti

    2016-07-01

    Full Text Available Vehicular ad hoc networks (VANETs) play a vital role in the success of self-driving and semi-self-driving vehicles, where they improve safety and comfort. Such vehicles depend heavily on external communication with the surrounding environment via data control and Cooperative Awareness Message (CAM) exchanges. VANETs are potentially exposed to a number of attacks, such as grey hole, black hole, wormhole, and rushing attacks. This work presents an intelligent Intrusion Detection System (IDS) that relies on anomaly detection to protect the external communication system from grey hole and rushing attacks. These attacks aim to disrupt the transmission between vehicles and roadside units. The IDS uses features obtained from a trace file generated in a network simulator and consists of a feed-forward neural network and a support vector machine. Additionally, the paper studies the use of a novel systematic response, employed to protect the vehicle when it encounters malicious behaviour. Our simulations of the proposed detection system show that the proposed schemes possess outstanding detection rates with a reduction in false alarms. This safe-mode response system has been evaluated using four performance metrics, namely, received packets, packet delivery ratio, dropped packets, and the average end-to-end delay, under both normal and abnormal conditions.
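As a rough illustration of the anomaly-detection idea, the sketch below trains a minimal one-hidden-layer feed-forward network (standing in for the paper's simulator-derived pipeline) to separate synthetic "normal" from "grey-hole-like" traffic features. The feature set and class means are invented for the example, not taken from the paper's trace files.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-node features: [packet delivery ratio,
# average end-to-end delay (s), dropped-packet rate]. Synthetic stand-ins.
normal = rng.normal([0.95, 0.05, 0.02], 0.02, size=(200, 3))
attack = rng.normal([0.60, 0.20, 0.30], 0.05, size=(200, 3))  # grey-hole-like
X = np.vstack([normal, attack])
y = np.r_[np.zeros(200), np.ones(200)]

# One-hidden-layer feed-forward classifier, cross-entropy loss,
# plain gradient descent.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 1.0
for _ in range(1000):
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    g_out = ((p - y) / len(y))[:, None]   # dLoss/dlogit, averaged
    g_h = (g_out @ W2.T) * (1.0 - h**2)   # backprop through tanh
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(0)
    W1 -= lr * (X.T @ g_h);  b1 -= lr * g_h.sum(0)

pred = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel() > 0.5
acc = (pred == y).mean()
print("detection accuracy:", acc)
```

On such well-separated synthetic features the detector converges quickly; real VANET traces would of course be noisier and need held-out evaluation.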

  3. Short-Term Memory and Auditory Processing Disorders: Concurrent Validity and Clinical Diagnostic Markers

    Science.gov (United States)

    Maerlender, Arthur

    2010-01-01

    Auditory processing disorders (APDs) are of interest to educators and clinicians, as they impact school functioning. Little work has been completed to demonstrate how children with APDs perform on clinical tests. In a series of studies, standard clinical (psychometric) tests from the Wechsler Intelligence Scale for Children, Fourth Edition…

  4. Age-related changes in auditory and cognitive abilities in elderly persons with hearing aids fitted at the initial stages of hearing loss

    Directory of Open Access Journals (Sweden)

    C. Obuchi

    2011-03-01

    Full Text Available In this study, we investigated the relation between the use of hearing aids at the initial stages of hearing loss and age-related changes in the auditory and cognitive abilities of elderly persons. Twelve healthy elderly persons participated in an annual auditory and cognitive longitudinal examination for three years. According to their hearing level, they were divided into 3 subgroups: the normal hearing group, the hearing loss without hearing aids group, and the hearing loss with hearing aids group. All the subjects underwent 4 tests: pure-tone audiometry, a syllable intelligibility test, a dichotic listening test (DLT), and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) Short Forms. Comparison between the 3 groups revealed that the hearing loss without hearing aids group showed the lowest scores for the performance tasks, in contrast to the hearing level and intelligibility results. The other groups showed no significant difference in the WAIS-R subtests. This result indicates that prescription of a hearing aid during the early stages of hearing loss is related to the retention of cognitive abilities in such elderly people. However, there were no statistically significant correlations between the auditory and cognitive tasks.

  5. Automatic intelligent cruise control

    OpenAIRE

    Stanton, NA; Young, MS

    2006-01-01

    This paper reports a study on the evaluation of automatic intelligent cruise control (AICC) from a psychological perspective. It was anticipated that AICC would have an effect upon the psychology of driving—namely, make the driver feel like they have less control, reduce the level of trust in the vehicle, and make drivers less situationally aware, but might reduce workload and make driving less stressful. Drivers were asked to drive in a driving simulator under manual and automatic inte...

  6. Memory performance on the Auditory Inference Span Test is independent of background noise type for young adults with normal hearing at high speech intelligibility.

    Science.gov (United States)

    Rönnberg, Niklas; Rudner, Mary; Lunner, Thomas; Stenfelt, Stefan

    2014-01-01

    Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use AIST to investigate the effect of background noise types and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when background noise was speech-like. Performance on AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening in easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise.

  7. Memory performance on the Auditory Inference Span Test is independent of background noise type for young adults with normal hearing at high speech intelligibility

    Directory of Open Access Journals (Sweden)

    Niklas eRönnberg

    2014-12-01

    Full Text Available Listening in noise is often perceived to be effortful. This is partly because cognitive resources are engaged in separating the target signal from background noise, leaving fewer resources for storage and processing of the content of the message in working memory. The Auditory Inference Span Test (AIST) is designed to assess listening effort by measuring the ability to maintain and process heard information. The aim of this study was to use the AIST to investigate the effect of background noise type and signal-to-noise ratio (SNR) on listening effort, as a function of working memory capacity (WMC) and updating ability (UA). The AIST was administered in three types of background noise: steady-state speech-shaped noise, amplitude-modulated speech-shaped noise, and unintelligible speech. Three SNRs targeting 90% speech intelligibility or better were used in each of the three noise types, giving nine different conditions. The reading span test assessed WMC, while UA was assessed with the letter memory test. Twenty young adults with normal hearing participated in the study. Results showed that AIST performance was not influenced by noise type at the same intelligibility level, but became worse with worse SNR when the background noise was speech-like. Performance on the AIST also decreased with increasing memory load level. Correlations between AIST performance and the cognitive measurements suggested that WMC is of more importance for listening when SNRs are worse, while UA is of more importance for listening at easier SNRs. The results indicated that in young adults with normal hearing, the effort involved in listening in noise at high intelligibility levels is independent of the noise type. However, when noise is speech-like and intelligibility decreases, listening effort increases, probably due to extra demands on cognitive resources added by the informational masking created by the speech fragments and vocal sounds in the background noise.

  8. Audiomotor Perceptual Training Enhances Speech Intelligibility in Background Noise.

    Science.gov (United States)

    Whitton, Jonathon P; Hancock, Kenneth E; Shannon, Jeffrey M; Polley, Daniel B

    2017-11-06

    Sensory and motor skills can be improved with training, but learning is often restricted to practice stimuli. As an exception, training on closed-loop (CL) sensorimotor interfaces, such as action video games and musical instruments, can impart a broad spectrum of perceptual benefits. Here we ask whether computerized CL auditory training can enhance speech understanding in levels of background noise that approximate a crowded restaurant. Elderly hearing-impaired subjects trained for 8 weeks on a CL game that, like a musical instrument, challenged them to monitor subtle deviations between predicted and actual auditory feedback as they moved their fingertip through a virtual soundscape. We performed our study as a randomized, double-blind, placebo-controlled trial by training other subjects in an auditory working-memory (WM) task. Subjects in both groups improved at their respective auditory tasks and reported comparable expectations for improved speech processing, thereby controlling for placebo effects. Whereas speech intelligibility was unchanged after WM training, subjects in the CL training group could correctly identify 25% more words in spoken sentences or digit sequences presented in high levels of background noise. Numerically, CL audiomotor training provided more than three times the benefit of our subjects' hearing aids for speech processing in noisy listening conditions. Gains in speech intelligibility could be predicted from gameplay accuracy and baseline inhibitory control. However, benefits did not persist in the absence of continuing practice. These studies employ stringent clinical standards to demonstrate that perceptual learning on a computerized audio game can transfer to "real-world" communication challenges. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Event-related potentials and secondary task performance during simulated driving.

    Science.gov (United States)

    Wester, A E; Böcker, K B E; Volkerts, E R; Verster, J C; Kenemans, J L

    2008-01-01

    Inattention and distraction account for a substantial number of traffic accidents. Therefore, we examined the impact of secondary task performance (an auditory oddball task) on a primary driving task (lane keeping). Twenty healthy participants performed two 20-min tests in the Divided Attention Steering Simulator (DASS). The visual secondary task of the DASS was replaced by an auditory oddball task to allow recording of brain activity. The driving task and the secondary (distracting) oddball task were presented in isolation and simultaneously, to assess their mutual interference. In addition to performance measures (lane keeping in the primary driving task and reaction speed in the secondary oddball task), brain activity, i.e. event-related potentials (ERPs), was recorded. Performance parameters on the driving test and the secondary oddball task did not differ between performance in isolation and simultaneous performance. However, when both tasks were performed simultaneously, reaction time variability increased in the secondary oddball task. Analysis of brain activity indicated that ERP amplitude (P3a amplitude) related to the secondary task, was significantly reduced when the task was performed simultaneously with the driving test. This study shows that when performing a simple secondary task during driving, performance of the driving task and this secondary task are both unaffected. However, analysis of brain activity shows reduced cortical processing of irrelevant, potentially distracting stimuli from the secondary task during driving.

  10. GRASP agents: social first, intelligent later

    NARCIS (Netherlands)

    Hofstede, G.J.

    2017-01-01

    This paper urges that if we wish to give social intelligence to our agents, it pays to look at how we acquired our social intelligence ourselves. We are born with drives and motives that are innate and deeply social. Next, as children we are socialized to acquire norms and values and to understand

  11. Influence of memory, attention, IQ and age on auditory temporal processing tests: preliminary study.

    Science.gov (United States)

    Murphy, Cristina Ferraz Borges; Zachi, Elaine Cristina; Roque, Daniela Tsubota; Ventura, Dora Selma Fix; Schochat, Eliane

    2014-01-01

    To investigate the existence of correlations between the performance of children on auditory temporal tests (Frequency Pattern and Gaps in Noise--GIN) and IQ, attention, memory, and age measurements. Fifteen typically developing children between the ages of 7 and 12 years with normal hearing participated in the study. Auditory temporal processing tests (GIN and Frequency Pattern), a memory test (Digit Span), attention tests (auditory and visual modality), and an intelligence test (RAVEN test of Progressive Matrices) were applied. A significant positive correlation, considered good, was found between the Frequency Pattern test and age (p<0.01, 75.6%). There were no significant correlations between the GIN test and the variables tested. Auditory temporal skills seem to be influenced by different factors: while performance in the temporal ordering skill seems to be influenced by maturational processes, performance in temporal resolution was not influenced by any of the aspects investigated.

  12. Comparison of Social Interaction between Cochlear-Implanted Children with Normal Intelligence Undergoing Auditory Verbal Therapy and Normal-Hearing Children: A Pilot Study.

    Science.gov (United States)

    Monshizadeh, Leila; Vameghi, Roshanak; Sajedi, Firoozeh; Yadegari, Fariba; Hashemi, Seyed Basir; Kirchem, Petra; Kasbi, Fatemeh

    2018-04-01

    A cochlear implant is a device that helps hearing-impaired children by transmitting sound signals to the brain and helping them improve their speech, language, and social interaction. Although various studies have investigated the different aspects of speech perception and language acquisition in cochlear-implanted children, little is known about their social skills, particularly Persian-speaking cochlear-implanted children. Considering the growing number of cochlear implants being performed in Iran and the increasing importance of developing near-normal social skills as one of the ultimate goals of cochlear implantation, this study was performed to compare the social interaction between Iranian cochlear-implanted children who have undergone rehabilitation (auditory verbal therapy) after surgery and normal-hearing children. This descriptive-analytical study compared the social interaction level of 30 children with normal hearing and 30 with cochlear implants who were conveniently selected. The Raven test was administered to both groups to ensure a normal intelligence quotient. The social interaction status of both groups was evaluated using the Vineland Adaptive Behavior Scale, and statistical analysis was performed using the Statistical Package for Social Sciences (SPSS) version 21. After controlling for age as a covariate, no significant difference was observed between the social interaction scores of the two groups (p > 0.05). In addition, social interaction had no correlation with sex in either group. Cochlear implantation followed by auditory verbal rehabilitation helps children with sensorineural hearing loss to have normal social interactions, regardless of their sex.

  13. Driving-Simulator-Based Test on the Effectiveness of Auditory Red-Light Running Vehicle Warning System Based on Time-To-Collision Sensor

    Directory of Open Access Journals (Sweden)

    Xuedong Yan

    2014-02-01

    Full Text Available The collision avoidance warning system is an emerging technology designed to assist drivers in avoiding red-light running (RLR) collisions at intersections. The aim of this paper is to evaluate the effect of auditory warning information on collision avoidance behaviors in RLR pre-crash scenarios and further to examine the causal relationships among the relevant factors. A driving-simulator-based experiment was designed and conducted with 50 participants. The data from the experiments were analyzed by ANOVA and structural equation modeling (SEM). The collision-avoidance-related variables were measured in terms of brake reaction time (BRT), maximum deceleration, and lane deviation in this study. It was found that the collision avoidance warning system can result in lower collision rates compared to the without-warning condition and lead to shorter reaction times, larger maximum deceleration, and less lane deviation. Furthermore, the SEM analysis illustrates that the auditory warning information in fact has both direct and indirect effects on the occurrence of collisions, and the indirect effect plays a more important role in collision avoidance than the direct effect. Essentially, the auditory warning information can assist drivers in detecting RLR vehicles in a timely manner, thus providing drivers more adequate time and space to decelerate to avoid collisions with the conflicting vehicles.
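The direct/indirect decomposition reported by the SEM analysis can be illustrated with a simple product-of-coefficients mediation on synthetic data. The variables and effect sizes below (warning → brake reaction time → collision risk) are hypothetical numbers chosen for the example, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
warning = rng.integers(0, 2, n).astype(float)        # 0 = no warning, 1 = auditory warning
# mediator: brake reaction time shortens with a warning (hypothetical effect)
brt = 1.2 - 0.3 * warning + rng.normal(0, 0.1, n)
# outcome: collision risk rises with BRT, falls slightly with the warning directly
risk = 0.1 + 0.5 * brt - 0.05 * warning + rng.normal(0, 0.05, n)

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(warning[:, None], brt)[1]                             # warning -> BRT
b, c_direct = ols(np.column_stack([brt, warning]), risk)[1:]  # BRT -> risk, direct path
indirect = a * b                                              # mediated (indirect) effect
print("direct effect:", c_direct, "indirect effect:", indirect)
```

With these hypothetical coefficients the indirect path (about -0.15) dominates the direct path (about -0.05), mirroring the qualitative pattern the abstract describes.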

  14. Energy efficient drives and control engineering. Intelligent machine and plant concepts for manufacturing; Energieeffiziente Antriebs- und Steuerungstechnik. Intelligente Maschinen- und Anlagenkonzepte fuer die Fertigung

    Energy Technology Data Exchange (ETDEWEB)

    Fahrbach, Christian; Frank, Klaus; Haack, Steffen; Schemm, Eberhardt; Wittschen, Wiebke

    2010-07-01

    The book discusses the potential of intelligent and energy-efficient drive and control concepts. It shows that energy-efficient components - pumps, motors, or speed-controlled pump drives - may result in a double-digit reduction in energy consumption. The effect is even more pronounced when the system is optimized as a whole. If energy-efficient components are combined so that a direct energy flow results, energy is converted on demand, i.e., the right amount at the right time. The book also discusses how these design options can be applied in the various phases of the machine life cycle.

  15. Distracted driving in elderly and middle-aged drivers.

    Science.gov (United States)

    Thompson, Kelsey R; Johnson, Amy M; Emerson, Jamie L; Dawson, Jeffrey D; Boer, Erwin R; Rizzo, Matthew

    2012-03-01

    Automobile driving is a safety-critical real-world example of multitasking. A variety of roadway and in-vehicle distracter tasks create information processing loads that compete for the neural resources needed to drive safely. Drivers with mind and brain aging may be particularly susceptible to distraction due to waning cognitive resources and control over attention. This study examined distracted driving performance in an instrumented vehicle (IV) in 86 elderly (mean=72.5 years, SD=5.0 years) and 51 middle-aged drivers (mean=53.7 years, SD=9.3 years) under a concurrent auditory-verbal processing load created by the Paced Auditory Serial Addition Task (PASAT). Compared to baseline (no-task) driving performance, distraction was associated with reduced steering control in both groups, with middle-aged drivers showing a greater increase in steering variability. The elderly drove slower and showed decreased speed variability during distraction compared to middle-aged drivers. They also tended to "freeze up", spending significantly more time holding the gas pedal steady, another tactic that may mitigate time-pressured integration and control of information, thereby freeing mental resources to maintain situation awareness. While 39% of elderly and 43% of middle-aged drivers committed significantly more driving safety errors during distraction, 28% and 18%, respectively, actually improved, compatible with allocation of attention resources to safety-critical tasks under a cognitive load. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Validation of auditory detection response task method for assessing the attentional effects of cognitive load.

    Science.gov (United States)

    Stojmenova, Kristina; Sodnik, Jaka

    2018-07-04

    There are 3 standardized versions of the Detection Response Task (DRT), 2 using visual stimuli (remote DRT and head-mounted DRT) and one using tactile stimuli. In this article, we present a study that proposes and validates a type of auditory signal to be used as a DRT stimulus and evaluates the proposed auditory version of this method by comparing it with the standardized visual and tactile versions. This was a within-subject design study performed in a driving simulator with 24 participants. Each participant performed eight 2-min-long driving sessions in which they had to perform 3 different tasks: driving, responding to DRT stimuli, and performing a cognitive task (n-back task). Presence of additional cognitive load and type of DRT stimuli were defined as independent variables. DRT response times and hit rates, n-back task performance, and pupil size were observed as dependent variables. Significant changes in pupil size for trials with a cognitive task compared to trials without showed that cognitive load was induced properly. Each DRT version showed a significant increase in response times and a decrease in hit rates for trials with a secondary cognitive task compared to trials without. The auditory and tactile versions yielded results similar to each other and significantly better than the visual version in terms of response-time and hit-rate differences. There were no significant differences in n-back performance rate between the trials without DRT stimuli and trials with them, or among the trials with different DRT stimulus modalities. The results from this study show that the auditory DRT version, using the signal implementation suggested in this article, is sensitive to the effects of cognitive load on driver attention and is significantly better than the remote visual and tactile versions for auditory-vocal cognitive (n-back) secondary tasks.

  17. Artificial Intelligence-based control for torque ripple minimization in switched reluctance motor drives - doi: 10.4025/actascitechnol.v36i1.18097

    Directory of Open Access Journals (Sweden)

    Kalaivani Lakshmanan

    2014-01-01

    Full Text Available In this paper, various intelligent controllers, such as a Fuzzy Logic Controller (FLC) and Adaptive Neuro-Fuzzy Inference System (ANFIS)-based current-compensating techniques, are employed for minimizing the torque ripple in a switched reluctance motor. The FLC and ANFIS controllers are tuned using the MATLAB Toolbox. For the purpose of comparison, the performance of a conventional Proportional-Integral (PI) controller is also considered. Statistical parameters such as the minimum, maximum, mean, and standard deviation of total torque, the torque ripple coefficient, and the settling time of the speed response are reported for the various controllers. From the simulation results, it is found that both the FLC and ANFIS controllers give better performance than the PI controller. Among the intelligent controllers, ANFIS outperforms the FLC due to its good learning and generalization capabilities, thereby improving the dynamic performance of SRM drives.
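The summary statistics reported for the controllers can be computed from any torque waveform. The sketch below uses the common (T_max - T_min) / T_mean definition of the torque ripple coefficient, which may differ from the paper's exact definition, on two synthetic waveforms standing in for PI-like (large ripple) and ANFIS-like (small ripple) responses.

```python
import numpy as np

def torque_stats(torque):
    """Min, max, mean, standard deviation, and ripple coefficient
    ((T_max - T_min) / T_mean, a common definition) of a torque waveform."""
    t_min, t_max = torque.min(), torque.max()
    return {"min": t_min, "max": t_max,
            "mean": torque.mean(), "std": torque.std(),
            "ripple_coeff": (t_max - t_min) / torque.mean()}

# synthetic steady-state torque: 2 N·m average with commutation-frequency ripple
t = np.linspace(0, 0.1, 2000)
pi_torque = 2.0 + 0.6 * np.sin(2 * np.pi * 300 * t)      # larger ripple (PI-like)
anfis_torque = 2.0 + 0.15 * np.sin(2 * np.pi * 300 * t)  # smaller ripple (ANFIS-like)
stats_pi = torque_stats(pi_torque)
stats_anfis = torque_stats(anfis_torque)
print("PI ripple coeff:", stats_pi["ripple_coeff"])
print("ANFIS ripple coeff:", stats_anfis["ripple_coeff"])
```

A smaller ripple coefficient at the same mean torque is precisely the improvement the intelligent controllers are credited with.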

  18. Cloud Incubator Car: A Reliable Platform for Autonomous Driving

    Directory of Open Access Journals (Sweden)

    Raúl Borraz

    2018-02-01

    Full Text Available It appears clear that the future of road transport is going through enormous changes (intelligent transport systems), the main one being the Intelligent Vehicle (IV). Automated driving requires a huge research effort in multiple technological areas: sensing, control, and driving algorithms. We present a comprehensible and reliable platform for autonomous driving technology development as well as for testing purposes, developed in the Intelligent Vehicles Lab at the Technical University of Cartagena. We propose an open and modular architecture capable of easily integrating a wide variety of sensors and actuators which can be used for testing algorithms and control strategies. As a proof of concept, this paper presents a reliable and complete navigation application for a commercial vehicle (Renault Twizy). It comprises a complete perception system (2D LIDAR, 3D HD LIDAR, ToF cameras, a Real-Time Kinematic (RTK) unit, and an Inertial Measurement Unit (IMU)), automation of the driving elements of the vehicle (throttle, steering, brakes, and gearbox), a control system, and a decision-making system. Furthermore, two flexible and reliable algorithms are presented for carrying out global and local route planning on board autonomous vehicles.

  19. Drive-By-Wire Technology

    Science.gov (United States)

    2001-05-29

    Symposium: Intelligent Systems for the Objective Fleet. Topics include: transmission controls; steering (both on-transmission and under-carriage); braking (service and parking); transmission select; throttle; and other electromechanical opportunities such as turret drives (elevation, traverse) and automatic propellant handling systems.

  20. Modelling speech intelligibility in adverse conditions

    DEFF Research Database (Denmark)

    Jørgensen, Søren; Dau, Torsten

    2013-01-01

    Jørgensen and Dau (J Acoust Soc Am 130:1475-1487, 2011) proposed the speech-based envelope power spectrum model (sEPSM) in an attempt to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII) in conditions with nonlinearly processed speech...... subjected to phase jitter, a condition in which the spectral structure of the speech signal is strongly affected, while the broadband temporal envelope is kept largely intact. In contrast, the effects of this distortion can be predicted successfully by the spectro-temporal modulation...... suggest that the SNRenv might reflect a powerful decision metric, while some explicit across-frequency analysis seems crucial in some conditions. How such across-frequency analysis is "realized" in the auditory system remains unresolved....
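
    The SNRenv decision metric mentioned above is, in Jørgensen and Dau's formulation, an envelope-power signal-to-noise ratio computed per modulation band from the envelope power of the noisy speech and of the noise alone. The sketch below shows that relation with illustrative values and an assumed small floor to keep the logarithm defined; it is a simplification of the full model, not a reimplementation.

    ```python
    import math

    # Envelope-power SNR: SNRenv = 10*log10((P_mix - P_noise) / P_noise),
    # where P_mix and P_noise are envelope powers of noisy speech and of
    # the noise alone in one modulation band. The floor value is an
    # assumption for this sketch.
    def snr_env_db(p_env_mix, p_env_noise, floor=1e-4):
        excess = max(p_env_mix - p_env_noise, floor * p_env_noise)
        return 10 * math.log10(excess / p_env_noise)

    print(snr_env_db(2.0, 1.0))   # 0.0 dB: speech envelope power equals noise
    ```

    Higher SNRenv values map onto higher predicted intelligibility in the model; the across-frequency (and across-modulation-band) combination of these per-band values is the part the abstract flags as unresolved.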

  1. Raising agents: sources of human social intelligence

    NARCIS (Netherlands)

    Hofstede, G.J.

    2014-01-01

    This paper urges that if we wish to give social intelligence to our agents, it pays to look at how we acquired our social intelligence ourselves. Our drives and motives are innate and deeply social. Next, as children we are socialized to acquire norms and values. This motivational and group-based

  2. Auditory sensory memory and language abilities in former late talkers: a mismatch negativity study.

    Science.gov (United States)

    Grossheinrich, Nicola; Kademann, Stefanie; Bruder, Jennifer; Bartling, Juergen; Von Suchodoletz, Waldemar

    2010-09-01

    The present study investigated whether (a) a reduced duration of auditory sensory memory is found in late talking children and (b) whether deficits of sensory memory are linked to persistent difficulties in language acquisition. Former late talkers and children without delayed language development were examined at the age of 4 years and 7 months using mismatch negativity (MMN) with interstimulus intervals (ISIs) of 500 ms and 2000 ms. Additionally, short-term memory, language skills, and nonverbal intelligence were assessed. MMN mean amplitude was reduced for the ISI of 2000 ms in former late talking children both with and without persistent language deficits. In summary, our findings suggest that late talkers are characterized by a reduced duration of auditory sensory memory. However, deficits in auditory sensory memory are not sufficient for persistent language difficulties and may be compensated for by some children.

  3. Detecting and Quantifying Mind Wandering during Simulated Driving

    Directory of Open Access Journals (Sweden)

    Carryl L. Baldwin

    2017-08-01

    Full Text Available Mind wandering is a pervasive threat to transportation safety, potentially accounting for a substantial number of crashes and fatalities. In the current study, mind wandering was induced through completion of the same task for 5 days, consisting of a 20-min monotonous freeway-driving scenario, a cognitive depletion task, and a repetition of the 20-min driving scenario driven in the reverse direction. Participants were periodically probed with auditory tones to self-report whether they were mind wandering or focused on the driving task. Self-reported mind wandering frequency was high and did not change statistically over days of participation. For measures of driving performance, participant-labeled periods of mind wandering were associated with reduced speed and reduced lane variability, in comparison to periods of on-task performance. For measures of electrophysiology, periods of mind wandering were associated with increased power in the alpha band of the electroencephalogram (EEG), as well as a reduction in the magnitude of the P3a component of the event-related potential (ERP) in response to the auditory probe. Results support that mind wandering has an impact on driving performance and that the associated change in the driver's attentional state is detectable in the underlying brain physiology. Further, results suggest that detecting the internal cognitive state of humans is possible in a continuous task such as automobile driving. Identifying periods of likely mind wandering could serve as a useful research tool for assessment of driver attention, and could potentially lead to future in-vehicle safety countermeasures.
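
    The alpha-band (roughly 8-12 Hz) power measure used above can be illustrated with a stdlib-only sketch: a naive DFT of a synthetic 10 Hz sine standing in for an EEG trace. A real pipeline would use Welch's method or similar on measured EEG; everything here (sampling rate, signal, band edges) is an assumption for demonstration.

    ```python
    import cmath, math

    FS = 100                      # assumed sampling rate in Hz
    N = 200                       # two seconds of samples
    # Stand-in "EEG": a pure 10 Hz sine, squarely inside the alpha band.
    signal = [math.sin(2 * math.pi * 10 * n / FS) for n in range(N)]

    def power_spectrum(x):
        """Naive DFT power spectrum over the positive-frequency bins."""
        n = len(x)
        return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) ** 2 for k in range(n // 2)]

    spec = power_spectrum(signal)
    hz_per_bin = FS / N           # 0.5 Hz frequency resolution
    alpha = sum(p for k, p in enumerate(spec) if 8 <= k * hz_per_bin <= 12)
    total = sum(spec[1:])         # exclude the DC bin
    print(f"Alpha fraction of total power: {alpha / total:.2f}")
    ```

    For this synthetic input essentially all power lands in the alpha band; in the study, an increase in this band's relative power marked periods of mind wandering.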

  4. Extraordinary intelligence and the care of infants

    Science.gov (United States)

    Piantadosi, Steven T.; Kidd, Celeste

    2016-01-01

    We present evidence that pressures for early childcare may have been one of the driving factors of human evolution. We show through an evolutionary model that runaway selection for high intelligence may occur when (i) altricial neonates require intelligent parents, (ii) intelligent parents must have large brains, and (iii) large brains necessitate having even more altricial offspring. We test a prediction of this account by showing across primate genera that the helplessness of infants is a particularly strong predictor of the adults’ intelligence. We discuss related implications, including this account’s ability to explain why human-level intelligence evolved specifically in mammals. This theory complements prior hypotheses that link human intelligence to social reasoning and reproductive pressures and explains how human intelligence may have become so distinctive compared with our closest evolutionary relatives. PMID:27217560

  5. The function of BDNF in the adult auditory system.

    Science.gov (United States)

    Singer, Wibke; Panford-Walsh, Rama; Knipper, Marlies

    2014-01-01

    The inner ear of vertebrates is specialized to perceive sound, gravity and movements. Each of the specialized sensory organs within the cochlea (sound) and vestibular system (gravity, head movements) transmits information to specific areas of the brain. During development, brain-derived neurotrophic factor (BDNF) orchestrates the survival and outgrowth of afferent fibers connecting the vestibular organ and those regions in the cochlea that map information for low frequency sound to central auditory nuclei and higher auditory centers. The role of BDNF in the mature inner ear is less understood. This is mainly due to the fact that constitutive BDNF mutant mice are postnatally lethal. Only in the last few years has the improved technology of performing conditional, cell-specific deletion of BDNF in vivo allowed the study of the function of BDNF in the mature developed organ. This review provides an overview of the current knowledge of the expression pattern and function of BDNF in the peripheral and central auditory system from just prior to the first auditory experience onwards. A special focus will be put on the differential mechanisms by which BDNF drives refinement of auditory circuitries during the onset of sensory experience and in the adult brain. This article is part of the Special Issue entitled 'BDNF Regulation of Synaptic Structure, Function, and Plasticity'. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. The influence of drinking, texting, and eating on simulated driving performance.

    Science.gov (United States)

    Irwin, Christopher; Monement, Sophie; Desbrow, Ben

    2015-01-01

    Driving is a complex task and distractions such as using a mobile phone for the purpose of text messaging are known to have a significant impact on driving. Eating and drinking are common forms of distraction that have received less attention in relation to their impact on driving. The aim of this study was to further explore and compare the effects of a variety of distraction tasks (i.e., text messaging, eating, drinking) on simulated driving. Twenty-eight healthy individuals (13 female) participated in a crossover design study involving 3 experimental trials (separated by ≥24 h). In each trial, participants completed a baseline driving task (no distraction) before completing a second driving task involving one of 3 different distraction tasks (drinking 400 mL water, drinking 400 mL water and eating a 6-inch Subway sandwich, drinking 400 mL water and composing 3 text messages). Primary outcome measures of driving consisted of standard deviation of lateral position (SDLP) and reaction time to auditory and visual critical events. Subjective ratings of difficulty in performing the driving tasks were also collected at the end of the study to determine perceptions of distraction difficulty on driving. Driving tasks involving texting and eating were associated with significant impairment in driving performance measures for SDLP compared to baseline driving (46.0 ± 0.08 vs. 41.3 ± 0.06 cm and 44.8 ± 0.10 vs. 41.6 ± 0.07 cm, respectively), number of lane departures compared to baseline driving (10.9 ± 7.8 vs. 7.6 ± 7.1 and 9.4 ± 7.5 vs. 7.1 ± 7.0, respectively), and auditory reaction time compared to baseline driving (922 ± 95 vs. 889 ± 104 ms and 933 ± 101 vs. 901 ± 103 ms, respectively). No difference in SDLP (42.7 ± 0.08 vs. 42.5 ± 0.07 cm), number of lane departures (7.6 ± 7.7 vs. 7.0 ± 6.8), or auditory reaction time (891 ± 98 and 885 ± 89 ms) was observed in the drive involving the drink-only condition compared to the corresponding baseline drive.
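
    The SDLP outcome measure in this record is simply the sample standard deviation of the vehicle's lateral lane position over a drive. A minimal sketch, with invented position samples:

    ```python
    from statistics import stdev

    # SDLP: sample standard deviation of lateral position. The samples
    # below (metres from lane centre) are illustrative, not study data.
    def sdlp(lateral_positions_m):
        return stdev(lateral_positions_m)

    positions = [0.0, 0.2, 0.4]     # three lateral-position samples
    print(f"SDLP = {sdlp(positions) * 100:.1f} cm")  # SDLP = 20.0 cm
    ```

    Larger SDLP values indicate more weaving within the lane, which is why texting and eating conditions show elevated SDLP relative to baseline.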

  7. Permanent magnet brushless DC motor drives and controls

    CERN Document Server

    Xia, Chang-liang

    2012-01-01

    An advanced introduction to the simulation and hardware implementation of BLDC motor drives A thorough reference on the simulation and hardware implementation of BLDC motor drives, this book covers recent advances in the control of BLDC motor drives, including intelligent control, sensorless control, torque ripple reduction and hardware implementation. With the guidance of the expert author team, readers will understand the principle, modelling, design and control of BLDC motor drives. The advanced control methods and new achievements of BLDC motor drives, of interest to more a

  8. Adult plasticity in the subcortical auditory pathway of the maternal mouse.

    Directory of Open Access Journals (Sweden)

    Jason A Miranda

    Full Text Available Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system - motherhood - is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggest plasticity in the inferior colliculus. Hormone manipulations revealed that these cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered.

  9. Adult plasticity in the subcortical auditory pathway of the maternal mouse.

    Science.gov (United States)

    Miranda, Jason A; Shepard, Kathryn N; McClintock, Shannon K; Liu, Robert C

    2014-01-01

    Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system - motherhood - is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggest plasticity in the inferior colliculus. Hormone manipulations revealed that these cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered.

  10. Development of a test for recording both visual and auditory reaction times, potentially useful for future studies in patients on opioids therapy

    Directory of Open Access Journals (Sweden)

    Miceli L

    2015-02-01

    Full Text Available Luca Miceli,1 Rym Bednarova,2 Alessandro Rizzardo,1 Valentina Samogin,1 Giorgio Della Rocca1 1Department of Anesthesia and Intensive Care Medicine, University of Udine, 2Department of Pain Medicine and Palliative Care, Hospital of Latisana, Latisana, Udine, Italy Objective: Italian Road Law limits driving while undergoing treatment with certain kinds of medication. Here, we report the results of a test, run as a smartphone application (app), assessing auditory and visual reflexes in a sample of 300 drivers. The scope of the test is to provide both the police force and medication-taking drivers with a tool that can evaluate the individual's capacity to drive safely. Methods: The test is run as an app for the Apple iOS and Android mobile operating systems and allows four different reaction times to be assessed: simple visual and auditory reaction times and complex visual and auditory reaction times. Reference deciles were created from the test results obtained from a sample of 300 Italian subjects. Results lying within the first three deciles were considered incompatible with safe driving capabilities. Results: Performance is both age-related (r>0.5) and sex-related (female reaction times were significantly slower than those recorded for male subjects, P<0.05). Only 21% of the subjects were able to perform all four tests correctly. Conclusion: We developed and fine-tuned a test called Safedrive that measures visual and auditory reaction times through a smartphone mobile device; the scope of the test is two-fold: to provide a clinical tool for the assessment of the driving capacity of individuals taking pain relief medication; to promote the sense of social responsibility in drivers who are on medication and provide these individuals with a means of testing their own capacity to drive safely. Keywords: visual reaction time, auditory reaction time, opioids, Safedrive
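
    The decile-based scoring rule described above can be sketched with the standard library: build decile cut-offs from a reference sample and flag any result falling within the first three deciles. The reference scores and the cut-off logic below are assumptions for illustration, not the app's actual data or thresholds.

    ```python
    from statistics import quantiles

    # Hypothetical reference sample of 100 test scores.
    reference_scores = list(range(300, 700, 4))
    cutoffs = quantiles(reference_scores, n=10)   # 9 decile boundaries

    def in_first_three_deciles(score):
        """True if the score falls below the third decile boundary,
        i.e. within the range the abstract deems incompatible with
        safe driving."""
        return score < cutoffs[2]

    print(in_first_three_deciles(400), in_first_three_deciles(500))  # True False
    ```

    `statistics.quantiles(..., n=10)` returns the nine interior cut points, so `cutoffs[2]` is the boundary of the third decile.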

  11. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  12. Depending on Data: Business Intelligence Systems Drive Reform

    Science.gov (United States)

    Halligan, Tom

    2010-01-01

    As more community colleges focus on using data to improve educational outcomes, many administrators are considering business intelligence applications that promise a path toward more informed decisions. Getting there, leaders say, requires more than installing some out-of-the-box solution; it requires changing the culture and finding skilled…

  13. Development of a test for recording both visual and auditory reaction times, potentially useful for future studies in patients on opioids therapy.

    Science.gov (United States)

    Miceli, Luca; Bednarova, Rym; Rizzardo, Alessandro; Samogin, Valentina; Della Rocca, Giorgio

    2015-01-01

    Italian Road Law limits driving while undergoing treatment with certain kinds of medication. Here, we report the results of a test, run as a smartphone application (app), assessing auditory and visual reflexes in a sample of 300 drivers. The scope of the test is to provide both the police force and medication-taking drivers with a tool that can evaluate the individual's capacity to drive safely. The test is run as an app for the Apple iOS and Android mobile operating systems and allows four different reaction times to be assessed: simple visual and auditory reaction times and complex visual and auditory reaction times. Reference deciles were created from the test results obtained from a sample of 300 Italian subjects. Results lying within the first three deciles were considered incompatible with safe driving capabilities. Performance is both age-related (r>0.5) and sex-related (female reaction times were significantly slower than those recorded for male subjects, P<0.05). The scope of the test is two-fold: to provide a clinical tool for the assessment of the driving capacity of individuals taking pain relief medication, and to promote the sense of social responsibility in drivers who are on medication by providing them with a means of testing their own capacity to drive safely.

  14. A Novel Harmonic Elimination Approach in Three-Phase Multi-Motor Drives

    DEFF Research Database (Denmark)

    Davari, Pooya; Yang, Yongheng; Zare, Firuz

    2015-01-01

    and may cause unnecessary losses in power system transformers. Both degradations are apt to occur in motor drive applications. As a consequence, it calls for advanced and intelligent control strategies for the power electronics based drive systems like adjustable speed drives in industry. At present, many...

  15. [Short-term sentence memory in children with auditory processing disorders].

    Science.gov (United States)

    Kiese-Himmel, C

    2010-05-01

    To compare the sentence repetition performance of different groups of children with Auditory Processing Disorders (APD) and to examine the relationship between age or, respectively, nonverbal intelligence and sentence recall. Nonverbal intelligence was measured with the COLOURED MATRICES; in addition, the children completed a standardized test of SENTENCE REPETITION (SR), which requires spoken sentences to be repeated (a subtest of the HEIDELBERGER SPRACHENTWICKLUNGSTEST). Three clinical groups (n=49 with monosymptomatic APD; n=29 with APD plus developmental language impairment; n=14 with APD plus developmental dyslexia) and two control groups (n=13 typically developing peers without any clinical developmental disorder; n=10 children with slightly reduced nonverbal intelligence) were examined. The analysis showed a significant group effect (p=0.0007). The best performance was achieved by the normal controls (T-score 52.9; SD 6.4; min 42; max 59), followed by children with monosymptomatic APD (43.2; SD 9.2), children with the comorbid conditions APD plus developmental dyslexia (43.1; SD 10.3), and APD plus developmental language impairment (39.4; SD 9.4). The clinical control group presented the lowest performance on average (38.6; SD 9.6). Accordingly, language-impaired children and children with slight reductions in intelligence were poorly able to use their grammatical knowledge for SR. A statistically significant improvement in SR with increasing age was verified, with the exception of children belonging to the small group with lowered intelligence; this group comprised the oldest children. Nonverbal intelligence correlated positively with SR only in children with below-average intelligence (0.62; p=0.054). The absence of APD and SLI, as well as the presence of normal intelligence, facilitated the use of phonological information for SR.

  16. Driver compliance to take-over requests with different auditory outputs in conditional automation.

    Science.gov (United States)

    Forster, Yannick; Naujoks, Frederik; Neukum, Alexandra; Huestegge, Lynn

    2017-12-01

    Conditionally automated driving (CAD) systems are expected to improve traffic safety. Whenever the CAD system exceeds its limit of operation, designers of the system need to ensure a safe and sufficiently timely transition from automated to manual mode. An existing visual Human-Machine Interface (HMI) was supplemented by different auditory outputs. The present work compares the effects of different auditory outputs in the form of (1) a generic warning tone and (2) additional semantic speech output on driver behavior for the announcement of an upcoming take-over request (TOR). We expected the information carried by the speech output to lead to faster reactions and better subjective evaluations by the drivers compared to the generic auditory output. To test this assumption, N=17 drivers completed two simulator drives, once with a generic warning tone ('Generic') and once with additional speech output ('Speech+generic'), while they were working on a non-driving related task (NDRT; i.e., reading a magazine). Each drive incorporated one transition from automated to manual mode when yellow secondary lanes emerged. Different reaction time measures relevant for the take-over process were assessed. Furthermore, drivers evaluated the complete HMI regarding usefulness, ease of use, and perceived visual workload just after experiencing the take-over, and gave comparative ratings on usability and acceptance at the end of the experiment. Results revealed that reaction times reflecting information processing time (i.e., hands on the steering wheel, termination of the NDRT) were shorter for 'Speech+generic' compared to 'Generic', while the reaction time reflecting allocation of attention (i.e., first glance ahead) did not show this difference. Subjective ratings were in favor of the system with additional speech output. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  18. Engineering an Affordable Self-Driving Car

    KAUST Repository

    Budisteanu, Alexandru Ionut

    2018-01-17

    "More than a million people die in car accidents each year, and most of those accidents are the result of human error." Alexandru Budisteanu is 23 years old and owns a group of startups including Autonomix, an Artificial Intelligence software company for affordable self-driving cars, and he has designed a low-cost self-driving car. The car's roof has cameras and low-resolution 3D LiDAR equipment to detect traffic lanes, other cars, curbs, and obstacles such as people crossing by. To process this dizzying amount of data, Alexandru employed Artificial Intelligence algorithms to extract information from the visual data and plot a safe route for the car. He then built a manufacturing facility in his garage in Romania to assemble affordable VisionBot Pick and Place robots that are used to produce electronics. During this lecture, Alexandru will talk about this autonomous self-driving car prototype, for which he received the grand prize of the Intel International Science and Engineering Fair, and was nominated by TIME magazine as one of the world's most influential teens of 2013.

  19. Auditory Reserve and the Legacy of Auditory Experience

    Directory of Open Access Journals (Sweden)

    Erika Skoe

    2014-11-01

    Full Text Available Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function.

  20. Effects of in-vehicle warning information displays with or without spatial compatibility on driving behaviors and response performance.

    Science.gov (United States)

    Liu, Yung-Ching; Jhuang, Jing-Wun

    2012-07-01

    A driving simulator study was conducted to evaluate the effects of five in-vehicle warning information displays upon drivers' emergent response and decision performance. These displays include visual display, auditory displays with and without spatial compatibility, hybrid displays in both visual and auditory format with and without spatial compatibility. Thirty volunteer drivers were recruited to perform various tasks that involved driving, stimulus-response, divided attention and stress rating. Results show that for displays of single-modality, drivers benefited more when coping with visual display of warning information than auditory display with or without spatial compatibility. However, auditory display with spatial compatibility significantly improved drivers' performance in reacting to the divided attention task and making accurate S-R task decision. Drivers' best performance results were obtained for hybrid display with spatial compatibility. Hybrid displays enabled drivers to respond the fastest and achieve the best accuracy in both S-R and divided attention tasks. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  1. The influence of spectral and spatial characteristics of early reflections on speech intelligibility

    DEFF Research Database (Denmark)

    Arweiler, Iris; Buchholz, Jörg; Dau, Torsten

    The auditory system employs different strategies to facilitate speech intelligibility in complex listening conditions. One of them is the integration of early reflections (ER’s) with the direct sound (DS) to increase the effective speech level. So far the underlying mechanisms of ER processing have...... of listeners that speech intelligibility improved with added ER energy, but less than with added DS energy. An efficiency factor was introduced to quantify this effect. The difference in speech intelligibility could be mainly ascribed to the differences in the spectrum between the speech signals....... binaural). The direction-dependency could be explained by the spectral changes introduced by the pinna, head, and torso. The results will be important with regard to the influence of signal processing strategies in modern hearing aids on speech intelligibility, because they might alter the spectral...

  2. Useful field of view predicts driving in the presence of distracters.

    Science.gov (United States)

    Wood, Joanne M; Chaparro, Alex; Lacherez, Philippe; Hickson, Louise

    2012-04-01

    The Useful Field of View (UFOV) test has been shown to be highly effective in predicting crash risk among older adults. An important question which we examined in this study is whether this association is due to the ability of the UFOV to predict difficulties in attention-demanding driving situations that involve either visual or auditory distracters. Participants included 92 community-living adults (mean age 73.6 ± 5.4 years; range 65-88 years) who completed all three subtests of the UFOV involving assessment of visual processing speed (subtest 1), divided attention (subtest 2), and selective attention (subtest 3); driving safety risk was also classified using the UFOV scoring system. Driving performance was assessed separately on a closed-road circuit while driving under three conditions: no distracters, visual distracters, and auditory distracters. Driving outcome measures included road sign recognition, hazard detection, gap perception, time to complete the course, and performance on the distracter tasks. Those rated as safe on the UFOV (safety rating categories 1 and 2), as well as those responding faster than the recommended cut-off on the selective attention subtest (350 msec), performed significantly better in terms of overall driving performance and also experienced less interference from distracters. Of the three UFOV subtests, the selective attention subtest best predicted overall driving performance in the presence of distracters. Older adults who were rated as higher risk on the UFOV, particularly on the selective attention subtest, demonstrated poorest driving performance in the presence of distracters. This finding suggests that the selective attention subtest of the UFOV may be differentially more effective in predicting driving difficulties in situations of divided attention which are commonly associated with crashes.

  3. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Full Text Available Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory
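
    The frequency-tagging analysis described in this abstract can be sketched with a toy simulation. Everything below is an illustrative assumption rather than the authors' MEG pipeline: the helper name `tagged_power`, the 37 and 43 Hz tags, and the noise level are all invented. The sketch only shows the core idea, that spectral power at each tag frequency separates an attended stream from an ignored one.

    ```python
    import numpy as np

    def tagged_power(response, fs, tag_hz):
        """Spectral power of a simulated EEG/MEG-like response at one tag frequency."""
        n = len(response)
        windowed = response * np.hanning(n)   # taper to reduce spectral leakage
        amp = np.abs(np.fft.rfft(windowed)) / n
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        return amp[np.argmin(np.abs(freqs - tag_hz))] ** 2

    fs = 500.0
    t = np.arange(0, 4.0, 1.0 / fs)           # one 4 s epoch
    rng = np.random.default_rng(0)

    # Two competing streams tagged at different modulation rates; attention is
    # modeled simply as a larger phase-locked response to the attended tag.
    attended_tag, ignored_tag = 37.0, 43.0
    response = (1.0 * np.sin(2 * np.pi * attended_tag * t)
                + 0.3 * np.sin(2 * np.pi * ignored_tag * t)
                + 0.5 * rng.standard_normal(len(t)))

    p_att = tagged_power(response, fs, attended_tag)
    p_ign = tagged_power(response, fs, ignored_tag)
    ```

    Because each tag occupies its own narrow frequency bin, the two streams' responses can be read out independently from a single recording, which is what makes tagging attractive for attention studies.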

  4. Effects of Instantaneous Multiband Dynamic Compression on Speech Intelligibility

    Directory of Open Access Journals (Sweden)

    Herzke Tobias

    2005-01-01

    Full Text Available The recruitment phenomenon, that is, the reduced dynamic range between threshold and uncomfortable level, is attributed to the loss of instantaneous dynamic compression on the basilar membrane. Despite this, hearing aids commonly use slow-acting dynamic compression for its compensation, because this was found to be the most successful strategy in terms of speech quality and intelligibility rehabilitation. Former attempts to use fast-acting compression gave ambiguous results, raising the question as to whether auditory-based recruitment compensation by instantaneous compression is in principle applicable in hearing aids. This study thus investigates instantaneous multiband dynamic compression based on an auditory filterbank. Instantaneous envelope compression is performed in each frequency band of a gammatone filterbank, which provides a combination of time and frequency resolution comparable to the normal healthy cochlea. The gain characteristics used for dynamic compression are deduced from categorical loudness scaling. In speech intelligibility tests, the instantaneous dynamic compression scheme was compared against a linear amplification scheme, which used the same filterbank for frequency analysis, but employed constant gain factors that restored the sound level for medium perceived loudness in each frequency band. In subjective comparisons, five of nine subjects preferred the linear amplification scheme and would not accept the instantaneous dynamic compression in hearing aids. Four of nine subjects did not perceive any quality differences. A sentence intelligibility test in noise (Oldenburg sentence test) showed little to no negative effects of the instantaneous dynamic compression, compared to linear amplification. A word intelligibility test in quiet (one-syllable rhyme test) showed that the subjects benefit from the larger amplification at low levels provided by instantaneous dynamic compression.
Further analysis showed that the increase
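
    The per-band instantaneous compression described in this abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: brick-wall FFT bands stand in for the gammatone filterbank, a fixed power-law exponent stands in for gains deduced from categorical loudness scaling, and all function names are hypothetical.

    ```python
    import numpy as np

    def analytic_envelope(x):
        """Hilbert envelope computed via an FFT-based analytic signal (numpy only)."""
        n = len(x)
        spec = np.fft.fft(x)
        h = np.zeros(n)
        h[0] = 1.0
        if n % 2 == 0:
            h[n // 2] = 1.0
            h[1:n // 2] = 2.0
        else:
            h[1:(n + 1) // 2] = 2.0
        return np.abs(np.fft.ifft(spec * h))

    def instantaneous_compression(signal, fs, band_edges, exponent=0.3):
        """Instantaneous power-law envelope compression applied per frequency band.

        band_edges is a list of (lo, hi) pairs in Hz; exponent < 1 compresses the
        dynamic range sample by sample, with no attack/release time constants.
        """
        n = len(signal)
        freqs = np.abs(np.fft.fftfreq(n, d=1.0 / fs))
        spectrum = np.fft.fft(signal)
        out = np.zeros(n)
        for lo, hi in band_edges:
            # brick-wall band split (a real system would use gammatone filters)
            band = np.real(np.fft.ifft(spectrum * ((freqs >= lo) & (freqs < hi))))
            env = analytic_envelope(band) + 1e-12
            out += band * (env ** exponent / env)  # rescale envelope to env**exponent
        return out

    # Demo: a loud 500 Hz tone followed by the same tone 20 dB softer.
    fs = 8000
    t = np.arange(0, 0.25, 1.0 / fs)
    sig = np.concatenate([np.sin(2 * np.pi * 500 * t),
                          0.1 * np.sin(2 * np.pi * 500 * t)])
    out = instantaneous_compression(sig, fs, [(200.0, 1000.0)], exponent=0.3)

    def rms(x):
        return np.sqrt(np.mean(x ** 2))

    half = len(sig) // 2
    ratio_in = rms(sig[:half]) / rms(sig[half:])   # level contrast at the input
    ratio_out = rms(out[:half]) / rms(out[half:])  # smaller after compression
    ```

    The soft segment is amplified more than the loud one, which is exactly the recruitment-compensating behavior the study evaluates; whether listeners accept the resulting sound quality is the empirical question the abstract reports on.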

  5. Prediction of speech intelligibility based on a correlation metric in the envelope power spectrum domain

    DEFF Research Database (Denmark)

    Relano-Iborra, Helia; May, Tobias; Zaar, Johannes

    A powerful tool to investigate speech perception is the use of speech intelligibility prediction models. Recently, a model was presented, termed correlation-based speech-based envelope power spectrum model (sEPSMcorr) [1], based on the auditory processing of the multi-resolution speech-based Envel...

  6. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Science.gov (United States)

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.

    2015-01-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. 
The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that

  7. Analysis of Wheel Hub Motor Drive Application in Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Sun Yuechao

    2017-01-01

    Full Text Available Based on a comparative analysis of the performance characteristics of centralized and distributed drive electric vehicles, we found that the wheel hub motor drive mode of distributed-drive electric vehicles has a compact structure, high utilization of interior vehicle space, a lower center of gravity, good driving stability, easy intelligent control, and many other advantages. It is therefore in line with the new requirements for the development of electric vehicle drive performance, and distributed drive will be the ultimate drive mode of electric vehicles in the future.

  8. The virtual driving instructor : Creating awareness in a multi-agent system

    NARCIS (Netherlands)

    Weevers, Ivo; Kuipers, Jorrit; Brugman, Arnd O.; Zwiers, Job; van Dijk, Elisabeth M.A.G.; Nijholt, Anton; Xiang, Y.; Chaib-draa, B.

    2003-01-01

    Driving simulators need an Intelligent Tutoring System (ITS). Simulators provide ways to conduct objective measurements on students’ driving behavior and opportunities for creating the best possible learning environment. The generated traffic situations can be influenced directly according to the

  9. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with the auditory skill areas most commonly addressed in…

  10. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks that were found superior for musicians. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant

  11. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study.

    Science.gov (United States)

    Gao, Patrick P; Zhang, Jevin W; Fan, Shu-Juan; Sanes, Dan H; Wu, Ed X

    2015-12-01

    The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex contains direct and indirect pathways (via brainstem cholinergic cells) to nuclei of the auditory midbrain, called the inferior colliculus (IC). While these projections modulate auditory processing throughout the IC, single neuron recordings have sampled only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure the sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either auditory or visual cortices. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones, most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and induced a much smaller effect on response selectivity. The results suggest that auditory cortical projections normally exert a large-scale and net suppressive influence on specific IC subnuclei, while visual cortical projections provide a facilitatory influence. Meanwhile, auditory cortical projections enhance the midbrain response selectivity to species-specific vocalizations. We also probed the role of the indirect cholinergic projections in the auditory system in the descending modulation process by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced the response selectivity to vocalizations.
The results imply that auditory cortical

  12. Context-Based Filtering for Assisted Brain-Actuated Wheelchair Driving

    Directory of Open Access Journals (Sweden)

    Gerolf Vanacker

    2007-01-01

    Full Text Available Controlling a robotic device by using human brain signals is an interesting and challenging task. The device may be complicated to control and the nonstationary nature of the brain signals provides for a rather unstable input. With the use of intelligent processing algorithms adapted to the task at hand, however, the performance can be increased. This paper introduces a shared control system that helps the subject in driving an intelligent wheelchair with a noninvasive brain interface. The subject's steering intentions are estimated from electroencephalogram (EEG signals and passed through to the shared control system before being sent to the wheelchair motors. Experimental results show a possibility for significant improvement in the overall driving performance when using the shared control system compared to driving without it. These results have been obtained with 2 healthy subjects during their first day of training with the brain-actuated wheelchair.

  13. Modeling speech intelligibility in adverse conditions

    DEFF Research Database (Denmark)

    Dau, Torsten

    2012-01-01

    ) in conditions with nonlinearly processed speech. Instead of considering the reduction of the temporal modulation energy as the intelligibility metric, as assumed in the STI, the sEPSM applies the signal-to-noise ratio in the envelope domain (SNRenv). This metric was shown to be the key for predicting...... understanding speech when more than one person is talking, even when reduced audibility has been fully compensated for by a hearing aid. The reasons for these difficulties are not well understood. This presentation highlights recent concepts of the monaural and binaural signal processing strategies employed...... by the normal as well as impaired auditory system. Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475-1487] proposed the speech-based envelope power spectrum model (sEPSM) in an attempt to overcome the limitations of the classical speech transmission index (STI) and speech intelligibility index (SII...
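
    The SNRenv metric behind the sEPSM can be illustrated with a toy computation. This is a sketch under strong simplifying assumptions: a rectified broadband envelope and a single low-frequency modulation band stand in for the model's gammatone front end and modulation filterbank, and the names `envelope_power` and `snr_env` are hypothetical rather than taken from the published model.

    ```python
    import numpy as np

    def envelope_power(x, fs, mod_cutoff=8.0):
        """AC-coupled power of a crude temporal envelope below mod_cutoff Hz."""
        env = np.abs(x) - np.abs(x).mean()     # rectified, AC-coupled envelope
        spec = np.abs(np.fft.rfft(env)) ** 2 / len(env) ** 2
        freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
        return spec[(freqs > 0.0) & (freqs <= mod_cutoff)].sum()

    def snr_env(noisy_speech, noise, fs):
        """SNRenv: envelope power of the mixture in excess of the noise alone."""
        p_mix = envelope_power(noisy_speech, fs)
        p_noise = envelope_power(noise, fs)
        return max(p_mix - p_noise, 0.0) / p_noise

    fs = 8000.0
    t = np.arange(0, 2.0, 1.0 / fs)
    rng = np.random.default_rng(1)

    # "Speech" stand-in: a 1 kHz carrier with a 4 Hz amplitude modulation.
    speech = (1.0 + 0.8 * np.sin(2 * np.pi * 4.0 * t)) * np.sin(2 * np.pi * 1000.0 * t)
    noise = 0.5 * rng.standard_normal(len(t))

    snr_good = snr_env(speech + noise, noise, fs)         # favorable mixture
    snr_poor = snr_env(0.2 * speech + noise, noise, fs)   # degraded mixture
    ```

    The degraded mixture retains less speech-driven modulation energy above the noise's own envelope fluctuations, so its SNRenv is lower; in the full model this quantity, accumulated across audio and modulation channels, is mapped to predicted intelligibility.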

  14. Thinking positively: The genetics of high intelligence

    Science.gov (United States)

    Shakeshaft, Nicholas G.; Trzaskowski, Maciej; McMillan, Andrew; Krapohl, Eva; Simpson, Michael A.; Reichenberg, Avi; Cederlöf, Martin; Larsson, Henrik; Lichtenstein, Paul; Plomin, Robert

    2015-01-01

    High intelligence (general cognitive ability) is fundamental to the human capital that drives societies in the information age. Understanding the origins of this intellectual capital is important for government policy, for neuroscience, and for genetics. For genetics, a key question is whether the genetic causes of high intelligence are qualitatively or quantitatively different from the normal distribution of intelligence. We report results from a sibling and twin study of high intelligence and its links with the normal distribution. We identified 360,000 sibling pairs and 9000 twin pairs from 3 million 18-year-old males with cognitive assessments administered as part of conscription to military service in Sweden between 1968 and 2010. We found that high intelligence is familial, heritable, and caused by the same genetic and environmental factors responsible for the normal distribution of intelligence. High intelligence is a good candidate for “positive genetics” — going beyond the negative effects of DNA sequence variation on disease and disorders to consider the positive end of the distribution of genetic effects. PMID:25593376

  15. A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.

    Science.gov (United States)

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2018-06-01

    The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. To obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsychINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum age of participants, article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were short listed based on title relevance. After reading the abstracts and with consensus made between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model based on the Bayesian brain hypothesis, attentional modulation and top-down feedback serves as the fundamental framework in current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as in sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomena. Cortical oscillatory band activity may act as

  16. Predicting speech intelligibility based on a correlation metric in the envelope power spectrum domain

    DEFF Research Database (Denmark)

    Relaño-Iborra, Helia; May, Tobias; Zaar, Johannes

    2016-01-01

    A speech intelligibility prediction model is proposed that combines the auditory processing front end of the multi-resolution speech-based envelope power spectrum model [mr-sEPSM; Jørgensen, Ewert, and Dau (2013). J. Acoust. Soc. Am. 134(1), 436–446] with a correlation back end inspired by the sh...

  17. Task-dependent modulation of regions in the left temporal cortex during auditory sentence comprehension.

    Science.gov (United States)

    Zhang, Linjun; Yue, Qiuhai; Zhang, Yang; Shu, Hua; Li, Ping

    2015-01-01

    Numerous studies have revealed the essential role of the left lateral temporal cortex in auditory sentence comprehension along with evidence of the functional specialization of the anterior and posterior temporal sub-areas. However, it is unclear whether task demands (e.g., active vs. passive listening) modulate the functional specificity of these sub-areas. In the present functional magnetic resonance imaging (fMRI) study, we addressed this issue by applying both independent component analysis (ICA) and general linear model (GLM) methods. Consistent with previous studies, intelligible sentences elicited greater activity in the left lateral temporal cortex relative to unintelligible sentences. Moreover, responses to intelligibility in the sub-regions were differentially modulated by task demands. While the overall activation patterns of the anterior and posterior superior temporal sulcus and middle temporal gyrus (STS/MTG) were equivalent during both passive and active tasks, a middle portion of the STS/MTG was found to be selectively activated only during the active task under a refined analysis of sub-regional contributions. Our results not only confirm the critical role of the left lateral temporal cortex in auditory sentence comprehension but further demonstrate that task demands modulate functional specialization of the anterior-middle-posterior temporal sub-areas. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.

    Science.gov (United States)

    Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C

    2015-11-04

    Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. 
Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition

  19. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  20. Neuropsychological assessment of driving ability and self-evaluation: a comparison between driving offenders and a control group.

    Science.gov (United States)

    Zingg, Christina; Puelschen, Dietrich; Soyka, Michael

    2009-12-01

    The relationship between performance in neuropsychological tests and actual driving performance is unclear and results of studies on this topic differ. This makes it difficult to use neuropsychological tests to assess driving ability. The ability to compensate for cognitive deficits plays a crucial role in this context. We compared neuropsychological test results and self-evaluation ratings between three groups: driving offenders with a psychiatric diagnosis relevant for driving ability (mainly alcohol dependence), driving offenders without such a diagnosis, and a control group of non-offending drivers. Subjects were divided into two age categories (19-39 and 40-66 years). It was assumed that drivers with a psychiatric diagnosis relevant for driving ability and younger driving offenders without a psychiatric diagnosis would be less able to adequately assess their own capabilities than the control group. The driving offenders with a psychiatric diagnosis showed poorer concentration, reactivity, cognitive flexibility and problem solving, and tended to overestimate their abilities in intelligence and attentional functions, compared to the other two groups. Conversely, younger drivers rather underestimated their performance.

  1. A decrease in brain activation associated with driving when listening to someone speak

    Science.gov (United States)

    2008-02-01

    Behavioral studies have shown that engaging in a secondary task, such as talking on a cellular telephone, disrupts driving performance. This study used functional magnetic resonance imaging (fMRI) to investigate the impact of concurrent auditory ...

  2. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  3. Intelligent transport systems (ITS) and driving behaviour: setting the agenda

    NARCIS (Netherlands)

    Heijden, R.E.C.M. van der; Marchau, V.A.W.J.; Thissen, W.A.H.; Wieinga, P.; Pantic, M.; Ludema, M.

    2004-01-01

    The application of intelligent transportation systems (ITS), in particular advanced driver assistance systems (ADAS), is expected to improve the performance of road transportation significantly. Public policy makers, among others, are therefore increasingly interested in the implementation

  4. Driving with intelligent vehicles: driving behaviour with Adaptive Cruise Control and the acceptance by individual drivers

    NARCIS (Netherlands)

    Hoedemaeker, D.M.

    1999-01-01

    This thesis focuses on the following research questions: What are the effects of driver support systems on driving behaviour? To what extent will driver support systems be accepted by individual drivers? To what extent will driving behaviour and acceptance be determined by individual differences?

  5. Synchronization to auditory and visual rhythms in hearing and deaf individuals

    Science.gov (United States)

    Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen

    2014-01-01

    A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395

  6. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  7. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  8. Study on Awakening Effect by Fragrance Presentation Against Drowsy Driving and Construction of Fragrance Presentation System

    Science.gov (United States)

    Kakamu, Yuki; Yoshikawa, Masahito; Shimizu, Takayuki; Yanagida, Yasuyuki; Nakano, Tomoaki; Yamamoto, Shin; Yamada, Muneo

    Traffic accidents caused by drowsy driving persist, and they easily become fatal when a heavy vehicle is involved. Common countermeasures against drowsy driving are caution-advisory indicators and alarm sounds; however, drivers' visual and auditory channels are already heavily loaded, so further alerts in these modalities can be excessive. This study focuses on olfactory stimuli, which do not interfere with driving actions, and examines their effectiveness in combating drowsiness. Varying the type of scent presented, we investigated the effectiveness of each countermeasure in keeping drivers alert.

  9. THE RELATION BETWEEN DRIVER BEHAVIOR AND INTELLIGENT TRANSPORT SYSTEM

    Directory of Open Access Journals (Sweden)

    Alica Kalašová

    2017-12-01

    Full Text Available The main objective of Slovakia’s transport policy is to reduce the number of traffic accidents and increase safety on our roads. Implementation of intelligent transport systems presents one possibility for meeting this goal. Acceptance of these systems by motor vehicle drivers and other road traffic participants is necessary in order for them to fulfill their purpose. Only if drivers accept intelligent transport systems is it possible to manage road traffic flexibly and effectively. From the driver's perspective this concerns, in particular, the possibility of using alternative routes when traffic accidents or other obstacles occur on the route that would significantly affect the continuity and safety of road traffic. Thanks to these technologies, it is possible to choose the appropriate route while driving, based on the criterion which the driver considers the most important during the transport from origin to destination (driving time, distance from origin to destination, fuel consumption, quality of infrastructure). Information is provided to the driver through variable message signs or directly in the vehicle (RDS-TMC). Another advantage of intelligent transport systems is a positive impact on the psychological well-being of the driver while driving. Additional information about possible obstacles, weather conditions and dangerous situations that occur on the roads, as well as alternative routes, is provided to the driver well in advance. This paper is mainly focused on how drivers perceive the influence of intelligent transport systems in the Žilina region.

  10. Driving Simulator study for intelligent cooperative intersection safety system (IRIS)

    NARCIS (Netherlands)

    Vreeswijk, J.; Schendzielorz, T.; Mathias, P.; Feenstra, P.

    2008-01-01

    About forty percent of all accidents occur at intersections. The Intelligent Cooperative Intersection Safety system (IRIS), as part of the European research project SAFESPOT, is a roadside application and aims at minimizing the number of accidents at controlled and uncontrolled intersections. IRIS

  11. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    Directory of Open Access Journals (Sweden)

    Antje eHeinrich

    2015-06-01

    Full Text Available Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged 50-74 years with mild SNHL were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise), to high (sentence perception in modulated noise); cognitive tests of attention, memory, and nonverbal IQ; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on

  12. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre

  13. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    Science.gov (United States)

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: Stuttering Group with Auditory Processing Disorders (SGAPD): 10 individuals with central auditory processing disorders, and Stuttering Group (SG): 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to cause a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. The DAF caused a statistically significant reduction in SG: in the frequency score of stuttering-like disfluencies in the analysis of the Stuttering Severity Instrument, in the amount of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not have statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter differed between the two groups: fluency improved only in the individuals without auditory processing disorder.
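
    The 100 ms offset produced by the Phono Tools software amounts, conceptually, to a fixed delay line applied to the speech signal before it is played back to the speaker. A minimal sketch of such a delay (function name and parameters are illustrative, not the study's implementation):

    ```python
    import numpy as np

    def delayed_feedback(signal, fs, delay_ms=100.0):
        """Delay a mono signal by delay_ms milliseconds (zero-padded onset)."""
        delay_samples = int(round(fs * delay_ms / 1000.0))
        out = np.zeros_like(signal)
        if delay_samples < len(signal):
            out[delay_samples:] = signal[:len(signal) - delay_samples]
        return out

    # Illustration at a low rate: a 0.5 ms delay at fs = 8000 Hz is 4 samples.
    y = delayed_feedback(np.arange(8, dtype=float), fs=8000, delay_ms=0.5)
    ```

    At a common audio rate of 44.1 kHz, the 100 ms delay used in the study corresponds to 4410 samples; a real-time system would implement the same idea with a ring buffer rather than a whole-signal shift.
    
    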

  14. Relations between perceptual measures of temporal processing, auditory-evoked brainstem responses and speech intelligibility in noise

    DEFF Research Database (Denmark)

    Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten

    2011-01-01

    This study investigates behavioural and objective measures of temporal auditory processing and their relation to the ability to understand speech in noise. The experiments were carried out on a homogeneous group of seven hearing-impaired listeners with normal sensitivity at low frequencies (up to 1 kHz) and steeply sloping hearing losses above 1 kHz. For comparison, data were also collected for five normal-hearing listeners. Temporal processing was addressed at low frequencies by means of psychoacoustical frequency discrimination, binaural masked detection and amplitude modulation (AM) detection. In addition, auditory brainstem responses (ABRs) to clicks and broadband rising chirps were recorded. Furthermore, speech reception thresholds (SRTs) were determined for Danish sentences in speech-shaped noise. The main findings were: (1) SRTs were neither correlated with hearing sensitivity...

  15. Biases in Visual, Auditory, and Audiovisual Perception of Space

    Science.gov (United States)

    Odegaard, Brian; Wozny, David R.; Shams, Ladan

    2015-01-01

    Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only improves the
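
    The Bayesian Causal Inference framework invoked above weighs two hypotheses for each audiovisual trial: the visual and auditory measurements arose from one common source, or from two independent ones. A grid-based numerical sketch of that comparison (the prior width, noise levels, and prior probability of a common cause below are illustrative assumptions, not the values fitted in the study):

    ```python
    import numpy as np

    def posterior_common_cause(x_v, x_a, sigma_v, sigma_a, sigma_p=20.0, p_common=0.5):
        """Posterior probability that visual (x_v) and auditory (x_a) measurements
        share a common source, computed by numerical integration over a grid of
        candidate source locations (degrees of azimuth)."""
        s = np.linspace(-90.0, 90.0, 2001)          # candidate source locations
        ds = s[1] - s[0]
        prior = np.exp(-s**2 / (2 * sigma_p**2))    # central spatial prior
        prior /= prior.sum() * ds
        like_v = np.exp(-(x_v - s)**2 / (2 * sigma_v**2)) / np.sqrt(2 * np.pi * sigma_v**2)
        like_a = np.exp(-(x_a - s)**2 / (2 * sigma_a**2)) / np.sqrt(2 * np.pi * sigma_a**2)
        # C = 1: one source generates both measurements
        p_x_c1 = np.sum(like_v * like_a * prior) * ds
        # C = 2: independent sources, each drawn from the prior
        p_x_c2 = (np.sum(like_v * prior) * ds) * (np.sum(like_a * prior) * ds)
        return p_common * p_x_c1 / (p_common * p_x_c1 + (1 - p_common) * p_x_c2)

    # Nearby cues should favor a common cause far more than widely separated ones.
    p_near = posterior_common_cause(0.0, 2.0, sigma_v=2.0, sigma_a=8.0)
    p_far = posterior_common_cause(0.0, 40.0, sigma_v=2.0, sigma_a=8.0)
    ```

    When the posterior favors a common cause, the model fuses the two cues (weighted by their reliabilities), reproducing the visually dominated but attenuated bias the authors report.
    
    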

  17. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Speech intelligibility for normal hearing and hearing-impaired listeners in simulated room acoustic conditions

    DEFF Research Database (Denmark)

    Arweiler, Iris; Dau, Torsten; Poulsen, Torben

    Speech intelligibility depends on many factors such as room acoustics, the acoustical properties and location of the signal and the interferers, and the ability of the (normal and impaired) auditory system to process monaural and binaural sounds. In the present study, the effect of reverberation on spatial release from masking was investigated in normal-hearing and hearing-impaired listeners using three types of interferers: speech-shaped noise, an interfering female talker and speech-modulated noise. Speech reception thresholds (SRT) were obtained in three simulated environments: a listening room, a classroom and a church. The data from the study provide constraints for existing models of speech intelligibility prediction (based on the speech intelligibility index, SII, or the speech transmission index, STI) which have shortcomings when reverberation and/or fluctuating noise affect speech...
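
    Speech reception thresholds of this kind are commonly estimated with an adaptive procedure that lowers the signal-to-noise ratio after a correctly repeated sentence and raises it after an error, so the track converges near 50% intelligibility. A hedged sketch of such a tracker (the step size, scoring rule, and averaging rule are illustrative assumptions; the abstract does not specify the procedure used):

    ```python
    def track_srt(responses, start_snr=0.0, step_db=2.0):
        """Simple 1-down/1-up adaptive track over sentence trials.

        responses: booleans, True if the sentence was repeated correctly.
        Returns an SRT estimate (mean of the post-response SNRs) and the track.
        """
        snr = start_snr
        track = []
        for correct in responses:
            snr += -step_db if correct else step_db   # down after correct, up after error
            track.append(snr)
        return sum(track) / len(track), track

    srt_est, track = track_srt([True, False, True, False], start_snr=0.0, step_db=2.0)
    ```

    Real test batteries typically discard the first few reversals and average only the later SNR values, but the alternating-direction logic is the same.
    
    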

  19. Mid-sized omnidirectional robot with hydraulic drive and steering

    Science.gov (United States)

    Wood, Carl G.; Perry, Trent; Cook, Douglas; Maxfield, Russell; Davidson, Morgan E.

    2003-09-01

    Through funding from the US Army-Tank-Automotive and Armaments Command's (TACOM) Intelligent Mobility Program, Utah State University's (USU) Center for Self-Organizing and Intelligent Systems (CSOIS) has developed the T-series of omni-directional robots based on the USU omni-directional vehicle (ODV) technology. The ODV provides independent computer control of steering and drive in a single wheel assembly. By putting multiple omni-directional (OD) wheels on a chassis, a vehicle is capable of uncoupled translational and rotational motion. Previous robots in the series, the T1, T2, T3, ODIS, ODIS-T, and ODIS-S have all used OD wheels based on electric motors. The T4 weighs approximately 1400 lbs and features a 4-wheel drive wheel configuration. Each wheel assembly consists of a hydraulic drive motor and a hydraulic steering motor. A gasoline engine is used to power both the hydraulic and electrical systems. The paper presents an overview of the mechanical design of the vehicle as well as potential uses of this technology in fielded systems.

  20. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  1. Multichannel auditory search: toward understanding control processes in polychotic auditory listening.

    Science.gov (United States)

    Lee, M D

    2001-01-01

    Two experiments are presented that serve as a framework for exploring auditory information processing. The framework is referred to as polychotic listening or auditory search, and it requires a listener to scan multiple simultaneous auditory streams for the appearance of a target word (the name of a letter such as A or M). Participants' ability to scan between two and six simultaneous auditory streams of letter and digit names for the name of a target letter was examined using six loudspeakers. The main independent variable was auditory load, or the number of active audio streams on a given trial. The primary dependent variables were target localization accuracy and reaction time. Results showed that as load increased, performance decreased. The performance decrease was evident in reaction time, accuracy, and sensitivity measures. The second study required participants to practice the same task for 10 sessions, for a total of 1800 trials. Results indicated that even with extensive practice, performance was still affected by auditory load. The present results are compared with findings in the visual search literature. The implications for the use of multiple auditory displays are discussed. Potential applications include cockpit and automobile warning displays, virtual reality systems, and training systems.

  2. 12th International Conference on Intelligent Autonomous Systems

    CERN Document Server

    Cho, Hyungsuck; Yoon, Kwang-Joon; Lee, Jangmyung

    2013-01-01

    Intelligent autonomous systems have emerged as a key enabler for the creation of a new paradigm of services to humankind, as seen in the recent advancement of autonomous cars licensed for driving in our streets, of unmanned aerial and underwater vehicles carrying out hazardous tasks on-site, and of space robots engaged in scientific as well as operational missions, to list only a few. This book aims at serving the researchers and practitioners in related fields with a timely dissemination of the recent progress on intelligent autonomous systems, based on a collection of papers presented at the 12th International Conference on Intelligent Autonomous Systems, held in Jeju, Korea, June 26-29, 2012. With the theme of “Intelligence and Autonomy for the Service to Humankind,” the conference covered such diverse areas as autonomous ground, aerial, and underwater vehicles, intelligent transportation systems, personal/domestic service robots, professional service robots for surgery/rehabilitation, rescue/security ...

  3. Diminished auditory sensory gating during active auditory verbal hallucinations.

    Science.gov (United States)

    Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia

    2017-10-01

    Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the peak amplitude of the event-related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream, resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
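
    The S2/S1 ratio described above reduces to a peak-amplitude comparison between the two averaged ERP traces within a component's latency window. A minimal sketch (the window bounds and peak-picking rule here are illustrative; in practice they are chosen per component, e.g. P50 vs N100 vs P200):

    ```python
    import numpy as np

    def gating_ratio(erp_s1, erp_s2, fs, window_s=(0.04, 0.08)):
        """Peak-amplitude gating ratio S2/S1 inside a latency window.

        erp_s1, erp_s2: averaged ERP traces to the first and second click.
        fs: sampling rate in Hz; window_s: post-stimulus window in seconds.
        """
        i0, i1 = (int(round(t * fs)) for t in window_s)
        a1 = np.max(np.abs(erp_s1[i0:i1]))   # S1 peak amplitude
        a2 = np.max(np.abs(erp_s2[i0:i1]))   # S2 peak amplitude
        return a2 / a1

    # Synthetic example: the S2 peak is half the S1 peak, so the ratio is 0.5.
    fs = 1000
    s1 = np.zeros(100); s1[50] = 2.0
    s2 = np.zeros(100); s2[50] = 1.0
    ratio = gating_ratio(s1, s2, fs)
    ```

    Under this convention a ratio near 0 indicates strong suppression of the second click, while a ratio approaching 1 indicates the incomplete suppression the authors associate with active hallucinations.
    
    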

  4. Effectiveness of auditory and tactile crossmodal cues in a dual-task visual and auditory scenario.

    Science.gov (United States)

    Hopkins, Kevin; Kass, Steven J; Blalock, Lisa Durrance; Brill, J Christopher

    2017-05-01

    In this study, we examined how spatially informative auditory and tactile cues affected participants' performance on a visual search task while they simultaneously performed a secondary auditory task. Visual search task performance was assessed via reaction time and accuracy. Tactile and auditory cues provided the approximate location of the visual target within the search display. The inclusion of tactile and auditory cues improved performance in comparison to the no-cue baseline conditions. In comparison to the no-cue conditions, both tactile and auditory cues resulted in faster response times in the visual search only (single task) and visual-auditory (dual-task) conditions. However, the effectiveness of auditory and tactile cueing for visual task accuracy was shown to be dependent on task-type condition. Crossmodal cueing remains a viable strategy for improving task performance without increasing attentional load within a singular sensory modality. Practitioner Summary: Crossmodal cueing with dual-task performance has not been widely explored, yet has practical applications. We examined the effects of auditory and tactile crossmodal cues on visual search performance, with and without a secondary auditory task. Tactile cues aided visual search accuracy when also engaged in a secondary auditory task, whereas auditory cues did not.

  5. Automated Machinery Health Monitoring Using Stress Wave Analysis & Artificial Intelligence

    National Research Council Canada - National Science Library

    Board, David

    1998-01-01

    .... Army, for application to helicopter drive train components. The system will detect structure borne, high frequency acoustic data, and process it with feature extraction and polynomial network artificial intelligence software...

  6. Operational and real-time Business Intelligence

    Directory of Open Access Journals (Sweden)

    Daniela Ioana SANDU

    2008-01-01

    A key component of a company’s IT framework is a business intelligence (BI) system. BI enables business users to report on, analyze and optimize business operations to reduce costs and increase revenues. Organizations use BI for strategic and tactical decision making where the decision-making cycle may span a time period of several weeks (e.g., campaign management) or months (e.g., improving customer satisfaction). Competitive pressures coming from a very dynamic business environment are forcing companies to react faster to changing business conditions and customer requirements. As a result, there is now a need to use BI to help drive and optimize business operations on a daily basis, and, in some cases, even for intraday decision making. This type of BI is usually called operational business intelligence and real-time business intelligence.

  7. Binding and unbinding the auditory and visual streams in the McGurk effect.

    Science.gov (United States)

    Nahorna, Olha; Berthommier, Frédéric; Schwartz, Jean-Luc

    2012-08-01

    Subjects presented with coherent auditory and visual streams generally fuse them into a single percept. This results in enhanced intelligibility in noise, or in visual modification of the auditory percept in the McGurk effect. It is classically considered that processing is done independently in the auditory and visual systems before interaction occurs at a certain representational stage, resulting in an integrated percept. However, some behavioral and neurophysiological data suggest the existence of a two-stage process. A first stage would involve binding together the appropriate pieces of audio and video information before fusion per se in a second stage. Then it should be possible to design experiments leading to unbinding. It is shown here that if a given McGurk stimulus is preceded by an incoherent audiovisual context, the amount of McGurk effect is largely reduced. Various kinds of incoherent contexts (acoustic syllables dubbed on video sentences or phonetic or temporal modifications of the acoustic content of a regular sequence of audiovisual syllables) can significantly reduce the McGurk effect even when they are short (less than 4 s). The data are interpreted in the framework of a two-stage "binding and fusion" model for audiovisual speech perception.

  8. The pathways for intelligible speech: multivariate and univariate perspectives.

    Science.gov (United States)

    Evans, S; Kyong, J S; Rosen, S; Golestani, N; Warren, J E; McGettigan, C; Mourão-Miranda, J; Wise, R J S; Scott, S K

    2014-09-01

    An anterior pathway, concerned with extracting meaning from sound, has been identified in nonhuman primates. An analogous pathway has been suggested in humans, but controversy exists concerning the degree of lateralization and the precise location where responses to intelligible speech emerge. We have demonstrated that the left anterior superior temporal sulcus (STS) responds preferentially to intelligible speech (Scott SK, Blank CC, Rosen S, Wise RJS. 2000. Identification of a pathway for intelligible speech in the left temporal lobe. Brain. 123:2400-2406.). A functional magnetic resonance imaging study in Cerebral Cortex used equivalent stimuli and univariate and multivariate analyses to argue for the greater importance of the bilateral posterior STS when compared with the left anterior STS in responding to intelligible speech (Okada K, Rong F, Venezia J, Matchin W, Hsieh IH, Saberi K, Serences JT, Hickok G. 2010. Hierarchical organization of human auditory cortex: evidence from acoustic invariance in the response to intelligible speech. Cereb Cortex. 20:2486-2495.). Here, we also replicate our original study, demonstrating that the left anterior STS exhibits the strongest univariate response and, in decoding using the bilateral temporal cortex, contains the most informative voxels showing an increased response to intelligible speech. In contrast, in classifications using local "searchlights" and a whole brain analysis, we find greater classification accuracy in posterior rather than anterior temporal regions. Thus, we show that the precise nature of the multivariate analysis used will emphasize different response profiles associated with complex sound to speech processing. © The Author 2013. Published by Oxford University Press.

  9. Design of vehicle intelligent anti-collision warning system

    Science.gov (United States)

    Xu, Yangyang; Wang, Ying

    2018-05-01

    This paper designs a low-cost, high-accuracy, miniaturized vehicle intelligent anti-collision warning system with digital display and acousto-optic alarm features, based on the MCU AT89C51. The vehicle intelligent anti-collision warning system includes a forward anti-collision warning system, an automatic parking system and a reversing anti-collision radar system. It is developed mainly on the basis of ultrasonic distance measurement and its performance is reliable; thus driving safety is greatly improved and parking security and efficiency are enhanced enormously.
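
    The ultrasonic ranging that this system relies on follows a standard time-of-flight calculation: the pulse travels to the obstacle and back, so the round-trip echo time is halved before multiplying by the speed of sound. A minimal sketch (the constant and timing value are illustrative, not from the paper):

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # dry air at ~20 °C

def echo_distance_m(round_trip_time_s):
    """Distance to an obstacle from an ultrasonic echo: the pulse
    travels out and back, so halve the round-trip time."""
    return SPEED_OF_SOUND_M_PER_S * round_trip_time_s / 2.0

# A 10 ms round trip corresponds to an obstacle about 1.7 m away
print(round(echo_distance_m(0.010), 3))  # → 1.715
```

A warning threshold for the acousto-optic alarm would then simply compare this distance against a safe-stopping margin.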

  10. Speech intelligibility of normal listeners and persons with impaired hearing in traffic noise

    Science.gov (United States)

    Aniansson, G.; Peterson, Y.

    1983-10-01

    Speech intelligibility (PB words) in traffic-like noise was investigated in a laboratory situation simulating three common listening situations, indoors at 1 and 4 m and outdoors at 1 m. The maximum noise levels still permitting 75% intelligibility of PB words in these three listening situations were also defined. A total of 269 persons were examined. Forty-six had normal hearing, 90 a presbycusis-type hearing loss, 95 a noise-induced hearing loss and 38 a conductive hearing loss. In the indoor situation the majority of the groups with impaired hearing retained good speech intelligibility in 40 dB(A) masking noise. Lowering the noise level to less than 40 dB(A) resulted in a minor, usually insignificant, improvement in speech intelligibility. Listeners with normal hearing maintained good speech intelligibility in the outdoor listening situation at noise levels up to 60 dB(A), without lip-reading (i.e., using non-auditory information). For groups with impaired hearing due to age and/or noise, representing 8% of the population in Sweden, the noise level outdoors had to be lowered to less than 50 dB(A), in order to achieve good speech intelligibility at 1 m without lip-reading.

  11. Attending to auditory memory.

    Science.gov (United States)

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. The influence of spectral characteristics of early reflections on speech intelligibility

    DEFF Research Database (Denmark)

    Arweiler, Iris; Buchholz, Jörg

    2011-01-01

    The auditory system takes advantage of early reflections (ERs) in a room by integrating them with the direct sound (DS) and thereby increasing the effective speech level. In the present paper the benefit from realistic ERs on speech intelligibility in diffuse speech-shaped noise was investigated...... ascribed to their altered spectrum compared to the DS and to the filtering by the torso, head, and pinna. No binaural processing other than a binaural summation effect could be observed....

  13. Predictive coding of visual-auditory and motor-auditory events: An electrophysiological study.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2015-11-11

    The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials are also attenuated when a sound is accompanied by a video of anticipatory visual motion that reliably predicts the sound. Here, we investigated whether the neural underpinnings of prediction of upcoming auditory stimuli are similar for motor-auditory (MA) and visual-auditory (VA) events using a stimulus omission paradigm. In the MA condition, a finger tap triggered the sound of a handclap whereas in the VA condition the same sound was accompanied by a video showing the handclap. In both conditions, the auditory stimulus was omitted in either 50% or 12% of the trials. These auditory omissions induced early and mid-latency ERP components (oN1 and oN2, presumably reflecting prediction and prediction error), and subsequent higher-order error evaluation processes. The oN1 and oN2 of MA and VA were alike in amplitude, topography, and neural sources even though the predictions originate in different brain areas (motor versus visual cortex). This suggests that MA and VA predictions activate a sensory template of the sound in auditory cortex. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. An Intelligent Robot Programing

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Seong Yong

    2012-01-15

    This book introduces intelligent robot programming, covering background on the beginning, an introduction to VPL and SPL, building an environment for the robot platform, getting started with robot programming, design of a simulation environment, robot autonomous drive control programming, and simulation graphics. It covers SPL graphical programming (graphical images, graphical shapes, and graphical method application), application of procedures for robot control, robot multiprogramming, robot bumper sensor programming, robot LRF sensor programming and robot color sensor programming.

  15. An Intelligent Robot Programing

    International Nuclear Information System (INIS)

    Hong, Seong Yong

    2012-01-01

    This book introduces intelligent robot programming, covering background on the beginning, an introduction to VPL and SPL, building an environment for the robot platform, getting started with robot programming, design of a simulation environment, robot autonomous drive control programming, and simulation graphics. It covers SPL graphical programming (graphical images, graphical shapes, and graphical method application), application of procedures for robot control, robot multiprogramming, robot bumper sensor programming, robot LRF sensor programming and robot color sensor programming.

  16. Autonomous Driver Based on an Intelligent System of Decision-Making.

    Science.gov (United States)

    Czubenko, Michał; Kowalczuk, Zdzisław; Ordys, Andrew

    The paper presents and discusses a system ( xDriver ) which uses an Intelligent System of Decision-making (ISD) for the task of car driving. The principal subject is the implementation, simulation and testing of the ISD system described earlier in our publications (Kowalczuk and Czubenko in Artificial Intelligence and Soft Computing, Lecture Notes in Computer Science, Lecture Notes in Artificial Intelligence, Springer, Berlin, 2010, In Int J Appl Math Comput Sci 21(4):621-635, 2011, In Pomiary Autom Robot 2(17):60-5, 2013) for the task of autonomous driving. The design of the whole ISD system is a result of a thorough modelling of human psychology based on an extensive literature study. Concepts somewhat similar to the ISD system can be found in the literature (Muhlestein in Cognit Comput 5(1):99-105, 2012; Wiggins in Cognit Comput 4(3):306-319, 2012), but there are no reports of a system which would model the human psychology for the purpose of autonomously driving a car. The paper describes assumptions for simulation, the set of needs and reactions (characterizing the ISD system), the road model and the vehicle model, as well as presents some results of simulation. It shows that the xDriver system may behave on the road as a very inexperienced driver.

  17. Dyslexia risk gene relates to representation of sound in the auditory brainstem.

    Science.gov (United States)

    Neef, Nicole E; Müller, Bent; Liebig, Johanna; Schaadt, Gesa; Grigutsch, Maren; Gunter, Thomas C; Wilcke, Arndt; Kirsten, Holger; Skeide, Michael A; Kraft, Indra; Kraus, Nina; Emmrich, Frank; Brauer, Jens; Boltze, Johannes; Friederici, Angela D

    2017-04-01

    Dyslexia is a reading disorder with strong associations with KIAA0319 and DCDC2. Both genes play a functional role in spike time precision of neurons. Strikingly, poor readers show an imprecise encoding of fast transients of speech in the auditory brainstem. Whether dyslexia risk genes are related to the quality of sound encoding in the auditory brainstem remains to be investigated. Here, we quantified the response consistency of speech-evoked brainstem responses to the acoustically presented syllable [da] in 159 genotyped, literate and preliterate children. When controlling for age, sex, familial risk and intelligence, partial correlation analyses associated a higher dyslexia risk loading with KIAA0319 with noisier responses. In contrast, a higher risk loading with DCDC2 was associated with a trend towards more stable responses. These results suggest that unstable representation of sound, and thus, reduced neural discrimination ability of stop consonants, occurred in genotypes carrying a higher amount of KIAA0319 risk alleles. Current data provide the first evidence that the dyslexia-associated gene KIAA0319 can alter brainstem responses and impair phoneme processing in the auditory brainstem. This brain-gene relationship provides insight into the complex relationships between phenotype and genotype, thereby improving the understanding of the dyslexia-inherent complex multifactorial condition. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Development of the auditory system

    Science.gov (United States)

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  19. Animal models for auditory streaming

    Science.gov (United States)

    Itatani, Naoya

    2017-01-01

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022

  20. The impact of self-driving cars on existing transportation networks

    Science.gov (United States)

    Ji, Xiang

    2018-04-01

    In this paper, considering the usage of self-driving cars, I research the congestion problems of traffic networks at both the macro and micro levels. Firstly, a macroscopic mathematical model is established using the Greenshields function, the analytic hierarchy process and Monte Carlo simulation, where the congestion level is divided into five levels according to the average vehicle speed. Roads with obvious congestion are investigated first, and their traffic flow and topology are analyzed. By processing the data, I propose a traffic congestion model. In the model, I assume that half of the non-self-driving cars only take the shortest route and the other half choose their paths randomly, while self-driving cars can obtain vehicle density data for each road and choose paths more reasonably. When the traffic density on a path exceeds a specific value, it cannot be selected. To overcome the dimensional differences of the data, I rate the paths by BORDA sorting. A Monte Carlo simulation of a cellular automaton is used to obtain negative feedback information on the density of the traffic network, where vehicles are added into the road network one by one. I then analyze the influence of negative feedback information on the path selection of intelligent cars. The conclusion is that increasing the proportion of intelligent vehicles makes the road load more balanced, and self-driving cars can avoid the peak and reduce the degree of road congestion. Combined with other models, the optimal self-driving ratio is about sixty-two percent. From the microscopic aspect, using the single-lane traffic NS (Nagel-Schreckenberg) rule, another model is established to analyze the road partition scheme. Self-driving traffic is more intelligent, and cooperation among self-driving cars can reduce the random deceleration probability. With this model, I obtain space-time distributions for different self-driving ratios. I also simulate the case of making a lane separately for self-driving
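
    The single-lane NS rule referred to in the abstract is the standard Nagel-Schreckenberg cellular automaton: each car accelerates toward a maximum speed, brakes to the gap ahead, randomly decelerates with some probability, then moves. A minimal sketch follows; the per-vehicle `p_slow` array is an assumption made here to reflect the abstract's point that cooperating self-driving cars have a lower random deceleration probability, and all parameter values are illustrative:

```python
import random

def ns_step(positions, velocities, road_length, v_max, p_slow, rng):
    """One parallel update of the single-lane Nagel-Schreckenberg (NS)
    cellular automaton on a circular road; p_slow[i] is car i's
    random-deceleration probability (lower for self-driving cars)."""
    order = sorted(range(len(positions)), key=lambda i: positions[i])
    new_pos, new_vel = positions[:], velocities[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % len(order)]
        gap = (positions[ahead] - positions[i] - 1) % road_length
        v = min(velocities[i] + 1, v_max)       # 1. accelerate
        v = min(v, gap)                         # 2. brake to avoid collision
        if v > 0 and rng.random() < p_slow[i]:  # 3. random deceleration
            v -= 1
        new_vel[i] = v
        new_pos[i] = (positions[i] + v) % road_length  # 4. move
    return new_pos, new_vel

# Mixed traffic: two human-driven cars (p = 0.3), two self-driving (p = 0.0)
rng = random.Random(1)
pos, vel = [0, 3, 7, 12], [0, 0, 0, 0]
for _ in range(100):
    pos, vel = ns_step(pos, vel, road_length=20, v_max=5,
                       p_slow=[0.3, 0.3, 0.0, 0.0], rng=rng)
```

Sweeping the fraction of low-`p_slow` vehicles in such a simulation is one way to reproduce the kind of space-time distributions the abstract describes for different self-driving ratios.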

  1. Applying intelligent transport systems to manage noise impacts

    NARCIS (Netherlands)

    Wilmink, I.R.; Vonk, T.

    2015-01-01

    This contribution discusses how traffic management, and many other measures that can be categorised as Intelligent Transport Systems (ITS, i.e. all traffic and transport measures that use ICT) can help reduce noise levels by influencing mobility choices and driving behaviour. Several examples of

  2. Intelligent autonomous systems 12. Vol. 2. Proceedings

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sukhan [Sungkyunkwan Univ., Gyeonggi-Do (Korea, Republic of). College of Information and Communication Engineering; Yoon, Kwang-Joon [Konkuk Univ., Seoul (Korea, Republic of); Cho, Hyungsuck [Daegu Gyeongbuk Institute of Science and Technology, Daegu (Korea, Republic of); Lee, Jangmyung (eds.) [Pusan National Univ. (Korea, Republic of). Dept. of Electronics Engineering

    2013-02-01

    Recent research in Intelligent and Autonomous Systems. Volume 2 of the proceedings of the 12th International Conference IAS-12, held June 26-29, 2012, Jeju Island, Korea. Written by leading experts in the field. Intelligent autonomous systems have emerged as a key enabler for the creation of a new paradigm of services to humankind, as seen by the recent advancement of autonomous cars licensed for driving in our streets, of unmanned aerial and underwater vehicles carrying out hazardous tasks on-site, and of space robots engaged in scientific as well as operational missions, to list only a few. This book aims at serving the researchers and practitioners in related fields with a timely dissemination of the recent progress on intelligent autonomous systems, based on a collection of papers presented at the 12th International Conference on Intelligent Autonomous Systems, held in Jeju, Korea, June 26-29, 2012. With the theme of ''Intelligence and Autonomy for the Service to Humankind'', the conference has covered such diverse areas as autonomous ground, aerial, and underwater vehicles, intelligent transportation systems, personal/domestic service robots, professional service robots for surgery/rehabilitation, rescue/security and space applications, and intelligent autonomous systems for manufacturing and healthcare. This volume 2 includes contributions devoted to Service Robotics and Human-Robot Interaction and Autonomous Multi-Agent Systems and Life Engineering.

  3. Modeling and Recognizing Driver Behavior Based on Driving Data: A Survey

    OpenAIRE

    Wang, Wenshuo; Xi, Junqiang; Chen, Huiyan

    2014-01-01

    In recent years, modeling and recognizing driver behavior have become crucial to understanding intelligent transport systems, human-vehicle systems, and intelligent vehicle systems. A wide range of both mathematical identification methods and modeling methods of driver behavior are presented from the control point of view in this paper based on the driving data, such as the brake/throttle pedal position and the steering wheel angle, among others. Subsequently, the driver’s characteristics de...

  4. Improvement of auditory hallucinations and reduction of primary auditory area's activation following TMS

    International Nuclear Information System (INIS)

    Giesel, Frederik L.; Mehndiratta, Amit; Hempel, Albrecht; Hempel, Eckhard; Kress, Kai R.; Essig, Marco; Schröder, Johannes

    2012-01-01

    Background: In the present case study, improvement of auditory hallucinations following transcranial magnetic stimulation (TMS) therapy was investigated with respect to activation changes of the auditory cortices. Methods: Using functional magnetic resonance imaging (fMRI), activation of the auditory cortices was assessed prior to and after a 4-week TMS series of the left superior temporal gyrus in a schizophrenic patient with medication-resistant auditory hallucinations. Results: Hallucinations decreased slightly after the third and profoundly after the fourth week of TMS. Activation in the primary auditory area decreased, whereas activation in the operculum and insula remained stable. Conclusions: Combination of TMS and repetitive fMRI is promising to elucidate the physiological changes induced by TMS.

  5. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    Science.gov (United States)

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation-acoustic frequency-might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although

  6. Driving, brain injury and assistive technology.

    Science.gov (United States)

    Lane, Amy K; Benoit, Dana

    2011-01-01

    Individuals with brain injury often present with cognitive, physical and emotional impairments which impact their ability to resume independence in activities of daily living. Of those activities, the resumption of driving privileges is cited as one of the greatest concerns by survivors of brain injury. The integration of driving fundamentals within the hierarchical model proposed by Keskinen represents the complexity of skills and behaviors necessary for driving. This paper provides a brief review of specific considerations concerning the driver with TBI and highlights current vehicle technology which has been developed by the automotive industry and by manufacturers of adaptive driving equipment that may facilitate the driving task. Adaptive equipment technology allows for compensation of a variety of operational deficits, whereas technological advances within the automotive industry provide drivers with improved safety and information systems. However, research has not yet supported the use of such intelligent transportation systems or advanced driving systems for drivers with brain injury. Although technologies are intended to improve the safety of drivers within the general population, the potential of negative consequences for drivers with brain injury must be considered. Ultimately, a comprehensive driving evaluation and training by a driving rehabilitation specialist is recommended for individuals with brain injury. An understanding of the potential impact of TBI on driving-related skills and knowledge of current adaptive equipment and technology is imperative to determine whether return-to-driving is a realistic and achievable goal for the individual with TBI.

  7. The Effects of Meaning-Based Auditory Training on Behavioral Measures of Perceptual Effort in Individuals with Impaired Hearing.

    Science.gov (United States)

    Sommers, Mitchell S; Tye-Murray, Nancy; Barcroft, Joe; Spehar, Brent P

    2015-11-01

    There has been considerable interest in measuring the perceptual effort required to understand speech, as well as to identify factors that might reduce such effort. In the current study, we investigated whether, in addition to improving speech intelligibility, auditory training also could reduce perceptual or listening effort. Perceptual effort was assessed using a modified version of the n-back memory task in which participants heard lists of words presented without background noise and were asked to continually update their memory of the three most recently presented words. Perceptual effort was indexed by memory for items in the three-back position immediately before, immediately after, and 3 months after participants completed the Computerized Learning Exercises for Aural Rehabilitation (clEAR), a 12-session computerized auditory training program. Immediate posttraining measures of perceptual effort indicated that participants could remember approximately one additional word compared to pretraining. Moreover, some training gains were retained at the 3-month follow-up, as indicated by significantly greater recall for the three-back item at the 3-month measurement than at pretest. There was a small but significant correlation between gains in intelligibility and gains in perceptual effort. The findings are discussed within the framework of a limited-capacity speech perception system.
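
    The modified n-back task described above indexes effort by recall of the item three positions back. A minimal sketch of how such recall might be scored, using hypothetical word lists and responses (none of these names or values come from the study):

```python
def n_back_targets(words, n=3):
    """At each probe position t >= n, the correct answer is the word
    presented n items earlier."""
    return {t: words[t - n] for t in range(n, len(words))}

def n_back_accuracy(words, responses, n=3):
    """Proportion of probes where the participant's response matches
    the n-back item (n = 3 in the study described above)."""
    targets = n_back_targets(words, n)
    hits = sum(responses.get(t) == w for t, w in targets.items())
    return hits / len(targets)

# Hypothetical six-word list and a participant's recalls at each probe
words = ["cat", "dog", "sun", "map", "pen", "cup"]
responses = {3: "cat", 4: "dog", 5: "pen"}   # correct, correct, error
print(round(n_back_accuracy(words, responses), 2))  # → 0.67
```

Comparing such accuracy scores before and after training is the logic behind the study's "approximately one additional word" improvement in perceptual effort.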

  8. Autonomous driving in urban environments: approaches, lessons and challenges.

    Science.gov (United States)

    Campbell, Mark; Egerstedt, Magnus; How, Jonathan P; Murray, Richard M

    2010-10-13

    The development of autonomous vehicles for urban driving has seen rapid progress in the past 30 years. This paper provides a summary of the current state of the art in autonomous driving in urban environments, based primarily on the experiences of the authors in the 2007 DARPA Urban Challenge (DUC). The paper briefly summarizes the approaches that different teams used in the DUC, with the goal of describing some of the challenges that the teams faced in driving in urban environments. The paper also highlights the long-term research challenges that must be overcome in order to enable autonomous driving and points to opportunities for new technologies to be applied in improving vehicle safety, exploiting intelligent road infrastructure and enabling robotic vehicles operating in human environments.

  9. E-drive with electrically controlled differential; E-Antrieb mit elektrisch geregeltem Differenzial

    Energy Technology Data Exchange (ETDEWEB)

    Smetana, Tomas; Biermann, Thorsten [Schaeffler Technologies GmbH und Co. KG, Herzogenaurach (Germany); Rohe, Marco [AFT Atlas Fahrzeugtechnik GmbH, Werdohl (Germany); Heinrich, Wolfgang [IDAM, Suhl (Germany)

    2011-10-15

    Schaeffler is presenting an all-wheel drive electric vehicle named 'Active eDrive'. The name is intended principally to convey innovation and the USP of the drive system: an electric differential with a torque vectoring function. The system combines the final drive with intelligent transverse torque distribution which, when used on axles, enables the distribution of torque over the longitudinal axis of the vehicle. The final drive can be integrated into both electric and hybrid vehicles with or without a range extender capability. The authors first explain the mechanical requirements and then describe the electrical systems that are intended to fulfill these requirements. (orig.)

  10. Effects of Non-Driving Related Task Modalities on Takeover Performance in Highly Automated Driving.

    Science.gov (United States)

    Wandtner, Bernhard; Schömig, Nadja; Schmidt, Gerald

    2018-04-01

    The aim of the study was to evaluate the impact of different non-driving related tasks (NDR tasks) on takeover performance in highly automated driving. During highly automated driving, drivers are temporarily permitted to engage in NDR tasks. However, they must be able to take over control when reaching a system limit. There is evidence that the type of NDR task has an impact on takeover performance, but little is known about the specific task characteristics that account for performance decrements. Thirty participants drove in a simulator using a highly automated driving system. Each participant faced five critical takeover situations. Based on assumptions of Wickens's multiple resource theory, stimulus and response modalities of a prototypical NDR task were systematically manipulated. Additionally, in one experimental group, the task was locked out simultaneously with the takeover request. Task modalities had significant effects on several measures of takeover performance. A visual-manual texting task degraded performance the most, particularly when performed handheld. In contrast, takeover performance with an auditory-vocal task was comparable to a baseline without any task. Task lockout was associated with faster hands-on-wheel times but not altered brake response times. Results showed that NDR task modalities are relevant factors for takeover performance. An NDR task lockout was highly accepted by the drivers and showed moderate benefits for the first takeover reaction. Knowledge about the impact of NDR task characteristics is an enabler for adaptive takeover concepts. In addition, it might help regulators to make decisions on allowed NDR tasks during automated driving.

  11. Auditory hindsight bias: Fluency misattribution versus memory reconstruction.

    Science.gov (United States)

    Higham, Philip A; Neil, Greg J; Bernstein, Daniel M

    2017-06-01

    We report 4 experiments investigating auditory hindsight bias-the tendency to overestimate the intelligibility of distorted auditory stimuli after learning their identity. An associative priming manipulation was used to vary the amount of processing fluency independently of prior target knowledge. For hypothetical designs, in which hindsight judgments are made for peers in foresight, we predicted that judgments would be based on processing fluency and that hindsight bias would be greater in the unrelated- compared to related-prime context (differential-fluency hypothesis). Conversely, for memory designs, in which foresight judgments are remembered in hindsight, we predicted that judgments would be based on memory reconstruction and that there would be independent effects of prime relatedness and prior target knowledge (recollection hypothesis). These predictions were confirmed. Specifically, we found support for the differential-fluency hypothesis when a hypothetical design was used in Experiments 1 and 2 (hypothetical group). Conversely, when a memory design was used in Experiments 2 (memory group), 3A, and 3B, we found support for the recollection hypothesis. Together, the results suggest that qualitatively different mechanisms create hindsight bias in the 2 designs. The results are discussed in terms of fluency misattributions, memory reconstruction, anchoring-and-adjustment, sense making, and a multicomponent model of hindsight bias. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Differential Recruitment of Auditory Cortices in the Consolidation of Recent Auditory Fearful Memories.

    Science.gov (United States)

    Cambiaghi, Marco; Grosso, Anna; Renna, Annamaria; Sacchetti, Benedetto

    2016-08-17

    Memories of frightening events require a protracted consolidation process. Sensory cortex, such as the auditory cortex, is involved in the formation of fearful memories with a more complex sensory stimulus pattern. It remains controversial, however, whether the auditory cortex is also required for fearful memories related to simple sensory stimuli. In the present study, we found that, 1 d after training, the temporary inactivation of either the most anterior region of the auditory cortex, including the primary (Te1) cortex, or the most posterior region, which included the secondary (Te2) component, did not affect the retention of recent memories, which is consistent with the current literature. However, at this time point, the inactivation of the entire auditory cortices completely prevented the formation of new memories. Amnesia was site specific, was not due to auditory stimulus perception or processing, and was strictly related to interference with memory consolidation processes. Strikingly, at a late time interval 4 d after training, blocking the posterior part (encompassing the Te2) alone impaired memory retention, whereas the inactivation of the anterior part (encompassing the Te1) left memory unaffected. Together, these data show that the auditory cortex is necessary for the consolidation of auditory fearful memories related to simple tones in rats. Moreover, these results suggest that, at early time intervals, memory information is processed in a distributed network composed of both the anterior and the posterior auditory cortical regions, whereas, at late time intervals, memory processing is concentrated in the most posterior part containing the Te2 region. Memories of threatening experiences undergo a prolonged process of "consolidation" to be maintained for a long time. The dynamics of fearful memory consolidation are poorly understood. Here, we show that 1 d after learning, memory is processed in a distributed network composed of both primary Te1 and secondary Te2 auditory cortices.

  13. Evolution Engines and Artificial Intelligence

    Science.gov (United States)

    Hemker, Andreas; Becks, Karl-Heinz

    In recent years, artificial intelligence has achieved great successes, mainly in the field of expert systems and neural networks. Nevertheless, the road to truly intelligent systems is still obscured. Artificial intelligence systems with a broad range of cognitive abilities are not within sight. The limited competence of such systems (brittleness) is identified as a consequence of the top-down design process. The evolution principle of nature, on the other hand, shows an alternative and elegant way to build intelligent systems. We propose to take an evolution engine as the driving force for the bottom-up development of knowledge bases and for the optimization of the problem-solving process. A novel data analysis system for the high energy physics experiment DELPHI at CERN shows the practical relevance of this idea. The system is able to reconstruct the physical processes after the collision of particles by making use of the underlying standard model of elementary particle physics. The evolution engine acts as a global controller of a population of inference engines working on the reconstruction task. By implementing the system on the Connection Machine (Model CM-2) we take full advantage of the inherent parallelization potential of the evolutionary approach.
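The control scheme described above (an evolution engine steering a population of candidate solutions) can be illustrated with a minimal genetic algorithm. The operators and parameters below are illustrative only; they are not those of the DELPHI system, which evolved populations of inference engines rather than bit strings:

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=40, rng=None):
    """Minimal evolution engine: binary genomes, tournament selection,
    one-point crossover, and bit-flip mutation."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # binary tournament: the fitter of two random individuals wins
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genome_len)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:                   # occasional mutation
                j = rng.randrange(genome_len)
                child[j] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(sum)  # maximize the number of 1-bits (the "one-max" toy problem)
```

The fitness function plays the role of the global controller's quality measure; because each individual is evaluated independently, the population maps naturally onto parallel hardware, which is the point made about the CM-2 above.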

  14. The development of working memory capacity and fluid intelligence in children

    OpenAIRE

    Engel de Abreu, Pascale; Gathercole, S; Conway, A

    2010-01-01

    A longitudinal study was conducted to investigate the relationship between working memory capacity and fluid intelligence and how this relationship develops in early childhood. The major aim was to determine which aspect of the working memory system – short-term storage or executive attention – drives the relationship with fluid intelligence. A sample of 119 children was followed from kindergarten to second grade and completed multiple assessments of short-term memory, wor...

  15. Manipulation of Auditory Inputs as Rehabilitation Therapy for Maladaptive Auditory Cortical Reorganization

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    2018-01-01

    Neurophysiological and neuroimaging data suggest that the brains of not only children but also adults are reorganized based on sensory inputs and behaviors. Plastic changes in the brain are generally beneficial; however, maladaptive cortical reorganization in the auditory cortex may lead to hearing disorders such as tinnitus and hyperacusis. Recent studies attempted to noninvasively visualize pathological neural activity in the living human brain and reverse maladaptive cortical reorganization by the suitable manipulation of auditory inputs in order to alleviate detrimental auditory symptoms. The effects of the manipulation of auditory inputs on the maladaptively reorganized brain are reviewed herein. The findings obtained indicate that rehabilitation therapy based on the manipulation of auditory inputs is an effective and safe approach for hearing disorders. The appropriate manipulation of sensory inputs guided by the visualization of pathological brain activities using recent neuroimaging techniques may contribute to the establishment of new clinical applications for affected individuals.

  16. Speech processing: from peripheral to hemispheric asymmetry of the auditory system.

    Science.gov (United States)

    Lazard, Diane S; Collette, Jean-Louis; Perrot, Xavier

    2012-01-01

    Language processing from the cochlea to auditory association cortices shows side-dependent specificities with an apparent left hemispheric dominance. The aim of this article was to propose to nonspeech specialists a didactic review of two complementary theories about hemispheric asymmetry in speech processing. Starting from anatomico-physiological and clinical observations of auditory asymmetry and interhemispheric connections, this review then presents behavioral (dichotic listening paradigm) as well as functional (functional magnetic resonance imaging and positron emission tomography) experiments that assessed hemispheric specialization for speech processing. Even though speech at an early phonological level is regarded as being processed bilaterally, a left-hemispheric dominance exists for higher-level processing. This asymmetry may arise from a segregation of the speech signal, broken apart within nonprimary auditory areas in two distinct temporal integration windows--a fast one on the left and a slower one on the right--modeled through the asymmetric sampling in time theory, or from a spectro-temporal trade-off, with a higher temporal resolution in the left hemisphere and a higher spectral resolution in the right hemisphere, modeled through the spectral/temporal resolution trade-off theory. Both theories deal with the concept that lower-order tuning principles for the acoustic signal might drive higher-order organization for speech processing. However, the precise nature, mechanisms, and origin of speech processing asymmetry are still being debated. Finally, an example of hemispheric asymmetry alteration, which has direct clinical implications, is given through the case of auditory aging that mixes peripheral disorder and modifications of central processing. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.

  17. Auditory temporal preparation induced by rhythmic cues during concurrent auditory working memory tasks.

    Science.gov (United States)

    Cutanda, Diana; Correa, Ángel; Sanabria, Daniel

    2015-06-01

    The present study investigated whether participants can develop temporal preparation driven by auditory isochronous rhythms when concurrently performing an auditory working memory (WM) task. In Experiment 1, participants had to respond to an auditory target presented after a regular or an irregular sequence of auditory stimuli while concurrently performing a Sternberg-type WM task. Results showed that participants responded faster after regular compared with irregular rhythms and that this effect was not affected by WM load; however, the lack of a significant main effect of WM load made it difficult to draw any conclusion regarding the influence of the dual-task manipulation in Experiment 1. In order to enhance dual-task interference, Experiment 2 combined the auditory rhythm procedure with an auditory N-Back task, which required WM updating (monitoring and coding of the information) and was presumably more demanding than the mere rehearsal of the WM task used in Experiment 1. Results now clearly showed dual-task interference effects (slower reaction times [RTs] in the high- vs. the low-load condition). However, such interference did not affect temporal preparation induced by rhythms, with faster RTs after regular than after irregular sequences in the high-load and low-load conditions. These results revealed that secondary tasks demanding memory updating, relative to tasks just demanding rehearsal, produced larger interference effects on overall RTs in the auditory rhythm task. Nevertheless, rhythm regularity exerted a strong temporal preparation effect that survived the interference of the WM task even when both tasks competed for processing resources within the auditory modality. (c) 2015 APA, all rights reserved).

  18. Short-term plasticity in auditory cognition.

    Science.gov (United States)

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  19. Leukoaraiosis significantly worsens driving performance of ordinary older drivers.

    Directory of Open Access Journals (Sweden)

    Kimihiko Nakano

    BACKGROUND: Leukoaraiosis is defined as extracellular space caused mainly by atherosclerotic or demyelinated changes in the brain tissue and is commonly found in the brains of healthy older people. A significant association between leukoaraiosis and traffic crashes was reported in our previous study; however, the reason for this is still unclear. METHOD: This paper presents a comprehensive evaluation of driving performance in ordinary older drivers with leukoaraiosis. First, the degree of leukoaraiosis was examined in 33 participants, who underwent an actual-vehicle driving examination on a standard driving course, and a driver skill rating was also collected while the driver carried out a paced auditory serial addition test, which is a calculating task given verbally. At the same time, a steering entropy method was used to estimate steering operation performance. RESULTS: The experimental results indicated that a normal older driver with leukoaraiosis was readily affected by external disturbances and made more operation errors and steered less smoothly than one without leukoaraiosis during driving; at the same time, their steering skill significantly deteriorated. CONCLUSIONS: Leukoaraiosis worsens the driving performance of older drivers because of their increased vulnerability to distraction.
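The steering entropy measure referred to above is commonly computed along the lines of Boer's method: predict each steering sample from the previous three by second-order extrapolation, then take the entropy of the prediction-error distribution. A minimal sketch follows; the bin edges and quantile choice are illustrative assumptions, not necessarily the settings used in this study:

```python
import math

def steering_entropy(theta, alpha=0.90, nbins=9):
    """Steering entropy in the spirit of Boer's method (illustrative sketch).
    theta: sampled steering-wheel angles. Returns the Shannon entropy
    (base nbins) of second-order prediction errors; 0 means the steering
    trace is perfectly predictable (smooth)."""
    # 1. second-order Taylor prediction of each sample from the previous three
    errors = []
    for i in range(3, len(theta)):
        d1 = theta[i - 1] - theta[i - 2]
        d2 = theta[i - 2] - theta[i - 3]
        pred = theta[i - 1] + d1 + 0.5 * (d1 - d2)
        errors.append(theta[i] - pred)
    # 2. bin edges scaled by the alpha-quantile of |error|
    q = sorted(abs(e) for e in errors)[int(alpha * (len(errors) - 1))]
    if q == 0:                      # perfectly predictable trace
        return 0.0
    edges = [f * q for f in (-5, -2.5, -1, -0.5, 0.5, 1, 2.5, 5)]
    counts = [0] * nbins
    for e in errors:
        counts[sum(e > edge for edge in edges)] += 1
    # 3. Shannon entropy, log base nbins, so the result lies in [0, 1]
    n = len(errors)
    return -sum(c / n * math.log(c / n, nbins) for c in counts if c)
```

Jerky, error-corrective steering produces large prediction errors spread across many bins and hence higher entropy, which is how the measure captures the degraded steering smoothness reported for drivers with leukoaraiosis.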

  20. Intelligent networks recent approaches and applications in medical systems

    CERN Document Server

    Ahamed, Syed V

    2013-01-01

    This textbook offers an insightful study of the intelligent Internet-driven revolutionary and fundamental forces at work in society. Readers will have access to tools and techniques to mentor and monitor these forces rather than be driven by changes in Internet technology and flow of money. These submerged social and human forces form a powerful synergistic foursome web of (a) processor technology, (b) evolving wireless networks of the next generation, (c) the intelligent Internet, and (d) the motivation that drives individuals and corporations. In unison, the technological forces can tear

  1. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels.

  2. The Effect of Working Memory Training on Auditory Stream Segregation in Auditory Processing Disorders Children

    OpenAIRE

    Abdollah Moossavi; Saeideh Mehrkian; Yones Lotfi; Soghrat Faghih zadeh; Hamed Adjedi

    2015-01-01

    Objectives: This study investigated the efficacy of working memory training for improving working memory capacity and related auditory stream segregation in auditory processing disorders children. Methods: Fifteen subjects (9-11 years), clinically diagnosed with auditory processing disorder participated in this non-randomized case-controlled trial. Working memory abilities and auditory stream segregation were evaluated prior to beginning and six weeks after completing the training program...

  3. Aberrant interference of auditory negative words on attention in patients with schizophrenia.

    Directory of Open Access Journals (Sweden)

    Norichika Iwashiro

    Previous research suggests that deficits in attention-emotion interaction are implicated in schizophrenia symptoms. Although disruption in auditory processing is crucial in the pathophysiology of schizophrenia, deficits in interaction between emotional processing of auditorily presented language stimuli and auditory attention have not yet been clarified. To address this issue, the current study used a dichotic listening task to examine 22 patients with schizophrenia and 24 age-, sex-, parental socioeconomic background-, handedness-, dexterous ear-, and intelligence quotient-matched healthy controls. The participants completed a word recognition task on the attended side in which a word with emotionally valenced content (negative/positive/neutral) was presented to one ear and a different neutral word was presented to the other ear. Participants selectively attended to either ear. In the control subjects, presentation of negative but not positive word stimuli provoked a significantly prolonged reaction time compared with presentation of neutral word stimuli. This interference effect for negative words existed whether or not subjects directed attention to the negative words. This interference effect was significantly smaller in the patients with schizophrenia than in the healthy controls. Furthermore, the smaller interference effect was significantly correlated with severe positive symptoms and delusional behavior in the patients with schizophrenia. The present findings suggest that aberrant interaction between semantic processing of negative emotional content and auditory attention plays a role in production of positive symptoms in schizophrenia.

  4. Frequency locking in auditory hair cells: Distinguishing between additive and parametric forcing

    Science.gov (United States)

    Edri, Yuval; Bozovic, Dolores; Yochelis, Arik

    2016-10-01

    The auditory system displays remarkable sensitivity and frequency discrimination, attributes shown to rely on an amplification process that involves a mechanical as well as a biochemical response. Models that display proximity to an oscillatory onset (also known as Hopf bifurcation) exhibit a resonant response to distinct frequencies of incoming sound, and can explain many features of the amplification phenomenology. To understand the dynamics of this resonance, frequency locking is examined in a system near the Hopf bifurcation and subject to two types of driving forces: additive and parametric. Derivation of a universal amplitude equation that contains both forcing terms enables a study of their relative impact on the hair cell response. In the parametric case, although the resonant solutions are 1 : 1 frequency locked, they show the coexistence of solutions obeying a phase shift of π, a feature typical of the 2 : 1 resonance. Different characteristics are predicted for the transition from unlocked to locked solutions, leading to smooth or abrupt dynamics in response to different types of forcing. The theoretical framework provides a more realistic model of the auditory system, which incorporates a direct modulation of the internal control parameter by an applied drive. The results presented here can be generalized to many other media, including Faraday waves, chemical reactions, and elastically driven cardiomyocytes, which are known to exhibit resonant behavior.
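The distinction between the two forcing types can be made concrete with a generic amplitude (normal-form) equation near a Hopf bifurcation; the notation below is illustrative, not necessarily the authors' exact formulation. Writing the oscillation as x(t) ≈ A(t)e^{iωt} + c.c., the complex amplitude A obeys, in the frame rotating at the drive frequency,

```latex
\frac{dA}{dt} = (\mu + i\nu)\,A \;-\; (1 + i\beta)\,\lvert A\rvert^{2} A
              \;+\; f_{a} \;+\; f_{p}\,\bar{A},
```

where \mu is the distance from the oscillatory onset, \nu the detuning between the drive and the natural frequency, and \beta the nonlinear frequency shift. The additive drive enters as the inhomogeneous term f_a, whereas the parametric drive enters multiplied by the conjugate amplitude \bar{A}, because modulating the control parameter at twice the natural frequency couples A to \bar{A}. This coupling is what produces the pair of phase-shifted (by π) locked solutions noted in the abstract.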

  5. The influence of music on mental effort and driving performance.

    Science.gov (United States)

    Ünal, Ayça Berfu; Steg, Linda; Epstude, Kai

    2012-09-01

    The current research examined the influence of loud music on driving performance, and whether mental effort mediated this effect. Participants (N=69) drove in a driving simulator either with or without listening to music. In order to test whether music would have similar effects on driving performance in different situations, we manipulated the simulated traffic environment such that the driving context consisted of both complex and monotonous driving situations. In addition, we systematically kept track of drivers' mental load by making the participants verbally report their mental effort at certain moments while driving. We found that listening to music increased mental effort while driving, irrespective of the driving situation being complex or monotonous, providing support to the general assumption that music can be a distracting auditory stimulus while driving. However, drivers who listened to music performed as well as the drivers who did not listen to music, indicating that music did not impair their driving performance. Importantly, the increases in mental effort while listening to music pointed out that drivers try to regulate their mental effort as a cognitive compensatory strategy to deal with task demands. Interestingly, we observed significant improvements in driving performance in two of the driving situations. It seems that mental effort might mediate the effect of music on driving performance in situations requiring sustained attention. Other process variables, such as arousal and boredom, should also be incorporated into study designs in order to reveal more about the nature of how music affects driving. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Modularity in Sensory Auditory Memory

    OpenAIRE

    Clement, Sylvain; Moroni, Christine; Samson, Séverine

    2004-01-01

    The goal of this paper was to review various experimental and neuropsychological studies that support the modular conception of auditory sensory memory or auditory short-term memory. Based on initial findings demonstrating that the verbal sensory memory system can be dissociated from a general auditory memory store at the functional and anatomical levels, we reported a series of studies that provided evidence in favor of multiple auditory sensory stores specialized in retaining eit...

  7. Beneficial auditory and cognitive effects of auditory brainstem implantation in children.

    Science.gov (United States)

    Colletti, Liliana

    2007-09-01

    This preliminary study demonstrates the development of hearing ability and shows that there is a significant improvement in some cognitive parameters related to selective visual/spatial attention and to fluid or multisensory reasoning, in children fitted with auditory brainstem implantation (ABI). The improvement in cognitive parameters is due to several factors, among which there is certainly, as demonstrated in the literature on cochlear implants (CIs), the activation of the auditory sensory canal, which was previously absent. The findings of the present study indicate that children with cochlear or cochlear nerve abnormalities with associated cognitive deficits should not be excluded from ABI implantation. The indications for ABI have been extended over the last 10 years to adults with non-tumoral (NT) cochlear or cochlear nerve abnormalities that cannot benefit from CI. We demonstrated that the ABI with surface electrodes may provide sufficient stimulation of the central auditory system in adults for open set speech recognition. These favourable results motivated us to extend ABI indications to children with profound hearing loss who were not candidates for a CI. This study investigated the performances of young deaf children undergoing ABI, in terms of their auditory perceptual development and their non-verbal cognitive abilities. In our department from 2000 to 2006, 24 children aged 14 months to 16 years received an ABI for different tumour and non-tumour diseases. Two children had NF2 tumours. Eighteen children had bilateral cochlear nerve aplasia. In this group, nine children had associated cochlear malformations, two had unilateral facial nerve agenesia and two had combined microtia, aural atresia and middle ear malformations. Four of these children had previously been fitted elsewhere with a CI with no auditory results. One child had bilateral incomplete cochlear partition (type II); one child, who had previously been fitted unsuccessfully elsewhere

  8. What determines auditory distraction? On the roles of local auditory changes and expectation violations.

    Directory of Open Access Journals (Sweden)

    Jan P Röer

    Both the acoustic variability of a distractor sequence and the degree to which it violates expectations are important determinants of auditory distraction. In four experiments we examined the relative contribution of local auditory changes on the one hand and expectation violations on the other hand in the disruption of serial recall by irrelevant sound. We present evidence for a greater disruption by auditory sequences ending in unexpected steady state distractor repetitions compared to auditory sequences with expected changing state endings even though the former contained fewer local changes. This effect was demonstrated with piano melodies (Experiment 1) and speech distractors (Experiment 2). Furthermore, it was replicated when the expectation violation occurred after the encoding of the target items (Experiment 3), indicating that the items' maintenance in short-term memory was disrupted by attentional capture and not their encoding. This seems to be primarily due to the violation of a model of the specific auditory distractor sequences because the effect vanishes and even reverses when the experiment provides no opportunity to build up a specific neural model about the distractor sequence (Experiment 4). Nevertheless, the violation of abstract long-term knowledge about auditory regularities seems to cause a small and transient capture effect: Disruption decreased markedly over the course of the experiments indicating that participants habituated to the unexpected distractor repetitions across trials. The overall pattern of results adds to the growing literature that the degree to which auditory distractors violate situation-specific expectations is a more important determinant of auditory distraction than the degree to which a distractor sequence contains local auditory changes.

  9. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimuli. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields after approximately 110 ms after stimuli. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2014-01-01

    Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory verbal memory tests, showed the effects of aging on auditory verbal memory performance.

  11. [Assessment of the efficiency of the auditory training in children with dyslalia and auditory processing disorders].

    Science.gov (United States)

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam

    2011-01-01

    To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. The material consisted of 50 children aged 7-9 years. Children with articulation disorders stayed under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and speech therapists' and psychologist's consultations. Additionally, a set of electrophysiological examinations was performed - registration of the N2, P2, and P300 waves - together with a psychoacoustic test of central auditory functions: FPT - frequency pattern test. Next, the children took part in the regular auditory training and attended speech therapy. Speech assessment followed treatment and therapy; again, psychoacoustic tests were performed and P300 cortical potentials were recorded. After that, statistical analyses were performed. Analyses revealed that the application of auditory training in patients with dyslalia and other central auditory disorders is very efficient. Auditory training may be a very efficient therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.

  12. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle by Annabel Stehli. In her book, Mrs. Stehli describes before-and-after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  13. Driver's behavioural changes with new intelligent transport system interventions at railway level crossings--A driving simulator study.

    Science.gov (United States)

    Larue, Grégoire S; Kim, Inhi; Rakotonirainy, Andry; Haworth, Narelle L; Ferreira, Luis

    2015-08-01

    Improving safety at railway level crossings is an important issue for the Australian transport system. Governments, the rail industry and road organisations have tried a variety of countermeasures for many years to improve railway level crossing safety. New types of intelligent transport system (ITS) interventions are now emerging due to the availability and affordability of technology. These interventions target both actively and passively protected railway level crossings and attempt to address drivers' errors at railway crossings, which are mainly a failure to detect the crossing or the train, and misjudgement of the train's approach speed and distance. This study aims to assess the effectiveness of three emerging ITS interventions that the rail industry is considering implementing in Australia: a visual in-vehicle ITS, an audio in-vehicle ITS, and an on-road flashing beacons intervention. The evaluation was conducted on an advanced driving simulator with 20 participants per trialled technology; each participant drove once without any technology and once with one of the ITS interventions. Every participant drove through a range of active and passive crossings with and without trains approaching. Their approach speed to the crossing, head movements and stopping compliance were measured. Results showed that driver behaviour changed with all three ITS interventions at passive crossings, while limited effects were found at active crossings, even with reduced visibility. The on-road intervention trialled was unsuccessful in improving driver behaviour; the audio and visual ITS improved driver behaviour when a train was approaching. A trend toward worsening driver behaviour with the visual ITS was observed when no trains were approaching. This trend was not observed for the audio ITS intervention, which appears to be the ITS intervention with the highest potential for improving safety at passive crossings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Neural Segregation of Concurrent Speech: Effects of Background Noise and Reverberation on Auditory Scene Analysis in the Ventral Cochlear Nucleus.

    Science.gov (United States)

    Sayles, Mark; Stasiak, Arkadiusz; Winter, Ian M

    2016-01-01

    Concurrent complex sounds (e.g., two voices speaking at once) are perceptually disentangled into separate "auditory objects". This neural processing often occurs in the presence of acoustic-signal distortions from noise and reverberation (e.g., in a busy restaurant). A difference in periodicity between sounds is a strong segregation cue under quiet, anechoic conditions. However, noise and reverberation exert differential effects on speech intelligibility under "cocktail-party" listening conditions. Previous neurophysiological studies have concentrated on understanding auditory scene analysis under ideal listening conditions. Here, we examine the effects of noise and reverberation on periodicity-based neural segregation of concurrent vowels /a/ and /i/, in the responses of single units in the guinea-pig ventral cochlear nucleus (VCN): the first processing station of the auditory brain stem. In line with human psychoacoustic data, we find reverberation significantly impairs segregation when vowels have an intonated pitch contour, but not when they are spoken on a monotone. In contrast, noise impairs segregation independent of intonation pattern. These results are informative for models of speech processing under ecologically valid listening conditions, where noise and reverberation abound.

  15. Auditory and visual interactions between the superior and inferior colliculi in the ferret.

    Science.gov (United States)

    Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K

    2015-05-01

    The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than in the retinally driven superficial SC layers and earlier than in the deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  16. Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions.

    Science.gov (United States)

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2016-12-01

    Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Exploratory study of the relationship between the musical, visuospatial, bodily-kinesthetic intelligence and drive creativity in the process of learning

    Directory of Open Access Journals (Sweden)

    Paula MARCHENA CRUZ

    2017-12-01

    Currently, the Spanish educational system focuses its attention on the development of priority subjects such as language and mathematics over secondary subjects such as music (Palacios, 2006), without considering the numerous neuropsychological studies that provide new theories of mind and learning which could positively influence the transformation of current educational models (Martin-Lobo, 2015). This research aims to determine the relation between musical intelligence, bodily-kinesthetic intelligence, visuospatial intelligence and motor creativity in a sample of 5-year-old students in the last year of Early Childhood Education. The instrument used to assess the three intelligences, based on Gardner's theory, was the Multiple Intelligences questionnaire for children of pre-school age (Prieto and Ballester, 2003); for the evaluation of motor creativity, the Test of Creative Thinking in Action and Movement (Torrance, Reisman and Floyd, 1981) was used. A descriptive and correlational statistical analysis (using the Pearson correlation index) was performed with Microsoft Excel and the EzAnalyze add-in. The results indicated no significant relationship between musical intelligence and motor creativity (p = 0.988), between visuospatial intelligence and motor creativity (p = 0.992), or between bodily-kinesthetic intelligence and motor creativity (p = 0.636). There were, however, significant relations between musical and visuospatial intelligence (p = 0.000), between musical and bodily-kinesthetic intelligence (p = 0.000), and between bodily-kinesthetic and visuospatial intelligence (p = 0.025).
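    The correlational analysis in the study above reduces to computing Pearson's r between paired intelligence and creativity scores. As an illustrative sketch only (the authors used Microsoft Excel with the EzAnalyze add-in, not code like this), the coefficient can be computed as:

    ```python
    from math import sqrt

    def pearson_r(x, y):
        """Pearson product-moment correlation for two equal-length samples."""
        n = len(x)
        mx = sum(x) / n
        my = sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Perfectly linear scores correlate at r = 1.0
    print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # → 1.0
    ```

    Values near +1 or -1 indicate strong linear association, as in the significant inter-intelligence correlations reported; values near 0 match the non-significant intelligence-creativity pairs.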

  18. Listenmee and Listenmee smartphone application: synchronizing walking to rhythmic auditory cues to improve gait in Parkinson's disease.

    Science.gov (United States)

    Lopez, William Omar Contreras; Higuera, Carlos Andres Escalante; Fonoff, Erich Talamoni; Souza, Carolina de Oliveira; Albicker, Ulrich; Martinez, Jairo Alberto Espinoza

    2014-10-01

    Evidence supports the use of rhythmic external auditory signals to improve gait in PD patients (Arias & Cudeiro, 2008; Kenyon & Thaut, 2000; McIntosh, Rice & Thaut, 1994; McIntosh et al., 1997; Morris, Iansek, & Matyas, 1994; Thaut, McIntosh, & Rice, 1997; Suteerawattananon, Morris, Etnyre, Jankovic, & Protas, 2004; Willems, Nieuwboer, Chavert, & Desloovere, 2006). However, few prototypes are available for daily use and, to our knowledge, none utilizes a smartphone application allowing individualized sounds and cadence. Therefore, we analyzed the effects on gait of Listenmee®, an intelligent glasses system with a portable auditory device, and present its smartphone application, the Listenmee app®, offering over 100 different sounds and an adjustable metronome to individualize the cueing rate, as well as its smartwatch with an accelerometer to detect the magnitude and direction of the proper acceleration and to track calorie count, sleep patterns, step count and daily distances. The present study included patients with idiopathic PD who presented gait disturbances, including freezing. Auditory rhythmic cues were delivered through Listenmee®. Performance was analyzed in a motion and gait analysis laboratory. The results revealed significant improvements in gait performance on three major dependent variables: walking speed by 38.1%, cadence by 28.1% and stride length by 44.5%. Our findings suggest that auditory cueing through Listenmee® may significantly enhance gait performance. Further studies are needed to elucidate the potential role and maximize the benefits of these portable devices. Copyright © 2014 Elsevier B.V. All rights reserved.
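    An adjustable metronome like the one described above maps a target cadence to an inter-beat interval. A minimal sketch of that conversion (the function name and interface are illustrative assumptions, not the Listenmee app's actual API):

    ```python
    def cue_interval_ms(cadence_steps_per_min):
        """Inter-beat interval in milliseconds for a metronome that
        delivers one auditory cue per step at the given cadence."""
        if cadence_steps_per_min <= 0:
            raise ValueError("cadence must be positive")
        return 60_000.0 / cadence_steps_per_min

    # A cadence of 100 steps/min needs a cue every 600 ms
    print(cue_interval_ms(100))  # → 600.0
    ```

    Individualizing the cueing rate then amounts to choosing the cadence argument per patient, e.g. slightly above their baseline cadence.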

  19. Multimodal Detection of Music Performances for Intelligent Emotion Based Lighting

    DEFF Research Database (Denmark)

    Bonde, Esben Oxholm Skjødt; Hansen, Ellen Kathrine; Triantafyllidis, Georgios

    2016-01-01

    Playing music is about conveying emotions, and the lighting at a concert can help do that. However, new and unknown bands that play at smaller venues, and bands that lack the budget to hire a dedicated light technician, miss out on lighting that would help them convey the emotions… of what they play. In this paper it is investigated whether it is possible to develop an intelligent system that, through multimodal input, detects the intended emotions of the played music and adjusts the lighting accordingly in real time. A concept for such an intelligent lighting system… is developed and described. Through existing research on music and emotion, as well as on musicians' body movements related to the emotion they want to convey, a set of cues is defined. This includes amount, speed, fluency and regularity for the visual, and level, tempo, articulation and timbre for the auditory…

  20. Improvement of intelligibility of ideal binary-masked noisy speech by adding background noise.

    Science.gov (United States)

    Cao, Shuyang; Li, Liang; Wu, Xihong

    2011-04-01

    When a target-speech/masker mixture is processed with the signal-separation technique known as the ideal binary mask (IBM), the intelligibility of target speech is remarkably improved in both normal-hearing and hearing-impaired listeners. Intelligibility of speech can also be improved by filling in speech gaps with unmodulated broadband noise. This study investigated whether intelligibility of target speech in the IBM-treated target-speech/masker mixture can be further improved by adding a broadband-noise background. The results of this study show that following the IBM manipulation, which remarkably released target speech from speech-spectrum noise, foreign-speech, or native-speech masking (experiment 1), adding a broadband-noise background with a signal-to-noise ratio of no less than 4 dB significantly improved intelligibility of target speech when the masker was either noise (experiment 2) or speech (experiment 3). The results suggest that because adding the noise background makes the silent regions in the time-frequency domain of the IBM-treated mixture shallower, the abrupt transient changes in the mixture are smoothed and the perceived continuity of target-speech components is enhanced, leading to improved target-speech intelligibility. The findings are useful for advancing computational auditory scene analysis, hearing-aid/cochlear-implant design, and the understanding of speech perception under "cocktail-party" conditions.
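    The IBM manipulation referenced above keeps a time-frequency cell when its local target-to-masker ratio exceeds a criterion and discards it otherwise. A minimal sketch, assuming pre-computed linear energies on a time-frequency grid (this is the standard textbook definition, not the authors' implementation):

    ```python
    import math

    def ideal_binary_mask(target_energy, masker_energy, lc_db=0.0):
        """Ideal binary mask: for each time-frequency cell, mask = 1 when
        the local target-to-masker ratio exceeds the criterion lc_db (in dB),
        else mask = 0. Inputs are nested lists of linear energies."""
        mask = []
        for t_row, m_row in zip(target_energy, masker_energy):
            row = []
            for t, m in zip(t_row, m_row):
                if t <= 0:
                    snr_db = float("-inf")   # no target energy in this cell
                elif m <= 0:
                    snr_db = float("inf")    # no masker energy in this cell
                else:
                    snr_db = 10.0 * math.log10(t / m)
                row.append(1 if snr_db > lc_db else 0)
            mask.append(row)
        return mask

    # Two frames x three channels of (linear) energies
    target = [[4.0, 1.0, 9.0], [0.5, 2.0, 2.0]]
    masker = [[1.0, 4.0, 1.0], [1.0, 1.0, 2.0]]
    print(ideal_binary_mask(target, masker))  # → [[1, 0, 1], [0, 1, 0]]
    ```

    Applying the mask to the mixture's time-frequency representation retains only target-dominated cells, which is what creates the silent regions the added background noise then fills in.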

  1. Developmental programming of auditory learning

    Directory of Open Access Journals (Sweden)

    Melania Puddu

    2012-10-01

    The basic structures involved in the development of auditory function, and consequently in language acquisition, are directed by the genetic code, but the expression of individual genes may be altered by exposure to environmental factors: if favorable, these orient development in the proper direction, leading it towards normality; if unfavorable, they deviate it from its physiological course. Early sensory experience during the foetal period (i.e. the intrauterine noise floor, sounds coming from the outside attenuated by the uterine filter, particularly the mother's voice, and the modifications they induce at the cochlear level) represents the first example of programming in one of the earliest critical periods in the development of the auditory system. This review examines the factors that influence the developmental programming of auditory learning from the womb to infancy. In particular, it focuses on the following points: the prenatal auditory experience and the plastic phenomena presumably induced by it in the auditory system, from the basilar membrane to the cortex; the involvement of these phenomena in language acquisition and in the perception of language communicative intention after birth; and the consequences of auditory deprivation during critical periods of auditory development (i.e. premature interruption of foetal life).

  2. Auditory-model based assessment of the effects of hearing loss and hearing-aid compression on spectral and temporal resolution

    DEFF Research Database (Denmark)

    Kowalewski, Borys; MacDonald, Ewen; Strelcyk, Olaf

    2016-01-01

    Most state-of-the-art hearing aids apply multi-channel dynamic-range compression (DRC). Such designs have the potential to emulate, at least to some degree, the processing that takes place in the healthy auditory system. One way to assess hearing-aid performance is to measure speech intelligibility. However, due to the complexity of speech and its robustness to spectral and temporal alterations, the effects of DRC on speech perception have been mixed and controversial. The goal of the present study was to obtain a clearer understanding of the interplay between hearing loss and DRC by means… Outcomes were simulated using the auditory processing model of Jepsen et al. (2008), with the front end modified to include effects of hearing impairment and DRC. The results were compared to experimental data from normal-hearing and hearing-impaired listeners.
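    Dynamic-range compression, as discussed above, applies a level-dependent gain in each frequency channel. A minimal single-channel sketch of the static input/output rule (the threshold and ratio values are illustrative assumptions, not parameters from the study):

    ```python
    def drc_output_level(input_db, threshold_db=50.0, ratio=3.0):
        """Static input/output curve of a simple dynamic-range compressor:
        below threshold the level passes through unchanged; above it,
        each dB of input yields only 1/ratio dB of output."""
        if input_db <= threshold_db:
            return input_db
        return threshold_db + (input_db - threshold_db) / ratio

    # 30 dB above a 50 dB threshold is compressed 3:1 down to 10 dB above it
    print(drc_output_level(80.0))  # → 60.0
    print(drc_output_level(40.0))  # → 40.0
    ```

    A multi-channel hearing aid applies such a curve per band, with attack/release smoothing; it is that band-wise level remapping whose effect on speech cues the study's auditory model is used to probe.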

  3. Smart sensorless prediction diagnosis of electric drives

    Science.gov (United States)

    Kruglova, TN; Glebov, NA; Shoshiashvili, ME

    2017-10-01

    In this paper, a method for the diagnosis and prediction of the technical condition of an electric motor is discussed, based on an artificial-intelligence combination of fuzzy logic and neural networks. The fuzzy sub-model determines the degree of development of each fault. The neural network determines the state of the object as a whole and the number of serviceable work periods for the motor's actuator. The combination of these techniques reduces the learning time and increases the forecasting accuracy. The experimental implementation of the method for electric drive diagnosis and associated equipment was carried out at different speeds. As a result, it was found that this method allows troubleshooting the drive at any given speed.
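    The fuzzy sub-model described above assigns each fault a degree of development. A common building block for such models is a triangular membership function; the sketch below (including the "moderate bearing wear" set and its breakpoints) is an illustrative assumption, not the authors' actual model:

    ```python
    def triangular_membership(x, a, b, c):
        """Degree (0..1) to which x belongs to a triangular fuzzy set
        rising from zero at a to a peak at b and falling to zero at c."""
        if x <= a or x >= c:
            return 0.0
        if x == b:
            return 1.0
        if x < b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)

    # Hypothetical "moderate bearing wear" set peaking at a vibration level of 5
    print(triangular_membership(4.0, 2.0, 5.0, 8.0))  # ≈ 0.667
    ```

    Each fault's measured feature (vibration, current, temperature) would be fuzzified this way, and the resulting degrees fed, alongside other features, to the neural network that judges the overall drive state.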

  4. Speech recognition and parent-ratings from auditory development questionnaires in children who are hard of hearing

    Science.gov (United States)

    McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Objectives: Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HA) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-report auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Design: Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years of age. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children Rating Scale, and an adaptation of the Speech, Spatial and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open and Closed set task, Early Speech Perception Test, Lexical Neighborhood Test, and Phonetically-balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared to peers with normal hearing matched for age, maternal educational level and nonverbal intelligence. The effects of aided audibility, HA use and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Results: Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. Children with greater

  5. Drive Control System for Pipeline Crawl Robot Based on CAN Bus

    International Nuclear Information System (INIS)

    Chen, H J; Gao, B T; Zhang, X H; Deng, Z Q

    2006-01-01

    The drive control system plays an important role in a pipeline robot. In order to inspect flaws and corrosion in seabed crude-oil pipelines, an original mobile pipeline robot was developed, comprising a crawler drive unit, a power and monitoring unit, a central control unit, and an ultrasonic inspection device. The CAN bus connects these function units and provides a reliable information channel. Considering the limited space, a compact hardware system was designed based on an ARM processor with two CAN controllers. With a made-to-order CAN protocol for the crawl robot, an intelligent drive control system was developed. The implementation of the crawl robot demonstrates that the presented drive control scheme meets the motion control requirements of the underwater pipeline crawl robot

  6. Drive Control System for Pipeline Crawl Robot Based on CAN Bus

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H J [Department of Electrical Engineering, Harbin Institute of Technology Harbin, Heilongjiang, 150001 (China); Gao, B T [Department of Electrical Engineering, Harbin Institute of Technology Harbin, Heilongjiang, 150001 (China); Zhang, X H [Department of Electrical Engineering, Harbin Institute of Technology Harbin, Heilongjiang, 150001 (China); Deng, Z Q [School of Mechanical Engineering, Harbin Institute of Technology Harbin, Heilongjiang, 150001 (China)

    2006-10-15

    The drive control system plays an important role in a pipeline robot. In order to inspect flaws and corrosion in seabed crude-oil pipelines, an original mobile pipeline robot was developed, comprising a crawler drive unit, a power and monitoring unit, a central control unit, and an ultrasonic inspection device. The CAN bus connects these function units and provides a reliable information channel. Considering the limited space, a compact hardware system was designed based on an ARM processor with two CAN controllers. With a made-to-order CAN protocol for the crawl robot, an intelligent drive control system was developed. The implementation of the crawl robot demonstrates that the presented drive control scheme meets the motion control requirements of the underwater pipeline crawl robot.
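    The made-to-order CAN protocol mentioned in these records is not published, so the payload layout below is purely a hypothetical illustration of how a drive command might be packed into a CAN frame's 8-byte data field:

    ```python
    import struct

    def pack_drive_command(node_id, speed_mm_s, direction, flags=0):
        """Pack a hypothetical 8-byte CAN data payload for a crawler drive
        command. Assumed layout (NOT the paper's actual protocol):
          byte 0     target node id
          byte 1     direction (0 = forward, 1 = reverse)
          bytes 2-3  speed in mm/s, big-endian unsigned
          byte 4     flag bits
          bytes 5-7  reserved (zero padding)"""
        return struct.pack(">BBHB3x", node_id, direction, speed_mm_s, flags)

    payload = pack_drive_command(node_id=0x12, speed_mm_s=250, direction=0)
    print(payload.hex())  # → "120000fa00000000"
    ```

    On the robot, such a payload would be handed to the CAN controller driver together with an arbitration ID identifying the crawler drive unit; the fixed 8-byte frame keeps the custom protocol within standard CAN 2.0 limits.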

  7. Auditory short-term memory in the primate auditory cortex

    OpenAIRE

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active "working memory" bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive sho...

  8. A comparative study of simple auditory reaction time in blind (congenitally) and sighted subjects.

    Science.gov (United States)

    Gandhi, Pritesh Hariprasad; Gokhale, Pradnya A; Mehta, H B; Shah, C J

    2013-07-01

    Reaction time is the time interval between the application of a stimulus and the appearance of an appropriate voluntary response by a subject. It involves stimulus processing, decision making, and response programming. Reaction time studies have been popular due to their implications in sports physiology. Reaction time has been widely studied because its practical implications may be of great consequence; e.g., a slower than normal reaction time while driving can have grave results. The aims were to study simple auditory reaction time in congenitally blind subjects and in age- and sex-matched sighted subjects, and to compare simple auditory reaction time between congenitally blind subjects and healthy control subjects. The study was carried out in two groups: the first of 50 congenitally blind subjects, and the second comprising 50 healthy controls. It was carried out on the Multiple Choice Reaction Time Apparatus, Inco Ambala Ltd. (accuracy ±0.001 s), in a sitting position at Government Medical College and Hospital, Bhavnagar and at a Blind School, PNR campus, Bhavnagar, Gujarat, India. Simple auditory reaction time responses to four different types of sound (horn, bell, ring, and whistle) were recorded in both groups. According to our study, there is no significant difference in reaction time between congenitally blind and normal healthy persons. Blind individuals commonly utilize tactual and auditory cues for information and orientation, and their reliance on touch and audition, together with more practice in using these modalities to guide behavior, is often reflected in better performance of blind relative to sighted participants in tactile or auditory discrimination tasks; however, there is no difference in reaction time between congenitally blind and sighted people.

  9. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual

  10. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Auditory dysfunction is a common clinical symptom that can have profound effects on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD, including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular disease often involves the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  11. Convergent evolution of complex brains and high intelligence.

    Science.gov (United States)

    Roth, Gerhard

    2015-12-19

    Within the animal kingdom, complex brains and high intelligence have evolved several to many times independently, e.g. among ecdysozoans in some groups of insects (e.g. blattoid, dipteran, hymenopteran taxa), among lophotrochozoans in octopodid molluscs, among vertebrates in teleosts (e.g. cichlids), corvid and psittacid birds, and cetaceans, elephants and primates. High levels of intelligence are invariantly bound to multimodal centres such as the mushroom bodies in insects, the vertical lobe in octopodids, the pallium in birds and the cerebral cortex in primates, all of which contain highly ordered associative neuronal networks. The driving forces for high intelligence may vary among the mentioned taxa, e.g. needs for spatial learning and foraging strategies in insects and cephalopods, for social learning in cichlids, instrumental learning and spatial orientation in birds and social as well as instrumental learning in primates. © 2015 The Author(s).

  12. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation, and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of the auditory feedback of the first formant frequency during production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.

  13. Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding.

    Science.gov (United States)

    Atilgan, Huriye; Town, Stephen M; Wood, Katherine C; Jones, Gareth P; Maddox, Ross K; Lee, Adrian K C; Bizley, Jennifer K

    2018-02-07

    How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  14. Interaction of language, auditory and memory brain networks in auditory verbal hallucinations

    NARCIS (Netherlands)

    Curcic-Blake, Branislava; Ford, Judith M.; Hubl, Daniela; Orlov, Natasza D.; Sommer, Iris E.; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W.; David, Olivier; Mulert, Christoph; Woodward, Todd S.; Aleman, Andre

    Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of

  15. Contribution to intelligent vehicle platoon control

    OpenAIRE

    Zhao , Jin

    2010-01-01

    This PhD thesis is dedicated to the control strategies for intelligent vehicle platoon in highway with the main aims of alleviating traffic congestion and improving traffic safety. After a review of the different existing automated driving systems, the vehicle longitudinal and lateral dynamic models are derived. Then, the longitudinal control and lateral control strategies are studied respectively. At first, the longitudinal control system is designed to be hierarchical with an upper level co...

  16. Procedures for central auditory processing screening in schoolchildren.

    Science.gov (United States)

    Carvalho, Nádia Giulian de; Ubiali, Thalita; Amaral, Maria Isabel Ramos do; Santos, Maria Francisca Colella

    2018-03-22

    Central auditory processing screening in schoolchildren has led to debates in the literature, both regarding the protocol to be used and the importance of actions aimed at prevention and promotion of auditory health. Defining effective screening procedures for central auditory processing is a challenge in Audiology. This study aimed to analyze the scientific research on central auditory processing screening and discuss the effectiveness of the procedures utilized. A search was performed in the SciELO and PUBMed databases by two researchers. The descriptors used in Portuguese and English were: auditory processing, screening, hearing, auditory perception, children, auditory tests and their respective terms in Portuguese. Inclusion criteria: original articles involving schoolchildren, auditory screening of central auditory skills, and articles in Portuguese or English. Exclusion criteria: studies with adult and/or neonatal populations, peripheral auditory screening only, and duplicate articles. After applying the described criteria, 11 articles were included. At the international level, the central auditory processing screening methods used were: the screening test for auditory processing disorder and its revised version, the screening test for auditory processing, the scale of auditory behaviors, the children's auditory performance scale, and Feather Squadron. In the Brazilian scenario, the procedures used were the simplified auditory processing assessment and Zaidan's battery of tests. At the international level, the screening test for auditory processing and Feather Squadron batteries stand out as the most comprehensive evaluations of hearing skills. At the national level, there is a paucity of studies that use methods evaluating more than four skills and normalized by age group. The use of the simplified auditory processing assessment and questionnaires can be complementary in the search for an easy-access and low-cost alternative in the auditory screening of Brazilian schoolchildren. Interactive tools should also be proposed.

  17. Sleep Disrupts High-Level Speech Parsing Despite Significant Basic Auditory Processing.

    Science.gov (United States)

    Makov, Shiri; Sharon, Omer; Ding, Nai; Ben-Shachar, Michal; Nir, Yuval; Zion Golumbic, Elana

    2017-08-09

    The extent to which the sleeping brain processes sensory information remains unclear. This is particularly true for continuous and complex stimuli such as speech, in which information is organized into hierarchically embedded structures. Recently, novel metrics for assessing the neural representation of continuous speech have been developed using noninvasive brain recordings that have thus far only been tested during wakefulness. Here we investigated, for the first time, the sleeping brain's capacity to process continuous speech at different hierarchical levels using a newly developed Concurrent Hierarchical Tracking (CHT) approach that allows monitoring the neural representation and processing-depth of continuous speech online. Speech sequences were compiled with syllables, words, phrases, and sentences occurring at fixed time intervals such that different linguistic levels correspond to distinct frequencies. This enabled us to distinguish their neural signatures in brain activity. We compared the neural tracking of intelligible versus unintelligible (scrambled and foreign) speech across states of wakefulness and sleep using high-density EEG in humans. We found that neural tracking of stimulus acoustics was comparable across wakefulness and sleep and similar across all conditions regardless of speech intelligibility. In contrast, neural tracking of higher-order linguistic constructs (words, phrases, and sentences) was only observed for intelligible speech during wakefulness and could not be detected at all during nonrapid eye movement or rapid eye movement sleep. These results suggest that, whereas low-level auditory processing is relatively preserved during sleep, higher-level hierarchical linguistic parsing is severely disrupted, thereby revealing the capacity and limits of language processing during sleep. 
SIGNIFICANCE STATEMENT Despite the persistence of some sensory processing during sleep, it is unclear whether high-level cognitive processes such as speech parsing are preserved.

  18. Modeling speech intelligibility based on the signal-to-noise envelope power ratio

    DEFF Research Database (Denmark)

    Jørgensen, Søren

    of modulation frequency selectivity in the auditory processing of sound with a decision metric for intelligibility that is based on the signal-to-noise envelope power ratio (SNRenv). The proposed speech-based envelope power spectrum model (sEPSM) is demonstrated to account for the effects of stationary...... through three commercially available mobile phones. The model successfully accounts for the performance across the phones in conditions with a stationary speech-shaped background noise, whereas deviations were observed in conditions with “Traffic” and “Pub” noise. Overall, the results of this thesis...
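
    The SNRenv decision metric described above can be sketched numerically. The following is an illustrative simplification, not the sEPSM itself: it uses a crude rectified envelope and a single modulation-frequency FFT bin in place of the model's modulation filterbank, and a synthetic 4 Hz amplitude modulation stands in for speech. All parameter values are assumptions for the demonstration.

    ```python
    import numpy as np

    def envelope_power(x, fs, fmod=4.0):
        """Power of the temporal envelope of x at modulation frequency fmod (Hz)."""
        env = np.abs(x)                                # crude rectified envelope
        env = env - env.mean()                         # remove the DC component
        spec = np.abs(np.fft.rfft(env)) ** 2 / env.size
        return spec[int(round(fmod * env.size / fs))]  # bin closest to fmod

    def snr_env(mixture, noise, fs, fmod=4.0):
        """Signal-to-noise envelope power ratio, floored at a small positive value."""
        p_mix = envelope_power(mixture, fs, fmod)
        p_noise = envelope_power(noise, fs, fmod)
        return max(p_mix - p_noise, 1e-12) / p_noise

    fs = 8000
    t = np.arange(fs) / fs                             # one second of signal
    rng = np.random.default_rng(0)
    carrier, masker = rng.standard_normal(fs), rng.standard_normal(fs)
    speech = (1 + np.sin(2 * np.pi * 4.0 * t)) * carrier  # 4 Hz "speech" modulation
    snr_quiet = snr_env(speech + 0.1 * masker, 0.1 * masker, fs)
    snr_loud = snr_env(speech + 3.0 * masker, 3.0 * masker, fs)
    ```

    As the metric predicts, raising the noise level reduces the envelope-domain SNR (`snr_loud` is far below `snr_quiet`), which in the model maps to lower predicted intelligibility.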

  19. Auditory prediction during speaking and listening.

    Science.gov (United States)

    Sato, Marc; Shiller, Douglas M

    2018-02-02

    In the present EEG study, the role of auditory prediction in speech was explored through the comparison of auditory cortical responses during active speaking and passive listening to the same acoustic speech signals. Two manipulations of sensory prediction accuracy were used during the speaking task: (1) a real-time change in vowel F1 feedback (reducing prediction accuracy relative to unaltered feedback) and (2) presenting a stable auditory target rather than a visual cue to speak (enhancing auditory prediction accuracy during baseline productions, and potentially enhancing the perturbing effect of altered feedback). While subjects compensated for the F1 manipulation, no difference between the auditory-cue and visual-cue conditions was found. Under visually-cued conditions, reduced N1/P2 amplitude was observed during speaking vs. listening, reflecting a motor-to-sensory prediction. In addition, a significant correlation was observed between the magnitude of the behavioral compensatory F1 response and the magnitude of this speaking-induced suppression (SIS) for P2 during the altered auditory feedback phase, where a stronger compensatory decrease in F1 was associated with a stronger SIS effect. Finally, under the auditory-cued condition, an auditory repetition-suppression effect was observed in N1/P2 amplitude during the listening task but not active speaking, suggesting that auditory predictive processes during speaking and passive listening are functionally distinct. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already show adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene, and which are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
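
    The stimulus-specific adaptation described above can be illustrated with a toy rate model. This is a hypothetical sketch, not a model from the literature: each stimulus frequency depletes its own response "resource", which recovers exponentially between presentations, so a rare deviant in an oddball sequence evokes a larger response than the adapted standard. Time constants and depletion fractions are illustrative assumptions.

    ```python
    import numpy as np

    def ssa_responses(sequence, tau=0.6, depletion=0.4, soa=0.3):
        """Rate responses of a model neuron with frequency-specific adaptation.

        Each frequency label in `sequence` has its own resource in [0, 1] that is
        partly depleted every time that frequency occurs and recovers
        exponentially (time constant tau, in s) between presentations.
        """
        resources, last_time, responses = {}, {}, []
        for i, f in enumerate(sequence):
            t = i * soa                                   # onset-to-onset interval
            r = resources.get(f, 1.0)
            if f in last_time:                            # recover since last occurrence
                r = 1.0 - (1.0 - r) * np.exp(-(t - last_time[f]) / tau)
            responses.append(r)                           # response ∝ available resource
            resources[f] = r * (1.0 - depletion)          # synaptic depression
            last_time[f] = t
        return responses

    # Oddball sequence: a frequent standard "A" with one rare deviant "B".
    seq = ["A"] * 9 + ["B"] + ["A"] * 5
    resp = ssa_responses(seq)
    ```

    The standard's response decays toward a steady adapted level while the first deviant still evokes the full, unadapted response, mimicking the deviance-sensitive behavior described in the review.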

  1. Intelligence in Artificial Intelligence

    OpenAIRE

    Datta, Shoumen Palit Austin

    2016-01-01

    The elusive quest for intelligence in artificial intelligence prompts us to consider that instituting human-level intelligence in systems may be (still) in the realm of utopia. In about a quarter century, we have witnessed the winter of AI (1990) being transformed and transported to the zenith of tabloid fodder about AI (2015). The discussion at hand is about the elements that constitute the canonical idea of intelligence. The delivery of intelligence as a pay-per-use-service, popping out of ...

  2. Autonomous intelligent cars: proof that the EPSRC Principles are future-proof

    Science.gov (United States)

    de Cock Buning, Madeleine; de Bruin, Roeland

    2017-07-01

    Principle 2 of the EPSRC's principles of robotics (AISB workshop on Principles of Robotics, 2016) proves to be future proof when applied to the current state of the art of law and technology surrounding autonomous intelligent cars (AICs). Humans, not AICs, are responsible agents. AICs should be designed and operated, as far as is practicable, to comply with existing laws and fundamental rights and freedoms, including privacy by design. It will be shown that some legal questions arising from autonomous intelligent driving technology can be answered by the technology itself.

  3. Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas.

    Science.gov (United States)

    Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine

    2008-03-01

    Temporal envelope processing in the human auditory cortex has an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude modulated white noises were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of the cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by the directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results on twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, the Brodmann area (BA) 22 and the posterior part of T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in SAC and T1Post of the left hemisphere independent of the modulation frequency (MF), and in the left BA22 for MFs 8 and 16Hz, compared to those in the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing upon a stream originating from primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.
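
    The amplitude-modulated white-noise stimuli used in this study are straightforward to generate. The sketch below is illustrative only (the study's levels, durations, and calibration are not reproduced): it builds AM noise at several modulation frequencies and confirms that the envelope spectrum peaks at the modulation frequency (MF).

    ```python
    import numpy as np

    def am_noise(fs, dur, fmod, depth=1.0, seed=0):
        """White noise whose amplitude is sinusoidally modulated at fmod Hz."""
        rng = np.random.default_rng(seed)
        t = np.arange(int(fs * dur)) / fs
        return (1 + depth * np.sin(2 * np.pi * fmod * t)) * rng.standard_normal(t.size)

    def envelope_peak_hz(x, fs):
        """Frequency (Hz) of the largest non-DC component of the rectified envelope."""
        env = np.abs(x) - np.abs(x).mean()
        spec = np.abs(np.fft.rfft(env))
        spec[0] = 0.0                                  # ignore any residual DC
        return np.fft.rfftfreq(x.size, 1 / fs)[spec.argmax()]

    fs = 16000
    peaks = [envelope_peak_hz(am_noise(fs, 2.0, mf), fs) for mf in (4, 8, 16)]
    ```

    Recovering the MF from the envelope spectrum is essentially what a signal-to-noise ratio associated with the response to AM measures on the cortical side: how strongly the response follows the imposed modulation.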

  4. The role of intelligence in the fight against Salafist jihadist terrorism

    Directory of Open Access Journals (Sweden)

    Gustavo Díaz Matey

    2017-09-01

    Full Text Available There is clear academic and political consensus on the importance of the intelligence services to the fight against the terrorist threat, but the development of so-called “global terrorism” has substantially altered the way the threat can be contained and, as a result, certain practices of the intelligence services have changed too. This article studies how the terrorist threat has changed over recent years, focusing on the specific case of Salafist jihadist terrorism. It analyses the main consequences of these changes for the intelligence services both internally (processes of obtaining information and analysis) and externally (in terms of cooperation). Finally, it shows the importance of reaching a proper understanding as the basis for driving measures in the medium and long term that allow the intelligence services to pre-empt the evolution of ideas based on radicalisation and the use of extreme political violence.

  5. Tactile feedback improves auditory spatial localization

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2014-10-01

    Full Text Available Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three-sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject’s forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.

  6. Artificial intelligence-based condition monitoring for practical electrical drives

    OpenAIRE

    Ashari, Djoni; Pislaru, Crinela; Ball, Andrew; Gu, Fengshou

    2012-01-01

    The main types of existing Condition Monitoring methods (MCSA, GA, IAS) for electrical drives are described. Then the steps for the design of expert systems are presented: problem identification and analysis, system specification, development tool selection, knowledge base, prototyping and testing. The employment of SOMA (Self-Organizing Migrating Algorithm) for the optimization of ambient vibration energy harvesting is analyzed. The power electronics devices are becoming smaller ...

  7. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds.

    Science.gov (United States)

    Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G

    2017-03-01

    We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.

  8. Auditory preferences of young children with and without hearing loss for meaningful auditory-visual compound stimuli.

    Science.gov (United States)

    Zupan, Barbra; Sussman, Joan E

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity only affected modality preferences in adults who showed a strong visual preference to unfamiliar stimuli only. The similar degree of auditory responses in children with hearing loss to those from children with normal hearing is an original finding and lends support to an auditory emphasis for habilitation. Readers will be able to (1) Describe the pattern of modality preferences reported in young children without hearing loss; (2) Recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) Understand the role of familiarity in modality preferences in children with and without hearing loss.

  9. Auditory verbal hallucinations and cognitive functioning in healthy individuals.

    Science.gov (United States)

    Daalman, Kirstin; van Zandvoort, Martine; Bootsman, Florian; Boks, Marco; Kahn, René; Sommer, Iris

    2011-11-01

    Auditory verbal hallucinations (AVH) are a characteristic symptom in schizophrenia, and also occur in the general, non-clinical population. In schizophrenia patients, several specific cognitive deficits, such as in speech processing, working memory, source memory, attention, inhibition, episodic memory and self-monitoring have been associated with auditory verbal hallucinations. Such associations are interesting, as they may identify specific cognitive traits that constitute a predisposition for AVH. However, it is difficult to disentangle a specific relation with AVH in patients with schizophrenia, as so many other factors can affect the performance on cognitive tests. Examining the cognitive profile of healthy individuals experiencing AVH may reveal a more direct association between AVH and aberrant cognitive functioning in a specific domain. For the current study, performance in executive functioning, memory (both short- and long-term), processing speed, spatial ability, lexical access, abstract reasoning, language and intelligence performance was compared between 101 healthy individuals with AVH and 101 healthy controls, matched for gender, age, handedness and education. Although performance of both groups was within the normal range, not clinically impaired, significant differences between the groups were found in the verbal domain as well as in executive functioning. Performance on all other cognitive domains was similar in both groups. The predisposition to experience AVH is associated with lower performance in executive functioning and aberrant language performance. This association might be related to difficulties in the inhibition of irrelevant verbal information. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Sensory ERPs predict differences in working memory span and fluid intelligence.

    Science.gov (United States)

    Brumback, Carrie R; Low, Kathy A; Gratton, Gabriele; Fabiani, Monica

    2004-02-09

    The way our brain reacts to sensory stimulation may provide important clues about higher-level cognitive function and its operation. Here we show that short-latency sensory ERPs differ between subjects with high and low memory span, as well as between subjects scoring high and low on a fluid intelligence test. Our findings also suggest that this link between sensory responses and complex cognitive tasks is modality specific (visual sensory measures correlate with visuo-spatial tasks whereas auditory sensory measures correlate with verbal tasks). We interpret these findings as indicating that people's effectiveness in controlling attention and gating sensory information is a critical determinant of individual differences in complex cognitive abilities.

  11. The Relationship between Types of Attention and Auditory Processing Skills: Reconsidering Auditory Processing Disorder Diagnosis

    Science.gov (United States)

    Stavrinos, Georgios; Iliadou, Vassiliki-Maria; Edwards, Lindsey; Sirimanna, Tony; Bamiou, Doris-Eva

    2018-01-01

    Measures of attention have been found to correlate with specific auditory processing tests in samples of children suspected of Auditory Processing Disorder (APD), but these relationships have not been adequately investigated. Despite evidence linking auditory attention and deficits/symptoms of APD, measures of attention are not routinely used in APD diagnostic protocols. The aim of the study was to examine the relationship between auditory and visual attention tests and auditory processing tests in children with APD, and to assess whether a proposed diagnostic protocol for APD including measures of attention could provide useful information for APD management. A pilot study including 27 children, aged 7–11 years, referred for APD assessment was conducted. The validated test of everyday attention for children, with visual and auditory attention tasks, the listening in spatialized noise sentences test, the children's communication checklist questionnaire and tests from a standard APD diagnostic test battery were administered. Pearson's partial correlation analysis examined the relationship between these tests, and Cochran's Q test compared the proportions of diagnosis under each proposed battery. Divided auditory and divided auditory-visual attention strongly correlated with the dichotic digits test (r = 0.68), and a subset of children was identified by the attention battery as having Attention Deficits (ADs). The proposed APD battery excluding AD cases did not have a significantly different diagnosis proportion than the standard APD battery. Finally, the newly proposed diagnostic battery, identifying an inattentive subtype of APD, identified five children who would have otherwise been considered not to have ADs. The findings show that a subgroup of children with APD demonstrates underlying sustained and divided attention deficits. Attention deficits in children with APD appear to be centred around the auditory modality, but further examination of types of attention in both modalities is warranted.

  12. Auditory interfaces: The human perceiver

    Science.gov (United States)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  13. The Study of Intelligent Vehicle Navigation Path Based on Behavior Coordination of Particle Swarm.

    Science.gov (United States)

    Han, Gaining; Fu, Weiping; Wang, Wen

    2016-01-01

    In the behavior dynamics model, behavior competition leads to oscillation of the intelligent vehicle's navigation path, because the time-variant target behavior and the obstacle avoidance behavior occur simultaneously. Considering the safety and real-time requirements of intelligent vehicles, the particle swarm optimization (PSO) algorithm is proposed to solve these problems by optimizing the weight coefficients of the heading angle and the path velocity. Firstly, according to the behavior dynamics model, the fitness function is defined in terms of the intelligent vehicle's driving characteristics, the distance between the vehicle and the obstacle, and the distance between the vehicle and the target. Secondly, the behavior coordination parameters that minimize the fitness function are obtained by the particle swarm optimization algorithm. Finally, the simulation results show that the optimization method and its fitness function reduce perturbations of the planned path and improve real-time performance and reliability.
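
    The PSO step described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the fitness function below is a hypothetical stand-in that penalizes distance from a target point and proximity to an obstacle, and the two optimized dimensions play the role of the heading-angle and path-velocity weight coefficients.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def fitness(w):
        """Hypothetical stand-in fitness: target attraction plus obstacle penalty."""
        w1, w2 = w
        dist_target = (w1 - 0.7) ** 2 + (w2 - 0.3) ** 2    # target term (assumed)
        obstacle_penalty = np.exp(-10 * (w1 - 0.2) ** 2)   # obstacle term (assumed)
        return dist_target + 0.5 * obstacle_penalty

    n, dim, iters = 30, 2, 100
    pos = rng.uniform(0, 1, (n, dim))                      # particle positions
    vel = np.zeros((n, dim))                               # particle velocities
    pbest = pos.copy()                                     # personal bests
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()                 # global best

    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, 1)                     # keep weights in [0, 1]
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    ```

    With standard inertia and acceleration parameters (0.7, 1.5, 1.5), the swarm settles near the fitness minimum within a few dozen iterations; the converged `gbest` plays the role of the coordinated weight pair.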

  14. Auditory attention activates peripheral visual cortex.

    Directory of Open Access Journals (Sweden)

    Anthony D Cate

    Full Text Available BACKGROUND: Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear. METHODOLOGY/PRINCIPAL FINDINGS: We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency. CONCLUSIONS/SIGNIFICANCE: Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

  15. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  16. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences

    Directory of Open Access Journals (Sweden)

    Stanislava Knyazeva

    2018-01-01

    Full Text Available This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman’s population separation model of auditory streaming.

  17. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences.

    Science.gov (United States)

    Knyazeva, Stanislava; Selezneva, Elena; Gorkin, Alexander; Aggelopoulos, Nikolaos C; Brosch, Michael

    2018-01-01

    This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman's population separation model of auditory streaming.
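
    The stimulus construction described in this record (identical amplitude spectrum, different phase spectrum) can be sketched as follows. The fundamental, harmonic range, and duration below are illustrative assumptions, not the study's values, and the crest factor is used here simply to show that cosine-phase complexes are peakier in the time domain even though the amplitude spectra match.

    ```python
    import numpy as np

    def harmonic_complex(f0, harmonics, phases, fs=16000, dur=0.1):
        """Sum of equal-amplitude harmonics of f0 with the given starting phases."""
        t = np.arange(int(fs * dur)) / fs
        return sum(np.sin(2 * np.pi * h * f0 * t + p) for h, p in zip(harmonics, phases))

    f0 = 100.0
    harmonics = list(range(10, 21))        # unresolved harmonics 10..20 (assumed range)
    n = len(harmonics)
    rng = np.random.default_rng(0)

    cosine = harmonic_complex(f0, harmonics, [np.pi / 2] * n)        # all cosine phase
    alternating = harmonic_complex(
        f0, harmonics, [np.pi / 2 if i % 2 == 0 else 0.0 for i in range(n)]
    )
    random_ph = harmonic_complex(f0, harmonics, rng.uniform(0, 2 * np.pi, n))

    def crest(x):
        """Crest factor: peak amplitude over RMS; higher means a peakier waveform."""
        return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))
    ```

    All three complexes contain the same harmonic magnitudes, so any perceptual or neuronal difference between them must arise from the phase spectrum, i.e., from the temporal envelope.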

  18. Neurofeedback-Based Enhancement of Single-Trial Auditory Evoked Potentials: Treatment of Auditory Verbal Hallucinations in Schizophrenia.

    Science.gov (United States)

    Rieger, Kathryn; Rarra, Marie-Helene; Diaz Hernandez, Laura; Hubl, Daniela; Koenig, Thomas

    2018-03-01

Auditory verbal hallucinations depend on a broad neurobiological network ranging from the auditory system to language as well as memory-related processes. As part of this, the auditory N100 event-related potential (ERP) component is attenuated in patients with schizophrenia, with stronger attenuation occurring during auditory verbal hallucinations. Changes in the N100 component presumably reflect disturbed responsiveness of the auditory system toward external stimuli in schizophrenia. With this premise, we investigated the therapeutic utility of neurofeedback training to modulate the auditory-evoked N100 component in patients with schizophrenia and associated auditory verbal hallucinations. Ten patients completed electroencephalography neurofeedback training for modulation of N100 (treatment condition) or another, unrelated component, P200 (control condition). On a behavioral level, only the control group showed a tendency for symptom improvement in the Positive and Negative Syndrome Scale total score in a pre/post comparison (t(4) = 2.71, P = .054); however, no significant differences were found in specific hallucination-related symptoms (t(7) = -0.53, P = .62). There was no significant overall effect of neurofeedback training on ERP components in our paradigm; however, we were able to identify different learning patterns and found a correlation between learning and improvement in auditory verbal hallucination symptoms across training sessions (r = 0.664, n = 9, P = .05). Interpreted cautiously given the small sample size, this effect stems primarily from the treatment group (r = 0.97, n = 4, P = .03). In particular, a within-session learning parameter showed utility for predicting symptom improvement with neurofeedback training. In conclusion, patients with schizophrenia and associated auditory verbal hallucinations who exhibit a learning pattern more characterized by within-session aptitude may benefit from electroencephalography neurofeedback

  19. Pre-Attentive Auditory Processing of Lexicality

    Science.gov (United States)

    Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan

    2004-01-01

    The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…

  20. Modeling and Recognizing Driver Behavior Based on Driving Data: A Survey

    Directory of Open Access Journals (Sweden)

    Wenshuo Wang

    2014-01-01

Full Text Available In recent years, modeling and recognizing driver behavior have become crucial to understanding intelligent transport systems, human-vehicle systems, and intelligent vehicle systems. A wide range of both mathematical identification methods and modeling methods of driver behavior are presented in this paper from the control point of view, based on driving data such as the brake/throttle pedal position and the steering wheel angle, among others. Subsequently, the driver's characteristics derived from the driver model are embedded into advanced driver assistance systems, and the evaluation and verification of vehicle systems based on the driver model are described.
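Identification of a driver model from logged driving data, as surveyed above, can be sketched as an ordinary least-squares fit of an ARX-style linear model. The signal names (steering angle, lane offset) and the model order here are illustrative assumptions, not taken from any specific model in the survey.

```python
import numpy as np

def fit_driver_model(steer, offset, order=1):
    """Least-squares fit of steer[t] ~ sum_i a_i*steer[t-i] + b_i*offset[t-i].

    steer:  logged steering-wheel angle (illustrative driver output)
    offset: logged lane offset (illustrative road/vehicle input)
    Returns the stacked coefficient vector [a_1..a_order, b_1..b_order].
    """
    X, y = [], []
    for t in range(order, len(steer)):
        # Regressors: the previous `order` samples of each signal.
        X.append(np.concatenate([steer[t - order:t], offset[t - order:t]]))
        y.append(steer[t])
    coef, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)
    return coef
```

On synthetic data generated from known coefficients, the fit recovers them exactly, which is a quick sanity check before applying the same recipe to real brake, throttle, or steering logs.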

  1. Symbolic and Sub-Symbolic Robotic Intelligence Control System (SS-RICS) Users Manual

    Science.gov (United States)

    2017-10-01

representations to drive intelligent systems, and the second focuses on a mathematical approach using distributed representations constructed with structures... 3.1.1 Zip File: Extract the files from the zip file to a directory on your computer. Open My Computer. Browse to the directory where you... instructions. 3.1.2 CD: Open My Computer. Browse to the CD-ROM drive on your computer. Find the setup.exe file. Approved for public release

  2. The Efficacy of Rehearsal Strategy on Auditory Short-Term Memory of Educable 5 to 8 Years Old Children with Down Syndrome

    Directory of Open Access Journals (Sweden)

    Esmaeil Esmaieli

    2014-03-01

Full Text Available Objective: One of the problems of children with Down syndrome is their low performance in retaining information and recalling it from memory. The present study aimed to determine the efficacy of rehearsal strategy on the auditory short-term memory of educable 5- to 8-year-old children with Down syndrome. Materials & Methods: In this quasi-experimental study, 24 children (14 boys and 10 girls) were selected by convenience sampling from the Iranian Down Syndrome Charity Association and evaluated with Raven's Progressive Matrices. The children were then randomly assigned to experimental and control groups (12 individuals each). The experimental group participated in 8 group sessions (two sessions per week, each lasting 30 minutes) and was trained in the rehearsal strategy. All subjects were evaluated with the Expressive-Auditory Memory Sequence Test before and after the intervention sessions. Data were analyzed by multiple analysis of covariance. Results: The analysis of covariance showed that the rehearsal strategy led to increases in digit span, word span, and auditory short-term memory (P<0.01) in the experimental group compared to the control group. Conclusion: It can be concluded that rehearsal strategy training is an effective method for promoting the digit span, word span, and auditory short-term memory of children with Down syndrome, and has important implications for their education.

  3. Plasticity in the Primary Auditory Cortex, Not What You Think it is: Implications for Basic and Clinical Auditory Neuroscience

    Science.gov (United States)

    Weinberger, Norman M.

    2013-01-01

Standard beliefs that the function of the primary auditory cortex (A1) is the analysis of sound have proven to be incorrect. Its involvement in learning, memory and other complex processes in both animals and humans is now well-established, although often not appreciated. Auditory coding is strongly modified by associative learning, evident as associative representational plasticity (ARP) in which the representation of an acoustic dimension, like frequency, is re-organized to emphasize a sound that has become behaviorally important. For example, the frequency tuning of a cortical neuron can be shifted to match that of a significant sound and the representational area of sounds that acquire behavioral importance can be increased. ARP depends on the learning strategy used to solve an auditory problem and the increased cortical area confers greater strength of auditory memory. Thus, primary auditory cortex is involved in cognitive processes, transcending its assumed function of auditory stimulus analysis. The implications for basic neuroscience and clinical auditory neuroscience are presented and suggestions for remediation of auditory processing disorders are introduced. PMID:25356375

  4. Design and Optimization of Intelligent Service Robot Suspension System Using Dynamic Model

    International Nuclear Information System (INIS)

    Choi, Seong Hoon; Park, Tae Won; Lee, Soo Ho; Jung, Sung Pil; Jun, Kab Jin; Yoon, J. W.

    2010-01-01

Recently, an intelligent service robot is being developed for use in guiding visitors and providing them with information about buildings at public institutions. The intelligent robot has a sensor at the bottom to recognize its location. Four wheels, arranged in the form of a lozenge, support the robot. This robot cannot be operated on uneven ground because its driving parts are attached to the main body that contains the important internal components. Continuous impact with the ground can change the precise positions of the components and weaken the connection between each structural part. In this paper, the design of the suspension system for such a robot is described. The dynamic model of the robot is created, and the driving characteristics of the robot with the designed suspension system are simulated. Additionally, the suspension system is optimized to reduce the impact transmitted to the robot's components.

  5. The Cyber Intelligence Challenge of Asyngnotic Networks

    Directory of Open Access Journals (Sweden)

    Edward M. Roche

    2015-09-01

    Full Text Available The intelligence community is facing a new type of organization, one enabled by the world’s information and communications infrastructure. These asyngnotic networks operate without leadership and are self-organizing in nature. They pose a threat to national security because they are difficult to detect in time for intelligence to provide adequate warning. Social network analysis and link analysis are important tools but can be supplemented by application of neuroscience principles to understand the forces that drive asyngnotic self-organization and triggering of terrorist events. Applying Living Systems Theory (LST to a terrorist attack provides a useful framework to identify hidden asyngnotic networks. There is some antecedent work in propaganda analysis that may help uncover hidden asyngnotic networks, but computerized SIGINT methods face a number of challenges.

  6. Artificial Intelligence and Moral intelligence

    OpenAIRE

    Laura Pana

    2008-01-01

We discuss the thesis that the implementation of a moral code in the behaviour of artificial intelligent systems needs a specific form of human and artificial intelligence, not just an abstract intelligence. We present intelligence as a system with an internal structure and the structural levels of the moral system, as well as certain characteristics of artificial intelligent agents which can/must be treated as (1) individual entities (with a complex, specialized, autonomous or self-determined,...

  7. Automated feedback to foster safe driving in young drivers: phase 2 : traffic tech.

    Science.gov (United States)

    2015-12-01

    Intelligent Speed Adaptation (ISA) provides a promising approach to reduce speeding. A core principle of ISA is real-time feedback that lets drivers know when they are driving over the speed limit. The overall goal of the study was to provide insight...

  8. Investigation of in-vehicle speech intelligibility metrics for normal hearing and hearing impaired listeners

    Science.gov (United States)

    Samardzic, Nikolina

The effectiveness of in-vehicle speech communication can be a good indicator of the perception of the overall vehicle quality and customer satisfaction. Currently available speech intelligibility metrics do not account for essential parameters needed for a complete and accurate evaluation of in-vehicle speech intelligibility. These include the directivity and the distance of the talker with respect to the listener, binaural listening, the hearing profile of the listener, vocal effort, and multisensory hearing. In the first part of this research the effectiveness of in-vehicle application of these metrics is investigated in a series of studies to reveal their shortcomings, including a wide range of scores resulting from each of the metrics for a given measurement configuration and vehicle operating condition. In addition, the nature of a possible correlation between the scores obtained from each metric is unknown. The metrics and the subjective perception of speech intelligibility using, for example, the same speech material have not been compared in the literature. As a result, in the second part of this research, an alternative method for speech intelligibility evaluation is proposed for use in the automotive industry by utilizing a virtual reality driving environment for ultimately setting targets, including the associated statistical variability, for future in-vehicle speech intelligibility evaluation. The Speech Intelligibility Index (SII) was evaluated at the sentence Speech Reception Threshold (sSRT) for various listening situations and hearing profiles using acoustic perception jury testing and a variety of talker and listener configurations and background noise. In addition, the effect of individual sources and transfer paths of sound in an operating vehicle on the vehicle interior sound, specifically their effect on speech intelligibility, was quantified in the framework of the newly developed speech intelligibility evaluation method. Lastly...
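The SII mentioned above is, at its core, a band-importance-weighted average of audibility. The sketch below follows the general approach of the standard (clip per-band SNR to [-15, +15] dB, map it to a 0-1 audibility, weight by band importance), but the band weights used in the test are illustrative, not the standard's tabulated values.

```python
import numpy as np

def simple_sii(speech_db, noise_db, importance):
    """Simplified band-importance-weighted intelligibility index.

    speech_db, noise_db: per-band levels in dB (same length).
    importance: per-band importance weights (need not sum to 1).
    Returns a value in [0, 1]; higher means more intelligible.
    """
    snr = np.clip(np.asarray(speech_db, float) - np.asarray(noise_db, float),
                  -15.0, 15.0)
    audibility = (snr + 15.0) / 30.0       # map [-15, +15] dB onto [0, 1]
    w = np.asarray(importance, float)
    return float(np.sum(w / w.sum() * audibility))
```

At 0 dB SNR in every band the index is 0.5, and once speech sits 15 dB or more above the noise in every band it saturates at 1.0, which matches the intuition that further level increases no longer help intelligibility.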

  9. A Brain System for Auditory Working Memory.

    Science.gov (United States)

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.

  10. Functional studies of the human auditory cortex, auditory memory and musical hallucinations

    International Nuclear Information System (INIS)

    Goycoolea, Marcos; Mena, Ismael; Neubauer, Sonia

    2004-01-01

Objectives: 1. To determine which areas of the cerebral cortex are activated by stimulating the left ear with pure tones, and what type of stimulation (e.g., excitatory or inhibitory) occurs in these different areas. 2. To use this information as an initial step toward developing a normative functional database for future studies. 3. To try to determine whether there is a biological substrate to the process of recalling previous auditory perceptions and, if possible, suggest a locus for auditory memory. Method: Brain perfusion single photon emission computerized tomography (SPECT) evaluation was conducted 1-2) using auditory stimulation with pure tones in 4 volunteers with normal hearing, and 3) in a patient with bilateral profound hearing loss who had auditory perception of previous musical experiences, injected with Tc99m HMPAO while she was having the sensation of hearing a well-known melody. Results: Both in the patient with auditory hallucinations and in the normal controls stimulated with pure tones, there was a statistically significant increase in perfusion in Brodmann's area 39, more intense on the right side (right to left p < 0.05). With lesser intensity there was activation in the adjacent area 40, and there was also intense activation in the executive frontal cortex areas 6, 8, 9, and 10 of Brodmann. There was also activation of area 7 of Brodmann, an audio-visual association area, more marked on the right side in the patient and in the normal stimulated controls. In the subcortical structures there was also marked activation in the patient with hallucinations in both lentiform nuclei, thalamus and caudate nuclei, again more intense in the right hemisphere (5, 4.7 and 4.2 SD above the mean, respectively, and 5, 3.3 and 3 SD above the normal mean in the left hemisphere, respectively). Similar findings were observed in normal controls. Conclusions: After auditory stimulation with pure tones in the left ear of normal female volunteers, there is bilateral activation of area 39...

  11. An analysis of nonlinear dynamics underlying neural activity related to auditory induction in the rat auditory cortex.

    Science.gov (United States)

    Noto, M; Nishikawa, J; Tateno, T

    2016-03-24

    A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successively reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self

  12. Predictors of auditory performance in hearing-aid users: The role of cognitive function and auditory lifestyle (A)

    DEFF Research Database (Denmark)

    Vestergaard, Martin David

    2006-01-01

...no objective benefit can be measured. It has been suggested that lack of agreement between various hearing-aid outcome components can be explained by individual differences in cognitive function and auditory lifestyle. We measured speech identification, self-report outcome, spectral and temporal resolution of hearing, cognitive skills, and auditory lifestyle in 25 new hearing-aid users. The purpose was to assess the predictive power of the nonauditory measures while looking at the relationships between measures from various auditory-performance domains. The results showed that only moderate correlation exists between objective and subjective hearing-aid outcome. Different self-report outcome measures showed a different amount of correlation with objective auditory performance. Cognitive skills were found to play a role in explaining speech performance and spectral and temporal abilities, and auditory lifestyle...

  13. Central auditory processing outcome after stroke in children

    Directory of Open Access Journals (Sweden)

    Karla M. I. Freiria Elias

    2014-09-01

Full Text Available Objective: To investigate central auditory processing in children with unilateral stroke and to verify whether the hemisphere affected by the lesion influenced auditory competence. Method: 23 children (13 male) between 7 and 16 years old were evaluated through speech-in-noise tests (auditory closure), the dichotic digit test and staggered spondaic word test (selective attention), and pitch pattern and duration pattern sequence tests (temporal processing), and their results were compared with those of control children. Auditory competence was established according to performance in auditory analysis ability. Results: Similar performance between groups was verified in auditory closure ability, with pronounced deficits in selective attention and temporal processing abilities. Most children with stroke showed impaired auditory ability to a moderate degree. Conclusion: Children with stroke showed deficits in auditory processing, and the degree of impairment was not related to the hemisphere affected by the lesion.

  14. Experience and information loss in auditory and visual memory.

    Science.gov (United States)

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  15. Auditory and motor imagery modulate learning in music performance.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  16. Auditory and motor imagery modulate learning in music performance

    Science.gov (United States)

    Brown, Rachel M.; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  17. Auditory-vocal mirroring in songbirds.

    Science.gov (United States)

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role on song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  18. Noise perception in the workplace and auditory and extra-auditory symptoms referred by university professors.

    Science.gov (United States)

    Servilha, Emilse Aparecida Merlin; Delatti, Marina de Almeida

    2012-01-01

To investigate the correlation between noise in the work environment and auditory and extra-auditory symptoms referred by university professors. Eighty-five professors answered a questionnaire about identification, functional status, and health. The relationship between occupational noise and auditory and extra-auditory symptoms was investigated. Statistical analysis considered a significance level of 5%. None of the professors indicated absence of noise. Responses were grouped into Always (A) (n=21) and Not Always (NA) (n=63). Significant sources of noise were the yard and other classes, which were classified as high intensity, as well as poor acoustics and echo. There was no association between referred noise and health complaints, such as digestive, hormonal, osteoarticular, dental, circulatory, respiratory and emotional complaints. There was also no association between referred noise and hearing complaints, although group A showed a higher occurrence of responses regarding noise nuisance, hearing difficulty, dizziness/vertigo, tinnitus, and earache. There was an association between referred noise and voice alterations, with group NA presenting a higher percentage of cases with voice alterations than group A. The university environment was considered noisy; however, there was no association with auditory and extra-auditory symptoms. Hearing complaints were more evident among professors in group A. Professors' health is a multi-dimensional product and, therefore, noise cannot be considered the only aggravating factor.

  19. Relation between Working Memory Capacity and Auditory Stream Segregation in Children with Auditory Processing Disorder.

    Science.gov (United States)

    Lotfi, Yones; Mehrkian, Saiedeh; Moossavi, Abdollah; Zadeh, Soghrat Faghih; Sadjedi, Hamed

    2016-03-01

    This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9-11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests between the 2 groups. The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information.
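The negative correlations reported above are standard Pearson product-moment coefficients. A minimal sketch of that calculation on made-up scores (the data below are illustrative, not the study's):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation between two score vectors."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(xm @ ym / np.sqrt((xm @ xm) * (ym @ ym)))

# Illustrative (made-up) data: larger digit span paired with a smaller
# concurrent minimum audible angle gives a negative coefficient, the
# direction the study reports for children with APD.
span = [3, 4, 5, 6, 7]
angle = [20, 16, 12, 8, 4]
```

For this perfectly linear example, pearson_r(span, angle) comes out as -1.0; real working-memory and localization scores would of course yield intermediate values.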

  20. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning), as they practiced novel melodies, and retrieval (during Recall) of those melodies. Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): higher auditory imagery skill predicted greater temporal regularity during Recall in the

  1. The relation between working memory capacity and auditory lateralization in children with auditory processing disorders.

    Science.gov (United States)

    Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; sajedi, Hamed

    2014-11-01

    Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time (ITD) differences and inter-aural intensity (IID) differences with two stimuli (high pass and low pass noise) in nine perceived positions. Working memory capacity was evaluated using the non-word repetition, and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and localization tests between the two groups. Children in the APD group had consistently lower scores than typically developing subjects in lateralization and working memory capacity measures. The results showed that working memory capacity had a significantly negative correlation with ITD errors, especially with the high pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Intelligence Naturelle et Intelligence Artificielle

    OpenAIRE

    Dubois, Daniel

    2011-01-01

    This article presents a systemic approach to the concept of natural intelligence, with the objective of creating an artificial intelligence. Natural intelligence, human and non-human animal alike, is thus a function composed of faculties for knowing and understanding. Moreover, natural intelligence remains inseparable from its structure, namely the organs of the brain and the body. The temptation is great to endow computer systems with an artificial intelligence ...

  3. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  4. c-Fos and Arc/Arg3.1 expression in auditory and visual cortices after hearing loss: Evidence of sensory crossmodal reorganization in adult rats.

    Science.gov (United States)

    Pernia, M; Estevez, S; Poveda, C; Plaza, I; Carro, J; Juiz, J M; Merchan, M A

    2017-08-15

    Cross-modal reorganization in the auditory and visual cortices has been reported after hearing and visual deficits mostly during the developmental period, possibly underlying sensory compensation mechanisms. However, there are very few data on the existence or nature and timeline of such reorganization events during sensory deficits in adulthood. In this study, we assessed long-term changes in activity-dependent immediate early genes c-Fos and Arc/Arg3.1 in auditory and neighboring visual cortical areas after bilateral deafness in young adult rats. Specifically, we analyzed qualitatively and quantitatively c-Fos and Arc/Arg3.1 immunoreactivity at 15 and 90 days after cochlea removal. We report extensive, global loss of c-Fos and Arc/Arg3.1 immunoreactive neurons in the auditory cortex 15 days after permanent auditory deprivation in adult rats, which is partly reversed 90 days after deafness. Simultaneously, the number and labeling intensity of c-Fos- and Arc/Arg3.1-immunoreactive neurons progressively increase in neighboring visual cortical areas from 2 weeks after deafness and these changes stabilize three months after inducing the cochlear lesion. These findings support plastic, compensatory, long-term changes in activity in the auditory and visual cortices after auditory deprivation in the adult rats. Further studies may clarify whether those changes result in perceptual potentiation of visual drives on auditory regions of the adult cortex. © 2017 The Authors The Journal of Comparative Neurology Published by Wiley Periodicals, Inc.

  5. Neural circuits in auditory and audiovisual memory.

    Science.gov (United States)

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integration, and retention of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Auditory motion-specific mechanisms in the primate brain.

    Directory of Open Access Journals (Sweden)

    Colline Poirier

    2017-05-01

    Full Text Available This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream.

  7. What You Don't Notice Can Harm You: Age-Related Differences in Detecting Concurrent Visual, Auditory, and Tactile Cues.

    Science.gov (United States)

    Pitts, Brandon J; Sarter, Nadine

    2018-06-01

    Objective This research sought to determine whether people can perceive and process three nonredundant (and unrelated) signals in vision, hearing, and touch at the same time and how aging and concurrent task demands affect this ability. Background Multimodal displays have been shown to improve multitasking and attention management; however, their potential limitations are not well understood. The majority of studies on multimodal information presentation have focused on the processing of only two concurrent and, most often, redundant cues by younger participants. Method Two experiments were conducted in which younger and older adults detected and responded to a series of singles, pairs, and triplets of visual, auditory, and tactile cues in the absence (Experiment 1) and presence (Experiment 2) of an ongoing simulated driving task. Detection rates, response times, and driving task performance were measured. Results Compared to younger participants, older adults showed longer response times and higher error rates in response to cues/cue combinations. Older participants often missed the tactile cue when three cues were combined. They sometimes falsely reported the presence of a visual cue when presented with a pair of auditory and tactile signals. Driving performance suffered most in the presence of cue triplets. Conclusion People are more likely to miss information if more than two concurrent nonredundant signals are presented to different sensory channels. Application The findings from this work help inform the design of multimodal displays and ensure their usefulness across different age groups and in various application domains.

  8. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  9. Fundamental deficits of auditory perception in Wernicke's aphasia.

    Science.gov (United States)

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen

    2013-01-01

    This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA) by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
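
    The "criterion-free, adaptive measures of threshold" referred to above are commonly transformed staircase procedures. As a hedged illustration (not the authors' exact protocol), a 2-down-1-up staircase converges on the ~70.7%-correct point of the psychometric function (Levitt, 1971); here it is run against a simulated listener whose logistic psychometric function and all parameters are invented:

```python
import math
import random

def run_staircase(true_threshold, start=40.0, step=2.0, reversals_wanted=8):
    """Transformed 2-down-1-up staircase run against a simulated listener.

    The tracked stimulus level converges toward the ~70.7%-correct point
    of the listener's psychometric function (Levitt, 1971).
    """
    level, streak, direction, reversals = start, 0, None, []
    while len(reversals) < reversals_wanted:
        # Simulated listener: logistic psychometric function (invented slope)
        p_correct = 1.0 / (1.0 + math.exp(-(level - true_threshold) / 3.0))
        if random.random() < p_correct:
            streak += 1
            if streak == 2:              # two correct in a row -> make task harder
                streak = 0
                if direction == "up":    # direction change -> record a reversal
                    reversals.append(level)
                direction = "down"
                level -= step
        else:                            # one error -> make task easier
            streak = 0
            if direction == "down":
                reversals.append(level)
            direction = "up"
            level += step
    # Conventional estimate: mean of the last few reversal levels
    return sum(reversals[-6:]) / 6

random.seed(1)
estimate = run_staircase(true_threshold=20.0)
print(f"estimated threshold: {estimate:.1f}")  # near the 70.7%-correct level
```

    With these invented settings the estimate lands somewhat above `true_threshold`, as expected for the 70.7%-correct convergence point of a 2-down-1-up track.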

  10. Visual cortex and auditory cortex activation in early binocularly blind macaques: A BOLD-fMRI study using auditory stimuli.

    Science.gov (United States)

    Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin

    2017-04-15

    Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine the activation in the visual and auditory cortices of each macaque while being tested using pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the observation of more c-Fos-positive cells in the bilateral visual and auditory cortices of group B compared with group A, suggesting that the visual cortices of binocularly blind macaques can be reorganized to process auditory stimuli after visual deprivation, an effect that is more obvious in the right than the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  11. Left auditory cortex gamma synchronization and auditory hallucination symptoms in schizophrenia

    Directory of Open Access Journals (Sweden)

    Shenton Martha E

    2009-07-01

    Full Text Available Abstract Background Oscillatory electroencephalogram (EEG) abnormalities may reflect neural circuit dysfunction in neuropsychiatric disorders. Previously we have found positive correlations between the phase synchronization of beta and gamma oscillations and hallucination symptoms in schizophrenia patients. These findings suggest that the propensity for hallucinations is associated with an increased tendency for neural circuits in sensory cortex to enter states of oscillatory synchrony. Here we tested this hypothesis by examining whether the 40 Hz auditory steady-state response (ASSR) generated in the left primary auditory cortex is positively correlated with auditory hallucination symptoms in schizophrenia. We also examined whether the 40 Hz ASSR deficit in schizophrenia was associated with cross-frequency interactions. Sixteen healthy control subjects (HC) and 18 chronic schizophrenia patients (SZ) listened to 40 Hz binaural click trains. The EEG was recorded from 60 electrodes and average-referenced offline. A 5-dipole model was fit from the HC grand average ASSR, with 2 pairs of superior temporal dipoles and a deep midline dipole. Time-frequency decomposition was performed on the scalp EEG and source data. Results Phase locking factor (PLF) and evoked power were reduced in SZ at fronto-central electrodes, replicating prior findings. PLF was reduced in SZ for non-homologous right and left hemisphere sources. Left hemisphere source PLF in SZ was positively correlated with auditory hallucination symptoms, and was modulated by delta phase. Furthermore, the correlations between source evoked power and PLF found in HC were reduced in SZ for the LH sources. Conclusion These findings suggest that differential neural circuit abnormalities may be present in the left and right auditory cortices in schizophrenia. In addition, they provide further support for the hypothesis that hallucinations are related to cortical hyperexcitability, which is manifested by
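
    The phase locking factor (PLF) used above, also known as inter-trial phase coherence, is the magnitude of the mean unit phase vector across trials: 1 when the phase repeats identically every trial, near 0 when phase is random. A minimal sketch with synthetic phase data (all values invented):

```python
import cmath
import random

def phase_locking_factor(phases):
    """PLF (inter-trial phase coherence): magnitude of the mean unit
    phase vector across trials. 1 = identical phase on every trial,
    ~0 = phase varies randomly from trial to trial."""
    mean_vector = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(mean_vector)

random.seed(0)
# Synthetic single-frequency phases (invented): tightly clustered trial
# phases mimic a strong 40 Hz ASSR; uniform phases mimic an absent response.
locked = [0.5 + random.gauss(0.0, 0.2) for _ in range(200)]
unlocked = [random.uniform(-cmath.pi, cmath.pi) for _ in range(200)]
print(f"locked PLF   = {phase_locking_factor(locked):.2f}")    # near 1
print(f"unlocked PLF = {phase_locking_factor(unlocked):.2f}")  # near 0
```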

  12. Influence of age, spatial memory, and ocular fixation on localization of auditory, visual, and bimodal targets by human subjects.

    Science.gov (United States)

    Dobreva, Marina S; O'Neill, William E; Paige, Gary D

    2012-12-01

    A common complaint of the elderly is difficulty identifying and localizing auditory and visual sources, particularly in competing background noise. Spatial errors in the elderly may pose challenges and even threats to self and others during everyday activities, such as localizing sounds in a crowded room or driving in traffic. In this study, we investigated the influence of aging, spatial memory, and ocular fixation on the localization of auditory, visual, and combined auditory-visual (bimodal) targets. Head-restrained young and elderly subjects localized targets in a dark, echo-attenuated room using a manual laser pointer. Localization accuracy and precision (repeatability) were quantified for both ongoing and transient (remembered) targets at response delays up to 10 s. Because eye movements bias auditory spatial perception, localization was assessed under target fixation (eyes free, pointer guided by foveal vision) and central fixation (eyes fixed straight ahead, pointer guided by peripheral vision) conditions. Spatial localization across the frontal field in young adults demonstrated (1) horizontal overshoot and vertical undershoot for ongoing auditory targets under target fixation conditions, but near-ideal horizontal localization with central fixation; (2) accurate and precise localization of ongoing visual targets guided by foveal vision under target fixation that degraded when guided by peripheral vision during central fixation; (3) overestimation in horizontal central space (±10°) of remembered auditory, visual, and bimodal targets with increasing response delay. In comparison with young adults, elderly subjects showed (1) worse precision in most paradigms, especially when localizing with peripheral vision under central fixation; (2) greatly impaired vertical localization of auditory and bimodal targets; (3) increased horizontal overshoot in the central field for remembered visual and bimodal targets across response delays; (4) greater vulnerability to

  13. Auditory Motion Elicits a Visual Motion Aftereffect.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect, an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  14. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  15. Relation between Working Memory Capacity and Auditory Stream Segregation in Children with Auditory Processing Disorder

    Directory of Open Access Journals (Sweden)

    Yones Lotfi

    2016-03-01

    Full Text Available Background: This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). Methods: The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9–11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests between the 2 groups. Results: The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. Conclusion: The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information.

  16. Research of Intelligent Vehicle Model and Its Dynamics Exploration

    Institute of Scientific and Technical Information of China (English)

    储星; 王贵勇; 贾现广

    2011-01-01

    The intelligent model car system is divided into two main modules, an active driving controller and a vehicle chassis, and the structure of the active controller is analyzed. Through dynamic modeling of the N286 chassis used in the National College Students Intelligent Car Race, the longitudinal and lateral dynamic performance and control of the car while driving are studied. A differential control strategy for the driving wheels is proposed and given a preliminary investigation.
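
    The proposed driving-wheel differential control follows from basic turning kinematics: in a curve, the outer wheel traces a larger radius than the inner wheel, so their speeds must differ in proportion to the track width. A geometric sketch with invented model-car parameters (not the paper's controller):

```python
def differential_wheel_speeds(v_center, turn_radius, track_width):
    """Kinematic speeds for the two driving wheels in a steady turn:
    each wheel follows its own arc of radius R -/+ w/2, so wheel speed
    scales with that radius relative to the chassis centerline."""
    inner = v_center * (turn_radius - track_width / 2.0) / turn_radius
    outer = v_center * (turn_radius + track_width / 2.0) / turn_radius
    return inner, outer

# Invented model-car-scale numbers: 2 m/s through a 1 m radius curve,
# with a 0.135 m track width.
inner, outer = differential_wheel_speeds(2.0, 1.0, 0.135)
print(f"inner wheel {inner:.3f} m/s, outer wheel {outer:.3f} m/s")
# → inner wheel 1.865 m/s, outer wheel 2.135 m/s
```

    A controller built on this relation commands each wheel its kinematic speed, so the chassis turns without the wheels scrubbing.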

  17. Auditory conflict and congruence in frontotemporal dementia.

    Science.gov (United States)

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  18. Design and Control of the Electric Drive of the Anti- Hail Launching System

    Directory of Open Access Journals (Sweden)

    Gheorghe Manolea

    2014-09-01

    Full Text Available At present, Romanian anti-hail launchers are manually operated: the positioning of the launcher along its two axes (azimuth and elevation) is adjusted by a human operator. The paper describes the design, implementation, and experimental results of the electric drives for the two axes, using permanent magnet synchronous motors (PMSM) supplied by smart drives. The solution offers the possibility of automating the anti-hail launchers and integrating them within intelligent systems.

  19. Effects of Auditory Stimuli on Visual Velocity Perception

    Directory of Open Access Journals (Sweden)

    Michiaki Shibata

    2011-10-01

    Full Text Available We investigated the effects of auditory stimuli on the perceived velocity of a moving visual stimulus. Previous studies have reported that the duration of visual events is perceived as being longer for events filled with auditory stimuli than for events not filled with auditory stimuli, i.e., the so-called "filled-duration illusion." In this study, we have shown that auditory stimuli also affect the perceived velocity of a moving visual stimulus. In Experiment 1, a moving comparison stimulus (4.2∼5.8 deg/s) was presented together with filled (or unfilled) white-noise bursts or with no sound. The standard stimulus was a moving visual stimulus (5 deg/s) presented before or after the comparison stimulus. The participants had to judge which stimulus was moving faster. The results showed that the perceived velocity in the auditory-filled condition was lower than that in the auditory-unfilled and no-sound conditions. In Experiment 2, we investigated the effects of auditory stimuli on velocity adaptation. The results showed that the effects of velocity adaptation in the auditory-filled condition were weaker than those in the no-sound condition. These results indicate that auditory stimuli tend to decrease the perceived velocity of a moving visual stimulus.

  20. The attenuation of auditory neglect by implicit cues.

    Science.gov (United States)

    Coleman, A Rand; Williams, J Michael

    2006-09-01

    This study examined the effects of implicit semantic and rhyming cues on the perception of auditory stimuli among nonaphasic participants who had suffered a lesion of the right cerebral hemisphere and auditory neglect of sound perceived by the left ear. Because language represents an elaborate processing of auditory stimuli and the language centers were intact among these patients, it was hypothesized that interactive verbal stimuli presented in a dichotic manner would attenuate neglect. The selected participants were administered an experimental dichotic listening test composed of six types of word pairs: unrelated words, synonyms, antonyms, categorically related words, compound words, and rhyming words. Presentation of word pairs that were semantically related resulted in a dramatic reduction of auditory neglect. Dichotic presentations of rhyming words exacerbated auditory neglect. These findings suggest that the perception of auditory information is strongly affected by the specific content conveyed by the auditory system. Language centers will process a degraded stimulus that contains salient language content. A degraded auditory stimulus is neglected if it is devoid of content that activates the language centers or other cognitive systems. In general, these findings suggest that auditory neglect involves a complex interaction of intact and impaired cerebral processing centers with content that is selectively processed by these centers.

  1. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation......, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models...... of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent...

  2. Rapid measurement of auditory filter shape in mice using the auditory brainstem response and notched noise.

    Science.gov (United States)

    Lina, Ioan A; Lauer, Amanda M

    2013-04-01

    The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectable until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks renders a constraint on the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency was obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative to estimating frequency selectivity in mice that is well-suited to high-throughput or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.
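The logic of the notched-noise method described above can be sketched with the standard power-spectrum model of masking: the tone is detected once its level stands in a fixed ratio to the noise power passing the auditory filter, so thresholds fall as the notch widens past the filter skirt. The sketch below is not the authors' analysis code; it assumes a symmetric rounded-exponential, roex(p), filter shape, and all names (`fit_p`, the grid range, the efficiency term `K_db`) are illustrative.

```python
import math

def roex_passed(p, g):
    """Fraction of one noise band's power passing a roex(p) filter when the
    band edge lies at normalized frequency deviation g = |f - fc| / fc.
    This is the integral of (1 + p*g') * exp(-p*g') from g to infinity."""
    return (2.0 + p * g) * math.exp(-p * g) / p

def predicted_threshold(p, K_db, notch_widths):
    """Power-spectrum model: tone threshold (dB re noise spectrum level)
    for symmetric notches of half-width g (noise bands on both sides)."""
    return [K_db + 10.0 * math.log10(2.0 * roex_passed(p, g))
            for g in notch_widths]

def fit_p(notch_widths, thresholds_db):
    """Grid-search the filter slope p that best matches the measured
    thresholds; the efficiency offset K is fitted in closed form."""
    best = None
    for p in [x * 0.5 for x in range(10, 120)]:  # p from 5.0 to 59.5
        model = [10.0 * math.log10(2.0 * roex_passed(p, g))
                 for g in notch_widths]
        K = sum(t - m for t, m in zip(thresholds_db, model)) / len(model)
        err = sum((t - (m + K)) ** 2 for t, m in zip(thresholds_db, model))
        if best is None or err < best[0]:
            best = (err, p)
    return best[1]

# Synthetic "measured" thresholds from a known filter (p = 25): as in the
# ABR procedure, thresholds fall as the notch widens past the filter skirt.
notches = [0.0, 0.1, 0.2, 0.3, 0.4]
measured = predicted_threshold(25.0, 40.0, notches)
print(fit_p(notches, measured))
```

A narrower fitted filter (larger p) at a given center frequency corresponds to sharper frequency selectivity; the broadening with frequency reported above would show up as smaller p at 16 kHz than at 8 kHz.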

  3. Neurocognitive Correlates of Young Drivers' Performance in a Driving Simulator.

    Science.gov (United States)

    Guinosso, Stephanie A; Johnson, Sara B; Schultheis, Maria T; Graefe, Anna C; Bishai, David M

    2016-04-01

    Differences in neurocognitive functioning may contribute to driving performance among young drivers. However, few studies have examined this relation. This pilot study investigated whether common neurocognitive measures were associated with driving performance among young drivers in a driving simulator. Young drivers (mean age 19.8 years, standard deviation [SD] = 1.9; N = 74) participated in a battery of neurocognitive assessments measuring general intellectual capacity (Full-Scale Intelligence Quotient, FSIQ) and executive functioning, including the Stroop Color-Word Test (cognitive inhibition), Wisconsin Card Sort Test-64 (cognitive flexibility), and Attention Network Task (alerting, orienting, and executive attention). Participants then drove in a simulated vehicle under two conditions: a baseline drive and a driving challenge. During the driving challenge, participants completed a verbal working memory task to increase demand on executive attention. Multiple regression models were used to evaluate the relations between the neurocognitive measures and driving performance under the two conditions. FSIQ, cognitive inhibition, and alerting were associated with better driving performance at baseline. FSIQ and cognitive inhibition were also associated with better driving performance during the verbal challenge. Measures of cognitive flexibility, orienting, and conflict executive control were not associated with driving performance under either condition. FSIQ and, to some extent, measures of executive function are associated with driving performance in a driving simulator. Further research is needed to determine if executive function is associated with more advanced driving performance under conditions that demand greater cognitive load. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.

  4. Feature Assignment in Perception of Auditory Figure

    Science.gov (United States)

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  5. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  6. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  7. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  8. Maps of the Auditory Cortex.

    Science.gov (United States)

    Brewer, Alyssa A; Barton, Brian

    2016-07-08

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.

  9. Rapid Auditory System Adaptation Using a Virtual Auditory Environment

    Directory of Open Access Journals (Sweden)

    Gaëtan Parseihian

    2011-10-01

    Full Text Available Various studies have highlighted plasticity of the auditory system induced by visual stimuli, which limits the trained field of perception. The aim of the present study is to investigate auditory system adaptation using an audio-kinesthetic platform. Participants were placed in a Virtual Auditory Environment allowing the association of the physical position of a virtual sound source with an alternate set of acoustic spectral cues, or Head-Related Transfer Function (HRTF), through the use of a tracked ball manipulated by the subject. This set-up has the advantage of not being limited to the visual field while also offering a natural perception-action coupling through the constant awareness of one's hand position. The adaptation process to non-individualized HRTFs was realized through a spatial search game application. A total of 25 subjects participated, consisting of a group presented with modified cues using non-individualized HRTFs and a control group using individually measured HRTFs to account for any learning effect due to the game itself. The training game lasted 12 minutes and was repeated over 3 consecutive days. Adaptation effects were measured with repeated localization tests. Results showed a significant performance improvement for vertical localization and a significant reduction in the front/back confusion rate after 3 sessions.
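The front/back confusion rate used as an outcome measure above is a standard localization metric: a trial counts as confused when the response lands in the opposite front/back hemifield from the target. A minimal scoring sketch (not the authors' code; azimuth convention and function names are assumptions, with 0° straight ahead and targets on the interaural axis excluded):

```python
def hemifield(az):
    """Classify an azimuth (degrees, 0 = straight ahead) as front or back."""
    az = (az + 180.0) % 360.0 - 180.0   # wrap into (-180, 180]
    return "front" if abs(az) < 90.0 else "back"

def confusion_rate(targets, responses):
    """Fraction of trials whose response falls in the opposite front/back
    hemifield from the target; targets exactly on the interaural axis
    (+/-90 degrees) are excluded from scoring."""
    trials = [(t, r) for t, r in zip(targets, responses)
              if abs(abs((t + 180.0) % 360.0 - 180.0) - 90.0) > 1e-9]
    confused = sum(1 for t, r in trials if hemifield(t) != hemifield(r))
    return confused / len(trials)

# Toy localization data: trials 1, 4 and 5 are front/back confusions.
targets = [0.0, 30.0, 150.0, 180.0, -45.0]
responses = [170.0, 25.0, 160.0, 0.0, -140.0]
print(confusion_rate(targets, responses))  # → 0.6
```

A drop in this rate across the three sessions is what the study reports as evidence of adaptation to the non-individualized HRTFs.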

  10. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants

    Directory of Open Access Journals (Sweden)

    Maojin Liang

    2017-10-01

    Full Text Available Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, a residual, more intense cortical activation in the frontotemporal areas in response to photo stimuli was found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation as well as its relation to CI outcomes. Twenty prelingually deaf children with CI were recruited. Ten children were good CI performers (GCP) and ten were poor (PCP). Ten age- and sex- matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. In the prelingually deaf children, higher N1 amplitude was observed compared to normal controls. The GCP group showed significant decreases in N1 amplitude, and source analysis showed that the most significant decrease in brain activity occurred in the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; these changes did not occur in the PCP group. Meanwhile, higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were found to be related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use. However, no significant cerebral hemispheric dominance was found. We supposed that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential ability of cortical plasticity. Brain activity evolution appears to be related to CI auditory outcomes.

  11. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants.

    Science.gov (United States)

    Liang, Maojin; Zhang, Junpeng; Liu, Jiahao; Chen, Yuebo; Cai, Yuexin; Wang, Xianjun; Wang, Junbo; Zhang, Xueyuan; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing

    2017-01-01

    Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, a residual, more intense cortical activation in the frontotemporal areas in response to photo stimuli was found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation as well as its relation to CI outcomes. Twenty prelingually deaf children with CI were recruited. Ten children were good CI performers (GCP) and ten were poor (PCP). Ten age- and sex- matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. In the prelingually deaf children, higher N1 amplitude was observed compared to normal controls. The GCP group showed significant decreases in N1 amplitude, and source analysis showed that the most significant decrease in brain activity occurred in the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; these changes did not occur in the PCP group. Meanwhile, higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were found to be related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use. However, no significant cerebral hemispheric dominance was found. We supposed that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential ability of cortical plasticity. Brain activity evolution appears to be related to CI auditory outcomes.

  12. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    OpenAIRE

    Zahra Shahidipour; Ahmad Geshani; Zahra Jafari; Shohreh Jalaie; Elham Khosravifard

    2014-01-01

    Background and Aim: Memory is one of the aspects of cognitive function which is widely affected among aged people. Since aging has different effects on different memory systems and few studies have investigated auditory-verbal memory function in older adults using dichotic listening techniques, the purpose of this study was to evaluate the auditory-verbal memory function among old people using the Persian version of the dichotic auditory-verbal memory test. Methods: The Persian version of dic...

  13. Perceptual Plasticity for Auditory Object Recognition

    Science.gov (United States)

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples

  14. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. 
We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory

  15. Linking roadside communication and intelligent cruise control ; effects on driving behaviour

    NARCIS (Netherlands)

    Hogema, J.H.; Horst, A.R.A. van der; Janssen, W.H.

    1995-01-01

    This paper describes a driving simulator experiment in which an Intelligent Cruise Control (ICC) was combined with short-range communication (SRC) with the roadside. This offers the possibility of obtaining in-car preview information about relevant conditions on the road ahead. The ICCs studied varied in

  16. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany ePlakke

    2014-07-01

    Full Text Available The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC. In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  17. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  18. A virtual auditory environment for investigating the auditory signal processing of realistic sounds

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2008-01-01

    In the present study, a novel multichannel loudspeaker-based virtual auditory environment (VAE) is introduced. The VAE aims at providing a versatile research environment for investigating the auditory signal processing in real environments, i.e., considering multiple sound sources and room...... reverberation. The environment is based on the ODEON room acoustic simulation software to render the acoustical scene. ODEON outputs are processed using a combination of different order Ambisonic techniques to calculate multichannel room impulse responses (mRIR). Auralization is then obtained by the convolution...... the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid “artifacts” such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29 loudspeaker setup using both objective and subjective measurement techniques....
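The auralization step described in this record, convolving a dry (anechoic) source signal with a multichannel room impulse response to produce one feed per loudspeaker, can be sketched as follows. This is not the authors' implementation (which builds on ODEON and Ambisonic decoding); `auralize` and the toy mRIR are illustrative assumptions:

```python
def convolve(x, h):
    """Direct discrete convolution: y[n] = sum_k x[k] * h[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):
        for m, hm in enumerate(h):
            y[k + m] += xk * hm
    return y

def auralize(dry_signal, multichannel_rir):
    """Convolve one dry source signal with the simulated room impulse
    response of each channel, yielding one feed per loudspeaker."""
    return [convolve(dry_signal, rir) for rir in multichannel_rir]

# Toy example: a 3-channel "mRIR" encoding per-channel gain and delay.
dry = [1.0, 0.5, 0.25]
mrir = [
    [1.0],            # channel 1: direct sound only
    [0.0, 0.5],       # channel 2: attenuated, one-sample delay
    [0.0, 0.0, 0.2],  # channel 3: weaker, two-sample delay
]
feeds = auralize(dry, mrir)
print(feeds[1])  # → [0.0, 0.5, 0.25, 0.125]
```

In practice the mRIRs are thousands of samples long and the convolution is done with FFT-based (fast) methods, but the signal flow is the same.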

  19. A prototype system for real time computer animation of slow traffic in a driving simulator

    NARCIS (Netherlands)

    Roerdink, JBTM; van Delden, MJB; Hin, AJS; van Wolffelaar, PC; Thalmann, NM; Skala,

    1997-01-01

    The Traffic Research Centre (TRC) of the University of Groningen in the Netherlands has developed a driving simulator with 'intelligent' computer-controlled traffic, consisting at the moment only of saloon cars. The range of possible applications would be greatly enhanced if other traffic

  20. Auditory, visual, and auditory-visual perceptions of emotions by young children with hearing loss versus children with normal hearing.

    Science.gov (United States)

    Most, Tova; Michaelis, Hilit

    2012-08-01

    This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children 4.0-6.6 years of age with prelingual sensorineural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.

  1. Selective increase of auditory cortico-striatal coherence during auditory-cued Go/NoGo discrimination learning.

    Directory of Open Access Journals (Sweden)

    Andreas L. Schulz

    2016-01-01

    Full Text Available Goal-directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task-relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still poorly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttle box to discriminate between falling and rising frequency-modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed.
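The field-field coherence used above quantifies frequency-specific coupling between two simultaneously recorded field potentials. A minimal sketch of the estimator (not the authors' analysis pipeline; segment length, seeding, and the toy signals are assumptions for illustration):

```python
import cmath
import math
import random

def dft(x):
    """Naive DFT (O(N^2)); adequate for short analysis segments."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def coherence(x, y, seg_len):
    """Magnitude-squared coherence |Sxy|^2 / (Sxx * Syy), with cross- and
    auto-spectra averaged over non-overlapping segments. Averaging over
    several segments is essential: with a single segment the estimate is
    identically 1 at every frequency."""
    nseg = len(x) // seg_len
    sxx = [0.0] * seg_len
    syy = [0.0] * seg_len
    sxy = [0j] * seg_len
    for s in range(nseg):
        X = dft(x[s * seg_len:(s + 1) * seg_len])
        Y = dft(y[s * seg_len:(s + 1) * seg_len])
        for k in range(seg_len):
            sxx[k] += abs(X[k]) ** 2
            syy[k] += abs(Y[k]) ** 2
            sxy[k] += X[k] * Y[k].conjugate()
    return [abs(sxy[k]) ** 2 / (sxx[k] * syy[k] + 1e-12)
            for k in range(seg_len)]

# Two noisy "recording sites" sharing a 1/8-cycle-per-sample rhythm:
# coherence is high at that rhythm's bin (bin 4 of 32) and low elsewhere.
random.seed(0)
shared = [math.cos(2 * math.pi * n / 8) for n in range(256)]
site_a = [s + 0.2 * random.gauss(0, 1) for s in shared]
site_b = [s + 0.2 * random.gauss(0, 1) for s in shared]
coh = coherence(site_a, site_b, 32)
print(coh[4])
```

A learning-related increase in coupling, as reported for Go trials, would appear as a rise in this coherence in the relevant frequency band across training sessions.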

  2. Distraction by deviance: comparing the effects of auditory and visual deviant stimuli on auditory and visual target processing.

    Science.gov (United States)

    Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar

    2015-01-01

    We report the results of oddball experiments in which an irrelevant stimulus (standard, deviant) was presented before a target stimulus and the modality of these stimuli was manipulated orthogonally (visual/auditory). Experiment 1 showed that auditory deviants yielded distraction irrespective of the target's modality while visual deviants did not impact on performance. When participants were forced to attend the distractors in order to detect a rare target ("target-distractor"), auditory deviants yielded distraction irrespective of the target's modality and visual deviants yielded a small distraction effect when targets were auditory (Experiments 2 & 3). Visual deviants only produced distraction for visual targets when deviant stimuli were not visually distinct from the other distractors (Experiment 4). Our results indicate that while auditory deviants yield distraction irrespective of the targets' modality, visual deviants only do so when attended and under selective conditions, at least when irrelevant and target stimuli are temporally and perceptually decoupled.

  3. Effects of alcohol on attention orienting and dual-task performance during simulated driving: an event-related potential study.

    Science.gov (United States)

    Wester, Anne E; Verster, Joris C; Volkerts, Edmund R; Böcker, Koen B E; Kenemans, J Leon

    2010-09-01

    Driving is a complex task and is susceptible to inattention and distraction. Moreover, alcohol has a detrimental effect on driving performance, possibly due to alcohol-induced attention deficits. The aim of the present study was to assess the effects of alcohol on simulated driving performance and attention orienting and allocation, as assessed by event-related potentials (ERPs). Thirty-two participants completed two test runs in the Divided Attention Steering Simulator (DASS) with blood alcohol concentrations (BACs) of 0.00%, 0.02%, 0.05%, 0.08% and 0.10%. Sixteen participants performed the second DASS test run with a passive auditory oddball to assess alcohol effects on involuntary attention shifting. Sixteen other participants performed the second DASS test run with an active auditory oddball to assess alcohol effects on dual-task performance and active attention allocation. Dose-dependent impairments were found for reaction times, the number of misses and steering error, even more so in dual-task conditions, especially in the active oddball group. ERP amplitudes to novel irrelevant events were also attenuated in a dose-dependent manner. The P3b amplitude to deviant target stimuli decreased with blood alcohol concentration only in the dual-task condition. It is concluded that alcohol increases distractibility and interference from secondary task stimuli, as well as reduces attentional capacity and dual-task integrality.
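The ERP measures in this record (e.g., the P3b to deviant targets) are obtained by cutting stimulus-locked epochs from the continuous recording and averaging them, so that activity uncorrelated with the event cancels out. A generic single-channel sketch (not the authors' pipeline; epoch lengths, the amplitude window, and all names are illustrative assumptions):

```python
import random

def extract_epochs(eeg, events, pre, post):
    """Cut fixed-length epochs (pre samples before to post samples after
    each event marker) out of a continuous single-channel recording."""
    return [eeg[e - pre:e + post] for e in events
            if e - pre >= 0 and e + post <= len(eeg)]

def erp_average(epochs):
    """Average across epochs: phase-locked activity (the ERP) survives,
    while activity uncorrelated with the event averages toward zero."""
    n = len(epochs)
    return [sum(ep[i] for ep in epochs) / n for i in range(len(epochs[0]))]

def p3b_amplitude(erp, window):
    """Peak amplitude within a post-stimulus sample window (the P3b is
    typically measured around 300-500 ms after the target)."""
    lo, hi = window
    return max(erp[lo:hi])

# Synthetic data: a fixed evoked waveform embedded in noise at each event.
random.seed(1)
template = [0.0] * 10 + [1.0, 2.0, 3.0, 2.0, 1.0] + [0.0] * 5
events = [30, 80, 130, 180]
eeg = [0.5 * random.gauss(0, 1) for _ in range(220)]
for e in events:                     # embed the evoked response
    for i, v in enumerate(template):
        eeg[e - 5 + i] += v
epochs = extract_epochs(eeg, events, pre=5, post=15)
erp = erp_average(epochs)
print(p3b_amplitude(erp, (10, 18)))
```

A dose-dependent P3b reduction, as reported under alcohol in the dual-task condition, corresponds to this peak amplitude shrinking across BAC levels.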

  4. Visual-induced expectations modulate auditory cortical responses

    Directory of Open Access Journals (Sweden)

    Virginie van Wassenhove

    2015-02-01

    Full Text Available Active sensing has important consequences for multisensory processing (Schroeder et al., 2010). Here, we asked whether, in the absence of saccades, the position of the eyes and the timing of transient colour changes of visual stimuli could selectively affect the excitability of auditory cortex by predicting the where and the when of a sound, respectively. Human participants were recorded with magnetoencephalography (MEG) while maintaining the position of their eyes on the left, right, or centre of the screen. Participants counted colour changes of the fixation cross while neglecting sounds, which could be presented to the left ear, the right ear, or both ears. First, clear alpha power increases were observed in auditory cortices, consistent with participants' attention being directed to visual inputs. Second, colour changes elicited robust modulations of auditory cortex responses (a "when" prediction), seen as ramping activity, early alpha phase-locked responses, and enhanced high-gamma-band responses contralateral to the side of sound presentation. Third, no modulations of auditory evoked or oscillatory activity were found to be specific to eye position. Altogether, our results suggest that visual transience can automatically elicit a prediction of when a sound will occur by changing the excitability of auditory cortices irrespective of the attended modality, eye position, or spatial congruency of auditory and visual events. By contrast, auditory cortical responses were not significantly affected by eye position, suggesting that "where" predictions may require active sensing or saccadic reset to modulate auditory cortex responses, notably in the absence of spatial orientation to sounds.

  5. Naftidrofuryl affects neurite regeneration by injured adult auditory neurons.

    Science.gov (United States)

    Lefebvre, P P; Staecker, H; Moonen, G; van de Water, T R

    1993-07-01

    Afferent auditory neurons are essential for the transmission of auditory information from the organ of Corti to the central auditory pathway. Auditory neurons are very sensitive to acute insult and have a limited ability to regenerate injured neuronal processes. Therefore, these neurons appear to be a limiting factor in the restoration of hearing function following an injury to the peripheral auditory receptor. In a previous study, nerve growth factor (NGF) was shown to stimulate neurite repair but not survival of injured auditory neurons. In this study, we demonstrate a neuritogenesis-promoting effect of naftidrofuryl in an in vitro model of injury to adult auditory neurons, i.e. dissociated cell cultures of adult rat spiral ganglia. In contrast, naftidrofuryl had no demonstrable survival-promoting effect on these in vitro preparations of injured auditory neurons. The potential use of this drug as a therapeutic agent in acute diseases of the inner ear is discussed in the light of these observations.

  6. Formal auditory training in adult hearing aid users

    Directory of Open Access Journals (Sweden)

    Daniela Gil

    2010-01-01

    Full Text Available INTRODUCTION: Individuals with sensorineural hearing loss are often able to regain some lost auditory function with the help of hearing aids. However, hearing aids are not able to overcome auditory distortions such as impaired frequency resolution and impaired speech understanding in noisy environments. The coexistence of peripheral hearing loss and a central auditory deficit may contribute to patient dissatisfaction with amplification, even when audiological tests indicate nearly normal hearing thresholds. OBJECTIVE: This study was designed to validate the effects of a formal auditory training program in adult hearing aid users with mild to moderate sensorineural hearing loss. METHODS: Fourteen bilateral hearing aid users were divided into two groups: seven who received auditory training and seven who did not. The training program was designed to improve auditory closure, figure-to-ground for verbal and nonverbal sounds, and temporal processing (frequency and duration of sounds). Pre- and post-training evaluations included electrophysiological and behavioral measures of auditory processing and administration of the Abbreviated Profile of Hearing Aid Benefit (APHAB) self-report scale. RESULTS: The post-training evaluation of the experimental group demonstrated a statistically significant reduction in P3 latency, improved performance on some of the behavioral auditory processing tests, and greater hearing aid benefit in noisy situations (p < 0.05). No changes were noted for the control group (p > 0.05). CONCLUSION: The results demonstrated that auditory training in adult hearing aid users can lead to a reduction in P3 latency; improvements in sound localization, memory for nonverbal sounds in sequence, auditory closure, and figure-to-ground for verbal sounds; and greater benefit in reverberant and noisy environments.

  7. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    Science.gov (United States)

    Matusz, Pawel J; Thelen, Antonia; Amrein, Sarah; Geiser, Eveline; Anken, Jacques; Murray, Micah M

    2015-03-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
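
    The discrimination index d′ reported above is a standard signal-detection measure that the abstract does not define. As a minimal sketch (not from the paper), d′ is the difference between the z-transformed hit and false-alarm rates, with rates conventionally clipped away from 0 and 1 so the inverse normal CDF stays finite:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    # Clip rates to [1/(2N), 1 - 1/(2N)] so z() is defined for 0% / 100% rates.
    hit_rate = min(max(hits / n_signal, 1 / (2 * n_signal)), 1 - 1 / (2 * n_signal))
    fa_rate = min(max(false_alarms / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Example: 40 hits / 10 misses on old items, 10 false alarms / 40 correct
# rejections on new items.
print(round(d_prime(40, 10, 10, 40), 2))  # → 1.68
```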

  8. Emergence of auditory-visual relations from a visual-visual baseline with auditory-specific consequences in individuals with autism.

    Science.gov (United States)

    Varella, André A B; de Souza, Deisy G

    2014-07-01

    Empirical studies have demonstrated that class-specific contingencies may engender stimulus-reinforcer relations. In these studies, crossmodal relations emerged when crossmodal relations comprised the baseline, and intramodal relations emerged when intramodal relations were taught during baseline. This study investigated whether auditory-visual relations (crossmodal) would emerge after participants learned a visual-visual baseline (intramodal) with auditory stimuli presented as specific consequences. Four individuals with autism learned AB and CD relations with class-specific reinforcers. When A1 and C1 were presented as samples, the selections of B1 and D1, respectively, were followed by an edible (R1) and a sound (S1). Selections of B2 and D2 under the control of A2 and C2, respectively, were followed by R2 and S2. Probe trials tested for visual-visual AC, CA, AD, DA, BC, CB, BD, and DB emergent relations and auditory-visual SA, SB, SC, and SD emergent relations. All of the participants demonstrated the emergence of all auditory-visual relations, and three of four participants showed emergence of all visual-visual relations. Thus, the emergence of auditory-visual relations from specific auditory consequences suggests that these relations do not depend on crossmodal baseline training. The procedure has great potential for applied technology to generate auditory-visual discriminations and stimulus classes in the context of behavior-analytic interventions for autism. © Society for the Experimental Analysis of Behavior.

  9. IoT On-Board System for Driving Style Assessment.

    Science.gov (United States)

    Jachimczyk, Bartosz; Dziak, Damian; Czapla, Jacek; Damps, Pawel; Kulesza, Wlodek J

    2018-04-17

    The assessment of skills is essential and desirable in areas such as medicine, security, and other professions where mental, physical, and manual skills are crucial. However, such assessments are often performed by people called “experts”, who may be subjective and can consider only a limited number of factors and indicators. This article addresses the problem of the objective assessment of driving style, independent of circumstances. The proposed objective assessment of driving style is based on eight indicators, which are associated with the vehicle’s speed, acceleration, jerk, engine rotational speed, and driving time. These indicators are used to estimate three driving style criteria: safety, economy, and comfort. The presented solution is based on an embedded system designed according to the Internet of Things concept. The useful data are acquired from the car diagnostic port (OBD-II) and from an additional accelerometer sensor and GPS module. The proposed driving skills assessment method has been implemented and experimentally validated on a group of drivers. The obtained results prove the system’s ability to quantitatively distinguish different driving styles. The system was also verified in long-route tests, whose analysis could then be used to improve the driver’s behavior behind the wheel. Moreover, the spider diagram approach used establishes a convenient visualization platform for multidimensional comparison of the results and comprehensive assessment in an intelligible manner.
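
    The indicator-to-criteria mapping described above can be sketched as a weighted scoring scheme. All indicator names and weights below are hypothetical illustrations, not values from the paper:

```python
# Hypothetical sketch: combine normalized driving indicators (values in
# [0, 1], where 1 = best) into safety / economy / comfort scores, in the
# spirit of the system described above. Names and weights are illustrative.
CRITERIA = {
    "safety":  {"speeding": 0.5, "harsh_accel": 0.3, "jerk": 0.2},
    "economy": {"rpm": 0.5, "harsh_accel": 0.3, "idle_time": 0.2},
    "comfort": {"jerk": 0.6, "harsh_accel": 0.4},
}

def driving_scores(indicators):
    """Weighted sum per criterion; indicators: dict of values in [0, 1]."""
    return {
        crit: sum(w * indicators[name] for name, w in weights.items())
        for crit, weights in CRITERIA.items()
    }

trip = {"speeding": 0.9, "harsh_accel": 0.7, "jerk": 0.8, "rpm": 0.6, "idle_time": 0.5}
print(driving_scores(trip))
```

    The resulting three scores could then be plotted on a spider diagram, one axis per criterion, as the paper does.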

  10. IoT On-Board System for Driving Style Assessment

    Directory of Open Access Journals (Sweden)

    Bartosz Jachimczyk

    2018-04-01

    Full Text Available The assessment of skills is essential and desirable in areas such as medicine, security, and other professions where mental, physical, and manual skills are crucial. However, such assessments are often performed by people called “experts”, who may be subjective and can consider only a limited number of factors and indicators. This article addresses the problem of the objective assessment of driving style, independent of circumstances. The proposed objective assessment of driving style is based on eight indicators, which are associated with the vehicle’s speed, acceleration, jerk, engine rotational speed, and driving time. These indicators are used to estimate three driving style criteria: safety, economy, and comfort. The presented solution is based on an embedded system designed according to the Internet of Things concept. The useful data are acquired from the car diagnostic port (OBD-II) and from an additional accelerometer sensor and GPS module. The proposed driving skills assessment method has been implemented and experimentally validated on a group of drivers. The obtained results prove the system’s ability to quantitatively distinguish different driving styles. The system was also verified in long-route tests, whose analysis could then be used to improve the driver’s behavior behind the wheel. Moreover, the spider diagram approach used establishes a convenient visualization platform for multidimensional comparison of the results and comprehensive assessment in an intelligible manner.

  11. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia

    Directory of Open Access Journals (Sweden)

    Hironori Kuga, M.D.

    2016-10-01

    We acquired BOLD responses elicited by click trains at 20-, 30-, 40- and 80-Hz frequencies from 15 patients with acute-episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute-episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general-linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute-state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.

  12. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or, more prosaically, the sense of a world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquiring meaning, with important lessons for other sensory systems as well. Copyright © 2014 the authors.

  13. Cognitive Connected Vehicle Information System Design Requirement for Safety: Role of Bayesian Artificial Intelligence

    Directory of Open Access Journals (Sweden)

    Ata Khan

    2013-04-01

    Full Text Available Intelligent transportation systems (ITS) are gaining acceptance around the world, and the connected vehicle component of ITS is recognized as a high-priority research and development area in many technologically advanced countries. Connected vehicles are expected to be capable of safe, efficient, and eco-driving operation, whether under human control or in an adaptive machine-control mode of operation. The race is on to design the capability to operate in a connected traffic environment. The operational requirements can be met with cognitive vehicle design features made possible by advances in artificial-intelligence-supported methodology, improved understanding of human factors, and advances in communication technology. This paper describes these cognitive features and their information system requirements. The architecture of an information system is presented that supports the features of the cognitive connected vehicle. For better focus, information processing capabilities are specified and the role of Bayesian artificial intelligence in data fusion is defined. Example applications illustrate the role of information systems in integrating intelligent technology, Bayesian artificial intelligence, and abstracted human factors. Concluding remarks highlight the role of the information system and Bayesian artificial intelligence in the design of a new generation of cognitive connected vehicles.
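
    The Bayesian data-fusion role assigned to the information system can be illustrated with a minimal discrete example. The sensor model and probabilities below are entirely hypothetical, not from the paper: two noisy detectors report whether an obstacle is present, and Bayes' rule fuses their evidence into a posterior.

```python
def bayes_update(prior, likelihood_given_h, likelihood_given_not_h):
    """One Bayes-rule update: returns P(H | e) given P(H) and both likelihoods."""
    numerator = likelihood_given_h * prior
    evidence = numerator + likelihood_given_not_h * (1.0 - prior)
    return numerator / evidence

# Hypothetical sensor model: each sensor fires with P = 0.9 if an obstacle
# is present and P = 0.2 (false alarm) if not. Fuse two positive readings,
# assumed conditionally independent given the obstacle state.
p = 0.1                        # prior P(obstacle)
p = bayes_update(p, 0.9, 0.2)  # first sensor fires
p = bayes_update(p, 0.9, 0.2)  # second sensor fires
print(round(p, 3))  # → 0.692
```

    Two weak, noisy detections thus raise a 10% prior to a roughly 69% posterior, which is the kind of multi-sensor evidence combination the paper assigns to Bayesian data fusion.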

  14. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Science.gov (United States)

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus have to date not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS), we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices), and functional brain connectivity was measured during a 60-second baseline period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise, and silence. Functional connectivity was measured between all channel pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices.
Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom

  15. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Directory of Open Access Journals (Sweden)

    Juan San Juan

    Full Text Available Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus have to date not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS), we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices), and functional brain connectivity was measured during a 60-second baseline period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise, and silence. Functional connectivity was measured between all channel pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to

  16. Specialized prefrontal auditory fields: organization of primate prefrontal-temporal pathways

    Directory of Open Access Journals (Sweden)

    Maria Medalla

    2014-04-01

    Full Text Available No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.

  17. Auditory Neuropathy

    Science.gov (United States)

    ... children and adults with auditory neuropathy. Cochlear implants (electronic devices that compensate for damaged or nonworking parts ...

  18. Dynamics of auditory working memory

    Directory of Open Access Journals (Sweden)

    Jochen Kaiser

    2015-05-01

    Full Text Available Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow the short-term maintenance of acoustic stimuli to be investigated at high temporal resolution. Studies of working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory-load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions.

  19. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have a specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normally fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks: subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and this abnormal timing may underlie their disfluency.

  20. Auditory and visual memory in musicians and nonmusicians.

    Science.gov (United States)

    Cohen, Michael A; Evans, Karla K; Horowitz, Todd S; Wolfe, Jeremy M

    2011-06-01

    Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.

  1. Auditory hallucinations and PTSD in ex-POWS

    DEFF Research Database (Denmark)

    Crompton, Laura; Lahav, Yael; Solomon, Zahava

    2017-01-01

    (PTSD) symptoms, over time. Former prisoners of war (ex-POWs) from the 1973 Yom Kippur War (n = 99) with and without PTSD and comparable veterans (n = 103) were assessed twice, in 1991 (T1) and 2003 (T2), with regard to auditory hallucinations and PTSD symptoms. Findings indicated that ex-POWs who suffered from PTSD reported higher levels of auditory hallucinations at T2 as well as increased hallucinations over time, compared to ex-POWs without PTSD and combatants who did not endure captivity. The relation between PTSD and auditory hallucinations was unidirectional, so that the PTSD overall score at T1 predicted an increase in auditory hallucinations between T1 and T2, but not vice versa. Assessing the role of PTSD clusters in predicting hallucinations revealed that intrusion symptoms had a unique contribution, compared to avoidance and hyperarousal symptoms. The findings suggest that auditory...

  2. Functional mapping of the primate auditory system.

    Science.gov (United States)

    Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer

    2003-01-24

    Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.

  3. Auditory recognition memory is inferior to visual recognition memory.

    Science.gov (United States)

    Cohen, Michael A; Horowitz, Todd S; Wolfe, Jeremy M

    2009-04-07

    Visual memory for scenes is surprisingly robust. We wished to examine whether an analogous ability exists in the auditory domain. Participants listened to a variety of sound clips and were tested on their ability to distinguish old from new clips. Stimuli ranged from complex auditory scenes (e.g., talking in a pool hall) to isolated auditory objects (e.g., a dog barking) to music. In some conditions, additional information was provided to help participants with encoding. In every situation, however, auditory memory proved to be systematically inferior to visual memory. This suggests that there exists either a fundamental difference between auditory and visual stimuli, or, more plausibly, an asymmetry between auditory and visual processing.

  4. An intelligent multi-media human-computer dialogue system

    Science.gov (United States)

    Neal, J. G.; Bettinger, K. E.; Byoun, J. S.; Dobes, Z.; Thielman, C. Y.

    1988-01-01

    Sophisticated computer systems are being developed to assist in the human decision-making process for very complex tasks performed under stressful conditions. The human-computer interface is a critical factor in these systems. The human-computer interface should be simple and natural to use, require a minimal learning period, assist the user in accomplishing his task(s) with a minimum of distraction, present output in a form that best conveys information to the user, and reduce cognitive load for the user. In pursuit of this ideal, the Intelligent Multi-Media Interfaces project is devoted to the development of interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. The objective of the project is to develop interface technology that uses the media/modalities intelligently in a flexible, context-sensitive, and highly integrated manner modelled after the manner in which humans converse in simultaneous coordinated multiple modalities. As part of the project, a knowledge-based interface system, called CUBRICON (CUBRC Intelligent CONversationalist) is being developed as a research prototype. The application domain being used to drive the research is that of military tactical air control.

  5. Reduced auditory efferent activity in childhood selective mutism.

    Science.gov (United States)

    Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava

    2004-06-01

    Selective mutism is a psychiatric disorder of childhood characterized by a consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emissions, suppression of transient evoked otoacoustic emissions, and auditory brainstem response. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appear alongside normal pure-tone and speech audiometry and normal brainstem transmission as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with selective mutism may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.

  6. The effects of divided attention on auditory priming.

    Science.gov (United States)

    Mulligan, Neil W; Duke, Marquinn; Cooper, Angela W

    2007-09-01

    Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.

  7. Auditory memory function in expert chess players.

    Science.gov (United States)

    Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona

    2015-01-01

    Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus, and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, like other behavioral skills, can be influenced by the strengthening processes that follow long-term chess playing, because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. The test was administered to 30 expert chess players aged 20-35 years and 30 matched non-chess players; the participants in both groups were randomly selected. The performance of the two groups was compared by independent-samples t-test using SPSS version 21. The mean scores on the dichotic auditory-verbal memory test differed significantly between expert chess players and non-chess players (p ≤ 0.001). The difference between the ear scores was significant for both expert chess players (p = 0.023) and non-chess players (p = 0.013). Gender had no effect on the test results. Auditory memory function in expert chess players was significantly better than in non-chess players. The improved auditory memory function appears to be related to the strengthening of cognitive performance through long-term chess playing.
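
    The group comparison reported above can be reproduced in outline. The sketch below implements Welch's independent-samples t statistic from scratch; the scores are invented for illustration and are not the study's data (the study used SPSS, not this code).

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's independent-samples t statistic and degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se2 = va / na + vb / nb           # squared standard error of the difference
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Invented scores for illustration only (NOT the study's data):
chess = [18, 17, 19, 16, 18, 17]
controls = [14, 15, 13, 14, 16, 14]
t, df = welch_t(chess, controls)   # t ≈ 5.27, df ≈ 10
```

    The t statistic is then compared against the t distribution with the computed degrees of freedom to obtain the p-value reported in such studies.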

  8. Aging increases distraction by auditory oddballs in visual, but not auditory tasks.

    Science.gov (United States)

    Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar

    2015-05-01

    Aging is typically considered to bring a reduction in the ability to resist distraction by task-irrelevant stimuli. Yet recent work suggests that this conclusion must be qualified: the effect of aging is mitigated by whether irrelevant and target stimuli emanate from the same modality or from distinct ones. Some studies suggest that age-related distraction is especially pronounced within a modality, while others suggest it is greater across modalities. Here we report the first study to measure the effect of aging on deviance distraction in cross-modal (auditory-visual) and uni-modal (auditory-auditory) oddball tasks. Young and older adults were asked to judge the parity of target digits (auditory or visual in distinct blocks of trials), each preceded by a task-irrelevant sound (the same tone on most trials, the standard sound, or, on rare and unpredictable trials, a burst of white noise, the deviant sound). Deviant sounds yielded distraction (longer response times relative to standard sounds) in both tasks and age groups. However, an age-related increase in distraction was observed in the cross-modal task and not in the uni-modal task. We argue that aging might affect processes involved in the switching of attention across modalities, and speculate that this may be due to the slowing of this type of attentional shift or to a reduction in the cognitive control required to re-orient attention toward the target's modality.

  9. Regionalization: The Cure for an Ailing Intelligence Career Field

    Science.gov (United States)

    2013-03-01

    lenses, and it must resist judging the world as if it operated along the same principles and values that drive America.16 A competent strategic... microeconomics – an officer must spend years working within the region and studying within the network of people with long dwell time and deep 12...nurturing those relationships. One of MG Flynn’s principle initiatives for intelligence improvements in Afghanistan directed the analysts to divide their

  10. Implementation of fuzzy modeling system for faults detection and diagnosis in three phase induction motor drive system

    Directory of Open Access Journals (Sweden)

    Shorouk Ossama Ibrahim

    2015-05-01

    Induction motors are intensively utilized in industrial applications, mainly due to their efficiency and reliability. It is necessary that these machines work all the time with high performance and reliability, so it is necessary to monitor, detect, and diagnose the different faults that these motors face. In this paper an intelligent fault detection and diagnosis scheme for different faults of an induction motor drive system is introduced. The stator currents and time are the inputs to the proposed fuzzy detection and diagnosis system. The direct torque control (DTC) technique is adopted as the control technique in the drive system, being well suited to traction applications such as electric vehicles and metro trains that use this type of machine. An intelligent modeling technique is adopted as an identifier for the different faults; the proposed model introduces time as an important variable that plays a role both in fault detection and in the decision making for suitable corrective action according to the type of fault. Experimental results have been obtained to verify the efficiency of the proposed intelligent detector and identifier; good agreement between the simulated and experimental results was observed.
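
    The abstract does not give the rule base, so the following is only a minimal sketch of the general idea: invented triangular membership functions and two invented rules map stator-current magnitude (per unit) and elapsed time to a fault-severity score.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fault_severity(current_pu, seconds):
    """Tiny two-rule fuzzy classifier (invented rules and sets):
    R1: IF current is HIGH AND the condition has PERSISTED -> faulty.
    R2: IF current is NORMAL -> healthy.
    Returns a severity score in [0, 1]."""
    high_current = tri(current_pu, 1.0, 1.5, 2.0)
    persisted = tri(seconds, 0.0, 10.0, 20.0)
    normal = tri(current_pu, 0.5, 1.0, 1.5)
    faulty = min(high_current, persisted)   # min acts as fuzzy AND
    healthy = normal
    total = faulty + healthy
    return 0.0 if total == 0.0 else faulty / total

severity = fault_severity(1.5, 10.0)   # both "fault" sets at their peak -> 1.0
```

    A real detector of the kind described would use many more rules and a proper defuzzification stage, but the time input playing a role in the rule antecedents mirrors the paper's central point.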

  11. Have We Forgotten Auditory Sensory Memory? Retention Intervals in Studies of Nonverbal Auditory Working Memory

    Directory of Open Access Journals (Sweden)

    Michael A. Nees

    2016-12-01

    Researchers have shown increased interest in mechanisms of working memory for nonverbal sounds such as music and environmental sounds. These studies often have used two-stimulus comparison tasks: two sounds separated by a brief retention interval (often 3 to 5 s) are compared, and a "same" or "different" judgment is recorded. Researchers seem to have assumed that sensory memory has a negligible impact on performance in auditory two-stimulus comparison tasks. This assumption is examined in detail in this comment. According to seminal texts and recent research reports, sensory memory persists in parallel with working memory for a period of time after a stimulus is heard and can influence behavioral responses on memory tasks. Unlike verbal working memory studies that use serial recall tasks, research paradigms for exploring nonverbal working memory, especially two-stimulus comparison tasks, may not be differentiating working memory from sensory memory processes in analyses of behavioral responses, because retention interval durations have not excluded the possibility that the sensory memory trace drives task performance. This conflation of different constructs may be one contributor to discrepant research findings and the resulting proliferation of theoretical conjectures regarding mechanisms of working memory for nonverbal sounds.

  12. Have We Forgotten Auditory Sensory Memory? Retention Intervals in Studies of Nonverbal Auditory Working Memory.

    Science.gov (United States)

    Nees, Michael A

    2016-01-01

    Researchers have shown increased interest in mechanisms of working memory for nonverbal sounds such as music and environmental sounds. These studies often have used two-stimulus comparison tasks: two sounds separated by a brief retention interval (often 3-5 s) are compared, and a "same" or "different" judgment is recorded. Researchers seem to have assumed that sensory memory has a negligible impact on performance in auditory two-stimulus comparison tasks. This assumption is examined in detail in this comment. According to seminal texts and recent research reports, sensory memory persists in parallel with working memory for a period of time following hearing a stimulus and can influence behavioral responses on memory tasks. Unlike verbal working memory studies that use serial recall tasks, research paradigms for exploring nonverbal working memory-especially two-stimulus comparison tasks-may not be differentiating working memory from sensory memory processes in analyses of behavioral responses, because retention interval durations have not excluded the possibility that the sensory memory trace drives task performance. This conflation of different constructs may be one contributor to discrepant research findings and the resulting proliferation of theoretical conjectures regarding mechanisms of working memory for nonverbal sounds.

  13. Ambient Intelligence 2.0: Towards Synergetic Prosperity

    Science.gov (United States)

    Aarts, Emile; Grotenhuis, Frits

    Ten years of research in Ambient Intelligence have revealed that the original ideas and assertions about the way the concept should develop no longer hold and should be substantially revised. Early scenarios in Ambient Intelligence envisioned a world in which individuals could maximally exploit personalized, context-aware, wireless devices, enabling them to become maximally productive while living at an unprecedented pace. Environments would become smart and proactive, enriching and enhancing the experience of participants and supporting maximum leisure, possibly even at the risk of alienation. New insights have revealed that these brave-new-world scenarios are no longer desirable and that people instead prefer a balanced approach in which technology serves people rather than driving them to their limits. We call this novel approach Synergetic Prosperity, referring to meaningful digital solutions that balance mind and body, and society and earth, thus contributing to a prosperous and sustainable development of mankind.

  14. Perceptual consequences of disrupted auditory nerve activity.

    Science.gov (United States)

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysiological evidence suggests that the disrupted auditory nerve activity is due to desynchronized neural activity, reduced neural activity, or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing-related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects, who have impaired intensity perception but relatively normal temporal processing once their impaired intensity perception is taken into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also propose two underlying physiological models, based on desynchronized and reduced discharge in the auditory nerve, to account for the observed neurological and behavioral data. The present methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might.

  15. Comorbidity of Auditory Processing, Language, and Reading Disorders

    Science.gov (United States)

    Sharma, Mridula; Purdy, Suzanne C.; Kelly, Andrea S.

    2009-01-01

    Purpose: The authors assessed comorbidity of auditory processing disorder (APD), language impairment (LI), and reading disorder (RD) in school-age children. Method: Children (N = 68) with suspected APD and nonverbal IQ standard scores of 80 or more were assessed using auditory, language, reading, attention, and memory measures. Auditory processing…

  16. Auditory Preferences of Young Children with and without Hearing Loss for Meaningful Auditory-Visual Compound Stimuli

    Science.gov (United States)

    Zupan, Barbra; Sussman, Joan E.

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…

  17. Switched Cooperative Driving Model towards Human Vehicle Copiloting Situation: A Cyberphysical Perspective

    Directory of Open Access Journals (Sweden)

    Yang Li

    2018-01-01

    Development of highly automated and intelligent vehicles can reduce driver workload. However, it also causes the out-of-the-loop problem for drivers, which leaves them handicapped in their ability to take over manual operations in emergency situations. This contribution puts forth a new switched driving strategy that avoids some of the negative consequences associated with out-of-the-loop performance by having drivers assume manual control at periodic intervals. To minimize the impact of the transitions between automated and manual driving on traffic operations, a switched cooperative driving model for the human-vehicle copiloting situation is proposed, considering the vehicle dynamics and realistic intervehicle communication from a cyberphysical perspective. The design method for the switching signal of the switched cooperative driving model is given based on Lyapunov stability theory, with comprehensive consideration of platoon stability and human factors. The good agreement between simulation results and theoretical analysis illustrates the effectiveness of the proposed methods.
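
    The switched control model itself is not reproduced in the abstract; the following is only a toy sketch of the idea, with invented gains: a follower's spacing error behind a leader is regulated by a PD law whose gains switch periodically between an "automated" and a "manual" mode, and the error stays bounded and decays across the switches.

```python
def simulate(switch_period=200, dt=0.05, steps=1000):
    """Follower regulating its spacing error behind a constant-speed leader.
    PD gains (invented values) switch periodically between an 'automated'
    and a 'manual' mode, mimicking periodic control handovers."""
    gains = {"automated": (0.8, 1.2), "manual": (0.3, 0.5)}   # (kp, kd)
    gap_err, rel_v = 5.0, 0.0   # initial spacing error [m], relative speed [m/s]
    worst = abs(gap_err)
    for i in range(steps):
        mode = "automated" if (i // switch_period) % 2 == 0 else "manual"
        kp, kd = gains[mode]
        accel = -kp * gap_err - kd * rel_v   # PD spacing controller
        rel_v += accel * dt                  # semi-implicit Euler step
        gap_err += rel_v * dt
        worst = max(worst, abs(gap_err))
    return gap_err, worst

final_err, worst_err = simulate()   # error decays; no divergence at switches
```

    The paper's actual contribution is a Lyapunov-based design of the switching signal guaranteeing platoon stability; the sketch only illustrates why bounded behavior under switching is the property of interest.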

  18. The Relationship of Neuropsychological Variables to Driving Status Following Holistic Neurorehabilitation

    Directory of Open Access Journals (Sweden)

    Ramaswamy Kavitha ePerumparaichallai

    2014-04-01

    Objective: The main objectives of the present study were to evaluate the cognitive and driving outcomes of a holistic neurorehabilitation program and to examine the relationship between the neuropsychological variables of attention, speed of information processing, and visuospatial functioning and driving outcomes. Methods: One hundred and twenty-eight individuals with heterogeneous neurological etiologies participated in a holistic neurorehabilitation program. Holistic neurorehabilitation consisted of therapies focusing on physical, cognitive, language, emotional, and interpersonal functioning, including training in compensatory strategies. Neuropsychological testing was administered at admission and prior to starting driving or program discharge. Subtests of processing speed, working memory, and perceptual reasoning from the Wechsler Adult Intelligence Scale-III and the Trail Making Test were included. Results: At the time of discharge, 54% of the individuals had returned to driving. Statistical analyses revealed that at the time of discharge the sample as a group had made significant improvements on the cognitive measures included in the study, and the driving and non-driving groups differed significantly on aspects of processing speed, attention, abstract reasoning, working memory, and visuospatial functions. Further, at the time of admission, the driving group performed significantly better than the non-driving group on several neuropsychological measures. Conclusions: Cognitive functions of attention, working memory, visual-motor coordination, motor and mental speed, and visual scanning significantly contribute to predicting the driving status of individuals after neurorehabilitation. Holistic neurorehabilitation facilitates recovery and helps individuals gain functional independence after brain injury.

  19. Cross-modal processing in auditory and visual working memory.

    Science.gov (United States)

    Suchan, Boris; Linnewerth, Britta; Köster, Odo; Daum, Irene; Schmid, Gebhard

    2006-02-01

    This study aimed to further explore processing of auditory and visual stimuli in working memory. Smith and Jonides (1997) [Smith, E.E., Jonides, J., 1997. Working memory: A view from neuroimaging. Cogn. Psychol. 33, 5-42] described a modified working memory model in which visual input is automatically transformed into a phonological code. To study this process, auditory and the corresponding visual stimuli were presented in a variant of the 2-back task which involved changes from the auditory to the visual modality and vice versa. Brain activation patterns underlying visual and auditory processing as well as transformation mechanisms were analyzed. Results yielded a significant activation in the left primary auditory cortex associated with transformation of visual into auditory information which reflects the matching and recoding of a stored item and its modality. This finding yields empirical evidence for a transformation of visual input into a phonological code, with the auditory cortex as the neural correlate of the recoding process in working memory.

  20. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

    Arjen eAlink

    2012-01-01

    In this study, it is demonstrated that moving sounds affect the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations, inducing left- or rightward auditory apparent motion. On the path of the auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightward) more often as moving in the same direction as the auditory apparent motion than in the opposite direction. In the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture the visual motion percept when the visual motion direction is insufficiently determinate, without affecting eye movements.

  1. Intelligent energy management control of vehicle air conditioning system coupled with engine

    International Nuclear Information System (INIS)

    Khayyam, Hamid; Abawajy, Jemal; Jazar, Reza N.

    2012-01-01

    Vehicle Air Conditioning (AC) systems consist of an engine-powered compressor activated by an electrical clutch. The AC system imposes an extra load on the vehicle's engine, increasing the vehicle's fuel consumption and emissions. Energy management control of vehicle air conditioning is a nonlinear dynamic problem influenced by uncertain disturbances. In addition, the vehicle energy management control system interacts with different complex systems, such as the engine, the air conditioning system, the environment, and the driver, to deliver fuel consumption improvements. In this paper, we describe the energy management control of the vehicle AC system coupled with the vehicle engine through an intelligent control design. The Intelligent Energy Management Control (IEMC) system presented in this paper includes an intelligent algorithm which uses five exterior units and three integrated fuzzy controllers to produce a desirable internal temperature and air quality, improved fuel consumption, low emissions, and smooth driving. The three fuzzy controllers are: (i) a fuzzy cruise controller that adapts vehicle cruise speed via prediction of the road ahead using a look-ahead system, (ii) a fuzzy air conditioning controller that produces a desirable temperature and air quality inside the vehicle cabin via a road information system, and (iii) a fuzzy engine controller that generates the required engine torque to move the vehicle smoothly on the road. We optimised the integrated operation of the air conditioning and the engine under various driving patterns and performed three simulations. Results show that the proposed IEMC system developed based on the Fuzzy Air Conditioning Controller with Look-Ahead (FAC-LA) method is a more efficient controller for the vehicle air conditioning system than the previously developed Coordinated Energy Management Systems (CEMS).

  2. A New Dimension of Business Intelligence: Location-based Intelligence

    OpenAIRE

    Zeljko Panian

    2012-01-01

    Through the course of this paper we define Location-based Intelligence (LBI), which is outgrowing from the process of amalgamation of geolocation and Business Intelligence. Amalgamating geolocation with traditional Business Intelligence (BI) results in a new dimension of BI named Location-based Intelligence. LBI is defined as leveraging unified location information for business intelligence. Collectively, enterprises can transform location data into business intelligence applic...

  3. Auditory/visual distance estimation: accuracy and variability

    Directory of Open Access Journals (Sweden)

    Paul Wallace Anderson

    2014-10-01

    Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources, compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made at distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the listener's perspective at each distance in the impulse response measurement setup, presented on a large HDTV monitor. Listeners were asked to estimate the egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two listeners were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.

  4. Algorithm of Energy Efficiency Improvement for Intelligent Devices in Railway Transport

    Directory of Open Access Journals (Sweden)

    Beinaroviča Anna

    2016-07-01

    The present paper deals with the use of systems and devices with artificial intelligence in motor vehicle driving. The main objective of transport operations is transportation planning with minimum energy consumption. There are various methods for energy saving, and the paper discusses one of them: proper planning of transport operations. To achieve proper planning it is necessary to involve systems and devices with artificial intelligence, which evaluate the possible developments under one or another transport plan and estimate how effective each plan is relative to the energy it consumes. The intelligent device considered in this paper consists of an algorithm, a database, and an internet connection to other intelligent devices. The main task of the target function is to minimize the total downtime at intermediate stations. A unique PHP-based computer model was created; it uses a MySQL database for simulation data storage and processing. Conclusions were drawn from the experiments, which showed that after optimization a train can pass intermediate stations without making multiple stops (braking and accelerating), which leads to decreased energy consumption.
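
    The target function (minimizing total downtime at intermediate stations) is described only at a high level; the sketch below shows a simplified, invented single-track model of the same flavor: trains request passage times at a station, a minimum headway is enforced greedily, and the total added downtime is accumulated.

```python
def schedule(requested, headway=3):
    """Greedy pass-through scheduling at one station (invented toy model):
    trains pass in order of requested time, no earlier than requested and
    at least `headway` minutes apart; returns times and total added wait."""
    assigned, last, total_wait = [], None, 0
    for t in sorted(requested):
        slot = t if last is None else max(t, last + headway)
        total_wait += slot - t
        assigned.append(slot)
        last = slot
    return assigned, total_wait

times, wait = schedule([10, 11, 12, 20])   # -> [10, 13, 16, 20], 6 min added
```

    The paper's actual model optimizes over whole transport plans and multiple stations, but the objective, downtime added on top of requested passage times, has the same shape as `total_wait` here.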

  5. Auditory Hypersensitivity in Children with Autism Spectrum Disorders

    Science.gov (United States)

    Lucker, Jay R.

    2013-01-01

    A review of records was completed to determine whether children with auditory hypersensitivities have difficulty tolerating loud sounds due to auditory-system factors or some other factors not directly involving the auditory system. Records of 150 children identified as not meeting autism spectrum disorders (ASD) criteria and another 50 meeting…

  6. Intelligence analysis – the royal discipline of Competitive Intelligence

    Directory of Open Access Journals (Sweden)

    František Bartes

    2011-01-01

    The aim of this article is to propose a work methodology for Competitive Intelligence teams in one specific stage of the intelligence cycle, so-called "Intelligence Analysis". Intelligence Analysis is the stage of the intelligence cycle in which data from both primary and secondary research are analyzed. The main result of the effort is the creation of added value for the information collected. Company Competitive Intelligence, correctly understood and implemented in business practice, is the "forecasting of the future": forecasting that forms the basis for strategic decisions made by the company's top management. To implement that requirement in corporate practice, the author perceives Competitive Intelligence as a systemic application discipline. This approach allows him to propose a "Work Plan" for Competitive Intelligence as a fundamental standardized document to steer Competitive Intelligence team activities. The author divides the Competitive Intelligence team work plan into five basic parts. Those parts are derived from the five-stage model of the intelligence cycle, which, in the author's opinion, is more appropriate for complicated cases of Competitive Intelligence.

  7. The U.S. Army Functional Concept for Intelligence 2020-2040

    Science.gov (United States)

    2017-02-01

    importance of open source intelligence ( OSINT ) and the Internet of things adds to the volume and diversity of data. Access to the vast amounts of data...U.S. interests. OSINT collection may be the driving source of information, particularly in large urban centers. TRADOC Pamphlet 525-2-1 22...private and public security and traffic systems. (2) OSINT provides insight into human terrain, including social media, search-engines, databases

  8. A Framework for Function Allocation in Intelligent Driver Interface Design for Comfort and Safety

    Directory of Open Access Journals (Sweden)

    Wuhong Wang

    2010-11-01

    This paper presents a conceptual framework for ecological function allocation and an optimization matching solution for a human-machine interface with intelligent characteristics, based on the consideration of "who does what, when, and how". As intelligent transportation systems play an increasingly important role in traffic safety, our research is concerned with identifying human-factors problems of In-vehicle Support Systems (ISSs) and revealing the effects of ISSs on the driver's cognitive interface. The primary objective is to explore new ergonomics principles that can be used to design an intelligent driver interface for comfort and safety, addressing the impact of driver interface layouts, traffic information types, and driving behavioral factors on advanced vehicle safety design.

  9. Amplify scientific discovery with artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Gil, Yolanda; Greaves, Mark T.; Hendler, James; Hirsch, Hyam

    2014-10-10

    Computing innovations have fundamentally changed many aspects of scientific inquiry. For example, advances in robotics, high-end computing, networking, and databases now underlie much of what we do in science such as gene sequencing, general number crunching, sharing information between scientists, and analyzing large amounts of data. As computing has evolved at a rapid pace, so too has its impact in science, with the most recent computing innovations repeatedly being brought to bear to facilitate new forms of inquiry. Recently, advances in Artificial Intelligence (AI) have deeply penetrated many consumer sectors, including for example Apple’s Siri™ speech recognition system, real-time automated language translation services, and a new generation of self-driving cars and self-navigating drones. However, AI has yet to achieve comparable levels of penetration in scientific inquiry, despite its tremendous potential in aiding computers to help scientists tackle tasks that require scientific reasoning. We contend that advances in AI will transform the practice of science as we are increasingly able to effectively and jointly harness human and machine intelligence in the pursuit of major scientific challenges.

  10. Spiritual Intelligence, Emotional Intelligence and Auditor’s Performance

    OpenAIRE

    Hanafi, Rustam

    2010-01-01

    The objective of this research was to investigate empirical evidence about the influence of auditor spiritual intelligence on performance, with emotional intelligence as a mediator variable. Linear regression models and path analysis are developed to examine the hypotheses. The dependent variable of each model is auditor performance, whereas the independent variable of model 1 is spiritual intelligence, and of model 2, emotional intelligence and spiritual intelligence. The parameters were estima...
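    The mediated-regression design sketched in this abstract (independent variable → mediator → outcome, estimated with nested linear models) can be illustrated as follows. This is a hedged sketch on simulated data; the variable names, effect sizes, and OLS helper are hypothetical and are not taken from the study.

```python
# Hypothetical sketch of the mediation design: spiritual intelligence ->
# emotional intelligence (mediator) -> auditor performance.
# Effect sizes and data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 100
spiritual = rng.normal(size=n)                      # independent variable
emotional = 0.5 * spiritual + rng.normal(size=n)    # proposed mediator
performance = 0.3 * spiritual + 0.4 * emotional + rng.normal(size=n)

def ols_slopes(y, regressors):
    """OLS coefficients of y on the given regressors (intercept added)."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]                                 # drop the intercept

c_total = ols_slopes(performance, [spiritual])[0]              # model 1: total effect
c_direct, b = ols_slopes(performance, [spiritual, emotional])  # model 2: direct effect + path b
a = ols_slopes(emotional, [spiritual])[0]                      # path a
indirect = a * b                                               # mediated (indirect) effect
# For OLS with nested models the decomposition is exact:
# total effect = direct effect + indirect effect.
print(round(c_total, 3), round(c_direct, 3), round(indirect, 3))
```

    Mediation is supported to the extent that the indirect path `a * b` accounts for a meaningful share of the total effect.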

  11. Across frequency processes involved in auditory detection of coloration

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Kerketsos, P

    2008-01-01

    When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA...]. The filterbank was designed to approximate auditory filter-shapes measured by Oxenham and Shera [JARO, 2003, 541-554], derived from forward masking data. The results of the present study demonstrate that a “purely” spectrum-based model approach can successfully describe auditory coloration detection even at high...

  12. Designing feedback to mitigate teen distracted driving: A social norms approach.

    Science.gov (United States)

    Merrikhpour, Maryam; Donmez, Birsen

    2017-07-01

    The purpose of this research is to investigate teens' perceived social norms and whether providing normative information can reduce distracted driving behaviors among them. Parents are among the most important social referents for teens; they have significant influences on teens' driving behaviors, including distracted driving, which significantly contributes to teens' crash risks. Social norms interventions have been successfully applied in various domains including driving; however, this approach is yet to be explored for mitigating driver distraction among teens. Forty teens completed a driving simulator experiment while performing a self-paced visual-manual secondary task in four between-subject conditions: a) social norms feedback that provided a report at the end of each drive on teens' distracted driving behavior, comparing their distraction engagement to their parent's, b) post-drive feedback that provided just the report on teens' distracted driving behavior without information on their parents, c) real-time feedback in the form of auditory warnings based on eyes-off-road time, and d) no feedback as control. Questionnaires were administered to collect data on these teens' and their parents' self-reported engagement in driver distractions and the associated social norms. Social norms and real-time feedback conditions resulted in significantly smaller average off-road glance duration, rate of long (>2s) off-road glances, and standard deviation of lane position compared to no feedback. Further, social norms feedback decreased brake response time and percentage of time not looking at the road compared to no feedback. No major effect was observed for post-drive feedback. Questionnaire results suggest that teens appeared to overestimate parental norms, but no effect of feedback was found on their perceptions. Feedback systems that leverage social norms can help mitigate driver distraction among teens. Overall, both social norms and real-time feedback induced

  13. Autism-specific covariation in perceptual performances: "g" or "p" factor?

    Science.gov (United States)

    Meilleur, Andrée-Anne S; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent

    2014-01-01

    Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We conducted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or "g" factor). Instead, this residual covariation is accounted for by a common perceptual process (or "p" factor), which may drive perceptual abilities differently in autistic and

  14. Autism-Specific Covariation in Perceptual Performances: “g” or “p” Factor?

    Science.gov (United States)

    Meilleur, Andrée-Anne S.; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent

    2014-01-01

    Background Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. Methods We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We conducted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. Results In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Conclusions Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or “g” factor). Instead, this residual covariation is accounted for by a common perceptual process (or “p” factor), which may drive

  15. Autism-specific covariation in perceptual performances: "g" or "p" factor?

    Directory of Open Access Journals (Sweden)

    Andrée-Anne S Meilleur

    Full Text Available Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We conducted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or "g" factor). Instead, this residual covariation is accounted for by a common perceptual process (or "p" factor), which may drive perceptual abilities differently in
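    The analysis idea recurring in these three records (does covariation between perceptual tasks survive after intelligence is controlled for?) can be sketched as residualizing each task score on IQ and correlating the residuals. This is an illustrative simulation under assumed effect sizes, not the study's data or exact regression models.

```python
# Illustrative sketch: test whether covariation between two perceptual
# tasks survives after controlling for IQ. Data and effect sizes are
# simulated; a surviving residual correlation mimics a "p" factor.
import numpy as np

rng = np.random.default_rng(1)
n = 92
iq = rng.normal(100, 15, n)
shared = rng.normal(size=n)                      # putative common "p" factor
task_visual = 0.02 * iq + 0.8 * shared + rng.normal(size=n)
task_auditory = 0.02 * iq + 0.8 * shared + rng.normal(size=n)

def residualize(y, x):
    """Residuals of y after regressing out x (with intercept)."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r_raw = np.corrcoef(task_visual, task_auditory)[0, 1]
r_resid = np.corrcoef(residualize(task_visual, iq),
                      residualize(task_auditory, iq))[0, 1]
# If r_resid stays substantial, the tasks share variance beyond general
# intelligence ("g") -- the signature of a common perceptual factor.
print(round(r_raw, 2), round(r_resid, 2))
```

    In the simulated data the residual correlation remains clearly positive, which is the pattern the authors interpret as a plurimodal "p" factor.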

  16. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.

  17. Interaction of language, auditory and memory brain networks in auditory verbal hallucinations.

    Science.gov (United States)

    Ćurčić-Blake, Branislava; Ford, Judith M; Hubl, Daniela; Orlov, Natasza D; Sommer, Iris E; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W; David, Olivier; Mulert, Christoph; Woodward, Todd S; Aleman, André

    2017-01-01

    Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of particular relevance. However, reconciliation of these theories with experimental evidence is missing. We review 50 studies investigating functional (EEG and fMRI) and anatomic (diffusion tensor imaging) connectivity in these networks, and explore the evidence supporting abnormal connectivity in these networks associated with AVH. We distinguish between functional connectivity during an actual hallucination experience (symptom capture) and functional connectivity during either the resting state or a task comparing individuals who hallucinate with those who do not (symptom association studies). Symptom capture studies clearly reveal a pattern of increased coupling among the auditory, language and striatal regions. Anatomical and symptom association functional studies suggest that the interhemispheric connectivity between posterior auditory regions may depend on the phase of illness, with increases in non-psychotic individuals and first episode patients and decreases in chronic patients. Leading hypotheses involving concepts such as unstable memories, source monitoring, top-down attention, and hybrid models of hallucinations are supported in part by the published connectivity data, although several caveats and inconsistencies remain. Specifically, possible changes in fronto-temporal connectivity are still under debate. Precise hypotheses concerning the directionality of connections deduced from current theoretical approaches should be tested using experimental approaches that allow for discrimination of competing hypotheses. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Auditory memory can be object based.

    Science.gov (United States)

    Dyson, Benjamin J; Ishfaq, Feraz

    2008-04-01

    Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.

  19. Does assisted driving behavior lead to safety-critical encounters with unequipped vehicles' drivers?

    Science.gov (United States)

    Preuk, Katharina; Stemmler, Eric; Schießl, Caroline; Jipp, Meike

    2016-10-01

    With Intelligent Transport Systems (e.g., traffic light assistance systems) assisted drivers are able to show driving behavior in anticipation of upcoming traffic situations. In the years to come, the penetration rate of such systems will be low. Therefore, the majority of vehicles will not be equipped with these systems. Unequipped vehicles' drivers may not expect the driving behavior of assisted drivers. However, drivers' predictions and expectations can play a significant role in their reaction times. Thus, safety issues could arise when unequipped vehicles' drivers encounter driving behavior of assisted drivers. This is why we tested how unequipped vehicles' drivers (N=60) interpreted and reacted to the driving behavior of an assisted driver. We used a multi-driver simulator with three drivers. The three drivers were driving in a line. The lead driver in the line was a confederate who was followed by two unequipped vehicles' drivers. We varied the equipment of the confederate with an Intelligent Transport System: The confederate was equipped either with or without a traffic light assistance system. The traffic light assistance system provided a start-up maneuver before a light turned green. Therefore, the assisted confederate seemed to show unusual deceleration behavior by coming to a halt at an unusual distance from the stop line at the red traffic light. The unusual distance was varied as we tested a moderate (4m distance from the stop line) and an extreme (10m distance from the stop line) parameterization of the system. Our results showed that the extreme parametrization resulted in shorter minimal time-to-collision of the unequipped vehicles' drivers. One rear-end crash was observed. These results provided initial evidence that safety issues can arise when unequipped vehicles' drivers encounter assisted driving behavior. We recommend that future research identifies counteractions to prevent these safety issues. Moreover, we recommend that system developers

  20. Effect of neonatal asphyxia on the impairment of the auditory pathway by recording auditory brainstem responses in newborn piglets: a new experimentation model to study the perinatal hypoxic-ischemic damage on the auditory system.

    Directory of Open Access Journals (Sweden)

    Francisco Jose Alvarez

    Full Text Available Hypoxia-ischemia (HI) is a major perinatal problem that results in severe damage to the brain, impairing the normal development of the auditory system. The purpose of the present study is to examine the effect of perinatal asphyxia on the auditory pathway by recording auditory brain responses in a novel animal experimentation model in newborn piglets. Hypoxia-ischemia was induced in 1.3 day-old piglets by clamping both carotid arteries for 30 minutes with vascular occluders and lowering the fraction of inspired oxygen. We compared the Auditory Brain Responses (ABRs) of newborn piglets exposed to acute hypoxia/ischemia (n = 6) and a control group with no such exposure (n = 10). ABRs were recorded for both ears before the start of the experiment (baseline), after 30 minutes of HI injury, and every 30 minutes during the 6 h after the HI injury. Auditory brain responses were altered during the hypoxic-ischemic insult but recovered 30-60 minutes later. Hypoxia/ischemia seemed to induce auditory functional damage by increasing I-V latencies and decreasing wave I, III and V amplitudes, although differences were not significant. The described experimental model of hypoxia-ischemia in newborn piglets may be useful for studying the effect of perinatal asphyxia on the impairment of the auditory pathway.

  1. Auditory short-term memory activation during score reading.

    Science.gov (United States)

    Simoens, Veerle L; Tervaniemi, Mari

    2013-01-01

    Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as it is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion according to which during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  2. Is the auditory sensory memory sensitive to visual information?

    Science.gov (United States)

    Besle, Julien; Fort, Alexandra; Giard, Marie-Hélène

    2005-10-01

    The mismatch negativity (MMN) component of auditory event-related brain potentials can be used as a probe to study the representation of sounds in auditory sensory memory (ASM). Yet it has been shown that an auditory MMN can also be elicited by an illusory auditory deviance induced by visual changes. This suggests that some visual information may be encoded in ASM and is accessible to the auditory MMN process. It is not known, however, whether visual information affects ASM representation for any audiovisual event or whether this phenomenon is limited to specific domains in which strong audiovisual illusions occur. To address this issue, we compared the topographies of MMNs elicited by non-speech audiovisual stimuli deviating from audiovisual standards on the visual, the auditory, or both dimensions. Contrary to what occurs with audiovisual illusions, each unimodal deviant elicited sensory-specific MMNs, and the MMN to audiovisual deviants included both sensory components. The visual MMN was, however, different from a genuine visual MMN obtained in a visual-only control oddball paradigm, suggesting that auditory and visual information interacts before the MMN process occurs. Furthermore, the MMN to audiovisual deviants was significantly different from the sum of the two sensory-specific MMNs, showing that the processes of visual and auditory change detection are not completely independent.

  3. Biological impact of music and software-based auditory training

    Science.gov (United States)

    Kraus, Nina

    2012-01-01

    Auditory-based communication skills are developed at a young age and are maintained throughout our lives. However, some individuals – both young and old – encounter difficulties in achieving or maintaining communication proficiency. Biological signals arising from hearing sounds relate to real-life communication skills such as listening to speech in noisy environments and reading, pointing to an intersection between hearing and cognition. Musical experience, amplification, and software-based training can improve these biological signals. These findings of biological plasticity, in a variety of subject populations, relate to attention and auditory memory, and represent an integrated auditory system influenced by both sensation and cognition. Learning outcomes: The reader will (1) understand that the auditory system is malleable to experience and training, (2) learn the ingredients necessary for auditory learning to successfully be applied to communication, (3) learn that the auditory brainstem response to complex sounds (cABR) is a window into the integrated auditory system, and (4) see examples of how cABR can be used to track the outcome of experience and training. PMID:22789822

  4. Visual form predictions facilitate auditory processing at the N1.

    Science.gov (United States)

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2017-02-20

    Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on predictive rather than multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.

  5. Intelligence analysis – the royal discipline of Competitive Intelligence

    OpenAIRE

    František Bartes

    2011-01-01

    The aim of this article is to propose a work methodology for Competitive Intelligence teams in a specific area of the intelligence cycle, the so-called “Intelligence Analysis”. Intelligence Analysis is one of the stages of the Intelligence Cycle in which data from both the primary and secondary research are analyzed. The main result of the effort is the creation of added value for the information collected. Company Competitive Intelligence, correctly understood and implemented in busines...

  6. Temporal expectation weights visual signals over auditory signals.

    Science.gov (United States)

    Menceloglu, Melisa; Grabowecky, Marcia; Suzuki, Satoru

    2017-04-01

    Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory-visual interaction, using an auditory-visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

  7. Brainstem Auditory Evoked Potential in HIV-Positive Adults.

    Science.gov (United States)

    Matas, Carla Gentile; Samelli, Alessandra Giannella; Angrisani, Rosanna Giaffredo; Magliaro, Fernanda Cristina Leite; Segurado, Aluísio C

    2015-10-20

    To characterize the findings of brainstem auditory evoked potential in HIV-positive individuals exposed and not exposed to antiretroviral treatment. This research was a cross-sectional, observational, and descriptive study. Forty-five HIV-positive individuals (18 not exposed and 27 exposed to the antiretroviral treatment - research groups I and II, respectively - and 30 control group individuals) were assessed through brainstem auditory evoked potential. There were no significant between-group differences regarding wave latencies. A higher percentage of altered brainstem auditory evoked potential was observed in the HIV-positive groups when compared to the control group. The most common alteration was in the low brainstem. HIV-positive individuals have a higher percentage of altered brainstem auditory evoked potential that suggests central auditory pathway impairment when compared to HIV-negative individuals. There was no significant difference between individuals exposed and not exposed to antiretroviral treatment.

  8. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands in the streaming conditions may also contribute to the reduction of CMR in the stream conditions; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in EEG data using MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in CMR experiments. Dipole source analysis indicated different cortical regions involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Intelligent Control for Drag Reduction on the X-48B Vehicle

    Science.gov (United States)

    Griffin, Brian Joseph; Brown, Nelson Andrew; Yoo, Seung Yeun

    2011-01-01

    This paper focuses on the development of an intelligent control technology for in-flight drag reduction. The system is integrated with and demonstrated on the full X-48B nonlinear simulation. The intelligent control system utilizes a peak-seeking control method implemented with a time-varying Kalman filter. Performance functional coordinate and magnitude measurements, or independent and dependent parameters respectively, are used by the Kalman filter to provide the system with gradient estimates of the designed performance function, which are used to drive the system toward a local minimum in a steepest-descent approach. To ensure ease of integration and algorithm performance, a single-input single-output approach was chosen. The framework, specific implementation considerations, simulation results, and flight feasibility issues related to this platform are discussed.
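    The single-input single-output peak-seeking loop described above can be sketched as follows: a scalar time-varying Kalman filter estimates the local gradient of a performance function from successive coordinate and magnitude measurements, and a steepest-descent step drives the input toward a local minimum. The performance function, noise levels, gains, and dither scheme here are all invented for illustration; this is not the X-48B implementation.

```python
# Illustrative SISO peak-seeking loop: a time-varying Kalman filter
# estimates the local gradient of a performance function f(x) from
# (coordinate, magnitude) measurement pairs; a steepest-descent step
# drives x toward a local minimum. All numbers are hypothetical.
import numpy as np

def f(x):
    return (x - 2.0) ** 2 + 1.0       # stand-in "drag" performance function

rng = np.random.default_rng(2)
x, gain, dither = 0.0, 0.3, 0.05      # state, descent gain, excitation
g_hat, P = 0.0, 1.0                   # gradient estimate and its variance
Q, R = 0.05, 1e-3                     # process / measurement noise variances

x_prev, y_prev = x, f(x) + rng.normal(0, 0.01)
for k in range(300):
    # Move downhill on the estimated gradient; dither keeps it observable.
    x = x - gain * g_hat + dither * (-1) ** k
    y = f(x) + rng.normal(0, 0.01)
    dx, dy = x - x_prev, y - y_prev
    # Kalman filter: state = gradient (random walk), observation dy = g*dx + v.
    P += Q                                # predict
    if abs(dx) > 1e-9:
        S = dx * P * dx + R               # innovation variance
        K = P * dx / S                    # Kalman gain
        g_hat += K * (dy - dx * g_hat)    # correct with the innovation
        P *= 1.0 - K * dx
    x_prev, y_prev = x, y
print(round(x, 2))                        # settles near the minimum at x = 2
```

    The dither term plays the role of the persistent excitation such schemes need: without it, dx shrinks to zero near the minimum and the gradient becomes unobservable.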

  10. Effect of omega-3 on auditory system

    Directory of Open Access Journals (Sweden)

    Vida Rahimi

    2014-01-01

    Full Text Available Background and Aim: Omega-3 fatty acids have structural and biological roles in the body's various systems. Numerous studies have investigated these roles. The auditory system is affected as well. The aim of this article was to review the research on the effect of omega-3 on the auditory system. Methods: We searched the Medline, Google Scholar, PubMed, Cochrane Library and SID search engines with the "auditory" and "omega-3" keywords and read textbooks on this subject published between 1970 and 2013. Conclusion: Both excess and deficient amounts of dietary omega-3 fatty acid can cause harmful effects on fetal and infant growth and on the development of the brain and central nervous system, especially the auditory system. It is important to determine the adequate dosage of omega-3.

  11. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1 include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  12. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine Signoret

    2011-07-01

    Full Text Available If it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  13. Musical experience, auditory perception and reading-related skills in children.

    Science.gov (United States)

    Banai, Karen; Ahissar, Merav

    2013-01-01

    The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically we ask whether the pattern of correlations between auditory and reading related skills differ between children with different amounts of musical experience. Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. Participants' previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case.

  15. Tinnitus intensity dependent gamma oscillations of the contralateral auditory cortex.

    Directory of Open Access Journals (Sweden)

    Elsa van der Loo

    Full Text Available BACKGROUND: Non-pulsatile tinnitus is considered a subjective auditory phantom phenomenon present in 10 to 15% of the population. Tinnitus as a phantom phenomenon is related to hyperactivity and reorganization of the auditory cortex. Magnetoencephalography studies demonstrate a correlation between gamma band activity in the contralateral auditory cortex and the presence of tinnitus. The present study aims to investigate the relation between objective gamma-band activity in the contralateral auditory cortex and subjective tinnitus loudness scores. METHODS AND FINDINGS: In unilateral tinnitus patients (N = 15; 10 right, 5 left), source analysis of resting-state electroencephalographic gamma-band oscillations shows a strong positive correlation with Visual Analogue Scale loudness scores in the contralateral auditory cortex (max r = 0.73, p<0.05). CONCLUSION: Auditory phantom percepts thus show similar sound-level-dependent activation of the contralateral auditory cortex as observed in normal audition. In view of recent consciousness models and tinnitus network models, these results suggest tinnitus loudness is coded by gamma band activity in the contralateral auditory cortex but might not, by itself, be responsible for tinnitus perception.

  16. Control and Driving Methods for LED Based Intelligent Light Sources

    DEFF Research Database (Denmark)

    Beczkowski, Szymon

    of the diode is controlled either by varying the magnitude of the current or by driving the LED with a pulsed current and regulating the width of the pulse. It has been shown previously that these two methods yield different effects on the diode's efficacy and colour point. A hybrid dimming strategy has been...... proposed where two variable quantities control the intensity of the diode. This increases the controllability of the diode, giving new optimisation possibilities. It has been shown that it is possible to compensate for the temperature drift of a white diode's colour point using the hybrid dimming strategy. Also......
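    The hybrid idea, combining amplitude (current-magnitude) and pulse-width dimming so that two variables set one light level, can be sketched as below. The `amp_floor` threshold and the amplitude-first split rule are illustrative assumptions for the example, not values or policies from the thesis (which additionally exploits the second degree of freedom to compensate colour-point drift).

    ```python
    def hybrid_dimming(target, amp_floor=0.25):
        """Split a requested relative light output (0..1] between current
        amplitude and PWM duty cycle (hybrid dimming): reduce the amplitude
        first, down to amp_floor, then let the duty cycle provide the
        remaining reduction. Returns (duty_cycle, current_ratio)."""
        if not 0.0 < target <= 1.0:
            raise ValueError("target must be in (0, 1]")
        current = max(target, amp_floor)   # AM component (relative drive current)
        duty = target / current            # PWM component
        return duty, current

    # 10% light output: amplitude clamped at the floor, PWM does the rest
    duty, current = hybrid_dimming(0.1)
    print(duty, current)                   # duty * current recovers the 0.1 target
    ```

    Because both quantities are free above the floor, the same light output can be produced by many (duty, current) pairs, which is exactly the extra degree of freedom the abstract says can be spent on colour-point correction.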

  17. Music lessons improve auditory perceptual and cognitive performance in deaf children.

    Science.gov (United States)

    Rochette, Françoise; Moussard, Aline; Bigand, Emmanuel

    2014-01-01

    Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5-4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  19. Organization of the auditory brainstem in a lizard, Gekko gecko. I. Auditory nerve, cochlear nuclei, and superior olivary nuclei

    DEFF Research Database (Denmark)

    Tang, Y. Z.; Christensen-Dalsgaard, J.; Carr, C. E.

    2012-01-01

    We used tract tracing to reveal the connections of the auditory brainstem in the Tokay gecko (Gekko gecko). The auditory nerve has two divisions, a rostroventrally directed projection of mid- to high best-frequency fibers to the nucleus angularis (NA) and a more dorsal and caudal projection of lo...... of auditory connections in lizards and archosaurs but also different processing of low- and high-frequency information in the brainstem. J. Comp. Neurol. 520:1784-1799, 2012. (C) 2011 Wiley Periodicals, Inc...

  20. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed in the period between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after a cortical stroke. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  1. Attention, awareness, and the perception of auditory scenes

    Directory of Open Access Journals (Sweden)

    Joel S Snyder

    2012-02-01

    Full Text Available Auditory perception and cognition entail both low-level and high-level processes, which are likely to interact with each other to create our rich conscious experience of soundscapes. Recent research that we review has revealed numerous influences of high-level factors, such as attention, intention, and prior experience, on conscious auditory perception. Recently, studies have shown that auditory scene analysis tasks can exhibit multistability in a manner very similar to ambiguous visual stimuli, presenting a unique opportunity to study neural correlates of auditory awareness and the extent to which mechanisms of perception are shared across sensory modalities. Research has also led to a growing number of techniques through which auditory perception can be manipulated and even completely suppressed. Such findings have important consequences for our understanding of the mechanisms of perception and should allow scientists to precisely distinguish the influences of different higher-level factors.

  2. The interaction of cognitive load and attention-directing cues in driving.

    Science.gov (United States)

    Lee, Yi-Ching; Lee, John D; Boyle, Linda Ng

    2009-06-01

    This study investigated the effect of a nondriving cognitively loading task on the relationship between drivers' endogenous and exogenous control of attention. Previous studies have shown that cognitive load leads to a withdrawal of attention from the forward scene and a narrowed field of view, which impairs hazard detection. Posner's cue-target paradigm was modified to study how endogenous and exogenous cues interact with cognitive load to influence drivers' attention in a complex dynamic situation. In a driving simulator, pedestrian crossing signs that predicted the spatial location of pedestrians acted as endogenous cues. To impose cognitive load on drivers, we had them perform an auditory task that simulated the demands of emerging in-vehicle technology. Irrelevant exogenous cues were added to half of the experimental drives by including scene clutter. The validity of endogenous cues influenced how drivers scanned for pedestrian targets. Cognitive load delayed drivers' responses, and scene clutter reduced drivers' fixation durations to pedestrians. Cognitive load diminished the influence of exogenous cues to attract attention to irrelevant areas, and drivers were more affected by scene clutter when the endogenous cues were invalid. Cognitive load suppresses interference from irrelevant exogenous cues and delays endogenous orienting of attention in driving. The complexity of everyday tasks, such as driving, is better captured experimentally in paradigms that represent the interactive nature of attention and processing load.

  3. The 21st annual intelligent ground vehicle competition: robotists for the future

    Science.gov (United States)

    Theisen, Bernard L.

    2013-12-01

    The Intelligent Ground Vehicle Competition (IGVC) is one of four unmanned-systems student competitions founded by the Association for Unmanned Vehicle Systems International (AUVSI). The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics and mobile platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 21 years, the competition has challenged undergraduate, graduate and Ph.D. students with real-world applications in intelligent transportation systems, the military and manufacturing automation. To date, teams from over 80 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the four-day competition are highlighted. Finally, an assessment of the competition based on participation is presented.

  4. Convergent validity of the Integrated Visual and Auditory Continuous Performance Test (IVA+Plus): associations with working memory, processing speed, and behavioral ratings.

    Science.gov (United States)

    Arble, Eamonn; Kuentzel, Jeffrey; Barnett, Douglas

    2014-05-01

    Though the Integrated Visual and Auditory Continuous Performance Test (IVA + Plus) is commonly used by researchers and clinicians, few investigations have assessed its convergent and discriminant validity, especially with regard to its use with children. The present study details correlates of the IVA + Plus using measures of cognitive ability and ratings of child behavior (parent and teacher), drawing upon a sample of 90 psychoeducational evaluations. Scores from the IVA + Plus correlated significantly with the Working Memory and Processing Speed Indexes from the Fourth Edition of the Wechsler Intelligence Scales for Children (WISC-IV), though fewer and weaker significant correlations were seen with behavior ratings scales, and significant associations also occurred with WISC-IV Verbal Comprehension and Perceptual Reasoning. The overall pattern of relations is supportive of the validity of the IVA + Plus; however, general cognitive ability was associated with better performance on most of the primary scores of the IVA + Plus, suggesting that interpretation should take intelligence into account.

  5. A deafening flash! Visual interference of auditory signal detection.

    Science.gov (United States)

    Fassnidge, Christopher; Cecconi Marcotti, Claudia; Freeman, Elliot

    2017-03-01

    In some people, visual stimulation evokes auditory sensations. How prevalent and how perceptually real is this? 22% of our neurotypical adult participants responded 'Yes' when asked whether they heard faint sounds accompanying flash stimuli, and showed significantly better ability to discriminate visual 'Morse-code' sequences. This benefit might arise from an ability to recode visual signals as sounds, thus taking advantage of superior temporal acuity of audition. In support of this, those who showed better visual relative to auditory sequence discrimination also had poorer auditory detection in the presence of uninformative visual flashes, though this was independent of awareness of visually-evoked sounds. Thus a visually-evoked auditory representation may occur subliminally and disrupt detection of real auditory signals. The frequent natural correlation between visual and auditory stimuli might explain the surprising prevalence of this phenomenon. Overall, our results suggest that learned correspondences between strongly correlated modalities may provide a precursor for some synaesthetic abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. A Neural Circuit for Auditory Dominance over Visual Perception.

    Science.gov (United States)

    Song, You-Hyang; Kim, Jae-Hyun; Jeong, Hye-Won; Choi, Ilsong; Jeong, Daun; Kim, Kwansoo; Lee, Seung-Hee

    2017-02-22

    When conflicts occur during integration of visual and auditory information, one modality often dominates the other, but the underlying neural circuit mechanism remains unclear. Using auditory-visual discrimination tasks for head-fixed mice, we found that audition dominates vision in a process mediated by interaction between inputs from the primary visual (VC) and auditory (AC) cortices in the posterior parietal cortex (PTLp). Co-activation of the VC and AC suppresses VC-induced PTLp responses, leaving AC-induced responses. Furthermore, parvalbumin-positive (PV+) interneurons in the PTLp mainly receive AC inputs, and muscimol inactivation of the PTLp or optogenetic inhibition of its PV+ neurons abolishes auditory dominance in the resolution of cross-modal sensory conflicts without affecting either sensory perception. Conversely, optogenetic activation of PV+ neurons in the PTLp enhances the auditory dominance. Thus, our results demonstrate that AC input-specific feedforward inhibition of VC inputs in the PTLp is responsible for the auditory dominance during cross-modal integration. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Absence of auditory 'global interference' in autism.

    Science.gov (United States)

    Foxton, Jessica M; Stewart, Mary E; Barnard, Louise; Rodgers, Jacqui; Young, Allan H; O'Brien, Gregory; Griffiths, Timothy D

    2003-12-01

    There has been considerable recent interest in the cognitive style of individuals with Autism Spectrum Disorder (ASD). One theory, that of weak central coherence, concerns an inability to combine stimulus details into a coherent whole. Here we test this theory in the case of sound patterns, using a new definition of the details (local structure) and the coherent whole (global structure). Thirteen individuals with a diagnosis of autism or Asperger's syndrome and 15 control participants were administered auditory tests, where they were required to match local pitch direction changes between two auditory sequences. When the other local features of the sequence pairs were altered (the actual pitches and relative time points of pitch direction change), the control participants obtained lower scores compared with when these details were left unchanged. This can be attributed to interference from the global structure, defined as the combination of the local auditory details. In contrast, the participants with ASD did not obtain lower scores in the presence of such mismatches. This was attributed to the absence of interference from an auditory coherent whole. The results are consistent with the presence of abnormal interactions between local and global auditory perception in ASD.

  8. Functional sex differences in human primary auditory cortex

    International Nuclear Information System (INIS)

    Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W.J.; Willemsen, Antoon T.M.

    2007-01-01

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  10. Web Intelligence and Artificial Intelligence in Education

    Science.gov (United States)

    Devedzic, Vladan

    2004-01-01

    This paper surveys important aspects of Web Intelligence (WI) in the context of Artificial Intelligence in Education (AIED) research. WI explores the fundamental roles as well as practical impacts of Artificial Intelligence (AI) and advanced Information Technology (IT) on the next generation of Web-related products, systems, services, and…

  11. Auditory development in early amplified children: factors influencing auditory-based communication outcomes in children with hearing loss.

    Science.gov (United States)

    Sininger, Yvonne S; Grimes, Alison; Christensen, Elizabeth

    2010-04-01

    The purpose of this study was to determine the influence of selected predictive factors, primarily age at fitting of amplification and degree of hearing loss, on auditory-based outcomes in young children with bilateral sensorineural hearing loss. Forty-four infants and toddlers, first identified with mild to profound bilateral hearing loss, who were being fitted with amplification were enrolled in the study and followed longitudinally. Subjects were otherwise typically developing with no evidence of cognitive, motor, or visual impairment. A variety of subject factors were measured or documented and used as predictor variables, including age at fitting of amplification, degree of hearing loss in the better hearing ear, cochlear implant status, intensity of oral education, parent-child interaction, and the number of languages spoken in the home. These factors were used in a linear multiple regression analysis to assess their contribution to auditory-based communication outcomes. Five outcome measures, evaluated at regular intervals in children starting at age 3, included measures of speech perception (Pediatric Speech Intelligibility and Online Imitative Test of Speech Pattern Contrast Perception), speech production (Arizona-3), and spoken language (Reynell Expressive and Receptive Language). The age at fitting of amplification ranged from 1 to 72 mo, and the degree of hearing loss ranged from mild to profound. Age at fitting of amplification showed the largest influence and was a significant factor in all outcome models. The degree of hearing loss was an important factor in the modeling of speech production and spoken language outcomes. Cochlear implant use was the other factor that contributed significantly to speech perception, speech production, and language outcomes. Other factors contributed sparsely to the models. Prospective longitudinal studies of children are important to establish relationships between subject factors and outcomes. This study clearly

  12. Artificial Intelligence and Moral Intelligence


    Directory of Open Access Journals (Sweden)

    Laura Pana

    2008-07-01

    Full Text Available We discuss the thesis that the implementation of a moral code in the behaviour of artificial intelligent systems needs a specific form of human and artificial intelligence, not just an abstract intelligence. We present intelligence as a system with an internal structure and the structural levels of the moral system, as well as certain characteristics of artificial intelligent agents which can/must be treated as (1) individual entities (with a complex, specialized, autonomous or self-determined, even unpredictable conduct), (2) entities endowed with diverse or even multiple intelligence forms, like moral intelligence, (3) open and even free-conduct performing systems (with specific, flexible and heuristic mechanisms and procedures of decision), (4) systems which are open to education, not just to instruction, (5) entities with “lifegraphy”, not just “stategraphy”, (6) entities equipped not just with automatisms but with beliefs (cognitive and affective complexes), (7) entities capable even of reflection (“moral life” is a form of spiritual, not just of conscious activity), (8) elements/members of some real (corporal or virtual) community, and (9) cultural beings: free conduct gives cultural value to the action of a “natural” or artificial being. Implementation of such characteristics does not necessarily suppose efforts to design, construct and educate machines like human beings. The human moral code is irremediably imperfect: it is a morality of preference, of accountability (not of responsibility), and a morality of non-liberty, which cannot be remedied by the invention of ethical systems, by the circulation of ideal values and by ethical (even computing) education. But such an imperfect morality needs perfect instruments for its implementation: applications of special logic fields; efficient psychological (theoretical and technical) attainments to endow the machine not just with intelligence, but with conscience and even spirit; comprehensive technical

  13. Auditory and visual spatial impression: Recent studies of three auditoria

    Science.gov (United States)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression-thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.

  14. Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

    Science.gov (United States)

    Jacks, Adam; Haley, Katarina L.

    2015-01-01

    Purpose: To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not…

  15. Intelligible Artificial Intelligence

    OpenAIRE

    Weld, Daniel S.; Bansal, Gagan

    2018-01-01

    Since Artificial Intelligence (AI) software uses techniques like deep lookahead search and stochastic optimization of huge neural networks to fit mammoth datasets, it often results in complex behavior that is difficult for people to understand. Yet organizations are deploying AI algorithms in many mission-critical settings. In order to trust their behavior, we must make it intelligible --- either by using inherently interpretable models or by developing methods for explaining otherwise overwh...

  16. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    Science.gov (United States)

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
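
    The τ variable named above is the ratio of an object's instantaneous optical size to its rate of expansion, which for a constant-velocity approach approximates time to contact. A minimal sketch, with a hypothetical approaching vehicle (the width, distance, and speed are made-up values, not the study's stimuli):

```python
import numpy as np

def visual_tau(theta: float, d_theta_dt: float) -> float:
    """First-order time-to-contact estimate from the optical angle theta
    (rad) subtended by an object and its instantaneous rate of expansion."""
    return theta / d_theta_dt

# Hypothetical approach: a 1.8 m wide vehicle at 30 m, closing at 15 m/s.
width, z, v = 1.8, 30.0, 15.0
theta = 2 * np.arctan(width / (2 * z))             # optical angle (rad)
d_theta_dt = width * v / (z**2 + (width / 2)**2)   # d(theta)/dt as z shrinks

print(visual_tau(theta, d_theta_dt))  # ~ z / v = 2 s for small angles
```

    The same ratio applied to sound intensity and its rate of change gives the auditory τ the study contrasts with the final-sound-pressure-level heuristic.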

  17. Psychometric intelligence and P3 of the event-related potentials studied with a 3-stimulus auditory oddball task

    NARCIS (Netherlands)

    Wronka, E.A.; Kaiser, J.; Coenen, A.M.L.

    2013-01-01

    The relationship between psychometric intelligence, measured with Raven's Advanced Progressive Matrices (RAPM), and event-related potentials (ERPs) was examined using a 3-stimulus oddball task. Subjects who scored higher on the RAPM exhibited a larger amplitude of the P3a component. Additional analysis using the

  18. Naturalist Intelligence Among the Other Multiple Intelligences [In Bulgarian]

    Directory of Open Access Journals (Sweden)

    R. Genkov

    2007-09-01

    Full Text Available The theory of multiple intelligences was presented by Gardner in 1983. The theory was later revised (1999), and among the other intelligences a naturalist intelligence was added. The criteria for distinguishing the different types of intelligences are considered. While Gardner restricted the analysis of naturalist intelligence to examples from living nature only, the present paper considers this problem against a wider background, including objects and persons of the natural sciences.

  19. Artificial intelligence

    CERN Document Server

    Hunt, Earl B

    1975-01-01

    Artificial Intelligence provides information pertinent to the fundamental aspects of artificial intelligence. This book presents the basic mathematical and computational approaches to problems in the artificial intelligence field.Organized into four parts encompassing 16 chapters, this book begins with an overview of the various fields of artificial intelligence. This text then attempts to connect artificial intelligence problems to some of the notions of computability and abstract computing devices. Other chapters consider the general notion of computability, with focus on the interaction bet

  20. Intelligence Ethics:

    DEFF Research Database (Denmark)

    Rønn, Kira Vrist

    2016-01-01

    Questions concerning what constitutes a morally justified conduct of intelligence activities have received increased attention in recent decades. However, intelligence ethics is not yet homogeneous or embedded as a solid research field. The aim of this article is to sketch the state of the art of intelligence ethics and point out subjects for further scrutiny in future research. The review clusters the literature on intelligence ethics into two groups: contributions on external topics (i.e., the accountability of and the public trust in intelligence agencies) and on internal topics (i.e., the search for an ideal ethical framework for intelligence actions). The article concludes that there are many holes to fill for future studies on intelligence ethics, in both external and internal discussions. Thus, the article is an invitation, especially to moral philosophers and political theorists...

  1. Applications of artificial intelligence in intelligent manufacturing: a review

    Institute of Scientific and Technical Information of China (English)

    #

    2017-01-01

    Based on research into the applications of artificial intelligence (AI) technology in the manufacturing industry in recent years, we analyze the rapid development of core technologies in the new era of 'Internet plus AI', which is triggering a great change in the models, means, and ecosystems of the manufacturing industry, as well as in the development of AI. We then propose new models, means, and forms of intelligent manufacturing, an intelligent manufacturing system architecture, and an intelligent manufacturing technology system, based on the integration of AI technology with information communications, manufacturing, and related product technology. Moreover, the current development in intelligent manufacturing is discussed from the perspectives of intelligent manufacturing application technology, industry, and application demonstration. Finally, suggestions for the application of AI in intelligent manufacturing in China are presented.

  2. Rapid estimation of high-parameter auditory-filter shapes

    Science.gov (United States)

    Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.

    2014-01-01

    A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086
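
    A filter shape commonly assumed in this literature is the rounded-exponential (roex) family. The sketch below is a generic asymmetric roex weighting with separate slope parameters for the two skirts; it illustrates the kind of shape being estimated, not the exact five-parameter model fitted by the qAF procedure, and the slope values are arbitrary.

```python
import numpy as np

def roex(g: np.ndarray, p_low: float, p_high: float) -> np.ndarray:
    """Rounded-exponential auditory-filter weighting.

    g: normalized deviation from the centre frequency, (f - fc) / fc.
    Separate slopes for the low- and high-frequency skirts allow the
    asymmetry about the peak described in the record.
    """
    p = np.where(g < 0, p_low, p_high)
    g_abs = np.abs(g)
    return (1 + p * g_abs) * np.exp(-p * g_abs)

g = np.linspace(-0.4, 0.4, 81)
w = roex(g, p_low=25.0, p_high=35.0)  # illustrative slopes, not fitted values
print(w.max())  # the weighting peaks at 1.0 at the filter centre (g = 0)
```

    With p_high > p_low the high-frequency skirt falls off faster, giving the asymmetric shape the qAF procedure is designed to recover efficiently.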

  3. IMPAIRED PROCESSING IN THE PRIMARY AUDITORY CORTEX OF AN ANIMAL MODEL OF AUTISM

    Directory of Open Access Journals (Sweden)

    Renata eAnomal

    2015-11-01

    Full Text Available Autism is a neurodevelopmental disorder clinically characterized by deficits in communication, lack of social interaction, and repetitive behaviors with restricted interests. A number of studies have reported that sensory perception abnormalities are common in autistic individuals and might contribute to the complex behavioral symptoms of the disorder. In this context, hearing incongruence is particularly prevalent. Considering that some of this abnormal processing might stem from an imbalance of inhibitory and excitatory drives in brain circuitries, we used an animal model of autism induced by valproic acid (VPA) exposure during pregnancy in order to investigate the tonotopic organization of the primary auditory cortex (AI) and its local inhibitory circuitry. Our results show that VPA rats have distorted primary auditory maps with over-representation of high frequencies, broadly tuned receptive fields, and higher sound intensity thresholds as compared to controls. However, we did not detect differences in the number of parvalbumin-positive interneurons in AI of VPA and control rats. Altogether, our findings show that neurophysiological impairments of hearing perception in this autism model occur independently of alterations in the number of parvalbumin-expressing interneurons. These data support the notion that fine circuit alterations, rather than gross cellular modification, could lead to neurophysiological changes in the autistic brain.

  4. Opposite brain laterality in analogous auditory and visual tests.

    Science.gov (United States)

    Oltedal, Leif; Hugdahl, Kenneth

    2017-11-01

    Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results when compared to VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities for the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment, confirmed the findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.

  5. Subcortical pathways: Towards a better understanding of auditory disorders.

    Science.gov (United States)

    Felix, Richard A; Gourévitch, Boris; Portfors, Christine V

    2018-05-01

    Hearing loss is a significant problem that affects at least 15% of the population. This percentage, however, is likely significantly higher because of a variety of auditory disorders that are not identifiable through traditional tests of peripheral hearing ability. In these disorders, individuals have difficulty understanding speech, particularly in noisy environments, even though the sounds are loud enough to hear. The underlying mechanisms leading to such deficits are not well understood. To enable the development of suitable treatments to alleviate or prevent such disorders, the affected processing pathways must be identified. Historically, mechanisms underlying speech processing have been thought to be a property of the auditory cortex and thus the study of auditory disorders has largely focused on cortical impairments and/or cognitive processes. As we review here, however, there is strong evidence to suggest that, in fact, deficits in subcortical pathways play a significant role in auditory disorders. In this review, we highlight the role of the auditory brainstem and midbrain in processing complex sounds and discuss how deficits in these regions may contribute to auditory dysfunction. We discuss current research with animal models of human hearing and then consider human studies that implicate impairments in subcortical processing that may contribute to auditory disorders. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Auditory and visual memory in musicians and nonmusicians

    OpenAIRE

    Cohen, Michael A.; Evans, Karla K.; Horowitz, Todd S.; Wolfe, Jeremy M.

    2011-01-01

    Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory ...

  7. Computer-based auditory phoneme discrimination training improves speech recognition in noise in experienced adult cochlear implant listeners.

    Science.gov (United States)

    Schumann, Annette; Serman, Maja; Gefeller, Olaf; Hoppe, Ulrich

    2015-03-01

    Specific computer-based auditory training may be a useful complement to the rehabilitation process for cochlear implant (CI) listeners working to achieve sufficient speech intelligibility. This study evaluated the effectiveness of a computerized, phoneme-discrimination training programme. The study employed a pretest-post-test design; participants were randomly assigned to the training or control group. Over a period of three weeks, the training group was instructed to train in phoneme discrimination via computer, twice a week. Sentence recognition in different noise conditions (moderate to difficult) was tested pre- and post-training, and six months after the training was completed. The control group was tested and retested within one month. Twenty-seven adult CI listeners who had been using cochlear implants for more than two years participated in the programme; 15 adults in the training group, 12 adults in the control group. Besides significant improvements for the trained phoneme-identification task, a generalized training effect was noted via significantly improved sentence recognition in moderate noise. No significant changes were noted in the difficult noise conditions. Improved performance was maintained over an extended period. Phoneme-discrimination training improves experienced CI listeners' speech perception in noise. Additional research is needed to optimize auditory training for individual benefit.

  8. Air pollution is associated with brainstem auditory nuclei pathology and delayed brainstem auditory evoked potentials.

    Science.gov (United States)

    Calderón-Garcidueñas, Lilian; D'Angiulli, Amedeo; Kulesza, Randy J; Torres-Jardón, Ricardo; Osnaya, Norma; Romero, Lina; Keefe, Sheyla; Herritt, Lou; Brooks, Diane M; Avila-Ramirez, Jose; Delgado-Chávez, Ricardo; Medina-Cortina, Humberto; González-González, Luis Oscar

    2011-06-01

    We assessed brainstem inflammation in children exposed to air pollutants by comparing brainstem auditory evoked potentials (BAEPs) and blood inflammatory markers in children aged 96.3±8.5 months from a highly polluted city (n=34) versus a city with low pollution (n=17). The brainstems of nine children with accidental deaths were also examined. Children from the highly polluted environment had significant delays in wave III (t(50)=17.038; p<0.0001) and wave V (t(50)=7.501; p<0.0001), consistent with delayed central conduction time of brainstem neural transmission. Highly exposed children showed significant evidence of inflammatory markers, and their auditory and vestibular nuclei accumulated α-synuclein and/or β-amyloid(1-42). Medial superior olive neurons, critically involved in BAEPs, displayed significant pathology. Children's exposure to urban air pollution increases their risk for auditory and vestibular impairment. Copyright © 2011 ISDN. Published by Elsevier Ltd. All rights reserved.
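
    The group comparison reported above is an independent-samples t-test on BAEP latencies. A minimal sketch of that statistic: the group sizes follow the record, but the latency values are made up for illustration and do not reproduce the reported t-values.

```python
import numpy as np

def t_independent(a: np.ndarray, b: np.ndarray) -> float:
    """Pooled-variance independent-samples t statistic."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

rng = np.random.default_rng(1)
# Hypothetical wave-III latencies in ms: 34 exposed vs 17 control children.
exposed = rng.normal(4.05, 0.10, 34)  # delayed latencies in the exposed group
control = rng.normal(3.85, 0.10, 17)

t = t_independent(exposed, control)
print(t)  # a large positive t indicates significantly delayed latencies
```

    With these group sizes the pooled test has 49 degrees of freedom; the record's t(50) suggests its exact grouping differed slightly.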

  9. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. 
Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids

  10. Auditory reafferences: The influence of real-time feedback on movement control

    Directory of Open Access Journals (Sweden)

    Christian eKennel

    2015-01-01

    Full Text Available Auditory reafferences are real-time auditory products created by a person’s own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with nonartificial auditory cues. Our results support the existing theoretical understanding of action–perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.

  11. Auditory reafferences: the influence of real-time feedback on movement control.

    Science.gov (United States)

    Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus

    2015-01-01

    Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined if step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in the motor learning processes.

  12. Middle components of the auditory evoked response in bilateral temporal lobe lesions. Report on a patient with auditory agnosia

    DEFF Research Database (Denmark)

    Parving, A; Salomon, G; Elberling, Claus

    1980-01-01

    An investigation of the middle components of the auditory evoked response (10--50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements...

  13. Auditory memory for temporal characteristics of sound.

    Science.gov (United States)

    Zokoll, Melanie A; Klump, Georg M; Langemann, Ulrike

    2008-05-01

    This study evaluates auditory memory for variations in the rate of sinusoidal amplitude modulation (SAM) of noise bursts in the European starling (Sturnus vulgaris). To estimate the extent of the starling's auditory short-term memory store, a delayed non-matching-to-sample paradigm was applied. The birds were trained to discriminate between a series of identical "sample stimuli" and a single "test stimulus". The birds classified SAM rates of sample and test stimuli as being either the same or different. Memory performance of the birds was measured as the percentage of correct classifications. Auditory memory persistence time was estimated as a function of the delay between sample and test stimuli. Memory performance was significantly affected by the delay between sample and test and by the number of sample stimuli presented before the test stimulus, but was not affected by the difference in SAM rate between sample and test stimuli. The individuals' auditory memory persistence times varied between 2 and 13 s. The starlings' auditory memory persistence in the present study for signals varying in the temporal domain was significantly shorter compared to that of a previous study (Zokoll et al. in J Acoust Soc Am 121:2842, 2007) applying tonal stimuli varying in the spectral domain.

  14. Effects of Caffeine on Auditory Brainstem Response

    Directory of Open Access Journals (Sweden)

    Saleheh Soleimanian

    2008-06-01

    Full Text Available Background and Aim: Blocking of adenosine receptors in the central nervous system by caffeine can increase the level of neurotransmitters such as glutamate. As adenosine receptors are present in almost all brain areas, including the central auditory pathway, caffeine may change conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2, and 3 mg/kg BW caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed with Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared with the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after consumption of 2 and 3 mg/kg BW caffeine. Wave I latency decreased significantly after consumption of 3 mg/kg BW caffeine (p<0.01). Conclusion: The increase in glutamate level resulting from adenosine receptor blocking brings about changes in conduction in the central auditory pathway.

  15. The eco-driving effect of electric vehicles compared to conventional gasoline vehicles

    Directory of Open Access Journals (Sweden)

    Hideki Kato

    2016-10-01

    Full Text Available Eco-driving is attractive to the public: not only users of internal-combustion-engine vehicles (ICEVs), including hybrid electric vehicles (HEVs), but also users of electric vehicles (EVs) have an interest in eco-driving. In this context, a quantitative evaluation of the eco-driving effect of EVs was conducted using a chassis dynamometer (C/D) with an “eco-driving test mode.” This mode comprised four speed patterns selected from fifty-two real-world driving datasets collected during an eco-driving test-ride event. The four patterns had the same travel distance (5.2 km) but showed varying eco-driving achievement levels. Three ICEVs, one HEV, and two EVs were tested using the C/D. Good linear relationships were found between the eco-driving achievement level and the electric or fuel consumption rate of all vehicles. The reduction of CO2 emissions was also estimated. The CO2-reduction rates of the four conventional (including hybrid) vehicles were 10.9%–12.6%, while those of the two types of EVs were 11.7%–18.4%. These results indicate that eco-driving tips for conventional vehicles are effective not only for ICEVs and HEVs but also for EVs. Furthermore, EVs have a higher potential for eco-driving effects than ICEVs and HEVs if EVs can maintain high energy conversion efficiency in the low load range. This study is intended to support the importance of disseminating tools like intelligent speed adaptation (ISA) to obey the regulation speed in real time. In the future, also in the development and dissemination of automated driving systems, the viewpoint of achieving the traveling purpose with less kinetic energy will be important.
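
    The CO2-reduction rates quoted in this record are percentage reductions in consumption relative to a baseline drive over the same route. A minimal sketch of that arithmetic; the consumption figures here are hypothetical, not the study's measurements.

```python
def co2_reduction_pct(baseline: float, eco: float) -> float:
    """Percentage reduction in energy consumption (and, proportionally,
    CO2 emissions) from eco-driving relative to a baseline drive."""
    return 100.0 * (baseline - eco) / baseline

# Hypothetical EV figures over the 5.2 km route: 0.90 kWh driven normally
# versus 0.76 kWh driven economically.
print(co2_reduction_pct(0.90, 0.76))  # a reduction in the range the record reports
```

    For an EV the reduction in grid CO2 scales with the electricity saved; for an ICEV the same formula applies to fuel consumption.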

  16. Auditory capture of visual motion: effects on perception and discrimination.

    Science.gov (United States)

    McCourt, Mark E; Leone, Lynnette M

    2016-09-28

    We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.
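
    The interaural time differences (ITDs) used to place the auditory stimuli in 3D space can be illustrated with Woodworth's spherical-head approximation. This is one standard textbook model, offered here only as an assumption; the record does not state which ITD formula the authors used.

```python
import numpy as np

def itd_woodworth(azimuth_rad: float, head_radius: float = 0.0875,
                  c: float = 343.0) -> float:
    """Interaural time difference (s) for a spherical-head model:
    ITD = (r / c) * (theta + sin(theta)), with azimuth theta in radians,
    head radius r in metres, and speed of sound c in m/s."""
    return (head_radius / c) * (azimuth_rad + np.sin(azimuth_rad))

# A source 90 degrees to one side gives the maximum ITD, roughly 0.66 ms.
print(itd_woodworth(np.pi / 2))
```

    Intensity differences and Doppler shifts, the other cues named in the record, would be layered on top of this timing cue in the full stimulus.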

  17. Strategy choice mediates the link between auditory processing and spelling.

    Science.gov (United States)

    Kwong, Tru E; Brachman, Kyle J

    2014-01-01

    Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities.

  18. Maintenance of auditory-nonverbal information in working memory.

    Science.gov (United States)

    Soemer, Alexander; Saito, Satoru

    2015-12-01

    According to the multicomponent view of working memory, both auditory-nonverbal information and auditory-verbal information are stored in a phonological code and are maintained by an articulation-based rehearsal mechanism (Baddeley, 2012). Two experiments have been carried out to investigate this hypothesis using sound materials that are difficult to label verbally and difficult to articulate. Participants were required to maintain 2 to 4 sounds differing in timbre over a delay of up to 12 seconds while performing different secondary tasks. While there was no convincing evidence for articulatory rehearsal as a main maintenance mechanism for auditory-nonverbal information, the results suggest that processes similar or identical to auditory imagery might contribute to maintenance. We discuss the implications of these results for multicomponent models of working memory.

  19. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  20. Magnetic resonance imaging of the internal auditory canal

    International Nuclear Information System (INIS)

    Daniels, D.L.; Herfkins, R.; Koehler, P.R.; Millen, S.J.; Shaffer, K.A.; Williams, A.L.; Haughton, V.M.

    1984-01-01

    Three patients with exclusively or predominantly intracanalicular neuromas and 5 with presumably normal internal auditory canals were examined with prototype 1.4- or 1.5-tesla magnetic resonance (MR) scanners. MR images showed the 7th and 8th cranial nerves in the internal auditory canal. The intracanalicular neuromas had larger diameters and slightly greater signal strength than the nerves. Early results suggest that minimal enlargement of the nerves can be detected even in the internal auditory canal.

  1. Assessing Auditory Processing Abilities in Typically Developing School-Aged Children.

    Science.gov (United States)

    McDermott, Erin E; Smart, Jennifer L; Boiano, Julie A; Bragg, Lisa E; Colon, Tiffany N; Hanson, Elizabeth M; Emanuel, Diana C; Kelly, Andrea S

    2016-02-01

    Large discrepancies exist in the literature regarding definition, diagnostic criteria, and appropriate assessment for auditory processing disorder (APD). Therefore, a battery of tests with normative data is needed. The purpose of this study is to collect normative data on a variety of tests for APD on children aged 7-12 yr, and to examine effects of outside factors on test performance. Children aged 7-12 yr with normal hearing, speech and language abilities, cognition, and attention were recruited for participation in this normative data collection. One hundred and forty-seven children were recruited using flyers and word of mouth. Of the participants recruited, 137 children qualified for the study. Participants attended schools located in areas that varied in terms of socioeconomic status, and resided in six different states. Audiological testing included a hearing screening (15 dB HL from 250 to 8000 Hz), word recognition testing, tympanometry, ipsilateral and contralateral reflexes, and transient-evoked otoacoustic emissions. The language, nonverbal IQ, phonological processing, and attention skills of each participant were screened using the Clinical Evaluation of Language Fundamentals-4 Screener, Test of Nonverbal Intelligence, Comprehensive Test of Phonological Processing, and Integrated Visual and Auditory-Continuous Performance Test, respectively. The behavioral APD battery included the following tests: Dichotic Digits Test, Frequency Pattern Test, Duration Pattern Test, Random Gap Detection Test, Compressed and Reverberated Words Test, Auditory Figure Ground (signal-to-noise ratio of +8 and +0), and Listening in Spatialized Noise-Sentences Test. Mean scores and standard deviations of each test were calculated, and analysis of variance tests were used to determine effects of factors such as gender, handedness, and birth history on each test. Normative data tables for the test battery were created for the following age groups: 7- and 8-yr-olds (n = 49), 9

  2. Social intelligence, human intelligence and niche construction.

    Science.gov (United States)

    Sterelny, Kim

    2007-04-29

    This paper is about the evolution of hominin intelligence. I agree with defenders of the social intelligence hypothesis in thinking that externalist models of hominin intelligence are not plausible: such models cannot explain the unique cognition and cooperation explosion in our lineage, for changes in the external environment (e.g. increasing environmental unpredictability) affect many lineages. Both the social intelligence hypothesis and the social intelligence-ecological complexity hybrid I outline here are niche construction models. Hominin evolution is hominin response to selective environments that earlier hominins have made. In contrast to social intelligence models, I argue that hominins have both created and responded to a unique foraging mode; a mode that is both social in itself and which has further effects on hominin social environments. In contrast to some social intelligence models, on this view, hominin encounters with their ecological environments continue to have profound selective effects. However, though the ecological environment selects, it does not select on its own. Accidents and their consequences, differential success and failure, result from the combination of the ecological environment an agent faces and the social features that enhance some opportunities and suppress others and that exacerbate some dangers and lessen others. Individuals do not face the ecological filters on their environment alone, but with others, and with the technology, information and misinformation that their social world provides.

  3. A basic study on universal design of auditory signals in automobiles.

    Science.gov (United States)

    Yamauchi, Katsuya; Choi, Jong-dae; Maiguma, Ryo; Takada, Masayuki; Iwamiya, Shin-ichiro

    2004-11-01

    In this paper, the impressions of various kinds of auditory signals currently used in automobiles were measured and comprehensively evaluated by a semantic differential method. The desirable acoustic characteristics were examined for each type of auditory signal. Sharp sounds with dominant high-frequency components were not suitable for auditory signals in automobiles. This trend is advantageous for aged listeners, whose auditory sensitivity in the high-frequency region is lower. When intermittent sounds were used, a longer OFF time was suitable. Generally, "dull (not sharp)" and "calm" sounds were appropriate for auditory signals. Furthermore, a comparison between the frequency spectrum of interior noise in automobiles and that of sounds suitable for the various auditory signals indicates that the suitable sounds are not easily masked. Selecting suitable auditory signals for the various purposes is thus a good solution from the viewpoint of universal design.

  4. The 20th annual intelligent ground vehicle competition: building a generation of robotists

    Science.gov (United States)

    Theisen, Bernard L.; Kosinski, Andrew

    2013-01-01

    The Intelligent Ground Vehicle Competition (IGVC) is one of four, unmanned systems, student competitions that were founded by the Association for Unmanned Vehicle Systems International (AUVSI). The IGVC is a multidisciplinary exercise in product realization that challenges college engineering student teams to integrate advanced control theory, machine vision, vehicular electronics and mobile platform fundamentals to design and build an unmanned system. Teams from around the world focus on developing a suite of dual-use technologies to equip ground vehicles of the future with intelligent driving capabilities. Over the past 20 years, the competition has challenged undergraduate, graduate and Ph.D. students with real world applications in intelligent transportation systems, the military and manufacturing automation. To date, teams from over 80 universities and colleges have participated. This paper describes some of the applications of the technologies required by this competition and discusses the educational benefits. The primary goal of the IGVC is to advance engineering education in intelligent vehicles and related technologies. The employment and professional networking opportunities created for students and industrial sponsors through a series of technical events over the four-day competition are highlighted. Finally, an assessment of the competition based on participation is presented.

  5. Trends in ambient intelligent systems the role of computational intelligence

    CERN Document Server

    Khan, Mohammad; Abraham, Ajith

    2016-01-01

    This book demonstrates the success of Ambient Intelligence in providing possible solutions for the daily needs of humans. The book addresses implications of ambient intelligence in areas of domestic living, elderly care, robotics, communication, philosophy and others. The objective of this edited volume is to show that Ambient Intelligence is a boon to humanity with conceptual, philosophical, methodical and applicative understanding. The book also aims to schematically demonstrate developments in the direction of augmented sensors, embedded systems and behavioral intelligence towards Ambient Intelligent Networks or Smart Living Technology. It contains chapters in the field of Ambient Intelligent Networks, which received highly positive feedback during the review process. The book contains research work, with in-depth state of the art from augmented sensors, embedded technology and artificial intelligence along with cutting-edge research and development of technologies and applications of Ambient Intelligent N...

  6. Presbycusis and auditory brainstem responses: a review

    Directory of Open Access Journals (Sweden)

    Shilpa Khullar

    2011-06-01

    Full Text Available Age-related hearing loss, or presbycusis, is a complex phenomenon consisting of elevation of hearing levels as well as changes in auditory processing. It is commonly classified into four categories depending on the cause. Auditory brainstem responses (ABRs) are a type of early evoked potential recorded within the first 10 ms of stimulation. They represent the synchronized activity of the auditory nerve and the brainstem. Some of the changes that occur in the aging auditory system may significantly influence the interpretation of ABRs in comparison with the ABRs of young adults. The waves of ABRs are described in terms of the amplitude, latencies and interpeak latencies of the different waves. There is a tendency for the amplitude to decrease and the absolute latencies to increase with advancing age, but these trends are not always clear owing to the increase in threshold with advancing age, which acts as a major confounding factor in the interpretation of ABRs.

  7. Acoustic richness modulates the neural networks supporting intelligible speech processing.

    Science.gov (United States)

    Lee, Yune-Sang; Min, Nam Eun; Wingfield, Arthur; Grossman, Murray; Peelle, Jonathan E

    2016-03-01

    The information contained in a sensory signal plays a critical role in determining what neural processes are engaged. Here we used interleaved silent steady-state (ISSS) functional magnetic resonance imaging (fMRI) to explore how human listeners cope with different degrees of acoustic richness during auditory sentence comprehension. Twenty-six healthy young adults underwent scanning while hearing sentences that varied in acoustic richness (high vs. low spectral detail) and syntactic complexity (subject-relative vs. object-relative center-embedded clause structures). We manipulated acoustic richness by presenting the stimuli as unprocessed full-spectrum speech, or noise-vocoded with 24 channels. Importantly, although the vocoded sentences were spectrally impoverished, all sentences were highly intelligible. These manipulations allowed us to test how intelligible speech processing was affected by orthogonal linguistic and acoustic demands. Acoustically rich speech showed stronger activation than acoustically less-detailed speech in a bilateral temporoparietal network with more pronounced activity in the right hemisphere. By contrast, listening to sentences with greater syntactic complexity resulted in increased activation of a left-lateralized network including left posterior lateral temporal cortex, left inferior frontal gyrus, and left dorsolateral prefrontal cortex. Significant interactions between acoustic richness and syntactic complexity occurred in left supramarginal gyrus, right superior temporal gyrus, and right inferior frontal gyrus, indicating that the regions recruited for syntactic challenge differed as a function of acoustic properties of the speech. Our findings suggest that the neural systems involved in speech perception are finely tuned to the type of information available, and that reducing the richness of the acoustic signal dramatically alters the brain's response to spoken language, even when intelligibility is high. 

  8. Distributed Model Predictive Control over Multiple Groups of Vehicles in Highway Intelligent Space for Large Scale System

    Directory of Open Access Journals (Sweden)

    Tang Xiaofeng

    2014-01-01

    Full Text Available The paper presents three time-based warning distances for the safe driving of multiple groups of vehicles in a highway tunnel environment, treated as a large-scale system and based on a distributed model predictive control approach. The system includes two parts. First, the vehicles are divided into multiple groups, and the distributed model predictive control approach is used to calculate the information framework of each group. Each group's optimization considers both its local objective and the objectives of neighboring subgroups, which ensures global optimization performance. Second, the three time warning distances are studied based on the basic principles of highway intelligent space (HIS), and the information framework concept is proposed for the multiple groups of vehicles. A mathematical model is built to avoid chain collisions between vehicles. The results demonstrate that the proposed highway intelligent space method can effectively ensure the driving safety of multiple groups of vehicles under fog, rain, or snow.

  9. Negative emotion provides cues for orienting auditory spatial attention

    Directory of Open Access Journals (Sweden)

    Erkin eAsutay

    2015-05-01

    Full Text Available Auditory stimuli provide information about the objects and events around us. They can also carry biologically significant emotional information (such as unseen dangers and conspecific vocalizations), which provides cues for the allocation of attention and mental resources. Here, we investigated whether task-irrelevant auditory emotional information can provide cues for the orientation of auditory spatial attention. We employed a covert spatial orienting task: the dot-probe task. In each trial, two task-irrelevant auditory cues were simultaneously presented at two separate locations (left-right or front-back). Environmental sounds were selected to form emotional vs. neutral, emotional vs. emotional, and neutral vs. neutral cue pairs. The participants' task was to detect the location of an acoustic target that was presented immediately after the task-irrelevant auditory cues. The target was presented at the same location as one of the auditory cues. The results indicated that participants were significantly faster to locate the target when it replaced the negative cue than when it replaced the neutral cue. The positive cues did not produce a clear attentional bias. Further, same-valence pairs (emotional-emotional or neutral-neutral) did not modulate reaction times, since neither cue in the pair captured spatial attention. Taken together, the results indicate that negative affect can provide cues for the orientation of spatial attention in the auditory domain.

  10. Examining explanations for fundamental frequency's contribution to speech intelligibility in noise

    Science.gov (United States)

    Schlauch, Robert S.; Miller, Sharon E.; Watson, Peter J.

    2005-09-01

    Laures and Weismer [JSLHR, 42, 1148 (1999)] reported that speech with natural variation in fundamental frequency (F0) is more intelligible in noise than speech with a flattened F0 contour. Cognitive-linguistic based explanations have been offered to account for this drop in intelligibility for the flattened condition, but a lower-level mechanism related to auditory streaming may be responsible. Numerous psychoacoustic studies have demonstrated that modulating a tone enables a listener to segregate it from background sounds. To test these rival hypotheses, speech recognition in noise was measured for sentences with six different F0 contours: unmodified, flattened at the mean, natural but exaggerated, reversed, and frequency modulated (rates of 2.5 and 5.0 Hz). The 180 stimulus sentences were produced by five talkers (30 sentences per condition). Speech recognition for fifteen listeners replicate earlier findings showing that flattening the F0 contour results in a roughly 10% reduction in recognition of key words compared with the natural condition. Although the exaggerated condition produced results comparable to those of the flattened condition, the other conditions with unnatural F0 contours all yielded significantly poorer performance than the flattened condition. These results support the cognitive, linguistic-based explanations for the reduction in performance.
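    The flattened-F0 condition described above amounts to replacing every voiced frame's F0 with the utterance mean. A minimal sketch of that manipulation on a per-frame contour, using the common convention that 0 Hz marks unvoiced frames (an assumption for illustration, not a detail from the study):

    ```python
    import numpy as np

    def flatten_f0(f0_contour):
        """Replace every voiced frame's F0 with the mean F0 of the voiced
        frames, leaving unvoiced frames (coded as 0.0 Hz) untouched."""
        f0 = np.asarray(f0_contour, dtype=float)
        voiced = f0 > 0
        flat = f0.copy()
        flat[voiced] = f0[voiced].mean()  # flatten the contour at its mean
        return flat

    # Toy contour: rising F0 with an unvoiced gap; voiced mean is 125 Hz
    contour = [0.0, 110.0, 120.0, 0.0, 130.0, 140.0]
    print(flatten_f0(contour))
    ```

    Resynthesizing speech with such a contour removes the natural F0 variation while preserving the mean, which is the manipulation whose intelligibility cost the study examines.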

  11. [Low level auditory skills compared to writing skills in school children attending third and fourth grade: evidence for the rapid auditory processing deficit theory?].

    Science.gov (United States)

    Ptok, M; Meisen, R

    2008-01-01

    The rapid auditory processing deficit theory holds that impaired reading/writing skills are not caused exclusively by a cognitive deficit specific to the representation and processing of speech sounds but arise from sensory, mainly auditory, deficits. To further explore this theory, we compared different measures of low-level auditory skills with writing skills in school children in a prospective study. Participants were school children attending third and fourth grade. Measures were just noticeable differences for intensity and frequency (JNDI, JNDF), gap detection (GD), and monaural and binaural temporal order judgement (TOJb and TOJm), together with grades in writing, language and mathematics; the data were examined by correlation analysis. No relevant correlation was found between any low-level auditory processing variable and writing skills. These data do not support the rapid auditory processing deficit theory.

  12. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  13. Bilateral duplication of the internal auditory canal

    International Nuclear Information System (INIS)

    Weon, Young Cheol; Kim, Jae Hyoung; Choi, Sung Kyu; Koo, Ja-Won

    2007-01-01

    Duplication of the internal auditory canal is an extremely rare temporal bone anomaly that is believed to result from aplasia or hypoplasia of the vestibulocochlear nerve. We report bilateral duplication of the internal auditory canal in a 28-month-old boy with developmental delay and sensorineural hearing loss. (orig.)

  14. Primary Auditory Cortex Regulates Threat Memory Specificity

    Science.gov (United States)

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  15. Auditory and communicative abilities in the auditory neuropathy spectrum disorder and mutation in the Otoferlin gene: clinical cases study.

    Science.gov (United States)

    Costa, Nayara Thais de Oliveira; Martinho-Carvalho, Ana Claudia; Cunha, Maria Claudia; Lewis, Doris Ruthi

    2012-01-01

    This study aimed to investigate the auditory and communicative abilities of children diagnosed with Auditory Neuropathy Spectrum Disorder due to mutation in the Otoferlin gene. This descriptive, qualitative study assessed two siblings with this diagnosis. The procedures were speech perception tests for children with profound hearing loss and assessment of communication abilities using the Behavioral Observation Protocol. Because they were siblings, the subjects shared a family and communicative context. However, they developed different communication abilities, especially regarding the use of oral language. The study showed that Auditory Neuropathy Spectrum Disorder is a heterogeneous condition in all its aspects, and it is not possible to make generalizations or assume that cases with similar clinical features will develop similar auditory and communicative abilities, even when they are siblings. It is concluded that the acquisition of communicative abilities involves subjective factors, which should be investigated based on the uniqueness of each case.

  16. Long-term pitch memory for music recordings is related to auditory working memory precision.

    Science.gov (United States)

    Van Hedger, Stephen C; Heald, Shannon Lm; Nusbaum, Howard C

    2018-04-01

    Most individuals have reliable long-term memories for the pitch of familiar music recordings. This pitch memory (1) appears to be normally distributed in the population, (2) does not depend on explicit musical training and (3) only seems to be weakly related to differences in listening frequency estimates. The present experiment was designed to assess whether individual differences in auditory working memory could explain variance in long-term pitch memory for music recordings. In Experiment 1, participants first completed a musical note adjustment task that has been previously used to assess working memory of musical pitch. Afterward, participants were asked to judge the pitch of well-known music recordings, which either had or had not been shifted in pitch. We found that performance on the pitch working memory task was significantly related to performance in the pitch memory task using well-known recordings, even when controlling for overall musical experience and familiarity with each recording. In Experiment 2, we replicated these findings in a separate group of participants while additionally controlling for fluid intelligence and non-pitch-based components of auditory working memory. In Experiment 3, we demonstrated that participants could not accurately judge the pitch of unfamiliar recordings, suggesting that our method of pitch shifting did not result in unwanted acoustic cues that could have aided participants in Experiments 1 and 2. These results, taken together, suggest that the ability to maintain pitch information in working memory might lead to more accurate long-term pitch memory.

  17. Reduced auditory processing capacity during vocalization in children with Selective Mutism.

    Science.gov (United States)

    Arie, Miri; Henkin, Yael; Lamy, Dominique; Tetin-Schneider, Simona; Apter, Alan; Sadeh, Avi; Bar-Haim, Yair

    2007-02-01

    Because abnormal Auditory Efferent Activity (AEA) is associated with auditory distortions during vocalization, we tested whether auditory processing is impaired during vocalization in children with Selective Mutism (SM). Participants were children with SM and abnormal AEA, children with SM and normal AEA, and normally speaking controls, who had to detect aurally presented target words embedded within word lists under two conditions: silence (single task), and while vocalizing (dual task). To ascertain specificity of auditory-vocal deficit, effects of concurrent vocalizing were also examined during a visual task. Children with SM and abnormal AEA showed impaired auditory processing during vocalization relative to children with SM and normal AEA, and relative to control children. This impairment is specific to the auditory modality and does not reflect difficulties in dual task per se. The data extends previous findings suggesting that deficient auditory processing is involved in speech selectivity in SM.

  18. A Case of Generalized Auditory Agnosia with Unilateral Subcortical Brain Lesion

    Science.gov (United States)

    Suh, Hyee; Kim, Soo Yeon; Kim, Sook Hee; Chang, Jae Hyeok; Shin, Yong Beom; Ko, Hyun-Yoon

    2012-01-01

    The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. Auditory agnosia is a deficit of auditory object processing, defined as an inability to recognize spoken language and/or nonverbal environmental sounds and music despite adequate hearing, while spontaneous speech, reading and writing are preserved. Usually, bilateral or unilateral temporal lobe lesions, especially of the transverse gyri, are responsible for auditory agnosia; subcortical lesions without cortical damage rarely cause it. We present a 73-year-old right-handed male with generalized auditory agnosia caused by a unilateral subcortical lesion. He was unable to repeat or take dictation, but his spontaneous speech was fluent and comprehensible. He could understand and read written words and phrases. His auditory brainstem evoked potentials and audiometry were intact. This case suggests that a subcortical lesion involving the unilateral acoustic radiation can cause generalized auditory agnosia. PMID:23342322

  19. Auditory motion in the sighted and blind: Early visual deprivation triggers a large-scale imbalance between auditory and "visual" brain regions.

    Science.gov (United States)

    Dormal, Giulia; Rezk, Mohamed; Yakobov, Esther; Lepore, Franco; Collignon, Olivier

    2016-07-01

    How early blindness reorganizes the brain circuitry that supports auditory motion processing remains controversial. We used fMRI to characterize brain responses to in-depth, laterally moving, and static sounds in early blind and sighted individuals. Whole-brain univariate analyses revealed that the right posterior middle temporal gyrus and superior occipital gyrus selectively responded to both in-depth and laterally moving sounds only in the blind. These regions overlapped with regions selective for visual motion (hMT+/V5 and V3A) that were independently localized in the sighted. In the early blind, the right planum temporale showed enhanced functional connectivity with right occipito-temporal regions during auditory motion processing and a concomitant reduced functional connectivity with parietal and frontal regions. Whole-brain searchlight multivariate analyses demonstrated higher auditory motion decoding in the right posterior middle temporal gyrus in the blind compared to the sighted, while decoding accuracy was enhanced in the auditory cortex bilaterally in the sighted compared to the blind. Analyses targeting individually defined visual area hMT+/V5 however indicated that auditory motion information could be reliably decoded within this area even in the sighted group. Taken together, the present findings demonstrate that early visual deprivation triggers a large-scale imbalance between auditory and "visual" brain regions that typically support the processing of motion information. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Using Facebook to Reach People Who Experience Auditory Hallucinations.

    Science.gov (United States)

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-06-14

    Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Recruitment netted N=264 total sample over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience auditory hallucinations. 
Women, people

  1. Using Facebook to Reach People Who Experience Auditory Hallucinations

    Science.gov (United States)

    Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging Web-based social media as a method of engaging people who experience auditory hallucinations and to evaluate their attitudes toward using social media platforms as a resource for Web-based support and technology-based treatment. Methods We used Facebook advertisements to recruit individuals who experience auditory hallucinations to complete an 18-item Web-based survey focused on issues related to auditory hallucinations and technology use in American adults. We systematically tested multiple elements of the advertisement and survey layout including image selection, survey pagination, question ordering, and advertising targeting strategy. Each element was evaluated sequentially and the most cost-effective strategy was implemented in the subsequent steps, eventually deriving an optimized approach. Three open-ended question responses were analyzed using conventional inductive content analysis. Coded responses were quantified into binary codes, and frequencies were then calculated. Results Recruitment netted N=264 total sample over a 6-week period. Ninety-seven participants fully completed all measures at a total cost of $8.14 per participant across testing phases. Systematic adjustments to advertisement design, survey layout, and targeting strategies improved data quality and cost efficiency. People were willing to provide information on what triggered their auditory hallucinations along with strategies they use to cope, as well as provide suggestions to others who experience

  2. Molecular approach of auditory neuropathy.

    Science.gov (United States)

    Silva, Magali Aparecida Orate Menezes da; Piatto, Vânia Belintani; Maniglia, Jose Victor

    2015-01-01

Mutations in the otoferlin gene are responsible for auditory neuropathy. To investigate the prevalence of mutations in the otoferlin gene in patients with and without auditory neuropathy. This original cross-sectional case study evaluated 16 index cases with auditory neuropathy, 13 patients with sensorineural hearing loss, and 20 normal-hearing subjects. DNA was extracted from peripheral blood leukocytes, and the otoferlin gene sites were amplified by polymerase chain reaction/restriction fragment length polymorphism. The 16 index cases included nine (56%) females and seven (44%) males. The 13 deaf patients comprised seven (54%) males and six (46%) females. Among the 20 normal-hearing subjects, 13 (65%) were males and seven (35%) were females. Thirteen (81%) index cases had the wild-type genotype (AA) and three (19%) had the heterozygous AG genotype for the IVS8-2A-G (intron 8) mutation. The 5473C-G (exon 44) mutation was found in a heterozygous state (CG) in seven (44%) index cases, and nine (56%) had the wild-type allele (CC). Of these mutants, two (25%) were compound heterozygotes for the mutations found in intron 8 and exon 44. None of the patients with sensorineural hearing loss or the normal-hearing individuals carried these mutations (100%). There are differences at the molecular level in patients with and without auditory neuropathy. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  3. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read, thus, these findings support the theory of temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Evolutionary conservation and neuronal mechanisms of auditory perceptual restoration.

    Science.gov (United States)

    Petkov, Christopher I; Sutter, Mitchell L

    2011-01-01

Auditory perceptual 'restoration' occurs when the auditory system restores an occluded or masked sound of interest. Behavioral work on auditory restoration in humans began over 50 years ago, using it to model noisy environmental scenes with competing sounds. It has become clear that humans are not alone in experiencing auditory restoration: the phenomenon is broadly conserved across many species. Behavioral studies in humans and animals provide a necessary foundation to link the insights being obtained from human EEG and fMRI to those from animal neurophysiology. The aggregate of data resulting from multiple approaches across species has begun to clarify the neuronal bases of auditory restoration. Different types of neural responses supporting restoration have been found, supporting the idea that multiple mechanisms operate within a species. Yet a general principle has emerged: responses correlated with restoration mimic the response that would have been given to the uninterrupted sound of interest. Using the same technology to study different species will help us to better harness animal models of 'auditory scene analysis' to clarify the conserved neural mechanisms shaping the perceptual organization of sound and to advance strategies to improve hearing in natural environmental settings. © 2010 Elsevier B.V. All rights reserved.

  5. Auditory and visual evoked potentials during hyperoxia

    Science.gov (United States)

    Smith, D. B. D.; Strawbridge, P. J.

    1974-01-01

    Experimental study of the auditory and visual averaged evoked potentials (AEPs) recorded during hyperoxia, and investigation of the effect of hyperoxia on the so-called contingent negative variation (CNV). No effect of hyperoxia was found on the auditory AEP, the visual AEP, or the CNV. Comparisons with previous studies are discussed.

  6. Evolving a rule system controller for automatic driving in a car racing competition

    OpenAIRE

    Pérez, Diego; Sáez Achaerandio, Yago; Recio Isasi, Gustavo; Isasi Viñuela, Pedro

    2008-01-01

IEEE Symposium on Computational Intelligence and Games. Perth, Australia, 15-18 December 2008. The techniques and technologies supporting Automatic Vehicle Guidance are important issues. Automobile manufacturers view automatic driving as a very interesting product with motivating key features that improve car safety, reduce emissions and fuel consumption, and optimize driver comfort during long journeys. Car racing is an active research field where new ...

  7. Auditory filters at low-frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik

    2009-01-01

    -ear transfer function), the asymmetry of the auditory filter changed from steeper high-frequency slopes at 1000 Hz to steeper low-frequency slopes below 100 Hz. Increasing steepness at low-frequencies of the middle-ear high-pass filter is thought to cause this effect. The dynamic range of the auditory filter...... was found to steadily decrease with decreasing center frequency. Although the observed decrease in filter bandwidth with decreasing center frequency was only approximately monotonic, the preliminary data indicates the filter bandwidth does not stabilize around 100 Hz, e.g. it still decreases below...

  8. Advanced intelligent systems

    CERN Document Server

    Ryoo, Young; Jang, Moon-soo; Bae, Young-Chul

    2014-01-01

    Intelligent systems have been initiated with the attempt to imitate the human brain. People wish to let machines perform intelligent works. Many techniques of intelligent systems are based on artificial intelligence. According to changing and novel requirements, the advanced intelligent systems cover a wide spectrum: big data processing, intelligent control, advanced robotics, artificial intelligence and machine learning. This book focuses on coordinating intelligent systems with highly integrated and foundationally functional components. The book consists of 19 contributions that features social network-based recommender systems, application of fuzzy enforcement, energy visualization, ultrasonic muscular thickness measurement, regional analysis and predictive modeling, analysis of 3D polygon data, blood pressure estimation system, fuzzy human model, fuzzy ultrasonic imaging method, ultrasonic mobile smart technology, pseudo-normal image synthesis, subspace classifier, mobile object tracking, standing-up moti...

  9. Left hemispheric dominance during auditory processing in a noisy environment

    Directory of Open Access Journals (Sweden)

    Ross Bernhard

    2007-11-01

Full Text Available Abstract Background In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by masker and amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsi-lateral and contra-lateral masking by means of magnetoencephalography (MEG). Results We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsi- and contra-lateral masking. Conclusion The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by masker and test stimulus in peripheral and central auditory systems. The inter-hemispheric differences of N1m decrements during ipsi- and contra-lateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.

  10. Acute auditory agnosia as the presenting hearing disorder in MELAS.

    Science.gov (United States)

    Miceli, Gabriele; Conti, Guido; Cianfoni, Alessandro; Di Giacopo, Raffaella; Zampetti, Patrizia; Servidei, Serenella

    2008-12-01

    MELAS is commonly associated with peripheral hearing loss. Auditory agnosia is a rare cortical auditory impairment, usually due to bilateral temporal damage. We document, for the first time, auditory agnosia as the presenting hearing disorder in MELAS. A young woman with MELAS (A3243G mtDNA mutation) suffered from acute cortical hearing damage following a single stroke-like episode, in the absence of previous hearing deficits. Audiometric testing showed marked central hearing impairment and very mild sensorineural hearing loss. MRI documented bilateral, acute lesions to superior temporal regions. Neuropsychological tests demonstrated auditory agnosia without aphasia. Our data and a review of published reports show that cortical auditory disorders are relatively frequent in MELAS, probably due to the strikingly high incidence of bilateral and symmetric damage following stroke-like episodes. Acute auditory agnosia can be the presenting hearing deficit in MELAS and, conversely, MELAS should be suspected in young adults with sudden hearing loss.

  11. Using Facebook to Reach People Who Experience Auditory Hallucinations

    OpenAIRE

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging...

  12. Cooperative Intelligence in Roundabout Intersections Using Hierarchical Fuzzy Behavior Calculation of Vehicle Speed Profile

    Directory of Open Access Journals (Sweden)

    Bosankić Ivan

    2016-01-01

Full Text Available In this paper, a new fuzzy-behavior-based algorithm for roundabout intersection management is presented. The algorithm employs cooperative intelligence and includes intelligent vehicles and infrastructure to calculate speed profiles for different vehicles, in order to achieve more comfortable driving profiles, as well as to reduce congestion and CO2 emissions. The algorithm uses an adaptive spatio-temporal reservation technique and was tested in the MATLAB/Simulink environment. The algorithm is designed to function in different scenarios with both cooperative and non-cooperative vehicles, as well as optional intersection infrastructure. Results have shown that, using the proposed algorithm, different vehicle communication types can be successfully combined in order to increase traffic flow through roundabout intersections.

  13. Auditory driving of the autonomic nervous system: Listening to theta-frequency binaural beats post-exercise increases parasympathetic activation and sympathetic withdrawal

    OpenAIRE

    Patrick eMcConnell; Patrick eMcConnell; Brett eFroeliger; Eric L. Garland; Jeffrey C. Ives; Gary A. Sforzo

    2014-01-01

    Binaural beats are an auditory illusion perceived when two or more pure tones of similar frequencies are presented dichotically through stereo headphones. Although this phenomenon is thought to facilitate state changes (e.g., relaxation), few empirical studies have reported on whether binaural beats produce changes in autonomic arousal. Therefore, the present study investigated the effects of binaural beating on autonomic dynamics (heart-rate variability (HRV)) during post-exercise relaxation...

  14. Auditory driving of the autonomic nervous system: Listening to theta-frequency binaural beats post-exercise increases parasympathetic activation and sympathetic withdrawal

    OpenAIRE

    McConnell, Patrick A.; Froeliger, Brett; Garland, Eric L.; Ives, Jeffrey C.; Sforzo, Gary A.

    2014-01-01

    Binaural beats are an auditory illusion perceived when two or more pure tones of similar frequencies are presented dichotically through stereo headphones. Although this phenomenon is thought to facilitate state changes (e.g., relaxation), few empirical studies have reported on whether binaural beats produce changes in autonomic arousal. Therefore, the present study investigated the effects of binaural beating on autonomic dynamics [heart rate variability (HRV)] during post-exercise relaxation...
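The dichotic construction described above is simple to sketch: two pure tones whose frequencies differ by the desired beat rate, one per stereo channel. A minimal illustration follows; the carrier and beat frequencies are assumptions for the example, not the stimuli used in these studies:

```python
import math

def binaural_beat(carrier_hz=250.0, beat_hz=7.0, seconds=1.0, rate=44100):
    """Return (left, right) sample lists for a theta-rate binaural beat.

    The left ear receives carrier_hz and the right ear carrier_hz + beat_hz;
    the listener perceives an illusory beat at beat_hz (7 Hz falls in theta).
    """
    n = int(seconds * rate)
    left = [math.sin(2 * math.pi * carrier_hz * t / rate) for t in range(n)]
    right = [math.sin(2 * math.pi * (carrier_hz + beat_hz) * t / rate)
             for t in range(n)]
    return left, right

# 0.1 s of a 7 Hz beat on a 250 Hz carrier (illustrative values).
left, right = binaural_beat(seconds=0.1)
```

Because the beat arises from interaural comparison rather than acoustic interaction, the two channels must be delivered dichotically (stereo headphones), as the abstract notes.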

  15. Modification of sudden onset auditory ERP by involuntary attention to visual stimuli.

    Science.gov (United States)

    Oray, Serkan; Lu, Zhong-Lin; Dawson, Michael E

    2002-03-01

To investigate the cross-modal nature of the exogenous attention system, we studied how involuntary attention in the visual modality affects ERPs elicited by sudden onset of events in the auditory modality. Relatively loud auditory white noise bursts were presented to subjects with random and long inter-trial intervals. The noise bursts were either presented alone, or paired with a visual stimulus with a visual-to-auditory onset asynchrony of 120 ms. In a third condition, the visual stimuli were shown alone. All three conditions, auditory alone, visual alone, and paired visual/auditory, were randomly inter-mixed and presented with equal probabilities. Subjects were instructed to fixate on a point in front of them without task instructions concerning either the auditory or visual stimuli. ERPs were recorded from 28 scalp sites throughout every experimental session. Compared to ERPs in the auditory alone condition, pairing the auditory noise bursts with the visual stimulus reduced the amplitude of the auditory N100 component at Cz by 40% and the auditory P200/P300 component at Cz by 25%. No significant topographical change was observed in the scalp distributions of the N100 and P200/P300. Our results suggest that involuntary attention to visual stimuli suppresses early sensory (N100) as well as late cognitive (P200/P300) processing of sudden auditory events. The activation of the exogenous attention system by sudden auditory onset can be modified by involuntary visual attention in a cross-modal, passive prepulse inhibition paradigm.

  16. Neuronal activity in primate auditory cortex during the performance of audiovisual tasks.

    Science.gov (United States)

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2015-03-01

This study aimed at a deeper understanding of which cognitive and motivational aspects of tasks affect auditory cortical activity. To this end we trained two macaque monkeys to perform two different tasks on the same audiovisual stimulus and to do this with two different sizes of water rewards. The monkeys had to touch a bar after a tone had been turned on together with an LED, and to hold the bar until either the tone (auditory task) or the LED (visual task) was turned off. In 399 multiunits recorded from core fields of auditory cortex we confirmed that during task engagement neurons responded to auditory and non-auditory stimuli that were task-relevant, such as light and water. We also confirmed that firing rates slowly increased or decreased for several seconds during various phases of the tasks. Responses to non-auditory stimuli and slow firing changes were observed during both the auditory and the visual task, with some differences between them. There was also a weak task-dependent modulation of the responses to auditory stimuli. In contrast to these cognitive aspects, motivational aspects of the tasks were not reflected in the firing, except during delivery of the water reward. In conclusion, the present study supports our previous proposal that there are two response types in the auditory cortex that represent the timing and the type of auditory and non-auditory elements of auditory tasks, as well as the association between elements. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  17. Further Evidence of Auditory Extinction in Aphasia

    Science.gov (United States)

    Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim

    2013-01-01

Purpose: Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (Mage = 53.19 years)…

  18. Competitive Intelligence.

    Science.gov (United States)

    Bergeron, Pierrette; Hiller, Christine A.

    2002-01-01

    Reviews the evolution of competitive intelligence since 1994, including terminology and definitions and analytical techniques. Addresses the issue of ethics; explores how information technology supports the competitive intelligence process; and discusses education and training opportunities for competitive intelligence, including core competencies…

  19. Effect of conductive hearing loss on central auditory function.

    Science.gov (United States)

    Bayat, Arash; Farhadi, Mohammad; Emamdjomeh, Hesam; Saki, Nader; Mirmomeni, Golshan; Rahim, Fakher

It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals, or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities such as speech comprehension or sound localization. Little is known about the consequences of CHL on temporal aspects of central auditory processing. This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (control), aged between 18 and 45 years, were recruited. In order to evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. The average of GIN thresholds was significantly smaller for the control group than for the CHL group for both ears (right: p=0.004; left: p<0.05), and performance was poorer in the CHL group on both sides (p<0.05), with no significant effect of degree of hearing loss in either group (p>0.05). The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal-hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
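The stimulus logic behind a gaps-in-noise measure can be sketched as follows. This is an illustrative generator only, not the calibrated clinical GIN test; all parameter values (duration, rate, gap position) are assumptions for the example:

```python
import random

def gin_segment(gap_ms, seconds=0.5, rate=8000, gap_at=0.25, seed=0):
    """White-noise segment containing one silent gap of gap_ms milliseconds.

    In a GIN-style procedure, listeners report whether they heard the gap;
    the shortest reliably detected gap is the gap perception threshold.
    """
    rng = random.Random(seed)
    n = int(seconds * rate)
    start = int(gap_at * rate)               # gap onset, in samples
    stop = start + int(gap_ms / 1000 * rate)  # gap offset, in samples
    return [0.0 if start <= i < stop else rng.uniform(-1.0, 1.0)
            for i in range(n)]

segment = gin_segment(gap_ms=10)  # 10 ms silent gap embedded in 0.5 s of noise
```

Varying `gap_ms` across trials and scoring correct detections yields the threshold and percent-correct measures analyzed in the study.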

  20. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    Science.gov (United States)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  1. Comparison of auditory and visual oddball fMRI in schizophrenia.

    Science.gov (United States)

    Collier, Azurii K; Wolf, Daniel H; Valdez, Jeffrey N; Turetsky, Bruce I; Elliott, Mark A; Gur, Raquel E; Gur, Ruben C

    2014-09-01

    Individuals with schizophrenia often suffer from attentional deficits, both in focusing on task-relevant targets and in inhibiting responses to distractors. Schizophrenia also has a differential impact on attention depending on modality: auditory or visual. However, it remains unclear how abnormal activation of attentional circuitry differs between auditory and visual modalities, as these two modalities have not been directly compared in the same individuals with schizophrenia. We utilized event-related functional magnetic resonance imaging (fMRI) to compare patterns of brain activation during an auditory and visual oddball task in order to identify modality-specific attentional impairment. Healthy controls (n=22) and patients with schizophrenia (n=20) completed auditory and visual oddball tasks in separate sessions. For responses to targets, the auditory modality yielded greater activation than the visual modality (A-V) in auditory cortex, insula, and parietal operculum, but visual activation was greater than auditory (V-A) in visual cortex. For responses to novels, A-V differences were found in auditory cortex, insula, and supramarginal gyrus; and V-A differences in the visual cortex, inferior temporal gyrus, and superior parietal lobule. Group differences in modality-specific activation were found only for novel stimuli; controls showed larger A-V differences than patients in prefrontal cortex and the putamen. Furthermore, for patients, greater severity of negative symptoms was associated with greater divergence of A-V novel activation in the visual cortex. Our results demonstrate that patients have more pronounced activation abnormalities in auditory compared to visual attention, and link modality specific abnormalities to negative symptom severity. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Auditory and visual sustained attention in Down syndrome.

    Science.gov (United States)

    Faught, Gayle G; Conners, Frances A; Himmelberger, Zachary M

    2016-01-01

    Sustained attention (SA) is important to task performance and development of higher functions. It emerges as a separable component of attention during preschool and shows incremental improvements during this stage of development. The current study investigated if auditory and visual SA match developmental level or are particular challenges for youth with DS. Further, we sought to determine if there were modality effects in SA that could predict those seen in short-term memory (STM). We compared youth with DS to typically developing youth matched for nonverbal mental age and receptive vocabulary. Groups completed auditory and visual sustained attention to response tests (SARTs) and STM tasks. Results indicated groups performed similarly on both SARTs, even over varying cognitive ability. Further, within groups participants performed similarly on auditory and visual SARTs, thus SA could not predict modality effects in STM. However, SA did generally predict a significant portion of unique variance in groups' STM. Ultimately, results suggested both auditory and visual SA match developmental level in DS. Further, SA generally predicts STM, though SA does not necessarily predict the pattern of poor auditory relative to visual STM characteristic of DS. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Increased Early Processing of Task-Irrelevant Auditory Stimuli in Older Adults.

    Directory of Open Access Journals (Sweden)

    Erich S Tusch

Full Text Available The inhibitory deficit hypothesis of cognitive aging posits that older adults' inability to adequately suppress processing of irrelevant information is a major source of cognitive decline. Prior research has demonstrated that in response to task-irrelevant auditory stimuli there is an age-associated increase in the amplitude of the N1 wave, an ERP marker of early perceptual processing. Here, we tested predictions derived from the inhibitory deficit hypothesis that the age-related increase in N1 would be (1) observed under an auditory-ignore, but not an auditory-attend, condition, (2) attenuated in individuals with high executive capacity (EC), and (3) augmented by increasing the cognitive load of the primary visual task. ERPs were measured in 114 well-matched young, middle-aged, young-old, and old-old adults, designated as having high or average EC based on neuropsychological testing. Under the auditory-ignore (visual-attend) task, participants ignored auditory stimuli and responded to rare target letters under low and high load. Under the auditory-attend task, participants ignored visual stimuli and responded to rare target tones. Results confirmed an age-associated increase in N1 amplitude to auditory stimuli under the auditory-ignore but not the auditory-attend task. Contrary to predictions, EC did not modulate the N1 response. The load effect was the opposite of expectation: the N1 to task-irrelevant auditory events was smaller under high load. Finally, older adults did not simply fail to suppress the N1 to auditory stimuli in the task-irrelevant modality; they generated a larger response than to identical stimuli in the task-relevant modality. In summary, several of the study's findings do not fit the inhibitory-deficit hypothesis of cognitive aging, which may need to be refined or supplemented by alternative accounts.

  4. Effects on driving performance of interacting with an in-vehicle music player: a comparison of three interface layout concepts for information presentation.

    Science.gov (United States)

    Mitsopoulos-Rubens, Eve; Trotter, Margaret J; Lenné, Michael G

    2011-05-01

    Interface design is an important factor in assessing the potential effects on safety of interacting with an in-vehicle information system while driving. In the current study, the layout of information on a visual display was manipulated to explore its effect on driving performance in the context of music selection. The comparative effects of an auditory-verbal (cognitive) task were also explored. The driving performance of 30 participants was assessed under both baseline and dual task conditions using the Lane Change Test. Concurrent completion of the music selection task with driving resulted in significant impairment to lateral driving performance (mean lane deviation and percentage of correct lane changes) relative to the baseline, and significantly greater mean lane deviation relative to the combined driving and the cognitive task condition. The magnitude of these effects on driving performance was independent of layout concept, although significant differences in subjective workload estimates and performance on the music selection task across layout concepts highlights that potential uncertainty regarding design use as conveyed through layout concept could be disadvantageous. The implications of these results for interface design and safety are discussed. Copyright © 2010 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  5. Task-irrelevant auditory feedback facilitates motor performance in musicians

    Directory of Open Access Journals (Sweden)

    Virginia eConde

    2012-05-01

    Full Text Available An efficient and fast auditory–motor network is a basic resource for trained musicians, given the importance of motor anticipation of sound production in musical performance. When playing an instrument, motor performance always goes along with the production of sounds, and the integration of the two modalities plays an essential role over the course of musical training. The aim of the present study was to investigate the role of task-irrelevant auditory feedback during motor performance in musicians using a serial reaction time task (SRTT). Our hypothesis was that musicians, owing to their extensive auditory–motor practice routine during musical training, would show superior performance and learning when receiving auditory feedback during the SRTT relative to musicians performing the SRTT without any auditory feedback. Here we provide novel evidence that task-irrelevant auditory feedback is capable of reinforcing SRTT performance but not learning, a finding that might provide further insight into auditory–motor integration in musicians at the behavioral level.

  6. Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement.

    Science.gov (United States)

    Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria

    2016-03-01

    Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activity. Neurofeedback (NFB) using an auditory stimulus as reinforcer has proven to be a useful tool for treating LD children by positively reinforcing decreases in the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (eyes open) versus auditory (eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, in which a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer whenever the value of the theta/alpha ratio was reduced. Both groups showed signs consistent with EEG maturation, but only the Auditory group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved cognitive abilities more than the visual reinforcer did.
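The reinforcement rule this abstract describes — reward the child whenever the theta/alpha power ratio drops — can be sketched in a few lines. The band edges, window length, sampling rate, and threshold below are conventional assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean periodogram power of `signal` within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def should_reinforce(eeg_window, fs, threshold):
    """Deliver the reinforcer when theta/alpha falls below `threshold`.

    Theta = 4-8 Hz, alpha = 8-13 Hz (conventional band edges; the study's
    exact bands and thresholding rule are not given in the abstract).
    """
    theta = band_power(eeg_window, fs, 4.0, 8.0)
    alpha = band_power(eeg_window, fs, 8.0, 13.0)
    return (theta / alpha) < threshold

# Synthetic check: an alpha-dominated window should trigger reinforcement,
# a theta-dominated window should not.
fs = 250
t = np.arange(fs) / fs  # one-second analysis window
alpha_rich = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 6 * t)
theta_rich = np.sin(2 * np.pi * 6 * t) + 0.2 * np.sin(2 * np.pi * 10 * t)
print(should_reinforce(alpha_rich, fs, threshold=1.0))  # True
print(should_reinforce(theta_rich, fs, threshold=1.0))  # False
```

In a real protocol the threshold would be set from the child's own baseline recording rather than fixed at 1.0, and a smoothed PSD estimate (e.g. Welch's method) would replace the raw periodogram.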

  7. [Chinese medicine industry 4.0:advancing digital pharmaceutical manufacture toward intelligent pharmaceutical manufacture].

    Science.gov (United States)

    Cheng, Yi-Yu; Qu, Hai-Bin; Zhang, Bo-Li

    2016-01-01

    A prospective analysis of technological innovation in the pharmaceutical engineering of Chinese medicine unveils a vision of the "Future Factory" of the Chinese medicine industry. The strategy and technical roadmap of "Chinese medicine industry 4.0" are proposed, together with a projection of the related core technology system, and the technical development path of the Chinese medicine industry from digital manufacture to intelligent manufacture is clarified. On the basis of precisely defining technical terms such as process control, on-line detection and process quality monitoring for Chinese medicine manufacture, the technical concepts and characteristics of intelligent and digital pharmaceutical manufacture are elaborated. Wide application of digital manufacturing technology for Chinese medicine is strongly recommended. Through fully informationized manufacturing processes and multi-disciplinary cluster innovation, intelligent manufacturing technology for Chinese medicine should be developed, providing a new driving force for the Chinese medicine industry in technology upgrading, product quality enhancement and efficiency improvement. Copyright© by the Chinese Pharmaceutical Association.

  8. Motor Training: Comparison of Visual and Auditory Coded Proprioceptive Cues

    Directory of Open Access Journals (Sweden)

    Philip Jepson

    2012-05-01

    Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the use of vision and of proprioceptive information derived from muscles and joints. Disruption of these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through the use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 'training' steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration, corresponding to training) with all feedback removed. Visual cues yielded a mean percentage error of 11.5% (SD 7.0%); auditory cues, 12.9% (SD 11.8%). Visual cues elicit a high degree of accuracy both in training and in follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, their mean accuracy approached that for visual cues, and initial results suggest that a limited amount of practice with auditory cues can improve performance.
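The accuracy measure reported here is straightforward arithmetic: each reproduced step is scored as its absolute deviation from the target, expressed as a percentage of that target, then averaged. A minimal sketch with made-up step-length numbers (not the study's data):

```python
def percentage_errors(targets, reproduced):
    """Absolute error of each reproduced step, as a percentage of its target."""
    return [abs(r - t) / t * 100.0 for t, r in zip(targets, reproduced)]

def mean_sd(values):
    """Mean and sample standard deviation (n-1 denominator)."""
    m = sum(values) / len(values)
    sd = (sum((v - m) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return m, sd

# Hypothetical follow-up trials: a 75 cm target step and four reproductions.
targets = [75.0, 75.0, 75.0, 75.0]
steps = [82.0, 70.0, 78.0, 68.0]
errs = percentage_errors(targets, steps)
m, sd = mean_sd(errs)
print(round(m, 1), round(sd, 1))  # 7.3 2.6
```

The same computation applied to step durations instead of lengths gives the auditory-condition score.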

  9. Impact of Educational Level on Performance on Auditory Processing Tests.

    Science.gov (United States)

    Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane

    2016-01-01

    Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was correlated, suggesting that the influence of educational level on auditory processing performance may be associated with the cognitive demand of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.

  10. Normal time course of auditory recognition in schizophrenia, despite impaired precision of the auditory sensory ("echoic") memory code.

    Science.gov (United States)

    March, L; Cienfuegos, A; Goldbloom, L; Ritter, W; Cowan, N; Javitt, D C

    1999-02-01

    Prior studies have demonstrated impaired precision of processing within the auditory sensory memory (ASM) system in schizophrenia. This study used auditory backward masking to evaluate the degree to which such deficits resulted from impaired overall precision versus premature decay of information within the short-term auditory store. ASM performance was evaluated in 14 schizophrenic participants and 16 controls. Schizophrenic participants were severely impaired in their ability to match tones following delay. However, when no-mask performance was equated across participants, schizophrenic participants were no more susceptible to the effects of backward maskers than were controls. Thus, despite impaired precision of ASM performance, schizophrenic participants showed no deficits in the time course over which short-term representations could be used within the ASM system.

  11. A Time-Frequency Auditory Model Using Wavelet Packets

    DEFF Research Database (Denmark)

    Agerkvist, Finn

    1996-01-01

    A time-frequency auditory model is presented. The model uses the wavelet packet analysis as the preprocessor. The auditory filters are modelled by the rounded exponential filters, and the excitation is smoothed by a window function. By comparing time-frequency excitation patterns it is shown that the change in the time-frequency excitation pattern introduced when a test tone at masked threshold is added to the masker is approximately equal to 7 dB for all types of maskers. The classic detection ratio therefore overrates the detection efficiency of the auditory system.

  12. Auditory agnosia as a clinical symptom of childhood adrenoleukodystrophy.

    Science.gov (United States)

    Furushima, Wakana; Kaga, Makiko; Nakamura, Masako; Gunji, Atsuko; Inagaki, Masumi

    2015-08-01

    To investigate detailed auditory features in patients with auditory impairment as the first clinical symptom of childhood adrenoleukodystrophy (CSALD), three patients who had hearing difficulty as the first clinical sign and/or symptom of ALD were studied. The clinical characteristics of hearing and auditory function were examined in detail, including pure tone audiometry, verbal sound discrimination, otoacoustic emission (OAE) and auditory brainstem response (ABR) assessments, as well as an environmental sound discrimination test, a sound lateralization test, and a dichotic listening test (DLT). The auditory pathway was evaluated by MRI in each patient. Poor response to calling was detected in all patients. Two patients were not aware of their hearing difficulty and had at first been diagnosed with normal hearing by otolaryngologists. Pure-tone audiometry disclosed normal hearing in all patients, and all showed a normal wave V ABR threshold. All three patients showed obvious difficulty in discriminating verbal sounds and environmental sounds and in sound lateralization, and strong left-ear suppression in the dichotic listening test; however, once they discriminated verbal sounds, they correctly understood their meaning. Two patients showed prolongation of the I-V and III-V interwave intervals in ABR, but one showed no abnormality. MRIs of all three patients revealed signal changes in the auditory radiation as well as in other subcortical areas. The hearing features of these subjects were diagnosed as auditory agnosia, not aphasia. It should be emphasized that when patients are suspected to have hearing impairment but show no abnormalities in pure tone audiometry and/or ABR, this should not immediately be diagnosed as a psychogenic response or pathomimesis; auditory agnosia must also be considered. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  13. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp III, Albert Louis [Univ. of California, Davis, CA (United States)]

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made by all currently running applications. The notion of multimodal objects is explored in this system as well: each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System's scheduling algorithm selects the best representation according to the current global auditory system state and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores: high scores represent presentations that include perceptual conflicts between overlapping sounds, while low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by
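The evaluate-and-score loop this abstract describes can be sketched in miniature. The dissertation's heuristics weigh many perceptual factors (stream segregation, timbre, priority, latency); the toy version below uses pairwise temporal overlap as the only conflict measure, and all names and numbers are illustrative, not taken from the dissertation:

```python
def overlap(a, b):
    """Seconds during which messages a and b (start, duration) sound together."""
    (start_a, dur_a), (start_b, dur_b) = a, b
    return max(0.0, min(start_a + dur_a, start_b + dur_b) - max(start_a, start_b))

def penalty(schedule):
    """Total pairwise overlap across the schedule; a proxy for perceptual conflict."""
    return sum(overlap(schedule[i], schedule[j])
               for i in range(len(schedule))
               for j in range(i + 1, len(schedule)))

def best_schedule(candidates):
    """Pick the candidate schedule with the lowest penalty score."""
    return min(candidates, key=penalty)

# Three one-second messages under three candidate start-time layouts.
candidates = [
    [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)],   # fully overlapped
    [(0.0, 1.0), (0.5, 1.0), (1.0, 1.0)],   # staggered
    [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)],   # fully sequential
]
print([penalty(c) for c in candidates])  # [3.0, 1.0, 0.0]
print(best_schedule(candidates))
```

A real scheduler would also honor each request's maximum acceptable latency, so "fully sequential" is not always admissible; that is where the penalty trade-off between conflict and delay becomes interesting.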

  14. [Communication and auditory behavior obtained by auditory evoked potentials in mammals, birds, amphibians, and reptiles].

    Science.gov (United States)

    Arch-Tirado, Emilio; Collado-Corona, Miguel Angel; Morales-Martínez, José de Jesús

    2004-01-01

    amphibians, Rana catesbeiana (bullfrog, 30 animals); reptiles, Sceloporus torquatus (common small lizard, 22 animals); birds, Columba livia (common dove, 20 animals); and mammals, Cavia porcellus (guinea pig, 20 animals). Regarding housing, all animals were kept at the Institute of Human Communication Disorders, were fed food appropriate to each species, and had water available ad libitum. For the auditory brainstem evoked potential recordings, amphibians, birds, and mammals were anesthetized with injected ketamine at 20, 25, and 50 mg/kg, respectively; reptiles were anesthetized by cooling (6 °C). Needle electrodes were placed on an imaginary line along the mid-sagittal line between the ears and eyes, behind the right ear, and behind the left ear. Stimulation was delivered in a quiet room through a loudspeaker in free field. The signal was filtered between 100 and 3,000 Hz and analyzed with an evoked-potential computer (Racia APE 78). The evoked responses of the amphibians showed greater latency than those of the other species; in reptiles, latency was reduced in comparison with amphibians; birds showed the smallest latency values. In guinea pigs, latencies were greater than in doves, but guinea pigs responded to stimulation at 10 dB, the best auditory threshold among the four species studied. Finally, it was corroborated that the auditory threshold of each species decreases with advancement along the phylogenetic scale. From these recordings, brainstem evoked responses became more complex and absolute latencies shorter along the phylogenetic scale; thus, the observed auditory thresholds among the studied species accord with their phylogenetic order. These data indicate that the processing of auditory information is more complex in more

  15. Neural oscillations in auditory working memory

    OpenAIRE

    Wilsch, A.

    2015-01-01

    The present thesis investigated memory load and memory decay in auditory working memory. Alpha power as a marker for memory load served as the primary indicator for load and decay fluctuations hypothetically reflecting functional inhibition of irrelevant information. Memory load was induced by presenting auditory signals (syllables and pure-tone sequences) in noise because speech-in-noise has been shown before to increase memory load. The aim of the thesis was to assess with magnetoencephalog...

  16. Changes in the Adult Vertebrate Auditory Sensory Epithelium After Trauma

    Science.gov (United States)

    Oesterle, Elizabeth C.

    2012-01-01

    Auditory hair cells transduce sound vibrations into membrane potential changes, ultimately leading to changes in neuronal firing and sound perception. This review provides an overview of the characteristics and repair capabilities of traumatized auditory sensory epithelium in the adult vertebrate ear. Injured mammalian auditory epithelium repairs itself by forming permanent scars but is unable to regenerate replacement hair cells. In contrast, injured non-mammalian vertebrate ear generates replacement hair cells to restore hearing functions. Non-sensory support cells within the auditory epithelium play key roles in the repair processes. PMID:23178236

  17. Missing a trick: Auditory load modulates conscious awareness in audition.

    Science.gov (United States)

    Fairnie, Jake; Moore, Brian C J; Remington, Anna

    2016-07-01

    In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. No counterpart of visual perceptual echoes in the auditory system.

    Directory of Open Access Journals (Sweden)

    Barkın İlhan

    Full Text Available It has been previously demonstrated by our group that a visual stimulus made of dynamically changing luminance evokes an echo, or reverberation, at ~10 Hz, lasting up to a second. In this study we aimed to reveal whether similar echoes also exist in the auditory modality. A dynamically changing auditory stimulus equivalent to the visual stimulus was designed and employed in two separate series of experiments, and the presence of reverberations was analyzed based on reverse correlations between stimulus sequences and EEG epochs. The first experiment directly compared visual and auditory stimuli: while previous findings of ~10 Hz visual echoes were verified, no similar echo was found in the auditory modality regardless of frequency. In the second experiment, we tested whether auditory sequences would influence the visual echoes when they were congruent or incongruent with the visual sequences. However, the results in that case similarly did not reveal any auditory echoes, nor any change in the characteristics of visual echoes as a function of audio-visual congruence. The negative findings from these experiments suggest that brain oscillations do not equivalently affect early sensory processes in the visual and auditory modalities, and that alpha (8-13 Hz) oscillations play a special role in vision.

  19. The Study of Frequency Self Care Strategies against Auditory Hallucinations

    Directory of Open Access Journals (Sweden)

    Mahin Nadem

    2012-03-01

    Full Text Available Background: In schizophrenic clients, self-care strategies against auditory hallucinations can decrease the disturbances resulting from hallucination. This study aimed to assess the frequency of self-care strategies against auditory hallucinations in paranoid schizophrenic patients hospitalized in Shafa Hospital. Materials and Method: This was a descriptive study of 201 patients with paranoid schizophrenia hospitalized in a psychiatry unit in Rasht, recruited by convenience sampling. The gathered data consisted of two parts: first, demographic characteristics, and second, a 38-item self-report questionnaire on self-care strategies. Results: There were statistically significant relationships between demographic variables and the knowledge and use of self-care strategies against auditory hallucinations: sex with the physical domain (p0.07), marital status with the cognitive domain (p>0.07), and living status with the behavioural domain (p>0.01). Of the reported auditory hallucinations, 53.2% were command hallucinations; furthermore, the most effective self-care strategies against auditory hallucinations were from the physical domain, and substance abuse (82.1%) was the most effective strategy in this domain. Conclusion: Clients with paranoid schizophrenia used strategies from the physical domain more than others against auditory hallucinations, and this result highlights their need for appropriate nursing intervention. Instruction and guidance on selecting effective self-care strategies against auditory hallucinations

  20. Contribution à la commande d'un train de véhicules intelligents [Contribution to the control of an intelligent vehicle platoon]

    OpenAIRE

    Zhao , Jin

    2010-01-01

    This PhD thesis is dedicated to control strategies for intelligent vehicle platoons on highways, with the main aims of alleviating traffic congestion and improving traffic safety. After a review of the different existing automated driving systems, the vehicle longitudinal and lateral dynamic models are derived. The longitudinal and lateral control strategies are then studied respectively. First, the longitudinal control system is designed to be hierarchical, with an upper-level co...