van Dantzig, Saskia; Pecher, Diane; Zeelenberg, Rene; Barsalou, Lawrence W.
According to the Perceptual Symbols Theory of cognition (Barsalou, 1999), modality-specific simulations underlie the representation of concepts. A strong prediction of this view is that perceptual processing affects conceptual processing. In this study, participants performed a perceptual detection task and a conceptual property-verification task…
Grosvald, Michael; Gutierrez, Eva; Hafer, Sarah; Corina, David
A fundamental advance in our understanding of human language would come from a detailed account of how non-linguistic and linguistic manual actions are differentiated in real time by language users. To explore this issue, we targeted the N400, an ERP component known to be sensitive to semantic context. Deaf signers saw 120 American Sign Language sentences, each consisting of a "frame" (a sentence without the last word; e.g. BOY SLEEP IN HIS) followed by a "last item" belonging to one of four categories: a high-close-probability sign (a "semantically reasonable" completion to the sentence; e.g. BED), a low-close-probability sign (a real sign that is nonetheless a "semantically odd" completion to the sentence; e.g. LEMON), a pseudo-sign (phonologically legal but non-lexical form), or a non-linguistic grooming gesture (e.g. the performer scratching her face). We found significant N400-like responses in the incongruent and pseudo-sign contexts, while the gestures elicited a large positivity. Copyright © 2012 Elsevier Inc. All rights reserved.
Baumgaertner, Annette; Hartwigsen, Gesa; Roman Siebner, Hartwig
Verbal stimuli often induce right-hemispheric activation in patients with aphasia after left-hemispheric stroke. This right-hemispheric activation is commonly attributed to functional reorganization within the language system. Yet previous evidence suggests that functional activation in right-hemispheric homologues of classic left-hemispheric language areas may partly be due to processing nonlinguistic perceptual features of verbal stimuli. We used functional MRI (fMRI) to clarify the role of the right hemisphere in the perception of nonlinguistic word features in healthy individuals. Participants made perceptual, semantic, or phonological decisions on the same set of auditorily and visually presented word stimuli. Perceptual decisions required judgements about stimulus-inherent changes in font size (visual modality) or fundamental frequency contour (auditory modality). The semantic judgement required subjects to decide whether a stimulus is natural or man-made; the phonologic decision required a decision on whether a stimulus contains two or three syllables. Compared to phonologic or semantic decision, nonlinguistic perceptual decisions resulted in a stronger right-hemispheric activation. Specifically, the right inferior frontal gyrus (IFG), an area previously suggested to support language recovery after left-hemispheric stroke, displayed modality-independent activation during perceptual processing of word stimuli. Our findings indicate that activation of the right hemisphere during language tasks may, in some instances, be driven by a "nonlinguistic perceptual processing" mode that focuses on nonlinguistic word features. This raises the possibility that stronger activation of right inferior frontal areas during language tasks in aphasic patients with left-hemispheric stroke may at least partially reflect increased attentional focus on nonlinguistic perceptual aspects of language. Copyright © 2012 Wiley Periodicals, Inc.
Yoshida, Katherine A; Iversen, John R; Patel, Aniruddh D; Mazuka, Reiko; Nito, Hiromi; Gervain, Judit; Werker, Janet F
Perceptual grouping has traditionally been thought to be governed by innate, universal principles. However, recent work has found differences in Japanese and English speakers' non-linguistic perceptual grouping, implicating language in non-linguistic perceptual processes (Iversen, Patel, & Ohgushi, 2008). Two experiments test Japanese- and English-learning infants of 5-6 and 7-8 months of age to explore the development of grouping preferences. At 5-6 months, neither the Japanese nor the English infants revealed any systematic perceptual biases. However, by 7-8 months, the same age as when linguistic phrasal grouping develops, infants developed non-linguistic grouping preferences consistent with their language's structure (and the grouping biases found in adulthood). These results reveal an early difference in non-linguistic perception between infants growing up in different language environments. The possibility that infants' linguistic phrasal grouping is bootstrapped by abstract perceptual principles is discussed. Copyright 2010 Elsevier B.V. All rights reserved.
Ghirardi, Gian Carlo; Romano, Raffaele
Theories including a collapse mechanism were proposed several years ago. They are based on a modification of standard quantum mechanics in which nonlinear and stochastic terms are added to the evolution equation. Their principal merit is that they are mathematically precise schemes accounting, on the basis of a single universal dynamical principle, for the quantum behavior of microscopic systems, for the reduction associated with measurement processes, and for the classical behavior of macroscopic objects. Since such theories qualify not as new interpretations but as modifications of the standard theory, they can in principle be tested against quantum mechanics. Recently, various investigations identifying possible crucial tests have been discussed. In spite of the extreme difficulty of performing such tests, recent technological developments seem to allow at least precise limits to be put on the parameters characterizing the modifications of the evolution equation. Here we briefly mention some recent investigations in this direction, while concentrating mainly on the way in which collapse theories account for definite perceptual processes. The differences between reductions induced by perceptions and those related to measurement procedures involving standard macroscopic devices are discussed. On this basis, we suggest a precise experimental test of collapse theories involving conscious observers. By discussing a toy model in detail, we make it plausible that the modified dynamics can give rise to quite small but systematic errors in the visual perceptual process.
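For orientation, the modified evolution equation that such collapse models postulate can be written schematically in a CSL-like form (the choice of operators and parameters varies from model to model; this sketch is standard textbook material, not taken from the abstract above):

```latex
d\lvert\psi_t\rangle = \left[ -\frac{i}{\hbar}\hat{H}\,dt
  + \sqrt{\lambda}\sum_i \left(\hat{A}_i - \langle\hat{A}_i\rangle_t\right) dW_{i,t}
  - \frac{\lambda}{2}\sum_i \left(\hat{A}_i - \langle\hat{A}_i\rangle_t\right)^{2} dt \right] \lvert\psi_t\rangle
```

Here \(\hat{H}\) is the ordinary Hamiltonian, the \(\hat{A}_i\) are (typically position-like) localization operators, \(\langle\hat{A}_i\rangle_t = \langle\psi_t\rvert\hat{A}_i\lvert\psi_t\rangle\) makes the equation nonlinear, and the \(dW_{i,t}\) are independent Wiener increments supplying the stochastic term. It is the extra \(\lambda\)-dependent terms that make such models empirically distinguishable, at least in principle, from standard quantum mechanics.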
Louwerse, Max; Hutchinson, Sterling
There is increasing evidence from response time experiments that language statistics and perceptual simulations both play a role in conceptual processing. In an EEG experiment we compared neural activity in cortical regions commonly associated with linguistic processing and visual perceptual processing to determine to what extent symbolic and embodied accounts of cognition applied. Participants were asked to determine the semantic relationship of word pairs (e.g., sky - ground) or to determine their iconic relationship (i.e., if the presentation of the pair matched their expected physical relationship). A linguistic bias was found toward the semantic judgment task and a perceptual bias was found toward the iconicity judgment task. More importantly, conceptual processing involved activation in brain regions associated with both linguistic and perceptual processes. When comparing the relative activation of linguistic cortical regions with perceptual cortical regions, the effect sizes for linguistic cortical regions were larger than those for the perceptual cortical regions early in a trial, with the reverse being true later in a trial. These results map onto findings from other experimental literature and provide further evidence that processing of concept words relies both on language statistics and on perceptual simulations, whereby linguistic processes precede perceptual simulation processes.
Lalonde, Kaylah; Holt, Rachael Frush
This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.
Kopparapu, Sunil Kumar
The book focuses on the parts of an audio conversation not related to language, such as speaking rate (in terms of number of syllables per unit time) and emotion-centric features. This text examines using non-linguistic features to infer information from phone calls to call centers. The author analyzes 'how' the conversation happens rather than 'what' the conversation is about, through audio signal processing and analysis.
Quiroga Martinez, David Ricardo; Hansen, Niels Christian; Højlund, Andreas
…play a fundamental role in music perception. The mismatch negativity (MMN) is a brain response that offers a unique insight into these processes. The MMN is elicited by deviants in a series of repetitive sounds and reflects the perception of change in physical and abstract sound regularities. Therefore, it is regarded as a prediction error signal and a neural correlate of the updating of predictive perceptual models. In music, the MMN has been particularly valuable for the assessment of musical expectations, learning and expertise. However, the MMN paradigm has an important limitation: its ecological validity. … To this aim we will develop a new paradigm using more real-sounding stimuli. Our stimuli will be two-part music excerpts made by adding a melody to a previous design based on the Alberti bass (Vuust et al., 2011). Our second goal is to determine how the complexity of this context affects the predictive…
Quiroga Martinez, David Ricardo; Hansen, Niels Christian; Højlund, Andreas
The mismatch negativity (MMN) is a brain response elicited by deviants in a series of repetitive sounds. It reflects the perception of change in low-level sound features and reliably measures perceptual auditory memory. However, most MMN studies use simple tone patterns as stimuli, failing...
Klemen, J; Büchel, C; Rose, M
According to perceptual load theory, processing of task-irrelevant stimuli is limited by the perceptual load of a parallel attended task if both the task and the irrelevant stimuli are presented to the same sensory modality. However, it remains a matter of debate whether the same principles apply to cross-sensory perceptual load and, more generally, what form cross-sensory attentional modulation in early perceptual areas takes in humans. Here we addressed these questions using functional magnetic resonance imaging. Participants undertook an auditory one-back working memory task of low or high perceptual load, while concurrently viewing task-irrelevant images at one of three object visibility levels. The processing of the visual and auditory stimuli was measured in the lateral occipital cortex (LOC) and auditory cortex (AC), respectively. Cross-sensory interference with sensory processing was observed in both the LOC and AC, in accordance with previous results of unisensory perceptual load studies. The present neuroimaging results therefore warrant the extension of perceptual load theory from a unisensory to a cross-sensory context: a validation of this cross-sensory interference effect through behavioural measures would consolidate the findings.
Baumann, Oliver; Borra, Ronald J; Bower, James M; Cullen, Kathleen E; Habas, Christophe; Ivry, Richard B; Leggio, Maria; Mattingley, Jason B; Molinari, Marco; Moulton, Eric A; Paulin, Michael G; Pavlova, Marina A; Schmahmann, Jeremy D; Sokolov, Arseny A
Various lines of evidence accumulated over the past 30 years indicate that the cerebellum, long recognized as essential for motor control, also has considerable influence on perceptual processes. In this paper, we bring together experts from psychology and neuroscience, with the aim of providing a succinct but comprehensive overview of key findings related to the involvement of the cerebellum in sensory perception. The contributions cover such topics as anatomical and functional connectivity, evolutionary and comparative perspectives, visual and auditory processing, biological motion perception, nociception, self-motion, timing, predictive processing, and perceptual sequencing. While no single explanation has yet emerged concerning the role of the cerebellum in perceptual processes, this consensus paper summarizes the impressive empirical evidence on this problem and highlights diversities as well as commonalities between existing hypotheses. In addition to work with healthy individuals and patients with cerebellar disorders, it is also apparent that several neurological conditions in which perceptual disturbances occur, including autism and schizophrenia, are associated with cerebellar pathology. A better understanding of the involvement of the cerebellum in perceptual processes will thus likely be important for identifying and treating perceptual deficits that may at present go unnoticed and untreated. This paper provides a useful framework for further debate and empirical investigations into the influence of the cerebellum on sensory perception.
Cosman, Joshua D; Vecera, Shaun P
Specialized, bimodal neural systems integrate visual and tactile information in the space near the hand. Here, we show that visuo-tactile representations allow attention to influence early perceptual processing, namely, figure-ground assignment. Regions that were reached toward were more likely than other regions to be assigned as foreground figures, and hand position competed with image-based information to bias figure-ground assignment. Our findings suggest that hand position allows attention to influence visual perceptual processing and that visual processes typically viewed as unimodal can be influenced by bimodal visuo-tactile representations.
The focus of the study is on the knowledge that is constructed through perceptual processes during craft making in the context of the Finnish Basic Education in the Arts (BEA) system. Craft studies in the BEA are defined as craft-art. The research method used is grounded theory. The data consist of seven interviews and participant observations. Participants in the study are adolescents who study craft-art in the BEA system at the Visual Art School Aimo in Hämeenlinna. The aim of the article is to present, define and reflect on the concepts, properties and dimensions concerning perceptual processes that were discovered at this stage of the study following grounded theory procedures. The perceptual processes are an essential means of constructing knowledge in craft-art. Consequently, one aim of the study is to discuss how these processes are connected to various types of knowledge. The perceptual processes are described by seven concepts: imitative, anticipative, evaluative, experimental, emotional, temporal and bodily perceptions. They indicate on a conceptual level the characteristics of knowledge constructed through perceptual processes in craft-art. Further, the concepts have several properties that can vary dimensionally between two qualities. The properties are activity, function and position. The dimensions of the properties vary from active to passive, formal to informal and internal to external. In conclusion, the concepts can describe a large range of incidents in different situations. They also seem to describe well the practice of craft-art, and there are several connections with pre-existing concepts of knowledge.
Keywords: Craft, Knowledge, Perceptual process, Basic Education in the Arts, Grounded Theory
Liu, Ping; Forte, Jason; Sewell, David; Carter, Olivia
Contrast-based early visual processing has largely been considered to involve autonomous processes that do not need the support of cognitive resources. However, as spatial attention is known to modulate early visual perceptual processing, we explored whether cognitive load could similarly impact contrast-based perception. We used a dual-task paradigm to assess the impact of a concurrent working memory task on the performance of three different early visual tasks. The results from Experiment 1 suggest that cognitive load can modulate early visual processing. No effects of cognitive load were seen in Experiments 2 or 3. Together, the findings provide evidence that under some circumstances cognitive load effects can penetrate the early stages of visual processing and that higher cognitive function and early perceptual processing may not be as independent as was once thought.
Greenberg, Steven; Christiansen, Thomas Ulrich
How does the brain process spoken language? It is our thesis that word intelligibility and consonant identification are insufficient by themselves to model how the speech signal is decoded; a finer-grained approach is required. In this study, listeners identified 11 different Danish consonants spoken in a Consonant + Vowel + [l] environment. Each syllable was processed so that only a portion of the original audio spectrum was present. Three-quarter-octave bands of speech, centered at 750, 1500, and 3000 Hz, were presented individually and in combination with each other. The conditional… This asymmetric pattern of feature decoding may provide extra-segmental information of utility for speech processing, particularly in adverse listening conditions.
Mostert, Pim; Kok, Peter; de Lange, Floris P
A key question within systems neuroscience is how the brain translates physical stimulation into a behavioral response: perceptual decision making. To answer this question, it is important to dissociate the neural activity underlying the encoding of sensory information from the activity underlying the subsequent temporal integration into a decision variable. Here, we adopted a decoding approach to empirically assess this dissociation in human magnetoencephalography recordings. We used a functional localizer to identify the neural signature that reflects sensory-specific processes, and subsequently traced this signature while subjects were engaged in a perceptual decision making task. Our results revealed a temporal dissociation in which sensory processing was limited to an early time window and consistent with occipital areas, whereas decision-related processing became increasingly pronounced over time, and involved parietal and frontal areas. We found that the sensory processing accurately reflected the physical stimulus, irrespective of the eventual decision. Moreover, the sensory representation was stable and maintained over time when it was required for a subsequent decision, but unstable and variable over time when it was task-irrelevant. In contrast, decision-related activity displayed long-lasting sustained components. Together, our approach dissects neuro-anatomically and functionally distinct contributions to perceptual decisions.
PMID: 26666393
In 3 experiments, the effects of perceptual manipulations on recollective experience were tested. In Experiment 1, a picture-superiority effect was obtained for overall recognition and Remember judgements in a picture recognition task. In Experiment 2, size changes of pictorial stimuli across study and test reduced recognition memory and Remember judgements. In Experiment 3, deleterious effects of changes in left-right orientation of pictorial stimuli across study and test were obtained for Remember judgements. An alternate framework that emphasizes a distinctiveness-fluency processing distinction is proposed to account for these findings because they cannot easily be accommodated within the existing account of differences in conceptual and perceptual processing for the 2 categories of recollective experience: Remembering and Knowing, respectively (J. M. Gardiner, 1988; S. Rajaram, 1993).
Mevorach, Carmel; Tsal, Yehoshua; Humphreys, Glyn W
According to perceptual load theory (Lavie, 2005), distractor interference is determined by the availability of attentional resources. If target processing does not exhaust resources (with low perceptual load), distractor processing will take place, resulting in interference with a primary task; however, when target processing uses up attentional capacity (with high perceptual load), interference can be avoided. An alternative account (Tsal and Benoni, 2010a) suggests that perceptual load effects can be based on distractor dilution by the mere presence of additional neutral items in high-load displays, so that the effect is not driven by the amount of attentional resources required for target processing. Here we tested whether patients with unilateral neglect or extinction would show dilution effects from neutral items in their contralesional (neglected/extinguished) field, even though these items do not impose increased perceptual load on the target and at the same time attract reduced attentional resources compared to stimuli in the ipsilesional field. Thus, such items do not affect the amount of attentional resources available for distractor processing. We found that contralesional neutral elements can eliminate distractor interference as strongly as centrally presented ones in neglect/extinction patients, despite contralesional items being less well attended. The data are consistent with an account in terms of perceptual dilution of distractors rather than available resources for distractor processing. We conclude that distractor dilution can underlie the elimination of distractor interference in visual displays.
Sand, Anders; Wiens, Stefan
As researchers debate whether emotional pictures can be processed irrespective of spatial attention and perceptual load, negative and neutral pictures of simple figure-ground composition were shown at fixation and were surrounded by one, two, or three letters. When participants performed a picture discrimination task, there was evidence for motivated attention; that is, an early posterior negativity (EPN) and late positive potential (LPP) to negative versus neutral pictures. When participants performed a letter discrimination task, the EPN was unaffected whereas the LPP was reduced. Although performance decreased substantially with the number of letters (one to three), the LPP did not decrease further. Therefore, attention to simple, negative pictures at fixation seems to resist manipulations of perceptual load.
Jabar, Syaheed B; Filipowicz, Alex; Anderson, Britt
Probable stimuli are more often and more quickly detected. While stimulus probability is known to affect decision-making, it can also be explained as a perceptual phenomenon. Using spatial gratings, we have previously shown that probable orientations are also more precisely estimated, even while participants remained naive to the manipulation. We conducted an electrophysiological study to investigate the effect that probability has on perception and visual-evoked potentials. In line with previous studies on oddballs and stimulus prevalence, low-probability orientations were associated with a greater late positive 'P300' component which might be related to either surprise or decision-making. However, the early 'C1' component, thought to reflect V1 processing, was dampened for high-probability orientations while later P1 and N1 components were unaffected. Exploratory analyses revealed a participant-level correlation between C1 and P300 amplitudes, suggesting a link between perceptual processing and decision-making. We discuss how these probability effects could be indicative of sharpening of neurons preferring the probable orientations, due either to perceptual learning, or to feature-based attention. Copyright © 2017 Elsevier Ltd. All rights reserved.
Evgenia Gennadyevna Surovyatkina
The goal of the work was to determine the linkage between the dominant hemisphere of the brain and the character of the perceptual processes of students of the University of the Ministry of Internal Affairs of Russia. Research on the relationship between the characteristics of perceptual processes and the lateralization of brain functions supplements information about the professional suitability and reliability of employees of law-enforcement structures within the individual-typological approach. An experimental psychological study of motor and sensory asymmetries in the "hand-foot-ear-eye" measurement system (following E. D. Khomskaya) revealed a leading auditory channel of perception for people with left-hemispheric dominance, and a leading kinesthetic channel for people with right-hemispheric dominance. It is recommended that the features of functioning of the "FMPA-perception" system in groups with different types of hemispheric dominance be considered in the academic and professional activities of the cadets, and at the stage of professional selection.
An attractor, in complex systems theory, is any state that is more easily or more often entered or acquired than departed or lost; attractor states therefore accumulate more members than non-attractors, other things being equal. In the context of language evolution, linguistic attractors include sounds, forms, and grammatical structures that are prone to be selected when sociolinguistics and language contact make it possible for speakers to choose between competing forms. The reasons why an element is an attractor are linguistic (auditory salience, ease of processing, paradigm structure, etc.), but the factors that make selection possible and propagate selected items through the speech community are non-linguistic. This paper uses the consonants in personal pronouns to show what makes for an attractor and how selection and diffusion work, then presents a survey of several language families and areas showing that the derivational morphology of pairs of verbs like fear and frighten, or Turkish korkmak 'fear, be afraid' and korkutmak 'frighten, scare', or Finnish istua 'sit' and istutta 'seat (someone)', or Spanish sentarse 'sit down' and sentar 'seat (someone)' is susceptible to selection. Specifically, the Turkish and Finnish pattern, where 'seat' is derived from 'sit' by addition of a suffix, is an attractor and a favored target of selection. This selection occurs chiefly in sociolinguistic contexts of what is defined here as linguistic symbiosis, where languages mingle in speech, which in turn is favored by certain demographic, sociocultural, and environmental factors here termed frontier conditions. Evidence is surveyed from northern Eurasia, the Caucasus, North and Central America, and the Pacific and from both modern and ancient languages to raise the hypothesis that frontier conditions and symbiosis favor causativization.
Molnar, Monika; Carreiras, Manuel; Gervain, Judit
To what degree non-linguistic auditory rhythm perception is governed by universal biases (e.g., Iambic-Trochaic Law; Hayes, 1995) or shaped by native language experience is debated. It has been proposed that rhythmic regularities in spoken language, such as phrasal prosody affect the grouping abilities of monolinguals (e.g., Iversen, Patel, & Ohgushi, 2008). Here, we assessed the non-linguistic tone grouping biases of Spanish monolinguals, and three groups of Basque-Spanish bilinguals with different levels of Basque experience. It is usually assumed in the literature that Basque and Spanish have different phrasal prosodies and even linguistic rhythms. To confirm this, first, we quantified Basque and Spanish phrasal prosody (Experiment 1a) and duration patterns used in the classification of languages into rhythm classes (Experiment 1b). The acoustic measurements revealed that regularities in phrasal prosody systematically differ across Basque and Spanish; by contrast, the rhythms of the two languages are only minimally dissimilar. In Experiment 2, participants' non-linguistic rhythm preferences were assessed in response to non-linguistic tones alternating in either intensity (Intensity condition) or in duration (Duration condition). In the Intensity condition, all groups showed a trochaic grouping bias, as predicted by the Iambic-Trochaic Law. In the Duration Condition the Spanish monolingual and the most Basque-dominant bilingual group exhibited opposite grouping preferences in line with the phrasal prosodies of their native/dominant languages, trochaic in Basque, iambic in Spanish. The two other bilingual groups showed no significant biases, however. Overall, results indicate that duration-based grouping mechanisms are biased toward the phrasal prosody of the native and dominant language; also, the presence of an L2 in the environment interacts with the auditory biases. Copyright © 2016 Elsevier B.V. All rights reserved.
Couperus, J W
This study explored effects of perceptual load on stimulus processing in the presence and absence of an attended stimulus. Participants were presented with a bilateral or unilateral display and asked to perform a discrimination task at either low or high perceptual load. Electrophysiological responses to stimuli were then compared at the P100 and N100. As in previous studies, perceptual load modified processing of attended and unattended stimuli seen at occipital scalp sites. Moreover, perceptual load modulated attention effects when the attended stimulus was presented at high perceptual load for unilateral displays. However, this was not true when the attended and unattended stimulus appeared simultaneously in bilateral displays. Instead, only a main effect of perceptual load was found. Reductions in processing contralateral to the unattended stimulus at the N100 provide support for Lavie's (1995) theory of selective attention. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Gutherie, Audrey H; Seely, Peter W; Beacham, Lauren A; Schuchard, Ronald A; De l'Aune, William A; Moore, Anna Bacon
The impact of age-related changes in visual-perceptual processing on naming ability has not been reported. The present study investigated the effects of 6 levels of spatial frequency and 6 levels of contrast on accuracy and latency to name objects in 14 young and 13 older neurologically normal adults with intact lexical-semantic functioning. Spatial frequency and contrast manipulations were made independently. Consistent with the hypotheses, variations in these two visual parameters impact naming ability in young and older subjects differently. The results from the spatial frequency manipulations revealed that, in general, young vs. older subjects are faster and more accurate to name. However, this age-related difference is dependent on the spatial frequency of the image; differences were only seen for images presented at low (e.g., 0.25-1 c/deg) or high (e.g., 8-16 c/deg) spatial frequencies. Contrary to predictions, the results from the contrast manipulations revealed that overall older vs. young adults are more accurate to name. Again, however, differences were only seen for images presented at the lower levels of contrast (i.e., 1.25%). Both age groups had shorter latencies on the second exposure of the contrast-manipulated images, but this possible advantage of exposure was not seen for spatial frequency. Category analyses conducted on the data from this study indicate that older vs. young adults exhibit a stronger nonliving-object advantage for naming spatial frequency-manipulated images. Moreover, the findings suggest that bottom-up visual-perceptual variables integrate with top-down category information in different ways. Potential implications for the aging and naming (and recognition) literatures are discussed.
Xie, Jiushu; Wang, Ruiming; Sun, Xun; Chang, Song
The effects of color and shape load on conceptual processing were studied. Perceptual load effects have been found in visual and auditory conceptual processing, supporting the theory of embodied cognition. However, whether different types of visual concepts, such as color and shape, share the same perceptual load effects is unknown. In the current experiment, 32 participants were administered simultaneous perceptual and conceptual tasks to assess the relation between perceptual load and conceptual processing. Holding a color load in mind obstructed color conceptual processing. Hence, perceptual load and conceptual processing drew on the same resources, supporting embodied cognition. Color conceptual processing was not affected by shape pictures, indicating that different types of properties within vision are processed separately.
Farkas, Mitchell S.; Hoyer, William J.
Examined adult age differences in the effects of perceptual grouping on attentional performance. All three age groups were slowed by the presence of similar irrelevant information, but the elderly were slowed more than were the young adults. (Author)
Boutet, Isabelle; Meinhardt-Injac, Bozana
We simultaneously investigated the role of three hypotheses regarding age-related differences in face processing: perceptual degradation, impaired holistic processing, and an interaction between the two. Young adults (YA) aged 20-33 years, middle-aged adults (MA) aged 50-64 years, and older adults (OA) aged 65-82 years were tested on the context congruency paradigm, which allows measurement of face-specific holistic processing across the life span (Meinhardt-Injac, Persike & Meinhardt, 2014. Acta Psychologica, 151, 155-163). Perceptual degradation was examined by measuring performance with faces that were not filtered (FSF), with faces filtered to preserve low spatial frequencies (LSF), and with faces filtered to preserve high spatial frequencies (HSF). We found that reducing perceptual signal strength had a greater impact on MA and OA for HSF faces, but not LSF faces. Context congruency effects were significant and of comparable magnitude across ages for FSF, LSF, and HSF faces. By using watches as control objects, we show that these holistic effects reflect face-specific mechanisms in all age groups. Our results support the perceptual degradation hypothesis for faces containing only HSF and suggest that holistic processing is preserved in aging even under conditions of reduced signal strength. © The Author(s) 2018. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: firstname.lastname@example.org.
Abou-Zleikha, Mohamed; Tan, Zheng-Hua; Christensen, Mads Græsbøll
areas such as object detection, face recognition, and audio event detection. This paper proposes to use an online random forest technique for detecting laughter and fillers and for analyzing, through permutation, the importance of various features for non-linguistic vocal event classification. The results...... show that, according to the Area Under Curve measure, the online random forest achieved 88.1% compared to 82.9% obtained by the baseline support vector machines for laughter classification, and 86.8% compared to 83.6% for filler classification....
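The abstract does not detail the implementation, but the permutation-based feature analysis it mentions follows a standard recipe: shuffle one feature column at a time and measure the resulting drop in Area Under Curve. The sketch below is a minimal stdlib-only illustration with an invented toy scorer and synthetic data; it is not the authors' classifier or feature set.

```python
import random

def roc_auc(scores, labels):
    """ROC Area Under Curve via the rank-sum (Mann-Whitney U) formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def permutation_importance(score_fn, X, y, n_features, seed=0):
    """AUC drop when each feature column is shuffled independently."""
    rng = random.Random(seed)
    base = roc_auc([score_fn(row) for row in X], y)
    drops = []
    for j in range(n_features):
        col = [row[j] for row in X]
        rng.shuffle(col)  # break the feature-label association for column j
        Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        drops.append(base - roc_auc([score_fn(row) for row in Xp], y))
    return base, drops

# Toy data: feature 0 carries the label, feature 1 is pure noise.
rng = random.Random(42)
y = [i % 2 for i in range(200)]
X = [[yi + rng.gauss(0, 0.5), rng.gauss(0, 1.0)] for yi in y]
score = lambda row: row[0] + 0.1 * row[1]  # stand-in for a trained classifier
base, drops = permutation_importance(score, X, y, n_features=2, seed=1)
```

Shuffling the informative column collapses the AUC toward chance, so its importance (AUC drop) comes out much larger than that of the noise column.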
Kang, Su Jin; Kim, Jae Hyoung; Shin, Tae Min
To obtain preliminary data for understanding the central auditory neural pathway by means of functional MR imaging (fMRI) of the cerebral auditory cortex during linguistic and non-linguistic auditory stimulation. In three right-handed volunteers we conducted fMRI of auditory cortex stimulation at 1.5 T using a conventional gradient-echo technique (TR/TE/flip angle: 80/60/40 deg). Using a pulsed tone of 1000 Hz and speech as non-linguistic and linguistic auditory stimuli, respectively, images including the superior temporal gyrus of both hemispheres were obtained in sagittal planes. Both stimuli were delivered separately, binaurally or monaurally, through a plastic earphone. Activation maps were generated by processing the images with in-house software. In order to analyze patterns of auditory cortex activation according to the type of stimulus and the side of the stimulated ear, the number and extent of activated pixels were compared between the two temporal lobes. Binaural stimulation led to bilateral activation of the superior temporal gyrus, while monaural stimulation led to more activation in the contralateral temporal lobe than in the ipsilateral one. A trend toward slight activation of the left (dominant) temporal lobe during ipsilateral stimulation, particularly with a linguistic stimulus, was observed. During both binaural and monaural stimulation, a linguistic stimulus produced more widespread activation than a non-linguistic one. The superior temporal gyri of both temporal lobes are associated with acoustic-phonetic analysis, and the left (dominant) superior temporal gyrus is likely to play a dominant role in this processing. For a better understanding of physiological and pathological central auditory pathways, further investigation is needed.
Collins, Heather R; Zhu, Xun; Bhatt, Ramesh S; Clark, Jonathan D; Joseph, Jane E
The degree to which face-specific brain regions are specialized for different kinds of perceptual processing is debated. This study parametrically varied demands on featural, first-order configural, or second-order configural processing of faces and houses in a perceptual matching task to determine the extent to which the process of perceptual differentiation was selective for faces regardless of processing type (domain-specific account), specialized for specific types of perceptual processing regardless of category (process-specific account), engaged in category-optimized processing (i.e., configural face processing or featural house processing), or reflected generalized perceptual differentiation (i.e., differentiation that crosses category and processing type boundaries). ROIs were identified in a separate localizer run or with a similarity regressor in the face-matching runs. The predominant principle accounting for fMRI signal modulation in most regions was generalized perceptual differentiation. Nearly all regions showed perceptual differentiation for both faces and houses for more than one processing type, even if the region was identified as face-preferential in the localizer run. Consistent with process specificity, some regions showed perceptual differentiation for first-order processing of faces and houses (right fusiform face area and occipito-temporal cortex and right lateral occipital complex), but not for featural or second-order processing. Somewhat consistent with domain specificity, the right inferior frontal gyrus showed perceptual differentiation only for faces in the featural matching task. The present findings demonstrate that the majority of regions involved in perceptual differentiation of faces are also involved in differentiation of other visually homogenous categories.
Zilber, Nicolas; Ciuciu, Philippe; Gramfort, Alexandre; Azizi, Leila; van Wassenhove, Virginie
Multisensory interactions are ubiquitous in cortex and it has been suggested that sensory cortices may be supramodal, i.e. capable of functional selectivity irrespective of the sensory modality of inputs (Pascual-Leone and Hamilton, 2001; Renier et al., 2013; Ricciardi and Pietrini, 2011; Voss and Zatorre, 2012). Here, we asked whether learning to discriminate visual coherence could benefit from supramodal processing. To this end, three groups of participants were briefly trained to discriminate which of a red or green intermixed population of random-dot-kinematograms (RDKs) was most coherent in a visual display while being recorded with magnetoencephalography (MEG). During training, participants heard no sound (V), congruent acoustic textures (AV) or auditory noise (AVn); importantly, congruent acoustic textures shared the temporal statistics - i.e. coherence - of the visual RDKs. After training, the AV group significantly outperformed participants trained in V and AVn, although they were not aware of their progress. In pre- and post-training blocks, all participants were tested without sound and with the same set of RDKs. When contrasting MEG data collected in these experimental blocks, selective differences were observed in the dynamic pattern and the cortical loci responsive to visual RDKs. First, and common to all three groups, vlPFC showed selectivity to the learned coherence levels, whereas selectivity in visual motion area hMT+ was only seen for the AV group. Second, and solely for the AV group, activity in multisensory cortices (mSTS, pSTS) correlated with post-training performances; additionally, the latencies of these effects suggested feedback from vlPFC to hMT+, possibly mediated by temporal cortices in the AV and AVn groups. Altogether, we interpret our results in the context of the Reverse Hierarchy Theory of learning (Ahissar and Hochstein, 2004), in which supramodal processing optimizes visual perceptual learning by capitalizing on sensory
Calvo, Manuel G.; Fernandez-Martin, Andres; Nummenmaa, Lauri
Why is a face with a smile but non-happy eyes likely to be interpreted as happy? We used blended expressions in which a smiling mouth was incongruent with the eyes (e.g., angry eyes), as well as genuine expressions with congruent eyes and mouth (e.g., both happy or angry). Tasks involved detection of a smiling mouth (perceptual), categorization of…
Linguistic manual gestures are the basis of sign languages used by deaf individuals. Working memory and language processing are intimately connected and thus when language is gesture-based, it is important to understand related working memory mechanisms. This article reviews work on working memory for linguistic and non-linguistic manual gestures and discusses theoretical and applied implications. Empirical evidence shows that there are effects of load and stimulus degradation on working memory for manual gestures. These effects are similar to those found for working memory for speech-based language. Further, there are effects of pre-existing linguistic representation that are partially similar across language modalities. But above all, deaf signers score higher than hearing non-signers on an n-back task with sign-based stimuli, irrespective of their semantic and phonological content, but not with non-linguistic manual actions. This pattern may be partially explained by recent findings relating to cross-modal plasticity in deaf individuals. It suggests that in linguistic gesture-based working memory, semantic aspects may outweigh phonological aspects when processing takes place under challenging conditions. The close association between working memory and language development should be taken into account in understanding and alleviating the challenges faced by deaf children growing up with cochlear implants as well as other clinical populations.
Vallila-Rohter, Sofia; Kiran, Swathi
Though aphasia is primarily characterized by impairments in the comprehension and/or expression of language, research has shown that patients with aphasia also show deficits in cognitive-linguistic domains such as attention, executive function, concept knowledge and memory (Helm-Estabrooks, 2002 for review). Research in aphasia suggests that cognitive impairments can impact the online construction of language, new verbal learning, and transactional success (Freedman & Martin, 2001; Hula & McNeil, 2008; Ramsberger, 2005). In our research, we extend this hypothesis to suggest that general cognitive deficits influence progress with therapy. The aim of our study is to explore learning, a cognitive process that is integral to relearning language, yet underexplored in the field of aphasia rehabilitation. We examine non-linguistic category learning in patients with aphasia (n=19) and in healthy controls (n=12), comparing feedback and non-feedback based instruction. Participants complete two computer-based learning tasks that require them to categorize novel animals based on the percentage of features shared with one of two prototypes. As hypothesized, healthy controls showed successful category learning following both methods of instruction. In contrast, only 60% of our patient population demonstrated successful non-linguistic category learning. Patient performance was not predictable by standardized measures of cognitive ability. Results suggest that general learning is affected in aphasia and is a unique, important factor to consider in the field of aphasia rehabilitation. PMID:23127795
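As a concrete illustration of the categorization rule described above (assigning a novel exemplar to whichever of two prototypes it shares the greater percentage of features with), here is a minimal sketch; the binary feature coding and all names are invented for illustration and do not reproduce the study's stimuli.

```python
def feature_overlap(stimulus, prototype):
    """Fraction of features the stimulus shares with the prototype."""
    assert len(stimulus) == len(prototype)
    return sum(s == p for s, p in zip(stimulus, prototype)) / len(stimulus)

def categorize(stimulus, proto_a, proto_b):
    """Assign the category whose prototype shares more features."""
    if feature_overlap(stimulus, proto_a) >= feature_overlap(stimulus, proto_b):
        return "A"
    return "B"

# Two maximally distinct 8-feature prototypes (hypothetical coding).
proto_a = [1, 1, 1, 1, 1, 1, 1, 1]
proto_b = [0, 0, 0, 0, 0, 0, 0, 0]
novel = [1, 1, 1, 1, 1, 0, 0, 1]  # shares 6/8 features with prototype A
category = categorize(novel, proto_a, proto_b)
```

In a feedback condition, each response would be followed by correct/incorrect information; in a non-feedback condition, learners would only observe labeled exemplars.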
Sündermann, Oliver; Hauschildt, Marit; Ehlers, Anke
Background: Intrusive reexperiencing in posttraumatic stress disorder (PTSD) is commonly triggered by stimuli with perceptual similarity to those present during the trauma. Information processing theories suggest that perceptual processing during the trauma and enhanced perceptual priming contribute to the easy triggering of intrusive memories by these cues. Methods: Healthy volunteers (N = 51) watched neutral and trauma picture stories on a computer screen. Neutral objects that were unrelated to the content of the stories briefly appeared in the interval between the pictures. Dissociation and data-driven processing (as indicators of perceptual processing) and state anxiety during the stories were assessed with self-report questionnaires. After filler tasks, participants completed a blurred object identification task to assess priming and a recognition memory task. Intrusive memories were assessed with telephone interviews 2 weeks and 3 months later. Results: Neutral objects were more strongly primed if they occurred in the context of trauma stories than if they occurred during neutral stories, although the effect size was only moderate (ηp² = .08) and only significant when trauma stories were presented first. Regardless of story order, enhanced perceptual priming predicted intrusive memories at 2-week follow-up (N = 51), but not at 3 months (n = 40). Data-driven processing, dissociation and anxiety increases during the trauma stories also predicted intrusive memories. Enhanced perceptual priming and data-driven processing were associated with lower verbal intelligence. Limitations: It is unclear to what extent these findings generalize to real-life traumatic events and whether they are specific to negative emotional events. Conclusions: The results provide some support for the role of perceptual processing and perceptual priming in reexperiencing symptoms. PMID:23207970
Mostert, Pim; Kok, Peter; de Lange, Floris P.
A key question within systems neuroscience is how the brain translates physical stimulation into a behavioral response: perceptual decision making. To answer this question, it is important to dissociate the neural activity underlying the encoding of sensory information from the activity underlying the subsequent temporal integration into a decision variable. Here, we adopted a decoding approach to empirically assess this dissociation in human magnetoencephalography recordings. We used a funct...
Wang, Fang; Huang, Jing; Lv, Yaping; Ma, Xiaoli; Yang, Bin; Wang, Encong; Du, Boqi; Li, Wu; Song, Yan
Visual perceptual learning has been shown to be highly specific to the retinotopic location and attributes of the trained stimulus. Recent psychophysical studies suggest that these specificities, which have been associated with early retinotopic visual cortex, may in fact not be inherent in perceptual learning and could be related to higher-order brain functions. Here we provide direct electrophysiological evidence in support of this proposition. In a series of event-related potential (ERP) experiments, we recorded high-density electroencephalography (EEG) from human adults over the course of learning in a texture discrimination task (TDT). The results consistently showed that the earliest C1 component (68-84ms), known to reflect V1 activity driven by feedforward inputs, was not modulated by learning regardless of whether the behavioral improvement is location specific or not. In contrast, two later posterior ERP components (posterior P1 and P160-350) over the occipital cortex and one anterior ERP component (anterior P160-350) over the prefrontal cortex were progressively modified day by day. Moreover, the change of the anterior component was closely correlated with improved behavioral performance on a daily basis. Consistent with recent psychophysical and imaging observations, our results indicate that perceptual learning can mainly involve changes in higher-level visual cortex as well as in the neural networks responsible for cognitive functions such as attention and decision making. Copyright © 2015 Elsevier Inc. All rights reserved.
The paper addresses the methodological potential of mobile technologies in teaching a foreign language to non-linguistic students. The author (a) defines the term "mobile education"; (b) suggests a list of mobile technologies used in foreign language teaching; and (c) develops a list of the language abilities and skills of non-linguistic majors that can be developed via mobile technologies.
Seitz, Aaron R
Perceptual learning refers to how experience can change the way we perceive sights, sounds, smells, tastes, and touch. Examples abound: music training improves our ability to discern tones; experience with food and wines can refine our palate (and unfortunately more quickly empty our wallet), and with years of training radiologists learn to save lives by discerning subtle details of images that escape the notice of untrained viewers. We often take perceptual learning for granted, but it has a profound impact on how we perceive the world. In this Primer, I will explain how perceptual learning is transformative in guiding our perceptual processes, how research into perceptual learning provides insight into fundamental mechanisms of learning and brain processes, and how knowledge of perceptual learning can be used to develop more effective training approaches for those requiring expert perceptual skills or those in need of perceptual rehabilitation (such as individuals with poor vision). I will make a case that perceptual learning is ubiquitous, scientifically interesting, and has substantial practical utility to us all. Copyright © 2017. Published by Elsevier Ltd.
Urakawa, Tomokazu; Bunya, Mao; Araki, Osamu
A bistable image induces one of two perceptual alternatives. When the bistable visual image is continuously viewed, the percept of the image alternates from one possible percept to the other. Perceptual alternation was previously reported to be induced by an exogenous perturbation in the bistable image, and this perturbation was theoretically interpreted to cause neural noise, prompting a transition between two stable perceptual states. However, little is known experimentally about the visual processing of exogenously driven perceptual alternation. Based on the findings of a previous behavioral study (Urakawa et al. in Perception 45:474-482, 2016), the present study hypothesized that the automatic visual change detection process, which is relevant to the detection of a visual change in a sequence of visual events, has an enhancing effect on the induction of perceptual alternation, similar to neural noise. In order to clarify this issue, we developed a novel experimental paradigm in which visual mismatch negativity (vMMN), an electroencephalographic brain response that reflects visual change detection, was evoked while participants continuously viewed the bistable image. In terms of inter-individual differences in neural and behavioral data, we found that enhancements in the peak amplitude of vMMN1, early vMMN at a latency of approximately 150 ms, correlated with increases in the proportion of perceptual alternation across participants. Our results indicate the involvement of automatic visual change detection in the induction of perceptual alternation, similar to neural noise, thereby providing a deeper insight into the neural mechanisms underlying exogenously driven perceptual alternation in the bistable image.
Hamamé, Carlos M; Cosmelli, Diego; Henriquez, Rodrigo; Aboitiz, Francisco
Humans and other animals change the way they perceive the world due to experience. This process has been labeled as perceptual learning, and implies that adult nervous systems can adaptively modify the way in which they process sensory stimulation. However, the mechanisms by which the brain modifies this capacity have not been sufficiently analyzed. We studied the neural mechanisms of human perceptual learning by combining electroencephalographic (EEG) recordings of brain activity and the assessment of psychophysical performance during training in a visual search task. All participants improved their perceptual performance as reflected by an increase in sensitivity (d') and a decrease in reaction time. The EEG signal was acquired throughout the entire experiment revealing amplitude increments, specific and unspecific to the trained stimulus, in event-related potential (ERP) components N2pc and P3 respectively. P3 unspecific modification can be related to context or task-based learning, while N2pc may be reflecting a more specific attentional-related boosting of target detection. Moreover, bell and U-shaped profiles of oscillatory brain activity in gamma (30-60 Hz) and alpha (8-14 Hz) frequency bands may suggest the existence of two phases for learning acquisition, which can be understood as distinctive optimization mechanisms in stimulus processing. We conclude that there are reorganizations in several neural processes that contribute differently to perceptual learning in a visual search task. We propose an integrative model of neural activity reorganization, whereby perceptual learning takes place as a two-stage phenomenon including perceptual, attentional and contextual processes.
Oh, Hyungsuk; Kim, Wonha
We have developed a video processing method that achieves video coding oriented toward human perceptual visual quality. The patterns of moving objects are modeled by considering the limited human capacity for spatial-temporal resolution and the visual sensory memory together, and an online moving-pattern classifier is devised by using the Hedge algorithm. The moving-pattern classifier is embedded in the existing visual saliency with the purpose of providing a human perceptual video quality saliency model. In order to apply the developed saliency model to video coding, the conventional foveation filtering method is extended. The proposed foveation filter can smooth and enhance the video signals locally, in conformance with the developed saliency model, without causing any artifacts. The performance evaluation results confirm that the proposed video processing method shows reliable improvements in the perceptual quality for various sequences and at various bandwidths, compared to existing saliency-based video coding methods.
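The Hedge algorithm named above is the standard multiplicative-weights online learner. As an illustration only (not the authors' implementation), a minimal sketch of how such a classifier could reweight several hypothetical moving-pattern "experts" after observing their per-frame losses:

```python
import math

def hedge_update(weights, losses, eta=0.5):
    """One round of the Hedge (multiplicative weights) algorithm:
    each expert's weight is scaled down exponentially in its loss,
    then the weights are renormalized to sum to 1."""
    scaled = [w * math.exp(-eta * loss) for w, loss in zip(weights, losses)]
    total = sum(scaled)
    return [w / total for w in scaled]

# Three hypothetical moving-pattern experts, initially weighted uniformly.
w = [1 / 3, 1 / 3, 1 / 3]
# Expert 0 predicted poorly this round (loss 1); the others were correct.
w = hedge_update(w, [1.0, 0.0, 0.0])
```

After the update the poorly performing expert's weight shrinks relative to the others, which is the mechanism that lets the classifier track the currently best-matching pattern model online.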
Gilbert, Annie C; Boucher, Victor J; Jemel, Boutheina
We examined how perceptual chunks of varying size in utterances can influence immediate memory of heard items (monosyllabic words). Using behavioral measures and event-related potentials (N400) we evaluated the quality of the memory trace for targets taken from perceived temporal groups (TGs) of three and four items. Variations in the amplitude of the N400 showed a better memory trace for items presented in TGs of three compared to those in groups of four. Analyses of behavioral responses along with P300 components also revealed effects of chunk position in the utterance. This is the first study to measure the online effects of perceptual chunks on the memory trace of spoken items. Taken together, the N400 and P300 responses demonstrate that the perceptual chunking of speech facilitates information buffering and processing on a chunk-by-chunk basis.
Shepard, Charlene R.; Reynolds, Ralph E.
Investigating the selective attention strategy, a study examined the type of attention allocated to important information by good and poor readers. Also tested was the methodological validity of using a conceptual (word recognition)/perceptual (tachistoscopic word flash) task as a means of investigating the types of information processing that may…
Fitousi, Daniel; Wenger, Michael J.
Variations in perceptual and cognitive demands (load) play a major role in determining the efficiency of selective attention. According to load theory (Lavie, Hirst, de Fockert, & Viding, 2004) these factors (a) improve or hamper selectivity by altering the way resources (e.g., processing capacity) are allocated, and (b) tap resources rather than…
Madsen, Bodil Nistrup
‘Symbol’, ‘non-verbal form’ and ‘non-linguistic form’ – are they synonymous designations of one data category, or do they designate different data categories? In the presentation we will discuss definitions from e.g. ISOcat, ISO 704:2009 and the DanTermBank taxonomy of terminological data categories, and we will present some thoughts about the relevance of non-linguistic information in a national term bank.
Perceptual/Emotional Processing. Report title: Sustained unilateral hand clenching alters perceptual processing and affective/motivational state. … clenching on emotional/motivational state are in accord with, and have been interpreted as fitting, theories of cerebral lateralization of emotion … of lateralization of emotional/motivational state (e.g., Davidson, 2002), such that unilateral clenching of the left hand, presumed to activate the …
van Ravenzwaaij, Don; Boekel, Wouter; Forstmann, Birte U; Ratcliff, Roger; Wagenmakers, Eric-Jan
Previous research suggests that playing action video games improves performance on sensory, perceptual, and attentional tasks. For instance, Green, Pouget, and Bavelier (2010) used the diffusion model to decompose data from a motion detection task and estimate the contribution of several underlying psychological processes. Their analysis indicated that playing action video games leads to faster information processing, reduced response caution, and no difference in motor responding. Because perceptual learning is generally thought to be highly context-specific, this transfer from gaming is surprising and warrants corroborative evidence from a large-scale training study. We conducted 2 experiments in which participants practiced either an action video game or a cognitive game in 5 separate, supervised sessions. Prior to each session and following the last session, participants performed a perceptual discrimination task. In the second experiment, we included a third condition in which no video games were played at all. Behavioral data and diffusion model parameters showed similar practice effects for the action gamers, the cognitive gamers, and the nongamers and suggest that, in contrast to earlier reports, playing action video games does not improve the speed of information processing in simple perceptual tasks.
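The diffusion model referenced here decomposes accuracy and response-time data into a drift rate (speed of information processing), a boundary separation (response caution), and a non-decision time (encoding plus motor output). A toy forward simulation, with parameter values chosen purely for illustration, shows how these components jointly generate choices and RTs:

```python
import random

def diffusion_trial(drift=0.15, boundary=1.0, ndt=0.3, dt=0.001, noise=1.0):
    """Simulate one diffusion-model trial: evidence starts midway between
    the two boundaries (at 0) and accumulates noisily until it crosses
    either boundary. Returns (choice, reaction_time in seconds)."""
    x, t = 0.0, 0.0
    while abs(x) < boundary / 2:
        x += drift * dt + noise * random.gauss(0, 1) * dt ** 0.5
        t += dt
    return (1 if x > 0 else 0, t + ndt)

random.seed(0)
trials = [diffusion_trial() for _ in range(200)]
accuracy = sum(choice for choice, _ in trials) / len(trials)
```

Fitting the model runs this logic in reverse: observed choice and RT distributions constrain the parameter estimates, which is how Green, Pouget, and Bavelier attributed group differences to processing speed versus caution.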
Schlaffke, Lara; Rüther, Naima N; Heba, Stefanie; Haag, Lauren M; Schultz, Thomas; Rosengarth, Katharina; Tegenthoff, Martin; Bellebaum, Christian; Schmidt-Wilcke, Tobias
Certain kinds of stimuli can be processed on multiple levels. While the neural correlates of different levels of processing (LOPs) have been investigated to some extent, most of the studies involve skills and/or knowledge already present when performing the task. In this study we specifically sought to identify neural correlates of an evolving skill that allows the transition from a perceptual to a lexico-semantic stimulus analysis. Eighteen participants were trained to decode 12 letters of Morse code that were presented acoustically inside and outside of the scanner environment. Morse code was presented in trains of three letters while brain activity was assessed with fMRI. Participants either attended to the stimulus length (perceptual analysis), or evaluated its meaning, distinguishing words from nonwords (lexico-semantic analysis). Perceptual and lexico-semantic analyses shared a mutual network comprising the left premotor cortex, the supplementary motor area (SMA) and the inferior parietal lobule (IPL). Perceptual analysis was associated with strong brain activation in the SMA and the superior temporal gyrus (STG) bilaterally, which remained unaltered from pre- to post-training. In the lexico-semantic analysis post learning, study participants showed additional activation in the left inferior frontal cortex (IFC) and in the left occipitotemporal cortex (OTC), regions known to be critically involved in lexical processing. Our data provide evidence for cortical plasticity evolving with a learning process enabling the transition from perceptual to lexico-semantic stimulus analysis. Importantly, the activation pattern remains tied to the task-relevant LOP and is thus the result of a decision process as to which LOP to engage in. © 2015 The Authors. Human Brain Mapping Published by Wiley Periodicals, Inc.
The dual nature of the Japanese writing system was used to investigate two assumptions of the processing view of memory transfer: (1) that both perceptual and conceptual processing can contribute to the same memory test (mixture assumption) and (2) that both can be broken into more specific processes (subdivision assumption). Supporting the mixture assumption, a word fragment completion test based on ideographic kanji characters (kanji fragment completion test) was affected by both perceptual (hiragana/kanji script shift) and conceptual (levels-of-processing) study manipulations. The script effect was specific to kanji fragments, because it did not occur with the use of meaningless hiragana fragments. The mixture assumption is also supported by an effect of study script on an implicit conceptual test (sentence completion), and the subdivision assumption is supported by a crossover dissociation between hiragana and kanji fragment completion as a function of study script.
He, Xun; Witzel, Christoph; Forder, Lewis; Clifford, Alexandra; Franklin, Anna
Prior claims that color categories affect color perception are confounded by inequalities in the color space used to equate same- and different-category colors. Here, we equate same- and different-category colors in the number of just-noticeable differences, and measure event-related potentials (ERPs) to these colors on a visual oddball task to establish if color categories affect perceptual or post-perceptual stages of processing. Category effects were found from 200 ms after color presentation, only in ERP components that reflect post-perceptual processes (e.g., N2, P3). The findings suggest that color categories affect post-perceptual processing, but do not affect the perceptual representation of color.
Sreenivasan, Kartik K; Jha, Amishi P
Selective attention has been shown to bias sensory processing in favor of relevant stimuli and against irrelevant or distracting stimuli in perceptual tasks. Increasing evidence suggests that selective attention plays an important role during working memory maintenance, possibly by biasing sensory processing in favor of to-be-remembered items. In the current study, we investigated whether selective attention may also support working memory by biasing processing against irrelevant and potentially distracting information. Event-related potentials (ERPs) were recorded while subjects (n = 22) performed a delayed-recognition task for faces and shoes. The delay period was filled with face or shoe distractors. Behavioral performance was impaired when distractors were congruent with the working memory domain (e.g., face distractor during working memory for faces) relative to when distractors were incongruent with the working memory domain (e.g., face distractor during shoe working memory). If attentional biasing against distractor processing is indeed functionally relevant in supporting working memory maintenance, perceptual processing of distractors is predicted to be attenuated when distractors are more behaviorally intrusive relative to when they are nonintrusive. As such, we predicted that perceptual processing of distracting faces, as measured by the face-sensitive N170 ERP component, would be reduced in the context of congruent (face) working memory relative to incongruent (shoe) working memory. The N170 elicited by distracting faces demonstrated reduced amplitude during congruent versus incongruent working memory. These results suggest that perceptual processing of distracting faces may be attenuated due to attentional biasing against sensory processing of distractors that are most behaviorally intrusive during working memory maintenance.
Tacikowski, P; Ehrsson, H H
Self-related stimuli, such as one's own name or face, are processed faster and more accurately than other types of stimuli. However, what remains unknown is at which stage of the information processing hierarchy this preferential processing occurs. Our first aim was to determine whether preferential self-processing involves mainly perceptual stages or also post-perceptual stages. We found that self-related priming was stronger than other-related priming only because of perceptual prime-target congruency. Our second aim was to dissociate the role of conscious and unconscious factors in preferential self-processing. To this end, we compared the "self" and "other" conditions in trials where primes were masked or unmasked. In two separate experiments, we found that self-related priming was stronger than other-related priming but only in the unmasked trials. Together, our results suggest that preferential access to the self-concept occurs mainly at the perceptual and conscious stages of the stimulus processing hierarchy. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Joshi, Suyash Narendra; Jesteadt, Walt
Weighting patterns for loudness obtained using the reverse correlation method are thought to reveal the relative contributions of different frequency regions to total loudness, the equivalent of specific loudness. Current models of loudness assume that specific loudness is determined by peripheral processes such as compression and masking. Here we test this hypothesis using 20-tone harmonic complexes (200 Hz f0, 200 to 4000 Hz, 250 ms, 65 dB/component) added in opposite phase relationships (Schroeder positive and negative). Due to the varying degree of envelope modulations, these time-reversed harmonic … processes and reflect a central frequency weighting template …
This study aimed to compare the effects of a non-linguistic auditory intervention approach with a phonological intervention approach on the phonological skills of children with speech sound disorder. A total of 17 children, aged 7-12 years, with speech sound disorder were randomly allocated to either the non-linguistic auditory temporal intervention group (n = 10, average age 7.7 ± 1.2) or the phonological intervention group (n = 7, average age 8.6 ± 1.2). The intervention outcomes included auditory-sensory measures (auditory temporal processing skills) and cognitive measures (attention, short-term memory, speech production and phonological awareness skills). The auditory approach focused on non-linguistic auditory training (e.g. backward masking and frequency discrimination), whereas the phonological approach focused on speech sound training (e.g. phonological organisation and awareness). Both interventions consisted of twelve 45-minute sessions delivered twice per week, for a total of nine hours. Intra-group analysis demonstrated that the auditory intervention group showed significant gains in both auditory and cognitive measures, whereas no significant gain was observed in the phonological intervention group. No significant improvement on phonological skills was observed in either group. Inter-group analysis demonstrated significant differences between the improvement following training for both groups, with a more pronounced gain for the non-linguistic auditory temporal intervention in one of the visual attention measures and both auditory measures. Therefore, both analyses suggest that although the non-linguistic auditory intervention approach appeared to be the more effective intervention approach, it was not sufficient to promote the enhancement of phonological skills.
Richler, Jennifer J.; Gauthier, Isabel; Wenger, Michael J.; Palmeri, Thomas J.
Researchers have used several composite face paradigms to assess holistic processing of faces. In the selective attention paradigm, participants decide whether one face part (e.g., top) is the same as a previously seen face part. Their judgment is affected by whether the irrelevant part of the test face is the same as or different than the…
Pecher, Diane; Zeelenberg, René; Raaijmakers, Jeroen G W
Two experiments investigated the influence of automatic and strategic processes on associative priming effects in a perceptual identification task in which prime-target pairs are briefly presented and masked. In this paradigm, priming is defined as a higher percentage of correctly identified targets for related pairs than for unrelated pairs. In Experiment 1, priming was obtained for mediated word pairs. This mediated priming effect was affected neither by the presence of direct associations nor by the presentation time of the primes, indicating that automatic priming effects play a role in perceptual identification. Experiment 2 showed that the priming effect was not affected by the proportion (.90 vs. .10) of related pairs if primes were presented briefly to prevent their identification. However, a large proportion effect was found when primes were presented for 1000 ms so that they were clearly visible. These results indicate that priming in a masked perceptual identification task is the result of automatic processes and is not affected by strategies. The present paradigm provides a valuable alternative to more commonly used tasks such as lexical decision.
Schmetz, Emilie; Magis, David; Detraux, Jean-Jacques; Barisnikov, Koviljka; Rousselle, Laurence
The present study aims to assess how the processing of basic visual perceptual (VP) components (length, surface, orientation, and position) develops in typically developing (TD) children (n = 215, 4-14 years old) and adults (n = 20, 20-25 years old), and in children with cerebral palsy (CP) (n = 86, 5-14 years old) using the first four subtests of the Battery for the Evaluation of Visual Perceptual and Spatial processing in children. Experiment 1 showed that these four basic VP processes follow distinct developmental trajectories in typical development. Experiment 2 revealed that children with CP present global and persistent deficits for the processing of basic VP components when compared with TD children matched on chronological age and nonverbal reasoning abilities.
Heather Raye Dial
Replicating previous studies, performance on the two word recognition tasks without closely matched distractors (WAB and PWM) was at ceiling for some subjects with impairments on consonant discrimination (see Figures 1a/1b). However, as shown in Figures 1c/1d, for word processing tasks matched in phonological discriminability to the consonant discrimination task, scores on consonant discrimination and word processing were highly correlated, and no individual demonstrated substantially better performance on word than phoneme perception. One patient demonstrated worse performance on lexical decision (d' = .21) than phoneme perception (d' = 1.72), which can be attributed to impaired lexical or semantic processing. These data argue against the hypothesis that phoneme and word perception rely on different perceptual processes/routes for processing, and instead indicate that word perception depends on perception of sublexical units.
Allegretti, C L; Puglisi, J T
12 disabled and 12 nondisabled readers (mean age, 11 yr.) were compared on a letter-search task which separated perceptual processing from higher-order processing. Participants were presented a first stimulus (for 200 msec. to minimize eye movements) followed by a second stimulus immediately to estimate the amount of information initially perceived or after a 3000-msec. interval to examine information more permanently stored. Participants were required to decide whether any letter present in the first stimulus was also present in the second. Two processing loads (1 and 3 letters) were examined. Disabled readers showed more pronounced deficits when they were given very little time to process information or more information to process.
Recent studies suggest that multisensory integration is enhanced in older adults but it is not known whether this enhancement is solely driven by perceptual processes or affected by cognitive processes. Using the ‘McGurk illusion’, in Experiment 1 we found that audio-visual integration of incongruent audio-visual words was higher in older adults than in younger adults, although the recognition of either audio- or visual-only presented words was the same across groups. In Experiment 2 we tested recall of sentences within which an incongruent audio-visual speech word was embedded. The overall semantic meaning of the sentence was compatible with either one of the unisensory components of the target word and/or with the illusory percept. Older participants recalled more illusory audio-visual words in sentences than younger adults; however, there was no differential effect of word compatibility on recall for the two groups. Our findings suggest that the relatively high susceptibility to the audio-visual speech illusion in older participants is due more to perceptual than cognitive processing.
Borrie, Stephanie A
This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal-the AV advantage-has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.
Almeida, Diogo; Poeppel, David; Corina, David
The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied with a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.
Madsen, Bodil Nistrup
This paper will discuss definitions and give examples of linguistic and non-linguistic representation of concepts in a terminology and knowledge bank, and it will be argued that there is a need for a taxonomy of terminological data categories. As background, the DanTermBank project, which is carried out at Copenhagen Business School, will be introduced. In order to illustrate the need for a taxonomy for terminological data, some examples from the Data Category Registry of ISO TC 37 (ISOcat) will be given, and the taxonomy which has been developed for the DanTermBank project will be compared to the structure of ISOcat, the first printed standard comprising data categories for terminology management, ISO 12620:1999, and other standards from ISO TC 37. Finally some examples of linguistic and non-linguistic representations of concepts which we plan to introduce into the DanTermBank will be presented.
Lidiya Olegovna Polyakova
Results. Our results indicate that such conditions should be implemented based on the principle of «vertical integration», covering the social levels of the customers of higher education (the economic sector, national systems of higher education, the university, the faculty, the chair). Practical implications. We present a set of tools that is effective in overcoming the communication-related language barriers of future specialists with a non-linguistic profile.
Nematzadeh, Nasim; Powers, David M W; Lewis, Trent W
Why does our visual system fail to reconstruct reality, when we look at certain patterns? Where do Geometrical illusions start to emerge in the visual pathway? How far can we take computational models of vision that share our ability to detect illusions? This study addresses these questions, by focusing on a specific underlying neural mechanism involved in our visual experiences that affects our final perception. Among many types of visual illusion, 'Geometrical' and, in particular, 'Tilt Illusions' are rather important, being characterized by misperception of geometric patterns involving lines and tiles in combination with contrasting orientation, size or position. Over the last decade, many new neurophysiological experiments have led to new insights as to how, when and where retinal processing takes place, and the encoding nature of the retinal representation that is sent to the cortex for further processing. Based on these neurobiological discoveries, we provide computer simulation evidence from modelling retinal ganglion cell responses to some complex Tilt Illusions, suggesting that the emergence of tilt in these illusions is partially related to the interaction of multiscale visual processing performed in the retina. The output of our low-level filtering model is presented for several types of Tilt Illusion, predicting that the final tilt percept arises from multiple-scale processing of the Differences of Gaussians and the perceptual interaction of foreground and background elements. The model is a variation of classical receptive field implementation for simple cells in early stages of vision with the scales tuned to the object/texture sizes in the pattern. Our results suggest that this model has a high potential in revealing the underlying mechanism connecting low-level filtering approaches to mid- and high-level explanations such as 'Anchoring theory' and 'Perceptual grouping'.
Seth, Anil K
Normal perception involves experiencing objects within perceptual scenes as real, as existing in the world. This property of "perceptual presence" has motivated "sensorimotor theories" which understand perception to involve the mastery of sensorimotor contingencies. However, the mechanistic basis of sensorimotor contingencies and their mastery has remained unclear. Sensorimotor theory also struggles to explain instances of perception, such as synesthesia, that appear to lack perceptual presence and for which relevant sensorimotor contingencies are difficult to identify. On alternative "predictive processing" theories, perceptual content emerges from probabilistic inference on the external causes of sensory signals, however, this view has addressed neither the problem of perceptual presence nor synesthesia. Here, I describe a theory of predictive perception of sensorimotor contingencies which (1) accounts for perceptual presence in normal perception, as well as its absence in synesthesia, and (2) operationalizes the notion of sensorimotor contingencies and their mastery. The core idea is that generative models underlying perception incorporate explicitly counterfactual elements related to how sensory inputs would change on the basis of a broad repertoire of possible actions, even if those actions are not performed. These "counterfactually-rich" generative models encode sensorimotor contingencies related to repertoires of sensorimotor dependencies, with counterfactual richness determining the degree of perceptual presence associated with a stimulus. While the generative models underlying normal perception are typically counterfactually rich (reflecting a large repertoire of possible sensorimotor dependencies), those underlying synesthetic concurrents are hypothesized to be counterfactually poor. In addition to accounting for the phenomenology of synesthesia, the theory naturally accommodates phenomenological differences between a range of experiential states
Cantwell, George; Riesenhuber, Maximilian; Roeder, Jessica L; Ashby, F Gregory
The field of computational cognitive neuroscience (CCN) builds and tests neurobiologically detailed computational models that account for both behavioral and neuroscience data. This article leverages a key advantage of CCN-namely, that it should be possible to interface different CCN models in a plug-and-play fashion-to produce a new and biologically detailed model of perceptual category learning. The new model was created from two existing CCN models: the HMAX model of visual object processing and the COVIS model of category learning. Using bitmap images as inputs and by adjusting only a couple of learning-rate parameters, the new HMAX/COVIS model provides impressively good fits to human category-learning data from two qualitatively different experiments that used different types of category structures and different types of visual stimuli. Overall, the model provides a comprehensive neural and behavioral account of basal ganglia-mediated learning. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rungratsameetaweemana, Nuttida; Itthipuripat, Sirawaj; Salazar, Annalisa; Serences, John T
Two factors play important roles in shaping perception: the allocation of selective attention to behaviorally relevant sensory features, and prior expectations about regularities in the environment. Signal detection theory proposes distinct roles of attention and expectation on decision-making such that attention modulates early sensory processing, whereas expectation influences the selection and execution of motor responses. Challenging this classic framework, recent studies suggest that expectations about sensory regularities enhance the encoding and accumulation of sensory evidence during decision-making. However, it is possible that these findings reflect well-documented attentional modulations in visual cortex. Here, we tested this framework in a group of male and female human participants by examining how expectations about stimulus features (orientation and color) and expectations about motor responses impacted electroencephalography (EEG) markers of early sensory processing and the accumulation of sensory evidence during decision-making (the early visual negative potential and the centro-parietal positive potential, respectively). We first demonstrate that these markers are sensitive to changes in the amount of sensory evidence in the display. Then we show, counter to recent findings, that neither marker is modulated by either feature or motor expectations, despite a robust effect of expectations on behavior. Instead, violating expectations about likely sensory features and motor responses impacts posterior alpha and frontal theta oscillations, signals thought to index overall processing time and cognitive conflict. These findings are inconsistent with recent theoretical accounts and suggest instead that expectations primarily influence decisions by modulating post-perceptual stages of information processing. SIGNIFICANCE STATEMENT Expectations about likely features or motor responses play an important role in shaping behavior. Classic theoretical
Aggelopoulos, Nikolaos C
Perceptual inference refers to the ability to infer sensory stimuli from predictions that result from internal neural representations built through prior experience. Methods of Bayesian statistical inference and decision theory model cognition adequately by using error sensing either in guiding action or in "generative" models that predict the sensory information. In this framework, perception can be seen as a process qualitatively distinct from sensation, a process of information evaluation using previously acquired and stored representations (memories) that is guided by sensory feedback. The stored representations can be utilised as internal models of sensory stimuli enabling long term associations, for example in operant conditioning. Evidence for perceptual inference is contributed by such phenomena as the cortical co-localisation of object perception with object memory, the response invariance in the responses of some neurons to variations in the stimulus, as well as from situations in which perception can be dissociated from sensation. In the context of perceptual inference, sensory areas of the cerebral cortex that have been facilitated by a priming signal may be regarded as comparators in a closed feedback loop, similar to the better known motor reflexes in the sensorimotor system. The adult cerebral cortex can be regarded as similar to a servomechanism, in using sensory feedback to correct internal models, producing predictions of the outside world on the basis of past experience. Copyright © 2015 Elsevier Ltd. All rights reserved.
Vakil, E; Sigal, J
Twenty-four closed-head-injured (CHI) and 24 control participants studied two word lists under shallow (i.e., nonsemantic) and deep (i.e., semantic) encoding conditions. They were then tested on free recall, perceptual priming (i.e., perceptual partial word identification) and conceptual priming (i.e., category production) tasks. Previous findings have demonstrated that memory in CHI is characterized by inefficient conceptual processing of information. It was thus hypothesized that the CHI participants would perform more poorly than the control participants on the explicit and on the conceptual priming tasks. On these tasks the CHI group was expected to benefit to a lesser degree from prior deep encoding, as compared to controls. The groups were not expected to significantly differ from each other on the perceptual priming task. Prior deep encoding was not expected to improve the perceptual priming performance of either group. All findings were as predicted, with the exception that a significant effect was not found between groups for deep encoding in the conceptual priming task. The results are discussed (1) in terms of their theoretical contribution in further validating the dissociation between perceptual and conceptual priming; and (2) in terms of the contribution in differentiating between amnesic and CHI patients. Conceptual priming is preserved in amnesics but not in CHI patients.
Urakawa, Tomokazu; Aragaki, Tomoya; Araki, Osamu
Based on the predictive coding framework, the present behavioral study focused on the automatic visual change detection process, which yields a concomitant prediction error, as one of the visual processes relevant to the exogenously-driven perceptual alternation of a bistable image. According to this perspective, we speculated that the automatic visual change detection process with an enhanced prediction error is relevant to the greater induction of exogenously-driven perceptual alternation and attempted to test this hypothesis. A modified version of the oddball paradigm was used based on previous electroencephalographic studies on visual change detection, in which the deviant and standard defined by the bar's orientation were symmetrically presented around a continuously presented Necker cube (a bistable image). By manipulating inter-stimulus intervals and the number of standard repetitions, we set three experimental blocks: HM, IM, and LM blocks, in which the strength of the prediction error to the deviant relative to the standard was expected to gradually decrease in that order. The results showed that the deviant significantly increased perceptual alternation of the Necker cube relative to the standard from before to after the presentation of the deviant. Furthermore, the differential proportion of the deviant relative to the standard significantly decreased from the HM block to the IM and LM blocks. These results are consistent with our hypothesis, supporting the involvement of the automatic visual change detection process in the induction of exogenously-driven perceptual alternation. Copyright © 2017 Elsevier B.V. All rights reserved.
Binder, Marek; Gociewicz, Krzysztof; Windey, Bert; Koculak, Marcin; Finc, Karolina; Nikadon, Jan; Derda, Monika; Cleeremans, Axel
According to the levels-of-processing hypothesis, transitions from unconscious to conscious perception may depend on stimulus processing level, with more gradual changes for low-level stimuli and more dichotomous changes for high-level stimuli. In an event-related fMRI study we explored this hypothesis using a visual backward masking procedure. Task requirements manipulated level of processing. Participants reported the magnitude of the target digit in the high-level task, its color in the low-level task, and rated subjective visibility of stimuli using the Perceptual Awareness Scale. Intermediate stimulus visibility was reported more frequently in the low-level task, confirming prior behavioral results. Visible targets recruited insulo-fronto-parietal regions in both tasks. Task effects were observed in visual areas, with higher activity in the low-level task across all visibility levels. Thus, the influence of level of processing on conscious perception may be mediated by attentional modulation of activity in regions representing features of consciously experienced stimuli. Copyright © 2017 Elsevier Inc. All rights reserved.
Fan, Zhao; Jing, Guomin; Ding, Xianfeng; Cheng, Xiaorong
Task-irrelevant stimulus numbers can automatically modulate concurrent temporal tasks, leading to the phenomenon of number-time association (NTA). Recent research provides converging evidence that the NTA occurs at the stage of temporal memory. Specifically, a reference memory containing encoded duration information can be modified by perceptual/concurrent digits, i.e., a perceptual/concurrent digit-induced NTA. Here, with five experiments, we investigated whether another working memory (WM)-related mechanism was involved in the generation of NTAs and how this memory-induced NTA was related to the perception-induced NTA. We first explored whether similar NTA effects existed for mnemonic digits which disappeared before time encoding but were actively maintained in WM, i.e., a mnemonic digit-induced NTA. Experiments 1-3 demonstrated both types of NTAs. Further, we revealed a close relationship between the two types of NTAs in two contexts. First, the mnemonic digit-induced NTA also relied on a perceptual number-time co-occurrence at time encoding. We found that the mnemonic digits influenced subsequent temporal processing when a task-irrelevant constant number '5' was presented during target encoding, but not when a non-numerical symbol was presented, suggesting that temporal representations in the reference memory could be accessed and modified by both sensory and postsensory numerical magnitudes through this number-time co-occurrence. Second, the effects of perceptual and mnemonic digits on temporal reproduction could cancel each other out. A congruency effect for perceptual and mnemonic digits (contingent on the memorization requirement) was demonstrated in Experiments 4 and 5. Specifically, a typical NTA was observed when the magnitudes of the memorized and the perceptual/concurrent digits were congruent (both were large or small numbers), but not when they were incongruent (one small and one large number). Taken together, our study sheds new light on the mechanism of
JoAnn P Silkes
Data collected to date demonstrate a clear difference between individuals with and without aphasia in their ability to perceive masked real words, but there appears to be no difference between groups for non-words and non-linguistic stimuli, although a trend toward a group difference is seen. Given the high variability in the non-word (NW) and non-linguistic (NL) conditions, these analyses may be underpowered; therefore, data collection is ongoing and a clearer picture should be available by the time of presentation. Regardless of the eventual outcome, this poster will discuss the theoretical motivation for the study and the possible implications for understanding the nature of underlying deficits in aphasia.
Guest, Duncan; Kent, Christopher; Adelman, James S
In absolute identification, the extended generalized context model (EGCM; Kent & Lamberts, 2005, 2016) proposes that perceptual processing determines systematic response time (RT) variability; all other models of RT emphasize response selection processes. In the EGCM-RT, the bow effect in RTs (longer responses for stimuli in the middle of the range) occurs because these middle stimuli are less isolated, and as perceptual information is accumulated, the evidence supporting a correct response grows more slowly than for stimuli at the ends of the range. More perceptual information is therefore accumulated in order to increase certainty in response for middle stimuli, lengthening RT. According to the model, reducing perceptual sampling time should reduce the size of the bow effect in RT. We tested this hypothesis in 2 pitch identification experiments. Experiment 1 found no effect of stimulus duration on the size of the RT bow. Experiment 2 used multiple short stimulus durations as well as manipulations of set size and stimulus spacing. Contrary to EGCM-RT predictions, the bow effect on RTs was large even for very short durations. A new version of the EGCM-RT could only capture this, alongside the effect of stimulus duration on accuracy, by including both a perceptual and a memory sampling process. A modified version of the selective attention, mapping, and ballistic accumulator model (Brown, Marley, Donkin, & Heathcote, 2008) could also capture the data, by assuming psychophysical noise diminishes with increased exposure duration. This modeling suggests systematic variability in RT in absolute identification is largely determined by memory sampling and response selection processes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Ettlin, Florence; Bröder, Arndt
Adaptive strategy selection implies that a decision strategy is chosen based on its fit to the task and situation. However, other aspects, such as the way information is presented, can determine information search behavior, especially when the application of certain strategies over others is facilitated. But are such display effects on multi-attribute decisions also at work when the manipulation does not entail differential costs for different decision strategies? Three Mouselab experiments with hidden information and one eye tracking experiment with an open information board revealed that decision behavior is unaffected by purely perceptual manipulations of the display based on Gestalt principles; that is, based on manipulations that induce no noteworthy processing costs for different information search patterns. We discuss our results in the context of previous findings on display effects; specifically, how the combination of these findings and our results reveals the crucial role of differential processing costs for different strategies in the emergence of display effects. This finding describes a boundary condition of the commonly acknowledged influence of information displays and is in line with the ideas of adaptive strategy selection and cost-benefit tradeoffs. Copyright © 2015. Published by Elsevier B.V.
Vallila-Rohter, Sofia; Kiran, Swathi
Purpose: The purpose of the current study was to explore non-linguistic learning ability in patients with aphasia, examining the impact of stimulus typicality and feedback on success with learning. Method: Eighteen patients with aphasia and eight healthy controls participated in this study. All participants completed four computerized, non-linguistic category-learning tasks. We probed learning ability under two methods of instruction: feedback-based (FB) and paired-associate (PA). We also examined the impact of task complexity on learning ability, comparing two stimulus conditions: typical (Typ) and atypical (Atyp). Performance was compared between groups and across conditions. Results: Results demonstrated that healthy controls were able to successfully learn categories under all conditions. For our patients with aphasia, two patterns of performance arose. One subgroup of patients was able to maintain learning across task manipulations and conditions. The other subgroup of patients demonstrated a sensitivity to task complexity, learning successfully only in the typical training conditions. Conclusions: Results support the hypothesis that impairments of general learning are present in aphasia. Some patients demonstrated the ability to extract category information under complex training conditions, while others learned only under conditions that were simplified and emphasized salient category features. Overall, the typical training condition facilitated learning for all participants. Findings have implications for therapy, which are discussed. PMID:23695914
Nigro, Luciana; Jiménez-Fernández, Gracia; Simpson, Ian C; Defior, Sylvia
One of the hallmarks of dyslexia is the failure to automatise written patterns despite repeated exposure to print. Although many explanations have been proposed to explain this problem, researchers have recently begun to explore the possibility that an underlying implicit learning deficit may play a role in dyslexia. This hypothesis has been investigated through non-linguistic tasks exploring implicit learning in a general domain. In this study, we examined the abilities of children with dyslexia to implicitly acquire positional regularities embedded in both non-linguistic and linguistic stimuli. In experiment 1, 42 children (21 with dyslexia and 21 typically developing) were exposed to rule-governed shape sequences, whereas in experiment 2, a new group of 42 children were exposed to rule-governed letter strings. Implicit learning was assessed in both experiments via a forced-choice task. Experiments 1 and 2 showed a similar pattern of results. ANOVA analyses revealed no significant differences between the dyslexic and the typically developing group, indicating that children with dyslexia are not impaired in the acquisition of simple positional regularities, regardless of the nature of the stimuli. However, within-group t-tests suggested that children from the dyslexic group could not transfer the underlying positional rules to novel instances as efficiently as typically developing children.
Laukka, Petri; Elfenbein, Hillary Anger; Söder, Nela; Nordström, Henrik; Althoff, Jean; Chui, Wanda; Iraki, Frederick K; Rockstuhl, Thomas; Thingujam, Nutankumar S
Which emotions are associated with universally recognized non-verbal signals? We address this issue by examining how reliably non-linguistic vocalizations (affect bursts) can convey emotions across cultures. Actors from India, Kenya, Singapore, and the USA were instructed to produce vocalizations that would convey nine positive and nine negative emotions to listeners. The vocalizations were judged by Swedish listeners using a within-valence forced-choice procedure, where positive and negative emotions were judged in separate experiments. Results showed that listeners could recognize a wide range of positive and negative emotions with accuracy above chance. For positive emotions, we observed the highest recognition rates for relief, followed by lust, interest, serenity, and positive surprise, with affection and pride receiving the lowest recognition rates. Anger, disgust, fear, sadness, and negative surprise received the highest recognition rates for negative emotions, with the lowest rates observed for guilt and shame. In summary, results showed that the voice can reveal both basic emotions and several positive emotions other than happiness across cultures, but self-conscious emotions such as guilt, pride, and shame seem not to be well recognized from non-linguistic vocalizations.
Tiffany Cheing Ho
While the extant literature has focused on major depressive disorder (MDD) as being characterized by abnormalities in processing affective stimuli (e.g., facial expressions), little is known regarding which specific aspects of cognition influence the evaluation of affective stimuli, and what the underlying neural correlates are. To investigate these issues, we assessed 26 adolescents diagnosed with MDD and 37 well-matched healthy controls (HCL) who completed an emotion identification task of dynamically morphing faces during functional magnetic resonance imaging (fMRI). We analyzed the behavioral data using a sequential sampling model of response time (RT) commonly used to elucidate aspects of cognition in binary perceptual decision-making tasks: the Linear Ballistic Accumulator (LBA) model. Using a hierarchical Bayesian estimation method, we obtained group-level and individual-level estimates of LBA parameters on the facial emotion identification task. While the MDD and HCL groups did not differ in mean RT, accuracy, or group-level estimates of perceptual processing efficiency (i.e., the drift rate parameter of the LBA), the MDD group showed significantly reduced responses in the left fusiform gyrus compared to the HCL group during the facial emotion identification task. Furthermore, within the MDD group, fMRI signal in the left fusiform gyrus during affective face processing was significantly associated with greater individual-level estimates of perceptual processing efficiency. Our results therefore suggest that affective processing biases in adolescents with MDD are characterized by greater perceptual processing efficiency of affective visual information in sensory brain regions responsible for the early processing of visual information. The theoretical, methodological, and clinical implications of our results are discussed.
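The Linear Ballistic Accumulator model used in this abstract can be sketched as a simple simulation. In the LBA, each response option has an accumulator that starts at a uniformly sampled point and rises linearly at a normally distributed drift rate; the first to reach threshold determines the choice and RT. The parameter values below are arbitrary choices for illustration, not the study's hierarchical Bayesian fits:

```python
import random

def lba_trial(drifts, b=1.0, A=0.5, s=0.3, t0=0.2, rng=random):
    """Simulate one trial of the Linear Ballistic Accumulator (LBA).

    drifts: mean drift rate per response option (higher = more
    efficient perceptual processing for that option).
    b: response threshold; A: start-point range; s: drift SD;
    t0: non-decision time.
    """
    times = []
    for v in drifts:
        rate = rng.gauss(v, s)
        start = rng.uniform(0, A)
        # Accumulators drawn with a non-positive rate never finish.
        times.append((b - start) / rate if rate > 0 else float("inf"))
    choice = min(range(len(times)), key=times.__getitem__)
    return choice, t0 + times[choice]

random.seed(1)
# A higher mean drift for option 0 yields faster and more frequent
# "0" responses, i.e. greater processing efficiency for that option.
trials = [lba_trial([1.2, 0.8]) for _ in range(2000)]
p0 = sum(c == 0 for c, _ in trials) / len(trials)
print(round(p0, 2))
```

Fitting the model (as opposed to simulating it, shown here) inverts this process: drift rates and thresholds are estimated from observed choices and RTs.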
Dosher, Barbara; Lu, Zhong-Lin
Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.
Schettino, Antonio; Loeys, Tom; Delplanque, Sylvain; Pourtois, Gilles
Recent studies suggest that visual object recognition is a proactive process through which perceptual evidence accumulates over time before a decision can be made about the object. However, the exact electrophysiological correlates and time-course of this complex process remain unclear. In addition, the potential influence of emotion on this process has not been investigated yet. We recorded high density EEG in healthy adult participants performing a novel perceptual recognition task. For each trial, an initial blurred visual scene was first shown, before the actual content of the stimulus was gradually revealed by progressively adding diagnostic high spatial frequency information. Participants were asked to stop this stimulus sequence as soon as they could correctly perform an animacy judgment task. Behavioral results showed that participants reliably gathered perceptual evidence before recognition. Furthermore, prolonged exploration times were observed for pleasant, relative to either neutral or unpleasant scenes. ERP results showed distinct effects starting at 280 ms post-stimulus onset in distant brain regions during stimulus processing, mainly characterized by: (i) a monotonic accumulation of evidence, involving regions of the posterior cingulate cortex/parahippocampal gyrus, and (ii) true categorical recognition effects in medial frontal regions, including the dorsal anterior cingulate cortex. These findings provide evidence for the early involvement, following stimulus onset, of non-overlapping brain networks during proactive processes eventually leading to visual object recognition. Copyright © 2011 Elsevier Inc. All rights reserved.
Thordis Marisa Neger
Within a few sentences, listeners learn to understand severely degraded speech such as noise-vocoded speech. However, individuals vary in the amount of such perceptual learning, and it is unclear what underlies these differences. The present study investigates whether perceptual learning in speech relates to statistical learning, as sensitivity to probabilistic information may aid identification of relevant cues in novel speech input. If statistical learning and perceptual learning (partly) draw on the same general mechanisms, then statistical learning in a non-auditory modality using non-linguistic sequences should predict adaptation to degraded speech. In the present study, 73 older adults (aged over 60 years) and 60 younger adults (aged between 18 and 30 years) performed a visual artificial grammar learning task and were presented with sixty meaningful noise-vocoded sentences in an auditory recall task. Within age groups, sentence recognition performance over exposure was analyzed as a function of statistical learning performance and other variables that may predict learning (i.e., hearing, vocabulary, attention switching control, working memory, and processing speed). Younger and older adults showed similar amounts of perceptual learning, but only younger adults showed significant statistical learning. In older adults, improvement in understanding noise-vocoded speech was constrained by age. In younger adults, the amount of adaptation was associated with lexical knowledge and with statistical learning ability. Thus, individual differences in general cognitive abilities explain listeners' variability in adapting to noise-vocoded speech. Results suggest that perceptual and statistical learning share mechanisms of implicit regularity detection, but that the ability to detect statistical regularities is impaired in older adults if visual sequences are presented quickly.
Haith, Marshall M.
Focuses on investigations of infant sensation and perception over the past 25 years. Describes the knowledge base concerning the sensory and perceptual world of the infant in the mid-1960s. Methodological highlights in the study of vision and audition are covered. (RJC)
van Ravenzwaaij, D.; Boekel, W.; Forstmann, B.U.; Ratcliff, R.; Wagenmakers, E.-J.
Previous research suggests that playing action video games improves performance on sensory, perceptual, and attentional tasks. For instance, Green, Pouget, and Bavelier (2010) used the diffusion model to decompose data from a motion detection task and estimate the contribution of several underlying
Madden, Carol J.; Zwaan, Rolf A.
In 2 experiments, the authors investigated the ability of high- and low-span comprehenders to construe subtle shades of meaning through perceptual representation. High- and low-span comprehenders responded to pictures that either matched or mismatched a target object's shape as implied by the preceding sentence context. At 750 ms after hearing the…
This paper describes the theory and application of a perceptually-inspired video processing technology that was recently incorporated into professional video encoders now being used by major cable, IPTV, satellite, and internet video service providers. We will present data showing that this perceptual video processing (PVP) technology can improve video compression efficiency by up to 50% for MPEG-2, H.264, and High Efficiency Video Coding (HEVC). The PVP technology described in this paper works by forming predicted eye-tracking attractor maps that indicate how likely it might be that a free-viewing person would look at a particular area of an image or video. We will introduce in this paper the novel model and supporting theory used to calculate the eye-tracking attractor maps. We will show how the underlying perceptual model was inspired by electrophysiological studies of the vertebrate retina, and will explain how the model incorporates statistical expectations about natural scenes as well as a novel method for predicting error in signal estimation tasks. Finally, we will describe how the eye-tracking attractor maps are created in real time and used to modify video prior to encoding so that it is more compressible but not noticeably different from the original unmodified video.
Schira, Mark M; Fahle, Manfred; Donner, Tobias H; Kraft, Antje; Brandt, Stephan A
We investigated contour processing and figure-ground detection within human retinotopic areas using event-related functional magnetic resonance imaging (fMRI) in 6 healthy and naïve subjects. A figure (6 degrees side length) was created by a 2nd-order texture contour. An independent and demanding foveal letter-discrimination task prevented subjects from noticing this more peripheral contour stimulus. The contour subdivided our stimulus into a figure and a ground. Using localizers and retinotopic mapping stimuli we were able to subdivide each early visual area into 3 eccentricity regions corresponding to 1) the central figure, 2) the area along the contour, and 3) the background. In these subregions we investigated the hemodynamic responses to our stimuli and compared responses with or without the contour defining the figure. No contour-related blood oxygenation level-dependent modulation in early visual areas V1, V3, VP, and MT+ was found. Significant signal modulation in the contour subregions of V2v, V2d, V3a, and LO occurred. This activation pattern was different from comparable studies, which might be attributable to the letter-discrimination task reducing confounding attentional modulation. In V3a, but not in any other retinotopic area, signal modulation corresponding to the central figure could be detected. Such contextual modulation will be discussed in light of the recurrent processing hypothesis and the role of visual awareness.
Ağaoğlu, Mehmet N; Herzog, Michael H; Oğmen, Haluk
The spatial representation of a visual scene in the early visual system is well known. The optics of the eye map the three-dimensional environment onto two-dimensional images on the retina. These retinotopic representations are preserved in the early visual system. Retinotopic representations and processing are among the most prevalent concepts in visual neuroscience. However, it has long been known that a retinotopic representation of the stimulus is neither sufficient nor necessary for perception. The Saccadic Stimulus Presentation Paradigm and Ternus-Pikler displays have been used to investigate non-retinotopic processes with and without eye movements, respectively. However, neither of these paradigms eliminates the retinotopic representation of the spatial layout of the stimulus. Here, we investigated how stimulus features are processed in the absence of a retinotopic layout and in the presence of retinotopic conflict. We used anorthoscopic viewing (slit viewing) and pitted a retinotopic feature-processing hypothesis against a non-retinotopic feature-processing hypothesis. Our results support the predictions of the non-retinotopic feature-processing hypothesis and demonstrate the ability of the visual system to operate non-retinotopically at a fine feature-processing level in the absence of a retinotopic spatial layout. Our results suggest that perceptual space is actively constructed from the perceptual dimension of motion. The implications of these findings for normal ecological viewing conditions are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.
Soares, Sandra C; Rocha, Marta; Neiva, Tiago; Rodrigues, Paulo; Silva, Carlos F
Previous studies in the social anxiety arena have shown an impaired attentional control system, similar to that found in trait anxiety. However, the effect of task demands on social anxiety in socially threatening stimuli, such as angry faces, remains unseen. In the present study, 54 university students scoring high and low in the Social Interaction and Performance Anxiety and Avoidance Scale (SIPAAS) questionnaire, participated in a target letter discrimination task while task-irrelevant face stimuli (angry, disgust, happy, and neutral) were simultaneously presented. The results showed that high (compared to low) socially anxious individuals were more prone to distraction by task-irrelevant stimuli, particularly under high perceptual load conditions. More importantly, for such individuals, the accuracy proportions for angry faces significantly differed between the low and high perceptual load conditions, which is discussed in light of current evolutionary models of social anxiety.
Johnston, James C.; Hochhaus, Larry; Ruthruff, Eric
Four experiments tested whether repetition blindness (RB; reduced accuracy reporting repetitions of briefly displayed items) is a perceptual or a memory-recall phenomenon. RB was measured in rapid serial visual presentation (RSVP) streams, with the task altered to reduce memory demands. In Experiment 1 only the number of targets (1 vs. 2) was reported, eliminating the need to remember target identities. Experiment 2 segregated repeated and nonrepeated targets into separate blocks to reduce bias against repeated targets. Experiments 3 and 4 required immediate "online" buttonpress responses to targets as they occurred. All 4 experiments showed very strong RB. Furthermore, the online response data showed clearly that the 2nd of the repeated targets is the one missed. The present results show that in the RSVP paradigm, RB occurs online during initial stimulus encoding and decision making. The authors argue that RB is indeed a perceptual phenomenon.
Stenneken, Prisca; Egetemeir, Johanna; Schulte-Körne, Gerd; Müller, Hermann J; Schneider, Werner X; Finke, Kathrin
The cognitive causes as well as the neurological and genetic basis of developmental dyslexia, a complex disorder of written language acquisition, are intensely discussed with regard to multiple-deficit models. Accumulating evidence has revealed dyslexics' impairments in a variety of tasks requiring visual attention. The heterogeneity of these experimental results, however, points to the need for measures that are sufficiently sensitive to differentiate between impaired and preserved attentional components within a unified framework. This first parameter-based group study of attentional components in developmental dyslexia addresses potentially altered attentional components that have recently been associated with parietal dysfunctions in dyslexia. We aimed to isolate the general attentional resources that might underlie reduced span performance, i.e., either a deficient working memory storage capacity, or a slowing in visual perceptual processing speed, or both. Furthermore, by analysing attentional selectivity in dyslexia, we addressed a potential lateralized abnormality of visual attention, i.e., a previously suggested rightward spatial deviation compared to normal readers. We investigated a group of high-achieving young adults with persisting dyslexia and matched normal readers in an experimental whole report and a partial report of briefly presented letter arrays. Possible deviations in the parametric values of the dyslexic compared to the control group were taken as markers for the underlying deficit. The dyslexic group showed a striking reduction in perceptual processing speed (by 26% compared to controls) while their working memory storage capacity was in the normal range. In addition, a spatial deviation of attentional weighting compared to the control group was confirmed in dyslexic readers, which was larger in participants with a more severe dyslexic disorder. In general, the present study supports the relevance of perceptual processing speed in disorders
Fernández-Antelo, Inmaculada; Cuadrado-Gordillo, Isabel
Understanding the causes of adolescents' aggressive behavior in and through technological means and resources requires a thorough analysis of the criteria that they consider to identify and define cyberbullying, and of the network of relationships established between those criteria. The present study aimed to understand the underlying structures and mechanisms that determine aggressors' and victims' perceptions of the cyberbullying phenomenon. The sample consisted of 2148 adolescents (49.1% girls; SD = 0.5) aged 12 to 16 (M = 13.9; SD = 1.2). The data were collected through a questionnaire validated for this study, whose dimensions were confirmed from the data extracted from the focus groups and a confirmatory factor analysis (CFA) of the victim and aggressor subsamples. The analysis of the data was completed with CFA and the construction of structural models. The results show the importance and interdependence of imbalance of power and intention to harm in the aggressors' perceptual structure. The criteria of anonymity and repetition are related to the asymmetry of power, giving greater prominence to this factor. The aggressors' perceptual structure also includes the criterion "social relationship", which indicates that manifestations of cyberbullying are sometimes interpreted as patterns of behavior that have become massively extended among the adolescent population and accepted as a normalized and harmless way of communicating with other adolescents. In the victims' perceptual structure the key factor is the intention to harm, closely linked to the asymmetry of power and publicity. Anonymity, revenge and repetition are also present in this structure, although their relationship with cyberbullying is indirect. These results make it possible to design more effective prevention and intervention measures, tailored to directly address the factors considered to be predictors of risk.
Ross-Sheehy, Shannon; Newman, Rochelle S
This research explores auditory short-term memory (STM) capacity for non-linguistic sounds in 10-month-old infants. Infants were presented with auditory streams composed of repeating sequences of either 2 or 4 unique instruments (e.g., flute, piano, cello; 350 or 700 ms in duration) followed by a 500-ms retention interval. These instrument sequences either stayed the same for every repetition (Constant) or changed by 1 instrument per sequence (Varying). Using the head-turn preference procedure, infant listening durations were recorded for each stream type (2- or 4-instrument sequences composed of 350- or 700-ms notes). Preference for the Varying stream was taken as evidence of auditory STM because detection of the novel instrument required memory for all of the instruments in a given sequence. Results demonstrate that infants listened longer to Varying streams for 2-instrument sequences, but not 4-instrument sequences, composed of 350-ms notes (Experiment 1), although this effect did not hold when note durations were increased to 700 ms (Experiment 2). Experiment 3 replicates and extends results from Experiments 1 and 2 and provides support for a duration account of capacity limits in infant auditory STM. Copyright © 2014 Elsevier Inc. All rights reserved.
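The head-turn preference logic described above reduces to a per-infant comparison of listening durations for the Varying versus Constant streams. A minimal sketch of that comparison as a paired t-test on difference scores; all durations below are made-up illustrative values, not data from the study:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical listening durations (seconds) per infant for the two
# stream types; values are illustrative, not from the study.
varying  = [12.1, 10.8, 14.3, 9.9, 13.0, 11.7, 12.5, 10.2]
constant = [ 9.4,  9.0, 11.2, 8.1, 10.5,  9.8, 10.9,  8.7]

# Per-infant difference scores: a positive mean difference indicates
# a preference for the Varying stream, the study's marker of memory.
diffs = [v - c for v, c in zip(varying, constant)]
d_mean, d_sd = mean(diffs), stdev(diffs)

# One-sample t statistic against zero (a paired t-test on the differences).
t_stat = d_mean / (d_sd / sqrt(len(diffs)))
print(f"mean difference = {d_mean:.2f} s, t({len(diffs) - 1}) = {t_stat:.2f}")
```

In practice such data would be analysed with a dedicated statistics package; the point here is only that the "preference for Varying" measure is a within-infant difference score.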
Plant, Katherine L; Stanton, Neville A
The perceptual cycle model (PCM) has been widely applied in ergonomics research in domains including road, rail and aviation. The PCM assumes that information processing occurs in a cyclical manner drawing on top-down and bottom-up influences to produce perceptual exploration and actions. However, the validity of the model has not been addressed. This paper explores the construct validity of the PCM in the context of aeronautical decision-making. The critical decision method was used to interview 20 helicopter pilots about critical decision-making. The data were qualitatively analysed using an established coding scheme, and composite PCMs for incident phases were constructed. It was found that the PCM provided a mutually exclusive and exhaustive classification of the information-processing cycles for dealing with critical incidents. However, a counter-cycle was also discovered which has been attributed to skill-based behaviour, characteristic of experts. The practical applications and future research questions are discussed. Practitioner Summary: This paper explores whether information processing, when dealing with critical incidents, occurs in the manner anticipated by the perceptual cycle model. In addition to the traditional processing cycle, a reciprocal counter-cycle was found. This research can be utilised by those who use the model as an accident analysis framework.
Philbeck, John W.; Witt, Jessica K.
The action-specific perception account holds that people perceive the environment in terms of their ability to act in it. In this view, for example, decreased ability to climb a hill due to fatigue makes the hill visually appear to be steeper. Though influential, this account has not been universally accepted, and in fact a heated controversy has emerged. The opposing view holds that action capability has little or no influence on perception. Heretofore, the debate has been quite polarized, with efforts largely being focused on supporting one view and dismantling the other. We argue here that polarized debate can impede scientific progress and that the search for similarities between two sides of a debate can sharpen the theoretical focus of both sides and illuminate important avenues for future research. In this paper, we present a synthetic review of this debate, drawing from the literatures of both approaches, to clarify both the surprising similarities and the core differences between them. We critically evaluate existing evidence, discuss possible mechanisms of action-specific effects, and make recommendations for future research. A primary focus of future work will involve not only the development of methods that guard against action-specific post-perceptual effects, but also development of concrete, well-constrained underlying mechanisms. The criteria for what constitutes acceptable control of post-perceptual effects and what constitutes an appropriately specific mechanism vary between approaches, and bridging this gap is a central challenge for future research. PMID:26501227
Perceptual organization--the processes structuring visual information into coherent units--and visual attention--the processes by which some visual information in a scene is selected--are crucial for the perception of our visual environment and to visuomotor behavior. Recent research points to important relations between attentional and organizational processes. Several studies demonstrated that perceptual organization constrains attentional selectivity, and other studies suggest that attention can also constrain perceptual organization. In this chapter I focus on two aspects of the relationship between perceptual organization and attention. The first addresses the question of whether or not perceptual organization can take place without attention. I present findings demonstrating that some forms of grouping and figure-ground segmentation can occur without attention, whereas others require controlled attentional processing, depending on the processes involved and the conditions prevailing for each process. These findings challenge the traditional view, which assumes that perceptual organization is a unitary entity that operates preattentively. The second issue addresses the question of whether perceptual organization can affect the automatic deployment of attention. I present findings showing that the mere organization of some elements in the visual field by Gestalt factors into a coherent perceptual unit (an "object"), with no abrupt onset or any other unique transient, can capture attention automatically in a stimulus-driven manner. Taken together, the findings discussed in this chapter demonstrate the multifaceted, interactive relations between perceptual organization and visual attention.
The French philosopher Maurice Merleau-Ponty captured the dynamic of perception with his idea of the intertwining of perceiver and perceived. Light is what links them. In the case of holographic images, not only is spatial and colour perception the pure product of light, but this light information is always in the process of self-construction with our eyes, according to our movements and the point of view adopted. In terms of the aesthetic reception of a work of art, holographic images differ greatly from those of cinema, photography and even every kind of digital 3D animation. This particular image status truly makes perceptually apparent the "co-emergence" of light and our gaze. But holography never misleads us with respect to the precarious nature of our perceptions. We have no illusion as to the limits of our empirical understanding of the perceived reality. Holography, like our knowledge of the visible, thus brings to light the phenomenon of reality's "co-constitution" and contributes to a dynamic ontology of perceptual and cognitive processes. The cognitivist Francisco Varela defines this as the paradigm of enaction, which I will adapt and apply to the appearance/disappearance context of holographic images to bring out their affinities on a metaphorical level.
McRobert, Allistair P; Ward, Paul; Eccles, David W; Williams, A Mark
We manipulated contextual information in order to examine the perceptual-cognitive processes that support anticipation using a simulated cricket-batting task. Skilled (N = 10) and less skilled (N = 10) cricket batters responded to video simulations of opponents bowling a cricket ball under high and low contextual information conditions. Skilled batters were more accurate, demonstrated more effective search behaviours, and provided more detailed verbal reports of thinking. Moreover, when they viewed their opponent multiple times (high context), they reduced their mean fixation time. All batters improved performance and altered thought processes when in the high context, compared to when they responded to their opponent without previously seeing them bowl (low context). Findings illustrate how context influences performance and the search for relevant information when engaging in a dynamic, time-constrained task. ©2011 The British Psychological Society.
Loebach, Jeremy L; Pisoni, David B; Svirsky, Mario A
The effect of feedback and materials on perceptual learning was examined in listeners with normal hearing who were exposed to cochlear implant simulations. Generalization was most robust when feedback paired the spectrally degraded sentences with their written transcriptions, promoting mapping between the degraded signal and its acoustic-phonetic representation. Transfer-appropriate processing theory suggests that such feedback was most successful because the original learning conditions were reinstated at testing: Performance was facilitated when both training and testing contained degraded stimuli. In addition, the effect of semantic context on generalization was assessed by training listeners on meaningful or anomalous sentences. Training with anomalous sentences was as effective as that with meaningful sentences, suggesting that listeners were encouraged to use acoustic-phonetic information to identify speech rather than to make predictions from semantic context.
Cauchard, Fabrice; Eyrolle, Hélène; Cellier, Jean-Marie; Hyönä, Jukka
A previous study by Pollatsek et al. (1993) claims that the perceptual span in reading is restricted to the fixated line, i.e. readers typically focus their visual attention on the line of text being read. The present study investigated whether readers make use of content structure signals (paragraph indentations and topic headings) present several lines away from the currently fixated line. We reasoned that as these signals are low-resolution visual objects (as opposed to letter and word identity), readers may attend to them even if they are located some distance away from the fixated line. Participants read a hierarchically organized multi-topic expository text containing structure signals in either a normal condition or a window condition, where the text disappeared above and below a vertical 3° gaze-contingent region. After reading, participants were asked to produce a written recall of the text. The results showed that the overall reading rate was not affected by the window. Nevertheless, the headings were reread more in the normal condition than in the window one. In addition, more topics were recalled in the normal than in the window condition. We interpret the results as indicating that the readers visually attend to useful text layout features while considering bigger units than single text lines. The perception of topic headings located away from the fixated line may favour long-range regressions towards them, which in turn may favour text comprehension. This claim is consistent with previous studies that showed that look-back fixations to headings are performed with an integrative intent.
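The gaze-contingent window manipulation in this study can be sketched as a simple visibility rule: only lines within a fixed vertical distance of the fixated line remain on screen. The study defined the window as a 3° vertical region; the sketch below approximates it as a fixed number of lines, an illustrative simplification:

```python
def visible_lines(fixated_line, n_lines, window=1):
    """Return the set of text-line indices left visible by a
    gaze-contingent window: the fixated line plus `window` lines
    above and below. All other lines are masked, as in the study's
    window condition. The line-based window size is an illustrative
    stand-in for the study's 3-degree vertical region."""
    lo = max(0, fixated_line - window)
    hi = min(n_lines - 1, fixated_line + window)
    return set(range(lo, hi + 1))

# A heading on line 0 is visible while fixating nearby lines,
# but masked once the reader's fixation moves further down the page.
assert 0 in visible_lines(fixated_line=1, n_lines=20)
assert 0 not in visible_lines(fixated_line=5, n_lines=20)
```

The empirical question then becomes whether masking distant layout signals (headings, indentations) changes rereading and recall, which is what the window condition tested.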
Three eye-tracking experiments tested whether native listeners recognized reduced Dutch words better after having heard the same reduced words, or different reduced words of the same reduction type, and whether familiarization with one reduction type helps listeners to deal with another reduction type. In the exposure phase, a segmental reduction group was exposed to /b/-reductions (e.g., minderij instead of binderij, 'book binder') and a syllabic reduction group was exposed to full-vowel deletions (e.g., p'raat instead of paraat, 'ready'), while a control group did not hear any reductions. In the test phase, all three groups heard the same speaker producing reduced-/b/ and deleted-vowel words that were either repeated (Experiments 1 & 2) or new (Experiment 3), but that now appeared as targets in semantically neutral sentences. Word-specific learning effects were found for vowel-deletions but not for /b/-reductions. Generalization of learning to new words of the same reduction type occurred only if the exposure words showed a phonologically consistent reduction pattern (the /b/-reductions). In contrast, generalization of learning to words of another reduction type occurred only if the exposure words showed a phonologically inconsistent reduction pattern (the vowel deletions); learning about them generalized to recognition of the /b/-reductions. In order to deal with reductions, listeners thus use various means. They store reduced variants (e.g., for the inconsistent vowel-deleted words) and they abstract over incoming information to build up and apply mapping rules (e.g., for the consistent /b/-reductions). Experience with inconsistent pronunciations leads to greater perceptual flexibility in dealing with other forms of reduction uttered by the same speaker than experience with consistent pronunciations.
Working memory and attention are closely related. Recent research has shown that working memory can be viewed as internally directed attention. Working memory can affect attention in at least two ways. One is the effect of working memory load on attention, and the other is the effect of working memory contents on attention. In the present study, an interaction between working memory contents and perceptual load in distractor processing was investigated. Participants performed a perceptual load task in a standard form in one condition (Single task). In the other condition, a response-related distractor was maintained in working memory, rather than presented in the same stimulus display as a target (Dual task). For the Dual task condition, a significant compatibility effect was found under high perceptual load; however, there was no compatibility effect under low perceptual load. These results suggest that the way the contents of working memory affect visual search depends on perceptual load. Copyright © 2016 Elsevier B.V. All rights reserved.
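The compatibility effect reported in this study is a simple difference score: mean reaction time on incompatible-distractor trials minus mean reaction time on compatible-distractor trials, computed separately per perceptual-load condition. A minimal sketch with hypothetical RTs (values are illustrative only, not the study's data):

```python
# Hypothetical mean reaction times (ms) for the Dual task condition;
# all values are illustrative, not taken from the study.
rts = {
    ("high_load", "compatible"): 612, ("high_load", "incompatible"): 655,
    ("low_load",  "compatible"): 540, ("low_load",  "incompatible"): 543,
}

def compatibility_effect(load):
    """Distractor-interference index: RT on incompatible trials minus
    RT on compatible trials, within one perceptual-load condition.
    A large positive value means the working-memory distractor
    interfered with target responses."""
    return rts[(load, "incompatible")] - rts[(load, "compatible")]

# The abstract's Dual-task pattern: a sizeable effect under high load,
# essentially none under low load.
print(compatibility_effect("high_load"), compatibility_effect("low_load"))
```

The study's conclusion rests on this asymmetry: a reliable effect under high perceptual load but not under low load.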
Fahrenfort, Johannes J.; Van Leeuwen, Jonathan; Olivers, Christian N.L.; Hogendoorn, Hinze
The visual system has the remarkable ability to integrate fragmentary visual input into a perceptually organized collection of surfaces and objects, a process we refer to as perceptual integration. Despite a long tradition of perception research, it is not known whether access to consciousness is
Posthuma, Daniëlle; Baare, Wim F.C.; Hulshoff Pol, Hilleke E.
We recently showed that the correlation of gray and white matter volume with full scale IQ and the Working Memory dimension are completely mediated by common genetic factors (Posthuma et al., 2002). Here we examine whether the other WAIS III dimensions (Verbal Comprehension, Perceptual Organization...... to Working Memory capacity (r = 0.27). This phenotypic correlation is completely due to a common underlying genetic factor. Processing Speed was genetically related to white matter volume (r(g) = 0.39). Perceptual Organization was both genetically (r(g) = 0.39) and environmentally (r(e) = -0.71) related...
Tso, Ricky Van-yip; Au, Terry Kit-fong; Hsiao, Janet Hui-wen
Holistic processing and left-side bias are both behavioral markers of expert face recognition. By contrast, expert recognition of characters in Chinese orthography involves left-side bias but reduced holistic processing, although faces and Chinese characters share many visual properties. Here, we examined whether this reduction in holistic processing of Chinese characters can be better explained by writing experience than by reading experience. Compared with Chinese nonreaders, Chinese readers who had limited writing experience showed increased holistic processing, whereas Chinese readers who could write characters fluently showed reduced holistic processing. This result suggests that writing and sensorimotor experience can modulate holistic-processing effects and that the reduced holistic processing observed in expert Chinese readers may depend mostly on writing experience. However, both expert writers and writers with limited experience showed similarly stronger left-side bias than novices did in processing mirror-symmetric Chinese characters; left-side bias may therefore be a robust expertise marker for object recognition that is uninfluenced by sensorimotor experience. © The Author(s) 2014.
Sevinc, Gunes; Spreng, R Nathan
processing of moral input is affected by task demands. The results provide novel insight into distinct features of moral cognition, including the generation of moral context through associative processes and the perceptual detection of moral salience.
Fisher, Katie; Towler, John; Eimer, Martin
It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.
Landrum, Asheley R; Lull, Robert B; Akin, Heather; Hasell, Ariel; Jamieson, Kathleen Hall
Previous research suggests that when individuals encounter new information, they interpret it through perceptual 'filters' of prior beliefs, relevant social identities, and messenger credibility. In short, evaluations are not based solely on message accuracy, but also on the extent to which the message and messenger are amenable to the values of one's social groups. Here, we use the release of Pope Francis's 2015 encyclical as the context for a natural experiment to examine the role of prior values in climate change cognition. Based on our analysis of panel data collected before and after the encyclical's release, we find that political ideology moderated views of papal credibility on climate change for those participants who were aware of the encyclical. We also find that, in some contexts, non-Catholics who were aware of the encyclical granted Pope Francis additional credibility compared to the non-Catholics who were unaware of it, yet Catholics granted the Pope high credibility regardless of encyclical awareness. Importantly, papal credibility mediated the conditional relationships between encyclical awareness and acceptance of the Pope's messages on climate change. We conclude by discussing how our results provide insight into cognitive processing of new information about controversial issues. Copyright © 2017 Elsevier B.V. All rights reserved.
Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole
We used adaptation to examine the relationship between perceptual norms--the stimuli observers describe as psychologically neutral, and response norms--the stimulus levels that leave visual sensitivity in a neutral or balanced state. Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.
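The titration procedure described in this abstract (adjusting the adapting stimulus until it no longer biases appearance) is, in essence, a root-finding search: locate the adapting level at which the induced bias changes sign. A toy sketch using bisection against a made-up observer model; the observer's neutral point (0.31) is an illustrative assumption, not a fitted value:

```python
def perceived_bias(adapt_level, neutral=0.31):
    """Toy observer model: adapting above the observer's neutral point
    biases appearance one way, adapting below biases it the other way.
    The neutral value is an illustrative assumption."""
    return neutral - adapt_level

def titrate_norm(lo=-1.0, hi=1.0, tol=1e-4):
    """Bisection search for the adapting level that leaves appearance
    unbiased -- the 'response norm' described in the abstract."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if perceived_bias(mid) > 0:   # still biased one way: adapt higher
            lo = mid
        else:                         # biased the other way: adapt lower
            hi = mid
    return (lo + hi) / 2.0

norm = titrate_norm()
print(f"estimated response norm: {norm:.3f}")
```

In the actual experiments the "bias" readout comes from the observer's appearance judgments rather than a closed-form function, but the logic of homing in on the sign change is the same.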
Purwins, Hendrik; Herrera, Perfecto; Grachten, Maarten; Hazan, Amaury; Marxer, Ricard; Serra, Xavier
We present a review on perception and cognition models designed for or applicable to music. An emphasis is put on computational implementations. We include findings from different disciplines: neuroscience, psychology, cognitive science, artificial intelligence, and musicology. The article summarizes the methodology that these disciplines use to approach the phenomena of music understanding, the localization of musical processes in the brain, and the flow of cognitive operations involved in turning physical signals into musical symbols, going from the transducers to the memory systems of the brain. We discuss formal models developed to emulate, explain and predict phenomena involved in early auditory processing, pitch processing, grouping, source separation, and music structure computation. We cover generic computational architectures of attention, memory, and expectation that can be instantiated and tuned to deal with specific musical phenomena. Criteria for the evaluation of such models are presented and discussed. Thereby, we lay out the general framework that provides the basis for the discussion of domain-specific music models in Part II.
Norton, Daniel J.; McBain, Ryan K.; Ongur, Dost; Chen, Yue
Schizophrenia patients exhibit perceptual and cognitive deficits, including in visual motion processing. Given that cognitive systems depend upon perceptual inputs, improving patients' perceptual abilities may be an effective means of cognitive intervention. In healthy people, motion perception can be enhanced through perceptual learning, but it…
Volz, Kirsten G; von Cramon, D Yves
According to the Oxford English Dictionary, intuition is "the ability to understand or know something immediately, without conscious reasoning." Most people would agree that intuitive responses appear as ideas or feelings that subsequently guide our thoughts and behaviors. It is proposed that people continuously, without conscious attention, recognize patterns in the stream of sensations that impinge upon them. What exactly is being recognized is not clear yet, but we assume that people detect potential content based on only a few aspects of the input (i.e., the gist). The result is a vague perception of coherence which is not explicitly describable but instead embodied in a "gut feeling" or an initial guess, which subsequently biases thought and inquiry. To approach the nature of intuitive processes, we used functional magnetic resonance imaging when participants were working at a modified version of the Waterloo Gestalt Closure Task. Starting from our conceptualization that intuition involves an informed judgment in the context of discovery, we expected activation within the median orbito-frontal cortex (OFC), as this area receives input from all sensory modalities and has been shown to be crucially involved in emotionally driven decisions. Results from a direct contrast between intuitive and nonintuitive judgments, as well as from a parametric analysis, revealed the median OFC, the lateral portion of the amygdala, anterior insula, and ventral occipito-temporal regions to be activated. Based on these findings, we suggest our definition of intuition to be promising and a good starting point for future research on intuitive processes.
Ruitenberg, Marit F L; Abrahamse, Elger L; De Kleine, Elian; Verwey, Willem B
Previous studies have shown that motor sequencing skill can benefit from the reinstatement of the learning context-even with respect to features that are formally not required for appropriate task performance. The present study explored whether such context-dependence develops when sequence execution is fully memory-based-and thus no longer assisted by stimulus-response translations. Specifically, we aimed to distinguish between preparation and execution processes. Participants performed two keying sequences in a go/no-go version of the discrete sequence production task in which the context consisted of the color in which the target keys of a particular sequence were displayed. In a subsequent test phase, these colors either were the same as during practice, were reversed for the two sequences or were novel. Results showed that, irrespective of the amount of practice, performance across all key presses in the reversed context condition was impaired relative to performance in the same and novel contexts. This suggests that the online preparation and/or execution of single key presses of the sequence is context-dependent. We propose that a cognitive processor is responsible both for these online processes and for advance sequence preparation and that combined findings from the current and previous studies build toward the notion that the cognitive processor is highly sensitive to changes in context across the various roles that it performs.
Learning to recognize and categorize objects is an essential cognitive skill allowing animals to function in the world. However, animals rarely have access to a canonical view of an object in an uncluttered environment. Hence, it is essential to study categorization under noisy, degraded conditions. In this article, we explore how the brain processes categorization stimuli in low signal-to-noise conditions using multivariate pattern analysis. We used an integration masking paradigm with mask opacity of 50%, 60%, and 70% inside a magnetic resonance imaging scanner. The results show that mask opacity affects blood-oxygen-level dependent (BOLD) signal in visual processing areas (V1, V2, V3, and V4) but does not affect the BOLD signal in brain areas traditionally associated with categorization (prefrontal cortex, striatum, hippocampus). This suggests that when a stimulus is difficult to extract from its background (e.g., low signal-to-noise ratio), the visual system extracts the stimulus and activity in areas typically associated with categorization is not affected by the difficulty level of the visual conditions. We conclude with implications of this result for research on visual attention, categorization, and the integration of these fields. Copyright © 2017 Elsevier Inc. All rights reserved.
Mihevic, P.M.; Gliner, J.A.; Horvath, S.M.
This study examined the influence of exposure to ambient carbon monoxide resulting in final carboxyhemoglobin (COHb) levels of approximately 5.0% on the ability to process information during motor performance. Subjects (n = 16) performed a primary reciprocal tapping task and a secondary digit manipulation task singly and/or concurrently during 2.5 h exposure to room air (0 ppm CO) or 100 ppm CO. Five levels of tapping difficulty and two levels of digit manipulation were employed. Tapping performance was unaffected when COHb levels were as high as 5%. However, at this level of COHb it was noted that CO exposure interacted with task difficulty of both tasks to influence reaction time on the digit manipulation task. It was concluded that motor performance was not influenced by exposure to CO leading to COHb concentrations of 5%. Task difficulty was a significant factor mediating behavioral effects of CO exposure.
Anikin, Andrey; Bååth, Rasmus; Persson, Tomas
Recent research on human nonverbal vocalizations has led to considerable progress in our understanding of vocal communication of emotion. However, in contrast to studies of animal vocalizations, this research has focused mainly on the emotional interpretation of such signals. The repertoire of human nonverbal vocalizations as acoustic types, and the mapping between acoustic and emotional categories, thus remain underexplored. In a cross-linguistic naming task (Experiment 1), verbal categorization of 132 authentic (non-acted) human vocalizations by English-, Swedish- and Russian-speaking participants revealed the same major acoustic types: laugh, cry, scream, moan, and possibly roar and sigh. The association between call type and perceived emotion was systematic but non-redundant: listeners associated every call type with a limited, but in some cases relatively wide, range of emotions. The speed and consistency of naming the call type predicted the speed and consistency of inferring the caller's emotion, suggesting that acoustic and emotional categorizations are closely related. However, participants preferred to name the call type before naming the emotion. Furthermore, nonverbal categorization of the same stimuli in a triad classification task (Experiment 2) was more compatible with classification by call type than by emotion, indicating the former's greater perceptual salience. These results suggest that acoustic categorization may precede attribution of emotion, highlighting the need to distinguish between the overt form of nonverbal signals and their interpretation by the perceiver. Both within- and between-call acoustic variation can then be modeled explicitly, bringing research on human nonverbal vocalizations more in line with the work on animal communication.
Attarha, Mouna; Moore, Cathleen M.
The simultaneous–sequential method was used to test the processing capacity of statistical summary representations both within and between feature dimensions. Sixteen gratings varied with respect to their size and orientation. In Experiment 1, the gratings were equally divided into four separate smaller sets, one of which had a mean size that was larger or smaller than that of the other three sets, and one of which had a mean orientation that was tilted more leftward or rightward. The task was to report the mean size and orientation of the oddball sets. This therefore required four summary representations for size and another four for orientation. The sets were presented at the same time in the simultaneous condition or across two temporal frames in the sequential condition. Experiment 1 showed evidence of a sequential advantage, suggesting that the system may be limited with respect to establishing multiple within-feature summaries. Experiment 2 eliminated the possibility that some aspect of the task, other than averaging, was contributing to this observed limitation. In Experiment 3, the same 16 gratings appeared as one large superset, and therefore the task only required one summary representation for size and another one for orientation. Equal simultaneous–sequential performance indicated that between-feature summaries are capacity free. These findings challenge the view that within-feature summaries drive a global sense of visual continuity across areas of the peripheral visual field, and suggest a shift in focus to seeking an understanding of how between-feature summaries in one area of the environment control behavior. PMID:26360153
Tso, Ivy F; Calwas, Anita M; Chun, Jinsoo; Mueller, Savanna A; Taylor, Stephan F; Deldin, Patricia J
Using gaze information to orient attention and guide behavior is critical to social adaptation. Previous studies have suggested that abnormal gaze perception in schizophrenia (SCZ) may originate in abnormal early attentional and perceptual processes and may be related to paranoid symptoms. Using event-related brain potentials (ERPs), this study investigated altered early attentional and perceptual processes during gaze perception and their relationship to paranoid delusions in SCZ. Twenty-eight individuals with SCZ or schizoaffective disorder and 32 demographically matched healthy controls (HCs) completed a gaze-discrimination task with face stimuli varying in gaze direction (direct, averted), head orientation (forward, deviated), and emotion (neutral, fearful). ERPs were recorded during the task. Participants rated experienced threat from each face after the task. Participants with SCZ were as accurate as, though slower than, HCs on the task. Participants with SCZ displayed enlarged N170 responses over the left hemisphere to averted gaze presented in fearful relative to neutral faces, indicating a heightened encoding sensitivity to faces signaling external threat. This abnormality was correlated with increased perceived threat and paranoid delusions. Participants with SCZ also showed a reduction of N170 modulation by head orientation (normally increased amplitude to deviated faces relative to forward faces), suggesting less integration of contextual cues of head orientation in gaze perception. The psychophysiological deviations observed during gaze discrimination in SCZ underscore the role of early attentional and perceptual abnormalities in social information processing and paranoid symptoms of SCZ. (c) 2015 APA, all rights reserved.
Loebach, Jeremy L.; Pisoni, David B.; Svirsky, Mario A.
The effect of feedback and materials on perceptual learning was examined in listeners with normal hearing who were exposed to cochlear implant simulations. Generalization was most robust when feedback paired the spectrally degraded sentences with their written transcriptions, promoting mapping between the degraded signal and its acoustic-phonetic…
Fahrenfort, Johannes J; van Leeuwen, Jonathan; Olivers, Christian N L; Hogendoorn, Hinze
The visual system has the remarkable ability to integrate fragmentary visual input into a perceptually organized collection of surfaces and objects, a process we refer to as perceptual integration. Despite a long tradition of perception research, it is not known whether access to consciousness is required to complete perceptual integration. To investigate this question, we manipulated access to consciousness using the attentional blink. We show that, behaviorally, the attentional blink impairs conscious decisions about the presence of integrated surface structure from fragmented input. However, despite conscious access being impaired, the ability to decode the presence of integrated percepts remains intact, as shown through multivariate classification analyses of electroencephalogram (EEG) data. In contrast, when disrupting perception through masking, decisions about integrated percepts and decoding of integrated percepts are impaired in tandem, while leaving feedforward representations intact. Together, these data show that access consciousness and perceptual integration can be dissociated.
M. V. ARHIPOVA
The article is written within the framework of the extended scientific research devoted to the music-semeiotic concept of developing students’ creative learning of foreign languages. The concept implies experimental study of psychological impact of music on the efficiency of the learning processes, on the development of general and specific abilities of students, in particular creative abilities to learn foreign languages. Solution of this task is based on the hypothesis of psychological inte...
The article deals with a number of relevant methodological issues. First of all, the author analyses psychological peculiarities of dialogic speech and states that the dialogue is the product of at least two persons. Therefore, in this view, dialogic speech, unlike monologic speech, happens impromptu and is not prepared in advance. Dialogic speech is mainly of situational character. The linguistic nature of dialogic speech, in the author’s opinion, lies in the process of exchanging replications, which are coherent in structural and functional character. The author classifies dialogue groups by the number of replications and communicative parameters. The basic goal of dialogic speech teaching is developing the abilities and skills which enable learners to exchange replications. The author distinguishes two basic stages of dialogic speech teaching: 1. Training of abilities to exchange replications during communicative exercises. 2. Development of skills by training the capability to perform exercises of creative nature during a group dialogue, conversation or debate.
Schmetz, Emilie; Rousselle, Laurence; Ballaz, Cécile; Detraux, Jean-Jacques; Barisnikov, Koviljka
This study aims to examine the different levels of visual perceptual object recognition (early, intermediate, and late) defined in Humphreys and Riddoch's model as well as basic visual spatial processing in children using a new test battery (BEVPS). It focuses on the age sensitivity, internal coherence, theoretical validity, and convergent validity of this battery. French-speaking, typically developing children (n = 179; 5 to 14 years) were assessed using 15 new computerized subtests. After selecting the most age-sensitive tasks through ceiling-effect and correlation analyses, an exploratory factorial analysis was run with the 12 remaining subtests to examine the BEVPS' theoretical validity. Three separate factors were identified for the assessment of the stimuli's basic features (F1, four subtests), view-dependent and -independent object representations (F2, six subtests), and basic visual spatial processing (F3, two subtests). Convergent validity analyses revealed positive correlations between F1 and F2 and the Beery-VMI visual perception subtest, while no such correlations were found for F3. Children's performances progressed until the age of 9-10 years in F1 and in view-independent representations (F2), and until 11-12 years in view-dependent representations (F2). However, no progression with age was observed in F3. Moreover, the selected subtests present good-to-excellent internal consistency, which indicates that they provide reliable measures for the assessment of visual perceptual processing abilities in children.
Amitay, Sygal; Zhang, Yu-Xuan; Jones, Pete R; Moore, David R
Perceptual learning has traditionally been portrayed as a bottom-up phenomenon that improves encoding or decoding of the trained stimulus. Cognitive skills such as attention and memory are thought to drive, guide and modulate learning but are, with notable exceptions, not generally considered to undergo changes themselves as a result of training with simple perceptual tasks. Moreover, shifts in threshold are interpreted as shifts in perceptual sensitivity, with no consideration for non-sensory factors (such as response bias) that may contribute to these changes. Accumulating evidence from our own research and others shows that perceptual learning is a conglomeration of effects, with training-induced changes ranging from the lowest (noise reduction in the phase locking of auditory signals) to the highest (working memory capacity) level of processing, and includes contributions from non-sensory factors that affect decision making even on a "simple" auditory task such as frequency discrimination. We discuss our emerging view of learning as a process that increases the signal-to-noise ratio associated with perceptual tasks by tackling noise sources and inefficiencies that cause performance bottlenecks, and present some implications for training populations other than young, smart, attentive and highly-motivated college students. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
A.A. Salah (Albert Ali); O. Tanrıdağ
Humans perceive the world through different perceptual modalities, which are processed in the brain by modality-specific areas and structures. However, there also exist multimodal neurons and areas, specialized in integrating perceptual information to enhance or suppress brain response.
Posthuma, Daniëlle; Baaré, Wim F C; Hulshoff Pol, Hilleke E; Kahn, René S; Boomsma, Dorret I; De Geus, Eco J C
We recently showed that the correlation of gray and white matter volume with full scale IQ and the Working Memory dimension are completely mediated by common genetic factors (Posthuma et al., 2002). Here we examine whether the other WAIS III dimensions (Verbal Comprehension, Perceptual Organization, Processing Speed) are also related to gray and white matter volume, and whether any of the dimensions are related to cerebellar volume. Two overlapping samples provided 135 subjects from 60 extended twin families for whom both MRI scans and WAIS III data were available. All three brain volumes are related to Working Memory capacity (r = 0.27). This phenotypic correlation is completely due to a common underlying genetic factor. Processing Speed was genetically related to white matter volume (r(g) = 0.39). Perceptual Organization was both genetically (r(g) = 0.39) and environmentally (r(e) = -0.71) related to cerebellar volume. Verbal Comprehension was not related to any of the three brain volumes. It is concluded that brain volumes are genetically related to intelligence which suggests that genes that influence brain volume may also be important for intelligence. It is also noted however, that the direction of causation (i.e., do genes influence brain volume which in turn influences intelligence, or alternatively, do genes influence intelligence which in turn influences brain volume), or the presence or absence of pleiotropy has not been resolved yet.
Tuninetti, Alba; Chládková, Kateřina; Peter, Varghese; Schiller, Niels O; Escudero, Paola
Speech sound acoustic properties vary largely across speakers and accents. When perceiving speech, adult listeners normally disregard non-linguistic variation caused by speaker or accent differences, in order to comprehend the linguistic message, e.g. to correctly identify a speech sound or a word. Here we tested whether the process of normalizing speaker and accent differences, facilitating the recognition of linguistic information, is found at the level of neural processing, and whether it is modulated by the listeners' native language. In a multi-deviant oddball paradigm, native and nonnative speakers of Dutch were exposed to naturally-produced Dutch vowels varying in speaker, sex, accent, and phoneme identity. Unexpectedly, the analysis of mismatch negativity (MMN) amplitudes elicited by each type of change shows a large degree of early perceptual sensitivity to non-linguistic cues. This finding on perception of naturally-produced stimuli contrasts with previous studies examining the perception of synthetic stimuli wherein adult listeners automatically disregard acoustic cues to speaker identity. The present finding bears relevance to speech normalization theories, suggesting that at an unattended level of processing, listeners are indeed sensitive to changes in fundamental frequency in natural speech tokens. Copyright © 2017 Elsevier Inc. All rights reserved.
The A allele of the Fras1-related extracellular matrix protein 3 (FREM3) rs7676614 single nucleotide polymorphism (SNP) was linked to major depressive disorder (MDD) in an early genome-wide association study (GWAS), and to symptoms of psychomotor retardation in a follow-up investigation. In line with significant overlap between age- and depression-related molecular pathways, parallel work has shown that FREM3 expression in postmortem human brain decreases with age. Here we probe the effect of rs7676614 on amygdala reactivity and perceptual processing speed, both of which are altered in depression and aging. Amygdala reactivity was assessed using a face-matching BOLD fMRI paradigm in 365 Caucasian participants in the Duke Neurogenetics Study (192 women, mean age 19.7 ± 1.2). Perceptual processing speed was indexed by reaction times in the same task and the Trail Making Test (TMT). The effect of rs7676614 on FREM3 mRNA brain expression levels was probed in a postmortem cohort of 169 Caucasian individuals (44 women, mean age 50.8 ± 14.9). The A allele of rs7676614 was associated with blunted amygdala reactivity to faces and slower reaction times in the face-matching condition (p < 0.04), as well as marginally slower performance on TMT Part B (p = 0.056). In the postmortem cohort, the T allele of rs6537170 (a proxy for the rs7676614 A allele) was associated with trend-level reductions in gene expression in Brodmann areas 11 and 47 (p = 0.066), reminiscent of patterns characteristic of older age. The low-expressing allele of another FREM3 SNP (rs1391187) was similarly associated with reduced amygdala reactivity and slower TMT Part B speed, in addition to reduced BA47 activity and Extraversion (p < 0.05). Together, these results suggest common genetic variation associated with reduced FREM3 expression may confer risk for a subtype of depression characterized by reduced reactivity to environmental stimuli and slower perceptual processing speed, possibly suggestive of
Hadi, Shamil M.; Siadat, Mohamad R.; Babajani-Feremi, Abbas
We investigated the effect of synaptic serotonin concentration on hemodynamic responses. The stimulus paradigm involved the presentation of fearful and threatening facial expressions to a set of 24 subjects who were either 5HTTLPR long- or short-allele carriers (12 of each type in each group). The BOLD signals of the rACC from subjects of each group were averaged to increase the signal-to-noise ratio. We used a Bayesian approach to estimate the parameters of the underlying hemodynamic model. Our results, during this perceptual emotion-processing task, showed a negative BOLD signal in the rACC in the subjects with long alleles. In contrast, the subjects with short alleles showed positive BOLD signals in the rACC. These results suggest that high synaptic serotonin concentration in the rACC inhibits neuronal activity in a fashion similar to GABA, and a consequent negative BOLD signal ensues.
McCann, Robert S.; Foyle, David C.; Johnston, James C.; Hart, Sandra G. (Technical Monitor)
Previous work using Head-Up Displays (HUDs) suggests that the visual system parses the HUD and the outside world into distinct perceptual groups, with attention deployed sequentially to first one group and then the other. New experiments show that both groups can be processed in parallel in a divided attention search task, even though subjects have just processed a stimulus in one perceptual group or the other. Implications for models of visual attention will be discussed.
Within attention studies, Lavie's load theory (Lavie & Tsal, 1994; Lavie, Hirst, de Fockert, & Viding, 2004) presented an account that could settle the question whether attention selects stimuli to be processed at an early or late stage of cognitive processing. This theory relied on the concepts of "perceptual load" and "attentional capacity", proposing that attentional resources are automatically allocated to stimuli, but when the perceptual load of the stimuli exceeds a person's capacity, tas...
Pedersen, Søren Nygaard
The research presented in this PhD thesis has focused on a perceptual approach to robust design. The results of the research and the original contribution to knowledge are a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic...... been presented. Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current practice review was performed. From the review two main research problems were identified. Firstly, a lack of tools...... for perceptual robustness was found to overlap with the optimum for functional robustness and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis has offered a new perspective on robust design by merging robust design...
Michael, George Andrew; Bacon, Elisabeth; Offerlin-Meyer, Isabelle
There is a general consensus that benzodiazepines affect attentional processes, yet only few studies have tried to investigate these impairments in detail. The purpose of the present study was to investigate the effects of a single dose of Lorazepam on performance in a target cancellation task with important time constraints. We measured correct target detections and correct distractor rejections, misses and false positives. The results show that Lorazepam produces multiple kinds of shifts in performance, which suggests that it impairs multiple processes: (a) the evolution of performance over time was not the same between the placebo and the Lorazepam groups, with Lorazepam affecting performance quite early after the beginning of the test. This is suggestive of a depletion of attentional resources during sequential attentional processing; (b) Lorazepam affected target and distractor processing differently, with target detection being the most impaired; (c) misses were more frequent under Lorazepam than under placebo, but no such difference was observed as far as false positives were concerned. Signal detection analyses showed that Lorazepam (d) decreased perceptual discrimination, and (e) reliably increased response bias. Our results bring new insights into the multiple effects of Lorazepam on selective attention which, when combined, may have deleterious effects on human performance.
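The signal detection quantities the abstract distinguishes, perceptual discrimination (d′) and response bias (criterion c), are conventionally derived from hit and false-alarm rates. A minimal sketch, using invented counts rather than the study's data:

```python
from statistics import NormalDist

def dprime_and_bias(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and response bias (criterion c) from raw
    counts, with a small correction so rates of 0 or 1 never produce
    infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)          # discrimination
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # positive = conservative
    return d_prime, criterion

# Illustrative counts only: the drug group shows more misses with
# similar false positives, yielding lower d' and a higher criterion.
placebo = dprime_and_bias(hits=90, misses=10, false_alarms=5, correct_rejections=95)
lorazepam = dprime_and_bias(hits=70, misses=30, false_alarms=5, correct_rejections=95)
```

This mirrors the reported pattern qualitatively: reduced discrimination with an increased (more conservative) response bias under the drug.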
Plant, Katherine L; Stanton, Neville A
Aeronautical decision-making is complex as there is not always a clear coupling between the decision made and decision outcome. As such, there is a call for process-orientated decision research in order to understand why a decision made sense at the time it was made. Schema theory explains how we interact with the world using stored mental representations and forms an integral part of the perceptual cycle model (PCM); proposed here as a way to understand the decision-making process. This paper qualitatively analyses data from the critical decision method (CDM) based on the principles of the PCM. It is demonstrated that the approach can be used to understand a decision-making process and highlights how influential schemata can be at informing decision-making. The reliability of this approach is established, the general applicability is discussed and directions for future work are considered. This paper introduces the PCM, and the associated schema theory, as a framework to structure and explain data collected from the CDM. The reliability of both the method and coding scheme is addressed.
Afonso, José; Garganta, Júlio; McRobert, Allistair; Williams, Andrew M; Mesquita, Isabel
An extensive body of work has focused on the processes underpinning perceptual-cognitive expertise. The majority of researchers have used film-based simulations to capture superior performance. We combined eye movement recording and verbal reports of thinking to explore the processes underpinning skilled performance in a complex, dynamic, and externally paced representative volleyball task involving in situ data collection. Altogether, 27 female volleyball players performed as centre backcourt defenders in simulated sessions while wearing an eye-tracking device. After each sequence, athletes were questioned concerning their perception of the situation. The visual search strategies employed by the highly-skilled players were more exploratory than those used by skilled players, involving more fixations to a greater number of locations. Highly-skilled participants spent more time fixating on functional spaces between two or more display areas, while the skilled participants fixated on the ball trajectory and specific players. Moreover, highly-skilled players generated more condition concepts with higher levels of sophistication than their skilled counterparts. Findings highlight the value of using representative task designs to capture performance in situ. Key points: Decision-making in complex sports relies deeply on perceptual-cognitive expertise; in turn, the effect of expertise is highly dependent on the nature and complexity of the task. Nonetheless, most researchers use simple tasks in their research designs, risking not capturing performance in a meaningful way; we proposed to use a live action setting with a complex task design, representative of real-world situations. We combined eye movement registration with collection of immediate retrospective verbal reports; although the two data sets are not directly comparable, they may be used in a complementary manner, providing a deeper and fuller understanding of the processes underpinning superior performance.
Parks, Colleen M.
Research examining the importance of surface-level information to familiarity in recognition memory tasks is mixed: Sometimes it affects recognition and sometimes it does not. One potential explanation of the inconsistent findings comes from the ideas of dual process theory of recognition and the transfer-appropriate processing framework, which…
Nine studies showed a bidirectional link (a) between a global processing style and generation of similarities and (b) between a local processing style and generation of dissimilarities. In Experiments 1-4, participants were primed with global versus local perception styles and then asked to work on
Chamberlain, Rebecca; McManus, I C; Riley, Howard; Rankin, Qona; Brunswick, Nicola
Individuals with drawing talent have previously been shown to exhibit enhanced local visual processing ability. The aim of the current study was to assess whether local processing biases associated with drawing ability result from a reduced ability to cohere local stimuli into global forms, or an increased ability to disregard global aspects of an image. Local and global visual processing ability was assessed in art students and controls using the Group Embedded Figures Task, Navon shape stimuli, the Block Design Task and the Autism Spectrum Quotient, whilst controlling for nonverbal IQ and artistic ability. Local processing biases associated with drawing appear to arise from an enhancement of local processing alongside successful filtering of global information, rather than a reduction in global processing. The relationship between local processing and drawing ability is independent of individual differences in nonverbal IQ and artistic ability. These findings have implications for bottom-up and attentional theories of observational drawing, as well as explanations of special skills in autism.
Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures.
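The least-squares baseline this abstract refers to, fitting a 3 × 3 matrix that maps camera RGB responses to XYZ tristimulus values, can be sketched as follows. The patch values here are purely illustrative, not data from the paper:

```python
import numpy as np

# Hypothetical training data: camera RGB responses and the measured
# XYZ values for a small set of color patches (illustrative numbers).
rgb = np.array([[0.2, 0.1, 0.05],
                [0.6, 0.5, 0.30],
                [0.1, 0.3, 0.60],
                [0.9, 0.8, 0.70]])
xyz = np.array([[0.18, 0.15, 0.06],
                [0.55, 0.52, 0.31],
                [0.20, 0.28, 0.62],
                [0.85, 0.83, 0.72]])

# Least-squares estimate of the 3x3 characterization matrix M such
# that rgb @ M approximates xyz; this is the pixel-wise RMSE baseline
# that the perceptual (Delta-E, S-CIELAB, CID) optimizations improve on.
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
predicted = rgb @ M
```

By construction the least-squares solution yields a residual no larger than any other fixed 3 × 3 mapping; the paper's contribution is to replace this RMSE objective with perceptual error measures searched via spherical sampling.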
.... Attention may affect the perceived clarity of visual displays and improve performance. In this project, a powerful external noise method was developed to identify and characterize the effect of attention on perceptual performance in visual tasks...
Maher, Stephen; Ekstrom, Tor; Chen, Yue
Perception of subtle facial expressions is essential for social functioning; yet it is unclear if human perceptual sensitivities differ in detecting varying types of facial emotions. Evidence diverges as to whether salient negative versus positive emotions (such as sadness versus happiness) are preferentially processed. Here, we measured perceptual thresholds for the detection of four types of emotion in faces--happiness, fear, anger, and sadness--using psychophysical methods. We also evaluated the association of the perceptual performances with facial morphological changes between neutral and respective emotion types. Human observers were highly sensitive to happiness compared with the other emotional expressions. Further, this heightened perceptual sensitivity to happy expressions can be attributed largely to the emotion-induced morphological change of a particular facial feature (end-lip raise).
Gijs Joost Brouwer
Full Text Available We employed a parametric psychophysical design in combination with functional imaging to examine the influence of metric changes in perceptual incongruence on perceptual alternation rates and cortical responses. Subjects viewed a bistable stimulus defined by incongruent depth cues; bistability resulted from incongruence between binocular disparity and monocular perspective cues that specify different slants (slant rivalry). Psychophysical results revealed that perceptual alternation rates were positively correlated with the degree of perceived incongruence. Functional imaging revealed systematic increases in activity that paralleled the psychophysical results within anterior intraparietal sulcus, prior to the onset of perceptual alternations. We suggest that this cortical activity predicts the frequency of subsequent alternations, implying a putative causal role for these areas in initiating bistable perception. In contrast, areas implicated in form and depth processing (LOC and V3A) were sensitive to the degree of slant, but failed to show increases in activity when these cues were in conflict.
Grandison, Alexandra; Sowden, Paul T; Drivonikou, Vicky G; Notman, Leslie A; Alexander, Iona; Davies, Ian R L
Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest.
Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten
This study investigates behavioural and objective measures of temporal auditory processing and their relation to the ability to understand speech in noise. The experiments were carried out on a homogeneous group of seven hearing-impaired listeners with normal sensitivity at low frequencies (up to 1 kHz) and steeply sloping hearing losses above 1 kHz. For comparison, data were also collected for five normal-hearing listeners. Temporal processing was addressed at low frequencies by means of psychoacoustical frequency discrimination, binaural masked detection and amplitude modulation (AM) detection. In addition, auditory brainstem responses (ABRs) to clicks and broadband rising chirps were recorded. Furthermore, speech reception thresholds (SRTs) were determined for Danish sentences in speech-shaped noise. The main findings were: (1) SRTs were neither correlated with hearing sensitivity...
Nine studies showed a bidirectional link (a) between a global processing style and generation of similarities and (b) between a local processing style and generation of dissimilarities. In Experiments 1-4, participants were primed with global versus local perception styles and then asked to work on an allegedly unrelated generation task. Across materials, participants generated more similarities than dissimilarities after global priming, whereas for participants with local priming, the opposite was true. Experiments 5-6 demonstrated a bidirectional link whereby participants who were first instructed to search for similarities attended more to the gestalt of a stimulus than to its details, whereas the reverse was true for those who were initially instructed to search for dissimilarities. Because important psychological variables are correlated with processing styles, in Experiments 7-9, temporal distance, a promotion focus, and high power were predicted and shown to enhance the search for similarities, whereas temporal proximity, a prevention focus, and low power enhanced the search for dissimilarities.
Green, Amity E; Fitzgerald, Paul B; Johnston, Patrick J; Nathan, Pradeep J; Kulkarni, Jayashri; Croft, Rodney J
Schizophrenia is characterised by significant episodic memory impairment that is thought to be related to problems with encoding; however, the neuro-functional mechanisms underlying these deficits are not well understood. The present study used a subsequent recognition memory paradigm and event-related potentials (ERPs) to investigate temporal aspects of episodic memory encoding deficits in schizophrenia. Electroencephalographic data were recorded in 24 patients and 19 healthy controls whilst participants categorised single words as pleasant/unpleasant. ERPs were generated to subsequently recognised versus unrecognised words on the basis of a forced-choice recognition memory task. Subsequent memory effects were examined with the late positive component (LPP). Group differences in N1, P2, N400 and LPP were examined for words correctly recognised. Patients performed more poorly than controls on the recognition task. During encoding, patients had significantly reduced N400 and LPP amplitudes compared with controls. LPP amplitude correlated with task performance; however, amplitudes did not differ between patients and controls as a function of subsequent memory. No significant differences in N1 or P2 amplitude or latency were observed. The present results indicate that early sensory processes are intact and that dysfunctional higher-order cognitive processes during encoding contribute to episodic memory impairments in schizophrenia.
Full Text Available Human sensory systems allow individuals to see, hear, touch, and interact with the surrounding physical environment. Understanding human perception and its limits enables us to better exploit the psychophysics of human perceptual systems to design more efficient, adaptive algorithms and develop perceptually-inspired computational models. In this talk, I will survey some recent efforts on perceptually-inspired computing with applications to crowd simulation and multimodal interaction. In particular, I will present data-driven personality modeling based on the results of user studies, example-guided physics-based sound synthesis using auditory perception, as well as perceptually-inspired simplification for multimodal interaction. These perceptually guided principles can be used to accelerate multimodal interaction and visual computing, thereby creating more natural human-computer interaction and providing more immersive experiences. I will also present their use in interactive applications for entertainment, such as video games, computer animation, and shared social experience. I will conclude by discussing possible future research directions.
Cavanaugh, Lisa A; MacInnis, Deborah J; Weiss, Allen M
Individuals often describe objects in their world in terms of perceptual dimensions that span a variety of modalities; the visual (e.g., brightness: dark-bright), the auditory (e.g., loudness: quiet-loud), the gustatory (e.g., taste: sour-sweet), the tactile (e.g., hardness: soft vs. hard) and the kinaesthetic (e.g., speed: slow-fast). We ask whether individuals use perceptual dimensions to differentiate emotions from one another. Participants in two studies (one where respondents reported on abstract emotion concepts and a second where they reported on specific emotion episodes) rated the extent to which features anchoring 29 perceptual dimensions (e.g., temperature, texture and taste) are associated with 8 emotions (anger, fear, sadness, guilt, contentment, gratitude, pride and excitement). Results revealed that in both studies perceptual dimensions differentiate positive from negative emotions and high arousal from low arousal emotions. They also differentiate among emotions that are similar in arousal and valence (e.g., high arousal negative emotions such as anger and fear). Specific features that anchor particular perceptual dimensions (e.g., hot vs. cold) are also differentially associated with emotions.
Xu, Jiansong; Monterosso, John; Kober, Hedy; Balodis, Iris M; Potenza, Marc N
The load theory of selective attention hypothesizes that distractor interference is suppressed after perceptual processing (i.e., in the later stage of central processing) at low perceptual load of the central task, but in the early stage of perceptual processing at high perceptual load. Consistently, studies on the neural correlates of attention have found a smaller distractor-related activation in the sensory cortex at high relative to low perceptual load. However, it is not clear whether the distractor-related activation in brain regions linked to later stages of central processing (e.g., in the frontostriatal circuits) is also smaller at high rather than low perceptual load, as might be predicted based on the load theory. We studied 24 healthy participants using functional magnetic resonance imaging (fMRI) during a visual target identification task with two perceptual loads (low vs. high). Participants showed distractor-related increases in activation in the midbrain, striatum, occipital and medial and lateral prefrontal cortices at low load, but distractor-related decreases in activation in the midbrain ventral tegmental area and substantia nigra (VTA/SN), striatum, thalamus, and extensive sensory cortices at high load. Multiple levels of central processing involving midbrain and frontostriatal circuits participate in suppressing distractor interference at either low or high perceptual load. For suppressing distractor interference, the processing of sensory inputs in both early and late stages of central processing are enhanced at low load but inhibited at high load.
Baumgarten, Thomas J; Schnitzler, Alfons; Lange, Joachim
Whether seeing a movie, listening to a song, or feeling a breeze on the skin, we coherently experience these stimuli as continuous, seamless percepts. However, there are rare perceptual phenomena that argue against continuous perception but, instead, suggest discrete processing of sensory input. Empirical evidence supporting such a discrete mechanism, however, remains scarce and comes entirely from the visual domain. Here, we demonstrate compelling evidence for discrete perceptual sampling in the somatosensory domain. Using magnetoencephalography (MEG) and a tactile temporal discrimination task in humans, we find that oscillatory alpha- and low beta-band (8-20 Hz) cycles in primary somatosensory cortex represent neurophysiological correlates of discrete perceptual cycles. Our results agree with several theoretical concepts of discrete perceptual sampling and empirical evidence of perceptual cycles in the visual domain. Critically, these results show that discrete perceptual cycles are not domain-specific, and thus restricted to the visual domain, but extend to the somatosensory domain.
Furley, Philip; Memmert, Daniel; Schmid, Simone
In two experiments, we transferred perceptual load theory to the dynamic field of team sports and tested the predictions derived from the theory using a novel task and stimuli. We tested a group of college students (N = 33) and a group of expert team sport players (N = 32) on a general perceptual load task and a complex, soccer-specific perceptual load task in order to extend the understanding of the applicability of perceptual load theory and further investigate whether distractor interference may differ between the groups, as the sport-specific processing task may not exhaust the processing capacity of the expert participants. In both the general and the specific task, the pattern of results supported perceptual load theory and demonstrated that the predictions of the theory also transfer to more complex, unstructured situations. Further, perceptual load was the only determinant of distractor processing, as we found expertise effects in neither the general perceptual load task nor the sport-specific task. We discuss the heuristic utility of using response-competition paradigms for studying both general and domain-specific perceptual-cognitive adaptations.
Law, Chi-Tat; Gold, Joshua I
Perceptual decisions require the brain to weigh noisy evidence from sensory neurons to form categorical judgments that guide behavior. Here we review behavioral and neurophysiological findings suggesting that at least some forms of perceptual learning do not appear to affect the response properties of neurons that represent the sensory evidence. Instead, improved perceptual performance results from changes in how the sensory evidence is selected and weighed to form the decision. We discuss the implications of this idea for possible sites and mechanisms of training-induced improvements in perceptual processing in the brain.
Posthuma, D.; Baare, W.F.C.; Hulshoff Pol, H.E.; Kahn, R.S.; Boomsma, D.I.; de Geus, E.J.C.
We recently showed that the correlation of gray and white matter volume with full scale IQ and the Working Memory dimension are completely mediated by common genetic factors (Posthuma et al., 2002). Here we examine whether the other WAIS III dimensions (Verbal Comprehension, Perceptual Organization,
Ciftcioglu, O.; Bittermann, M.S.; Sariyildiz, I.S.
Fusion of perception information for perceptual robotics is described. The visual perception is mathematically modelled as a probabilistic process obtaining and interpreting visual data from an environment. The visual data is processed in a multiresolutional form via wavelet transform and optimally
Dutilh, Gilles; Rieskamp, Jörg
Perceptual and preferential decision making have been studied largely in isolation. Perceptual decisions are considered to be at a non-deliberative cognitive level and have an outside criterion that defines the quality of decisions. Preferential decisions are considered to be at a higher cognitive level, and the quality of decisions depends on the decision maker's subjective goals. Besides these crucial differences, both types of decisions also have in common that uncertain information about the choice situation has to be processed before a decision can be made. The present work aims to acknowledge the commonalities of both types of decision making to lay bare the crucial differences. For this aim we examine perceptual and preferential decisions with a novel choice paradigm that uses the identical stimulus material for both types of decisions. This paradigm allows us to model the decisions and response times of both types of decisions with the same sequential sampling model, the drift diffusion model. The results illustrate that the different incentive structure in both types of tasks changes people's behavior so that they process information more efficiently and respond more cautiously in the perceptual as compared to the preferential task. These findings set out a perspective for further integration of perceptual and preferential decision making in a single framework.
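The drift diffusion model invoked above can be sketched in a few lines. The parameter values below are illustrative assumptions, not the authors' fitted estimates:

```python
import numpy as np

# Minimal drift diffusion sketch (illustrative parameters, not the authors'
# fits): evidence x starts at z*a and accumulates with drift v and Gaussian
# noise s until it crosses the upper bound a or the lower bound 0; a fixed
# non-decision time ter is added to the first-passage time.
def simulate_ddm(v=0.3, a=1.0, z=0.5, s=0.3, dt=0.001, ter=0.3, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    x, t = z * a, 0.0
    while 0.0 < x < a:
        x += v * dt + s * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x >= a, ter + t  # (hit upper bound?, response time in seconds)

rng = np.random.default_rng(1)
choices, rts = zip(*(simulate_ddm(rng=rng) for _ in range(200)))
```

With a positive drift, most trials terminate at the upper boundary; raising the boundary separation `a` models the more cautious responding, and raising the drift `v` the more efficient evidence processing, that the abstract attributes to the perceptual task.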
Berard, Aaron V; Cain, Matthew S; Watanabe, Takeo; Sasaki, Yuka
Playing certain types of video games for a long time can improve a wide range of mental processes, from visual acuity to cognitive control. Frequent gamers have also displayed generalized improvements in perceptual learning. In the Texture Discrimination Task (TDT), a widely used perceptual learning paradigm, participants report the orientation of a target embedded in a field of lines and demonstrate robust overnight improvement. However, changing the orientation of the background lines midway through TDT training interferes with overnight improvements in overall performance on TDT. Interestingly, prior research has suggested that this effect will not occur if a one-hour break is allowed in between the changes. These results have suggested that after training is over, it may take some time for learning to become stabilized and resilient against interference. Here, we tested whether frequent gamers have faster stabilization of perceptual learning compared to non-gamers and examined the effect of daily video game playing on interference of training of TDT with one background orientation on perceptual learning of TDT with a different background orientation. As a result, we found that non-gamers showed overnight performance improvement only on one background orientation, replicating previous results with the interference in TDT. In contrast, frequent gamers demonstrated overnight improvements in performance with both background orientations, suggesting that they are better able to overcome interference in perceptual learning. This resistance to interference suggests that video game playing not only enhances the amplitude and speed of perceptual learning but also leads to faster and/or more robust stabilization of perceptual learning.
Wilson, Donald A.; Fletcher, Max L.; Sullivan, Regina M.
Olfactory perceptual learning is a relatively long-term, learned increase in perceptual acuity, and has been described in both humans and animals. Data from recent electrophysiological studies have indicated that olfactory perceptual learning may be correlated with changes in odorant receptive fields of neurons in the olfactory bulb and piriform…
de Fockert, Jan W.
The perceptual load and dilution models differ fundamentally in terms of the proposed mechanism underlying variation in distractibility during different perceptual conditions. However, both models predict that distracting information can be processed beyond perceptual processing under certain conditions, a prediction that is well-supported by the literature. Load theory proposes that in such cases, where perceptual task aspects do not allow for sufficient attentional selectivity, the maintena...
Noorbaloochi, Sharareh; Sharon, Dahlia; McClelland, James L
We used electroencephalography (EEG) and behavior to examine the role of payoff bias in a difficult two-alternative perceptual decision under deadline pressure in humans. The findings suggest that a fast guess process, biased by payoff and triggered by stimulus onset, occurred on a subset of trials and raced with an evidence accumulation process informed by stimulus information. On each trial, the participant judged whether a rectangle was shifted to the right or left and responded by squeezing a right- or left-hand dynamometer. The payoff for each alternative (which could be biased or unbiased) was signaled 1.5 s before stimulus onset. The choice response was assigned to the first hand reaching a squeeze force criterion and reaction time was defined as time to criterion. Consistent with a fast guess account, fast responses were strongly biased toward the higher-paying alternative and the EEG exhibited an abrupt rise in the lateralized readiness potential (LRP) on a subset of biased payoff trials contralateral to the higher-paying alternative ∼ 150 ms after stimulus onset and 50 ms before stimulus information influenced the LRP. This rise was associated with poststimulus dynamometer activity favoring the higher-paying alternative and predicted choice and response time. Quantitative modeling supported the fast guess account over accounts of payoff effects supported in other studies. Our findings, taken with previous studies, support the idea that payoff and prior probability manipulations produce flexible adaptations to task structure and do not reflect a fixed policy for the integration of payoff and stimulus information. Humans and other animals often face situations in which they must make choices based on uncertain sensory information together with information about expected outcomes (gains or losses) about each choice. We investigated how differences in payoffs between available alternatives affect neural activity, overt choice, and the timing of choice
Sheridan, Heather; Reingold, Eyal M.
The present experiments examined perceptual specificity effects using a rereading paradigm. Eye movements were monitored while participants read the same target word twice, in two different low-constraint sentence frames. The congruency of perceptual processing was manipulated by either presenting the target word in the same distortion typography…
van Elk, M.
Previous studies have shown that one’s prior beliefs have a strong effect on perceptual decision-making and attentional processing. The present study extends these findings by investigating how individual differences in paranormal and conspiracy beliefs are related to perceptual and attentional
Cartwright-Finch, Ula; Lavie, Nilli
Perceptual load theory offers a resolution to the long-standing early vs. late selection debate over whether task-irrelevant stimuli are perceived, suggesting that irrelevant perception depends upon the perceptual load of task-relevant processing. However, previous evidence for this theory has relied on RTs and neuroimaging. Here we tested the…
Meiran, Nachshon; Dimov, Eduard; Ganel, Tzvi
In the present experiments, the question being addressed was whether switching attention between perceptual dimensions and selective attention to dimensions are processes that compete over a common resource. Attention to perceptual dimensions is usually studied by requiring participants to ignore a never-relevant dimension. Selection failure…
Roper, Zachary J J; Cosman, Joshua D; Vecera, Shaun P
One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a noncircular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spillover to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. We conclude that rather than be arbitrarily defined, perceptual load might be defined by well-characterized, continuous factors that influence visual search.
Rajaram, S; Srinivas, K; Travers, S
Reports on the effects of dividing attention at study on subsequent perceptual priming suggest that perceptual priming is generally unaffected by attentional manipulations as long as word identity is processed. We tested this hypothesis in three experiments by using the implicit word fragment completion and word stem completion tasks. Division of attention was instantiated with the Stroop task in order to ensure the processing of word identity even when the participant's attention was directed to a stimulus attribute other than the word itself. Under these conditions, we found that even though perceptual priming was significant, it was significantly reduced in magnitude. A stem cued recall test in Experiment 2 confirmed a more deleterious effect of divided attention on explicit memory. Taken together, our findings delineate the relative contributions of perceptual analysis and attentional processes in mediating perceptual priming on two ubiquitously used tasks of word fragment completion and word stem completion.
Reber, Rolf; Wurtz, Pascal; Zimmermann, Thomas D
Perceptual fluency is the subjective experience of ease with which an incoming stimulus is processed. Although perceptual fluency is assessed by speed of processing, it remains unclear how objective speed is related to subjective experiences of fluency. We present evidence that speed at different stages of the perceptual process contributes to perceptual fluency. In an experiment, figure-ground contrast influenced detection of briefly presented words, but not their identification at longer exposure durations. Conversely, font in which the word was written influenced identification, but not detection. Both contrast and font influenced subjective fluency. These findings suggest that speed of processing at different stages condensed into a unified subjective experience of perceptual fluency.
Scalf, Paige E; Torralbo, Ana; Tapia, Evelina; Beck, Diane M
Both perceptual load theory and dilution theory purport to explain when and why task-irrelevant information, or so-called distractors are processed. Central to both explanations is the notion of limited resources, although the theories differ in the precise way in which those limitations affect distractor processing. We have recently proposed a neurally plausible explanation of limited resources in which neural competition among stimuli hinders their representation in the brain. This view of limited capacity can also explain distractor processing, whereby the competitive interactions and bias imposed to resolve the competition determine the extent to which a distractor is processed. This idea is compatible with aspects of both perceptual load and dilution models of distractor processing, but also serves to highlight their differences. Here we review the evidence in favor of a biased competition view of limited resources and relate these ideas to both classic perceptual load theory and dilution theory.
Kellman, Philip J; Garrigan, Patrick
We consider perceptual learning: experience-induced changes in the way perceivers extract information. Often neglected in scientific accounts of learning and in instruction, perceptual learning is a fundamental contributor to human expertise and is crucial in domains where humans show remarkable levels of attainment, such as language, chess, music, and mathematics. In Section 2, we give a brief history and discuss the relation of perceptual learning to other forms of learning. We consider in Section 3 several specific phenomena, illustrating the scope and characteristics of perceptual learning, including both discovery and fluency effects. We describe abstract perceptual learning, in which structural relationships are discovered and recognized in novel instances that do not share constituent elements or basic features. In Section 4, we consider primary concepts that have been used to explain and model perceptual learning, including receptive field change, selection, and relational recoding. In Section 5, we consider the scope of perceptual learning, contrasting recent research, focused on simple sensory discriminations, with earlier work that emphasized extraction of invariance from varied instances in more complex tasks. Contrary to some recent views, we argue that perceptual learning should not be confined to changes in early sensory analyzers. Phenomena at various levels, we suggest, can be unified by models that emphasize discovery and selection of relevant information. In a final section, we consider the potential role of perceptual learning in educational settings. Most instruction emphasizes facts and procedures that can be verbalized, whereas expertise depends heavily on implicit pattern recognition and selective extraction skills acquired through perceptual learning. We consider reasons why perceptual learning has not been systematically addressed in traditional instruction, and we describe recent successful efforts to create a technology of perceptual
Mackintosh, N J
Although most studies of perceptual learning in human participants have concentrated on the changes in perception assumed to be occurring, studies of nonhuman animals necessarily measure discrimination learning and generalization and remain agnostic on the question of whether changes in behavior reflect changes in perception. On the other hand, animal studies do make it easier to draw a distinction between supervised and unsupervised learning. Differential reinforcement will surely teach animals to attend to some features of a stimulus array rather than to others. But it is an open question as to whether such changes in attention underlie the enhanced discrimination seen after unreinforced exposure to such an array. I argue that most instances of unsupervised perceptual learning observed in animals (and at least some in human animals) are better explained by appeal to well-established principles and phenomena of associative learning theory: excitatory and inhibitory associations between stimulus elements, latent inhibition, and habituation.
Murphy, Gillian; Greene, Ciara M
Perceptual Load Theory has been proposed as a resolution to the longstanding early versus late selection debate in cognitive psychology. There is much evidence in support of Load Theory but very few applied studies, despite the potential for the model to shed light on everyday attention and distraction. Using a driving simulator, the effect of perceptual and cognitive load on drivers' visual search was assessed. The findings were largely in line with Load Theory, with reduced distractor processing under high perceptual load, but increased distractor processing under high cognitive load. The effect of load on driving behaviour was also analysed, with significant differences in driving behaviour under perceptual and cognitive load. In addition, the effect of perceptual load on drivers' levels of awareness was investigated. High perceptual load significantly increased inattentional blindness and deafness, for stimuli that were both relevant and irrelevant to driving. High perceptual load also increased RTs to hazards. The current study helps to advance Load Theory by illustrating its usefulness outside of traditional paradigms. There are also applied implications for driver safety and roadway design, as the current study suggests that perceptual and cognitive load are important factors in driver attention.
Sy, Jocelyn L.; Guerin, Scott A.; Stegman, Anna; Giesbrecht, Barry
The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively “spills-over” to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, functional magnetic resonance imaging, and electrophysiology. Expectations were generated using a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean blood oxygenation level dependent responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information.
Visual perception is a complex process requiring interaction between the receptors in the eye that sense the stimulus and the neural system and the brain that are responsible for communicating and interpreting the sensed visual information. This process involves several physical, neural, and cognitive phenomena whose understanding is essential to design effective and computationally efficient imaging solutions. Building on advances in computer vision, image and video processing, neuroscience, and information engineering, perceptual digital imaging greatly enhances the capabilities of traditional...
Ibáñez, Agustin; Bekinschtein, Tristan
Visual perception and integration seem to play an essential role in our conscious phenomenology. Relatively local neural processing of reentrant nature may explain several visual integration processes (feature binding or figure-ground segregation, object recognition, inference, competition), even without attention or cognitive control. Based on the above statements, should the neural signatures of visual integration (via reentrant process) be non-reportable phenomenological qualia? We argue that qualia are not required to understand this perceptual organization.
Perceptual grouping is the process through which the perceptual system combines local stimuli into a more global perceptual unit. Previous studies have shown attention to be a modulatory factor for perceptual grouping. However, these studies mainly used explicit measurements, and, thus, whether attention can modulate perceptual grouping without awareness is still relatively unexplored. To clarify the relationship between attention and perceptual grouping, the present study aims to explore how attention interacts with perceptual grouping without awareness. The task was to judge the relative lengths of two centrally presented horizontal bars while a railway-shaped pattern defined by color similarity was presented in the background. Although the observers were unaware of the railway-shaped pattern, their line-length judgment was biased by that pattern, which induced a Ponzo illusion, indicating grouping without awareness. More importantly, an attentional modulatory effect without awareness was manifest, as evidenced by the observer's performance being more often biased when the railway-shaped pattern was formed by an attended color than when it was formed by an unattended one. Also, the attentional modulation effect was shown to be dynamic, being more pronounced with a short presentation time than with a longer one. The results of the present study not only clarify the relationship between attention and perceptual grouping but also further contribute to our understanding of attention and awareness by corroborating the dissociation between attention and awareness.
Sherman, M T; Seth, A K; Barrett, A B; Kanai, R
The influential framework of 'predictive processing' suggests that prior probabilistic expectations influence, or even constitute, perceptual contents. This notion is evidenced by the facilitation of low-level perceptual processing by expectations. However, whether expectations can facilitate high-level components of perception remains unclear. We addressed this question by considering the influence of expectations on perceptual metacognition. To isolate the effects of expectation from those of attention we used a novel factorial design: expectation was manipulated by changing the probability that a Gabor target would be presented; attention was manipulated by instructing participants to perform or ignore a concurrent visual search task. We found that, independently of attention, metacognition improved when yes/no responses were congruent with expectations of target presence/absence. Results were modeled under a novel Bayesian signal detection theoretic framework which integrates bottom-up signal propagation with top-down influences, to provide a unified description of the mechanisms underlying perceptual decision and metacognition.
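The Bayesian framework described in this abstract is not reproduced here, but the standard (equal-variance Gaussian) signal detection quantities that such analyses build on can be sketched as follows. Function names and the log-linear correction are illustrative choices, not the authors' implementation; the key point is that an expectation-driven response bias shows up as a shift in the criterion c while sensitivity d′ stays roughly constant.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: sensitivity d' and criterion c.

    A log-linear correction (+0.5 to each cell, +1 to each total) keeps
    the hit and false-alarm rates strictly between 0 and 1, so the
    inverse normal CDF is always defined.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # probit transform
    d_prime = z(h) - z(f)                 # distance between signal/noise distributions
    criterion = -0.5 * (z(h) + z(f))      # negative = liberal ("yes"-prone) bias
    return d_prime, criterion
```

On this view, expecting target presence should appear as a more negative (liberal) criterion with d′ unchanged, whereas a genuine change in perceptual sensitivity moves d′ itself.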
Perceptual hash functions provide a tool for fast and reliable identification of content. We present new audio hash functions based on summarization of the time-frequency spectral characteristics of an audio document. The proposed hash functions are based on the periodicity series of the fundamental frequency and on singular-value description of the cepstral frequencies. They are found, on one hand, to perform very satisfactorily in identification and verification tests, and on the other hand, to be very resilient to a large variety of attacks. Moreover, we address the issue of security of hashes and propose a keying technique, and thereby a key-dependent hash function.
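The specific features in this abstract (periodicity series of the fundamental frequency, singular-value description of cepstral frequencies) are not reproduced here; the sketch below illustrates only the generic pattern such systems share: summarize the signal into coarse features, binarize relative differences so the fingerprint survives perceptually irrelevant "attacks" such as a global gain change, and compare fingerprints by Hamming distance. All names and parameter values are illustrative assumptions.

```python
import math

FRAME_LEN = 64  # samples per analysis frame (illustrative value)

def frame_energies(signal, frame_len=FRAME_LEN):
    """Summarize the signal as one energy value per non-overlapping frame."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    return [sum(x * x for x in frame) for frame in frames]

def perceptual_hash(signal):
    """Binarize frame-to-frame energy differences into a bit string.

    Bit i is 1 when frame i is more energetic than frame i+1. Because the
    comparison is relative, multiplying the whole signal by a constant gain
    leaves every bit unchanged.
    """
    e = frame_energies(signal)
    return ''.join('1' if e[i] > e[i + 1] else '0' for i in range(len(e) - 1))

def hamming(h1, h2):
    """Bit distance between two equal-length hashes; small means likely the same content."""
    return sum(a != b for a, b in zip(h1, h2))
```

Identification then reduces to a nearest-neighbour search over stored hashes under Hamming distance; keying the feature extraction, as the authors propose, would additionally make the bit pattern depend on a secret key.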
The picture superiority effect, i.e. better memory for pictures than for corresponding words, has been variously ascribed to a conceptual or a perceptual processing advantage. The present study aimed to disentangle perceptual and conceptual contributions. Pictures and words were tested for recognition in both their original formats and translated into participants' second language. Multinomial Processing Tree (Batchelder & Riefer, 1999) and MINERVA (Hintzman, 1984) models were fitted to t...
Dartel, M. van; Sprinkhuizen-Kuyper, I.G.; Postma, E.O.; Herik, H.J. van den
Reactive agents are generally believed to be incapable of coping with perceptual ambiguity (i.e., identical sensory states that require different responses). However, a recent finding suggests that reactive agents can cope with perceptual ambiguity in a simple model (Nolfi, 2002). This paper
Jacobs, Robert A
New technologies and new ways of thinking have recently led to rapid expansions in the study of perceptual learning. We describe three themes shared by many of the nine articles included in this topic on Integrated Approaches to Perceptual Learning. First, perceptual learning cannot be studied on its own because it is closely linked to other aspects of cognition, such as attention, working memory, decision making, and conceptual knowledge. Second, perceptual learning is sensitive to both the stimulus properties of the environment in which an observer exists and to the properties of the tasks that the observer needs to perform. Moreover, the environmental and task properties can be characterized through their statistical regularities. Finally, the study of perceptual learning has important implications for society, including implications for science education and medical rehabilitation. Contributed articles relevant to each theme are summarized.
Cortese, Aurelio; Amano, Kaoru; Koizumi, Ai; Kawato, Mitsuo; Lau, Hakwan
A central controversy in metacognition studies concerns whether subjective confidence directly reflects the reliability of perceptual or cognitive processes, as suggested by normative models based on the assumption that neural computations are generally optimal. This view enjoys popularity in the computational and animal literatures, but it has also been suggested that confidence may depend on a late-stage estimation dissociable from perceptual processes. Yet, at least in humans, experimental tools have lacked the power to resolve these issues convincingly. Here, we overcome this difficulty by using the recently developed method of decoded neurofeedback (DecNef) to systematically manipulate multivoxel correlates of confidence in a frontoparietal network. We report that bi-directional changes in confidence do not affect perceptual accuracy. Further psychophysical analyses rule out accounts based on simple shifts in reporting strategy. Our results provide clear neuroscientific evidence for the systematic dissociation between confidence and perceptual performance, and thereby challenge current theoretical thinking.
FRAS1-related extracellular matrix 3 (FREM3) single-nucleotide polymorphism effects on gene expression, amygdala reactivity and perceptual processing speed: An accelerated aging pathway of depression risk
Nikolova, Yuliya S.; Iruku, Swetha P.; Lin, Chien-Wei; Conley, Emily Drabant; Puralewski, Rachel; French, Beverly; Hariri, Ahmad R.; Sibille, Etienne
The A allele of the FRAS1-related extracellular matrix protein 3 (FREM3) rs7676614 single nucleotide polymorphism (SNP) was linked to major depressive disorder (MDD) in an early genome-wide association study (GWAS), and to symptoms of psychomotor retardation in a follow-up investigation. In line with significant overlap between age- and depression-related molecular pathways, parallel work has shown that FREM3 expression in postmortem human brain decreases with age. Here, we probe the effect of rs7676614 on amygdala reactivity and perceptual processing speed, both of which are altered in depression and aging. Amygdala reactivity was assessed using a face-matching BOLD fMRI paradigm in 365 Caucasian participants in the Duke Neurogenetics Study (DNS) (192 women, mean age 19.7 ± 1.2). Perceptual processing speed was indexed by reaction times in the same task and the Trail Making Test (TMT). The effect of rs7676614 on FREM3 mRNA brain expression levels was probed in a postmortem cohort of 169 Caucasian individuals (44 women, mean age 50.8 ± 14.9). The A allele of rs7676614 was associated with blunted amygdala reactivity to faces, slower reaction times in the face-matching condition (p < 0.04), as well as marginally slower performance on TMT Part B (p = 0.056). In the postmortem cohort, the T allele of rs6537170 (proxy for the rs7676614 A allele), was associated with trend-level reductions in gene expression in Brodmann areas 11 and 47 (p = 0.066), reminiscent of patterns characteristic of older age. The low-expressing allele of another FREM3 SNP (rs1391187) was similarly associated with reduced amygdala reactivity and slower TMT Part B speed, in addition to reduced BA47 activity and extraversion (p < 0.05). Together, these results suggest common genetic variation associated with reduced FREM3 expression may confer risk for a subtype of depression characterized by reduced reactivity to environmental stimuli and slower perceptual processing speed, possibly suggestive of
Taha TahaBasheer; Ehkan Phaklen; Ngadiran Ruzelita
Perceptual mapping approaches have been widely used in visual information processing in multimedia and Internet of Things (IoT) applications. Accumulative Lifting Difference (ALD) is proposed in this paper as a texture mapping model based on a low-complexity lifting wavelet transform, combined with luminance masking to create an efficient perceptual mapping model for estimating Just Noticeable Distortion (JND) in digital images. In addition to low-complexity operations, experimental results sho...
Blank, H.; Guido, B.; Heekeren, H.R.; Philiastides, M.G.
Perceptual decision making is the process by which information from sensory systems is combined and used to influence our behavior. In addition to the sensory input, this process can be affected by other factors, such as reward and punishment for correct and incorrect responses. To investigate the temporal dynamics of how monetary punishment influences perceptual decision making in humans, we collected electroencephalography (EEG) data during a perceptual categorization task whereby the punis...
Chun, Marvin M.; Johnson, Marcia K.
Attention and memory are typically studied as separate topics, but they are highly intertwined. Here we discuss the relation between memory and two fundamental types of attention: perceptual and reflective. Memory is the persisting consequence of cognitive activities initiated by and/or focused on external information from the environment (perceptual attention) and initiated by and/or focused on internal mental representations (reflective attention). We consider three key questions for advancing a cognitive neuroscience of attention and memory: To what extent do perception and reflection share representational areas? To what extent are the control processes that select, maintain, and manipulate perceptual and reflective information subserved by common areas and networks? During perception and reflection, to what extent are common areas responsible for binding features together to create complex, episodic memories and for reviving them later? Considering similarities and differences in perceptual and reflective attention helps integrate a broad range of findings and raises important unresolved issues.
Murphy, Gillian; Greene, Ciara M
Load Theory (Lavie, 1995, 2005) states that the level of perceptual load in a task (i.e., the amount of information involved in processing task-relevant stimuli) determines the efficiency of selective attention. There is evidence that perceptual load affects distractor processing, with increased inattentional blindness under high load. Given that high load can result in individuals failing to report seeing obvious objects, it is conceivable that load may also impair memory for the scene. The current study is the first to assess the effect of perceptual load on eyewitness memory. Across three experiments (two video-based and one in a driving simulator), the effect of perceptual load on eyewitness memory was assessed. The results showed that eyewitnesses were less accurate under high load, in particular for peripheral details. For example, memory for the central character in the video was not affected by load but memory for a witness who passed by the window at the edge of the scene was significantly worse under high load. High load memories were also more open to suggestion, showing increased susceptibility to leading questions. High visual perceptual load also affected recall for auditory information, illustrating a possible cross-modal perceptual load effect on memory accuracy. These results have implications for eyewitness memory researchers and forensic professionals.
Stotesbury, Hanne; Gaigg, Sebastian B; Kirhan, Saim; Haenschel, Corinna
Schizophrenia Spectrum Disorders (SSD) are known to be characterised by abnormalities in attentional processes, but there are inconsistencies in the literature that remain unresolved. This article considers whether perceptual resource limitations play a role in moderating attentional abnormalities in SSD. According to perceptual load theory, perceptual resource limitations can lead to attenuated or superior performance on dual-task paradigms depending on whether participants are required to process, or attempt to ignore, secondary stimuli. If SSD is associated with perceptual resource limitations, and if it represents the extreme end of an otherwise normally distributed neuropsychological phenotype, schizotypal traits in the general population should lead to disproportionate performance costs on dual-task paradigms as a function of the perceptual task demands. To test this prediction, schizotypal traits were quantified via the Schizotypal Personality Questionnaire (SPQ) in 74 healthy volunteers, who also completed a dual-task signal detection paradigm that required participants to detect central and peripheral stimuli across conditions that varied in the overall number of stimuli presented. The results confirmed decreasing performance as the perceptual load of the task increased. More importantly, significant correlations between SPQ scores and task performance confirmed that increased schizotypal traits, particularly in the cognitive-perceptual domain, are associated with greater performance decrements under increasing perceptual load. These results confirm that attentional difficulties associated with SSD extend sub-clinically into the general population and suggest that cognitive-perceptual schizotypal traits may represent a risk factor for difficulties in the regulation of attention under increasing perceptual load.
Sheridan, Heather; Reingold, Eyal M
The present study used eye tracking methodology to examine rereading benefits for spatially transformed text. Eye movements were monitored while participants read the same target word twice, in two different low-constraint sentence frames. The congruency of perceptual processing was manipulated by either applying the same type of transformation to the word during the first and second presentations (i.e., the congruent condition), or employing two different types of transformations across the two presentations of the word (i.e., the incongruent condition). Perceptual specificity effects were demonstrated such that fixation times for the second presentation of the target word were shorter for the congruent condition compared to the incongruent condition. Moreover, we demonstrated an additional perceptually non-specific effect such that second reading fixation times were shorter for the incongruent condition relative to a baseline condition that employed a normal typography (i.e., non-transformed) during the first presentation and a transformation during the second presentation. Both of these effects (i.e., perceptually specific and perceptually non-specific) were similar in magnitude for high and low frequency words, and both effects persisted across a 1 week lag between the first and second readings. We discuss the present findings in the context of the distinction between conscious and unconscious memory, and the distinction between perceptually versus conceptually driven processing.
van der Helm, Peter A
What is the degree to which knowledge influences visual perceptual processes? This question, which is central to the seeing-versus-thinking debate in cognitive science, is often discussed using examples claimed to be proof of one stance or another. It has, however, also been muddled by the usage of different and unclear definitions of perception. Here, for the well-defined process of perceptual organization, I argue that including speed (or efficiency) into the equation opens a new perspective on the limits of top-down influences of thinking on seeing. While the input of the perceptual organization process may be modifiable and its output enrichable, the process itself seems so fast (or efficient) that thinking hardly has time to intrude and is effective mostly after the fact.
Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold
Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysiological evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing-related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique
Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples
Murphy, Sandra; Spence, Charles; Dalton, Polly
Selective attention is a crucial mechanism in everyday life, allowing us to focus on a portion of incoming sensory information at the expense of other less relevant stimuli. The circumstances under which irrelevant stimuli are successfully ignored have been a topic of scientific interest for several decades now. Over the last 20 years, the perceptual load theory (e.g. Lavie, 1995) has provided one robust framework for understanding these effects within the visual modality. The suggestion is that successful selection depends on the perceptual demands imposed by the task-relevant information. However, less research has addressed the question of whether the same principles hold in audition and, to date, the existing literature provides a mixed picture. Here, we review the evidence for and against the applicability of perceptual load theory in hearing, concluding that this question still awaits resolution. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Sarkar, Sudeep; Boyer, Kim L.
The evolution of perceptual organization in biological vision, and its necessity in advanced computer vision systems, arises from the characteristic that perception, the extraction of meaning from sensory input, is an intelligent process. This is particularly so for higher-order organisms and, by analogy, for more sophisticated computational models. The role of perceptual organization in computer vision systems is explored from four vantage points. First, a brief history of perceptual organization research in both human and computer vision is offered. Second, a classificatory structure is proposed in which to cast perceptual organization research, to clarify both the nomenclature and the relationships among the many contributions. Third, perceptual organization work in computer vision is reviewed in the context of this classificatory structure. Finally, the array of computational techniques applied to perceptual organization problems in computer vision is surveyed.
van Elk, Michiel
Previous studies have shown that one's prior beliefs have a strong effect on perceptual decision-making and attentional processing. The present study extends these findings by investigating how individual differences in paranormal and conspiracy beliefs are related to perceptual and attentional biases. Two field studies were conducted in which visitors of a paranormal fair completed a perceptual decision-making task (i.e. the face/house categorization task; Experiment 1) or a visual attention task (i.e. the global/local processing task; Experiment 2). In the first experiment, it was found that skeptics, compared to believers, more often incorrectly categorized ambiguous face stimuli as representing a house, indicating that disbelief rather than belief in the paranormal drives the bias observed for the categorization of ambiguous stimuli. In the second experiment, it was found that skeptics showed a classical 'global-to-local' interference effect, whereas believers in conspiracy theories were characterized by a stronger 'local-to-global' interference effect. The present study shows that individual differences in paranormal and conspiracy beliefs are associated with perceptual and attentional biases, thereby extending the growing body of work in this field indicating effects of cultural learning on basic perceptual processes.
The main purpose of this study is to specify the basic perceptual dimensions underlying judgments of the physical features which define style in paintings (e.g. salient form, colorful surface, oval contours). The other aim of the study is to correlate these dimensions with the subjective (affective) dimensions of the experience of paintings. In a preliminary study, a set of 25 pairs of elementary perceptual descriptors was empirically specified, and a set of 25 bipolar scales was made (e.g. uncolored-multicolored). In the experiment, 30 subjects judged 24 paintings (taken from the study of Radonjić and Marković, 2004) on the 25 scales. Factor analysis revealed four factors: form (scales: precise, neat, salient form, etc.), color (color contrast, lightness contrast, vivid colors), space (voluminosity, depth, and oval contours), and complexity (multicolored, ornate, detailed). The obtained factors reflect the phenomenological and neural segregation of form, color, and depth processing, and partially of complexity processing (e.g. spatial frequency processing within both the form and color subsystems). The aim of the next step of the analysis was to specify the correlations between two groups of judgments: (a) mean judgments of the 24 paintings on the perceptual factors and (b) mean judgments of the same set of 24 paintings on the subjective (affective) experience factors, i.e. regularity, attraction, arousal, and relaxation (judgments taken from Radonjić and Marković, 2005). The following significant correlations were obtained: regularity-form, regularity-space, attraction-form, and arousal-complexity (negative correlation). The reasons for the unexpected negative correlation between arousal and complexity should be specified in further studies.
Brouwer, G.J.; Tong, F.; Hagoort, P.; van Ee, R.
We employed a parametric psychophysical design in combination with functional imaging to examine the influence of metric changes in perceptual incongruence on perceptual alternation rates and cortical responses. Subjects viewed a bistable stimulus defined by incongruent depth cues; bistability
de Kok, I.A.; Poppe, Ronald Walter; Heylen, Dirk K.J.
We introduce Iterative Perceptual Learning (IPL), a novel approach to learn computational models for social behavior synthesis from corpora of human–human interactions. IPL combines perceptual evaluation with iterative model refinement. Human observers rate the appropriateness of synthesized
Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)
The next era of space exploration, especially the "Mission to Planet Earth," will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case, image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
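The DCT quantization matrix that these techniques optimize enters the codec as in JPEG: each frequency coefficient is divided by its matrix entry and rounded, so larger entries mean coarser coding and a reconstruction error of at most half a step. A toy sketch of that step (a 2x2 matrix with invented values, not Watson's formula; real JPEG uses 8x8 blocks):

```python
def quantize(coeffs, q):
    """Divide each DCT coefficient by its quantization step and round."""
    return [[round(c / s) for c, s in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, q)]

def dequantize(levels, q):
    """Reconstruct approximate coefficients from the integer levels."""
    return [[lev * s for lev, s in zip(lrow, qrow)]
            for lrow, qrow in zip(levels, q)]

# Coarser steps at higher frequencies, where the eye is less sensitive.
Q = [[16, 24], [24, 40]]   # toy 2x2 quantization matrix
C = [[100, -30], [22, 7]]  # toy DCT coefficients of one block

levels = quantize(C, Q)    # [[6, -1], [1, 0]]
recon = dequantize(levels, Q)
# Each reconstructed coefficient lies within half a step of the original.
errors = [abs(c - r)
          for crow, rrow in zip(C, recon)
          for c, r in zip(crow, rrow)]
```

A perceptually optimized matrix raises the steps exactly where the visual system is least sensitive, hiding the quantization error below the visibility threshold.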
Deroost, Natacha; Coomans, Daphné
We examined the role of sequence awareness in a pure perceptual sequence learning design. Participants had to react to the target's colour, which changed according to a perceptual sequence. By varying the mapping of the target's colour onto the response keys, motor responses changed randomly. The effect of sequence awareness on perceptual sequence learning was determined by manipulating the learning instructions (explicit versus implicit) and assessing the amount of sequence awareness after the experiment. In the explicit instruction condition (n = 15), participants were instructed to intentionally search for the colour sequence, whereas in the implicit instruction condition (n = 15), they were left uninformed about the sequenced nature of the task. Sequence awareness after the sequence learning task was tested by means of a questionnaire and the process-dissociation procedure. The results showed that the instruction manipulation had no effect on the amount of perceptual sequence learning. Based on their reports of having actively applied their sequence knowledge during the experiment, participants were subsequently regrouped into a sequence strategy group (n = 14, of which 4 participants came from the implicit instruction condition and 10 from the explicit instruction condition) and a no-sequence strategy group (n = 16, of which 11 participants came from the implicit instruction condition and 5 from the explicit instruction condition). Only participants of the sequence strategy group showed reliable perceptual sequence learning and sequence awareness. These results indicate that perceptual sequence learning depends upon the continuous employment of strategic cognitive control processes on sequence knowledge. Sequence awareness is suggested to be a necessary but not sufficient condition for perceptual learning to take place. Copyright © 2018 Elsevier B.V. All rights reserved.
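The process-dissociation procedure mentioned above separates controlled (explicit) from automatic (implicit) contributions using performance under inclusion and exclusion instructions, following Jacoby's (1991) standard equations. A minimal sketch with made-up response proportions, not data from this study:

```python
def process_dissociation(inclusion, exclusion):
    """Jacoby's (1991) process-dissociation estimates.

    inclusion: proportion of sequence-consistent responses when
               instructed to use the sequence
    exclusion: proportion of sequence-consistent responses when
               instructed to avoid the sequence
    Returns (controlled, automatic):
        C = I - E
        A = E / (1 - C)   (undefined when C == 1)
    """
    controlled = inclusion - exclusion
    automatic = (exclusion / (1 - controlled)
                 if controlled < 1 else float("nan"))
    return controlled, automatic

# Illustrative proportions only:
controlled, automatic = process_dissociation(0.70, 0.30)
# controlled ≈ 0.40, automatic ≈ 0.50
```

The logic: a participant with explicit control succeeds under inclusion but withholds the sequence under exclusion, so the inclusion-exclusion difference indexes control, while residual sequence-consistent responding under exclusion indexes automatic influence.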
McAuliffe, Megan J; Kerr, Sarah E; Gibson, Elizabeth M R; Anderson, Tim; LaShell, Patrick J
To determine how increased vocal loudness and reduced speech rate affect listeners' cognitive-perceptual processing of hypokinetic dysarthric speech associated with Parkinson's disease. Fifty-one healthy listener participants completed a speech perception experiment. Listeners repeated phrases produced by 5 individuals with dysarthria across habitual, loud, and slow speaking modes. Listeners were allocated to habitual (n = 17), loud (n = 17), or slow (n = 17) experimental conditions. Transcripts derived from the phrase repetition task were coded for overall accuracy (i.e., intelligibility), and perceptual error analyses examined how these conditions affected listeners' phonemic mapping (i.e., syllable resemblance) and lexical segmentation (i.e., lexical boundary error analysis). Both speech conditions provided obvious perceptual benefits to listeners. Overall, transcript accuracy was highest in the slow condition. In the loud condition, however, improvement was evidenced across the experiment. An error analysis suggested that listeners in the loud condition prioritized acoustic-phonetic cues in their attempts to resolve the degraded signal, whereas those in the slow condition appeared to preferentially weight lexical stress cues. Increased loudness and reduced rate exhibited differential effects on listeners' perceptual processing of dysarthric speech. The current study highlights the insights that may be gained from a cognitive-perceptual approach.
Bélanger, Nathalie N; Slattery, Timothy J; Mayberry, Rachel I; Rayner, Keith
Recent evidence suggests that, compared with hearing people, deaf people have enhanced visual attention to simple stimuli viewed in the parafovea and periphery. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to preprocess upcoming words and decide where to look next. In the study reported here, we investigated whether auditory deprivation affects low-level visual processing during reading by comparing the perceptual span of deaf signers who were skilled and less-skilled readers with the perceptual span of skilled hearing readers. Compared with hearing readers, the two groups of deaf readers had a larger perceptual span than would be expected given their reading ability. These results provide the first evidence that deaf readers' enhanced attentional allocation to the parafovea is used during complex cognitive tasks, such as reading.
Kawabe, Takahiro; Maruya, Kazushi; Nishida, Shin'ya
Human vision has a remarkable ability to perceive two layers at the same retinal locations, a transparent layer in front of a background surface. Critical image cues to perceptual transparency, studied extensively in the past, are changes in luminance or color that could be caused by light absorptions and reflections by the front layer, but such image changes may not be clearly visible when the front layer consists of a pure transparent material such as water. Our daily experiences with transparent materials of this kind suggest that an alternative potential cue of visual transparency is image deformations of a background pattern caused by light refraction. Although previous studies have indicated that these image deformations, at least static ones, play little role in perceptual transparency, here we show that dynamic image deformations of the background pattern, which could be produced by light refraction on a moving liquid's surface, can produce a vivid impression of a transparent liquid layer without the aid of any other visual cues as to the presence of a transparent layer. Furthermore, a transparent liquid layer perceptually emerges even from a randomly generated dynamic image deformation as long as it is similar to real liquid deformations in its spatiotemporal frequency profile. Our findings indicate that the brain can perceptually infer the presence of "invisible" transparent liquids by analyzing the spatiotemporal structure of dynamic image deformation, for which it uses a relatively simple computation that does not require high-level knowledge about the detailed physics of liquid deformation.
Pannunzi, Mario; Ayneto, Alba; Deco, Gustavo; Sebastián-Gallés, Nuria
Until now, it has been unclear whether social hierarchy can influence sensory or perceptual cognitive processes. We evaluated the effects of social hierarchy on these processes using a basic visual perceptual decision task. We constructed a social hierarchy in which participants performed the perceptual task separately with two covertly simulated players (superior, inferior). Participants were faster (better) when performing the discrimination task with the superior player. We studied the time course of social hierarchy processing using event-related potentials and observed hierarchy effects even in early stages of sensory-perceptual processing, suggesting early top-down modulation by social hierarchy. Moreover, in a parallel analysis, we fitted a drift-diffusion model (DDM) to the results to evaluate the decision-making process in this perceptual task in the context of a social hierarchy. Consistently, the DDM pointed to nondecision time (probably perceptual encoding) as the principal period influenced by social hierarchy. PMID:23946003
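The drift-diffusion model used above treats a choice as noisy evidence accumulation to a bound, with reaction time equal to the accumulation time plus a nondecision component (perceptual encoding and motor output), the period the authors identify as carrying the hierarchy effect. A minimal simulation sketch with illustrative parameter values, not the authors' fitted ones:

```python
import random

def ddm_trial(drift, bound, nondecision, dt=0.001, noise=1.0, rng=random):
    """Simulate one drift-diffusion trial.

    Evidence accumulates with Gaussian noise until it reaches +bound
    (correct) or -bound (error). The returned reaction time adds the
    nondecision component to the accumulation time."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return x > 0, nondecision + t

rng = random.Random(1)
trials = [ddm_trial(drift=1.0, bound=1.0, nondecision=0.3, rng=rng)
          for _ in range(500)]
accuracy = sum(correct for correct, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
# A hierarchy effect on nondecision time would shift mean_rt
# while leaving accuracy unchanged.
```

For these parameters the expected accuracy is roughly 1/(1 + e^(-2·drift·bound/noise²)) ≈ 0.88, independent of the nondecision time, which is what makes the nondecision parameter separable from the decision process when fitting.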
de Jong, M.C.; Knapen, T.; van Ee, R.
Observers continually make unconscious inferences about the state of the world based on ambiguous sensory information. This process of perceptual decision-making may be optimized by learning from experience. We investigated the influence of previous perceptual experience on the interpretation of
Liang, Jiali; Wilkinson, Krista; Sainburg, Robert L
Previous studies proposed that selecting which hand to use for a reaching task appears to be modulated by a factor described as "task difficulty". However, what features of a task might contribute to greater or lesser "difficulty" in the context of hand selection decisions has yet to be determined. There has been evidence that biomechanical and kinematic factors such as movement smoothness and work can predict patterns of selection across the workspace, suggesting a role of predictive cost analysis in hand-selection. We hypothesize that this type of prediction for hand-selection should recruit substantial cognitive resources and thus should be influenced by cognitive-perceptual loading. We test this hypothesis by assessing the role of cognitive-perceptual loading on hand selection decisions, using a visual search task that presents different levels of difficulty (cognitive-perceptual load), as established in previous studies on overall response time and efficiency of visual search. Although the data are necessarily preliminary due to small sample size, our data suggested an influence of cognitive-perceptual load on hand selection, such that the dominant hand was selected more frequently as cognitive load increased. Interestingly, cognitive-perceptual loading also increased cross-midline reaches with both hands. Because crossing midline is more costly in terms of kinematic and kinetic factors, our findings suggest that cognitive processes are normally engaged to avoid costly actions, and that the choice not-to-cross midline requires cognitive resources. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.
Weilnhammer, Veith A; Ludwig, Karin; Hesselmann, Guido; Sterzer, Philipp
During bistable vision, perception oscillates between two mutually exclusive percepts despite constant sensory input. Greater BOLD responses in frontoparietal cortex have been shown to be associated with endogenous perceptual transitions compared with "replay" transitions designed to closely match bistability in both perceptual quality and timing. It has remained controversial, however, whether this enhanced activity reflects causal influences of these regions on processing at the sensory level or, alternatively, an effect of stimulus differences that result in, for example, longer durations of perceptual transitions in bistable perception compared with replay conditions. Using a rotating Lissajous figure in an fMRI experiment on 15 human participants, we controlled for potential confounds of differences in transition duration and confirmed previous findings of greater activity in frontoparietal areas for transitions during bistable perception. In addition, we applied dynamic causal modeling to identify the neural model that best explains the observed BOLD signals in terms of effective connectivity. We found that enhanced activity for perceptual transitions is associated with a modulation of top-down connectivity from frontal to visual cortex, thus arguing for a crucial role of frontoparietal cortex in perceptual transitions during bistable perception.
Curiosity is one of the most basic biological drives in both animals and humans, and has been identified as a key motive for learning and discovery. Despite the importance of curiosity and related behaviors, the topic has been largely neglected in human neuroscience; hence little is known about the neurobiological mechanisms underlying curiosity. We used functional magnetic resonance imaging (fMRI) to investigate what happens in our brain during the induction and subsequent relief of perceptual curiosity. Our core findings were that (i) the induction of perceptual curiosity, through the presentation of ambiguous visual input, activated the anterior insula and anterior cingulate cortex, brain regions sensitive to conflict and arousal; (ii) the relief of perceptual curiosity, through visual disambiguation, activated regions of the striatum that have been related to reward processing; and (iii) the relief of perceptual curiosity was associated with hippocampal activation and enhanced incidental memory. These findings provide the first demonstration of the neural basis of human perceptual curiosity. Our results provide neurobiological support for a classic psychological theory of curiosity, which holds that curiosity is an aversive condition of increased arousal whose termination is rewarding and facilitates memory.
Beer, Anton L; Vartak, Devavrat; Greenlee, Mark W
Perceptual learning is a special type of non-declarative learning that involves experience-dependent plasticity in sensory cortices. The cholinergic system is known to modulate declarative learning. In particular, reduced levels or efficacy of the neurotransmitter acetylcholine were found to facilitate declarative memory consolidation. However, little is known about the role of the cholinergic system in memory consolidation of non-declarative learning. Here we compared two groups of non-smoking men who learned a visual texture discrimination task (TDT). One group received chewing tobacco containing nicotine for 1 h directly following the TDT training. The other group received a similar tasting control substance without nicotine. Electroencephalographic recordings during substance consumption showed reduced alpha activity and P300 latencies in the nicotine group compared to the control group. When re-tested on the TDT the following day, both groups responded more accurately and more rapidly than during training. These improvements were specific to the retinal location and orientation of the texture elements of the TDT suggesting that learning involved early visual cortex. A group comparison showed that learning effects were more pronounced in the nicotine group than in the control group. These findings suggest that oral consumption of nicotine enhances the efficacy of nicotinic acetylcholine receptors. Our findings further suggest that enhanced efficacy of the cholinergic system facilitates memory consolidation in perceptual learning (and possibly other types of non-declarative learning). In that regard acetylcholine seems to affect consolidation processes in perceptual learning in a different manner than in declarative learning. Alternatively, our findings might reflect dose-dependent cholinergic modulation of memory consolidation. This article is part of a Special Issue entitled 'Cognitive Enhancers'. Copyright © 2012 Elsevier Ltd. All rights reserved.
The Where's Waldo problem concerns how individuals can rapidly learn to search a scene to detect, attend, recognize, and look at a valued target object in it. This article develops the ARTSCAN Search neural model to clarify how brain mechanisms across the What and Where cortical streams are coordinated to solve the Where's Waldo problem. The What stream learns positionally-invariant object representations, whereas the Where stream controls positionally-selective spatial and action representations. The model overcomes deficiencies of these computationally complementary properties through What and Where stream interactions. Where stream processes of spatial attention and predictive eye movement control modulate What stream processes whereby multiple view- and positionally-specific object categories are learned and associatively linked to view- and positionally-invariant object categories through bottom-up and attentive top-down interactions. Gain fields control the coordinate transformations that enable spatial attention and predictive eye movements to carry out this role. What stream cognitive-emotional learning processes enable the focusing of motivated attention upon the invariant object categories of desired objects. What stream cognitive names or motivational drives can prime a view- and positionally-invariant object category of a desired target object. A volitional signal can convert these primes into top-down activations that can, in turn, prime What stream view- and positionally-specific categories. When it also receives bottom-up activation from a target, such a positionally-specific category can cause an attentional shift in the Where stream to the positional representation of the target, and an eye movement can then be elicited to foveate it. These processes describe interactions among brain regions that include visual cortex, parietal cortex, inferotemporal cortex, prefrontal cortex, amygdala, basal ganglia, and superior colliculus.
Choi, Lark Kwon; You, Jaehee; Bovik, Alan Conrad
We propose a referenceless perceptual fog density prediction model based on natural scene statistics (NSS) and fog aware statistical features. The proposed model, called Fog Aware Density Evaluator (FADE), predicts the visibility of a foggy scene from a single image without reference to a corresponding fog-free image, without dependence on salient objects in a scene, without side geographical camera information, without estimating a depth-dependent transmission map, and without training on human-rated judgments. FADE only makes use of measurable deviations from statistical regularities observed in natural foggy and fog-free images. Fog aware statistical features that define the perceptual fog density index derive from a space domain NSS model and the observed characteristics of foggy images. FADE not only predicts perceptual fog density for the entire image, but also provides a local fog density index for each patch. The predicted fog density using FADE correlates well with human judgments of fog density taken in a subjective study on a large foggy image database. As applications, FADE not only accurately assesses the performance of defogging algorithms designed to enhance the visibility of foggy images, but also is well suited for image defogging. A new FADE-based referenceless perceptual image defogging, dubbed DEnsity of Fog Assessment-based DEfogger (DEFADE) achieves better results for darker, denser foggy images as well as on standard foggy images than the state of the art defogging methods. A software release of FADE and DEFADE is available online for public use: http://live.ece.utexas.edu/research/fog/index.html.
Watanabe, Takeo; Sasaki, Yuka
Visual perceptual learning (VPL) is a long-term performance increase resulting from visual perceptual experience. Task-relevant VPL of a feature results from training on a task for which that feature is relevant. Task-irrelevant VPL arises as a result of exposure to a feature irrelevant to the trained task. At least two serious problems exist. First, there is controversy over which stage of information processing is changed in association with task-relevant VPL. Second, no model has yet explained both task-relevant and task-irrelevant VPL. Here we propose a dual plasticity model in which feature-based plasticity is a change in the representation of the learned feature, and task-based plasticity is a change in the processing of the trained task. Although both types of plasticity underlie task-relevant VPL, only feature-based plasticity underlies task-irrelevant VPL. This model provides a new comprehensive framework in which apparently contradictory results can be explained.
Van Gulick, Ana E.; Gauthier, Isabel
In classic category learning studies, subjects typically learn to assign items to one of two categories, with no further distinction between how items on each side of the category boundary should be treated. In real life, however, we often learn categories that dictate further processing goals, for instance with objects in only one category requiring further individuation. Using methods from category learning and perceptual expertise, we studied the perceptual consequences of experience with objects in tasks that rely on attention to different dimensions in different parts of the space. In two experiments, subjects first learned to categorize complex objects from a single morphspace into two categories based on one morph dimension, and then learned to perform a different task, either naming or a local feature judgment, for each of the two categories. A same-different discrimination test before and after each training measured sensitivity to feature dimensions of the space. After initial categorization, sensitivity increased along the category-diagnostic dimension. After task association, sensitivity increased more for the category that was named, especially along the non-diagnostic dimension. The results demonstrate that local attentional weights, associated with individual exemplars as a function of task requirements, can have lasting effects on perceptual representations. PMID:24820671
Szpiro, Sarit F A; Spering, Miriam; Carrasco, Marisa
Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response.
Perceptual mapping approaches have been widely used in visual information processing for multimedia and Internet of Things (IoT) applications. This paper proposes the Accumulative Lifting Difference (ALD), a texture mapping model based on a low-complexity lifting wavelet transform, combined with luminance masking to create an efficient perceptual mapping model for estimating Just Noticeable Distortion (JND) in digital images. In addition to requiring only low-complexity operations, experimental results show that the proposed model can tolerate considerably more JND noise than previously proposed models.
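A one-level lifting step makes the "lifting difference" idea concrete. Since the abstract does not give the exact ALD formulation, the Haar-style predict/update pair below, with accumulated detail magnitudes as a texture-activity proxy, is only an illustrative sketch, not the authors' model.

```python
import numpy as np

def lifting_differences(row):
    """One level of a Haar-style lifting transform on a 1-D signal.

    Split into even/odd samples, predict the odd samples from the even
    ones, and keep the prediction residuals (details) as a cheap local
    measure of texture activity.
    """
    even, odd = row[0::2], row[1::2]
    detail = odd - even            # predict step: residual texture energy
    approx = even + detail / 2.0   # update step: coarse approximation
    return approx, detail

row = np.array([10.0, 12.0, 11.0, 15.0, 14.0, 14.0])
approx, detail = lifting_differences(row)
# accumulating |detail| over a neighbourhood gives a texture-activity
# estimate that a JND model could scale by a luminance-masking factor
texture_activity = np.abs(detail).sum()
```

Accumulating such detail magnitudes over local windows, and weighting by luminance masking, is the general recipe the abstract describes for building a perceptual JND map.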
Solovey, Guillermo; Shalom, Diego; Pérez-Schuster, Verónica; Sigman, Mariano
Practice can enhance perceptual sensitivity, a well-known phenomenon called perceptual learning. However, the effect of practice on subjective perception has received little attention. We approach this problem from a visual psychophysics and computational modeling perspective. In a sequence of visual search experiments, subjects significantly increased their ability to detect a "trained target". Before and after training, subjects performed two psychophysical protocols that parametrically vary the visibility of the "trained target": an attentional blink task and a visual masking task. We found that confidence increased after learning only in the attentional blink task. Despite large differences in some observables and task settings, we identify common mechanisms for decision-making and confidence. Specifically, our behavioral results and computational model suggest that perceptual ability is independent of processing time, indicating that changes in early cortical representations are effective, and that learning changes decision criteria to convey choice and confidence.
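The claim that learning shifts decision criteria while early representations change separately maps naturally onto signal detection theory. The helper below is a generic equal-variance SDT computation with an assumed log-linear correction for extreme rates; it is a textbook illustration, not the paper's actual model.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Equal-variance signal detection measures from a 2x2 count table.

    d' indexes perceptual sensitivity; the criterion c indexes the
    decision rule, which learning can shift independently of d'.
    """
    z = NormalDist().inv_cdf
    # log-linear correction avoids infinite z-scores at rates of 0 or 1
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# hypothetical counts from a detection block
d_prime, criterion = sdt_measures(hits=40, misses=10,
                                  false_alarms=15, correct_rejections=35)
```

Comparing d' and c before and after training is the standard way to ask whether practice changed sensitivity, the decision rule, or both.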
Kim, Kyungmi; Yi, Do-Joon
In the present study, the effect of memory suppression on subsequent perceptual processing of visual objects was examined within a modified think/no-think paradigm. Suppressing memories of visual objects significantly impaired subsequent perceptual identification of those objects when they were briefly encountered (Experiment 1) and when they were presented in noise (Experiment 2), relative to performance on baseline items for which participants did not undergo suppression training. However, in Experiment 3, when perceptual identification was performed on mirror-reversed images of to-be-suppressed objects, no impairment was observed. These findings, analogous to those showing forgetting of suppressed words in long-term memory, suggest that suppressing memories of visual objects might be mediated by direct inhibition of perceptual representations, which, in turn, impairs later perception of them. This study provides strong support for the role of inhibitory mechanisms in memory control and suggests a tight link between higher-order cognitive operations and perceptual processing.
Mottron, Laurent; Dawson, Michelle; Soulières, Isabelle; Hubert, Benedicte; Burack, Jake
We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception of first order static stimuli, diminished perception of complex movement, autonomy of low-level information processing toward higher-order operations, and differential relation between perception and general intelligence. Increased perceptual expertise may be implicated in the choice of special ability in savant autistics, and in the variability of apparent presentations within PDD (autism with and without typical speech, Asperger syndrome) in non-savant autistics. The overfunctioning of brain regions typically involved in primary perceptual functions may explain the autistic perceptual endophenotype.
Santamaría-García, Hernando; Pannunzi, Mario; Ayneto, Alba; Deco, Gustavo; Sebastián-Gallés, Nuria
So far, it has been unclear whether social hierarchy can influence sensory or perceptual cognitive processes. We evaluated the effects of social hierarchy on these processes using a basic visual perceptual decision task. We constructed a social hierarchy in which participants performed the perceptual task separately with two covertly simulated players (superior, inferior). Participants were faster (better) when performing the discrimination task with the superior player. We studied the time course when ...
Chen, Nihong; Cai, Peng; Zhou, Tiangang; Thompson, Benjamin; Fang, Fang
Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying the transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion was decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning could dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. These neural changes might serve as the neural substrate for the transfer of perceptual learning.
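The multivariate pattern analysis step can be illustrated with a deliberately simple leave-one-out nearest-centroid classifier on simulated "voxel" patterns. The study's actual MVPA pipeline is not specified in the abstract, so everything below, including the two simulated stimulus classes, is an assumed toy setup.

```python
import numpy as np

def nearest_centroid_decode(patterns, labels):
    """Leave-one-out nearest-centroid decoding of voxel patterns.

    Each left-out trial is assigned the label of the closer class-mean
    pattern; accuracy above chance indicates that the region carries
    information about the stimulus.
    """
    patterns, labels = np.asarray(patterns), np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        train = np.arange(len(labels)) != i
        centroids = {c: patterns[train & (labels == c)].mean(axis=0)
                     for c in np.unique(labels)}
        guess = min(centroids,
                    key=lambda c: np.linalg.norm(patterns[i] - centroids[c]))
        correct += guess == labels[i]
    return correct / len(labels)

rng = np.random.default_rng(3)
coherent = rng.normal(0.5, 1.0, (20, 50))    # simulated voxel responses
noisy = rng.normal(-0.5, 1.0, (20, 50))      # to two motion conditions
acc = nearest_centroid_decode(np.vstack([coherent, noisy]), [0] * 20 + [1] * 20)
```

Running such a decoder separately on each visual area, before and after training, is the logic behind asking where each kind of motion is decoded most accurately.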
Pinal, Diego; Zurrón, Montserrat; Díaz, Fernando
Working memory (WM) involves three cognitive events: information encoding, maintenance, and retrieval; these are supported by brain activity in a network of frontal, parietal and temporal regions. Manipulation of WM load and duration of the maintenance period can modulate this activity. Although such modulations have been widely studied using the event-related potentials (ERP) technique, a precise description of the time course of brain activity during encoding and retrieval is still required. Here, we used this technique and principal component analysis to assess the time course of brain activity during encoding and retrieval in a delayed match to sample task. We also investigated the effects of memory load and duration of the maintenance period on ERP activity. Brain activity was similar during information encoding and retrieval and comprised six temporal factors, which closely matched the latency and scalp distribution of some ERP components: P1, N1, P2, N2, P300, and a slow wave. Changes in memory load modulated task performance and yielded variations in frontal lobe activation. Moreover, the P300 amplitude was smaller in the high than in the low load condition during encoding and retrieval. Conversely, the slow wave amplitude was higher in the high than in the low load condition during encoding, and the same was true for the N2 amplitude during retrieval. Thus, during encoding, memory load appears to modulate the processing resources for context updating and post-categorization processes, and during retrieval it modulates resources for stimulus classification and context updating. Besides, despite the lack of differences in task performance related to duration of the maintenance period, larger N2 amplitude and stronger activation of the left temporal lobe after long than after short maintenance periods were found during information retrieval. Thus, results regarding the duration of maintenance period were complex, and future work is required to test the time-based decay theory predictions.
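The temporal principal component analysis step described above can be sketched in a few lines: epochs form a trials-by-timepoints matrix whose principal components are "temporal factors". The simulated epochs and the P300-like bump below are hypothetical stand-ins, not the study's data.

```python
import numpy as np

def temporal_pca(epochs, n_factors=3):
    """Temporal PCA of ERP epochs (trials x timepoints).

    Rows are trials/conditions, columns are timepoints; the rows of Vt
    are temporal factors whose loadings peak where ERP components such
    as P1, N1 or P300 carry most of the variance.
    """
    centered = epochs - epochs.mean(axis=0)
    # SVD of the trial x time matrix: Vt rows are the temporal factors
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = (s ** 2) / (s ** 2).sum()
    return vt[:n_factors], explained[:n_factors]

rng = np.random.default_rng(0)
t = np.linspace(0, 0.8, 200)                    # 0-800 ms epoch
p300 = np.exp(-((t - 0.3) / 0.05) ** 2)         # simulated P300-like bump
trials = rng.normal(1.0, 0.2, (40, 1)) * p300 + rng.normal(0, 0.05, (40, 200))
factors, explained = temporal_pca(trials, n_factors=2)
```

With real data, each factor's latency and scalp distribution would then be matched against the classical ERP components.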
Deiber, Marie-Pierre; Missonnier, Pascal; Bertrand, Olivier; Gold, Gabriel; Fazio-Costa, Lara; Ibañez, Vicente; Giannakopoulos, Panteleimon
Working memory involves the short-term storage and manipulation of information necessary for cognitive performance, including comprehension, learning, reasoning and planning. Although electroencephalogram (EEG) rhythms are modulated during working memory, the temporal relationship of EEG oscillations with the eliciting event has not been well studied. In particular, the dynamics of the neural network supporting memory processes may be best captured in induced oscillations, characterized by a loose temporal link with the stimulus. In order to differentiate induced from evoked functional processes, the present study proposes a time-frequency analysis of the 3 to 30 Hz EEG oscillatory activity in a verbal n-back working memory paradigm. Control tasks were designed to identify oscillatory activity related to stimulus presentation (passive task) and focused attention to the stimulus (detection task). Evoked theta activity (4-8 Hz) phase-locked to the visual stimulus was evidenced in the parieto-occipital region for all tasks. In parallel, induced theta activity was recorded in the frontal region for detection and n-back memory tasks, but not for the passive task, suggesting its dependency on focused attention to the stimulus. Sustained induced oscillatory activity was identified in relation to working memory in the theta and beta (15-25 Hz) frequency bands, larger for the highest memory load. Its late occurrence limited to nonmatched items suggests that it could be related to item retention and active maintenance for further task requirements. Induced theta and beta activities displayed respectively a frontal and parietal topographical distribution, providing further functional information on the fronto-posterior network supporting working memory.
One of the most important issues concerning the foundations of conscious perception centers on the question of whether perceptual consciousness is rich or sparse. The overflow argument uses a form of 'iconic memory' to argue that perceptual consciousness is richer (i.e., has a higher capacity) than cognitive access: when observing a complex scene we are conscious of more than we can report or think about. Recently, the overflow argument has been challenged both empirically and conceptually. This paper reviews the controversy, arguing that proponents of sparse perception are committed to the postulation of (i) a peculiar kind of generic conscious representation that has no independent rationale and (ii) an unmotivated form of unconscious representation that in some cases conflicts with what we know about unconscious representation.
Volk, Christer Peter; Lavandier, Mathieu; Bech, Søren
The perceptual differences between the sound reproductions of headphones were investigated in a pair-wise comparison study. Two musical excerpts were reproduced over 21 headphones positioned on a mannequin and recorded. The recordings were then processed and reproduced over one set of headphones ...
Li, Tianhao; Fu, Qian-Jie
Purpose: To determine whether perceptual adaptation improves voice gender discrimination of spectrally shifted vowels and, if so, which acoustic cues contribute to the improvement. Method: Voice gender discrimination was measured for 10 normal-hearing subjects, during 5 days of adaptation to spectrally shifted vowels, produced by processing the…
McAuliffe, Megan J.; Kerr, Sarah E.; Gibson, Elizabeth M. R.; Anderson, Tim; LaShell, Patrick J.
Purpose: To determine how increased vocal loudness and reduced speech rate affect listeners' cognitive-perceptual processing of hypokinetic dysarthric speech associated with Parkinson's disease. Method: Fifty-one healthy listener participants completed a speech perception experiment. Listeners repeated phrases produced by 5 individuals…
Garcia, Julian Martinez-Villalba; Jeong, Cheol-Ho; Brunskog, Jonas
This study proposes a numerical and experimental framework for evaluating the perceptual aspect of the diffuse field condition with intended final use in music auditoria. Multiple Impulse Responses are simulated based on the time domain Poisson process with increasing reflection density. Different...
Lenay, Charles; Stewart, John
Work aimed at studying social cognition in an interactionist perspective often encounters substantial theoretical and methodological difficulties: identifying the significant behavioral variables; recording them without disturbing the interaction; and distinguishing between: (a) the necessary and sufficient contributions of each individual partner for a collective dynamics to emerge; (b) features which derive from this collective dynamics and escape from the control of the individual partners; and (c) the phenomena arising from this collective dynamics which are subsequently appropriated and used by the partners. We propose a minimalist experimental paradigm as a basis for this conceptual discussion: by reducing the sensory inputs to a strict minimum, we force a spatial and temporal deployment of the perceptual activities, which makes it possible to obtain a complete recording and control of the dynamics of interaction. After presenting the principles of this minimalist approach to perception, we describe a series of experiments on two major questions in social cognition: recognizing the presence of another intentional subject; and phenomena of imitation. In both cases, we propose explanatory schema which render an interactionist approach to social cognition clear and explicit. Starting from our earlier work on perceptual crossing we present a new experiment on the mechanisms of reciprocal recognition of the perceptual intentionality of the other subject: the emergent collective dynamics of the perceptual crossing can be appropriated by each subject. We then present an experimental study of opaque imitation (when the subjects cannot see what they themselves are doing). This study makes it possible to characterize what a properly interactionist approach to imitation might be. In conclusion, we draw on these results to show how an interactionist approach can contribute to a fully social approach to social cognition.
Green, C Shawn; Li, Renjie; Bavelier, Daphne
Action video games have been shown to enhance behavioral performance on a wide variety of perceptual tasks, from those that require effective allocation of attentional resources across the visual scene, to those that demand the successful identification of fleetingly presented stimuli. Importantly, these effects have not only been shown in expert action video game players, but a causative link has been established between action video game play and enhanced processing through training studies. Although an account based solely on attention fails to capture the variety of enhancements observed after action game playing, a number of models of perceptual learning are consistent with the observed results, with behavioral modeling favoring the hypothesis that avid video game players are better able to form templates for, or extract the relevant statistics of, the task at hand. This may suggest that the neural site of learning is in areas where information is integrated and actions are selected; yet changes in low-level sensory areas cannot be ruled out.
Amitay, Sygal; Halliday, Lorna; Taylor, Jenny; Sohoglu, Ediz; Moore, David R
Although feedback on performance is generally thought to promote perceptual learning, the role and necessity of feedback remain unclear. We investigated the effect on frequency-discrimination learning of providing varying amounts of positive feedback while listeners attempted to discriminate between three identical tones. Using this novel procedure, the feedback was meaningless and random with respect to the listeners' responses, but the amount of feedback provided (or lack thereof) affected learning. We found that a group of listeners who received positive feedback on 10% of the trials improved their performance on the task (learned), while groups provided either with excess (90%) or with no feedback did not learn. Superimposed on these group data, however, individual listeners showed other systematic changes of performance. In particular, those with lower non-verbal IQ who trained in the no-feedback condition performed more poorly after training. This pattern of results cannot be accounted for by learning models that ascribe an external teacher role to feedback. We suggest, instead, that feedback is used to monitor performance on the task in relation to its perceived difficulty, and that listeners who learn without the benefit of feedback are adept at self-monitoring of performance, a trait that also supports better performance on non-verbal IQ tests. These results show that 'perceptual' learning is strongly influenced by top-down processes of motivation and intelligence.
Comins, Jordan A; Gentner, Timothy Q
Since Chomsky's pioneering work on syntactic structures, comparative psychologists interested in the study of language evolution have targeted pattern complexity, using formal mathematical grammars, as the key to organizing language-relevant cognitive processes across species. This focus on formal syntactic complexity, however, often disregards the close interaction in real-world signals between the structure of a pattern and its constituent elements. Whether such features of natural auditory signals shape pattern generalization is unknown. In the present paper, we train birds to recognize differently patterned strings of natural signals (song motifs). Instead of focusing on the complexity of the overtly reinforced patterns, we ask how the perceptual groupings of pattern elements influence the generalization of pattern knowledge. We find that learning and perception of training patterns are agnostic to the perceptual features of the underlying elements. Surprisingly, however, these same features constrain the generalization of pattern knowledge, and thus its broader use. Our results demonstrate that the restricted focus of comparative language research on formal models of syntactic complexity is, at best, insufficient to understand pattern use.
Mitchell, Chris; Hall, Geoffrey
We present a review of recent studies of perceptual learning conducted with nonhuman animals. The focus of this research has been to elucidate the mechanisms by which mere exposure to a pair of similar stimuli can increase the ease with which those stimuli are discriminated. These studies establish an important role for 2 mechanisms, one involving inhibitory associations between the unique features of the stimuli, the other involving a long-term habituation process that enhances the relative salience of these features. We then examine recent work investigating equivalent perceptual learning procedures with human participants. Our aim is to determine the extent to which the phenomena exhibited by people are susceptible to explanation in terms of the mechanisms revealed by the animal studies. Although we find no evidence that associative inhibition contributes to the perceptual learning effect in humans, initial detection of unique features (those that allow discrimination between 2 similar stimuli) appears to depend on a habituation process. Once the unique features have been detected, a tendency to attend to those features and to learn about their properties enhances subsequent discrimination. We conclude that the effects obtained with humans engage mechanisms additional to those seen in animals but argue that, for the most part, these have their basis in learning processes that are common to animals and people. In a final section, we discuss some implications of this analysis of perceptual learning for other aspects of experimental psychology and consider some potential applications.
Schizophrenia Spectrum Disorders (SSD) are known to be characterised by abnormalities in attentional processes, but there are inconsistencies in the literature that remain unresolved. This article considers whether perceptual resource limitations play a role in moderating attentional abnormalities in SSD. According to perceptual load theory, perceptual resource limitations can lead to attenuated or superior performance on dual-task paradigms depending on whether participants are required to process, or attempt to ignore, secondary stimuli. If SSD is associated with perceptual resource limitations, and if it represents the extreme end of an otherwise normally distributed neuropsychological phenotype, schizotypal traits in the general population should lead to disproportionate performance costs on dual-task paradigms as a function of the perceptual task demands. To test this prediction, schizotypal traits were quantified via the Schizotypal Personality Questionnaire (SPQ) in 74 healthy volunteers, who also completed a dual-task signal detection paradigm that required participants to detect central and peripheral stimuli across conditions that varied in the overall number of stimuli presented. The results confirmed decreasing performance as the perceptual load of the task increased. More importantly, significant correlations between SPQ scores and task performance confirmed that increased schizotypal traits, particularly in the cognitive-perceptual domain, are associated with greater performance decrements under increasing perceptual load. These results confirm that attentional difficulties associated with SSD extend sub-clinically into the general population and suggest that cognitive-perceptual schizotypal traits may represent a risk factor for difficulties in the regulation of attention under increasing perceptual load.
Kurylo, Daniel D; Waxman, Richard; Kidron, Rachel; Silverstein, Steven M
Training on visual tasks improves performance on basic and higher order visual capacities. Such improvement has been linked to changes in connectivity among mediating neurons. We investigated whether training effects occur for perceptual grouping. It was hypothesized that repeated engagement of integration mechanisms would enhance grouping processes. Thirty-six participants underwent 15 sessions of training on a visual discrimination task that required perceptual grouping. Participants viewed 20 × 20 arrays of dots or Gabor patches and indicated whether the array appeared grouped as vertical or horizontal lines. Across trials stimuli became progressively disorganized, contingent upon successful discrimination. Four visual dimensions were examined, in which grouping was based on similarity in luminance, color, orientation, and motion. Psychophysical thresholds of grouping were assessed before and after training. Results indicate that performance in all four dimensions improved with training. Training on a control condition, which paralleled the discrimination task but without a grouping component, produced no improvement. In addition, training on only the luminance and orientation dimensions improved performance for those conditions as well as for grouping by color, on which training had not occurred. However, improvement from partial training did not generalize to motion. Results demonstrate that a training protocol emphasizing stimulus integration enhanced perceptual grouping. Results suggest that neural mechanisms mediating grouping by common luminance and/or orientation contribute to those mediating grouping by color but do not share resources for grouping by common motion. Results are consistent with theories of perceptual learning emphasizing plasticity in early visual processing regions.
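Making stimulus difficulty contingent on successful discrimination is, in essence, an adaptive staircase. Below is a generic 2-down/1-up sketch with a hypothetical observer model and arbitrary "organization level" units; the study's actual tracking rule is not specified in the abstract.

```python
import random

def run_staircase(p_correct_at, start=10.0, step=1.0, n_trials=120, seed=7):
    """2-down/1-up adaptive staircase (converges near 70.7% correct).

    `p_correct_at(level)` is a stand-in observer model giving the
    probability of a correct grouping judgement at a given stimulus
    organization level (hypothetical units: higher = easier). The level
    drops after two consecutive correct trials and rises after each
    error, tracking the observer's grouping threshold.
    """
    rng = random.Random(seed)
    level, streak, direction = start, 0, 0
    reversals = []
    for _ in range(n_trials):
        correct = rng.random() < p_correct_at(level)
        if correct:
            streak += 1
            if streak < 2:
                continue
            streak, move = 0, -step   # two correct in a row: make it harder
        else:
            streak, move = 0, +step   # any error: make it easier
        if direction and move != direction:
            reversals.append(level)   # staircase changed direction
        direction = move
        level = max(0.0, level + move)
    # threshold estimate: mean level at the last few reversals
    return sum(reversals[-6:]) / len(reversals[-6:])

# hypothetical observer whose accuracy grows linearly with organization
observer = lambda lvl: min(0.98, max(0.5, lvl / 10.0))
threshold = run_staircase(observer)
```

Comparing such threshold estimates before and after the 15 training sessions is the logic of the pre/post psychophysical assessment.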
Nemes, V A; Whitaker, D; Heron, J; McKeefry, D J
Current models of short-term visual perceptual memory invoke mechanisms that are closely allied to low-level perceptual discrimination mechanisms. The purpose of this study was to investigate the extent to which human visual perceptual memory for spatial frequency is based upon multiple, spatially tuned channels similar to those found in the earliest stages of visual processing. To this end, we measured how performance on a delayed spatial frequency discrimination paradigm was affected by the introduction of interfering or 'memory masking' stimuli of variable spatial frequency during the delay period. Masking stimuli were shown to induce shifts in the points of subjective equality (PSE) when their spatial frequencies were within a bandwidth of 1.2 octaves of the reference spatial frequency. When mask spatial frequencies differed by more than this value, there was no change in the PSE from baseline levels. This selective pattern of masking was observed for different spatial frequencies and demonstrates the existence of multiple, spatially tuned mechanisms in visual perceptual memory. Memory masking effects were also found to occur for horizontal separations of up to 6 deg between the masking and test stimuli and lacked any orientation selectivity. These findings add further support to the view that low-level sensory processing mechanisms form the basis for the retention of spatial frequency information in perceptual memory. However, the broad range of transfer of memory masking effects across spatial location and other dimensions indicates more long-range, long-duration interactions between spatial frequency channels that are likely to rely on contributions from neural processes located in higher visual areas.
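The 1.2-octave bandwidth criterion is just a log-ratio of frequencies; a minimal helper (with a hypothetical 2 c/deg reference, not a value taken from the study) makes the selection rule explicit.

```python
import math

def octave_separation(f1, f2):
    """Distance between two spatial frequencies in octaves (log2 units)."""
    return abs(math.log2(f1 / f2))

# a mask is 'within band' when it falls inside the ~1.2-octave
# selectivity reported above (reference frequency is hypothetical)
reference = 2.0  # c/deg
within_band = octave_separation(reference, 4.4) <= 1.2
```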
Lametti, D.R.; Oostwoud Wijdenes, L.; Bonaiuto, J.; Bestmann, S.; Rothwell, J.C.
Neuroimaging studies suggest that the cerebellum might play a role in both speech perception and speech perceptual learning. However, it remains unclear what this role is: does the cerebellum directly contribute to the perceptual decision? Or does it contribute to the timing of perceptual decisions?
In this review, we explore how reward signals shape perceptual learning in animals and humans. Perceptual learning is the well-established phenomenon by which extensive practice elicits selective improvement in one’s perceptual discrimination of basic visual features, such as oriented lines or moving stimuli. While perceptual learning has long been thought to rely on ‘top-down’ processes, such as attention and decision-making, a wave of recent findings suggests that these higher-level processes are, in fact, not necessary. Rather, these recent findings indicate that reward signals alone, in the absence of the contribution of higher-level cognitive processes, are sufficient to drive the benefits of perceptual learning. Here, we will review the literature tying reward signals to perceptual learning. Based on these findings, we propose dual underlying mechanisms that give rise to perceptual learning: one mechanism that operates ‘automatically’ and is tied directly to reward signals, and another mechanism that involves more ‘top-down’, goal-directed computations.
Rodriguez Zivic, Pablo H; Shifres, Favio; Cecchi, Guillermo A
The brain processes temporal statistics to predict future events and to categorize perceptual objects. These statistics, called expectancies, are found in music perception, and they span a variety of different features and time scales. Specifically, there is evidence that music perception involves strong expectancies regarding the distribution of a melodic interval, namely, the distance between two consecutive notes within the context of another. The recent availability of a large Western music dataset, consisting of the historical record condensed as melodic interval counts, has opened new possibilities for data-driven analysis of musical perception. In this context, we present an analytical approach that, based on cognitive theories of music expectation and machine learning techniques, recovers a set of factors that accurately identifies historical trends and stylistic transitions between the Baroque, Classical, Romantic, and Post-Romantic periods. We also offer a plausible musicological and cognitive interpretation of these factors, allowing us to propose them as data-driven principles of melodic expectation.
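The corpus statistic at the heart of this analysis, counts of melodic intervals between consecutive notes, is easy to sketch. The melody and MIDI encoding below are illustrative stand-ins, not the paper's dataset.

```python
from collections import Counter

def interval_distribution(midi_pitches):
    """Empirical distribution of melodic intervals (in semitones).

    Counts signed differences between consecutive notes, the kind of
    statistic aggregated over a historical corpus in the abstract above.
    """
    intervals = [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]
    counts = Counter(intervals)
    total = sum(counts.values())
    return {ivl: n / total for ivl, n in sorted(counts.items())}

# opening of "Twinkle, Twinkle, Little Star" in MIDI note numbers (C4 = 60)
melody = [60, 60, 67, 67, 69, 69, 67]
dist = interval_distribution(melody)
```

Aggregating such distributions per composer or decade yields the interval-count matrix on which factor-recovery methods can then identify stylistic periods.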
Stapel, D.; Semin, G.R.
Language is a tool that directs attention to different aspects of reality. Using participants from the same linguistic community, the authors demonstrate in 4 studies that metasemantic features of linguistic categories influence basic perceptual processes. More specifically, the hypothesis that
Maniscalco, Brian; McCurdy, Li Yan; Odegaard, Brian; Lau, Hakwan
Why do experimenters give subjects short breaks in long behavioral experiments? Whereas previous studies suggest it is difficult to maintain attention and vigilance over long periods of time, it is unclear precisely what mechanisms benefit from rest after short experimental blocks. Here, we evaluate decline in both perceptual performance and metacognitive sensitivity (i.e., how well confidence ratings track perceptual decision accuracy) over time and investigate whether characteristics of prefrontal cortical areas correlate with these measures. Whereas a single-process signal detection model predicts that these two forms of fatigue should be strongly positively correlated, a dual-process model predicts that rates of decline may dissociate. Here, we show that these measures consistently exhibited negative or near-zero correlations, as if engaged in a trade-off relationship, suggesting that different mechanisms contribute to perceptual and metacognitive decisions. Despite this dissociation, the two mechanisms likely depend on common resources, which could explain their trade-off relationship. Based on structural MRI brain images of individual human subjects, we assessed gray matter volume in the frontal polar area, a region that has been linked to visual metacognition. Variability of frontal polar volume correlated with individual differences in behavior, indicating the region may play a role in supplying common resources for both perceptual and metacognitive vigilance. Additional experiments revealed that reduced metacognitive demand led to superior perceptual vigilance, providing further support for this hypothesis. Overall, results indicate that during breaks between short blocks, it is the higher-level perceptual decision mechanisms, rather than lower-level sensory machinery, that benefit most from rest. Perceptual task performance declines over time (the so-called vigilance decrement), but the relationship between vigilance in perception and metacognition has
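The single- versus dual-process contrast above is framed in signal detection terms. As a rough illustrative sketch (not the meta-d'-style modeling such studies typically use), type-1 perceptual sensitivity and a crude type-2 confidence-accuracy index might be computed as follows; the log-linear correction and the confidence-gap index are assumptions of this sketch:

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Type-1 perceptual sensitivity d' from a yes/no task.
    A log-linear correction keeps z-scores finite at rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def confidence_gap(conf_correct, conf_error):
    """Crude type-2 index: mean confidence on correct trials minus mean
    confidence on error trials. Zero means confidence does not track
    accuracy at all; larger values mean better metacognitive tracking."""
    return (sum(conf_correct) / len(conf_correct)
            - sum(conf_error) / len(conf_error))
```

Tracking `dprime` and `confidence_gap` separately across blocks is the kind of analysis that lets the two forms of vigilance decline dissociate, as the study reports.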
Tan, Jinfeng; Zhao, Yuanfang; Wang, Lijun; Tian, Xia; Cui, Yan; Yang, Qian; Pan, Weigang; Zhao, Xiaoyue; Chen, Antao
The perceptual load theory in selective attention literature proposes that the interference from task-irrelevant distractor is eliminated when perceptual capacity is fully consumed by task-relevant information. However, the biased competition model suggests that the contents of working memory (WM) can guide attentional selection automatically, even when this guidance is detrimental to visual search. An intriguing but unsolved question is what will happen when selective attention is influenced by both perceptual load and WM guidance. To study this issue, behavioral performances and event-related potentials (ERPs) were recorded when participants were presented with a cue to either identify or hold in memory and had to perform a visual search task subsequently, under conditions of low or high perceptual load. Behavioural data showed that high perceptual load eliminated the attentional capture by WM. The ERP results revealed an obvious WM guidance effect in P1 component with invalid trials eliciting larger P1 than neutral trials, regardless of the level of perceptual load. The interaction between perceptual load and WM guidance was significant for the posterior N1 component. The memory guidance effect on N1 was eliminated by high perceptual load. Standardized Low Resolution Electrical Tomography Analysis (sLORETA) showed that the WM guidance effect and the perceptual load effect on attention can be localized into the occipital area and parietal lobe, respectively. Merely identifying the cue produced no effect on the P1 or N1 component. These results suggest that in selective attention, the information held in WM could capture attention at the early stage of visual processing in the occipital cortex. Interestingly, this initial capture of attention by WM could be modulated by the level of perceptual load and the parietal lobe mediates target selection at the discrimination stage.
Clark, Torin K.; Lu, Yue M.; Karmali, Faisal
Perceptual decision making is fundamental to a broad range of fields including neurophysiology, economics, medicine, advertising, law, etc. Although recent findings have yielded major advances in our understanding of perceptual decision making, decision making as a function of time and frequency (i.e., decision-making dynamics) is not well understood. To limit the review length, we focus most of this review on human findings. Animal findings, which are extensively reviewed elsewhere, are included when beneficial or necessary. We attempt to put these various findings and data sets, which can appear to be unrelated in the absence of a formal dynamic analysis, into context using published models. Specifically, by adding appropriate dynamic mechanisms (e.g., high-pass filters) to existing models, it appears that a number of otherwise seemingly disparate findings from the literature might be explained. One hypothesis that arises through this dynamic analysis is that decision making includes phasic (high pass) neural mechanisms, an evidence accumulator and/or some sort of midtrial decision-making mechanism (e.g., peak detector and/or decision boundary). PMID:26467513
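The proposed phasic (high-pass) mechanism can be made concrete with a toy model: feed the same step stimulus into a plain evidence accumulator and into one whose input is first high-pass filtered. The filter coefficient, bound, and stimulus values below are arbitrary illustrative choices, not parameters from any published model:

```python
import random

def high_pass(signal, alpha=0.9):
    """First-order high-pass filter: passes changes in the input and
    attenuates sustained levels (a toy 'phasic' mechanism)."""
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in signal:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

def accumulate_to_bound(evidence, bound=1.0, noise=0.0, seed=1):
    """Integrate evidence samples until a decision bound is crossed.
    Returns (crossing_index, final_value); crossing_index is None
    if the bound is never reached."""
    rng = random.Random(seed)
    v = 0.0
    for t, e in enumerate(evidence):
        v += e + rng.gauss(0.0, noise)
        if abs(v) >= bound:
            return t, v
    return None, v

# A sustained step stimulus: zero for 5 samples, then constant evidence.
step = [0.0] * 5 + [0.1] * 95

t_raw, _ = accumulate_to_bound(step)                # tonic integration
t_phasic, _ = accumulate_to_bound(high_pass(step))  # phasic input
```

With a sustained stimulus, the raw accumulator integrates steadily to the bound, while the high-pass-filtered input yields only a transient burst whose total area stays below the bound: a phasic mechanism responds to change rather than to sustained evidence, which is the dynamic signature the review highlights.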
Chiang, I-Ping; Lin, Chih-Ying; Wang, Kaisheng M
Many companies have launched their products or services online as a new business focus, but only a few of them have survived the competition and made profits. The most important key to an online business's success is to create "brand value" for the customers. Although the concept of online brand has been discussed in previous studies, there is no empirical study on the measurement of online branding. As Web 2.0 emerges to be critical to online branding, the purpose of this study was to measure Taiwan's major Web sites with a number of personality traits to build a perceptual map for online brands. A pretest identified 10 most representative online brand perceptions. The results of the correspondence analysis showed five groups in the perceptual map. This study provided a practical view of the associations and similarities among online brands for potential alliance or branding strategies. The findings also suggested that brand perceptions can be used with identified consumer needs and behaviors to better position online services. The brand perception map in the study also contributed to a better understanding of the online brands in Taiwan.
Chun, Marvin M; Johnson, Marcia K
Attention and memory are typically studied as separate topics, but they are highly intertwined. Here we discuss the relation between memory and two fundamental types of attention: perceptual and reflective. Memory is the persisting consequence of cognitive activities initiated by and/or focused on external information from the environment (perceptual attention) and initiated by and/or focused on internal mental representations (reflective attention). We consider three key questions for advancing a cognitive neuroscience of attention and memory: to what extent do perception and reflection share representational areas? To what extent are the control processes that select, maintain, and manipulate perceptual and reflective information subserved by common areas and networks? During perception and reflection, to what extent are common areas responsible for binding features together to create complex, episodic memories and for reviving them later? Considering similarities and differences in perceptual and reflective attention helps integrate a broad range of findings and raises important unresolved issues. Copyright © 2011 Elsevier Inc. All rights reserved.
De Niear, Matthew A; Koo, Bonhwang; Wallace, Mark T
There has been a growing interest in developing behavioral tasks to enhance temporal acuity as recent findings have demonstrated changes in temporal processing in a number of clinical conditions. Prior research has demonstrated that perceptual training can enhance temporal acuity both within and across different sensory modalities. Although certain forms of unisensory perceptual learning have been shown to be dependent upon task difficulty, this relationship has not been explored for multisensory learning. The present study sought to determine the effects of task difficulty on multisensory perceptual learning. Prior to and following a single training session, participants completed a simultaneity judgment (SJ) task, which required them to judge whether a visual stimulus (flash) and auditory stimulus (beep) presented in synchrony or at various stimulus onset asynchronies (SOAs) occurred synchronously or asynchronously. During the training session, participants completed the same SJ task but received feedback regarding the accuracy of their responses. Participants were randomly assigned to one of three levels of difficulty during training: easy, moderate, and hard, which were distinguished based on the SOAs used during training. We report that only the most difficult (i.e., hard) training protocol enhanced temporal acuity. We conclude that perceptual training protocols for enhancing multisensory temporal acuity may be optimized by employing audiovisual stimuli for which it is difficult to discriminate temporal synchrony from asynchrony.
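A common summary statistic for SJ data of this kind is the width of the temporal binding window. The sketch below uses a crude criterion-based estimate on made-up response proportions; published analyses typically fit psychometric functions instead:

```python
def binding_window(soas_ms, p_synchronous, criterion=0.5):
    """Width of the temporal binding window: the span of SOAs at which
    the proportion of 'synchronous' responses meets a criterion.
    Assumes soas_ms is sorted and the response profile is unimodal."""
    above = [soa for soa, p in zip(soas_ms, p_synchronous) if p >= criterion]
    return max(above) - min(above) if above else 0

# Hypothetical pre- vs post-training proportions (made up for illustration):
soas = [-300, -200, -100, 0, 100, 200, 300]
pre  = [0.2, 0.6, 0.9, 1.0, 0.9, 0.6, 0.2]
post = [0.1, 0.3, 0.8, 1.0, 0.8, 0.3, 0.1]
```

On these hypothetical data the window narrows from 400 ms to 200 ms after training, which is the direction of change that "enhanced temporal acuity" refers to.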
Andrillon, Thomas; Kouider, Sid; Agus, Trevor; Pressnitzer, Daniel
Experience continuously imprints on the brain at all stages of life. The traces it leaves behind can produce perceptual learning, which drives adaptive behavior to previously encountered stimuli. Recently, it has been shown that even random noise, a type of sound devoid of acoustic structure, can trigger fast and robust perceptual learning after repeated exposure. Here, by combining psychophysics, electroencephalography (EEG), and modeling, we show that the perceptual learning of noise is associated with evoked potentials, without any salient physical discontinuity or obvious acoustic landmark in the sound. Rather, the potentials appeared whenever a memory trace was observed behaviorally. Such memory-evoked potentials were characterized by early latencies and auditory topographies, consistent with a sensory origin. Furthermore, they were generated even under conditions of diverted attention. The EEG waveforms could be modeled as standard evoked responses to auditory events (N1-P2), triggered by idiosyncratic perceptual features acquired through learning. Thus, we argue that the learning of noise is accompanied by the rapid formation of sharp neural selectivity to arbitrary and complex acoustic patterns, within sensory regions. Such a mechanism bridges the gap between the short-term and longer-term plasticity observed in the learning of noise [2, 4-6]. It could also be key to the processing of natural sounds within auditory cortices, suggesting that the neural code for sound source identification will be shaped by experience as well as by acoustics. Copyright © 2015 Elsevier Ltd. All rights reserved.
The concept of perceptual independence is ubiquitous in psychology. It addresses the question of whether two (or more) dimensions are perceived independently. Several authors have proposed perceptual independence (or its lack thereof) as a viable measure of holistic face perception (Loftus, Oberg, & Dillon, Psychological Review 111:835-863, 2004; Wenger & Ingvalson, Learning, Memory, and Cognition 28:872-892, 2002). According to this notion, the processing of facial features occurs in an interactive manner. Here, I examine this idea from the perspective of two theories of perceptual independence: the multivariate uncertainty analysis (MUA; Garner & Morton, Definitions, models, and experimental paradigms. Psychological Bulletin 72:233-259, 1969), and the general recognition theory (GRT; Ashby & Townsend, Psychological Review 93:154-179, 1986). The goals of the study were to (1) introduce the MUA, (2) examine various possible relations between MUA and GRT using numerical simulations, and (3) apply the MUA to two consensual markers of holistic face perception: recognition of facial features (Farah, Wilson, Drain, & Tanaka, Psychological Review 105:482-498, 1998) and the composite face effect (Young, Hellawell, & Hay, Perception 16:747-759, 1987). The results suggest that facial holism is generated by violations of several types of perceptual independence. They highlight the important theoretical role played by converging operations in the study of holistic face perception.
Long, Bria; Konkle, Talia; Cohen, Michael A; Alvarez, George A
Understanding how perceptual and conceptual representations are connected is a fundamental goal of cognitive science. Here, we focus on a broad conceptual distinction that constrains how we interact with objects--real-world size. Although there appear to be clear perceptual correlates for basic-level categories (apples look like other apples, oranges look like other oranges), the perceptual correlates of broader categorical distinctions are largely unexplored, i.e., do small objects look like other small objects? Because there are many kinds of small objects (e.g., cups, keys), there may be no reliable perceptual features that distinguish them from big objects (e.g., cars, tables). Contrary to this intuition, we demonstrated that big and small objects have reliable perceptual differences that can be extracted by early stages of visual processing. In a series of visual search studies, participants found target objects faster when the distractor objects differed in real-world size. These results held when we broadly sampled big and small objects, when we controlled for low-level features and image statistics, and when we reduced objects to texforms--unrecognizable textures that loosely preserve an object's form. However, this effect was absent when we used more basic textures. These results demonstrate that big and small objects have reliably different mid-level perceptual features, and suggest that early perceptual information about broad-category membership may influence downstream object perception, recognition, and categorization processes. (c) 2015 APA, all rights reserved.
Olsen, Sune L.; Agerkvist, Finn T.; MacDonald, Ewen
While non-linear distortion in loudspeakers decreases audio quality, the perceptual consequences can vary substantially. This paper investigates the metric Rnonlin, which was developed to predict subjective measurements of sound quality in nonlinear systems. The generalisability of the metric...... the perceptual consequences of non-linear distortion....
Barsalou's (1999) perceptual theory of knowledge echoes the pre-20th century tradition of conceptualizing all knowledge as inherently perceptual. Hence conceptual space has an infinite number of dimensions and heavily relies on perceptual experience. Osgood's (1952) semantic differential technique was developed as a bridge between perception and semantics. We updated Osgood's methodology in order to investigate current issues in visual cognition by: (1) using a 2D rather than a 1D space to place the concepts, (2) having dimensions that were perceptual while the targets were conceptual, (3) coupling visual experience with another two perceptual domains (audition and touch), (4) analyzing the data using MDS (not factor analysis). In three experiments, subjects (N = 57) judged five concrete and five abstract words on seven bipolar scales in three perceptual modalities. The 2D space led to different patterns of response compared to the classic 1D space. MDS revealed that perceptual modalities are not equally informative for mapping word-meaning distances (Mantel min = −.23; Mantel max = .88). There were no reliable differences due to test administration modality (paper vs. computer), nor scale orientation. The present findings are consistent with multidimensionality of conceptual space, a perceptual basis for knowledge, and dynamic characteristics of concepts discussed in contemporary theories.
Bedford, Felice L.
Addresses two questions that may be unique to perceptual learning: What are the circumstances that produce learning? and What is the content of learning? Suggests a critical principle for each question. Provides a discussion of perceptual learning theory, how learning occurs, and what gets learned. Includes a 121-item bibliography. (DR)
De Weerd, P; Smith, E; Greenberg, P
After a few seconds, a figure steadily presented in peripheral vision becomes perceptually filled-in by its background, as if it "disappeared". We report that directing attention to the color, shape, or location of a figure increased the probability of perceiving filling-in compared to unattended figures, without modifying the time required for filling-in. This effect could be augmented by boosting attention. Furthermore, the frequency distribution of filling-in response times for attended figures could be predicted by multiplying the frequencies of response times for unattended figures with a constant. We propose that, after failure of figure-ground segregation, the neural interpolation processes that produce perceptual filling-in are enhanced in attended figure regions. As filling-in processes are involved in surface perception, the present study demonstrates that even very early visual processes are subject to modulation by cognitive factors.
IJzerman, Hans; Regenberg, Nina F E; Saddlemyer, Justin; Koole, Sander L
Linguistic category priming is a novel paradigm to examine automatic influences of language on cognition (Semin, 2008). An initial article reported that priming abstract linguistic categories (adjectives) led to more global perceptual processing, whereas priming concrete linguistic categories (verbs) led to more local perceptual processing (Stapel & Semin, 2007). However, this report was compromised by data fabrication by the first author, so that it remains unclear whether or not linguistic category priming influences perceptual processing. To fill this gap in the literature, the present article reports 12 studies among Dutch and US samples examining the perceptual effects of linguistic category priming. The results yielded no evidence of linguistic category priming effects. These findings are discussed in relation to other research showing cultural variations in linguistic category priming effects (IJzerman, Saddlemyer, & Koole, 2014). The authors conclude by highlighting the importance of conducting and publishing replication research for achieving scientific progress. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Remington, Anna M; Swettenham, John G; Lavie, Nilli
Autism spectrum disorder (ASD) research portrays a mixed picture of attentional abilities with demonstrations of enhancements (e.g., superior visual search) and deficits (e.g., higher distractibility). Here we test a potential resolution derived from the Load Theory of Attention (e.g., Lavie, 2005). In Load Theory, distractor processing depends on the perceptual load of the task and as such can only be eliminated under high load that engages full capacity. We hypothesize that ASD involves enhanced perceptual capacity, leading to the superior performance and increased distractor processing previously reported. Using a signal-detection paradigm, we test this directly and demonstrate that, under higher levels of load, perceptual sensitivity was reduced in typical adults but not in adults with ASD. These findings confirm our hypothesis and offer a promising solution to the previous discrepancies by suggesting that increased distractor processing in ASD results not from a filtering deficit but from enhanced perceptual capacity.
Liu, Z; Jacobs, D W; Basri, R
Since the seminal work of the Gestalt psychologists, there has been great interest in understanding what factors determine the perceptual organization of images. While the Gestaltists demonstrated the significance of grouping cues such as similarity, proximity and good continuation, it has not been well understood whether their catalog of grouping cues is complete--in part due to the paucity of effective methodologies for examining the significance of various grouping cues. We describe a novel, objective method to study perceptual grouping of planar regions separated by an occluder. We demonstrate that the stronger the grouping between two such regions, the harder it will be to resolve their relative stereoscopic depth. We use this new method to call into question many existing theories of perceptual completion (Ullman, S. (1976). Biological Cybernetics, 25, 1-6; Shashua, A., & Ullman, S. (1988). 2nd International Conference on Computer Vision (pp. 321-327); Parent, P., & Zucker, S. (1989). IEEE Transactions on Pattern Analysis and Machine Intelligence, 11, 823-839; Kellman, P. J., & Shipley, T. F. (1991). Cognitive psychology, Liveright, New York; Heitger, R., & von der Heydt, R. (1993). A computational model of neural contour processing, figure-ground segregation and illusory contours. In International Conference on Computer Vision (pp. 32-40); Mumford, D. (1994). Algebraic geometry and its applications, Springer, New York; Williams, L. R., & Jacobs, D. W. (1997). Neural Computation, 9, 837-858) that are based on Gestalt grouping cues by demonstrating that convexity plays a strong role in perceptual completion. In some cases convexity dominates the effects of the well known Gestalt cue of good continuation. While convexity has been known to play a role in figure/ground segmentation (Rubin, 1927; Kanizsa & Gerbino, 1976), this is the first demonstration of its importance in perceptual completion.
Parks, Nathan A; Hilimire, Matthew R; Corballis, Paul M
The perceptual load theory of attention posits that attentional selection occurs early in processing when a task is perceptually demanding but occurs late in processing otherwise. We used a frequency-tagged steady-state evoked potential paradigm to investigate the modality specificity of perceptual load-induced distractor filtering and the nature of neural-competitive interactions between task and distractor stimuli. EEG data were recorded while participants monitored a stream of stimuli occurring in rapid serial visual presentation (RSVP) for the appearance of previously assigned targets. Perceptual load was manipulated by assigning targets that were identifiable by color alone (low load) or by the conjunction of color and orientation (high load). The RSVP task was performed alone and in the presence of task-irrelevant visual and auditory distractors. The RSVP stimuli, visual distractors, and auditory distractors were "tagged" by modulating each at a unique frequency (2.5, 8.5, and 40.0 Hz, respectively), which allowed each to be analyzed separately in the frequency domain. We report three important findings regarding the neural mechanisms of perceptual load. First, we replicated previous findings of within-modality distractor filtering and demonstrated a reduction in visual distractor signals with high perceptual load. Second, auditory steady-state distractor signals were unaffected by manipulations of visual perceptual load, consistent with the idea that perceptual load-induced distractor filtering is modality specific. Third, analysis of task-related signals revealed that visual distractors competed with task stimuli for representation and that increased perceptual load appeared to resolve this competition in favor of the task stimulus.
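A frequency-tagged design works because a stimulus modulated at f Hz drives a steady-state response at exactly f Hz, so each stream can be read out separately in the frequency domain. A minimal sketch with a synthetic signal; the 2.5, 8.5, and 40.0 Hz tags are from the study, while the sampling rate, duration, and component amplitudes are made up:

```python
import math

def amplitude_at(signal, freq_hz, fs_hz):
    """Fourier amplitude of a sampled signal at one tag frequency,
    straight from the DFT definition (2|X(f)|/N for a real sinusoid)."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq_hz * k / fs_hz)
             for k, x in enumerate(signal))
    im = sum(-x * math.sin(2 * math.pi * freq_hz * k / fs_hz)
             for k, x in enumerate(signal))
    return 2 * math.hypot(re, im) / n

# Synthetic 'EEG': three tagged components at the study's frequencies.
# 2 s at 500 Hz gives each tag an integer number of cycles (exact bins).
fs = 500
samples = [k / fs for k in range(int(fs * 2.0))]
eeg = [1.00 * math.sin(2 * math.pi * 2.5 * t)
       + 0.50 * math.sin(2 * math.pi * 8.5 * t)
       + 0.25 * math.sin(2 * math.pi * 40.0 * t) for t in samples]
```

Comparing `amplitude_at(eeg, f, fs)` across load conditions for each tag frequency is, in essence, how distractor filtering can be measured separately for the RSVP stream and the visual and auditory distractors.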
Nicolás Alejandro Serrano
The main objective of this paper is to show that perceptual conceptualism can be understood as an empirically meaningful position and, furthermore, that there is some degree of empirical support for its main theses. In order to do this, I will start by offering an empirical reading of the conceptualist position, and making three predictions from it. Then, I will consider recent experimental results from cognitive sciences that seem to point towards those predictions. I will conclude that, while the evidence offered by those experiments is far from decisive, it is enough not only to show that conceptualism is an empirically meaningful position but also that there is empirical support for it.
Bocast, Christopher S.
A portfolio dissertation that began as acoustic ecology and matured into perceptual ecology, centered on ecomusicology, bioacoustics, and translational audio-based media works with environmental perspectives. The place of music in Western eco-cosmology through time provides a basis for structuring an environmental history of human sound perception. That history suggests that music may stabilize human mental activity, and that an increased musical practice may be essential for the human project. An overview of recent antecedents preceding the emergence of acoustic ecology reveals structural foundations from 20th century culture that underpin modern sound studies. The contextual role that Aldo Leopold, Jacob von Uexkull, John Cage, Marshall McLuhan, and others played in anticipating the development of acoustic ecology as an interdiscipline is detailed. This interdisciplinary aspect of acoustic ecology is defined and defended, while new developments like soundscape ecology are addressed, though ultimately sound studies will need to embrace a broader concept of full-spectrum "sensory" or "perceptual" ecology. The bioacoustic fieldwork done on spawning sturgeon emphasized this necessity. That study yielded scientific recordings and spectrographic analyses of spawning sounds produced by lake sturgeon, Acipenser fulvescens, during reproduction in natural habitats in the Lake Winnebago watershed in Wisconsin. Recordings were made on the Wolf and Embarrass River during the 2011-2013 spawning seasons. Several specimens were dissected to investigate possible sound production mechanisms; no sonic musculature was found. Drumming sounds, ranging from 5 to 7 Hz fundamental frequency, verified the infrasonic nature of previously undocumented "sturgeon thunder". Other characteristic noises of sturgeon spawning including low-frequency rumbles and hydrodynamic sounds were identified. Intriguingly, high-frequency signals resembling electric organ discharges were discovered. These
Cosman, Joshua D; Mordkoff, J Toby; Vecera, Shaun P
A dominant account of selective attention, perceptual load theory, proposes that when attentional resources are exhausted, task-irrelevant information receives little attention and goes unrecognized. However, the flanker effect-typically used to assay stimulus identification-requires an arbitrary mapping between a stimulus and a response. We looked for failures of flanker identification by using a more-sensitive measure that does not require arbitrary stimulus-response mappings: the correlated flankers effect. We found that flanking items that were task-irrelevant but that correlated with target identity produced a correlated flanker effect. Participants were faster on trials in which the irrelevant flanker had previously correlated with the target than when it did not. Of importance, this correlated flanker effect appeared regardless of perceptual load, occurring even in high-load displays that should have abolished flanker identification. Findings from a standard flanker task replicated the basic perceptual load effect, with flankers not affecting response times under high perceptual load. Our results indicate that task-irrelevant information can be processed to a high level (identification), even under high perceptual load. This challenges a strong account of high perceptual load effects that hypothesizes complete failures of stimulus identification under high perceptual load. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Milner, A David; Cavina-Pratesi, Cristiana
It is argued here that apperceptive object agnosia (generally now known as visual form agnosia) is in reality not a kind of agnosia, but rather a form of "imperception" (to use the term coined by Hughlings Jackson). We further argue that its proximate cause is a bilateral loss (or functional loss) of the visual form processing systems embodied in the human lateral occipital cortex (area LO). According to the dual-system model of cortical visual processing elaborated by Milner and Goodale (2006), area LO constitutes a crucial component of the ventral stream, and indeed is essential for providing the figural qualities inherent in our normal visual perception of the world. According to this account, the functional loss of area LO would leave only spared visual areas within the occipito-parietal dorsal stream - dedicated to the control of visually-guided actions - potentially able to provide some aspects of visual shape processing in patients with apperceptive agnosia. We review the relevant evidence from such individuals, concentrating particularly on the well-researched patient D.F. We conclude that studies of this kind can provide useful pointers to an understanding of the processing characteristics of parietal-lobe visual mechanisms and their interactions with occipitotemporal perceptual systems in the guidance of action. Copyright © 2018 Elsevier B.V. All rights reserved.
Germar, Markus; Schlemmer, Alexander; Krug, Kristine; Voss, Andreas; Mojzisch, Andreas
Classic studies on social influence used simple perceptual decision-making tasks to examine how the opinions of others change individuals' judgments. Since then, one of the most fundamental questions in social psychology has been whether social influence can alter basic perceptual processes. To address this issue, we used a diffusion model analysis. Diffusion models provide a stochastic approach for separating the cognitive processes underlying speeded binary decisions. Following this approach, our study is the first to disentangle whether social influence on decision making is due to altering the uptake of available sensory information or due to shifting the decision criteria. In two experiments, we found consistent evidence for the idea that social influence alters the uptake of available sensory evidence. By contrast, participants did not adjust their decision criteria.
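The distinction the diffusion model draws, between the uptake of sensory evidence (drift rate) and decision criteria (here, the starting point), can be illustrated with a toy random-walk simulation. All parameter values and trial counts below are arbitrary sketch choices, not the fitted model from the study:

```python
import random

def diffusion_trial(drift, bound, start=0.0, noise=1.0, dt=0.001, rng=None):
    """One diffusion-model trial: evidence drifts toward +bound (option A)
    or -bound (option B); returns +1 or -1 for the choice made."""
    rng = rng or random.Random()
    v = start
    step_sd = noise * dt ** 0.5
    while abs(v) < bound:
        v += drift * dt + rng.gauss(0.0, step_sd)
    return 1 if v > 0 else -1

def choice_rate(drift, bound, start=0.0, n=2000, seed=7):
    """Proportion of simulated trials ending in option A."""
    rng = random.Random(seed)
    return sum(diffusion_trial(drift, bound, start, rng=rng) == 1
               for _ in range(n)) / n

# Two distinct routes to a bias toward option A (all values arbitrary):
baseline     = choice_rate(drift=0.0, bound=0.3)               # no influence
biased_drift = choice_rate(drift=3.0, bound=0.3)               # evidence uptake
biased_start = choice_rate(drift=0.0, bound=0.3, start=0.15)   # criterion shift
```

Raising the drift rate mimics altered uptake of sensory evidence; shifting the starting point mimics a relaxed decision criterion. Both push choices toward option A, which is why a model-based analysis of full choice and response-time distributions is needed to tell the two mechanisms apart.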
Babel, Molly; McGuire, Grant
Research has shown that processing dynamics on the perceiver's end determine aesthetic pleasure. Specifically, typical objects, which are processed more fluently, are perceived as more attractive. We extend this notion of perceptual fluency to judgments of vocal aesthetics. Vocal attractiveness has traditionally been examined with respect to sexual dimorphism and the apparent size of a talker, as reconstructed from the acoustic signal, despite evidence that gender-specific speech patterns are learned social behaviors. In this study, we report on a series of three experiments using 60 voices (30 females) to compare the relationship between judgments of vocal attractiveness, stereotypicality, and gender categorization fluency. Our results indicate that attractiveness and stereotypicality are highly correlated for female and male voices. Stereotypicality and categorization fluency were also correlated for male voices, but not female voices. Crucially, stereotypicality and categorization fluency interacted to predict attractiveness, suggesting the role of perceptual fluency is present, but nuanced, in judgments of human voices. © 2014 Cognitive Science Society, Inc.
Valdés-Conroy, Berenice; Hinojosa, José A; Román, Francisco J; Romero-Ferreiro, Verónica
Building on evidence for embodied representations, we investigated whether Spanish spatial terms map onto the NEAR/FAR perceptual division of space. Using a long horizontal display, we measured congruency effects during the processing of spatial terms presented in NEAR or FAR space. Across three experiments, we manipulated the task demands in order to investigate the role of endogenous attention in linguistic and perceptual space mapping. We predicted congruency effects only when spatial properties were relevant for the task (reaching estimation task, Experiment 1) but not when attention was allocated to other features (lexical decision, Experiment 2; and color, Experiment 3). Results showed faster responses for words presented in NEAR space in all experiments. Consistent with our hypothesis, congruency effects were observed only when a reaching estimate was requested. Our results add important evidence for the role of top-down processing in congruency effects from embodied representations of spatial terms. Copyright © 2017 Cognitive Science Society, Inc.
Rey, Amandine Eve; Riou, Benoit; Muller, Dominique; Dabic, Stéphanie; Versace, Rémy
Does a visual mask need to be perceptually present to disrupt processing? In the present research, we proposed to explore the link between perceptual and memory mechanisms by demonstrating that a typical sensory phenomenon (visual masking) can be replicated at a memory level. Experiment 1 highlighted an interference effect of a visual mask on the…
Jennifer M D Yoon
Visual illusions and other perceptual phenomena can be used as tools to uncover the otherwise hidden constructive processes that give rise to perception. Although many perceptual processes are assumed to be universal, variable susceptibility to certain illusions and perceptual effects across populations suggests a role for factors that vary culturally. One striking phenomenon is seen with two-tone images: photos reduced to two tones, black and white. Deficient recognition is observed in young children under conditions that trigger automatic recognition in adults. Here we show a similar lack of cue-triggered perceptual reorganization in the Pirahã, a hunter-gatherer tribe with limited exposure to modern visual media, suggesting that such recognition is experience- and culture-specific.
Banai, Karen; Yifat, Rachel
Although the contribution of perceptual processes to language skills during infancy is well recognized, the role of perception in linguistic processing beyond infancy is not well understood. In the experiments reported here, we asked whether manipulating the perceptual context in which stimuli are presented across trials influences how preschool children perform visual (shape-size identification; Experiment 1) and auditory (syllable identification; Experiment 2) tasks. Another goal was to determine whether the sensitivity to perceptual context can explain part of the variance in oral language skills in typically developing preschool children. Perceptual context was manipulated by changing the relative frequency with which target visual (Experiment 1) and auditory (Experiment 2) stimuli were presented in arrays of fixed size, and identification of the target stimuli was tested. Oral language skills were assessed using vocabulary, word definition, and phonological awareness tasks. Changes in perceptual context influenced the performance of the majority of children on both identification tasks. Sensitivity to perceptual context accounted for 7% to 15% of the variance in language scores. We suggest that context effects are an outcome of a statistical learning process. Therefore, the current findings demonstrate that statistical learning can facilitate both visual and auditory identification processes in preschool children. Furthermore, consistent with previous findings in infants and in older children and adults, individual differences in statistical learning were found to be associated with individual differences in language skills of preschool children. Copyright © 2015 Elsevier Inc. All rights reserved.
Cosman, Joshua D; Vecera, Shaun P
Attentional capture by abrupt onsets can be modulated by several factors, including the complexity, or perceptual load, of a scene. We have recently demonstrated that observers are less likely to be captured by abruptly appearing, task-irrelevant stimuli when they perform a search that is high, as opposed to low, in perceptual load (Cosman & Vecera, 2009), consistent with perceptual load theory. However, recent results indicate that onset frequency can influence stimulus-driven capture, with infrequent onsets capturing attention more often than did frequent onsets. Importantly, in our previous task, an abrupt onset was present on every trial, and consequently, attentional capture might have been affected by both onset frequency and perceptual load. In the present experiment, we examined whether onset frequency influences attentional capture under conditions of high perceptual load. When onsets were presented frequently, we replicated our earlier results; attentional capture by onsets was modulated under conditions of high perceptual load. Importantly, however, when onsets were presented infrequently, we observed robust capture effects. These results conflict with a strong form of load theory and, instead, suggest that exposure to the elements of a task (e.g., abrupt onsets) combines with high perceptual load to modulate attentional capture by task-irrelevant information.
Computational modeling of human perceptual-motor and cognitive performance based on a comprehensive, detailed information-processing architecture leads to new insights about the components of working memory...
Gabriela Tavares; Pietro Perona; Antonio Rangel
Perceptual decisions requiring the comparison of spatially distributed stimuli that are fixated sequentially might be influenced by fluctuations in visual attention. We used two psychophysical tasks with human subjects to investigate the extent to which visual attention influences simple perceptual choices, and to test the extent to which the attentional Drift Diffusion Model (aDDM) provides a good computational description of how attention affects the underlying decision processes. We find e...
Zhe Charles Zhou; Chunxiu Yu; Kristin K. Sellers; Flavio Fröhlich
Visual discrimination requires sensory processing followed by a perceptual decision. Despite a growing understanding of visual areas in this behavior, it is unclear what role top-down signals from prefrontal cortex play, in particular as a function of perceptual difficulty. To address this gap, we investigated how neurons in dorso-lateral frontal cortex (dl-FC) of freely-moving ferrets encode task variables in a two-alternative forced choice visual discrimination task with high- and low-contr...
Parks, Nathan A; Beck, Diane M; Kramer, Arthur F
The perceptual load theory of attention proposes that the degree to which visual distractors are processed is a function of the attentional demands of a task: greater demands increase filtering of irrelevant distractors. The spatial configuration of such filtering is unknown. Here, we used steady-state visual evoked potentials (SSVEPs) in conjunction with time-domain event-related potentials (ERPs) to investigate the distribution of load-induced distractor suppression and task-relevant enhancement in the visual field. Electroencephalogram (EEG) was recorded while subjects performed a foveal go/no-go task that varied in perceptual load. Load-dependent distractor suppression was assessed by presenting a contrast-reversing ring at one of three eccentricities (2, 6, or 11°) during performance of the go/no-go task. Rings contrast-reversed at 8.3 Hz, allowing load-dependent changes in distractor processing to be tracked in the frequency domain. ERPs were calculated to the onset of stimuli in the load task to examine load-dependent modulation of task-relevant processing. Results showed that the amplitude of the distractor SSVEP (8.3 Hz) was attenuated under high perceptual load (relative to low load) at the most proximal (2°) eccentricity but not at more eccentric locations (6 or 11°). Task-relevant ERPs revealed a significant increase in N1 amplitude under high load. These results are consistent with a center-surround configuration of load-induced enhancement and suppression in the visual field.
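The frequency-tagging logic behind the SSVEP measure, tracking a stimulus by the spectral amplitude at its flicker frequency, can be sketched as a single-bin discrete Fourier transform. The sampling rate and window length below are assumptions for illustration:

```python
import math

def ssvep_amplitude(signal, fs, f_tag):
    """Amplitude of a steady-state response at the tagging frequency.

    Projects the samples onto sine and cosine at f_tag (a single-bin
    discrete Fourier transform), so changes in processing of a stimulus
    flickering at f_tag appear as changes in this amplitude.
    """
    n = len(signal)
    re = sum(v * math.cos(2 * math.pi * f_tag * i / fs) for i, v in enumerate(signal))
    im = sum(v * math.sin(2 * math.pi * f_tag * i / fs) for i, v in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

# A pure 8.3 Hz oscillation of amplitude 1 is recovered, given a window
# holding an integer number of cycles (here 10 s at an assumed 500 Hz).
fs = 500.0
sig = [math.sin(2 * math.pi * 8.3 * i / fs) for i in range(5000)]
print(ssvep_amplitude(sig, fs, 8.3))
```

In practice EEG analyses use a full FFT with windowing and noise-floor estimates, but the single-bin projection is the core of how an 8.3 Hz tag isolates distractor processing.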
Meuwese, Julia D I; Post, Ruben A G; Scholte, H Steven; Lamme, Victor A F
It has been proposed that visual attention and consciousness are separate [Koch, C., & Tsuchiya, N. Attention and consciousness: Two distinct brain processes. Trends in Cognitive Sciences, 11, 16-22, 2007] and possibly even orthogonal processes [Lamme, V. A. F. Why visual attention and awareness are different. Trends in Cognitive Sciences, 7, 12-18, 2003]. Attention and consciousness converge when conscious visual percepts are attended and hence become available for conscious report. In such a view, a lack of reportability can have two causes: the absence of attention or the absence of a conscious percept. This raises an important question in the field of perceptual learning. It is known that learning can occur in the absence of reportability [Gutnisky, D. A., Hansen, B. J., Iliescu, B. F., & Dragoi, V. Attention alters visual plasticity during exposure-based learning. Current Biology, 19, 555-560, 2009; Seitz, A. R., Kim, D., & Watanabe, T. Rewards evoke learning of unconsciously processed visual stimuli in adult humans. Neuron, 61, 700-707, 2009; Seitz, A. R., & Watanabe, T. Is subliminal learning really passive? Nature, 422, 36, 2003; Watanabe, T., Náñez, J. E., & Sasaki, Y. Perceptual learning without perception. Nature, 413, 844-848, 2001], but it is unclear which of the two ingredients, consciousness or attention, is not necessary for learning. We presented textured figure-ground stimuli and manipulated reportability either by masking (which only interferes with consciousness) or with an inattention paradigm (which only interferes with attention). During the second session (24 hr later), learning was assessed neurally and behaviorally, via differences in figure-ground ERPs and via a detection task. Behavioral and neural learning effects were found for stimuli presented in the inattention paradigm and not for masked stimuli. Interestingly, the behavioral learning effect only became apparent when performance feedback was given on the task used to measure learning.
Gerlach, Christian; Law, I; Gade, A
The purpose of the present PET study was (i) to investigate the neural correlates of object recognition, i.e. the matching of visual forms to memory, and (ii) to test the hypothesis that this process is more difficult for natural objects than for artefacts. This was done by using object decision...... tasks where subjects decided whether pictures represented real objects or non-objects. The object decision tasks differed in their difficulty (the degree of perceptual differentiation needed to perform them) and in the category of the real objects used (natural objects versus artefacts). A clear effect...... be the neural correlate of matching visual forms to memory, and the amount of activation in these regions may correspond to the degree of perceptual differentiation required for recognition to occur. With respect to behaviour, it took significantly longer to make object decisions on natural objects than...
Chambah, Majed; Rizzi, Alessandro; Gatta, Carlo; Besserer, Bernard; Marini, Daniele
The cinematographic archives represent an important part of our collective memory. We present in this paper some advances in automating the color fading restoration process, especially with regard to the automatic color correction technique. The proposed color correction method is based on the ACE model, an unsupervised color equalization algorithm based on a perceptual approach and inspired by some adaptation mechanisms of the human visual system, in particular lightness constancy and color constancy. A perceptual approach has some advantages: mainly its robustness and its local filtering properties, which lead to more effective results. The resulting technique is not just an application of ACE to movie images, but an enhancement of ACE principles to meet the requirements of the digital film restoration field. The presented preliminary results are satisfying and promising.
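The two ACE properties named above, distance-weighted local filtering followed by a global rescaling for lightness constancy, can be sketched on one channel. This is a toy illustration of the principle, not the published ACE algorithm; the function name, slope value, and pure-Python representation are all assumptions:

```python
def ace_like(channel, slope=5.0):
    """Toy, ACE-inspired equalization of one channel (2-D list of floats
    in [0, 1]).

    Each pixel is compared with every other pixel; intensity differences
    are amplified through a saturating slope and weighted by the inverse
    of the spatial distance (local filtering), then the result is
    rescaled to span [0, 1] (white-patch / grey-world normalisation).
    """
    h, w = len(channel), len(channel[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for j in range(h):
                for i in range(w):
                    if (i, j) == (x, y):
                        continue
                    d = ((x - i) ** 2 + (y - j) ** 2) ** 0.5
                    # Saturating amplification of the local difference.
                    r = max(-1.0, min(1.0, slope * (channel[y][x] - channel[j][i])))
                    acc += r / d
            out[y][x] = acc
    lo = min(min(row) for row in out)
    hi = max(max(row) for row in out)
    return [[(v - lo) / (hi - lo) for v in row] for row in out]

# A faded, low-contrast patch is stretched back to the full tonal range
# while the ordering of intensities is preserved.
print(ace_like([[0.45, 0.5], [0.5, 0.55]]))
```

The O(n²) pairwise comparison is why practical film-restoration implementations of ACE rely on approximations and local windows.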
Fetterman, Adam K; Robinson, Michael D; Gordon, Robert D; Elliot, Andrew J
A class of metaphors links the experience of anger to perceptions of redness. Whether such metaphors have significant implications for understanding perception is not known. In Experiment 1, anger (versus sadness) concepts were primed, and priming anger concepts made individuals more likely to perceive the color red. In Experiment 2, anger states were directly manipulated, and evoking anger likewise made individuals more likely to perceive red. Both experiments showed that the observed effects were independent of the actual color presented. These findings extend the New Look, perceptual, metaphoric, and social cognitive literatures. Most importantly, the results suggest that emotion representation processes of a metaphoric type can be extended to the perceptual realm.
Gilbert, Charles D; Li, Wu; Piech, Valentin
The visual cortex retains the capacity for experience-dependent changes, or plasticity, of cortical function and cortical circuitry, throughout life. These changes constitute the mechanism of perceptual learning in normal visual experience and in recovery of function after CNS damage. Such plasticity can be seen at multiple stages in the visual pathway, including primary visual cortex. The manifestation of the functional changes associated with perceptual learning involves both long-term modification of cortical circuits during the course of learning and short-term dynamics in the functional properties of cortical neurons. These dynamics are subject to top-down influences of attention, expectation and perceptual task. As a consequence, each cortical area is an adaptive processor, altering its function in accordance with immediate perceptual demands.
Significant insights into visual cognition have come from studying real-world perceptual expertise. Many have previously reviewed empirical findings and theoretical developments from this work. Here we instead provide a brief perspective on approaches, considerations, and challenges to studying real-world perceptual expertise. We discuss factors like choosing to use real-world versus artificial object domains of expertise, selecting a target domain of real-world perceptual expertise, recruiting experts, evaluating their level of expertise, and experimentally testing experts in the lab and online. Throughout our perspective, we highlight expert birding (also called birdwatching) as an example, as it has been used as a target domain for over two decades in the perceptual expertise literature.
Calvillo, Dustin P; Jackson, Russell E
Inattentional blindness is the failure to notice unexpected objects in a visual scene while engaging in an attention-demanding task. We examined the effects of animacy and perceptual load on inattentional blindness. Participants searched for a category exemplar under low or high perceptual load. On the last trial, the participants were exposed to an unexpected object that was either animate or inanimate. Unexpected objects were detected more frequently when they were animate rather than inanimate, and more frequently with low than with high perceptual loads. We also measured working memory capacity and found that it predicted the detection of unexpected objects, but only with high perceptual loads. The results are consistent with the animate-monitoring hypothesis, which suggests that animate objects capture attention because of the importance of the detection of animate objects in ancestral hunter-gatherer environments.
Sadeh, Naomi; Bredemeier, Keith
Attentional control theory (Eysenck et al., 2007) posits that taxing attentional resources impairs performance efficiency in anxious individuals. This theory, however, does not explicitly address if or how the relation between anxiety and attentional control depends upon the perceptual demands of the task at hand. Consequently, the present study examined the relation between trait anxiety and task performance using a perceptual load task (Maylor & Lavie, 1998). Sixty-eight male college students completed a visual search task that indexed processing of irrelevant distractors systematically across four levels of perceptual load. Results indicated that anxiety was related to difficulty suppressing the behavioural effects of irrelevant distractors (i.e., decreased reaction time efficiency) under high, but not low, perceptual loads. In contrast, anxiety was not associated with error rates on the task. These findings are consistent with the prediction that anxiety is associated with impairments in performance efficiency under conditions that tax attentional resources.
Carbon, Claus-Christian; Deininger, Pia
Medieval times were neither dark nor grey; natural light illuminated colourful scenes depicted in paintings through coloured windows and via artificial beeswax candlelight. When we enter, for example, a church to inspect its historic treasures ranging from mosaics to depictions of saints, we do this under quite unfavourable conditions; particularly as we mainly depend on artificial halogen, LED or fluorescent light for illuminating the desired object. As these light spectrums are different from the natural light conditions under which the old masterpieces were previously developed and perceived, the perceptual effects may dramatically differ, leading to significantly altered affective and cognitive processing. Different qualities of processing might particularly be triggered when perceiving artworks which deal with specific material prone to strong interaction with idiosyncratic light conditions, for instance gold-leafed surfaces that literally start to glow when lit by candles. We tested the perceptual experiences of a figurative piece of art which we created in 3 (foreground) by 3 (background) versions, illuminated under three different light conditions (daylight, coloured light and beeswax candlelight). Results demonstrated very different perceptual experiences with stunning effects for the interaction of the specific painting depicted on a gold-leafed background lit by candlelight.
Perceptual dialectology is dedicated to the formal study of folk linguistic perceptions. Through an amalgamation of social psychology, ethnography, dialectology, sociolinguistics, cultural geography and myriad other fields, perceptual dialectology provides a methodology for gaining insight into overt folk language attitudes, knowledge of regional distribution, and the importance of language variation and change (Preston 1989, 1999a). This study conducts the first investigation of folk percept...
Keidser, Gitte; Dillon, Harvey; Convery, Elizabeth; Mejia, Jorge
Large variations in perceptual directional microphone benefit, which far exceed the variation expected from physical performance measures of directional microphones, have been reported in the literature. The cause for the individual variation has not been systematically investigated. To determine the factors that are responsible for the individual variation in reported perceptual directional benefit. A correlational study. Physical performance measures of the directional microphones obtained after they had been fitted to individuals, cognitive abilities of individuals, and measurement errors were related to perceptual directional benefit scores. Fifty-nine hearing-impaired adults with varied degrees of hearing loss participated in the study. All participants were bilaterally fitted with a Motion behind-the-ear device (500 M, 501 SX, or 501 P) from Siemens according to the National Acoustic Laboratories' non-linear prescription, version two (NAL-NL2). Using the Bamford-Kowal-Bench (BKB) sentences, the perceptual directional benefit was obtained as the difference in speech reception threshold measured in babble noise (SRTn) with the devices in directional (fixed hypercardioid) and in omnidirectional mode. The SRTn measurements were repeated three times with each microphone mode. Physical performance measures of the directional microphone included the angle of the microphone ports to loudspeaker axis, the frequency range dominated by amplified sound, the in situ signal-to-noise ratio (SNR), and the in situ three-dimensional, articulation-index weighted directivity index (3D AI-DI). The cognitive tests included auditory selective attention, speed of processing, and working memory. Intraparticipant variation on the repeated SRTn's and the interparticipant variation on the average SRTn were used to determine the effect of measurement error. A multiple regression analysis was used to determine the effect of other factors. Measurement errors explained 52% of the variation
Rey, Amandine E; Vallet, Guillaume T; Riou, Benoit; Lesourd, Mathieu; Versace, Rémy
The relationship between perceptual and memory processing is at the core of cognition. Growing evidence suggests reciprocal influences between them, so that memory features should lead to an actual perceptual bias. In the present study, we investigated this reciprocal influence of perceptual and memory processing by further adapting the Ebbinghaus illusion and testing it in a psychophysical design. In a 2AFC (two-alternative forced choice) paradigm, the perceptual bias in the Ebbinghaus illusion was induced by a physical size (Experiment 1) or a memory-reactivated size of the inducers (Experiment 2; the size was reactivated thanks to a color-size association). One test disk was presented on the left of the screen and was surrounded by six inducers with a large or small (perceptual or reactivated) size. The test disk varied in size, and participants were asked to indicate whether it was smaller or larger than a reference disk presented on the right of the screen (the reference disk was invariant in size). Participants' responses were influenced by both the perceptual and the reactivated size of the inducers. These results provide new evidence for the influence of memory on perception in a psychophysics paradigm. Published by Elsevier B.V.
Zizlsperger, Leopold; Kümmel, Florian; Haarmeier, Thomas
While perceptual learning increases objective sensitivity, its effects on the ongoing interaction between perception and its metacognitive evaluation have rarely been investigated. Visual perception has been described as a process of probabilistic inference featuring metacognitive evaluations of choice certainty. For visual motion perception in healthy, naive human subjects, here we show that perceptual sensitivity and confidence in it increased with training. The metacognitive sensitivity (estimated from certainty ratings by a bias-free signal detection theoretic approach), in contrast, did not. Concomitant 3 Hz transcranial alternating current stimulation (tACS) was applied in compliance with previous findings on effective high-low cross-frequency coupling subserving signal detection. While perceptual accuracy and confidence in it improved with training, there were no statistically significant tACS effects. Neither metacognitive sensitivity in distinguishing between their own correct and incorrect stimulus classifications, nor decision confidence itself, determined the subjects' visual perceptual learning. Improvements of objective performance and the metacognitive confidence in it were rather determined by the perceptual sensitivity at the outset of the experiment. Post-decision certainty in visual perceptual learning was neither independent of objective performance, nor requisite for changes in sensitivity, but rather covaried with objective performance. The exact functional role of metacognitive confidence in human visual perception has yet to be determined.
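The bias-free signal detection theoretic separation of sensitivity from response criterion rests on a standard computation, sketched here for type-1 responses (the study applies the analogous logic to certainty ratings, i.e. type-2 SDT; the function name and correction are illustrative conventions, not the authors' code):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps z() finite when
    a rate would otherwise be 0 or 1. The criterion c is returned as
    well, showing that sensitivity and response bias are estimated
    separately: changing c alone leaves d' untouched.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(h) - z(fa), -0.5 * (z(h) + z(fa))

# Illustrative counts: more hits than false alarms yields d' > 0.
d, c = d_prime(hits=40, misses=10, false_alarms=15, correct_rejections=35)
print(d, c)
```

For metacognitive (type-2) sensitivity the same machinery is applied to confidence ratings conditioned on correct versus incorrect trials, which is what makes the measure independent of the type-1 criterion.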
van Maarseveen, Mariëtte J J; Oudejans, Raôul R D; Mann, David L; Savelsbergh, Geert J P
Many studies have shown that experts possess better perceptual-cognitive skills than novices (e.g., in anticipation, decision making, pattern recall), but it remains unclear whether a relationship exists between performance on those tests of perceptual-cognitive skill and actual on-field performance. In this study, we assessed the in situ performance of skilled soccer players and related the outcomes to measures of anticipation, decision making, and pattern recall. In addition, we examined gaze behaviour when performing the perceptual-cognitive tests to better understand whether the underlying processes were related when those perceptual-cognitive tasks were performed. The results revealed that on-field performance could not be predicted on the basis of performance on the perceptual-cognitive tests. Moreover, there were no strong correlations between the level of performance on the different tests. The analysis of gaze behaviour revealed differences in search rate, fixation duration, fixation order, gaze entropy, and percentage viewing time when performing the test of pattern recall, suggesting that it is driven by different processes to those used for anticipation and decision making. Altogether, the results suggest that the perceptual-cognitive tests may not be as strong determinants of actual performance as may have previously been assumed.
Normand, Alice; Autin, Frédérique; Croizet, Jean-Claude
Perceptual load has been found to be a powerful bottom-up determinant of distractibility, with high perceptual load preventing distraction by any irrelevant information. However, when under evaluative pressure, individuals exert top-down attentional control by giving greater weight to task-relevant features, making them more distractible by task-relevant distractors. The present study tested whether the top-down modulation of attention under evaluative pressure overcomes the beneficial bottom-up effect of high perceptual load on distraction. Using a response-competition task, we replicated previous findings that high levels of perceptual load suppress task-relevant distractor response interference, but only for participants in a control condition. Participants under evaluative pressure (i.e., who believed their intelligence was assessed) showed interference from task-relevant distractors at all levels of perceptual load. This research challenges the assumptions of perceptual load theory and sheds light on a neglected determinant of distractibility: the self-relevance of the performance situation in which attentional control is solicited.
Cain, W S; de Wijk, R; Lulejian, C; Schiet, F; See, L C
Five studies explored identification of odors as an aspect of semantic memory. All dealt in one way or another with the accessibility of acquired olfactory information. The first study examined stability and showed that, consistent with personal reports, people can fail to identify an odor one day yet succeed another. Failure turned more commonly to success than vice versa, and once success occurred it tended to recur. Confidence ratings implied that subjects generally knew the quality of their answers. Even incorrect names, though, often carried considerable information, which sometimes reflected a semantic and sometimes a perceptual source of errors. The second study showed that profiling odors via the American Society for Testing and Materials list of attributes, an exercise in depth of processing, effected no increment in identifiability/accessibility beyond an unelaborated second attempt at retrieval. The third study showed that subjects had only a weak ability to predict the relative recognizability of odors they had failed to identify. Whereas the strength of the feeling that they would 'know' an answer if offered choices did not associate significantly with performance for odors, it did for trivia questions. The fourth study demonstrated an association between the ability to discriminate among one set of odors and to identify another, but this emerged only after subjects had received feedback about identity, which essentially changed the task to one of recognition and effectively stabilized access. The fifth study illustrated that feedback improves performance dramatically only for odors involved with it, but that mere retrieval leads to some improvement. The studies suggest a research agenda that could include supplemental use of confidence judgments both retrospectively and prospectively in the same subjects to indicate the amount of accessible semantic information; use of second and third guesses to examine subjects' simultaneously held hypotheses about
Psychological and physiological processes in figure-tracing abilities measured using a tablet computer: a study with 7-9-year-old children
The present study investigated the use of a tablet computer to assess figure-tracing skills and their relationships with psychological (visual–perceptual processes, cognitive processes, handwriting skills) and physiological (body mass index, isometric strength of the arms) parameters in school-children of second (7-8-year-olds) and fourth (9-10-year-olds) grades. We were also interested in gender differences. The task required tracing of geometric figures over a template shown on the tablet screen in light grey, one segment of the target figure at a time. This figure-tracing tablet test allows acquisition and automated analysis of four parameters: number of strokes (pen lifts) for each segment; oscillations of the drawn lines with respect to the reference lines; pressure of the pen on the tablet; and average speed of tracing. The results show a trade-off between speed and quality for the tablet parameters, with higher speed associated with more oscillations with respect to the reference lines and a lower number of strokes for each segment, in both male and female children. The involvement of visual–motor integration in the ability to reduce the oscillations in this tablet test was only seen for the male children, while both the male and female children showed a relationship between oscillations and more general/abstract visual–spatial processes. These data confirm the role of visual–motor processes in this figure-tracing tablet test only for male children, while more general visual–spatial processes influence performance in the tablet test for both sexes. We conclude that the proposed test is useful to screen for grapho-motor difficulties.
Victor, Jonathan D; Thengone, Daniel J; Rizvi, Syed M; Conte, Mary M
Local image statistics are important for visual analysis of textures, surfaces, and form. There are many kinds of local statistics, including those that capture luminance distributions, spatial contrast, oriented segments, and corners. While sensitivity to each of these kinds of statistics has been well-studied, much less is known about visual processing when multiple kinds of statistics are relevant, in large part because the dimensionality of the problem is high and different kinds of statistics interact. To approach this problem, we focused on binary images on a square lattice - a reduced set of stimuli which nevertheless taps many kinds of local statistics. In this 10-parameter space, we determined psychophysical thresholds to each kind of statistic (16 observers) and all of their pairwise combinations (4 observers). Sensitivities and isodiscrimination contours were consistent across observers. Isodiscrimination contours were elliptical, implying a quadratic interaction rule, which in turn determined ellipsoidal isodiscrimination surfaces in the full 10-dimensional space, and made predictions for sensitivities to complex combinations of statistics. These predictions, including the prediction of a combination of statistics that was metameric to random, were verified experimentally. Finally, check size had only a mild effect on sensitivities over the range from 2.8 to 14 min, but sensitivities to second- and higher-order statistics were substantially lower at 1.4 min. In sum, local image statistics form a perceptual space that is highly stereotyped across observers, in which different kinds of statistics interact according to simple rules. Copyright © 2015 Elsevier Ltd. All rights reserved.
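The quadratic interaction rule implied by elliptical isodiscrimination contours predicts sensitivity to any mixture of two statistics from a single quadratic form. A minimal sketch, with purely illustrative sensitivities and interaction term (not values from the study):

```python
import numpy as np

# Hypothetical sensitivities (1/threshold) along two statistic axes, plus a
# correlation-like interaction term inferred from the elliptical contours.
s1, s2 = 4.0, 2.0   # sensitivities to statistics 1 and 2 in isolation
rho = -0.3          # pairwise interaction (illustrative)

# Quadratic form whose unit-discriminability contour is the ellipse
Q = np.array([[s1**2, rho * s1 * s2],
              [rho * s1 * s2, s2**2]])

def predicted_sensitivity(direction):
    """Sensitivity to a mixed statistic along unit direction u: sqrt(u^T Q u)."""
    u = np.asarray(direction, float)
    u = u / np.linalg.norm(u)
    return float(np.sqrt(u @ Q @ u))

# Pure axes recover the single-statistic sensitivities
print(predicted_sensitivity([1, 0]))  # 4.0
print(predicted_sensitivity([0, 1]))  # 2.0
print(predicted_sensitivity([1, 1]))  # prediction for an equal mixture
```

Fitting the off-diagonal terms of `Q` to the pairwise threshold data is what lets the full 10-dimensional ellipsoid be extrapolated from one- and two-statistic measurements.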
Espinosa, Irene; Cuthill, Innes C
Camouflage is the primary defence of many animals and includes multiple strategies that interfere with figure-ground segmentation and object recognition. While matching background colours and textures is widespread and conceptually straightforward, less well explored are the optical 'tricks', collectively called disruptive colouration, that exploit perceptual grouping mechanisms. Adjacent high contrast colours create false edges, but this is not sufficient for an object's shape to be broken up; some colours must blend with the background. We test the novel hypothesis that this will be particularly effective when the colour patches on the animal appear to belong to, not merely different background colours, but different background objects. We used computer-based experiments where human participants had to find cryptic targets on artificial backgrounds. Creating what appeared to be bi-coloured foreground objects on bi-coloured backgrounds, we generated colour boundaries that had identical local contrast but either lay within or between (illusory) objects. As predicted, error rates for targets matching what appeared to be different background objects were higher than for targets which had otherwise identical local contrast to the background but appeared to belong to single background objects. This provides evidence for disruptive colouration interfering with higher-level feature integration in addition to previously demonstrated low-level effects involving contour detection. In addition, detection was impeded in treatments where targets were on or in close proximity to multiple background colour or tone boundaries. This is consistent with other studies which show a deleterious influence of visual 'clutter' or background complexity on search.
Tsirlin, Inna; Allison, Robert S; Wilcox, Laurie M
We describe a perceptual asymmetry found in stereoscopic perception of overlaid random-dot surfaces. Specifically, the minimum separation in depth needed to perceptually segregate two overlaid surfaces depended on the distribution of dots across the surfaces. With the total dot density fixed, significantly larger inter-plane disparities were required for perceptual segregation of the surfaces when the front surface had fewer dots than the back surface compared to when the back surface was the one with fewer dots. We propose that our results reflect an asymmetry in the signal strength of the front and back surfaces due to the assignment of the spaces between the dots to the back surface by disparity interpolation. This hypothesis was supported by the results of two experiments designed to reduce the imbalance in the neuronal response to the two surfaces. We modeled the psychophysical data with a network of inter-neural connections, excitatory within-disparity and inhibitory across-disparity, where the spread of disparity was modulated according to figure-ground assignment. These psychophysical and computational findings suggest that stereoscopic transparency depends on both inter-neural interactions of disparity-tuned cells and higher-level processes governing figure-ground segregation. Copyright © 2011 Elsevier Ltd. All rights reserved.
Mulligan, Neil W.; Dew, Ilana T. Z.
The generation manipulation has been critical in delineating differences between implicit and explicit memory. In contrast to past research, the present experiments indicate that generating from a rhyme cue produces as much perceptual priming as does reading. This is demonstrated for 3 visual priming tasks: perceptual identification, word-fragment…
Deluca, Cristina; Golzar, Ashkan; Santandrea, Elisa; Lo Gerfo, Emanuele; Eštočinová, Jana; Moretto, Giuseppe; Fiaschi, Antonio; Panzeri, Marta; Mariotti, Caterina; Tinazzi, Michele; Chelazzi, Leonardo
Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion-extrapolation psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow, incremental visual perceptual learning crucially depends on an intact cerebellum, bearing out the notion that the human cerebellum acts as a learning device for motor, cognitive and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time. Moreover, our results suggest a strong dissociation between the role of different portions of the cerebellum in motor versus
Kloosterman, Niels A; Meindertsma, Thomas; van Loon, Anouk M; Lamme, Victor A F; Bonneh, Yoram S; Donner, Tobias H
Changes in pupil size at constant light levels reflect the activity of neuromodulatory brainstem centers that control global brain state. These endogenously driven pupil dynamics can be synchronized with cognitive acts. For example, the pupil dilates during the spontaneous switches of perception of a constant sensory input in bistable perceptual illusions. It is unknown whether this pupil dilation only indicates the occurrence of perceptual switches, or also their content. Here, we measured pupil diameter in human subjects reporting the subjective disappearance and re-appearance of a physically constant visual target surrounded by a moving pattern ('motion-induced blindness' illusion). We show that the pupil dilates during the perceptual switches in the illusion and a stimulus-evoked 'replay' of that illusion. Critically, the switch-related pupil dilation encodes perceptual content, with larger amplitude for disappearance than re-appearance. This difference in pupil response amplitude enables prediction of the type of report (disappearance vs. re-appearance) on individual switches (receiver-operating characteristic: 61%). The amplitude difference is independent of the relative durations of target-visible and target-invisible intervals and subjects' overt behavioral report of the perceptual switches. Further, we show that pupil dilation during the replay also scales with the level of surprise about the timing of switches, but there is no evidence for an interaction between the effects of surprise and perceptual content on the pupil response. Taken together, our results suggest that pupil-linked brain systems track both the content of, and surprise about, perceptual events. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
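The report-prediction step (receiver-operating characteristic of 61%) amounts to computing the area under an ROC curve from two distributions of switch-related dilation amplitudes. A minimal sketch with simulated, not actual, amplitudes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical single-switch pupil dilation amplitudes (arbitrary units),
# larger on average for 'disappearance' than 're-appearance' reports
disappear = rng.normal(1.0, 0.8, 200)
reappear = rng.normal(0.6, 0.8, 200)

def roc_auc(pos, neg):
    """P(random 'disappearance' amplitude > random 're-appearance' amplitude),
    which equals the area under the ROC curve (ties ignored)."""
    pos = np.asarray(pos)[:, None]
    neg = np.asarray(neg)[None, :]
    return float(np.mean(pos > neg))

auc = roc_auc(disappear, reappear)
print(round(auc, 2))  # a value above chance (0.5)
```

An AUC above 0.5 means the amplitude of a single switch-related dilation carries information about the perceptual content of that switch.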
Schott, Björn; Richardson-Klavehn, Alan; Heinze, Hans-Jochen; Düzel, Emrah
We addressed the hypothesis that perceptual priming and explicit memory have distinct neural correlates at encoding. Event-related potentials (ERPs) were recorded while participants studied visually presented words at deep versus shallow levels of processing (LOPs). The ERPs were sorted by whether or not participants later used studied words as completions to three-letter word stems in an intentional memory test, and by whether or not they indicated that these completions were remembered from the study list. Study trials from which words were later used and not remembered (primed trials) and study trials from which words were later used and remembered (remembered trials) were compared to study trials from which words were later not used (forgotten trials), in order to measure the ERP difference associated with later memory (DM effect). Primed trials involved an early (200-450 msec) centroparietal negative-going DM effect. Remembered trials involved a late (900-1200 msec) right frontal, positive-going DM effect regardless of LOP, as well as an earlier (600-800 msec) central, positive-going DM effect during shallow study processing only. All three DM effects differed topographically, and, in terms of their onset or duration, from the extended (600-1200 msec) fronto-central, positive-going shift for deep compared with shallow study processing. The results provide the first clear evidence that perceptual priming and explicit memory have distinct neural correlates at encoding, consistent with Tulving and Schacter's (1990) distinction between brain systems concerned with perceptual representation versus semantic and episodic memory. They also shed additional light on encoding processes associated with later explicit memory, by suggesting that brain processes influenced by LOP set the stage for other, at least partially separable, brain processes that are more directly related to encoding success.
In sensory processing of odors, the olfactory bulb is an important relay station, where odor representations are noise-filtered, sharpened, and possibly re-organized. An organization by perceptual qualities has previously been found in the piriform cortex; however, several recent studies indicate that the olfactory bulb code reflects behaviorally relevant dimensions both spatially and at the population level. We apply a statistical analysis to 2-deoxyglucose images, taken over the entire glomerular layer of the rat olfactory bulb, in order to see how the recognition of odors in the nose is translated into a map of odor quality in the brain. We first confirm previous findings that the first principal component can be related to pleasantness; the interpretation of the higher principal components, however, is less clear. We then find mostly continuous spatial representations for perceptual categories. We compare the space spanned by spatial and population codes to human reports of perceptual similarity between odors, and our results suggest that perceptual categories may already be embedded in glomerular activations and that spatial representations give a better match than population codes. This suggests that human and rat perceptual dimensions of odorant coding are related, and indicates that perceptual qualities could be represented as continuous spatial codes of the olfactory bulb glomerulus population.
Toelch, Ulf; Panizza, Folco; Heekeren, Hauke R
Adaptive decisions in social contexts depend on both perceptual information and social expectations or norms. These are potentially in conflict when certain choices are beneficial for an individual, but societal rules mandate a different course of action. To resolve such a conflict, the reliability of information has to be balanced against potentially deleterious effects of non-compliance such as ostracism. In this study, we systematically investigated how interactions between perceptual and social influences affect decision-relevant cognitive processes. In a direction-of-motion discrimination task, participants received perceptual information alongside information on other players' choices. In addition, we created conflict scenarios where players' choices affected other participants' monetary rewards dependent on whether their choices were in line or against the opinion of the other players. Importantly, we altered the strength of this manipulation in two separate experiments by contrasting motivations of either preventing harm or providing a benefit to others. Behavioural analyses and computational models of perceptual decisions showed that participants successfully integrated perceptual with social information. Participants' reliance on social information was effectively modulated in conflict situations. Critically, these effects were augmented when the strength of social norms was increased, indexing conditions under which social norms effectively influence decisions. These results inform theories of social influence by providing an account of how higher order goals like social norm compliance affect perceptual decisions.
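One simple way to formalize the integration of perceptual and social information described above is a weighted combination of the two evidence sources. The weighting scheme and parameter values below are illustrative assumptions, not the authors' computational model:

```python
import numpy as np

def choose_direction(motion_evidence, others_choices, social_weight=0.4):
    """Combine signed perceptual evidence (negative = left, positive = right)
    with the mean of the other players' choices (-1 = left, +1 = right).
    `social_weight` stands in for the participant's reliance on social
    information, which the study found to shift in conflict situations."""
    social_evidence = np.mean(others_choices)
    decision_variable = ((1 - social_weight) * motion_evidence
                         + social_weight * social_evidence)
    return "right" if decision_variable > 0 else "left"

# A weak leftward motion signal can be overridden by a rightward majority
print(choose_direction(-0.2, [+1, +1, +1, -1]))  # right
# ...but not a strong one
print(choose_direction(-1.0, [+1, +1, +1, -1]))  # left
```

Lowering `social_weight` when norm compliance would harm others, or raising it when ostracism is costly, captures the kind of modulation the behavioural analyses test for.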
Pan, Yi; Luo, Qianying; Cheng, Min
Previous research has indicated that attention can be biased toward those stimuli matching the contents of working memory and thereby facilitates visual processing at the location of the memory-matching stimuli. However, whether this working memory-driven attentional modulation takes place on early perceptual processes remains unclear. Our present results showed that working memory-driven attention improved identification of a brief Landolt target presented alone in the visual field. Because the suprathreshold target appeared without any external noise added (i.e., no distractors or masks), the results suggest that working memory-driven attention enhances the target signal at early perceptual stages of visual processing. Furthermore, given that performance in the Landolt target identification task indexes spatial resolution, this attentional facilitation indicates that working memory-driven attention can boost early perceptual processing via enhancement of spatial resolution at the attended location.
Healey, Chris G.; St. Amant, Robert; Elhaddad, Mahmoud S.
This paper describes an automated visualization assistant called ViA. ViA is designed to help users construct perceptually optimal visualizations to represent, explore, and analyze large, complex, multidimensional datasets. We approached this problem by studying what is known about the control of human visual attention. By harnessing the low-level human visual system, we can support our dual goals of rapid and accurate visualization. Perceptual guidelines that we built using psychophysical experiments form the basis for ViA. ViA uses modified mixed-initiative planning algorithms from artificial intelligence to search for perceptually optimal mappings of data attributes to visual features. Our perceptual guidelines are integrated into evaluation engines that provide evaluation weights for a given data-feature mapping, and hints on how that mapping might be improved. ViA begins by asking users a set of simple questions about their dataset and the analysis tasks they want to perform. Answers to these questions are used in combination with the evaluation engines to identify and intelligently pursue promising data-feature mappings. The result is an automatically generated set of mappings that are perceptually salient, but that also respect the context of the dataset and users' preferences about how they want to visualize their data.
Meilleur, Andrée-Anne S; Berthiaume, Claude; Bertone, Armando; Mottron, Laurent
Autistic perception is characterized by atypical and sometimes exceptional performance in several low- (e.g., discrimination) and mid-level (e.g., pattern matching) tasks in both visual and auditory domains. A factor that specifically affects perceptive abilities in autistic individuals should manifest as an autism-specific association between perceptual tasks. The first purpose of this study was to explore how perceptual performances are associated within or across processing levels and/or modalities. The second purpose was to determine if general intelligence, the major factor that accounts for covariation in task performances in non-autistic individuals, equally controls perceptual abilities in autistic individuals. We asked 46 autistic individuals and 46 typically developing controls to perform four tasks measuring low- or mid-level visual or auditory processing. Intelligence was measured with the Wechsler's Intelligence Scale (FSIQ) and Raven Progressive Matrices (RPM). We conducted linear regression models to compare task performances between groups and patterns of covariation between tasks. The addition of either Wechsler's FSIQ or RPM in the regression models controlled for the effects of intelligence. In typically developing individuals, most perceptual tasks were associated with intelligence measured either by RPM or Wechsler FSIQ. The residual covariation between unimodal tasks, i.e. covariation not explained by intelligence, could be explained by a modality-specific factor. In the autistic group, residual covariation revealed the presence of a plurimodal factor specific to autism. Autistic individuals show exceptional performance in some perceptual tasks. Here, we demonstrate the existence of specific, plurimodal covariation that does not depend on general intelligence (or "g" factor). Instead, this residual covariation is accounted for by a common perceptual process (or "p" factor), which may drive perceptual abilities differently in autistic and
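The residual-covariation analysis, regressing intelligence out of each task score and correlating what remains, can be sketched as follows. This is illustrative code, not the authors' analysis scripts:

```python
import numpy as np

def residual_covariation(task_a, task_b, iq):
    """Correlation between two task performances after linearly regressing
    an intelligence measure (e.g., FSIQ or RPM) out of each one."""
    def residuals(y, x):
        # Ordinary least squares fit of y on [1, x]; return what IQ leaves unexplained
        X = np.column_stack([np.ones(len(x)), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ beta

    iq = np.asarray(iq, float)
    ra = residuals(np.asarray(task_a, float), iq)
    rb = residuals(np.asarray(task_b, float), iq)
    return float(np.corrcoef(ra, rb)[0, 1])

# Identical task scores leave identical residuals, which correlate perfectly
r = residual_covariation([1, 3, 2, 5], [1, 3, 2, 5], [100, 110, 120, 130])
print(round(r, 3))  # 1.0
```

A nonzero residual correlation across modalities, present in the autistic group but not in controls, is the signature of the proposed plurimodal "p" factor.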
Giesbrecht, Barry; Sy, Jocelyn; Bundesen, Claus
The human attention system helps us cope with a complex environment by supporting the selective processing of information relevant to our current goals. Understanding the perceptual, cognitive, and neural mechanisms that mediate selective attention is a core issue in cognitive neuroscience. One...... prominent model of selective attention, known as load theory, offers an account of how task demands determine when information is selected and an account of the efficiency of the selection process. However, load theory has several critical weaknesses that suggest that it is time for a new perspective. Here...... is integrated to provide efficient attentional selection and allocation of perceptual processing resources....
Vaughan, Barry D
.... One prong concerns basic research from the perceptual psychology community. Over the last few decades, this research has generated a detailed theoretical understanding of visual processing and decision making, based on visual information...
Qi, Yonggang; Guo, Jun; Li, Yi
the importance of Gestalt rules by solving a learning-to-rank problem, and formulate a multi-label graph-cuts algorithm to group image primitives while taking into account the learned Gestalt confliction. Our experiment results confirm the existence of Gestalt confliction in perceptual grouping and demonstrate...... confliction, i.e., the relative importance of each rule compared with another, remains unsolved. In this paper, we investigate the problem of perceptual grouping by quantifying the confliction among three commonly used rules: similarity, continuity and proximity. More specifically, we propose to quantify...... an improved performance when such a confliction is accounted for via the proposed grouping algorithm. Finally, a novel cross-domain image classification method is proposed by exploiting perceptual grouping as representation....
Cheng, Qiang; Huang, Thomas S.
This paper presents a visible watermarking system using perceptual models. A watermark image is overlaid translucently onto a primary image, for the purposes of immediate claim of copyright, instantaneous recognition of the owner or creator, or deterrence to piracy of digital images or video. The watermark is modulated by exploiting combined DCT-domain and DWT-domain perceptual models, so that the watermark is visually uniform. The resulting watermarked image is visually pleasing and unobtrusive. The location, size and strength of the watermark vary randomly with the underlying image. The randomization makes automatic removal of the watermark difficult even though the algorithm is publicly known, except for the key to the random sequence generator. The experiments demonstrate that the watermarked images have a pleasant visual effect and strong robustness. The watermarking system can be used in copyright notification and protection.
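A heavily simplified sketch of translucent watermark overlay with spatially varying strength. The luminance-based visibility proxy below merely stands in for the paper's combined DCT/DWT perceptual models, and all names and weights are illustrative assumptions:

```python
import numpy as np

def embed_visible_watermark(host, mark, base_alpha=0.25):
    """Blend `mark` translucently into `host` (uint8 grayscale arrays of equal
    shape), modulating per-pixel strength by a crude luminance proxy. A real
    system would derive the strength from DCT/DWT-domain perceptual models."""
    host_f = host.astype(float)
    mark_f = mark.astype(float)
    # Crude masking proxy: embed more strongly in darker regions, where a
    # fixed-strength overlay would otherwise be hard to see
    visibility = 1.0 - host_f / 255.0
    alpha = base_alpha * (0.5 + visibility)  # per-pixel blend in [0.125, 0.375]
    out = (1.0 - alpha) * host_f + alpha * mark_f
    return np.clip(out, 0, 255).astype(np.uint8)

host = np.full((4, 4), 200, dtype=np.uint8)   # bright host region
mark = np.zeros((4, 4), dtype=np.uint8)       # dark watermark
marked = embed_visible_watermark(host, mark)  # darkens the host translucently
```

Keying `alpha` (and the watermark's location and size) to a secret random sequence, as the paper does, is what makes automated removal difficult.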
Steenbergen, Peter; Buitenweg, Jan R; Trojan, Jörg; Veltink, Peter H
Various studies have shown subjects to mislocalize cutaneous stimuli in an idiosyncratic manner. Spatial properties of individual localization behavior can be represented in the form of perceptual maps. Individual differences in these maps may reflect properties of internal body representations, and perceptual maps may therefore be a useful method for studying these representations. For this to be the case, individual perceptual maps need to be reproducible, which has not yet been demonstrated. We assessed the reproducibility of localizations measured twice on subsequent days. Ten subjects participated in the experiments. Non-painful electrocutaneous stimuli were applied at seven sites on the lower arm. Subjects localized the stimuli on a photograph of their own arm, which was presented on a tablet screen overlaying the real arm. Reproducibility was assessed by calculating intraclass correlation coefficients (ICC) for the mean localizations of each electrode site and the slope and offset of regression models of the localizations, which represent scaling and displacement of perceptual maps relative to the stimulated sites. The ICCs of the mean localizations ranged from 0.68 to 0.93; the ICCs of the regression parameters were 0.88 for the intercept and 0.92 for the slope. These results indicate a high degree of reproducibility. We conclude that localization patterns of non-painful electrocutaneous stimuli on the arm are reproducible on subsequent days. Reproducibility is a necessary property of perceptual maps for these to reflect properties of a subject's internal body representations. Perceptual maps are therefore a promising method for studying body representations.
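Reproducibility here is quantified with intraclass correlation coefficients. A minimal consistency-type ICC (ICC(3,1)) for an n-subjects × k-sessions matrix could be computed as below; this is a sketch, and the exact ICC variant the authors used may differ:

```python
import numpy as np

def icc_consistency(x):
    """Two-way, single-measure consistency ICC (ICC(3,1)) for an
    n_subjects x k_sessions matrix of measurements."""
    x = np.asarray(x, float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)    # between subjects
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)    # between sessions
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return float((ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err))

# Perfectly consistent day-to-day measurements (a constant shift) give ICC = 1
print(icc_consistency([[1, 2], [2, 3], [3, 4], [5, 6]]))  # 1.0
```

High ICCs for the per-site mean localizations and for the regression slope and offset are what license the interpretation of perceptual maps as stable individual traits.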
Harrar, Vanessa; Spence, Charles; Makin, Tamar R
Perceptual learning can improve our sensory abilities. Understanding its underlying mechanisms, in particular, when perceptual learning generalizes, has become a focus of research and controversy. Specifically, there is little consensus regarding the extent to which tactile perceptual learning generalizes across fingers. We measured tactile orientation discrimination abilities on 4 fingers (index and middle fingers of both hands), using psychophysical measures, before and after 4 training sessions on 1 finger. Given the somatotopic organization of the hand representation in the somatosensory cortex, the topography of the cortical areas underlying tactile perceptual learning can be inferred from the pattern of generalization across fingers; only fingers sharing cortical representation with the trained finger ought to improve with it. Following training, performance improved not only for the trained finger but also for its adjacent and homologous fingers. Although these fingers were not exposed to training, they nevertheless demonstrated similar levels of learning as the trained finger. Conversely, the performance of the finger that was neither adjacent nor homologous to the trained finger was unaffected by training, despite the fact that our procedure was designed to enhance generalization, as described in recent visual perceptual learning research. This pattern of improved performance is compatible with previous reports of neuronal receptive fields (RFs) in the primary somatosensory cortex (SI) spanning adjacent and homologous digits. We conclude that perceptual learning rooted in low-level cortex can still generalize, and suggest potential applications for the neurorehabilitation of syndromes associated with maladaptive plasticity in SI. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Risse, Sarah; Hohenstein, Sven; Kliegl, Reinhold; Engbert, Ralf
Eye-movement experiments suggest that the perceptual span during reading is larger than the fixated word, asymmetric around the fixation position, and shrinks in size contingent on the foveal processing load. We used the SWIFT model of eye-movement control during reading to test these hypotheses and their implications under the assumption of graded parallel processing of all words inside the perceptual span. Specifically, we simulated reading in the boundary paradigm and analysed the effects ...
Gong, Liang; Wang, JiHua; Yang, XuDong; Feng, Lei; Li, Xiu; Gu, Cui; Wang, MeiHong; Hu, JiaYun; Cheng, Huaidong
The latest neuroimaging studies about implicit memory (IM) have revealed that different IM types may be processed by different parts of the brain. However, studies have rarely examined what subtypes of IM processes are affected in patients with various brain injuries. Twenty patients with frontal lobe injury, 25 patients with occipital lobe injury, and 29 healthy controls (HC) were recruited for the study. Two subtypes of IM were investigated by using structurally parallel perceptual (picture identification task) and conceptual (category exemplar generation task) IM tests in the three groups, as well as explicit memory (EM) tests. The results indicated that the priming of conceptual IM and EM tasks in patients with frontal lobe injury was poorer than that observed in HC, while perceptual IM was identical between the two groups. By contrast, the priming of perceptual IM in patients with occipital lobe injury was poorer than that in HC, whereas the priming of conceptual IM and EM was similar to that in HC. This double dissociation between perceptual and conceptual IM across the brain areas implies that occipital lobes may participate in perceptual IM, while frontal lobes may be involved in processing conceptual memory.
Kuo, Michael C C; Liu, Karen P Y; Ting, Kin Hung; Chan, Chetwyn C H
This study aimed to differentiate perceptual and semantic encoding processes using subsequent memory effects (SMEs) elicited by the recognition of orthographs of single Chinese characters. Participants studied a series of Chinese characters perceptually (by inspecting orthographic components) or semantically (by determining the object making sounds), and then made studied/unstudied judgments during the recognition phase. Recognition performance in terms of the d-prime measure was higher in the semantic condition, though not significantly, than in the perceptual condition. The differences in SMEs between the perceptual and semantic conditions at P550 and late positive component latencies (700-1000 ms) were not significant in the frontal area. An additional analysis identified a larger SME in the semantic condition during 600-1000 ms in the frontal pole regions. These results indicate that coordination and incorporation of orthographic information into mental representation is essential to both task conditions. The differentiation was also revealed in earlier SMEs (perceptual > semantic) at N3 (240-360 ms) latency, which is a novel finding. The left-distributed N3 was interpreted as more efficient processing of meaning with semantically learned characters. Frontal pole SMEs indicated strategic processing by executive functions, which would further enhance memory.
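The d-prime sensitivity measure used to compare recognition performance across the two encoding conditions can be sketched with the standard signal-detection formula, d′ = z(hit rate) − z(false-alarm rate). The hit and false-alarm rates below are hypothetical, not the study's data.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical recognition data: semantic encoding yields the higher d'
# (better discrimination of studied from unstudied characters).
d_semantic = d_prime(0.85, 0.20)
d_perceptual = d_prime(0.75, 0.20)
```

With a common false-alarm rate, the condition with more hits gets the larger d′, matching the direction of the (non-significant) difference reported above.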
Daniel Enrique Kalpokas
According to Rorty, Davidson and Brandom, to have an experience is to be caused by our senses to hold a perceptual belief. This article argues that the phenomenon of seeing-as cannot be explained by such a conception of perceptual experience. First, the notion of experience defended by the aforementioned authors is reconstructed. Second, the main features of what Wittgenstein called “seeing aspects” are briefly presented. Finally, several arguments are developed in order to support the main thesis of the article: seeing-as cannot be explained by the conception of experience defended by Rorty, Davidson and Brandom.
Korner-Bitensky, Nicol; Coopersmith, Henry; Mayo, Nancy; Leblanc, Ginette; Kaizer, Franceen
Perceptual and cognitive disorders that frequently accompany stroke and head injury influence an individual's ability to drive a motor vehicle. Canadian physicians are legally responsible for identifying patients who are potentially unsafe to drive and, if they fail to do so, may be held liable in a civil action suit. The authors review the guidelines for physicians evaluating a patient's fitness to drive after brain injury. They also examine the actions a physician should take when a patient with perceptual and cognitive problems wants to drive. Ultimately, by taking these actions, physicians will help to prevent driving accidents. PMID:21234047
Nathan A Parks
The perceptual load theory of attention proposes that the degree to which visual distractors are processed is a function of the attentional demands of a task: greater demands increase filtering of irrelevant distractors. The spatial configuration of such filtering is unknown. Here, we used steady-state visual evoked potentials (SSVEPs) in conjunction with time-domain event-related potentials (ERPs) to investigate the distribution of load-induced distractor suppression and task-relevant enhancement in the visual field. Electroencephalogram (EEG) was recorded while subjects performed a foveal go/no-go task that varied in perceptual load. Load-dependent distractor suppression was assessed by presenting a contrast-reversing ring at one of three eccentricities (2°, 6°, or 11°) during performance of the go/no-go task. Rings contrast-reversed at 8.3 Hz, allowing load-dependent changes in distractor processing to be tracked in the frequency domain. ERPs were calculated to the onset of stimuli in the load task to examine load-dependent modulation of task-relevant processing. Results showed that the amplitude of the distractor SSVEP (8.3 Hz) was attenuated under high perceptual load (relative to low load) at the most proximal (2°) eccentricity but not at more eccentric locations (6° or 11°). Task-relevant ERPs revealed a significant increase in N1 amplitude under high load. These results are consistent with a center-surround configuration of load-induced enhancement and suppression in the visual field.
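The frequency-tagging analysis behind the SSVEP measure can be sketched as a single-bin Fourier amplitude: project the recorded signal onto sine and cosine at the tag frequency. The 8.3 Hz tag follows the abstract; the sampling rate and the synthetic "EEG" are illustrative assumptions.

```python
import math

def ssvep_amplitude(signal, fs, f_tag):
    """Single-frequency Fourier amplitude: project the signal onto
    cosine and sine at f_tag and take the magnitude."""
    n = len(signal)
    c = sum(x * math.cos(2 * math.pi * f_tag * i / fs) for i, x in enumerate(signal))
    s = sum(x * math.sin(2 * math.pi * f_tag * i / fs) for i, x in enumerate(signal))
    return 2.0 / n * math.hypot(c, s)

# Synthetic 'EEG': a 1.5 uV oscillation at the 8.3 Hz tag frequency,
# sampled at 500 Hz for 10 s (an integer 83 cycles, so the bin is exact).
fs, f_tag, dur = 500.0, 8.3, 10.0
eeg = [1.5 * math.sin(2 * math.pi * f_tag * i / fs) for i in range(int(fs * dur))]
amp = ssvep_amplitude(eeg, fs, f_tag)   # recovers the 1.5 uV amplitude
```

Load-dependent suppression would then appear as a smaller `amp` in high-load than low-load epochs at the tagged distractor's frequency.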
Voss, Joel L; Cohen, Neal J
The hippocampus is crucial for long-term memory; its involvement in short-term or immediate expressions of memory is more controversial. Rodent hippocampus has been implicated in an expression of memory that occurs on-line during exploration termed "vicarious trial-and-error" (VTE) behavior. VTE occurs when rodents iteratively explore options during perceptual discrimination or at choice points. It is strategic in that it accelerates learning and improves later memory. VTE has been associated with activity of rodent hippocampal neurons, and lesions of hippocampus disrupt VTE and associated learning and memory advantages. Analogous findings of VTE in humans would support the role of hippocampus in active use of short-term memory to guide strategic behavior. We therefore measured VTE using eye-movement tracking during perceptual discrimination and identified relevant neural correlates with functional magnetic resonance imaging. A difficult perceptual-discrimination task was used that required visual information to be maintained during a several-second trial, but with no long-term memory component. VTE accelerated discrimination. Neural correlates of VTE included robust activity of hippocampus and activity of a network of medial prefrontal and lateral parietal regions involved in memory-guided behavior. This VTE-related activity was distinct from activity associated with simply viewing visual stimuli and making eye movements during the discrimination task, which occurred in regions frequently associated with visual processing and eye-movement control. Subjects were mostly unaware of performing VTE, thus further distancing VTE from explicit long-term memory processing. These findings bridge the rodent and human literatures on neural substrates of memory-guided behavior, and provide further support for the role of hippocampus and a hippocampal-centered network of cortical regions in the immediate use of memory in on-line processing and the guidance of behavior.
Gherri, Elena; Berreby, Fiona
To investigate whether tactile spatial attention is modulated by perceptual load, behavioural and electrophysiological measures were recorded during two spatial cuing tasks in which the difficulty of the target/non-target discrimination was varied (High and Low load tasks). Moreover, to study whether attentional modulations by load are sensitive to the availability of visual information, the High and Low load tasks were carried out under both illuminated and darkness conditions. ERPs to cued and uncued non-targets were compared as a function of task (High vs. Low load) and illumination condition (Light vs. Darkness). Results revealed that the locus of tactile spatial attention was determined by a complex interaction between perceptual load and illumination conditions during sensory-specific stages of processing. In the Darkness condition, earlier effects of attention were present in the High load than in the Low load task, while no difference between tasks emerged in the Light condition. By contrast, increased load was associated with stronger attention effects during later post-perceptual processing stages regardless of illumination conditions. These findings demonstrate that ERP correlates of tactile spatial attention are strongly affected by the perceptual load of the target/non-target discrimination. However, differences between illumination conditions show that the impact of load on tactile attention depends on the presence of visual information. Perceptual load is one of the many factors that contribute to determining the effects of spatial selectivity in touch.
Turney, Indira C; Dennis, Nancy A
Previous memory research has exploited the perceptual similarities between lures and targets in order to evoke false memories. Nevertheless, while some studies have attempted to use lures that are objectively more similar than others, no study has systematically controlled for perceptual overlap between target and lure items and its role in accounting for false alarm rates or the neural processes underlying such perceptual false memories. The current study looked to fill this gap in the literature by using a face-morphing program to systematically control the amount of perceptual overlap between lures and targets. Our results converge with previous studies in finding a pattern of differences between true and false memories. Most importantly, expanding upon this work, parametric analyses showed that false memory activity increases with the similarity between lures and targets within bilateral middle temporal gyri and right medial prefrontal cortex (mPFC). Moreover, this pattern of activation was unique to false memories and could not be accounted for by relatedness alone. Connectivity analyses further found that activity in the mPFC and left middle temporal gyrus co-varies, suggestive of gist-based monitoring within the context of false memories. Interestingly, neither the MTL nor the fusiform face area exhibited modulation as a function of target-lure relatedness. Overall, these results provide insight into the processes underlying false memories and further enhance our understanding of the role perceptual similarity plays in supporting false memories.
Sofer, Imri; Crouzet, Sébastien M.; Serre, Thomas
Observers can rapidly perform a variety of visual tasks such as categorizing a scene as open, as outdoor, or as a beach. Although we know that different tasks are typically associated with systematic differences in behavioral responses, to date, little is known about the underlying mechanisms. Here, we implemented a single integrated paradigm that links perceptual processes with categorization processes. Using a large image database of natural scenes, we trained machine-learning classifiers to derive quantitative measures of task-specific perceptual discriminability based on the distance between individual images and different categorization boundaries. We showed that the resulting discriminability measure accurately predicts variations in behavioral responses across categorization tasks and stimulus sets. We further used the model to design an experiment, which challenged previous interpretations of the so-called “superordinate advantage.” Overall, our study suggests that observed differences in behavioral responses across rapid categorization tasks reflect natural variations in perceptual discriminability. PMID:26335683
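The discriminability measure described, distance between an image and a categorization boundary, can be sketched for the simplest case of a linear classifier: the signed distance of a feature vector to the hyperplane w·x + b = 0. The weights, bias, and 2-D features below are hypothetical stand-ins for the trained machine-learning classifier.

```python
import math

def boundary_distance(features, w, b):
    """Signed distance from a feature vector to the hyperplane w.x + b = 0;
    larger magnitude = farther from the categorization boundary."""
    dot = sum(wi * xi for wi, xi in zip(w, features))
    return (dot + b) / math.sqrt(sum(wi * wi for wi in w))

# Hypothetical 2-D scene features against a 'beach vs. non-beach' boundary.
w, b = [3.0, 4.0], -1.0
easy_image = boundary_distance([2.0, 1.0], w, b)   # far from the boundary
hard_image = boundary_distance([0.3, 0.1], w, b)   # near the boundary
```

On the account above, `easy_image` should be categorized faster and more accurately than `hard_image`, since behavioral responses track this perceptual discriminability.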
Santamaría-García, Hernando; Pannunzi, Mario; Ayneto, Alba; Deco, Gustavo; Sebastián-Gallés, Nuria
It has so far been unclear whether social hierarchy can influence sensory or perceptual cognitive processes. We evaluated the effects of social hierarchy on these processes using a basic visual perceptual decision task. We constructed a social hierarchy in which participants performed the perceptual task separately with two covertly simulated players (superior, inferior). Participants were faster (better) when performing the discrimination task with the superior player. We studied the time course of social hierarchy processing using event-related potentials and observed hierarchical effects even in early stages of sensory-perceptual processing, suggesting early top-down modulation by social hierarchy. Moreover, in a parallel analysis, we fitted a drift-diffusion model (DDM) to the results to evaluate the decision-making process of this perceptual task in the context of a social hierarchy. Consistently, the DDM pointed to nondecision time (probably perceptual encoding) as the principal period influenced by social hierarchy.
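The drift-diffusion decomposition the authors fit can be sketched as: response time = decision time (noisy evidence accumulation to a boundary) + nondecision time t0. All parameter values here are illustrative, with the hierarchy effect modeled, as the DDM fit suggests, purely as a change in t0.

```python
import math
import random

def ddm_trial(drift, boundary, t0, rng, dt=0.001, noise=1.0):
    """One diffusion trial: accumulate noisy evidence to +/-boundary, add t0."""
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return t0 + t, x > 0          # (response time, upper-boundary choice)

def mean_rt(t0, n=200, seed=0):
    """Mean RT over n trials; a fixed seed makes decision times identical
    across runs, isolating the t0 effect."""
    rng = random.Random(seed)
    return sum(ddm_trial(1.0, 1.0, t0, rng)[0] for _ in range(n)) / n

rt_superior = mean_rt(t0=0.25)   # shorter perceptual encoding (superior player)
rt_inferior = mean_rt(t0=0.35)
```

Because only t0 differs, the mean RT advantage equals the t0 difference exactly, which is the signature distinguishing a nondecision-time effect from a drift or boundary effect.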
Stevenson, Richard J.; Case, Trevor I.; Tomiczek, Caroline
Olfactory memory is especially persistent. The current study explored whether this applies to a form of perceptual learning, in which experience of an odor mixture results in greater judged similarity between its elements. Experiment 1A contrasted 2 forms of interference procedure, "compound" (mixture AW, followed by presentation of new mixtures…
Adler, Scott A.
Textons are elongated blobs of specific color, angular orientation, ends of lines, and crossings of line segments that are proposed to be the perceptual building blocks of the visual system. A study was conducted to explore the relative memorability of different types and arrangements of textons, exploring the time course for the discrimination…
Lamata, Pablo; Gomez, Enrique J; Hernández, Félix Lamata; Oltra Pastor, Alfonso; Sanchez-Margallo, Francisco Miquel; Del Pozo Guerrero, Francisco
Human perceptual capabilities related to the laparoscopic interaction paradigm are not well known. Their study is important for the design of virtual reality simulators, and for the specification of augmented reality applications that overcome current limitations and provide super-sensing to the surgeon. As part of this work, this article addresses the study of laparoscopic pulling forces. Two definitions are proposed to frame the problem: the perceptual fidelity boundary, the limit of human perceptual capabilities, and the utile fidelity boundary, which encapsulates the perceived aspects actually used by surgeons to guide an operation. The study then aims to define the perceptual fidelity boundary of laparoscopic pulling forces. This is approached with an experimental design in which surgeons assess the resistance against pulling of four different tissues, which are characterized with both in vivo interaction forces and ex vivo tissue biomechanical properties. A logarithmic law of tissue consistency perception is found by comparing subjective ratings with objective parameters. A model of this perception is developed, identifying the main parameters: the grade of fixation of the organ, the tissue stiffness, the amount of tissue bitten, and the organ mass being pulled. These results provide a clear requirements analysis for the force feedback algorithm of a virtual reality laparoscopic simulator. Finally, some discussion is raised about the suitability of augmented reality applications around this surgical gesture.
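A logarithmic (Fechner-type) law relating subjective consistency ratings to an objective tissue parameter can be sketched as a least-squares fit of rating = k·ln(stiffness) + c. The stiffness values and ratings below are invented for illustration, not the study's data.

```python
import math

def fit_log_law(stiffness, ratings):
    """Closed-form least-squares fit of ratings = k*ln(stiffness) + c."""
    xs = [math.log(s) for s in stiffness]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ratings) / n
    k = (sum((x - mx) * (y - my) for x, y in zip(xs, ratings))
         / sum((x - mx) ** 2 for x in xs))
    return k, my - k * mx

# Invented stiffness values (arbitrary units) and ratings generated
# from a known law with k = 2, c = 1.
stiff = [1.0, 2.0, 5.0, 10.0, 20.0]
rates = [2.0 * math.log(s) + 1.0 for s in stiff]
k, c = fit_log_law(stiff, rates)   # recovers k ~ 2, c ~ 1
```

Fitting in log-space is what makes equal *ratios* of stiffness map to equal *steps* in perceived consistency, the defining property of a logarithmic perceptual law.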
According to Alan Millar, justified beliefs are well-founded beliefs. Millar cashes out the notion of well-foundedness in terms of having an adequate reason to believe something and believing it for that reason. To make his account of justified belief compatible with perceptual justification he...
Burchfield, L.A.; Luk, S.H.K.; Antoniou, M.; Cutler, A.
Lexically guided perceptual learning refers to the use of lexical knowledge to retune speech categories and thereby adapt to a novel talker's pronunciation. This adaptation has been extensively documented, but primarily for segmental-based learning in English and Dutch. In languages with lexical
Couperus, Jane W.
Research suggests that visual selective attention develops across childhood. However, there is relatively little understanding of the neurological changes that accompany this development, particularly in the context of adult theories of selective attention, such as N. Lavie's (1995) perceptual load theory of attention. This study examined visual…
Hampton, James A.; Estes, Zachary; Simmons, Claire L.
People categorized pairs of perceptual stimuli that varied in both category membership and pairwise similarity. Experiments 1 and 2 showed categorization of 1 color of a pair to be reliably contrasted from that of the other. This similarity-based contrast effect occurred only when the context stimulus was relevant for the categorization of the…
Noting that one would expect members of cultural groups whose modes of child rearing foster individual autonomy to achieve more articulated perceptual functioning than persons reared in societies where conformity and emotional dependence are stressed, this article discusses a study which compared two Israeli sub-groups and two…
Bele, Irene Velsvik
This study focuses on speaking voice quality in male teachers (n = 35) and male actors (n = 36), who represent untrained and trained voice users, because we wanted to investigate normal and supranormal voices. Both substantive and methodologic aspects were considered. The study includes a method for perceptual voice evaluation, and a basic issue was rater reliability. A listening group of 10 listeners (7 experienced speech-language therapists and 3 speech-language therapist students) evaluated the voices on 15 vocal characteristics using VA scales. Two sets of voice signals were investigated: text reading (2 loudness levels) and sustained vowel (3 levels). The results indicated high interrater reliability for most perceptual characteristics. Both types of voice signals were evaluated reliably, although reliability was somewhat higher for connected speech, especially at the normal loudness level, than for sustained vowels. Experienced listeners tended to be more consistent in their ratings than the student raters. Some vocal characteristics achieved acceptable reliability even with a smaller panel of listeners. The perceptual characteristics grouped into 4 factors reflecting perceptual dimensions.
Toet, A.; IJspeert, J.K.
Human perceptual performance was tested with images of nighttime outdoor scenes. The scenes were registered both with a dual band (visual and near infrared) image intensified low-light CCD camera (DII) and with a thermal middle wavelength band (3-5 μm) infrared (IR) camera. Fused imagery was
Slofstra, Christien; Nauta, Maaike H; Holmes, Emily A; Bockting, Claudi L H
Imagery rescripting (ImRs) is a process by which aversive autobiographical memories are rendered less unpleasant or emotional. According to a cognitive hypothesis of ImRs, it is thought to be effective only if a change in the meaning-relevant (semantic) content of the mental image is produced. We propose an additional hypothesis: that ImRs can also be effective through the manipulation of perceptual features of the memory, without explicitly targeting meaning-relevant content. In two experiments using a within-subjects design (both N = 48, community samples), both Conceptual-ImRs (focusing on changing meaning-relevant content) and Perceptual-ImRs (focusing on changing perceptual features) were compared to Recall-only of aversive autobiographical image-based memories. An active control condition, Recall + Attentional Breathing (Recall+AB), was added in the first experiment. In the second experiment, a Positive-ImRs condition was added, changing the aversive image into a positive image that was unrelated to the aversive autobiographical memory. Effects on the aversive memory's unpleasantness, vividness and emotionality were investigated. In Experiment 1, compared to Recall-only, both Conceptual-ImRs and Perceptual-ImRs led to greater decreases in unpleasantness, and Perceptual-ImRs led to greater decreases in emotionality of memories. In Experiment 2, the effects on unpleasantness were not replicated, and both Conceptual-ImRs and Perceptual-ImRs led to greater decreases in emotionality compared to Recall-only, as did Positive-ImRs. There were no effects on vividness, and the ImRs conditions did not differ significantly from Recall+AB. Results suggest that, in addition to traditional forms of ImRs that target the meaning-relevant content of an image, relatively simple techniques focusing on perceptual aspects or positive imagery might also yield benefits. Findings require replication and extension to clinical samples.
Moevus, Antoine; Mignotte, Max; de Guise, Jacques A; Meunier, Jean
Gait is an essential human activity, resulting from collaborative interactions among the neurological, articular and musculoskeletal systems working efficiently together. This explains why gait analysis is important and increasingly used nowadays for the diagnosis of many different types of disease (neurological, muscular, orthopedic, etc.). This paper introduces a novel method to quickly visualize the different parts of the body related to an asymmetric movement in the human gait of a patient for daily clinical usage. The proposed gait analysis algorithm relies on the fact that the healthy walk has (temporally shift-invariant) symmetry properties in the coronal plane. The goal is to provide an inexpensive and easy-to-use method, exploiting an affordable consumer depth sensor, the Kinect, to measure gait asymmetry and display results in a perceptual way. We propose a multi-dimensional scaling mapping using a temporally shift-invariant distance, allowing us to efficiently visualize (in terms of perceptual color difference) the asymmetric body parts of the gait cycle of a subject. We also propose an index computed from this map that quantifies the degree of asymmetry both locally and globally. The proposed index is shown to be statistically significant, and this new, inexpensive, marker-less, non-invasive, easy-to-set-up gait analysis system offers a readable and flexible tool for clinicians to analyze gait characteristics and provide a fast diagnosis. This system, which estimates a perceptual color map providing a quick overview of asymmetry existing in the gait cycle of a subject, can easily be exploited for monitoring disease progression, for recovery cues after surgery (e.g., to check the healing process or the effect of a treatment or a prosthesis), or for other pathologies where gait asymmetry might be a symptom.
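The temporally shift-invariant distance idea can be sketched as the minimum RMS difference between left- and right-side signals over all circular time shifts: a healthy gait, where one side simply repeats the other half a cycle later, scores near zero. The sinusoidal "knee-height" signals below are synthetic stand-ins for the Kinect depth measurements.

```python
import math

def asymmetry_index(left, right):
    """Minimum, over circular time shifts, of the RMS left/right difference;
    ~0 for movements that are identical up to a temporal shift."""
    n = len(left)
    best = float('inf')
    for tau in range(n):
        shifted = right[tau:] + right[:tau]
        rms = math.sqrt(sum((l, r)[0] ** 0 and (l - r) ** 2
                            for l, r in zip(left, shifted)) / n)
        best = min(best, rms)
    return best

# Synthetic signals over one gait cycle of n samples.
n = 100
left = [math.sin(2 * math.pi * i / n) for i in range(n)]
symmetric = left[n // 2:] + left[:n // 2]   # right = left, half a cycle later
reduced = [0.5 * v for v in left]           # right side moves with less range

healthy = asymmetry_index(left, symmetric)  # ~0: symmetry up to a time shift
impaired = asymmetry_index(left, reduced)   # clearly > 0
```

The per-body-part values of such an index are what the multi-dimensional scaling step would then map to perceptual color differences.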
Task-specific improvement in performance after training is well established. The finding that learning is stimulus-specific and does not transfer well between different stimuli, between stimulus locations in the visual field, or between the two eyes has been used to support the notion that neurons or assemblies of neurons are modified at the earliest stage of cortical processing. However, the mechanism underlying perceptual learning remains a matter of debate. Nevertheless, generalization of a trained task to other functions is an important key, both for understanding the neural mechanisms and for the practical value of the training. This manuscript describes a structured perceptual learning method that was previously used (for amblyopia and myopia), together with a novel technique and results applied to presbyopia. In general, subjects were trained for contrast detection of Gabor targets under lateral masking conditions. Training improved contrast sensitivity and diminished lateral suppression where it existed (amblyopia). The improvement transferred to unrelated functions such as visual acuity. The new results for presbyopia show substantial improvement of spatial and temporal contrast sensitivity, leading to improved processing speed of target detection as well as reaction time. Consequently, subjects benefited by eliminating the need for reading glasses. Thus, the transfer of functions indicates that the specificity of improvement in the trained task can be generalized by repetitive practice of target detection, covering a sufficient range of spatial frequencies and orientations, leading to an improvement in unrelated visual functions. Perceptual learning can therefore be a practical method to improve visual functions in people with impaired or blurred vision.
Belaïd, N.; Martens, J.B.
One way of optimizing a display is to maximize the number of distinguishable grey levels, which in turn is equivalent to perceptually linearizing the display. Perceptual linearization implies that equal steps in grey value evoke equal steps in brightness sensation. The key to perceptual
de Fockert, Jan W.
The perceptual load and dilution models differ fundamentally in terms of the proposed mechanism underlying variation in distractibility during different perceptual conditions. However, both models predict that distracting information can be processed beyond perceptual processing under certain conditions, a prediction that is well-supported by the literature. Load theory proposes that in such cases, where perceptual task aspects do not allow for sufficient attentional selectivity, the maintenance of task-relevant processing depends on cognitive control mechanisms, including working memory. The key prediction is that working memory plays a role in keeping clear processing priorities in the face of potential distraction, and the evidence reviewed and evaluated in a meta-analysis here supports this claim, by showing that the processing of distracting information tends to be enhanced when load on a concurrent task of working memory is high. Low working memory capacity is similarly associated with greater distractor processing in selective attention, again suggesting that the unavailability of working memory during selective attention leads to an increase in distractibility. Together, these findings suggest that selective attention against distractors that are processed beyond perception depends on the availability of working memory. Possible mechanisms for the effects of working memory on selective attention are discussed. PMID:23734139
Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K
A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT: How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. The perceptual echo is a
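The core echo analysis, cross-correlating the random luminance sequence with the occipital signal, can be sketched with a synthetic "EEG" built as a delayed, damped copy of the input plus noise; the lag, gain, noise level, and sampling parameters are all illustrative.

```python
import random

def cross_correlation(x, y, max_lag):
    """Cross-correlation of stimulus x with response y at lags 0..max_lag."""
    n = len(x)
    return [sum(x[i] * y[i + lag] for i in range(n - max_lag))
            for lag in range(max_lag + 1)]

rng = random.Random(1)
lum = [rng.uniform(-1, 1) for _ in range(2000)]   # random luminance sequence

# Synthetic occipital response: the input echoed lag_true samples later,
# damped by 0.5, plus Gaussian noise.
lag_true = 50
eeg = [0.0] * len(lum)
for i in range(len(lum) - lag_true):
    eeg[i + lag_true] += 0.5 * lum[i]
eeg = [v + 0.2 * rng.gauss(0.0, 1.0) for v in eeg]

xcorr = cross_correlation(lum, eeg, max_lag=100)
peak_lag = max(range(len(xcorr)), key=lambda k: xcorr[k])  # recovers lag_true
```

In the real phenomenon the cross-correlogram shows a sustained ∼10 Hz reverberation rather than a single peak, and the reported amplitude growth with repetition would appear as a scaling of that reverberation.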
Speech perception requires rapid extraction of the linguistic content from the acoustic signal. The ability to efficiently process rapid changes in auditory information is important for decoding speech and thereby crucial during language acquisition. Investigating functional networks of speech perception in infancy might elucidate neuronal ensembles supporting perceptual abilities that gate language acquisition. Interhemispheric specializations for language have been demonstrated in infants. How these asymmetries are shaped by basic temporal acoustic properties is under debate. We recently provided evidence that newborns process non-linguistic sounds sharing temporal features with language in a differential and lateralized fashion. The present study used the same material while measuring brain responses of 6- and 3-month-old infants using simultaneous recordings of electroencephalography (EEG) and near-infrared spectroscopy (NIRS). NIRS reveals that the lateralization observed in newborns remains constant over the first months of life. While fast acoustic modulations elicit bilateral neuronal activations, slow modulations lead to right-lateralized responses. Additionally, auditory evoked potentials and oscillatory EEG responses show differential responses for fast and slow modulations, indicating a sensitivity to temporal acoustic variations. Oscillatory responses reveal an effect of development: 6- but not 3-month-old infants show stronger theta-band desynchronization for slowly modulated sounds. Whether this developmental effect is due to increasing fine-grained perception for spectrotemporal sounds in general remains speculative. Our findings support the notion that a more general specialization for acoustic properties can be considered the basis for lateralization of speech perception. The results show that concurrent assessment of vascular-based imaging and electrophysiological responses has great potential in the research on language
Lavie, Nilli; Beck, Diane M; Konstantinou, Nikos
What is the relationship between attention and conscious awareness? Awareness sometimes appears to be restricted to the contents of focused attention, yet at other times irrelevant distractors will dominate awareness. This contradictory relationship has also been reflected in an abundance of discrepant research findings leading to an enduring controversy in cognitive psychology. Lavie's load theory of attention suggests that the puzzle can be solved by considering the role of perceptual load. Although distractors will intrude upon awareness in conditions of low load, awareness will be restricted to the content of focused attention when the attended information involves high perceptual load. Here, we review recent evidence for this proposal with an emphasis on the various subjective blindness phenomena, and their neural correlates, induced by conditions of high perceptual load. We also present novel findings that clarify the role of attention in the response to stimulus contrast. Overall, this article demonstrates a critical role for perceptual load across the spectrum of perceptual processes leading to awareness, from the very early sensory responses related to contrast detection to explicit recognition of semantic content.
Vladusich, Tony; McDonnell, Mark D
When we look at the world--or a graphical depiction of the world--we perceive surface materials (e.g. a ceramic black and white checkerboard) independently of variations in illumination (e.g. shading or shadow) and atmospheric media (e.g. clouds or smoke). Such percepts are partly based on the way physical surfaces and media reflect and transmit light and partly on the way the human visual system processes the complex patterns of light reaching the eye. One way to understand how these percepts arise is to assume that the visual system parses patterns of light into layered perceptual representations of surfaces, illumination and atmospheric media, one seen through another. Despite a great deal of previous experimental and modelling work on layered representation, however, a unified computational model of key perceptual demonstrations is still lacking. Here we present the first general computational model of perceptual layering and surface appearance--based on a broader theoretical framework called gamut relativity--that is consistent with these demonstrations. The model (a) qualitatively explains striking effects of perceptual transparency, figure-ground separation and lightness, (b) quantitatively accounts for the role of stimulus- and task-driven constraints on perceptual matching performance, and (c) unifies two prominent theoretical frameworks for understanding surface appearance. The model thereby provides novel insights into the remarkable capacity of the human visual system to represent and identify surface materials, illumination and atmospheric media, which can be exploited in computer graphics applications.
Perception is an inferential process, which becomes immediately evident when sensory information is conflicting or ambiguous and thus allows for more than one perceptual interpretation. Thinking the idea of perception as inference through to the end results in a blurring of boundaries between perception and action selection, as perceptual inference implies the construction of a percept as an active process. Here we therefore wondered whether perception shares a key characteristic of action selection, namely that it is shaped by reinforcement learning. In two behavioral experiments, we used binocular rivalry to examine whether perceptual inference can be influenced by the association of perceptual outcomes with reward or punishment, respectively, in analogy to instrumental conditioning. Binocular rivalry was evoked by two orthogonal grating stimuli presented to the two eyes, resulting in perceptual alternations between the two gratings. Perception was tracked indirectly and objectively through a target detection task, which allowed us to preclude potential reporting biases. Monetary rewards or punishments were given repeatedly during perception of only one of the two rivalling stimuli. We found an increase in dominance durations for the percept associated with reward, relative to the non-rewarded percept. In contrast, punishment led to an increase of the non-punished percept and a relative decrease of the punished percept. Our results show that perception shares key characteristics with action selection, in that it is influenced by reward and punishment in opposite directions, thus narrowing the gap between the conceptually separated domains of perception and action selection. We conclude that perceptual inference is an adaptive process that is shaped by its consequences.
Among the most significant questions we are confronted with today are the integration of the brain's micro-circuitry, our ability to build the complex social networks that underpin society, and how our society impacts on our ecological environment. In trying to unravel these issues, one place to begin is at the level of the individual: to consider how we accumulate information about our environment, how this information leads to decisions and how our individual decisions in turn create our social environment. While this is an enormous task, we may already have at hand many of the tools we need. This article is intended to review some of the recent results in neuro-cognitive research and show how they can be extended to two very specific types of expertise: perceptual expertise and social cognition. These two cognitive skills span a vast range of our genetic heritage. Perceptual expertise developed very early in our evolutionary history and is likely a highly developed part of all mammals' cognitive ability. On the other hand, social cognition is most highly developed in humans in that we are able to maintain larger and more stable long term social connections with more behaviourally diverse individuals than any other species. To illustrate these ideas I will discuss board games as a toy model of social interactions as they include many of the relevant concepts: perceptual learning, decision-making, long term planning and understanding the mental states of other people. Using techniques that have been developed in mathematical psychology, I show that we can represent some of the key features of expertise using stochastic differential equations. Such models demonstrate how an expert's long exposure to a particular context influences the information they accumulate in order to make a decision. These processes are not confined to board games, we are all experts in our daily lives through long exposure to the many regularities of daily tasks and
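The evidence-accumulation idea described here is commonly formalized as a drift-diffusion model, a stochastic differential equation simulated with the Euler-Maruyama scheme. The drift, threshold, and noise values below are hypothetical choices for illustration, not parameters from the reviewed work:

```python
import numpy as np

def ddm_trial(drift, threshold=1.0, noise=1.0, dt=0.001, rng=None):
    """Simulate one drift-diffusion trial with the Euler-Maruyama scheme.

    Noisy evidence accumulates until it crosses +threshold or -threshold.
    Returns (choice, reaction_time); choice is +1 (the bound favored by a
    positive drift) or -1.
    """
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else -1), t

# With a positive drift (e.g. an expert extracting strong evidence per unit
# time), most trials terminate at the upper bound
rng = np.random.default_rng(1)
results = [ddm_trial(drift=1.5, rng=rng) for _ in range(200)]
accuracy = sum(choice == 1 for choice, _ in results) / len(results)
```

Increasing the drift rate models greater expertise (faster, more reliable evidence accumulation), while widening the threshold trades speed for accuracy, which is the standard way such models link context exposure to decision behavior.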
Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.
The scaling behavior of the pixel fluctuations of encrypted images is evaluated by using detrended fluctuation analysis (DFA) based on wavelets, a modern technique that has been successfully used recently for a wide range of natural phenomena and technological processes. As encryption algorithms, we use the Advanced Encryption Standard (AES) in RBT mode and two versions of a cryptosystem based on cellular automata, with the encryption process applied both fully and partially by selecting different bitplanes. In all cases, the results show that the encrypted images, in which no understandable information can be visually appreciated and whose pixels look totally random, exhibit scaling behavior with the scaling exponent α close to 0.5, implying no correlation between pixels when the wavelet-based DFA is applied. This suggests that the scaling exponents of the encrypted images can be used as a perceptual security criterion in the sense that when their values are close to 0.5 (the white-noise value) the encrypted images are more secure also from the perceptual point of view.
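As context for the α ≈ 0.5 criterion: in DFA, an exponent near 0.5 indicates uncorrelated, white-noise-like fluctuations, while larger exponents indicate persistent correlations. A minimal sketch of classic polynomial-detrended DFA (not the wavelet-based variant the study uses), applied to white noise standing in for flattened encrypted-image pixels:

```python
import numpy as np

def dfa_alpha(signal, scales=(8, 16, 32, 64, 128)):
    """Estimate the DFA scaling exponent alpha of a 1-D signal.

    alpha ~ 0.5 : uncorrelated (white-noise-like) fluctuations
    alpha > 0.5 : persistent long-range correlations
    """
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated series
    fluct = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        sq_resid = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)       # local linear detrending
            sq_resid.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(sq_resid)))
    # slope of log F(s) versus log s gives alpha
    alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return alpha

rng = np.random.default_rng(0)
noise_alpha = dfa_alpha(rng.standard_normal(4096))  # stand-in for encrypted pixels
```

A well-encrypted image, whose flattened pixel sequence should be indistinguishable from white noise, is expected to yield `noise_alpha` close to 0.5 under this criterion.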
Mattys, Sven L; Palmer, Shekeila D
Performing a secondary task while listening to speech has a detrimental effect on speech processing, but the locus of the disruption within the speech system is poorly understood. Recent research has shown that cognitive load imposed by a concurrent visual task increases dependency on lexical knowledge during speech processing, but it does not affect lexical activation per se. This suggests that "lexical drift" under cognitive load occurs either as a post-lexical bias at the decisional level or as a secondary consequence of reduced perceptual sensitivity. This study aimed to adjudicate between these alternatives using a forced-choice task that required listeners to identify noise-degraded spoken words with or without the addition of a concurrent visual task. Adding cognitive load increased the likelihood that listeners would select a word acoustically similar to the target even though its frequency was lower than that of the target. Thus, there was no evidence that cognitive load led to a high-frequency response bias. Rather, cognitive load seems to disrupt sublexical encoding, possibly by impairing perceptual acuity at the auditory periphery.
Chen, Zhe; Cave, Kyle R
Perceptual load theory (Lavie, 2005) claims that attentional capacity that is not used for the current task is allocated to irrelevant distractors. It predicts that if the attentional demands of the current task are high, distractor interference will be low. One particularly powerful demonstration of perceptual load effects on distractor processing relies on a go/no-go cue that is interpreted by either simple feature detection or feature conjunction (Lavie, 1995). However, a possible alternative interpretation of these effects is that the differential degree of distractor processing is caused by how broadly attention is allocated (attentional zoom) rather than by perceptual load. In 4 experiments, we show that when stimuli are arranged to equalize the extent of spatial attention across conditions, distractor interference varies little whether cues are defined by a simple feature or a conjunction, and that the typical perceptual load effect emerges only when attentional zoom can covary with perceptual load. These results suggest that attentional zoom can account for the differential degree of distractor processing traditionally attributed to perceptual load in the go/no-go paradigm. They also provide new insight into how different factors interact to control distractor interference.
Miellet, Sébastien; O'Donnell, Patrick J; Sereno, Sara C
Models of eye guidance in reading rely on the concept of the perceptual span: the amount of information perceived during a single eye fixation, which is considered to be a consequence of visual and attentional constraints. To directly investigate attentional mechanisms underlying the perceptual span, we implemented a new reading paradigm, parafoveal magnification (PM), which compensates for how visual acuity drops off as a function of retinal eccentricity. On each fixation and in real time, parafoveal text is magnified to equalize its perceptual impact with that of concurrent foveal text. Experiment 1 demonstrated that PM does not increase the amount of text that is processed, supporting an attentional-based account of eye movements in reading. Experiment 2 explored a contentious issue that differentiates competing models of eye movement control and showed that, even when parafoveal information is enlarged, visual attention in reading is allocated in a serial fashion from word to word.
Baldwin, Mark W; Bagust, Jeff; Docherty, Sharon; Browman, Alexander S; Jackson, Joshua C
We theorized that interpersonal relationships can provide structures for experience. In particular, we tested whether primes of same-sex versus mixed-sex relationships could foster cognitive-perceptual processing styles known to be associated with independence versus interdependence respectively. Seventy-two participants visualized either a same-sex or other-sex relationship partner and then performed two measures of cognitive-perceptual style. On a computerized Rod and Frame Test, individuals were more field-dependent after visualizing a mixed-sex versus same-sex relationship partner. On a measure involving perceptions of group behavior, participants demonstrated more holistic/contextually based perception after being primed with a female versus male relationship partner. These findings support the hypothesis that activated cognitive structures representing interpersonal relationships can shape individuals' cognitive-perceptual performance.
Kennedy, Kristen M; Partridge, Ty; Raz, Naftali
Aging is associated with reduced performance on information processing speed, memory, and executive function tasks. Although older adults are also less apt in acquiring new perceptual-motor skills, it is unclear whether and how skill acquisition difficulties are associated with age-related general cognitive differences. We addressed this question by examining structural relations among measures of cognitive resources (working memory) and indices of perceptual-motor skill acquisition (pursuit rotor and mirror tracing) in 96 healthy adults aged 19-80 years. Three competing structural models were tested: a single (common) factor model, a dual correlated factors model, and a hierarchical dual-factor model. The third model provided the best fit to the data, indicating that age differences in simple perceptual-motor skill are partially mediated by more complex abilities.
Williams, A Mark; Ericsson, K Anders
The number of researchers studying perceptual-cognitive expertise in sport is increasing. The intention in this paper is to review the currently accepted framework for studying expert performance and to consider implications for undertaking research work in the area of perceptual-cognitive expertise in sport. The expert performance approach presents a descriptive and inductive approach for the systematic study of expert performance. The nature of expert performance is initially captured in the laboratory using representative tasks that identify reliably superior performance. Process-tracing measures are employed to determine the mechanisms that mediate expert performance on the task. Finally, the specific types of activities that lead to the acquisition and development of these mediating mechanisms are identified. General principles and mechanisms may be discovered and then validated by more traditional experimental designs. The relevance of this approach to the study of perceptual-cognitive expertise in sport is discussed and suggestions for future work highlighted.
Stone, M.; Ladd, S. L.; Gabrieli, J. D.
Two kinds of perceptual priming (word identification and word fragment completion), as well as preference priming (that may rely on special affective mechanisms) were examined after participants either read or named the colors of words and nonwords at study. Participants named the colors of words more slowly than the colors of nonwords, indicating that lexical processing of the words occurred at study. Nonetheless, priming on all three tests was lower after color naming than after reading, despite evidence of lexical processing during color naming shown by slower responses to words than to nonwords. These results indicate that selective attention to (rather than the mere processing of) letter string identity at study is important for subsequent repetition priming.
Gao, Chuanji; Hermiller, Molly S; Voss, Joel L; Guo, Chunyan
It is difficult to pinpoint the border between perceptual and conceptual processing, despite their treatment as distinct entities in many studies of recognition memory. For instance, alteration of simple perceptual characteristics of a stimulus can radically change meaning, such as the color of bread changing from white to green. We sought to better understand the role of perceptual and conceptual processing in memory by identifying the effects of changing a basic perceptual feature (color) on behavioral and neural correlates of memory in circumstances when this change would be expected to either change the meaning of a stimulus or to have no effect on meaning (i.e., to influence conceptual processing or not). Abstract visual shapes ("squiggles") were colorized during study and presented during test in either the same color or a different color. Those squiggles that subjects found to resemble meaningful objects supported behavioral measures of conceptual priming, whereas meaningless squiggles did not. Further, changing color from study to test had a selective effect on behavioral correlates of priming for meaningful squiggles, indicating that color change altered conceptual processing. During a recognition memory test, color change altered event-related brain potential (ERP) correlates of memory for meaningful squiggles but not for meaningless squiggles. Specifically, color change reduced the amplitude of frontally distributed N400 potentials (FN400), implying that these potentials indicated conceptual processing during recognition memory that was sensitive to color change. In contrast, color change had no effect on FN400 correlates of recognition for meaningless squiggles, which were overall smaller in amplitude than for meaningful squiggles (further indicating that these potentials signal conceptual processing during recognition). Thus, merely changing the color of abstract visual shapes can alter their meaning, changing behavioral and neural correlates of memory
Our ability to listen selectively to single sound sources in complex auditory environments is termed ‘auditory stream segregation.’ This ability is affected by peripheral disorders such as hearing loss, as well as by plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant (CI) recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from the distractor notes. Four physical properties of the distractor notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference on the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, the results show that differences in training, as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device), influence the way that listeners use different acoustic cues for segregating interleaved musical streams.
Mill, Robert W.; Bőhm, Tamás M.; Bendixen, Alexandra; Winkler, István; Denham, Susan L.
Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the
Streb, Markus; Pfaltz, Monique; Michael, Tanja
Intrusive memories are a hallmark symptom of posttraumatic stress disorder (PTSD). They reflect excessive and uncontrolled retrieval of the traumatic memory. Acute elevations of cortisol are known to impair the retrieval of already stored memory information. Thus, continuous cortisol administration might help in reducing intrusive memories in PTSD. Strong perceptual priming for neutral stimuli associated with a “traumatic” context has been shown to be one important learning mechanism that leads to intrusive memories. However, the memory-modulating effects of cortisol have only been shown for explicit declarative memory processes. Thus, in our double-blind, placebo-controlled study we aimed to investigate whether cortisol influences perceptual priming of neutral stimuli that appeared in a “traumatic” context. Two groups of healthy volunteers (N = 160) watched either neutral or “traumatic” picture stories on a computer screen. Neutral objects were presented in between the pictures. Memory for these neutral objects was tested after 24 hours with a perceptual priming task and an explicit memory task. Prior to memory testing half of the participants in each group received 25 mg of cortisol, the other half received placebo. In the placebo group participants in the “traumatic” stories condition showed more perceptual priming for the neutral objects than participants in the neutral stories condition, indicating a strong perceptual priming effect for neutral stimuli presented in a “traumatic” context. In the cortisol group this effect was not present: Participants in the neutral stories and participants in the “traumatic” stories condition in the cortisol group showed comparable priming effects for the neutral objects. Our findings show that cortisol inhibits perceptual priming for neutral stimuli that appeared in a “traumatic” context. These findings indicate that cortisol influences PTSD-relevant memory processes and thus further support
Williams, D; Julesz, B
A fundamental property of human visual perception is our ability to distinguish between textures. A concerted effort has been made to account for texture segregation in terms of linear spatial filter models and their nonlinear extensions. However, for certain texture pairs the ease of discrimination changes when the role of figure and ground are reversed. This asymmetry poses a problem for both linear and nonlinear models. We have isolated a property of texture perception that can account for this asymmetry in discrimination: subjective closure. This property, which is also responsible for visual illusions, appears to be explainable by early visual processes alone. Our results force a reexamination of the process of human texture segregation and of some recent models that were introduced to explain it.
Hsu, Li-Chuan; Kramer, Peter; Yeh, Su-Ling
After prolonged viewing, a static target among moving non-targets is perceived to repeatedly disappear and reappear. An uncrossed stereoscopic disparity of the target facilitates this Motion-Induced Blindness (MIB). Here we test whether monocular depth cues can affect MIB too, and whether they can also affect perceptual fading in static displays. Experiment 1 reveals an effect of interposition: more MIB when the target appears partially covered by, than when it appears to cover, its surroundings. Experiment 2 shows that the effect is indeed due to interposition and not to the target's contours. Experiment 3 induces depth with the watercolor illusion and replicates Experiment 1. Experiments 4 and 5 replicate Experiments 1 and 3 without the use of motion. Since almost any stimulus contains a monocular depth cue, we conclude that perceived depth affects perceptual fading in almost any stimulus, whether dynamic or static.
Kee, Eric; Farid, Hany
In recent years, advertisers and magazine editors have been widely criticized for taking digital photo retouching to an extreme. Impossibly thin, tall, and wrinkle- and blemish-free models are routinely splashed onto billboards, advertisements, and magazine covers. The ubiquity of these unrealistic and highly idealized images has been linked to eating disorders and body image dissatisfaction in men, women, and children. In response, several countries have considered legislating the labeling of retouched photos. We describe a quantitative and perceptually meaningful metric of photo retouching. Photographs are rated on the degree to which they have been digitally altered by explicitly modeling and estimating geometric and photometric changes. This metric correlates well with perceptual judgments of photo retouching and can be used to objectively judge by how much a retouched photo has strayed from reality.
Sloutsky, Vladimir M.
People are remarkably smart: they use language, possess complex motor skills, make non-trivial inferences, develop and use scientific theories, make laws, and adapt to complex dynamic environments. Much of this knowledge requires concepts and this paper focuses on how people acquire concepts. It is argued that conceptual development progresses from simple perceptual grouping to highly abstract scientific concepts. This proposal of conceptual development has four parts. First, it is argued that categories in the world have different structure. Second, there might be different learning systems (sub-served by different brain mechanisms) that evolved to learn categories of differing structures. Third, these systems exhibit differential maturational course, which affects how categories of different structures are learned in the course of development. And finally, an interaction of these components may result in the developmental transition from perceptual groupings to more abstract concepts. This paper reviews a large body of empirical evidence supporting this proposal.
Michael J Proulx
A sensory substitution device for blind persons aims to provide the missing visual input by converting images into a form that another modality can perceive, such as sound. Here I will discuss the perceptual learning and attentional mechanisms necessary for interpreting sounds produced by a device (The vOICe) in a visuospatial manner. Although some aspects of the conversion, such as relating vertical location to pitch, rely on natural crossmodal mappings, the extensive training required suggests that synthetic mappings are required to generalize perceptual learning to new objects and environments, and ultimately to experience visual qualia. Here I will discuss the effects of the conversion and training on perception and attention that demonstrate the synthetic nature of learning the crossmodal mapping. Sensorimotor experience may be required to facilitate learning, to develop expertise, and to develop a form of synthetic synaesthesia.
Investors often make decisions without being aware of the factors that influence them. The main purpose of the present research is to study the perceptual factors affecting the decision-making process of investors and the effect of information on these factors. For this aim, 385 investors of Tehran Stock ...
Veispak, Anneli; Boets, Bart; Mannamaa, Mairi; Ghesquiere, Pol
Similar to many sighted children who struggle with learning to read, a proportion of blind children have specific difficulties related to reading braille which cannot be easily explained. A lot of research has been conducted to investigate the perceptual and cognitive processes behind (impairments in) print reading. Very few studies, however, have…
Tsal, Yehoshua; Benoni, Hanna
The substantial distractor interference obtained for small displays when the target appears alone is reduced in large displays when the target is embedded among neutral letters. This finding has been interpreted as reflecting low-load and high-load processing, respectively, thereby supporting the theory of perceptual load (Lavie & Tsal, 1994).…
Mann, D.L.; Dicks, M.; Canal Bruland, R.; van der Kamp, J.
Neurophysiological measurement techniques like fMRI and TMS are increasingly being used to examine the perceptual-motor processes underpinning the ability to anticipate the actions of others. Crucially, these techniques invariably restrict the experimental task that can be used and consequently
Baudouin, Jean-Yves; Gallay, Mathieu; Durand, Karine; Robichon, Fabrice
This study investigated children's perceptual ability to process second-order facial relations. In total, 78 children in three age groups (7, 9, and 11 years) and 28 adults were asked to say whether the eyes were the same distance apart in two side-by-side faces. The two faces were similar on all points except the space between the eyes, which was…
Saija, Jefta D.; Andringa, Tjeerd C.; Başkent, Deniz; Akyürek, Elkan G.
Temporal integration is the perceptual process combining sensory stimulation over time into longer percepts that can span over 10 times the duration of a minimally detectable stimulus. Particularly in the auditory domain, such "long-term" temporal integration has been characterized as a relatively
Brons, Inge; Houben, Rolph; Dreschler, Wouter A.
Noise reduction and dynamic-range compression are generally applied together in hearing aids but may have opposite effects on amplification. This study evaluated the acoustical and perceptual effects of separate and combined processing of noise reduction and compression. Recordings of the output of
Vargas, Iliana M.; Voss, Joel L.; Paller, Ken A.
In some circumstances, accurate recognition of repeated images in an explicit memory test is driven by implicit memory. We propose that this “implicit recognition” results from perceptual fluency that influences responding without awareness of memory retrieval. Here we examined whether recognition would vary if images appeared in the same or different visual hemifield during learning and testing. Kaleidoscope images were briefly presented left or right of fixation during divided-attention enc...
Inferring causality is a fundamental feature of human cognition that allows us to theorize about and predict future states of the world. Michotte suggested that humans automatically perceive causality based on certain perceptual features of events. However, individual differences in judgments of perceptual causality cast doubt on Michotte’s view. To gain insight into the neural basis of individual differences in the perception of causality, our participants judged causal relationships in animations of a blue ball colliding with a red ball (a launching event) while fMRI data were acquired. Spatial continuity and temporal contiguity were varied parametrically in these stimuli. We did not find consistent brain activation differences between trials judged as caused and those judged as non-caused, making it unlikely that humans have a universal instantiation of perceptual causality in the brain. However, participants were slower to respond to, and showed greater neural activity for, violations of causality, suggesting that humans are biased to expect causal relationships when moving objects appear to interact. Our participants demonstrated considerable individual differences in their sensitivity to spatial and temporal characteristics in perceiving causality. These qualitative differences in sensitivity to time or space in perceiving causality were instantiated in individual differences in activation of the left basal ganglia or right parietal lobe, respectively. Thus, the perception that the movement of one object causes the movement of another is triggered by elemental spatial and temporal sensitivities, which themselves are instantiated in specific distinct neural networks.
Park, Hyeong-Dong; Tallon-Baudry, Catherine
The report 'I saw the stimulus' operationally defines visual consciousness, but where does the 'I' come from? To account for the subjective dimension of perceptual experience, we introduce the concept of the neural subjective frame. The neural subjective frame would be based on the constantly updated neural maps of the internal state of the body and constitute a neural reference from which first-person experience can be created. We propose to root the neural subjective frame in the neural representation of visceral information, which is transmitted through multiple anatomical pathways to a number of target sites, including the posterior insula, ventral anterior cingulate cortex, amygdala, and somatosensory cortex. We review existing experimental evidence showing that the processing of external stimuli can interact with visceral function. The neural subjective frame is a low-level building block of subjective experience that is not explicitly experienced by itself and that is necessary but not sufficient for perceptual experience. It could also underlie other types of subjective experience, such as self-consciousness and emotional feelings. Because the neural subjective frame is tightly linked to homeostatic regulation involved in vigilance, it could also link state and content consciousness.
Li, Tianhao; Fu, Qian-Jie
To determine whether perceptual adaptation improves voice gender discrimination of spectrally shifted vowels and, if so, which acoustic cues contribute to the improvement. Voice gender discrimination was measured for 10 normal-hearing subjects, during 5 days of adaptation to spectrally shifted vowels, produced by processing the speech of 5 male and 5 female talkers with 16-channel sine-wave vocoders. The subjects were randomly divided into 2 groups; one subjected to 50-Hz, and the other to 200-Hz, temporal envelope cutoff frequencies. No preview or feedback was provided. There was significant adaptation in voice gender discrimination with the 200-Hz cutoff frequency, but significant improvement was observed only for 3 female talkers with F(0) > 180 Hz and 3 male talkers with F(0) gender discrimination under spectral shift conditions with perceptual adaptation, but spectral shift may limit the exclusive use of spectral information and/or the use of formant structure on voice gender discrimination. The results have implications for cochlear implant users and for understanding voice gender discrimination.
Baldassarre, Antonello; Capotosto, Paolo; Committeri, Giorgia; Corbetta, Maurizio
The ability to learn and process visual stimuli more efficiently is important for survival. Previous neuroimaging studies have shown that perceptual learning on a shape identification task differentially modulates activity in both frontal-parietal cortical regions and visual cortex (Sigman et al., 2005; Lewis et al., 2009). Specifically, fronto-parietal regions (i.e., the intraparietal sulcus, pIPS) became less activated for trained as compared to untrained stimuli, while visual regions (i.e., V2d/V3 and LO) exhibited higher activation for familiar shapes. Here, after the intensive training, we employed transcranial magnetic stimulation over both visual occipital and parietal regions previously shown to be modulated, to investigate their causal role in learning the shape identification task. We report that interference with V2d/V3 and LO increased reaction times to learned stimuli as compared to pIPS and the Sham control condition. Moreover, the impairments observed after stimulation over the two visual regions were positively correlated. These results strongly support a causal role of the visual network in perceptual learning. Copyright © 2016 Elsevier Inc. All rights reserved.
Cohen, Yamit; Daikhin, Luba; Ahissar, Merav
What do we learn when we practice a simple perceptual task? Many studies have suggested that we learn to refine or better select the sensory representations of the task-relevant dimension. Here we show that learning is specific to the trained structural regularities. Specifically, when this structure is modified after training with a fixed temporal structure, performance regresses to pretraining levels, even when the trained stimuli and task are retained. This specificity raises key questions as to the importance of low-level sensory modifications in the learning process. We trained two groups of participants on a two-tone frequency discrimination task for several days. In one group, a fixed reference tone was consistently presented in the first interval (the second tone was higher or lower), and in the other group the same reference tone was consistently presented in the second interval. When, following training, these temporal protocols were switched between groups, performance of both groups regressed to pretraining levels, and further training was needed to attain postlearning performance. ERP measures, taken before and after training, indicated that participants implicitly learned the temporal regularity of the protocol and formed an attentional template that matched the trained structure of information. These results are consistent with Reverse Hierarchy Theory, which posits that even the learning of simple perceptual tasks progresses in a top-down manner, hence can benefit from temporal regularities at the trial level, albeit at the potential cost that learning may be specific to these regularities.
Previous ERP studies have shown that the N2pc serves as an index for salient stimuli that capture attention, even if they are task irrelevant. This study aims to investigate whether nonsalient stimuli can capture attention automatically and unconsciously after perceptual learning. Adult subjects were trained with a visual search task for eight to ten sessions. The training task was to detect whether the target (a triangle with one particular orientation) was present or not. After training, an ERP session was performed, in which subjects were required to detect the presence of either the trained triangle (i.e., the target triangle in the training sessions) or an untrained triangle. Results showed that, while the untrained triangle did not elicit an N2pc effect, the trained triangle elicited a significant N2pc effect regardless of whether it was perceived correctly or not, even when it was task irrelevant. Moreover, the N2pc effect for the trained triangle was completely retained 3 months later. These results suggest that, after perceptual learning, previously unsalient stimuli become more salient and can capture attention automatically and unconsciously. Once the facilitating process of the unsalient stimulus has been built up in the brain, it can last for a long time.
Wang, Zhengke; Cheng-Lai, Alice; Song, Yan; Cutting, Laurie; Jiang, Yuzheng; Lin, Ou; Meng, Xiangzhi; Zhou, Xiaolin
Learning to read involves discriminating between different written forms and establishing connections with phonology and semantics. This process may be partially built upon visual perceptual learning, during which the ability to process the attributes of visual stimuli progressively improves with practice. The present study investigated to what extent Chinese children with developmental dyslexia have deficits in perceptual learning by using a texture discrimination task, in which participants were asked to discriminate the orientation of target bars. Experiment 1 demonstrated that, when all of the participants started with the same initial stimulus-to-mask onset asynchrony (SOA) of 300 ms, the threshold SOA, adjusted according to response accuracy to reach 80% accuracy, did not decrease over 5 days of training for children with dyslexia, whereas it steadily decreased over training for the control group. Experiment 2 used an adaptive procedure to determine the threshold SOA for each participant during training. Results showed that both the dyslexia group and the control group attained perceptual learning over the sessions in 5 days, although the threshold SOAs were significantly higher for the dyslexia group than for the control group; moreover, across participants, the threshold SOA correlated negatively with performance in Chinese character recognition. These findings suggest that deficits in visual perceptual processing and learning might, in part, underpin difficulty in reading Chinese. Copyright © 2014 John Wiley & Sons, Ltd.
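The adaptive procedure used to track each participant's threshold SOA is described only at a high level in the abstract. A standard way to converge on an approximately 80%-correct threshold is a three-down/one-up staircase. The sketch below is a hypothetical illustration of that general technique, not the authors' implementation; the function name, step size, and floor are assumptions:

```python
def staircase_soa(trial_results, start_soa=300, step=20, floor=40):
    """Simulate a 3-down/1-up staircase over stimulus-to-mask SOA (ms).

    Three consecutive correct responses shorten the SOA (making the
    task harder); any error lengthens it (making it easier). This rule
    converges on the SOA yielding ~79% accuracy, close to the 80%
    criterion mentioned in the abstract.
    trial_results: iterable of booleans (True = correct response).
    Returns the SOA trajectory, one entry per trial plus the start.
    """
    soa = start_soa
    correct_run = 0
    history = [soa]
    for correct in trial_results:
        if correct:
            correct_run += 1
            if correct_run == 3:                 # 3 correct in a row -> harder
                soa = max(floor, soa - step)
                correct_run = 0
        else:                                    # any error -> easier
            soa = min(start_soa, soa + step)
            correct_run = 0
        history.append(soa)
    return history
```

In practice the threshold is estimated by averaging the SOA values at the staircase's last several reversal points.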
In “The Work of Art in the Age of Its Technological Reproducibility,” Walter Benjamin suggested that the human perceptual field in his time would become more distracted by the intervention of technologies, and so the masses’ tactility activated by distraction would become more important in mechanized perception. Regarding this historical situation, Benjamin anticipated that the new mode of mass perception would be organized through people's collective “innervation” to technologies. This article aims to contextualize this physiological term's cultural, technical, and political implications within various discourses about perception, from late 19th-century physiology to early 20th-century film theories. Benjamin considers tactility as people's potential to reconstruct the optical scheme of perception from the “flatness of screen,” in which distances between viewers and perceived objects collapse. In a similar vein, late 19th-century physiology reconceptualized perception in its relation not so much to the transcendental division of subject/object as to the sensual condition of the retina as “a single immanent plane.” From this perspective, perception is a phenomenon entailed by a body's contact with a sensual environment, so how sense inputs circulate in a neural network is a determinant for explaining perceptual processes. With regard to this paradigm change, the invention of cinema in the late 19th century was significant because it radically changed the composition of the perceptual field in two directions. Cinema introduced virtualized perceptual fields on which sense circulations were completely controlled by the operation of the camera. At the same time, the mediation of projectors in theaters reorganized viewers’ neural paths for perceptual innervation. As Hugo Münsterberg's and Sergei Eisenstein's theories reflect, cinematic media's intervention in the perceptual field made it possible for masses’ collective
Edwards, Brent W.; van Tasell, Dianne J.
Hearing aid capabilities have increased dramatically over the past six years, in large part due to the development of small, low-power digital signal processing chips suitable for hearing aid applications. As hearing aid signal processing capabilities increase, there will be new opportunities to apply perceptually based knowledge to technological development. Most hearing loss compensation techniques in today's hearing aids are based on simple estimates of audibility and loudness. As our understanding of the psychoacoustical and physiological characteristics of sensorineural hearing loss improves, the result should be improved design of hearing aids and fitting methods. The state of the art in hearing aids will be reviewed, including form factors, user requirements, and technology that improves speech intelligibility, sound quality, and functionality. General areas of auditory perception that remain unaddressed by current hearing aid technology will be discussed.
Inderbitzin, Martin P; Betella, Alberto; Lanatá, Antonio; Scilingo, Enzo P; Bernardet, Ulysses; Verschure, Paul F M J
Affective processes appraise the salience of external stimuli, preparing the agent for action. So far, the relationship between stimuli, affect, and action has been studied mainly under highly controlled laboratory conditions. To test whether this relationship generalizes to social interaction, we assess the influence of the salience of social stimuli on human interaction. We constructed a ball game in a mixed-reality space in which pairs of people collaborated in order to compete with an opposing team. We coupled the players with team members of varying social salience by using both physical and virtual representations of remote players (i.e., avatars). We observe that, irrespective of the team composition, winners and losers display significantly different inter- and intrateam spatial behaviors. We show that subjects regulate their interpersonal distance to both virtual and physical team members in similar ways, but in proportion to the vividness of the stimulus. As an independent validation of this social salience effect, we show that this behavioral effect is also displayed in physiological correlates of arousal. In addition, we found a strong correlation between performance, physiology, and the subjective reports of the subjects. Our results show that proxemics is consistent with affective responses, confirming the existence of a social salience effect. This provides further support for the so-called law of apparent reality and generalizes it to the social realm, where it can be used to design more efficient social artifacts. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Benoni, Hanna; Tsal, Yehoshua
The present paper provides a short critical review of the theory of perceptual load. It closely examines the basic tenets and assumptions of the theory and identifies major conceptual and methodological problems that have been largely ignored in the literature. The discussion focuses on problems in the definition of the concept of perceptual load, on the circularity in the characterization and manipulation of perceptual load and the confusion between the concept of perceptual load and its operationalization. The paper also selectively reviews evidence supporting the theory as well as inconsistent evidence which proposed alternative dominant factors influencing the efficacy of attentional selection.
Abiri, Ahmad; Tao, Anna; LaRocca, Meg; Guan, Xingmin; Askari, Syed J; Bisley, James W; Dutson, Erik P; Grundfest, Warren S
The principal objective of the experiment was to analyze the effects of the clutch operation of robotic surgical systems on the performance of the operator. The relative coordinate system introduced by the clutch operation can create a visual-perceptual mismatch, which can potentially have a negative impact on a surgeon's performance. We also assess the impact of the introduction of additional tactile sensory information on reducing the impact of visual-perceptual mismatch on the performance of the operator. We asked 45 novice subjects to complete peg transfers using the da Vinci IS 1200 system with grasper-mounted, normal force sensors. The task involves picking up a peg with one of the robotic arms, passing it to the other arm, and then placing it on the opposite side of the view. Subjects were divided into three groups: an aligned group (no mismatch), a misaligned group (10 cm z-axis mismatch), and a haptics-misaligned group (haptic feedback and z-axis mismatch). Each subject performed the task five times, during which the grip force, time of completion, and number of faults were recorded. Compared to the subjects that performed the tasks using a properly aligned controller/arm configuration, subjects with a single-axis misalignment showed significantly more peg drops (p = 0.011) and longer time to completion (p sensors showed no difference between the different groups. The visual-perceptual mismatch created by the misalignment of the robotic controls relative to the robotic arms has a negative impact on the operator of a robotic surgical system. Introduction of other sensory information and haptic feedback systems can help in potentially reducing this effect.
Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan
Post-perceptual cues can enhance visual short term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered as well as the mechanisms by which post-perceptual selective attention enhances short term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300ms or 850ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e. sensory stimulus characteristics were available for up to 850ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short term memory encoding. While N700 could reflect visual post processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.
Hodge, Victoria Jane; Eakins, John; Austin, Jim
In this paper, we investigate human visual perception and establish a body of ground truth data elicited from human visual studies. We aim to build on the formative work of Ren, Eakins and Briggs who produced an initial ground truth database. Human subjects were asked to draw and rank their perceptions of the parts of a series of figurative images. These rankings were then used to score the perceptions, identify the preferred human breakdowns and thus allow us to induce perceptual rules for h...
Starrfelt, Randi; Klargaard, Solja; Petersen, Anders
Traditionally, perceptual processing of faces and words is considered highly specialized, strongly lateralized, and largely independent. This has, however, recently been challenged by studies showing that learning to read may affect the perceptual and neural processes involved in face recognition......, a lower perceptual threshold, and higher processing speed for words compared to letters. In sum, we find no evidence that reading skills are abnormal in developmental prosopagnosia, a finding that may challenge the recently proposed hypothesis that reading development and face processing abilities...
Hicks, J.L.; Starns, J.J.
We used implicit measures of memory to ascertain whether false memories for critical nonpresented items in the DRM paradigm (Deese, 1959; Roediger & McDermott, 1995) contain structural and perceptual detail. In Experiment 1, we manipulated presentation modality in a visual word-stem-completion task. Critical item priming was significant and…
Besken, Miri; Mulligan, Neil W.
Judgments of learning (JOLs) are sometimes influenced by factors that do not impact actual memory performance. One recent proposal is that perceptual fluency during encoding affects metamemory and is a basis of metacognitive illusions. In the present experiments, participants identified aurally presented words that contained inter-spliced silences…
van Maanen, Leendert; Fontanesi, Laura; Hawkins, Guy E; Forstmann, Birte U
Deciding between multiple courses of action often entails an increasing need to do something as time passes - a sense of urgency. This notion of urgency is not incorporated in standard theories of speeded decision making that assume information is accumulated until a critical fixed threshold is reached. Yet, it is hypothesized in novel theoretical models of decision making. In two experiments, we investigated the behavioral and neural evidence for an "urgency signal" in human perceptual decision making. Experiment 1 found that as the duration of the decision making process increased, participants made a choice based on less evidence for the selected option. Experiment 2 replicated this finding, and additionally found that variability in this effect across participants covaried with activation in the striatum. We conclude that individual differences in susceptibility to urgency are reflected by striatal activation. By dynamically updating a response threshold, the striatum is involved in signaling urgency in humans. Copyright © 2016 Elsevier Inc. All rights reserved.
The task of a keyword recognition system is to detect the presence of certain words in a conversation based on the linguistic information present in human speech. Such keyword spotting systems have applications in homeland security, telephone surveillance, and human-computer interfacing. The general procedure of a keyword spotting system involves feature generation and matching. In this work, a new set of features based on the psycho-acoustic masking nature of human speech is proposed. After developing these features, a time-aligned pattern matching process was implemented to locate the words in a set of unknown words. A word boundary detection technique based on frame classification using the nonlinear characteristics of speech is also addressed in this work. Validation of this keyword spotting model was done using widely used cepstral features. The experimental results indicate the viability of using these perceptually significant features as an augmented feature set in keyword spotting.
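The abstract does not name its time-aligned matcher; dynamic time warping (DTW) is the classic technique for aligning a keyword template with frames of an utterance, and the sketch below illustrates it under that assumption. Feature vectors are reduced to plain lists of floats; the function name and cost choices are illustrative, not taken from the paper:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences.

    a, b: lists of per-frame feature vectors (lists of floats). A
    keyword template is slid against a stretch of the utterance; a
    lower distance means a better time-aligned match, tolerating
    local stretching and compression of the speech rate.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best warp of a[:i] onto b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # local cost: Euclidean distance between frame features
            cost = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1])) ** 0.5
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

A keyword is then reported wherever the DTW distance between its template and a window of the input falls below a tuned threshold.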
Benoni, Hanna; Tsal, Yehoshua
The theory of perceptual load (Lavie & Tsal, 1994) proposes that with low load in relevant processing, leftover resources spill over to process irrelevant distractors. Interference can be prevented only under high-load conditions, in which relevant processing exhausts attentional resources. The theory is based primarily on the finding that distractor interference obtained in low-load displays, when the target appears alone, is eliminated in high-load displays when it is embedded among neutral letters. However, a possible alternative interpretation of this effect is that the distractor is similarly processed in both displays, yet its interference in the large displays is diluted by the presence of the neutral letters. We separated the possible effects of load and dilution by adding dilution displays that were high in dilution and low in perceptual load. In the first experiment these displays contained as many letters as the high-load displays, but their neutral letters were clearly distinguished from the target, thereby allowing for a low-load processing mode. In the second experiment we presented identical multicolor displays in the dilution and high-load conditions. However, in the former the target color was known in advance (thereby preserving a low-load processing mode), whereas in the latter it was not. In both experiments distractor interference was completely eliminated under the dilution condition. Thus, it is dilution, not perceptual load, that affects distractor processing. Copyright © 2010 Elsevier Ltd. All rights reserved.
Whitton, Jonathon P; Hancock, Kenneth E; Shannon, Jeffrey M; Polley, Daniel B
Sensory and motor skills can be improved with training, but learning is often restricted to practice stimuli. As an exception, training on closed-loop (CL) sensorimotor interfaces, such as action video games and musical instruments, can impart a broad spectrum of perceptual benefits. Here we ask whether computerized CL auditory training can enhance speech understanding in levels of background noise that approximate a crowded restaurant. Elderly hearing-impaired subjects trained for 8 weeks on a CL game that, like a musical instrument, challenged them to monitor subtle deviations between predicted and actual auditory feedback as they moved their fingertip through a virtual soundscape. We performed our study as a randomized, double-blind, placebo-controlled trial by training other subjects in an auditory working-memory (WM) task. Subjects in both groups improved at their respective auditory tasks and reported comparable expectations for improved speech processing, thereby controlling for placebo effects. Whereas speech intelligibility was unchanged after WM training, subjects in the CL training group could correctly identify 25% more words in spoken sentences or digit sequences presented in high levels of background noise. Numerically, CL audiomotor training provided more than three times the benefit of our subjects' hearing aids for speech processing in noisy listening conditions. Gains in speech intelligibility could be predicted from gameplay accuracy and baseline inhibitory control. However, benefits did not persist in the absence of continuing practice. These studies employ stringent clinical standards to demonstrate that perceptual learning on a computerized audio game can transfer to "real-world" communication challenges. Copyright © 2017 Elsevier Ltd. All rights reserved.
Background: Medical students are expected to master the ability to interpret histopathologic images, a difficult and time-consuming process. A major problem is the issue of transferring information learned from one example of a particular pathology to a new example. Recent advances in cognitive science have identified new approaches to address this problem. Methods: We adapted a new approach for enhancing pattern recognition of basic pathologic processes in skin histopathology images that utilizes perceptual learning techniques, allowing learners to see relevant structure in novel cases, along with adaptive learning algorithms that space and sequence the different categories (e.g., diagnoses) appearing during a learning session based on each learner's accuracy and response time (RT). We developed a perceptual and adaptive learning module (PALM) that utilized 261 unique images of cell injury, inflammation, neoplasia, or normal histology at low and high magnification. Accuracy and RT were tracked and integrated into a "Score" that reflected students' rapid recognition of the pathologies, and pre- and post-tests were given to assess the effectiveness. Results: Accuracy, RT, and Scores significantly improved from the pre- to post-test, with Scores showing much greater improvement than accuracy alone. Delayed post-tests with previously unseen cases, given after 6-7 weeks, showed a decline in accuracy relative to the post-test for 1st-year students, but not significantly so for 2nd-year students. However, the delayed post-test scores maintained a significant and large improvement relative to those of the pre-test for both 1st- and 2nd-year students, suggesting good retention of pattern recognition. Student evaluations were very favorable. Conclusion: A web-based learning module based on the principles of cognitive science showed evidence of improved recognition of histopathology patterns by medical students.
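The adaptive sequencing idea above (spacing categories based on each learner's accuracy and response time) can be sketched as a simple priority score. This is a hypothetical illustration of the general technique, not the PALM implementation; the function names and the fluency formula are assumptions:

```python
def category_priority(accuracy, mean_rt, rt_floor=1.0):
    """Priority for re-presenting a diagnostic category.

    Low accuracy or slow responses -> higher priority, so categories a
    learner struggles with reappear sooner, while mastered ones are
    spaced further apart. accuracy is in [0, 1]; mean_rt is in seconds.
    """
    # Fluency: correct responses per second; the floor keeps very
    # fast responses from dominating the estimate.
    fluency = accuracy / max(mean_rt, rt_floor)
    return 1.0 / (fluency + 1e-6)       # slower/less accurate -> larger

def next_category(stats):
    """Pick the category to show next: the one with highest priority.

    stats: {category_name: (accuracy, mean_rt)} -- hypothetical
    per-learner bookkeeping updated after every trial.
    """
    return max(stats, key=lambda c: category_priority(*stats[c]))
```

A real scheduler would also enforce minimum spacing between repeats of the same category; this sketch only shows how accuracy and RT can jointly drive sequencing.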
The ease and efficiency with which we perceive objects in daily life masks the complexity of the processes involved. The main goal of my doctoral research was to enhance our understanding of the complex interplay between perceptual organization and object recognition. To this end, we investigated the dynamic interplay between different component processes of object recognition, and their temporal dynamics. In the first part of this thesis, I present three behavioral studies focusing on the ro...
HAN Shihui; Glyn W. Humphreys
This study examined the effects of attention on forming perceptual units by proximity grouping and by uniform connectedness (UC). In Experiment 1 a row of three global letters defined by either proximity or UC was presented at the center of the visual field. Participants were asked to identify the letter in the middle of stimulus arrays while ignoring the flankers. The stimulus onset asynchrony (SOA) between stimulus arrays and masks varied between 180 and 500 ms. We found that responses to targets defined by proximity grouping were slower than to those defined by UC at intermediate SOAs, but there were no differences at short or long SOAs. Incongruent flankers slowed responses to targets, and this flanker compatibility effect was larger for UC-defined than for proximity-defined flankers. Experiment 2 examined the effects of spatial precueing on discrimination responses to proximity- and UC-defined targets. The advantage for targets defined by UC over targets defined by proximity grouping was greater at uncued relative to cued locations. The results suggest that the advantage for UC over proximity grouping in forming perceptual units is contingent on the stimuli not being fully attended, and that paying attention to the stimuli differentially benefits proximity grouping.
Vargas, Iliana M; Voss, Joel L; Paller, Ken A
In some circumstances, accurate recognition of repeated images in an explicit memory test is driven by implicit memory. We propose that this "implicit recognition" results from perceptual fluency that influences responding without awareness of memory retrieval. Here we examined whether recognition would vary if images appeared in the same or different visual hemifield during learning and testing. Kaleidoscope images were briefly presented left or right of fixation during divided-attention encoding. Presentation in the same visual hemifield at test produced higher recognition accuracy than presentation in the opposite visual hemifield, but only for guess responses. These correct guesses likely reflect a contribution from implicit recognition, given that when the stimulated visual hemifield was the same at study and test, recognition accuracy was higher for guess responses than for responses with any level of confidence. The dramatic difference in guessing accuracy as a function of lateralized perceptual overlap between study and test suggests that implicit recognition arises from memory storage in visual cortical networks that mediate repetition-induced fluency increments.
Masrour, Farid; Nirshberg, Gregory; Schon, Michael; Leardi, Jason; Barrett, Emily
Some theorists hold that the human perceptual system has a component that receives input only from units lower in the perceptual hierarchy. This thesis, which we shall here refer to as the encapsulation thesis, has been at the center of a continuing debate for the past few decades. Those who deny the encapsulation thesis often rely on the large body of psychological findings that allegedly suggest that perception is influenced by factors such as the beliefs, desires, goals, and expectations of the perceiver. Proponents of the encapsulation thesis, however, often argue that, when correctly interpreted, these psychological findings are compatible with the thesis. In our view, the debate over the significance and the correct interpretation of these psychological findings has reached an impasse. We hold that this impasse is due to the methodological limitations of psychophysical experiments, and that it is very unlikely that such experiments, on their own, could yield results that would settle the debate. After defending this claim, we argue that integrating data from cognitive neuroscience resolves the debate in favor of those who deny the encapsulation thesis. PMID:26583001
Miner, Nadine Elizabeth
This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies conducted provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.
Kuo, M C C; Liu, K P Y; Ting, K H; Chan, C C H
This study examined the age-related subsequent memory effect (SME) in perceptual and semantic encoding using event-related potentials (ERPs). Seventeen younger adults and 17 older adults studied a series of Chinese characters either perceptually (by inspecting orthographic components) or semantically (by determining whether the depicted object makes sounds). The two tasks had similar levels of difficulty. The participants made studied or unstudied judgments during the recognition phase. Younger adults performed better in both conditions, with significant SMEs detected in the time windows of P2, N3, P550, and the late positive component (LPC). In the older group, SMEs were observed in the P2 and N3 latencies in both conditions but were detected in the P550 only in the semantic condition. Between-group analyses showed larger frontal and central SMEs in the younger sample in the LPC latency regardless of encoding type. Aging thus appears to affect perceptual encoding processes more strongly than semantic ones. The effects seem to be associated with a decline in updating and maintaining representations during perceptual encoding. The age-related decline in encoding function may be due in part to changes in frontal lobe function. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
Gibson, Cristina B; Cooper, Cecily D; Conger, Jay A
Previous distance-related theories and concepts (e.g., social distance) have failed to address the sometimes wide disparity in perceptions between leaders and the teams they lead. Drawing from the extensive literature on teams, leadership, and cognitive models of social information processing, the authors develop the concept of leader-team perceptual distance, defined as differences between a leader and a team in perceptions of the same social stimulus. The authors investigate the effects of perceptual distance on team performance, operationalizing the construct with 3 distinct foci: goal accomplishment, constructive conflict, and decision-making autonomy. Analyzing leader, member, and customer survey responses for a large sample of teams, the authors demonstrate that perceptual distances between a leader and a team regarding goal accomplishment and constructive conflict have a nonlinear relationship with team performance. Greater perceptual differences are associated with decreases in team performance. Moreover, this effect is strongest when a team's perceptions are more positive than the leader's (as opposed to the reverse). This pattern illustrates the pervasive effects that perceptions can have on team performance, highlighting the importance of developing awareness of perceptions in order to increase effectiveness. Implications for theory and practice are delineated. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
Taya, Shuichiro; Adams, Wendy J; Graf, Erich W; Lavie, Nilli
We tested contrasting predictions derived from perceptual load theory and from recent feature-based selection accounts. Observers viewed moving, colored stimuli and performed low or high load tasks associated with one stimulus feature, either color or motion. The resultant motion aftereffect (MAE) was used to evaluate attentional allocation. We found that task-irrelevant visual features received less attention than co-localized task-relevant features of the same objects. Moreover, when color and motion features were co-localized yet perceived to belong to two distinct surfaces, feature-based selection was further increased at the expense of object-based co-selection. Load theory predicts that the MAE for task-irrelevant motion would be reduced with a higher load color task. However, this was not seen for co-localized features; perceptual load only modulated the MAE for task-irrelevant motion when this was spatially separated from the attended color location. Our results suggest that perceptual load effects are mediated by spatial selection and do not generalize to the feature domain. Feature-based selection operates to suppress processing of task-irrelevant, co-localized features, irrespective of perceptual load.
Bélanger, Nathalie N; Lee, Michelle; Schotter, Elizabeth R
Recently, Bélanger, Slattery, Mayberry and Rayner (2012) showed, using the moving window paradigm, that profoundly deaf adults have a wider perceptual span during reading relative to hearing adults matched on reading level. This difference might be related to the fact that deaf adults allocate more visual attention to simple stimuli in the parafovea (Bavelier, Dye & Hauser, 2006). Importantly, this reorganization of visual attention in deaf individuals is already manifesting in deaf children (Dye, Hauser & Bavelier, 2009). This leads to questions about the time course of the emergence of an enhanced perceptual span (which is under attentional control; Rayner, 2014; Miellet, O'Donnell, & Sereno, 2009) in young deaf readers. The present research addressed this question by comparing the perceptual spans of young deaf readers (age 7-15) and young hearing children (age 7-15). Young deaf readers, like deaf adults, were found to have a wider perceptual span relative to their hearing peers matched on reading level, suggesting that strong and early reorganization of visual attention in deaf individuals goes beyond the processing of simple visual stimuli and emerges into more cognitively complex tasks, such as reading.
Yang, Wu-xia; Feng, Jie; Huang, Wan-ting; Zhang, Cheng-xiang; Nan, Yun
Congenital amusia is a musical disorder that mainly affects pitch perception. Among Mandarin speakers, some amusics also have difficulties in processing lexical tones (tone agnosics). To examine to what extent these perceptual deficits may be related to pitch production impairments in music and Mandarin speech, eight amusics, eight tone agnosics, and 12 age- and IQ-matched normal native Mandarin speakers were asked to imitate music note sequences and Mandarin words of comparable lengths. The results indicated that both the amusics and tone agnosics underperformed the controls on musical pitch production. However, tone agnosics performed no worse than the amusics, suggesting that lexical tone perception deficits may not aggravate musical pitch production difficulties. Moreover, these three groups were all able to imitate lexical tones with perfect intelligibility. Taken together, the current study shows that perceptual musical pitch and lexical tone deficits might coexist with musical pitch production difficulties. But at the same time these perceptual pitch deficits might not affect lexical tone production or the intelligibility of the speech words that were produced. The perception-production relationship for pitch among individuals with perceptual pitch deficits may be, therefore, domain-dependent. PMID:24474944
McDonough, Ian M; Cervantes, Sasha N; Gray, Stephen J; Gallo, David A
Episodic memory decline is a hallmark of normal cognitive aging. Here, we report the first event-related fMRI study to directly investigate age differences in the neural reactivation of qualitatively rich perceptual details during recollection. Younger and older adults studied pictures of complex scenes at different presentation durations along with descriptive verbal labels, and these labels subsequently were used during fMRI scanning to cue picture recollections of varying perceptual detail. As expected from prior behavioral work, the two age groups subjectively rated their recollections as containing similar amounts of perceptual detail, despite objectively measured recollection impairment in older adults. In both age groups, comparisons of retrieval trials that varied in recollected detail revealed robust activity in brain regions previously linked to recollection, including hippocampus and both medial and lateral regions of the prefrontal and posterior parietal cortex. Critically, this analysis also revealed recollection-related activity in visual processing regions that were active in an independent picture-perception task, and these regions showed age-related reductions in activity during recollection that cannot be attributed to age differences in response criteria. These fMRI findings provide new evidence that aging reduces the absolute quantity of perceptual details that are reactivated from memory, and they help to explain why aging reduces the reliability of subjective memory judgments. Copyright © 2014 Elsevier Inc. All rights reserved.
Curby, Kim M; Entenman, Robert J; Fleming, Justin T
What role do general-purpose, experience-sensitive perceptual mechanisms play in producing characteristic features of face perception? We previously demonstrated that different-colored, misaligned framing backgrounds, designed to disrupt perceptual grouping of face parts appearing upon them, disrupt holistic face perception. In the current experiments, a similar part-judgment task with composite faces was performed: face parts appeared in either misaligned, different-colored rectangles or aligned, same-colored rectangles. To investigate whether experience can shape impacts of perceptual grouping on holistic face perception, a pre-task fostered the perception of either (a) the misaligned, differently colored rectangle frames as parts of a single, multicolored polygon or (b) the aligned, same-colored rectangle frames as a single square shape. Faces appearing in the misaligned, differently colored rectangles were processed more holistically by those in the polygon-, compared with the square-, pre-task group. Holistic effects for faces appearing in aligned, same-colored rectangles showed the opposite pattern. Experiment 2, which included a pre-task condition fostering the perception of the aligned, same-colored frames as pairs of independent rectangles, provided converging evidence that experience can modulate impacts of perceptual grouping on holistic face perception. These results are surprising given the proposed impenetrability of holistic face perception and provide insights into the elusive mechanisms underlying holistic perception.
Lamichhane, Bidhan; Adhikari, Bhim M; Dhamala, Mukesh
Previous neuroimaging studies provide evidence for the involvement of the anterior insulae (INSs) in perceptual decision-making processes. However, how the insular cortex is involved in integration of degraded sensory information to create a conscious percept of environment and to drive our behaviors still remains a mystery. In this study, using functional magnetic resonance imaging (fMRI) and four different perceptual categorization tasks in visual and audio-visual domains, we measured blood oxygen level dependent (BOLD) signals and examined the roles of INSs in easy and difficult perceptual decision-making. We created a varying degree of degraded stimuli by manipulating the task-specific stimuli in these four experiments to examine the effects of task difficulty on insular cortex response. We hypothesized that significantly higher BOLD response would be associated with the ambiguity of the sensory information and decision-making difficulty. In all of our experimental tasks, we found the INS activity consistently increased with task difficulty and participants' behavioral performance changed with the ambiguity of the presented sensory information. These findings support the hypothesis that the anterior insulae are involved in sensory-guided, goal-directed behaviors and their activities can predict perceptual load and task difficulty. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
Maylor, E A; Lavie, N
The effect of perceptual load on age differences in visual selective attention was examined in 2 studies. In Experiment 1, younger and older adults made speeded choice responses indicating which of 2 target letters was present in a relevant set of letters in the center of the display while they attempted to ignore an irrelevant distractor in the periphery. The perceptual load of relevant processing was manipulated by varying the central set size. When the relevant set size was small, the adverse effect of an incompatible distractor was much greater for the older participants than for the younger ones. However, with larger relevant set sizes, this was no longer the case, with the distractor effect decreasing for older participants at lower levels of perceptual load than for younger ones. In Experiment 2, older adults were tested with the empty locations in the central set either unmarked (as in Experiment 1) or marked by small circles to form a group of 6 items irrespective of set size; the 2 conditions did not differ markedly, ruling out an explanation based entirely on perceptual grouping.
The latest neuroimaging studies of implicit memory have revealed that different types of implicit memory may be processed by different parts of the brain. However, studies have rarely examined which subtypes of implicit memory processes are affected in patients with various brain injuries. Twenty patients with frontal lobe injury, 25 patients with occipital lobe injury, and 29 healthy controls were recruited for the study. Two subtypes of implicit memory were investigated using structurally parallel perceptual (picture identification task) and conceptual (category exemplar generation task) implicit memory tests in the three groups, as well as explicit memory tests. The results indicated that priming on the conceptual implicit memory and explicit memory tasks in patients with frontal lobe injury was poorer than that observed in healthy controls, while perceptual implicit memory was identical between the two groups. In contrast, priming of perceptual implicit memory in patients with occipital lobe injury was poorer than that in healthy controls, while priming of conceptual implicit memory and explicit memory was similar to that in healthy controls. This double dissociation between perceptual and conceptual implicit memory across brain areas implies that the occipital lobes may participate in perceptual implicit memory, while the frontal lobes may be involved in processing conceptual memory.
Aslin, Richard N.
Bhatt and Quinn (2011) provide a compelling and comprehensive review of empirical evidence that supports the operation of principles of perceptual organization in young infants. They also have provided a comprehensive list of experiences that could serve to trigger the learning of at least some of these principles of perceptual organization, and…
Sun, Sai; Yu, Rongjun; Wang, Shuo
People often make perceptual decisions with ambiguous information, but it remains unclear whether the brain has a common neural substrate that encodes various forms of perceptual ambiguity. Here, we used three types of perceptually ambiguous stimuli as well as task instructions to examine the neural basis for both stimulus-driven and task-driven perceptual ambiguity. We identified a neural signature, the late positive potential (LPP), that encoded a general form of stimulus-driven perceptual ambiguity. In addition to stimulus-driven ambiguity, the LPP was also modulated by ambiguity in task instructions. To further specify the functional role of the LPP and elucidate the relationship between stimulus ambiguity, behavioral response, and the LPP, we employed regression models and found that the LPP was specifically associated with response latency and confidence rating, suggesting that the LPP encoded decisions under perceptual ambiguity. Finally, direct behavioral ratings of stimulus and task ambiguity confirmed our neurophysiological findings, which could not be attributed to differences in eye movements either. Together, our findings argue for a common neural signature that encodes decisions under perceptual ambiguity but is subject to the modulation of task ambiguity. Our results represent an essential first step toward a complete neural understanding of human perceptual decision making.
Li, Haishan; He, Qingshun
Ambiguity tolerance and perceptual learning styles are the two influential elements showing individual differences in EFL learning. This research is intended to explore the relationship between Chinese EFL learners' ambiguity tolerance and their preferred perceptual learning styles. The findings include (1) the learners are sensitive to English…
Lavie, Nilli; Lin, Zhicheng; Zokaei, Nahid; Thoma, Volker
Predictions from perceptual load theory (Lavie, 1995, 2005) regarding object recognition across the same or different viewpoints were tested. Results showed that high perceptual load reduces distracter recognition levels despite always presenting distracter objects from the same view. They also showed that the levels of distracter recognition were…
The perceptual hash algorithm is a technique for authenticating the integrity of images. While a few scholars have worked on mono-spectral image perceptual hashing, there is limited research on multispectral image perceptual hashing. In this paper, we propose a perceptual hash algorithm for the content authentication of a multispectral remote sensing image based on the synthetic characteristics of each band: firstly, the multispectral remote sensing image is preprocessed with band clustering and grid partition; secondly, the edge feature of the band subsets is extracted by band fusion-based edge feature extraction; thirdly, the perceptual feature of the same region of the band subsets is compressed and normalized to generate the perceptual hash value. Authentication is achieved via the normalized Hamming distance between the recomputed perceptual hash value and the original hash value. The experiments indicated that our proposed algorithm is robust to content-preserving operations and efficiently authenticates the integrity of multispectral remote sensing images.
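The authentication logic in this abstract (grid partition, per-region feature, hash generation, normalized-Hamming-distance check) can be sketched for a single band as follows. This is a deliberately simplified illustration, not the paper's multispectral pipeline: the cell-mean feature, the 4x4 grid, and the 0.2 acceptance threshold are all assumptions made for the sketch.

```python
# Minimal single-band perceptual-hash sketch (hypothetical simplification):
# partition the image into a grid, compute a coarse feature per cell,
# threshold against the mean to obtain a bit string, then authenticate
# a received image via normalized Hamming distance to the stored hash.

def perceptual_hash(image, grid=4):
    """image: 2D list of pixel intensities; returns a list of bits."""
    h, w = len(image), len(image[0])
    ch, cw = h // grid, w // grid
    cells = []
    for gy in range(grid):
        for gx in range(grid):
            block = [image[y][x]
                     for y in range(gy * ch, (gy + 1) * ch)
                     for x in range(gx * cw, (gx + 1) * cw)]
            cells.append(sum(block) / len(block))  # mean intensity per cell
    mean = sum(cells) / len(cells)
    return [1 if c >= mean else 0 for c in cells]

def normalized_hamming(h1, h2):
    """Fraction of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2)) / len(h1)

def authenticate(original_hash, received_image, threshold=0.2):
    """Accept the image if its recomputed hash is close to the original."""
    distance = normalized_hamming(original_hash,
                                  perceptual_hash(received_image))
    return distance <= threshold
```

Because the hash is built from coarse region statistics rather than raw bytes, content-preserving operations (mild compression, slight noise) flip few bits and pass the threshold, while content tampering in any grid cell flips that cell's bit.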
J.C. Gower (John); P.J.F. Groenen (Patrick); M. van de Velden (Michel); K. Vines (Karen)
Perceptual maps are often used in marketing to visually study relations between two or more attributes. However, in many perceptual maps published in the recent literature it remains unclear what is being shown and how the relations between the points in the map can be interpreted or
Hajnal, A; Grocki, M; Jacobs, DM; Zaal, FTJM; Michaels, CF
Runeson, Juslin, and Olsson (2000) proposed (a) that perceptual learning entails a transition from an inferential to a direct-perceptual mode of apprehension, and (b) that relative confidence (the difference between estimated and actual performance) indicates whether apprehension is inferential or direct-perceptual.
Sergent, Marie T.; Sedlacek, William E.
Describes perceptual mapping, a newly developed method for assessing perceptions of campus environments. Describes evaluation of a student union by students using this method. Discusses the advantages and disadvantages of this perceptual mapping method for assessing college environments. (Author/ABL)
Mann, D.L.; Ryu, D.; Abernethy, B.A.; Poolton, J.M.
The purpose of this study was to determine whether decision-making skill in perceptual-cognitive tasks could be enhanced using a training technique that impaired selective areas of the visual field. Recreational basketball players performed perceptual training over 3 days while viewing with a
Fair, Joseph; Flom, Ross; Jones, Jacob; Martin, Justin
Six-month-olds reliably discriminate different monkey and human faces whereas 9-month-olds only discriminate different human faces. It is often falsely assumed that perceptual narrowing reflects a permanent change in perceptual abilities. In 3 experiments, ninety-six 12-month-olds' discrimination of unfamiliar monkey faces was examined. Following…
Santangelo, Valerio; Spence, Charles
We compared the ability of auditory, visual, and audiovisual (bimodal) exogenous cues to capture visuo-spatial attention under conditions of no load versus high perceptual load. Participants had to discriminate the elevation (up vs. down) of visual targets preceded by either unimodal or bimodal cues under conditions of high perceptual load (in…
During the first year of life, infants' face recognition abilities are subject to "perceptual narrowing", the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in…
Hugo Cezar Palhares Ferreira
The binding of information in visual short-term memory may occur incidentally when information that is irrelevant to the task at hand is stored together with relevant information. We investigated the process of the incidental conjunction of color and shape (Exp1) and its potential association with the selection of information relevant to the memory task (Exp2). The results of Exp1 show that color and shape are incidentally and asymmetrically conjoined: color interferes with the recognition of shape; however, shape does not interfere with the recognition of color. In Exp2, we investigated whether an increase in perceptual load would eliminate the processing of irrelevant information. The results of this experiment show that even with a high perceptual load the incidental conjunction is not affected, and color continues to interfere with shape recognition, suggesting that the incidental conjunction is an automatic process.
Giesbrecht, Barry; Sy, Jocelyn; Bundesen, Claus; Kyllingsbaek, Søren
The human attention system helps us cope with a complex environment by supporting the selective processing of information relevant to our current goals. Understanding the perceptual, cognitive, and neural mechanisms that mediate selective attention is a core issue in cognitive neuroscience. One prominent model of selective attention, known as load theory, offers an account of how task demands determine when information is selected and an account of the efficiency of the selection process. However, load theory has several critical weaknesses that suggest that it is time for a new perspective. Here we review the strengths and weaknesses of load theory and offer an alternative biologically plausible computational account that is based on the neural theory of visual attention. We argue that this new perspective provides a detailed computational account of how bottom-up and top-down information is integrated to provide efficient attentional selection and allocation of perceptual processing resources. © 2014 New York Academy of Sciences.
Gao, Ying; Dong, Junyu; Lou, Jianwen; Qi, Lin; Liu, Jun
A typical texture retrieval system performs feature comparison and might not be able to make human-like judgments of image similarity. Meanwhile, it is commonly known that perceptual texture similarity is difficult to describe with traditional image features. In this paper, we propose a new texture retrieval scheme based on perceptual texture similarity. The key to the proposed scheme is that the prediction of perceptual similarity is performed by learning a non-linear mapping from image feature space to perceptual texture space using Random Forest. We test the method on a natural texture dataset and apply it to a new wallpapers dataset. Experimental results demonstrate that the proposed texture retrieval scheme with perceptual similarity improves retrieval performance over traditional image features.
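The core idea here, ranking retrieval results by a learned prediction of perceptual similarity rather than by raw feature distance, can be sketched in a few lines. To keep the sketch self-contained, a trivial 1-nearest-neighbour regressor stands in for the paper's Random Forest, and the feature vectors, ratings, and texture names are invented toy data, not the paper's.

```python
# Sketch of perceptual-similarity retrieval: learn a mapping from an
# image-feature distance to a human perceptual-similarity rating, then
# rank database entries by predicted perceptual similarity.
# (1-NN regressor used as a stand-in for Random Forest; toy data.)

def feature_distance(f1, f2):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(f1, f2)) ** 0.5

class NNSimilarityModel:
    """Predict a perceptual rating from a feature distance (1-NN)."""
    def fit(self, distances, ratings):
        self.pairs = sorted(zip(distances, ratings))
        return self
    def predict(self, d):
        # Rating of the training distance closest to d.
        return min(self.pairs, key=lambda p: abs(p[0] - d))[1]

def retrieve(query_features, database, model, top_k=3):
    """Rank database textures by predicted perceptual similarity."""
    scored = [(model.predict(feature_distance(query_features, feats)), name)
              for name, feats in database.items()]
    return [name for _, name in sorted(scored, reverse=True)[:top_k]]
```

A real system would replace the 1-NN stand-in with a Random Forest regressor (e.g. scikit-learn's `RandomForestRegressor`) trained on full feature vectors paired with human similarity judgments; the retrieval loop itself is unchanged.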
Hélie, Sébastien; Cousineau, Denis
This article explores the visual information used to categorize stimuli drawn from a common stimulus space into verbal and nonverbal categories, using 2 experiments. Experiment 1 explores the effect of target duration on verbal and nonverbal categorization, using backward masking to interrupt visual processing. With categories equated for difficulty at long and short target durations, intermediate target durations show an advantage for verbal categorization over nonverbal categorization. Experiment 2 tests whether the results of Experiment 1 can be explained by shorter target durations resulting in a smaller signal-to-noise ratio of the categorization stimulus. To test for this possibility, Experiment 2 used integration masking with the same stimuli, categories, and masks as Experiment 1, with a varying level of mask opacity. As predicted, low mask opacity yielded results similar to long target duration, while high mask opacity yielded results similar to short target duration. Importantly, intermediate mask opacity produced an advantage for verbal categorization over nonverbal categorization, similar to intermediate target duration. These results suggest that verbal and nonverbal categorization are affected differently by manipulations affecting the signal-to-noise ratio of the stimulus, consistent with multiple-system theories of categorization. The results further suggest that the information used in verbal categorization may be more digital (and more robust to a low signal-to-noise ratio), while the information used in nonverbal categorization may be more analog (and less robust to a low signal-to-noise ratio). This article concludes with a discussion of how these new results affect the use of masking in perceptual categorization and multiple-system theories of perceptual category learning. (c) 2015 APA, all rights reserved.
Kehoe, Devin H; Rahimi, Maryam; Fallah, Mazyar
The oculomotor system utilizes color extensively for planning saccades. Therefore, we examined how the oculomotor system actually encodes color and several factors that modulate these representations: attention-based surround suppression and inherent biases in selecting and encoding color categories. We measured saccade trajectories while human participants performed a memory-guided saccade task with color targets and distractors and examined whether oculomotor target selection processing was functionally related to the CIE (x, y) color space distances between color stimuli and whether there were hierarchical differences between color categories in the strength and speed of encoding potential saccade goals. We observed that saccade planning was modulated by the CIE (x, y) distances between stimuli, thus demonstrating that color is encoded in perceptual color space by the oculomotor system. Furthermore, these representations were modulated by (1) cueing attention to a particular color, thereby eliciting surround suppression in oculomotor color space, and (2) inherent selection and encoding biases based on color category, independent of cueing and perceptual discriminability. Since surround suppression emerges from recurrent feedback attenuation of sensory projections, observing oculomotor surround suppression suggested that oculomotor encoding of behavioral relevance results from integrating sensory and cognitive signals that are pre-attenuated based on task demands and that the oculomotor system therefore does not functionally contribute to this process. Second, although perceptual discriminability did partially account for oculomotor processing differences between color categories, we also observed preferential processing of the red color category across various behavioral metrics. This is consistent with numerous previous studies and could not be simply explained by perceptual discriminability. Since we utilized a memory-guided saccade task, this indicates that
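The target-selection analysis above hinges on Euclidean distances between stimuli in CIE (x, y) chromaticity space, which reduces to a short computation. A minimal sketch; the chromaticity coordinates below are illustrative approximations of the sRGB primaries, not the study's stimuli:

```python
import math

def cie_xy_distance(a, b):
    """Euclidean distance between two CIE (x, y) chromaticity coordinates."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Illustrative chromaticities (roughly the sRGB red and green primaries).
red, green = (0.64, 0.33), (0.30, 0.60)
d = cie_xy_distance(red, green)
```

Under the study's logic, larger values of such distances between target and distractor colors would predict weaker oculomotor competition between them.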
It is difficult to pinpoint the border between perceptual and conceptual processing, despite their treatment as distinct entities in many studies of recognition memory. For instance, alteration of simple perceptual characteristics of a stimulus can radically change meaning, such as the color of bread changing from white to green. We sought to better understand the role of perceptual and conceptual processing in memory by identifying the effects of changing a basic perceptual feature (color) on behavioral and neural correlates of memory in circumstances when this change would be expected either to change the meaning of a stimulus or to have no effect on meaning (i.e., to influence conceptual processing or not). Abstract visual shapes (squiggles) were colorized during study and presented during test in either the same color or a different color. Those squiggles that subjects found to resemble meaningful objects supported behavioral measures of conceptual priming, whereas meaningless squiggles did not. Further, changing color from study to test had a selective effect on behavioral correlates of priming for meaningful squiggles, indicating that color change altered conceptual processing. During a recognition memory test, color change altered event-related brain potential correlates of memory for meaningful squiggles but not for meaningless squiggles. Specifically, color change reduced the amplitude of frontally distributed N400 potentials (FN400), indicating that these potentials indexed conceptual processing during recognition memory that was sensitive to color change. In contrast, color change had no effect on FN400 correlates of recognition for meaningless squiggles, which were overall smaller in amplitude than for meaningful squiggles (further indicating that these potentials signal conceptual processing during recognition). Thus, merely changing the color of abstract visual shapes can alter their meaning, changing behavioral and neural correlates
Hupbach, Almut; Melzer, André; Hardt, Oliver
Priming effects in perceptual tests of implicit memory are assumed to be perceptually specific. Surprisingly, changing object colors from study to test did not diminish priming in most previous studies. However, these studies used implicit tests that are based on object identification, which mainly depends on the analysis of the object shape and therefore operates color-independently. The present study shows that color effects can be found in perceptual implicit tests when the test task requires the processing of color information. In Experiment 1, reliable color priming was found in a mere exposure design (preference test). In Experiment 2, the preference test was contrasted with a conceptually driven color-choice test. Altering the shape of the object from study to test resulted in significant priming in the color-choice test but eliminated priming in the preference test. Preference judgments thus largely depend on perceptual processes. In Experiment 3, the preference and the color-choice test were studied under explicit test instructions. Differences in reaction times between the implicit and the explicit test suggest that the implicit test results were not an artifact of explicit retrieval attempts. In contrast with previous assumptions, it is therefore concluded that color is part of the representation that mediates perceptual priming.
Imagery rescripting (ImRs) is a process by which aversive autobiographical memories are rendered less unpleasant or emotional. According to a cognitive hypothesis of ImRs, ImRs is thought only to be effective if a change in the meaning-relevant (semantic) content of the mental image is produced. We propose an additional hypothesis: that ImRs can also be effective through the manipulation of perceptual features of the memory, without explicitly targeting meaning-relevant content. In two experiments using a within-subjects design (both N = 48, community samples), both Conceptual-ImRs (focusing on changing meaning-relevant content) and Perceptual-ImRs (focusing on changing perceptual features) were compared to Recall-only of aversive autobiographical image-based memories. An active control condition, Recall + Attentional Breathing (Recall+AB), was added in the first experiment. In the second experiment, a Positive-ImRs condition was added: changing the aversive image into a positive image that was unrelated to the aversive autobiographical memory. Effects on the aversive memory's unpleasantness, vividness and emotionality were investigated. In Experiment 1, compared to Recall-only, both Conceptual-ImRs and Perceptual-ImRs led to greater decreases in unpleasantness, and Perceptual-ImRs led to greater decreases in emotionality of memories. In Experiment 2, the effects on unpleasantness were not replicated, and both Conceptual-ImRs and Perceptual-ImRs led to greater decreases in emotionality compared to Recall-only, as did Positive-ImRs. There were no effects on vividness, and the ImRs conditions did not differ significantly from Recall+AB. Results suggest that, in addition to traditional forms of ImRs targeting the meaning-relevant content of an image, relatively simple techniques focusing on perceptual aspects or positive imagery might also yield benefits. Findings require replication and extension to clinical samples.
Debats, Nienke B; Ernst, Marc O; Heuer, Herbert
Humans are well able to operate tools whereby their hand movement is linked, via a kinematic transformation, to a spatially distant object moving in a separate plane of motion. An everyday example is controlling a cursor on a computer monitor. Despite these separate reference frames, the perceived positions of the hand and the object were found to be biased toward each other. We propose that this perceptual attraction is based on the principles by which the brain integrates redundant sensory information of single objects or events, known as optimal multisensory integration. That is, 1) sensory information about the hand and the tool are weighted according to their relative reliability (i.e., inverse variances), and 2) the unisensory reliabilities sum up in the integrated estimate. We assessed whether perceptual attraction is consistent with optimal multisensory integration model predictions. We used a cursor-control tool-use task in which we manipulated the relative reliability of the unisensory hand and cursor position estimates. The perceptual biases shifted according to these relative reliabilities, with an additional bias due to contextual factors that were present in experiment 1 but not in experiment 2. The biased position judgments' variances were, however, systematically larger than the predicted optimal variances. Our findings suggest that the perceptual attraction in tool use results from a reliability-based weighting mechanism similar to optimal multisensory integration, but that certain boundary conditions for optimality might not be satisfied. NEW & NOTEWORTHY Kinematic tool use is associated with a perceptual attraction between the spatially separated hand and the effective part of the tool. We provide a formal account for this phenomenon, thereby showing that the process behind it is similar to optimal integration of sensory information relating to single objects. Copyright © 2017 the American Physiological Society.
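The two integration principles stated in the abstract, inverse-variance weighting and summation of reliabilities, can be sketched in a few lines. The position estimates and variances below are illustrative values, not data from the study:

```python
def integrate(est_a, var_a, est_b, var_b):
    """Minimum-variance fusion of two redundant position estimates.

    Weights are the normalized inverse variances (reliabilities),
    and the unisensory reliabilities sum in the fused estimate.
    """
    rel_a, rel_b = 1.0 / var_a, 1.0 / var_b
    w_a = rel_a / (rel_a + rel_b)
    fused = w_a * est_a + (1.0 - w_a) * est_b
    fused_var = 1.0 / (rel_a + rel_b)   # always <= min(var_a, var_b)
    return fused, fused_var

# Example: a reliable hand estimate at 0 and a noisier cursor estimate at 4.
pos, var = integrate(est_a=0.0, var_a=1.0, est_b=4.0, var_b=3.0)
```

The fused position sits closer to the more reliable hand estimate, which is exactly the mutual attraction pattern the abstract describes, and the fused variance is smaller than either unisensory variance.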
Barr, Rachel; Moser, Alecia; Rusnak, Sylvia; Zimmermann, Laura; Dickerson, Kelly; Lee, Herietta; Gerhardstein, Peter
Early childhood is characterized by memory capacity limitations and rapid perceptual and motor development [Rovee-Collier (1996). Infant Behavior & Development, 19, 385-400]. The present study examined 2-year-olds' reproduction of a sliding action to complete an abstract fish puzzle under different levels of memory load and perceptual feature support. Experimental groups were compared to baseline controls to assess spontaneous rates of production of the target actions; baseline production was low across all experiments. Memory load was manipulated in Exp. 1 by adding pieces to the puzzle, increasing sequence length from 2 to 3 items, and to 3 items plus a distractor. Although memory load did not influence how toddlers learned to manipulate the puzzle pieces, it did influence toddlers' achievement of the goal of constructing the fish. Overall, girls were better at constructing the puzzle than boys. In Exp. 2, the perceptual features of the puzzle were altered by changing shape boundaries to create a two-piece horizontally cut puzzle (displaying bilateral symmetry), and by adding a semantically supportive context to the vertically cut puzzle (iconic). Toddlers were able to achieve the goal of building the fish equally well across the 2-item puzzle types (bilateral symmetry, vertical, iconic), but how they learned to manipulate the puzzle pieces varied as a function of the perceptual features. Here, as in Exp. 1, girls showed a different pattern of performance from the boys. This study demonstrates that changes in memory capacity and perceptual processing influence both goal-directed imitation learning and motoric performance. © 2016 Wiley Periodicals, Inc.
Kleiner, Mendel; Larsson, Pontus; Vastfjall, Daniel; Torres, Rendell R.
By using various types of binaural simulation (or ``auralization'') of physical environments, it is now possible to study basic perceptual issues relevant to room acoustics, as well as to simulate the acoustic conditions found in concert halls and other auditoria. Binaural simulation of physical spaces in general is also important to virtual reality systems. This presentation will begin with an overview of the issues encountered in the auralization of rooms and other environments. We will then discuss the influence of various approximations in room modeling, in particular edge- and surface scattering, on the perceived room response. Finally, we will discuss cross-modal effects, such as the influence of visual cues on the perception of auditory cues, and the influence of cross-modal effects on the judgement of ``perceived presence'' and the rating of room acoustic quality.
Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri
As music unfolds in time, structure is recognised and understood by listeners, regardless of their level of musical expertise. A number of studies have found spectral and tonal changes to quite successfully model boundaries between structural sections. However, the effects of musical expertise...... and experimental task on computational modelling of structure are not yet well understood. These issues need to be addressed to better understand how listeners perceive the structure of music and to improve automatic segmentation algorithms. In this study, computational prediction of segmentation by listeners...... was investigated for six musical stimuli via a real-time task and an annotation (non real-time) task. The proposed approach involved computation of novelty curve interaction features and a prediction model of perceptual segmentation boundary density. We found that, compared to non-musicians’, musicians...
Lleras, Alejandro; Chu, Hengqing; Buetti, Simona
Perceptual Load theory states that the degree of perceptual load on a display determines the amount of leftover attentional resources that the system can use to process distracting information. An important corollary of this theory is that the amount of perceptual load determines the vulnerability of the attention system to being captured by completely irrelevant stimuli, predicting larger amounts of capture with low perceptual load than with high perceptual load. This prediction was first confirmed by Forster and Lavie (2008). Here, we report 6 experiments that followed up on those earlier results, where we find that in many cases, the opposite pattern is obtained: attentional capture increased with increasing perceptual load. Given the lack of generalizability of the theory to new experimental contexts with fairly minor methodological differences, we conclude that Perceptual Load may not be a useful framework for understanding attentional capture. The theoretical and applied importance of these findings is discussed. In particular, we caution against using this theory in applied tasks and settings because best-use recommendations stemming from this theory regarding strategies to decrease distractibility may in fact produce the opposite effect: an increase in distractibility (with distractibility being indexed by the magnitude of the capture effect). (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Blank, Helen; Biele, Guido; Heekeren, Hauke R; Philiastides, Marios G
Perceptual decision making is the process by which information from sensory systems is combined and used to influence our behavior. In addition to the sensory input, this process can be affected by other factors, such as reward and punishment for correct and incorrect responses. To investigate the temporal dynamics of how monetary punishment influences perceptual decision making in humans, we collected electroencephalography (EEG) data during a perceptual categorization task whereby the punishment level for incorrect responses was parametrically manipulated across blocks of trials. Behaviorally, we observed improved accuracy for high relative to low punishment levels. Using multivariate linear discriminant analysis of the EEG, we identified multiple punishment-induced discriminating components with spatially distinct scalp topographies. Compared with components related to sensory evidence, components discriminating punishment levels appeared later in the trial, suggesting that punishment affects primarily late postsensory, decision-related processing. Crucially, the amplitude of these punishment components across participants was predictive of the size of the behavioral improvements induced by punishment. Finally, trial-by-trial changes in prestimulus oscillatory activity in the alpha and gamma bands were good predictors of the amplitude of these components. We discuss these findings in the context of increased motivation/attention, resulting from increases in punishment, which in turn yields improved decision-related processing.
Wiech, Katja; Vandekerckhove, Joachim; Zaman, Jonas; Tuerlinckx, Francis; Vlaeyen, Johan W S; Tracey, Irene
Prior information about features of a stimulus is a strong modulator of perception. For instance, the prospect of more intense pain leads to an increased perception of pain, whereas the expectation of analgesia reduces pain, as shown in placebo analgesia and expectancy modulations during drug administration. This influence is commonly assumed to be rooted in altered sensory processing, and expectancy-related modulations in the spinal cord are often taken as evidence for this notion. Contemporary models of perception, however, suggest that prior information can also modulate perception by biasing perceptual decision-making - the inferential process underlying perception in which prior information is used to interpret sensory information. In this type of bias, the information is already present in the system before the stimulus is observed. Computational models can distinguish between changes in sensory processing and altered decision-making because they result in different response times for incorrect choices in a perceptual decision-making task (Figure S1A,B). Using a drift-diffusion model, we investigated the influence of both processes in two independent experiments. The results of both experiments strongly suggest that these changes in pain perception are predominantly based on altered perceptual decision-making. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
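A drift-diffusion account separates the two explanations: better sensory processing raises the drift rate, whereas a prior-induced decision bias shifts the starting point of evidence accumulation. A minimal simulation sketch; all parameter values are illustrative, not fitted to these experiments:

```python
import random

def ddm_trial(drift, start=0.0, bound=1.0, noise=1.0, dt=0.001, rng=random):
    """Simulate one drift-diffusion trial.

    Evidence accumulates from `start` toward +bound (e.g. 'pain') or
    -bound (e.g. 'no pain'); returns (choice, reaction_time).
    A larger `drift` models stronger sensory processing; a nonzero
    `start` models a prior-induced decision bias.
    """
    x, t = start, 0.0
    sd = noise * dt ** 0.5           # Wiener increment scaling
    while abs(x) < bound:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return (1 if x > 0 else -1), t

random.seed(1)
# A positive starting point (prior expectation of pain) shifts choices
# toward +1 without any change in the sensory drift rate.
trials = [ddm_trial(drift=0.5, start=0.3) for _ in range(200)]
p_plus = sum(c == 1 for c, _ in trials) / len(trials)
```

In fits to real data, the two mechanisms leave different signatures in error response times, which is what allows the model comparison the abstract describes.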
Goal-directed behavior requires the flexible transformation of sensory evidence about our environment into motor actions. Studies of perceptual decision-making have shown that this transformation is distributed across several widely separated brain regions. Yet, little is known about how decision-making emerges from the dynamic interactions among these regions. Here, we review a series of studies in which we characterized the cortical network interactions underlying a perceptual decision process in the human brain. We used magnetoencephalography (MEG) to measure the large-scale cortical population dynamics underlying each of the sub-processes involved in this decision: the encoding of sensory evidence and action plan, the mapping between the two, and the attentional selection of task-relevant evidence. We found that these sub-processes are mediated by neuronal oscillations within specific frequency ranges. Localized gamma-band oscillations in sensory and motor cortices reflect the encoding of the sensory evidence and motor plan. Large-scale oscillations across widespread cortical networks mediate the integrative processes connecting these local networks: gamma- and beta-band oscillations across frontal, parietal and sensory cortices serve the selection of relevant sensory evidence and its flexible mapping onto action plans. In sum, our results suggest that perceptual decisions are mediated by oscillatory interactions within overlapping local and large-scale cortical networks.
Gold, Joshua I; Ding, Long
Psychometric functions are often interpreted in the context of Signal Detection Theory, which emphasizes a distinction between sensory processing and non-sensory decision rules in the brain. This framework has helped to relate perceptual sensitivity to the "neurometric" sensitivity of sensory-driven neural activity. However, perceptual sensitivity, as interpreted via Signal Detection Theory, is based on not just how the brain represents relevant sensory information, but also how that information is read out to form the decision variable to which the decision rule is applied. Here we discuss recent advances in our understanding of this readout process and describe its effects on the psychometric function. In particular, we show that particular aspects of the readout process can have specific, identifiable effects on the threshold, slope, upper asymptote, time dependence, and choice dependence of psychometric functions. To illustrate these points, we emphasize studies of perceptual learning that have identified changes in the readout process that can lead to changes in these aspects of the psychometric function. We also discuss methods that have been used to distinguish contributions of the sensory representation versus its readout to psychophysical performance. Copyright © 2012 Elsevier Ltd. All rights reserved.
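The readout-related quantities discussed above (threshold, slope, upper asymptote) map directly onto the parameters of a standard parametric psychometric function. A minimal sketch using a logistic core with guess and lapse rates; the parameter values are illustrative:

```python
import math

def psychometric(x, threshold, slope, guess=0.5, lapse=0.0):
    """P(correct) as a function of stimulus strength x.

    The logistic core carries the sensory-driven sensitivity (threshold,
    slope), while `guess` sets the lower asymptote and `lapse` caps the
    upper asymptote at 1 - lapse, the kind of readout-related parameter
    discussed above.
    """
    core = 1.0 / (1.0 + math.exp(-slope * (x - threshold)))
    return guess + (1.0 - guess - lapse) * core

# A nonzero lapse rate lowers the upper asymptote without moving threshold.
p_mid = psychometric(1.0, threshold=1.0, slope=3.0)              # halfway point
p_max = psychometric(10.0, threshold=1.0, slope=3.0, lapse=0.05) # capped ceiling
```

Separating the sensory terms from the guess and lapse terms is one way to formalize the representation-versus-readout distinction the abstract draws.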
Wilms, Inge Linda; Nielsen, Simon
Visual perception serves as the basis for much of the higher-level cognitive processing as well as human activity in general. Here we present normative estimates for the following components of visual perception: the visual perceptual threshold, the visual short-term memory capacity and the visual...... perceptual encoding/decoding speed (processing speed) of visual short-term memory based on an assessment of 91 healthy subjects aged 60-75. The estimates are presented at total sample level as well as at gender level. The estimates were modelled from input from a whole-report assessment based on A Theory...... speed of Visual Short-term Memory (VSTM) but not the capacity of VSTM nor the visual threshold. The estimates will be useful for future studies into the effects of various types of intervention and training on cognition in general and visual attention in particular....
Apps, Matthew A. J.; Tsakiris, Manos
Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
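The familiarity-updating mechanism described above can be caricatured with a generic delta rule, in which the prediction error both drives learning and, per the study, tracks fusiform activity. This is a simplified sketch under stated assumptions: it omits the model's contextual-familiarity term, and all parameter values are made up:

```python
def update_familiarity(familiarity, observed=1.0, learning_rate=0.2):
    """One delta-rule step: the prediction error updates stimulus familiarity."""
    prediction_error = observed - familiarity
    return familiarity + learning_rate * prediction_error, prediction_error

familiarity, errors = 0.0, []
for _ in range(20):                      # twenty exposures to the same face
    familiarity, pe = update_familiarity(familiarity)
    errors.append(pe)
# Familiarity rises toward 1 while prediction errors shrink across exposures,
# mirroring the unfamiliar-to-familiar transition the study models.
```
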
Cousens, Ross; Cutmore, Timothy; Wang, Ya; Wilson, Jennifer; Chan, Raymond C K; Shum, David H K
Prospective memory involves the formation and execution of intended actions and is essential for autonomous living. In this study (N=32), the effect of the nature of PM cues (semantic versus perceptual) on established event-related potentials (ERPs) elicited in PM tasks (N300 and prospective positivity) was investigated. PM cues defined by their perceptual features clearly elicited the N300 and prospective positivity whereas PM cues defined by semantic relatedness elicited prospective positivity. This calls into question the view that the N300 is a marker of general processes underlying detection of PM cues, but supports existing research showing that prospective positivity represents general post-retrieval processes that follow detection of PM cues. Continued refinement of ERP paradigms for understanding the neural correlates of PM is needed. Copyright © 2015 Elsevier B.V. All rights reserved.
Shankar, Swetha; Kayser, Andrew S
To date it has been unclear whether perceptual decision making and rule-based categorization reflect activation of similar cognitive processes and brain regions. On one hand, both map potentially ambiguous stimuli to a smaller set of motor responses. On the other hand, decisions about perceptual salience typically concern concrete sensory representations derived from a noisy stimulus, while categorization is typically conceptualized as an abstract decision about membership in a potentially arbitrary set. Previous work has primarily examined these types of decisions in isolation. Here we independently varied salience in both the perceptual and categorical domains in a random dot-motion framework by manipulating dot-motion coherence and motion direction relative to a category boundary, respectively. Behavioral and modeling results suggest that categorical (more abstract) information, which is more relevant to subjects' decisions, is weighted more strongly than perceptual (more concrete) information, although they also have significant interactive effects on choice. Within the brain, BOLD activity within frontal regions strongly differentiated categorical salience and weakly differentiated perceptual salience; however, the interaction between these two factors activated similar frontoparietal brain networks. Notably, explicitly evaluating feature interactions revealed a frontal-parietal dissociation: parietal activity varied strongly with both features, but frontal activity varied with the combined strength of the information that defined the motor response. Together, these data demonstrate that frontal regions are driven by decision-relevant features and argue that perceptual decisions and rule-based categorization reflect similar cognitive processes and activate similar brain networks to the extent that they define decision-relevant stimulus-response mappings. NEW & NOTEWORTHY Here we study the behavioral and neural dynamics of perceptual categorization when
Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B
Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
Mednick, Sara C.; Cai, Denise J.; Kanady, Jennifer; Drummond, Sean P.A.
Caffeine, the world’s most common psychoactive substance, is used by approximately 90% of North Americans every day. Little is known, however, about its benefits for memory. Napping has been shown to increase alertness and promote learning on some memory tasks. We directly compared caffeine (200 mg) with napping (60–90 minutes) and placebo on three distinct memory processes: declarative verbal memory, procedural motor skills, and perceptual learning. In the verbal task, recall and recognition f...
Gantz, Liat; Bedell, Harold
Several previous studies reported differences when stereothresholds are assessed with local-contour stereograms vs. complex random-dot stereograms (RDSs). Dissimilar thresholds may be due to differences in the properties of the stereograms (e.g., spatial frequency content, contrast, inter-element separation, area) or to different underlying processing mechanisms. This study examined the transfer of perceptual learning of depth discrimination between local and global RDSs with similar properti...
Benoni, Hanna; Zivony, Alon; Tsal, Yehoshua
Perceptual load theory [Lavie, N. (1995). Perceptual load as a necessary condition for selective attention. Journal of Experimental Psychology: Human Perception and Performance, 21, 451-468.; Lavie, N., & Tsal, Y. (1994) Perceptual load as a major determinant of the locus of selection in visual attention. Perception & Psychophysics, 56, 183-197.] proposes that interference from distractors can only be avoided in situations of high perceptual load. This theory has been supported by blocked design manipulations separating low load (when the target appears alone) and high load (when the target is embedded among neutral letters). Tsal and Benoni [(2010a). Diluting the burden of load: Perceptual load effects are simply dilution effects. Journal of Experimental Psychology: Human Perception and Performance, 36, 1645-1656.; Benoni, H., & Tsal, Y. (2010). Where have we gone wrong? Perceptual load does not affect selective attention. Vision Research, 50, 1292-1298.] have recently shown that these manipulations confound perceptual load with "dilution" (the mere presence of additional heterogeneous items in high-load situations). Theeuwes, Kramer, and Belopolsky [(2004). Attentional set interacts with perceptual load in visual search. Psychonomic Bulletin & Review, 11, 697-702.] independently questioned load theory by suggesting that attentional sets might also affect distractor interference. When high load and low load were intermixed, and participants could not prepare for the presentation that followed, both the low-load and high-load trials showed distractor interference. This result may also challenge the dilution account, which proposes a stimulus-driven mechanism. In the current study, we presented subjects with both fixed and mixed blocks, including a mix of dilution trials with low-load trials and with high-load trials. We thus separated the effect of dilution from load and tested the influence of attentional sets on each component. The results revealed that whereas
Meyerhoff, Hauke S; Gehrer, Nina A
In order to obtain a coherent representation of the outside world, auditory and visual information are integrated during human information processing. There is remarkable variance among observers in the capability to integrate auditory and visual information. Here, we propose that visuo-perceptual capabilities predict detection performance for audiovisually coinciding transients in multi-element displays, due to severe capacity limitations in audiovisual integration. In the reported experiment, we employed an individual-differences approach to investigate this hypothesis. To this end, we measured performance in a useful-field-of-view task that captures detection performance for briefly presented stimuli across a large perceptual field. Furthermore, we measured sensitivity for visual direction changes that coincide with tones within the same participants. Our results show that individual differences in visuo-perceptual capabilities predicted sensitivity for the presence of audiovisually synchronous events among competing visual stimuli. To ensure that this correlation does not stem from superordinate factors, we also tested performance in an unrelated working memory task. Performance in this task was independent of sensitivity for the presence of audiovisually synchronous events. Our findings strengthen the proposed link between visuo-perceptual capabilities and audiovisual integration. The results also suggest that basic visuo-perceptual capabilities provide the basis for the subsequent integration of auditory and visual information.
Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang
When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).
Much information can be processed unconsciously. However, there is no direct evidence on whether perceptual grouping can occur without awareness. To answer this question, we investigated whether a Kanizsa triangle (an example of perceptual grouping) is processed differently from stimuli with the same local components that are ungrouped or only weakly grouped. Specifically, using a suppression-time paradigm, we tested whether a Kanizsa triangle would emerge from interocular continuous flash suppression sooner than control stimuli. The results show a significant advantage for the Kanizsa triangle: it emerged from suppression noise significantly faster than the control stimulus with the local Pacmen randomly rotated (t(9) = -2.78, p = .02), and also faster than the control stimulus with all Pacmen rotated 180° (t(11) = -3.20, p < .01). Additional results demonstrated that the advantage of the grouped Kanizsa triangle could not be accounted for by faster detection at the conscious level for Kanizsa figures on a dynamic noise background. Our results indicate that certain properties supporting perceptual grouping can be processed in the absence of awareness.
Volk, Christer Peter; Bech, Søren; Pedersen, Torben H.
A literature study was conducted focusing on maximising the objectivity of results from listening evaluations aimed at establishing the relationship between physical and perceptual measurements of loudspeakers. The purpose of the study was to identify and examine factors influencing the objectivity of data from the listening evaluations. This paper addresses the following subset of aspects for increasing the objectivity of data from listening tests: the choice of perceptual attributes, the relevance of perceptual attributes, the choice of loudness equalisation strategy, optimum listening room specifications, as well as loudspeaker listening in situ vs. listening to recordings of loudspeakers over headphones.
Charrier, Christophe; Knoblauch, Kenneth; Cherifi, Hocine
It is generally accepted that an RGB color image can be easily encoded by using a gray-scale compression technique on each of the three color planes. Such an approach, however, fails to take into account correlations existing between color planes and perceptual factors. We evaluated several linear and non-linear color spaces, some introduced by the CIE, compressed with the vector quantization technique for minimum perceptual distortion. To study these distortions, we measured contrast and luminance of the video framebuffer in order to precisely control color. We then obtained psychophysical judgements to measure how well these methods minimize perceptual distortion in a variety of color spaces.
Gomes, Hilary; Barrett, Sophia; Duff, Martin; Barnhardt, Jack; Ritter, Walter
We examined the impact of perceptual load by manipulating the interstimulus interval (ISI) in two auditory selective attention studies that varied in the difficulty of the target discrimination. In the paradigm, channels were separated by frequency, and target/deviant tones were softer in intensity. Three ISI conditions were presented: fast (300 ms), medium (600 ms) and slow (900 ms). Behavioral (accuracy and RT) and electrophysiological (Nd, P3b) measures were observed. In both studies, participants showed poorer accuracy in the fast ISI condition than in the slow condition, suggesting that ISI affected task difficulty. However, none of the three measures of processing examined (Nd amplitude, P3b amplitude elicited by unattended deviant stimuli, or false alarms to unattended deviants) was affected by ISI in the manner predicted by perceptual load theory. The prediction based on perceptual load theory, that there would be more processing of irrelevant stimuli under conditions of low as compared to high perceptual load, was not supported in these auditory studies. Task difficulty/perceptual load affects the processing of irrelevant stimuli in the auditory modality differently than predicted by perceptual load theory, and perhaps differently than in the visual modality.
Soto, Fabian A; Vucovich, Lauren; Musgrave, Robert; Ashby, F Gregory
A common question in perceptual science is to what extent different stimulus dimensions are processed independently. General recognition theory (GRT) offers a formal framework via which different notions of independence can be defined and tested rigorously, while also dissociating perceptual from decisional factors. This article presents a new GRT model that overcomes several shortcomings with previous approaches, including a clearer separation between perceptual and decisional processes and a more complete description of such processes. The model assumes that different individuals share similar perceptual representations, but vary in their attention to dimensions and in the decisional strategies they use. We apply the model to the analysis of interactions between identity and emotional expression during face recognition. The results of previous research aimed at this problem have been disparate. Participants identified four faces, which resulted from the combination of two identities and two expressions. An analysis using the new GRT model showed a complex pattern of dimensional interactions. The perception of emotional expression was not affected by changes in identity, but the perception of identity was affected by changes in emotional expression. There were violations of decisional separability of expression from identity and of identity from expression, with the former being more consistent across participants than the latter. One explanation for the disparate results in the literature is that decisional strategies may have varied across studies and influenced the results of tests of perceptual interactions, as previous studies lacked the ability to dissociate between perceptual and decisional interactions.
Heinrich, S P
The idea of compensating or even rectifying refractive errors and presbyopia with the help of vision training is not new. For most approaches, however, scientific evidence is insufficient. A currently promoted method is "perceptual learning", which is assumed to improve stimulus processing in the brain. The basic phenomena of perceptual learning have been demonstrated by a multitude of studies. Some of these specifically address the case of refractive errors and presbyopia. However, many open questions remain, in particular with respect to the transfer of practice effects to everyday vision. At present, the method should therefore be judged with caution.
Soto, Fabian A; Zheng, Emily; Fonseca, Johnny; Ashby, F Gregory
Determining whether perceptual properties are processed independently is an important goal in perceptual science, and tools to test independence should be widely available to experimental researchers. The best analytical tools to test for perceptual independence are provided by General Recognition Theory (GRT), a multidimensional extension of signal detection theory. Unfortunately, there is currently a lack of software implementing GRT analyses that is ready to use by experimental psychologists and neuroscientists with little training in computational modeling. This paper presents grtools, an R package developed with the explicit aim of providing experimentalists with the ability to perform full GRT analyses using only a couple of command lines. We describe the software and provide a practical tutorial on how to perform each of the analyses available in grtools. We also provide advice to researchers on best practices for experimental design and interpretation of results when applying GRT and grtools.
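A core GRT summary statistic is marginal response invariance: the probability of reporting a given level on one dimension should not depend on the level of the other dimension. The sketch below is not the grtools API; it is a minimal Python illustration, with hypothetical confusion counts, of how that check is computed from a 4x4 identification matrix in a 2x2 factorial design.

```python
import numpy as np

# Rows: stimuli (a1b1, a1b2, a2b1, a2b2); columns: responses in the same order.
# Hypothetical identification counts for a 2x2 (e.g. identity x expression) design.
confusions = np.array([
    [70, 10, 15,  5],
    [12, 68,  4, 16],
    [14,  6, 66, 14],
    [ 5, 15, 12, 68],
])

def marginal_p_a2(conf, stim_row):
    """P(responding level a2 on dimension A | stimulus row), marginalising over B.
    Response columns 2 and 3 (a2b1, a2b2) carry level a2 on dimension A."""
    row = conf[stim_row]
    return (row[2] + row[3]) / row.sum()

# Marginal response invariance on dimension A: the probability of reporting a2
# for an a1 stimulus should not depend on the level of B (rows 0 vs. 1).
p_a2_given_a1b1 = marginal_p_a2(confusions, 0)
p_a2_given_a1b2 = marginal_p_a2(confusions, 1)
print(p_a2_given_a1b1, p_a2_given_a1b2)  # equal here, so invariance holds
```

With these illustrative counts both probabilities equal 0.2, so marginal response invariance is not violated on dimension A for the a1 stimuli; grtools performs such tests, plus model fitting, for the full design.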
Goossens, T.L.J.; Par, van de S.L.J.D.E.; Kohlrausch, A.G.; Perez-Lopez, A.; Santiago, J.S.; Calvo-Manzano, A.
The ability to discriminate between two auditory noise stimuli increases with bandwidth. This ability also increases with duration, but only up to a duration of about 25 to 40 ms; beyond this duration the discriminability decreases. In template-matching and multiple-look models [e.g. Dau et al., J. Acoust.
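The multiple-look idea mentioned above has a standard idealisation: n statistically independent "looks" of equal single-look sensitivity combine optimally to a sensitivity of d' x sqrt(n). The snippet below is a generic sketch of that textbook prediction, not the specific model of the abstract.

```python
import math

def combined_dprime(dprime_single, n_looks):
    """Ideal multiple-look prediction: n independent looks with equal
    single-look sensitivity combine to d' * sqrt(n) under optimal
    linear integration of the looks."""
    return dprime_single * math.sqrt(n_looks)

# Quadrupling the number of independent looks doubles predicted sensitivity.
print(combined_dprime(1.0, 4))  # -> 2.0
```

The observed decline in discriminability beyond ~25-40 ms is exactly what this idealisation fails to predict, which is why such data constrain template-matching and multiple-look accounts.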
Smyrnis, Nikolaos; Protopapa, Foteini; Tsoukas, Evangelos; Balogh, Allison; Siettos, Constantinos I; Evdokimidis, Ioannis
This study investigated whether spatial working memory related to movement plans (motor working memory) and spatial working memory related to spatial attention and perceptual processes (perceptual spatial working memory) share the same neurophysiological substrate, or whether there is evidence for separate motor and perceptual working memory streams of processing. Towards this aim, ten healthy human subjects performed delayed responses to visual targets presented at different spatial locations. Two tasks were administered: one in which the spatial location of the target was the goal of a pointing movement, and one in which the spatial location of the target was used for a perceptual (yes or no) change detection. Each task involved two conditions: a memory condition, in which the target remained visible only for the first 250 ms of the delay period, and a delay condition, in which the target location remained visible throughout the delay period. The amplitude spectrum analysis of the EEG revealed that the alpha (8-12 Hz) band signal was smaller, while the beta (13-30 Hz) and gamma (30-45 Hz) band signals were larger, in the memory compared to the non-memory condition. The alpha band signal difference was confined to the frontal midline area; the beta band signal difference extended over the right hemisphere and midline central area; and the gamma band signal difference was confined to the right occipitoparietal area. Importantly, in both the beta and gamma bands we observed a significant increase in the movement-related compared to the perceptual-related memory-specific amplitude spectrum signal in the central midline area. This result provides clear evidence for the dissociation of motor and perceptual spatial working memory.
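The band-wise amplitude spectrum analysis described above can be sketched generically: take the FFT amplitude spectrum of an epoch and average it within each frequency band. This is a minimal illustration on a synthetic signal, not the authors' analysis pipeline (their preprocessing, referencing, and statistics are not reproduced here).

```python
import numpy as np

def band_amplitude(signal, fs, f_lo, f_hi):
    """Mean amplitude-spectrum value within [f_lo, f_hi] Hz."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n       # amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)           # frequency axis in Hz
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].mean()

# Synthetic 1-s epoch: a strong 10 Hz (alpha) and a weak 20 Hz (beta) component.
fs = 250
t = np.arange(fs) / fs
epoch = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

alpha = band_amplitude(epoch, fs, 8, 12)   # alpha band, 8-12 Hz
beta = band_amplitude(epoch, fs, 13, 30)   # beta band, 13-30 Hz
print(alpha > beta)  # the alpha component dominates this synthetic epoch
```

Comparing such band values between memory and delay conditions, electrode by electrode, yields the kind of topographic band-signal differences the study reports.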
Richtsfeld, Andreas; Mörwald, Thomas; Prankl, Johann; Zillich, Michael; Vincze, Markus
Object segmentation of unknown objects with arbitrary shape in cluttered scenes is an ambitious goal in computer vision, and it received a strong impetus from the introduction of cheap and powerful RGB-D sensors. We introduce a framework for segmenting RGB-D images in which data is processed in a hierarchical fashion. After pre-clustering at the pixel level, parametric surface patches are estimated. Different relations between patch pairs are calculated, which we derive from perceptual grouping principles, and support vector machine classification is employed to learn perceptual grouping. Finally, we show that object hypothesis generation with Graph-Cut finds a globally optimal solution and prevents wrong grouping. Our framework is able to segment objects even if they are stacked or jumbled in cluttered scenes. We also tackle the problem of segmenting objects when they are partially occluded. The work is evaluated on publicly available object segmentation databases and compared with state-of-the-art work on object segmentation.
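The learned-grouping step can be sketched as a binary classifier over pairwise patch features. The features below (colour distance, normal angle, boundary gap) and their distributions are hypothetical stand-ins for the relations the paper derives from perceptual grouping principles; this is an illustration of the SVM stage, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical pairwise features for surface-patch pairs:
# [colour distance, angle between surface normals, boundary gap].
# Label 1 = the two patches belong to the same object.
rng = np.random.default_rng(0)
same = rng.normal([0.1, 0.1, 0.05], 0.05, size=(50, 3))   # similar patch pairs
diff = rng.normal([0.8, 0.9, 0.6], 0.1, size=(50, 3))     # dissimilar pairs
X = np.vstack([same, diff])
y = np.array([1] * 50 + [0] * 50)

# Train an RBF-kernel SVM to predict whether a patch pair should be grouped.
clf = SVC(kernel="rbf", probability=True).fit(X, y)

# A new patch pair with small colour/normal/boundary differences should be
# grouped into the same object hypothesis.
print(clf.predict([[0.12, 0.08, 0.04]])[0])
```

In the full framework, the classifier's pairwise grouping scores become edge weights in a graph over patches, and Graph-Cut then extracts globally optimal object hypotheses.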
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Fahrenfort, Johannes J; Ambroziak, Klaudia B; Lamme, Victor A F
Introspectively we experience a phenomenally rich world. In stark contrast, many studies show that we can only report on the few items that we happen to attend to. So what happens to the unattended objects? Are these consciously processed as our first person perspective would have us believe, or are they - in fact - entirely unconscious? Here, we attempt to resolve this question by investigating the perceptual characteristics of visual sensory memory. Sensory memory is a fleeting, high-capacity form of memory that precedes attentional selection and working memory. We found that memory capacity benefits from figural information induced by the Kanizsa illusion. Importantly, this benefit was larger for sensory memory than for working memory and depended critically on the illusion, not on the stimulus configuration. This shows that pre-attentive sensory memory contains representations that have a genuinely perceptual nature, suggesting that non-attended representations are phenomenally experienced rather than unconscious.
John D Rudoy
Decisions about whether to trust someone can be influenced by competing sources of information, such as analysis of facial features versus memory for specific information about the person. We hypothesized that such sources can differentially influence trustworthiness judgments depending on the circumstances in which judgments are made. In our experiments, subjects first learned face-word associations. Stimuli were trustworthy and untrustworthy faces selected on the basis of consensus judgments, paired with personality attributes that carried either the same valence (consistent with the face) or the opposite valence (inconsistent with the face). Subsequently, subjects rated the trustworthiness of each face. Both learned and perceptual information influenced ratings, but learned information was less influential under speeded than under non-speeded conditions. EEG data further revealed neural evidence of the processing of these two competing sources. Perceptual influences were apparent earlier than memory influences, substantiating the conclusion that time pressure can selectively disrupt memory retrieval relevant to trustworthiness attributions.
van Moorselaar, Dirk; Gunseli, Eren; Theeuwes, Jan; N. L. Olivers, Christian
Cueing a remembered item during the delay of a visual memory task leads to enhanced recall of the cued item compared to when an item is not cued. This cueing benefit has been proposed to reflect attention within visual memory being shifted from a distributed mode to a focused mode, thus protecting the cued item against perceptual interference. Here we investigated the dynamics of building up this mnemonic protection against visual interference by systematically varying the stimulus onset asynchrony (SOA) between cue onset and a subsequent visual mask in an orientation memory task. Experiment 1 showed that a cue counteracted the deteriorating effect of pattern masks. Experiment 2 demonstrated that building up this protection is a continuous process that is completed in approximately half a second after cue onset. The similarities between shifting attention in perceptual and remembered space are discussed. PMID:25628555
Kyriakareli, Artemis; Cousins, Sian; Pettorossi, Vito E; Bronstein, Adolfo M
Transcranial direct current stimulation (tDCS) was used in 17 normal individuals to modulate vestibulo-ocular reflex (VOR) and self-motion perception rotational thresholds. The electrodes were applied over the temporoparietal junction bilaterally. Both vestibular nystagmic and perceptual thresholds were increased during as well as after tDCS stimulation. Body rotation was labeled as ipsilateral or contralateral to the anode side, but no difference was observed depending on the direction of rotation or hemisphere polarity. Threshold increase during tDCS was greater for VOR than for motion perception. 'Sham' stimulation had no effect on thresholds. We conclude that tDCS produces an immediate and sustained depression of cortical regions controlling VOR and movement perception. Temporoparietal areas appear to be involved in vestibular threshold modulation but the differential effects observed between VOR and perception suggest a partial dissociation between cortical processing of reflexive and perceptual responses.
Tavares, Gabriela; Perona, Pietro; Rangel, Antonio
Perceptual decisions requiring the comparison of spatially distributed stimuli that are fixated sequentially might be influenced by fluctuations in visual attention. We used two psychophysical tasks with human subjects to investigate the extent to which visual attention influences simple perceptual choices, and to test the extent to which the attentional Drift Diffusion Model (aDDM) provides a good computational description of how attention affects the underlying decision processes. We find evidence for sizable attentional choice biases and that the aDDM provides a reasonable quantitative description of the relationship between fluctuations in visual attention, choices and reaction times. We also find that exogenous manipulations of attention induce choice biases consistent with the predictions of the model.
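The aDDM's core mechanism is a relative decision value that drifts toward the currently fixated item, with the unattended item's evidence discounted by a multiplicative factor (often called theta). The simulation below is a simplified sketch of one trial under assumed parameter values (theta, drift scale, noise, fixation duration are illustrative, not the paper's fitted values).

```python
import random

def simulate_addm(v_left, v_right, theta=0.3, d=0.002, sigma=0.02,
                  barrier=1.0, fix_dur=300, seed=1):
    """One hedged aDDM trial: the relative decision value (rdv) drifts
    toward the fixated item, with the unattended item's value discounted
    by theta. Returns (choice, reaction time in ms). Fixations simply
    alternate every fix_dur ms, an illustrative simplification."""
    rng = random.Random(seed)
    rdv, t = 0.0, 0
    look_left = rng.random() < 0.5          # random first fixation
    while abs(rdv) < barrier:
        if t > 0 and t % fix_dur == 0:
            look_left = not look_left       # alternate fixations
        if look_left:
            drift = d * (v_left - theta * v_right)
        else:
            drift = d * (theta * v_left - v_right)
        rdv += drift + rng.gauss(0, sigma)  # drift plus Gaussian noise
        t += 1
    return ("left" if rdv > 0 else "right"), t

choice, rt = simulate_addm(v_left=3.0, v_right=1.0)
print(choice, rt)
```

Because fixated items accumulate evidence at full strength while unattended items are discounted, longer fixation on one item biases choice toward it; that is the attentional choice bias the study quantifies.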